Briefly, this error occurs when Elasticsearch is unable to process a bulk request, for reasons such as insufficient memory, incorrectly formatted data, or a request that exceeds the maximum allowed size. To resolve it, you can increase the heap size, ensure the data is correctly formatted, or split the bulk request into smaller chunks. Also check for underlying problems such as network issues or low disk space.
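If oversized requests are the culprit, splitting the work into smaller batches usually resolves the error. Below is a minimal sketch assuming the official Python client (elasticsearch-py); the index name "myindex", the generated documents, and the chunk size of 500 are all illustrative. The helpers.bulk helper takes care of the batching:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()

# Generator of actions; helpers.bulk sends them in batches of chunk_size,
# so the cluster never receives one oversized request.
def generate_actions():
    for i in range(10000):
        yield {
            "_op_type": "index",
            "_index": "myindex",
            "_id": i,
            "field1": "value-%d" % i,
        }

success, errors = helpers.bulk(
    es, generate_actions(), chunk_size=500, raise_on_error=False
)
print("Indexed %d documents, %d errors" % (success, len(errors)))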
This guide will help you check for common problems that cause the log “Error while executing bulk request” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: request, bulk, client, benchmark.
Overview
In Elasticsearch, the Bulk API makes it possible to perform many write operations in a single API call, which increases indexing speed. Using the Bulk API is more efficient than sending multiple separate requests. This can be done for the following four actions:
- Index
- Update
- Create
- Delete
Examples
The bulk request below will index a document, delete another document, and update an existing document.
POST _bulk
{ "index" : { "_index" : "myindex", "_id" : "1" } }
{ "field1" : "value" }
{ "delete" : { "_index" : "myindex", "_id" : "2" } }
{ "update" : {"_id" : "1", "_index" : "myindex"} }
{ "doc" : {"field2" : "value5"} }
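The same operations can also be issued from a client application. As a rough sketch using the official Python client (the index names and values simply mirror the request above):

from elasticsearch import Elasticsearch

es = Elasticsearch()

# The bulk body is the same sequence of action/source pairs; the Python
# client accepts it as a list of dicts and serializes it to NDJSON.
actions = [
    {"index": {"_index": "myindex", "_id": "1"}},
    {"field1": "value"},
    {"delete": {"_index": "myindex", "_id": "2"}},
    {"update": {"_id": "1", "_index": "myindex"}},
    {"doc": {"field2": "value5"}},
]
res = es.bulk(body=actions)
print(res["errors"])  # False if every action succeeded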
Notes
- The Bulk API is useful when you need to index data streams that can be queued up and indexed in batches of hundreds or thousands, such as logs.
- There is no single correct number of actions per bulk call and no fixed limit; you will need to find the optimum batch size by experimentation, given your cluster size, number of nodes, hardware specs, etc. (see the sketch below).
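One rough way to run that experiment is to time the same workload at several batch sizes. This is a minimal sketch assuming the official Python client; the index name "bulk-test", the document generator, and the chunk sizes are all illustrative:

import time
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()

def docs(n):
    # Hypothetical sample documents for the experiment.
    for i in range(n):
        yield {"_index": "bulk-test", "_id": i, "field1": "value-%d" % i}

# Time identical workloads at several chunk sizes to find the sweet spot
# for this particular cluster and hardware.
for chunk_size in (100, 500, 1000, 5000):
    start = time.time()
    helpers.bulk(es, docs(50000), chunk_size=chunk_size)
    print("chunk_size=%d took %.1fs" % (chunk_size, time.time() - start))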
Overview
Any application that interfaces with Elasticsearch to index, update or search data, or to monitor and maintain Elasticsearch using its various APIs, can be considered a client.
It is very important to configure clients properly in order to ensure optimum use of Elasticsearch resources.
Examples
There are many open-source client applications for monitoring, alerting and visualization, such as ElasticHQ, ElastAlert and Grafana, to name a few, on top of Elastic's own client applications such as Filebeat, Metricbeat, Logstash and Kibana, which have all been designed to integrate with Elasticsearch.
However, it is frequently necessary to create your own client application to interface with Elasticsearch. Below is a simple example using the Python client (taken from the client documentation):
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch()

doc = {
    'author': 'Testing',
    'text': 'Elasticsearch: cool. bonsai cool.',
    'timestamp': datetime.now(),
}
res = es.index(index="test-index", doc_type='tweet', id=1, body=doc)
print(res['result'])

res = es.get(index="test-index", doc_type='tweet', id=1)
print(res['_source'])

es.indices.refresh(index="test-index")

res = es.search(index="test-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total']['value'])
for hit in res['hits']['hits']:
    print("%(timestamp)s %(author)s: %(text)s" % hit["_source"])
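Note that the doc_type parameter shown in this example comes from older client documentation; mapping types were deprecated in Elasticsearch 7.x and removed in 8.x, so recent versions of the client omit this parameter.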
All of the official Elasticsearch clients follow a similar structure, working as light wrappers around the Elasticsearch REST API, so if you are familiar with the Elasticsearch query structure they are usually quite straightforward to use.
Notes and Good Things to Know
Use official Elasticsearch libraries.
Although it is possible to connect to Elasticsearch with any HTTP client, such as curl, the official Elasticsearch libraries have been designed to properly implement connection pooling and keep-alives.
Official Elasticsearch clients are available for Java, JavaScript, Perl, PHP, Python, Ruby and .NET. Many other programming languages are supported by community versions.
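For example, with the Python client, connection pooling, keep-alives and retries are configured when the client is instantiated. This is a minimal sketch; the host names and the specific retry and timeout values are illustrative:

from elasticsearch import Elasticsearch

# The client maintains a connection pool with keep-alives across these
# nodes and transparently retries failed requests.
es = Elasticsearch(
    ["http://node1:9200", "http://node2:9200"],
    max_retries=3,
    retry_on_timeout=True,
    timeout=30,
)
print(es.ping())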
Keep your Elasticsearch version and client versions in sync.
To avoid surprises, always keep your client versions in line with the Elasticsearch version you are using, and always test clients against Elasticsearch, since even minor version upgrades can cause issues due to changed dependencies or required code changes.
Load balance across appropriate nodes.
Make sure that the client properly load balances across all of the appropriate nodes in the cluster: in small clusters this will normally mean all data nodes (never dedicated master nodes), and in larger clusters, all dedicated coordinating nodes (if implemented).
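As a rough sketch with the Python client (the host names are illustrative), you can point the client only at the appropriate nodes and let it round-robin across them, optionally enabling sniffing so the node list stays current:

from elasticsearch import Elasticsearch

es = Elasticsearch(
    ["http://coord1:9200", "http://coord2:9200", "http://coord3:9200"],
    sniff_on_start=True,            # discover live nodes at startup
    sniff_on_connection_fail=True,  # re-sniff if a node drops out
    sniffer_timeout=60,             # refresh the node list periodically
)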
Ensure that the Elasticsearch application properly handles exceptions.
If Elasticsearch is unable to cope with the volume of requests, a client application designed to handle this gracefully (for example, via some sort of queueing mechanism) will do far better than one that simply inundates a struggling cluster with repeated requests.
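A minimal sketch of such graceful handling, assuming the Python client and a hypothetical bulk_with_backoff helper (actions should be a list rather than a generator, so that a retry can resend all of them):

import time
from elasticsearch import Elasticsearch, helpers
from elasticsearch.exceptions import TransportError

es = Elasticsearch()

def bulk_with_backoff(actions, retries=5):
    # On failure, wait with exponential backoff instead of hammering a
    # struggling cluster with immediate repeat requests.
    for attempt in range(retries):
        try:
            return helpers.bulk(es, actions)
        except TransportError as ex:
            wait = 2 ** attempt
            print("Bulk failed (%s), retrying in %ds" % (ex, wait))
            time.sleep(wait)
    raise RuntimeError("Bulk request failed after %d attempts" % retries)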
Log Context
The log “Error while executing bulk request” is generated by the class BulkBenchmarkTask.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
// measure only service time; latency is not that interesting for a throughput benchmark
long start = System.nanoTime();
try {
    success = bulkRequestExecutor.bulkIndex(currentBulk);
} catch (Exception ex) {
    logger.warn("Error while executing bulk request", ex);
}
long stop = System.nanoTime();
if (iteration >= warmupIterations) {
    sampleRecorder.addSample(new Sample("bulk", start, start, stop, success));
}