Briefly, this error occurs when Elasticsearch fails to reindex documents due to bulk or search failures. This can happen for a variety of reasons, such as insufficient memory, incorrect mappings, or network issues. To resolve it, you can increase the heap size to provide more memory, ensure that the mappings are correct, or check network connectivity. You can also try reindexing in smaller batches to reduce the load on the server.
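One way to reindex in smaller batches is to lower the scroll batch size via the `source.size` parameter of the Reindex API (the default is 1000). A hedged sketch, assuming hypothetical index names `my-source` and `my-dest`:

```
POST _reindex
{
  "source": { "index": "my-source", "size": 500 },
  "dest": { "index": "my-dest" }
}
```

Smaller batches put less pressure on heap memory during each bulk write, at the cost of a longer overall reindex.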
This guide will help you check for common problems that cause the log "error occurred while reindexing; bulk failures [{}]; search failures [{}]" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: search, bulk.
Overview
Search refers to searching for documents in an index or across multiple indices. The simplest search is just a GET API request to the _search endpoint. The search query can be provided either as a query string parameter or in the request body.
Examples
If no search parameters are provided, every document in the index is a hit, and by default the first 10 hits are returned.
GET my_documents/_search
A JSON object is returned in response to a search query. A 200 response code means the request was completed successfully.
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ ... ]
  }
}
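Clients typically inspect this response programmatically. A minimal Python sketch (not tied to any client library, using the response above loaded as a dict) that extracts the hit count and checks for shard failures:

```python
# Sketch: inspect a parsed _search response (the JSON above, as a dict).
response = {
    "took": 1,
    "timed_out": False,
    "_shards": {"total": 2, "successful": 2, "failed": 0},
    "hits": {"total": 2, "max_score": 1.0, "hits": []},
}

def summarize_search(resp):
    """Return (total_hits, failed_shards) from a _search response dict."""
    total = resp["hits"]["total"]
    # In Elasticsearch 7+, hits.total is an object {"value": N, "relation": ...};
    # older versions return a bare number, so handle both shapes.
    if isinstance(total, dict):
        total = total["value"]
    return total, resp["_shards"]["failed"]

total, failed = summarize_search(response)
print(total, failed)  # 2 0
```

A non-zero `_shards.failed` means some shards could not be searched, which is exactly the kind of condition reported as a search failure in the log discussed here.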
Notes and good things to know
- Distributed search is challenging: every shard of the index needs to be searched for hits, and those hits are then combined into a single sorted list as the final result.
- There are two phases of search: the query phase and the fetch phase.
- In the query phase, the query is executed on each shard locally and top hits are returned to the coordinating node. The coordinating node merges the results and creates a global sorted list.
- In the fetch phase, the coordinating node retrieves the actual documents for those hit IDs from the relevant shards and returns them to the requesting client.
- A coordinating node needs enough memory and CPU in order to handle the fetch phase.
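The two phases above can be sketched as a toy model (this is illustrative Python, not Elasticsearch code; all names are made up): each shard returns its local top hits with scores, the coordinating node merges them into a global sorted list, and only then are the winning documents fetched.

```python
import heapq

# Toy model of query-then-fetch. Each "shard" holds (doc_id, score) hits
# plus a local document store.
shard_1 = {"hits": [("a", 3.2), ("b", 1.1)], "docs": {"a": {"t": "A"}, "b": {"t": "B"}}}
shard_2 = {"hits": [("c", 2.7), ("d", 0.4)], "docs": {"c": {"t": "C"}, "d": {"t": "D"}}}

def query_phase(shards, size):
    # Each shard returns its local top hits; the coordinating node merges
    # them into one globally sorted list of (doc_id, score).
    all_hits = [hit for s in shards for hit in s["hits"]]
    return heapq.nlargest(size, all_hits, key=lambda h: h[1])

def fetch_phase(shards, top_hits):
    # The coordinating node fetches the actual documents for the winning IDs.
    store = {}
    for s in shards:
        store.update(s["docs"])
    return [store[doc_id] for doc_id, _score in top_hits]

top = query_phase([shard_1, shard_2], size=2)
print([doc_id for doc_id, _ in top])         # ['a', 'c']
print(fetch_phase([shard_1, shard_2], top))  # [{'t': 'A'}, {'t': 'C'}]
```

Note that only the fetch phase touches full documents; the query phase moves only IDs and scores, which is why the coordinating node's memory pressure peaks during fetch.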
Overview
In Elasticsearch, when using the Bulk API it is possible to perform many write operations in a single API call, which increases the indexing speed. Using the Bulk API is more efficient than sending multiple separate requests. This can be done for the following four actions:
- Index
- Update
- Create
- Delete
Examples
The bulk request below will index a document, delete another document, and update an existing document.
POST _bulk
{ "index" : { "_index" : "myindex", "_id" : "1" } }
{ "field1" : "value" }
{ "delete" : { "_index" : "myindex", "_id" : "2" } }
{ "update" : { "_id" : "1", "_index" : "myindex" } }
{ "doc" : { "field2" : "value5" } }
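The bulk body is newline-delimited JSON (NDJSON): each action line is followed by a source line where the action needs one (index/create/update; delete has no source line), and the body must end with a newline. A small Python sketch that assembles the body above without any client library:

```python
import json

# Assemble the NDJSON body for the bulk request shown above.
actions = [
    ({"index": {"_index": "myindex", "_id": "1"}}, {"field1": "value"}),
    ({"delete": {"_index": "myindex", "_id": "2"}}, None),  # delete: no source line
    ({"update": {"_id": "1", "_index": "myindex"}}, {"doc": {"field2": "value5"}}),
]

lines = []
for action, source in actions:
    lines.append(json.dumps(action))
    if source is not None:
        lines.append(json.dumps(source))
body = "\n".join(lines) + "\n"  # the trailing newline is required by the Bulk API

print(body)
```

Official client libraries provide bulk helpers that do this assembly (and batching) for you; this sketch only shows the wire format.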
Notes
- The Bulk API is useful when you need to index data streams that can be queued up and indexed in batches of hundreds or thousands, such as logs.
- There is no single correct number of actions to send in one bulk call; you will need to find the optimal batch size by experimentation, given your cluster size, number of nodes, hardware specs, etc.
Log Context
The log "error occurred while reindexing; bulk failures [{}]; search failures [{}]" is generated by the class SystemIndexMigrator.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
String bulkFailures = (bulkByScrollResponse.getBulkFailures() != null)
    ? Strings.collectionToCommaDelimitedString(bulkByScrollResponse.getBulkFailures())
    : "";
String searchFailures = (bulkByScrollResponse.getSearchFailures() != null)
    ? Strings.collectionToCommaDelimitedString(bulkByScrollResponse.getSearchFailures())
    : "";
logger.error("error occurred while reindexing; bulk failures [{}]; search failures [{}]", bulkFailures, searchFailures);
return new ElasticsearchException(
    "error occurred while reindexing; bulk failures [{}]; search failures [{}]",
    bulkFailures,
    searchFailures
);