Briefly, this error occurs when Elasticsearch attempts to perform a bulk index operation for a transform that has already entered a failed state. The underlying failure can have various causes, such as insufficient memory, incorrect data format, or network issues. To resolve this, you can try increasing the memory allocation, checking the data for format inconsistencies, or ensuring that network connectivity is stable. Additionally, check the transform configuration and logs for the specific failure reason. If the problem persists, consider breaking the bulk operation down into smaller parts.
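Once the underlying cause has been fixed, the failed transform has to be restarted before it will index again. The commands below are a minimal sketch, assuming Elasticsearch 7.5 or later (where the transform APIs live under _transform) and using my-transform as a placeholder transform ID:

# Inspect the transform state and the reason it failed
GET _transform/my-transform/_stats

# A transform in the failed state must be force-stopped first
POST _transform/my-transform/_stop?force=true

# Restart the transform once the underlying issue is resolved
POST _transform/my-transform/_start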
This guide will help you check for common problems that cause the log “Attempted to do a bulk index request for failed transform [{}].” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: plugin, index, bulk, request.
Overview
In Elasticsearch, an index (plural: indices) contains a schema and can have one or more shards and replicas. An Elasticsearch index is divided into shards and each shard is an instance of a Lucene index.
Indices are used to store the documents in dedicated data structures corresponding to the data type of fields. For example, text fields are stored inside an inverted index whereas numeric and geo fields are stored inside BKD trees.
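The shard layout of an index can be inspected with the cat shards API. For example, once the test_index1 index from the next section exists:

GET _cat/shards/test_index1?v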
Examples
Create index
The following example applies to Elasticsearch version 7.x onwards, which uses typeless mappings (mapping types were removed in version 7.0). It creates an index named test_index1 with two shards, each having one replica:
PUT /test_index1?pretty
{
  "settings" : {
    "number_of_shards" : 2,
    "number_of_replicas" : 1
  },
  "mappings" : {
    "properties" : {
      "tags" : { "type" : "keyword" },
      "updated_at" : { "type" : "date" }
    }
  }
}
List indices
All the index names and their basic information can be retrieved using the following command:
GET _cat/indices?v
Index a document
Let’s add a document to the index with the command below:
PUT test_index1/_doc/1
{
  "tags": [ "opster", "elasticsearch" ],
  "updated_at": "2020-01-01"
}
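To verify that the document was stored, it can be retrieved by ID:

GET test_index1/_doc/1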
Query an index
GET test_index1/_search
{
  "query": {
    "match_all": {}
  }
}
Query multiple indices
It is possible to search multiple indices with a single request. In a raw HTTP request, index names are sent in comma-separated format, as shown in the example below; when querying via a programming language client such as Python or Java, index names are passed as a list.
GET test_index1,test_index2/_search
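Wildcard patterns are also accepted in index names, so, assuming both indices share the test_index prefix, the same search can be written as:

GET test_index*/_search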
Delete indices
DELETE test_index1
Common problems
- It is good practice to define the settings and mapping of an index wherever possible, because if this is not done, Elasticsearch tries to guess the data type of fields automatically at indexing time. This automatic process can have disadvantages, such as mapping conflicts, duplicate data and incorrect data types being set in the index. If the fields are not known in advance, it’s better to use dynamic templates (see the sketch after this list).
- Elasticsearch supports wildcard patterns in index names, which sometimes aids with querying multiple indices, but can also be very destructive. For example, it is possible to delete all indices with a single command:
DELETE /*
To prevent this, you can add the following line to elasticsearch.yml, which requires destructive actions to name indices explicitly:
action.destructive_requires_name: true
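Returning to the first point above, the following is a minimal sketch of a dynamic template; the index name test_index3 and the template name strings_as_keywords are placeholders. It maps every dynamically added string field as keyword instead of the default text with a keyword sub-field:

PUT /test_index3
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}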
Overview
In Elasticsearch, when using the Bulk API it is possible to perform many write operations in a single API call, which increases the indexing speed. Using the Bulk API is more efficient than sending multiple separate requests. This can be done for the following four actions:
- Index
- Update
- Create
- Delete
Examples
The bulk request below will index a document, delete another document, and update an existing document.
POST _bulk
{ "index" : { "_index" : "myindex", "_id" : "1" } }
{ "field1" : "value" }
{ "delete" : { "_index" : "myindex", "_id" : "2" } }
{ "update" : { "_id" : "1", "_index" : "myindex" } }
{ "doc" : { "field2" : "value5" } }
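Note that a bulk request returns HTTP 200 even when individual actions fail, so the top-level errors flag and the per-item status codes in the response have to be checked. The response below is an illustrative sketch of the shape, not actual output:

{
  "took" : 30,
  "errors" : false,
  "items" : [
    { "index" :  { "_index" : "myindex", "_id" : "1", "status" : 201 } },
    { "delete" : { "_index" : "myindex", "_id" : "2", "status" : 200 } },
    { "update" : { "_index" : "myindex", "_id" : "1", "status" : 200 } }
  ]
}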
Notes
- The Bulk API is useful when you need to index data streams that can be queued up and indexed in batches of hundreds or thousands, such as logs.
- There is no single correct number of actions or size limit for a bulk call; you will need to find the optimum batch size through experimentation, given your cluster size, number of nodes, hardware specs, etc.
Log Context
Log “Attempted to do a bulk index request for failed transform [{}].” is emitted from the class ClientTransformIndexer.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:
@Override
protected void doNextBulk(BulkRequest request, ActionListener<BulkResponse> nextPhase) {
    if (context.getTaskState() == TransformTaskState.FAILED) {
        logger.debug("[{}] attempted to bulk index while failed.", getJobId());
        nextPhase.onFailure(
            new ElasticsearchException("Attempted to do a bulk index request for failed transform [{}].", getJobId())
        );
        return;
    }
    ClientHelper.executeWithHeadersAsync(
        transformConfig.getHeaders(),
        ClientHelper.TRANSFORM_ORIGIN,