Briefly, this error occurs when Elasticsearch tries to create a new index, but it finds that metadata for the index already exists. This could be due to a previous failed attempt to create the index or a replication issue. To resolve this, you can try deleting the existing index metadata before creating the new index. Alternatively, you can check for any replication issues and fix them. Also, ensure that the index name you’re trying to create is unique to avoid conflicts with existing indices.
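As a quick, hedged illustration of those steps, the commands below use the placeholder index name my_index; substitute the name reported in the log, and only run the delete once you are sure the index is a stale leftover rather than live data.

Check whether an index with that name already exists (a 200 response means it does):

HEAD /my_index

If the existing index is confirmed to be a leftover from a failed creation attempt, delete it and then retry the create request:

DELETE /my_index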
This guide will help you check for common problems that cause the log “applying create index request using existing index [{}] metadata” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: metadata, index, cluster, request.
Overview
Metadata in Elasticsearch refers to additional information stored for each document, held in dedicated metadata fields. The default behavior of some of these metadata fields can be customized when the mapping is created.
Examples
Using the _meta field to store application-specific information alongside the mapping:
PUT /my_index?pretty
{
  "mappings": {
    "_meta": {
      "domain": "security",
      "release_information": {
        "date": "18-01-2020",
        "version": "7.5"
      }
    }
  }
}
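The stored _meta section can be read back with the get mapping API:

GET /my_index/_mapping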
Notes
- In version 2.x, Elasticsearch had a total of 13 meta fields available: _index, _uid, _type, _id, _source, _size, _all, _field_names, _timestamp, _ttl, _parent, _routing and _meta
- In version 5.x, _timestamp and _ttl meta fields were removed.
- In version 6.x, the _parent meta field was removed.
- In version 7.x, _uid and _all meta fields were removed.
Overview
In Elasticsearch, an index (plural: indices) contains a schema and can have one or more shards and replicas. An Elasticsearch index is divided into shards and each shard is an instance of a Lucene index.
Indices are used to store the documents in dedicated data structures corresponding to the data type of fields. For example, text fields are stored inside an inverted index whereas numeric and geo fields are stored inside BKD trees.
Examples
Create index
The following example applies to Elasticsearch version 5.x onwards. It creates an index named test_index1 with two shards, each having one replica:
PUT /test_index1?pretty
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "tags": {
        "type": "keyword"
      },
      "updated_at": {
        "type": "date"
      }
    }
  }
}
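Once created, the settings and mappings of the index can be verified with:

GET /test_index1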
List indices
All the index names and their basic information can be retrieved using the following command:
GET _cat/indices?v
Index a document
Let’s add a document to the index with the command below:
PUT test_index1/_doc/1
{
  "tags": [
    "opster",
    "elasticsearch"
  ],
  "updated_at": "2020-01-01"
}
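The document can then be retrieved by its ID:

GET test_index1/_doc/1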
Query an index
GET test_index1/_search
{
  "query": {
    "match_all": {}
  }
}
Query multiple indices
It is possible to search multiple indices with a single request. In a raw HTTP request, index names should be sent in comma-separated format, as shown in the example below; when querying via a programming language client such as Python or Java, index names are passed as a list.
GET test_index1,test_index2/_search
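Index names in a search request can also be matched with a wildcard pattern (see the note on destructive wildcard operations under Common problems below). For example, assuming both indices share the test_index prefix:

GET test_index*/_search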
Delete indices
DELETE test_index1
Common problems
- It is good practice to define the settings and mapping of an index wherever possible, because if this is not done, Elasticsearch tries to guess the data type of each field automatically at indexing time. This automatic process can have disadvantages, such as mapping conflicts, duplicate data and incorrect data types being set in the index. If the fields are not known in advance, it’s better to use dynamic templates (see the sketch at the end of this section).
- Elasticsearch supports wildcard patterns in index names, which sometimes helps when querying multiple indices, but can also be very destructive. For example, it is possible to delete all the indices with a single command:
DELETE /*
To disable such wildcard deletions, you can add the following line to elasticsearch.yml:
action.destructive_requires_name: true
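As a minimal sketch of the dynamic templates mentioned above (the index name my_dynamic_index and the template name strings_as_keywords are only illustrative), the following mapping tells Elasticsearch to index any newly encountered string field as a keyword instead of guessing:

PUT /my_dynamic_index
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}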
Overview
An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are listed below, followed by a short example of how to inspect them through the cluster APIs:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
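As referenced above, the cluster health, the currently elected master and the roles of each node can be inspected with the following APIs (the h parameter simply selects which columns to display):

GET _cluster/health

GET _cat/nodes?v&h=name,node.role,master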
Common problems
Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require a large amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configuration, leading to situations where the Elasticsearch cluster is unable to safely elect a master node.
Backups
Because Elasticsearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times and so there may not be complete coherency between them. As such, the only way to back up an Elasticsearch cluster is through the use of snapshots, which contain the full picture of an index at any one time.
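As a hedged sketch, a shared filesystem snapshot repository can be registered and a snapshot taken as shown below; the repository name my_backup, the snapshot name snapshot_1 and the location are placeholders, and the location must be listed under path.repo in elasticsearch.yml:

PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/elasticsearch_backups"
  }
}

PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true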
Cluster resilience
When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
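For illustration, shard allocation awareness can be enabled by tagging each node with a custom attribute and telling the cluster to take it into account; the attribute name zone and the value zone-a below are examples only, set in elasticsearch.yml:

node.attr.zone: zone-a
cluster.routing.allocation.awareness.attributes: zone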
Log Context
Log “applying create index request using existing index [{}] metadata” is emitted from the class MetadataCreateIndexService.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
final boolean silent,
final IndexMetadata sourceMetadata,
final BiConsumer<Metadata.Builder, IndexMetadata> metadataTransformer,
final ActionListener<Void> rerouteListener
) throws Exception {
    logger.info("applying create index request using existing index [{}] metadata", sourceMetadata.getIndex().getName());
    final Map<String, Object> mappings = MapperService.parseMapping(xContentRegistry, request.mappings());
    if (mappings.isEmpty() == false) {
        throw new IllegalArgumentException(
            "mappings are not allowed when creating an index from a source index, "
                + "all mappings are copied from the source index"