Briefly, this error occurs when Elasticsearch encounters a setting in its cluster state that it doesn’t recognize. Rather than discarding it, Elasticsearch archives the setting, i.e. keeps it under the "archived." prefix – this is what the word "archiving" in the log refers to. An unknown setting is usually the result of a typo, a deprecated or removed setting, or a setting that isn’t applicable to the current version of Elasticsearch. To resolve the issue, either remove the unknown setting or replace it with a valid one; if it is a deprecated setting, find the updated equivalent and use that instead. Always ensure that your settings are compatible with your Elasticsearch version.
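Once the root cause has been fixed, any settings that were already archived in the cluster state can usually be cleared with the cluster settings API using the archived.* wildcard (shown here for persistent settings; the same pattern works for transient ones):
PUT /_cluster/settings { "persistent" : { "archived.*" : null } }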
This guide will help you check for common problems that cause the log “ignoring existing unknown {} setting: [{}] with value [{}]; archiving” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: admin, cluster, settings.
Overview
An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
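A quick, read-only way to inspect these elements on a running cluster is to query the cluster health and cat nodes APIs, which show the overall cluster status, each node’s roles and which node is currently the elected master:
GET /_cluster/health
GET /_cat/nodes?v&h=name,node.role,master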
Common problems
Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require a large amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configurations that leave the Elasticsearch cluster unable to safely elect a master node.
Backups
Because Elasticsearch is a clustered technology, it is not sufficient to have backups of each node’s data directory: those backups will have been made at different times, so there may not be complete coherency between them. The only reliable way to back up an Elasticsearch cluster is through the use of snapshots, which contain the full picture of an index at a single point in time.
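As a minimal sketch (the repository name and filesystem path below are placeholders, and a shared filesystem repository also requires the path to be whitelisted via path.repo in elasticsearch.yml), a snapshot repository is registered and a snapshot taken like this:
PUT /_snapshot/my_backup { "type" : "fs", "settings" : { "location" : "/mnt/es_backups" } }
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true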
Cluster resilience
When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
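For example, shard allocation awareness can be configured by tagging each node with a custom attribute (the attribute name “zone” below is arbitrary) and telling the cluster to take that attribute into account when placing shards, in elasticsearch.yml:
node.attr.zone: zone-a
cluster.routing.allocation.awareness.attributes: zone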
Settings in Elasticsearch
In Elasticsearch, you can configure cluster-level settings, node-level settings and index-level settings. Here is a quick rundown of each level.
A. Cluster settings
These settings can either be:
- Persistent, meaning they apply across restarts, or
- Transient, meaning they won’t survive a full cluster restart.
If a transient setting is reset, the first one of these values that is defined is applied:
- The persistent setting
- The setting in the configuration file
- The default value
The order of precedence for cluster settings is:
- Transient cluster settings
- Persistent cluster settings
- Settings in the elasticsearch.yml configuration file
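For example, resetting a transient setting to null removes it, after which the persistent value, the value from elasticsearch.yml or the default (in that order of precedence) takes effect again:
PUT /_cluster/settings { "transient" : { "indices.recovery.max_bytes_per_sec" : null } }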
Examples
An example of persistent cluster settings update:
PUT /_cluster/settings { "persistent" : { "indices.recovery.max_bytes_per_sec" : "500mb" } }
An example of a transient update:
PUT /_cluster/settings { "transient" : { "indices.recovery.max_bytes_per_sec" : "40mb" } }
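To verify which values are actually in effect across the three levels of precedence, the current cluster settings (including defaults) can be read back:
GET /_cluster/settings?include_defaults=true&flat_settings=true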
B. Index settings
These are the settings that are applied to individual indices. There is a dedicated API for updating index-level settings.
Examples
The following API call sets the number of replica shards to 5 for the my_index index.
PUT /my_index/_settings { "index" : { "number_of_replicas" : 5 } }
To revert a setting to the default value, use null.
PUT /my_index/_settings { "index" : { "refresh_interval" : null } }
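To check which settings are currently applied to the index, including defaults that were never set explicitly, they can be read back with:
GET /my_index/_settings?include_defaults=true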
C. Node settings
These settings apply to individual nodes. Nodes can fulfill different roles, including the master, data, ingest and coordinating roles. Node settings are set through the elasticsearch.yml file of each node.
Examples
Setting a node to be a data node (in the elasticsearch.yml file):
node.data: true
Disabling the ingest role for the node (which is enabled by default):
node.ingest: false
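Note that in Elasticsearch 7.9 and later these boolean role settings are deprecated (and removed in 8.x) in favor of a single node.roles list, so on newer versions a dedicated data node would instead be configured as:
node.roles: [ data ]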
For production clusters, you should run each node type on dedicated machines, with two or more instances of each for high availability (and a minimum of three master-eligible nodes).
Notes and good things to know
- Learning more about cluster settings and index settings is important – it can spare you a lot of trouble. For example, if you are going to ingest a huge amount of data into an index and the number of replica shards is set to, say, 5, indexing will be very slow because the data is replicated at the same time it is indexed. To speed up indexing, you can set the number of replica shards to 0 via the settings API and restore the original value once indexing is done, as shown below.
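A minimal sketch of this approach, assuming the index is called my_index and originally had 5 replicas:
PUT /my_index/_settings { "index" : { "number_of_replicas" : 0 } }
and, once indexing is complete:
PUT /my_index/_settings { "index" : { "number_of_replicas" : 5 } }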
- Another useful example of using cluster-level settings is when a node has just joined the cluster but the cluster is not assigning any shards to it. Although shard allocation is enabled by default on all nodes, someone may have disabled it at some point (for example, in order to perform a rolling restart) and forgotten to re-enable it later. To re-enable shard allocation, update the cluster settings:
PUT /_cluster/settings { "transient" : { "cluster.routing.allocation.enable" : "all" } }
- It’s better to set cluster-wide settings with the settings API rather than in the elasticsearch.yml file, and to use the file only for local, node-specific settings; the API keeps the setting identical on all nodes. If, by accident, different values are defined on different nodes via elasticsearch.yml, the resulting discrepancies are hard to notice.
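One way to spot such discrepancies is to compare the settings each node actually started with, which are exposed per node by the nodes info API:
GET /_nodes/settings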
- See also: Recovery
Log Context
Log “ignoring existing unknown {} setting: [{}] with value [{}]; archiving” is generated by the class SettingsUpdater.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
        settingsWithUnknownOrInvalidArchived.filter(k -> k.startsWith(ARCHIVED_SETTINGS_PREFIX))
    );
}

private static void logUnknownSetting(final String settingType, final Map.Entry<String, String> e, final Logger logger) {
    logger.warn("ignoring existing unknown {} setting: [{}] with value [{}]; archiving", settingType, e.getKey(), e.getValue());
}

private static void logInvalidSetting(
    final String settingType, final Map.Entry<String, String> e,