Briefly, this error occurs when Elasticsearch is unable to clear its cache, due to issues such as insufficient permissions, a lack of disk space, or a network problem. To resolve this, you can try freeing up disk space, checking network connectivity, or ensuring that Elasticsearch has the necessary permissions to perform the operation. Additionally, check for any underlying system issues that might be causing this error.
This guide will help you check for common problems that cause the log “unexpectedly failed to clear cache” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: cache, cluster.
Overview
Elasticsearch uses three types of caches to improve the efficiency of its operations.
- Node query cache
- Shard request cache
- Field data cache
How they work
The node query cache maintains the results of queries used in a filter context. The results are evicted on a least recently used (LRU) basis.
The shard request cache maintains the results of frequently used search requests where size=0, most notably the results of aggregations. This cache is particularly relevant for logging use cases, where data on older indices is no longer updated, so the results of regular aggregations can be kept in the cache and reused.
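For illustration, a request of the kind served by the shard request cache might look like the sketch below (the index name and field are hypothetical):
GET /logs-2024/_search
{
  "size": 0,
  "aggs": {
    "errors_per_day": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "day"
      }
    }
  }
}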
The field data cache is used for sorting and aggregations. To keep these operations quick, Elasticsearch loads these values into memory.
Examples
Elasticsearch usually manages cache behind the scenes, without the need for any specific settings. However, it is possible to monitor and limit the amount of memory used on each node for a given cache type by putting the following in elasticsearch.yml:
indices.queries.cache.size: 10%
indices.fielddata.cache.size: 30%
Note that the query cache value shown above is in fact the default, and there is no need to set it specifically. The field data cache, on the other hand, is unbounded by default, so indices.fielddata.cache.size must be set explicitly if you want to cap it. The defaults are good for most use cases and should rarely be modified.
You can monitor the use of caches on each node like this:
GET /_nodes/stats/indices/fielddata
GET /_nodes/stats/indices/query_cache
GET /_nodes/stats/indices/request_cache
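Since the log line this guide covers concerns clearing caches, note that caches can also be cleared manually with the clear cache API; a brief sketch (the index name is hypothetical):
POST /my-index/_cache/clear
POST /my-index/_cache/clear?fielddata=true
POST /_cache/clear
The first form clears all caches for the index, the second clears only its field data cache, and the last clears caches across all indices.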
Notes and good things to know
Construct your queries with reusable filters. Certain parts of your query are good candidates to be reused across a large number of queries, and you should design your queries with this in mind. Anything that does not need to be scored should go in the filter section of a bool query. For example, time ranges, language selectors, or clauses that exclude inactive documents are all likely to appear in a large number of queries, and should be placed in the filter part of the query so that they can be cached and reused.
In particular, take care with time filters. “now-15m” cannot be reused, because “now” continually changes as the time window moves on. On the other hand, “now-15m/m” rounds down to the nearest minute, and can be reused (via the cache) for 60 seconds before rolling over to the next minute.
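As a minimal sketch of the difference, assuming a @timestamp field, the first of these range filters changes continuously and cannot be reused from the cache, while the second can:
{ "range": { "@timestamp": { "gte": "now-15m" } } }
{ "range": { "@timestamp": { "gte": "now-15m/m" } } }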
For example, when a user enters the search term “brexit”, we may also want to filter on language and time period to return relevant articles. The query below leaves only the query term “brexit” in the “must” part, because this is the only part that should affect the relevance score. The time filter and language filter can be reused again and again across different searches.
POST results/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "message": {
              "query": "brexit"
            }
          }
        }
      ],
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-10d/d"
            }
          }
        },
        {
          "term": {
            "lang.keyword": {
              "value": "en",
              "boost": 1
            }
          }
        }
      ]
    }
  }
}
Limit the use of field data. Be careful about using fielddata=true in your mapping on fields where the number of distinct terms results in high cardinality. If you must use fielddata=true, you can also reduce the fielddata cache requirement for a given index by limiting which terms are loaded, using a field data frequency filter.
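As a sketch of such a frequency filter (the field name here is hypothetical), the mapping below loads into the field data cache only terms that appear in between 0.1% and 10% of the documents in a segment, and skips small segments entirely:
PUT results
{
  "mappings": {
    "properties": {
      "tag": {
        "type": "text",
        "fielddata": true,
        "fielddata_frequency_filter": {
          "min": 0.001,
          "max": 0.1,
          "min_segment_size": 500
        }
      }
    }
  }
}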
Overview
An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
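For illustration, you can check how many nodes are in a cluster and whether it is healthy using the following standard APIs:
GET /_cluster/health
GET /_cat/nodes?v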
Notes and good things to know
The key elements to clustering are:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However, as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes (see the configuration sketch after this list).
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
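As a minimal sketch of the cluster formation and node role settings mentioned above (the cluster name, node names and addresses are all hypothetical), the elasticsearch.yml of a dedicated master node might contain:
cluster.name: my-cluster
node.name: master-1
node.roles: [ master ]
discovery.seed_hosts: [ "10.0.0.1", "10.0.0.2", "10.0.0.3" ]
cluster.initial_master_nodes: [ "master-1", "master-2", "master-3" ]
cluster.initial_master_nodes is only used when bootstrapping a brand-new cluster and should be removed once the cluster has formed; a data node would instead set node.roles: [ data ].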
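Replication, in turn, is controlled per index. A sketch of raising the replica count on an existing index (the index name is hypothetical):
PUT /my-index/_settings
{
  "index": {
    "number_of_replicas": 2
  }
}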
Common problems
Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configurations that leave the Elasticsearch cluster unable to safely elect a master node.
Backups
Because Elasticsearch is a clustered technology, it is not sufficient to back up each node’s data directory. The backups will have been made at different times, so there may not be complete coherency between them. As such, the only way to back up an Elasticsearch cluster is through the use of snapshots, which contain a consistent picture of an index at a single point in time.
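As a brief sketch (the repository name and location are hypothetical, and the fs repository type requires path.repo to be set on every node), registering a snapshot repository and taking a snapshot looks like this:
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup"
  }
}
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true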
Cluster resilience
When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
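As a minimal sketch of shard allocation awareness (the attribute name and value are arbitrary), each node is tagged with a custom attribute in its elasticsearch.yml, and the cluster is told to take that attribute into account when allocating shards:
node.attr.zone: zone-a
cluster.routing.allocation.awareness.attributes: zone
With this in place, Elasticsearch will try to avoid placing a shard and its replica in the same zone, so the loss of one zone should not lose both copies of the data.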
Log Context
The log “unexpectedly failed to clear cache” is generated by the class JoinValidationService.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
}

private final AbstractRunnable cacheClearer = new AbstractRunnable() {
    @Override
    public void onFailure(Exception e) {
        logger.error("unexpectedly failed to clear cache", e);
        assert false : e;
    }

    @Override
    protected void doRun() {