Briefly, this error occurs when Elasticsearch tries to process items from a queue, but the queue is empty. This could be due to a lack of data being sent to Elasticsearch, or a problem with the data ingestion process. To resolve this issue, you can check if your data source is sending data correctly. If it is, then check your ingestion pipeline for any issues. You could also consider increasing the frequency of data ingestion or adjusting the queue size to better match your data flow.
This guide will help you check for common problems that cause the log “queue processor found no items” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: queue, cluster.
Overview
The term “queue” in Elasticsearch is used in the context of thread pools. Each node of the Elasticsearch cluster holds various thread pools to manage the memory consumption on that node for different types of requests. The queues come with initial default limits based on node size, but can be modified using the _settings REST endpoint; see the notes below for version-specific behavior.
What it is used for
Queues are used to hold pending requests for the corresponding thread pool instead of those requests being rejected. For example, if more search requests arrive on a node than can be processed at the same time, the excess requests are sent to the search thread pool queue.
Examples
Monitoring the thread pools using _cat API:
GET /_cat/thread_pool?v
Get details about each thread pool, including current size:
GET /_nodes/thread_pool
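You can also request only the columns that are relevant for queue monitoring by passing an explicit column list (the thread pool names and columns below are illustrative; adjust them to the pools you care about):
GET /_cat/thread_pool/write,search?v&h=node_name,name,active,queue,rejected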
Notes
- Thread pool queues are among the most important stats to monitor in Elasticsearch, as they have a direct impact on cluster performance and, when full, may halt indexing and search requests.
- The queue size of a specific thread pool can be changed using its type-specific parameters (for example, queue_size).
- In version 2.x, it is possible to update thread pool queue sizes dynamically using the cluster settings API.
- From Elasticsearch version 5.x onward, it is not possible to update thread pool settings dynamically via the cluster settings API. Instead, they are node-level settings that must be configured inside elasticsearch.yml on each node, and a node restart is required after the updates (see the example below).
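As a minimal sketch of such a static configuration, the write thread pool queue size could be raised by adding a line like the following to elasticsearch.yml on each node (the pool name and the value 1000 are purely illustrative, and older versions use the bulk pool instead of write):
thread_pool.write.queue_size: 1000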
Common problems
- The most common queue-related problem in Elasticsearch is the EsRejectedExecutionException, which occurs when queues are full and Elasticsearch nodes cannot keep up with the rate of incoming requests. This may also lead to nodes becoming unresponsive. To deal with this issue, thread pools need continuous monitoring; based on thread pool queue utilization, you may need to review and control the indexing/search requests or increase the resources of the cluster (see the example after this list).
- In the case of bulk indexing queue rejections, increasing the queue size causes the node to keep more data in memory, which may result in requests taking longer to complete and more heap space being consumed. As a result, cluster performance and stability may suffer.
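To spot such rejections, the node stats API exposes cumulative rejected counters per thread pool:
GET /_nodes/stats/thread_pool
A steadily growing rejected count for the write or search pool indicates that the corresponding queue is regularly filling up.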
Overview
An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are the following; an example of inspecting them through the API appears after the list:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
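To see these elements in practice, the cluster health and _cat/nodes APIs show the overall cluster status, the roles of each node and which node is the currently elected master; for example:
GET /_cluster/health
GET /_cat/nodes?v&h=name,node.role,master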
Common problems
Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configurations that leave the Elasticsearch cluster unable to safely elect a master node.
Backups
Because Elasticsearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times and so there may not be complete coherence between them. As such, the only way to back up an Elasticsearch cluster is through the use of snapshots, which contain the full picture of an index at a single point in time.
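As a minimal sketch, assuming a shared filesystem repository whose path has been registered under path.repo on every node (the repository name my_backup, the location and the snapshot name are placeholders):
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup"
  }
}
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true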
Cluster resilience
When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master-eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
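As an illustrative sketch, shard allocation awareness can be enabled by tagging each node with a custom attribute in elasticsearch.yml and telling the cluster to take that attribute into account when allocating shards (the attribute name zone and the value zone-a are placeholders):
node.attr.zone: zone-a
cluster.routing.allocation.awareness.attributes: zone
With this in place, Elasticsearch will try to spread the copies of each shard across nodes with different zone values.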
Log Context
The log “queue processor found no items” is generated in the class MasterService.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
        if (batch != null) {
            currentlyExecutingBatch = batch;
            return batch;
        }
    }
    logger.error("queue processor found no items");
    assert false : "queue processor found no items";
    throw new IllegalStateException("queue processor found no items");
}

private void forkQueueProcessor() {