Cluster Manager Task Throttling in OpenSearch

By Opster Expert Team - Gustavo

Updated: Sep 6, 2023

3 min read


Introduction

Cluster manager nodes (known as master nodes before OpenSearch 2.0) are in charge of holding the cluster metadata and keeping track of the cluster state, which holds information such as cluster settings and which shards are located on which nodes, as well as executing administration tasks.

Cluster manager nodes are responsible for executing actions like the creation and deletion of indices, index templates, snapshots, ingest pipelines, and aliases, as well as updates to cluster settings and mappings, and for ensuring that these actions are coherently propagated across all nodes in the cluster.

These tasks run in a single-threaded environment, so if too many tasks are sent to the cluster manager node, they will start queuing without limit, potentially affecting the entire cluster’s availability.
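You can inspect this queue at any time with the pending cluster tasks API; each entry in the response includes the task’s priority, its source, and how long it has been waiting in the queue:

GET /_cluster/pending_tasks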

Cluster Manager Task Throttling

To mitigate this issue, OpenSearch introduced “cluster manager task throttling,” which allows users to limit how many pending tasks of each type the queue can hold. Once the threshold for a task type is reached, new tasks of that type are rejected. Rejected tasks are retried with exponential backoff, and if the retries are unsuccessful within the timeout period, OpenSearch returns a cluster timeout error.

The per-task-type aspect is important because it means users can set different thresholds for different task types, allowing for task prioritization while preventing one task type from blocking another.

Supported task types: 

  • create-index
  • update-settings
  • cluster-update-settings
  • auto-create
  • delete-index
  • delete-dangling-index
  • create-data-stream
  • remove-data-stream
  • rollover-index
  • index-aliases
  • put-mapping
  • create-index-template
  • remove-index-template
  • create-component-template
  • remove-component-template
  • create-index-template-v2
  • remove-index-template-v2
  • put-pipeline
  • delete-pipeline
  • create-persistent-task
  • finish-persistent-task
  • remove-persistent-task
  • update-task-state
  • put-script
  • delete-script
  • put-repository
  • delete-repository
  • create-snapshot
  • delete-snapshot
  • update-snapshot-state
  • restore-snapshot
  • cluster-reroute-api

Cluster manager task throttling is disabled for all task types by default.
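Before enabling it, you can check what is currently configured by querying the cluster settings. The filter_path parameter below is optional; it is a generic response filter that narrows the output to the throttling-related settings:

GET _cluster/settings?include_defaults=true&filter_path=*.cluster_manager.throttling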

How to use cluster manager task throttling

To use this feature, users must enable it via the cluster settings API:

PUT _cluster/settings
{
  "persistent": {
    "cluster_manager.throttling": {
      "retry": {
        "max.delay": "25s",
        "base.delay": "1s"
      },
      "thresholds": {
        "put-mapping": {
          "value": 100
        },
        "create-index": {
          "value": 25
        }
      }
    }
  }
}

The configuration above will reject new put-mapping tasks once 100 put-mapping tasks are pending, and new create-index tasks once 25 are pending.
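If you later need to switch throttling off again for a task type, the OpenSearch documentation describes a threshold value of -1 as disabling throttling for that type; setting the value to null instead would remove the setting entirely, as with any cluster setting:

PUT _cluster/settings
{
  "persistent": {
    "cluster_manager.throttling": {
      "thresholds": {
        "put-mapping": {
          "value": -1
        }
      }
    }
  }
}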

A recent Pull Request made the throttling retry delay dynamically configurable via the base delay and max delay settings.

base.delay (default 5s): This setting is the initial delay before the first retry attempt is made after a task is rejected. In the example above, “base.delay” is set to “1s,” meaning that the system will wait 1 second before resubmitting a task after it has been rejected.

max.delay (default 30s): This setting caps the delay between retry attempts. As the system keeps retrying and increases the delay according to the exponential backoff strategy, the delay will never exceed the “max.delay” value. In the example above it is set to “25s,” so even if the exponential backoff calculation calls for a longer delay, the system will not wait more than 25 seconds between retry attempts.
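To make the interplay concrete: assuming the delay simply doubles on each attempt (the exact multiplier is an implementation detail of the retry logic), the example settings above would produce retry delays of roughly 1s, 2s, 4s, 8s, 16s, and then 25s for every subsequent attempt, since the next doubling (32s) would exceed “max.delay.”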

The purpose of these settings is to strike a balance between retrying promptly and keeping the retry load on the cluster manager reasonable.

Stats API

Users can check the current state of cluster manager task throttling using the following nodes stats API call:

GET /_nodes/stats/cluster_manager_throttling

The throttling section of a response looks like this:

…
"cluster_manager_throttling" : {
  "cluster_manager_stats" : {
    "TotalThrottledTasks" : 18,
    "ThrottledTasksPerTaskType" : {
      "put-mapping" : 18
    }
  }
}
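If you are only interested in these counters, you can optionally narrow the response with filter_path, a generic response filter supported across OpenSearch APIs (the wildcard below matches every node ID in the cluster):

GET /_nodes/stats/cluster_manager_throttling?filter_path=nodes.*.cluster_manager_throttling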

Conclusion

OpenSearch 2.5’s introduction of cluster manager task throttling is a significant evolution in cluster operation management. This feature allows users to mitigate the risk of task queue overflow, enhancing cluster availability by setting queue thresholds for the various task types. It further improves task management by allowing different task types to be prioritized and preventing one type from blocking another.

The cluster manager task throttling feature is particularly relevant if an application can (intentionally or otherwise) flood the cluster manager node with a large number of calls to create indices, templates, or any of the other task types listed above, potentially destabilizing the cluster.

The ability to configure the throttling delay dynamically adds flexibility to the system, promoting a balance between timely task retries and reasonable retry times. The feature’s ‘off’ default setting also offers users the freedom to opt-in, ensuring customization and user control.

All in all, the cluster manager task throttling feature is a robust solution for maintaining cluster operations in a stable and efficient way. It offers improved control, better task management, and is a significant step forward in the evolution of cluster node operation management.
