Elasticsearch Unlimited Shards per Node Issue

By Opster Team

Updated: Mar 10, 2024 | 3 min read

Introduction

In order to understand the context of this event, we first need to explain the different settings at play that govern the allowed number of shards per node. It is also worth noting that both primary and replica shards count towards the configured limit, whether they are assigned to a data node or not. Only shards from closed indices are not taken into account.

Background

First, there is a cluster-wide configuration setting named `cluster.max_shards_per_node`. Despite its name, it doesn’t enforce a specific number of shards per node, but rather provides an upper limit on how many shards can be created in the whole cluster, given the number of hot/warm/cold/content data nodes available (i.e. number_of_data_nodes * cluster.max_shards_per_node). The default value for this setting is 1000 shards per hot/warm/cold/content data node for normal (non-frozen) indices.

Similarly, there is a sibling setting for frozen data nodes named `cluster.max_shards_per_node.frozen` whose default value is 3000 shards of frozen indices per frozen data node. For example, if a cluster has 6 non-frozen data nodes and 1 frozen data node, it is allowed to contain 6000 shards within the nodes in the non-frozen tiers and 3000 shards on the node in the frozen tier.

Then, in parallel to these cluster-wide settings, there is also a node-specific setting named `cluster.routing.allocation.total_shards_per_node` that specifies a hard limit on the number of shards a single data node can host. This setting is set to -1 by default, which means that there is no limit on how many shards each specific data node is allowed to host; the only constraint is that all data nodes together cannot host more shards than the cluster-wide limit described above (number_of_data_nodes * cluster.max_shards_per_node).

Finally, it is also possible to define an index-specific setting named `index.routing.allocation.total_shards_per_node`, which specifies a hard limit on the number of shards of a given index that can be hosted on a single data node. That setting is also set to -1 by default, which means that there is no limit on how many shards of a given index can be hosted on a specific data node.
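
As a minimal sketch of how this index-level setting can be applied (assuming a hypothetical index named `my-index`), you could run:

PUT my-index/_settings
{
  "index.routing.allocation.total_shards_per_node": 2
}

With this in place, no single data node would be allowed to host more than 2 shards (primaries or replicas) of `my-index`.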

Now that we have reviewed all the settings related to the shard count limit in a cluster, we can review how they play together within the context of this event.

What does this mean?

This issue indicates that there is no limit set on the number of shards that can be created on each node in your Elasticsearch cluster. This means that your applications can create an ever-growing number of shards up to the cluster-wide limit as defined by the `cluster.max_shards_per_node` and `cluster.max_shards_per_node.frozen` settings. Not setting any limit is the default behavior, but in certain circumstances, it may eventually lead to instability in your cluster because the data nodes might not have enough resources to handle all of those shards.

Why does this occur?

This issue occurs when the total number of shards per node (i.e. `cluster.routing.allocation.total_shards_per_node`) is configured to -1, which means that there is no limit to the number of shards that can be hosted on a given node. Even though this is the default configuration for this setting, it allows applications to create as many shards as they want on each node, without any restrictions.
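
To verify what this setting is currently set to in your cluster, one option (a sketch; the exact response shape may vary by version) is to query the cluster settings including defaults:

GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.total_shards_per_node

If the value only appears under the `defaults` section with a value of -1, no explicit per-node limit has been configured.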

Possible impact and consequences of unlimited shards per node

The possible impact of this issue is that your Elasticsearch cluster may become unstable due to the ever-growing number of shards on certain nodes, especially in the case of ill-defined shard allocation filtering rules. Each shard consumes a given amount of RAM, CPU and storage space, so as the number of shards increases, the cluster may experience performance degradation, increased resource usage, and potential outages. This can negatively affect the overall performance and reliability of your Elasticsearch cluster.

How to resolve

To resolve this issue, it is first important to verify whether you are leveraging the frozen tier or not. If you are, we recommend leaving `cluster.routing.allocation.total_shards_per_node` at its default value of -1 (i.e. unlimited), mainly because of [this issue](https://github.com/elastic/elasticsearch/issues/90871).
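
If you are not sure whether your cluster contains frozen data nodes, one quick way to check (a sketch using the cat nodes API) is to list each node's roles and look for the `f` (data_frozen) role:

GET _cat/nodes?v&h=name,node.role

If none of the listed roles contain `f`, your cluster has no frozen tier and you can safely proceed with the steps below.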

If you are not, we recommend setting a limit on the number of shards allowed on each node, even though a cluster-wide limit is already in place by default. Before doing so, it is also worth considering your application architecture to evaluate how many indices are likely to be created, and with how many shards, in order to determine whether this is likely to push you over the per-node limit you want to set. For more information and a discussion of sharding architecture, read this guide.
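
As part of that evaluation, it also helps to look at how many shards each data node currently hosts. One way to do this (a sketch using the cat allocation API) is:

GET _cat/allocation?v&h=node,shards,disk.percent

This lists the number of shards currently allocated to each node, which gives you a realistic baseline before you decide on a per-node limit.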

As a preliminary step, it is a good idea to check the current value of the cluster-wide `cluster.max_shards_per_node` setting in order to see how many shards your cluster can currently host. If you are running Elasticsearch 8.8 or later, you can leverage the Health API to learn about the current limits using the command below:

GET _health_report/shards_capacity

This command will give you two important numbers, namely `data.max_shards_in_cluster` and `data.current_used_shards`. The former is based on the product of the number of data nodes and the current value of the `cluster.max_shards_per_node` setting. Similarly, if you’re leveraging a frozen tier, you’ll also get `frozen.max_shards_in_cluster` and `frozen.current_used_shards`, which follow the same logic.
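
As an illustration, an abridged response might look like the following (the numbers are hypothetical, and the exact fields can vary depending on your version and on the indicator's status):

{
  "indicators": {
    "shards_capacity": {
      "status": "yellow",
      "details": {
        "data": {
          "max_shards_in_cluster": 6000,
          "current_used_shards": 5400
        },
        "frozen": {
          "max_shards_in_cluster": 3000,
          "current_used_shards": 200
        }
      }
    }
  }
}

In this hypothetical example, the non-frozen tiers are already using 5400 of the 6000 allowed shards, which leaves little headroom.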

If you are running Elasticsearch 8.7.x or earlier, you can run the two commands below:

GET _cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node*
GET /_cluster/health?filter_path=number_of_data_nodes,active_shards

The first command will give you the values of `cluster.max_shards_per_node` and `cluster.max_shards_per_node.frozen`, and the second command will give you `number_of_data_nodes`, which you can insert into the same formula as above to find the total number of shards allowed in the cluster.

The second command also gives you the current number of `active_shards`, which you can compare against the total number of shards you’ve just computed. If the two numbers are too close, you might either need to raise the `cluster.max_shards_per_node` setting or delete some indices.
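
For example (with hypothetical numbers), if the first command reports `cluster.max_shards_per_node` as 1000 and the second reports 6 data nodes and 5400 active shards, the cluster can hold up to 6 * 1000 = 6000 shards and is already at 90% of that limit, so it would be time to either raise the limit or reduce the shard count.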

If you decide to increase the cluster-wide limit on the number of shards, you can use the following command:

PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": <desired_limit>
  }
}

Replace `<desired_limit>` with the maximum number of shards you want to allow per non-frozen data node; the cluster-wide limit will be this value multiplied by the number of non-frozen data nodes. This command will apply the limit persistently across the cluster.

If instead you prefer to specify a hard limit of shards that a node can host, you can use the following command:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.total_shards_per_node": <hard_limit>
  }
}

A sensible limit for this setting would be 1000, so that it is in line with the `cluster.max_shards_per_node` default value. After this command is run, each node will not be allowed to host more than `<hard_limit>` shards. Use this setting with extreme care, as it might lead to situations where some shards cannot be allocated to any node.
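
If, after setting such a limit, some shards end up unassigned, you can ask Elasticsearch why using the cluster allocation explain API (a sketch; called without a request body, it explains one unassigned shard, provided there is at least one):

GET _cluster/allocation/explain

The response will indicate whether the `total_shards_per_node` limit is what is preventing the shard from being allocated, in which case you can raise the limit or add more data nodes.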

Conclusion

In this guide, we discussed the issue of having an unlimited number of shards per node in Elasticsearch, its possible impact, and how to resolve it. By setting a limit on the number of shards per node, you can prevent your cluster from becoming unstable and ensure optimal performance and reliability.
