Understanding and Managing the Elasticsearch Shards Limit

By Opster Team

Updated: Nov 7, 2023 | 2 min read

The Concept of Sharding in Elasticsearch

Elasticsearch utilizes a concept known as sharding to distribute data across multiple nodes. Sharding is a fundamental aspect of Elasticsearch’s architecture that allows it to handle large volumes of data efficiently. However, it’s crucial to understand the limitations and best practices associated with Elasticsearch shards to ensure optimal performance and avoid common pitfalls.

In Elasticsearch, an index can hold more data than the hardware of a single node can handle. To overcome this limitation, Elasticsearch lets you subdivide an index into multiple pieces called shards. When you create an index, you can define the number of primary shards you want. Each shard is in itself a fully functional and independent Lucene index that can be hosted on any node in the cluster.
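For example, here is a minimal sketch of creating an index with three primary shards and one replica of each (the index name is just a placeholder); note that the number of primary shards is fixed once the index is created:

```
PUT /my-index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
```

This request results in six shards in total: three primaries plus one replica of each.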

The Limitations of Elasticsearch Shards

While sharding is a powerful mechanism that allows Elasticsearch to handle large data volumes and perform parallel operations, it's not without its limitations. One of the most significant is the cluster-wide limit on the total number of shards, which scales with the number of data nodes.

Elasticsearch imposes a soft limit on the total number of shards in a cluster to prevent performance degradation. By default, this limit is 1,000 shards per non-frozen data node and 3,000 shards per frozen data node, so the cluster-wide cap is the per-node value multiplied by the number of matching data nodes. These defaults can be changed using the `cluster.max_shards_per_node` and `cluster.max_shards_per_node.frozen` settings. If an operation, such as creating an index, would push the cluster past this limit, it fails with an error.
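To see the values currently in effect on your cluster, you can query the cluster settings including defaults; a minimal sketch (the `filter_path` pattern simply trims the response):

```
GET _cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node*
```

If the setting has never been changed, it will appear under the `defaults` section of the response.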

It's important to note that this limit counts both primary and replica shards of open indices, including unassigned ones. Shards of closed indices do not count toward the limit. For example, if you have an open index with five primary shards and one replica of each (10 shards in total) and a closed index with two primary shards and one replica of each (4 shards in total), only the 10 shards of the open index count toward the limit.
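For a quick tally, the cluster health API reports active and unassigned shard counts across the cluster; a sketch (the figure the limit check actually uses is the shards of open indices, assigned or not):

```
GET _cluster/health?filter_path=active_shards,unassigned_shards
```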

Managing Elasticsearch Shards Limit

Understanding and managing the shards limit is crucial for maintaining the performance and stability of your Elasticsearch cluster. Here are some strategies to effectively manage the shards limit:

1. Plan Your Sharding Strategy: Before creating an index, consider the amount of data that the index will store and the query load it will need to handle. Use this information to determine the appropriate number of shards. Remember, having too many shards can be as problematic as having too few.

2. Monitor Shard Count: Regularly monitor the total shard count in your cluster to ensure it stays within the limit. You can use the `_cat/shards` API to list every shard in your cluster and count them (see the first sketch after this list).

3. Use the Shrink API: If an existing index has more primary shards than it needs, you can use the Shrink API to reduce them. Note that the target shard count must be a factor of the source count, and the index must first be made read-only (a sketch follows the list).

4. Delete Unused Indices: If your cluster is nearing the shards limit, consider deleting unused or old indices. This will not only free up shards but also help improve the overall performance of your cluster (a delete request appears in the reindex sketch after this list).

5. Consolidate Small Indices: If you have many small time-based indices (e.g., daily), you can consolidate them into larger ones (e.g., weekly or monthly) to reduce the number of shards (a reindex-and-delete sketch follows the list). You may also have to revise your index lifecycle policies to roll over indices less frequently, so that they stop producing many small indices.

6. Add Data Nodes: If your indices and lifecycle policies are already optimal and you cannot delete any data, you might need to add data nodes. Because the limit is computed per data node, each node you add raises the cluster-wide cap.

7. Adjust the Shards Limit: If you have exhausted all of the above options, you can raise the limit using the `cluster.max_shards_per_node` setting (see the last sketch below). However, increase it with caution, as an excessive number of shards per node can lead to performance issues.
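For step 2, a sketch of listing shards with the `_cat/shards` API; every returned row is one shard copy, so the row count is the cluster-wide total:

```
GET _cat/shards?v=true&h=index,shard,prirep,state,node
```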
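For step 3, a shrink requires the index to be read-only and a copy of every shard to be colocated on one node before the shrink itself; a sketch, with `my-index` and `data-node-1` as placeholder names:

```
# Prepare: move one copy of every shard to a single node and block writes
PUT /my-index/_settings
{
  "index.routing.allocation.require._name": "data-node-1",
  "index.blocks.write": true
}

# Shrink into a new index with a single primary shard,
# clearing the temporary settings on the target
POST /my-index/_shrink/my-index-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
```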
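For steps 4 and 5, the Reindex API can merge many small indices into a single larger one, after which the small sources can be deleted; the index patterns below are placeholders:

```
# Copy all daily indices for the month into one monthly index
POST _reindex
{
  "source": { "index": "logs-2023-11-*" },
  "dest":   { "index": "logs-2023-11" }
}

# After verifying the reindexed data, delete the daily indices
# (wildcard deletes may be blocked unless action.destructive_requires_name is false)
DELETE /logs-2023-11-*
```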
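Finally, for step 7, the limit can be raised with a persistent cluster setting; the value below is only an example and should be chosen based on what your nodes can actually sustain:

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
```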

Conclusion

In conclusion, while Elasticsearch's sharding mechanism provides a powerful way to handle large data volumes, it's crucial to understand and manage the shards limit to ensure the performance and stability of your cluster. By planning your sharding strategy, monitoring shard counts, shrinking oversharded indices, deleting or consolidating small indices, adding data nodes, and raising the limit only as a last resort, you can keep your Elasticsearch cluster comfortably within its shards limit.
