Elasticsearch: Too Much Memory Allocated to Frozen Nodes

By Opster Team | Updated: Mar 10, 2024 | 2 min read

What does this mean? 

If the current memory allocation for the frozen nodes in your Elasticsearch cluster is higher than necessary, the memory-to-disk ratio can be optimized to improve performance and reduce costs. 

Frozen nodes hold searchable snapshots of indices; these indices are read-only and require less memory than the actively used indices on the hot and warm tiers. By reducing the memory allocated to these nodes, you can optimize the memory-to-disk ratio, improving performance and lowering costs.
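
A quick way to see how memory and disk are currently distributed across your nodes is the cat nodes API; dedicated frozen nodes show f in the node.role column:

GET _cat/nodes?v&h=name,node.role,heap.max,ram.max,disk.total&s=node.role

Comparing heap.max and ram.max against disk.total for the frozen nodes shows the memory-to-disk ratio discussed in this guide.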

Why does this occur?

This event may have occurred due to the following reasons:

  1. Over-allocation of memory to frozen nodes during initial setup or scaling.
  2. Inefficient memory-to-disk ratio, leading to increased memory usage and costs.
  3. Changes in the data stored in the cluster, resulting in a need to reevaluate memory allocation (a quick way to check current disk usage per node is shown after this list).
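
If the third point applies, the cat allocation API gives a simple per-node view of how much disk is actually in use, which helps when reevaluating memory allocation:

GET _cat/allocation?v&s=node

The disk.indices column shows how much of each node's disk is occupied by shard data, alongside the disk.used and disk.total columns.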

Possible impact and consequences of high memory allocation on the frozen tier

The possible impact of this event includes:

  1. Increased costs due to unnecessary memory allocation.
  2. Reduced performance efficiency, as resources are not optimally utilized.
  3. Hindered scaling efficiency, as over-allocated memory can limit the ability to scale the cluster.
  4. Increased I/O operations, leading to slower query response times.
  5. Inefficient resource consolidation, resulting in wasted hardware resources.

How to resolve

To resolve the issue of excessive memory allocated to frozen nodes, follow these recommendations:

1. Improve your memory-to-disk ratio by moving the frozen nodes to instance types with less memory, or by reducing the memory allocated to the existing nodes. On an Elasticsearch node the main consumer of memory is the JVM heap, which you can lower by editing the jvm.options file (or a custom file under config/jvm.options.d/) and performing a rolling restart. For example, to limit the heap to 4 GB:

-Xms4g
-Xmx4g
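
If you also want to control how much local disk the frozen nodes dedicate to caching searchable-snapshot data (this cache lives on disk, not in memory), that is governed by the xpack.searchable.snapshot.shared_cache.size setting in elasticsearch.yml; for example, to cap it at 90% of the disk, which is the usual default on dedicated frozen nodes:

xpack.searchable.snapshot.shared_cache.size: 90%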

2. Reduce the number of frozen data nodes and increase the disk allocated to the remaining nodes accordingly: by consolidating onto fewer frozen data nodes, you can give each remaining node more disk space while lowering the cluster's total memory footprint. To do this, update the cluster settings to drain data from the node you want to remove onto the other frozen nodes, so that it can be deprovisioned:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "IP_ADDRESS_OF_NODE_TO_REMOVE"
  }
}

Replace “IP_ADDRESS_OF_NODE_TO_REMOVE” with the IP address of the frozen data node you want to remove from the cluster.
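
Before deprovisioning the node, verify that all of its shards have relocated to the remaining frozen nodes:

GET _cat/shards?v&h=index,shard,prirep,state,node

Once no shards are listed against the excluded node, clear the exclusion by nulling it out, so it does not affect a future node that reuses the same IP:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": null
  }
}

Note that recent Elasticsearch versions recommend persistent cluster settings over transient ones, since transient settings are deprecated.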

Conclusion

By addressing the issue of excessive memory allocated to frozen nodes in Elasticsearch, you can optimize your cluster’s performance, reduce costs, and improve resource utilization. Following the recommendations provided in this guide will help you achieve a more efficient memory-to-disk ratio and ensure the smooth operation of your Elasticsearch deployment.
