%s Calculated potential scaled down capacity [%s] – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 8.5-8.9

Briefly, this message is logged when the machine learning (ML) memory autoscaling decider calculates that the ML tier could run on less capacity than it currently uses, typically because the memory required by open ML jobs has decreased. It is an informational log rather than an error. If the cluster is not scaling down as expected, review which ML workloads (anomaly detection jobs, data frame analytics jobs, and trained model deployments) are still open, close or consolidate the ones you no longer need, and check ML-related limits such as the maximum number of open jobs per node, which the decider takes into account before allowing a scale-down.
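
Before changing any settings, it can help to look at what autoscaling and the ML memory accounting currently report. The sketch below is a minimal diagnostic example, not part of Elasticsearch itself: it assumes an unsecured cluster reachable at http://localhost:9200 (add authentication and TLS to match your deployment) and simply prints the responses of the autoscaling capacity API and the ML memory stats API.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MlAutoscalingCheck {

    // Assumed endpoint: an unsecured local cluster. Adjust host, port and security for your deployment.
    private static final String ES = "http://localhost:9200";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Current autoscaling decision for configured policies (includes each decider's required capacity).
        printJson(client, ES + "/_autoscaling/capacity");

        // Per-node ML memory usage, useful for judging whether a scale-down is plausible.
        printJson(client, ES + "/_ml/memory/_stats");
    }

    private static void printJson(HttpClient client, String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("GET " + url + " -> HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}

Comparing the required capacity reported by the autoscaling API with the hardware currently provisioned usually makes it clear whether a scale-down is genuinely possible.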

This guide will help you check for common problems that cause the log “%s Calculated potential scaled down capacity [%s]” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: plugin.

Log Context

The log “%s Calculated potential scaled down capacity [%s]” is emitted from MlMemoryAutoscalingDecider.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

                        totalAssignedJobs,
                        MAX_OPEN_JOBS_PER_NODE.getKey(),
                        maxOpenJobsCopy,
                        MAX_OPEN_JOBS_PER_NODE.getKey()
                    );
                    logger.info(() -> format("%s Calculated potential scaled down capacity [%s]", msg, scaleDownDecisionResult));
                    return MlMemoryAutoscalingCapacity.from(context.currentCapacity()).setReason(msg).build();
                }
            }

            long msLeftToScale = scaleTimer.markDownScaleAndGetMillisLeftFromDelay(configuration);
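
The surrounding code suggests that when the total number of assigned jobs exceeds the per-node job limit (MAX_OPEN_JOBS_PER_NODE), the decider only logs the potential scaled-down capacity and keeps the current capacity, attaching the reason to the returned result. If you suspect this limit is involved, start by checking its effective value. The sketch below is a hypothetical check, assuming MAX_OPEN_JOBS_PER_NODE corresponds to the xpack.ml.max_open_jobs cluster setting and that the cluster is reachable without authentication at http://localhost:9200.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MaxOpenJobsCheck {

    public static void main(String[] args) throws Exception {
        // Query the cluster settings API for the effective value of the
        // (assumed) xpack.ml.max_open_jobs setting, including built-in defaults.
        String url = "http://localhost:9200/_cluster/settings"
            + "?include_defaults=true"
            + "&filter_path=persistent.xpack.ml.max_open_jobs"
            + ",transient.xpack.ml.max_open_jobs"
            + ",defaults.xpack.ml.max_open_jobs";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Prints whichever sections contain the setting; the "defaults" section
        // appears when the value has not been overridden.
        System.out.println(response.body());
    }
}

In recent versions this setting is dynamic, so if a change is warranted it can be applied through the cluster settings API rather than by restarting nodes.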

 
