Flood stage disk watermark exceeded on – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 7.13-7.15

Briefly, this error occurs when disk usage exceeds the “flood stage” watermark level, which is 95% by default in Elasticsearch. This is a safety feature to prevent nodes from running out of disk space. When the threshold is exceeded, Elasticsearch applies a write block (index.blocks.read_only_allow_delete) to every index that has a shard on the affected node. To resolve the issue, you can increase your disk space, delete unnecessary data, or adjust the “flood stage” watermark level. However, raising the watermark should be done cautiously, since it only postpones the problem and the node can still run out of disk space.
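
If the block is already in place, the usual sequence is to free disk space (or add capacity) first, and only then, if needed, temporarily raise the watermark and clear the write block. Below is a minimal sketch using Python’s requests library against the cluster’s REST API; the http://localhost:9200 address, the 97% value and the my-index index name are placeholders for illustration, not values taken from this guide:

import requests

ES = "http://localhost:9200"  # placeholder node address

# Check current disk usage per node to confirm which node tripped the watermark.
print(requests.get(f"{ES}/_cat/allocation?v").text)

# Temporarily raise the flood stage watermark (use with caution: this only
# postpones the problem and the node can still run out of disk space).
requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {"cluster.routing.allocation.disk.watermark.flood_stage": "97%"}},
).raise_for_status()

# After freeing disk space, remove the write block that the flood stage
# watermark placed on the affected indices ("my-index" is a placeholder name).
requests.put(
    f"{ES}/my-index/_settings",
    json={"index.blocks.read_only_allow_delete": None},  # null resets the setting
).raise_for_status()

From version 7.4 onward, Elasticsearch releases the read_only_allow_delete block automatically once disk usage falls back below the high watermark, so the last call is mainly useful when writes need to be restored immediately.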

This guide will help you check for common problems that cause the log “flood stage disk watermark [{}] exceeded on {}” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: cluster, routing, allocation.

Log Context

The log “flood stage disk watermark [{}] exceeded on {}” is emitted from DiskThresholdMonitor.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

            if (isDedicatedFrozenNode(routingNode)) {
                ByteSizeValue total = ByteSizeValue.ofBytes(usage.getTotalBytes());
                long frozenFloodStageThreshold = diskThresholdSettings.getFreeBytesThresholdFrozenFloodStage(total).getBytes();
                // warn when free space on a dedicated frozen node drops below the frozen flood stage threshold
                if (usage.getFreeBytes() < frozenFloodStageThreshold) {
                    logger.warn("flood stage disk watermark [{}] exceeded on {}", diskThresholdSettings.describeFrozenFloodStageThreshold(total), usage);
                }
            }

 
