[%s] [%s] failed to flush after writing old state – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 8.3-8.9

Briefly, this error occurs when Elasticsearch fails to flush a machine learning job's internal state after writing the old model state during a model snapshot upgrade, possibly due to insufficient disk space, I/O issues, or a faulty node. To resolve it, free up disk space, check for hardware issues, or restart the problematic node. If the issue persists, consider increasing the flush threshold or reconfiguring your cluster to distribute the load more evenly.
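Since low disk space is the most common cause, a quick first check is per-node disk usage via the `_cat/allocation` API. The commands below are a sketch; the host/port `localhost:9200` and the temporary watermark value are assumptions you should adapt to your cluster:

```shell
# Show free disk per node (shards, disk.percent, disk.avail, etc.)
curl -s "localhost:9200/_cat/allocation?v&h=node,disk.percent,disk.avail,disk.total"

# If a node is near the flood-stage watermark, its indices may be blocked.
# After freeing space, you can temporarily raise the watermark while you
# investigate (revert this once disk pressure is resolved):
curl -s -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.disk.watermark.flood_stage": "97%"}}'
```

Raising the watermark only buys time; the durable fix is adding capacity or deleting/relocating data.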

This guide will help you check for common problems that cause the log "[%s] [%s] failed to flush after writing old state" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: plugin, flush.
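For context on the flush concept: an index flush persists the transaction log to Lucene segments, while the flush in this particular log refers to flushing an anomaly detection job's internal process state. Both can be triggered manually; the host/port, index name, and job id below are placeholders:

```shell
# Generic index flush: commits in-memory operations and trims the translog
curl -s -X POST "localhost:9200/my-index/_flush"

# This log comes from the ML model-snapshot upgrader, so the relevant
# operation is flushing the anomaly detection job itself:
curl -s -X POST "localhost:9200/_ml/anomaly_detectors/my-job/_flush"
```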

Log Context

The log "[%s] [%s] failed to flush after writing old state" is generated in the class JobModelSnapshotUpgrader.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

                String flushId = process.flushJob(FlushJobParams.builder().waitForNormalization(false).build());
                return waitFlushToCompletion(flushId);
            }, (flushAcknowledgement, e) -> {
                Runnable nextStep;
                if (e != null) {
                    logger.error(() -> format("[%s] [%s] failed to flush after writing old state", jobId, snapshotId), e);
                    nextStep = () -> setTaskToFailed(
                        "Failed to flush after writing old state due to: " + e.getMessage(),
                        ActionListener.wrap(t -> shutdown(e), f -> shutdown(e))
                    );
                } else {
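As the snippet shows, a flush failure marks the snapshot-upgrade task as failed and shuts it down. Once the underlying cause (disk, I/O, node health) is fixed, the upgrade can simply be retried through the model snapshot upgrade API. The job id and snapshot id below are placeholders for your own values:

```shell
# List model snapshots for the job to find the one that failed to upgrade
curl -s "localhost:9200/_ml/anomaly_detectors/my-job/model_snapshots?pretty"

# Retry the upgrade for that snapshot
curl -s -X POST \
  "localhost:9200/_ml/anomaly_detectors/my-job/model_snapshots/my-snapshot-id/_upgrade"
```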

