Closing job – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 6.8-7.14

Briefly, this log appears when Elasticsearch tries to close a machine learning job but cannot complete the operation, for example because the job is locked by another process or does not exist. To resolve the issue, unlock the job if it is locked, verify that the job exists before attempting to close it, or check for underlying problems in your Elasticsearch cluster that might be causing the error.
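If you need to close a job explicitly, you can call the close anomaly detection job API. The sketch below uses the Elasticsearch low-level Java REST client to check that a job exists and then close it, retrying with the force option if the regular close fails; the host, port, and job name my_job are placeholder assumptions, not values from this guide.

import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.ResponseException;
import org.elasticsearch.client.RestClient;

public class CloseMlJob {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port, and job id -- adjust for your cluster.
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Verify the job exists before attempting to close it.
            Response stats = client.performRequest(
                new Request("GET", "/_ml/anomaly_detectors/my_job/_stats"));
            System.out.println(stats.getStatusLine());

            try {
                // Regular close: waits for the job to finish flushing and persisting its state.
                client.performRequest(new Request("POST", "/_ml/anomaly_detectors/my_job/_close"));
            } catch (ResponseException e) {
                // If the job is stuck (for example, held by another process), force-close it.
                Request force = new Request("POST", "/_ml/anomaly_detectors/my_job/_close");
                force.addParameter("force", "true");
                client.performRequest(force);
            }
        }
    }
}

Note that a force close skips the usual graceful shutdown, so prefer a regular close whenever possible.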

This guide will help you check for common problems that cause the log "Closing job [{}]" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: plugin.

Log Context

The log "Closing job [{}]" is generated in the class AutodetectProcessManager.java. We extracted the following snippet from the Elasticsearch source code for those seeking in-depth context:

            // its context will already have been removed from this map
            jobKilled = (processByAllocation.containsKey(allocationId) == false);
            if (jobKilled) {
                logger.debug("[{}] Cleaning up job opened after kill"; jobId);
            } else if (reason == null) {
                logger.info("Closing job [{}]"; jobId);
            } else {
                logger.info("Closing job [{}]; because [{}]"; jobId; reason);
            }

            AutodetectCommunicator communicator = processContext.getAutodetectCommunicator();

 
