Briefly, this error occurs when Elasticsearch Machine Learning (ML) fails to retrieve an anomaly detection job's configuration during an ML memory update. Common causes include network connectivity problems between nodes, insufficient permissions, or the job no longer existing. To resolve it, verify that the job ID is correct and that the job exists, check network connectivity between the nodes, and confirm that the user has the permissions required to access the job. If the issue persists, consider restarting the Elasticsearch cluster or increasing the memory available to ML jobs.
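As a sketch of the first two checks, the following Dev Tools console requests can confirm that the job exists and that the current user holds ML privileges. The job ID `my_job` is a placeholder; both endpoints (`GET _ml/anomaly_detectors/<job_id>` and the has-privileges API) are part of the standard Elasticsearch REST API.

```
# Check that the job exists (replace "my_job" with your job ID);
# a 404 response means the job configuration is missing
GET _ml/anomaly_detectors/my_job

# Check whether the current user has the ML cluster privileges
GET _security/user/_has_privileges
{
  "cluster": ["monitor_ml", "manage_ml"]
}
```

If the first request returns 404, the job was deleted or the `.ml-config` index is unavailable; if `has_all_requested` is `false` in the second response, grant the user a role that includes ML privileges (e.g. `machine_learning_user`).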
This guide will help you check for common problems that cause the log "[" + jobId + "] failed to get anomaly detector job during ML memory update" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: plugin, memory.
Log Context
The log "[" + jobId + "] failed to get anomaly detector job during ML memory update" is emitted from the class MlMemoryTracker.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
```java
                // job assignment process, so that scenario should be very rare, i.e. somebody has closed
                // the .ml-config index (which would be unexpected and unsupported for an internal index)
                // during the memory refresh.
                logger.trace("[{}] anomaly detector job deleted during ML memory update", jobId);
            } else {
                logIfNecessary(() -> logger.error(() -> "[" + jobId + "] failed to get anomaly detector job during ML memory update", e));
            }
            memoryRequirementByAnomalyDetectorJob.remove(jobId);
            listener.onResponse(null);
        }));
```
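As the snippet shows, the error is logged only when fetching the job configuration fails for a reason other than the job having been deleted, after which the job's cached memory requirement is dropped. If the resolution steps point toward memory pressure, one option is to raise the fraction of machine memory that ML may use. This is a sketch using the real dynamic cluster setting `xpack.ml.max_machine_memory_percent`; the value 40 is an illustrative assumption, not a recommendation:

```
# Hypothetical value; the default is 30, adjust to your hardware
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.max_machine_memory_percent": 40
  }
}
```

Increase this cautiously, since memory granted to ML is no longer available to the JVM heap and other workloads on the node.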