Briefly, this error occurs when the maximum number of concurrently running machine learning jobs in Elasticsearch is reached. This limit exists to prevent the system from being overloaded. To resolve the issue, you can either increase the 'xpack.ml.max_open_jobs' setting in the Elasticsearch configuration file, or manage your jobs more carefully by closing or deleting jobs that are no longer needed. Also consider spreading jobs across multiple nodes if your cluster setup allows it.
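As a sketch of the first remedy, the limit can be raised per node in elasticsearch.yml on your machine learning nodes. The value 40 below is purely illustrative; the appropriate limit depends on the memory and CPU available on the node:

```yaml
# elasticsearch.yml -- per-node cap on concurrently open ML jobs
# (illustrative value; tune to the node's available resources)
xpack.ml.max_open_jobs: 40
```

For the second remedy, anomaly detection jobs that are no longer needed can be closed with the close job API (POST _ml/anomaly_detectors/&lt;job_id&gt;/_close), which frees their capacity slots without deleting their results.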
This guide will help you check for common problems that cause the log "max running job capacity [" + maxAllowedRunningJobs + "] reached" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: plugin.
Log Context
The log "max running job capacity [" + maxAllowedRunningJobs + "] reached" is emitted from the class AutodetectProcessManager.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:
    AutodetectCommunicator create(JobTask jobTask, Job job, AutodetectParams autodetectParams,
                                  BiConsumer handler) {
        // Closing jobs can still be using some or all threads in MachineLearning.AUTODETECT_THREAD_POOL_NAME
        // that an open job uses, so include them too when considering if enough threads are available.
        int currentRunningJobs = processByAllocation.size();
        if (currentRunningJobs > maxAllowedRunningJobs) {
            throw new ElasticsearchStatusException(
                "max running job capacity [" + maxAllowedRunningJobs + "] reached",
                RestStatus.TOO_MANY_REQUESTS);
        }
        String jobId = jobTask.getJobId();
        notifyLoadingSnapshot(jobId, autodetectParams);