Briefly, this error occurs when Elasticsearch tries to lock its memory but fails because the operating system's resource limits are too low. Elasticsearch uses memory locking (mlockall) to prevent its memory from being swapped out, since swapping can severely degrade performance. To resolve this issue, raise the RLIMIT_MEMLOCK resource limit for the Elasticsearch user by editing the limits.conf file (usually located in /etc/security/) and adding or modifying the soft and hard memlock entries. Alternatively, you can disable memory locking in the Elasticsearch configuration if it is not critical for your use case.
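As a minimal sketch, assuming Elasticsearch runs as a user named `elasticsearch` (adjust the username to match your installation), the limits.conf entries would look like this:

```
# /etc/security/limits.conf
# Allow the elasticsearch user to lock an unlimited amount of memory
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
```

You can verify the effective limit by running `ulimit -l` as the Elasticsearch user after logging in again. If memory locking is not required in your environment, the other option mentioned above is to set `bootstrap.memory_lock: false` in elasticsearch.yml so Elasticsearch does not attempt to lock memory at all. Note that on systemd-managed installations the limit is typically controlled by the service unit (e.g. a `LimitMEMLOCK` setting) rather than limits.conf.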
This guide will help you check for common problems that cause the log “Increase RLIMIT_MEMLOCK; soft limit: {}; hard limit: {}” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: bootstrap.
Log Context
The log “Increase RLIMIT_MEMLOCK; soft limit: {}; hard limit: {}” is emitted from the class JNANatives.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
// mlockall failed for some reason
logger.warn("Unable to lock JVM Memory: error={}; reason={}", errno, errMsg);
logger.warn("This can result in part of the JVM being swapped out.");
if (errno == JNACLibrary.ENOMEM) {
    if (rlimitSuccess) {
        logger.warn("Increase RLIMIT_MEMLOCK; soft limit: {}; hard limit: {}",
            rlimitToString(softLimit), rlimitToString(hardLimit));
        if (Constants.LINUX) {
            // give specific instructions for the linux case to make it easy
            String user = System.getProperty("user.name");
            logger.warn("These can be adjusted by modifying /etc/security/limits.conf; for example: \n" +