Optimizing Elasticsearch Logging for Better Troubleshooting and Performance

By Opster Team

Updated: Jul 23, 2023

Introduction

Elasticsearch is a widely used distributed search and analytics engine, and proper logging is essential for optimal performance and effective troubleshooting. In this article, we discuss best practices for Elasticsearch logging, including log levels, log formats, and log rotation strategies. If you want to learn how to activate and use Elasticsearch slow logs, check out this guide. You should also take a look at this guide, which contains a detailed explanation of how to ensure slow logs don't get cut off (applicable before ES 8.0).

1. Configuring Log Levels

Elasticsearch uses Log4j 2 for logging, which allows you to configure log levels for different components. The log levels, in increasing order of verbosity, are ERROR, WARN, INFO, DEBUG, and TRACE. By default, Elasticsearch is configured to log at the INFO level.

To change the log level, you can modify the `log4j2.properties` file located in the `config` directory of your Elasticsearch installation. For example, to set the log level for the `org.elasticsearch.transport` package to DEBUG, add the following lines (Log4j 2 requires both a logger name and a level):

logger.transport.name = org.elasticsearch.transport
logger.transport.level = debug

You can also change the log level dynamically using the Cluster Update Settings API. For example, to set the log level for the `org.elasticsearch.transport` package to DEBUG, execute the following command:

PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.transport": "debug"
  }
}
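
To revert to the default level later, set the same setting back to `null` using the same API:

PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.transport": null
  }
}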

2. Customizing Log Formats

Elasticsearch uses a default log format that includes the timestamp, log level, logger name, and log message. You can customize the log format by modifying the `log4j2.properties` file. For example, to include the thread name in the log format, update the `appender.console.layout.pattern` property as follows:

appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}]%marker [%t] %m%n
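
With this pattern, a log line would look roughly like the following (the timestamp, logger, thread name, and message are purely illustrative):

[2023-07-23T10:15:30,123][DEBUG][o.e.t.TcpTransport       ] [elasticsearch[node-1][transport_worker][T#1]] example transport message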

3. Implementing Log Rotation

Log rotation is essential for managing log files and preventing them from consuming too much disk space. Elasticsearch uses a built-in log rotation mechanism that creates a new log file every day or when the current log file reaches a certain size.

By default, Elasticsearch keeps up to 2GB of compressed log archives and deletes the oldest archives once that limit is exceeded. You can change this limit by modifying the `appender.rolling.strategy.action.condition.nested_condition.exceeds` property in the `log4j2.properties` file. For example, to keep up to 5GB of compressed log archives, update the property as follows:

appender.rolling.strategy.action.condition.nested_condition.exceeds = 5GB
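
For context, here is a sketch of the rollover and cleanup section of `log4j2.properties`, modeled on the default configuration shipped with recent Elasticsearch versions (exact property names can vary between versions, so check the file that ships with your installation):

appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB

If you prefer to retain logs by age rather than by total size, Log4j 2 also provides an `IfLastModified` condition with an `age` attribute that can be used as the nested condition instead of `IfAccumulatedFileSize`.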

4. Monitoring Elasticsearch Logs

Monitoring Elasticsearch logs is crucial for identifying issues and ensuring the health of your cluster. You can use the Elasticsearch Cat API to monitor various aspects of your cluster, such as node health, shard allocation, and index status. For example, to check the health of your cluster, execute the following command:

GET /_cat/health?v
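
Other Cat API endpoints are useful for the same purpose, for example:

GET /_cat/nodes?v
GET /_cat/indices?v
GET /_cat/shards?v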

Additionally, you can use the Elasticsearch Query DSL to search and analyze log data stored in Elasticsearch. For example, to search your log indices (here assumed to follow the `logs-*` naming pattern) for entries with a specific log level, execute the following command:

GET /logs-*/_search
{
  "query": {
    "match": {
      "log.level": "ERROR"
    }
  }
}
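
Going one step further, and assuming `log.level` is mapped as a keyword field (as in ECS-style mappings), a terms aggregation can summarize how many entries exist per level:

GET /logs-*/_search
{
  "size": 0,
  "aggs": {
    "levels": {
      "terms": {
        "field": "log.level"
      }
    }
  }
}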

5. Securing Elasticsearch Logs

Securing Elasticsearch logs is essential for protecting sensitive information and ensuring compliance with data protection regulations. You can use the Elasticsearch Security features to secure your logs, such as index-level access control, document-level security, and field-level security.

For example, to restrict access to a specific index, you can create a role with the necessary privileges and assign it to a user. To create a role with read-only access to the `logs-*` indices, execute the following command:

PUT /_security/role/log_reader
{
  "indices": [
    {
      "names": ["logs-*"],
      "privileges": ["read"]
    }
  ]
}
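
You can then assign the role to a user. The user name and password below are placeholders for illustration:

PUT /_security/user/log_viewer
{
  "password": "<a-strong-password>",
  "roles": ["log_reader"],
  "full_name": "Log Viewer"
}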

Conclusion

Optimizing Elasticsearch logging is crucial for ensuring optimal performance, effective troubleshooting, and maintaining the health of your cluster. By following the best practices outlined in this article, you can improve your Elasticsearch logging strategy and get the most out of your Elasticsearch deployment.
