Elasticsearch Hardware Requirements: A High-Level Overview

By Opster Team

Updated: Nov 2, 2023 | 2 min read

Introduction

When deploying an Elasticsearch cluster, it is crucial to consider the hardware requirements to ensure optimal performance, stability, and scalability. This article provides a high-level guide to the hardware requirements for Elasticsearch, covering CPU, memory, storage, and network considerations. If you want to learn more about Elasticsearch memory usage, check out this guide.

Hardware Requirements

1. CPU Requirements

Elasticsearch is a CPU-intensive application, as it performs various tasks such as indexing, searching, and aggregating data. Therefore, it is essential to have a sufficient number of CPU cores to handle these tasks efficiently. The number of CPU cores required depends on the workload and the specific use case.

For example, if your Elasticsearch cluster is primarily used for indexing, you may need more CPU cores to handle the indexing workload. On the other hand, if your cluster is mainly used for searching and aggregating data, you may need fewer CPU cores but with higher clock speeds to ensure faster query response times.

As a general rule of thumb, it is recommended to have at least one CPU core per Elasticsearch node. However, for production environments, it is advisable to have at least two CPU cores per node to handle the workload efficiently.
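
Once the cluster is running, you can sanity-check whether nodes are CPU-bound with the cat nodes API. A minimal example (the column selection here is just one reasonable choice):

GET _cat/nodes?v=true&h=name,node.role,cpu,load_1m,load_5m

Sustained CPU usage near 100%, or a load average well above the number of cores, is a sign that the nodes need more cores for their workload.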

2. Memory Requirements

Memory is a critical component of Elasticsearch, as it directly impacts the performance of the cluster. Elasticsearch relies heavily on the Java heap for storing and managing data structures, caches, and buffers, while the remaining RAM serves the operating system's file system cache, which Lucene depends on for fast reads. In general, the more memory available, the better the performance of the cluster.

The recommended heap size for Elasticsearch is 50% of the available RAM, with a maximum of ~30GB. Allocating more than ~30GB of heap memory can lead to performance degradation due to Java’s garbage collection behavior and also compressed ordinary object pointers in the JVM.
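
For example, on a node with 32GB of RAM, you could pin the heap to 16GB by placing a file in the config/jvm.options.d/ directory (the file name heap.options and the 16GB figure are illustrative assumptions for a 32GB machine):

# config/jvm.options.d/heap.options
# Set min and max heap to the same value so the heap is never resized at runtime
-Xms16g
-Xmx16g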

You can check whether your nodes are using compressed ordinary object pointers by running the following command. If they are not (i.e. the returned value is false), decrease the heap size until you get below the threshold, which usually lies between 26GB and 30GB depending on the system.

GET _nodes/_all/jvm?filter_path=**.using_compressed_ordinary_object_pointers

That information can also be found in the logs when the node starts up; the log line looks like this:

heap size [16gb], compressed ordinary object pointers [true]

In addition to heap memory, Elasticsearch also uses off-heap memory for various purposes, such as file system caching and thread stacks. Therefore, it is essential to have enough RAM to accommodate both heap and off-heap memory requirements.

As a general guideline, it is recommended to have at least 8GB of RAM for small Elasticsearch clusters. For larger clusters or production environments, it is advisable to have 16GB or more RAM per node.
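
To see how heap and overall RAM usage compare in practice, the cat nodes API exposes both (again, the column list is just a suggestion):

GET _cat/nodes?v=true&h=name,heap.percent,heap.max,ram.percent,ram.max

If ram.percent is consistently high while heap.percent stays moderate, the file system cache is doing its job; if heap.percent regularly climbs above ~75%, the node is likely under memory pressure.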

3. Storage Requirements

Elasticsearch stores data on disk in the form of indices, each composed of one or more shards. The storage requirements for Elasticsearch depend on the size of the data, the number of replicas, and the desired retention period.

There are two primary types of storage devices used in Elasticsearch deployments: Hard Disk Drives (HDDs) and Solid State Drives (SSDs). HDDs are generally slower but offer higher storage capacity at a lower cost. SSDs, on the other hand, provide faster read and write speeds, which can significantly improve the performance of the cluster.

For most use cases, it is recommended to use SSDs for Elasticsearch storage, as they offer better performance and lower latency compared to HDDs. However, if storage capacity is a higher priority than performance, HDDs can be used as a cost-effective alternative.

When determining the storage capacity required for your Elasticsearch cluster, consider the following factors:

  • Data size: The total size of the data you plan to store in the cluster and the frequency at which new data flows into the cluster.
  • Replicas: The number of replica shards for each primary shard, which affects the total storage capacity required.
  • Retention period: The duration for which you plan to retain the data in the cluster.
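
As a rough worked example (every figure below is an assumption, not a recommendation), suppose you ingest 100GB of raw data per day, keep one replica, and retain data for 30 days:

100GB/day x 30 days = 3TB of primary data
3TB x 2 (1 primary + 1 replica) = 6TB
6TB x ~1.15 (assumed indexing overhead) ≈ 6.9TB
≈ 8TB total, after leaving ~15% of disk free for Elasticsearch's disk watermarks

Once the cluster is live, you can compare such estimates against actual per-node disk usage with:

GET _cat/allocation?v=true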

4. Network Requirements

Elasticsearch relies on a fast and stable network for communication between nodes and clients. A high-speed network is essential for ensuring low-latency search and indexing operations, as well as maintaining the overall health and stability of the cluster.

It is recommended to use a dedicated network for Elasticsearch traffic, with a minimum bandwidth of 1Gbps. For larger clusters or high-throughput use cases, a 10Gbps network may be required to handle the increased traffic.
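
To gauge how much traffic actually flows between nodes, the node stats API reports cumulative transport-layer counters (the filter_path here simply trims the response down to the transport section):

GET _nodes/stats/transport?filter_path=nodes.*.transport

Sampling rx_size_in_bytes and tx_size_in_bytes over time gives a rough picture of inter-node bandwidth consumption.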

Conclusion

When planning the hardware requirements for your Elasticsearch deployment, consider the CPU, memory, storage, and network requirements based on your specific use case and workload. By carefully selecting the appropriate hardware, you can ensure optimal performance, stability, and scalability for your Elasticsearch cluster.
