Failed fetching cache size from datanode – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 7.11-8.9

Briefly, this error occurs when Elasticsearch is unable to retrieve the cache size from a data node, possibly due to network issues, node unavailability, or incorrect configuration. To resolve this, you can check the network connectivity between the nodes, ensure the data node is up and running, and verify the configuration settings. Additionally, check the Elasticsearch logs for more detailed error information. If the issue persists, consider restarting the data node or the entire Elasticsearch cluster.
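Before digging into cache-specific causes, it is worth confirming that every data node is actually reachable and reporting in. Below is a minimal diagnostic sketch in Python, assuming an unauthenticated cluster at http://localhost:9200 (adjust the URL and add credentials/TLS settings to match your deployment):

    import requests

    ES_URL = "http://localhost:9200"  # assumption: local, unauthenticated cluster

    # Cluster health: a missing data node or a yellow/red status often
    # accompanies "Failed fetching cache size from datanode".
    health = requests.get(f"{ES_URL}/_cluster/health", timeout=10).json()
    print(f"status={health['status']}, data nodes={health['number_of_data_nodes']}")

    # List the nodes so you can verify the affected data node is present
    # and responding.
    nodes = requests.get(f"{ES_URL}/_cat/nodes?v&h=name,node.role,ip", timeout=10)
    print(nodes.text)

If the affected node is absent from the list or the request times out, focus on connectivity and node availability; if it is present, that node's own logs should contain the underlying exception.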

This guide will help you check for common problems that cause the log “Failed fetching cache size from datanode” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: plugin, allocation, cache.

Log Context

The log “Failed fetching cache size from datanode” is emitted from SearchableSnapshotAllocator.java. When the allocator cannot fetch cache-files metadata from a data node, it logs this warning and falls back to treating that node’s cache size as 0. We extracted the following from the Elasticsearch source code for those seeking in-depth context:

                        for (Map.Entry<String, NodeCacheFilesMetadata> entry : nodesCacheFilesMetadata.getNodesMap().entrySet()) {
                            res.put(nodes.get(entry.getKey()), entry.getValue());
                        }
                        for (FailedNodeException entry : nodesCacheFilesMetadata.failures()) {
                            final DiscoveryNode dataNode = nodes.get(entry.nodeId());
                            logger.warn("Failed fetching cache size from datanode", entry);
                            res.put(dataNode, new NodeCacheFilesMetadata(dataNode, 0L));
                        }
                        asyncFetch.addData(res);
                    }
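Because this warning concerns the searchable snapshots shared cache, it can also help to look at the cache statistics each data node reports. Here is a minimal sketch under the same assumptions as above (the cache stats endpoint is available in recent 7.x and 8.x releases, and field names may vary slightly by version):

    import requests

    ES_URL = "http://localhost:9200"  # assumption: local, unauthenticated cluster

    # Per-node shared cache statistics for searchable snapshots. A node that
    # fails to answer here corresponds to the FailedNodeException branch in
    # SearchableSnapshotAllocator above, where its cache size is taken as 0.
    stats = requests.get(f"{ES_URL}/_searchable_snapshots/cache/stats", timeout=10).json()
    for node_id, node_stats in stats.get("nodes", {}).items():
        shared = node_stats.get("shared_cache", {})
        print(node_id, shared.get("size_in_bytes"), shared.get("num_regions"))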
