Briefly, this error occurs when Elasticsearch is unable to read metadata from the local store during a restore operation, usually because the metadata files are corrupted or inaccessible. To resolve this issue, you can try the following:

1. Check the file permissions and ensure Elasticsearch has the necessary access.
2. Verify the integrity of the metadata files. If they are corrupted, you may need to restore them from a backup.
3. If the error persists, consider reindexing your data from the source, as this will create a new set of metadata files.
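If you choose to reindex from the source, the `_reindex` API can copy documents from another cluster. This is a minimal sketch; the host and index names below are placeholders, and reindex-from-remote additionally requires the source host to be listed in `reindex.remote.whitelist` in elasticsearch.yml:

```
POST _reindex
{
  "source": {
    "remote": { "host": "http://source-cluster:9200" },
    "index": "my-index"
  },
  "dest": { "index": "my-index" }
}
```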
This guide will help you check for common problems that cause the log "{} Can't read metadata from store; will not reuse any local file while restoring" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: blobstore, repositories and repository-azure.
Overview
An Elasticsearch snapshot provides a backup mechanism that takes the current state and data in the cluster and saves it to a repository (read snapshot for more information). The backup process requires a repository to be created first. The repository needs to be registered using the _snapshot endpoint, and multiple repositories can be created per cluster. The following repository types are supported:
Repository types
Repository type | Configuration type
---|---
Shared file system | `"fs"`
S3 | `"s3"`
HDFS | `"hdfs"`
Azure | `"azure"`
Google Cloud Storage | `"gcs"`
Examples
To register an “fs” repository:
```
PUT _snapshot/my_repo_01
{
  "type": "fs",
  "settings": {
    "location": "/mnt/my_repo_dir"
  }
}
```
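After registering a repository, you can confirm that all nodes can read and write to it with the `_verify` endpoint, shown here against the `my_repo_01` repository from the example above:

```
POST _snapshot/my_repo_01/_verify
```

If any node cannot access the repository location, the verification response will report it, which helps catch permission and path problems before a snapshot or restore fails.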
Notes and good things to know
- S3, HDFS, Azure and Google Cloud Storage repositories require the relevant repository plugin to be installed before they can be used for a snapshot.
- If you plan to use a shared file system ("fs") repository, the setting path.repo: /mnt/my_repo_dir must be added to elasticsearch.yml on all nodes; otherwise, registering the repository will fail.
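The setting above is a one-line addition to each node's configuration file; the path shown is the example location used earlier and should match your own mount point:

```
# elasticsearch.yml -- must be present on every node, followed by a restart
path.repo: /mnt/my_repo_dir
```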
- When using remote repositories, the network bandwidth and repository storage throughput should be high enough for snapshot operations to complete in a reasonable time; otherwise you will end up with partial snapshots.
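To spot partial snapshots, list the snapshots in a repository and inspect the `state` field of each entry; a sketch using the `my_repo_01` repository from the example above (snapshots that did not capture every shard report a state of `PARTIAL` rather than `SUCCESS`):

```
GET _snapshot/my_repo_01/_all
```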
Log Context
The log "{} Can't read metadata from store; will not reuse any local file while restoring" is emitted from the class BlobStoreRepository.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
```java
} catch (IndexNotFoundException e) {
    // happens when restoring to an empty shard; not a big deal
    logger.trace("[{}] [{}] restoring from to an empty shard", shardId, snapshotId);
    recoveryTargetMetadata = Store.MetadataSnapshot.EMPTY;
} catch (IOException e) {
    logger.warn((Supplier<?>) () -> new ParameterizedMessage(
        "{} Can't read metadata from store; will not reuse any local file while restoring", shardId), e);
    recoveryTargetMetadata = Store.MetadataSnapshot.EMPTY;
}
final List<StoreFileMetaData> filesToRecover = new ArrayList<>();
final Map<String, StoreFileMetaData> snapshotMetaData = new HashMap<>();
```