Parsing JSON Fields in Elasticsearch

By Opster Team

Updated: Jul 23, 2023 | 2 min read

Introduction

In this article, we will discuss how to parse JSON fields in Elasticsearch, which is a common requirement when dealing with log data or other structured data formats. We will cover the following topics:

  1. Ingesting JSON data into Elasticsearch
  2. Using an Ingest Pipeline to parse JSON fields
  3. Querying and aggregating JSON fields

1. Ingesting JSON data into Elasticsearch

When ingesting JSON data into Elasticsearch, it is essential to ensure that the data is properly formatted and structured. Elasticsearch can automatically detect and map JSON fields through dynamic mapping, but defining an explicit mapping is recommended because it gives you control over field types and how each field is indexed.

To create an index with a custom mapping, you can use the following API call:

PUT /my_index
{
  "mappings": {
    "properties": {
      "message": {
        "type": "text"
      },
      "json_field": {
        "type": "nested",
        "properties": {
          "field1": {
            "type": "keyword"
          },
          "field2": {
            "type": "integer"
          }
        }
      }
    }
  }
}

In this example, we create an index called `my_index` with a custom mapping for a JSON field named `json_field`. The `json_field` is of type `nested`, which indexes each object in the field as a separate hidden document, so the subfields (`field1` and `field2`) of one object can be queried together without false matches across objects. If `json_field` will only ever hold a single object, the simpler `object` type is sufficient.
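
To see why `nested` matters, consider a document where `json_field` holds an array of objects (the values here are purely illustrative):

POST /my_index/_doc
{
  "message": "example log line",
  "json_field": [
    { "field1": "value1", "field2": 42 },
    { "field1": "value2", "field2": 7 }
  ]
}

With the `nested` mapping, a query that requires `field1: "value2"` and `field2: 42` in the same object will not match this document. With a plain `object` mapping, the subfield values would be flattened into `["value1", "value2"]` and `[42, 7]`, and such a query would match incorrectly.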

2. Using an Ingest Pipeline to parse JSON fields

If your JSON data is stored as a string within a field, you can use an ingest pipeline to parse the string and extract the relevant fields. Ingest pipelines provide a set of built-in processors, including the `json` processor, which parses a JSON string into a structured object.

To create an ingest pipeline with the `json` processor, use the following API call:

PUT _ingest/pipeline/json_parser
{
  "description": "Parse JSON field",
  "processors": [
    {
      "json": {
        "field": "message",
        "target_field": "json_field"
      }
    }
  ]
}

In this example, we create an ingest pipeline called `json_parser` that parses the JSON string stored in the `message` field and stores the resulting JSON object in a new field called `json_field`.
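
Before wiring the pipeline into your indexing flow, you can verify its behavior with the simulate API, which runs sample documents through the processors without indexing anything:

POST _ingest/pipeline/json_parser/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "{\"field1\": \"value1\", \"field2\": 42}"
      }
    }
  ]
}

The response shows each document as it would look after processing, with `json_field` populated alongside the original `message`.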

To index a document using this pipeline, use the following API call:

POST /my_index/_doc?pipeline=json_parser
{
  "message": "{\"field1\": \"value1\", \"field2\": 42}"
}

Retrieving the indexed document shows both the original string and the parsed object (the `_type` field that older versions returned was removed in Elasticsearch 8.x):

{
  "_index": "my_index",
  "_id": "1",
  "_source": {
    "message": "{\"field1\": \"value1\", \"field2\": 42}",
    "json_field": {
      "field1": "value1",
      "field2": 42
    }
  }
}
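
If every document written to `my_index` should go through this pipeline, you can set it as the index's default pipeline instead of passing the `pipeline` parameter on each request:

PUT /my_index/_settings
{
  "index": {
    "default_pipeline": "json_parser"
  }
}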

3. Querying and aggregating JSON fields

Once the JSON fields are indexed, you can query and aggregate them using the Elasticsearch Query DSL. Because `json_field` is mapped as `nested`, queries against its subfields must be wrapped in a `nested` query that specifies the path. For example, to search for documents with a specific value in the `field1` subfield:

GET /my_index/_search
{
  "query": {
    "nested": {
      "path": "json_field",
      "query": {
        "term": {
          "json_field.field1": "value1"
        }
      }
    }
  }
}

To sum the values of the `field2` subfield, wrap the metric in a `nested` aggregation in the same way:

GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "field2_sum": {
      "nested": {
        "path": "json_field"
      },
      "aggs": {
        "sum": {
          "sum": {
            "field": "json_field.field2"
          }
        }
      }
    }
  }
}
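
The same pattern works for bucket aggregations. As a sketch, a `terms` aggregation on the `field1` keyword subfield, inside the same `nested` context, buckets the nested objects by each distinct value of `field1`:

GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "json_objects": {
      "nested": {
        "path": "json_field"
      },
      "aggs": {
        "field1_values": {
          "terms": {
            "field": "json_field.field1"
          }
        }
      }
    }
  }
}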

Conclusion

In conclusion, parsing JSON fields in Elasticsearch can be achieved using custom mappings, ingest pipelines with the `json` processor, and the Query DSL. By following these steps, you can efficiently index, query, and aggregate JSON data in your Elasticsearch cluster.
