
ElasticSearch Engine December 4, 2023

Elasticsearch Query: A Guide to Optimize Query Performance

Written by Mahipalsinh Rana


Elasticsearch is a popular open-source search and analytics engine built on Apache Lucene. It is a highly scalable, distributed, RESTful engine designed for horizontal scalability, reliability, and easy management, and it lets you store, search, and analyze large volumes of data quickly and in near real time.

How to Optimize Elasticsearch Query Performance

Approach 1: Scaling Shards for Enhanced Performance

Elasticsearch is a powerful and versatile search and analytics engine that excels in handling large volumes of data. One of the key aspects of optimizing Elasticsearch performance is the thoughtful configuration of shards, which are the fundamental units for data distribution and parallel processing. In this document, we explore the benefits and steps for increasing the number of shards in Elasticsearch to achieve better performance, particularly when dealing with substantial datasets.

The Importance of Shards:

Shards are the building blocks of Elasticsearch indices, and they play a pivotal role in achieving efficient data distribution and parallelism across a cluster. By increasing the number of shards in your Elasticsearch setup, you can harness several benefits:

  • Parallel Processing: More shards allow for parallel processing of data and queries, significantly improving search and indexing performance.
  • Optimal Resource Utilization: Distributed data across multiple shards ensures that system resources, such as CPU and memory, are efficiently utilized.
  • Scalability: As your data grows, adding more shards can provide scalability without overburdening existing shards.
  • High Availability: With multiple shards, you can distribute data redundantly across nodes, enhancing data availability and resilience.
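If you want to experiment with this yourself, the primary shard count is set when an index is created (it cannot be changed in place afterwards). The sketch below is only illustrative; the index name and shard/replica counts are assumptions, not the values from our test:

curl -X PUT "localhost:9200/my-index-3-shards" -H 'Content-Type: application/json' -d'
{
    "settings": {
        "number_of_shards": 3,
        "number_of_replicas": 1
    }
}'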


Real-World Impact:

To test this, we ran the same Elasticsearch query against two indices: one with a single shard and one with three shards. The difference in response time was dramatic.

1. Before Increasing Shards

Settings of index:


Search output:


Response Time: 1m 28s

2. After increasing the number of shards

Settings of index:


Search output:


Response Time: 4s
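Since the shard count of an existing index is fixed, one common way to move data into an index with more primary shards is the _reindex API. The index names below are placeholders, not the ones from our test:

# Copy documents from the single-shard index into the new 3-shard index
curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
    "source": { "index": "my-index" },
    "dest":   { "index": "my-index-3-shards" }
}'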

Approach 2: Updating max_open_scroll_context of the Elasticsearch Cluster

The recommendation here is to adjust the max_open_scroll_context setting of the Elasticsearch cluster. This setting controls the maximum number of open scroll contexts. Scroll contexts are essential for paging through a large number of search results, but each one consumes system resources, particularly memory.

The search.max_open_scroll_context setting in Elasticsearch controls the maximum number of scroll contexts that can be opened at the same time per node.


Some key things to know about this setting:
  • A scroll context is opened when a search query uses the scroll API to retrieve results in batches.
  • Each scroll context uses resources on the node holding it open – memory for the search context and the index segments it keeps alive, etc.
  • A value between 512 and 1024 is reasonable for most clusters.

By setting this limit, you can control resource usage and prevent the system from being overwhelmed with too many open scroll contexts. This is particularly important in scenarios where you have a large number of concurrent scrolling searches.

For example, if you set max_open_scroll_context to 500, Elasticsearch will allow up to 500 open scroll contexts per node. Once the limit is reached, you won’t be able to open additional scroll contexts until some of the existing ones are closed.
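For context, scroll contexts are created and released through the scroll API. A minimal sketch of that lifecycle is shown below; the index name, batch size, and keep-alive value are illustrative:

# Open a scroll context that stays alive for 1 minute and returns 1,000 hits per batch
curl -X POST "localhost:9200/my-index/_search?scroll=1m" -H 'Content-Type: application/json' -d'
{
    "size": 1000,
    "query": { "match_all": {} }
}'

# Fetch the next batch using the _scroll_id returned by the previous call
curl -X POST "localhost:9200/_search/scroll" -H 'Content-Type: application/json' -d'
{
    "scroll": "1m",
    "scroll_id": "<_scroll_id from the previous response>"
}'

# Release the scroll context as soon as you are done to free its resources
curl -X DELETE "localhost:9200/_search/scroll" -H 'Content-Type: application/json' -d'
{
    "scroll_id": "<_scroll_id from the previous response>"
}'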

To configure max_open_scroll_context:

curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
    "persistent": {
        "search.max_open_scroll_context": 1024
    },
    "transient": {
        "search.max_open_scroll_context": 1024
    }
}'
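You can confirm that the new value was applied by reading the cluster settings back (flat_settings just makes the output easier to scan):

curl -X GET "localhost:9200/_cluster/settings?flat_settings=true&pretty"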

We tried this locally; you can find the results below.

First, with the default search.max_open_scroll_context, this was the result for more than 1.3 million (13 lakh) documents:


Response time: Around 1900ms

After updating search.max_open_scroll_context to 500, this was the result:


Response time: Around 1370ms

After updating search.max_open_scroll_context to 1024, this was the result:


Response time: Around 1300ms

Approach 3: Async Search – A Cautionary Tale

Asynchronous search lets you submit search requests that run in the background. You can monitor the progress of these searches and get back partial results as they become available. After the search finishes, you can save the results to examine at a later time.

With this approach, we can use the async search API (_async_search) instead of the regular _search API, which can make results available sooner. However, async search doesn’t guarantee complete results every time; sometimes it returns no response at all, so it isn’t always an optimal approach.
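For reference, a typical async search round trip looks roughly like the sketch below; the index name and the wait_for_completion_timeout value are illustrative:

# Submit an async search; if it does not finish within 2 seconds,
# Elasticsearch returns an id plus whatever partial results are ready
curl -X POST "localhost:9200/my-index/_async_search?wait_for_completion_timeout=2s" -H 'Content-Type: application/json' -d'
{
    "query": { "match_all": {} }
}'

# Poll for the final results later using the id from the submit response
curl -X GET "localhost:9200/_async_search/<id from the submit response>"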

Async search has the following disadvantages:
  • Queue overflow – Async search queues up the search requests and runs them concurrently. If the queue fills up from too many requests, new searches may get rejected.
  • Timeout – There is a timeout for an async search after which partial/no results are returned. With a large result set, the query may not finish in time.
  • Memory constraints – Async search loads results into memory before returning them. With too many hits, it may exceed the available memory and fail.
  • Thread pool saturation – Async search uses concurrent threads to run searches. A high number of large searches can saturate the thread pool and limit capacity.

Suggestions:
Here are some additional suggestions to optimize Elasticsearch query performance:

  • Avoid using highlighting in queries, as it takes a long time for large documents.
    • The use of highlighting in Elasticsearch queries, especially with large documents or extensive result sets, can negatively impact performance. Highlighting involves additional processing to identify and mark up the parts of the document that match the query, and with substantial data this process can be resource-intensive and time-consuming.
  • Avoid wildcard queries in searches (see the sketch after this list).
    • Wildcard queries, especially when used without proper consideration, can lead to inefficient searches and increased resource consumption.
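To illustrate the wildcard point, here is a rough sketch of what to avoid and a cheaper alternative; the index and field names are placeholders, and the right substitute depends on your mapping:

# Avoid: a leading wildcard forces Elasticsearch to scan a huge number of terms
curl -X GET "localhost:9200/my-index/_search" -H 'Content-Type: application/json' -d'
{
    "query": { "wildcard": { "title": { "value": "*search*" } } }
}'

# Prefer: a match query on an analyzed field (or match_phrase_prefix for prefix-style lookups)
curl -X GET "localhost:9200/my-index/_search" -H 'Content-Type: application/json' -d'
{
    "query": { "match": { "title": "search" } }
}'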

