Elasticsearch: Find substring match

To search for both partial and exact field matches, it works better to map the fields as keyword (or "not analyzed" in older versions) rather than text, and then use a wildcard query.
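For example, a mapping like the following keeps the whole field value as a single token (a sketch assuming Elasticsearch 7+ without mapping types; the index and field names are placeholders matching the query below):

```json
PUT /my_index
{
  "mappings": {
    "properties": {
      "name": { "type": "keyword" }
    }
  }
}
```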


To use a wildcard query, append * on both ends of the string you are searching for:

POST /my_index/my_type/_search
{
  "query": {
    "wildcard": {
      "name": {
        "value": "*en's*"
      }
    }
  }
}

For case-insensitive matching, use a custom analyzer that combines the keyword tokenizer with a lowercase filter.

Custom Analyzer:

"custom_analyzer": {
    "tokenizer": "keyword",
    "filter": ["lowercase"]
}
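Put together, the index settings and mapping might look like this (a sketch; my_index and name are placeholder names, and the analyzer must be defined under settings.analysis before it can be referenced in the mapping):

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_analyzer": {
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "custom_analyzer"
      }
    }
  }
}
```

With the keyword tokenizer the whole value is indexed as one lowercased token, so the wildcard query above will match regardless of the casing in the source document.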

Then lowercase the search string yourself before querying: if the user enters AsD, change it to *asd*.
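That transformation is trivial to do in application code. A minimal sketch in Python (to_wildcard is a hypothetical helper name, not an Elasticsearch API):

```python
def to_wildcard(term: str) -> str:
    # Lowercase the user input to match the lowercase filter in the
    # custom analyzer, then wrap it in wildcards for a substring match.
    return f"*{term.lower()}*"

print(to_wildcard("AsD"))  # *asd*
```

The resulting string goes into the "value" field of the wildcard query shown above.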


The answer given by @BlackPOP will work, but it uses the wildcard approach, which is not preferred: it has performance issues and, if abused, can create a huge domino effect (performance degradation) across the Elasticsearch cluster.

I have written a detailed blog post on partial search/autocomplete covering the options available in Elasticsearch as of today (Dec 2020), with performance in mind. For more trade-off information, please refer to this answer.

IMHO a better approach is to use an n-gram tokenizer customized to your use case. The index then already contains the tokens needed for the search term, so queries are faster. The trade-off is a bigger index, but disk space is not that costly, speed will be better, and you get more control over exactly how substring search behaves.

Index size can also be kept in check by being conservative when defining min_gram and max_gram in the tokenizer settings.
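As a sketch, an n-gram setup could look like the following (all names are placeholders; note that recent Elasticsearch versions limit max_gram - min_gram to 1 by default via index.max_ngram_diff, which this example respects):

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_ngram_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 4,
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "my_ngram_analyzer": {
          "tokenizer": "my_ngram_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "my_ngram_analyzer",
        "search_analyzer": "standard"
      }
    }
  }
}
```

Using a plain analyzer at search time (search_analyzer here) avoids n-gramming the query itself, so a simple match query on name then behaves like a substring search.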