How to create a simple fuzzy search with PostgreSQL only?

Paul told you about levenshtein(). That's a very useful tool, but it's also very slow with big tables. It has to calculate the Levenshtein distance from the search term for every single row. That's expensive and cannot use an index. The "accelerated" variant levenshtein_less_equal() is faster for long strings, but still slow without index support.
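For illustration, a minimal sketch of the capped variant (assuming the fuzzystrmatch extension is installed and a table some_table with a text column code):

```sql
-- levenshtein_less_equal() can bail out early once the distance exceeds
-- the cap, which helps with long strings. But it still has to visit every
-- row: a plain index cannot support this predicate.
SELECT *
FROM   some_table
WHERE  levenshtein_less_equal(code, 'AB-123-lHdfj', 3) <= 3;
```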

If your requirements are as simple as the example suggests, you can still use LIKE. Just replace any - in your search term with % in the WHERE clause. So instead of:

WHERE code ILIKE '%AB-123-lHdfj%'

Use:

WHERE code ILIKE '%AB%123%lHdfj%'

Or, dynamically:

WHERE code ILIKE '%' || replace('AB-123-lHdfj', '-', '%') || '%'

% in LIKE patterns stands for 0-n characters. Or use _ for exactly one character. Or use regular expressions for a smarter match:

WHERE code ~* 'AB.?123.?lHdfj'

.? ... 0 or 1 characters

Or:

WHERE code ~* 'AB-?123-?lHdfj'

-? ... 0 or 1 dashes (the dash needs no escaping outside a bracket expression)

You may want to escape special characters in LIKE or regexp patterns. See:

  • Escape function for regular expression or LIKE patterns
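A minimal sketch of such an escape function for LIKE patterns (the name f_like_escape is made up; assumes the default escape character \):

```sql
-- Hypothetical helper: escape the LIKE wildcards % and _, plus the
-- escape character itself, so user input matches literally.
CREATE OR REPLACE FUNCTION f_like_escape(text)
  RETURNS text
  LANGUAGE sql IMMUTABLE STRICT AS
$$
SELECT replace(replace(replace($1
         , '\', '\\')   -- escape the escape character first!
         , '%', '\%')
         , '_', '\_');
$$;
```

Usage, to match a literal % in the data:

```sql
SELECT * FROM some_table
WHERE  code LIKE '%' || f_like_escape('50%') || '%';
```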

If your actual problem is more complex and you need something faster, then there are various options, depending on your requirements:

  • There is full text search, of course. But that may be overkill in your case.

  • A more likely candidate is trigram-matching with the additional module pg_trgm. See:

    • Using Levenshtein function on each element in a tsvector?
    • PostgreSQL LIKE query performance variations
    • Related blog post by Depesz

    It can be combined with LIKE, ILIKE, ~, or ~* since PostgreSQL 9.1.
    Also interesting in this context: the similarity() function or % operator of that module.
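    A hedged sketch of the trigram approach (table and index names are illustrative):

    ```sql
    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    -- A GIN trigram index supports LIKE / ILIKE / ~ / ~*
    -- as well as the % similarity operator:
    CREATE INDEX some_table_code_trgm_idx
    ON some_table USING gin (code gin_trgm_ops);

    -- Fuzzy match: % uses the similarity threshold (default 0.3).
    SELECT code, similarity(code, 'AB-123-lHdfj') AS sim
    FROM   some_table
    WHERE  code % 'AB-123-lHdfj'
    ORDER  BY sim DESC
    LIMIT  10;
    ```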

  • Last but not least you can implement a hand-knit solution with a function to normalize the strings to be searched. For instance, you could transform AB1-23-lHdfj --> ab123lhdfj, save it in an additional column and search with terms transformed the same way.

    Or use an index on the expression instead of the redundant column. (Involved functions must be IMMUTABLE.) Possibly combine that with pg_trgm from above.
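    A sketch of that hand-knit approach (f_normalize is a hypothetical name; it strips everything but ASCII letters and digits, matching the example transformation above):

    ```sql
    -- Must be IMMUTABLE to be usable in an index.
    CREATE OR REPLACE FUNCTION f_normalize(text)
      RETURNS text
      LANGUAGE sql IMMUTABLE STRICT AS
    $$
    SELECT lower(regexp_replace($1, '[^a-zA-Z0-9]', '', 'g'));
    $$;

    -- Expression index instead of a redundant column:
    CREATE INDEX some_table_code_norm_idx ON some_table (f_normalize(code));

    -- Normalize the search term the same way:
    SELECT * FROM some_table
    WHERE  f_normalize(code) = f_normalize('AB1-23-lHdfj');
    ```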

Overview of pattern-matching techniques:

  • Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL

Postgres provides the additional module fuzzystrmatch with several string comparison functions such as soundex() and metaphone(). But you will want to use the levenshtein() edit distance function.
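For instance, the phonetic functions (names as documented for fuzzystrmatch; the second argument of metaphone() caps the output length):

```sql
SELECT soundex('GUMBO');        -- 'G510'
SELECT metaphone('GUMBO', 10);
```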

Example:

test=# SELECT levenshtein('GUMBO', 'GAMBOL');
 levenshtein
-------------
           2
(1 row)

The 2 is the edit distance between the two words. When you apply this to a number of words and sort by the resulting edit distance, you get the kind of fuzzy matches you're looking for.

Try this sample query (with your own object names and data, of course):

SELECT * 
FROM some_table
WHERE levenshtein(code, 'AB123-lHdfj') <= 3
ORDER BY levenshtein(code, 'AB123-lHdfj')
LIMIT 10

This query says:

Give me the top 10 rows of some_table where the edit distance between the code value and the input 'AB123-lHdfj' is 3 or less, sorted by that distance. You will get back all rows where the value of code is within 3 edits of 'AB123-lHdfj'...

Note: if you get an error like:

function levenshtein(character varying, unknown) does not exist

Install the fuzzystrmatch extension using:

test=# CREATE EXTENSION fuzzystrmatch;