Grep huge number of patterns from huge file

The problem, of course, is that you run grep on the big file 10,000 times. You should read both files only once. If you want to stay outside scripting languages, you can do it this way:

  1. Extract all numbers from file 1 and sort them
  2. Extract all numbers from file 2 and sort them
  3. Run comm on the sorted lists to get what's only on the second list

Something like this:

$ grep -o '^[0-9]\{12\}$' file1 | sort -u -o file1.sorted
$ grep -o  '[0-9]\{12\}'  file2 | sort -u -o file2.sorted
$ comm -13 file1.sorted file2.sorted > file3

See man comm.
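For reference, comm expects both inputs to be sorted and prints three columns: lines only in the first file, lines only in the second file, and lines common to both; the -1, -2 and -3 flags suppress the corresponding column. A toy run (file names and numbers made up here) shows what -13 leaves behind:

$ printf '111111111111\n222222222222\n' > small.sorted
$ printf '222222222222\n333333333333\n' > big.sorted
$ comm -13 small.sorted big.sorted
333333333333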

If you could truncate the big file every day (like a log file) you could keep a cache of sorted numbers and wouldn't need to parse it whole every time.
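A rough sketch of that caching idea, assuming the big file is a log that is appended to and rotated daily (big.cache and new-part are placeholder names; new-part stands for whatever lines were appended since the last update):

$ # Rebuild the cache once per rotation
$ grep -o '[0-9]\{12\}' file2 | sort -u -o big.cache

$ # Fold newly appended lines into the cache without re-sorting the whole thing
$ grep -o '[0-9]\{12\}' new-part | sort -u | sort -m -u - big.cache > big.cache.new
$ mv big.cache.new big.cache

$ # The comparison then only needs to sort the small key list
$ grep -o '^[0-9]\{12\}$' file1 | sort -u | comm -13 - big.cache > file3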


This answer is based on the awk answer posted by potong.
On my system it is twice as fast as the comm method for the same 6 million lines in the main file and 10 thousand keys. (Now updated to use FNR and NR.)

Although awk is faster than your current method and will give you and your computer(s) some breathing space, be aware that when data processing is as intense as you've described, you will get the best overall results by switching to a dedicated database, e.g. SQLite or MySQL.


awk '{ if (/^[^0-9]/) { next }              # Skip lines which do not hold key values
       if (FNR==NR) { main[$0]=1 }          # First file ("mainfile"): remember every key it contains
       else if (main[$0]==0) { keys[$0]=1 } # Second file ("keys"): collect keys not present in main
     } END { for(key in keys) print key }' \
       "mainfile" "keys" >"keys.not-in-main"

# For 6 million lines in "mainfile" and 10 thousand keys in "keys"

# The awk  method
# time:
#   real    0m14.495s
#   user    0m14.457s
#   sys     0m0.044s

# The comm  method
# time:
#   real    0m27.976s
#   user    0m28.046s
#   sys     0m0.104s


Yes, definitely do use a database. They're made exactly for tasks like this.
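A minimal sketch of what that could look like with SQLite (the file, table and database names are made up; the query mirrors the awk answer above and lists keys that never appear in the main file):

$ grep -o  '[0-9]\{12\}'  file2 > main.numbers   # numbers from the big file
$ grep -o '^[0-9]\{12\}$' file1 > keys.numbers   # numbers from the key list
$ sqlite3 numbers.db <<'EOF'
CREATE TABLE main (num TEXT);
CREATE TABLE keys (num TEXT);
.import main.numbers main
.import keys.numbers keys
CREATE INDEX main_num_idx ON main(num);
-- anti-join: keys with no match in main, using the index for each lookup
SELECT DISTINCT num FROM keys k
WHERE NOT EXISTS (SELECT 1 FROM main m WHERE m.num = k.num);
EOF

Once the data is loaded, each run is a set of indexed lookups rather than a full rescan of the big file, and the tables can be updated incrementally as new data arrives.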