Fast way of finding lines in one file that are not in another?
The comm command (short for "common") may be useful. Its man page summary says it all: comm - compare two sorted files line by line.
#find lines only in file1
comm -23 file1 file2
#find lines only in file2
comm -13 file1 file2
#find lines common to both files
comm -12 file1 file2
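For instance, given two small sorted files (the contents here are made up for illustration):
printf 'apple\nbanana\ncherry\n' > file1
printf 'banana\ncherry\ndate\n' > file2
comm -23 file1 file2   #prints: apple
comm -13 file1 file2   #prints: date
comm -12 file1 file2   #prints: banana and cherry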
The man page is actually quite readable for this.
Like konsolebox suggested, the poster's grep solution
grep -v -f file2 file1
actually works great (faster) if you simply add the -F option, to treat the patterns as fixed strings instead of regular expressions. I verified this on a pair of ~1000-line file lists I had to compare. With -F it took 0.031 s (real), while without it took 2.278 s (real), when redirecting grep output to wc -l.
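The comparison can be reproduced along these lines (a sketch; any two large line lists will do):
time grep -v -f file2 file1 | wc -l
time grep -F -v -f file2 file1 | wc -l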
These tests also included the -x switch, which is a necessary part of the solution in order to ensure total accuracy in cases where file2 contains lines that match part of, but not all of, one or more lines in file1.
So a solution that does not require the inputs to be sorted, is fast, and is flexible (case sensitivity, etc.) is:
grep -F -x -v -f file2 file1
This doesn't work with all versions of grep; for example, it fails on macOS, where a line in file1 will be shown as not present in file2, even though it is, if it matches another line that is a substring of it. Alternatively, you can install GNU grep on macOS in order to use this solution.
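Here is a quick sketch of why the -x switch matters, using two throwaway files where a line of file2 is a substring of a line in file1:
printf 'foobar\nbaz\n' > file1
printf 'foo\nqux\n' > file2
grep -F -v -f file2 file1      #misses foobar, since "foo" matches it as a substring
grep -F -x -v -f file2 file1   #correctly prints foobar and baz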
You can achieve this by controlling the formatting of the old/new/unchanged lines in GNU diff output:
diff --new-line-format="" --unchanged-line-format="" file1 file2
The input files should be sorted for this to work. With bash (and zsh) you can sort in-place with process substitution <( ):
diff --new-line-format="" --unchanged-line-format="" <(sort file1) <(sort file2)
In the above, new and unchanged lines are suppressed, so only changed lines (i.e. removed lines, in your case) are output. You may also use a few diff options that other solutions don't offer, such as -i to ignore case, or various whitespace options (-E, -b, -w, etc.) for less strict matching.
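For example, a case-insensitive run might look like this (a sketch; sort -f folds case so the sort order agrees with diff -i):
diff -i --new-line-format="" --unchanged-line-format="" <(sort -f file1) <(sort -f file2)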
Explanation
The options --new-line-format, --old-line-format and --unchanged-line-format let you control the way diff formats the differences, similar to printf format specifiers. These options format new (added), old (removed) and unchanged lines respectively. Setting one to the empty string "" prevents output of that kind of line.
If you are familiar with unified diff format, you can partly recreate it with:
diff --old-line-format="-%L" --unchanged-line-format=" %L" \
--new-line-format="+%L" file1 file2
The %L specifier is the line in question, and we prefix each with "+", "-" or " ", like diff -u (note that it only outputs differences; it lacks the ---, +++ and @@ lines at the top of each grouped change).
You can also use this to do other useful things, like numbering each line with %dn.
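For instance, to output only the missing lines, each prefixed with its line number in file1 (a sketch assembled from the same standard GNU diff options):
diff --old-line-format="%dn: %L" --new-line-format="" --unchanged-line-format="" file1 file2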
The diff method (along with the other suggestions, comm and join) only produces the expected output with sorted input, though you can use <(sort ...) to sort in place. Here's a simple awk (nawk) script (inspired by the scripts linked to in konsolebox's answer) which accepts arbitrarily ordered input files and outputs the missing lines in the order they occur in file1.
# output lines in file1 that are not in file2
BEGIN { FS="" } # preserve whitespace
(NR==FNR) { ll1[FNR]=$0; nl1=FNR; } # file1, index by lineno
(NR!=FNR) { ss2[$0]++; } # file2, index by string
END {
    for (ll=1; ll<=nl1; ll++) if (!(ll1[ll] in ss2)) print ll1[ll]
}
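Saved as linesnotin.awk (the same file name the split example further down assumes), it runs as:
awk -f linesnotin.awk file1 file2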
This stores the entire contents of file1 line by line in a line-number indexed array ll1[], and the entire contents of file2 line by line in a line-content indexed associative array ss2[]. After both files are read, iterate over ll1 and use the in operator to determine whether each line of file1 is present in file2. (This will have different output to the diff method if there are duplicates.)
In the event that the files are sufficiently large that storing them both causes a memory problem, you can trade CPU for memory by storing only file1 and deleting matches along the way as file2 is read.
BEGIN { FS="" }
(NR==FNR) { # file1, index by lineno and string
ll1[FNR]=$0; ss1[$0]=FNR; nl1=FNR;
}
(NR!=FNR) { # file2
if ($0 in ss1) { delete ll1[ss1[$0]]; delete ss1[$0]; }
}
END {
for (ll=1; ll<=nl1; ll++) if (ll in ll1) print ll1[ll]
}
The above stores the entire contents of file1 in two arrays, one indexed by line number (ll1[]), one indexed by line content (ss1[]). Then, as file2 is read, each matching line is deleted from ll1[] and ss1[]. At the end the remaining lines from file1 are output, preserving the original order.
In this case, with the problem as stated, you can also divide and conquer using GNU split (filtering is a GNU extension), making repeated runs over chunks of file1 and reading file2 completely each time:
split -l 20000 --filter='gawk -f linesnotin.awk - file2' < file1
Note the use and placement of - (meaning stdin) on the gawk command line. This is provided by split from file1, in chunks of 20000 lines per invocation.
For users on non-GNU systems, there is almost certainly a GNU coreutils package you can obtain, including on OSX as part of the Apple Xcode tools, which provides GNU diff and awk, though only a POSIX/BSD split rather than a GNU version.