Recursive grep vs find / -type f -exec grep {} \; Which is more efficient/faster?
I'm not sure
grep -r -i 'the brown dog' /*
is really what you meant. That would mean grep recursively in all the non-hidden files and dirs in / (but still look inside hidden files and dirs inside those).
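You can check exactly which starting points the shell would pass to grep in that case:
printf '%s\n' /*
lists the non-hidden entries of / that the glob expands to.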
Assuming you meant:
grep -r -i 'the brown dog' /
A few things to note:
- Not all grep implementations support -r. And among those that do, the behaviours differ: some follow symlinks to directories when traversing the directory tree (which means you may end up looking several times in the same file or even running into infinite loops), some will not. Some will look inside device files (and it will take quite some time in /dev/zero for instance) or pipes or binary files..., some will not. (See the example just after this list.)
- It's efficient, as grep starts looking inside files as soon as it discovers them. But while it looks in a file, it's no longer looking for more files to search in (which is probably just as well in most cases).
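For instance, with GNU grep (an assumption; other implementations differ), the two recursive options split exactly on the symlink question:
grep -ri 'the brown dog' /some/dir   # does not follow symlinks to directories found during traversal
grep -Ri 'the brown dog' /some/dir   # follows them, with the duplicate/loop risks mentioned above
(/some/dir is just a placeholder.)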
Your:
find / -type f -exec grep -i 'the brown dog' {} \;
(removed the -r which didn't make sense here) is terribly inefficient because you're running one grep per file. ; should only be used for commands that accept only one argument. Moreover here, because grep looks only in one file, it will not print the file name, so you won't know where the matches are. You're not looking inside device files, pipes, symlinks..., you're not following symlinks, but you're still potentially looking inside things like /proc/kcore.
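To see the missing file name concretely, with a hypothetical /tmp/demo containing one matching file:
find /tmp/demo -type f -exec grep -i 'the brown dog' {} \;
prints the matching line with nothing in front of it, so you can't tell which file it came from.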
find / -type f -exec grep -i 'the brown dog' {} +
would be a lot better because as few grep commands as possible would be run. You'd get the file name printed, except when the last run happens to get only one file. To guarantee it, it's better to use:
find / -type f -exec grep -i 'the brown dog' /dev/null {} +
(the /dev/null argument ensures grep always sees at least two files, so it always prefixes matches with the file name) or, with GNU grep:
find / -type f -exec grep -Hi 'the brown dog' {} +
Note that grep will not be started until find has found enough files for it to chew on, so there will be some initial delay. And find will not carry on searching for more files until the previous grep has returned. Allocating and passing the big file list has some (probably negligible) impact, so all in all it's probably going to be less efficient than a grep -r that doesn't follow symlinks or look inside devices.
With GNU tools:
find / -type f -print0 | xargs -r0 grep -Hi 'the brown dog'
As above, as few grep instances as possible will be run, but find will carry on looking for more files while the first grep invocation is looking inside the first batch. That may or may not be an advantage though. For instance, with data stored on rotational hard drives, find and grep accessing data stored at different locations on the disk will slow down the disk throughput by causing the disk head to move constantly. In a RAID setup (where find and grep may access different disks) or on SSDs, that might make a positive difference.
In a RAID setup, running several concurrent grep invocations might also improve things. Still with GNU tools, on RAID1 storage with 3 disks,
find / -type f -print0 | xargs -r0 -P2 grep -Hi 'the brown dog'
might increase the performance significantly. Note however that the second grep will only be started once enough files have been found to fill up the first grep command line. You can add a -n option to xargs for that to happen sooner (and pass fewer files per grep invocation), as in the sketch below.
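For example (the batch size of 1000 is only an illustrative value):
find / -type f -print0 | xargs -r0 -n 1000 -P2 grep -Hi 'the brown dog'
Each grep now starts as soon as 1000 file names have been collected, so the second one kicks in much earlier.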
Also note that if you're redirecting xargs output to anything but a terminal device, then the greps will start buffering their output, which means that the output of those greps will probably be incorrectly interleaved. You'd have to use stdbuf -oL (where available, like on GNU or FreeBSD systems) on them to work around that (you may still have problems with very long lines, typically longer than 4KiB), or have each one write its output to a separate file and concatenate them all in the end.
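A sketch of that workaround (stdbuf makes each grep line-buffered, so concurrent invocations interleave whole lines instead of arbitrary blocks; matches.txt is just a placeholder for wherever you're sending the output):
find / -type f -print0 | xargs -r0 -P2 stdbuf -oL grep -Hi 'the brown dog' > matches.txt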
Here, the string you're looking for is fixed (not a regexp), so using the -F option might make a difference (unlikely, as grep implementations know how to optimise that already).
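For instance, combined with the /dev/null variant from above:
find / -type f -exec grep -Fi 'the brown dog' /dev/null {} +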
Another thing that could make a big difference is fixing the locale to C if you're in a multi-byte locale:
find / -type f -print0 | LC_ALL=C xargs -r0 -P2 grep -Hi 'the brown dog'
To avoid looking inside /proc, /sys..., use -xdev and specify the file systems you want to search in:
LC_ALL=C find / /home -xdev -type f -exec grep -i 'the brown dog' /dev/null {} +
Or prune the paths you want to exclude explicitly:
LC_ALL=C find / \( -path /dev -o -path /proc -o -path /sys \) -prune -o \
  -type f -exec grep -i 'the brown dog' /dev/null {} +
If the * in the grep call is not important to you, then the first should be more efficient, as only one instance of grep is started, and forks aren't free. In most cases it will be faster even with the *, but in edge cases the sorting (the shell sorts the glob expansion before handing it over) could reverse that.
There may be other find/grep combinations which work better, especially with many small files. Reading large numbers of file entries and inodes at once may give a performance improvement on rotating media.
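One idea along those lines, as a sketch (assuming GNU find, sort and cut for their NUL-delimited -z options; /some/dir is a placeholder): feed the files to grep in inode-number order, which tends to reduce head seeks on rotating disks:
find /some/dir -type f -printf '%i %p\0' |   # prefix each file name with its inode number
  sort -zn |                                 # sort the NUL-delimited records numerically
  cut -zd' ' -f2- |                          # drop the inode number again
  xargs -r0 grep -Hi 'the brown dog'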
But let's have a look at the syscall statistics:
find + grep
> strace -cf find . -type f -exec grep -i -r 'the brown dog' {} \;
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
97.86 0.883000 3619 244 wait4
0.53 0.004809 1 9318 4658 open
0.46 0.004165 1 6875 mmap
0.28 0.002555 3 977 732 execve
0.19 0.001677 2 980 735 stat
0.15 0.001366 1 1966 mprotect
0.09 0.000837 0 1820 read
0.09 0.000784 0 5647 close
0.07 0.000604 0 5215 fstat
0.06 0.000537 1 493 munmap
0.05 0.000465 2 244 clone
0.04 0.000356 1 245 245 access
0.03 0.000287 2 134 newfstatat
0.03 0.000235 1 312 openat
0.02 0.000193 0 743 brk
0.01 0.000082 0 245 arch_prctl
0.01 0.000050 0 134 getdents
0.00 0.000045 0 245 futex
0.00 0.000041 0 491 rt_sigaction
0.00 0.000041 0 246 getrlimit
0.00 0.000040 0 489 244 ioctl
0.00 0.000038 0 591 fcntl
0.00 0.000028 0 204 188 lseek
0.00 0.000024 0 489 set_robust_list
0.00 0.000013 0 245 rt_sigprocmask
0.00 0.000012 0 245 set_tid_address
0.00 0.000000 0 1 uname
0.00 0.000000 0 245 fchdir
0.00 0.000000 0 2 1 statfs
------ ----------- ----------- --------- --------- ----------------
100.00 0.902284 39085 6803 total
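Almost all of the time here goes into wait4: find waiting, 244 times, on the grep it forked for each file.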
grep only
> strace -cf grep -r -i 'the brown dog' .
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
40.00 0.000304 2 134 getdents
31.71 0.000241 0 533 read
18.82 0.000143 0 319 6 openat
4.08 0.000031 4 8 mprotect
3.29 0.000025 0 199 193 lseek
2.11 0.000016 0 401 close
0.00 0.000000 0 38 19 open
0.00 0.000000 0 6 3 stat
0.00 0.000000 0 333 fstat
0.00 0.000000 0 32 mmap
0.00 0.000000 0 4 munmap
0.00 0.000000 0 6 brk
0.00 0.000000 0 2 rt_sigaction
0.00 0.000000 0 1 rt_sigprocmask
0.00 0.000000 0 245 244 ioctl
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 471 fcntl
0.00 0.000000 0 1 getrlimit
0.00 0.000000 0 1 arch_prctl
0.00 0.000000 0 1 futex
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 132 newfstatat
0.00 0.000000 0 1 set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00 0.000760 2871 466 total
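The single recursive grep, by contrast, gets by with one execve and a few thousand syscalls in total, against roughly 39,000 (and 244 forks) above.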
If you're on an SSD and seek time is negligible, you could use GNU parallel:
find /path -type f | parallel --gnu --workdir "$PWD" -j 8 "grep -i 'the brown dog' {}"
This will execute up to 8 grep processes at the same time, based on what find found. This will thrash a hard disk drive, but an SSD should cope pretty well with it.
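A variant (assuming GNU parallel's -0/--null option) that also copes with file names containing spaces or newlines:
find /path -type f -print0 | parallel --gnu -0 --workdir "$PWD" -j 8 "grep -i 'the brown dog' {}"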