Fast way to determine if a file is a SQLite database
2-3 files per second tested with file seems very slow to me. file actually performs a number of different tests to try and determine the file type. Since you are looking for one particular type of file (sqlite), and you don't care about identifying all the others, you can experiment on a known sqlite file to determine which test actually identifies it. You can then exclude the others using the -e flag (a sketch of that experiment follows the man page excerpt below) and run against your full file set. See the man page:
-e, --exclude testname
    Exclude the test named in testname from the list of tests made to
    determine the file type. Valid test names are:

    apptype   EMX application type (only on EMX).
    text      Various types of text files (this test will try to guess the
              text encoding, irrespective of the setting of the ‘encoding’
              option).
    encoding  Different text encodings for soft magic tests.
    tokens    Looks for known tokens inside text files.
    cdf       Prints details of Compound Document Files.
    compress  Checks for, and looks inside, compressed files.
    elf       Prints ELF file details.
    soft      Consults magic files.
    tar       Examines tar files.
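As a concrete way to run that experiment, a loop along these lines should work. This is only a rough sketch: known.db is a placeholder for a file you already know is a sqlite database, and the exact test names can vary between versions of file.

#!/bin/bash
# Drop one test at a time against a known sqlite file; the exclusion that
# stops file from reporting "SQLite" is the test doing the real work.
for t in apptype text encoding tokens cdf compress elf soft tar; do
    echo "excluding $t: $(file -b -e "$t" known.db)"
done

If excluding a particular test makes the SQLite description disappear, that is the test you need to keep.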
Edit: I tried some tests myself. Summary:
- Applying my advice with the right flags can speed up file by about 15% for tests to determine sqlite. Which is something, but not the huge improvement I expected.
- Your file tests are really slow. I did 500 on a standard machine in the time you did 2-3. Are you on slow hardware, checking enormous files, running an ancient version of file, or...?
- You must keep the 'soft' test to successfully identify a file as sqlite.
For a 16MB sqlite DB file, I did:
#!/bin/bash
for i in {1..1000}
do
    file sqllite_file.db | tail > out
done
Timing on the command line:
~/tmp$ time ./test_file_times.sh; cat out
real 0m2.424s
user 0m0.040s
sys 0m0.288s
sqllite_file.db: SQLite 3.x database
Trying the different test excludes, and assuming the determination is made based on a single test, it is the 'soft' (i.e. magic file lookup) test which identifies the file. Accordingly, I modified the file command to exclude all the other tests:
file -e apptype -e ascii -e encoding -e tokens -e cdf -e compress -e elf -e tar sqllite_file.db | tail > out
Running this 1000 times:
~/tmp$ time ./test_file_times.sh; cat out
real 0m2.119s
user 0m0.060s
sys 0m0.280s
sqllite_file.db: SQLite 3.x database
If you look at http://www.sqlite.org/fileformat.html, the SQLite format starts with the string "SQLite format 3\000". It seems to me that you could check the output of head -c 16 on the file to verify the format. I'd expect this to be faster than using more general purpose tools.
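A minimal sketch of that check in shell, assuming $f holds the path of the file being tested. Comparing only 15 bytes sidesteps the trailing NUL of the header string, which command substitution would strip anyway:

# "SQLite format 3" is exactly the first 15 bytes of a SQLite 3 database.
if [ "$(head -c 15 "$f")" = "SQLite format 3" ]; then
    echo "$f: SQLite 3.x database"
fi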
If you look at file's magic description for sqlite files, it looks for either "SQLite format 3" or "** This file contains an SQLite" at the beginning of the file.
So you could either create a magic file with just those checks (and disable the built-in tests as in @ire_and_curses's solution; see the sketch at the end) or do the check manually:
case $(head -c 31 < "$file") in
  ("** This file contains an SQLite"*) echo sqlite 2;;
  ("SQLite format 3"*) echo sqlite 3;;
esac
That's not very efficient though, as it runs one head per file. With a little effort, you could probably do it in perl, reading the first 31 bytes of several files in one perl invocation.
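For the magic-file route mentioned above, a rough sketch could look like the following. The file names are placeholders, the magic fields are separated by whitespace, and spaces inside the match strings are backslash-escaped. Since file accepts many file names in one invocation, this also avoids spawning a process per file:

# Build a one-purpose magic file and point file(1) at it with -m,
# which replaces the default (much larger) magic database.
cat > sqlite.magic <<'EOF'
0   string  SQLite\ format\ 3                      SQLite 3.x database
0   string  **\ This\ file\ contains\ an\ SQLite   SQLite 2.x database
EOF
file -m ./sqlite.magic *.db

You could still combine this with the -e exclusions shown earlier to skip the remaining built-in tests.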