How to detect if a file has a UTF-8 BOM in Bash?
First, let's demonstrate that head is actually working correctly:
$ printf '\xef\xbb\xbf' >file
$ head -c 3 file | hexdump -C
00000000  ef bb bf                                          |...|
00000003
Now, let's create a working function has_bom. If your grep supports -P, then one option is:
$ has_bom() { head -c3 "$1" | LC_ALL=C grep -qP '\xef\xbb\xbf'; }
$ has_bom file && echo yes
yes
Currently, only GNU grep supports -P.
Another option is to use bash's $'...' construct:
$ has_bom() { head -c3 "$1" | grep -q $'\xef\xbb\xbf'; }
$ has_bom file && echo yes
yes
ksh and zsh also support $'...', but this construct is not POSIX and dash does not support it.
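If you need something that works even in shells without grep -P or $'...', one sketch is to compare the first three bytes as hex using only POSIX tools (dd, od, tr). The file name /tmp/bom_demo below is just for the demo:

```shell
# POSIX sketch: read the first 3 bytes and compare their hex dump
# against the UTF-8 BOM (EF BB BF).
has_bom() {
    [ "$(dd if="$1" bs=3 count=1 2>/dev/null | od -An -tx1 | tr -d ' \n')" = efbbbf ]
}

printf '\357\273\277hello\n' > /tmp/bom_demo    # file that starts with a BOM
has_bom /tmp/bom_demo && echo yes               # prints: yes
```

Note the octal escapes (\357\273\277) instead of \xef\xbb\xbf: hex escapes in printf are an extension, while octal is specified by POSIX.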
Notes:
- The use of an explicit return $? is optional. By default, the function returns with the exit code of the last command run.
- I have used the POSIX form for defining functions. It is equivalent to the bash form but gives you one less problem to deal with if you ever have to run the function under another shell.
- bash does accept the character - in a function name, but this is a controversial feature. I replaced it with _, which is more widely accepted. (For more on this issue, see this answer.)
- The -q option to grep makes it quiet, meaning that it still sets a proper exit code but does not send any characters to stdout.
To strip the BOM from the first line read into a variable, I applied the following:
read c
if (( "$(printf "%d" "'${c:0:1}")" == 65279 )) ; then c="${c:1}" ; fi
This simply removes the BOM from the variable: in a UTF-8 locale, printf "%d" with a leading quote prints the code point of the first character, and 65279 is U+FEFF, the BOM.
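The printf "%d" trick above depends on the locale decoding the three bytes as U+FEFF. A byte-oriented sketch that avoids that dependency builds the BOM with octal escapes and strips it with POSIX prefix removal (${var#...}); the path /tmp/bom_line_demo is just for the demo:

```shell
# Remove a leading UTF-8 BOM from a line, byte-for-byte (POSIX sh).
bom=$(printf '\357\273\277')             # EF BB BF as octal escapes

printf '\357\273\277hello\n' > /tmp/bom_line_demo
IFS= read -r c < /tmp/bom_line_demo      # read first line, no escape processing
c=${c#"$bom"}                            # strip the BOM prefix if present; no-op otherwise
printf '%s\n' "$c"                       # prints: hello
```

Because ${var#pattern} is a no-op when the prefix is absent, the same line is safe to run on files without a BOM.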