iconv any encoding to UTF-8

Putting it all together: go to the directory and create dir2utf8.sh:

#!/bin/bash
# converting all files in a dir to utf8

for f in *
do
  if test -f "$f"; then
    echo -e "\nConverting $f"
    CHARSET="$(file -bi "$f"|awk -F "=" '{print $2}')"
    if [ "$CHARSET" != utf-8 ]; then
      # iconv can't safely overwrite its own input, so write to a temp file first
      iconv -f "$CHARSET" -t utf8 "$f" -o "$f.utf8" && mv "$f.utf8" "$f"
    fi
  else
    echo -e "\nSkipping $f - not a regular file";
  fi
done
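
A minimal way to run it, assuming you saved the script as dir2utf8.sh inside the directory whose files you want to convert:

# make the script executable, then run it from the target directory
chmod +x dir2utf8.sh
./dir2utf8.sh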

Maybe you are looking for enca:

Enca is an Extremely Naive Charset Analyser. It detects character set and encoding of text files and can also convert them to other encodings using either a built-in converter or external libraries and tools like libiconv, librecode, or cstocs.

Currently it supports Belarusian, Bulgarian, Croatian, Czech, Estonian, Hungarian, Latvian, Lithuanian, Polish, Russian, Slovak, Slovene, Ukrainian, Chinese, and some multibyte encodings independently of language.

Note that, in general, autodetection of the current encoding is a difficult process (the same byte sequence can be correct text in multiple encodings). enca uses heuristics based on the language you tell it to expect (to limit the number of candidate encodings). You can use enconv to convert text files to a single encoding.
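
As a rough sketch (assuming Russian-language files; swap -L for your language, and check enca's man page since options can vary between versions):

# detect the encoding; the language hint narrows the candidate set
enca -L russian file.txt

# convert the file to UTF-8 in place
enconv -L russian -x UTF-8 file.txt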


You can get what you need using the standard GNU utils file and awk. Example:

file -bi .xsession-errors gives me: "text/plain; charset=us-ascii"

so file -bi .xsession-errors |awk -F "=" '{print $2}' gives me "us-ascii"

I use it in scripts like so:

CHARSET="$(file -bi "$i"|awk -F "=" '{print $2}')"

if [ "$CHARSET" != utf-8 ]; then
  iconv -f "$CHARSET" -t utf8 "$i" -o outfile
fi
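
As a side note (an assumption based on newer versions of file; check your man page), file -b --mime-encoding prints just the charset, which lets you drop the awk step, and since iconv here writes to a separate outfile you can move it over the original afterwards:

# same idea without awk, relying on file's --mime-encoding option
CHARSET="$(file -b --mime-encoding "$i")"

if [ "$CHARSET" != utf-8 ]; then
  iconv -f "$CHARSET" -t utf8 "$i" -o outfile && mv outfile "$i"
fi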