fontenc vs inputenc

The two packages address different problems.

  1. inputenc allows the user to input accented characters directly from the keyboard;

  2. fontenc is oriented to output, that is, what fonts to use for printing characters.

The two packages are largely independent, though it is best to load fontenc first and then inputenc.

With \usepackage[T1]{fontenc} you choose an output font encoding that has support for the accented characters used by the most widespread European languages (German, French, Italian, Polish and others), which is important because otherwise TeX would not correctly hyphenate words containing accented letters.

With \usepackage[<encoding>]{inputenc} you can directly input accented and other characters. What's important is that <encoding> matches the encoding in which the file was saved; this depends on your operating system and the settings of your text editor.

If calling only

\usepackage[T1]{fontenc}

you seem to get correct output, then your files are probably encoded in Latin-1 (also called ISO 8859-1). Beware, though, that the correspondence is not complete: typing ß, for example, you'd get SS in the output, which is obviously incorrect. So if your editor is set up for Latin-1, the correct calls are

\usepackage[T1]{fontenc}
\usepackage[latin1]{inputenc}

How do the packages work? Let's trace what happens with these two encodings and the character ä.

First of all one must remember that TeX knows nothing about file encodings: all it really sees is the character number.

  1. When you type ä in an editor set up for Latin-1, the machine stores character number 228.

  2. When TeX reads the file it finds the character number 228 and the macros of inputenc transform this into \"a.

  3. Now fontenc comes into action: the command \" consults a table of the accented characters the current font provides, and ä is among them, so the sequence \"a is transformed into the instruction "print character 228" in the current (T1-encoded) font.

In this case the two character numbers coincide. This is not the case for, say, ß:

  1. The machine stores character number 223.

  2. The macros of inputenc change this into \ss.

  3. fontenc transforms this into "print character 255" in the current font (T1-encoded fonts have ß in slot 255).
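Putting the pieces together, here is a minimal sketch of a document for a Latin-1 encoded file (for this to work, the file must actually be saved as Latin-1 by your editor):

\documentclass{article}
\usepackage[T1]{fontenc}      % output: T1-encoded fonts (ä in slot 228, ß in slot 255)
\usepackage[latin1]{inputenc} % input: byte 228 -> \"a, byte 223 -> \ss
\begin{document}
ä and ß typed directly give the same result as \"a and \ss.
\end{document}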

UTF-8

The situation is a bit different when \usepackage[utf8]{inputenc} is used (and the file is UTF-8 encoded, of course). When the text editor shows ä or ß, the file actually contains the two-byte sequences <C3><A4> and <C3><9F> respectively.

The first byte is a prefix carrying some information; most importantly, it announces a two-byte character. Now inputenc makes all legal prefix bytes active, so <C3> behaves like a macro: its definition looks at the next character, interprets the whole pair according to Unicode rules, and transforms it into the corresponding code point, U+00E4 and U+00DF respectively.

Other prefixes announce three- or four-byte combinations, but the behavior is essentially the same: instead of one more character, two or three are absorbed, and the translation into a code point is performed.
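For instance, € (U+20AC) occupies three bytes, <E2><82><AC>: the active <E2> absorbs the two following bytes and the triple is mapped to the code point U+20AC. A sketch (assuming textcomp, whose TS1 encoding provides a glyph for \texteuro):

\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{textcomp} % supplies a default \texteuro
\begin{document}
€ is translated into \texteuro{} via the Unicode definition files.
\end{document}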

In ot1enc.dfu and t1enc.dfu we find

\DeclareUnicodeCharacter{00DF}{\ss}
\DeclareUnicodeCharacter{00E4}{\"a}
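The same mechanism is available at user level: if your file contains a UTF-8 character for which no .dfu entry exists, LaTeX stops with an error along the lines of "Unicode character ... not set up for use with LaTeX", and you can supply the missing definition in your preamble. A sketch (the code point and the replacement text are just examples):

% after \usepackage[utf8]{inputenc}
\DeclareUnicodeCharacter{2260}{\ensuremath{\neq}} % make U+2260 print as a math symbol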

Oh, wait! There's something more! Yes, in this case inputenc interacts with fontenc (which it doesn't for other input encodings): for every loaded output encoding, the corresponding .dfu file (a file of Unicode definitions) is read before the document starts. This is why I prefer to always load fontenc before inputenc (though it's not strictly necessary).

Those declarations provide the necessary setup: the combinations <C3><A4> and <C3><9F> are translated into \"a and \ss respectively, and from then on everything works as described for latin1.

Caveat

Here's another issue that can pop up at times (see Available Characters with iso-8859-1). The Latin-1 encoding has the yen character at slot 0xA5 (decimal 165). Following the description above, the latin1 option of inputenc translates this byte into \textyen, but the T1 output encoding reserves no slot for it, so inputting ¥ results in a runtime LaTeX error. One has to load a package that provides a default output for \textyen, for instance textcomp. The same happens with the utf8 input encoding.

Only characters covered by the output encoding or that are given a suitable rendering in terms of it can be safely input.
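A minimal sketch of the fix (assuming the file is Latin-1 encoded, so a literal ¥ would be stored as byte 0xA5):

\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[latin1]{inputenc}
\usepackage{textcomp} % provides a default rendering for \textyen
\begin{document}
\textyen\ % what inputenc turns byte 0xA5 into; without textcomp this errors out
\end{document}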


Yes, you are confused. You should use both packages with pdflatex, or a different approach with xelatex or lualatex.

pdflatex

\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}

xelatex

\usepackage{fontspec}
\defaultfontfeatures{Ligatures=TeX}

lualatex

\usepackage{luatextra}
\defaultfontfeatures{Ligatures=TeX}

Or in a complete document:

\documentclass[a4paper]{scrartcl}
\usepackage[ngerman]{babel}
\usepackage{iftex}
\ifPDFTeX
   \usepackage[utf8]{inputenc}
   \usepackage[T1]{fontenc}
   \usepackage{lmodern}
\else
   \ifXeTeX
     \usepackage{fontspec}
   \else 
     \usepackage{luatextra}
   \fi
   \defaultfontfeatures{Ligatures=TeX}
\fi
\usepackage{blindtext}
\begin{document}
äöüßÄÖÜ

\blindtext
\end{document}

This assumes that your input is UTF-8, but you should use that anyway. It's 2012, after all.

See also:

  • Why you should use T1 with pdflatex
  • An introduction to XeTeX

You need inputenc to specify the character encoding of your input file if it is not plain ASCII: for example latin1, or (more general and modern) utf8.

fontenc specifies the encoding used in the fonts. This is (more or less) independent of the input encoding. Classical TeX uses TeX-specific font encodings, such as T1 for 8-bit fonts suitable for "Latin-1" languages.

Conceptually the input and font encodings would be totally independent, but in practice they are related: you need a font encoding that includes the characters of the desired language, and (regrettably) the hyphenation patterns used to hyphenate words in the document are tied to the font encoding, due to the way TeX works.
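A quick way to see the hyphenation effect yourself is \showhyphens, which writes the hyphenation points found for its argument to the terminal and log. A sketch: compile the following once as is and once with the fontenc line commented out; with the default OT1 encoding, accented letters are built with \accent, which blocks hyphenation of the words containing them:

\documentclass{article}
\usepackage[T1]{fontenc} % comment out to compare with OT1
\usepackage[ngerman]{babel}
\begin{document}
\showhyphens{Stra\ss e \"Ubergr\"o\ss en}
\end{document}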