What is the difference between Fragile and Robust commands?
The key concept here is that, when TeX handles its input, it is doing two distinct things, called expanding and executing stuff. Normally, these activities are interleaved: TeX takes a token (ie, an elementary piece of input), expands it, then executes it (if possible). Then it does so with the next token. But in certain circumstances, most notably when writing to a file, TeX only expands things without executing them (the result will most probably be (re-expanded and) executed later when TeX reads the file back). Some macros, for proper operation, rely on something being properly executed before the next token is expanded. Those are called "fragile", since they work only in the normal (interleaved) mode, but not in expansion-only contexts (such as "moving arguments" which often means writing to a file).
That's the general picture. Now let's give a "few" more details. Feel free to skip to "what to do in practice" :)
Expansion vs execution
The distinction between expansion and execution is somewhat arbitrary, but as a rule of thumb:
- expansion changes only the input stream, ie "what TeX is going to read next";
- execution is everything else.
For example, macros are expandable (TeX is going to read their replacement text next), \input is expandable (TeX is going to read the given file next), etc. \def is not expandable (it changes the meaning of the defined macro), \kern is not expandable (it changes the content of the current paragraph or page), etc.
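This rule of thumb can be illustrated with \edef, which expands its body as far as possible before defining anything (a minimal sketch; \word and \x are hypothetical scratch names):

```latex
\def\word{abc}          % \word is a macro, hence expandable
\edef\x{\word\kern1pt}  % \edef fully expands its body
% \x now holds the tokens `abc\kern 1pt': \word was expanded to
% its replacement text, while the unexpandable \kern (and the
% characters `1pt') were left in place as-is.
```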
How things can go wrong
Now, consider a macro \foo:
\newcommand\foo[1]{\def\arg{#1}\ifx\arg\empty T\else F\fi}
(In normal context, \foo{} gives T and \foo{stuff} gives F.) In normal context, TeX will try to expand \def (which does nothing), then execute it (which removes \arg{#1} from the input stream and defines \arg), then expand the next token \ifx (which removes \arg\empty and possibly everything up to, but not including, the matching \else from the input stream), etc.
In expansion-only context, TeX will try to expand \def (does nothing), then expand whatever comes next, ie \arg. At this point, anything could happen. Maybe \arg is not defined and you get a (confusing) error message. Maybe it is defined to something like abc, so \foo{} will expand to \def abc{} F. You'll get no error when writing this to the file, but it will crash when read back. Perhaps \arg is defined to \abc; then \foo{} will expand to \def\abc{} F. Then you get no error message, neither when writing nor at readback, but not only do you get F while you're expecting T, but also \abc is redefined, which can have all kinds of consequences if this is an important macro (and good luck tracking the bug down).
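This failure can be reproduced interactively with \edef, which expands its body the same way a write to a file does (a minimal sketch; \x is a hypothetical scratch macro):

```latex
\newcommand\foo[1]{\def\arg{#1}\ifx\arg\empty T\else F\fi}
\def\arg{abc}    % suppose \arg happens to already mean `abc'
\edef\x{\foo{}}  % expansion-only, just like writing to a file
% \x now holds `\def abc{}F': the \def was never executed, so
% the stray \arg expanded to `abc', and \ifx compared the old
% \arg (= abc) with \empty, yielding F instead of the expected T.
```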
How protection works
Edited to add (not in the original question, but someone asked in a comment): so how does \protect work? Well, in normal context \protect expands to \relax, which does nothing. When a LaTeX (not TeX) command is about to process one of its arguments in expansion-only mode, it changes \protect to mean something based on \noexpand, which avoids expansion of the next token, thus protecting it from being expanded-but-not-executed. (See 11.4 in source2e.pdf for full details.)
For example, with \foo as above, if you try \section{\foo{}}, chaos ensues as explained above. Now if you do \section{\protect\foo{}}, then when LaTeX prints the section title it's in normal (interleaved) mode: \protect expands to \relax, then \foo{} expands-and-executes normally and you get a big T in your document. Before LaTeX writes your section title to the .aux file for the table of contents, it changes \protect to \noexpand\protect\noexpand, so \protect\foo expands to \noexpand\protect\noexpand\foo and \protect\foo is written to the aux file. When that line of the aux file is moved to the toc file, LaTeX defines \protect to \noexpand, so just \foo gets written to the toc file. When the toc file is finally read in normal mode, then and only then \foo is expanded-and-executed and you get a T in your document again.
You can play with the following document, looking at the contents of the .aux and .toc files without and with \protect. Notes: (1) you want to run pdflatex manually on the file, as opposed to latexmk or your IDE, which might do multiple runs at once, and (2) you will need to remove the toc file to recover after trying the version without \protect.
\documentclass{article}
\newcommand\foo[1]{\def\arg{#1}\ifx\arg\empty T\else F\fi}
\begin{document}
\tableofcontents
\section{\foo{}} % first run writes garbage to the aux file, second crashes
%\section{\protect\foo{}} % this is fine
\end{document}
Fun fact: the unprotected version fails in a different way (as explained above) if we replace every occurrence of \arg with \lol in the definition of \foo.
Which macros are fragile
This was the easy (read: TeXnical, but well-defined) part of your question. Now, the hard part: when to use \protect? Well, it depends. You cannot know whether a macro is fragile or not without looking at its implementation. For example, the \foo macro above could use an expandable trick to test for emptiness and would not be fragile. Also, some macros are "self-\protect-ing" (those defined with \DeclareRobustCommand, for example). As Joseph mentioned, \( is fragile unless you (or another package) loaded fixltx2e. (As a rule of thumb, most math-mode macros are fragile.) Also, you cannot know whether a particular macro tries to expand-only its arguments, but you can at least be sure all moving arguments will be expanded-only at some point.
What to do in practice
So, my advice is: when you see a weird error happening in or near a moving argument (ie a piece of text that's moved to another part of the document, like a footnote (to the bottom of the page), a section title (to the table of contents), etc), try \protect-ing every macro in it. That solves 99% of the problems.
(This can make you a hero when applied to a colleague's article, due today and "mysteriously" crashing: look at their document for a few seconds until you see a math formula inside a \section title, say "add a \protect here", then go back to work and let them call you a wizard. Cheap trick, but works.)
The key concept here is expansion. I'll take as an example a hypothetical function \foo which is 'fragile', used in the argument of \section:
\section{Some text \foo[option]{argument}}
When LaTeX processes the \section macro, it does a number of things. One of those is to write the text of the section name to the .aux file. Now, the key point here is that this uses the \write primitive, effectively:
\immediate\write\@auxout{Some text \foo[option]{argument}}
The \write primitive expands its argument in the same way as \edef. However, I've said that \foo is 'fragile'. That means that trying to \edef it will either lead to an error or the wrong result. A classic case is any macro with an optional argument: the detection of the optional argument cannot be carried out inside an \edef. Another is anything numbered based on where it appears in the input, which can give bad numbering in the output. See for example http://texblog.net/help/latex/fragile.html for more details on macros which are fragile (but note that the fixltx2e package sorts out some of these).
When you use \protect, it prevents TeX expanding the next token during the \write. So the text is written 'as given' to the .aux file. This of course requires that you know which functions need to be protected. As TH notes, it also needs the correct use of \protected@write or \protected@edef to work correctly. (The way these macros work is by altering the definition of \protect to achieve the desired effect. So inside \protected@edef, the expansion of \protect is \noexpand\protect\noexpand, for example.)
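You can see this at work interactively (a minimal sketch; \protected@edef is an internal kernel macro, hence the \makeatletter, and \x is a hypothetical scratch macro):

```latex
\makeatletter
% \protected@edef temporarily sets
%   \protect -> \noexpand\protect\noexpand
% performs the \edef, then restores \protect afterwards.
\protected@edef\x{\protect\foo{} moved text}
\makeatother
% \x now holds `\protect\foo {} moved text':
% \foo survived the expansion untouched.
```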
The macro \DeclareRobustCommand is available in LaTeX2e. It adds some automatic protection into the macro itself, so that \protect is not needed. This again works inside a \protected@write situation.
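For example, redefining the fragile \foo from the other answers this way (a sketch):

```latex
% \DeclareRobustCommand makes \foo expand to a single protected
% token in moving arguments; the real body is stored under the
% internal name `\foo ' (with a trailing space).
\DeclareRobustCommand\foo[1]{\def\arg{#1}\ifx\arg\empty T\else F\fi}
% Now this is safe without any explicit \protect:
\section{\foo{}}
```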
This is good, but a better method is e-TeX's \protected system:
\protected\def\foo....
Macros defined in this way are not expanded inside an \edef or a \write at all, as the engine itself knows to leave them alone. This is the approach taken by etoolbox and xparse for defining truly robust macros. Macros which are engine-protected don't rely on LaTeX2e's mechanisms at all, so are safe inside a plain \edef.
This question is perhaps best answered by an example. Consider the fragile command \title. Here are the relevant definitions from latex.ltx:
\def\title#1{\gdef\@title{#1}}
\def\@title{\@latex@error{No \noexpand\title given}\@ehc}
Now imagine that you include \title{This is the title} in the argument of a command that first executes this argument, then writes it to the aux file, or otherwise "moves" it. We can run an interactive experiment to see what would happen, using \edef to immediately see the result:
; latex
This is pdfTeX, Version 3.1415926-1.40.11 (TeX Live 2010)
restricted \write18 enabled.
**\relax
[…]
*\title{This is the title}
*\edef\foo{\title{This is the title}}
*\show\foo
> \foo=macro:
->\gdef This is the title{This is the title}.
<*> \show\foo
As you can imagine (try it if you must), actually executing \foo will not work as you might have expected before running this experiment. The problem, of course, is that \@title got expanded.
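For completeness, the same experiment with protection shows the fix (a sketch; \protected@edef is internal, hence the \makeatletter, and we reuse \foo as the scratch macro):

```latex
\makeatletter
% \protect is temporarily \noexpand\protect\noexpand here,
% so \title is shielded from the \edef-style expansion.
\protected@edef\foo{\protect\title{This is the title}}
\makeatother
\show\foo
% > \foo=macro: ->\protect \title {This is the title}.
```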
To find out how \protect works, you could do worse than running texdoc source2e and looking at section 11.4, Robust commands and protect.