How to prolong compilation time while engaging in leisure activities?
With pdfTeX, add
\everypar{\ifnum\pdfelapsedtime<\maxdimen
\the\expandafter\everypar\else\pdfresettimer\fi}
(after \begin{document} if you are using pdfLaTeX). This will wait four and a half hours (\maxdimen scaled seconds, that is, 2^{30-16} seconds) before starting each paragraph. Replace \maxdimen by 65536 (and a space) to get a one-second delay at each paragraph. With the \maxdimen version, compiling a book with, say, 1000 paragraphs takes about six months. Not bad for a 90-character addition.
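Put together, the one-second variant of the recipe above is a complete file (pdfLaTeX only, since \pdfelapsedtime and \pdfresettimer are pdfTeX primitives):

```latex
\documentclass{article}
\begin{document}
% Busy-wait until 65536 scaled seconds (= 1 second) have elapsed,
% then reset the timer so the next paragraph waits again.
\everypar{\ifnum\pdfelapsedtime<65536
  \the\expandafter\everypar\else\pdfresettimer\fi}
One second passes before this paragraph.

And another second before this one.
\end{document}
```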
With a bit more code (184 characters), an engine-agnostic implementation of the Ackermann function lets us get pretty much any delay by changing the arguments of \A just a little bit (I am not sure at what point TeX's limits are exceeded). With
\def\A#1#2{\the\numexpr\B{#1}{#2+1}\empty
{\A{#1-1}{\B{#2}1\empty{\A{#1}{#2-1}}.}}.\relax}
\def\B#1{\ifnum\numexpr#1=0 \C\else\C\fi}
\def\C#1#2#3#4#5.{#1#3#4}
\everypar{{\count0=\A{3}{1}}}
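For context, \A{m}{n} encodes the standard two-argument Ackermann–Péter function:

```latex
% A grows explosively in its first argument; the macro above
% mirrors this double recursion purely by expansion.
\[
  A(m,n) =
  \begin{cases}
    n+1                 & \text{if } m = 0,\\
    A(m-1,\,1)          & \text{if } m > 0 \text{ and } n = 0,\\
    A(m-1,\,A(m,\,n-1)) & \text{otherwise.}
  \end{cases}
\]
```

The value A(3,1) is only 13, so the six minutes come from the sheer number of expansions rather than from the size of the result; already A(4,2) = 2^{65536} - 3 has 19729 digits, far beyond \numexpr's 2^{31} - 1 bound.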
I add about 6 minutes per paragraph to the compilation time on my machine. An additional benefit of this answer is that the definitions can easily be scattered in the preamble to avoid detection. Using xint lets us manipulate larger numbers, hence give larger arguments to \A:
\usepackage{xint}
\def\C#1#2#3#4#5.{#1#3#4}
\def\b#1{\if0\s{#1}\C\else\C\fi}
\let\s\xintSgn
\let\d\romannumeral
\let\x\xintAdd
\def\a#1#2{\b{#1}{\x{#2}1}\empty{%
\a{\x{#1}{-1}}
{\d-`-\b{#2}1\empty{\a{#1}{\d-`-\x{#2}{-1}}}.}}.}
\everypar{\d-\xintSgn{\a{3}{1}} }
This last code is not tested: xint puts an overhead on TeX's arithmetic, which makes the whole thing way too slow. Slower still would be the bigintcalc package, which predates xint and is not as optimized.
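As a quick sanity check of that claim, the macros used above really do handle numbers beyond TeX's native range (a minimal sketch, assuming \usepackage{xint} still provides \xintAdd as in the answer's code; recent xint versions may ship it from a submodule such as xintfrac):

```latex
\documentclass{article}
\usepackage{xint}
\begin{document}
% \numexpr overflows past 2^{31}-1 = 2147483647; xint computes on
% digit strings instead, so this addition is exact:
\xintAdd{2147483647}{1}
\end{document}
```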
This should keep you busy for a minute or two (I stopped my test run after 5 minutes):
\documentclass{article}
\begin{document}
\everypar{\ifnum\thepage000<\maxdimen\par\fi}
hello
\end{document}
Disclaimer: I'm not responsible for any damage this may cause to your CPU! :D
A solution based on hyperref and \foreach:
\documentclass{article}
\usepackage{tikz}
\usepackage{hyperref}
\usepackage{lipsum}
\begin{document}
\title{The art of wasting time}
\author{dcmst}
\maketitle
\lipsum[1]
\begin{tikzpicture}
  \def\n{1000}
  \foreach \s in {1,...,\n}{
    \node {\hypertarget{\s}{}};
    \node {\hyperlink{\s}{}};
  }
\end{tikzpicture}
\lipsum[1]
\end{document}
On an i7 laptop a loop of 1000 compiles in 3-4 seconds; a loop of 10000 in 53 seconds, and so on. Just increase the number and you'll have more time to play with [insert your favorite video game here]. This does not produce any output at all, so you can also blame your old machine and ask for a newer, faster one :)
Edit: it looks like 16383 is the maximum accepted value for \n. You can obviously stack more than one loop by adding as many \foreach as you need; two \foreach instances with a value of 10000 got me a TeX capacity exceeded error.
Edit 2: to remove compilation warnings, a ~ or \null can be used as the second argument of \hypertarget and \hyperlink.
Edit 3: after increasing the main_memory amount I managed to keep the compilation going for 4 minutes before stopping it manually. Instead of adding another \foreach, the time can also be prolonged by adding more pairs of nodes like the two in the MWE.
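For what it's worth, on TeX Live main_memory is a texmf.cnf setting; a sketch of the usual route (the value here is arbitrary, and on some installations the formats must be rebuilt, e.g. with fmtutil-sys --all, before the change is picked up):

```
% texmf.cnf -- raise TeX's main memory array
main_memory = 12000000
```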