Why is seq bad?

Another counterexample is given in this answer: monads fail to satisfy the monad laws in the presence of seq and undefined. And since undefined cannot be avoided in a Turing-complete language, seq is the one to blame.


The angst about Prelude.seq (frequently in association with ⊥) mostly comes down to a few reasons:

  1. it weakens extensionality

     -- (\ x -> f x) == f
    
    seq (\ x -> ⊥ x) y = y
    
    seq ⊥ y = ⊥
    
  2. it weakens parametricity

     --  foldr k z (build g) == g k z
    
    foldr ⊥ 0 (build seq) = foldr ⊥ 0 (seq (:) [])
                          = foldr ⊥ 0 []
                          = 0
    
    seq ⊥ 0 = ⊥
    
  3. it invalidates various laws, e.g. those of the monadic interface

     --  m >>= return == m
    
    seq (⊥ >>= return :: State s a) True = True
    
    seq (⊥ :: State s a) True = ⊥
    
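The third point can be checked directly with a hand-rolled State monad (a minimal sketch; this local newtype stands in for the State type named above):

```haskell
import Control.Exception (SomeException, evaluate, try)

newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f m = State $ \s -> let (a, s') = runState m s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  mf <*> ma = State $ \s -> let (f, s')  = runState mf s
                                (a, s'') = runState ma s'
                            in (f a, s'')

instance Monad (State s) where
  m >>= k = State $ \s -> let (a, s') = runState m s in runState (k a) s'

main :: IO ()
main = do
  -- ⊥ >>= return wraps a lambda, which is already in WHNF, so seq succeeds:
  ok <- try (evaluate (seq (undefined >>= return :: State Int Bool) True))
  print (ok :: Either SomeException Bool)   -- Right True
  -- but ⊥ itself diverges when forced:
  bad <- try (evaluate (seq (undefined :: State Int Bool) True))
  print (bad :: Either SomeException Bool)
```

If the monad law m >>= return == m held in the presence of seq, both calls would have to behave the same way; they do not.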

However:

  1. Extensionality is also weakened by the combination of call-by-need semantics and the use of weak-head normal form, as noted by Philip Johnson-Freyd, Paul Downen and Zena Ariola.

  2. Parametricity is similarly weakened by the combination of GADTs and the corresponding map functions satisfying the functor laws, as shown by Patricia Johann, Enrico Ghiorzi, and Daniel Jeffries.

  3. The presence of the polymorphic fixed-point operator:

    yet :: (a -> a) -> a
    yet f = f (yet f)
    

    also imposes restrictions on parametricity, as shown by Philip Wadler.

  4. That certain combinations of operators and values can invalidate basic logical or mathematical rules isn't new, e.g. division and zero. Like division, selective strictness is necessary: algorithms for determining strictness (a nontrivial property) are subject to Rice's theorem, so they cannot always succeed, and the result is unexpectedly excessive memory usage (i.e. space leaks) in programs. As for the choice between primitive operators and annotations (on patterns or types), it usually doesn't change the adverse impact on important theorems and laws.

    Another option could be to use an augmented form of call-by-value semantics, but that presumes the method of augmentation being used is sufficiently "well-behaved".
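To illustrate why selective strictness is needed in practice, here is the classic space-leak example: a left fold whose accumulator is forced with seq at each step, rather than being left as a growing chain of suspended additions (a sketch; the names sumLazy and sumStrict are made up for this example):

```haskell
-- Lazy accumulation: sumLazy [1..n] builds n nested thunks
-- ((0 + 1) + 2) + ... before any addition is performed.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x) xs

-- Forcing the accumulator with seq keeps the fold in constant space.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x
                    in acc' `seq` go acc' xs

main :: IO ()
main = print (sumStrict [1 .. 1000000])   -- 500000500000
```

This is essentially the difference between foldl and foldl' in Data.List; no static analysis can decide in general where such forcing is safe, which is why the programmer is given seq.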


At some time in the future, perhaps there will be one or more advances in the computing sciences that resolve the matter. Until then, the most practical option is to manage the awkward interaction between useful laws and theorems on the one hand and seq, along with other real-world programming features, on the other.


As far as I know, a polymorphic seq function is bad because it weakens free theorems or, in other words, some equalities that are valid without seq are no longer valid with it. For example, the equality

map g (f xs) = f (map g xs)

holds for all functions g :: tau -> tau', all lists xs :: [tau] and all polymorphic functions f :: [a] -> [a]. Basically, this equality states that f can only reorder the elements of its argument list or drop or duplicate elements but cannot invent new elements.

To be honest, it can invent elements: it could "insert" a non-terminating computation or run-time error into the list, since the type of an error is polymorphic. That is, this equality already breaks in a programming language like Haskell even without seq. The following function definitions provide a counterexample to the equation. Basically, on the left-hand side, g "hides" the error.

g _ = True
f _ = [undefined]

In order to fix the equation, g has to be strict, that is, it has to map an error to an error. In this case, the equality holds again.
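Both sides of the broken equation can be observed directly (a sketch; forcing the element on the right-hand side is what triggers the error):

```haskell
import Control.Exception (SomeException, evaluate, try)

g :: Bool -> Bool
g _ = True

f :: [a] -> [a]
f _ = [undefined]

main :: IO ()
main = do
  -- Left-hand side: g never inspects its argument, so it hides the error.
  print (map g (f [False]))   -- [True]
  -- Right-hand side: f (map g [False]) = [undefined];
  -- forcing its element raises the error.
  r <- try (evaluate (head (f (map g [False]))))
  print (r :: Either SomeException Bool)
```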

If you add a polymorphic seq operator, the equation breaks again; for example, the following instantiation is a counterexample.

g True = True
f (x:y:_) = [seq x y]

If we consider the list xs = [False, True], we have

map g (f [False, True]) = map g [True] = [True]

but, on the other hand

f (map g [False, True]) = f [undefined, True] = [undefined]
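Again, both sides can be run directly (a sketch; the incomplete patterns in g and f are intentional and match the definitions above):

```haskell
import Control.Exception (SomeException, evaluate, try)

g :: Bool -> Bool
g True = True

f :: [a] -> [a]
f (x:y:_) = [seq x y]

main :: IO ()
main = do
  -- Left-hand side: f sees [False, True]; seq False True = True.
  print (map g (f [False, True]))   -- [True]
  -- Right-hand side: f sees [g False, g True], so seq forces g False,
  -- which hits g's missing pattern for False.
  r <- try (evaluate (head (f (map g [False, True]))))
  print (r :: Either SomeException Bool)
```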

That is, you can use seq to make the element at one position of the list depend on the definedness of another element in the list. The equality holds again if g is total. If you are interested in free theorems, check out the free theorem generator, which lets you specify whether you are considering a language with errors or even a language with seq. Although this might seem of little practical relevance, seq breaks some transformations that are used to improve the performance of functional programs; for example, foldr/build fusion fails in the presence of seq. If you are interested in more details about free theorems in the presence of seq, take a look at Free Theorems in the Presence of seq.

As far as I know, it was already known that a polymorphic seq breaks certain transformations when it was added to the language. However, the alternatives have disadvantages as well. With a type-class-based seq, adding a seq somewhere deep down might force you to add lots of type class constraints throughout your program. Furthermore, omitting seq altogether was not an option, as it was already known that there are space leaks that can be fixed using seq.

Finally, I might be missing something, but I don't see how a seq operator of type a -> a would work. The point of seq is that it evaluates one expression to weak head normal form when another expression is evaluated to weak head normal form. If seq has type a -> a, there is no way to make the evaluation of one expression depend on the evaluation of another.