What are the requirements to prefer CPS over algebraic data types?
This change was made on March 2, 2009, in commit a98a3ccb by Antoine Latter. The commit message was just:
commit a98a3ccbca9835fe749b8cd2d475a0494a84a460
Author: Antoine Latter <[email protected]>
Date: Mon Mar 2 00:20:00 2009 +0000
move core data type over to CPS
and it replaced the original ADT for ParsecT:
data ParsecT s u m a
= ParsecT { runParsecT :: State s u -> m (Consumed (m (Reply s u a))) }
with the new version in your question, adding a runParsecT adapter to convert everything back:
runParsecT :: Monad m => ParsecT s u m a -> State s u -> m (Consumed (m (Reply s u a)))
runParsecT p s = unParser p s cok cerr eok eerr
where cok a s' err = return . Consumed . return $ Ok a s' err
cerr err = return . Consumed . return $ Error err
eok a s' err = return . Empty . return $ Ok a s' err
eerr err = return . Empty . return $ Error err
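For reference, the CPS-encoded type it switched to (the one quoted in your question) looks roughly like this in today's Text.Parsec.Prim, with one continuation for each of the four possible outcomes (exact details may differ slightly from the 2009 commit):

newtype ParsecT s u m a
    = ParsecT { unParser :: forall b .
                   State s u
                -> (a -> State s u -> ParseError -> m b) -- consumed ok
                -> (ParseError -> m b)                   -- consumed err
                -> (a -> State s u -> ParseError -> m b) -- empty ok
                -> (ParseError -> m b)                   -- empty err
                -> m b
              }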
I see that he made a blog post in February 2009 where he wrote about finally understanding CPS style and gave a CPS version of MaybeT. He didn't talk about any performance advantages, but merely noted that the CPS style had the advantage that the Monad and MonadPlus instances for MaybeT could be written without calling >>= or return on the underlying monad, and without explicit case analysis of Maybe values.
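To make that concrete, here is a minimal sketch of such a CPS-encoded MaybeT (my own reconstruction, not Latter's exact code): the computation takes a success continuation and a failure continuation, and none of the instances mentions >>= or return of the underlying monad m, nor pattern-matches on a Maybe.

{-# LANGUAGE RankNTypes #-}
import Control.Applicative (Alternative (..))
import Control.Monad (MonadPlus)

newtype MaybeT m a = MaybeT
  { runMaybeT :: forall r. (a -> m r) -> m r -> m r }
  --                       success      failure

instance Functor (MaybeT m) where
  fmap f m = MaybeT $ \suc fl -> runMaybeT m (suc . f) fl

instance Applicative (MaybeT m) where
  pure x    = MaybeT $ \suc _  -> suc x
  mf <*> mx = MaybeT $ \suc fl ->
                runMaybeT mf (\f -> runMaybeT mx (suc . f) fl) fl

instance Monad (MaybeT m) where
  m >>= k = MaybeT $ \suc fl ->
              runMaybeT m (\a -> runMaybeT (k a) suc fl) fl

instance Alternative (MaybeT m) where
  empty   = MaybeT $ \_ fl -> fl
  l <|> r = MaybeT $ \suc fl -> runMaybeT l suc (runMaybeT r suc fl)

instance MonadPlus (MaybeT m)  -- mzero/mplus default to empty/(<|>)

Note that no Monad m constraint appears anywhere: failure and choice are handled entirely by picking which continuation to call.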
In a later December 2009 blog post, he writes about his "obsession with functionalizing monad transformers", gives an example with ErrorT, and explicitly notes:
I haven't benchmarked whether or not this is faster for anything, but I find the whole thing a lot of fun.
However, he then goes on in the same blog post to talk about how the new features in Parsec 3, namely making it a monad transformer instead of a plain old monad and parametrizing it over the input type, led to poor performance (about 1.8x slower on some benchmarks). He discovered that converting the core data type over to CPS style made Parsec 3 as fast as Parsec 2, at least when those new abstractions weren't being used.
Speculating about the timing, I think that Antoine thought CPS was "cool" and had stylistic advantages that appealed to him, so he had it top of mind when working on Parsec 3. When the new abstractions in Parsec 3 led to a performance problem, he fortuitously discovered that converting to CPS appeared to fix it, though he didn't do a detailed study of why (just some speculation in the blog post). However, it's a little unclear to me whether he actually converted to CPS first, before discovering the performance advantage, or tried CPS with the expectation that it might help performance. The blog post reads as though the conversion was done intentionally to address performance, but that might just have been for the sake of simpler exposition.
One big issue is that ParsecT is a monad transformer, and monad transformers defined using ADTs block optimizations more than monad transformers using CPS.
The expression pure x >>= k :: ExceptT e m a gives a minimal illustrative example.

With ExceptT e m a defined as m (Either e a), that expression doesn't simplify well, because it involves (>>=) of the underlying monad m, which is abstract. With ExceptT e m a defined as forall r. (Either e a -> m r) -> m r, pure x >>= k essentially beta-reduces to k x, without making any assumption about m.

You need to specialize m to optimize a term of type m (Either e a) at all, whereas the continuation-based variant can go a long way without specialization.
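Here is a small sketch of the two representations side by side; the names (ExceptA/ExceptC, pureA/bindA, pureC/bindC) are made up for illustration, with the first corresponding to the ExceptT in transformers and the second to the CPS encoding described above.

{-# LANGUAGE RankNTypes #-}

-- ADT style: simplifying pureA x `bindA` k requires knowing how
-- (>>=) and return behave for the abstract monad m.
newtype ExceptA e m a = ExceptA { runExceptA :: m (Either e a) }

pureA :: Monad m => a -> ExceptA e m a
pureA = ExceptA . return . Right

bindA :: Monad m => ExceptA e m a -> (a -> ExceptA e m b) -> ExceptA e m b
bindA m k = ExceptA $
  runExceptA m >>= either (return . Left) (runExceptA . k)

-- CPS style: no Monad m constraint in sight.
newtype ExceptC e m a =
  ExceptC { runExceptC :: forall r. (Either e a -> m r) -> m r }

pureC :: a -> ExceptC e m a
pureC x = ExceptC $ \c -> c (Right x)

bindC :: ExceptC e m a -> (a -> ExceptC e m b) -> ExceptC e m b
bindC m k = ExceptC $ \c ->
  runExceptC m (either (c . Left) (\a -> runExceptC (k a) c))

Inlining pureC and bindC in pureC x `bindC` k gives ExceptC (\c -> runExceptC (k x) c), which is k x up to eta-expansion; GHC can do that simplification without knowing anything about m.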
However, CPS is also not the ideal representation in Haskell, because the continuations are functions which get allocated on the heap. Functions are cheap but not zero cost.
At the end of the day, the abstractness of m in monad transformers really gets in the way: to optimize, you have to specialize, i.e., break modularity. Thus a more promising approach is to use some special primitive monad (IO) with dedicated support from the run-time system as the basis of an effect system.
See also the talk Effects for Less, by Alexis King, and the related library eff.