The way merge-sort is faster than insertion-sort puzzles me
Behind the scenes is lazy evaluation. The start of the sorted list is determined before the sort is complete, so it can be output before the work is finished. And since merge sort does less work overall, each prefix of the merge-sorted list becomes available sooner, so it is printed out faster.
As requested, here is how sorting `[5,2,6,3,1,4]` proceeds. I use `insert_sort = foldr ins []` for brevity.
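The answer never spells out `ins`, so here is a minimal sketch of the definitions the trace below assumes (a standard ordered insert; only the names come from above):

ins :: Ord a => a -> [a] -> [a]
ins x [] = [x]
ins x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : ins x ys   -- emits y right away; the recursive call stays a thunk

insert_sort :: Ord a => [a] -> [a]
insert_sort = foldr ins []

That second guard is the laziness lever: when x > y, `ins` produces the cons cell for y without evaluating its own tail, which is exactly what lets the 1 escape partway through the trace.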
insert_sort [5,2,6,3,1,4]
= foldr ins [] [5,2,6,3,1,4]
= 5 `ins` foldr ins [] [2,6,3,1,4]
= 5 `ins` (2 `ins` foldr ins [] [6,3,1,4]) -- and so on, until:
= 5 `ins` 2 `ins` 6 `ins` 3 `ins` 1 `ins` 4 `ins` []
= 5 `ins` 2 `ins` 6 `ins` 3 `ins` 1 `ins` (4:[])
= 5 `ins` 2 `ins` 6 `ins` 3 `ins` (1:4:[])
= 5 `ins` 2 `ins` 6 `ins` (1 : (3 `ins` (4:[])))
= 5 `ins` 2 `ins` (1 : (6 `ins` (3 `ins` (4:[]))))
= 5 `ins` (1 : (2 `ins` (6 `ins` (3 `ins` (4:[])))))
= 1 : (5 `ins` (2 `ins` (6 `ins` (3 `ins` (4:[]))))) -- now 1 can be output
= 1 : (5 `ins` (2 `ins` (6 `ins` (3:4:[]))))
= 1 : (5 `ins` (2 `ins` (3 : (6 `ins` (4:[])))))
= 1 : (5 `ins` (2 : (3 : (6 `ins` (4:[])))))
= 1 : 2 : (5 `ins` (3 : (6 `ins` (4:[])))) -- now 2 can be output
= 1 : 2 : 3 : (5 `ins` (6 `ins` (4:[]))) -- now 3
= 1 : 2 : 3 : (5 `ins` (4:6:[]))
= 1 : 2 : 3 : 4 : (5 `ins` (6:[])) -- now 4
= 1 : 2 : 3 : 4 : 5 : 6 : [] -- done
And merge sort (abbreviating `merge` as `mg` and `merge_sort` as `ms`):
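Again, a sketch of the definitions the steps below are consistent with (splitting at the midpoint with `take` and `drop` is my assumption, but it matches the trace):

mg :: Ord a => [a] -> [a] -> [a]
mg [] ys = ys
mg xs [] = xs
mg (x:xs) (y:ys)
  | x <= y    = x : mg xs (y:ys)   -- emit the smaller head; the rest stays lazy
  | otherwise = y : mg (x:xs) ys

ms :: Ord a => [a] -> [a]
ms []  = []
ms [x] = [x]
ms xs  = mg (ms (take len xs)) (ms (drop len xs))
  where len = length xs `div` 2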
merge_sort [5,2,6,3,1,4]
= mg (ms [5,2,6]) (ms [3,1,4])
= mg (mg (ms [5]) (ms [2,6])) (mg (ms [3]) (ms [1,4]))
= mg (mg [5] (mg [2] [6])) (mg [3] (mg [1] [4]))
= mg (mg [5] [2,6]) (mg [3] [1,4])
= mg (2 : mg [5] [6]) (1 : mg [3] [4])
= 1 : mg (2 : mg [5] [6]) (mg [3] [4]) -- now 1 can be output
= 1 : mg (2 : mg [5] [6]) [3,4]
= 1 : 2 : mg (mg [5] [6]) [3,4] -- now 2 can be output
= 1 : 2 : mg [5,6] [3,4]
= 1 : 2 : 3 : mg [5,6] [4] -- now 3
= 1 : 2 : 3 : 4 : mg [5,6] [] -- now 4
= 1 : 2 : 3 : 4 : 5 : 6 : [] -- now 5 and 6
Admittedly I've taken a few shortcuts, but Haskell isn't the only lazy one.
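You can actually watch this laziness at work. Here's a little experiment of my own (not part of the trace above): the same merge, but with a `trace` on every cons cell it produces. Demanding only the head of the sorted list fires just the handful of merge steps needed to find the minimum:

import Debug.Trace (trace)

mgT :: (Ord a, Show a) => [a] -> [a] -> [a]
mgT [] ys = ys
mgT xs [] = xs
mgT (x:xs) (y:ys)
  | x <= y    = trace ("emit " ++ show x) (x : mgT xs (y:ys))
  | otherwise = trace ("emit " ++ show y) (y : mgT (x:xs) ys)

msT :: (Ord a, Show a) => [a] -> [a]
msT []  = []
msT [x] = [x]
msT xs  = mgT (msT (take len xs)) (msT (drop len xs))
  where len = length xs `div` 2

main :: IO ()
main = print (head (msT [5,2,6,3,1,4]))   -- forces only the merges needed for the minimum

This prints five "emit" lines (on stderr) before the answer 1; printing the whole sorted list would fire the rest.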
OK, here's the breakdown. You want me to print out the following (a complete program for this line is sketched at the end of this answer):
merge_sort $ take 100000 $ randomRs (1,100000) $ mkStdGen 1 :: [Int]
I happen to know that this is a list, so first I'll print an opening bracket:
[
Then I'll look for the first element of the list, print that out, and then a comma. That means I have to evaluate the expression just far enough to figure out what the first element of the list is.
merge_sort THUNK0
Well, now I need to pattern match: either THUNK0 matches `(x:[])` or it doesn't. But I don't know yet, so I'll evaluate that thunk a bit. I make it produce the first two random numbers (out of 100000); two cells of spine are just enough to see that the tail isn't `[]`. Now I know it doesn't match the first definition, so I take the second definition of `merge_sort`.
merge_sort keys = merge THUNK1 THUNK2 -- keys = THUNK0
Well, that's easy enough... it's just a call to `merge`. I'll expand that definition. Oh crap, there are three different patterns this might match. I guess I should evaluate THUNK1 a little and see if it matches the first definition's pattern, `(x:xs)`. THUNK1 stands for:
merge_sort (take THUNK3 THUNK0)
Back to `merge_sort` again, are we? That means I need to evaluate `(take THUNK3 THUNK0)` just enough to tell whether it matches `(x:[])` or not. Oh CRAP. `take` is strict in its first argument, which means I have to fully evaluate THUNK3. OK... deep breaths...
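(For reference, here is `take` essentially as the Haskell Report defines it, renamed so it doesn't clash with the Prelude; the guard on the first equation is why the count must be evaluated before anything else can happen:)

take' :: Int -> [a] -> [a]
take' n _ | n <= 0 = []               -- deciding this guard forces n
take' _ []         = []
take' n (x:xs)     = x : take' (n - 1) xs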
len = length THUNK0 `div` 2
Now here's an irritating case. To calculate `length` of THUNK0 (which is a list), I have to expand the WHOLE SPINE. I don't have to actually evaluate the values inside the list, but I do need to flesh out the structure of the entire list. This is done one pattern match at a time, determining whether each node is `[]` or `(x:xs)`. As a whole, `length` is "spine strict".
short pause whilst I flesh out the spine of a 100000-element list
Phew, got that done. Now I know the length, which means I know `len = 50000`. THUNK0's spine is now fully evaluated, though its elements may still be thunks. Phew! Where was I?
merge_sort (take 50000 THUNK0)
And so forth. `merge_sort` will continue trying to be as lazy as possible, and so will its recursive calls. Eventually, to determine the very first element of the outermost `merge_sort`, we will need to know the first element of both recursive calls; and to know those, we'll need the first elements of their recursive calls, and so on down. So there will be about O(n) work done, because every element needs to be evaluated (performing the random number generation for each one).
Then think of it like a tournament. Each element is paired up against another element, and the "winning" (lowest) elements move on to the next round, becoming the first elements of the bottom-level `merge_sort` results. The next round has half as many combatants, and half of those (a quarter of the total) move on, and so on. This also turns out to be O(n) work, since n/2 comparisons are performed during the first round and subsequent rounds shrink far too quickly to be significant. (The sum 1/2 + 1/4 + 1/8 + ... converges to 1, meaning about n comparisons, exactly n - 1, are performed in total.)
All in all, O(n) work needs to be performed in order to finally produce the first element. Additional work needs to be performed for subsequent elements, but the total amount of work ends up being O(n log(n)).
Now contrast this with `insert_sort`. It too produces the first element in O(n) time, but the total amount of work is O(n²) in the worst case: it "inserts" the list's elements into the sorted accumulator one by one, potentially traversing that accumulator to its very end each time (with `foldr ins []`, an input in descending order triggers exactly this).
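For completeness, here is my sketch of a full program for the line narrated above. `merge` and `merge_sort` are reconstructions guessed from the pattern matches the walkthrough describes, not the author's actual code; `randomRs` and `mkStdGen` come from System.Random in the random package:

import System.Random (mkStdGen, randomRs)

merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

merge_sort :: Ord a => [a] -> [a]
merge_sort []   = []    -- not needed in the narration, but makes the function total
merge_sort [x]  = [x]
merge_sort keys = merge (merge_sort (take len keys)) (merge_sort (drop len keys))
  where len = length keys `div` 2

main :: IO ()
main = print (merge_sort $ take 100000 $ randomRs (1, 100000) $ mkStdGen 1 :: [Int])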