What performance increases can we expect as the Perl 6 implementations mature?
There are several reasons why Rakudo is currently so slow.
The first, and maybe most important, reason is that Rakudo doesn't do any optimizations yet. The current goals are more about exploring new features and becoming more robust. You know the saying: "first make it run, then make it right, then make it fast".
The second reason is that Parrot doesn't offer any JIT compilation yet, and its garbage collector isn't the fastest. There are plans for a JIT compiler, and people are working on it (the previous one was ripped out because it was i386-only and a maintenance nightmare). There are also thoughts of porting Rakudo to other VMs, but that will surely wait until after the end of July.
In the end, nobody can really tell how fast a complete, well-optimized Perl 6 implementation will be until we have one, but I do expect it to be much faster than it is now.
BTW, the case you cited, [+] 1..$big_number, could be made to run in O(1), because 1..$big_number returns a Range, which is introspectable, so you can use a closed-form sum formula for the [+] Range case. Again, it's something that could be done, but that hasn't been done yet.
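Just to make the idea concrete, here is a hedged sketch of what such a special case could look like; the range-sum sub and its name are purely illustrative and not how Rakudo actually implements anything:

# Illustrative sketch only (not Rakudo's implementation): reduce an
# integer Range with + in O(1) using the arithmetic-series formula.
multi sub range-sum(Range $r where { $r.min ~~ Int and $r.max ~~ Int }) {
    my $n = $r.elems;               # how many integers the range contains
    ($r.min + $r.max) * $n div 2;   # n * (first + last) / 2
}

say range-sum(1 .. 100_000);   # 5000050000, computed without looping
say [+] 1 .. 100;              # 5050, the ordinary O(n) reduction, for comparison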
Another thing you have to understand about the lack of optimization is that it's compounded. A large portion of Rakudo is written in Perl 6. So, for example, the [+] operator is implemented by the method Any.reduce (called with $expression set to &infix:<+>), which has as its inner loop
for @.list {
    @args.push($_);
    if (@args == $arity) {
        my $res = $expression.(@args[0], @args[1]);
        @args = ($res);
    }
}
in other words, a pure-Perl implementation of reduce, which is itself being run by Rakudo. So not only is the code you can see not getting optimized, the code that you don't see, which is making your code run, is also not getting optimized. Even instances of the + operator are actually method calls: although the + operator on Num is implemented by Parrot, there's nothing yet in Rakudo to recognize that you've got two Nums and optimize away the method call, so there's a full dynamic dispatch before Rakudo finds multi sub infix:<+>(Num $a, Num $b) and realizes that all it's really doing is an 'add' opcode. It's a reasonable excuse for being 100-1000x slower than Perl 5 :)
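If you want to see this for yourself, the snippet below (written against a recent Rakudo, purely as an illustration; the candidate count and output vary by version) shows that a [+] reduction is essentially a reduce call with &infix:<+>, and that + itself is an ordinary multi sub whose candidates can be introspected:

my @nums = 1 .. 5;
say @nums.reduce(&infix:<+>);    # 15, roughly equivalent to [+] @nums
say [+] @nums;                   # 15

# + is just a multi sub named infix:<+>; its candidates are ordinary code objects.
say &infix:<+>.candidates.elems;                      # number of + candidates (version-dependent)
say &infix:<+>.candidates.map(*.signature).head(3);   # a few of their signatures
say infix:<+>(2, 3);                                  # 5, calling the operator as a named sub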
Update 8/23/2010
More information from Jonathan Worthington on the kinds of changes that need to happen with the Perl 6 object model (or at least Rakudo's conception of it) to make things fast while retaining Perl 6's "everything is method calls" nature.
Update 1/10/2019
Since I can see that this is still getting attention... over the years, Rakudo/MoarVM have gotten JIT, inlining, dynamic specialization, and tons of work by many people optimizing every part of the system. The result is that most of those method calls can be "compiled out" and have nearly zero runtime cost. Perl 6 scores hundreds or thousands of times faster on many benchmarks than it did in 2010, and in some cases it's faster than Perl 5.
In the case of the sum-to-100,000 problem that the question started with, Rakudo 2018.06 is still a bit slower than perl 5.26.2:
$ time perl -e 'use List::Util 'sum'; print sum(1 .. 100000), "\n";' >/dev/null
real 0m0.023s
user 0m0.015s
sys 0m0.008s
$ time perl6 -e 'say [+] 1 .. 100000;' >/dev/null
real 0m0.089s
user 0m0.107s
sys 0m0.022s
But if we amortize out startup cost by running the code 10,000 times, we see a different story:
$ time perl -e 'use List::Util 'sum'; for (1 .. 10000) { print sum(1 .. 100000), "\n"; }' > /dev/null
real 0m16.320s
user 0m16.317s
sys 0m0.004s
$ time perl6 -e 'for 1 .. 10000 { say [+] 1 .. 100000; }' >/dev/null
real 0m0.214s
user 0m0.245s
sys 0m0.021s
perl6 uses a few hundred more milliseconds than perl5 on startup and compilation, but then it figures out how to do the actual summation around 70 times faster.
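If you'd rather not rely on shell-level time, one rough way to separate startup and compilation cost from the work itself is to time the loop inside the program; this is just a sketch, and the numbers depend heavily on your machine and Rakudo version:

# Time only the summation loop, excluding interpreter startup and compilation.
# Redirect stdout to /dev/null when running, as in the shell benchmarks above.
my $start = now;
say [+] 1 .. 100_000 for ^10_000;                     # same work as the benchmark
note "summation loop took {now - $start} seconds";    # printed to stderr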