Why are Python and Ruby so slow, while Lisp implementations are fast?
I don't know about your Racket installation, but the Racket I just `apt-get install`'d uses JIT compilation when run without flags. Running with `--no-jit` gives a time much closer to the Python time (`racket`: 3s, `racket --no-jit`: 37s, `python`: 74s). Also, assignment at module scope is slower than local assignment in Python for language design reasons (a very liberal module system); moving the code into a function puts Python at 60s. The remaining gap can probably be explained as some combination of coincidence, different optimization focus (function calls have to be extremely fast in Lisp; Python developers care less about that), and quality of implementation (reference counting versus a proper GC, a stack VM versus a register VM), rather than a fundamental consequence of the respective language designs.
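The module-scope versus local-assignment gap is easy to observe in CPython itself. A rough sketch (timings will vary by machine; the point is the relative difference): stores to a module-level name go through the module's dictionary, while stores to a function local are indexed writes into a fixed array of slots.

```python
import timeit

x = 0

def global_loop():
    global x          # stores go through the module dict (STORE_GLOBAL)
    for i in range(100_000):
        x = i

def local_loop():
    y = 0             # stores use STORE_FAST, an indexed write into local slots
    for i in range(100_000):
        y = i

t_global = timeit.timeit(global_loop, number=100)
t_local = timeit.timeit(local_loop, number=100)
print(f"global: {t_global:.3f}s  local: {t_local:.3f}s")
```

On CPython the local version is consistently faster, which is why moving the benchmark into a function shaves seconds off the Python time above.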
Natively compiled Lisp systems are usually quite a bit faster than non-natively compiled Lisp, Ruby or Python implementations.
Definitions:
- natively compiled -> compiles to machine code
- compiled -> compiles to machine code or some other target (like byte code, JVM instructions, C code, ...)
- interpreted Lisp -> runs s-expressions directly without compilation
- interpreted Python -> runs compiled Python in a byte-code interpreter. The default Python implementation is not really interpreted: it compiles source code to a byte-code instruction set, and that byte code is then interpreted. Byte-code interpreters are typically slower than execution of native code.
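The byte-code step is easy to see in CPython via the standard `dis` module, which disassembles the code object that the byte-code interpreter actually runs:

```python
import dis

def add(a, b):
    return a + b

# CPython compiled `add` to byte code at definition time; dis shows
# the instructions the VM loop executes one by one, such as LOAD_FAST
# for the arguments (the addition opcode is BINARY_ADD on older
# versions, BINARY_OP on Python 3.11+).
dis.dis(add)
```

The exact opcode names vary between CPython versions, but the structure is the same: a linear instruction stream dispatched by a C-level interpreter loop.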
But keep in mind the following:
- SBCL uses a native code compiler. It does not use a byte code machine or something like a JIT compiler from byte code to native code. SBCL compiles all code from source code to native code, before runtime. The compiler is incremental and can compile individual expressions. Thus it is also used by the EVAL function and by the Read-Eval-Print Loop.
- SBCL uses an optimizing compiler which makes use of type declarations and type inference. The compiler generates native code.
- Common Lisp allows various optimizations which make the code less dynamic or not dynamic (inlining, early binding, no type checks, code specialized for declared types, tail-call optimizations, ...). Code which makes use of these advanced features can look complicated - especially when the compiler needs to be told about these things.
- Without these optimizations compiled Lisp code is still faster than interpreted code, but slower than optimized compiled code.
- Common Lisp provides CLOS, the Common Lisp Object System. CLOS code is usually slower than non-CLOS code, where that comparison makes sense. A dynamic functional language tends to be faster than a dynamic object-oriented language.
- If a language implementation uses a highly optimized runtime, for example for bignum arithmetic operations, a slow language implementation can be faster than an optimizing compiler. Some languages have many complex primitives implemented in C. Those tend to be fast, while the rest of the language can be very slow.
- There can also be implementations of Python which generate and run machine code, like the JIT compiler in PyPy. Ruby likewise has had a JIT compiler since Ruby 2.6.
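The "complex primitives implemented in C" point is visible in CPython too: a builtin like `sum` runs its loop in C, while the equivalent Python-level loop pays interpreter dispatch overhead on every iteration. A rough sketch (absolute numbers will vary):

```python
import timeit

data = list(range(100_000))

def python_sum(xs):
    # This loop runs in the byte-code interpreter:
    # one instruction-dispatch cycle per iteration.
    total = 0
    for x in xs:
        total += x
    return total

t_builtin = timeit.timeit(lambda: sum(data), number=200)  # loop runs in C
t_python = timeit.timeit(lambda: python_sum(data), number=200)
print(f"builtin sum: {t_builtin:.3f}s  Python loop: {t_python:.3f}s")
```

Code that leans on such C primitives can be fast even in a slow language implementation, while code that does its own looping in the language cannot.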
Also, some operations may look similar but could be different. Is a `for` loop iterating over an integer variable really the same as a `for` loop which iterates over a range?
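In CPython the two loops compute the same thing but compile to different instruction streams, which `dis` makes visible. A small sketch:

```python
import dis

def with_range(n):
    total = 0
    for i in range(n):   # iterator protocol: GET_ITER once, FOR_ITER each step
        total += i
    return total

def with_counter(n):
    total = 0
    i = 0
    while i < n:         # explicit compare, add, and jump each step
        total += i
        i += 1
    return total

print(with_range(10), with_counter(10))  # both print 45
dis.dis(with_range)
dis.dis(with_counter)
```

Benchmarks that translate "the same loop" into two languages can therefore end up measuring quite different operations.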
Method dispatch in Ruby/Python/etc. is expensive, and Ruby/Python/etc. programs compute primarily by calling methods. Even `for` loops in Ruby are just syntactic sugar for a method call to `each`.
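The dispatch cost shows up even inside CPython: `obj.method()` redoes the attribute lookup on every call, which is why hoisting a bound method out of a hot loop is a classic (if ugly) CPython micro-optimization. A rough sketch (the gap has narrowed on recent CPython versions with the specializing interpreter):

```python
import timeit

items = list(range(10_000))

def lookup_each_time():
    out = []
    for x in items:
        out.append(x)      # attribute lookup + method call every iteration
    return out

def hoisted():
    out = []
    append = out.append    # bind the method once, outside the loop
    for x in items:
        append(x)          # plain call, no per-iteration attribute lookup
    return out

t_lookup = timeit.timeit(lookup_each_time, number=500)
t_hoisted = timeit.timeit(hoisted, number=500)
print(f"lookup each time: {t_lookup:.3f}s  hoisted: {t_hoisted:.3f}s")
```

Both functions build the same list; only the per-iteration dispatch work differs.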