Are recursive functions used in R?
The answer is yes, recursive functions are used in R. While much of R is itself written in R, many highly optimized routines are wrappers around C or Fortran code, and much of base R consists of primitives. In other words, the fast routines where recursion is most likely to be used are precisely the ones whose implementation you are least likely to see unless you actually go read the underlying source.
A good example of a recursive function is probably the most commonly used function of all:
> c
function (..., recursive = FALSE) .Primitive("c")
> set.seed(1)
> x <- list('a'=list(1:2, list(rnorm(10)), 'b'=rnorm(3)))
> c(x)
$a
$a[[1]]
[1] 1 2
$a[[2]]
$a[[2]][[1]]
[1] -0.6264538 0.1836433 -0.8356286 1.5952808 0.3295078 -0.8204684 0.4874291 0.7383247
[9] 0.5757814 -0.3053884
$a$b
[1] 1.5117812 0.3898432 -0.6212406
> c(x, recursive=TRUE)
a1 a2 a3 a4 a5 a6 a7 a8
1.0000000 2.0000000 -0.6264538 0.1836433 -0.8356286 1.5952808 0.3295078 -0.8204684
a9 a10 a11 a12 a.b1 a.b2 a.b3
0.4874291 0.7383247 0.5757814 -0.3053884 1.5117812 0.3898432 -0.6212406
>
I think that the most natural way to express mathematical computations in R is through mathematical/statistical notation. The whole point of the language is to make it easy to express statistical computations in a natural manner.
Your own example of factorial being implemented using gamma fits nicely with this view. I don't know how gamma is implemented, but we don't need to know that in order to use R. As a user, the most important thing is usually to get the right answer. If the code proves to be prohibitively slow, that is when you optimize. The first place to start would again be the math and the choice of algorithm, not implementation details.
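For example, the relationship between factorial and gamma quoted in the other answer can be checked directly:

```r
# factorial(x) is defined as gamma(x + 1), so the two must agree:
stopifnot(all.equal(factorial(5), gamma(6)))   # 5! = 120
stopifnot(all.equal(factorial(0), gamma(1)))   # 0! = 1

# Because it goes through gamma, factorial() even accepts non-integers:
factorial(2.5)   # identical to gamma(3.5)
```

That is the point: as a user you get the right answer without knowing, or caring, how gamma itself is computed.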
David Heffernan is correct that recursion is usually slower than iteration, but what he doesn't seem to consider is that it rarely matters. Using his own example, the Fibonacci numbers, the really important thing is to avoid recomputing the numbers, which can be done through memoization. This makes the computation run in linear time instead of exponential time - a huge improvement. Once you have done that, you can still get a minor improvement by implementing the algorithm with a loop, but by then it probably doesn't matter. Besides, there is a closed form.
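To make that concrete, here is a sketch of a memoized recursive Fibonacci (the function names and the cache layout are my own illustrative choices), together with the closed form (Binet's formula):

```r
# Memoized recursion: each fib(n) is computed once, giving linear time.
fib <- local({
  memo <- c(1, 1)                      # fib(1) and fib(2)
  function(n) {
    if (n <= length(memo)) return(memo[n])
    res <- fib(n - 1) + fib(n - 2)
    memo[n] <<- res                    # cache the result in the closure
    res
  }
})

# The closed form (Binet's formula), exact for moderate n in double precision:
phi <- (1 + sqrt(5)) / 2
binet <- function(n) round(phi^n / sqrt(5))

fib(30)    # 832040
binet(30)  # 832040
```

The recursion is still there; it is the caching, a mathematical observation about repeated subproblems, that delivers the speedup.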
Both the factorial function and the Fibonacci numbers also grow very quickly. This means that each arithmetic operation (addition, multiplication, etc.) starts to take longer as the numbers grow, while the recursion itself doesn't become more expensive, or at least not as quickly. Again, mathematical considerations trump implementation details.
TL;DR
My advice is:
- Write the algorithm in the simplest possible way.
- Ensure that it is correct.
- If and only if the algorithm is too slow on realistic input:
- Find out which part of the algorithm is taking the time / what the time complexity is.
- Fix that part, and only that part.
- Repeat the process, if necessary.
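The "find out which part is taking the time" step amounts to measuring before optimizing. A minimal sketch (slow_sum is a made-up stand-in for a suspect routine, not a real function):

```r
# Hypothetical candidate hot spot: an explicit loop instead of vectorized code.
slow_sum <- function(n) {
  s <- 0
  for (i in seq_len(n)) s <- s + sqrt(i)
  s
}

# Wall-clock measurement of the candidate:
system.time(slow_sum(1e6))

# For a per-function breakdown, R's sampling profiler can be used:
# Rprof("prof.out"); slow_sum(1e7); Rprof(NULL); summaryRprof("prof.out")
```

Here the fix, once measurement confirms the loop is the problem, would be the vectorized `sum(sqrt(seq_len(n)))`, again an algorithmic/expression-level change rather than a rewrite in another language.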
For calculating an integer factorial, the recursive implementation is slower and more complex; in production code, iteration is invariably used.
The factorial function you refer to is in the base package. It operates on real values rather than integers, hence that implementation. Its documentation states:
factorial(x) (x! for non-negative integer x) is defined to be gamma(x+1)
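To illustrate the recursion-versus-iteration comparison, here are two throwaway integer-factorial implementations (the names are mine, not base R):

```r
# Recursive version: one call frame per multiplication.
fact_rec <- function(n) if (n <= 1) 1 else n * fact_rec(n - 1)

# Iterative version: a single frame and a simple loop.
fact_iter <- function(n) {
  r <- 1
  for (i in seq_len(n)) r <- r * i
  r
}

stopifnot(fact_rec(10) == 3628800,
          fact_iter(10) == 3628800,
          fact_rec(10) == factorial(10))
```

Both give the same answers as the base `factorial`; the iterative one simply avoids the per-call overhead and the risk of deep call stacks for large n.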
A more interesting example is code to implement the Fibonacci series, which is extraordinarily wasteful when implemented with naive recursion. The recursive approach can be made efficient through memoization, but simple iteration is always to be preferred if performance is at stake.
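The waste of the naive recursion is easy to demonstrate against a simple loop (both functions are illustrative sketches):

```r
# Naive recursion: fib_naive(n) makes on the order of phi^n calls,
# recomputing the same subproblems over and over.
fib_naive <- function(n) if (n < 3) 1 else fib_naive(n - 1) + fib_naive(n - 2)

# Simple iteration: exactly n - 2 additions.
fib_loop <- function(n) {
  a <- 1; b <- 1
  for (i in seq_len(max(0, n - 2))) { tmp <- a + b; a <- b; b <- tmp }
  b
}

stopifnot(fib_naive(20) == 6765, fib_loop(20) == 6765)
```

Try `fib_naive(30)` versus `fib_loop(30)` and the difference in running time is already very noticeable.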
Another common algorithm that is expressed naturally in a recursive way is Quicksort. Like any algorithm, it can be implemented without recursion, but doing so is quite complex. There is little benefit to a non-recursive Quicksort, so it's common to use the straightforward recursive implementation.
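For instance, a naive recursive Quicksort is only a few lines of R (a sketch with the first element as pivot; in practice you would of course just call base R's `sort()`):

```r
quicksort <- function(x) {
  if (length(x) <= 1) return(x)
  pivot <- x[1]
  rest  <- x[-1]
  c(quicksort(rest[rest < pivot]),    # elements smaller than the pivot
    pivot,
    quicksort(rest[rest >= pivot]))   # pivot, then the remainder
}

quicksort(c(3, 1, 4, 1, 5, 9, 2, 6))  # 1 1 2 3 4 5 6 9
```

The recursive structure mirrors the algorithm's definition directly, which is exactly why it is easier to verify and maintain than a hand-rolled stack-based version.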
Recursion is a good implementation choice:
- if performance is not compromised, and
- if it is more natural (hence easier to verify and maintain) to implement recursively.