Wednesday, April 6, 2011

Speeding up R computations

The past few days I've been going through some R code that I wrote last year, when I was preparing a massive simulation-based power study of some tests for multivariate normality that I've been working on. My goal was to reduce the time needed to run the simulation. I wasn't expecting much of an improvement, since I've always believed that the most common R functions are properly vectorized and optimized for speed. It turns out I was wrong. Very wrong.


The first thing I did was replace all parentheses ( ) with curly brackets { }. I was inspired to do so by this post (and this one, via Xi'an's Og) over at Radford Neal's blog. As he pointed out, code that uses parentheses is actually slower than the same code with curly brackets:

> system.time( for(i in 1:1000000) { 1*(1+1) } )
   user  system elapsed 
  1.337   0.005   1.349 
> system.time( for(i in 1:1000000) { 1*{1+1} } )
   user  system elapsed 
  1.072   0.003   1.076 

Similarly, you can compare a*a and a^2:

> system.time( for(i in 1:10000000) 3^2 )
   user  system elapsed 
  5.048   0.028   5.088 
> system.time( for(i in 1:10000000) 3*3 )
   user  system elapsed 
  4.721   0.024   4.748 

So a^2 is slower than a*a. This made me wonder: are there other built-in R functions that are slower than they ought to be?

One thing that I found very surprising, and frankly rather disturbing, is that mean(x) takes roughly ten times as long to calculate the mean of the 50 numbers in the vector x as the "manual" expression sum(x)/50 does:


> x<-rnorm(50)
> system.time(for(i in 1:100000){mean(x)})
   user  system elapsed 
  1.522   0.000   1.523 
> system.time(for(i in 1:100000){sum(x)/length(x)})
   user  system elapsed 
  0.200   0.000   0.200 
> system.time(for(i in 1:100000){sum(x)/50})
   user  system elapsed 
  0.167   0.000   0.167 
> system.time(for(i in 1:100000){ overn<-rep(1/50,50); x%*%overn })
   user  system elapsed 
  0.678   0.000   0.677 
> overn<-rep(1/50,50); system.time(for(i in 1:100000){ x%*%overn })
   user  system elapsed 
  0.164   0.000   0.164 

I guess that the R Development Core Team has been focusing on making R an easy-to-use, high-level programming language rather than on optimizing every function, but the poor performance of mean is just embarrassing.

Similarly, the var function can be greatly improved upon. Here are some of the many possibilities:


> x <- rnorm(50)
> system.time( for(i in 1:100000) { var(x) } )
   user  system elapsed 
  4.921   0.000   4.925 
> system.time( for(i in 1:100000) { sum((x-mean(x))^2)/{length(x)-1} } )
   user  system elapsed 
  2.322   0.000   2.325 
> system.time( for(i in 1:100000) { {sum(x*x)-sum(x)*sum(x)/length(x)}/{length(x)-1} } )
   user  system elapsed 
  0.736   0.000   0.737 
> system.time( for(i in 1:100000) { {sum(x*x)-sum(x)*sum(x)/50}/49 } )
   user  system elapsed 
  0.618   0.000   0.618 

> system.time( for(i in 1:100000) { sx<-sum(x); {sum(x*x)-sx*sx/50}/49 } )
   user  system elapsed 
  0.567   0.000   0.568

I changed all the uses of mean in my code to "sum/n" instead (and avoided using var entirely) and found that this sped things up quite a bit.
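
If you'd rather keep named functions around than inline sum(x)/n everywhere, something along these lines works too. The names fastMean and fastVar are just illustrative, and both assume a plain numeric vector with no NAs:

fastMean <- function(x) sum(x)/length(x)

fastVar <- function(x) {
  # Same one-pass formula as in the timings above; no NA handling
  n <- length(x)
  sx <- sum(x)
  {sum(x*x)-sx*sx/n}/{n-1}
}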

Another trick to speed up your computations is to create the vectors that you wish to change within a loop with the right number of elements from the start, rather than letting them grow. While

a<-NA
for(j in 1:100) a[j]<-j 

works just fine, it is actually quite a bit slower than

a<-rep(NA,100)
for(j in 1:100) a[j]<-j

You could of course create a in other ways as well, for instance with a<-vector(length=100). Here are the numbers:


> system.time( for(i in 1:100000) { a<-NA; for(j in 1:100) a[j]<-j })
   user  system elapsed 
 37.383   0.092  37.482 
> system.time( for(i in 1:100000) { a<-rep(NA,100); for(j in 1:100) a[j]<-j })
   user  system elapsed 
 25.866   0.065  25.936 
> system.time( for(i in 1:100000) { a<-vector(length=100); for(j in 1:100) a[j]<-j })
   user  system elapsed 
 25.517   0.022  25.548

In my case, I'd been a bit sloppy about creating the vectors in my loops in the proper way, so I changed this in my code as well.

In my simulation study, I simulate multivariate random variables, compute some test statistics and use these to estimate the powers of the normality tests against various alternatives. After making the changes mentioned above, I compared the performance of my old code to that of the new code, for 1000 iterations of the procedure:

> system.time( source("oldCode.R") )
   user  system elapsed 
548.045   0.273 548.622 
> system.time( source("newCode.R") )
   user  system elapsed 
 93.138   0.002  93.194

The improved code is almost 6 times faster than the old code. When you do ten million or so iterations, that matters. A lot.
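
To give a feel for where these changes pay off, here is a heavily stripped-down sketch of what a single power estimate might look like; the alternative distribution and the test statistic below are just illustrative placeholders, not the ones from my study:

# Illustrative sketch only: estimate the power of a (placeholder) test
# against a (placeholder) alternative, using the tricks above.
estimatePower <- function(n, iterations, criticalValue) {
  stats <- rep(NA, iterations)               # preallocate the result vector
  for(i in 1:iterations) {
    x <- rexp(n)                             # placeholder non-normal alternative
    m <- sum(x)/n                            # sum/n instead of mean()
    s2 <- {sum(x*x)-n*m*m}/{n-1}             # "manual" variance, as above
    stats[i] <- sum({{x-m}/sqrt(s2)}^3)/n    # placeholder statistic (sample skewness)
  }
  sum(stats > criticalValue)/iterations      # proportion of rejections = estimated power
}

All of the tricks above show up here: the result vector is preallocated, mean() and var() are replaced by sum-based expressions, and curly brackets are used for grouping.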

In conclusion, it's definitely possible to speed up your code significantly if you know about the pitfalls of R. I suspect that I'll be obsessed with finding more pitfalls over the next few weeks, so I'd be grateful for any hints about other weaknesses that R has.

It should probably be mentioned that R is really fast when things are properly vectorized. Last year, a coworker who uses Matlab challenged me to perform a number of matrix computations faster in R than in Matlab. To his great surprise, R won.
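
Just to illustrate the kind of thing I mean (the matrices below are not the ones from the challenge, just stand-ins), operations like these are handed straight to compiled C/BLAS routines:

A <- matrix(rnorm(1000*1000), 1000, 1000)
b <- rnorm(1000)
system.time( A %*% A )       # matrix product, done in compiled code
system.time( crossprod(A) )  # t(A) %*% A in a single optimized call
system.time( solve(A, b) )   # solve the linear system Ax = b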

As a final remark, I'm now facing a bit of a dilemma. Should I write readable code, a^6, or fast code, a*a*a*a*a*a?
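
If you want to see how that particular trade-off looks on your own machine, a quick check in the same spirit as the timings above settles it:

a <- 2.5
system.time( for(i in 1:10000000) a^6 )
system.time( for(i in 1:10000000) a*a*a*a*a*a )

Judging from the comments below, the answer may well depend on your platform's pow() implementation.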

Update: looking to speed up your R computations even more? See my posts on compiling your code and parallelization.

19 comments:

  1. Wow, that's really interesting (to me, at least). Thanks for the post.

    That being said, I suspect that the poor performance of mean and var comes both from their need to check the length of the vector and from the checks they presumably run for NAs.

    Then again, I think mean fails when you supply it with NA's without specifying the action to take (unless you change the default options).

    It does seem somewhat surprising that the call to length can make that much difference though.

  2. Be careful with numerical instabilities that can arise, e.g. when calculating variances: http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance

  3. I noticed the slowness of the built-in functions when I had to compute a large number of jackknife correlations in a biggish gene expression data set.

    Looping cor() was incredibly slow, and the jackknife function of the bootstrap (?) package was a disaster.

    I managed to work around it, though my solution is probably far from optimal (biologist!!), by MacGyvering my own cor function out of the quite fast rowSums/rowMeans functions. In the end I sped up my processing time by a ton and got the analysis done overnight instead of in a week.

    /Cheers from Lund

  4. Try mean.default() instead of just plain old mean(). That'll get you from 1/20th the speed to 1/2 the speed from mean. Then look at the code of mean.default to see where the rest of the slowdown comes from.

    When you do that you'll see the simplest call to mean(), the one most comparable to the much simpler function sum(). If you try .Internal(mean(x)) you'll be twice as fast as sum(x)/length(x).

  5. Your dilemma is easily solved, at least with regard to easy-to-write code and your specific example: x^6 is faster than x*x*x*x*x*x. x*x is a special case and one of only two where multiplication beats the exponent (x*x*x works as well).

  6. OK, now I'm up to 3 comments, but I really meant to suggest generalizing my method for mean. You can calculate the variance really fast, if you know the data going in, with the call .Internal(cov(x, NULL, 1, FALSE)). It's 20x faster than var().

  7. The slowdown is not "embarrassing". It's expected, because mean() does a lot more than sum(x)/length(x), and it should. For example, it ensures that arguments are numeric and gives a warning when they aren't, and it removes NA values if na.rm=TRUE. Also, your length() denominator won't work when x contains NAs.

    As others have said, calling .Internal(mean(x)) is much faster than any of your alternatives, and that's because it calls directly into the C code. In fact, it calls the SAME function as sum() (do_summary), but with different flags.

    Since R makes it easy to view the source code, you could have done so and determined that whatever your code is passing has no possibility of being non-numeric and doesn't need the na.rm or trim features of the mean() function. In that case you should be using .Internal(mean()) rather than mean().

  8. The .Internal method is really quite striking. See the following:

    a<-rnorm(100000000)

    > system.time(for(a in 1:100000) mean(a))
    user system elapsed
    1.319 0.019 1.338
    > system.time(for(a in 1:100000) mean.default(a))
    user system elapsed
    0.478 0.001 0.480
    > system.time(for(a in 1:100000) .Internal(mean(a)))
    user system elapsed
    0.030 0.001 0.031

  9. Thanks for the great comments, people!

    And thanks for the tips about .Internal. I've never used that before, but it really seems to be the way to go here. I'm a bit surprised that the documentation for mean() fails to mention it.

    eduardo: Thanks, that was interesting to read!

    jc: That a^6 thing is funny; before publishing the blog post I thought to myself that I ought to check whether the exponent was still slower for higher powers. Clearly I forgot to. :)

    Sean X: Right, you certainly have a number of valid points. Since mean() is a high-level function I expected it to be a bit slower, but not THAT much slower, which was what I was trying to say. Sorry if "embarrassing" came across as too strong - that's always a danger for someone like me, who's not a native speaker. I tried to look at the source code for mean() in R (by simply typing the function's name), but that only says "UseMethod("mean")" and I didn't know where to go from there. I guess that I have to go directly to the C source to find out how mean() works?

  10. My results:

    > x <- rnorm(50)
    > y <- sum(x)
    > z <- length(x)
    > Mean <- function(x){Mean = sum(x)/length(x)}
    > system.time(for(i in 1:100000){mean(x)})
    user system elapsed
    2.61 0.05 2.74
    > system.time(for(i in 1:100000){Mean(x)})
    user system elapsed
    0.52 0.00 0.52
    > system.time(for(i in 1:100000){sum(x)/50})
    user system elapsed
    0.25 0.00 0.25
    > system.time(for(i in 1:100000){y/z})
    user system elapsed
    0.15 0.02 0.17

    So, defining your own simple function saves about 80% of the time, and you can trim that by another two thirds by calculating the components beforehand. But nice one about the .Internal command.

  11. I received a notification about a comment that I can't see in the above list, but it had an interesting link that I thought I'd share: http://www.johndcook.com/blog/2008/11/05/how-to-calculate-pearson-correlation-accurately/

    Also, it was pointed out in that comment that my post mainly concerns known pitfalls. So just to be clear, I'm not trying to claim that I've discovered new caveats, but rather wanted to comment on some things that were new to me.

  12. Another way of speeding up R code is to interface C code from within it. It's quite easy; see here for a simple example: http://statisfaction.wordpress.com/2011/02/04/speed-up-your-r-code-with-c/

  13. Great link, Julyan. Looks like I may have to brush up on my C skills :)

  14. Using the .Internal method, my improved code is now 7 times faster than it was before I started to look into this. Nice!

  15. Måns, generic functions, like mean(), may not have much code revealed by typing 'mean' at the command line. However, you can get an idea of what you do need to type by checking methods(mean). This will list all of the various mean methods that you currently have, one of which will be mean.default(). That's the one you check the code of.

    This is true for other generic functions as well.

  16. Thanks jc, that was really helpful. Now I have lots of functions that I want to investigate more closely :)

  17. Also, regarding your exponent findings... I was wrong that x^2 and x^3 are special cases where the exponent is slower... at least I was wrong in implying that this is always true.

    I believe it's machine dependent, down to the implementation of the pow() function in C (which R relies on).

    On my laptop I discovered that x^2 is just as fast as x*x for pretty much any array. Beyond that, x^n ran at a fixed speed (no difference for fairly high n's, up to 20), while x*x*... of course got slower for every added x. The jump from x^2 to x^3 was very large, so for x^3 you wouldn't want to use the exponent, but if the exponent can grow to a large number then you most definitely would.

  18. When computational power goes up, the will to optimize usually fades, resulting in blocked clusters and ridiculous amounts of unnecessary data. So keep up the optimization! =)

  19. Using mean.default instead of the generic mean will save you time too. So choosing the right method takes some time - but if you think about it, you don't really need to choose the right method 100,000 times if you know the data are always of the same type. The rest of the difference comes from processing the extra arguments (na.rm and trim):

    > x <- rnorm(100)
    > system.time(for(i in 1:100000){mean(x)})
    user system elapsed
    2.59 0.00 2.59
    > system.time(for(i in 1:100000){sum(x)/length(x)})
    user system elapsed
    0.39 0.00 0.39
    > system.time(for(i in 1:100000){mean.default(x)})
    user system elapsed
    0.6 0.0 0.6

    But then, look at the code for mean.default and there's a good hint at the very end:

    > system.time(for(i in 1:100000){.Internal(mean(x))})
    user system elapsed
    0.19 0.00 0.19

    Which is about two times faster than your custom function.
