The first thing I did was to replace all parentheses ( ) with curly brackets { }. I was inspired to do so by this post (and this, via Xi'an's Og) over at Radford Neal's blog. As he pointed out, code that uses parentheses is actually slower than the same code with curly brackets:

```
> system.time( for(i in 1:1000000) { 1*(1+1) } )
   user  system elapsed
  1.337   0.005   1.349
> system.time( for(i in 1:1000000) { 1*{1+1} } )
   user  system elapsed
  1.072   0.003   1.076
```

Similarly, you can compare a*a and a^2:

```
> system.time( for(i in 1:10000000) 3^2 )
   user  system elapsed
  5.048   0.028   5.088
> system.time( for(i in 1:10000000) 3*3 )
   user  system elapsed
  4.721   0.024   4.748
```

So, a^2 is slower than a*a. This made me wonder, are there other built-in R functions that are slower than they ought to be?

One thing that I found very surprising, and frankly rather disturbing, is that mean(x) takes roughly ten times as long to compute the mean of the 50 real numbers in the vector x as the "manual" alternative sum(x)/50:

```
> x <- rnorm(50)
> system.time( for(i in 1:100000) { mean(x) } )
   user  system elapsed
  1.522   0.000   1.523
> system.time( for(i in 1:100000) { sum(x)/length(x) } )
   user  system elapsed
  0.200   0.000   0.200
> system.time( for(i in 1:100000) { sum(x)/50 } )
   user  system elapsed
  0.167   0.000   0.167
> system.time( for(i in 1:100000) { overn<-rep(1/50,50); x%*%overn } )
   user  system elapsed
  0.678   0.000   0.677
> overn<-rep(1/50,50); system.time( for(i in 1:100000) { x%*%overn } )
   user  system elapsed
  0.164   0.000   0.164
```

I guess that the R Core Team has been focusing on making R an easy-to-use high-level programming language rather than on optimizing every function, but the poor performance of mean is just embarrassing.
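Part of the gap is overhead that the naive quotient skips: mean() is a generic that must dispatch on its argument, and it does extra work (NA handling, trimming, and an accuracy-improving pass) that sum(x)/length(x) avoids. If you call it millions of times on plain numeric vectors, a minimal helper gets most of the speed back. A sketch (fastmean is my name for it, not a base R function):

```r
# Minimal drop-in for mean() on plain numeric vectors without NAs.
# mean() also handles NAs, trimming and method dispatch, which is
# part of what you pay for.
fastmean <- function(x) sum(x) / length(x)

x <- rnorm(50)
fastmean(x)  # same value as mean(x), up to floating-point rounding
```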

Similarly, the var function can be greatly improved upon. Here are some of the many possibilities:

```
> x <- rnorm(50)
> system.time( for(i in 1:100000) { var(x) } )
   user  system elapsed
  4.921   0.000   4.925
> system.time( for(i in 1:100000) { sum((x-mean(x))^2)/{length(x)-1} } )
   user  system elapsed
  2.322   0.000   2.325
> system.time( for(i in 1:100000) { {sum(x*x)-sum(x)*sum(x)/length(x)}/{length(x)-1} } )
   user  system elapsed
  0.736   0.000   0.737
> system.time( for(i in 1:100000) { {sum(x*x)-sum(x)*sum(x)/50}/49 } )
   user  system elapsed
  0.618   0.000   0.618
> system.time( for(i in 1:100000) { sx<-sum(x); {sum(x*x)-sx*sx/50}/49 } )
   user  system elapsed
  0.567   0.000   0.568
```

I changed all the uses of mean in my code to "sum/n" instead (and avoided using var entirely) and found that this sped things up quite a bit.
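A word of caution about the fastest variants above: the one-pass shortcut formula sum(x²) − (sum x)²/n can lose precision through cancellation when the mean of the data is large relative to its spread, which is one reason var() takes the slower two-pass route. A sketch of a helper that trades that numerical safety for speed (fastvar is a hypothetical name, not a base R function):

```r
# One-pass sample variance via the shortcut formula.
# Caution: can lose precision if mean(x) is large relative to sd(x);
# base R's var() is slower but numerically safer.
fastvar <- function(x) {
  n  <- length(x)
  sx <- sum(x)
  (sum(x * x) - sx * sx / n) / (n - 1)
}

fastvar(c(1, 2, 3, 4))  # 5/3, matching var(c(1, 2, 3, 4))
```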

Another trick to speed up your computations is to create the vectors that you wish to change within a loop with the right number of elements. While

```
a<-NA
for(j in 1:100) a[j]<-j
```

works just fine, it is actually quite a bit slower than

```
a<-rep(NA,100)
for(j in 1:100) a[j]<-j
```

You could create a in other ways as well of course, for instance by a<-vector(length=100). Here are the numbers:

```
> system.time( for(i in 1:100000) { a<-NA; for(j in 1:100) a[j]<-j })
   user  system elapsed
 37.383   0.092  37.482
> system.time( for(i in 1:100000) { a<-rep(NA,100); for(j in 1:100) a[j]<-j })
   user  system elapsed
 25.866   0.065  25.936
> system.time( for(i in 1:100000) { a<-vector(length=100); for(j in 1:100) a[j]<-j })
   user  system elapsed
 25.517   0.022  25.548
```
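Preallocation helps, but in toy cases like this one the loop can often be dropped entirely; a vectorized expression is both faster and more readable. A sketch, using the same fill-with-1-to-100 task as above:

```r
# The loops above just fill a with 1..100; vectorized, that is simply:
a <- seq_len(100)
# or equivalently: a <- 1:100
```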

In my simulation study, I simulate multivariate random variables, compute some test statistics and use these to estimate the powers of the normality tests against various alternatives. After doing the changes mentioned above, I compared the performance of my old code to that of the new code, for 1000 iterations of the procedure:


```
> system.time( source("oldCode.R") )
   user  system elapsed
548.045   0.273 548.622
> system.time( source("newCode.R") )
   user  system elapsed
 93.138   0.002  93.194
```

The improved code is almost 6 times faster than the old code. When you do ten million or so iterations, that matters. A lot.

In conclusion, it's definitely possible to speed up your code significantly if you know of the pitfalls of R. I suspect that I'll be obsessed with finding more pitfalls in the next few weeks, so I'd be thankful for any hints about other weaknesses that R has.

It should probably be mentioned that R is really fast when things are properly vectorized. Last year, a coworker that uses Matlab challenged me to perform a number of matrix computations faster in R than in Matlab. To his great surprise, R won.
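To illustrate the point (a sketch of my own, not the Matlab comparison itself): summing a vector with an interpreted loop versus the built-in sum, which runs as a single C-level loop:

```r
# Interpreted loop vs vectorized built-in; both compute the same sum
# (up to floating-point rounding).
x <- rnorm(1e6)
loopsum <- function(x) { s <- 0; for (v in x) s <- s + v; s }
system.time(loopsum(x))  # interpreted, element by element
system.time(sum(x))      # vectorized, a single C-level loop
```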


As a final remark, I'm now facing a bit of a dilemma. Should I write readable code; a^6; or fast code; a*a*a*a*a*a?
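One middle ground (my suggestion, not something I timed above) is repeated squaring: compute the square once and reuse it, which keeps most of the speed of explicit multiplication without writing out six factors:

```r
# a^6 via one squaring plus two multiplies, instead of ^ or five multiplies
a  <- 3
a2 <- a * a
a6 <- a2 * a2 * a2
a6  # 729, the same value as a^6
```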

**Update:** Looking to speed up your R computations even more? See my posts on compiling your code and parallelization.