Meditations on programming, startups, and technology

The Great Ruby Shootout (July 2010)

The Great Ruby Shootout measures the performance of several Ruby implementations against a series of synthetic benchmarks. I recently ran Mac and Windows shootouts as well, which tested a handful of implementations; this article, however, reports the results of extensive benchmark testing of eight different Ruby implementations on Linux.

The setup

For this shootout I included a subset of the Ruby Benchmark Suite. I opted to exclude tests that executed in fractions of a second on most VMs, focusing instead on more substantial benchmarks (several of which came from the Computer Language Benchmarks Game). The best times and least memory allocations out of five runs are reported here for each benchmark.
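The best-of-five protocol can be sketched in a few lines of Ruby. This is a simplified illustration, not the actual Ruby Benchmark Suite harness; the inline workload is a stand-in for spawning each VM as a separate process:

```ruby
# Simplified sketch of a best-of-five timing harness.
# The real suite launches each implementation in its own process;
# here the workload is just an inline block for illustration.

def best_of(runs = 5)
  times = Array.new(runs) do
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
    Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  end
  times.min  # report the best (lowest) of the five runs
end

best = best_of(5) { 100_000.times { |i| i * i } }
puts format("best time: %.6f s", best)
```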

All tests were run on Ubuntu 10.04 LTS x86_64, on an Intel Core 2 Quad Q6600 2.40 GHz, 8 GB DDR2 RAM, with two 500 GB 7200 rpm disks.

8 implementations

The implementations tested were:

  • Ruby 1.8.7 p299
  • Ruby 1.9.1 p378
  • Ruby 1.9.2 RC2
  • IronRuby 1.0 (Mono 2.4.4)
  • JRuby 1.5.1 (Java HotSpot(TM) 64-Bit Server VM 1.6.0_20)
  • MagLev (rev 23832)
  • Ruby Enterprise Edition 2010.02
  • Rubinius 1.0.1

JRuby was run with the --fast and --server optimization flags.


Synthetic benchmarks cannot predict how fast your programs will run on a particular implementation. They provide an (entertaining) educated guess, but you shouldn't draw overly definitive conclusions from them. The values reported here are characteristic of server-side, long-running processes, and should be taken with a grain of salt.

Time Results

Below are the execution times for the selected tests. A timeout indicates that a single iteration of a given test took more than 300 seconds and had to be interrupted. Bold, green values indicate the best performer for each test.
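The 300-second cutoff can be enforced with Ruby's standard timeout library. A minimal sketch (the benchmark body here is just a placeholder):

```ruby
require 'timeout'

TIMEOUT_SECONDS = 300

# Run a block and return its elapsed time in seconds, or :timeout
# if it exceeds the cutoff and has to be interrupted.
def timed_run(limit = TIMEOUT_SECONDS)
  start = Time.now
  Timeout.timeout(limit) { yield }
  Time.now - start
rescue Timeout::Error
  :timeout
end
```

For example, timed_run(1) { sleep 2 } returns :timeout.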

Warning: The bm_primes.rb benchmark was originally written to aid the development of the Prime class. In 1.9.2 that class was rewritten in C, which makes the benchmark a poor indicator of implementation performance. It will be removed in the future.
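For context, the benchmark exercises the standard library's Prime class, along these lines (a hypothetical distillation, not the actual bm_primes.rb source):

```ruby
require 'prime'

# Enumerate primes up to a limit. In 1.8 this is pure Ruby, while
# 1.9.2 backs Prime with C code, which is why the benchmark stopped
# being a fair measure of VM speed.
primes = Prime.each(100).to_a
puts primes.length  # 25 primes up to 100
puts primes.last    # 97
```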

Time Table on Linux

If you are not interested in the individual test results, the information presented in the table above is summarized directly below:

  Ruby 1.9.2         JRuby          
Min.   : 0.013   Min.   : 0.382 
1st Qu.: 3.258   1st Qu.: 3.051
Median : 4.543   Median : 4.997
Mean   : 9.262   Mean   : 9.180
3rd Qu.: 8.573   3rd Qu.: 8.969
Max.   :45.009   Max.   :48.850

    MagLev         Ruby 1.9.1   
Min.   : 0.351   Min.   : 0.015
1st Qu.: 2.140   1st Qu.: 3.387
Median : 6.069   Median : 6.205
Mean   : 9.100   Mean   :10.860
3rd Qu.: 9.266   3rd Qu.:11.559
Max.   :51.221   Max.   :46.849

 Ruby 1.8.7         IronRuby     
Min.   : 0.708   Min.   :  3.601
1st Qu.: 5.102   1st Qu.: 10.505
Median : 8.380   Median : 12.912
Mean   :18.785   Mean   : 26.539
3rd Qu.:24.793   3rd Qu.: 36.115
Max.   :75.653   Max.   :135.204

   Rubinius           REE       
Min.   : 0.484   Min.   : 0.584
1st Qu.: 3.087   1st Qu.: 4.343
Median : 9.636   Median : 6.660
Mean   :13.232   Mean   :15.036
3rd Qu.:17.674   3rd Qu.:21.336
Max.   :73.050   Max.   :61.960
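These summaries follow R's summary() output (Min, 1st Qu., Median, Mean, 3rd Qu., Max). They can be reproduced in Ruby with a small sketch; the quantile interpolation below is the linear method matching R's default (type 7):

```ruby
# Linear-interpolated quantile of a sorted array (R's default, type 7).
def quantile(sorted, q)
  pos = (sorted.length - 1) * q
  lo, hi = pos.floor, pos.ceil
  sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo)
end

# R-style five-number summary plus the mean, for benchmark times.
def summary(values)
  s = values.sort
  {
    min:    s.first,
    q1:     quantile(s, 0.25),
    median: quantile(s, 0.50),
    mean:   values.sum / values.length.to_f,
    q3:     quantile(s, 0.75),
    max:    s.last
  }
end

p summary([0.013, 3.258, 4.543, 8.573, 45.009])
```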

For the sake of convenience, I also produced a box plot from the successful data points:

Box plot of times

There are a few considerations based on these results that I feel are worth mentioning:

  • As you can see, Ruby 1.9, JRuby, and MagLev converge toward a similar performance level in these tests.
  • Ruby 1.9.2 manages to squeeze in a bit of extra speed when compared to Ruby 1.9.1 (which is a welcome improvement).
  • Ruby 1.9 seems to be either much faster than Ruby 1.8 or roughly as fast, depending on the test. This appears to be in line with what I’ve seen in real-world programs: some programs only receive a 10-20% boost from 1.9, while others improve drastically. The results really depend on what a program spends its time doing.
  • Performance wise, Rubinius is really starting to be a much more serious contender.
  • Ruby Enterprise Edition is slightly faster than Ruby 1.8.7, to the extent that this is clearly visible in almost all of the tests.
  • IronRuby running on Mono was the worst of the lot.

Memory Results

The following table shows the approximate memory consumption for each implementation when running each benchmark:

Memory allocation on Linux


  Ruby 1.9.2        Ruby 1.9.1          
Min.   :  4.320   Min.   :  4.580     
1st Qu.:  4.378   1st Qu.:  4.695     
Median :  6.285   Median :  6.920     
Mean   : 20.795   Mean   : 15.669     
3rd Qu.: 10.162   3rd Qu.: 11.383     
Max.   :171.500   Max.   :100.570
  Ruby 1.8.7           REE     
Min.   :  3.040   Min.   :  8.220
1st Qu.:  4.290   1st Qu.:  9.682
Median :  7.745   Median : 15.565
Mean   : 20.698   Mean   : 27.014
3rd Qu.: 11.273   3rd Qu.: 38.620
Max.   :103.520   Max.   :125.910

  Rubinius           MagLev       
Min.   : 37.63   Min.   : 81.74   
1st Qu.: 39.78   1st Qu.: 82.52   
Median : 45.48   Median : 83.53   
Mean   : 65.70   Mean   : 96.29   
3rd Qu.: 58.22   3rd Qu.: 98.10   
Max.   :215.33   Max.   :175.85   

    JRuby       
Min.   : 49.04  
1st Qu.: 71.23  
Median :176.72  
Mean   :169.41  
3rd Qu.:209.04  
Max.   :404.06

And finally, in graph form:

Memory Box Plot

A few considerations on memory:

  • Memory readings for IronRuby were not available, so they were not included.
  • Ruby 1.9.2 uses the least amount of memory (as one might expect).
  • JRuby was by far the most memory intensive of the group.
  • Ruby Enterprise Edition used less memory than 1.8.7 in a few tests, but overall was more memory hungry than 1.8.7. This is really odd and entirely unexpected.
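Approximate memory readings like the ones above can be obtained on Linux by reading a process's peak resident set size from /proc. A hypothetical sketch (not the harness actually used for these numbers):

```ruby
# Read the peak resident set size (VmHWM, reported in kB) of a
# process from /proc on Linux, returning it in MB, or nil if the
# field is unavailable.
def peak_rss_mb(pid = Process.pid)
  status = File.read("/proc/#{pid}/status")
  if status =~ /^VmHWM:\s+(\d+)\s+kB/
    $1.to_i / 1024.0
  end
end

puts format("peak RSS: %.2f MB", peak_rss_mb)
```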

Linux Vs. Windows

This shootout and the Windows one were performed on the same machine, so we can compare how the same implementations perform under different operating systems. The only adjustment required concerns the timeout: since the Windows shootout used a 60-second cutoff, every Linux result longer than 60 seconds must also be treated as a timeout.
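Re-applying the 60-second cutoff to the Linux numbers is a one-liner; a sketch, where times is a hypothetical hash of benchmark name to seconds (the values below are made up):

```ruby
# Map anything slower than the Windows shootout's 60-second cutoff
# to :timeout so the two result sets are directly comparable.
times = { "bm_mandelbrot" => 45.0, "bm_hilbert_matrix" => 75.6 }  # hypothetical
capped = times.transform_values { |t| t > 60 ? :timeout : t }
p capped
```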

It is commonly believed that Ruby performs much better on Linux than on Windows (with the exception of IronRuby). Let’s find out if these test results confirm that notion.

Ruby 1.8.7:

Ruby 1.8.7 on Linux and Windows

Ruby 1.9.2:

Ruby 1.9.2 on Linux and Windows


JRuby:

JRuby on Linux and Windows

Finally, in chart form (yellow entries are on Windows as indicated by the labels containing W):

Ruby on Linux Vs. Windows

To use a beloved MythBusters expression, this myth is confirmed.

Note: As requested by a few commenters, here is a comparison of IronRuby as well (.NET 4.0 Vs. Mono 2.4.4):

IronRuby on Windows (.NET 4.0) and Linux (Mono 2.4.4)


In conclusion, it’s nice to see several implementations getting faster. Ruby 1.9.2, JRuby, MagLev, and Rubinius are all becoming serious competitors, each working its way toward a similar performance level. If these benchmark shootouts are becoming boring, it’s because the results are becoming more stable and predictable. I suspect that as time goes on, performance will no longer be the real distinguishing factor when choosing a Ruby implementation; other features will be.


34 Responses to “The Great Ruby Shootout (July 2010)”

  1. Great article, I’m glad to see 1.9.2 getting pretty close to par with JRuby. And as you mentioned Rubinius is coming along. Thanks for the work.

  2. All the 1.8 implementations are seriously outperformed by the 1.9 implementations on bm_primes.rb. This is due to a difference in algorithm between 1.8 and 1.9 standard libraries, not to underlying VM speed.

    If the 1.8 standard library were updated to the 1.9 algorithm, the differences between implementations would likely be similar to other math heavy benchmarks.

  3. Aaron says:

    Would you consider adding MacRuby 0.6 to the shootout?

  4. There’s an error in your chart. Rubinius is faster than MagLev on bm_fannkuch.rb, so you need to switch the green in that row.

  5. Burke Libbey says:

    I think the reason REE used so much more memory than 1.8.x is that its garbage collection settings are tweaked to let ruby allocate many times as much memory before it performs garbage collection, and allocates many times as much memory when it needs to grow the heap. The goal was pretty much to optimize it for massive, memory-hungry rails applications.

  6. Isaac Gouy says:

    The benchmarks game measurements are also made on 2.4GHz Q6600 with Ubuntu™ 10.04 – but most Ruby implementations are only measured on x86 rather than x64

    Perhaps x86 versus x64 is the explanation for JRuby bettering Ruby 1.9 in the benchmarks game?

    Perhaps the much longer run times in the benchmarks game versus “The Great Ruby Shootout” provides the explanation?

  7. Hongli Lai says:

    It is as Burke said, the difference in REE memory usage can be explained by the default GC settings. Another thing that might play a role is the tcmalloc memory allocator, which releases memory back to the OS at a much slower rate than the normal system memory allocator but is generally faster.

  8. Dave Duchene says:

    Thanks for posting this. If and when you run another shootout, would you consider also testing IronRuby with Microsoft’s .NET runtime? I’m very curious to see how well it performs compared to Mono.

  9. Orion Edwards says:

    The prevailing wisdom seems to be that mono lags behind the Microsoft CLR (and the JVM) in various areas, so is the reason IronRuby is slow IronRuby’s fault, or is it just mono?

    It seems a bit dubious that you’d test IronRuby only on mono on linux. I initially thought “OK, maybe he’s only got linux boxes” but then you go and include a bunch of windows tests? If you’ve already got a bunch of windows benchmarks set up, why not run IronRuby on the Microsoft CLR?

    • Orion, the Windows tests were run a few weeks ago and they include IronRuby on .NET 4.0.

      The Linux vs Windows section of this post is only a recap that compares the Windows results from a few weeks ago with the new Linux results, for the fastest implementations available.

      If you wish to compare IronRuby on Mono/Linux with IronRuby on .NET/Windows, you can read this table I posted in a comment above:

      • Ryan Riley says:

        Thanks for posting the benchmarks. I’ve long enjoyed seeing how the different implementations compare. I’m not surprised IronRuby is lagging far behind at the moment. One thing I’d be curious is what difference Mono 2.7 might make. While working on IronRuby, I couldn’t build easily unless I was on 2.7, so I’m really not surprised at the errors. It may not make a difference, but several people did a lot of work to get IR running on Mono. It’d be nice to see if that really paid off.


  10. Orion Edwards says:

    OK, I see at the bottom of the comments there is a link to a previous benchmark, comparing IronRuby on Mono vs the Microsoft CLR; At a glance it appears that IronRuby on windows is over twice as fast as on mono.

    It would be nice if these values were included in the Shootout results – to leave them out gives an unfair impression of IronRuby.

    The casual observer is not likely to realize that there is such a difference between the CLR and Mono, and simply infer that IronRuby has terrible performance, when this appears to be the fault of Mono rather than IronRuby

  11. Keith Pitty says:

    Thanks to all those who have put in the effort to improve the performance of the various Ruby implementations. I’m eagerly anticipating the release of Ruby 1.9.2!

  12. raggi says:

    It’s really excellent to see all these implementations showing such good numbers.

    I’m especially impressed when you compare IronRuby on Windows to the numbers for JRuby (even on Linux) – it’s really close!

    Rubinius seems to be doing excellently, and 1.9.2 is also still showing progress, it’s all so positive.

    There’s some really swift options available for everyone, all the implementors should be congratulated.

  13. […] on the heels of his Windows Ruby implementation shootout comes Antonio Cangiano's Great Ruby Shootout of July 2010 where Antonio pits 8 different Ruby implementations against each other in a performance […]

  14. Simon says:

    Great shootout. The new .NET CLR IronRuby results are amazing. Would it be possible to add this column to the general results/graphs? It looks like it’s the fastest implementation on a number of benchmarks.

  15. Simon says:

    Actually, it would also be great if you could post the results in a Google Spreadsheet, then we can do it ourself :)

  16. roger says:

    No rdoc benchmark? :)

  17. roger says:

    A couple of notes:
    1) even though jruby uses tons of RAM, it manages that RAM efficiently, and avoids slowdown for larger apps. I think I’ll add a benchmark to show this fact a bit better.

    2) jruby can start quickly–for me on windows it only takes about 1s if I use the faster_rubygems gem.

    Thanks for the nice shootout.

  18. Hello folks,

    I am late to this fascinating discussion.

    A bug in Mono’s runtime caused IronRuby to enter a code path that calls Debug.Log too often which slows down the code by an order of magnitude.

    We did not find out about these errors until recently, due to a mistake on my side. The Microsoft IronRuby team reached out to us months before they released the latest IronRuby, but I never noticed their email until it was too late where they raised a number of bug reports and pointed out some of our limitations.

    I only looked at this about two weeks ago while clearing my mail queue and we were able to fix these problems quickly. We have gained the performance back and these fixes will be in our upcoming Mono 2.8, or are available today from GitHub’s Mono (

    The Microsoft guys were kind enough to file the bugs that they identified in Mono and we have fixed almost all of them now. There are a couple of them that we are still working on.

    That being said, with Mono 2.8, you can expect the IronRuby test suite to run 10 times faster (not sure about this benchmark, but if someone emails me the instructions, we can look at testing it as well).

    There is also a nice performance boost from using our new GC (also available on github, and on the upcoming 2.8 release) depending on the test we get a 10% to 40% performance increase (this is from an email from Paolo who tried some of the tests on this page).

    We still have a few unoptimized code paths in Mono that will likely impact IronRuby performance on Linux, but they should be fixed eventually. Our goal is to match the .NET performance on equivalent hardware.


  19. my22301 says:

    will it not benefit ruby’s speed if we use a true compiler rather than use an interpreter ?

    Is there any difficulty to make a compiler?

  20. roger says:

    @my22301 yes it would be faster, but Ruby is tricky because it does runtime setup of methods, so it wouldn’t be easy…

  21. Isaac Gouy says:

    The Computer Language Benchmarks Game measurements have now been updated for Ruby 1.9.2 and it still seems that JRuby is a little ahead on both x86 and x64 – so why doesn’t it look like that in “The Great Ruby Shootout (July 2010)” ?

    Could it really be that “The Great Ruby Shootout (July 2010)” results are that badly broken from the Ruby 1.9.2 Prime benchmarks being rewritten in C ?

    Please re-run the statistics with Prime excluded as a sanity check.

  22. Cristian Măgheruşan-Stanciu says:

    Can you do a new benchmark so we can see the current state?


  23. Finity says:

    please kindly do another benchmark for 2012 ^^

    thank you very much

  24. […] last shootout (over two years ago now) and its predecessors were highly popular posts. In fact, benchmarking in […]

  25. kiz says:

    would you please benchmark for 2012 or when mri 2.0 released? ^^ thanks

Leave a Reply

I sincerely welcome and appreciate your comments, whether in agreement or dissenting with my article. However, trolling will not be tolerated. Comments are automatically closed 15 days after the publication of each article.

Copyright © 2005-2014 Antonio Cangiano. All rights reserved.