The long-awaited Ruby virtual machine shootout is here. In this report I’ve compared the performance of several Ruby implementations against a set of synthetic benchmarks. The implementations that I tested were Ruby 1.8 (aka MRI), Ruby 1.9 (aka YARV), Ruby Enterprise Edition (aka REE), JRuby 1.1.6RC1, Rubinius, MagLev, MacRuby 0.3 and IronRuby.
Just as with the previous shootout, before proceeding to the results, I urge you to consider the following important points:
All of the Ruby implementations that were able to run the current Ruby Benchmark Suite have been grouped together in one main shootout. This group consists of Ruby 1.8.7 (p72, both built from source and installed through apt-get), Ruby 1.9.1 (from trunk, revision 20560), Ruby Enterprise Edition (1.8.6-20081205), JRuby 1.1.6RC1 and Rubinius (from trunk), all tested on Ubuntu 8.10 x64, plus Ruby 1.8.6 (p287, from the One-Click Installer) on Windows Vista Ultimate x64. The hardware used for this benchmark was my desktop workstation, with an Intel Core 2 Quad Q6600 (2.4 GHz) CPU and 8 GB of RAM. JRuby was run with the -J-server option enabled and with 4 MB of stack specified (required to pass certain recursive benchmarks). The best times out of five iterations are reported, and these do not include startup time or the time required to parse and compile classes and methods for the first time. Several of these new tests also have variable input sizes.
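The best-of-five approach described above can be sketched in plain Ruby. Note that the `fib` workload and the iteration count below are illustrative placeholders of my own, not scripts from the actual Ruby Benchmark Suite:

```ruby
# Hypothetical sketch of the timing methodology: run each benchmark
# five times and keep the best wall-clock time, so that one-off noise
# (GC pauses, OS scheduling) doesn't penalize a VM.

def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

ITERATIONS = 5

times = ITERATIONS.times.map do
  start = Time.now
  fib(25)                 # the workload being measured
  Time.now - start        # elapsed seconds for this iteration
end

best = times.min
puts format("best of %d runs: %.4f s", ITERATIONS, best)
```

Startup time is excluded simply because the clock only starts once the script is already running inside the VM.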
The MagLev team provided me with an early alpha version of MagLev for the purpose of testing it in this shootout. Since this VM is not mature enough yet to run the Ruby Benchmark Suite, I used custom scripts against an old version of the Ruby Benchmark Suite on Ubuntu 8.10 x64. MagLev was tested, along with Ruby 1.8.6 (p287), on the same machine as the main shootout, though the benchmarks were different (even when they had the same names as the ones in the main shootout).
MacRuby 0.3 and Ruby 1.8.6 (p114) were tested on Mac OS X Leopard using the previous version of the Ruby Benchmark Suite. Since my MacBook Pro died (sigh), for this benchmark I used a Mac Pro, with two Quad-Core Intel Xeon 2.8 GHz processors and 18 GB of RAM.
IronRuby (from trunk) and Ruby 1.8.6 (p287) were tested on a previous version of the Ruby Benchmark Suite on Windows Vista x64 on the same quad-core used for the main shootout. The MagLev, MacRuby and IronRuby numbers reported here were the best times out of five iterations, and include startup time. IronRuby on Mono was not tested because I couldn’t get it to work on my machine, despite having tried several IronRuby versions and two different Mono versions. Please also notice that Ruby 1.8.6 (p287) was tested twice on Windows, once for the main shootout against the current Ruby Benchmark Suite, and a second time to compare it with IronRuby, against the old benchmarks.
Note: As tempting as it is, do not compare implementations that belong to different shootouts directly to one another. It would be very disingenuous to directly compare VMs tested with different benchmarks and/or different machines. The only comparisons that make sense are the ones within each of the four groups.
The following table shows the run times for the main implementations. The table is fairly wide, so you’ll have to click on the image to view the data in a new tab.
Green, bold values indicate that the given virtual machine was faster than Ruby 1.8.7 on GNU/Linux (our baseline), whereas a yellow background indicates the absolute fastest implementation for a given benchmark. Values in red are slower than the baseline. Timeout indicates that the script didn’t terminate in a reasonable amount of time and was (automatically) interrupted. The values reported at the bottom are the total amounts of time (in seconds) that it would take to run the common subset of benchmarks that were successfully executed by every virtual machine. When our baseline VM generated an error, the next implementation was used in its place, starting with Ruby 1.8.7 on Vista (for color-coding purposes only).
The following image shows a bar chart of the total time required for the common subset of successfully executed benchmarks (those whose names are in blue within the tables):
More interestingly, the following table shows the ratios of each Ruby implementation based on the baseline (MRI):
The baseline time is divided by the time at hand to obtain a number that tells us “how many times faster” an implementation is for a given benchmark. 2.0 means twice as fast, while 0.5 means half the speed (so twice as slow). The geometric mean at the bottom of the table tells us how much faster or slower a virtual machine was when compared to the main Ruby interpreter, on “average”. Just as with the totals above, only the 101 tests that were successfully run by every VM were included in the calculation.
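The calculation described above can be sketched in a few lines of Ruby. The benchmark names and timings here are made up for illustration; they are not figures from the shootout tables:

```ruby
# Times in seconds for the same three (hypothetical) benchmarks.
baseline  = { "bm_app_fib" => 4.0, "bm_so_matrix" => 2.0, "bm_loop_times" => 1.0 }
candidate = { "bm_app_fib" => 2.0, "bm_so_matrix" => 4.0, "bm_loop_times" => 0.5 }

# Ratio > 1.0 means the candidate VM is faster than the baseline.
ratios = baseline.keys.map { |bm| baseline[bm] / candidate[bm] }
# => [2.0, 0.5, 2.0]

# Geometric mean: the nth root of the product of the n ratios.
geo_mean = ratios.reduce(:*) ** (1.0 / ratios.size)

puts format("geometric mean: %.2f", geo_mean)  # ≈ 1.26
```

The geometric mean is used rather than the arithmetic mean because it treats “twice as fast” (2.0) and “twice as slow” (0.5) as exact opposites that cancel out, which is the right behavior when averaging ratios.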
More concisely, here is a bar chart showing the geometric mean of the ratios for the various implementations tested:
I prefer to let the data speak for itself, but I’d like to offer a few quick considerations on these results.
Working off of the geometric mean of the ratios for the successful tests, Ruby MRI compiled from source is twice as fast as the Ruby shipped by Ubuntu, and as the one installed by the One-Click Installer on Vista. The huge performance gap between ./configure && make && sudo make install and sudo apt-get install ruby-full should not be taken lightly when deploying in production. These numbers also reveal what most of us already knew: Ruby is particularly slow on Windows (800-pound gorillas in the room, or not).
Performance-wise, Rubinius still has work to do to catch up with Ruby 1.8.7 and the faster VMs, particularly if we take into account the number of timeouts. But it has improved over the past year, and I think it’s on the right track.
Ruby Enterprise Edition is about as fast as Ruby 1.8.7 compiled from source, which is reasonable considering that it’s a patched version of Ruby 1.8.6 aimed at the reduction of memory consumption (a parameter which wasn’t tested within the current shootout).
Speaking of excellent results, Ruby 1.9.1 and JRuby 1.1.6 both did very well. It looks like we finally have a couple of relatively fast alternatives to what is a slow main interpreter. According to the results above, and with the exception of a few tests, on average they are respectively 2.5 and 2 times faster than Ruby 1.8.7 (from source), and 5 and 4 times faster than Ruby 1.8.7 installed through apt-get on Ubuntu or Ruby 1.8.6 installed through the One-Click Installer on Vista. Again, this does not mean that every program (particularly Rails) will gain that kind of speed, but these results are very encouraging nevertheless.
There has been a lot of buzz about MagLev since Avi Bryant’s first benchmarks were shown a few months ago. Here we finally see it being put to the test. The table below shows the times obtained by running MagLev and Ruby 1.8.6 (p287) against MagLev’s set of benchmarks based on the old Ruby Benchmark Suite:
And here are the ratios:
You’ll notice how MagLev swings from being much faster than MRI to being much slower. I believe there is much room for improvement, but at almost twice the speed of MRI, these early results are definitely promising.
These are the times for MacRuby 0.3 on Mac OS X 10.5.5:
And of course, the ratios against the MRI baseline:
MacRuby is relatively new, so these are not bad results. More work is required, but it’s a good start.
Finally (I promise these are the last ones), here are the two tables for IronRuby and Ruby 1.8.6:
IronRuby is slower than Ruby 1.8.6 on Windows, which in turn is much slower than Ruby 1.8.7 on GNU/Linux. This is not very surprising. This project has been focusing on integrating with .NET and catching up with the implementation of the language by improving the RSpec pass rate, as opposed to performing any optimizations and/or fine tuning (as per John Lam’s presentation at RubyConf 2008). We’ll measure its improvements in the next shootouts.
Overall I think these are great results. Ruby 1.8 (MRI), with its slowness and memory leaks, belongs to the past. It’s time for the community to move forward and on to something better and faster – and we don’t lack interesting alternatives to do so at this stage.
I hope that for the next shootout, MagLev, MacRuby and IronRuby will be able to run the benchmark suite, so that they can all be tested and directly compared with each other. I also hope to include Tim Bray’s XML benchmark, some sort of “Pet Shop” sample Rails and Merb application and, above all, include memory usage statistics.
You can find the Excel file for the main shootout here. That’s all for now. Feel free to comment, subscribe to my feed, share this link and promote it on Hacker News, Reddit, DZone, StumbleUpon, Twitter, and Co. Putting together this shootout was a lot of work, so I definitely appreciate you spreading the word about it. Until next time…
Update (December 10, 2008): This article has been updated to correct a couple of major issues with yesterday’s results. I adjusted my commentary as well, in light of the corrected figures.
Update (February 7, 2009): Thanks to Makoto Kuwata, a Japanese version of this article was published in the Rubyist Magazine.
I sincerely welcome and appreciate your comments, whether they agree or disagree with my article. However, trolling will not be tolerated. Comments are automatically closed 15 days after the publication of each article.