Comparing the performance of IronRuby, Ruby 1.8 and Ruby 1.9 on Windows

In my latest article I discussed the importance of JRuby as a means of introducing Ruby to the enterprise world. Most of the companies in this ecosystem are Java-based, but we cannot forget that a sizable portion of them are Microsoft-centric. Within these companies, Ruby will be far more welcome if a .NET implementation is available. IronRuby, whose version 0.9 was just released, answers that need.

IronRuby has been progressing fast lately. First came support for Rails, and with this release a great deal of effort has gone into improving performance. In the past, IronRuby was far from fine-tuned; in fact, it was several times slower than Ruby MRI, as the team focused on compatibility with Ruby 1.8 and mostly ignored performance.

In this article I’m going to provide some performance results for IronRuby 0.9 on Windows, which I’m sure will interest readers of this blog as well as readers of my book. Before revealing all the details, let’s start with the setup and a disclaimer. Please read through them carefully, because the old, trite comments about how “micro-benchmarks are useless” won’t be published. We’ve already been there, folks. Thank you all for your understanding.

Setup

  • The benchmarks were run within a virtual machine with 2 GB of DDR3 RAM and a 2.66 GHz Intel Core 2 Duo processor. The operating system adopted was Windows XP SP3 (32 bit) with the .NET Framework 3.5 SP1 installed.
  • Here I employed a large subset of the current Ruby Benchmark Suite project. The source code for all of the benchmarks is available within the repository.
  • The best time out of five runs is reported for each benchmark. When the results report a Timeout, it means that a single iteration took more than 300 seconds and was therefore interrupted. N/A means either that the test was not available for the given implementation (2 tests) or that my machine lacked the libraries required to run it (1 test). A minimal sketch of this best-of-five approach follows this list.
  • I realize that there is a MinGW One-Click Installer effort that may improve the performance of Ruby on Windows. Here, however, I used the regular One-Click Installer, as it is the most common one (it’s been downloaded almost 3.5 million times so far). For Ruby 1.9 I used the binary provided by the download page on the official Ruby site.
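
For the curious, here is a minimal sketch of the best-of-five/300-second methodology described above. It is not the actual Ruby Benchmark Suite runner; it simply illustrates the timing rule in-process, and the benchmark file name is just a placeholder.

    require 'benchmark'
    require 'timeout'

    TIME_CAP = 300  # seconds; a run exceeding this is reported as a Timeout

    # Run the given benchmark file five times and return the best
    # wall-clock time, or :timeout if any single run exceeds the cap.
    def best_of_five(benchmark_file)
      times = []
      5.times do
        begin
          Timeout.timeout(TIME_CAP) do
            times << Benchmark.realtime { load benchmark_file }
          end
        rescue Timeout::Error
          return :timeout
        end
      end
      times.min
    end

    puts best_of_five('micro-benchmarks/bm_app_fib.rb').inspect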

Disclaimer

  • An attempt has been made to improve the quality of the tests. Some of them may be more representative of realistic workloads, but most remain micro-benchmarks (see the sketch after this list for the typical shape of one). They are indicative of how IronRuby compares to MRI (Ruby 1.8) and KRI (Ruby 1.9) performance-wise, but they are no guarantee of how your own programs will be affected.
  • I didn’t calculate a RubySpec completeness score for IronRuby. That said, IronRuby 0.9 should be a fairly complete implementation of Ruby 1.8.
  • There are lies, damned lies, and statistics.
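
To make the micro-benchmark caveat concrete, most of these tests are little more than a tight, CPU-bound routine run with a given input. The snippet below only approximates the shape of something like bm_app_fib.rb (the real source lives in the Ruby Benchmark Suite repository):

    # A naive, deliberately recursive Fibonacci: it exercises method
    # dispatch and integer arithmetic, and very little else.
    def fib(n)
      n < 2 ? n : fib(n - 1) + fib(n - 2)
    end

    fib(30)  # 30 and 35 are the inputs used in the results table below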

Benchmark results

The table below shows the best times, in seconds, for each benchmark and input value (the # column), for IronRuby 0.9, Ruby 1.8.6 (2008-08-11 patchlevel 287) and Ruby 1.9.1p0 (2009-01-30 revision 21907):

Benchmark File # Ruby 1.8.6 IronRuby Ruby 1.9.1
macro-benchmarks/bm_gzip.rb 100 Timeout IOError N/A
macro-benchmarks/bm_hilbert_matrix.rb 20 1.891 0.453 0.125
macro-benchmarks/bm_hilbert_matrix.rb 30 7.422 1.719 0.656
macro-benchmarks/bm_hilbert_matrix.rb 40 21.500 4.625 2.266
macro-benchmarks/bm_hilbert_matrix.rb 50 56.765 10.031 5.109
macro-benchmarks/bm_hilbert_matrix.rb 60 111.859 18.781 11.297
macro-benchmarks/bm_norvig_spelling.rb 50 Timeout 41.313 31.453
macro-benchmarks/bm_sudoku.rb 1 43.734 Timeout 6.313
micro-benchmarks/bm_app_factorial.rb 5000 1.328 0.063 0.266
micro-benchmarks/bm_app_fib.rb 30 6.156 0.594 0.813
micro-benchmarks/bm_app_fib.rb 35 74.125 6.922 9.344
micro-benchmarks/bm_app_mandelbrot.rb 1 11.953 6.922 0.641
micro-benchmarks/bm_app_pentomino.rb 1 Timeout 59.938 75.859
micro-benchmarks/bm_app_strconcat.rb 1.5M 30.469 2.141 4.813
micro-benchmarks/bm_app_tak.rb 7 5.516 0.531 0.578
micro-benchmarks/bm_app_tak.rb 8 15.609 1.484 1.703
micro-benchmarks/bm_app_tak.rb 9 45.843 3.953 4.531
micro-benchmarks/bm_app_tarai.rb 3 19.985 1.844 2.156
micro-benchmarks/bm_app_tarai.rb 4 19.796 2.219 2.656
micro-benchmarks/bm_app_tarai.rb 5 24.235 2.688 3.063
micro-benchmarks/bm_binary_trees.rb 1 Timeout 53.078 37.375
micro-benchmarks/bm_count_multithreaded.rb 16 0.297 0.266 0.328
micro-benchmarks/bm_count_shared_thread.rb 16 0.250 0.188 0.203
micro-benchmarks/bm_fannkuch.rb 8 3.625 0.344 0.563
micro-benchmarks/bm_fannkuch.rb 10 Timeout 40.750 65.438
micro-benchmarks/bm_fasta.rb 1M 192.937 23.703 35.234
micro-benchmarks/bm_fractal.rb 5 43.172 4.672 5.781
micro-benchmarks/bm_gc_array.rb 1 228.672 32.031 59.828
micro-benchmarks/bm_gc_mb.rb 500K 8.109 1.109 0.469
micro-benchmarks/bm_gc_mb.rb 1M 16.172 2.391 1.016
micro-benchmarks/bm_gc_mb.rb 3M 44.953 6.906 2.938
micro-benchmarks/bm_gc_string.rb 1 47.937 25.250 11.938
micro-benchmarks/bm_knucleotide.rb 1 9.625 2.906 2.016
micro-benchmarks/bm_lucas_lehmer.rb 9689 122.672 18.125 36.250
micro-benchmarks/bm_lucas_lehmer.rb 9941 156.750 19.625 39.391
micro-benchmarks/bm_lucas_lehmer.rb 11213 179.915 28.844 61.063
micro-benchmarks/bm_lucas_lehmer.rb 19937 Timeout 159.078 Timeout
micro-benchmarks/bm_mandelbrot.rb 1 Timeout 65.781 81.766
micro-benchmarks/bm_mbari_bogus1.rb 1 0.031 40.406 8.781
micro-benchmarks/bm_mbari_bogus2.rb 1 0.156 Timeout N/A
micro-benchmarks/bm_mergesort_hongli.rb 3000 25.282 3.531 6.031
micro-benchmarks/bm_mergesort.rb 1 24.735 3.906 3.219
micro-benchmarks/bm_meteor_contest.rb 1 147.704 19.713 19.781
micro-benchmarks/bm_monte_carlo_pi.rb 10M 79.406 5.109 20.672
micro-benchmarks/bm_nbody.rb 100K 37.625 8.281 10.938
micro-benchmarks/bm_nsieve_bits.rb 8 69.656 33.156 6.531
micro-benchmarks/bm_nsieve.rb 9 58.344 5.453 N/A
micro-benchmarks/bm_partial_sums.rb 2.5M 93.391 10.797 26.422
micro-benchmarks/bm_pathname.rb 100 Timeout Timeout Timeout
micro-benchmarks/bm_primes.rb 3000 21.359 9.594 0.031
micro-benchmarks/bm_primes.rb 30K Timeout Timeout 0.469
micro-benchmarks/bm_primes.rb 300K Timeout Timeout 5.281
micro-benchmarks/bm_primes.rb 3M Timeout Timeout 100.406
micro-benchmarks/bm_quicksort.rb 1 51.046 11.594 8.703
micro-benchmarks/bm_regex_dna.rb 20 181.172 21.188 11.938
micro-benchmarks/bm_reverse_compliment.rb 1 61.875 48.469 138.047
micro-benchmarks/bm_so_ackermann.rb 7 2.234 0.563 0.484
micro-benchmarks/bm_so_ackermann.rb 9 50.000 14.938 9.281
micro-benchmarks/bm_so_array.rb 9000 26.328 8.984 10.781
micro-benchmarks/bm_so_count_words.rb 100 Timeout 60.688 42.250
micro-benchmarks/bm_so_exception.rb 500K 78.125 Timeout 32.672
micro-benchmarks/bm_so_lists_small.rb 1000 13.906 4.250 3.172
micro-benchmarks/bm_so_lists.rb 1000 64.531 22.266 16.797
micro-benchmarks/bm_so_matrix.rb 60 8.312 2.781 2.125
micro-benchmarks/bm_so_object.rb 500K 16.375 5.313 1.672
micro-benchmarks/bm_so_object.rb 1M 29.312 10.500 2.844
micro-benchmarks/bm_so_object.rb 1.5M 43.312 16.000 4.281
micro-benchmarks/bm_so_sieve.rb 4000 241.922 37.859 35.688
micro-benchmarks/bm_socket_transfer_1mb.rb 10K 13.266 SocketError 3.359
micro-benchmarks/bm_spectral_norm.rb 100 5.110 0.922 0.719
micro-benchmarks/bm_sum_file.rb 100 Timeout 20.406 23.797
micro-benchmarks/bm_word_anagrams.rb 1 70.828 30.188 8.125
TOTAL TIME 2933.334 607.088 664.094

Red values are errors, timeouts, inapplicable tests, and times worse than Ruby 1.8.6’s; green, bold values are times better than Ruby 1.8.6’s; a pale yellow background marks the best time for a given benchmark. The total time is the runtime for the 54 benchmarks that all three implementations executed successfully.
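
To spell out how that total is derived, here is a rough sketch: a benchmark contributes to the totals only if it produced a numeric time on all three implementations. The results hash and its two sample entries (taken from the table above) are illustrative placeholders, not the full data set.

    results = {
      'bm_app_fib.rb 30' => { 'Ruby 1.8.6' => 6.156,  'IronRuby' => 0.594,    'Ruby 1.9.1' => 0.813 },
      'bm_sudoku.rb 1'   => { 'Ruby 1.8.6' => 43.734, 'IronRuby' => :timeout, 'Ruby 1.9.1' => 6.313 },
      # ... one entry per benchmark/input pair
    }

    # Keep only the rows where every implementation finished with a numeric time.
    common = results.values.select { |row| row.values.all? { |t| t.is_a?(Numeric) } }

    totals = Hash.new(0.0)
    common.each { |row| row.each { |impl, t| totals[impl] += t } }

    totals.each { |impl, t| puts "#{impl}: #{'%.3f' % t}s over #{common.size} benchmarks" }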

The total runtime is summarized by the chart below:

[Chart: total runtime for Ruby 1.8.6, IronRuby 0.9, and Ruby 1.9.1]

And let’s compare each of the “macro-benchmarks” on an individual basis:

[Chart: macro-benchmark times compared individually across the three implementations]

Conclusions

IronRuby went from being much slower than Ruby MRI to considerably faster across nearly all the tests. That’s major progress for sure, and the team behind the project deserves mad props for it.

One final warning before we get too excited here. IronRuby is not faster than Ruby 1.9.1 at this stage. Don’t let that first chart mislead you. While it’s faster in certain tests, it’s also slower in many others. Currently, it sits between Ruby 1.8.6 and Ruby 1.9.1, but much closer to the latter. The reason the chart is misleading is that it doesn’t take into account any tests that timed out, and several of those timeouts were caused by IronRuby (more than were caused by Ruby 1.9.1). If you were to add, say, 300 seconds to each implementation’s total for every timeout, you’d quickly see that Ruby 1.9.1 still has the edge. The second chart, which compares the macro-benchmarks, does a better job of realistically showing how IronRuby sits between Ruby 1.8.6 and Ruby 1.9.1 from a performance standpoint. If you were to plot every single benchmark on a chart, you’d find similar outcomes for a large percentage of the tests.
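
To put that timeout argument in code form, one could charge the full 300-second cap for every Timeout before comparing totals, along these lines (reusing the illustrative results structure from the earlier sketch; errors and N/A entries are simply ignored here):

    PENALTY = 300.0  # seconds charged for each Timeout

    def penalized_total(results, impl)
      results.values.inject(0.0) do |sum, row|
        case t = row[impl]
        when Numeric  then sum + t        # completed runs count as measured
        when :timeout then sum + PENALTY  # a Timeout costs the full cap
        else               sum            # errors and N/A are skipped in this sketch
        end
      end
    end

    ['IronRuby', 'Ruby 1.9.1'].each do |impl|
      puts "#{impl}: #{'%.1f' % penalized_total(results, impl)}s with timeouts penalized"
    end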

Whether or not it’s faster than Ruby 1.9, now that good performance is starting to show up, it’s easier to see IronRuby delivering on its goal of becoming the main implementation choice for those who both develop and deploy on Windows. This, paired with .NET and possible Visual Studio integration, the great tools available to .NET developers, and the ability to execute Ruby code client-side in the browser thanks to projects like Silverlight/Moonlight and Gestalt, makes the project all the more interesting.

What are your thoughts on IronRuby, and how will this dramatic performance gain affect your projects?
