I’m not at RailsConf, which is being held in Portland, but it doesn’t take a great leap of faith to believe that the session on MagLev was the star of the conference. Avi’s demo caused quite a stir, and it’s currently the most discussed topic in the Ruby and Rails communities, despite the announcement of IronRuby on Rails and today’s release of Rails 2.1.
I had no doubt that this would be the case. For the past few days, even before RailsConf, I’ve been in touch with the nice people at Gemstone and witnessed, before most, the speed of the current implementation. My first comment truly was “Holy Shit!” and I don’t like to swear. 😉 In a side-by-side comparison, it was like watching a race between an eight-week-old chubby bulldog (Ruby MRI) and a full-grown jaguar (MagLev). Ruby MRI looked cute, and I was almost cheering for it (“come on Ruby, you can do it”). It was so spectacular that you couldn’t help but think they were pulling your leg.
MagLev is not complete yet, but it’s remarkable how much it has accomplished, both in terms of tests passed and speed, despite being only three months old. It speaks volumes about the capabilities of the Gemstone platform (on which MagLev is based) and the MagLev team’s work.
Let’s be clear about one point: this announcement changes everything; it has the potential to revolutionize the Ruby community. It’s not just a matter of a substantial speed increase (Ruby’s main weakness); it’s also a matter of scalability, paradigm shift, and Enterprise perception.
When you get to use an object persistence model that can hold up to 17 petabytes (that’s 17 million gigabytes), with the ability to scale by simply adding instances, the whole “Ruby doesn’t scale” FUD starts to fade real fast. Also, aside from all the technical advantages brought by MagLev, selling a Gemstone-based solution to environments that are typically harder for Ruby to penetrate becomes a no-brainer. It’s a newfound ticket for Ruby and Rails into many sectors of the Enterprise, and an easy entrance into the financial world as well.
MagLev is unconventional, but that’s because most impressive innovations tend to be so. That’s what the future looks like when seen through the eyes of the present. The reality is that MagLev’s OODB paradigm and architecture are quite a good fit for common Web development too, so if properly pulled off, MagLev could change the way we write Rails applications.
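To illustrate the object-persistence idea in miniature (MagLev’s own API hasn’t been published yet, so this is not MagLev code), Ruby’s standard library PStore stores plain Ruby objects directly, with no SQL schema or ORM mapping in between — conceptually closer to what an OODB offers than the ActiveRecord model most Rails developers are used to:

```ruby
require 'pstore'

# A plain Ruby object; no schema, migrations, or ORM mapping required.
Post = Struct.new(:title, :tags)

store = PStore.new("blog.pstore")

# Persist the object graph directly, inside a transaction.
store.transaction do
  store[:posts] = [Post.new("MagLev at RailsConf", ["ruby", "gemstone"])]
end

# Read it back as a live Ruby object.
store.transaction(true) do
  puts store[:posts].first.title
end
```

In an OODB like Gemstone’s, this idea is taken much further: the persisted objects live in a shared, transactional, distributed store rather than a local file, but the key point is the same — you work with objects, not rows.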
A few people have objected that it won’t fly because it’s going to be a commercial product. I don’t think that’s the case. The Rails community adopts Macs, TextMate, and other commercial software without blinking, so why would it have any issue paying for a truly superior Ruby VM (if it proves to be one)? The terms of release for MagLev have not been finalized yet, but it’s clear that some parts will be open source and others (e.g., the actual VM, written in C) closed source. There will likely be different pricing levels, including a free version with some limits in place. But price will mostly be irrelevant.
Planning the next Great Ruby Shootout
The great news is that the next edition of my shootout will include MagLev, so we’ll get to test it in a fair and controlled environment, against all the other major implementations. The results of the shootout will be published sometime in June. These are the implementations that I’ll most likely test:
- Ruby MRI
- Ruby 1.9
- Ruby Enterprise Edition
- IronRuby on Mono
- MagLev
I’ll probably test them on Ubuntu 8.04 (32- and 64-bit), Mac OS X Leopard, and Windows Vista (32-bit). Please note that not all of them can be tested on each of the four platforms, so you won’t see MagLev on Windows or 32-bit Linux, for example. I’d like to test all four operating systems on the same machine (my MacBook Pro Core 2 Duo 2 GHz SR, with 2 GB of RAM) so that we can also compare how these major implementations behave on different operating systems, but I may have to opt for some native installations and run others in VMware Fusion; I’ve yet to sort out all the details. 😉
A criticism that I’ve often heard is that these micro-benchmarks are well known and specifically targeted for speed by the implementations’ authors. The new shootout will also include a set of tests that no one has seen before (I’ve just started writing and putting them together). They may still be synthetic benchmarks, but they’ll help us form a better idea of the actual speed of these VMs.
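For readers unfamiliar with what such a synthetic micro-benchmark looks like, here is a minimal sketch using Ruby’s standard Benchmark module. The workloads (a naive recursive Fibonacci and some string allocation) are illustrative examples only, not the actual tests from the shootout:

```ruby
require 'benchmark'

# A deliberately naive recursive Fibonacci: a classic synthetic
# micro-benchmark workload that stresses method dispatch and recursion.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

# Benchmark.bm prints user/system/real times for each labeled block.
Benchmark.bm(12) do |x|
  x.report("fib(25):")    { fib(25) }
  x.report("string ops:") { 100_000.times { "ruby" * 3 } }
end
```

Running the same script under each implementation (MRI, Ruby 1.9, MagLev, and so on) and comparing the reported times is, in essence, what a shootout of this kind does — the trick is choosing workloads the VM authors haven’t already tuned for.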