More on Fibonacci. Oops, Sorry Lisp… Haskell runs it 5 times faster

My post about Ruby 1.9’s impressive improvement over Ruby 1.8.6 created quite an echo within the developer community. Sure, the headline was an attention grabber, just like this one is :-P, but in a matter of a few hours there were all sorts of blog entries with variants in many languages, more than 200 comments on Reddit, and fifty comments on my own blog. There were, however, also a few misconceptions. It was great though, because such a simple post generated a lot of discussion amongst developers, with some insightful arguments taking place – and besides, it almost created a new meme with the whole “holy shmoly” thing. Fun as that certainly was, let’s try to summarize and clarify a few points.

First and foremost, for those who stopped at the title of the article and didn’t read on, I made it very clear in several places that I didn’t even begin to predict that Ruby 1.9 would be faster than Python 2.5.1 when it comes to real world applications. I ran a simple micro-benchmark where this just happened to be the case. Chances are that Python will have the edge in many instances, especially if we consider that it has several optimized libraries which may still be missing or suboptimal in Ruby. Within the scope of the recursive Fibonacci test, which essentially stress-tests method/function calling, Ruby 1.9 seems to be more than 13 times faster than Ruby 1.8.6, but within the Ruby community it’s well known that this improvement factor is not often replicated in other micro-benchmarks or actual applications.

Ruby is improving and its progress is impressive, so let’s drive this point home and try not to speculate too much. Someone also pointed out Ruby’s lack of Unicode support and, in other cases, its disadvantages compared to Python. That’s missing the point: the article isn’t aimed at comparing the two languages or claiming that one is better than the other. The essence of the message was, and remains, that Ruby’s main interpreter — which has typically been fairly slow — will soon be replaced by YARV, which will drastically improve Ruby’s speed, to the point where perhaps it will be comparable with the not-too-fast but acceptable CPython. And yes, Python also has Psyco and PyPy, both of which would change the outcome of my test. With this out of the way, let’s move on.

Dima Dogadaylo and my friend Valentino Volonghi both blogged about much faster versions of my Python snippet (they employed generators). Many other people on this blog and on Reddit proposed faster algorithms too. That’s all well and good, but it really compares apples and oranges. If we switch from a naive recursive algorithm to an iterative one, even Ruby 1.8.6 becomes fast enough to compute the task in moments. Those who want to compare other languages with my results should stick to the computationally expensive, inefficient recursive algorithm; otherwise the comparison will be meaningless. Using a fast language to implement a slow algorithm is always going to be slower than using a slow language with an efficient algorithm, for n sufficiently large.
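
To make the difference concrete, here is roughly what the linear-time flavor looks like — a minimal sketch in Haskell (the language featured later in this post) rather than Dima’s or Valentino’s actual Python code, with names of my own choosing:

-- The classic lazy list of Fibonacci numbers: each element is the sum
-- of the two before it, so reaching element n costs O(n) additions.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Look up the n-th element of the list.
fibLinear :: Int -> Integer
fibLinear n = fibs !! n

With this, fibLinear 4500 returns essentially instantly, while the naive recursive version would never finish.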

The Fibonacci function is mathematically defined as follows:

F(0) = 0
F(1) = 1
F(n) = F(n-1) + F(n-2), for n > 1

In the intentionally naive and recursive algorithm I adopted in my original post, I essentially wrote the mathematical formula in Ruby and Python syntax, and then executed it for n in a range that goes from 0 up to 35. This is very inefficient, because the tree-recursive process generated in computing F(n) grows exponentially. We have:

F(0) = 0
F(1) = 1
F(2) = F(1) + F(0)
F(3) = F(2) + F(1) = F(1) + F(0) + F(1)
F(4) = F(3) + F(2) = F(1) + F(0) + F(1) + F(1) + F(0)
F(5) = F(4) + F(3) = F(1) + F(0) + F(1) + F(1) + F(0) + F(1) + F(0) + F(1)
F(6) = F(5) + F(4) = F(1) + F(0) + F(1) + F(1) + F(0) + F(1) + F(0) + F(1) + F(1) + F(0) + F(1) + F(1) + F(0)

You can see where this is going. The recurrence relation generated by the algorithm implies that computing F(n) requires 2*F(n+1) – 1 function calls. When n=35, this is a huge number: F(35) = 9227465, so calculating F(35) alone takes 29,860,703 calls. Now you can see why the algorithm really stress-tests function calling and why Ruby 1.8.6 chokes. If we want to express the computational complexity of the algorithm in big-O notation, we have O(phi^n), where phi is the Golden Ratio, (sqrt(5) + 1)/2 ≈ 1.618. That’s exponential.

Valentino, Dima and many others came up with iterative or tail-recursive variants whose complexity is linear, O(n). Using memoization (the technique of keeping a cache of the previous calculations) or a simple iterative loop changes the algorithm’s efficiency entirely. The difference between my original algorithm and these is humongous, and there is no point in comparing them as a means of evaluating the runtime speed of a given language. At that point we could very well use Binet’s formula, or Edsger Dijkstra’s algorithm, or even 2×2 matrix exponentiation, but we’d not be proving any point there. If you want to learn more about algorithms (a necessity for any serious programmer), I strongly suggest the introductory textbook “Introduction to Algorithms”.
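
Out of curiosity, here is a sketch of the matrix exponentiation route just mentioned — again in Haskell, with hypothetical names of my choosing. It relies on the identity that the n-th power of the 2×2 matrix [[1,1],[1,0]] contains F(n), and repeated squaring brings the cost down to O(log n) multiplications:

-- A 2x2 integer matrix in row-major order: (a, b, c, d) = [[a,b],[c,d]].
type M = (Integer, Integer, Integer, Integer)

-- Standard 2x2 matrix multiplication.
mmul :: M -> M -> M
mmul (a, b, c, d) (e, f, g, h) =
  (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- Raise a matrix to the k-th power by repeated squaring: O(log k) steps.
mpow :: M -> Int -> M
mpow _ 0 = (1, 0, 0, 1)  -- the identity matrix
mpow m k
  | even k    = let h = mpow m (k `div` 2) in mmul h h
  | otherwise = mmul m (mpow m (k - 1))

-- [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]], so F(n) is the
-- top-right entry.
fibFast :: Int -> Integer
fibFast n = let (_, f, _, _) = mpow (1, 1, 1, 0) n in f

But again, none of these variants tells us anything about how fast a language handles function calls, which is all my micro-benchmark was measuring.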

Don Stewart, a real celebrity in the Haskell community, has replied to my post with two articles that essentially illustrate a couple of points that are not new to many people: 1) Haskell is fast, much faster than Python and Ruby; 2) Haskell’s ability to take advantage of multiple cores by following parallelism hints placed in the code for the compiler is just plain awesome, and easy on programmers. Don did use old versions of Ruby and Python, but I appreciate his response a lot, because he kept the same algorithm in place. He didn’t bait and switch by using one of the many fast implementations available on the Haskell wiki. His fair comparison showed, despite the very limited scope of the test, what kind of performance we can expect from this functional language’s main compiler (GHC).

As I said in the past, Haskell really is a language worth getting excited about. But it’s not all about performance and the trend towards increasingly multicore CPUs. So I’m glad that we have both Ruby and Haskell, with their respective strengths and weaknesses. While Ruby 1.9 will hopefully give us a runtime that’s seriously fast enough™ in most circumstances, it’d be nice if Ruby’s future included features that let us take advantage of multiple cores just as Haskell does, without cumbersome code modifications.

Amongst the other replies there was a mix of everything, including Assembly and LOLCODE, but I’d like to point out the post by a lisper who took the Haskell vs Lisp approach in “Dude, your quad-cores have been smoking way too much Haskell!”. He ran the following code, first for n=45 and then for n=4500:

(defun printfib-trec (start end)
  (format t "n=~D => ~D~%" start (fib-trec start))
  (if (< start end)
      (printfib-trec (+ start 1) end)))

(defun fib-trec (n)
  "Tail-recursive Fibonacci number function"
  (labels ((calc-fib (n a b)
             (if (= n 0)
                 a
                 (calc-fib (- n 1) b (+ a b)))))
    (calc-fib n 0 1)))

(time (printfib-trec 0 4500))

On my machine this runs in 3.291 seconds. Algorithms 101, guys. Quick question for my readers: how can an algorithm that is supposed to be O(phi^n) execute in 3 seconds for n=4500? Simple: it’s the blogger who is being naive, not the algorithm he adopted. If you pay attention you can see that he’s comparing the linear, O(n), tail-recursive implementation of Fibonacci in Lisp with the naive recursive one in Haskell, and from this he concludes “Oops. Sorry Haskell…”. Slow down, cowboy! You want to compare Lisp with Haskell? Let’s do a fair comparison then: let’s keep the same algorithm for both and use n=45, shall we?

Here is my naive/recursive Lisp version:

(defun printfib (start end)
  (format t "n=~D => ~D~%" start (fib start))
  (if (< start end)
      (printfib (+ start 1) end)))

(defun fib (n)
  "Naive-recursive Fibonacci number function"
  (if (or (= n 0) (= n 1))
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

(time (printfib 0 45))

And here is the Haskell one:

import Control.Monad
import Text.Printf

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

main = forM_ [0..45] $ \i ->
            printf "n=%d => %d\n" i (fib i)

On my MacBook Pro, an Intel Core 2 Duo 2.2 GHz with 2 GB of RAM, Lisp (well, SBCL, which supposedly uses both cores, though there is no documented proof of this) took 259.743 seconds. See the difference between O(n) and O(phi^n)? Try n=4500 with this algorithm and the sun will have burned out before the computation is finished. Haskell used only one core and took 77.779 seconds. Hmmm, Haskell was 3.3 times faster than Lisp without even parallelizing the code.

Just out of curiosity, let’s try again with Don’s code, which implements the same O(phi^n) algorithm but introduces parallelism hints for the compiler. Remember, this is being done purely out of curiosity; we already established that for this particular micro-benchmark Haskell smokes Lisp away.

import Control.Parallel
import Control.Monad
import Text.Printf

cutoff = 35  -- below this size, plain sequential recursion beats sparking threads

fib' :: Int -> Integer
fib' 0 = 0
fib' 1 = 1
fib' n = fib' (n-1) + fib' (n-2)

fib :: Int -> Integer
fib n | n < cutoff = fib' n
      | otherwise  = r `par` (l `pseq` l + r)  -- spark r in parallel, force l, then add
 where
    l = fib (n-1)
    r = fib (n-2)

main = forM_ [0..45] $ \i ->
       printf "n=%d => %d\n" i (fib i)

Now that the Haskell program uses two cores (something the Lisp version, despite the assumption above, most likely doesn’t do), Haskell runs it in 52.248 seconds versus Lisp’s 259.743 seconds. That’s about 5 times faster than Lisp. Does this prove that Haskell is faster than Lisp in general (or, to be more exact, that GHC is faster than SBCL on Mac OS X 10.5)? Nope, it’s just a silly micro-benchmark after all. But damn, the temptation to say “Oops. Sorry Lisp…” was too strong. 😉
