Laziness and Parallelism

By Alex Beal
February 12, 2018

What follows is a brief tour of another[1] type of modularity that laziness enables. Please note that this article is not a tutorial. Other resources[2] already exist that do a great job of explaining the mechanics of this API in detail. Instead, this article serves as an illustration of what I think is a particularly interesting and undersold benefit of laziness. It does not require knowledge of Haskell, but I’ve provided code examples for those who are interested; they can be safely skipped over.

One interesting benefit of laziness is that I can write an expression that (lazily) generates some data structure, and then, in a separate bit of code, describe how to evaluate that data structure in parallel.

Suppose I have some CPU intensive function that takes an Int and generates an Int (maybe this calculates digits of pi to some arbitrary precision). I’ll call this function expensive.

-- | The type of an expensive function.
expensive :: Int -> Int

Now suppose I wish to apply this function to each element of a list of Ints. I’ll call this function listExpensive.

-- | Apply `expensive` to each element in a list of Ints `xs`
listExpensive :: [Int] -> [Int]
listExpensive xs = fmap expensive xs

Since expensive is a pure, CPU-bound operation, it would make sense to calculate each element of the list in parallel. In other languages, I might fork off multiple threads, supply each thread with a chunk of the list, and then combine their results. If I had access to a higher-level API like Scala’s Future, I might turn expensive into an async operation that runs on a thread pool, and use Future’s combinators to apply the async operation to a list of Ints. Unfortunately, both of these techniques force me to change the definition of listExpensive, either by forking and joining or by having listExpensive return a Future rather than just a list of Ints.
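For contrast, here is a minimal sketch of the fork-and-join approach using only forkIO and MVars from base. (The squaring expensive is a stand-in for the real function, and listExpensiveIO is a name I’ve made up.) Note how the definition itself now has to know about threads, and the type is forced into IO:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Exception (evaluate)

-- A stand-in for the real CPU-bound function.
expensive :: Int -> Int
expensive x = x * x

-- The fork-and-join version. The type is forced to change from
-- [Int] -> [Int] to [Int] -> IO [Int].
listExpensiveIO :: [Int] -> IO [Int]
listExpensiveIO xs = do
  vars <- mapM spawn xs        -- fork one thread per element
  mapM takeMVar vars           -- join: collect results in order
  where
    spawn x = do
      v <- newEmptyMVar
      _ <- forkIO (evaluate (expensive x) >>= putMVar v)
      pure v
```

The evaluate call forces the pure computation inside the forked thread; without it, the thread would hand back an unevaluated thunk and the work would happen on the main thread after all.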

All of the above are viable options in Haskell, but because Haskell is lazy by default, there is one more option: parallel strategies. Parallel strategies separate the definition of listExpensive from the way in which it’s evaluated. In practice, this means I can instruct the runtime to evaluate the elements of listExpensive’s result in parallel without changing its definition.

This works by applying a parallel strategy to the result of the listExpensive function. Because the result of listExpensive is lazy, I can defer the description of how to evaluate it. In other words, I can separate the definition of listExpensive from the way in which it is parallelized.

Haskell’s parallel package provides a set of functions that allow me to describe a strategy for evaluation. With the basic functions I can force evaluation and do that either in parallel or in sequence. These primitives can then be combined with a set of combinators to describe more complex strategies on larger data structures, such as evaluating the elements of a list in parallel. Once this parallel strategy is built, it can be applied to the result of listExpensive, and its lazy result will be evaluated in parallel.
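As a small illustration of building a strategy from the primitives (the fib helper and the parPair' name are mine), here is a strategy that evaluates the first half of a pair in parallel with the second, following the standard rpar/rseq pattern:

```haskell
import Control.Parallel.Strategies

-- A naive Fibonacci, standing in for an expensive computation.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = fib (n - 1) + fib (n - 2)

-- A strategy built from primitives: spark the first component for
-- parallel evaluation (rpar), evaluate the second on the current
-- thread (rseq), then wait for the sparked one before returning.
parPair' :: Strategy (Int, Int)
parPair' (a, b) = do
  a' <- rpar a
  b' <- rseq b
  _  <- rseq a'
  return (a', b')

-- The pair is defined lazily; parPair' describes its evaluation.
both :: (Int, Int)
both = (fib 25, fib 26) `using` parPair'
```

Combinators like evalList then lift a per-element strategy like this into a strategy over a whole list.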

-- | An example of parallelizing the evaluation of listExpensive
-- | without changing its definition.

import Control.Parallel.Strategies -- from the 'parallel' package

-- A list of Ints...  
xs :: [Int]

-- Compute the result of listExpensive in parallel.
(listExpensive xs) `using` evalList (rpar `dot` rdeepseq)

(This paragraph can be skipped, but I think it’s understandable even if you don’t know Haskell.) In the code example above, the infix function `using` applies the parallel strategy on the right (evalList (rpar `dot` rdeepseq)) to the result of listExpensive. The strategy works as follows: evalList is a combinator that converts a strategy on elements into one that applies it to each element of a list. rpar `dot` rdeepseq is the per-element strategy, and it says to evaluate each element fully (rdeepseq) and to do so in parallel (rpar).
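As an aside, the parallel package also ships shorthands for common patterns. Assuming stand-in definitions of expensive and listExpensive (the squaring here is mine), parList rdeepseq expresses essentially the same list strategy more compactly:

```haskell
import Control.Parallel.Strategies

-- A stand-in for the real CPU-bound function.
expensive :: Int -> Int
expensive x = x * x

listExpensive :: [Int] -> [Int]
listExpensive = fmap expensive

-- parList lifts a per-element strategy over a list, sparking each
-- element; parList rdeepseq fully evaluates every element in parallel.
result :: [Int]
result = listExpensive [1 .. 8] `using` parList rdeepseq
```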

So how does this perform? Below is the same program, but with expensive filled in as an inefficient Fibonacci generator. listExpensive is applied to a list of six random numbers between 40 and 45. In parallel, this takes slightly less than 6 seconds. In series, it takes around 13.

-- Compile with `ghc -O2 -threaded -rtsopts` and run with `+RTS -N`
-- so the runtime can use multiple cores.
import Control.Parallel.Strategies
import System.Random

expensive :: Int -> Int
expensive 0 = 0
expensive 1 = 1
expensive i = expensive (i-2) + expensive (i-1)

listExpensive :: [Int] -> [Int]
listExpensive = fmap expensive

main :: IO ()
main = do
  let rs = take 6 (randomRs (40, 45) (mkStdGen 12))
  let l = (listExpensive rs) `using` evalList (rpar `dot` rdeepseq)
  print l

Footnotes

  1. See my post Selling Laziness for more examples.

  2. “Parallel and Concurrent Programming in Haskell” is a wonderful book. Its website contains a link to a free online copy.