This is part 1 of a 2-part series I’m writing about Clojure concurrency; part 2 is here
where we are today
We demand more from our computer programs than ever before, and with those demands comes the need to process more data, faster and more efficiently. For decades, programmers writing user-land code have been able to mostly ignore concurrency, counting instead on faster clock speeds to make their programs run faster. We’ve had multi-threaded runtimes for a long time, but the added complexity of writing concurrent code often makes it seem not worth the effort.
I could talk all about Moore’s law and how the flattening of single-core clock speeds leads to all sorts of peril, but I’d rather not go that direction. Instead, I like to think of learning concurrent programming as a chance to maximize the resources I’ve been given as a developer. If I were a truck driver, I would be tasked with keeping my truck as full as possible as I make runs across the country to deliver my goods. If I drove with my truck half empty all the time, you’d say I’m being inefficient. If we want to maximize the resources that hardware makers are creating, we’re going to have to get better at multi-threaded, parallel and concurrent programming. 24-core servers are the norm these days, and utilizing them is really important. To do any less would simply be a waste.
When you think about the means of production and value creation, the goal is to take raw materials and turn them into something more valuable than you started with. If you drill oil out of the ground, the goal is to take that oil and refine it, distribute it, and turn it into something more valuable. Programmers use power, CPUs, silicon and hard drives as raw materials. We combine them with our ideas and our programs to convert the raw materials into something more valuable than what we started with. Creating good software is far more expensive than the hardware part, but why not squeeze the most out of the raw materials we’ve already been given?
As a Rubyist learning Clojure, I’ve grown used to doing concurrency the “Ruby way” - lots of forked processes and minimal control. Headache usually ensues. Sure, Ruby has the JRuby and Rubinius runtimes, which have their own strengths and weaknesses, but both are usually demoted to second-class citizens in the Ruby community. Perhaps you’ve worked in a runtime that forces a process-level parallelism pattern. After spending some time with Clojure, I’ve realized that it doesn’t have to be this way. Writing concurrent code can be simple, and can help you write programs that do more than you thought was possible.
what’s so hard about concurrent programming?
The general wisdom that I hear about concurrent programming is, “stay away from it unless you know what you’re doing”. Writing truly multi-threaded programs has been an art rarely embraced outside of those writing system-level code, or perhaps someone looking for extreme levels of pain. One of the big problems concurrent programs face is dealing with shared data. Most parallel and concurrent programs deal with shared data in a way that’s hard to reason about and painful to get correct. Locks, semaphores and mutexes are all important and necessary tools, but they often approach the problems of concurrent programming from the completely wrong angle. Concurrent locks were invented at a time when a differently architected machine ruled the world. Gone are the days of single or dual core computers, and gone are the days when locks will help you squeeze performance out of your programs with ease. Locks just don’t scale the way they used to. It’s time to embrace something different.
why clojure for concurrency?
Clojure addresses all these concurrency headaches with several features.
- Persistent, immutable data structures for default collection types
- A software transactional memory system
- Several semantic language level concurrency primitives
We’ll talk about each one of these items shortly.
This list of features comes baked into Clojure itself, and as such, they are considered part of the core design of Clojure. Can these features be added to other programming languages with libraries? Yes, they can, but often at a cost to completeness and support. Having concurrency constructs at the language level offers power and flexibility that few libraries can rival. For most other languages, putting these primitives at the library level often means rewriting your application using solid thread-safe data structures (which may or may not exist), and rethinking the flow control of your entire program. In Clojure, switching your code to run concurrently can sometimes mean as little as changing a ‘map’ function to a ‘pmap’! All your data is persistent and immutable by default.
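To make that concrete, here’s a small sketch of the map-to-pmap change. `slow-square` is a made-up stand-in for any pure, reasonably expensive function:

```clojure
;; slow-square is a hypothetical stand-in for any pure, CPU-bound function.
(defn slow-square [n]
  (Thread/sleep 10) ; simulate real work
  (* n n))

;; Sequential version:
(doall (map slow-square (range 8)))
;; Parallel version - just swap map for pmap:
(doall (pmap slow-square (range 8)))
;; Both return (0 1 4 9 16 25 36 49); pmap simply spreads the calls
;; across a pool of threads.
```

Note that pmap pays off when each call does real work; for trivial functions, the coordination overhead outweighs the parallelism.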
persistent data structures
Clojure is able to achieve simple concurrent functionality because its core sequence data structures are persistent. That doesn’t mean persistent like database persistent, but rather persistent in the functional sense: the structures are immutable, and successive versions share structure between writes. We’ll go into this in more depth shortly.
most data structures
When you think about most of your core collection-based data structures (arrays, hash maps and sets), it’s common to think of them as ‘cubby holes’ where your data lives. For example, when you want to switch an item in the middle of an array, you find the memory address for that position and then you change the data at that location. When you mutate the value at a specific location and replace it with another, the value that was there previously is gone forever. Rich Hickey, the creator of Clojure, refers to this as ‘place-oriented programming’: data lives in a certain ‘cubby hole’ in the computer now, and might be replaced with another value at an unknown point in the future.
Why do mutable data structures make it hard to program concurrently? Because to perform a safe write or read operation, the programmer has to be diligent about mutex locking. When a program makes heavy use of locks, it then has to be concerned with incidental complexities like lock ordering, deadlock, lock contention, race conditions, and other nasty problems that are sometimes next to impossible to debug. What makes these impossible to debug? In a multithreaded program, one of the hardest things to reason about is the state of your data. You have to ask yourself questions like, “Is it safe to read this data without acquiring a lock?”, “Am I guaranteed this data won’t change while I’m working with it?”, “Am I allowed to pass this data pointer to another thread safely?”. All of these questions are incidental complexity that comes from working with mutable data structures. Your end-user doesn’t care that you had to spend weeks getting your code thread safe, they only care that it works fast and lets them do what they want. Let’s give them what they want, shall we?
clojure data structures
How are Clojure’s collection data structures different? Like I mentioned earlier, Clojure data structures are persistent, which means they are both immutable and share structure with previous generations when writes occur.
Let’s talk first about immutability. One of the amazing qualities of immutable data is its benefit in multi-threaded programming. Wondering “Am I guaranteed this data won’t change while my thread is reading it or holding on to it?” becomes a non-issue. Why? Because that piece of data you’re holding a reference to is never going to change. It’s immutable, so the incidental complexity of reading it and passing it to another thread goes away. You can hold on to that piece of data as long as you want, and never have to worry about it changing out from under you. In practice, this means that every time you make a change to a Clojure collection, you’re getting back a new immutable collection. Immutability is a boon to the multi-threaded programmer because it allows read operations to scale with significantly less mental overhead and program complexity.
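A quick REPL sketch of that guarantee: “changing” a vector hands you back a new vector, and the one you were already holding is untouched:

```clojure
(def scores [10 20 30])

;; conj returns a brand-new collection...
(def more-scores (conj scores 40))

;; ...and the original is exactly as it was.
scores      ;; => [10 20 30]
more-scores ;; => [10 20 30 40]
```

Any thread holding a reference to `scores` can keep reading it forever; no write anywhere in the program can alter it.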
Clojure’s data structures aren’t just immutable, but are also persistent. With immutability, we know that we get an immutable collection every time we make a modification to an existing collection, but this does not mean that Clojure copies the entire collection every time you make a change to it. Instead, under the hood Clojure does structural sharing between one generation of the collection and the next. Because the Clojure collections use tree structures internally to represent data, only a small amount of data needs to be created or updated because of a change. This is called persistence. If the entire collection needed to be copied every time a write operation occurred, performance would take a nose-dive for data that changes quickly. Persistence helps the multi-threaded programmer think about their problem domain instead of worrying about collection performance.
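You can see the effect of persistence from the outside by keeping several generations of a large collection alive at once. The sharing itself is internal, but the behavior below would be prohibitively slow if each `assoc` copied a million elements:

```clojure
;; A large vector - a full copy per write would be far too expensive.
(def gen-0 (vec (range 1000000)))

;; Each assoc produces a new generation that shares almost all of its
;; internal tree structure with the previous one; only the path to the
;; changed element is freshly allocated.
(def gen-1 (assoc gen-0 0 :changed))
(def gen-2 (assoc gen-1 999999 :also-changed))

;; Every generation remains intact and cheap to keep around.
(nth gen-0 0)      ;; => 0
(nth gen-2 0)      ;; => :changed
(nth gen-2 999999) ;; => :also-changed
```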
I would argue that immutability resolves most of the difficult problems of concurrency. When you choose immutability, you easily sidestep many of the pitfalls and perils that await you in an uncontrolled, mutating world.
Let’s think a little bit about how the real world works and how that might relate to parallel and concurrent programming. If you’re watching a professional baseball game, you’re ‘reading’ the state of the world without having to worry about who else is reading it. You can happily eat your hotdog and drink your beer and watch the game without having to think about what the other fans are doing. Your eyes receive images based on the state of the world at the moment you observed it. You don’t have to stop everyone else from observing the game so that only you can watch - that would be absurd. Instead, everyone can safely watch the game as their own independent person.
When you think about scaling up read operations, thinking in terms of immutability is a huge win. If all you’re doing is concurrent reads, immutable and persistent data structures are usually all you need.
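In Clojure the ballpark scenario is literal: any number of threads can read the same immutable value with no coordination at all. A small sketch using futures (the names `game` and `fans` are just illustrative):

```clojure
;; One immutable 'state of the game' that every thread shares.
(def game {:inning 7 :score {:home 3 :away 1}})

;; Four reader threads, no locks anywhere - each just reads the value.
(def fans
  (doall (for [_ (range 4)]
           (future (get-in game [:score :home])))))

(map deref fans) ;; => (3 3 3 3)

(shutdown-agents) ; let the JVM exit once the future thread pool is idle
```

No reader can interfere with another, because there is nothing to interfere with: `game` is a value, not a place.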
In the next post, I’ll cover some of the other concurrency primitives that Clojure gives you, and how to use them effectively.