So far in this series, we’ve discussed GC::INTERNAL_CONSTANTS, the Tri-Color Mark and Sweep algorithm, Generational GC and Incremental GC. In this post, we’ll build on what we’ve learned to discuss the newest addition to Ruby’s GC: compaction.

Fragmentation

Before we dive into compaction, we need to learn about fragmentation. Fragmentation is the term we use to describe memory that is allocated non-contiguously. In other words, there are gaps of memory that don’t store meaningful information scattered in between the places that do.

More concretely, in the Ruby Heap, fragmentation is when there are free, unallocated slots (without RVALUES) in between allocated slots (with RVALUES). This diagram shows us an extreme example of fragmentation, where we can see that there are many unallocated slots in between the slots which actually have meaningful information, our familiar RVALUES:

[Diagram: a sparse, fragmented heap with many free slots in between RVALUES]
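If you want a rough feel for how full (or sparse) your own process’ heap is, GC.stat exposes slot counts we can compare. This is only a proxy, since it tells us how many free slots exist rather than where they are, but it’s a quick way to poke at the idea:

# Compare how many slots currently hold live RVALUES with how many slots
# the heap has overall. A low ratio means lots of free slots somewhere.
stats = GC.stat
live = stats[:heap_live_slots]
available = stats[:heap_available_slots]

puts format("%d of %d slots are live (%.1f%% occupied)",
            live, available, 100.0 * live / available)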

Why compact?

Compaction is the solution to fragmentation. Well okay, but what’s wrong with fragmentation? What is compaction solving?

The most obvious problem is that a fragmented heap will take up more overall memory than a compacted heap. If we look again at our extreme example from above, all of those RVALUES could have fit into just one page, instead of five. Like most things, memory can be a limited resource, so obviously it’s beneficial to utilize it efficiently.

[Diagram: the same RVALUES compacted onto a single page]

But it turns out there are other reasons to compact, too. Let’s discuss some of them!

CPU Cache Hits

The CPU loads small chunks of memory into a cache which is much quicker to access than main memory. It takes less time to access an RVALUE that is already in the cache than one that is not.

So it benefits us to have as many RVALUES as possible in each chunk of memory that the CPU loads into its cache. That way, there’s a higher likelihood that an RVALUE we want to access is already available to us in the cache. If our Heap is fragmented, however, then each chunk the CPU loads into its cache contains a bunch of free slots, which don’t give us faster access to any real RVALUE. But if we compact the Heap, each chunk in the CPU cache will be packed with RVALUES, so it’s more likely to contain an RVALUE we were looking to access.

Copy on Write Friendliness

Another motivation for compaction is to improve copy-on-write friendliness. In Ruby, when a process is forked, the child process’ memory initially just points to the parent process’ memory.

However, once the child wants to write into some of that memory, it can no longer share it with the parent. Instead, the memory the child process wants to write to is copied into the child process’ own memory. It turns out, though, that the OS copies memory in chunks the size of OS pages, much bigger than the small bit of memory the child process may actually want to write into.

In a fragmented heap, the free slot the child writes into sits on a page that is mostly full of already-allocated RVALUES, and the remaining free slots are scattered across many other pages. So each time the child process wants to write, the OS is likely to have to copy yet another page.

But in a compacted heap, the free slots are grouped together, so most (if not all) of the memory copied down in that chunk would be unallocated, free slots. If the child process keeps writing, those writes land on pages that have already been copied, so the OS won’t need to keep copying over more chunks. This is what makes a compacted heap more copy-on-write friendly.
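Here’s a minimal sketch of that mechanism in Ruby. It’s purely illustrative (and fork isn’t available on every platform), so treat it as a way to picture what’s happening rather than as a benchmark:

big_array = Array.new(1_000_000) { |i| "string #{i}" }

pid = fork do
  # Right after the fork, the child still shares the parent's heap pages.
  # Writing into these objects forces the OS to copy the affected pages
  # (whole page-sized chunks, not individual RVALUES) into the child.
  big_array.each { |string| string.upcase! }
end

Process.wait(pid)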

Efficient Memory Usage

This last reason for compaction isn’t actually relevant to Ruby (yet…). Feel free to skip it if you want.

Another reason often cited for compaction is efficient memory usage. As we know, in Ruby, each RVALUE is exactly 40 bytes. But, in some other memory systems, the objects aren’t all the exact same size.

In these systems, we can imagine cases where the total free memory is more than enough for a new object, but it’s fragmented in a way that leaves no single gap large enough to hold it.

[Diagram: a new object too big to fit into any single free gap in a fragmented memory space]

If we compact this fragmented memory space, we would have room:

[Diagram: after compaction, the new object fits]

This is often the primary reason cited for compaction. Except like I noted, it doesn’t apply to Ruby’s GC because each RVALUE is exactly the same size. But… there is a world where the future of Ruby GC has RVALUES with different sizes (🤯), which would mean that compaction would also be helpful for more efficient memory usage.

Two Finger Compaction

Anyways, enough about why compact. Let’s discuss how Ruby compaction works. The algorithm Ruby uses for compaction is called a two-finger compaction algorithm. I know, I know, tri-color, two-finger… I’m also excited to see what one-prefixed strategy Ruby’s GC employs in the future!

This algorithm was actually first written about in “The Programming Language LISP: Its Operation and Applications” by Edmund C. Berkeley and Daniel G. Bobrow in 1964. It’s quite incredible that this is the algorithm Ruby uses today (57 years later!), so I thought it’d be fun to share some of the original text they wrote:

Two pointers are set, one to the top of free storage and one to the bottom. The top pointer scans words, advancing downward, looking for one not marked. When one is found, the bottom pointer scans words, advancing upward, looking for a marked word. When one is found, it is moved into the location identified by the other pointer. A pointer is left at the location the word was moved from, pointing to where it was moved to. The top pointer is then advanced as before. The process terminates when the two pointers meet.

Okay, now to decipher this text. This snippet is actually all discussing the first part of two-finger compaction, which is moving objects.

Move Objects

The two pointers are the two “fingers” in the algorithm. One starts at the beginning of the heap and the other at the end. The gist of the strategy is that these two pointers are going to converge, incrementing from the beginning and decrementing from the end, filling in free slots from the beginning with RVALUES in slots from the end.

We can write this as a small Ruby snippet:

left = 0
right = heap.size - 1

while left < right
  # advance the left finger until it points at an empty slot
  left += 1 while left < right && !heap[left].empty?
  # retreat the right finger until it points at a live RVALUE
  right -= 1 while left < right && heap[right].empty?

  # move the RVALUE from the right-hand slot into the empty left-hand slot
  heap[left], heap[right] = heap[right], heap[left] if left < right
end

And have an accompanying diagram (left is top, right is bottom):

[Diagram: the two fingers converging as RVALUES are moved]
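To watch the two fingers converge on something concrete, here’s a toy harness built around the snippet above. The Slot struct and two_finger_compact! method are just illustrative stand-ins, not anything from CRuby:

# Purely illustrative: a toy "slot" that is empty when it holds no value.
Slot = Struct.new(:value) do
  def empty?
    value.nil?
  end
end

def two_finger_compact!(heap)
  left = 0
  right = heap.size - 1

  while left < right
    left += 1 while left < right && !heap[left].empty?
    right -= 1 while left < right && heap[right].empty?
    heap[left], heap[right] = heap[right], heap[left] if left < right
  end

  heap
end

# Live values scattered across the heap, with free (nil) slots in between.
heap = [1, nil, 2, nil, nil, 3, nil, 4].map { |v| Slot.new(v) }
two_finger_compact!(heap)

heap.map(&:value)
# => [1, 4, 2, 3, nil, nil, nil, nil]

Notice the compacted values end up as 1, 4, 2, 3: two-finger compaction fills holes from the back of the heap, so it doesn’t preserve the original order of objects.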

Update references

But, if you’ve been following along in this series, you may remember that some RVALUES can reference other RVALUES. A simple example of this is an array. An array itself will occupy an RVALUE which can reference other RVALUES that are its elements.

So, if we move one of the elements in an array during the move objects phase, we need to make sure the original array knows about its new location. In place of the RVALUE that was moved, Ruby’s GC leaves a forwarding address, which records the new location of the RVALUE.

[Diagram: a moved RVALUE leaving a forwarding address in its old slot]

This brings us to our second phase of the Two-Finger Compaction algorithm: Updating References. Once Ruby’s GC has moved RVALUES and left forwarding addresses, it needs to actually update the references and clear the forwarding addresses. If it didn’t update the references, then those forwarding addresses would need to stay in perpetuity, making the newly empty slots useless.

This process is fairly straightforward. Ruby’s GC iterates over every slot in the Heap. If the slot is empty, it moves on. If the slot is occupied by an RVALUE, it looks at each of that RVALUE’s references. For each reference, it checks what is in memory at the address it currently holds, and if there’s a forwarding address there, it updates the reference to point to the new location.
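We can sketch this phase in the same toy spirit as the move objects snippet. The ToyObject and ForwardingAddress structs here are hypothetical stand-ins, not CRuby’s actual representation, but they show the “chase the forwarding address” idea:

# Hypothetical toy model: an object knows which heap indices it references,
# and a moved object leaves a ForwardingAddress behind in its old slot.
ToyObject         = Struct.new(:name, :references)
ForwardingAddress = Struct.new(:new_index)

def update_references!(heap)
  heap.each do |slot|
    next unless slot.is_a?(ToyObject)

    # For each reference, look at what currently lives at that address; if
    # it's a forwarding address, follow it to the object's new location.
    slot.references.map! do |index|
      heap[index].is_a?(ForwardingAddress) ? heap[index].new_index : index
    end
  end

  # Once every reference is rewritten, the forwarding addresses can be
  # cleared, leaving genuinely free slots behind.
  heap.map! { |slot| slot.is_a?(ForwardingAddress) ? nil : slot }
end

# An "array" at slot 0 whose element was moved from slot 5 to slot 2.
heap = [ToyObject.new("array", [5]), nil, ToyObject.new("element", []),
        nil, nil, ForwardingAddress.new(2)]
update_references!(heap)

heap[0].references
# => [2]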

Caveat: C extensions

There is one big caveat in the reference updating step. It assumes that Ruby’s GC knows how to read an RVALUE’s references. For built-in Ruby objects, this is fairly straightforward. But, Ruby also allows for objects to be defined through C extensions. (Small aside, Peter Zhu is writing a great series about C extensions: A Rubyist’s Walk Along the C-Side. It is definitely worth a read if you’re interested in learning more here.)

Some of these C extensions indicate to Ruby how they store their references, so their objects’ references can be updated. But others don’t. In those cases, the RVALUES can’t be moved during compaction; instead, they are “pinned” in place. This is a small percentage of objects, so it doesn’t drastically interfere with the goals we outlined earlier.

Current state of compaction

It’s important to also note that, as of Ruby 3.0, compaction is still under development and currently degrades performance on most apps, so it is not turned on automatically. It is still possible to turn it on manually using GC.auto_compact = true, but as the documentation notes, “Enabling compaction will degrade performance on major collections.”
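If you want to experiment with it yourself, compaction can be triggered or enabled from Ruby code. GC.compact has been around since Ruby 2.7, and GC.auto_compact= and GC.latest_compact_info arrived with 3.0:

# A one-off, manual compaction:
GC.compact

# How many objects of each type were considered for moving, and how many
# actually moved, during the most recent compaction:
GC.latest_compact_info

# Or, with the performance caveat above, compact on every major GC:
GC.auto_compact = true
GC.auto_compact
# => true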

TL;DR

And that’s all for now! In the next (mini) post in this series, I’ll walk through the implications of compaction for Object IDs.

Let’s summarize the key terms from this post:

  • fragmentation: when memory is allocated non-contiguously
  • compaction: the solution to fragmentation, grouping allocated memory together
  • two-finger compaction: the algorithm Ruby uses to solve fragmentation