You're reading the Ruby/Rails performance newsletter by Speedshop.

What's swap memory, why should you care, and what's its impact on performance?

Memory is an important resource in any application. Every program has to access data, and that data can live in many different places.

This matters for performance because where we store data determines how long it takes to access it.

Swap memory is a kind of "special" memory behavior used by Linux (it exists on Windows and Mac too, but nobody deploys there and it works slightly differently in those environments, so we're gonna skip that). When chunks of memory are infrequently accessed, or the box is low on memory, the kernel may "page out" those memory areas to disk. That's swap memory.

So, when you're accessing memory that's been swapped, you're accessing memory that isn't on a RAM stick - instead, it's on an SSD or HDD.
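
If you want to see how much of your box's memory is sitting in swap right now, Linux reports it in /proc/meminfo (the same numbers `free` shows you). Here's a minimal Ruby sketch of reading it, assuming a Linux box:

```ruby
# How much memory is sitting in swap right now?
# Minimal sketch: parses /proc/meminfo, which reports sizes in kB (Linux only).
meminfo = File.readlines("/proc/meminfo").to_h do |line|
  key, value = line.split(":")
  [key, value.to_i]
end

swap_total_kb = meminfo["SwapTotal"]
swap_used_kb  = swap_total_kb - meminfo["SwapFree"]
puts "Swap in use: #{swap_used_kb / 1024} MB of #{swap_total_kb / 1024} MB"
```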

You might be aware of the classic "latency numbers every programmer should know":
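
Roughly: a main memory reference takes about 100 nanoseconds, a random read from an SSD takes about 150 microseconds, and a seek on a spinning disk takes about 10 milliseconds. Exact numbers vary by hardware, but the orders of magnitude are the point.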

That should give you an idea of the latency penalty we're talking about here: accessing swap is 250-10,000x slower than a read from RAM. That's a huge difference. It's not as bad as it used to be, thanks to the prevalence of solid state drives, but it can still bring an ordinary program to a crawl.

So, any time you see swap memory usage on your cloud provider dashboard, you should freak out, right? You should just disable swapping, right? Obviously bad, yeah?

Not so fast.

What's the alternative to swap? Killing processes. On Linux, if you disable swap, something called the "OOM Killer" (which is a badass name) starts killing processes once you run out of memory. It's configurable, yes, but what if it kills the wrong process? Kills something critical? It's a dangerous strategy. It's far more robust to keep processes alive but slow than to have them dead!
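
If you're curious which of your processes the OOM Killer would reach for first, Linux exposes a per-process score at /proc/<pid>/oom_score (higher means a more likely victim). A rough Ruby sketch (Ruby 2.7+, Linux only):

```ruby
# Which processes would the OOM Killer target first?
# Rough sketch: a higher oom_score means a more likely victim.
scores = Dir.glob("/proc/[0-9]*").filter_map do |dir|
  name  = File.read("#{dir}/comm").strip
  score = File.read("#{dir}/oom_score").to_i
  [score, File.basename(dir), name]
rescue Errno::ENOENT, Errno::EACCES
  nil # process exited or isn't readable; skip it
end

scores.sort.reverse.first(10).each do |score, pid, name|
  puts format("%6d  %8s  %s", score, pid, name)
end
```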

Secondly, Linux will sometimes start swapping long before memory has run out. What it's doing is trying to free up RAM by moving infrequently-accessed pages to disk. Why free up RAM? To get a bigger file cache.

What's the file cache? It's the thing that ate your RAM!!!

The file cache (also called the disk cache or page cache) is Linux's "anti-swap". It's trying to make file access faster by keeping recently read file data straight in RAM. It's clever, and it generally increases performance. More file cache == faster apps. So allowing your app to swap out a bit can actually make things faster. On Linux, this tradeoff is tunable via the "swappiness" value.
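
You can see where your box sits on that dial without any tooling at all: swappiness is just a number, higher values make the kernel more willing to swap pages out to grow the file cache, and most distros default to 60. A one-line Ruby sketch, assuming Linux:

```ruby
# Print the current swappiness value (Linux only; most distros default to 60).
puts File.read("/proc/sys/vm/swappiness").strip
```

To change it at runtime, something like `sudo sysctl -w vm.swappiness=10` works; the value you actually want depends on your workload.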

So, when should you be worried? Be worried when total memory usage is high (greater than 90%) and page-in/page-out activity is increasing. We don't actually care if memory is sitting in swap, but we do care if memory is frequently being moved in and out of swap (paging).

Your APM or server monitoring solution should be able to give you this information (or just watch for sudden increases in swap when memory usage is high). It is also available through `sar -B`, part of the sysstat package.
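
If you'd rather check by hand, the kernel keeps cumulative counters of pages swapped in and out since boot in /proc/vmstat (pswpin and pswpout). Here's a rough Ruby sketch that samples them over five seconds, assuming a Linux box; sustained nonzero rates while memory is nearly full are the warning sign:

```ruby
# Is this box actively paging to and from swap right now?
# Rough sketch: samples /proc/vmstat twice and diffs the counters.
# pswpin/pswpout are cumulative pages swapped in/out since boot
# (a page is typically 4 KB).
def swap_counters
  stats = File.readlines("/proc/vmstat").to_h { |line| line.split(" ", 2) }
  [stats["pswpin"].to_i, stats["pswpout"].to_i]
end

in_before, out_before = swap_counters
sleep 5
in_after, out_after = swap_counters

puts "pages swapped in:  #{(in_after - in_before) / 5.0}/sec"
puts "pages swapped out: #{(out_after - out_before) / 5.0}/sec"
```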

-Nate
You can share this email with this permalink: https://mailchi.mp/railsspeed/whats-swap-memory-and-when-should-you-be-concerned?e=[UNIQID]

Copyright © 2020 Nate Berkopec, All rights reserved.

