Overview

What Is This?

This series of blog posts is intended to provide some basic knowledge and tools, and to show how I’d solve some common memory management issues. This first entry is somewhat bland in that it just describes the problem space. Later entries in the series cover how memory works, methods for identifying problems, generic tools for addressing and combating issues, and metrics, then put it all together to show how to fix problems. Some of it may be useful, some not so much. Take everything with a grain of salt. Come to your own conclusions.

Almost exclusively, I’ll be using Windows 10, Visual Studio, and very simple C/C++. I assume basic C/C++ programming skills, that you know what a pointer is, and that you have a general idea of how a computer works. I’ll cover some basic memory information, but there are more resources for this elsewhere. The most advanced we’ll get on the code side is overloading the new operator. It’s a series about memory management, not l33t c0ding sk1llz.
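To set expectations for how advanced that gets, here’s a minimal sketch of replacing the global new and delete operators. The byte counter is purely illustrative; real replacements usually forward to a custom allocator instead of malloc.

```cpp
#include <cstdlib>
#include <new>

// Illustrative bookkeeping: total bytes requested through operator new.
static std::size_t g_totalAllocated = 0;

void* operator new(std::size_t size) {
    g_totalAllocated += size;
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
    std::free(p);
}
```

Once these are in place, every `new`/`delete` in the program flows through them, which is the hook most of the later techniques hang off of.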

Do We Need Memory Management?

My hope is no. Instead, I hope you use this information to fix problems by deleting code, complexity, and bugs, rather than adding them. Code is a liability, not a sign of productivity.

That being said, it’s absolutely ok and encouraged to experiment, fail, and learn. We learn more from our failures than our successes, so write weird allocators, come up with crazy new ideas, try things.

Prove It’s A Problem

First, prove a problem exists and describe it in some concrete way. This doesn’t mean you have to spend hours in profilers or learn all sorts of crazy statistical nomenclature for “lots”, “some”, and “a little”; it means you have to be able to point and say “This is a problem because X.”

Is your project running out of memory? Is dynamic allocation causing performance problems? These are both great places to start, but you also need to quantify it. Is the win condition simply “it works”, or do you know (or can you estimate) what is needed? How much time are memory operations (allocation, deallocation, garbage collection, cache misses, etc.) taking? If the goal is simply to experiment and learn, that’s a valid problem space too.

You’ll want to make a concrete baseline that you can gauge future progress from. A great example is “loading level X results in Y memory used with a Z fragmentation level.”

Prove ImPROVEments

Once you’ve got it quantified, you can later compare your changes to the baseline above. If it’s better, good job! If it’s worse, at least you learned something. If you can’t objectively gauge progress, at best you’re randomly changing things and at worst you’re adding more problems.

Anyone can write thousands of lines of code and point to it as a masterpiece, but you’ve got to also be able to show that it’s better. If you can’t prove it, then you haven’t done anything useful.

What Exactly Are We Solving?

Fix Causes, Not Symptoms

A large focus of this blog series will be on fixing causes, not symptoms. I’ll present plenty of code snippets, but probably very little complete code. My intent is that you understand the topic, rather than simply copy-paste code to magically make all your woes disappear. Fixing code by adding more code is often a losing battle. The cake is a lie. Nobody ever said you had to take either pill.

Memory Usage

Using memory in and of itself isn’t an issue, but our concern is often how much is in use at one time. If you’re working on an embedded system and you’ve got 32KB of memory, your definition of the problem space is vastly different than someone running with a terabyte of memory.

Being out of memory is easy to diagnose: everything dies in a fire. The why can be an interesting journey, however. It can simply be “too much stuff!” or it could be due to other circumstances. To diagnose what can be reduced, you’ll need to know what’s in use, so there are posts coming on metrics gathering, tooling, and memory tagging.

Global memory usage is, at the same time, the easiest and hardest problem to solve. Easy problems are usually identified by data gathering: large textures on small items, unused or underused buffers, etc. Always pick the low hanging fruit first.

Fragmentation

If you ask for 1MB of memory and the allocator says it has plenty of space available, the request can still fail. This may be due to fragmentation, where the allocator is unable to find a single contiguous block to give back. Some allocation strategies take a large block of memory and segment it into smaller blocks. When two allocations occur and the first is then freed, a hole is created in the heap. The memory is free, but only requests of the same size or smaller can fit in that block. As memory is allocated and freed, these holes appear, change size, and move around, causing “fragments” of free memory.

Fragmentation can be calculated as a percentage:

(total_free_space - size_of_largest_free_segment) / total_free_space

Generally, the lower the fragmentation percentage, the more likely that the request for memory will succeed (assuming we’re not out of memory.)
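The formula above is simple enough to sketch directly. This version takes the sizes of the free segments as input; a real allocator would walk its own free list to gather them.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Fragmentation as a fraction in [0, 1]: 0 means all free memory is one
// contiguous block, values near 1 mean it is scattered into small pieces.
double Fragmentation(const std::vector<std::size_t>& freeSegments) {
    std::size_t total = 0, largest = 0;
    for (std::size_t s : freeSegments) {
        total += s;
        largest = std::max(largest, s);
    }
    if (total == 0)
        return 0.0; // no free memory at all, so nothing to fragment
    return static_cast<double>(total - largest) / static_cast<double>(total);
}
```

For example, a heap with two 512-byte free segments has 1024 bytes free but a fragmentation level of 0.5: a 600-byte request would fail despite the available total.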

See also: Measuring the Impact of the Linux Memory Manager

Waste

Another source of memory usage is overhead caused by some allocation strategies. For example, when splitting segments of memory, each allocation has a reserved section at the start and possibly the end of the allocation describing it or performing special functionality. It may allow the heap to be walked like a linked list from beginning to end, the ability to query the size of a block of memory, or even act as a guard to detect overwrites. If there are a large number of allocations, this overhead can cause excessive memory usage.

For custom allocators, this can be tightly controlled, but for generic allocators or operating system level ones, you’re sort of out of luck here.
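To make the overhead concrete, here’s a hypothetical per-allocation header for a segment-style allocator. The field names and sizes are illustrative, not from any particular heap implementation.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical bookkeeping placed at the start of each allocation.
struct AllocationHeader {
    std::uint32_t size;       // size of the user-visible block
    std::uint32_t guard;      // sentinel checked on free to detect overwrites
    AllocationHeader* next;   // lets the heap be walked like a linked list
};

// Fraction of each allocation's footprint spent on the header.
double OverheadFraction(std::size_t allocationSize) {
    return static_cast<double>(sizeof(AllocationHeader)) /
           static_cast<double>(sizeof(AllocationHeader) + allocationSize);
}
```

With a 16-byte header on a 64-bit build, a 16-byte allocation is half bookkeeping, while a 1KB allocation barely notices it. This is why heaps full of tiny allocations can show surprising totals.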

Performance

Dynamic allocation is a silent performance killer. Substantial portions of video game level loads are spent doing dynamic memory allocation. Individually, they’re not much, but they add up quickly.

Another issue is that most allocations are for temporary data. Passing a C style string to a function that takes a std::string by reference? Allocation. Followed by a deallocation. The allocation didn’t even stick around, so we paid a high cost for something we didn’t even get much use out of.
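Here’s a sketch of that pattern and one way around it, assuming C++17. Note that very short strings may dodge the allocation anyway via the small-string optimization, but the general shape holds.

```cpp
#include <cstddef>
#include <string>
#include <string_view>

// Calling this with a C string materializes a temporary std::string:
// a heap allocation on the way in, a deallocation on the way out.
std::size_t CountSpaces(const std::string& s) {
    std::size_t n = 0;
    for (char c : s)
        if (c == ' ') ++n;
    return n;
}

// std::string_view (C++17) just wraps the existing characters: no allocation.
std::size_t CountSpacesView(std::string_view s) {
    std::size_t n = 0;
    for (char c : s)
        if (c == ' ') ++n;
    return n;
}
```

Both calls look identical at the call site; only the second avoids paying the allocator for a string that dies an instant later.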

Metrics help a lot here, but another way to track down temporary allocations is to simply put a breakpoint in your allocation function and see who calls it.

The Care And Feeding Of Bugs

Leaks

Memory leaks occur when an allocation is never freed. They can be easily identified by watching task manager. Perform a repeated action over and over again (idle, load a level, fire a gun, etc.) and watch to see if memory steadily increases. If it does, you’ve got a leak. Figuring out who is allocating the memory can be done by putting a breakpoint in the allocator and hitting it repeatedly (a poor man’s sampling profiler,) by using memory tagging to report allocations by type, or using metrics that track allocations.

Once the allocation is identified, it’s a matter of understanding the surrounding code to see where it should have been freed.
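One lightweight version of “metrics that track allocations” is just a live-allocation counter. This sketch wraps malloc/free for illustration; the same idea applies to an overloaded operator new.

```cpp
#include <cstddef>
#include <cstdlib>

// If this number keeps climbing while you repeat the same action
// (idle, load a level, fire a gun), something is leaking.
static long g_liveAllocations = 0;

void* TrackedAlloc(std::size_t size) {
    ++g_liveAllocations;
    return std::malloc(size);
}

void TrackedFree(void* p) {
    if (p) --g_liveAllocations;
    std::free(p);
}
```

Snapshot the counter before and after each repetition of the action; a steadily growing delta is the leak signature described above, without needing task manager at all.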

Use-After-Free

Use-after-free bugs occur when memory is read from or written to after it has been freed. The memory may be invalid and cause an access violation, be owned by another system, or just be sitting waiting for someone else to come along and use it. The biggest problem with use-after-free is that writes can corrupt someone else’s memory. This is a neverending source of bugs and frustration, especially when dealing with multithreaded environments.

For heap memory, many allocators have the option to fill memory with special filler bytes based on its state. In Windows, the debug heaps use several fill values, some of which include:

  • 0xcd – The memory was [c]reated but not filled with data yet.
  • 0xdd – The memory was [d]eleted and should not be used.
  • 0xfd – No Man’s Land – this memory should never be used.
(via https://docs.microsoft.com/en-us/visualstudio/debugger/crt-debug-heap-details)

For stack memory, Visual Studio’s runtime error checks (the /RTC1 option, enabled by default in debug builds) fill local variables with 0xcc. If they are used before being set to anything, it makes the problem at least slightly more obvious.

If and when you write your own custom allocators, I highly suggest supporting optional fills, as they significantly ease debugging crashes later. Making the fills toggleable, so the entire application doesn’t have to be recompiled between debug and release, is incredibly valuable.
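A minimal sketch of that toggleable-fill idea, reusing the CRT debug heap’s fill values for familiarity (the wrapper itself is illustrative, not any particular allocator’s API):

```cpp
#include <cstdlib>
#include <cstring>
#include <cstdint>

// Fill values mirroring the CRT debug heap convention.
constexpr std::uint8_t kFillCreated = 0xCD;
constexpr std::uint8_t kFillDeleted = 0xDD;

// Runtime toggle: flip this from a console or config file,
// no recompile required.
static bool g_enableFills = true;

void* DebugAlloc(std::size_t size) {
    void* p = std::malloc(size);
    if (p && g_enableFills)
        std::memset(p, kFillCreated, size); // uninitialized reads stand out
    return p;
}

void DebugFree(void* p, std::size_t size) {
    if (p && g_enableFills)
        std::memset(p, kFillDeleted, size); // use-after-free reads stand out
    std::free(p);
}
```

When a crash dump shows a pointer full of 0xDDDDDDDD, you immediately know you’re looking at a use-after-free rather than guessing at random corruption.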

Double-free

Double-frees are exactly what they sound like. Some code freed the memory and then later tried to free it again. This often occurs when an object has a Shutdown function that the destructor calls. If Shutdown doesn’t clear the pointer, the destructor’s call to Shutdown will free it again. Sadly, freeing the memory a second time does not, in fact, add more free memory back into the system. Neither does downloading more RAM, sorry folks.
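The Shutdown pattern and its fix look something like this (the class is illustrative, but the bug shape is the common one):

```cpp
#include <cstdlib>

struct System {
    int* buffer = nullptr;

    void Init() {
        buffer = static_cast<int*>(std::malloc(sizeof(int) * 64));
    }

    void Shutdown() {
        std::free(buffer);
        buffer = nullptr; // without this line, ~System() double-frees
    }

    // Destructor calls Shutdown; safe only because Shutdown nulls the
    // pointer and free(nullptr) is a no-op.
    ~System() { Shutdown(); }
};
```

Clearing the pointer turns the second free into a harmless no-op, which is the cheap, delete-code-not-add-code fix for this whole class of bug.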

Most allocators that use sign-posts, like the segmenting allocator described above, can detect double-frees where the memory was truly freed twice. Few allocators can detect double-frees where the block was reallocated to someone else between the two frees.

Overrun and Underrun

Overrun occurs when someone writes past the end of an array or other buffer. Underrun, which is actually pretty rare, occurs when someone writes prior to the start of the allocated region.

Heaps will often detect this by putting guard bytes around the allocation. The guard bytes are checked when the memory is freed and if they’re not intact, the application crashes. This approach occurs after the fact, so while you know that memory was corrupted, you don’t know the culprit.
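A guard-byte scheme can be sketched in a few lines. The layout and the 0xFD value echo the “no man’s land” convention mentioned earlier; everything else here is an illustrative toy, not a production allocator.

```cpp
#include <cstdlib>
#include <cstring>
#include <cstdint>

constexpr std::uint8_t kGuardByte = 0xFD; // "no man's land" pattern
constexpr std::size_t kGuardSize = 4;

// Bracket the user block with guard bytes on both sides.
void* GuardedAlloc(std::size_t size) {
    auto* raw = static_cast<std::uint8_t*>(std::malloc(size + 2 * kGuardSize));
    if (!raw) return nullptr;
    std::memset(raw, kGuardByte, kGuardSize);                      // front guard
    std::memset(raw + kGuardSize + size, kGuardByte, kGuardSize);  // rear guard
    return raw + kGuardSize;
}

// Returns true if both guards are intact; false means an under- or overrun
// happened at some point before the free. (A real heap would crash here.)
bool GuardedFree(void* p, std::size_t size) {
    auto* raw = static_cast<std::uint8_t*>(p) - kGuardSize;
    bool intact = true;
    for (std::size_t i = 0; i < kGuardSize; ++i) {
        if (raw[i] != kGuardByte) intact = false;                     // underrun
        if (raw[kGuardSize + size + i] != kGuardByte) intact = false; // overrun
    }
    std::free(raw);
    return intact;
}
```

Note the limitation discussed above: the check only fires at free time, so the corrupted guard tells you *that* an overrun happened, not *who* did it.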

What’s Next?

The next blog post will contain enough knowledge on alignment, caches, virtual memory and translation lookaside buffers to get you started, then we’ll dive into identifying how and why things go terribly, terribly wrong.

Memory Management: An Introduction