Introduction

After writing the Kubernetes (K8s) CPU Limits post, I was wondering what happens when a K8s memory limit is set for the same service. I’ve been hearing at workshops and at Ardan how people are experiencing Out Of Memory (OOM) problems with their PODs when setting K8s memory limits. I’m often told that when an OOM occurs, K8s will terminate and restart the POD.

I wanted to experience an OOM, but I was also curious about three other things.

  • Is there a way to identify the minimal amount of memory a Go service needs to prevent an OOM?
  • Does setting the Go runtime GOMEMLIMIT variable to match the K8s memory limit value have any significant effect on performance?
  • How can I effectively use K8s requests, limits, and GOMEMLIMIT to possibly prevent an OOM?

This post is an experiment and my results may not correlate with your services. I will provide the data I’ve gathered and let you decide how relevant these findings are to your own services.

Before You Start

If you want to understand more about how the GC works in Go, read these posts.

This post, provided by Google, will help with understanding requests and limits.

Creating an OOM

I need to force an OOM to occur when running the service. I don’t want to do this randomly by using some ridiculously low amount of memory. I want to find an amount that runs the service on the edge of an OOM, where it eventually happens. So I need to know how much memory the Go service is using when running a load without any knobs set. I can identify this amount by using Go memory metrics.

Luckily I have the expvar package already integrated into the service. This package provides the memory stats and I can use curl to access the expvar endpoint to read the stats once the service is running.

Listing 1

$ curl localhost:4000/debug/vars/

Listing 1 shows how I will access the Go memory values using curl. I mapped this endpoint manually in the service project inside the debug package.
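
To give a sense of how such an endpoint can be built, here is a minimal sketch that produces output in the same shape as the listings below. The real debug package in the service project is more involved and uses the expvar package; the names here are only for illustration.

package debug

import (
	"encoding/json"
	"net/http"
	"runtime"
)

// memValues writes the number of goroutines and a trimmed view of the
// runtime memory statistics: Sys, HeapAlloc, and HeapSys.
func memValues(w http.ResponseWriter, r *http.Request) {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)

	data := struct {
		Goroutines int               `json:"goroutines"`
		MemStats   map[string]uint64 `json:"memstats"`
	}{
		Goroutines: runtime.NumGoroutine(),
		MemStats: map[string]uint64{
			"Sys":       ms.Sys,
			"HeapAlloc": ms.HeapAlloc,
			"HeapSys":   ms.HeapSys,
		},
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(data)
}

// Mux binds the debug route the curl command in Listing 1 hits.
func Mux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/vars/", memValues)
	return mux
}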

There are more than a handful of memory stats that are available, but I think these three are the best ones to help understand the amount of memory the service is using when running a load.

  • HeapAlloc: The bytes of allocated heap objects. Allocated heap objects include all reachable objects, as well as unreachable objects that the garbage collector has not yet freed. Specifically, HeapAlloc increases as heap objects are allocated and decreases as the heap is swept and unreachable objects are freed. Sweeping occurs incrementally between GC cycles, so these two processes occur simultaneously, and as a result HeapAlloc tends to change smoothly (in contrast with the sawtooth that is typical of stop-the-world garbage collectors).

  • HeapSys: The bytes of heap memory obtained from the OS. HeapSys measures the amount of virtual address space reserved for the heap. This includes virtual address space that has been reserved but not yet used, which consumes no physical memory, but tends to be small, as well as virtual address space for which the physical memory has been returned to the OS after it became unused (see HeapReleased for a measure of the latter). HeapSys estimates the largest size the heap has had.

  • Sys: The sum of the XSys fields. Sys measures the virtual address space reserved by the Go runtime for the heap, stacks, and other internal data structures. It’s likely that not all of the virtual address space is backed by physical memory at any given moment, though in general it all was at some point.

Start The System

I will start by running the service with no K8s memory limit. This will allow the service to run with the full 10 GiB of memory that is available. If you didn’t read the K8s CPU limits post, I am running K8s inside of Docker using KIND.

Listing 2

$ make talk-up
$ make talk-build
$ make token
$ export TOKEN=<COPY-TOKEN>
$ make users

Output:
{"items":[{"id":"45b5fbd3-755f-4379-8f07-a58d4a30fa2f","name":"User
Gopher","email":"user@example.com","roles":["USER"],"department":"","enabled":true,"da
teCreated":"2019-03-24T00:00:00Z","dateUpdated":"2019-03-24T00:00:00Z"},{"id":"5cf3726
6-3473-4006-984f-9325122678b7","name":"Admin
Gopher","email":"admin@example.com","roles":["ADMIN"],"department":"","enabled":true,"
dateCreated":"2019-03-24T00:00:00Z","dateUpdated":"2019-03-24T00:00:00Z"}],"total":2,"
page":1,"rowsPerPage":2}

In listing 2, you can see all the commands I need to run to bring up the K8s cluster, get the sales POD running, and hit the endpoint I will use in the load test.

Now I can look at the initial memory stats by hitting the expvar endpoint.

Listing 3

$ curl localhost:4000/debug/vars/
{
  "goroutines": 13,
  "memstats": {"Sys":13980936,"HeapAlloc":2993840,"HeapSys":7864320}
}

Memory Amounts:
HeapAlloc:  3 MiB
HeapSys:    8 MiB
Sys:       13 MiB

Listing 3 shows the initial memory stats. The amount of live memory being used by the heap is ~3 MiB, the total amount of memory being used by the heap is ~8 MiB, and at this point the service is using a total of ~13 MiB. Remember these values represent virtual memory and I have no idea how much of that is currently backed by physical memory. I can assume a majority of the Sys memory has physical backing.
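
As a quick sanity check on those numbers, the MiB amounts are just the raw byte counts from listing 3 divided by 1,048,576 (1024 * 1024) and rounded:

Sys:       13980936 / 1048576 ≈ 13.3 MiB
HeapSys:    7864320 / 1048576 =  7.5 MiB
HeapAlloc:  2993840 / 1048576 ≈  2.9 MiB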

Force an OOM

Now I want to run a small load of 1000 requests through the service and look at the memory amounts again. This will show me how much memory the service is using to handle the load.

Listing 4

$ make talk-load

Output:
 Total:        33.7325 secs
 Slowest:       1.2045 secs
 Fastest:       0.0069 secs
 Average:       0.3230 secs
 Requests/sec: 29.6450

 Total data:   481000 bytes
 Size/request:    481 bytes

In listing 4, you can see the results of running the load through the service. The service is handling ~29.6 requests per second.
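
The summary in listing 4 looks like output from the hey load tool. If you want to run a similar load by hand, an invocation along these lines works; the endpoint URL and concurrency here are placeholders, so check the talk-load target in the project’s makefile for the exact command.

$ hey -n 1000 -c <concurrency> -H "Authorization: Bearer ${TOKEN}" <users-endpoint-url>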

How much memory has been used to handle those requests?

Listing 5

$ curl localhost:4000/debug/vars/
{
 "goroutines": 13,
 "memstats": {"Sys":23418120,"HeapAlloc":7065200,"HeapSys":16056320}
}

Memory Amounts:
HeapAlloc:  7 MiB
HeapSys:   16 MiB
Sys:       23 MiB

In listing 5, you can see the memory amounts. The amount of live memory being used by the heap is ~7 MiB, the total amount of memory being used by the heap is ~16 MiB, and at this point the service is using a total of ~23 MiB. This increase in memory usage is expected since the service processed 1000 requests with a total data size of 481k bytes and needed to access a database. I don’t expect these memory amounts to change much as I continue to run load.

This tells me that if I set a K8s memory limit of less than 23 MiB, I should be able to force an OOM. It’s hard to tell how much physical memory is actually backing that 23 MiB of virtual memory. To not waste your time, I tried a few numbers below 23 MiB and reached my first OOM when I used 17 MiB.

Listing 6

    containers:
    - name: sales-api
      resources:
        limits:
          cpu: "250m"
          memory: "17Mi"

In listing 6, I’m showing you how I set the K8s memory limit to 17 MiB in the configuration. Once I made this change, I needed to apply it to the POD and then check that the change was accepted.
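
If you’re not using the project’s makefile, the same apply-and-verify step can be done with kubectl directly. The manifest path, POD name, and namespace below are placeholders, not the project’s actual values.

$ kubectl apply -f <sales-deployment.yaml>
$ kubectl describe pod <sales-api-pod> -n <namespace>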

Listing 7

$ make talk-apply
$ make talk-describe

Output:
   Restart Count:  0
   Limits:
     cpu:     250m
     memory:  17Mi

In listing 7, you can see the call to apply and the output of the describe command. You can see the K8s memory limit of 17 MiB has been applied. Now when I run the load, the OOM occurs.

Listing 8

$ make talk-load
$ make dev-status    # Monitoring Status For Restart
$ make talk-describe

Output:
   Last State:     Terminated
     Reason:       OOMKilled
     Exit Code:    137
   Restart Count:  1
   Limits:
     cpu:     250m
     memory:  17Mi

Last Memory Amounts:
HeapAlloc:  7 MiB
HeapSys:   16 MiB
Sys:       23 MiB

In listing 8, you see the results of running the load with a K8s memory limit of 17 MiB. K8s is reporting the last state of the POD was Terminated with a reason of OOMKilled. Now I have an OOM, and it took a limit 6 MiB below the Sys number to cause it. I think Sys is the best number to look at since it represents the total amount of virtual memory the Go service is using. Now I know that the service needs ~18 MiB of physical memory not to OOM.

Performance Testing

I was curious what the performance of the service would be if I used K8s memory limits of 18 MiB (Minimal Number), 23 MiB (Go’s Number), 36 MiB (2 times minimum), and 72 MiB (4 times minimum). I think these amounts are interesting since they represent reasonable multiples of the minimum amount and by using Go’s calculated amount, I can compare the runtime’s decision against my own.

I was also curious what the performance of the service would be if I gave the K8s memory limit amount to the Go runtime. I can do this by using the GOMEMLIMIT variable.

Listing 9

      env:
      - name: GOGC
        value: "off"

      - name: GOMEMLIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory

In listing 9, you see how to set the GOMEMLIMIT variable to match the K8s memory limit amount. K8s provides the limits.memory resource, which contains the value I set in the YAML from listing 6. This is nice because I can change the amount in one place and it will apply to both K8s and Go.

Notice that I turned the GC off. When setting GOMEMLIMIT you don’t need to do this, but I think it’s a good idea. This tells the GC to use all of the memory assigned to GOMEMLIMIT. At this point, you know what the memory constraint is, so you might as well have Go use all of it before a GC starts.
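
For completeness, these two environment variables have programmatic equivalents in the runtime/debug package. This is only a sketch of the equivalence, not something the configuration in listing 9 requires; the 17 MiB value is just an example amount.

package main

import "runtime/debug"

func main() {
	// Equivalent of GOGC=off: disable the percentage-based GC trigger.
	debug.SetGCPercent(-1)

	// Equivalent of GOMEMLIMIT: cap the runtime's total memory use.
	// The value is in bytes; 17 MiB here is just an example.
	debug.SetMemoryLimit(17 * 1024 * 1024)

	// ... start the service ...
}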

Here are the results when using K8s memory limits of 18 MiB, 23 MiB, 36 MiB, and 72 MiB with and without the GOMEMLIMIT value set.

Listing 10

No Knobs: 23 MiB (Go’s Number)
 Total:        33.7325 secs
 Slowest:       1.2045 secs
 Fastest:       0.0069 secs
 Average:       0.3230 secs
 Requests/sec: 29.6450   

Limit: 18 MiB (Minimal)         With GOMEMLIMIT: 18 MiB
 Total:        35.1985 secs      Total:        34.2020 secs
 Slowest:       1.1907 secs      Slowest:       1.1017 secs
 Fastest:       0.0054 secs      Fastest:       0.0042 secs
 Average:       0.3350 secs      Average:       0.3328 secs
 Requests/sec: 28.4103           Requests/sec: 29.2380

Limit: 23 MiB (Go’s Number)     With GOMEMLIMIT: 23 MiB
 Total:        33.5513 secs      Total:        29.9747 secs
 Slowest:       1.0979 secs      Slowest:       0.9976 secs
 Fastest:       0.0029 secs      Fastest:       0.0047 secs
 Average:       0.3285 secs      Average:       0.2891 secs
 Requests/sec: 29.8051           Requests/sec: 33.3615

Limit: 36 MiB (2*Minimal)       With GOMEMLIMIT: 36 MiB
 Total:        35.3504 secs      Total:        28.2876 secs
 Slowest:       1.2809 secs      Slowest:       0.9867 secs
 Fastest:       0.0056 secs      Fastest:       0.0036 secs
 Average:       0.3393 secs      Average:       0.2763 secs
 Requests/sec: 28.2883           Requests/sec: 35.3512

Limit: 72 MiB (4*Minimal)       With GOMEMLIMIT: 72 MiB
 Total:        34.1320 secs      Total:        27.8793 secs
 Slowest:       1.2031 secs      Slowest:       0.9876 secs
 Fastest:       0.0033 secs      Fastest:       0.0046 secs
 Average:       0.3369 secs      Average:       0.2690 secs
 Requests/sec: 29.2980           Requests/sec: 35.8689

In listing 10, you can see all the results. I was happy to see better performance when I told the Go runtime how much memory was available to use. This makes sense since the Go runtime now knows it can use more memory than it would choose on its own, which means it spends less time collecting during the load.

What’s interesting is that Go’s own number of 23 MiB seems to be the right amount when GOMEMLIMIT is not set to match. It shows how good the GC and its pacing algorithms are.

When setting GOMEMLIMIT to match the K8s memory limit, using 36 MiB was a bit faster than 23 MiB, with an extra ~2 requests per second. Moving up to 72 MiB, the additional performance gain is insignificant.

Preventing an OOM with GOMEMLIMIT

I was also curious if GOMEMLIMIT could be used to prevent an OOM. I started playing with the idea and had some success, however I quickly came to this conclusion: if the service doesn’t have enough memory, you’re not helping the service by trying to keep it alive.

I was able to keep the service running without an OOM at 13 MiB using GOMEMLIMIT.

Listing 11

Limit: 13 MiB With GOMEMLIMIT
  Total:      105.8621 secs
  Slowest:      3.4944 secs
  Fastest:      0.0154 secs
  Average:      1.0306 secs
  Requests/sec: 9.4462

Memory Amounts
"memstats": {"Sys":14505224,"HeapAlloc":2756280,"HeapSys":7733248}

HeapAlloc:  3 MiB
HeapSys:    8 MiB
Sys:       14 MiB

If you look at the performance, the service is running at 9.4 requests per second. This is a major performance loss. The GC must be over-pacing and causing the service to spend a lot of time performing garbage collection instead of application work. If memory has to be kept to a minimum, I think it’s best to find the amount of memory where you live on the edge of an OOM, but never OOM.
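
If you want to confirm that the GC is running nonstop at 13 MiB, the runtime can report every collection. Adding GODEBUG=gctrace=1 to the POD’s environment, in the same way GOGC and GOMEMLIMIT were set in listing 9, makes the runtime print one summary line per GC cycle to stderr, which you can then read from the POD logs.

      env:
      - name: GODEBUG
        value: "gctrace=1"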

Conclusion

Something that keeps bothering me is that I know exactly what is running on the node and I know the node has enough memory to accommodate all the services. Without that guarantee, I’m not sure anything I’m testing in this post is relevant.

If you’re experiencing an OOM, it’s possible that the node is over-saturated with services and there isn’t enough memory on the node to accommodate everything that is running. At this point, it won’t matter what the K8s memory limit settings are since there isn’t enough physical memory to meet the demand.

With this in mind and all the other things I’ve shared in the post, I have these soft recommendations.

  • If you don’t need to use K8s memory limits, don’t. Use CPU limits to decide what services are running on what nodes. This allows each Go service to use the amount of memory it needs. We saw in the post that Go is good at finding a low amount of memory to work with. If you’re running services written in different languages on the same node, I still feel good about not setting K8s memory limits. If you end up with an OOM, then you know there isn’t enough physical memory on the node.

  • If you’re not going to use K8s memory limits, then don’t do anything with GOMEMLIMIT. The Go runtime is really good at finding the sweet spot for your memory requirements.

  • If you’re going to use K8s memory limits, then set the request value to match the limit. However, all services on the node must do this. This will provide a guarantee there is enough physical memory to meet all the K8s memory limit requirements. If you end up with an OOM, then you know the service doesn’t have enough physical memory.

  • If you’re going to use K8s memory limits, then you should experiment with GOMEMLIMIT and set it to match the K8s limit amount. Obviously you need to test this for your service and load, but I think it’s worth trying. You paid for the memory and you know it’s assigned to this service, why not use all of it.

  • One caveat when GOGC is off and GOMEMLIMIT is set: the GOMEMLIMIT number becomes the point at which a GC starts. You might need the GOMEMLIMIT number to be some percentage smaller than the K8s memory limit so the GC starts before the K8s limit is reached. An OOM could occur if the full amount of virtual memory being used by Go is backed by physical memory at the time of the GC. However, in my experiments, I didn’t have a problem with the two settings being the same. There is a sketch of this configuration after this list.
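
Here is a sketch of what the last three recommendations look like together in the deployment YAML. The 72Mi limit comes from the experiments above; the 64MiB GOMEMLIMIT value is only an example of leaving some headroom below the K8s limit instead of using resourceFieldRef to match it exactly.

    containers:
    - name: sales-api
      resources:
        requests:
          memory: "72Mi"        # Request matches the limit so the memory is guaranteed.
        limits:
          cpu: "250m"
          memory: "72Mi"
      env:
      - name: GOGC
        value: "off"
      - name: GOMEMLIMIT
        value: "64MiB"          # A bit below the K8s limit so the GC starts first.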
