Let's delete all existing resources in preparation for the next scenario. Verify all the pods are deleted and terminated.
```
$ cluster/kubectl.sh delete rc --all
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
```
### Memory limits
If you specify a memory limit, you can constrain the amount of memory your container can use.
For example, let's limit our container to 200Mi of memory, and just consume 100MB.
```
$ cluster/kubectl.sh run memhog \
--image=derekwaynecarr/memhog \
--limits=memory=200Mi \
--command -- /bin/sh -c "while true; do memhog -r100 100m; sleep 1; done"
```
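To verify that the limit was actually applied, you can inspect the container on the node. The snippet below is a minimal sketch; it assumes the Docker container name generated by Kubernetes contains the string memhog, and that the limit is reported in bytes (200Mi = 209715200).
```
# On the node: print the memory limit Docker applied to the memhog container.
$ docker inspect --format '{{.HostConfig.Memory}}' $(docker ps -q --filter "name=memhog")
```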
If you look at the output of docker stats on the node:
```
$ docker stats $(docker ps -q)
CONTAINER           CPU %               MEM USAGE/LIMIT     MEM %               NET I/O
5a7c22ae1837        125.23%             109.4 MB/209.7 MB   52.14%              0 B/0 B
c1d7579c9291        0.00%               1.421 MB/1.042 GB   0.14%               1.038 kB/816 B
```
As you can see, we are limited to 200Mi of memory, and are only consuming 109.4MB on the node.
Let's demonstrate what happens if you exceed your allowed memory usage by creating a replication controller
whose pod is repeatedly OOM killed because it attempts to allocate 300MB of memory but is limited to 200Mi.
```
$ cluster/kubectl.sh run memhog-oom --image=derekwaynecarr/memhog --limits=memory=200Mi --command -- memhog -r100 300m
```
If we describe the created pod, you will see that it keeps restarting until it ultimately goes into a CrashLoopBackOff.
It restarts because it is OOMKilled each time it attempts to exceed its memory limit; the exit code 137 shown below indicates the process was terminated with SIGKILL.
```
$ cluster/kubectl.sh get pods
NAME               READY     STATUS             RESTARTS   AGE
memhog-oom-gj9hw   0/1       CrashLoopBackOff   2          26s
$ cluster/kubectl.sh describe pods/memhog-oom-gj9hw | grep -C 3 "Terminated"
memory: 200Mi
State: Waiting
Reason: CrashLoopBackOff
Last Termination State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Wed, 23 Sep 2015 15:23:58 -0400
```
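The kernel's OOM killer also records each kill on the node, so you can cross-check there. This is a rough sketch; the exact log text varies by kernel version.
```
# On the node: look for OOM kill messages in the kernel log.
$ dmesg | grep -i "out of memory"
```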
Let's clean up before proceeding further.
```
$ cluster/kubectl.sh delete rc --all
```
### What if my node runs out of memory?
If you only schedule __Guaranteed__ memory containers, where the request is equal to the limit, then you are not in major danger of
causing an OOM event on your node. If any individual container consumes more than its specified limit, it will be killed.
If you schedule __BestEffort__ memory containers, where neither the request nor the limit is specified, or __Burstable__ memory containers, where
the request is less than any specified limit, then it is possible that containers will request more memory than is actually available on the node.
If this occurs, the system will attempt to prioritize the containers that are killed based on their quality of service. This is done
by using the OOMScoreAdjust feature in the Linux kernel, which provides a heuristic to rank a process between -1000 and 1000. Processes
with lower values are preserved in favor of processes with higher values. The system daemons (kubelet, kube-proxy, docker) all run with
low OOMScoreAdjust values.
In simplest terms, __Guaranteed__ memory containers are given a lower value than __Burstable__ containers, which in turn have
a lower value than __BestEffort__ containers. As a consequence, __BestEffort__ containers should be killed before those in the other tiers.
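You can observe these values directly on the node. The sketch below is illustrative; it assumes you are on the node running the containers and that pidof can find the kubelet and memhog processes.
```
# On the node: compare OOM score adjustments for a system daemon and the memhog processes.
$ for pid in $(pidof kubelet) $(pidof memhog); do echo -n "$pid: "; cat /proc/$pid/oom_score_adj; done
```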
To demonstrate this kill ordering, let's spin up a set of different replication controllers that will overcommit the node.
```
$ cluster/kubectl.sh run mem-guaranteed --image=derekwaynecarr/memhog --replicas=2 --requests=cpu=10m --limits=memory=600Mi --command -- memhog -r100000 500m
$ cluster/kubectl.sh run mem-burstable --image=derekwaynecarr/memhog --replicas=2 --requests=cpu=10m,memory=600Mi --command -- memhog -r100000 100m
$ cluster/kubectl.sh run mem-besteffort --replicas=10 --image=derekwaynecarr/memhog --requests=cpu=10m --command -- memhog -r10000 500m
```
This will induce a SystemOOM event:
```
$ cluster/kubectl.sh get events | grep OOM
43m 8m 178 10.245.1.3 Node SystemOOM {kubelet 10.245.1.3} System OOM encountered
```
If you look at the pods:
```
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE