```
$ sudo docker stats $(sudo docker ps -q)
CONTAINER     CPU %     MEM USAGE/LIMIT     MEM %     NET I/O
089e2d061dee  9.24%     786.4 kB/1.042 GB   0.08%     0 B/0 B
0be33d6e8ddb  10.48%    823.3 kB/1.042 GB   0.08%     0 B/0 B
0f4e3c4a93e0  10.43%    786.4 kB/1.042 GB   0.08%     0 B/0 B
```
Each container is receiving 10% of the CPU time per its scheduling request, and we are unable to schedule more.
As you can see, CPU requests are used both to schedule pods onto a node and to provide a weighted distribution of CPU time
when under contention. If the node is not being actively consumed by other containers, a container is able to burst and use as much
CPU time as is available. If there is contention for CPU, CPU time is shared based on the requested values.
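The same request can be written declaratively in a pod manifest. Here is a minimal sketch with illustrative names; note that `kubectl run` as used in this walkthrough actually creates a replication controller, but the `resources` stanza on the container is identical:
```
# Hypothetical manifest equivalent to the kubectl run flags used here.
# 100m means 100 millicores, i.e. 10% of a single CPU.
apiVersion: v1
kind: Pod
metadata:
  name: cpuhog
spec:
  containers:
  - name: cpuhog
    image: busybox
    command: ["md5sum", "/dev/urandom"]
    resources:
      requests:
        cpu: 100m
```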
Let's delete all existing resources in preparation for the next scenario. Verify all the pods are deleted and terminated.
```
$ cluster/kubectl.sh delete rc --all
$ cluster/kubectl.sh get pods
NAME      READY     STATUS    RESTARTS   AGE
```
### CPU limits
So what do you do if you want to control the maximum amount of CPU that your container can burst to use, in order to provide a consistent
level of service independent of CPU contention on the node? You can specify an upper limit on the total amount of CPU that a pod's
container may consume.
To enforce this feature, your node must run a Docker version >= 1.7, and your operating system kernel must
have support for CFS quota enabled. Finally, the Kubelet must be started with the following flag:
```
kubelet --cpu-cfs-quota=true
```
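If you are not sure whether your kernel was built with CFS bandwidth control, one way to check on many distributions (assuming the kernel config is shipped at this conventional path) is:
```
$ grep CFS_BANDWIDTH /boot/config-$(uname -r)
CONFIG_CFS_BANDWIDTH=y
```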
To demonstrate, let's create the same pod again, but this time set an upper limit of 50% of a single CPU.
```
$ cluster/kubectl.sh run cpuhog \
  --image=busybox \
  --requests=cpu=100m \
  --limits=cpu=500m \
  -- md5sum /dev/urandom
```
Let's SSH into the node, and look at usage stats.
```
$ vagrant ssh node-1
$ sudo su
$ docker stats $(docker ps -q)
CONTAINER     CPU %     MEM USAGE/LIMIT     MEM %     NET I/O
2a196edf7de2  47.38%    835.6 kB/1.042 GB   0.08%     0 B/0 B
...
```
As you can see, the container is no longer allowed to consume all available CPU on the node. Instead, it is limited to using
50% of a CPU over every 100ms period, so the reported value will hover around 50%, oscillating slightly above and below.
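Under the hood, this limit maps onto the kernel's CFS bandwidth control files in the container's cpu cgroup. On a cgroup v1 node you can inspect them directly; the exact cgroup path depends on your setup (the `...` below is a placeholder), but for a 500m limit the values should read as 50000 runnable microseconds per 100000-microsecond period:
```
$ cat /sys/fs/cgroup/cpu/.../cpu.cfs_quota_us
50000
$ cat /sys/fs/cgroup/cpu/.../cpu.cfs_period_us
100000
```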
Let's delete all existing resources in preparation for the next scenario. Verify all the pods are deleted and terminated.
```
$ cluster/kubectl.sh delete rc --all
$ cluster/kubectl.sh get pods
NAME      READY     STATUS    RESTARTS   AGE
```
### Memory requests
By default, a container is able to consume as much memory on the node as possible. In order to improve placement of your
pods in the cluster, it is recommended to specify the amount of memory your container will require to run. The scheduler
will then take available node memory capacity into account prior to binding your pod to a node.
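To see the memory capacity the scheduler compares requests against, you can describe the node. The exact layout and numbers below are illustrative and vary by version:
```
$ cluster/kubectl.sh describe nodes
...
Capacity:
 cpu:     1
 memory:  1017552Ki
...
```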
Let's demonstrate this by creating a pod that runs a single container which requests 100Mi of memory. The container will
allocate and write to 200MB of memory every 2 seconds.
```
$ cluster/kubectl.sh run memhog \
  --image=derekwaynecarr/memhog \
  --requests=memory=100Mi \
  --command \
  -- /bin/sh -c "while true; do memhog -r100 200m; sleep 1; done"
```
If you look at the output of docker stats on the node:
```
$ docker stats $(docker ps -q)
CONTAINER     CPU %     MEM USAGE/LIMIT     MEM %     NET I/O
2badf74ae782  0.00%     1.425 MB/1.042 GB   0.14%     816 B/348 B
a320182967fa  105.81%   214.2 MB/1.042 GB   20.56%    0 B/0 B
```
As you can see, the container is using approximately 200MB of memory, and is limited only by the 1GB of memory available on the node.
We scheduled against 100Mi, but have burst our memory usage to a greater value.
We refer to this as memory having __Burstable__ quality of service for this container.
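For reference, here is a minimal pod manifest sketch (names illustrative) expressing the same memory request declaratively. Because only a request is set and no limit, the container is free to burst beyond 100Mi toward node capacity, which is exactly the behavior observed above:
```
# Hypothetical manifest equivalent to the kubectl run flags above.
# requests.memory guides scheduling; with no memory limit set,
# actual usage may grow beyond the request.
apiVersion: v1
kind: Pod
metadata:
  name: memhog
spec:
  containers:
  - name: memhog
    image: derekwaynecarr/memhog
    command: ["/bin/sh", "-c", "while true; do memhog -r100 200m; sleep 1; done"]
    resources:
      requests:
        memory: 100Mi
```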