- image: node:0.10.40
  command: ['/bin/sh', '-c']
  args: ['cd /home && git clone https://github.com/ijason/NodeJS-Sample-App.git demo && cd demo/EmployeeDB/ && npm install && sed -i -- ''s/localhost/mongo/g'' app.js && node app.js']
  name: web
  ports:
  - containerPort: 3000
    name: http-server
```
[Download file](web-controller-demo.yaml)

This will use the default Node.js container, and will pull and execute code at run time. This is not recommended; typically, your code should be part of the container.
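If you do want the code baked in, a minimal sketch of what such an image could look like follows; this Dockerfile is an assumption for illustration, not part of the example:

```
FROM node:0.10.40
# Clone and prepare the app at build time instead of at run time
RUN git clone https://github.com/ijason/NodeJS-Sample-App.git /home/demo
WORKDIR /home/demo/EmployeeDB
RUN npm install && sed -i -- 's/localhost/mongo/g' app.js
EXPOSE 3000
CMD ["node", "app.js"]
```

You would then reference the resulting image in the controller manifest instead of `node:0.10.40`.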
To start the Controller, run:
```sh
kubectl create -f examples/nodesjs-mongodb/web-controller-demo.yaml
```
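You can check that the controller and its pod came up:

```sh
kubectl get rc
kubectl get pods
```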
### Testing it out
Now that all the components are running, visit the IP address of the load balancer to access the website.
With Google Cloud Platform, get the IP address of all load balancers with the following command:
```sh
gcloud compute forwarding-rules list
```
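Alternatively, assuming the frontend service in this example is named `web`, kubectl can report the provisioned address directly; look for the `LoadBalancer Ingress` line:

```sh
kubectl describe services web
```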
## Runtime Constraints example |
This example demonstrates how Kubernetes enforces runtime constraints for compute resources. |
### Prerequisites |
For the purpose of this example, we will spin up a 1-node cluster using the Vagrant provider, with none of the optional add-ons that consume node resources. Starting from an empty cluster makes the demonstration of compute resources easier to follow.
``` |
$ export KUBERNETES_PROVIDER=vagrant |
$ export NUM_NODES=1 |
$ export KUBE_ENABLE_CLUSTER_MONITORING=none |
$ export KUBE_ENABLE_CLUSTER_DNS=false |
$ export KUBE_ENABLE_CLUSTER_UI=false |
$ cluster/kube-up.sh |
``` |
We should now have a single node cluster running 0 pods. |
``` |
$ cluster/kubectl.sh get nodes |
NAME LABELS STATUS AGE |
10.245.1.3 kubernetes.io/hostname=10.245.1.3 Ready 17m |
$ cluster/kubectl.sh get pods --all-namespaces |
``` |
When demonstrating runtime constraints, it's useful to show what happens when a node is under heavy load. For this scenario, we use a single node with 2 CPUs and 1GB of memory; the results extend to multi-node clusters.
### CPU requests |
Each container in a pod may specify the amount of CPU it requests on a node. CPU requests are used at schedule time, and represent a minimum amount of CPU that should be reserved for your container to run. |
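The same request can also be expressed in a pod manifest under the container's `resources` field. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-demo      # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ['md5sum', '/dev/urandom']
    resources:
      requests:
        cpu: 100m             # 1/10 of a CPU, considered at schedule time
```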
When executing your container, the Kubelet maps your container's CPU requests to CFS shares in the Linux kernel. CFS CPU shares do not impose a ceiling on the actual amount of CPU the container can use. Instead, they define a relative weight across all containers on the system for how much CPU time the container should receive when there is contention.
Let's demonstrate this concept using a simple container that will consume as much CPU as possible. |
``` |
$ cluster/kubectl.sh run cpuhog \ |
--image=busybox \ |
--requests=cpu=100m \ |
-- md5sum /dev/urandom |
``` |
This will create a single pod on your node that requests 1/10 of a CPU, but places no limit on how much CPU it may actually consume.
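To confirm the weight the Kubelet assigned, you can read the container's cgroup value; a rough check, assuming cgroup v1 is mounted at the usual path inside the container (1 CPU maps to 1024 shares, so 100m works out to roughly 1024 * 100 / 1000 ≈ 102):

```sh
$ cluster/kubectl.sh exec <cpuhog-pod-name> -- cat /sys/fs/cgroup/cpu/cpu.shares
102
```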
To demonstrate the lack of a ceiling, SSH into the node; you will see the container consuming as much CPU as possible.
``` |
$ vagrant ssh node-1 |
$ sudo docker stats $(sudo docker ps -q) |
CONTAINER           CPU %     MEM USAGE/LIMIT       MEM %     NET I/O
6b593b1a9658        0.00%     1.425 MB/1.042 GB     0.14%     1.038 kB/738 B
ae8ae4ffcfe4        150.06%   831.5 kB/1.042 GB     0.08%     0 B/0 B
``` |
As you can see, it's consuming 150% CPU; in `docker stats`, 100% represents one full core, so the container is using about 1.5 of the node's 2 cores.
If we scale our replication controller to 20 pods, we should see that each container is given an equal proportion of CPU time. |
``` |
$ cluster/kubectl.sh scale rc/cpuhog --replicas=20 |
``` |
Once all the pods are running, you will see on your node that each container is getting approximately an equal proportion of CPU time. |
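As a sanity check on the numbers: the node's 2 cores amount to roughly 200% in `docker stats` terms, and all 20 containers carry identical CFS shares, so each should settle at about 200% / 20 = 10% CPU.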