available. When that is not the case, the persistent volume claims need
to be created manually. See [minikube.sh](minikube.sh) for the necessary
steps. If you're on GCE or AWS, where dynamic provisioning is supported, no
manual work is needed to create the persistent volumes.
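Without dynamic provisioning, each CockroachDB pod needs a pre-created persistent volume. A minimal sketch of one such volume follows; the name, capacity, and `hostPath` here are illustrative assumptions, and [minikube.sh](minikube.sh) contains the authoritative definitions:

```yaml
# Illustrative PersistentVolume for one CockroachDB pod.
# Name, storage size, and hostPath are assumptions for this sketch.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cockroachdb-0
  labels:
    app: cockroachdb
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/cockroachdb-0
```

You would create one such volume per pod (e.g. with `kubectl create -f <file>.yaml`) before the StatefulSet's claims can bind.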
## Testing locally on minikube
Follow the steps in [minikube.sh](minikube.sh) (or simply run that file).
## Testing in the cloud on GCE or AWS
Once you have a Kubernetes cluster running, just run
`kubectl create -f cockroachdb-statefulset.yaml` to create your cockroachdb cluster.
This works because GCE and AWS support dynamic volume provisioning by default,
so persistent volumes will be created for the CockroachDB pods as needed.
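Dynamic provisioning works because the StatefulSet declares a `volumeClaimTemplates` section, from which Kubernetes generates one claim per pod for the provisioner to satisfy. A rough sketch of what such a stanza looks like (field values here are assumptions; see [cockroachdb-statefulset.yaml](cockroachdb-statefulset.yaml) for the real definition):

```yaml
# Sketch of a StatefulSet volumeClaimTemplates stanza (values assumed).
volumeClaimTemplates:
- metadata:
    name: datadir
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
```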
## Accessing the database
Along with our StatefulSet configuration, we expose a standard Kubernetes service
that offers a load-balanced virtual IP for clients to access the database
with. In our example, we've called this service `cockroachdb-public`.
Start up a client pod and open up an interactive, (mostly) Postgres-flavor
SQL shell using:
```console
$ kubectl run -it --rm cockroach-client --image=cockroachdb/cockroach --restart=Never --command -- ./cockroach sql --host cockroachdb-public --insecure
```
You can see example SQL statements for inserting and querying data in the
included [demo script](demo.sh), but you can use almost any Postgres-style SQL
commands. Some more basic examples can be found within
[CockroachDB's documentation](https://www.cockroachlabs.com/docs/learn-cockroachdb-sql.html).
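For instance, once in the SQL shell you might run statements along these lines (illustrative only; the database, table, and values are made up for this sketch):

```sql
-- Illustrative Postgres-style statements; names and values are invented.
CREATE DATABASE bank;
CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
INSERT INTO bank.accounts VALUES (1, 1000.50);
SELECT * FROM bank.accounts;
```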
## Accessing the admin UI
If you want to see information about how the cluster is doing, you can try
pulling up the CockroachDB admin UI by port-forwarding from your local machine
to one of the pods:
```shell
kubectl port-forward cockroachdb-0 8080
```
Once you've done that, you should be able to access the admin UI by visiting
http://localhost:8080/ in your web browser.
## Simulating failures
When all (or enough) nodes are up, simulate a failure like this:
```shell
kubectl exec cockroachdb-0 -- /bin/bash -c "while true; do kill 1; done"
```
You can then reconnect to the database as demonstrated above and verify
that no data was lost. The example runs with three-fold replication, so
it can tolerate one failure of any given node at a time. Note also that
there is a brief period of time immediately after the creation of the
cluster during which the three-fold replication is established, and during
which killing a node may lead to unavailability.
The [demo script](demo.sh) gives an example of killing one instance of the
database and ensuring the other replicas have all data that was written.
## Scaling up or down
Scale the StatefulSet by running
```shell
kubectl scale statefulset cockroachdb --replicas=4
```
Note that you may need to create a new persistent volume claim first. If you
ran `minikube.sh`, there's a spare volume so you can immediately scale up by
one. If you're running on GCE or AWS, you can scale up by as many as you want
because new volumes will automatically be created for you. Convince yourself
that the new node immediately serves reads and writes.
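If you do need to create the claim yourself, note that a StatefulSet expects its claims to follow the `<template-name>-<pod-name>` naming convention. Assuming the volume claim template is named `datadir` (check [cockroachdb-statefulset.yaml](cockroachdb-statefulset.yaml) for the actual name and size), the claim for a fourth pod would look roughly like:

```yaml
# Illustrative PVC for pod cockroachdb-3; template name and size are assumed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-cockroachdb-3
  labels:
    app: cockroachdb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```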
## Cleaning up when you're done
Because all of the resources in this example have been tagged with the label `app=cockroachdb`,
we can clean up everything that we created in one quick command using a selector on that label:
```shell
kubectl delete statefulsets,persistentvolumes,persistentvolumeclaims,services,poddisruptionbudget -l app=cockroachdb
```