```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
  location: eastus
  storageAccount: azure_storage_account_name
```
The parameters are the same as those used by [Azure Disk](#azure-disk).
### User provisioning requests
Users request dynamically provisioned storage by naming a storage class in their `PersistentVolumeClaim` via the `spec.storageClassName` attribute.
This value must match the name of a `StorageClass` configured by the administrator.
```json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1"
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    },
    "storageClassName": "slow"
  }
}
```
### Sample output
#### GCE
This example uses GCE, but any provisioner follows the same flow.
First we note that there are no PersistentVolumes in the cluster. After creating a storage class and a claim that references it, we see that a new PV is created
and automatically bound to the claim requesting storage.
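The `gce-pd.yaml` file used below is not reproduced in this section. A minimal sketch of what such a StorageClass might contain, assuming the standard `kubernetes.io/gce-pd` provisioner backed by standard (non-SSD) persistent disks:
```yaml
# Sketch of a possible gce-pd.yaml: a StorageClass named "slow"
# backed by standard GCE persistent disks (assumed contents).
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```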
```
$ kubectl get pv
$ kubectl create -f examples/persistent-volume-provisioning/gce-pd.yaml
storageclass "slow" created
$ kubectl create -f examples/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created
$ kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   STATUS    CLAIM            REASON    AGE
pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           Bound     default/claim1             4s
$ kubectl get pvc
NAME      LABELS    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1    <none>    Bound     pvc-bb6d2f0c-534c-11e6-9348-42010af00002   3Gi        RWO           7s
# delete the claim to release the volume
$ kubectl delete pvc claim1
persistentvolumeclaim "claim1" deleted
# the volume is deleted in response to the release of its claim
$ kubectl get pv
```
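While the claim was bound, a pod could have consumed the provisioned volume by referencing the claim by name. A minimal sketch (the pod name, image, and mount path are illustrative):
```yaml
# Hypothetical pod mounting the volume bound to "claim1".
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: test-volume
          mountPath: /data
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: claim1
```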
#### Ceph RBD
This section shows how to configure and use the Ceph RBD provisioner.
##### Pre-requisites
For this to work you must have a functional Ceph cluster, and the `rbd` command-line utility must be installed on every host/container that runs `kube-controller-manager` or `kubelet`.
##### Configuration
First we must identify the Ceph client admin key. This is usually found in `/etc/ceph/ceph.client.admin.keyring` on your Ceph cluster nodes. The file will look something like this:
```
[client.admin]
  key = AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
  auid = 0
  caps mds = "allow"
  caps mon = "allow *"
  caps osd = "allow *"
```
From this key value we will create a Kubernetes Secret. The Ceph admin Secret must be created in the namespace defined in our `StorageClass`; in this example we've set the namespace to `kube-system`.
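One way to create such a Secret from the key above, assuming a Secret named `ceph-secret-admin` (the name is illustrative) and the `kubernetes.io/rbd` type expected by the RBD provisioner:
```
$ kubectl create secret generic ceph-secret-admin \
    --from-literal=key='AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' \
    --namespace=kube-system --type=kubernetes.io/rbd
```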