{"_id":"doc-en-website-4d931fe999353187c06548c585f3b70717d13486ff34c63a20088e3e57735361","title":"","text":"kubectl get statefulset web ``` ``` NAME READY AGE web 2/2 37s ``` ### Ordered Pod creation"} {"_id":"doc-en-website-ecc8275384a843e56c3a837ce446673c765d876b5fa888ab63d7487e4142204a","title":"","text":"image again: ```shell kubectl patch statefulset web --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\":\"registry.k8s.io/nginx-slim:0.8\"}]' ``` ``` statefulset.apps/web patched"} {"_id":"doc-en-website-ecf0e4a44494a263191661470d953fe8bf18155f81289cf7bd94f19218d5a994","title":"","text":"used in this tutorial. Follow the necessary steps, based on your environment, storage configuration, and provisioning method, to ensure that all storage is reclaimed. {{< /note >}} "} {"_id":"doc-en-website-8db1c68a74169337aafc4e8e2f76878241c5f6bdbdb3da7c18ef80536f7aeba3","title":"","text":"spec: containers: - name: nginx image: registry.k8s.io/nginx-slim:0.7 ports: - containerPort: 80 name: web"} {"_id":"doc-en-website-feb1b56d73eb351ca2c0e9187e1de25f88dfe0a0838a071c3a942c2f86ef5ee6","title":"","text":"(backward-compatible) changes to the way the Kubernetes cluster DNS server processes DNS queries, to facilitate the lookup of federated services (which span multiple Kubernetes clusters). See the [Cluster Federation Administrators' Guide](/docs/admin/federation) for more details on Cluster Federation and multi-site support. 
## References"} {"_id":"doc-en-website-0131b3cf171ebce25201685759bb09324016e9d2074e036d39fbb0811320f0cc","title":"","text":" # *Stop. This guide has been superseded by [Minikube](../minikube/). The link below is present only for historical purposes* The document has been moved to [here](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/local-cluster/docker.md) "} {"_id":"doc-en-website-aa24104ef5b6f48b9b3410565bcf1354e6025f250daed1139d4471fff99d9c2b","title":"","text":"### SEE ALSO * [kubectl](../kubectl.md)\t - kubectl controls the Kubernetes cluster manager ###### Auto generated by spf13/cobra on 12-Aug-2016"} {"_id":"doc-en-website-2d2136945f6f69d736d0bb595266ff6153f7bbc042fc17fcefd8d21d820d6c44","title":"","text":"__Important: You must have your own Ceph server running with the share exported before you can use it__ See the [CephFS example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/volumes/cephfs/) for more details. ### gitRepo"} {"_id":"doc-en-website-03d8dd2526c0dd1890944f8882ba747c0697e3d582500fabf9046a1f44309df4","title":"","text":"path: /docs/getting-started-guides/ovirt/ - title: OpenStack Heat path: /docs/getting-started-guides/openstack-heat/ - title: CoreOS on Multinode Cluster path: /docs/getting-started-guides/coreos/coreos_multinode_cluster/ - title: rkt section: - title: Running Kubernetes with rkt"} {"_id":"doc-en-website-b3fd57e578e9311b491bcd23b71b470b465aae9c0c10c0176f394a59d7c3e969","title":"","text":"[CoreOS](http://coreos.com) templates for CloudStack are built [nightly](http://stable.release.core-os.net/amd64-usr/current/). 
CloudStack operators need to [register](http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/templates.html) this template in their cloud before proceeding with these Kubernetes deployment instructions. This guide uses an [Ansible playbook](https://github.com/runseb/ansible-kubernetes). This is completely automated: a single playbook deploys Kubernetes. This [Ansible](http://ansibleworks.com) playbook deploys Kubernetes on a CloudStack based Cloud using CoreOS images. The playbook creates an ssh key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init."} {"_id":"doc-en-website-728c2cd9d487c1cefed85d0c42db500055e666bd9a7cb7fab155fd875ed0ad74","title":"","text":" --- assignees: - dchen1107 --- Use the [master.yaml](/docs/getting-started-guides/coreos/cloud-configs/master.yaml) and [node.yaml](/docs/getting-started-guides/coreos/cloud-configs/node.yaml) cloud-configs to provision a multi-node Kubernetes cluster. > **Attention**: This requires at least CoreOS version **[695.0.0][coreos695]**, which includes `etcd2`. [coreos695]: https://coreos.com/releases/#695.0.0 * TOC {:toc} ### AWS *Attention:* Replace `` below with a [suitable version of CoreOS image for AWS](https://coreos.com/docs/running-coreos/cloud-providers/ec2/). 
#### Provision the Master ```shell aws ec2 create-security-group --group-name kubernetes --description \"Kubernetes Security Group\" aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0 aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0 aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes ``` ```shell aws ec2 run-instances --image-id --key-name --region us-west-2 --security-groups kubernetes --instance-type m3.medium --user-data file://master.yaml ``` #### Capture the private IP address ```shell aws ec2 describe-instances --instance-id ``` #### Edit node.yaml Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. #### Provision worker nodes ```shell aws ec2 run-instances --count 1 --image-id --key-name --region us-west-2 --security-groups kubernetes --instance-type m3.medium --user-data file://node.yaml ``` ### Google Compute Engine (GCE) *Attention:* Replace `` below for a [suitable version of CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/). #### Provision the Master ```shell gcloud compute instances create master --image-project coreos-cloud --image --boot-disk-size 200GB --machine-type n1-standard-1 --zone us-central1-a --metadata-from-file user-data=master.yaml ``` #### Capture the private IP address ```shell gcloud compute instances list ``` #### Edit node.yaml Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. 
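Editing `node.yaml` by hand can be scripted; a minimal sketch, assuming your cloud-config uses a literal placeholder token (the token name `<master-private-ip>`, the demo file contents, and the example IP are all illustrative):

```shell
# Create a demo node.yaml for illustration if one is not already present.
[ -f node.yaml ] || printf 'advertise-ip: <master-private-ip>\n' > node.yaml
# Substitute the master's private IP; use the address captured above
# instead of the example value 10.240.0.2. A .bak copy is kept.
sed -i.bak 's/<master-private-ip>/10.240.0.2/g' node.yaml
grep advertise-ip node.yaml
```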
#### Provision worker nodes ```shell gcloud compute instances create node1 --image-project coreos-cloud --image --boot-disk-size 200GB --machine-type n1-standard-1 --zone us-central1-a --metadata-from-file user-data=node.yaml ``` #### Establish network connectivity Next, set up an ssh tunnel to the master so you can run kubectl from your local host. In one terminal, run `gcloud compute ssh master --ssh-flag=\"-L 8080:127.0.0.1:8080\"` and in a second run `gcloud compute ssh master --ssh-flag=\"-R 8080:127.0.0.1:8080\"`. ### OpenStack These instructions are for running on the command line. Most of this you can also do through the Horizon dashboard. These instructions were tested on the Icehouse release on a Metacloud distribution of OpenStack but should be similar if not the same across other versions/distributions of OpenStack. #### Make sure you can connect with OpenStack Make sure the environment variables are set for OpenStack such as: ```shell OS_TENANT_ID OS_PASSWORD OS_AUTH_URL OS_USERNAME OS_TENANT_NAME ``` Test that this works with something like: ```shell nova list ``` #### Get a Suitable CoreOS Image You'll need a [suitable version of CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack.html). Once you download that, upload it to glance. An example is shown below: ```shell glance image-create --name CoreOS723 --container-format bare --disk-format qcow2 --file coreos_production_openstack_image.img --is-public True ``` #### Create security group ```shell nova secgroup-create kubernetes \"Kubernetes Security Group\" nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0 nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0 ``` #### Provision the Master ```shell nova boot --image --key-name --flavor --security-group kubernetes --user-data files/master.yaml kube-master ``` `` is the CoreOS image name. 
In our example, we can use the image created in the previous step, 'CoreOS723'. `` is the keypair name that you already generated to access the instance. `` is the flavor ID you use to size the instance. Run `nova flavor-list` to get the IDs. 3 on the system this was tested with gives the m1.large size. The important part is to ensure you have the files/master.yml, as this is what will do all the post-boot configuration. This path is relative, so we are assuming in this example that you are running the nova command in a directory where there is a subdirectory called files that has the master.yml file in it. Absolute paths also work. Next, assign it a public IP address: ```shell nova floating-ip-list ``` Get an IP address that's free and run: ```shell nova floating-ip-associate kube-master ``` where `` is the IP address that was available from the `nova floating-ip-list` command. #### Provision Worker Nodes Edit `node.yaml` and replace all instances of `` with the private IP address of the master node. You can get this by running `nova show kube-master`, assuming you named your instance kube-master. This is not the floating IP address you just assigned it. ```shell nova boot --image --key-name --flavor --security-group kubernetes --user-data files/node.yaml minion01 ``` This is basically the same as the master node boot command but with the node.yaml post-boot script instead of the master's. "} {"_id":"doc-en-website-3faadd1ebf3ea382a78bc1ca86d5e63fcae9e8cba05d372577426b3fc065720c","title":"","text":"These guides are maintained by community members, cover specific platforms and use cases, and experiment with different ways of configuring Kubernetes on CoreOS. [**Multi-node Cluster**](/docs/getting-started-guides/coreos/coreos_multinode_cluster) Set up a single master, multi-worker cluster on your choice of platform: AWS, GCE, or VMware Fusion.
[**Easy Multi-node Cluster on Google Compute Engine**](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md) Scripted installation of a single master, multi-worker cluster on GCE. Kubernetes components are managed by [fleet](https://github.com/coreos/fleet)."} {"_id":"doc-en-website-6b95f7507ad222ed95f54470f6a423639050a72e2c91b9a7efe99f9bc678ad2c","title":"","text":"$ kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case) $ kubectl run -i --tty busybox --image=busybox -- sh # Run pod as interactive shell $ kubectl attach my-pod -i # Attach to Running Container $ kubectl port-forward my-pod 5000:6000 # Listen on port 5000 on the local machine and forward to port 6000 on my-pod $ kubectl exec my-pod -- ls / # Run command in existing pod (1 container case) $ kubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case) $ kubectl top pod POD_NAME --containers # Show metrics for a given pod and its containers"} {"_id":"doc-en-website-2b138d0d39e3f9ad260d2372d2b67dd4ac8fca76f01c167c349b6ba8c968e806","title":"","text":"* GCEPersistentDisk * AWSElasticBlockStore * AzureFile * FC (Fibre Channel) * NFS * iSCSI * RBD (Ceph Block Device) * CephFS * Cinder (OpenStack block storage) * Glusterfs * VsphereVolume * HostPath (single node testing only -- local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)"} {"_id":"doc-en-website-44501423f42947375be7c7485709dcaad51c8f25b312dc47a645258dff42120a","title":"","text":"### Access Modes A `PersistentVolume` can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. The access modes are:"} {"_id":"doc-en-website-b9d15ed0a224da430e4aca4f64497af6b94cae361237ffc753c041af8e65893a","title":"","text":"> __Important!__ A volume can only be mounted using one access mode at a time, even if it supports many. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time. | Volume Plugin | ReadWriteOnce| ReadOnlyMany| ReadWriteMany| | :--- | :---: | :---: | :---: | | AWSElasticBlockStore | x | - | - | | AzureFile | x | x | x | | CephFS | x | x | x | | Cinder | x | - | - | | FC | x | x | - | | FlexVolume | x | x | - | | GCEPersistentDisk | x | x | - | | Glusterfs | x | x | x | | HostPath | x | - | - | | iSCSI | x | x | - | | NFS | x | x | x | | RBD | x | x | - | | VsphereVolume | x | - | - | ### Recycling Policy Current recycling policies are:"} {"_id":"doc-en-website-baa2618ac4d8e0345feceaf47aa22fa0c801b129310d6e123f15e41adf77e60a","title":"","text":"This page shows how to create an External Load Balancer. When creating a service, you have the option of automatically creating a cloud network load balancer. 
This provides an externally-accessible IP address that sends traffic to the correct port on your cluster nodes _provided your cluster runs in a supported environment and is configured with the correct cloud load balancer provider package_. For information on provisioning and using an Ingress resource that can give services externally-reachable URLs, load balance the traffic, terminate SSL etc., please check the [Ingress](/docs/concepts/services-networking/ingress/) documentation. {% endcapture %}"} {"_id":"doc-en-website-af860cd1b49baa2c7e24a4a9f53510f2a4e27d71079f0369650def17b873bc85","title":"","text":"1. Create the Pod: kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/pod-redis.yaml 1. Verify that the Pod's Container is running, and then watch for changes to the Pod: kubectl get --watch pod redis The output looks like this: NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s 1. In another terminal, get a shell to the running Container: kubectl exec -it redis -- /bin/bash 1. In your shell, go to `/data/redis`, and create a file: root@redis:/data/redis# echo Hello > test-file 1. In your shell, list the running processes: root@redis:/data/redis# ps aux The output is similar to this: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND redis 1 0.1 0.1 33308 3828 ? Ssl 00:46 0:00 redis-server *:6379 root 12 0.0 0.0 20228 3020 ? Ss 00:47 0:00 /bin/bash root 15 0.0 0.0 17500 2072 ? R+ 00:48 0:00 ps aux 1. In your shell, kill the redis process: root@redis:/data/redis# kill where `` is the redis process ID (PID). 1. In your original terminal, watch for changes to the redis Pod. Eventually, you will see something like this: NAME READY STATUS RESTARTS AGE redis 1/1 Running 0 13s redis 0/1 Completed 0 6m redis 1/1 Running 1 6m At this point, the Container has terminated and restarted. This is because the redis Pod has a"} {"_id":"doc-en-website-1e6d1d942dbf599a531946df16af7d4f92ddd925e17c5727c1962f2653dda1c3","title":"","text":"1. Get a shell into the restarted Container: kubectl exec -it redis -- /bin/bash 1. In your shell, go to `/data/redis`, and verify that `test-file` is still there."} {"_id":"doc-en-website-dfe165b27027fe5d2f8e266588c9c247f12bea4c6b4300f0c8df873b45509623","title":"","text":" Many applications rely on configuration which is used during application initialization or runtime, and the values assigned to configuration parameters often need to be adjusted. ConfigMaps are the Kubernetes way to inject application pods with configuration data. ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. 
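As a small illustration of what such configuration data looks like, a ConfigMap is an API object holding key-value pairs; a minimal sketch (the name `game-config` and the keys are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config        # hypothetical name
data:
  player.lives: '3'        # simple scalar configuration value
  ui.properties: |         # a file-like, multi-line value
    color=blue
    textmode=true
```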
This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps."} {"_id":"doc-en-website-9bb4140590615da30cd68838dc5662a398db0c01599d75880f90a4a76a8eb061","title":"","text":"You can project keys to specific paths and specific permissions on a per-file basis. The [Secrets](/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod) user guide explains the syntax. ### Optional References A ConfigMap reference may be marked \"optional\". If the ConfigMap is non-existent, the mounted volume will be empty. If the ConfigMap exists, but the referenced key is non-existent, the path will be absent beneath the mount point. ### Mounted ConfigMaps are updated automatically When a mounted ConfigMap is updated, the projected content is eventually updated too. This applies in the case where an optionally referenced ConfigMap comes into existence after a pod has started. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it uses its local TTL-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as the kubelet sync period (1 minute by default) + the TTL of the ConfigMaps cache (1 minute by default) in the kubelet. {{< note >}} A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. {{< /note >}} "} {"_id":"doc-en-website-7e29ee430b8c990b7e4a900bf8d07223a402bae2bc54119cf61ad64788e785c2","title":"","text":"### Restrictions - You must create a ConfigMap before referencing it in a Pod specification, or mark the ConfigMap as \"optional\" (see [Optional ConfigMaps](#optional-configmaps)). If you reference a ConfigMap that doesn't exist and haven't marked it as \"optional\", the Pod won't start. Likewise, references to keys that don't exist in the ConfigMap will prevent the pod from starting. - If you use `envFrom` to define environment variables from ConfigMaps, keys that are considered invalid will be skipped. The pod will be allowed to start, but the invalid names will be recorded in the event log (`InvalidVariableNames`). The log message lists each skipped key. For example:"} {"_id":"doc-en-website-c099f7a26ce8744a369b5ee2928035bda62bb95677d152ca73a51db7ac450d39","title":"","text":"- You can't use ConfigMaps for {{< glossary_tooltip text=\"static pods\" term_id=\"static-pod\" >}}, because the Kubelet does not support this. ### Optional ConfigMaps In a Pod, or pod template, you can mark a reference to a ConfigMap as _optional_. If the ConfigMap is non-existent, the configuration for which it provides data in the Pod (e.g. environment variable, mounted volume) will be empty. If the ConfigMap exists, but the referenced key is non-existent, the data is also empty. #### Optional ConfigMap in environment variables There might be situations where environment variables are not always required. You can mark an environment variable for a container as optional, like this: ```yaml apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: a-config key: akey optional: true # mark the variable as optional restartPolicy: Never ``` If you run this pod, and there is no ConfigMap named `a-config`, the output is empty. 
If you run this pod, and there is a ConfigMap named `a-config` but that ConfigMap doesn't have a key named `akey`, the output is also empty. If you do set a value for `akey` in the `a-config` ConfigMap, this pod prints that value and then terminates. #### Optional ConfigMap via volume plugin Volumes and files provided by a ConfigMap can also be marked as optional. The ConfigMap or the key specified does not have to exist. The mount path for such items will always be created. ```yaml apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"ls /etc/config\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: no-config optional: true # mark the source ConfigMap as optional restartPolicy: Never ``` If you run this pod, and there is no ConfigMap named `no-config`, the mounted volume will be empty. ### Mounted ConfigMaps are updated automatically When a mounted ConfigMap is updated, the projected content is eventually updated too. This applies in the case where an optionally referenced ConfigMap comes into existence after a pod has started. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, it uses its local TTL-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as the kubelet sync period (1 minute by default) + the TTL of the ConfigMaps cache (1 minute by default) in the kubelet. {{< note >}} A container using a ConfigMap as a [subPath](/docs/concepts/storage/volumes/#using-subpath) volume will not receive ConfigMap updates. 
{{< /note >}} ## {{% heading \"whatsnext\" %}}"} {"_id":"doc-en-website-54de64bf48abb13034d701d6c164d8dfd189b9bc379464ff0ee14c6d45ab26c5","title":"","text":"- title: kube-proxy CLI path: /docs/admin/kube-proxy/ - title: kube-scheduler CLI path: /docs/admin/kube-scheduler/ - title: kubelet CLI"} {"_id":"doc-en-website-86cc4b806eaeb0f38635c500043d972eb50367de723cee9fc9cbabffc176c7d2","title":"","text":"date: 2018-04-12 full_link: /docs/tasks/administer-cluster/running-cloud-controller/ short_description: > Cloud Controller Manager is a Kubernetes component that embeds cloud-specific control logic. aka: tags:"} {"_id":"doc-en-website-c49b468192e00508463de9246a92b587411ef84856f808f548c85bd36375759e","title":"","text":"- architecture - operation --- Cloud Controller Manager is a Kubernetes component that embeds cloud-specific control logic. Originally part of the kube-controller-manager, the cloud-controller-manager is responsible for decoupling the interoperability logic between Kubernetes and the underlying cloud infrastructure, enabling cloud providers to release features at a different pace compared to the main project. 
"} {"_id":"doc-en-website-2a19ac527ac25c522def08b1bba5d6b38e2be6582834ba2336fcd3d8533297cc","title":"","text":"title: HostAliases id: HostAliases date: 2019-01-31 full_link: /docs/reference/generated/kubernetes-api/{{< param \"version\" >}}/#hostalias-v1-core short_description: > A HostAliases is a mapping between the IP address and hostname to be injected into a Pod's hosts file."} {"_id":"doc-en-website-d2a78167c7da40e6c5d331e730a6420c0be4a1100189f8249cafe2262e1f2693","title":"","text":" [HostAliases](/docs/reference/generated/kubernetes-api/{{< param \"version\" >}}/#hostalias-v1-core) is an optional list of hostnames and IP addresses that will be injected into the Pod's hosts file if specified. This is only valid for non-hostNetwork Pods. "} {"_id":"doc-en-website-8c08d97a4959d0f76a2a0de1bb3b7bbe2b767a7f50422c252ddf7829350fb9c0","title":"","text":"### Submitting Documentation Pull Requests If you're fixing an issue in the existing documentation, you should submit a PR against the master branch. For more information, see [contributing to Kubernetes docs](https://kubernetes.io/docs/contribute/). "} {"_id":"doc-en-website-513cc5990fb057c00abf297f1fd6a1af9386cdb39a8e4cfdb58fc36a5b4c66dd","title":"","text":"## Metric lifecycle Alpha metric → Stable metric → Deprecated metric → Hidden metric → Deleted metric 
Alpha metrics have no stability guarantees. These metrics can be modified or deleted at any time. Stable metrics are guaranteed to not change. This means: * the metric itself will not be deleted (or renamed) * the type of metric will not be modified Deprecated metrics are slated for deletion, but are still available for use. These metrics include an annotation about the version in which they became deprecated. For example: * Before deprecation ``` # HELP some_counter this counts things # TYPE some_counter counter some_counter 0 ``` * After deprecation ``` # HELP some_counter (Deprecated since 1.15.0) this counts things # TYPE some_counter counter some_counter 0 ``` Hidden metrics are no longer published for scraping, but are still available for use. To use a hidden metric, please refer to the [Show hidden metrics](#show-hidden-metrics) section. Deleted metrics are no longer published and cannot be used. 
## Show hidden metrics As described above, admins can enable hidden metrics through a command-line flag on a specific binary. This is intended as an escape hatch for admins if they missed the migration of the metrics deprecated in the last release."} {"_id":"doc-en-website-e230ddde4b774c0eca0154d6de0c203cce92111279131a54bff125e716919c7e","title":"","text":"## IBM Cloud Kubernetes Service ### Compute nodes By using the IBM Cloud Kubernetes Service provider, you can create clusters with a mixture of virtual and physical (bare metal) nodes in a single zone or across multiple zones in a region. For more information, see [Planning your cluster and worker node setup](https://cloud.ibm.com/docs/containers?topic=containers-planning_worker_nodes). The name of the Kubernetes Node object is the private IP address of the IBM Cloud Kubernetes Service worker node instance."} {"_id":"doc-en-website-366d5535a3a8728bd42c9da4e713cd2d1ee6f8dcb5d1f6b87745f0cc68f675aa","title":"","text":"title: Cluster id: cluster date: 2019-06-15 full_link: short_description: > A set of worker machines, called {{< glossary_tooltip text=\"nodes\" term_id=\"node\" >}}, that run containerized applications. Every cluster has at least one worker node. aka: tags: - fundamental - operation --- A set of worker machines, called {{< glossary_tooltip text=\"nodes\" term_id=\"node\" >}}, that run containerized applications. Every cluster has at least one worker node. The worker node(s) host the {{< glossary_tooltip text=\"Pods\" term_id=\"pod\" >}} that are the components of the application. 
The {{< glossary_tooltip text=\"control plane\" term_id=\"control-plane\" >}} manages the worker nodes and"} {"_id":"doc-en-website-88b1c8b615dd2be5154effa2359dc4a5852ffe62c8a8b4bdb6bd8f2a78bee0fb","title":"","text":" "} {"_id":"doc-en-website-401c0b1324b400dc0eed991046b248b5ece37b930825bd44788b7e5bc44102cc","title":"","text":" --- title: Container Lifecycle Hooks content_type: concept weight: 30 --- This page describes how kubelet-managed Containers can use the Container lifecycle hook framework to run code triggered by events during their lifecycle. ## Overview Analogous to many programming language frameworks that have component lifecycle hooks, such as Angular, Kubernetes provides Containers with lifecycle hooks. The hooks enable Containers to be aware of events in their management lifecycle and run code implemented in a handler when the corresponding lifecycle hook is executed. ## Container hooks There are two hooks that are exposed to Containers: `PostStart` This hook is executed immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler. `PreStop` This hook is called immediately before a container is terminated due to an API request or management event such as a liveness/startup probe failure, preemption, resource contention and others. A call to the `PreStop` hook fails if the container is already in a terminated or completed state, and the hook must complete before the TERM signal to stop the container can be sent. The Pod's termination grace period countdown begins before the `PreStop` hook is executed, so regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period. 
No parameters are passed to the handler. A more detailed description of the Pod termination behavior can be found in [Termination of Pods](/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination). ### Hook handler implementations Containers can access a hook by implementing and registering a handler for that hook. There are two types of hook handlers that can be implemented for Containers: * Exec - Executes a specific command, such as `pre-stop.sh`, inside the cgroups and namespaces of the Container. Resources consumed by the command are counted against the Container. * HTTP - Executes an HTTP request against a specific endpoint on the Container. ### Hook handler execution When a Container lifecycle hook is called, the Kubernetes management system executes the handler according to the hook action: `httpGet` and `tcpSocket` are executed by the kubelet process, while `exec` is executed in the Container. Hook handler calls are synchronous within the context of the Pod containing the Container. This means that for a `PostStart` hook, the Container ENTRYPOINT and the hook fire asynchronously. However, if the hook takes too long to run or hangs, the Container cannot reach a `running` state. `PreStop` hooks are not executed asynchronously from the signal to stop the Container; the hook must complete its execution before the TERM signal can be sent. If a `PreStop` hook hangs during execution, the Pod's phase will remain `Terminating` until the Pod is forcibly removed after its `terminationGracePeriodSeconds` expires. This grace period applies to the total time it takes for both the `PreStop` hook to execute and for the Container to stop normally.
If, for example, `terminationGracePeriodSeconds` is 60, the hook takes 55 seconds to complete, and the Container takes 10 seconds to stop normally after receiving the signal, then the Container will be killed before it can finish stopping, since `terminationGracePeriodSeconds` is less than the total time (55+10) these two things take. If either a `PostStart` or `PreStop` hook fails, the Container is killed. Users should make their hook handlers as lightweight as possible. There are cases, however, when long-running commands make sense, such as saving state prior to stopping a Container. ### Hook delivery guarantees Hook delivery is *at least once*, which means that a hook may be called multiple times for any given event, such as `PostStart` or `PreStop`. It is up to the hook implementation to handle this correctly. Generally, only single deliveries are made. If, for example, an HTTP hook receiver is temporarily unable to take traffic, there is no attempt to resend. In some rare cases, however, double delivery may occur. For instance, if the kubelet restarts in the middle of sending a hook, the hook might be sent a second time after the kubelet comes back up. ### Debugging hook handlers The logs for a hook handler are not exposed in Pod events. If a handler fails for some reason, it broadcasts an event. For `PostStart`, this is the `FailedPostStartHook` event, and for `PreStop`, this is the `FailedPreStopHook` event. You can see these events by running `kubectl describe pod `.
Here is some example output of events from running this command: ``` Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned test-1730497541-cq1d2 to gke-test-cluster-default-pool-a07e5d30-siqd 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulling pulling image "test:1.0" 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Created Created container with docker id 5c6a256a2567; Security:[seccomp=unconfined] 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Pulled Successfully pulled image "test:1.0" 1m 1m 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Started Started container with docker id 5c6a256a2567 38s 38s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 5c6a256a2567: PostStart handler: Error executing in Docker Container: 1 37s 37s 1 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Normal Killing Killing container with docker id 8df9fdfd7054: PostStart handler: Error executing in Docker Container: 1 38s 37s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "main" with RunContainerError: "PostStart handler: Error executing in Docker Container: 1" 1m 22s 2 {kubelet gke-test-cluster-default-pool-a07e5d30-siqd} spec.containers{main} Warning FailedPostStartHook ``` ## {{% heading "whatsnext" %}} * Learn more about the [Container environment](/docs/concepts/containers/container-environment/). * Get hands-on experience with a tutorial on [attaching handlers to Container lifecycle events](/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/).
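As a concrete illustration of the exec and HTTP handler types described above, here is a minimal sketch of a Pod manifest that registers both a `postStart` and a `preStop` handler. The Pod name, the command, and the `/shutdown` endpoint are illustrative assumptions, not taken from this page:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo              # illustrative name
spec:
  terminationGracePeriodSeconds: 60 # budget shared by preStop and normal shutdown
  containers:
  - name: main
    image: registry.k8s.io/nginx-slim:0.8
    lifecycle:
      postStart:
        exec:
          # runs inside the container's cgroups and namespaces;
          # not guaranteed to run before the container ENTRYPOINT
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:
          # executed by the kubelet before the TERM signal is sent
          path: /shutdown           # assumed application endpoint
          port: 80
```

If the `preStop` call hangs, the Pod stays `Terminating` until the 60-second grace period above expires, as described in the grace-period discussion.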
"} {"_id":"doc-en-website-e376e6af3e9f8ac996e3375ebc9261c1fe5146419b39ae3c95d842e2c549b065","title":"","text":"{{- $prepend := .Get \"prepend\" }} {{- $glossaryBundle := site.GetPage \"page\" \"docs/reference/glossary\" -}} {{- $glossaryItems := $glossaryBundle.Resources.ByType \"page\" -}} {{- $term_info := $glossaryItems.GetMatch (printf \"%s*\" $id ) -}} {{- $term_info := $glossaryItems.GetMatch (printf \"%s.md\" $id ) -}} {{- if not $term_info -}} {{- errorf \"[%s] %q: %q is not a valid glossary term_id, see ./docs/reference/glossary/* for a full list\" site.Language.Lang .Page.Path $id -}} {{- end -}}"} {"_id":"doc-en-website-a1658d3352e07099be7e97c8a0de29bfd4bcc68ccca7daee38b247da21625c69","title":"","text":"- Assign `Doc Review: Open Issues` or `Tech Review: Open Issues` for PRs that have been reviewed and require further input or action before merging. - Assign `/lgtm` and `/approve` labels to PRs that can be merged. - Merge PRs when they are ready, or close PRs that shouldn’t be accepted. - Consider accepting accurate technical content even if the content meets only some of the docs' [style guidelines](/docs/contribute/style/style-guide/). Open a new issue with the label `good first issue` to address style concerns. - Triage and tag incoming issues daily. See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. ### Helpful GitHub queries for wranglers"} {"_id":"doc-en-website-35354c9e05afb4d42662d0575327f179e0fe4a08f6e86eb93abe1eae956d6abf","title":"","text":"* `TYPE`: Specifies the [resource type](#resource-types). Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms. 
For example, the following commands produce the same output: ```shell kubectl get pod pod1 kubectl get pods pod1 kubectl get po pod1 ``` * `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, for example `kubectl get pods`."} {"_id":"doc-en-website-b370ea681d2424c945f225dc97fdf0ec4fbb6b0de4429b5f2d5f6d790c952633","title":"","text":"hello world ``` ```shell # we can "uninstall" a plugin, by simply removing it from our PATH sudo rm /usr/local/bin/kubectl-hello ```"} {"_id":"doc-en-website-4a12fd7fd13af4ab625fee2d2d928f8277b80c986ad454f5c32022875ec6c750","title":"","text":"/usr/local/bin/kubectl-foo /usr/local/bin/kubectl-bar ``` ```shell # this command can also warn us about plugins that are # not executable, or that are overshadowed by other # plugins, for example"} {"_id":"doc-en-website-773f12d9752dc3546335bc166274d348e12a64fb0c9791cd85edd5d4de2a69ba","title":"","text":"abstract: "Automated container deployment, scaling and management" cid: home --- {{< announcement >}} {{< deprecationwarning >}} {{< blocks/section id="oceanNodes" >}} {{% blocks/feature image="flower" %}}"} {"_id":"doc-en-website-b82de94597e065b86b9dd3324c5c8c508ba3ac24b447194f755c510a37c6dac5","title":"","text":"abstract: "Automated container deployment, scaling and management" cid: home --- {{< announcement >}} {{< deprecationwarning >}} {{< blocks/section id="oceanNodes" >}} {{% blocks/feature image="flower" %}}"} {"_id":"doc-en-website-0e4acf3835db8ce8075d61092647fd34ef50ef456a2c3dc360f55e1cd1b50939","title":"","text":"I0710 14:43:38.274797 3655 portforward.go:225] Forwarding from [::1]:7000 -> 6379 ``` {{< note >}} `kubectl port-forward` does not return. To continue with the exercises, you will need to open another terminal. {{< /note >}} 2.
Start the Redis command line interface: ```shell "} {"_id":"doc-en-website-e09cc9e8327ccbc8bbaf55b2bf0d6b28e77ac3f19d7c093a91e7ab65eff0f86f","title":"","text":"## {{% heading "whatsnext" %}} [Cron expression format](https://en.wikipedia.org/wiki/Cron) documents the format of CronJob `schedule` fields. For instructions on creating and working with cron jobs, and for an example of CronJob"} {"_id":"doc-en-website-18a8d9560964f988ab65116da7967d09d55f0e929d96ff0753b5660e9afc15a7","title":"","text":"## {{% heading "whatsnext" %}} [Cron expression format](https://ko.wikipedia.org/wiki/Cron) documents the format of the CronJob `schedule` field. For instructions on creating and working with cron jobs, and for an example of a CronJob"} {"_id":"doc-en-website-2c0c555c3434db3fe855847dfb2155065ee869b1d9b6ecfa8670fb3766f103b1","title":"","text":"If you would like to write a concept page, see [Page Content Types](/docs/contribute/style/page-content-types/#concept) for information about the concept page types."} {"_id":"doc-en-website-a770e60485153ce8fa5b29f090e970c2efe6b76b85cfc625347050b52d58f69b","title":"","text":"If you would like to write a tutorial, see [Content Page Types](/docs/contribute/style/page-content-types/) for information about the tutorial page type."} {"_id":"doc-en-website-41c8a5d6d76b4e8515db545f74777371d38457a814a166f80035827930f66cfe","title":"","text":"Creation and deletion of namespaces are described in the [Admin Guide documentation for namespaces](/docs/admin/namespaces). {{< note >}} Avoid creating namespaces with the prefix `kube-`,
{{< /note >}} ### Viewing namespaces You can list the current namespaces in a cluster using:"} {"_id":"doc-en-website-fbbbfc6d2f97c8a975c43e3a09a2d3ab1bff9083def84641e675ee5b98ee074a","title":"","text":"## Creating a new namespace {{< note >}} Avoid creating namespaces with the prefix `kube-`, since it is reserved for Kubernetes system namespaces. {{< /note >}} 1. Create a new YAML file called `my-namespace.yaml` with the contents: ```yaml "} {"_id":"doc-en-website-66905aaae6933fa0c48468178ffb4b5516ab1bfc8b65dad882e3788b95c52ead","title":"","text":"priority: 1.0 --- {{< site-searchbar >}} {{< blocks/section id="oceanNodes" >}} {{% blocks/feature image="flower" %}} [Kubernetes]({{< relref "/docs/concepts/overview/" >}}), also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications."} {"_id":"doc-en-website-585c0940264dbfb5f9126f810fc5360e2c04f2299120eecf1eb9ec1e8e5ab43d","title":"","text":"
{{partial \"search-input\" .}}
"} {"_id":"doc-en-website-b0d583cd439498d7f7c9aa9c94e20ef87aa9c111c19bb9361063822b09783097","title":"","text":" ## Be the PR Wrangler for a week SIG Docs [approvers](/docs/contribute/participating/#approvers) take week-long turns [wrangling PRs](https://github.com/kubernetes/website/wiki/PR-Wranglers) for the repository. The PR wrangler’s duties include: - Review [open pull requests](https://github.com/kubernetes/website/pulls) daily for quality and adherence to the [Style](/docs/contribute/style/style-guide/) and [Content](/docs/contribute/style/content-guide/) guides. - Review the smallest PRs (`size/XS`) first, then iterate towards the largest (`size/XXL`). - Review as many PRs as you can. - Ensure that the CLA is signed by each contributor. - Help new contributors sign the [CLA](https://github.com/kubernetes/community/blob/master/CLA.md). - Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to automatically remind contributors who haven’t signed the CLA to do so. - Provide feedback on proposed changes and help facilitate technical reviews from members of other SIGs. - Provide inline suggestions on the PR for the proposed content changes. - If you need to verify content, comment on the PR and request more details. - Assign relevant `sig/` label(s). - If needed, assign reviewers from the `reviewers:` block in the file's front matter. - Assign `Docs Review` and `Tech Review` labels to indicate the PR's review status. - Assign `Needs Doc Review` or `Needs Tech Review` for PRs that haven't yet been reviewed. - Assign `Doc Review: Open Issues` or `Tech Review: Open Issues` for PRs that have been reviewed and require further input or action before merging. - Assign `/lgtm` and `/approve` labels to PRs that can be merged. - Merge PRs when they are ready, or close PRs that shouldn’t be accepted.
- Consider accepting accurate technical content even if the content meets only some of the docs' [style guidelines](/docs/contribute/style/style-guide/). Open a new issue with the label `good first issue` to address style concerns. - Triage and tag incoming issues daily. See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. ### Helpful GitHub queries for wranglers The following queries are helpful when wrangling. After working through these queries, the remaining list of PRs to be reviewed is usually small. These queries specifically exclude localization PRs, and only include the `master` branch (except for the last one). - [No CLA, not eligible to merge](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge+label%3Alanguage%2Fen): Remind the contributor to sign the CLA. If they have already been reminded by both the bot and a human, close the PR and remind them that they can open it after signing the CLA. **Do not review PRs whose authors have not signed the CLA!** - [Needs LGTM](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-label%3Algtm+): If it needs technical review, loop in one of the reviewers suggested by the bot. If it needs docs review or copy-editing, either suggest changes or add a copyedit commit to the PR to move it along. - [Has LGTM, needs docs approval](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+label%3Algtm): Determine whether any additional changes or updates need to be made for the PR to be merged. If you think the PR is ready to be merged, comment `/approve`. 
- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22+): Lists small PRs against master with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]). - [Not against master](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-base%3Amaster): If it's against a `dev-` branch, it's for an upcoming release. Make sure the [release meister](https://github.com/kubernetes/sig-release/tree/master/release-team) knows about it by adding a comment with `/assign @`. If it's against an old branch, help the PR author figure out whether it's targeted against the best branch. ### When to close Pull Requests Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure. - Close any PR where the CLA hasn’t been signed for two weeks. PR authors can reopen the PR after signing the CLA, so this is a low-risk way to make sure nothing gets merged without a signed CLA. - Close any PR where the author has not responded to comments or feedback in 2 or more weeks. Don't be afraid to close pull requests. Contributors can easily reopen and resume work in progress. Oftentimes a closure notice is what spurs an author to resume and finish their contribution. To close a pull request, leave a `/close` comment on the PR. {{< note >}} An automated service, [`fejta-bot`](https://github.com/fejta-bot), automatically marks issues as stale after 90 days of inactivity, then closes them after an additional 30 days of inactivity when they become rotten. PR wranglers should close issues after 14-30 days of inactivity.
{{< /note >}} ## Propose improvements SIG Docs [members](/docs/contribute/participating/#members) can propose improvements."} {"_id":"doc-en-website-80220942f11cc5d9dd4a8eff29f17dd8e2d3408e15aecb350a3634284db7a333","title":"","text":"When you’re ready to stop recording, click Stop. The video uploads automatically to YouTube."} {"_id":"doc-en-website-d01e3f14db03d6646e7288edf7ce0e22a408b209c1061872031a7f6f6dfaf745","title":"","text":" --- title: PR wranglers content_type: concept weight: 20 --- SIG Docs [approvers](/docs/contribute/participating/roles-and-responsibilites/#approvers) take week-long shifts [managing pull requests](https://github.com/kubernetes/website/wiki/PR-Wranglers) for the repository. This section covers the duties of a PR wrangler. For more information on giving good reviews, see [Reviewing changes](/docs/contribute/review/). ## Duties Each day in a week-long shift as PR Wrangler: - Triage and tag incoming issues daily. See [Triage and categorize issues](/docs/contribute/review/for-approvers/#triage-and-categorize-issues) for guidelines on how SIG Docs uses metadata. - Review [open pull requests](https://github.com/kubernetes/website/pulls) for quality and adherence to the [Style](/docs/contribute/style/style-guide/) and [Content](/docs/contribute/style/content-guide/) guides. - Start with the smallest PRs (`size/XS`) first, and end with the largest (`size/XXL`). Review as many PRs as you can. - Make sure PR contributors sign the [CLA](https://github.com/kubernetes/community/blob/master/CLA.md). - Use [this](https://github.com/zparnold/k8s-docs-pr-botherer) script to remind contributors who haven’t signed the CLA to do so. - Provide feedback on changes and ask for technical reviews from members of other SIGs. - Provide inline suggestions on the PR for the proposed content changes. - If you need to verify content, comment on the PR and request more details.
- Assign relevant `sig/` label(s). - If needed, assign reviewers from the `reviewers:` block in the file's front matter. - Use the `/approve` comment to approve a PR for merging. Merge the PR when ready. - PRs should have a `/lgtm` comment from another member before merging. - Consider accepting technically accurate content that doesn't meet the [style guidelines](/docs/contribute/style/style-guide/). Open a new issue with the label `good first issue` to address style concerns. ### Helpful GitHub queries for wranglers The following queries are helpful when wrangling. After working through these queries, the remaining list of PRs to review is usually small. These queries exclude localization PRs. All queries are against the main branch except the last one. - [No CLA, not eligible to merge](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+label%3A%22cncf-cla%3A+no%22+-label%3Ado-not-merge+label%3Alanguage%2Fen): Remind the contributor to sign the CLA. If both the bot and a human have reminded them, close the PR and remind them that they can open it after signing the CLA. **Do not review PRs whose authors have not signed the CLA!** - [Needs LGTM](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-label%3Algtm+): Lists PRs that need an LGTM from a member. If the PR needs technical review, loop in one of the reviewers suggested by the bot. If the content needs work, add suggestions and feedback in-line. - [Has LGTM, needs docs approval](https://github.com/kubernetes/website/pulls?q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+label%3Algtm): Lists PRs that need an `/approve` comment to merge. 
- [Quick Wins](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Apr+is%3Aopen+base%3Amaster+-label%3A%22do-not-merge%2Fwork-in-progress%22+-label%3A%22do-not-merge%2Fhold%22+label%3A%22cncf-cla%3A+yes%22+label%3A%22size%2FXS%22+label%3A%22language%2Fen%22+): Lists PRs against the main branch with no clear blockers. (change "XS" in the size label as you work through the PRs [XS, S, M, L, XL, XXL]). - [Not against the main branch](https://github.com/kubernetes/website/pulls?utf8=%E2%9C%93&q=is%3Aopen+is%3Apr+-label%3Ado-not-merge+label%3Alanguage%2Fen+-base%3Amaster): If the PR is against a `dev-` branch, it's for an upcoming release. Assign the [docs release manager](https://github.com/kubernetes/sig-release/tree/master/release-team#kubernetes-release-team-roles) using: `/assign @`. If the PR is against an old branch, help the author figure out whether it's targeted against the best branch. ### When to close Pull Requests Reviews and approvals are one tool to keep our PR queue short and current. Another tool is closure. Close PRs where: - The author hasn't signed the CLA for two weeks. Authors can reopen the PR after signing the CLA. This is a low-risk way to make sure nothing gets merged without a signed CLA. - The author has not responded to comments or feedback in 2 or more weeks. Don't be afraid to close pull requests. Contributors can easily reopen and resume work in progress. Often a closure notice is what spurs an author to resume and finish their contribution. To close a pull request, leave a `/close` comment on the PR. {{< note >}} The [`fejta-bot`](https://github.com/fejta-bot) bot marks issues as stale after 90 days of inactivity. After 30 more days it marks issues as rotten and closes them. PR wranglers should close issues after 14-30 days of inactivity.
{{< /note >}}"} {"_id":"doc-en-website-f3594193750eec65a99e32a1e70e4c68608a6f4651b951eb8e9da4bfda62fe8c","title":"","text":"- Visit the Netlify page preview for a PR to make sure things look good before approving. - Participate in the [PR Wrangler rotation schedule](https://github.com/kubernetes/website/wiki/PR-Wranglers) for weekly rotations. SIG Docs expects all approvers to participate in this rotation. See [PR wranglers](/docs/contribute/participating/pr-wranglers/) for more details. ### Becoming an approver"} {"_id":"doc-en-website-56f2f2bd06b6e9f07a8c1a6dcefa1b03c9bc6c96c0c9fde6b04e9c9d3be5f051","title":"","text":"2. Assign the PR to one or more current SIG Docs approvers. If approved, a SIG Docs lead adds you to the appropriate GitHub team. Once added, [@k8s-ci-robot](https://github.com/kubernetes/test-infra/tree/master/prow#bots-home) assigns and suggests you as a reviewer on new pull requests. ## {{% heading "whatsnext" %}} - Read about [PR wrangling](/docs/contribute/participating/pr-wranglers), a role all approvers take on rotation."} {"_id":"doc-en-website-78d9fd82fbdc8a8db1d19e4221c78adcffabc2a74ffa74b355b17aee56867ddd","title":"","text":"--- title: Intro to Windows support in Kubernetes content_type: concept weight: 65 --- Windows applications constitute a large portion of the services and applications that run in many organizations.
[Windows containers](https://aka.ms/windowscontainers) provide a modern way to encapsulate processes and package dependencies, making it easier to use DevOps practices and follow cloud native patterns for Windows applications. Kubernetes has become the de facto standard container orchestrator, and the release of Kubernetes 1.14 includes production support for scheduling Windows containers on Windows nodes in a Kubernetes cluster, enabling a vast ecosystem of Windows applications to leverage the power of Kubernetes. Organizations with investments in Windows-based applications and Linux-based applications don't have to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments, regardless of operating system. ## Windows containers in Kubernetes To enable the orchestration of Windows containers in Kubernetes, simply include Windows nodes in your existing Linux cluster. Scheduling Windows containers in [Pods](/ja/docs/concepts/workloads/pods/pod-overview/) on Kubernetes is as simple and easy as scheduling Linux-based containers.
In order to run Windows containers, your Kubernetes cluster must include multiple operating systems, with control plane nodes running Linux and workers running either Windows or Linux depending on your workload needs. Windows Server 2019 is the only Windows operating system supported, enabling [Kubernetes Node](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md#the-kubernetes-node) on Windows (including kubelet, [container runtime](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd), and kube-proxy). For a detailed explanation of Windows distribution channels see the [Microsoft documentation](https://docs.microsoft.com/en-us/windows-server/get-started-19/servicing-channels-19). {{< note >}} The Kubernetes control plane, including the [master components](/ja/docs/concepts/overview/components/), continues to run on Linux. There are no plans to have a Windows-only Kubernetes cluster.
{{< /note >}} {{< note >}} In this document, when we talk about Windows containers we mean Windows containers with process isolation. Support for Windows containers with [Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container) is planned for a future release. {{< /note >}} ## Supported Functionality and Limitations ### Supported Functionality #### Compute From an API and kubectl perspective, Windows containers behave in much the same way as Linux-based containers. However, there are some notable differences in key functionality which are outlined in the limitation section. Let's start with the operating system version. Refer to the following table for Windows operating system support in Kubernetes. A single heterogeneous Kubernetes cluster can have both Windows and Linux worker nodes. Windows containers have to be scheduled on Windows nodes and Linux containers on Linux nodes.
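Because a single heterogeneous cluster mixes Windows and Linux worker nodes, each Pod has to be steered to a node with the matching operating system. One minimal sketch of doing this is a `nodeSelector` on the node OS label; the label key shown here is an assumption for current clusters, and some older clusters used `beta.kubernetes.io/os` instead:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-webserver               # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows       # schedule only onto Windows nodes
  containers:
  - name: servercore
    # with process isolation, the host OS version must match
    # the container base image OS version
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```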
| Kubernetes version | Host OS version (Kubernetes Node) | | | | --- | --- | --- | --- | | | *Windows Server 1709* | *Windows Server 1803* | *Windows Server 1809/Windows Server 2019* | | *Kubernetes v1.14* | Not Supported | Not Supported | Supported for Windows Server containers Builds 17763.* with Docker EE-basic 18.09 | {{< note >}} We don't expect all Windows customers to update the operating system for their apps frequently. Upgrading your applications is what dictates and necessitates upgrading or introducing new nodes to the cluster. For customers who choose to upgrade their operating system for containers running on Kubernetes, we will offer guidance and step-by-step instructions when we add support for a new operating system version. This guidance will include recommended upgrade procedures for upgrading user applications together with cluster nodes. Windows nodes adhere to Kubernetes [version-skew policy](/ja/docs/setup/release/version-skew-policy/) (node to control plane versioning) the same way as Linux nodes do today.
{{< /note >}} {{< note >}} The Windows Server Host Operating System is subject to the [Windows Server](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) licensing. The Windows Container images are subject to the [Supplemental License Terms for Windows containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/images-eula). {{< /note >}} {{< note >}} Windows containers with process isolation have strict compatibility rules, [where the host OS version must match the container base image OS version](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility). Once we support Windows containers with Hyper-V isolation in Kubernetes, the limitation and compatibility rules will change. {{< /note >}} Key Kubernetes elements work the same way in Windows as they do in Linux. In this section, we talk about some of the key workload enablers and how they map to Windows.
* [Pods](/ja/docs/concepts/workloads/pods/pod-overview/)

  A Pod is the basic building block of Kubernetes, the smallest and simplest unit in the Kubernetes object model that you create or deploy. You may not deploy Windows and Linux containers in the same Pod. All containers in a Pod are scheduled onto a single Node, where each Node represents a specific platform and architecture. The following Pod capabilities, properties and events are supported with Windows containers:

  * Single or multiple containers per Pod with process isolation and volume sharing
  * Pod status fields
  * Readiness and Liveness probes
  * postStart & preStop container lifecycle events
  * ConfigMap, Secrets: as environment variables or volumes
  * EmptyDir
  * Named pipe host mounts
  * Resource limits

* [Controllers](/ja/docs/concepts/workloads/controllers/)

  Kubernetes controllers handle the desired state of Pods. The following workload controllers are supported with Windows containers:

  * ReplicaSet
  * ReplicationController
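Because Windows containers must land on Windows nodes, workloads are typically pinned with a `nodeSelector`. The following is a minimal sketch, not a definitive manifest: the Deployment name is illustrative, the OS label key has varied across Kubernetes releases (`beta.kubernetes.io/os` on older clusters), and `mcr.microsoft.com/windows/servercore/iis` is just an example image.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iis-sample            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: iis-sample
  template:
    metadata:
      labels:
        app: iis-sample
    spec:
      nodeSelector:
        kubernetes.io/os: windows     # steer the Pod to Windows nodes only
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis   # example Windows image
        resources:
          limits:               # resource limits are supported on Windows containers
            memory: 800Mi
            cpu: 1
```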
* [Services](/ja/docs/concepts/services-networking/service/)

  A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them, sometimes called a micro-service. You can use services for cross-operating system connectivity. In Windows, services can utilize the following types, properties and capabilities:

  * Service Environment variables
  * NodePort
  * ClusterIP
  * LoadBalancer
  * ExternalName
  * Headless services

Pods, Controllers and Services are critical elements to managing Windows workloads on Kubernetes. However, on their own they are not enough to enable the proper lifecycle management of Windows workloads in a dynamic cloud native environment. We added support for the following features:

* Pod and container metrics
* Horizontal Pod Autoscaler support
* kubectl Exec
* Resource Quotas
* Scheduler preemption

#### Container Runtime

Docker EE-basic 18.09 is required on Windows Server 2019 / 1809 nodes for Kubernetes. This works with the dockershim code included in the kubelet. Additional runtimes such as CRI-ContainerD may be supported in later Kubernetes versions.

#### Persistent Storage

Kubernetes [volumes](/docs/concepts/storage/volumes/) enable complex applications, with data persistence and Pod volume sharing requirements, to be deployed on Kubernetes. Management of persistent volumes associated with a specific storage back-end or protocol includes actions such as: provisioning/de-provisioning/resizing of volumes, attaching/detaching a volume to/from a Kubernetes node, and mounting/dismounting a volume to/from individual containers in a pod that needs to persist data.
The code implementing these volume management actions for a specific storage back-end or protocol is shipped in the form of a Kubernetes volume [plugin](/docs/concepts/storage/volumes/#types-of-volumes). The following broad classes of Kubernetes volume plugins are supported on Windows:

##### In-tree Volume Plugins

Code associated with in-tree volume plugins ships as part of the core Kubernetes code base. Deployment of in-tree volume plugins does not require installation of additional scripts or deployment of separate containerized plugin components. These plugins can handle: provisioning/de-provisioning and resizing of volumes in the storage backend, attaching/detaching of volumes to/from a Kubernetes node, and mounting/dismounting a volume to/from individual containers in a pod.
The following in-tree plugins support Windows nodes:

* [awsElasticBlockStore](/docs/concepts/storage/volumes/#awselasticblockstore)
* [azureDisk](/docs/concepts/storage/volumes/#azuredisk)
##### FlexVolume Plugins

Code associated with [FlexVolume](/docs/concepts/storage/volumes/#flexVolume) plugins ships as out-of-tree scripts or binaries that need to be deployed directly on the host. FlexVolume plugins handle attaching/detaching of volumes to/from a Kubernetes node and mounting/dismounting a volume to/from individual containers in a pod. Provisioning/de-provisioning of persistent volumes associated with FlexVolume plugins may be handled through an external provisioner that is typically separate from the FlexVolume plugins. The following FlexVolume [plugins](https://github.com/Microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows), deployed as PowerShell scripts on the host, support Windows nodes:

* [SMB](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~smb.cmd)
* [iSCSI](https://github.com/microsoft/K8s-Storage-Plugins/tree/master/flexvolume/windows/plugins/microsoft.com~iscsi.cmd)

##### CSI Plugins

{{< feature-state for_k8s_version="v1.16" state="alpha" >}}

Code associated with {{< glossary_tooltip text="CSI" term_id="csi" >}} plugins ships as out-of-tree scripts and binaries that are typically distributed as container images and deployed using standard Kubernetes constructs like DaemonSets and StatefulSets. CSI plugins handle a wide range of volume management actions in Kubernetes: provisioning/de-provisioning/resizing of volumes, attaching/detaching of volumes to/from a Kubernetes node, mounting/dismounting a volume to/from individual containers in a pod, and backup/restore of persistent data using snapshots and cloning. CSI plugins typically consist of node plugins (that run on each node as a DaemonSet) and controller plugins.
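As an illustration of how one of the FlexVolume plugins listed above might be consumed, a PersistentVolume could reference the SMB plugin roughly as follows. This is a hedged sketch only: the driver name, UNC share path, and Secret name are assumptions for this example, so check the plugin's own README for the exact values it expects.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: smb-pv                        # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  flexVolume:
    driver: "microsoft.com/smb.cmd"   # assumed driver name; verify against the plugin's docs
    secretRef:
      name: smb-credentials           # hypothetical Secret holding the SMB username/password
    options:
      source: "\\\\smb-server\\share" # hypothetical UNC path to the exported share
```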
{{< glossary_tooltip text=\"CSI\" term_id=\"csi\" >}}プラグインに関連付けられたコードは、通常、コンテナイメージとして配布され、DaemonSetsやStatefulSetsなどの標準のKubernetesコンポーネントを使用してデプロイされるout-of-treeのスクリプトおよびバイナリとして出荷されます。CSIプラグインは、Kubernetesの幅広いボリューム管理アクションを処理します:ボリュームのプロビジョニング/プロビジョニング解除/サイズ変更、Kubernetesノードへのボリュームのアタッチ/ボリュームからのデタッチ、Pod内の個々のコンテナへのボリュームのマウント/マウント解除、バックアップ/スナップショットとクローニングを使用した永続データのバックアップ/リストア。CSIプラグインは通常、ノードプラグイン(各ノードでDaemonSetとして実行される)とコントローラープラグインで構成されます。 CSI node plugins (especially those associated with persistent volumes exposed as either block devices or over a shared file-system) need to perform various privileged operations like scanning of disk devices, mounting of file systems, etc. These operations differ for each host operating system. For Linux worker nodes, containerized CSI node plugins are typically deployed as privileged containers. For Windows worker nodes, privileged operations for containerized CSI node plugins is supported using [csi-proxy](https://github.com/kubernetes-csi/csi-proxy), a community-managed, stand-alone binary that needs to be pre-installed on each Windows node. Please refer to the deployment guide of the CSI plugin you wish to deploy for further details. CSIノードプラグイン(特に、ブロックデバイスまたは共有ファイルシステムとして公開された永続ボリュームに関連付けられているプラ​​グイン)は、ディスクデバイスのスキャン、ファイルシステムのマウントなど、さまざまな特権操作を実行する必要があります。これらの操作は、ホストオペレーティングシステムごとに異なります。Linuxワーカーノードの場合、コンテナ化されたCSIノードプラグインは通常、特権コンテナとしてデプロイされます。Windowsワーカーノードの場合、コンテナ化されたCSIノードプラグインの特権操作は、[csi-proxy](https://github.com/kubernetes-csi/csi-proxy)を使用してサポートされます。各Windowsノードにプリインストールされている。詳細については、展開するCSIプラグインの展開ガイドを参照してください。 #### Networking #### ネットワーキング Networking for Windows containers is exposed through [CNI plugins](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). Windows containers function similarly to virtual machines in regards to networking. Each container has a virtual network adapter (vNIC) which is connected to a Hyper-V virtual switch (vSwitch). 
The Host Networking Service (HNS) and the Host Compute Service (HCS) work together to create containers and attach container vNICs to networks. HCS is responsible for the management of containers whereas HNS is responsible for the management of networking resources such as:

* Virtual networks (including creation of vSwitches)
* Endpoints / vNICs
* Namespaces
* Policies (Packet encapsulations, Load-balancing rules, ACLs, NAT'ing rules, etc.)

The following service spec types are supported:

* NodePort
* ClusterIP
* LoadBalancer
* ExternalName

Windows supports five different networking drivers/modes: L2bridge, L2tunnel, Overlay, Transparent, and NAT. In a heterogeneous cluster with Windows and Linux worker nodes, you need to select a networking solution that is compatible on both Windows and Linux. The following out-of-tree plugins are supported on Windows, with recommendations on when to use each CNI:

| Network Driver | Description | Container Packet Modifications | Network Plugins | Network Plugin Characteristics |
| -------------- | ----------- | ------------------------------ | --------------- | ------------------------------ |
| L2bridge | Containers are attached to an external vSwitch. Containers are attached to the underlay network, although the physical network doesn't need to learn the container MACs because they are rewritten on ingress/egress. Inter-container traffic is bridged inside the container host. | MAC is rewritten to host MAC, IP remains the same. | [win-bridge](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-bridge), [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md), Flannel host-gateway uses win-bridge | win-bridge uses L2bridge network mode, connects containers to the underlay of hosts, offering best performance. Requires user-defined routes (UDR) for inter-node connectivity. |
| L2Tunnel | This is a special case of l2bridge, but only used on Azure. All packets are sent to the virtualization host where SDN policy is applied. | MAC rewritten, IP visible on the underlay network | [Azure-CNI](https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md) | Azure-CNI allows integration of containers with Azure vNET, and allows them to leverage the set of capabilities that [Azure Virtual Network provides](https://azure.microsoft.com/en-us/services/virtual-network/). For example, securely connect to Azure services or use Azure NSGs. See [azure-cni for some examples](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking) |
| Overlay (Overlay networking for Windows in Kubernetes is in *alpha* stage) | Containers are given a vNIC connected to an external vSwitch. Each overlay network gets its own IP subnet, defined by a custom IP prefix. The overlay network driver uses VXLAN encapsulation. | Encapsulated with an outer header. | [Win-overlay](https://github.com/containernetworking/plugins/tree/master/plugins/main/windows/win-overlay), Flannel VXLAN (uses win-overlay) | win-overlay should be used when virtual container networks are desired to be isolated from the underlay of hosts (e.g. for security reasons). Allows for IPs to be re-used for different overlay networks (which have different VNID tags) if you are restricted on IPs in your datacenter. This option requires [KB4489899](https://support.microsoft.com/help/4489899) on Windows Server 2019. |
| Transparent (special use case for [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes)) | Requires an external vSwitch. Containers are attached to an external vSwitch which enables intra-pod communication via logical networks (logical switches and routers). | Packet is encapsulated either via [GENEVE](https://datatracker.ietf.org/doc/draft-gross-geneve/) or [STT](https://datatracker.ietf.org/doc/draft-davie-stt/) tunneling to reach pods which are not on the same host. Packets are forwarded or dropped via the tunnel metadata information supplied by the ovn network controller. NAT is done for north-south communication. | [ovn-kubernetes](https://github.com/openvswitch/ovn-kubernetes) | [Deploy via ansible](https://github.com/openvswitch/ovn-kubernetes/tree/master/contrib). Distributed ACLs can be applied via Kubernetes policies. IPAM support. Load-balancing can be achieved without kube-proxy. NATing is done without using iptables/netsh. |
| NAT (*not used in Kubernetes*) | Containers are given a vNIC connected to an internal vSwitch. DNS/DHCP is provided using an internal component called [WinNAT](https://blogs.technet.microsoft.com/virtualization/2016/05/25/windows-nat-winnat-capabilities-and-limitations/) | MAC and IP are rewritten to host MAC/IP. | [nat](https://github.com/Microsoft/windows-container-networking/tree/master/plugins/nat) | Included here for completeness |
As outlined above, the [Flannel](https://github.com/coreos/flannel) CNI [meta plugin](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel) is also supported on [Windows](https://github.com/containernetworking/plugins/tree/master/plugins/meta/flannel#windows-support-experimental) via the [VXLAN network backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) (**alpha support**; delegates to win-overlay) and [host-gateway network backend](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#host-gw) (stable support; delegates to win-bridge). This plugin supports delegating to one of the reference CNI plugins (win-overlay, win-bridge), to work in conjunction with the Flannel daemon on Windows (Flanneld) for automatic node subnet lease assignment and HNS network creation. This plugin reads in its own configuration file (cni.conf), and aggregates it with the environment variables from the FlannelD generated subnet.env file.
It then delegates to one of the reference CNI plugins for network plumbing, and sends the correct configuration containing the node-assigned subnet to the IPAM plugin (e.g. host-local).

For the node, pod, and service objects, the following network flows are supported for TCP/UDP traffic:

* Pod -> Pod (IP)
* Pod -> Pod (Name)
* Pod -> Service (Cluster IP)
* Pod -> Service (PQDN, but only if there are no ".")
* Pod -> Service (FQDN)
* Pod -> External (IP)
* Pod -> External (DNS)
* Node -> Pod
* Pod -> Node

The following IPAM options are supported on Windows:

* [Host-local](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local)
* HNS IPAM (Inbox platform IPAM, this is a fallback when no IPAM is set)
* [Azure-vnet-ipam](https://github.com/Azure/azure-container-networking/blob/master/docs/ipam.md) (for azure-cni only)
### Limitations

#### Control Plane

Windows is only supported as a worker node in the Kubernetes architecture and component matrix. This means that a Kubernetes cluster must always include Linux master nodes, zero or more Linux worker nodes, and zero or more Windows worker nodes.

#### Compute

##### Resource management and process isolation

Linux cgroups are used as a pod boundary for resource controls in Linux. Containers are created within that boundary for network, process and file system isolation. The cgroups APIs can be used to gather cpu/io/memory stats. In contrast, Windows uses a Job object per container with a system namespace filter to contain all processes in a container and provide logical isolation from the host. There is no way to run a Windows container without the namespace filtering in place. This means that system privileges cannot be asserted in the context of the host, and thus privileged containers are not available on Windows. Containers cannot assume an identity from the host because the Security Account Manager (SAM) is separate.

##### Operating System Restrictions

Windows has strict compatibility rules, where the host OS version must match the container base image OS version.
Only Windows containers with a container operating system of Windows Server 2019 are supported. Hyper-V isolation of containers, enabling some backward compatibility of Windows container image versions, is planned for a future release.

##### Feature Restrictions

* TerminationGracePeriod: not implemented
* Single file mapping: to be implemented with CRI-ContainerD
* Termination message: to be implemented with CRI-ContainerD
* Privileged Containers: not currently supported in Windows containers
* HugePages: not currently supported in Windows containers
* The existing node problem detector is Linux-only and requires privileged containers. In general, we don't expect this to be used on Windows because privileged containers are not supported
* Not all features of shared namespaces are supported (see the API section for more details)

##### Memory Reservations and Handling

Windows does not have an out-of-memory process killer as Linux does. Windows always treats all user-mode memory allocations as virtual, and pagefiles are mandatory. The net effect is that Windows won't reach out-of-memory conditions the same way Linux does, and processes page to disk instead of being subject to out-of-memory (OOM) termination. If memory is over-provisioned and all physical memory is exhausted, then paging can slow down performance.
Keeping memory usage within reasonable bounds is possible with a two-step process. First, use the kubelet parameters `--kubelet-reserve` and/or `--system-reserve` to account for memory usage on the node (outside of containers). This reduces [NodeAllocatable](/ja/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). As you deploy workloads, use resource limits on containers (you must set only limits, or limits must equal requests). This also subtracts from NodeAllocatable and prevents the scheduler from adding more pods once a node is full.

A best practice to avoid over-provisioning is to configure the kubelet with a system reserved memory of at least 2GB to account for Windows, Docker, and Kubernetes processes.

The flags behave differently than on Linux, as described below:

* `--kubelet-reserve`, `--system-reserve`, and `--eviction-hard` flags update Node Allocatable
* Eviction by using `--enforce-node-allocable` is not implemented
* Eviction by using `--eviction-hard` and `--eviction-soft` is not implemented
* MemoryPressure Condition is not implemented
* There are no OOM eviction actions taken by the kubelet
* Kubelet running on the Windows node does not have memory restrictions.
`--kubelet-reserve` and `--system-reserve` do not set limits on the kubelet or processes running on the host. This means the kubelet or a process on the host could cause memory resource starvation outside the node-allocatable and scheduler

#### Storage

Windows has a layered filesystem driver to mount container layers and create a copy filesystem based on NTFS. All file paths in the container are resolved only within the context of that container.

* Volume mounts can only target a directory in the container, and not an individual file
* Volume mounts cannot project files or directories back to the host filesystem
* Read-only filesystems are not supported because write access is always required for the Windows registry and SAM database. However, read-only volumes are supported
* Volume user-masks and permissions are not available. Because the SAM is not shared between the host & container, there's no mapping between them.
All permissions are resolved within the context of the container

As a result, the following storage functionality is not supported on Windows nodes:

* Volume subpath mounts. Only the entire volume can be mounted in a Windows container
* Subpath volume mounting for Secrets
* Host mount projection
* DefaultMode (due to UID/GID dependency)
* Read-only root filesystem. Mapped volumes still support readOnly
* Block device mapping
* Memory as the storage medium
* File system features like uuid/guid, per-user Linux filesystem permissions
* NFS based storage/volume support
* Expanding the mounted volume (resizefs)

#### Networking

Windows Container Networking differs in some important ways from Linux networking. The [Microsoft documentation for Windows Container Networking](https://docs.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) contains additional details and background.

The Windows host networking service and virtual switch implement namespacing and can create virtual NICs as needed for a pod or container.
However, many configurations such as DNS, routes, and metrics are stored in the Windows registry database rather than /etc/... files as they are on Linux. The Windows registry for the container is separate from that of the host, so concepts like mapping /etc/resolv.conf from the host into a container don't have the same effect they would on Linux. These must be configured using Windows APIs run in the context of that container. Therefore CNI implementations need to call the HNS instead of relying on file mappings to pass network details into the pod or container.

The following networking functionality is not supported on Windows nodes:

* Host networking mode is not available for Windows pods
* Local NodePort access from the node itself fails (works for other nodes or external clients)
* Accessing service VIPs from nodes will be available with a future release of Windows Server
* Overlay networking support in kube-proxy is an alpha release. In addition, it requires [KB4482887](https://support.microsoft.com/en-us/help/4482887/windows-10-update-kb4482887) to be installed on Windows Server 2019
* Local Traffic Policy and DSR mode
* Windows containers connected to l2bridge, l2tunnel, or overlay networks do not support communicating over the IPv6 stack. There is outstanding Windows platform work required to enable these network drivers to consume IPv6 addresses, and subsequent Kubernetes work in kubelet, kube-proxy, and CNI plugins.
* Outbound communication using the ICMP protocol via the win-overlay, win-bridge, and Azure-CNI plugin.
Specifically, the Windows data plane ([VFP](https://www.microsoft.com/en-us/research/project/azure-virtual-filtering-platform/)) doesn't support ICMP packet transpositions. This means: * ICMP packets directed to destinations within the same network (e.g. pod to pod communication via ping) work as expected and without any limitations * TCP/UDP packets work as expected and without any limitations * ICMP packets directed to pass through a remote network (e.g. pod to external internet communication via ping) cannot be transposed and thus will not be routed back to their source * Since TCP/UDP packets can still be transposed, one can substitute `ping ` with `curl ` to be able to debug connectivity to the outside world. * ホストネットワーキングモードはWindows Podでは使用できません * ノード自体からのローカルNodePortアクセスは失敗します(他のノードまたは外部クライアントで機能) * ノードからのService VIPへのアクセスは、Windows Serverの将来のリリースで利用可能になる予定です * kube-proxyのオーバーレイネットワーキングサポートはアルファリリースです。さらに、[KB4482887](https://support.microsoft.com/en-us/help/4482887/windows-10-update-kb4482887)がWindows Server 2019にインストールされている必要があります * ローカルトラフィックポリシーとDSRモード * l2bridge、l2tunnel、またはオーバーレイネットワークに接続されたWindowsコンテナは、IPv6スタックを介した通信をサポートしていません。これらのネットワークドライバーがIPv6アドレスを使用できるようにするために必要な機能として、優れたWindowsプラットフォームの機能があり、それに続いて、kubelet、kube-proxy、およびCNIプラグインといったKubernetesの機能があります。 * win-overlay、win-bridge、およびAzure-CNIプラグインを介したICMPプロトコルを使用したアウトバウンド通信。具体的には、Windowsデータプレーン([VFP](https://www.microsoft.com/en-us/research/project/azure-virtual-filtering-platform/))は、ICMPパケットの置き換えをサポートしていません。これの意味は: * 同じネットワーク内の宛先に向けられたICMPパケット(pingを介したPod間通信など)は期待どおりに機能し、制限はありません * TCP/UDPパケットは期待どおりに機能し、制限はありません * リモートネットワーク(Podからping経由の外部インターネット通信など)を通過するように指示されたICMPパケットは置き換えできないため、ソースにルーティングされません。 * TCP/UDPパケットは引き続き置き換えできるため、`ping `を`curl `に置き換えることで、外部への接続をデバッグできます。 These features were added in Kubernetes v1.15: これらの機能はKubernetes v1.15で追加されました。 * `kubectl port-forward` ##### CNI Plugins ##### CNIプラグイン * Windows reference network plugins win-bridge and win-overlay do not currently implement [CNI 
spec](https://github.com/containernetworking/cni/blob/master/SPEC.md) v0.4.0 due to missing \"CHECK\" implementation. * The Flannel VXLAN CNI has the following limitations on Windows: * Windowsリファレンスネットワークプラグインのwin-bridgeとwin-overlayは、[CNI仕様](https://github.com/containernetworking/cni/blob/master/SPEC.md)v0.4.0において「CHECK」実装がないため、今のところ実装されていません。 * Flannel VXLAN CNIについては、Windowsで次の制限があります。: 1. Node-pod connectivity isn't possible by design. It's only possible for local pods with Flannel [PR 1096](https://github.com/coreos/flannel/pull/1096) 2. We are restricted to using VNI 4096 and UDP port 4789. The VNI limitation is being worked on and will be overcome in a future release (open-source flannel changes). See the official [Flannel VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) backend docs for more details on these parameters. 1. Node-podの直接間接続は設計上不可能です。Flannel[PR 1096](https://github.com/coreos/flannel/pull/1096)を使用するローカルPodでのみ可能です 2. VNI 4096とUDPポート4789の使用に制限されています。VNIの制限は現在取り組んでおり、将来のリリースで解決される予定です(オープンソースのflannelの変更)。これらのパラメーターの詳細については、公式の[Flannel VXLAN](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan)バックエンドのドキュメントをご覧ください。 ##### DNS {#dns-limitations} * ClusterFirstWithHostNet is not supported for DNS. Windows treats all names with a '.' as a FQDN and skips PQDN resolution * On Linux, you have a DNS suffix list, which is used when trying to resolve PQDNs. On Windows, we only have 1 DNS suffix, which is the DNS suffix associated with that pod's namespace (mydns.svc.cluster.local for example). Windows can resolve FQDNs and services or names resolvable with just that suffix. For example, a pod spawned in the default namespace, will have the DNS suffix **default.svc.cluster.local**. On a Windows pod, you can resolve both **kubernetes.default.svc.cluster.local** and **kubernetes**, but not the in-betweens, like **kubernetes.default** or **kubernetes.default.svc**. 
* On Windows, there are multiple DNS resolvers that can be used. As these come with slightly different behaviors, using the `Resolve-DNSName` utility for name query resolutions is recommended.

##### Security

Secrets are written in clear text on the node's volume (as compared to tmpfs/in-memory on Linux). This means customers have to do two things:

1. Use file ACLs to secure the secrets file location
2. Use volume-level encryption using [BitLocker](https://docs.microsoft.com/en-us/windows/security/information-protection/bitlocker/bitlocker-how-to-deploy-on-windows-server)

[RunAsUser](/docs/concepts/policy/pod-security-policy/#users-and-groups) is not currently supported on Windows. The workaround is to create local accounts before packaging the container. The RunAsUsername capability may be added in a future release.
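As a sketch of that workaround, a local account can be created while building the container image. The base image tag and user name below are illustrative only, not prescribed by Kubernetes:

```dockerfile
# Illustrative sketch: create a local Windows account at image build time,
# since RunAsUser in the pod spec is not honored on Windows nodes.
FROM mcr.microsoft.com/windows/servercore:ltsc2019
RUN net user appuser /add
USER appuser
```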
Linux-specific pod security context privileges such as SELinux, AppArmor, Seccomp, Capabilities (POSIX Capabilities), and others are not supported.

In addition, as mentioned already, privileged containers are not supported on Windows.

#### API

There are no differences in how most of the Kubernetes APIs work for Windows. The subtleties around what's different come down to differences in the OS and container runtime. In certain situations, some properties on workload APIs such as Pod or Container were designed with an assumption that they are implemented on Linux, failing to run on Windows.

At a high level, these OS concepts are different:

* Identity - Linux uses userID (UID) and groupID (GID) which are represented as integer types. User and group names are not canonical - they are just an alias in `/etc/groups` or `/etc/passwd` back to UID+GID. Windows uses a larger binary security identifier (SID) which is stored in the Windows Security Access Manager (SAM) database. This database is not shared between the host and containers, or between containers.
* File permissions - Windows uses an access control list based on SIDs, rather than a bitmask of permissions and UID+GID
* File paths - convention on Windows is to use `\` instead of `/`. The Go IO libraries typically accept both and just make it work, but when you're setting a path or command line that's interpreted inside a container, `\` may be needed.
* Signals - Windows interactive apps handle termination differently, and can implement one or more of these:
  * A UI thread handles well-defined messages including WM_CLOSE
  * Console apps handle ctrl-c or ctrl-break using a Control Handler
  * Services register a Service Control Handler function that can accept SERVICE_CONTROL_STOP control codes

Exit codes follow the same convention where 0 is success and nonzero is failure. The specific error codes may differ across Windows and Linux. However, exit codes passed from the Kubernetes components (kubelet, kube-proxy) are unchanged.

##### V1.Container

* V1.Container.ResourceRequirements.limits.cpu and V1.Container.ResourceRequirements.limits.memory - Windows doesn't use hard limits for CPU allocations. Instead, a share system is used. The existing fields based on millicores are scaled into relative shares that are followed by the Windows scheduler.
[see: kuberuntime/helpers_windows.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/helpers_windows.go), [see: resource controls in Microsoft docs](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/resource-controls)
* Huge pages are not implemented in the Windows container runtime, and are not available. They require [asserting a user privilege](https://docs.microsoft.com/en-us/windows/desktop/Memory/large-page-support) that's not configurable for containers.
* V1.Container.ResourceRequirements.requests.cpu and V1.Container.ResourceRequirements.requests.memory - Requests are subtracted from node available resources, so they can be used to avoid overprovisioning a node. However, they cannot be used to guarantee resources in an overprovisioned node. They should be applied to all containers as a best practice if the operator wants to avoid overprovisioning entirely.
* V1.Container.SecurityContext.allowPrivilegeEscalation - not possible on Windows; none of the capabilities are hooked up
* V1.Container.SecurityContext.Capabilities - POSIX capabilities are not implemented on Windows
* V1.Container.SecurityContext.privileged - Windows doesn't support privileged containers
* V1.Container.SecurityContext.procMount - Windows doesn't have a /proc filesystem
* V1.Container.SecurityContext.readOnlyRootFilesystem - not possible on Windows; write access is required for registry & system processes to run inside the container
* V1.Container.SecurityContext.runAsGroup - not possible on Windows, no GID support
* V1.Container.SecurityContext.runAsNonRoot - Windows does not have a root user. The closest equivalent is ContainerAdministrator, which is an identity that doesn't exist on the node.
* V1.Container.SecurityContext.runAsUser - not possible on Windows, no UID support as int
* V1.Container.SecurityContext.seLinuxOptions - not possible on Windows, no SELinux
* V1.Container.terminationMessagePath - this has some limitations in that Windows doesn't support mapping single files. The default value is /dev/termination-log, which does work because it does not exist on Windows by default.
##### V1.Pod

* V1.Pod.hostIPC, v1.pod.hostpid - host namespace sharing is not possible on Windows
* V1.Pod.hostNetwork - There is no Windows OS support to share the host network
* V1.Pod.dnsPolicy - ClusterFirstWithHostNet - is not supported because host networking is not supported on Windows
* V1.Pod.podSecurityContext - see V1.PodSecurityContext below
* V1.Pod.shareProcessNamespace - this is a beta feature, and depends on Linux namespaces which are not implemented on Windows. Windows cannot share process namespaces or the container's root filesystem. Only the network can be shared.
* V1.Pod.terminationGracePeriodSeconds - this is not fully implemented in Docker on Windows, see: [reference](https://github.com/moby/moby/issues/25982). The behavior today is that the ENTRYPOINT process is sent CTRL_SHUTDOWN_EVENT, then Windows waits 5 seconds by default, and finally shuts down all processes using the normal Windows shutdown behavior. The 5 second default is actually in the Windows registry [inside the container](https://github.com/moby/moby/issues/25982#issuecomment-426441183), so it can be overridden when the container is built.
* V1.Pod.volumeDevices - this is a beta feature, and is not implemented on Windows. Windows cannot attach raw block devices to pods.
* V1.Pod.volumes - EmptyDir, Secret, ConfigMap, HostPath - all work and have tests in TestGrid
* V1.emptyDirVolumeSource - the Node default medium is disk on Windows. Memory is not supported, as Windows does not have a built-in RAM disk.
* V1.VolumeMount.mountPropagation - mount propagation is not supported on Windows.
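To illustrate the emptyDir constraint above: a disk-backed emptyDir works in a Windows pod, while `medium: Memory` would not. The pod name and image in this sketch are hypothetical:

```yaml
# Hypothetical example: a disk-backed emptyDir on a Windows node.
# Setting `medium: Memory` here would fail (no built-in RAM disk on Windows).
apiVersion: v1
kind: Pod
metadata:
  name: empty-dir-pod            # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019   # illustrative image
    volumeMounts:
    - name: scratch
      mountPath: C:\scratch
  volumes:
  - name: scratch
    emptyDir: {}                 # medium omitted, so it defaults to disk
```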
##### V1.PodSecurityContext

None of the PodSecurityContext fields work on Windows. They're listed here for reference.

* V1.PodSecurityContext.SELinuxOptions - SELinux is not available on Windows
* V1.PodSecurityContext.RunAsUser - provides a UID, not available on Windows
* V1.PodSecurityContext.RunAsGroup - provides a GID, not available on Windows
* V1.PodSecurityContext.RunAsNonRoot - Windows does not have a root user. The closest equivalent is ContainerAdministrator, which is an identity that doesn't exist on the node.
* V1.PodSecurityContext.SupplementalGroups - provides GID, not available on Windows
* V1.PodSecurityContext.Sysctls - these are part of the Linux sysctl interface. There's no equivalent on Windows.
## Getting Help and Troubleshooting {#troubleshooting}

Your main source of help for troubleshooting your Kubernetes cluster should start with this [section](/docs/tasks/debug-application-cluster/troubleshooting/). Some additional, Windows-specific troubleshooting help is included in this section. Logs are an important element of troubleshooting issues in Kubernetes. Make sure to include them any time you seek troubleshooting assistance from other contributors. Follow the instructions in the SIG-Windows [contributing guide on gathering logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs).

1. How do I know start.ps1 completed successfully?

    You should see kubelet, kube-proxy, and (if you chose Flannel as your networking solution) flanneld host-agent processes running on your node, with running logs being displayed in separate PowerShell windows. In addition to this, your Windows node should be listed as "Ready" in your Kubernetes cluster.
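A quick sanity check along those lines might look as follows. This is a sketch: it assumes kubectl on the node is already configured to talk to the cluster, and that the components run under these process names:

```powershell
# Sketch: confirm the node components are running and the node registered.
Get-Process kubelet, kube-proxy, flanneld   # errors if any process is missing
kubectl get nodes                           # the Windows node should show STATUS "Ready"
```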
1. Can I configure the Kubernetes node processes to run in the background as services?

    Kubelet and kube-proxy are already configured to run as native Windows Services, offering resiliency by re-starting the services automatically in the event of failure (for example a process crash). You have two options for configuring these node components as services.

    1. As native Windows Services

        Kubelet & kube-proxy can be run as native Windows Services using `sc.exe`.

        ```powershell
        # Create the services for kubelet and kube-proxy in two separate commands
        sc.exe create <component_name> binPath= "<path_to_binary> --service <other_args>"

        # Please note that if the arguments contain spaces, they must be escaped.
        sc.exe create kubelet binPath= "C:\kubelet.exe --service --hostname-override 'minion' <other_args>"

        # Start the services
        Start-Service kubelet
        Start-Service kube-proxy

        # Stop the service
        Stop-Service kubelet (-Force)
        Stop-Service kube-proxy (-Force)

        # Query the service status
        Get-Service kubelet
        Get-Service kube-proxy
        ```

    1. Using nssm.exe

        You can also always use alternative service managers like [nssm.exe](https://nssm.cc/) to run these processes (flanneld, kubelet & kube-proxy) in the background for you.
You can use this [sample script](https://github.com/Microsoft/SDN/tree/master/Kubernetes/flannel/register-svc.ps1), leveraging nssm.exe to register kubelet, kube-proxy, and flanneld.exe to run as Windows services in the background.

```powershell
register-svc.ps1 -NetworkMode <Network mode> -ManagementIP <Windows Node IP> -ClusterCIDR <Cluster subnet> -KubeDnsServiceIP <Kube-dns Service IP> -LogDir <Directory to place logs>

# NetworkMode      = The network mode l2bridge (flannel host-gw, also the default value) or overlay (flannel vxlan) chosen as a network solution
# ManagementIP     = The IP address assigned to the Windows node. You can use ipconfig to find this
# ClusterCIDR      = The cluster subnet range. (Default value 10.244.0.0/16)
# KubeDnsServiceIP = The Kubernetes DNS service IP (Default value 10.96.0.10)
# LogDir           = The directory where kubelet and kube-proxy logs are redirected into their respective output files (Default value C:\k)
```

If the above referenced script is not suitable, you can manually configure nssm.exe using the following examples.
```powershell
# Register flanneld.exe
nssm install flanneld C:\flannel\flanneld.exe
nssm set flanneld AppParameters --kubeconfig-file=c:\k\config --iface=<ManagementIP> --ip-masq=1 --kube-subnet-mgr=1
nssm set flanneld AppEnvironmentExtra NODE_NAME=<hostname>
nssm set flanneld AppDirectory C:\flannel
nssm start flanneld

# Register kubelet.exe
# Microsoft releases the pause infrastructure container at mcr.microsoft.com/k8s/core/pause:1.2.0
# For more info search for "pause" in the "Guide for adding Windows Nodes in Kubernetes"
nssm install kubelet C:\k\kubelet.exe
nssm set kubelet AppParameters --hostname-override=<hostname> --v=6 --pod-infra-container-image=mcr.microsoft.com/k8s/core/pause:1.2.0 --resolv-conf="" --allow-privileged=true --enable-debugging-handlers --cluster-dns=<DNS-service-IP> --cluster-domain=cluster.local --kubeconfig=c:\k\config --hairpin-mode=promiscuous-bridge --image-pull-progress-deadline=20m --cgroups-per-qos=false --log-dir=<log directory> --logtostderr=false --enforce-node-allocatable="" --network-plugin=cni --cni-bin-dir=c:\k\cni --cni-conf-dir=c:\k\cni\config
nssm set kubelet AppDirectory C:\k
nssm start kubelet

# Register kube-proxy.exe (l2bridge / host-gw)
nssm install kube-proxy C:\k\kube-proxy.exe
nssm set kube-proxy AppDirectory c:\k
nssm set kube-proxy AppParameters --v=4 --proxy-mode=kernelspace --hostname-override=<hostname> --kubeconfig=c:\k\config --enable-dsr=false --log-dir=<log directory> --logtostderr=false
nssm set kube-proxy DependOnService kubelet
nssm start kube-proxy

# Register kube-proxy.exe (overlay / vxlan)
nssm install kube-proxy C:\k\kube-proxy.exe
nssm set kube-proxy AppDirectory c:\k
nssm set kube-proxy AppParameters --v=4 --proxy-mode=kernelspace --feature-gates="WinOverlay=true" --hostname-override=<hostname> --kubeconfig=c:\k\config --network-name=vxlan0 --source-vip=<source-vip> --enable-dsr=false --log-dir=<log directory> --logtostderr=false
```

For initial troubleshooting, you can use the following flags in [nssm.exe](https://nssm.cc/) to redirect stdout and stderr to an output file:

```powershell
nssm set <Service Name> AppStdout C:\k\mysvc.log
nssm set <Service Name> AppStderr C:\k\mysvc.log
```

For additional details, see official [nssm usage](https://nssm.cc/usage) docs.

1. My Windows Pods do not have network connectivity

    If you are using virtual machines, ensure that MAC spoofing is enabled on all the VM network adapter(s).

1. My Windows Pods cannot ping external resources

    Windows Pods do not have outbound rules programmed for the ICMP protocol today. However, TCP/UDP is supported. When trying to demonstrate connectivity to resources outside of the cluster, please substitute `ping <IP>` with corresponding `curl <IP>` commands.

    If you are still facing problems, most likely your network configuration in [cni.conf](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/l2bridge/cni/config/cni.conf) deserves some extra attention. You can always edit this static file. The configuration update will apply to any newly created Kubernetes resources.
One of the Kubernetes networking requirements (see [Kubernetes model](/ja/docs/concepts/cluster-administration/networking/)) is for cluster communication to occur without NAT internally. To honor this requirement, there is an [ExceptionList](https://github.com/Microsoft/SDN/blob/master/Kubernetes/flannel/l2bridge/cni/config/cni.conf#L20) for all the communication where we do not want outbound NAT to occur. However, this also means that you need to exclude the external IP you are trying to query from the ExceptionList. Only then will the traffic originating from your Windows pods be SNAT'ed correctly to receive a response from the outside world. In this regard, your ExceptionList in `cni.conf` should look as follows:

```conf
"ExceptionList": [
  "10.244.0.0/16",  # Cluster subnet
  "10.96.0.0/12",   # Service subnet
  "10.127.130.0/24" # Management (host) subnet
]
```

1. My Windows node cannot access NodePort service

    Local NodePort access from the node itself fails. This is a known limitation. NodePort access works from other nodes or external clients.

1. vNICs and HNS endpoints of containers are being deleted

    This issue can be caused when the `hostname-override` parameter is not passed to [kube-proxy](/docs/reference/command-line-tools-reference/kube-proxy/). To resolve it, users need to pass the hostname to kube-proxy as follows:

    ```powershell
    C:\k\kube-proxy.exe --hostname-override=$(hostname)
    ```

1. With flannel my nodes are having issues after rejoining a cluster

    Whenever a previously deleted node is being re-joined to the cluster, flannelD tries to assign a new pod subnet to the node. Users should remove the old pod subnet configuration files in the following paths:

    ```powershell
    Remove-Item C:\k\SourceVip.json
    Remove-Item C:\k\SourceVipRequest.json
    ```

1. After launching `start.ps1`, flanneld is stuck in "Waiting for the Network to be created"

    There are numerous reports of this [issue which are being investigated](https://github.com/coreos/flannel/issues/1066); most likely it is a timing issue for when the management IP of the flannel network is set. A workaround is to simply relaunch start.ps1 or relaunch it manually as follows:

    ```powershell
    PS C:> [Environment]::SetEnvironmentVariable("NODE_NAME", "<Windows_Worker_Hostname>")
    PS C:> C:\flannel\flanneld.exe --kubeconfig-file=c:\k\config --iface=<Windows_Worker_Node_IP> --ip-masq=1 --kube-subnet-mgr=1
    ```

1. My Windows Pods cannot launch because of missing `/run/flannel/subnet.env`

    This indicates that Flannel didn't launch correctly. You can either try to restart flanneld.exe or you can copy the files over manually from `/run/flannel/subnet.env` on the Kubernetes master to `C:\run\flannel\subnet.env` on the Windows worker node and modify the `FLANNEL_SUBNET` row to a different number. For example, if node subnet 10.244.4.1/24 is desired:

    ```env
    FLANNEL_NETWORK=10.244.0.0/16
    FLANNEL_SUBNET=10.244.4.1/24
    FLANNEL_IPMASQ=true
    ```

1. My Windows node cannot access my services using the service IP

    This is a known limitation of the current networking stack on Windows. Windows Pods are able to access the service IP, however.

1. No network adapter is found when starting kubelet

    The Windows networking stack needs a virtual adapter for Kubernetes networking to work. If the following commands return no results (in an admin shell), virtual network creation (a necessary prerequisite for kubelet to work) has failed:

    ```powershell
    Get-HnsNetwork | ? Name -ieq "cbr0"
    Get-NetAdapter | ? Name -Like "vEthernet (Ethernet*"
    ```

    Often it is worthwhile to modify the [InterfaceName](https://github.com/microsoft/SDN/blob/master/Kubernetes/flannel/start.ps1#L6) parameter of the start.ps1 script, in cases where the host's network adapter isn't "Ethernet".
    Otherwise, consult the output of the `start-kubelet.ps1` script to see if there are errors during virtual network creation.

1. My Pods are stuck at "Container Creating" or restarting over and over

    Check that your pause image is compatible with your OS version. The [instructions](https://docs.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/deploying-resources) assume that both the OS and the containers are version 1803. If you have a later version of Windows, such as an Insider build, you need to adjust the images accordingly. Please refer to Microsoft's [Docker repository](https://hub.docker.com/u/microsoft/) for images. Regardless, both the pause image Dockerfile and the sample service expect the image to be tagged as :latest.

    Starting with Kubernetes v1.14, Microsoft releases the pause infrastructure container at `mcr.microsoft.com/k8s/core/pause:1.2.0`. For more information search for "pause" in the [Guide for adding Windows Nodes in Kubernetes](../user-guide-windows-nodes).

1. DNS resolution is not properly working
DNS名前解決が正しく機能していない Check the DNS limitations for Windows in this [section](#dns-limitations). この[セクション](#dns-limitations)でDNSの制限を確認してください。 1. `kubectl port-forward` fails with \"unable to do port forwarding: wincat not found\" 1. `kubectl port-forward`が「ポート転送を実行できません:wincatが見つかりません」で失敗します This was implemented in Kubernetes 1.15, and the pause infrastructure container `mcr.microsoft.com/k8s/core/pause:1.2.0`. Be sure to use these versions or newer ones. If you would like to build your own pause infrastructure container, be sure to include [wincat](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/cmd/wincat) これはKubernetes 1.15、およびPauseインフラストラクチャコンテナ`mcr.microsoft.com/k8s/core/pause:1.2.0`で実装されました。必ずこれらのバージョン以降を使用してください。 独自のPauseインフラストラクチャコンテナを構築する場合は、必ず[wincat](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/cmd/wincat)を含めてください。 1. My Kubernetes installation is failing because my Windows Server node is behind a proxy 1. Windows Serverノードがプロキシの背後にあるため、Kubernetesのインストールが失敗します If you are behind a proxy, the following PowerShell environment variables must be defined: プロキシの背後にある場合は、次のPowerShell環境変数を定義する必要があります。: ```PowerShell [Environment]::SetEnvironmentVariable(\"HTTP_PROXY\", \"http://proxy.example.com:80/\", [EnvironmentVariableTarget]::Machine) [Environment]::SetEnvironmentVariable(\"HTTPS_PROXY\", \"http://proxy.example.com:443/\", [EnvironmentVariableTarget]::Machine) ``` 1. What is a `pause` container? 1. `pause`コンテナとは何ですか In a Kubernetes Pod, an infrastructure or \"pause\" container is first created to host the container endpoint. Containers that belong to the same pod, including infrastructure and worker containers, share a common network namespace and endpoint (same IP and port space). Pause containers are needed to accommodate worker containers crashing or restarting without losing any of the networking configuration. 
   The "pause" (infrastructure) image is hosted on Microsoft Container Registry (MCR). You can access it using `docker pull mcr.microsoft.com/k8s/core/pause:1.2.0`. For more details, see the [DOCKERFILE](https://github.com/kubernetes-sigs/sig-windows-tools/tree/master/cmd/wincat).

### Further investigation

If these steps don't resolve your problem, you can get help running Windows containers on Windows nodes in Kubernetes through:

* StackOverflow [Windows Server Container](https://stackoverflow.com/questions/tagged/windows-server-container) topic
* Kubernetes Official Forum [discuss.kubernetes.io](https://discuss.kubernetes.io/)
* Kubernetes Slack [#SIG-Windows Channel](https://kubernetes.slack.com/messages/sig-windows)

## Reporting Issues and Feature Requests

If you have what looks like a bug, or you would like to make a feature request, please use the [GitHub issue tracking system](https://github.com/kubernetes/kubernetes/issues). You can open issues on [GitHub](https://github.com/kubernetes/kubernetes/issues/new/choose) and assign them to SIG-Windows. You should first search the list of issues in case it was reported previously; comment with your experience on the issue and add additional logs. SIG-Windows Slack is also a great avenue to get some initial support and troubleshooting ideas prior to creating a ticket.

If filing a bug, please include detailed information about how to reproduce the problem, such as:

* Kubernetes version: kubectl version
* Environment details: Cloud provider, OS distro, networking choice and configuration, and Docker version
* Detailed steps to reproduce the problem
* [Relevant logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs)
* Tag the issue sig/windows by commenting on the issue with `/sig windows` to bring it to a SIG-Windows member's attention

## {{% heading "whatsnext" %}}

We have a lot of features in our roadmap. An abbreviated high level list is included below, but we encourage you to view our [roadmap project](https://github.com/orgs/kubernetes/projects/8) and help us make Windows support better by [contributing](https://github.com/kubernetes/community/blob/master/sig-windows/).
### CRI-ContainerD

{{< glossary_tooltip term_id="containerd" >}} is another OCI-compliant runtime that recently graduated as a {{< glossary_tooltip text="CNCF" term_id="cncf" >}} project. It's currently tested on Linux, but 1.3 will bring support for Windows and Hyper-V. [[reference](https://blog.docker.com/2019/02/containerd-graduates-within-the-cncf/)]

The CRI-ContainerD interface will be able to manage sandboxes based on Hyper-V. This provides a foundation where RuntimeClass could be implemented for new use cases including:

* Hypervisor-based isolation between pods for additional security
* Backwards compatibility allowing a node to run a newer Windows Server version without requiring containers to be rebuilt
* Specific CPU/NUMA settings for a pod
* Memory isolation and reservations

### Hyper-V isolation

The existing Hyper-V isolation support, an experimental feature as of v1.10, will be deprecated in the future in favor of the CRI-ContainerD and RuntimeClass features mentioned above. To use the current features and create a Hyper-V isolated container, the kubelet should be started with the feature gate `HyperVContainer=true` and the Pod should include the annotation `experimental.windows.kubernetes.io/isolation-type=hyperv`. In the experimental release, this feature is limited to 1 container per Pod.

```yaml
apiVersion: apps/v1
# ... (remainder of the example Deployment elided) ...
        - containerPort: 80
```

### Deployment with kubeadm and cluster API

Kubeadm is becoming the de facto standard for users to deploy a Kubernetes cluster. Windows node support in kubeadm will come in a future release. We are also making investments in cluster API to ensure Windows nodes are properly provisioned.

### A few other key features

* Beta support for Group Managed Service Accounts
* More CNIs
* More Storage Plugins
In Kubernetes v1.13.0 and later, to list or pull kube-dns images instead of the CoreDNS image, use the `--config` method described [here](/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-addon).
This file is passed using the `--config` flag and it must contain a `JoinConfiguration` structure. Mixing `--config` with other flags may not be allowed in some cases.

To print the default values of `JoinConfiguration`, run the following command:

```shell
kubeadm config print join-defaults
```

If your configuration is not using the latest version it is **recommended** that you migrate using the [kubeadm config migrate](/docs/reference/setup-tools/kubeadm/kubeadm-config/) command.

For more information on the fields and usage of the configuration you can navigate to our API reference page and pick a version from [the list](https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#pkg-subdirectories).

## {{% heading "whatsnext" %}}

For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/). To configure `kubeadm init` with a configuration file, see [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
To customize control plane components, including optional IPv6 assignment to liveness probe for control plane components and etcd server, provide extra arguments to each component as documented in [custom arguments](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/)."} {"_id":"doc-en-website-c1ac60e6c92845d2b6c78858c7070e6e0b78f5d60ded7c02a1b354daf3c2940a","title":"","text":" --- title: 用户认证 content_type: concept weight: 10 --- 本页提供身份认证有关的概述。 ## Kubernetes 中的用户 {#users-in-kubernetes} 所有 Kubernetes 集群都有两类用户:由 Kubernetes 管理的服务账号和普通用户。 Kuernetes 假定使用以下方式之一来利用与集群无关的服务来管理普通用户: - 负责分发私钥的管理员 - 类似 Keystone 或者 Google Accounts 这类用户数据库 - 包含用户名和密码列表的文件 有鉴于此,_Kubernetes 并不包含用来代表普通用户账号的对象_。 普通用户的信息无法通过 API 调用添加到集群中。 尽管无法通过 API 调用来添加普通用户,Kubernetes 仍然认为能够提供由集群的证书 机构签名的合法证书的用户是通过身份认证的用户。基于这样的配置,Kubernetes 使用证书中的 'subject' 的通用名称(Common Name)字段(例如,\"/CN=bob\")来 确定用户名。接下来,基于角色访问控制(RBAC)子系统会确定用户是否有权针对 某资源执行特定的操作。进一步的细节可参阅 [证书请求](/zh/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user) 下普通用户主题。 与此不同,服务账号是 Kubernetes API 所管理的用户。它们被绑定到特定的名字空间, 或者由 API 服务器自动创建,或者通过 API 调用创建。服务账号与一组以 Secret 保存 的凭据相关,这些凭据会被挂载到 Pod 中,从而允许集群内的进程访问 Kubernetes API。 API 请求则或者与某普通用户相关联,或者与某服务账号相关联,亦或者被视作 [匿名请求](#anonymous-requests)。这意味着集群内外的每个进程在向 API 服务器发起 请求时都必须通过身份认证,否则会被视作匿名用户。这里的进程可以是在某工作站上 输入 `kubectl` 命令的操作人员,也可以是节点上的 `kubelet` 组件,还可以是控制面 的成员。 ## 身份认证策略 {#authentication-strategies} Kubernetes 使用身份认证插件利用客户端证书、持有者令牌(Bearer Token)、身份认证代理(Proxy) 或者 HTTP 基本认证机制来认证 API 请求的身份。HTTP 请求发给 API 服务器时, 插件会将以下属性关联到请求本身: * 用户名:用来辩识最终用户的字符串。常见的值可以是 `kube-admin` 或 `jane@example.com`。 * 用户 ID:用来辩识最终用户的字符串,旨在比用户名有更好的一致性和唯一性。 * 用户组:取值为一组字符串,其中各个字符串用来标明用户是某个命名的用户逻辑集合的成员。 常见的值可能是 `system:masters` 或者 `devops-team` 等。 * 附加字段:一组额外的键-值映射,键是字符串,值是一组字符串;用来保存一些鉴权组件可能 觉得有用的额外信息。 所有(属性)值对于身份认证系统而言都是不透明的,只有被 [鉴权组件](/zh/docs/reference/access-authn-authz/authorization/) 解释过之后才有意义。 你可以同时启用多种身份认证方法,并且你通常会至少使用两种方法: - 针对服务账号使用服务账号令牌 - 至少另外一种方法对用户的身份进行认证 当集群中启用了多个身份认证模块时,第一个成功地对请求完成身份认证的模块会 
直接做出评估决定。API 服务器并不保证身份认证模块的运行顺序。 对于所有通过身份认证的用户,`system:authenticated` 组都会被添加到其组列表中。 与其它身份认证协议(LDAP、SAML、Kerberos、X509 的替代模式等等)都可以通过 使用一个[身份认证代理](#authenticating-proxy)或 [身份认证 Webhoook](#webhook-token-authentication)来实现。 ### X509 客户证书 {#x509-client-certs} 通过给 API 服务器传递 `--client-ca-file=SOMEFILE` 选项,就可以启动客户端证书身份认证。 所引用的文件必须包含一个或者多个证书机构,用来验证向 API 服务器提供的客户端证书。 如果提供了客户端证书并且证书被验证通过,则 subject 中的公共名称(Common Name)就被 作为请求的用户名。 自 Kubernetes 1.4 开始,客户端证书还可以通过证书的 organization 字段标明用户的组成员信息。 要包含用户的多个组成员信息,可以在证书种包含多个 organization 字段。 例如,使用 `openssl` 命令行工具生成一个证书签名请求: ``` bash openssl req -new -key jbeda.pem -out jbeda-csr.pem -subj \"/CN=jbeda/O=app1/O=app2\" ``` 此命令将使用用户名 `jbeda` 生成一个证书签名请求(CSR),且该用户属于 \"app\" 和 \"app2\" 两个用户组。 参阅[管理证书](/zh/docs/concepts/cluster-administration/certificates/)了解如何生成客户端证书。 ### 静态令牌文件 {#static-token-file} 当 API 服务器的命令行设置了 `--token-auth-file=SOMEFILE` 选项时,会从文件中 读取持有者令牌。目前,令牌会长期有效,并且在不重启 API 服务器的情况下 无法更改令牌列表。 令牌文件是一个 CSV 文件,包含至少 3 个列:令牌、用户名和用户的 UID。 其余列被视为可选的组名。 {{< note >}} 如果要设置的组名不止一个,则对应的列必须用双引号括起来,例如 ```conf token,user,uid,\"group1,group2,group3\" ``` {{< /note >}} #### 在请求中放入持有者令牌 {#putting-a-bearer-token-in-a-request} 当使用持有者令牌来对某 HTTP 客户端执行身份认证时,API 服务器希望看到 一个名为 `Authorization` 的 HTTP 头,其值格式为 `Bearer THETOKEN`。 持有者令牌必须是一个可以放入 HTTP 头部值字段的字符序列,至多可使用 HTTP 的编码和引用机制。 例如:如果持有者令牌为 `31ada4fd-adec-460c-809a-9e56ceb75269`,则其 出现在 HTTP 头部时如下所示: ```http Authorization: Bearer 31ada4fd-adec-460c-809a-9e56ceb75269 ``` ### 启动引导令牌 {#bootstrap-tokens} {{< feature-state for_k8s_version=\"v1.18\" state=\"stable\" >}} 为了支持平滑地启动引导新的集群,Kubernetes 包含了一种动态管理的持有者令牌类型, 称作 *启动引导令牌(Bootstrap Token)*。 这些令牌以 Secret 的形式保存在 `kube-system` 名字空间中,可以被动态管理和创建。 控制器管理器包含的 `TokenCleaner` 控制器能够在启动引导令牌过期时将其删除。 这些令牌的格式为 `[a-z0-9]{6}.[a-z0-9]{16}`。第一个部分是令牌的 ID;第二个部分 是令牌的 Secret。你可以用如下所示的方式来在 HTTP 头部设置令牌: ```http Authorization: Bearer 781292.db7bc3a58fc5f07e ``` 你必须在 API 服务器上设置 `--enable-bootstrap-token-auth` 标志来启用基于启动 引导令牌的身份认证组件。 你必须通过控制器管理器的 `--controllers` 标志来启用 TokenCleaner 控制器; 这可以通过类似 
`--controllers=*,tokencleaner` 这种设置来做到。 如果你使用 `kubeadm` 来启动引导新的集群,该工具会帮你完成这些设置。 身份认证组件的认证结果为 `system:bootstrap:<令牌 ID>`,该用户属于 `system:bootstrappers` 用户组。 这里的用户名和组设置都是有意设计成这样,其目的是阻止用户在启动引导集群之后 继续使用这些令牌。 这里的用户名和组名可以用来(并且已经被 `kubeadm` 用来)构造合适的鉴权 策略,以完成启动引导新集群的工作。 请参阅[启动引导令牌](/zh/docs/reference/access-authn-authz/bootstrap-tokens/) 以了解关于启动引导令牌身份认证组件与控制器的更深入的信息,以及如何使用 `kubeadm` 来管理这些令牌。 ### Static Password File 通过向 API 服务器传递 `--basic-auth-file=SOMEFILE` 选项可以启用基本的 身份认证。目前,基本身份认证所涉及的凭据信息会长期有效,并且在不重启 API 服务器的情况下无法改变用户的密码。 要注意的是,对基本身份认证的支持目前仅是出于方便性考虑。 与此同时我们正在增强前述的、更为安全的模式的易用性。 基本身份认证数据文件是一个 CSV 文件,包含至少 3 列:密码、用户名和用户 ID。 在 Kuernetes 1.6 及后续版本中,你可以指定一个可选的第 4 列,在其中给出用逗号 分隔的用户组名。如果用户组名不止一个,你必须将第 4 列的值用双引号括起来。 参见下面的例子: ```conf password,user,uid,\"group1,group2,group3\" ``` 当在 HTTP 客户端使用基本身份认证机制时,API 服务器会期望看到名为 `Authorization` 的 HTTP 头部,其值形如 `Basic USER:PASSWORD的Base64编码字符串` ### 服务账号令牌 {#service-account-tokens} 服务账号(Service Account)是一种自动被启用的用户认证机制,使用经过签名的 持有者令牌来验证请求。该插件可接受两个可选参数: * `--service-account-key-file` 一个包含用来为持有者令牌签名的 PEM 编码密钥。 若未指定,则使用 API 服务器的 TLS 私钥。 * `--service-account-lookup` 如果启用,则从 API 删除的令牌会被回收。 服务账号通常由 API 服务器自动创建并通过 `ServiceAccount` [准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/) 关联到集群中运行的 Pod 上。 持有者令牌会挂载到 Pod 中可预知的为之,允许集群内进程与 API 服务器通信。 服务账号也可以使用 Pod 规约的 `serviceAccountName` 字段显式地关联到 Pod 上。 {{< note >}} `serviceAccountName` 通常会被忽略,因为关联关系是自动建立的。 {{< /note >}} ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment namespace: default spec: replicas: 3 template: metadata: # ... 
spec: serviceAccountName: bob-the-bot containers: - name: nginx image: nginx:1.14.2 ``` 在集群外部使用服务账号持有者令牌也是完全合法的,且可用来为长时间运行的、需要与 Kubernetes API 服务器通信的任务创建标识。要手动创建服务账号,可以使用 `kubectl create serviceaccount <名称>` 命令。此命令会在当前的名字空间中生成一个 服务账号和一个与之关联的 Secret。 ```bash kubectl create serviceaccount jenkins ``` ``` serviceaccount/jenkins created ``` 查验相关联的 Secret: ```bash kubectl get serviceaccounts jenkins -o yaml ``` ```yaml apiVersion: v1 kind: ServiceAccount metadata: # ... secrets: - name: jenkins-token-1yvwg ``` 所创建的 Secret 中会保存 API 服务器的公开的 CA 证书和一个已签名的 JSON Web 令牌(JWT)。 ```bash kubectl get secret jenkins-token-1yvwg -o yaml ``` ```yaml apiVersion: v1 data: ca.crt: namespace: ZGVmYXVsdA== token: kind: Secret metadata: # ... type: kubernetes.io/service-account-token ``` {{< note >}} 字段值是按 Base64 编码的,这是因为 Secret 数据总是采用 Base64 编码来存储。 {{< /note >}} 已签名的 JWT 可以用作持有者令牌,并将被认证为所给的服务账号。 关于如何在请求中包含令牌,请参阅[前文](#putting-a-bearer-token-in-a-request)。 通常,这些 Secret 数据会被挂载到 Pod 中以便集群内访问 API 服务器时使用, 不过也可以在集群外部使用。 服务账号被身份认证后,所确定的用户名为 `system:serviceaccount:<名字空间>:<服务账号>`, 并被分配到用户组 `system:serviceaccounts` 和 `system:serviceaccounts:<名字空间>`。 警告:由于服务账号令牌保存在 Secret 对象中,任何能够读取这些 Secret 的用户 都可以被认证为对应的服务账号。在为用户授予访问服务账号的权限时,以及对 Secret 的读权限时,要格外小心。 ### OpenID Connect(OIDC)令牌 {#openid-connect-tokens} [OpenID Connect](https://openid.net/connect/) 是一种 OAuth2 认证方式, 被某些 OAuth2 提供者支持,例如 Azure 活动目录、Salesforce 和 Google。 协议对 OAuth2 的主要扩充体现在有一个附加字段会和访问令牌一起返回, 这一字段称作 [ID Token(ID 令牌)](https://openid.net/specs/openid-connect-core-1_0.html#IDToken)。 ID 令牌是一种由服务器签名的 JSON Web 令牌(JWT),其中包含一些可预知的字段, 例如用户的邮箱地址, 要识别用户,身份认证组件使用 OAuth2 [令牌响应](https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse) 中的 `id_token`(而非 `access_token`)作为持有者令牌。 关于如何在请求中设置令牌,可参见[前文](#putting-a-bearer-token-in-a-request)。 ![Kubernetes OpenID Connect Flow](/images/docs/admin/k8s_oidc_login.svg) 1. 登录到你的身份服务(Identity Provider) 2. 你的身份服务将为你提供 `access_token`、`id_token` 和 `refresh_token` 3. 
在使用 `kubectl` 时,将 `id_token` 设置为 `--token` 标志值,或者将其直接添加到 `kubeconfig` 中 4. `kubectl` 将你的 `id_token` 放到一个称作 `Authorization` 的头部,发送给 API 服务器 5. API 服务器将负责通过检查配置中引用的证书来确认 JWT 的签名是合法的 6. 检查确认 `id_token` 尚未过期 7. 确认用户有权限执行操作 8. 鉴权成功之后,API 服务器向 `kubectl` 返回响应 9. `kubectl` 向用户提供反馈信息 由于用来验证你是谁的所有数据都在 `id_token` 中,Kubernetes 不需要再去联系 身份服务。在一个所有请求都是无状态请求的模型中,这一工作方式可以使得身份认证 的解决方案更容易处理大规模请求。不过,此访问也有一些挑战: 1. Kubernetes 没有提供用来触发身份认证过程的 \"Web 界面\"。 因为不存在用来收集用户凭据的浏览器或用户接口,你必须自己先行完成 对身份服务的认证过程。 2. `id_token` 令牌不可收回。因其属性类似于证书,其生命期一般很短(只有几分钟), 所以,每隔几分钟就要获得一个新的令牌这件事可能很让人头疼。 3. 如果不使用 `kubectl proxy` 命令或者一个能够注入 `id_token` 的反向代理, 向 Kubernetes 控制面板执行身份认证是很困难的。 #### 配置 API 服务器 {#configuring-the-api-server} 要启用此插件,须在 API 服务器上配置以下标志: | 参数 | 描述 | 示例 | 必需? | | --------- | ----------- | ------- | ------- | | `--oidc-issuer-url` | 允许 API 服务器发现公开的签名密钥的服务的 URL。只接受模式为 `https://` 的 URL。此值通常设置为服务的发现 URL,不含路径。例如:\"https://accounts.google.com\" 或 \"https://login.salesforce.com\"。此 URL 应指向 .well-known/openid-configuration 下一层的路径。 | 如果发现 URL 是 `https://accounts.google.com/.well-known/openid-configuration`,则此值应为 `https://accounts.google.com` | 是 | | `--oidc-client-id` | 所有令牌都应发放给此客户 ID。 | kubernetes | 是 | | `--oidc-username-claim` | 用作用户名的 JWT 申领(JWT Claim)。默认情况下使用 `sub` 值,即最终用户的一个唯一的标识符。管理员也可以选择其他申领,例如 `email` 或者 `name`,取决于所用的身份服务。不过,除了 `email` 之外的申领都会被添加令牌发放者的 URL 作为前缀,以免与其他插件产生命名冲突。 | sub | 否 | | `--oidc-username-prefix` | 要添加到用户名申领之前的前缀,用来避免与现有用户名发生冲突(例如:`system:` 用户)。例如,此标志值为 `oidc:` 时将创建形如 `oidc:jane.doe` 的用户名。如果此标志未设置,且 `--oidc-username-claim` 标志值不是 `email`,则默认前缀为 `<令牌发放者的 URL>#`,其中 `<令牌发放者 URL >` 的值取自 `--oidc-issuer-url` 标志的设定。此标志值为 `-` 时,意味着禁止添加用户名前缀。 | `oidc:` | 否 | | `--oidc-groups-claim` | 用作用户组名的 JWT 申领。如果所指定的申领确实存在,则其值必须是一个字符串数组。 | groups | 否 | | `--oidc-groups-prefix` | 添加到组申领的前缀,用来避免与现有用户组名(如:`system:` 组)发生冲突。例如,此标志值为 `oidc:` 时,所得到的用户组名形如 `oidc:engineering` 和 `oidc:infra`。 | `oidc:` | 否 | | `--oidc-required-claim` | 取值为一个 key=value 偶对,意为 ID 令牌中必须存在的申领。如果设置了此标志,则 ID 
令牌会被检查以确定是否包含取值匹配的申领。此标志可多次重复,以指定多个申领。 | `claim=value` | 否 | | `--oidc-ca-file` | 指向一个 CA 证书的路径,该 CA 负责对你的身份服务的 Web 证书提供签名。默认值为宿主系统的根 CA。 | `/etc/kubernetes/ssl/kc-ca.pem` | 否 | 很重要的一点是,API 服务器并非一个 OAuth2 客户端,相反,它只能被配置为 信任某一个令牌发放者。这使得使用公共服务(如 Google)的用户可以不信任发放给 第三方的凭据。 如果管理员希望使用多个 OAuth 客户端,他们应该研究一下那些支持 `azp` (Authorized Party,被授权方)申领的服务。 `azp` 是一种允许某客户端代替另一客户端发放令牌的机制。 Kubernetes 并未提供 OpenID Connect 的身份服务。 你可以使用现有的公共的 OpenID Connect 身份服务(例如 Google 或者 [其他服务](https://connect2id.com/products/nimbus-oauth-openid-connect-sdk/openid-connect-providers))。 或者,你也可以选择自己运行一个身份服务,例如 CoreOS [dex](https://github.com/coreos/dex)、 [Keycloak](https://github.com/keycloak/keycloak)、 CloudFoundry [UAA](https://github.com/cloudfoundry/uaa) 或者 Tremolo Security 的 [OpenUnison](https://github.com/tremolosecurity/openunison)。 要在 Kubernetes 环境中使用某身份服务,该服务必须: 1. 支持 [OpenID connect 发现](https://openid.net/specs/openid-connect-discovery-1_0.html); 但事实上并非所有服务都具备此能力 2. 运行 TLS 协议且所使用的加密组件都未过时 3. 拥有由 CA 签名的证书(即使 CA 不是商业 CA 或者是自签名的 CA 也可以) 关于上述第三条需求,即要求具备 CA 签名的证书,有一些额外的注意事项。 如果你部署了自己的身份服务,而不是使用云厂商(如 Google 或 Microsoft)所提供的服务, 你必须对身份服务的 Web 服务器证书进行签名,签名所用证书的 `CA` 标志要设置为 `TRUE`,即使用的是自签名证书。这是因为 GoLang 的 TLS 客户端实现对证书验证 标准方面有非常严格的要求。如果你手头没有现成的 CA 证书,可以使用 CoreOS 团队所开发的[这个脚本](https://github.com/coreos/dex/blob/1ee5920c54f5926d6468d2607c728b71cfe98092/examples/k8s/gencert.sh)来创建一个简单的 CA 和被签了名的证书与密钥对。 或者你也可以使用[这个类似的脚本](https://raw.githubusercontent.com/TremoloSecurity/openunison-qs-kubernetes/master/src/main/bash/makessl.sh),生成一个合法期更长、密钥尺寸更大的 SHA256 证书。 特定系统的安装指令: - [UAA](https://docs.cloudfoundry.org/concepts/architecture/uaa.html) - [Dex](https://github.com/dexidp/dex/blob/master/Documentation/kubernetes.md) - [OpenUnison](https://www.tremolosecurity.com/orchestra-k8s/) #### 使用 kubectl {#using-kubectl} ##### 选项一 - OIDC 身份认证组件 第一种方案是使用 kubectl 的 `oidc` 身份认证组件,该组件将 `id_token` 设置 为所有请求的持有者令牌,并且在令牌过期时自动刷新。在你登录到你的身份服务之后, 可以使用 kubectl 来添加你的 `id_token`、`refresh_token`、`client_id` 和 `client_secret`,以配置该插件。 
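Since the `id_token` is a standard JWT, its claims (such as `iss`, `aud`, and the username claim) can be inspected by base64-decoding the payload segment. The sketch below decodes without verifying the signature, which is only useful for debugging what a provider put in the token; the API server always verifies the signature against the issuer's keys. The token here is a fabricated, unsigned example, not a real credential.

```python
# Sketch: inspect (NOT verify) the claims of an OIDC id_token.
# The token constructed below is a fabricated, unsigned example.
import base64
import json

def jwt_payload(token: str) -> dict:
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
claims = base64.urlsafe_b64encode(
    json.dumps({"iss": "https://example.com", "aud": "kubernetes", "sub": "jane"}).encode()
).decode().rstrip("=")
token = f"{header}.{claims}."

print(jwt_payload(token)["aud"])  # kubernetes
```

Never trust unverified claims for any authorization decision; this decoding trick is purely for inspecting a token during troubleshooting.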
如果服务在其刷新令牌响应中不包含 `id_token`,则此插件无法支持该服务。 这时你应该考虑下面的选项二。 ```bash kubectl config set-credentials USER_NAME --auth-provider=oidc --auth-provider-arg=idp-issuer-url=( issuer url ) --auth-provider-arg=client-id=( your client id ) --auth-provider-arg=client-secret=( your client secret ) --auth-provider-arg=refresh-token=( your refresh token ) --auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) --auth-provider-arg=id-token=( your id_token ) ``` 作为示例,在完成对你的身份服务的身份认证之后,运行下面的命令: ```bash kubectl config set-credentials mmosley --auth-provider=oidc --auth-provider-arg=idp-issuer-url=https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP --auth-provider-arg=client-id=kubernetes --auth-provider-arg=client-secret=1db158f6-177d-4d9c-8a8b-d36869918ec5 --auth-provider-arg=refresh-token=q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXqHega4GAXlF+ma+vmYpFcHe5eZR+slBFpZKtQA= --auth-provider-arg=idp-certificate-authority=/root/ca.pem --auth-provider-arg=id-token=eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw ``` 此操作会生成以下配置: ```yaml users: - name: mmosley user: auth-provider: config: client-id: kubernetes client-secret: 
1db158f6-177d-4d9c-8a8b-d36869918ec5 id-token: eyJraWQiOiJDTj1vaWRjaWRwLnRyZW1vbG8ubGFuLCBPVT1EZW1vLCBPPVRybWVvbG8gU2VjdXJpdHksIEw9QXJsaW5ndG9uLCBTVD1WaXJnaW5pYSwgQz1VUy1DTj1rdWJlLWNhLTEyMDIxNDc5MjEwMzYwNzMyMTUyIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL29pZGNpZHAudHJlbW9sby5sYW46ODQ0My9hdXRoL2lkcC9PaWRjSWRQIiwiYXVkIjoia3ViZXJuZXRlcyIsImV4cCI6MTQ4MzU0OTUxMSwianRpIjoiMm96US15TXdFcHV4WDlHZUhQdy1hZyIsImlhdCI6MTQ4MzU0OTQ1MSwibmJmIjoxNDgzNTQ5MzMxLCJzdWIiOiI0YWViMzdiYS1iNjQ1LTQ4ZmQtYWIzMC0xYTAxZWU0MWUyMTgifQ.w6p4J_6qQ1HzTG9nrEOrubxIMb9K5hzcMPxc9IxPx2K4xO9l-oFiUw93daH3m5pluP6K7eOE6txBuRVfEcpJSwlelsOsW8gb8VJcnzMS9EnZpeA0tW_p-mnkFc3VcfyXuhe5R3G7aa5d8uHv70yJ9Y3-UhjiN9EhpMdfPAoEB9fYKKkJRzF7utTTIPGrSaSU6d2pcpfYKaxIwePzEkT4DfcQthoZdy9ucNvvLoi1DIC-UocFD8HLs8LYKEqSxQvOcvnThbObJ9af71EwmuE21fO5KzMW20KtAeget1gnldOosPtz1G5EwvaQ401-RPQzPGMVBld0_zMCAwZttJ4knw idp-certificate-authority: /root/ca.pem idp-issuer-url: https://oidcidp.tremolo.lan:8443/auth/idp/OidcIdP refresh-token: q1bKLFOyUiosTfawzA93TzZIDzH2TNa2SMm0zEiPKTUwME6BkEo6Sql5yUWVBSWpKUGphaWpxSVAfekBOZbBhaEW+VlFUeVRGcluyVF5JT4+haZmPsluFoFu5XkpXk5BXq name: oidc ``` 当你的 `id_token` 过期时,`kubectl` 会尝试使用你的 `refresh_token` 来刷新你的 `id_token`,并且在 `client_secret` 中存放 `refresh_token` 的新值,同时把 `id_token` 的新值写入到 `.kube/config` 文件中。 ##### 选项二 - 使用 `--token` 选项 `kubectl` 命令允许你使用 `--token` 选项传递一个令牌。 你可以将 `id_token` 的内容复制粘贴过来,作为此标志的取值: ```bash kubectl 
--token=eyJhbGciOiJSUzI1NiJ9.eyJpc3MiOiJodHRwczovL21sYi50cmVtb2xvLmxhbjo4MDQzL2F1dGgvaWRwL29pZGMiLCJhdWQiOiJrdWJlcm5ldGVzIiwiZXhwIjoxNDc0NTk2NjY5LCJqdGkiOiI2RDUzNXoxUEpFNjJOR3QxaWVyYm9RIiwiaWF0IjoxNDc0NTk2MzY5LCJuYmYiOjE0NzQ1OTYyNDksInN1YiI6Im13aW5kdSIsInVzZXJfcm9sZSI6WyJ1c2VycyIsIm5ldy1uYW1lc3BhY2Utdmlld2VyIl0sImVtYWlsIjoibXdpbmR1QG5vbW9yZWplZGkuY29tIn0.f2As579n9VNoaKzoF-dOQGmXkFKf1FMyNV0-va_B63jn-_n9LGSCca_6IVMP8pO-Zb4KvRqGyTP0r3HkHxYy5c81AnIh8ijarruczl-TK_yF5akjSTHFZD-0gRzlevBDiH8Q79NAr-ky0P4iIXS8lY9Vnjch5MF74Zx0c3alKJHJUnnpjIACByfF2SCaYzbWFMUNat-K1PaUk5-ujMBG7yYnr95xD-63n8CO8teGUAAEMx6zRjzfhnhbzX-ajwZLGwGUBT4WqjMs70-6a7_8gZmLZb2az1cZynkFRj2BaCkVT3A2RrjeEwZEtGXlMqKJ1_I2ulrOVsYx01_yD35-rw get nodes ``` ### Webhook 令牌身份认证 {#webhook-token-authentication} Webhook 身份认证是一种用来验证持有者令牌的回调机制。 * `--authentication-token-webhook-config-file` 指向一个配置文件,其中描述 如何访问远程的 Webhook 服务。 * `--authentication-token-webhook-cache-ttl` 用来设定身份认证决定的缓存时间。 默认时长为 2 分钟。 配置文件使用 [kubeconfig](/docs/concepts/configuration/organize-cluster-access-kubeconfig/) 文件的格式。文件中,`clusters` 指代远程服务,`users` 指代远程 API 服务 Webhook。下面是一个例子: ```yaml # Kubernetes API 版本 apiVersion: v1 # API 对象类别 kind: Config # clusters 指代远程服务 clusters: - name: name-of-remote-authn-service cluster: certificate-authority: /path/to/ca.pem # 用来验证远程服务的 CA server: https://authn.example.com/authenticate # 要查询的远程服务 URL。必须使用 'https'。 # users 指代 API 服务的 Webhook 配置 users: - name: name-of-api-server user: client-certificate: /path/to/cert.pem # Webhook 插件要使用的证书 client-key: /path/to/key.pem # 与证书匹配的密钥 # kubeconfig 文件需要一个上下文(Context),此上下文用于本 API 服务器 current-context: webhook contexts: - context: cluster: name-of-remote-authn-service user: name-of-api-sever name: webhook ``` 当客户端尝试在 API 服务器上使用持有者令牌完成身份认证( 如[前](#putting-a-bearer-token-in-a-request)所述)时, 身份认证 Webhook 会用 POST 请求发送一个 JSON 序列化的对象到远程服务。 该对象是 `authentication.k8s.io/v1beta1` 组的 `TokenReview` 对象, 其中包含持有者令牌。 Kubernetes 不会强制请求提供此 HTTP 头部。 要注意的是,Webhook API 对象和其他 Kubernetes API 对象一样,也要受到同一 
[版本兼容规则](/zh/docs/concepts/overview/kubernetes-api/)约束。 实现者要了解对 Beta 阶段对象的兼容性承诺,并检查请求的 `apiVersion` 字段, 以确保数据结构能够正常反序列化解析。此外,API 服务器必须启用 `authentication.k8s.io/v1beta1` API 扩展组 (`--runtime-config=authentication.k8s.io/v1beta1=true`)。 POST 请求的 Body 部分将是如下格式: ```json { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"spec\": { \"token\": \"<持有者令牌>\" } } ``` 远程服务应该会填充请求的 `status` 字段,以标明登录操作是否成功。 响应的 Body 中的 `spec` 字段会被忽略,因此可以省略。 如果持有者令牌验证成功,应该返回如下所示的响应: ```json { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": true, \"user\": { \"username\": \"janedoe@example.com\", \"uid\": \"42\", \"groups\": [ \"developers\", \"qa\" ], \"extra\": { \"extrafield1\": [ \"extravalue1\", \"extravalue2\" ] } } } } ``` 而不成功的请求会返回: ```json { \"apiVersion\": \"authentication.k8s.io/v1beta1\", \"kind\": \"TokenReview\", \"status\": { \"authenticated\": false } } ``` HTTP 状态码可用来提供进一步的错误语境信息。 ### 身份认证代理 {#authenticating-proxy} API 服务器可以配置成从请求的头部字段值(如 `X-Remote-User`)中辩识用户。 这一设计是用来与某身份认证代理一起使用 API 服务器,代理负责设置请求的头部字段值。 * `--requestheader-username-headers` 必需字段,大小写不敏感。用来设置要获得用户身份所要检查的头部字段名称列表(有序)。第一个包含数值的字段会被用来提取用户名。 * `--requestheader-group-headers` 可选字段,在 Kubernetes 1.6 版本以后支持,大小写不敏感。 建议设置为 \"X-Remote-Group\"。用来指定一组头部字段名称列表,以供检查用户所属的组名称。 所找到的全部头部字段的取值都会被用作用户组名。 * `--requestheader-extra-headers-prefix` 可选字段,在 Kubernetes 1.6 版本以后支持,大小写不敏感。 建议设置为 \"X-Remote-Extra-\"。用来设置一个头部字段的前缀字符串,API 服务器会基于所给 前缀来查找与用户有关的一些额外信息。这些额外信息通常用于所配置的鉴权插件。 API 服务器会将与所给前缀匹配的头部字段过滤出来,去掉其前缀部分,将剩余部分 转换为小写字符串并在必要时执行[百分号解码](https://tools.ietf.org/html/rfc3986#section-2.1) 后,构造新的附加信息字段键名。原来的头部字段值直接作为附加信息字段的值。 {{< note >}} 在 1.13.3 版本之前(包括 1.10.7、1.9.11),附加字段的键名只能包含 [HTTP 头部标签的合法字符](https://tools.ietf.org/html/rfc7230#section-3.2.6)。 {{< /note >}} 例如,使用下面的配置: ``` --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group --requestheader-extra-headers-prefix=X-Remote-Extra- ``` 针对所收到的如下请求: ```http GET / 
HTTP/1.1 X-Remote-User: fido X-Remote-Group: dogs X-Remote-Group: dachshunds X-Remote-Extra-Acme.com%2Fproject: some-project X-Remote-Extra-Scopes: openid X-Remote-Extra-Scopes: profile ``` 会生成下面的用户信息: ```yaml name: fido groups: - dogs - dachshunds extra: acme.com/project: - some-project scopes: - openid - profile ``` 为了防范头部信息侦听,在请求中的头部字段被检视之前, 身份认证代理需要向 API 服务器提供一份合法的客户端证书, 供后者使用所给的 CA 来执行验证。 警告:*不要* 在不同的上下文中复用 CA 证书,除非你清楚这样做的风险是什么以及 应如何保护 CA 用法的机制。 * `--requestheader-client-ca-file` 必需字段,给出 PEM 编码的证书包。 在检查请求的头部字段以提取用户名信息之前,必须提供一个合法的客户端证书, 且该证书要能够被所给文件中的机构所验证。 * `--requestheader-allowed-names` 可选字段,用来给出一组公共名称(CN)。 如果此标志被设置,则在检视请求中的头部以提取用户信息之前,必须提供 包含此列表中所给的 CN 名的、合法的客户端证书。 ## 匿名请求 {#anonymous-requests} 启用匿名请求支持之后,如果请求没有被已配置的其他身份认证方法拒绝,则被视作 匿名请求(Anonymous Requests)。这类请求获得用户名 `system:anonymous` 和 对应的用户组 `system:unauthenticated`。 例如,在一个配置了令牌身份认证且启用了匿名访问的服务器上,如果请求提供了非法的 持有者令牌,则会返回 `401 Unauthorized` 错误。 如果请求没有提供持有者令牌,则被视为匿名请求。 在 1.5.1-1.5.x 版本中,匿名访问默认情况下是被禁用的,可以通过为 API 服务器设定 `--anonymous-auth=true` 来启用。 在 1.6 及之后版本中,如果所使用的鉴权模式不是 `AlwaysAllow`,则匿名访问默认是被启用的。 从 1.6 版本开始,ABAC 和 RBAC 鉴权模块要求对 `system:anonymous` 用户或者 `system:unauthenticated` 用户组执行显式的权限判定,所以之前的为 `*` 用户或 `*` 用户组赋予访问权限的策略规则都不再包含匿名用户。 ## 用户伪装 {#user-impersonation} 一个用户可以通过伪装(Impersonation)头部字段来以另一个用户的身份执行操作。 使用这一能力,你可以手动重载请求被身份认证所识别出来的用户信息。 例如,管理员可以使用这一功能特性来临时伪装成另一个用户,查看请求是否被拒绝, 从而调试鉴权策略中的问题, 带伪装的请求首先会被身份认证识别为发出请求的用户,之后会切换到使用被伪装的用户 的用户信息。 * 用户发起 API 调用时 _同时_ 提供自身的凭据和伪装头部字段信息 * API 服务器对用户执行身份认证 * API 服务器确认通过认证的用户具有伪装特权 * 请求用户的信息被替换成伪装字段的值 * 评估请求,鉴权组件针对所伪装的用户信息执行操作 以下 HTTP 头部字段可用来执行伪装请求: * `Impersonate-User`:要伪装成的用户名 * `Impersonate-Group`:要伪装成的用户组名。可以多次指定以设置多个用户组。 可选字段;要求 \"Impersonate-User\" 必须被设置。 * `Impersonate-Extra-<附加名称>`:一个动态的头部字段,用来设置与用户相关的附加字段。 此字段可选;要求 \"Impersonate-User\" 被设置。为了能够以一致的形式保留, `<附加名称>`部分必须是小写字符,如果有任何字符不是 [合法的 HTTP 头部标签字符](https://tools.ietf.org/html/rfc7230#section-3.2.6), 则必须是 utf8 字符,且转换为[百分号编码](https://tools.ietf.org/html/rfc3986#section-2.1)。 {{< note >}} 在 1.11.3 版本之前(以及 
1.10.7, 1.9.11), `<extra name>` could only contain characters which were legal in HTTP header labels. {{< /note >}} An example of the impersonation headers: ```http Impersonate-User: jane.doe@example.com Impersonate-Group: developers Impersonate-Group: admins Impersonate-Extra-dn: cn=jane,ou=engineers,dc=example,dc=com Impersonate-Extra-acme.com%2Fproject: some-project Impersonate-Extra-scopes: view Impersonate-Extra-scopes: development ``` When using `kubectl`, set the `--as` flag to configure the `Impersonate-User` header, and set the `--as-group` flag to configure the `Impersonate-Group` header. ```bash kubectl drain mynode ``` ```none Error from server (Forbidden): User \"clark\" cannot get nodes at the cluster scope. (get nodes mynode) ``` Set the `--as` and `--as-group` flags: ```bash kubectl drain mynode --as=superman --as-group=system:masters ``` ```none node/mynode cordoned node/mynode drained ``` To impersonate a user, group, or set extra fields, the impersonating user must have the ability to perform the \"impersonate\" verb on the kind of attribute being impersonated (\"user\", \"group\", etc.). For clusters that enable the RBAC authorization plugin, the following ClusterRole encompasses the rules needed to set user and group impersonation headers: ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: impersonator rules: - apiGroups: [\"\"] resources: [\"users\", \"groups\", \"serviceaccounts\"] verbs: [\"impersonate\"] ``` Extra fields are evaluated as sub-resources of the `userextras` resource. To allow a user to use impersonation headers for the extra field \"scopes\", a user should be granted the following role: ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: scopes-impersonator rules: # Can set \"Impersonate-Extra-scopes\" header - apiGroups: [\"authentication.k8s.io\"] resources: [\"userextras/scopes\"] verbs: [\"impersonate\"] ``` The values of impersonation headers can also be restricted by limiting the set of `resourceNames` a resource can take: ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: limited-impersonator rules: # Can impersonate the user \"jane.doe@example.com\" - apiGroups: [\"\"] resources: [\"users\"] verbs: [\"impersonate\"] resourceNames: [\"jane.doe@example.com\"] # Can impersonate the groups \"developers\" and \"admins\" - apiGroups: [\"\"] resources: [\"groups\"] verbs: [\"impersonate\"] resourceNames: [\"developers\",\"admins\"] # Can impersonate the extras field \"scopes\" with the values \"view\" and \"development\" - apiGroups: [\"authentication.k8s.io\"] 
resources: [\"userextras/scopes\"] verbs: [\"impersonate\"] resourceNames: [\"view\", \"development\"] ``` ## client-go credential plugins {#client-go-credential-plugins} {{< feature-state for_k8s_version=\"v1.11\" state=\"beta\" >}} `k8s.io/client-go` and tools using it such as `kubectl` and `kubelet` are able to execute an external command to receive user credentials. This feature is intended for client side integrations with authentication protocols not natively supported by `k8s.io/client-go` (LDAP, Kerberos, OAuth2, SAML, etc.). The plugin implements the protocol specific logic, then returns opaque credentials to use. Almost all credential plugin use cases require a server side component with support for the [webhook token authenticator](#webhook-token-authentication) to interpret the credential format produced by the client plugin. ### Example use case {#example-use-case} In a hypothetical use case, an organization would run an external service that exchanges LDAP credentials for user specific, signed tokens. The service would also be capable of responding to [webhook token authenticator](#webhook-token-authentication) requests to validate the tokens. Users would be required to install a credential plugin on their workstation. To authenticate against the API: * The user issues a `kubectl` command. * The credential plugin prompts the user for LDAP credentials, and exchanges them with the external service for a token. * The credential plugin returns the token to client-go, which uses it as a bearer token against the API server. * The API server uses the [webhook token authenticator](#webhook-token-authentication) to submit a `TokenReview` to the external service. * The external service verifies the signature on the token and returns the user's username and groups. ### Configuration {#configuration} Credential plugins are configured through [kubectl config files](/zh/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) as part of the user fields. ```yaml apiVersion: v1 kind: Config users: - name: my-user user: exec: # Command to execute. Required. command: \"example-client-go-exec-plugin\" # API version to use when decoding the ExecCredentials resource. Required. # # The API version returned by the plugin MUST match the version listed here. # # To integrate with tools that support multiple versions (such as # client.authentication.k8s.io/v1alpha1), set an environment variable or pass # an argument to the tool that indicates which version the exec plugin expects. apiVersion: \"client.authentication.k8s.io/v1beta1\" # Environment variables to set when executing the plugin. Optional. env: - name: \"FOO\" value: \"bar\" # Arguments to pass when executing the plugin. Optional. args: - \"arg1\" - \"arg2\" clusters: - name: my-cluster cluster: server: \"https://172.17.4.100:6443\" certificate-authority: \"/etc/kubernetes/ca.pem\" contexts: - name: my-cluster context: cluster: my-cluster user: my-user current-context: my-cluster ``` Relative command paths are interpreted by kubectl as relative to the directory of the config file. If KUBECONFIG is set to `/home/jane/kubeconfig` and the exec command is `./bin/example-client-go-exec-plugin`, the binary `/home/jane/bin/example-client-go-exec-plugin` is executed. ```yaml - name: my-user user: exec: # Path relative to the kubeconfig 
directory command: \"./bin/example-client-go-exec-plugin\" apiVersion: \"client.authentication.k8s.io/v1beta1\" ``` ### Input and output formats {#input-and-output-formats} The executed command prints an `ExecCredential` object to `stdout`. `k8s.io/client-go` authenticates against the Kubernetes API using the returned credentials in the `status`. When run from an interactive session, `stdin` is exposed directly to the plugin. Plugins should use a [TTY check](https://godoc.org/golang.org/x/crypto/ssh/terminal#IsTerminal) to determine if it's appropriate to prompt a user interactively. To use bearer token credentials, the plugin returns a token in the status of the `ExecCredential`: ```json { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\" } } ``` Alternatively, a PEM-encoded client certificate and key can be returned to use TLS client auth. If the plugin returns a different certificate and key on a subsequent call, `k8s.io/client-go` will close existing connections with the server to force a new TLS handshake. If specified, `clientKeyData` and `clientCertificateData` must both be present. `clientCertificateData` may contain additional intermediate certificates to send to the server. ```json { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"clientCertificateData\": \"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\", \"clientKeyData\": \"-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----\" } } ``` Optionally, the response can include the expiry of the credential formatted as an RFC3339 timestamp. Presence or absence of an expiry has the following impact: - If an expiry is included, the bearer token and TLS credentials are cached until the expiry time is reached, or the server responds with a 401 HTTP status code, or the process exits. - If an expiry is omitted, the bearer token and TLS credentials are cached until the server responds with a 401 HTTP status code or until the process exits. ```json { \"apiVersion\": \"client.authentication.k8s.io/v1beta1\", \"kind\": \"ExecCredential\", \"status\": { \"token\": \"my-bearer-token\", \"expirationTimestamp\": \"2018-03-05T17:30:20-08:00\" } } ``` "} {"_id":"doc-en-website-ae7cd14c09aa49a62ceaa79e049408ae5a9b8a7a9abe17f58be26f43d50e1aef","title":"","text":"However, the particular path specified in the custom recycler Pod template in the `volumes` part is replaced with the particular path of the volume that is being recycled. ### Reserving a PersistentVolume The control plane can [bind PersistentVolumeClaims to matching PersistentVolumes](#binding) in the cluster. 
However, if you want a PVC to bind to a specific PV, you need to pre-bind them. By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved a PersistentVolumeClaim through its `claimRef` field, then the PersistentVolume and PersistentVolumeClaim will be bound. The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that the [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/), access modes, and requested storage size are valid. ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: foo-pvc namespace: foo spec: volumeName: foo-pv ... ``` This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you first need to reserve that storage volume. Specify the relevant PersistentVolumeClaim in the `claimRef` field of the PV so that other PVCs cannot bind to it. ```yaml apiVersion: v1 kind: PersistentVolume metadata: name: foo-pv spec: claimRef: name: foo-pvc namespace: foo ... ``` This is useful if you want to consume PersistentVolumes that have their `persistentVolumeReclaimPolicy` set to `Retain`, including cases where you are reusing an existing PV. 
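Putting the two halves of the reservation together, a minimal sketch of a mutually pre-bound pair might look like the following (the names `foo-pv`/`foo-pvc` come from the snippets above; the capacity, access mode, and `hostPath` backing store are illustrative assumptions, not prescribed values):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  capacity:
    storage: 5Gi                      # illustrative size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:                           # reserve this PV for foo-pvc only
    name: foo-pvc
    namespace: foo
  hostPath:                           # illustrative backing store
    path: /tmp/foo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  volumeName: foo-pv                  # pre-bind to the reserved PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Because each side names the other, neither object can be claimed by, or bind to, anything else while the pair exists.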
### Expanding Persistent Volumes Claims {{< feature-state for_k8s_version=\"v1.11\" state=\"beta\" >}}"} {"_id":"doc-en-website-847986075ca03777665ba7a4dc56c633b8d5ad258ec9f948cc1c6c1e488a53ff","title":"","text":" --- title: Network Policies content_type: concept weight: 50 --- A network policy is a specification of how groups of {{< glossary_tooltip text=\"pods\" term_id=\"pod\">}} are allowed to communicate with each other and other network endpoints. NetworkPolicy resources use {{< glossary_tooltip text=\"labels\" term_id=\"label\">}} to select pods and define rules which specify what traffic is allowed to the selected pods. ## Prerequisites Network policies are implemented by a [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect. ## Isolated and non-isolated Pods By default, pods are non-isolated; they accept traffic from any source. Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.) Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result. ## The NetworkPolicy resource {#networkpolicy-resource} See the [NetworkPolicy](/docs/reference/generated/kubernetes-api/{{< param \"version\" >}}/#networkpolicy-v1-networking-k8s-io) reference for a full definition of the resource. An example NetworkPolicy might look like this: ```yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: default spec: podSelector: matchLabels: role: db policyTypes: - Ingress - Egress ingress: - from: - ipBlock: cidr: 172.17.0.0/16 except: - 172.17.1.0/24 - namespaceSelector: matchLabels: project: myproject - podSelector: matchLabels: role: frontend ports: - protocol: TCP port: 6379 egress: - to: - ipBlock: cidr: 10.0.0.0/24 ports: - protocol: TCP port: 5978 ``` {{< note >}} POSTing this to the API server for your cluster will have no effect unless your chosen networking solution supports network policy. 
{{< /note >}} __Mandatory Fields__: As with all other Kubernetes config, a NetworkPolicy needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see [Configure containers using a ConfigMap](/ja/docs/tasks/configure-pod-container/configure-pod-configmap/) and [Object Management](/ja/docs/concepts/overview/working-with-objects/object-management). __spec__: The NetworkPolicy [spec](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status) has all the information needed to define a particular network policy in the given namespace. __podSelector__: Each NetworkPolicy includes a `podSelector` which selects the grouping of pods to which the policy applies. The example policy selects pods with the label \"role=db\". An empty `podSelector` selects all pods in the namespace. __policyTypes__: Each NetworkPolicy includes a `policyTypes` list which may include either `Ingress`, `Egress`, or both. The `policyTypes` field indicates whether the given policy applies to ingress traffic to the selected pods, egress traffic from the selected pods, or both. If no `policyTypes` are specified on a NetworkPolicy then by default `Ingress` will always be set, and `Egress` will be set if the NetworkPolicy has any egress rules. __ingress__: Each NetworkPolicy may include a list of allowed `ingress` rules. Each rule allows traffic which matches both the `from` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port from one of three sources: the first specified via an `ipBlock`, the second via a `namespaceSelector`, and the third via a `podSelector`. __egress__: Each NetworkPolicy may include a list of allowed `egress` rules. Each rule allows traffic which matches both the `to` and `ports` sections. The example policy contains a single rule, which matches traffic on a single port to any destination in `10.0.0.0/24`. So, the example NetworkPolicy: 1. isolates \"role=db\" pods in the \"default\" namespace for both ingress and egress traffic (if they weren't already isolated) 2. (Ingress rules) allows connections to all pods in the \"default\" namespace with the label \"role=db\" on TCP port 6379 from: * any pod in the \"default\" namespace with the label \"role=frontend\" * any pod in a namespace with the label \"project=myproject\" * IP addresses in the ranges 172.17.0.0–172.17.0.255 and 172.17.2.0–172.17.255.255 (in other words, all of 172.17.0.0/16 except 172.17.1.0/24) 3. 
(Egress rules) allows connections from any pod in the \"default\" namespace with the label \"role=db\" to CIDR 10.0.0.0/24 on TCP port 5978 See the [Declare Network Policy](/ja/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. ## Behavior of `to` and `from` selectors There are four kinds of selectors that can be specified in an `ingress` `from` section or an `egress` `to` section: __podSelector__: selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations. __namespaceSelector__: selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations. __namespaceSelector__ *and* __podSelector__: a single `to`/`from` entry that specifies both `namespaceSelector` and `podSelector` selects particular Pods within particular namespaces. Be careful to use correct YAML syntax; this policy: ```yaml ... ingress: - from: - namespaceSelector: matchLabels: user: alice podSelector: matchLabels: role: client ... ``` contains a single `from` element allowing connections from Pods with the label `role=client` in namespaces with the label `user=alice`. But *this* policy: ```yaml ... ingress: - from: - namespaceSelector: matchLabels: user: alice - podSelector: matchLabels: role: client ... ``` contains two elements in the `from` array, and allows connections from Pods in the local namespace with the label `role=client`, *or* from any Pod in any namespace with the label `user=alice`. When in doubt, use `kubectl describe` to see how Kubernetes has interpreted the policy. __ipBlock__: selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable. Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. In cases where this happens, it is not defined whether it happens before or after NetworkPolicy processing, and the behavior may differ for different combinations of network plugin, cloud provider, `Service` implementation, etc. For ingress, this means that in some cases you may be able to filter incoming packets based on the actual original source IP, while in other cases the \"source IP\" that the NetworkPolicy acts on may be the IP of a `LoadBalancer` or of the Pod's node. For egress, this means that connections from pods to `Service` IPs that get rewritten to cluster-external IPs may or may not be subject to `ipBlock`-based policies. ## Default policies By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior in that namespace. ### Default deny all ingress traffic You can create a \"default\" isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any ingress traffic to those pods. {{< codenew 
file=\"service/networking/network-policy-default-deny-ingress.yaml\" >}} This ensures that even pods that aren't selected by any other NetworkPolicy will still be isolated. This policy does not change the default egress isolation behavior. ### Default allow all ingress traffic If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as \"isolated\"), you can create a policy that explicitly allows all traffic in that namespace. {{< codenew file=\"service/networking/network-policy-allow-all-ingress.yaml\" >}} ### Default deny all egress traffic You can create a \"default\" egress isolation policy for a namespace by creating a NetworkPolicy that selects all pods but does not allow any egress traffic from those pods. {{< codenew file=\"service/networking/network-policy-default-deny-egress.yaml\" >}} This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed egress traffic. This policy does not change the default ingress isolation behavior. ### Default allow all egress traffic If you want to allow all traffic from all pods in a namespace (even if policies are added that cause some pods to be treated as \"isolated\"), you can create a policy that explicitly allows all egress traffic in that namespace. {{< codenew file=\"service/networking/network-policy-allow-all-egress.yaml\" >}} ### Default deny all ingress and all egress traffic You can create a \"default\" policy for a namespace which prevents all ingress AND egress traffic by creating the following NetworkPolicy in that namespace. {{< codenew file=\"service/networking/network-policy-default-deny-all.yaml\" >}} This ensures that even pods that aren't selected by any other NetworkPolicy will not be allowed ingress or egress traffic. ## SCTP support {{< feature-state for_k8s_version=\"v1.12\" state=\"alpha\" >}} To use this feature, the cluster administrator needs to enable the `SCTPSupport` [feature gate](/ja/docs/reference/command-line-tools-reference/feature-gates/) on the API server, for example `--feature-gates=SCTPSupport=true,…`. When the feature gate is enabled, you can set the `protocol` field of a NetworkPolicy to `SCTP`. {{< note >}} You must be using a {{< glossary_tooltip text=\"CNI\" term_id=\"cni\" >}} plugin that supports SCTP protocol NetworkPolicies. {{< /note >}} ## {{% heading \"whatsnext\" %}} - See the [Declare Network Policy](/ja/docs/tasks/administer-cluster/declare-network-policy/) walkthrough for further examples. - See more [recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for common scenarios enabled by the NetworkPolicy resource. "} 
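As a sketch of the SCTP support described above, a policy using `protocol: SCTP` might look like the following. This is an illustrative assumption: the policy name and port are made up, it reuses the `role: db` label from the earlier example, and it only takes effect with the `SCTPSupport` feature gate enabled and a CNI plugin that supports SCTP.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-sctp-ingress      # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: db                  # label reused from the example policy above
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: SCTP        # requires SCTPSupport=true and a capable CNI plugin
          port: 7777            # illustrative SCTP port
```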
{"_id":"doc-en-website-f58b916b792edcc3e8334c28f7b35cbcc3bbc22434f5d976a2ac90cd94ba3d80","title":"","text":" --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress spec: podSelector: {} egress: - {} policyTypes: - Egress "} {"_id":"doc-en-website-8dc2afd63988141a5d13a514eb7d2afe6ce7faebcb6266f861c02adb73d42df1","title":"","text":" --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-ingress spec: podSelector: {} ingress: - {} policyTypes: - Ingress "} {"_id":"doc-en-website-f13d982f83614bd85070e08af8512f1bf6f9ca95ea76212f42a22ed9226a6754","title":"","text":" --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny-all spec: podSelector: {} policyTypes: - Ingress - Egress "} {"_id":"doc-en-website-203f37b200db88b91a4eaa11287ab731a304e7e130d4cc61e721bad6ae63d95a","title":"","text":" --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny-egress spec: podSelector: {} policyTypes: - Egress "} {"_id":"doc-en-website-a3a019c74ffd026c879a8bdfb18ba45ae980fc7a9eac9cf425ffef3992796d7e","title":"","text":" --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny-ingress spec: podSelector: {} policyTypes: - Ingress "} {"_id":"doc-en-website-e2a17ad78f4ef316d4e635c365bc84c44e29dea9701003f0cfa1a0b2966e7ef4","title":"","text":"- kow3ns title: DaemonSet content_type: concept weight: 50 weight: 40 --- "} {"_id":"doc-en-website-c50c85aa95128188038077005a9161a75158c0c30a8e4caefd44fa618f4f3c75","title":"","text":"Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes will rollback the change for you. Take advantage of a growing ecosystem of deployment solutions. 
content_type: concept weight: 30 weight: 10 --- "} {"_id":"doc-en-website-8ed23991a601bb3d59c43d0c6b9c58c7755a5a41cf9413dd978cec5320030cbc","title":"","text":"--- title: Garbage Collection content_type: concept weight: 70 weight: 60 --- "} {"_id":"doc-en-website-f38b4786bf3ecf28a3054989c3c4c42f8281bcf1be0ae5619dc168d4ad39dfa1","title":"","text":"title: Batch execution description: > In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired. weight: 60 weight: 50 --- "} {"_id":"doc-en-website-6b72e98a69ed1c2724093ea1712f1851f2a6bff3da0fb0def0f346b081706f18","title":"","text":"- madhusudancs title: ReplicaSet content_type: concept weight: 10 weight: 20 --- "} {"_id":"doc-en-website-99f44636cbcdc6781289c5d243fb7f5234cb786090d552a6df7242698c70b8af","title":"","text":"Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve. content_type: concept weight: 20 weight: 90 --- "} {"_id":"doc-en-website-f99932f3d6d5ff60e0e37aafefe9fd3ddba47dad8bfb1eace38b48d1b9743495","title":"","text":"- smarterclayton title: StatefulSets content_type: concept weight: 40 weight: 30 --- "} {"_id":"doc-en-website-3c97f130fbbc9eac616f02b78cc7b079e64e8d2e915c2b47b248644f5e1d4e0f","title":"","text":"(ex: `ca.crt`, `ca.key`, `front-proxy-ca.crt`, and `front-proxy-ca.key`) to all your control plane nodes in the Kubernetes certificates directory. 1. Update *Kubernetes controller manager's* `--root-ca-file` to include both old and new CA and restart controller manager. 1. Update {{< glossary_tooltip text=\"kube-controller-manager\" term_id=\"kube-controller-manager\" >}}'s `--root-ca-file` to include both old and new CA. Then restart the component. Any service account created after this point will get secrets that include both old and new CAs. 
{{< note >}} Remove the flag `--client-ca-file` from the *Kubernetes controller manager* configuration. You can also replace the existing client CA file or change this configuration item to reference a new, updated CA. [Issue 1350](https://github.com/kubernetes/kubeadm/issues/1350) tracks an issue with *Kubernetes controller manager* being unable to accept a CA bundle. The files specified by the kube-controller-manager flags `--client-ca-file` and `--cluster-signing-cert-file` cannot be CA bundles. If these flags and `--root-ca-file` point to the same `ca.crt` file which is now a bundle (includes both old and new CA) you will face an error. To workaround this problem you can copy the new CA to a separate file and make the flags `--client-ca-file` and `--cluster-signing-cert-file` point to the copy. Once `ca.crt` is no longer a bundle you can restore the problem flags to point to `ca.crt` and delete the copy. {{< /note >}} 1. Update all service account tokens to include both old and new CA certificates."} {"_id":"doc-en-website-8c74b647383d9f4a1b1053d1ac988bee6476f7f68471e30edd0055a1888c80f0","title":"","text":" --- reviewers: - michmike - patricklang title: Adding Windows nodes min-kubernetes-server-version: 1.17 content_type: tutorial weight: 30 --- {{< feature-state for_k8s_version=\"v1.18\" state=\"beta\" >}} You can use Kubernetes to run a mixture of Linux and Windows nodes, so you can mix Pods that run on Linux with Pods that run on Windows. This page shows how to register Windows nodes to your cluster. ## {{% heading \"prerequisites\" %}} {{< version-check >}} * Obtain a [Windows Server 2019 license](https://www.microsoft.com/en-us/cloud-platform/windows-server-pricing) (or higher) in order to configure the Windows node that hosts Windows containers. If you are using VXLAN/Overlay networking you must also have [KB4489899](https://support.microsoft.com/help/4489899) installed. * A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see [Creating a single control-plane cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/)). ## {{% heading \"objectives\" %}} * Register a Windows node to the cluster * 
Configure networking so Pods and Services on Linux and Windows can communicate with each other ## Getting Started: Adding a Windows Node to Your Cluster ### Networking Configuration Once you have a Linux-based Kubernetes control-plane node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity. #### Configuring Flannel 1. Prepare the Kubernetes control plane for Flannel Some minor preparation is recommended on the Kubernetes control plane in the cluster. It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. The following command must be run on all Linux nodes: ```bash sudo sysctl net.bridge.bridge-nf-call-iptables=1 ``` 1. Download & configure Flannel for Linux Download the most recent Flannel manifest: ```bash wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml ``` Modify the `net-conf.json` section of the flannel manifest in order to set the VNI to 4096 and the Port to 4789. It should look as follows: ```json net-conf.json: | { \"Network\": \"10.244.0.0/16\", \"Backend\": { \"Type\": \"vxlan\", \"VNI\" : 4096, \"Port\": 4789 } } ``` {{< note >}}The VNI must be set to 4096 and the port to 4789 for Flannel on Linux to interoperate with Flannel on Windows. See the [VXLAN documentation](https://github.com/coreos/flannel/blob/master/Documentation/backends.md#vxlan) for an explanation of these fields.{{< /note >}} {{< note >}}To use L2Bridge/Host-gateway mode instead, change the value of `Type` to `\"host-gw\"` and omit `VNI` and `Port`.{{< /note >}} 1. Apply the Flannel manifest and validate Let's apply the Flannel configuration: ```bash kubectl apply -f kube-flannel.yml ``` After a few minutes, you should see all the pods as running if the Flannel pod network was deployed. ```bash kubectl get pods -n kube-system ``` The output should include the Linux flannel DaemonSet as running: ``` NAMESPACE NAME READY STATUS RESTARTS AGE ... kube-system kube-flannel-ds-54954 1/1 Running 0 1m ``` 1. 
Add Windows Flannel and kube-proxy DaemonSets Now you can add Windows-compatible versions of Flannel and kube-proxy. In order to ensure that you get a compatible version of kube-proxy, you'll need to substitute the tag of the image. The following example shows usage for Kubernetes {{< param \"fullversion\" >}}, but you should adjust the version for your own deployment. ```bash curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/{{< param \"fullversion\" >}}/g' | kubectl apply -f - kubectl apply -f https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml ``` {{< note >}} If you're using host-gateway use https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-host-gw.yml instead. {{< /note >}} {{< note >}} If you're using a different interface rather than Ethernet (for example, \"Ethernet0 2\") on the Windows nodes, you have to modify the line: ```powershell wins cli process run --path /k/flannel/setup.exe --args \"--mode=overlay --interface=Ethernet\" ``` in the `flannel-host-gw.yml` or `flannel-overlay.yml` file and specify your interface accordingly. ```bash # Example curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/flannel-overlay.yml | sed 's/Ethernet/Ethernet0 2/g' | kubectl apply -f - ``` {{< /note >}} ### Joining a Windows worker node {{< note >}} You must install the `Containers` feature and install Docker. Instructions to do so are available at [Install Docker Engine - Enterprise on Windows Servers](https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/docker-engine-enterprise/dee-windows.html). {{< /note >}} {{< note >}} All code snippets in Windows sections are to be run in a PowerShell environment with elevated (Administrator) permissions on the Windows worker node. {{< /note >}} 1. Install wins, kubelet, and kubeadm. ```PowerShell curl.exe -LO https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/PrepareNode.ps1 .\PrepareNode.ps1 -KubernetesVersion {{< param \"fullversion\" >}} ``` 1. 
Run `kubeadm` to join the node Use the command that was given to you when you ran `kubeadm init` on a control plane host. If you no longer have this command, or the token has expired, you can run `kubeadm token create --print-join-command` (on a control plane host) to generate a new token. #### Verifying your installation You should now be able to view the Windows node in your cluster by running: ```bash kubectl get nodes -o wide ``` If your new node is in the `NotReady` state it is likely because the flannel image is still downloading. You can check the progress as before by checking on the flannel pods in the `kube-system` namespace: ```shell kubectl -n kube-system get pods -l app=flannel ``` Once the flannel Pod is running, your node should enter the `Ready` state and then be available to handle workloads. ## {{% heading \"whatsnext\" %}} - [Upgrading Windows kubeadm nodes](/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes) "} {"_id":"doc-en-website-1a9ac40673f97eaf8592befc06dc5c4e6a84e94dbf695b24bb410aa617232f1b","title":"","text":"and if `concurrencyPolicy` is set to `Allow`, the jobs will always run at least once. {{< caution >}} If `startingDeadlineSeconds` is set to a value less than 10 seconds, the CronJob may not be scheduled. This is because the CronJob controller checks things every 10 seconds. {{< /caution >}} For every CronJob, the CronJob {{< glossary_tooltip term_id=\"controller\" >}} checks how many schedules it missed in the duration from its last scheduled time until now. 
If there are more than 100 missed schedules, then it does not start the job and logs the error ````"} {"_id":"doc-en-website-a98ceb7fe3bb59d60487d02767f2552727e8a1ce1c19a240e1234e31d4e55ef4","title":"","text":"For instructions on creating and working with cron jobs, and for an example of CronJob manifest, see [Running automated tasks with cron jobs](/docs/tasks/job/automated-tasks-with-cron-jobs)."} {"_id":"doc-en-website-441191e60dcd7321753e2b91c1ee2bbd65bc28778e494c7ce0b1c84e60db5dc7","title":"","text":"* Provider support for dual-stack networking (Cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces) * A [network plugin](/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) that supports dual-stack (such as Kubenet or Calico) * Kube-proxy running in mode IPVS * [Dual-stack enabled](/docs/concepts/services-networking/dual-stack/) cluster {{< version-check >}}"} {"_id":"doc-en-website-a194380ffd84616a0b7e313dfbe09cc026570e30f8a7037208c3b643d62a035d","title":"","text":" --- title: Suggesting content improvements slug: suggest-improvements content_type: concept weight: 10 card: name: contribute weight: 20 --- If you notice an issue with Kubernetes documentation, or have an idea for new content, then open an issue. All you need is a [GitHub account](https://github.com/join) and a web browser. In most cases, new work on Kubernetes documentation begins with an issue in GitHub. Kubernetes contributors then review, categorize and tag issues as needed. Next, you or another member of the Kubernetes community open a pull request with changes to resolve the issue. ## Opening an issue If you want to suggest improvements to existing content, or notice an error, then open an issue. 1. Click the **Create an issue** button on the right side of the page. This redirects you to a GitHub issue page pre-populated with some headers. 2. Describe the issue or suggestion for improvement. Provide as many details as you can. 3. 
Click the **Submit new issue** button. After submitting, check in on your issue occasionally or turn on GitHub notifications. Reviewers and other community members might ask questions before they can take action on your issue. ## Suggesting new content If you have an idea for new content, but you aren't sure where it should go, you can still file an issue. Either: - Choose an existing page in the section you think the content belongs in and click **Create an issue**. - Go to [GitHub](https://github.com/kubernetes/website/issues/new/) and file the issue directly. ## How to file great issues Keep the following in mind when filing an issue: - Provide a clear issue description. Describe what specifically is missing, out of date, wrong, or needs improvement. - Explain the specific impact the issue has on users. - Limit the scope of a given issue to a reasonable unit of work. For problems with a large scope, break them down into smaller issues. For example, \"Fix the security docs\" is too broad, but \"Add details to the 'Restricting network access' topic\" is specific enough to be actionable. - Search the existing issues to see if there's anything related or similar to the new issue. - If the new issue relates to another issue or pull request, refer to it either by its full URL or by the issue or pull request number prefixed with a `#` character, for example `Introduced by #987654`. - Follow the [Code of Conduct](/ja/community/code-of-conduct/). Respect your fellow contributors. For example, \"The docs are terrible\" is not helpful or respectful feedback. "} {"_id":"doc-en-website-412b6408b600d4afc4fefdefe1276c75a37aadf2c3aca487971d27c2f35efea6","title":"","text":"/docs/reference/glossary/maintainer/ /docs/reference/glossary/approver/ 301 /docs/reference/kubectl/kubectl-cmds/ /docs/reference/generated/kubectl/kubectl-commands/ 301! 
/docs/reference/kubectl/kubectl/kubectl_*.md /docs/reference/generated/kubectl/kubectl-commands#:splat 301 /docs/reference/scheduling/profiles/ /docs/reference/scheduling/config/#profiles 301"} {"_id":"doc-en-website-7ffcdade5ea5eba50f4fd9850d6c33a55cc34efdeece99be5c7ff7a172ff99ab","title":"","text":" --- title: Upgrading Windows nodes min-kubernetes-server-version: 1.17 content_type: task weight: 40 --- {{< feature-state for_k8s_version=\"v1.18\" state=\"beta\" >}} This page explains how to upgrade a Windows node [created with kubeadm](/zh/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes). ## {{% heading \"prerequisites\" %}} {{< include \"task-tutorial-prereqs.md\" >}} {{< version-check >}} * Familiarize yourself with [the process for upgrading the rest of your kubeadm cluster](/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to upgrade the control plane nodes before upgrading your Windows nodes. ## Upgrading worker nodes {#upgrading-worker-nodes} ### Upgrade kubeadm {#upgrade-kubeadm} 1. From the Windows node, upgrade kubeadm: ```powershell # replace {{< param \"fullversion\" >}} with your desired version curl.exe -Lo C:\k\kubeadm.exe https://dl.k8s.io/{{< param \"fullversion\" >}}/bin/windows/amd64/kubeadm.exe ``` ### Drain the node {#drain-the-node} 1. From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads: ```shell # replace <node-to-drain> with the name of the node you are draining kubectl drain <node-to-drain> --ignore-daemonsets ``` You should see output similar to this: ``` node/ip-172-31-85-18 cordoned node/ip-172-31-85-18 drained ``` ### Upgrade the kubelet configuration {#upgrade-the-kubelet-configuration} 1. From the Windows node, call the following command to sync the new kubelet configuration: ```powershell kubeadm upgrade node ``` ### Upgrade kubelet {#upgrade-kubelet} 1. From the Windows node, upgrade and restart the kubelet: ```powershell stop-service kubelet curl.exe -Lo C:\k\kubelet.exe https://dl.k8s.io/{{< param \"fullversion\" >}}/bin/windows/amd64/kubelet.exe restart-service kubelet ``` ### Uncordon the node {#uncordon-the-node} 1. From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable: ```shell # replace <node-to-drain> with the name of your node kubectl uncordon <node-to-drain> ``` ### Upgrade kube-proxy {#upgrade-kube-proxy} 1. 
From a machine with access to the Kubernetes API, run the following, replacing {{< param \"fullversion\" >}} with your desired version: ```shell curl -L https://github.com/kubernetes-sigs/sig-windows-tools/releases/latest/download/kube-proxy.yml | sed 's/VERSION/{{< param \"fullversion\" >}}/g' | kubectl apply -f - ``` "} {"_id":"doc-en-website-4f878ca05d06cd45d7721d273a47aedaa9eab31c493fd2965e364a25c4990c08","title":"","text":" --- title: Configure GMSA for Windows Pods and containers content_type: task weight: 20 --- {{< feature-state for_k8s_version=\"v1.18\" state=\"stable\" >}} This page shows how to configure [Group Managed Service Accounts](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview) (GMSA) for Pods and containers that will run on Windows nodes. Group Managed Service Accounts are a specific type of Active Directory account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators across multiple servers. In Kubernetes, GMSA credential specs are configured at a Kubernetes cluster-wide scope as Custom Resources. Windows Pods, as well as individual containers within a Pod, can be configured to use a GMSA for domain based functions (for example, Kerberos authentication) when interacting with other Windows services. As of Kubernetes 1.16, the Docker runtime supports GMSA for Windows workloads. ## {{% heading \"prerequisites\" %}} You need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. The cluster is expected to have Windows worker nodes. This section covers a set of initial steps required once for each cluster: ### Install the GMSACredentialSpec CRD A [CustomResourceDefinition](/zh/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD) for GMSA credential spec resources needs to be configured on the cluster to define the custom resource type `GMSACredentialSpec`. First download the GMSA CRD [YAML](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-crd.yml) and save it as `gmsa-crd.yaml`. Next, install the CRD with `kubectl apply -f gmsa-crd.yaml`. ### Install webhooks to validate GMSA users Two webhooks need to be configured on the Kubernetes cluster to populate and validate GMSA credential spec references at the Pod or container level: 1. A mutating webhook that expands references to GMSAs (by name from a Pod specification) into the full credential spec in JSON form within the Pod spec. 1. A validating webhook that ensures all references to GMSAs are authorized for use by the Pod's service account. Installing the above webhooks and associated objects requires the steps below: 1. Create a certificate key pair (that will be used to allow the webhook container to communicate with the cluster) 1. Install a Secret with the certificate from above 1. Create a Deployment for the core webhook logic 1. 
创建引用该 Deployment 的 Validating Webhook 和 Mutating Webhook 配置 你可以使用[这个脚本](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/deploy-gmsa-webhook.sh) 来部署和配置上述 GMSA Webhook 及相关联的对象。你还可以在运行脚本时设置 `--dry-run=server` 选项以便审查脚本将会对集群做出的变更。 脚本所使用的[YAML 模板](https://github.com/kubernetes-sigs/windows-gmsa/blob/master/admission-webhook/deploy/gmsa-webhook.yml.tpl) 也可用于手动部署 Webhook 及相关联的对象,不过需要对其中的参数作适当替换。 ## 在活动目录中配置 GMSA 和 Windows 节点 在配置 Kubernetes 中的 Pod 以使用 GMSA 之前,需要按 [Windows GMSA 文档](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts#BKMK_Step1) 中描述的那样先在活动目录中准备好期望的 GMSA。 Windows 工作节点(作为 Kubernetes 集群的一部分)需要被配置到活动目录中,以便 访问与期望的 GMSA 相关联的秘密凭据数据。这一操作的描述位于 [Windows GMSA 文档](https://docs.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts#to-add-member-hosts-using-the-set-adserviceaccount-cmdlet) 中。 ## 创建 GMSA 凭据规约资源 当(如前所述)安装了 GMSACredentialSpec CRD 之后,你就可以配置包含 GMSA 凭据 规约的自定义资源了。GMSA 凭据规约中并不包含秘密或敏感数据。 其中包含的信息主要用于容器运行时,便于后者向 Windows 描述容器所期望的 GMSA。 GMSA 凭据规约可以使用 [PowerShell 脚本](https://github.com/kubernetes-sigs/windows-gmsa/tree/master/scripts/GenerateCredentialSpecResource.ps1) 以 YAML 格式生成。 下面是手动以 JSON 格式生成 GMSA 凭据规约并对其进行 YAML 转换的步骤: 1. 导入 CredentialSpec [模块](https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/windows-server-container-tools/ServiceAccounts/CredentialSpec.psm1): `ipmo CredentialSpec.psm1` 1. 使用 `New-CredentialSpec` 来创建一个 JSON 格式的凭据规约。 要创建名为 `WebApp1` 的 GMSA 凭据规约,调用 `New-CredentialSpec -Name WebApp1 -AccountName WebApp1 -Domain $(Get-ADDomain -Current LocalComputer)`。 1. 使用 `Get-CredentialSpec` 来显示 JSON 文件的路径。 1. 
将凭据规约从 JSON 格式转换为 YAML 格式,并添加必要的头部字段 `apiVersion`、`kind`、`metadata` 和 `credspec`,使其成为一个可以在 Kubernetes 中配置的 GMSACredentialSpec 自定义资源。 下面的 YAML 配置描述的是一个名为 `gmsa-WebApp1` 的 GMSA 凭据规约: ```yaml apiVersion: windows.k8s.io/v1alpha1 kind: GMSACredentialSpec metadata: name: gmsa-WebApp1 # 这是随意起的一个名字,将用作引用 credspec: ActiveDirectoryConfig: GroupManagedServiceAccounts: - Name: WebApp1 # GMSA 账号的用户名 Scope: CONTOSO # NETBIOS 域名 - Name: WebApp1 # GMSA 账号的用户名 Scope: contoso.com # DNS 域名 CmsPlugins: - ActiveDirectory DomainJoinConfig: DnsName: contoso.com # DNS 域名 DnsTreeName: contoso.com # DNS 域名根 Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a # GUID MachineAccountName: WebApp1 # GMSA 账号的用户名 NetBiosName: CONTOSO # NETBIOS 域名 Sid: S-1-5-21-2126449477-2524075714-3094792973 # GMSA 的 SID ``` 上面的凭据规约资源可以保存为 `gmsa-Webapp1-credspec.yaml`,之后使用 `kubectl apply -f gmsa-Webapp1-credspec.yaml` 应用到集群上。 ## 配置集群角色以启用对特定 GMSA 凭据规约的 RBAC 你需要为每个 GMSA 凭据规约资源定义集群角色。 该集群角色授权某主体(通常是一个服务账号)对特定的 GMSA 资源执行 `use` 动作。 下面的示例显示的是一个集群角色,对前文创建的凭据规约 `gmsa-WebApp1` 执行鉴权。 将此文件保存为 `gmsa-webapp1-role.yaml` 并执行 `kubectl apply -f gmsa-webapp1-role.yaml`。 ```yaml # 创建集群角色读取凭据规约 apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: webapp1-role rules: - apiGroups: [\"windows.k8s.io\"] resources: [\"gmsacredentialspecs\"] verbs: [\"use\"] resourceNames: [\"gmsa-WebApp1\"] ``` ## 将角色指派给要使用特定 GMSA 凭据规约的服务账号 你需要将某个服务账号(Pod 配置所对应的那个)绑定到前文创建的集群角色上。 这一绑定操作实际上授予该服务账号使用所指定的 GMSA 凭据规约资源的访问权限。 下面显示的是一个绑定到集群角色 `webapp1-role` 上的 default 服务账号,使之 能够使用前面所创建的 `gmsa-WebApp1` 凭据规约资源。 ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: allow-default-svc-account-read-on-gmsa-WebApp1 namespace: default subjects: - kind: ServiceAccount name: default namespace: default roleRef: kind: ClusterRole name: webapp1-role apiGroup: rbac.authorization.k8s.io ``` ## 在 Pod 规约中配置 GMSA 凭据规约引用 Pod 规约字段 `securityContext.windowsOptions.gmsaCredentialSpecName` 可用来 设置对指定 GMSA 凭据规约自定义资源的引用。 设置此引用将会配置 Pod 中的所有容器使用所给的 
GMSA。 下面是一个 Pod 规约示例,其中包含了对 `gmsa-WebApp1` 凭据规约的引用: ```yaml apiVersion: apps/v1 kind: Deployment metadata: labels: run: with-creds name: with-creds namespace: default spec: replicas: 1 selector: matchLabels: run: with-creds template: metadata: labels: run: with-creds spec: securityContext: windowsOptions: gmsaCredentialSpecName: gmsa-webapp1 containers: - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 imagePullPolicy: Always name: iis nodeSelector: kubernetes.io/os: windows ``` Pod 中的各个容器也可以使用对应容器的 `securityContext.windowsOptions.gmsaCredentialSpecName` 字段来设置期望使用的 GMSA 凭据规约。 例如: ```yaml apiVersion: apps/v1 kind: Deployment metadata: labels: run: with-creds name: with-creds namespace: default spec: replicas: 1 selector: matchLabels: run: with-creds template: metadata: labels: run: with-creds spec: containers: - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 imagePullPolicy: Always name: iis securityContext: windowsOptions: gmsaCredentialSpecName: gmsa-Webapp1 nodeSelector: kubernetes.io/os: windows ``` 当 Pod 规约中填充了 GMSA 相关字段(如上所述),在集群中应用 Pod 规约时会依次 发生以下事件: 1. Mutating Webhook 解析对 GMSA 凭据规约资源的引用,并将其全部展开, 得到 GMSA 凭据规约的实际内容。 1. Validating Webhook 确保与 Pod 相关联的服务账号有权在所给的 GMSA 凭据规约 上执行 `use` 动作。 1. 容器运行时为每个 Windows 容器配置所指定的 GMSA 凭据规约,这样容器就可以以 活动目录中该 GMSA 所代表的身份来执行操作,使用该身份来访问域中的服务。 ## 故障排查 如果在你的环境中配置 GMSA 时遇到了困难,你可以采取若干步骤来排查可能 的故障。 首先,确保凭据规约已经被传递到 Pod。要实现这点,你需要先通过 `exec` 进入到 你的 Pod 之一,检查 `nltest.exe /parentdomain` 命令的输出。 在下面的例子中,Pod 未能正确地获得凭据规约: ```shell kubectl exec -it iis-auth-7776966999-n5nzr powershell.exe Windows PowerShell Copyright (C) Microsoft Corporation. All rights reserved. PS C:> nltest.exe /parentdomain Getting parent domain failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE PS C:> ``` 如果 Pod 未能正确获得凭据规约,则下一步就要检查与域之间的通信。 首先,从 Pod 内部快速执行一个 nslookup 操作,找到域根。 这一操作会告诉我们三件事情: 1. Pod 能否访问域控制器(DC) 1. DC 能否访问 Pod 1. 
DNS 是否正常工作 如果 DNS 和通信测试通过,接下来你需要检查是否 Pod 已经与域之间建立了 安全通信通道。要执行这一检查,你需要再次通过 `exec` 进入到你的 Pod 中 并执行 `nltest.exe /query` 命令。 ```shell PS C:> nltest.exe /query I_NetLogonControl failed: Status = 1722 0x6ba RPC_S_SERVER_UNAVAILABLE ``` 这一输出告诉我们,由于某些原因,Pod 无法使用凭据规约中的账号登录到域。 你可以通过运行 `nltest.exe /sc_reset:domain.example` 命令尝试修复安全通道。 ```shell PS C:> nltest /sc_reset:domain.example Flags: 30 HAS_IP HAS_TIMESERV Trusted DC Name dc10.domain.example Trusted DC Connection Status Status = 0 0x0 NERR_Success The command completed successfully PS C:> ``` 如果上述命令修复了错误,你就可以通过向你的 Pod 规约添加生命周期回调来将此操作 自动化。如果上述命令未能奏效,你就需要再次检查凭据规约,以确保其数据是正确的 而且是完整的。 ```yaml image: registry.domain.example/iis-auth:1809v1 lifecycle: postStart: exec: command: [\"powershell.exe\",\"-command\",\"do { Restart-Service -Name netlogon } while ( $($Result = (nltest.exe /query); if ($Result -like '*0x0 NERR_Success*') {return $true} else {return $false}) -eq $false)\"] imagePullPolicy: IfNotPresent ``` 如果你向你的 Pod 规约中添加如上所示的 `lifecycle` 节,则 Pod 会自动执行所 列举的命令来重启 `netlogon` 服务,直到 `nltest.exe /query` 命令返回时没有错误信息。 ## GMSA 的局限 在使用 [Windows 版本的 ContainerD 运行时](/zh/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#cri-containerd) 时,通过 GMSA 域身份标识访问受限制的网络共享资源时会出错。 容器会收到身份标识且 `nltest.exe /query` 调用也能正常工作。 当需要访问网络共享资源时,建议使用 [Docker EE 运行时](/zh/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#docker-ee)。 Windows Server 团队正在 Windows 内核中解决这一问题,并在将来发布解决此问题的补丁。 你可以在 [Microsoft Windows Containers 问题跟踪列表](https://github.com/microsoft/Windows-Containers/issues/44) 中查找这类更新。 "} {"_id":"doc-en-website-32f98857ace0affa5292646b0169ac5d25825bb52a7637d028694f190cca91e4","title":"","text":"from the community. Please try it out and give us feedback! {{< /caution >}} ## kubeadm alpha certs {#cmd-certs} A collection of operations for operating Kubernetes certificates. 
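Under the hood, these operations inspect and re-issue the X.509 certificates that kubeadm keeps in `/etc/kubernetes/pki`. As a rough, standalone sketch (not kubeadm's actual implementation; the file paths here are illustrative), the expiry that an expiration check reports is the certificate's `NotAfter` field, which you can read for any certificate on disk with plain `openssl`:

```shell
# Generate a throwaway self-signed certificate as a stand-in for one of
# the kubeadm-managed certificates (paths are illustrative)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Print its expiry; this NotAfter field is the value an expiration
# check reports for each certificate
openssl x509 -noout -enddate -in /tmp/demo.crt
```

The same `openssl x509 -noout -enddate` inspection works on real files such as `/etc/kubernetes/pki/apiserver.crt` if you want to cross-check what kubeadm reports.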
{{< tabs name=\"tab-certs\" >}} {{< tab name=\"overview\" include=\"generated/kubeadm_alpha_certs.md\" />}} {{< /tabs >}} ## kubeadm alpha certs renew {#cmd-certs-renew} You can renew all Kubernetes certificates using the `all` subcommand or renew them selectively."} {"_id":"doc-en-website-e80171b7dd4877f030999c2954a1adfe84cb2b592e80747ff657cd418e2bedfb","title":"","text":"{{< tab name=\"certificate-key\" include=\"generated/kubeadm_alpha_certs_certificate-key.md\" />}} {{< /tabs >}} ## kubeadm alpha certs generate-csr {#cmd-certs-generate-csr} This command can be used to generate certificate signing requests (CSRs) which can be submitted to a certificate authority (CA) for signing. {{< tabs name=\"tab-certs-generate-csr\" >}} {{< tab name=\"certificate-generate-csr\" include=\"generated/kubeadm_alpha_certs_generate-csr.md\" />}} {{< /tabs >}} ## kubeadm alpha certs check-expiration {#cmd-certs-check-expiration} This command checks expiration for the certificates in the local PKI managed by kubeadm."} {"_id":"doc-en-website-2ce02b5e125a9f0426e33b42cb3a04fff0c5f588e26bcdb7a7d4c33842816428","title":"","text":"{{< tab name=\"bootstrap-token\" include=\"generated/kubeadm_init_phase_bootstrap-token.md\" />}} {{< /tabs >}} ## kubeadm init phase kubelet-finalize {#cmd-phase-kubelet-finalize-all} Use the following phase to update settings relevant to the kubelet after TLS bootstrap. You can use the `all` subcommand to run all `kubelet-finalize` phases. 
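The cert-rotation step of `kubelet-finalize`, for example, switches the kubelet over to the rotatable client certificate obtained during TLS bootstrap. A hedged sketch of the related kubelet configuration knob (the exact file is managed by kubeadm; verify the field against your version's KubeletConfiguration reference):

```yaml
# Illustrative KubeletConfiguration fragment: with client certificate
# rotation enabled, the kubelet requests renewed certificates itself as
# expiry approaches, instead of keeping the bootstrap-issued one.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
rotateCertificates: true
```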
{{< tabs name=\"tab-kubelet-finalize\" >}} {{< tab name=\"kubelet-finalize\" include=\"generated/kubeadm_init_phase_kubelet-finalize.md\" />}} {{< tab name=\"kubelet-finalize-all\" include=\"generated/kubeadm_init_phase_kubelet-finalize_all.md\" />}} {{< tab name=\"kubelet-finalize-cert-rotation\" include=\"generated/kubeadm_init_phase_kubelet-finalize_experimental-cert-rotation.md\" />}} {{< /tabs >}} ## kubeadm init phase addon {#cmd-phase-addon}"} {"_id":"doc-en-website-e73272fc307fa06cd9a5b41ab4d6746d992729921d75341e4d572777f7687c64","title":"","text":"{{< tabs name=\"tab-phase\" >}} {{< tab name=\"phase\" include=\"generated/kubeadm_upgrade_node_phase.md\" />}} {{< tab name=\"preflight\" include=\"generated/kubeadm_upgrade_node_phase_preflight.md\" />}} {{< tab name=\"control-plane\" include=\"generated/kubeadm_upgrade_node_phase_control-plane.md\" />}} {{< tab name=\"kubelet-config\" include=\"generated/kubeadm_upgrade_node_phase_kubelet-config.md\" />}} {{< /tabs >}}"} {"_id":"doc-en-website-6e11be1948d80bb941faa002fad29feb3e4476585ac9e7af27d78f63c5065a13","title":"","text":"Notice how the number of desired replicas is 3 according to `.spec.replicas` field. 3. To see the Deployment rollout status, run `kubectl rollout status deployment.v1.apps/nginx-deployment`. 3. To see the Deployment rollout status, run `kubectl rollout status deployment/nginx-deployment`. The output is similar to: ```"} {"_id":"doc-en-website-de5ffb1452603cbc4db5a3b9781526ee514b8709f6e6e1ebb78a9cb403cd10c1","title":"","text":"2. To see the rollout status, run: ```shell kubectl rollout status deployment.v1.apps/nginx-deployment kubectl rollout status deployment/nginx-deployment ``` The output is similar to this:"} {"_id":"doc-en-website-06b5581579f8807ff216abdb556eb85ebd6712671293cdd0c363c6ce149bad99","title":"","text":"* The rollout gets stuck. 
You can verify it by checking the rollout status: ```shell kubectl rollout status deployment.v1.apps/nginx-deployment kubectl rollout status deployment/nginx-deployment ``` The output is similar to this:"} {"_id":"doc-en-website-3792c0c3df3dfe991c53f67a91f423727807bb8098e48f807d0497099bbc9115","title":"","text":"successfully, `kubectl rollout status` returns a zero exit code. ```shell kubectl rollout status deployment.v1.apps/nginx-deployment kubectl rollout status deployment/nginx-deployment ``` The output is similar to this: ```"} {"_id":"doc-en-website-9c795e9f047b5c8dd4af4081f7a71e181cb22b467251f7fe071d593d208d44cf","title":"","text":"returns a non-zero exit code if the Deployment has exceeded the progression deadline. ```shell kubectl rollout status deployment.v1.apps/nginx-deployment kubectl rollout status deployment/nginx-deployment ``` The output is similar to this: ```"} {"_id":"doc-en-website-2d6dbe718e3ba24a719cf147ffb38e44425109815ab6d3e81e0789008d726e94","title":"","text":" --- title: 对象 id: object date: 2020-10-12 full_link: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/#kubernetes-objects short_description: > Kubernetes 系统中的实体, 代表了集群的部分状态。 aka: tags: - fundamental --- Kubernetes 系统中的实体。Kubernetes API 用这些实体表示集群的状态。 Kubernetes 对象通常是一个“目标记录”-一旦你创建了一个对象,Kubernetes {{< glossary_tooltip text=\"控制平面\" term_id=\"control-plane\" >}} 不断工作,以确保它代表的项目确实存在。 创建一个对象相当于告知 Kubernetes 系统:你期望这部分集群负载看起来像什么;这也就是你集群的期望状态。"} {"_id":"doc-en-website-982c1fe6bd95ba2bf44397503ee5e931229e3ec1489f868e24a8cd70ad95d272","title":"","text":"--- reviewers: title: Configuring kubelet Garbage Collection title: Garbage collection for container images content_type: concept weight: 70 --- Garbage collection is a helpful function of kubelet that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes. 
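The cadence above is fixed, but the disk-usage thresholds that drive image garbage collection are tunable through the kubelet configuration. A hedged sketch (the two percentages are the commonly documented defaults; verify them against your kubelet version):

```yaml
# Illustrative KubeletConfiguration fragment: image GC kicks in once disk
# usage exceeds the high threshold, then deletes images until usage drops
# below the low threshold.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
```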
Garbage collection is a helpful function of kubelet that will clean up unused [images](/docs/concepts/containers/#container-images) and unused [containers](/docs/concepts/containers/). Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes. External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist."} {"_id":"doc-en-website-0cea9d07770df67ff86596ad1c63d7638d1a346b799a31019207ebe409c769ba","title":"","text":"You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/{{< param \"githubbranch\" >}}/staging/https-nginx/). This requires having go and make tools installed. If you don't want to install those, then follow the manual steps later. In short: --> ## Service 安全 ## 保护 Service {#securing-the-service} 到现在为止,我们只在集群内部访问了 Nginx server。在将 Service 暴露到 Internet 之前,我们希望确保通信信道是安全的。对于这可能需要: 到现在为止,我们只在集群内部访问了 Nginx 服务器。在将 Service 暴露到因特网之前,我们希望确保通信信道是安全的。 为实现这一目的,可能需要: * https 自签名证书(除非已经有了一个识别身份的证书) * 使用证书配置的 Nginx server * 用于 HTTPS 的自签名证书(除非已经有了一个识别身份的证书) * 使用证书配置的 Nginx 服务器 * 使证书可以访问 Pod 的 [Secret](/zh/docs/concepts/configuration/secret/) 可以从 [Nginx https 示例](https://github.com/kubernetes/kubernetes/tree/{{< param \"githubbranch\" >}}/examples/https-nginx/) 获取所有上述内容,简明示例如下: 你可以从 [Nginx https 示例](https://github.com/kubernetes/kubernetes/tree/{{< param \"githubbranch\" >}}/staging/https-nginx/) 获取所有上述内容。你需要安装 go 和 make 工具。如果你不想安装这些软件,可以按照 后文所述的手动执行步骤执行操作。简要过程如下: ```shell make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt"} {"_id":"doc-en-website-6976926951ea71616b7fcc5381d4d5a81ba6a7988c385eb2b5c1d54a136d49b7","title":"","text":" 以下是您在运行make时遇到问题时要遵循的手动步骤(例如,在Windows上): 以下是你在运行 make 时遇到问题时要遵循的手动步骤(例如,在 Windows 上): ```shell # Create a public private key pair"} {"_id":"doc-en-website-40d74fc67d9332e8ad4a4720b338ca36137660dcd526c554832d81825bffaff9","title":"","text":" 
使用前面命令的输出来创建yaml文件,如下所示。 base64编码的值应全部放在一行上。 使用前面命令的输出来创建 yaml 文件,如下所示。 base64 编码的值应全部放在一行上。 ```yaml apiVersion: \"v1\""} {"_id":"doc-en-website-426c59a69c9eb3bbb92874c09c8f2151b0aa8e378778b6e200226c8643b67c5d","title":"","text":" 现在使用文件创建 secrets: 现在使用文件创建 Secrets: ```shell kubectl apply -f nginxsecrets.yaml"} {"_id":"doc-en-website-e590846a06851f2a68aab7743a29196105adfbf178d692f4d3d5dd541663f085","title":"","text":" 现在修改 Nginx 副本,启动一个使用在秘钥中的证书的 https 服务器和 Service,都暴露端口(80 和 443): 现在修改 nginx 副本,启动一个使用在秘钥中的证书的 HTTPS 服务器和 Service,暴露端口(80 和 443): {{< codenew file=\"service/networking/nginx-secure-app.yaml\" >}}"} {"_id":"doc-en-website-cfe08c3b8fe38b2818d71f67df40e9803a5a3ab94a093488aa877a11cb24ec0e","title":"","text":"- Each container has access to the keys through a volume mounted at `/etc/nginx/ssl`. This is setup *before* the nginx server is started. --> 关于 nginx-secure-app 清单,值得注意的几点如下: 关于 nginx-secure-app manifest 值得注意的点如下: - 它在相同的文件中包含了 Deployment 和 Service 的规格 - [Nginx 服务器](https://github.com/kubernetes/kubernetes/tree/{{< param \"githubbranch\" >}}/examples/https-nginx/default.conf) 处理 80 端口上的 http 流量,以及 443 端口上的 https 流量,Nginx Service 暴露了这两个端口。 - 每个容器访问挂载在 /etc/nginx/ssl 卷上的秘钥。这需要在 Nginx server 启动之前安装好。 - 它在相同的文件中包含了 Deployment 和 Service 的规约 - [nginx 服务器](https://github.com/kubernetes/kubernetes/tree/{{< param \"githubbranch\" >}}/staging/https-nginx/default.conf) 处理 80 端口上的 HTTP 流量,以及 443 端口上的 HTTPS 流量,Nginx Service 暴露了这两个端口。 - 每个容器访问挂载在 /etc/nginx/ssl 卷上的秘钥。这需要在 Nginx 服务器启动之前安装好。 ```shell kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml"} {"_id":"doc-en-website-2c88b49bdf19a15838f563bb8279e60c501cc60ad684e2e1c093b6fb4cce8bac","title":"","text":" 这时可以从任何节点访问到 Nginx server。 这时,你可以从任何节点访问到 Nginx 服务器。 ```shell kubectl get pods -o yaml | grep -i podip"} {"_id":"doc-en-website-0ec65b8943bb0764b8a4faa86c3c919eb9fac86aaa49ffb514268cd0d6f2c668","title":"","text":"so we have to tell curl to ignore the CName mismatch. 
By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup. Let's test this from a pod (the same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service): --> 注意最后一步我们是如何提供 `-k` 参数执行 curl命令的,这是因为在证书生成时, 我们不知道任何关于运行 Nginx 的 Pod 的信息,所以不得不在执行 curl 命令时忽略 CName 不匹配的情况。 通过创建 Service,我们连接了在证书中的 CName 与在 Service 查询时被 Pod使用的实际 DNS 名字。 注意最后一步我们是如何提供 `-k` 参数执行 curl 命令的,这是因为在证书生成时, 我们不知道任何关于运行 nginx 的 Pod 的信息,所以不得不在执行 curl 命令时忽略 CName 不匹配的情况。 通过创建 Service,我们连接了在证书中的 CName 与在 Service 查询时被 Pod 使用的实际 DNS 名字。 让我们从一个 Pod 来测试(为了简化使用同一个秘钥,Pod 仅需要使用 nginx.crt 去访问 Service): {{< codenew file=\"service/networking/curlpod.yaml\" >}}"} {"_id":"doc-en-website-06b54c87814079643624c91f0838ffc6673c6a849a2dd859534667eab96105f9","title":"","text":"Let's now recreate the Service to use a cloud load balancer, just change the `Type` of `my-nginx` Service from `NodePort` to `LoadBalancer`: --> 让我们重新创建一个 Service,使用一个云负载均衡器,只需要将 `my-nginx` Service 的 `Type` 由 `NodePort` 改成 `LoadBalancer`。 ```shell kubectl edit svc my-nginx"} {"_id":"doc-en-website-46b0ca207d8d8dabf576ce54b467ddf5833e808bf0d898f40faf23b8effb6c8a","title":"","text":"在开始本教程之前,你应该熟悉以下 Kubernetes 的概念: * [Pods](/zh/docs/user-guide/pods/single-container/) * [Pods](/zh/docs/concepts/workloads/pods/) * [Cluster DNS](/zh/docs/concepts/services-networking/dns-pod-service/) * [Headless Services](/zh/docs/concepts/services-networking/service/#headless-services) * [PersistentVolumes](/zh/docs/concepts/storage/persistent-volumes/)"} {"_id":"doc-en-website-83788cb192406cdedf094a52992e757bf3b6e31480f1c861c48dcd120a7270a5","title":"","text":"即使你已经删除了 StatefulSet 和它的全部 Pod,这些 Pod 将会被重新创建并挂载它们的 PersistentVolumes,并且 `web-0` 和 `web-1` 将仍然使用它们的主机名提供服务。 最后删除 `web` StatefulSet 和 `nginx` service。 最后删除 `nginx` service... 
```shell kubectl delete service nginx ``` ``` service \"nginx\" deleted ``` ... 并且删除 `web` StatefulSet: ```shell kubectl delete statefulset web ``` ``` statefulset \"web\" deleted ```"} {"_id":"doc-en-website-e0b85f054a0b3a69d81f21174ff43295de9c6826961132c9f0f2fb8cff89f52a","title":"","text":" {{ if .Site.Params.mermaid.enable }} {{ end }} {{ $jsBase := resources.Get \"js/base.js\" }} {{ $jsAnchor := resources.Get \"js/anchor.js\" }} {{ $jsSearch := resources.Get \"js/search.js\" | resources.ExecuteAsTemplate \"js/search.js\" .Site.Home }} {{ $jsMermaid := resources.Get \"js/mermaid.js\" | resources.ExecuteAsTemplate \"js/mermaid.js\" . }} {{ if .Site.Params.offlineSearch }} {{ $jsSearch = resources.Get \"js/offline-search.js\" }} {{ end }} {{ $js := (slice $jsBase $jsAnchor $jsSearch $jsMermaid) | resources.Concat \"js/main.js\" }} {{ if .Site.IsServer }} {{ else }} {{ $js := $js | minify | fingerprint }} {{ end }} {{ with .Site.Params.prism_syntax_highlighting }} {{ end }} {{ partial \"hooks/body-end.html\" . }} "} {"_id":"doc-en-website-5530a758696e3d9dd36029393a46139e926241f96e5ba570d21954885373f30b","title":"","text":" /* Copyright (C) Federico Zivolo 2018 Distributed under the MIT License (license terms are at http://opensource.org/licenses/MIT). 
*/(function(e,t){'object'==typeof exports&&'undefined'!=typeof module?module.exports=t():'function'==typeof define&&define.amd?define(t):e.Popper=t()})(this,function(){'use strict';function e(e){return e&&'[object Function]'==={}.toString.call(e)}function t(e,t){if(1!==e.nodeType)return[];var o=getComputedStyle(e,null);return t?o[t]:o}function o(e){return'HTML'===e.nodeName?e:e.parentNode||e.host}function n(e){if(!e)return document.body;switch(e.nodeName){case'HTML':case'BODY':return e.ownerDocument.body;case'#document':return e.body;}var i=t(e),r=i.overflow,p=i.overflowX,s=i.overflowY;return /(auto|scroll|overlay)/.test(r+s+p)?e:n(o(e))}function r(e){return 11===e?re:10===e?pe:re||pe}function p(e){if(!e)return document.documentElement;for(var o=r(10)?document.body:null,n=e.offsetParent;n===o&&e.nextElementSibling;)n=(e=e.nextElementSibling).offsetParent;var i=n&&n.nodeName;return i&&'BODY'!==i&&'HTML'!==i?-1!==['TD','TABLE'].indexOf(n.nodeName)&&'static'===t(n,'position')?p(n):n:e?e.ownerDocument.documentElement:document.documentElement}function s(e){var t=e.nodeName;return'BODY'!==t&&('HTML'===t||p(e.firstElementChild)===e)}function d(e){return null===e.parentNode?e:d(e.parentNode)}function a(e,t){if(!e||!e.nodeType||!t||!t.nodeType)return document.documentElement;var o=e.compareDocumentPosition(t)&Node.DOCUMENT_POSITION_FOLLOWING,n=o?e:t,i=o?t:e,r=document.createRange();r.setStart(n,0),r.setEnd(i,0);var l=r.commonAncestorContainer;if(e!==l&&t!==l||n.contains(i))return s(l)?l:p(l);var f=d(e);return f.host?a(f.host,t):a(e,d(t).host)}function l(e){var t=1=o.clientWidth&&n>=o.clientHeight}),l=0a[e]&&!t.escapeWithReference&&(n=J(f[o],a[e]-('right'===e?f.width:f.height))),ae({},o,n)}};return l.forEach(function(e){var t=-1===['left','top'].indexOf(e)?'secondary':'primary';f=le({},f,m[t](e))}),e.offsets.popper=f,e},priority:['left','right','top','bottom'],padding:5,boundariesElement:'scrollParent'},keepTogether:{order:400,enabled:!0,fn:function(e){var 
t=e.offsets,o=t.popper,n=t.reference,i=e.placement.split('-')[0],r=Z,p=-1!==['top','bottom'].indexOf(i),s=p?'right':'bottom',d=p?'left':'top',a=p?'width':'height';return o[s]r(n[s])&&(e.offsets.popper[d]=r(n[s])),e}},arrow:{order:500,enabled:!0,fn:function(e,o){var n;if(!q(e.instance.modifiers,'arrow','keepTogether'))return e;var i=o.element;if('string'==typeof i){if(i=e.instance.popper.querySelector(i),!i)return e;}else if(!e.instance.popper.contains(i))return console.warn('WARNING: `arrow.element` must be child of its popper element!'),e;var r=e.placement.split('-')[0],p=e.offsets,s=p.popper,d=p.reference,a=-1!==['left','right'].indexOf(r),l=a?'height':'width',f=a?'Top':'Left',m=f.toLowerCase(),h=a?'left':'top',c=a?'bottom':'right',u=S(i)[l];d[c]-us[c]&&(e.offsets.popper[m]+=d[m]+u-s[c]),e.offsets.popper=g(e.offsets.popper);var b=d[m]+d[l]/2-u/2,y=t(e.instance.popper),w=parseFloat(y['margin'+f],10),E=parseFloat(y['border'+f+'Width'],10),v=b-e.offsets.popper[m]-w-E;return v=$(J(s[l]-u,v),0),e.arrowElement=i,e.offsets.arrow=(n={},ae(n,m,Q(v)),ae(n,h,''),n),e},element:'[x-arrow]'},flip:{order:600,enabled:!0,fn:function(e,t){if(W(e.instance.modifiers,'inner'))return e;if(e.flipped&&e.placement===e.originalPlacement)return e;var o=v(e.instance.popper,e.instance.reference,t.padding,t.boundariesElement,e.positionFixed),n=e.placement.split('-')[0],i=T(n),r=e.placement.split('-')[1]||'',p=[];switch(t.behavior){case he.FLIP:p=[n,i];break;case he.CLOCKWISE:p=z(n);break;case he.COUNTERCLOCKWISE:p=z(n,!0);break;default:p=t.behavior;}return p.forEach(function(s,d){if(n!==s||p.length===d+1)return e;n=e.placement.split('-')[0],i=T(n);var 
a=e.offsets.popper,l=e.offsets.reference,f=Z,m='left'===n&&f(a.right)>f(l.left)||'right'===n&&f(a.left)f(l.top)||'bottom'===n&&f(a.top)f(o.right),g=f(a.top)f(o.bottom),b='left'===n&&h||'right'===n&&c||'top'===n&&g||'bottom'===n&&u,y=-1!==['top','bottom'].indexOf(n),w=!!t.flipVariations&&(y&&'start'===r&&h||y&&'end'===r&&c||!y&&'start'===r&&g||!y&&'end'===r&&u);(m||b||w)&&(e.flipped=!0,(m||b)&&(n=p[d+1]),w&&(r=G(r)),e.placement=n+(r?'-'+r:''),e.offsets.popper=le({},e.offsets.popper,C(e.instance.popper,e.offsets.reference,e.placement)),e=P(e.instance.modifiers,e,'flip'))}),e},behavior:'flip',padding:5,boundariesElement:'viewport'},inner:{order:700,enabled:!1,fn:function(e){var t=e.placement,o=t.split('-')[0],n=e.offsets,i=n.popper,r=n.reference,p=-1!==['left','right'].indexOf(o),s=-1===['top','left'].indexOf(o);return i[p?'left':'top']=r[o]-(s?i[p?'width':'height']:0),e.placement=T(t),e.offsets.popper=g(i),e}},hide:{order:800,enabled:!0,fn:function(e){if(!q(e.instance.modifiers,'hide','preventOverflow'))return e;var t=e.offsets.reference,o=D(e.instance.modifiers,function(e){return'preventOverflow'===e.name}).boundaries;if(t.bottomo.right||t.top>o.bottom||t.right"} {"_id":"doc-en-website-1cd5ed0d1c2f6ea53650bf34463a3fb08a67d067b18d0bf4dd88439bf1ac80d4","title":"","text":"- wget - \"-O\" - \"/work-dir/index.html\" - http://kubernetes.io - http://info.cern.ch volumeMounts: - name: workdir mountPath: \"/work-dir\""} {"_id":"doc-en-website-b12afdaa7f97353de9abfca867b5e88a246bf292df05af94e93b036b23e03d99","title":"","text":"See [Add ImagePullSecrets to a service account](/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for a detailed explanation of that process. ### Automatic mounting of manually created Secrets Manually created secrets (for example, one containing a token for accessing a GitHub account) can be automatically attached to pods based on their service account. 
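One plausible shape of the ServiceAccount side of the two mechanisms just described (the names `build-robot`, `myregistrykey`, and `ssh-key-secret` are illustrative, not from this document):

```yaml
# Illustrative ServiceAccount: pods running as this account pick up the
# listed image pull secret for registry access, and the manually created
# secret is associated with the account for attachment.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
  namespace: default
imagePullSecrets:
- name: myregistrykey
secrets:
- name: ssh-key-secret
```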
## Details ### Restrictions"} {"_id":"doc-en-website-737466ba7e839c8e369423c9be2af57d32e9a89022e378e4d1af766d8ab7f3cd","title":"","text":" --- title: SIG Docsへの参加 content_type: concept weight: 60 card: name: contribute weight: 60 --- SIG Docsは、Kubernetesプロジェクト内の [special interest groups](https://github.com/kubernetes/community/blob/master/sig-list.md)の1つであり、 Kubernetes全体のドキュメントの作成、更新、および保守に重点を置いています。 SIGの詳細については、[SIG DocsのGithubリポジトリ](https://github.com/kubernetes/community/blob/master/sig-list.md)を参照してください。 SIG Docsは、すべての寄稿者からのコンテンツとレビューを歓迎します。 誰でもPull Request(PR)を開くことができ、コンテンツに関するissueを提出したり、進行中のPull Requestにコメントしたりできます。 あなたは、[member](/docs/contribute/participate/roles-and-responsibilities/#members)や、 [reviewer](/docs/contribute/participate/roles-and-responsibilities/#reviewers)、 [approver](/docs/contribute/participate/roles-and-responsibilities/#approvers)になることもできます。 これらの役割にはより多くのアクセスが必要であり、変更を承認およびコミットするための特定の責任が伴います。 Kubernetesコミュニティ内でメンバーシップがどのように機能するかについての詳細は、 [community-membership](https://github.com/kubernetes/community/blob/master/community-membership.md) をご覧ください。 このドキュメントの残りの部分では、kubernetesの中で最も広く公開されている Kubernetesのウェブサイトとドキュメントの管理を担当しているSIG Docsの中で、これらの役割がどのように機能するのかを概説します。 ## SIG Docs chairperson SIG Docsを含む各SIGは、議長として機能する1人以上のSIGメンバーを選択します。 これらは、SIGDocsとKubernetes organizationの他の部分との連絡先です。 それらには、Kubernetesプロジェクト全体の構造と、SIG Docsがその中でどのように機能するかについての広範な知識が必要です。 現在のchairpersonのリストについては、 [Leadership](https://github.com/kubernetes/community/tree/master/sig-docs#leadership) を参照してください。 ## SIG Docs teamsと自動化 SIG Docsの自動化は、GitHub teamsとOWNERSファイルの2つの異なるメカニズムに依存しています。 ### GitHub teams GitHubには、二つのSIG Docs [teams](https://github.com/orgs/kubernetes/teams?query=sig-docs) カテゴリがあります: - `@sig-docs-{language}-owners`は承認者かつリードです。 - `@sig-docs-{language}-reviewers` はレビュアーです。 それぞれをGitHubコメントの`@name`で参照して、そのグループの全員とコミュニケーションできます。 ProwチームとGitHub teamsが完全に一致せずに重複する場合があります。 問題の割り当て、Pull Request、およびPR承認のサポートのために、自動化ではOWNERSファイルからの情報を使用します。 ### OWNERSファイルとfront-matter 
Kubernetesプロジェクトは、GitHubのissueとPull Requestに関連する自動化のためにprowと呼ばれる自動化ツールを使用します。 [Kubernetes Webサイトリポジトリ](https://github.com/kubernetes/website) は、2つの[prowプラグイン](https://github.com/kubernetes/test-infra/tree/master/prow/plugins)を使用します: - blunderbuss - approve これらの2つのプラグインは`kubernetes/website`のGithubリポジトリのトップレベルにある [OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS)ファイルと、 [OWNERS_ALIASES](https://github.com/kubernetes/website/blob/master/OWNERS_ALIASES)ファイルを使用して、 リポジトリ内でのprowの動作を制御します。 OWNERSファイルには、SIG Docsのレビュー担当者および承認者であるユーザーのリストが含まれています。 OWNERSファイルはサブディレクトリに存在することもでき、そのサブディレクトリとその子孫のファイルのレビュー担当者または承認者として機能できるユーザーを上書きできます。 一般的なOWNERSファイルの詳細については、 [OWNERS](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md)を参照してください。 さらに、個々のMarkdownファイルは、個々のGitHubユーザー名またはGitHubグループを一覧表示することにより、そのfront-matterでレビュー担当者と承認者を一覧表示できます。 OWNERSファイルとMarkdownファイルのfront-matterの組み合わせにより、PRの技術的および編集上のレビューを誰に依頼するかについてPRの所有者が自動化システムから得るアドバイスが決まります。 ## マージの仕組み Pull Requestがコンテンツの公開に使用されるブランチにマージされると、そのコンテンツは http://kubernetes.io に公開されます。 公開されたコンテンツの品質を高くするために、Pull RequestのマージはSIG Docsの承認者に限定しています。仕組みは次のとおりです。 - Pull Requestに`lgtm`ラベルと`approve`ラベルの両方があり、`hold`ラベルがなく、すべてのテストに合格すると、Pull Requestは自動的にマージされます。 - Kubernetes organizationのメンバーとSIG Docsの承認者はコメントを追加して、特定のPull Requestが自動的にマージされないようにすることができます(`/hold`コメントを追加するか、`/lgtm`コメントを保留します)。 - Kubernetesメンバーは誰でも、`/lgtm`コメントを追加することで`lgtm`ラベルを追加できます。 - `/approve`コメントを追加してPull Requestをマージできるのは、SIG Docsの承認者だけです。一部の承認者は、[PR Wrangler](/docs/contribute/participate/pr-wranglers/)や[SIG Docsのchairperson](#sig-docs-chairperson)など、追加の特定の役割も実行します。 ## {{% heading \"whatsnext\" %}} Kubernetesドキュメントへの貢献の詳細については、以下を参照してください: - [Contributing new content](/docs/contribute/new-content/overview/) - [Reviewing content](/docs/contribute/review/reviewing-prs) - [ドキュメントスタイルの概要](/ja/docs/contribute/style/) "} {"_id":"doc-en-website-dfbd174530c5d6e68fc5df9b5429e3875fed61de8fde171d4f2fb35ce9958064","title":"","text":"title: \"弃用 Dockershim 
的常见问题\" date: 2020-12-02 slug: dockershim-faq aliases: [ '/dockershim' ] --- ## 容器"} {"_id":"doc-en-website-4f21532fc031461c4cd6dac2f67d5837928977cd9ee5f9bb395f83ed5b361719","title":"","text":"容器漏洞扫描和操作系统依赖安全性 | 作为镜像构建的一部分,您应该扫描您的容器里的已知漏洞。 镜像签名和执行 | 对容器镜像进行签名,以维护对容器内容的信任。 禁止特权用户 | 构建容器时,请查阅文档以了解如何在具有最低操作系统特权级别的容器内部创建用户,以实现容器的目标。 使用带有较强隔离能力的容器运行时 | 选择提供较强隔离能力的[容器运行时类](/zh/docs/concepts/containers/runtime-class/)。 学习了解相关的 Kubernetes 安全主题:"} {"_id":"doc-en-website-abfc720efe5fb2c2c84cbde366d0b8da566e785b6092580a0068e23436e2746f","title":"","text":"* 为控制面[加密通信中的数据](/zh/docs/tasks/tls/managing-tls-in-a-cluster/) * [加密静止状态的数据](/zh/docs/tasks/administer-cluster/encrypt-data/) * [Kubernetes 中的 Secret](/zh/docs/concepts/configuration/secret/) * [运行时类](/zh/docs/concepts/containers/runtime-class) "} {"_id":"doc-en-website-194dd118e574d6dfe5926273de18d4b5c8c1bdce9816ccb463a5d7ee18bf00a7","title":"","text":" This page explains how to upgrade a Kubernetes cluster created with kubeadm from version {{< skew latestVersionAddMinor -1 >}}.x to version {{< skew latestVersion >}}.x, and from version {{< skew latestVersion >}}.x to {{< skew latestVersion >}}.y (where `y > x`). Skipping MINOR versions {{< skew currentVersionAddMinor -1 >}}.x to version {{< skew currentVersion >}}.x, and from version {{< skew currentVersion >}}.x to {{< skew currentVersion >}}.y (where `y > x`). Skipping MINOR versions when upgrading is unsupported. 
To see information about upgrading clusters created using older versions of kubeadm, please refer to the following pages instead: - [Upgrading a kubeadm cluster from {{< skew currentVersionAddMinor -2 >}} to {{< skew currentVersionAddMinor -1 >}}](https://v{{< skew currentVersionAddMinor -1 \"-\" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [Upgrading a kubeadm cluster from {{< skew currentVersionAddMinor -3 >}} to {{< skew currentVersionAddMinor -2 >}}](https://v{{< skew currentVersionAddMinor -2 \"-\" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [Upgrading a kubeadm cluster from {{< skew currentVersionAddMinor -4 >}} to {{< skew currentVersionAddMinor -3 >}}](https://v{{< skew currentVersionAddMinor -3 \"-\" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) - [Upgrading a kubeadm cluster from {{< skew currentVersionAddMinor -5 >}} to {{< skew currentVersionAddMinor -4 >}}](https://v{{< skew currentVersionAddMinor -4 \"-\" >}}.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/) The upgrade workflow at high level is the following:"} {"_id":"doc-en-website-6a736ebdd325431d688afaa06fd5a3f8964731946a8f33fbe4525fcd99aacd1c","title":"","text":"## Determine which version to upgrade to Find the latest stable {{< skew currentVersion >}} version using the OS package manager: {{< tabs name=\"k8s_install_versions\" >}} {{% tab name=\"Ubuntu, Debian or HypriotOS\" %}} apt update apt-cache madison kubeadm # find the latest {{< skew currentVersion >}} version in the list # it should look like {{< skew currentVersion >}}.x-00, where x is the latest patch {{% /tab %}} {{% tab name=\"CentOS, RHEL or Fedora\" %}} yum list --showduplicates kubeadm --disableexcludes=kubernetes # find the latest {{< skew currentVersion >}} version in the list # it should look like {{< skew currentVersion >}}.x-0, where x is the latest patch {{% /tab %}} {{< /tabs >}}"} {"_id":"doc-en-website-b9930800b351ed319b2b7b3ca42604529e4f2de7365504bee3f4a1be9475d727","title":"","text":"{{< tabs name=\"k8s_install_kubeadm_first_cp\" >}} {{% tab name=\"Ubuntu, Debian or HypriotOS\" %}} # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && apt-mark hold kubeadm - # since apt-get version 1.1 you can also use the following
method apt-get update && apt-get install -y --allow-change-held-packages kubeadm={{< skew currentVersion >}}.x-00 {{% /tab %}} {{% tab name=\"CentOS, RHEL or Fedora\" %}} # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}}"} {"_id":"doc-en-website-256d4e09b630faa391e0e130d3339044e005442876f046df21e7e28216e689c0","title":"","text":"```shell # replace x with the patch version you picked for this upgrade sudo kubeadm upgrade apply v{{< skew currentVersion >}}.x ``` Once the command finishes you should see: ``` [upgrade/successful] SUCCESS! Your cluster was upgraded to \"v{{< skew currentVersion >}}.x\". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. ```"} {"_id":"doc-en-website-1bb676b05392205c08c9ee5cae6300dc3c46eceb9c1c8feae4dd6d94ff792b1b","title":"","text":"{{< tabs name=\"k8s_install_kubelet\" >}} {{< tab name=\"Ubuntu, Debian or HypriotOS\" >}}
  # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version
  apt-mark unhold kubelet kubectl && \
  apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && \
  apt-mark hold kubelet kubectl
  -
  # since apt-get version 1.1 you can also use the following method
  apt-get update && \
  apt-get install -y --allow-change-held-packages kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00
{{< /tab >}} {{< tab name=\"CentOS, RHEL or Fedora\" >}}
  # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version
  yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes
{{< /tab >}} {{< /tabs >}}"} {"_id":"doc-en-website-2cfd9c468d45a94a74d7e65845359478c15b61277d112aaa6232b3e232ca3690","title":"","text":"{{< tabs name=\"k8s_install_kubeadm_worker_nodes\" >}} {{% tab name=\"Ubuntu, Debian or HypriotOS\" %}} # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm={{< skew currentVersion >}}.x-00 && apt-mark hold kubeadm - # since apt-get version 1.1 you can also use the following method apt-get update && apt-get install -y --allow-change-held-packages kubeadm={{< skew currentVersion >}}.x-00 {{% /tab %}} {{% tab name=\"CentOS, RHEL or Fedora\" %}} # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version yum install -y kubeadm-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}}"} {"_id":"doc-en-website-5d93c1e7ee1cb83da960d2e5648011f6437cc8385c893a2aa4b16636c5ea37d8","title":"","text":"{{< tabs name=\"k8s_kubelet_and_kubectl\" >}} {{% tab name=\"Ubuntu, Debian or HypriotOS\" %}} # replace x in {{< skew currentVersion >}}.x-00 with the latest patch version apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 && apt-mark hold kubelet kubectl - # since apt-get version 1.1 you can also use the following method apt-get update && apt-get install -y --allow-change-held-packages kubelet={{< skew currentVersion >}}.x-00 kubectl={{< skew currentVersion >}}.x-00 {{% /tab %}} {{% tab name=\"CentOS, RHEL or Fedora\" %}} # replace x in {{< skew currentVersion >}}.x-0 with the latest patch version yum install -y kubelet-{{< skew currentVersion >}}.x-0 kubectl-{{< skew currentVersion >}}.x-0 --disableexcludes=kubernetes {{% /tab %}} {{< /tabs >}}"} {"_id":"doc-en-website-df4729789fb6f7b85e32de9b4a6075a0a09c1e30e347d05d008c44eb81517a32","title":"","text":"{{- $latestVersionAddMinor = printf \"%s%s%d\" (index $versionArray 0) $seperator $latestVersionAddMinor -}} {{- $latestVersionAddMinor -}} {{- end -}} {{- $currentVersion := site.Params.version -}} {{- $currentVersion := (replace $currentVersion \"v\" \"\") -}} {{- $currentVersionArray := split $currentVersion \".\" -}} {{- $currentMinorVersion := int (index $currentVersionArray 1) -}} {{- if eq $version \"currentVersion\" -}} {{- $currentVersion -}} {{- end -}} {{- if eq $version \"currentVersionAddMinor\" -}} {{- $seperator := .Get 2 -}} {{- if eq $seperator \"\" -}} {{- $seperator = \".\" -}} {{- end -}} {{- $currentVersionAddMinor := int (.Get 1) -}} {{- $currentVersionAddMinor = add $currentMinorVersion $currentVersionAddMinor -}} {{- $currentVersionAddMinor = printf \"%s%s%d\" (index $versionArray 0) $seperator $currentVersionAddMinor -}} {{- $currentVersionAddMinor -}} {{- end -}}"} {"_id":"doc-en-website-6c0f49e8bfb45d93a02a8dfc9a8f2c9f3bc43fca59b31c95fc167f2d444c9a1c","title":"","text":"A longstanding bug regarding exec probe timeouts that may impact existing
pod definitions has been fixed. Prior to this fix, the field `timeoutSeconds` was not respected for exec probes. Instead, probes would run indefinitely, even past their configured deadline, until a result was returned. With this change, the default value of `1 second` will be applied if a value is not specified, and existing pod definitions may no longer be sufficient if a probe takes longer than one second. A feature gate, called `ExecProbeTimeout`, has been added with this fix that enables cluster operators to revert to the previous behavior, but this will be locked and removed in subsequent releases. In order to revert to the previous behavior, cluster operators should set this feature gate to `false`. Please review the updated documentation regarding [configuring probes](/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) for more details. ## Other Updates"} {"_id":"doc-en-website-1c3cf63cc84eed73e9de774ed8ddec0a77661a158e38c7a279c31770375e8abc","title":"","text":"{{< note >}} To be able to create Indexed Jobs, make sure to enable the `IndexedJob` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/) and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/). {{< /note >}}"} {"_id":"doc-en-website-2faf2853a752b0196d905971eb4e5441d406fe0c0f05837db23ab34e7732d982","title":"","text":"{{< note >}} Suspending Jobs is available in Kubernetes versions 1.21 and above.
You must enable the `SuspendJob` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/) and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/) in order to use this feature. {{< /note >}}"} {"_id":"doc-en-website-d0bb5e23bcfd37e132185b4703dec1940880f666aae44550761c3bbaa63fe2f5","title":"","text":"### Migrating to the `systemd` driver in kubeadm managed clusters Follow this [Migration guide](/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/) if you wish to migrate to the `systemd` cgroup driver in existing kubeadm managed clusters. ## Container runtimes"} {"_id":"doc-en-website-ea1d23153cabf989312bdc1e5ab5c215ffcb312bf5b19993457d10209147e8e6","title":"","text":"Refer to the following table for Windows operating system support in Kubernetes. A single heterogeneous Kubernetes cluster can have both Windows and Linux worker nodes. Windows containers have to be scheduled on Windows nodes and Linux containers on Linux nodes.
| Kubernetes version | Windows Server LTSC releases | Windows Server SAC releases | | --- | --- | --- | | *Kubernetes v1.17* | Windows Server 2019 | Windows Server ver 1809 | | *Kubernetes v1.18* | Windows Server 2019 | Windows Server ver 1809, Windows Server ver 1903, Windows Server ver 1909 | | *Kubernetes v1.19* | Windows Server 2019 | Windows Server ver 1909, Windows Server ver 2004 |"} {"_id":"doc-en-website-54deca19b2ee73120a1bf38264bf2868c7a0b75d252b137e6de60ce10f95da19","title":"","text":"## Enable Topology Aware Hints To enable service topology hints, enable the `TopologyAwareHints` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) for the kube-apiserver, kube-controller-manager, and kube-proxy: ```"} {"_id":"doc-en-website-fdde9106946d10aa7fef8a99a565a6d882eff4d8e0a7e8e28a8ea4a4d58c6990","title":"","text":"{{< note >}} Kubeadm uses the same `KubeletConfiguration` for all nodes in the cluster. The `KubeletConfiguration` is stored in a [ConfigMap](/docs/concepts/configuration/configmap) object under the `kube-system` namespace. Executing the sub commands `init`, `join` and `upgrade` would result in kubeadm"} {"_id":"doc-en-website-d387241d0241f2588d01b21a9eb5fa51fae67ca169d5433f49df19dd947e7e77","title":"","text":"To be able to create Indexed Jobs, make sure to enable the `IndexedJob` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/) and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/).
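For reference, feature gates such as the ones mentioned here are passed to each component as a comma-separated list on its `--feature-gates` flag; a sketch only (other required flags omitted):

```
kube-apiserver --feature-gates="IndexedJob=true" ...
kube-controller-manager --feature-gates="IndexedJob=true" ...
```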
"} {"_id":"doc-en-website-164b607204448628c4b9fa8ced8ca9000b8a0d302f26e455df7155eb54890531","title":"","text":"## Downloads for v1.21.0 ### Source Code filename | sha512 hash -------- | ----------- [kubernetes.tar.gz](https://dl.k8s.io/v1.21.0/kubernetes.tar.gz) | `19bb76a3fa5ce4b9f043b2a3a77c32365ab1fcb902d8dd6678427fb8be8f49f64a5a03dc46aaef9c7dadee05501cf83412eda46f0edacbb8fc1ed0bf5fb79142`"} {"_id":"doc-en-website-764941eb28b7c694e79f23dfa0d39b51887e625be038103264a37374845d37d4","title":"","text":"# Changelog since v1.20.0 ## What's New (Major Themes) ### Deprecation of PodSecurityPolicy"} {"_id":"doc-en-website-b896603915db929c52fee53112b89849fd6369a287c77602c206198f6795c408","title":"","text":"- sigs.k8s.io/kustomize: v2.0.3+incompatible ## Dependencies ### Added - github.com/go-errors/errors: [v1.0.1](https://github.com/go-errors/errors/tree/v1.0.1) - github.com/gobuffalo/here: [v0.6.0](https://github.com/gobuffalo/here/tree/v0.6.0) - github.com/google/shlex: [e7afc7f](https://github.com/google/shlex/tree/e7afc7f) - github.com/markbates/pkger: [v0.17.1](https://github.com/markbates/pkger/tree/v0.17.1) - github.com/moby/spdystream: [v0.2.0](https://github.com/moby/spdystream/tree/v0.2.0) - github.com/monochromegane/go-gitignore: [205db1a](https://github.com/monochromegane/go-gitignore/tree/205db1a) - github.com/niemeyer/pretty: [a10e7ca](https://github.com/niemeyer/pretty/tree/a10e7ca) - github.com/xlab/treeprint: [a009c39](https://github.com/xlab/treeprint/tree/a009c39) - go.starlark.net: 8dd3e2e - golang.org/x/term: 6a3ed07 - sigs.k8s.io/kustomize/api: v0.8.5 - sigs.k8s.io/kustomize/cmd/config: v0.9.7 - sigs.k8s.io/kustomize/kustomize/v4: v4.0.5 - sigs.k8s.io/kustomize/kyaml: v0.10.15 ### Changed - dmitri.shuralyov.com/gpu/mtl: 666a987 → 28db891 - github.com/Azure/go-autorest/autorest: [v0.11.1 →
v0.11.12](https://github.com/Azure/go-autorest/autorest/compare/v0.11.1...v0.11.12) - github.com/NYTimes/gziphandler: [56545f4 → v1.1.1](https://github.com/NYTimes/gziphandler/compare/56545f4...v1.1.1) - github.com/cilium/ebpf: [1c8d4c9 → v0.2.0](https://github.com/cilium/ebpf/compare/1c8d4c9...v0.2.0) - github.com/container-storage-interface/spec: [v1.2.0 → v1.3.0](https://github.com/container-storage-interface/spec/compare/v1.2.0...v1.3.0) - github.com/containerd/console: [v1.0.0 → v1.0.1](https://github.com/containerd/console/compare/v1.0.0...v1.0.1) - github.com/containerd/containerd: [v1.4.1 → v1.4.4](https://github.com/containerd/containerd/compare/v1.4.1...v1.4.4) - github.com/coredns/corefile-migration: [v1.0.10 → v1.0.11](https://github.com/coredns/corefile-migration/compare/v1.0.10...v1.0.11) - github.com/creack/pty: [v1.1.7 → v1.1.11](https://github.com/creack/pty/compare/v1.1.7...v1.1.11) - github.com/docker/docker: [bd33bbf → v20.10.2+incompatible](https://github.com/docker/docker/compare/bd33bbf...v20.10.2) - github.com/go-logr/logr: [v0.2.0 → v0.4.0](https://github.com/go-logr/logr/compare/v0.2.0...v0.4.0) - github.com/go-openapi/spec: [v0.19.3 → v0.19.5](https://github.com/go-openapi/spec/compare/v0.19.3...v0.19.5) - github.com/go-openapi/strfmt: [v0.19.3 → v0.19.5](https://github.com/go-openapi/strfmt/compare/v0.19.3...v0.19.5) - github.com/go-openapi/validate: [v0.19.5 → v0.19.8](https://github.com/go-openapi/validate/compare/v0.19.5...v0.19.8) - github.com/gogo/protobuf: [v1.3.1 → v1.3.2](https://github.com/gogo/protobuf/compare/v1.3.1...v1.3.2) - github.com/golang/mock: [v1.4.1 → v1.4.4](https://github.com/golang/mock/compare/v1.4.1...v1.4.4) - github.com/google/cadvisor: [v0.38.5 → v0.39.0](https://github.com/google/cadvisor/compare/v0.38.5...v0.39.0) - github.com/heketi/heketi: [c2e2a4a → v10.2.0+incompatible](https://github.com/heketi/heketi/compare/c2e2a4a...v10.2.0) - github.com/kisielk/errcheck: [v1.2.0 → 
v1.5.0](https://github.com/kisielk/errcheck/compare/v1.2.0...v1.5.0) - github.com/konsorten/go-windows-terminal-sequences: [v1.0.3 → v1.0.2](https://github.com/konsorten/go-windows-terminal-sequences/compare/v1.0.3...v1.0.2) - github.com/kr/text: [v0.1.0 → v0.2.0](https://github.com/kr/text/compare/v0.1.0...v0.2.0) - github.com/mattn/go-runewidth: [v0.0.2 → v0.0.7](https://github.com/mattn/go-runewidth/compare/v0.0.2...v0.0.7) - github.com/miekg/dns: [v1.1.4 → v1.1.35](https://github.com/miekg/dns/compare/v1.1.4...v1.1.35) - github.com/moby/sys/mountinfo: [v0.1.3 → v0.4.0](https://github.com/moby/sys/mountinfo/compare/v0.1.3...v0.4.0) - github.com/moby/term: [672ec06 → df9cb8a](https://github.com/moby/term/compare/672ec06...df9cb8a) - github.com/mrunalp/fileutils: [abd8a0e → v0.5.0](https://github.com/mrunalp/fileutils/compare/abd8a0e...v0.5.0) - github.com/olekukonko/tablewriter: [a0225b3 → v0.0.4](https://github.com/olekukonko/tablewriter/compare/a0225b3...v0.0.4) - github.com/opencontainers/runc: [v1.0.0-rc92 → v1.0.0-rc93](https://github.com/opencontainers/runc/compare/v1.0.0-rc92...v1.0.0-rc93) - github.com/opencontainers/runtime-spec: [4d89ac9 → e6143ca](https://github.com/opencontainers/runtime-spec/compare/4d89ac9...e6143ca) - github.com/opencontainers/selinux: [v1.6.0 → v1.8.0](https://github.com/opencontainers/selinux/compare/v1.6.0...v1.8.0) - github.com/sergi/go-diff: [v1.0.0 → v1.1.0](https://github.com/sergi/go-diff/compare/v1.0.0...v1.1.0) - github.com/sirupsen/logrus: [v1.6.0 → v1.7.0](https://github.com/sirupsen/logrus/compare/v1.6.0...v1.7.0) - github.com/syndtr/gocapability: [d983527 → 42c35b4](https://github.com/syndtr/gocapability/compare/d983527...42c35b4) - github.com/willf/bitset: [d5bec33 → v1.1.11](https://github.com/willf/bitset/compare/d5bec33...v1.1.11) - github.com/yuin/goldmark: [v1.1.27 → v1.2.1](https://github.com/yuin/goldmark/compare/v1.1.27...v1.2.1) - golang.org/x/crypto: 7f63de1 → 5ea612d - golang.org/x/exp: 6cc2880 → 85be41e - 
golang.org/x/mobile: d2bd2a2 → e6ae53a - golang.org/x/mod: v0.3.0 → ce943fd - golang.org/x/net: 69a7880 → 3d97a24 - golang.org/x/sync: cd5d95a → 67f06af - golang.org/x/sys: 5cba982 → a50acf3 - golang.org/x/time: 3af7569 → f8bda1e - golang.org/x/tools: c1934b7 → v0.1.0 - gopkg.in/check.v1: 41f04d3 → 8fa4692 - gopkg.in/yaml.v2: v2.2.8 → v2.4.0 - gotest.tools/v3: v3.0.2 → v3.0.3 - k8s.io/gengo: 83324d8 → b6c5ce2 - k8s.io/klog/v2: v2.4.0 → v2.8.0 - k8s.io/kube-openapi: d219536 → 591a79e - k8s.io/system-validators: v1.2.0 → v1.4.0 - sigs.k8s.io/apiserver-network-proxy/konnectivity-client: v0.0.14 → v0.0.15 - sigs.k8s.io/structured-merge-diff/v4: v4.0.2 → v4.1.0 ### Removed - github.com/codegangsta/negroni: [v1.0.0](https://github.com/codegangsta/negroni/tree/v1.0.0) - github.com/docker/spdystream: [449fdfc](https://github.com/docker/spdystream/tree/449fdfc) - github.com/golangplus/bytes: [45c989f](https://github.com/golangplus/bytes/tree/45c989f) - github.com/golangplus/fmt: [2a5d6d7](https://github.com/golangplus/fmt/tree/2a5d6d7) - github.com/gorilla/context: [v1.1.1](https://github.com/gorilla/context/tree/v1.1.1) - github.com/kr/pty: [v1.1.5](https://github.com/kr/pty/tree/v1.1.5) - rsc.io/quote/v3: v3.1.0 - rsc.io/sampler: v1.3.0 - sigs.k8s.io/kustomize: v2.0.3+incompatible # v1.21.0-rc.0"} {"_id":"doc-en-website-ddfe668a7310735b9f20bbc2010a05ffa23ec5087c90f78be063d0719ad547fd","title":"","text":" --- title: アドオンのインストール content_type: concept --- {{% thirdparty-content %}} アドオンはKubernetesの機能を拡張するものです。 このページでは、利用可能なアドオンの一部の一覧と、それぞれのアドオンのインストール方法へのリンクを提供します。 ## ネットワークとネットワークポリシー * [ACI](https://www.github.com/noironetworks/aci-containers)は、統合されたコンテナネットワークとネットワークセキュリティをCisco ACIを使用して提供します。 * [Antrea](https://antrea.io/)は、L3またはL4で動作して、Open vSwitchをネットワークデータプレーンとして活用する、Kubernetes向けのネットワークとセキュリティサービスを提供します。 * 
[Calico](https://docs.projectcalico.org/latest/introduction/) is a networking and network policy provider. Calico supports a flexible set of networking options, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio and Envoy) applications at the service mesh layer. * [Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy. * [Cilium](https://github.com/cilium/cilium) is an L3 networking and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation modes are supported, and it can work on top of other CNI plugins. * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) is a plugin that enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, Romana, or Weave. * [Contiv](https://contiv.github.io) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. The Contiv project is fully [open source](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options. * [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads. * [Flannel](https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md) is an overlay network provider that can be used with Kubernetes. * [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a single Kubernetes pod. * [Multus](https://github.com/Intel-Corp/multus-cni) is a multi plugin for multiple network support in Kubernetes, which supports all CNI plugins (e.g. Calico, Cilium, Contiv, Flannel), in addition to SRIOV, DPDK, OVS-DPDK and VPP based workloads on Kubernetes. * [OVN-Kubernetes](https://github.com/ovn-org/ovn-kubernetes/) is a networking provider for Kubernetes based on [OVN (Open Virtual Network)](https://github.com/ovn-org/ovn/), a virtual networking implementation that came out of the Open vSwitch (OVS) project. OVN-Kubernetes provides an overlay based networking implementation for Kubernetes, including an OVS based load balancer and network policy implementation. * [OVN4NFV-K8S-Plugin](https://github.com/opnfv/ovn4nfv-k8s-plugin) is an OVN based CNI controller plugin that provides cloud native based service function chaining (SFC), multiple OVN overlay networks, dynamic subnet creation, dynamic creation of virtual networks, VLAN provider networks and direct provider networks, and is pluggable with other multi-network plugins. * [NSX-T](https://docs.vmware.com/en/VMware-NSX-T/2.0/nsxt_20_ncp_kubernetes.pdf) Container Plug-in (NCP) provides integration between VMware NSX-T and container orchestrators such as Kubernetes, as well as integration between NSX-T and container-based CaaS/PaaS platforms such as Pivotal Container Service (PKS) and OpenShift. * [Nuage](https://github.com/nuagenetworks/nuage-kubernetes/blob/v5.1.1-1/docs/kubernetes-1-installation.rst) is an SDN platform that provides policy-based networking between Kubernetes Pods and non-Kubernetes environments, with visibility and security monitoring. * [Romana](https://romana.io) is a Layer 3 networking solution for pod networks that also supports the [NetworkPolicy API](/docs/concepts/services-networking/network-policies/). Kubeadm add-on installation details are available [here](https://github.com/romana/romana/tree/master/containerize). * [Weave Net](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/) provides networking and network policy, will carry on working on both sides of a network partition, and does not require an external database. ## Service Discovery * [CoreDNS](https://coredns.io) is a flexible, extensible DNS server which can be [installed](https://github.com/coredns/deployment/tree/master/kubernetes) as the in-cluster DNS for pods. ## Visualization & Control * [Dashboard](https://github.com/kubernetes/dashboard#kubernetes-dashboard) is a web interface that provides a dashboard for Kubernetes. * [Weave Scope](https://www.weave.works/documentation/scope-latest-installing/#k8s) is a tool for graphically visualizing your containers, pods, services and more. Use it in conjunction with a [Weave Cloud account](https://cloud.weave.works/) or host the UI yourself. ## Infrastructure * [KubeVirt](https://kubevirt.io/user-guide/#/installation/installation) is an add-on to run virtual machines on Kubernetes, usually on bare-metal clusters. ## Legacy Add-ons Several add-ons are documented in the deprecated [cluster/addons](https://git.k8s.io/kubernetes/cluster/addons) directory.
Well-maintained add-ons should be linked here. PRs are welcome. "} {"_id":"doc-en-website-5460624cfdd62ca0cfbec66f1621ccdd91d12c2ecc038a61ad7df1d156d40e89","title":"","text":"The basic Kubernetes objects include: * [Pod](/es/docs/concepts/workloads/pods/pod/) * [Service](/docs/concepts/services-networking/service/) * [Volume](/docs/concepts/storage/volumes/) * [Namespace](/es/docs/concepts/overview/working-with-objects/namespaces/) In addition, Kubernetes contains higher-level abstractions called Controllers. Controllers build on the basic objects and provide additional functionality on top of them. They include: * [ReplicaSet](/es/docs/concepts/workloads/controllers/replicaset/) * [Deployment](/es/docs/concepts/workloads/controllers/deployment/) * [StatefulSet](/es/docs/concepts/workloads/controllers/statefulset/) * [DaemonSet](/es/docs/concepts/workloads/controllers/daemonset/) * [Job](/es/docs/concepts/workloads/controllers/jobs-run-to-completion/) ## The Kubernetes Control Plane"} {"_id":"doc-en-website-bef874f043f7e3d99a251e7d44223c6a0fb2d615cc82ca13d1aa753073f91d25","title":"","text":"#### Object Metadata * [Annotations](/es/docs/concepts/overview/working-with-objects/annotations/) ## {{% heading \"whatsnext\" %}} If you are interested in writing a concept page, see [Using Page Templates](/docs/home/contribute/page-templates/) for information about the concept page type and the concept template. If you want to start contributing to the Kubernetes documentation, visit the [Start contributing](/es/docs/contribute/start/) page. "} {"_id":"doc-en-website-40e0c1ee42717b2deca8662aa195e5cf3dabcb13c5d1b71df4f36c8aa81bfd84","title":"","text":"| `ProbeTerminationGracePeriod` | `false` | Alpha | 1.21 | | | `ProcMountType` | `false` | Alpha | 1.12 | | | `QOSReserved` | `false` | Alpha | 1.11 | | | `RemainingItemCount` | `false` | Alpha | 1.15 | 1.15 | | `RemainingItemCount` | `true` | Beta | 1.16 | | | `RemoveSelfLink` | `false` | Alpha | 1.16 | 1.19 | | `RemoveSelfLink` | `true` | Beta | 1.20 | | | `RotateKubeletServerCertificate` | `false` | Alpha | 1.7 | 1.11 |"} {"_id":"doc-en-website-d2d5036e3346926c5d9caaf681669a9beb80b9d2cf98a7effa9d31b308cba202","title":"","text":"} ``` Note that the `resourceVersion` of the list remains constant across each request, indicating the server is showing us a consistent snapshot of the pods. Pods that are created, updated, or deleted after version `10245` would not be shown unless the user makes a list request without the `continue` token. This allows clients to break large requests into smaller chunks and then perform a watch operation on the full set without missing any updates. `remainingItemCount` is the number of subsequent items in the list which are not included in this list response.
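As a concrete sketch of the accounting (hypothetical response body, plain shell text processing), a client can estimate the collection size as the items received so far plus `remainingItemCount`:

```shell
# Hypothetical single-chunk list response (fields abbreviated).
cat > /tmp/list-response.json <<'EOF'
{"metadata": {"resourceVersion": "10245", "continue": "TOKEN", "remainingItemCount": 7}, "items": [{"name": "pod-0"}, {"name": "pod-1"}, {"name": "pod-2"}]}
EOF

# Items in this chunk: count the "name" keys; remaining: extract the field.
received=$(grep -o '"name"' /tmp/list-response.json | wc -l)
remaining=$(sed -n 's/.*"remainingItemCount": \([0-9]*\).*/\1/p' /tmp/list-response.json)
echo "estimated collection size: $((received + remaining))"
```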
If the list request contained label or field selectors, then the number of remaining items is unknown and the API server does not include a `remainingItemCount` field in its response. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and the API server does not include a `remainingItemCount` field in its response. The intended use of the `remainingItemCount` is estimating the size of a collection. ## Receiving resources as Tables"} {"_id":"doc-en-website-0d873fdd81ef355ac4ff1db02d5466e5f5eab210d62f5cc1400b130d6b61b4a2","title":"","text":" --- title: Enabling Topology Aware Hints content_type: task min-kubernetes-server-version: 1.21 --- {{< feature-state for_k8s_version=\"v1.21\" state=\"alpha\" >}} _Topology Aware Hints_ enable topology aware routing with topology information included in {{< glossary_tooltip text=\"EndpointSlices\" term_id=\"endpoint-slice\" >}}. This approach tries to keep traffic close to the zone it originated from; it can reduce costs or improve network performance. ## {{% heading \"prerequisites\" %}} {{< include \"task-tutorial-prereqs.md\" >}} {{< version-check >}} The following prerequisites are needed in order to enable topology aware hints: * Configure {{< glossary_tooltip text=\"kube-proxy\" term_id=\"kube-proxy\" >}} to run in iptables or IPVS mode * Ensure that you have not disabled EndpointSlices ## Enable Topology Aware Hints {#enable-topology-aware-hints} To enable service topology hints, enable the `TopologyAwareHints` [feature gate](/zh/docs/reference/command-line-tools-reference/feature-gates/) for the kube-apiserver, kube-controller-manager, and kube-proxy: ``` --feature-gates=\"TopologyAwareHints=true\" ``` ## {{% heading \"whatsnext\" %}} * Read [Topology Aware Hints](/zh/docs/concepts/services-networking/topology-aware-hints) for Services * Read [Connecting Applications with Services](/zh/docs/concepts/services-networking/connect-applications-service/) "} {"_id":"doc-en-website-e63bae8f25789be964cb30a732487ce4dd389abc7903fc94fa33b298795b2685","title":"","text":"/docs/tasks/administer-cluster/quota-memory-cpu-namespace/ /docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/ 301 /docs/tasks/administer-cluster/quota-pod-namespace/ /docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/ 301
/docs/tasks/administer-cluster/reserve-compute-resources/out-of-resource.md /docs/tasks/administer-cluster/out-of-resource/ 301 /docs/tasks/administer-cluster/out-of-resource/ /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301 /docs/tasks/administer-cluster/romana-network-policy/ /docs/tasks/administer-cluster/network-policy-provider/romana-network-policy/ 301 /docs/tasks/administer-cluster/running-cloud-controller.md /docs/tasks/administer-cluster/running-cloud-controller/ 301 /docs/tasks/administer-cluster/share-configuration/ /docs/tasks/access-application-cluster/configure-access-multiple-clusters/ 301
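Each redirect entry above follows the `source target status` format used by the site's redirects file: a request for the source path is answered with the given HTTP status and the target path. For example (hypothetical paths):

```
/docs/old-page/ /docs/new-page/ 301
```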

Attend KubeCon North America on October 11-15, 2021



Revisit KubeCon EU 2021
"} {"_id":"doc-en-website-998d6deb84f326996f851002d099e2e2fac0b4a395f2de9e143fa829b5fcdcfe","title":"","text":"{{< blocks/kubernetes-features >}} {{< blocks/case-studies >}} "} {"_id":"doc-en-website-8c96321fa5e21b216df0c3e7eb39d1ada2c710dd218f9e4462e57b46398bda51","title":"","text":"`ContainerDevices` do expose the topology information declaring to which NUMA cells the device is affine. The NUMA cells are identified using an opaque integer ID, whose value is consistent with what device plugins report [when they register themselves to the kubelet](/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/#device-plugin-integration-with-the-topology-manager). The gRPC service is served over a unix socket at `/var/lib/kubelet/pod-resources/kubelet.sock`."} {"_id":"doc-en-website-438c03a3307c771336dfc27b5a4ecc3934a97371df721043bba231ec16311ee0","title":"","text":"of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes. Your cluster administrator or hosting provider should have documented what level of voluntary disruptions, if any, to expect. Certain configuration options, such as [using PriorityClasses](/docs/concepts/configuration/pod-priority-preemption/) in your pod spec can also cause voluntary (and involuntary) disruptions."} {"_id":"doc-en-website-93aa2096e30ef231c3c2b7932ca25de0400703e97d1249a3550e4dccba73a9c3","title":"","text":"resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node. Multiple different schedulers may be used within a cluster; kube-scheduler is the reference implementation.
See [scheduling](/docs/concepts/scheduling-eviction/) for more information about scheduling and the kube-scheduler component. ``` "} {"_id":"doc-en-website-c046cbbacb698e1b0805a0ac20714d566a9164f5aa95c9e69c87181ad7004924","title":"","text":"## {{% heading \"whatsnext\" %}} * Read the [kube-scheduler reference](/docs/reference/command-line-tools-reference/kube-scheduler/) * Learn about [scheduling](/docs/concepts/scheduling-eviction/kube-scheduler/) * Read the [kube-scheduler configuration (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/) reference"} {"_id":"doc-en-website-8863868899ed872681e921ea64756b51ac1129219fcf380857c5736ece08079c","title":"","text":"Generates keys and certificate signing requests (CSRs) for all the certificates required to run the control plane. This command also generates partial kubeconfig files with private key data in the \"users > user > client-key-data\" field, and for each kubeconfig file an accompanying \".csr\" file is created. This command is designed for use in [Kubeadm External CA Mode](/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#external-ca-mode). It generates CSRs which you can then submit to your external certificate authority for signing.
The PEM encoded signed certificates should then be saved alongside the key files, using \".crt\" as the file extension, or in the case of kubeconfig files, the PEM encoded signed certificate should be base64 encoded and added to the kubeconfig file in the \"users > user > client-certificate-data\" field."} {"_id":"doc-en-website-b1087e90940234f257b7cdd2c9c7ccdcd0c9f58344a34fa69fa7884c83ad33b8","title":"","text":"PodSecurityPolicy in the **policy/v1beta1** API version will no longer be served in v1.25, and the PodSecurityPolicy admission controller will be removed. PodSecurityPolicy replacements are still under discussion, but current use can be migrated to [3rd-party admission webhooks](/docs/reference/access-authn-authz/extensible-admission-controllers/) now. #### RuntimeClass {#runtimeclass-v125}"} {"_id":"doc-en-website-64e46e1e20eb719ced809890e741bad48cebec911b7a3bccac8c65962873cfac","title":"","text":"* start and configure additional etcd instance * configure the {{< glossary_tooltip term_id=\"kube-apiserver\" text=\"API server\" >}} to use it for storing events See [Operating etcd clusters for Kubernetes](/docs/tasks/administer-cluster/configure-upgrade-etcd/) and [Set up a High Availability etcd cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/) for details on configuring and managing etcd for a large cluster. 
## Addon resources"} {"_id":"doc-en-website-4844f863c4582babe80f22270c105b63b1cb5bec00aa512cadd40b8af55ad181","title":"","text":"security mechanisms to make sure that users and workloads can get access to the resources they need, while keeping workloads, and the cluster itself, secure. You can set limits on the resources that users and workloads can access by managing [policies](/docs/concepts/policy/) and [container resources](/docs/concepts/configuration/manage-resources-containers/). Before building a Kubernetes production environment on your own, consider"} {"_id":"doc-en-website-b6949261a6a482ae1538ef890c8cee4024a234972d37b803b349c14749cf241c","title":"","text":"deployment methods. - Configure user management by determining your [Authentication](/docs/reference/access-authn-authz/authentication/) and [Authorization](/docs/reference/access-authn-authz/authorization/) methods. - Prepare for application workloads by setting up [resource limits](/docs/tasks/administer-cluster/manage-resources/), [DNS autoscaling](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/) and [service accounts](/docs/reference/access-authn-authz/service-accounts-admin/)."} {"_id":"doc-en-website-3eafa54ac42d36285a205371ed8c5f27ce6912bc5d561edb165edc9ecbd86d20","title":"","text":"certificates by requesting them from the `certificates.k8s.io` API. One known limitation is that the CSRs (Certificate Signing Requests) for these certificates cannot be automatically approved by the default signer in the kube-controller-manager - [`kubernetes.io/kubelet-serving`](/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers). This will require action from the user or a third party controller. These CSRs can be viewed using:"} {"_id":"doc-en-website-ea1184b9b2da6ecc8baeffd05728245a596cbfc790d089dbde9be30d4220fe45","title":"","text":" ## `LoggingConfiguration` {#LoggingConfiguration} **Appears in:** - [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) LoggingConfiguration contains logging options. Refer to [Logs Options](https://github.com/kubernetes/component-base/blob/master/logs/options.go) for more information.
| String | API Object |
| --- | --- |
| "pods" | Pod |
| "services" | Service |
| "replicationcontrollers" | ReplicationController |
| "resourcequotas" | ResourceQuota |
| "secrets" | Secret |
| Field | Description |
| --- | --- |
| `format` [Required]<br>string | Format Flag specifies the structure of log messages. The default value of format is `text`. |
| `sanitization` [Required]<br>bool | [Experimental] When enabled prevents logging of fields tagged as sensitive (passwords, keys, tokens). Runtime log sanitization may introduce significant computation overhead and therefore should not be enabled in production. |
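As an illustrative sketch (not taken from the reference itself), these logging options would sit in a kubelet configuration file roughly like this; the field placement follows the table above, and the values are assumptions:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  # Emit structured log messages; the default format is "text".
  format: json
  # Experimental: redacts fields tagged as sensitive; avoid in production.
  sanitization: false
```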
## `KubeletConfiguration` {#kubelet-config-k8s-io-v1beta1-KubeletConfiguration}"} {"_id":"doc-en-website-e8e3522327315019193b37d18e9cb93ad5c924d6c670a800165f5c60f3955157","title":"","text":"status to master if node status does not change. Kubelet will ignore this frequency and post node status immediately if any change is detected. It is only used when node lease feature is enabled. nodeStatusReportFrequency's default value is 5m. But if nodeStatusUpdateFrequency is set explicitly, nodeStatusReportFrequency's default value will be set to nodeStatusUpdateFrequency for backward compatibility. Default: \"5m\" "} {"_id":"doc-en-website-697e683738e5921e39e32d87a62f065eaf6708a0597c27d02cca7238f6633720","title":"","text":"Requires the CPUManager feature gate to be enabled. Dynamic Kubelet Config (beta): This field should not be updated without a full node reboot. It is safest to keep this value the same as the local config. Default: \"None\" "} {"_id":"doc-en-website-8bf1c716692123b9f857fdaab1233e0ca7fcfb6a09ab78a96f2cbc1f1aecdaf5","title":"","text":" memoryManagerPolicy
string MemoryManagerPolicy is the name of the policy to use by memory manager. Requires the MemoryManager feature gate to be enabled. Dynamic Kubelet Config (beta): This field should not be updated without a full node reboot. It is safest to keep this value the same as the local config. Default: \"none\"
topologyManagerPolicy
string "} {"_id":"doc-en-website-fde02185e6d5557676fbc3d63d453f1ebc55b80d00db59124a6f6122ff55b15a","title":"","text":" ShutdownGracePeriod specifies the total duration that the node should delay the shutdown and total grace period for pod termination during a node shutdown. Default: \"0s\" "} {"_id":"doc-en-website-f3c5e466e737f2fca0cd9fa318216d9db4a44f337af23d01f8cca7a59c707733","title":"","text":" ShutdownGracePeriodCriticalPods specifies the duration used to terminate critical pods during a node shutdown. This should be less than ShutdownGracePeriod. For example, if ShutdownGracePeriod=30s, and ShutdownGracePeriodCriticalPods=10s, during a node shutdown the first 20 seconds would be reserved for gracefully terminating normal pods, and the last 10 seconds would be reserved for terminating critical pods. Default: \"0s\" reservedMemory
[]MemoryReservation ReservedMemory specifies a comma-separated list of memory reservations for NUMA nodes. The parameter makes sense only in the context of the memory manager feature. The memory manager will not allocate reserved memory for container workloads. For example, if you have a NUMA0 with 10Gi of memory and the ReservedMemory was specified to reserve 1Gi of memory at NUMA0, the memory manager will assume that only 9Gi is available for allocation. You can specify a different amount of NUMA node and memory types. You can omit this parameter at all, but you should be aware that the amount of reserved memory from all NUMA nodes should be equal to the amount of memory specified by the node allocatable features(https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable). If at least one node allocatable parameter has a non-zero value, you will need to specify at least one NUMA node. Also, avoid specifying: 1. Duplicates, the same NUMA node, and memory type, but with a different value. 2. zero limits for any memory type. 3. NUMAs nodes IDs that do not exist under the machine. 4. memory types except for memory and hugepages- Default: nil enableProfilingHandler
bool enableProfilingHandler enables profiling via web interface host:port/debug/pprof/ Default: true enableDebugFlagsHandler
bool enableDebugFlagsHandler enables flags endpoint via web interface host:port/debug/flags/v Default: true
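A minimal sketch combining the shutdown and handler fields described above into one KubeletConfiguration (values are illustrative, taken from the examples in the field descriptions; not a recommended production setting):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total shutdown delay; the last 10s of it is reserved for critical pods.
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
# Debug endpoints served by the kubelet (both default to true).
enableProfilingHandler: true
enableDebugFlagsHandler: true
```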
"} {"_id":"doc-en-website-c6f1c5080284e9093897e2cf755c6e111aca6621b21ddf6c5affb7b0032cbb65","title":"","text":" ## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy} (Alias of `string`) **Appears in:** - [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) ResourceChangeDetectionStrategy denotes a mode in which internal managers (secret, configmap) are discovering object changes. ## `MemoryReservation` {#kubelet-config-k8s-io-v1beta1-MemoryReservation} "} {"_id":"doc-en-website-9ee4f8b4b2ad342204ce8a8c3ec9708268cdfcdf95609615615d7706d078c66c","title":"","text":"- [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) MemoryReservation specifies the memory reservation of different types for each NUMA node. "} {"_id":"doc-en-website-caff000d492060b1deda577c5d574802d73d4e5d7aebb6245f794eb10a3bd14b","title":"","text":" 
| Field | Description |
| --- | --- |
| `numaNode` [Required]<br>int32 | No description provided. |
| `limits` [Required]<br>core/v1.ResourceList | No description provided. |
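For illustration, a `reservedMemory` entry built from the MemoryReservation fields above might look like this (a sketch matching the 1Gi-on-NUMA0 example given in the `reservedMemory` field description):

```yaml
reservedMemory:
- numaNode: 0
  limits:
    memory: 1Gi
```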
## `ResourceChangeDetectionStrategy` {#kubelet-config-k8s-io-v1beta1-ResourceChangeDetectionStrategy} (Alias of `string`) **Appears in:** - [KubeletConfiguration](#kubelet-config-k8s-io-v1beta1-KubeletConfiguration) ResourceChangeDetectionStrategy denotes a mode in which internal managers (secret, configmap) are discovering object changes. "} {"_id":"doc-en-website-0ab50b59629e83a786bf720b08839106d33ef3866c9baffc5f2c0260aa94f401","title":"","text":"{{ $show := cond (or (lt $ulNr $ulShow) $activePath (and (not $shouldDelayActive) (eq $s.Parent $p.Parent)) (and (not $shouldDelayActive) (eq $s.Parent $p)) (and (not $shouldDelayActive) ($p.IsDescendant $s.Parent))) true false -}} {{ $mid := printf \"m-%s\" ($s.RelPermalink | anchorize) -}} {{ $pages_tmp := where (union $s.Pages $s.Sections).ByWeight \".Params.toc_hide\" \"!=\" true -}} {{/* We get untranslated subpages below to make sure we build all levels of the sidenav in localizationed docs sets */}} {{ with site.Params.language_alternatives -}} {{ range . }} {{ with (where $.section.Translations \".Lang\" . ) -}} {{ $p := index . 0 -}} {{ $pages_tmp = where ( $pages_tmp | lang.Merge (union $p.Pages $p.Sections)) \".Params.toc_hide\" \"!=\" true -}} {{ end -}} {{ end -}} {{ end -}} {{ $pages := $pages_tmp | first $sidebarMenuTruncate -}} {{ $withChild := gt (len $pages) 0 -}} {{ $manualLink := cond (isset $s.Params \"manuallink\") $s.Params.manualLink ( cond (isset $s.Params \"manuallinkrelref\") (relref $s $s.Params.manualLinkRelref) $s.RelPermalink) -}}"} {"_id":"doc-en-website-3e822568e9a8e83b3749f53ad9b4093deaa78dedc2291d0644cff066f359f4e9","title":"","text":" --- reviewers: - rickypai - thockin title: Adding entries to Pod /etc/hosts with HostAliases content_type: concept weight: 60 min-kubernetes-server-version: 1.7 --- Adding entries to a Pod's `/etc/hosts` file provides Pod-level override of hostname resolution when DNS and other options are not applicable. 
You can add these custom entries with the HostAliases field in PodSpec. Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart. ## Default hosts file content Start an Nginx Pod which is assigned a Pod IP: ```shell kubectl run nginx --image nginx ``` ``` pod/nginx created ``` Examine a Pod IP: ```shell kubectl get pods --output=wide ``` ``` NAME READY STATUS RESTARTS AGE IP NODE nginx 1/1 Running 0 13s 10.200.0.4 worker0 ``` The hosts file content would look like this: ```shell kubectl exec nginx -- cat /etc/hosts ``` ``` # Kubernetes-managed hosts file. 127.0.0.1\tlocalhost ::1\tlocalhost ip6-localhost ip6-loopback fe00::0\tip6-localnet fe00::0\tip6-mcastprefix fe00::1\tip6-allnodes fe00::2\tip6-allrouters 10.200.0.4\tnginx ``` By default, the `hosts` file only includes IPv4 and IPv6 boilerplates like `localhost` and its own hostname. ## Adding additional entries with hostAliases In addition to the default boilerplate, you can add additional entries to the `hosts` file. For example: to resolve `foo.local`, `bar.local` to `127.0.0.1` and `foo.remote`, `bar.remote` to `10.1.2.3`, you can configure HostAliases for a Pod under `.spec.hostAliases`: {{< codenew file=\"service/networking/hostaliases-pod.yaml\" >}} You can start a Pod with that configuration by running: ```shell kubectl apply -f https://k8s.io/examples/service/networking/hostaliases-pod.yaml ``` ``` pod/hostaliases-pod created ``` Examine a Pod's details to see its IPv4 address and its status: ```shell kubectl get pod --output=wide ``` ``` NAME READY STATUS RESTARTS AGE IP NODE hostaliases-pod 0/1 Completed 0 6s 10.200.0.5 worker0 ``` The `hosts` file content looks like this: ```shell kubectl logs hostaliases-pod ``` ``` # Kubernetes-managed hosts file. 
127.0.0.1\tlocalhost ::1\tlocalhost ip6-localhost ip6-loopback fe00::0\tip6-localnet fe00::0\tip6-mcastprefix fe00::1\tip6-allnodes fe00::2\tip6-allrouters 10.200.0.5\thostaliases-pod # Entries added by HostAliases. 127.0.0.1\tfoo.local\tbar.local 10.1.2.3\tfoo.remote\tbar.remote ``` with the additional entries specified at the bottom. ## Why does the kubelet manage the hosts file? {#why-does-kubelet-manage-the-hosts-file} The kubelet [manages](https://github.com/kubernetes/kubernetes/issues/14633) the `hosts` file for each container of the Pod to prevent Docker from [modifying](https://github.com/moby/moby/issues/17190) the file after the containers have already been started. {{< caution >}} Avoid making manual changes to the hosts file inside a container. If you make manual changes to the hosts file, those changes are lost when the container exits. {{< /caution >}} "} {"_id":"doc-en-website-ab66c7c4d876d7e8410ebf8471cba0f88dd9b3d17627eb9183608b001825978a","title":"","text":" --- reviewers: - rickypai - thockin title: Adding entries to Pod /etc/hosts with HostAliases content_type: task weight: 60 min-kubernetes-server-version: 1.7 --- Adding entries to a Pod's `/etc/hosts` file provides Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec. Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten on during Pod creation/restart. ## Default hosts file content Start an Nginx Pod which is assigned a Pod IP: ```shell kubectl run nginx --image nginx ``` ``` pod/nginx created ``` Examine a Pod IP: ```shell kubectl get pods --output=wide ``` ``` NAME READY STATUS RESTARTS AGE IP NODE nginx 1/1 Running 0 13s 10.200.0.4 worker0 ``` The hosts file content would look like this: ```shell kubectl exec nginx -- cat /etc/hosts ``` ``` # Kubernetes-managed hosts file. 
127.0.0.1\tlocalhost ::1\tlocalhost ip6-localhost ip6-loopback fe00::0\tip6-localnet fe00::0\tip6-mcastprefix fe00::1\tip6-allnodes fe00::2\tip6-allrouters 10.200.0.4\tnginx ``` By default, the `hosts` file only includes IPv4 and IPv6 boilerplates like `localhost` and its own hostname. ## Adding additional entries with hostAliases In addition to the default boilerplate, you can add additional entries to the `hosts` file. For example: to resolve `foo.local`, `bar.local` to `127.0.0.1` and `foo.remote`, `bar.remote` to `10.1.2.3`, you can configure HostAliases for a Pod under `.spec.hostAliases`: {{< codenew file=\"service/networking/hostaliases-pod.yaml\" >}} You can start a Pod with that configuration by running: ```shell kubectl apply -f https://k8s.io/examples/service/networking/hostaliases-pod.yaml ``` ``` pod/hostaliases-pod created ``` Examine a Pod's details to see its IPv4 address and its status: ```shell kubectl get pod --output=wide ``` ``` NAME READY STATUS RESTARTS AGE IP NODE hostaliases-pod 0/1 Completed 0 6s 10.200.0.5 worker0 ``` The `hosts` file content looks like this: ```shell kubectl logs hostaliases-pod ``` ``` # Kubernetes-managed hosts file. 127.0.0.1\tlocalhost ::1\tlocalhost ip6-localhost ip6-loopback fe00::0\tip6-localnet fe00::0\tip6-mcastprefix fe00::1\tip6-allnodes fe00::2\tip6-allrouters 10.200.0.5\thostaliases-pod # Entries added by HostAliases. 127.0.0.1\tfoo.local\tbar.local 10.1.2.3\tfoo.remote\tbar.remote ``` with the additional entries specified at the bottom. ## Why does the kubelet manage the hosts file? {#why-does-kubelet-manage-the-hosts-file} The kubelet [manages](https://github.com/kubernetes/kubernetes/issues/14633) the `hosts` file for each container of the Pod to prevent Docker from [modifying](https://github.com/moby/moby/issues/17190) the file after the containers have already been started. {{< caution >}} Avoid making manual changes to the hosts file inside a container. 
If you make manual changes to the hosts file, those changes are lost when the container exits. {{< /caution >}} "} {"_id":"doc-en-website-05c82aeb721fc079c480dc4a89ae94bb5b1777f4b5b39dc14ab4c3f0cfad16fb","title":"","text":"/docs/concepts/scheduling-eviction/eviction-policy/ /docs/concepts/scheduling-eviction/node-pressure-eviction/ 301 /docs/concepts/service-catalog/ /docs/concepts/extend-kubernetes/service-catalog/ 301 /docs/concepts/services-networking/networkpolicies/ /docs/concepts/services-networking/network-policies/ 301 /docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ /docs/tasks/network/customize-hosts-file-for-pods/ 301 /docs/concepts/storage/etcd-store-api-object/ /docs/tasks/administer-cluster/configure-upgrade-etcd/ 301 /docs/concepts/storage/volumes/emptyDirapiVersion/ /docs/concepts/storage/volumes/#emptydir/ 301 /docs/concepts/tools/kubectl/object-management-overview/ /docs/concepts/overview/object-management-kubectl/overview/ 301"} {"_id":"doc-en-website-59105e6c02fe8f757076da3108a5aa5a73afadfc058c7e40788961da9fe18965","title":"","text":"* [AppArmor](/docs/tutorials/clusters/apparmor/): Use program profiles to restrict the capabilities of individual programs. * [Seccomp](/docs/tutorials/clusters/seccomp/): Filter a process's system calls. * AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the [`no_new_privs`](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt) flag gets set on the container process. 
AllowPrivilegeEscalation is always true when the container is: 1) run as Privileged OR 2) has `CAP_SYS_ADMIN`."} {"_id":"doc-en-website-74de8f6086249796fa9d4647eba8df342d55e43f7c77129583f7662578db32b5","title":"","text":"### Stable Storage For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod receives one PersistentVolumeClaim. In the nginx example above, each Pod receives a single PersistentVolume with a StorageClass of `my-storage-class` and 1 GiB of provisioned storage. If no StorageClass is specified, then the default StorageClass will be used. When a Pod is (re)scheduled onto a node, its `volumeMounts` mount the PersistentVolumes associated with its PersistentVolumeClaims. Note that the PersistentVolumes associated with the"} {"_id":"doc-en-website-bd51cdea3dd4aba2fce3a7b6eca2970f79fab28f2ee67d1144ad43e7fb1fa36d","title":"","text":"{{/* We cache this partial for bigger sites and set the active class client side. */}} {{ $sidebarCacheLimit := cond (isset .Site.Params.ui \"sidebar_cache_limit\") .Site.Params.ui.sidebar_cache_limit 2000 -}} {{ $shouldDelayActive := ge (len .Site.Pages) $sidebarCacheLimit -}}
{{ if not .Site.Params.ui.sidebar_search_disable -}} {{ else -}}
{{ end -}}
{{ define \"section-tree-nav-section\" -}} {{ $s := .section -}} {{ $p := .page -}} {{ $shouldDelayActive := .shouldDelayActive -}} {{ $sidebarMenuTruncate := .sidebarMenuTruncate -}} {{ $treeRoot := cond (eq .ulNr 0) true false -}} {{ $ulNr := .ulNr -}} {{ $ulShow := .ulShow -}} {{ $active := and (not $shouldDelayActive) (eq $s $p) -}} {{ $activePath := and (not $shouldDelayActive) ($p.IsDescendant $s) -}} {{ $show := cond (or (lt $ulNr $ulShow) $activePath (and (not $shouldDelayActive) (eq $s.Parent $p.Parent)) (and (not $shouldDelayActive) (eq $s.Parent $p)) (and (not $shouldDelayActive) ($p.IsDescendant $s.Parent))) true false -}} {{ $mid := printf \"m-%s\" ($s.RelPermalink | anchorize) -}} {{ $pages_tmp := where (union $s.Pages $s.Sections).ByWeight \".Params.toc_hide\" \"!=\" true -}} {{ $pages := $pages_tmp | first $sidebarMenuTruncate -}} {{ $withChild := gt (len $pages) 0 -}} {{ $manualLink := cond (isset $s.Params \"manuallink\") $s.Params.manualLink ( cond (isset $s.Params \"manuallinkrelref\") (relref $s $s.Params.manualLinkRelref) $s.RelPermalink) -}} {{ $manualLinkTitle := cond (isset $s.Params \"manuallinktitle\") $s.Params.manualLinkTitle $s.Title -}}
  • {{ if (and $p.Site.Params.ui.sidebar_menu_foldable (ge $ulNr 1)) }} {{ if (and $p.Site.Params.ui.sidebar_menu_foldable (ge $ulNr 1)) -}} {{ else }} {{ else -}} {{ if not $treeRoot }} {{ with $s.Params.Icon}}{{ end }}{{ $s.LinkTitle }} {{ end }} {{ end }} {{if $withChild }} {{ $ulNr := add $ulNr 1 }} {{ end -}} {{ end -}} {{ if $withChild -}} {{ $ulNr := add $ulNr 1 -}} {{ end }} {{- end }}
  • {{ end }} {{- end }} "} {"_id":"doc-en-website-f94e8a545614ac30639056bc064381535dabf760f531addad076d83a50274bd9","title":"","text":" This page shows how to install the `kubeadm` toolbox. For information on how to create a cluster with kubeadm once you have performed this installation process, see the [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) page. "} {"_id":"doc-en-website-521fb601a736f17ee6ed72259e353d568f5fd81c35d4df6d31ba3f21d4abc48a","title":"","text":" {{< feature-state for_k8s_version=\"v1.22\" state=\"stable\" >}} ## Introduction"} {"_id":"doc-en-website-f3bf20362b2d64f007b254039a316e2b090f4726ed9798ae3320d92eddb83157","title":"","text":"and direct the network traffic to the cluster nodes: 1. Make sure that you have the [Service Account Token Volume Projection](/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection) feature enabled in your cluster. It is enabled by default since Kubernetes v1.20. 1. Create an egress configuration file such as `admin/konnectivity/egress-selector-configuration.yaml`. 1. 
Set the `--egress-selector-config-file` flag of the API Server to the path of your API Server egress configuration file."} {"_id":"doc-en-website-31c333d1e54d7ec1a15d9124963bbe093d87f1c2842d4d8eb3ec115604345d20","title":"","text":"} body { display: grid; grid-column-template: auto; header + .td-outer { min-height: 50vh; height: auto;"} {"_id":"doc-en-website-31ae2c34c1f1c1afab743a1066a08cabb04f37305f355f6eff625267408c16d2","title":"","text":"There are two types of provisioners for vSphere storage classes: - [CSI provisioner](#vsphere-provisioner-csi): `csi.vsphere.vmware.com` - [vCP provisioner](#vcp-provisioner): `kubernetes.io/vsphere-volume` In-tree provisioners are [deprecated](/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi). For more information on the CSI provisioner, see [Kubernetes vSphere CSI Driver](https://vsphere-csi-driver.sigs.k8s.io/) and [vSphereVolume CSI migration](/docs/concepts/storage/volumes/#csi-migration-5). #### CSI Provisioner {#vsphere-provisioner-csi} The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters. For an example, refer to the [vSphere CSI repository](https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/example/vanilla-k8s-RWM-filesystem-volumes/example-sc.yaml). #### vCP Provisioner"} {"_id":"doc-en-website-0a4bd11776a136b7047609411e8bd56e31a1c8af0ddb34b6e775eb928a7b83f9","title":"","text":"Used on: Namespaces 
The Kubernetes API server (part of the {{< glossary_tooltip text=\"control plane\" term_id=\"control-plane\" >}}) sets this label on all namespaces. The label value is set to the name of the namespace. You can't change this label's value. This is useful if you want to target a specific namespace with a label {{< glossary_tooltip text=\"selector\" term_id=\"selector\" >}}."} {"_id":"doc-en-website-a6868900a92eff30d6d3283a192aca2c61c170fb4b48d3329b128aaa82557b70","title":"","text":"- Using PowerShell to automate the verification using the `-eq` operator to get a `True` or `False` result: ```powershell $(Get-FileHash -Algorithm SHA256 .\kubectl.exe).Hash -eq $(Get-Content .\kubectl.exe.sha256) ``` 1. Append or prepend the `kubectl` binary folder to your `PATH` environment variable."} {"_id":"doc-en-website-56072c47ed046da1d1bd91e2393bba475da7c46f7758d5f818fff68a3193f105","title":"","text":"## Accessing services running on the cluster The previous section describes how to connect to the Kubernetes API server. For information about connecting to other services running on a Kubernetes cluster, see [Access Cluster Services.](/docs/tasks/administer-cluster/access-cluster-services/) ## Requesting redirects"} {"_id":"doc-en-website-3f82c93eaec524eed6a9a3776c4ca7705e5c154c3dcdb5a78eda9a9f1d54c03c","title":"","text":"[downward API](/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). User defined environment variables from the Pod definition are also available to the Container, 
as are any environment variables specified statically in the container image. ### Cluster information A list of all services that were running when a Container was created is available to that Container as environment variables. This list is limited to services within the same namespace as the new Container's Pod and Kubernetes control plane services. Those environment variables match the syntax of Docker links. For a service named *foo* that maps to a Container named *bar*, the following variables are defined:"} {"_id":"doc-en-website-d44d9cfee23a913d6281353c949c552241fcbb77ba7fe92eb9a0645caf83a805","title":"","text":"what that means, check out the blog post [Don't Panic: Kubernetes and Docker](/blog/2020/12/02/dont-panic-kubernetes-and-docker/). You can also read [Check whether dockershim deprecation affects you](/docs/tasks/administer-cluster/migrating-from-dockershim/check-if-dockershim-deprecation-affects-you/) to find out whether it affects your cluster. ### Why is dockershim being deprecated? Maintaining dockershim has become a heavy burden on the Kubernetes maintainers."} {"_id":"doc-en-website-4c92ae1f837c6cb6a8944f214c13b2ae1dd68c5991f59915e5124cde054138c8","title":"","text":"are being moved to the [Tasks](/docs/tasks/), [Tutorials](/docs/tutorials/), and [Concepts](/docs/concepts) sections. 
The content in this topic has moved to: --> Kubernetes文档中[使用手册](/zh/docs/user-guide/)部分中的主题被移动到 Kubernetes文档中[用户指南](/zh/docs/user-guide/)部分中的主题被移动到 [任务](/zh/docs/tasks/)、[教程](/zh/docs/tutorials/)和[概念](/zh/docs/concepts)节。 本主题内容已移至:"} {"_id":"doc-en-website-49e08d7f16176a2997ebb20d45257702df956ded45f3e93f38089bde66c71cc7","title":"","text":" --- title: Podのオーバーヘッド content_type: concept weight: 30 --- {{< feature-state for_k8s_version=\"v1.18\" state=\"beta\" >}} PodをNode上で実行する時に、Pod自身は大量のシステムリソースを消費します。これらのリソースは、Pod内のコンテナ(群)を実行するために必要なリソースとして追加されます。Podのオーバーヘッドは、コンテナの要求と制限に加えて、Podのインフラストラクチャで消費されるリソースを計算するための機能です。 Kubernetesでは、Podの[RuntimeClass](/docs/concepts/containers/runtime-class/)に関連するオーバーヘッドに応じて、[アドミッション](/ja/docs/reference/access-authn-authz/extensible-admission-controllers/#what-are-admission-webhooks)時にPodのオーバーヘッドが設定されます。 Podのオーバーヘッドを有効にした場合、Podのスケジューリング時にコンテナのリソース要求の合計に加えて、オーバーヘッドも考慮されます。同様に、Kubeletは、Podのcgroupのサイズ決定時およびPodの退役の順位付け時に、Podのオーバーヘッドを含めます。 ## Podのオーバーヘッドの有効化 {#set-up} クラスター全体で`PodOverhead`の[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)が有効になっていること(1.18時点ではデフォルトでオンになっています)と、`overhead`フィールドを定義する`RuntimeClass`が利用されていることを確認する必要があります。 ## 使用例 Podのオーバーヘッド機能を使用するためには、`overhead`フィールドが定義されたRuntimeClassが必要です。例として、仮想マシンとゲストOSにPodあたり約120MiBを使用する仮想化コンテナランタイムで、次のようなRuntimeClassを定義できます。 ```yaml --- kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata-fc handler: kata-fc overhead: podFixed: memory: \"120Mi\" cpu: \"250m\" ``` `kata-fc`RuntimeClassハンドラーを指定して作成されたワークロードは、リソースクォータの計算や、Nodeのスケジューリング、およびPodのcgroupのサイズ決定にメモリーとCPUのオーバーヘッドが考慮されます。 次のtest-podのワークロードの例を実行するとします。 ```yaml apiVersion: v1 kind: Pod metadata: name: test-pod spec: runtimeClassName: kata-fc containers: - name: busybox-ctr image: busybox stdin: true tty: true resources: limits: cpu: 500m memory: 100Mi - name: nginx-ctr image: nginx resources: limits: cpu: 1500m memory: 100Mi ``` 
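The scheduling arithmetic described in this page can be sanity-checked directly: the scheduler adds the RuntimeClass overhead (`kata-fc`: 250m CPU / 120Mi memory) to the sum of the container resource figures from the `test-pod` example above. A minimal shell sketch of that sum:

```shell
# Effective scheduling footprint = sum of container resources + RuntimeClass overhead.
# Container CPU: 500m + 1500m; container memory: 100Mi + 100Mi (from test-pod above).
# kata-fc overhead: 250m CPU, 120Mi memory.
echo "$((500 + 1500 + 250))m CPU"   # prints: 2250m CPU
echo "$((100 + 100 + 120))Mi memory" # prints: 320Mi memory
```

This matches the 2.25 CPU and 320MiB of memory the scheduler looks for when placing the Pod.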
アドミッション時、RuntimeClass[アドミッションコントローラー](/docs/reference/access-authn-authz/admission-controllers/)は、RuntimeClass内に記述された`オーバーヘッド`を含むようにワークロードのPodSpecを更新します。もし既にPodSpec内にこのフィールドが定義済みの場合、そのPodは拒否されます。この例では、RuntimeClassの名前しか指定されていないため、アドミッションコントローラーは`オーバーヘッド`を含むようにPodを変更します。 RuntimeClassのアドミッションコントローラーの後、更新されたPodSpecを確認できます。 ```bash kubectl get pod test-pod -o jsonpath='{.spec.overhead}' ``` 出力は次の通りです: ``` map[cpu:250m memory:120Mi] ``` ResourceQuotaが定義されている場合、コンテナ要求の合計と`オーバーヘッド`フィールドがカウントされます。 kube-schedulerが新しいPodを実行すべきNodeを決定する際、スケジューラーはそのPodの`オーバーヘッド`と、そのPodに対するコンテナ要求の合計を考慮します。この例だと、スケジューラーは、要求とオーバーヘッドを追加し、2.25CPUと320MiBのメモリを持つNodeを探します。 PodがNodeにスケジュールされると、そのNodeのkubeletはPodのために新しい{{< glossary_tooltip text=\"cgroup\" term_id=\"cgroup\" >}}を生成します。基盤となるコンテナランタイムがコンテナを作成するのは、このPod内です。 リソースにコンテナごとの制限が定義されている場合(制限が定義されているGuaranteed QoSまたはBustrable QoS)、kubeletはそのリソース(CPUはcpu.cfs_quota_us、メモリはmemory.limit_in_bytes)に関連するPodのcgroupの上限を設定します。この上限は、コンテナの制限とPodSpecで定義された`オーバーヘッド`の合計に基づきます。 CPUについては、PodがGuaranteedまたはBurstable QoSの場合、kubeletはコンテナの要求の合計とPodSpecに定義された`オーバーヘッド`に基づいて`cpu.share`を設定します。 次の例より、ワークロードに対するコンテナの要求を確認できます。 ```bash kubectl get pod test-pod -o jsonpath='{.spec.containers[*].resources.limits}' ``` コンテナの要求の合計は、CPUは2000m、メモリーは200MiBです。 ``` map[cpu: 500m memory:100Mi] map[cpu:1500m memory:100Mi] ``` Nodeで観測される値と比較してみましょう。 ```bash kubectl describe node | grep test-pod -B2 ``` 出力では、2250mのCPUと320MiBのメモリーが要求されており、Podのオーバーヘッドが含まれていることが分かります。 ``` Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- default test-pod 2250m (56%) 2250m (56%) 320Mi (1%) 320Mi (1%) 36m ``` ## Podのcgroupの制限を確認 ワークロードで実行中のNode上にある、Podのメモリーのcgroupを確認します。次に示す例では、CRI互換のコンテナランタイムのCLIを提供するNodeで[`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md)を使用しています。これはPodのオーバーヘッドの動作を示すための高度な例であり、ユーザーがNode上で直接cgroupsを確認する必要はありません。 まず、特定のNodeで、Podの識別子を決定します。 ```bash # 
PodがスケジュールされているNodeで実行 POD_ID=\"$(sudo crictl pods --name test-pod -q)\" ``` ここから、Podのcgroupのパスが決定します。 ```bash # PodがスケジュールされているNodeで実行 sudo crictl inspectp -o=json $POD_ID | grep cgroupsPath ``` 結果のcgroupパスにはPodの`ポーズ中`コンテナも含まれます。Podレベルのcgroupは1つ上のディレクトリです。 ``` \"cgroupsPath\": \"/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/7ccf55aee35dd16aca4189c952d83487297f3cd760f1bbf09620e206e7d0c27a\" ``` 今回のケースでは、Podのcgroupパスは、`kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2`となります。メモリーのPodレベルのcgroupの設定を確認しましょう。 ```bash # PodがスケジュールされているNodeで実行 # また、Podに割り当てられたcgroupと同じ名前に変更 cat /sys/fs/cgroup/memory/kubepods/podd7f4b509-cf94-4951-9417-d1087c92a5b2/memory.limit_in_bytes ``` 予想通り320MiBです。 ``` 335544320 ``` ### Observability Podのオーバヘッドが利用されているタイミングを特定し、定義されたオーバーヘッドで実行されているワークロードの安定性を観察するため、[kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)には`kube_pod_overhead`というメトリクスが用意されています。この機能はv1.9のkube-state-metricsでは利用できませんが、次のリリースで期待されています。それまでは、kube-state-metricsをソースからビルドする必要があります。 ## {{% heading \"whatsnext\" %}} * [RuntimeClass](/ja/docs/concepts/containers/runtime-class/) * [Podのオーバーヘッドの設計](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead) "} {"_id":"doc-en-website-1c66ca62b0f1d7cc7a2d0f19a8f6ded05ec0d5818ce0ff527087a0803acfeb5f","title":"","text":"`Guaranteed` pods are guaranteed only when requests and limits are specified for all the containers and they are equal. These pods will never be evicted because of another pod's resource consumption. 
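To make the `Guaranteed` criteria above concrete, here is a minimal sketch of a Pod spec that qualifies: every container specifies both requests and limits, and they are equal. The Pod name, image, and resource values are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-demo   # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:        # requests == limits for every container,
        cpu: 500m      # so this Pod is classified as Guaranteed QoS
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 128Mi
```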
If a system daemon (such as `kubelet` and `journald`) is consuming more resources than were reserved via `system-reserved` or `kube-reserved` allocations, and the node only has `Guaranteed` or `Burstable` pods using less resources than requests left on it, then the kubelet must choose to evict one of these pods to preserve node stability"} {"_id":"doc-en-website-0f00ca41e6af3362e87594bd2906baca7db53b34ca3017606974c111277b1cba","title":"","text":"--> A Pod is guaranteed not to be evicted only when requests and limits are specified for all the containers in a `Guaranteed` Pod and they are equal. Such Pods will never be evicted because of another Pod's resource consumption. If a system daemon (such as `kubelet` and `journald`) consumes more resources than were reserved via `system-reserved` or `kube-reserved` allocations, and the node only has `Guaranteed` or `Burstable` Pods using less resources than the requests remaining on it, then the kubelet must choose to evict one of these Pods to preserve node stability and reduce the impact of resource starvation on other Pods."} {"_id":"doc-en-website-76fb6e2086bc5d38021bb95bb0cef39e2436eb48f5051e717fa33bd04bd8d206","title":"","text":"title: Pod Priority id: pod-priority date: 2019-01-31 full_link: /zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority short_description: > Pod priority indicates the importance of a Pod relative to other Pods."} {"_id":"doc-en-website-c7f8a0712b310b0cbdc8da97dd158b5277c0f5d505a71d2a84de6303f78260c5","title":"","text":"title: Pod Priority id: pod-priority date: 2019-01-31 full_link: /docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority short_description: > Pod Priority indicates the importance of a Pod relative to other Pods."} {"_id":"doc-en-website-5d279b8c28073f6f6ba190b908cc0c20c1974b7bbabbfedf3edfa17988d24861","title":"","text":" [Pod 优先级](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority) 允许用户为 Pod 设置高于或低于其他 Pod 的优先级 -- 这对于生产集群
工作负载而言是一个重要的特性。"} {"_id":"doc-en-website-4b6f83f358d5469a4b731bad6da46fff5cd5d3c4520016cb5f0d7f1df2023b61","title":"","text":"The [`fejta-bot`](https://github.com/fejta-bot) bot marks issues as stale after 90 days of inactivity. After 30 more days it marks issues as rotten and closes them. PR wranglers should close issues after 14-30 days of inactivity. {{< /note >}} ## PR Wrangler shadow program In late 2021, SIG Docs introduced the PR Wrangler Shadow Program. The program was introduced to help new contributors understand the PR wrangling process. ### Become a shadow - If you are interested in shadowing as a PR wrangler, please visit the [PR Wranglers Wiki page](https://github.com/kubernetes/website/wiki/PR-Wranglers) to see the PR wrangling schedule for this year and sign up. - Kubernetes org members can edit the [PR Wranglers Wiki page](https://github.com/kubernetes/website/wiki/PR-Wranglers) and sign up to shadow an existing PR Wrangler for a week. - Others can reach out on the [#sig-docs Slack channel](https://kubernetes.slack.com/messages/sig-docs) to request to shadow an assigned PR Wrangler for a specific week. Feel free to reach out to Brad Topol (`@bradtopol`) or one of the [SIG Docs co-chairs/leads](https://github.com/kubernetes/community/tree/master/sig-docs#leadership). - Once you've signed up to shadow a PR Wrangler, introduce yourself to the PR Wrangler on the [Kubernetes Slack](https://slack.k8s.io)."} {"_id":"doc-en-website-a1af4779080ef1cf17ba6f27acfa6d15e2c053791990a8b2df6dfcdf43833e00","title":"","text":"{{< warning >}} Only use kubeconfig files from trusted sources. Using a specially-crafted kubeconfig file could result in malicious code execution or file exposure. If you must use an untrusted kubeconfig file, inspect it carefully first, much as you would a shell script. 
{{< /warning>}}"} {"_id":"doc-en-website-ec9bc5c32f74430be348f0242eb073cf66f46fc4f291067f79154d360db8ed8b","title":"","text":"Create a directory named `config-exercise`. In your `config-exercise` directory, create a file named `config-demo` with this content: ```yaml apiVersion: v1 kind: Config preferences: {}"} {"_id":"doc-en-website-ad2de6d948b03561ebf594a7d06b93633e1379febafc81f2f3f9b18148ec0e50","title":"","text":"### Linux ```shell export KUBECONFIG_SAVED=\"$KUBECONFIG\" ``` ### Windows PowerShell"} {"_id":"doc-en-website-6562c36fea4f2f4dc0f52dc85ad733cce4e3a8524b88495793cfdc0f1466db80","title":"","text":"### Linux ```shell export KUBECONFIG=\"${KUBECONFIG}:config-demo:config-demo-2\" ``` ### Windows PowerShell"} {"_id":"doc-en-website-f887da715e787bffe6c2f74f55c8fbfadbc67e7566a6de953a0f8bc737043a00","title":"","text":"### Linux ```shell export KUBECONFIG=\"${KUBECONFIG}:${HOME}/.kube/config\" ``` ### Windows PowerShell"} {"_id":"doc-en-website-13c84699ca81b13c84617ca9d8ae5e19ed8b0b16d7da8ea27156917be7f9c28b","title":"","text":"### Linux ```shell export KUBECONFIG=\"$KUBECONFIG_SAVED\" ``` ### Windows PowerShell"} {"_id":"doc-en-website-a29a5cfa1d202425d61a89f52251d67e465b44319b204f01689f0c558a3afbfc","title":"","text":" --- title: Scheduling Policies content_type: concept sitemap: priority: 0.2 # Scheduling policies have been deprecated. --- In Kubernetes versions before v1.23, a scheduling policy could be used to specify the *predicates* and *priorities* process. For example, you could configure a scheduling policy by running `kube-scheduler --policy-config-file ` or `kube-scheduler --policy-configmap `. Scheduling policies are not supported since Kubernetes v1.23, and the associated flags `policy-config-file`, `policy-configmap`, `policy-configmap-namespace` and `use-legacy-policy-config` are likewise not supported. Use the [scheduler configuration](/ja/docs/reference/scheduling/config/) instead. ## {{% heading 
\"whatsnext\" %}} * [スケジューリング](/ja/docs/concepts/scheduling-eviction/kube-scheduler/)について学ぶ * [kube-scheduler設定](/ja/docs/reference/scheduling/config/)について学ぶ * [kube-scheduler設定リファレンス(v1beta3)](/docs/reference/config-api/kube-scheduler-config.v1beta3/)について読む "} {"_id":"doc-en-website-1341d9f25535d0a0b56aa89d3795e6826507e1fc8bc1113a79fcdb501943dfe4","title":"","text":"# Array of authenticated usernames to exempt. usernames: [] # Array of runtime class names to exempt. runtimeClassNames: [] runtimeClasses: [] # Array of namespaces to exempt. namespaces: [] ```"} {"_id":"doc-en-website-47ffeaf7ab78293ce6b6bfda85144e8c7b171f30a2fb4c5069bd6f7ce645d673","title":"","text":"the YAML: `192.0.2.42:9376` (TCP). {{< note >}} The Kubernetes API server does not allow proxying to endpoints that are not mapped to pods. Actions such as `kubectl proxy ` where the service has no selector will fail due to this constraint. This prevents the Kubernetes API server from being used as a proxy to endpoints the caller may not be authorized to access. The Kubernetes API server does not allow proxying to endpoints that are not mapped to pods. Actions such as `kubectl proxy ` where the service has no selector will fail due to this constraint. This prevents the Kubernetes API server from being used as a proxy to endpoints the caller may not be authorized to access. {{< /note >}} An ExternalName Service is a special case of Service that does not have"} {"_id":"doc-en-website-c49887b3ddeb27d17700c6aaac5539182d1ebf0c47dfae85271c0f81c57d770a","title":"","text":"Later in this page you can read about various kube-proxy implementations work. Overall, you should note that, when running `kube-proxy`, kernel level rules may be modified (for example, iptables rules might get created), which won't get cleaned up, modified (for example, iptables rules might get created), which won't get cleaned up, in some cases until you reboot. 
Thus, running kube-proxy is something that should only be done by an administrator who understands the consequences of having a low level, privileged network proxying service on a computer. Although the `kube-proxy`"} {"_id":"doc-en-website-5eb9ec7db4d6db503bd94e0ddea3f69cb30b65d7a11570eb2e0324271492512d","title":"","text":"- The ConfigMap parameters for the kube-proxy cannot all be validated and verified on startup. For example, if your operating system doesn't allow you to run iptables commands, the standard kernel kube-proxy implementation will not work. Likewise, if you have an operating system which doesn't support `netsh`, it will not run in Windows userspace mode. ### User space proxy mode {#proxy-mode-userspace} {{< feature-state for_k8s_version=\"v1.23\" state=\"deprecated\" >}} In this (legacy) mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoint objects. For each Service it opens a"} {"_id":"doc-en-website-732d45e398c31c98d6c9c28ea1682dd0ed80f31adf5c032f5780927ff2a1356e","title":"","text":"other versions of Kubernetes, check the documentation for that release. By default, `spec.loadBalancerClass` is `nil` and a `LoadBalancer` type of Service uses the cloud provider's default load balancer implementation if the cluster is configured with a cloud provider using the `--cloud-provider` component flag. If `spec.loadBalancerClass` is specified, it is assumed that a load balancer implementation that matches the specified class is watching for Services. Any default load balancer implementation (for example, the one provided by the cloud provider) will ignore Services that have this field set. `spec.loadBalancerClass` can be set on a Service of type `LoadBalancer` only. Once set, it cannot be changed. 
The value of `spec.loadBalancerClass` must be a label-style identifier, with an optional prefix such as \"`internal-vip`\" or \"`example.com/internal-vip`\". Unprefixed names are reserved for end-users."} {"_id":"doc-en-website-fe905f92e324d0ba3a57caa4bd954979285448f6dea9b05058aa90cd415abaff","title":"","text":"service.beta.kubernetes.io/aws-load-balancer-security-groups: \"sg-53fae93f\" # A list of existing security groups to be configured on the ELB created. Unlike the annotation # service.beta.kubernetes.io/aws-load-balancer-extra-security-groups, this replaces all other security groups previously assigned to the ELB and also overrides the creation # of a uniquely generated security group for this ELB. # The first security group ID on this list is used as a source to permit incoming traffic to target worker nodes (service traffic and health checks). # If multiple ELBs are configured with the same security group ID, only a single permit line will be added to the worker node security groups, that means if you delete any
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: \"ingress-gw,gw-name=public-api\" # A comma separated list of key-value pairs which are used"} {"_id":"doc-en-website-7231a24ed630e02205adbea941cfa738f27df6355c729fbbe5ad39d7cac7b9e5","title":"","text":" --- title: Podのセキュリティアドミッション content_type: concept weight: 20 min-kubernetes-server-version: v1.22 --- {{< feature-state for_k8s_version=\"v1.23\" state=\"beta\" >}} Kubernetesの[Podセキュリティの標準](/ja/docs/concepts/security/pod-security-standards/)はPodに対して異なる分離レベルを定義します。 これらの標準によって、Podの動作をどのように制限したいかを、明確かつ一貫した方法で定義することができます。 ベータ版機能として、Kubernetesは[PodSecurityPolicy](/docs/concepts/policy/pod-security-policy/)の後継である組み込みの _Pod Security_ {{< glossary_tooltip text=\"アドミッションコントローラー\" term_id=\"admission-controller\" >}}を提供しています。 Podセキュリティの制限は、Pod作成時に{{< glossary_tooltip text=\"名前空間\" term_id=\"namespace\" >}}レベルで適用されます。 {{< note >}} PodSecurityPolicy APIは非推奨であり、v1.25でKubernetesから[削除](/docs/reference/using-api/deprecation-guide/#v1-25)される予定です。 {{< /note >}} ## `PodSecurity`アドミッションプラグインの有効化 {#enabling-the-podsecurity-admission-plugin} v1.23において、`PodSecurity`の[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)はベータ版の機能で、デフォルトで有効化されています。 v1.22において、`PodSecurity`の[フィーチャーゲート](/ja/docs/reference/command-line-tools-reference/feature-gates/)はアルファ版の機能で、組み込みのアドミッションプラグインを使用するには、`kube-apiserver`で有効にする必要があります。 ```shell --feature-gates=\"...,PodSecurity=true\" ``` ## 代替案:`PodSecurity`アドミッションwebhookのインストール {#webhook} クラスターがv1.22より古い、あるいは`PodSecurity`機能を有効にできないなどの理由で、ビルトインの`PodSecurity`アドミッションプラグインが使えない環境では、`PodSecurity`はアドミッションロジックはベータ版の[validating admission webhook](https://git.k8s.io/pod-security-admission/webhook)としても提供されています。 ビルド前のコンテナイメージ、証明書生成スクリプト、マニフェストの例は、[https://git.k8s.io/pod-security-admission/webhook](https://git.k8s.io/pod-security-admission/webhook)で入手可能です。 インストール方法: ```shell git clone git@github.com:kubernetes/pod-security-admission.git cd pod-security-admission/webhook make certs kubectl 
apply -k . ``` {{< note >}} 生成された証明書の有効期限は2年間です。有効期限が切れる前に、証明書を再生成するか、内蔵のアドミッションプラグインを使用してWebhookを削除してください。 {{< /note >}} ## Podのセキュリティレベル {#pod-security-levels} Podのセキュリティアドミッションは、Podの[Security Context](/docs/tasks/configure-pod-container/security-context/)とその他の関連フィールドに、[Podセキュリティの標準](/ja/docs/concepts/security/pod-security-standards)で定義された3つのレベル、`privileged`、`baseline`、`restricted`に従って要件を設定するものです。 これらの要件の詳細については、[Podセキュリティの標準](/ja/docs/concepts/security/pod-security-standards)のページを参照してください。 ## Podの名前空間に対するセキュリティアドミッションラベル {#pod-security-admission-labels-for-namespaces} この機能を有効にするか、Webhookをインストールすると、名前空間を設定して、各名前空間でPodセキュリティに使用したいadmission controlモードを定義できます。 Kubernetesは、名前空間に使用したい定義済みのPodセキュリティの標準レベルのいずれかを適用するために設定できる{{< glossary_tooltip term_id=\"label\" text=\"ラベル\" >}}のセットを用意しています。 選択したラベルは、以下のように違反の可能性が検出された場合に{{< glossary_tooltip text=\"コントロールプレーン\" term_id=\"control-plane\" >}}が取るアクションを定義します。 {{< table caption=\"Podのセキュリティアドミッションのモード\" >}} モード | 説明 :---------|:------------ **enforce** | ポリシーに違反した場合、Podは拒否されます。 **audit** | ポリシー違反は、[監査ログ](/ja/docs/tasks/debug-application-cluster/audit/)に記録されるイベントに監査アノテーションを追加するトリガーとなりますが、それ以外は許可されます。 **warn** | ポリシーに違反した場合は、ユーザーへの警告がトリガーされますが、それ以外は許可されます。 {{< /table >}} 名前空間は、任意のまたはすべてのモードを設定することができ、異なるモードに対して異なるレベルを設定することもできます。 各モードには、使用するポリシーを決定する2つのラベルがあります。 ```yaml # モードごとのレベルラベルは、そのモードに適用するポリシーレベルを示す。 # # MODEは`enforce`、`audit`、`warn`のいずれかでなければならない。 # LEVELは`privileged`、`baseline`、`restricted`のいずれかでなければならない。 pod-security.kubernetes.io/: # オプション: モードごとのバージョンラベルは、Kubernetesのマイナーバージョンに同梱される # バージョンにポリシーを固定するために使用できる(例えばv{{< skew latestVersion >}}など)。 # # MODEは`enforce`、`audit`、`warn`のいずれかでなければならない。 # VERSIONは有効なKubernetesのマイナーバージョンか`latest`でなければならない。 pod-security.kubernetes.io/-version: ``` [名前空間ラベルでのPodセキュリティの標準の適用](/docs/tasks/configure-pod-container/enforce-standards-namespace-labels)で使用例を確認できます。 ## WorkloadのリソースとPodテンプレート {#workload-resources-and-pod-templates} Podは、{{< glossary_tooltip term_id=\"deployment\" >}}や{{< 
glossary_tooltip term_id=\"job\">}}のような[ワークロードオブジェクト](/ja/docs/concepts/workloads/controllers/)を作成することによって、しばしば間接的に生成されます。 ワークロードオブジェクトは_Pod template_を定義し、ワークロードリソースの{{< glossary_tooltip term_id=\"controller\" text=\"コントローラー\" >}}はそのテンプレートに基づきPodを作成します。 違反の早期発見を支援するために、auditモードとwarningモードは、ワークロードリソースに適用されます。 ただし、enforceモードはワークロードリソースには**適用されず**、結果としてのPodオブジェクトにのみ適用されます。 ## 適用除外(Exemption) {#exemptions} Podセキュリティの施行から _exemptions_ を定義することで、特定の名前空間に関連するポリシーのために禁止されていたPodの作成を許可することができます。 Exemptionは[アドミッションコントローラーの設定](/docs/tasks/configure-pod-container/enforce-standards-admission-controller/#configure-the-admission-controller)で静的に設定することができます。 Exemptionは明示的に列挙する必要があります。 Exemptionを満たしたリクエストは、アドミッションコントローラーによって _無視_ されます(`enforce`、`audit`、`warn`のすべての動作がスキップされます)。Exemptionの次元は以下の通りです。 - **Usernames:** 認証されていない(あるいは偽装された)ユーザー名を持つユーザーからの要求は無視されます。 - **RuntimeClassNames:** Podと[ワークロードリソース](#workload-resources-and-pod-templates)で指定された除外ランタイムクラス名は、無視されます。 - **Namespaces:** 除外された名前空間のPodと[ワークロードリソース](#workload-resources-and-pod-templates)は、無視されます。 {{< caution >}} ほとんどのPodは、[ワークロードリソース](#workload-resources-and-pod-templates)に対応してコントローラーが作成します。つまり、エンドユーザーを適用除外にするのはPodを直接作成する場合のみで、ワークロードリソースを作成する場合は適用除外になりません。 コントローラーサービスアカウント(`system:serviceaccount:kube-system:replicaset-controller`など)は通常、除外してはいけません。そうした場合、対応するワークロードリソースを作成できるすべてのユーザーを暗黙的に除外してしまうためです。 {{< /caution >}} 以下のPodフィールドに対する更新は、ポリシーチェックの対象外となります。つまり、Podの更新要求がこれらのフィールドを変更するだけであれば、Podが現在のポリシーレベルに違反していても拒否されることはありません。 - すべてのメタデータの更新(seccompまたはAppArmorアノテーションへの変更を**除く**) - `seccomp.security.alpha.kubernetes.io/pod`(非推奨) - `container.seccomp.security.alpha.kubernetes.io/*`(非推奨) - `container.apparmor.security.beta.kubernetes.io/*` - `.spec.activeDeadlineSeconds`に対する有効な更新 - `.spec.tolerations`に対する有効な更新 ## {{% heading \"whatsnext\" %}} - [Podセキュリティの標準](/ja/docs/concepts/security/pod-security-standards) - [Podセキュリティの標準の適用](/docs/setup/best-practices/enforcing-pod-security-standards) - 
[ビルトインのアドミッションコントローラーの設定によるPodセキュリティの標準の適用](/docs/tasks/configure-pod-container/enforce-standards-admission-controller) - [名前空間ラベルでのPodセキュリティの標準の適用](/docs/tasks/configure-pod-container/enforce-standards-namespace-labels) - [PodSecurityPolicyからビルトインのPodSecurityアドミッションコントローラーへの移行](/docs/tasks/configure-pod-container/migrate-from-psp) "} {"_id":"doc-en-website-ed435159e984e239b1090da350b118463cbb986d53cbc69f0c3e4e5a8b7cccf4","title":"","text":"[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) on the [API server](/docs/reference/command-line-tools-reference/kube-apiserver/) and the [controller manager](/docs/reference/command-line-tools-reference/kube-controller-manager/). It is enabled by default. When enabled, the control plane tracks new Jobs using the behavior described below. Jobs created before the feature was enabled are unaffected. As a user,"} {"_id":"doc-en-website-da6b4639a2c233e3912ab9d068cd79396c9042e582579a38ffc7e6b0c1d16f0f","title":"","text":"title: \"弃用 Dockershim 的常见问题\" date: 2020-12-02 slug: dockershim-faq aliases: [ '/dockershim' ] aliases: [ '/zh/dockershim' ] --- **作者:** Ricardo Katz (VMware), James Strong (Chainguard) [Ingress](/zh-cn/docs/concepts/services-networking/ingress/) 可能是 Kubernetes 最容易受攻击的组件之一。 Ingress 通常定义一个 HTTP 反向代理,暴露在互联网上,包含多个网站,并具有对 Kubernetes API 的一些特权访问(例如读取与 TLS 证书及其私钥相关的 Secret)。 虽然它是架构中的一个风险组件,但它仍然是正常公开服务的最流行方式。 Ingress-NGINX 一直是安全评估的重头戏,这类评估会发现我们有着很大的问题: 在将配置转换为 `nginx.conf` 文件之前,我们没有进行所有适当的清理,这可能会导致信息泄露风险。 虽然我们了解此风险以及解决此问题的真正需求,但这并不是一个容易的过程, 因此我们在当前(v1.2.0)版本中采取了另一种方法来减少(但不是消除!)这种风险。 ## 了解 Ingress NGINX v1.2.0 和 chrooted NGINX 进程 主要挑战之一是 Ingress-NGINX 运行着 Web 代理服务器(NGINX),并与 Ingress 控制器一起运行 (后者是一个可以访问 Kubernetes API 并创建 `nginx.conf` 的组件)。 因此,NGINX 对控制器的文件系统(和 Kubernetes 服务帐户令牌,以及容器中的其他配置)具有相同的访问权限。 虽然拆分这些组件是我们的最终目标,但该项目需要快速响应;这让我们想到了使用 `chroot()`。 让我们看一下 Ingress-NGINX 容器在此更改之前的样子: ![Ingress NGINX pre chroot](ingress-pre-chroot.png) 正如我们所见,用来提供 HTTP Proxy 的容器(不是 Pod,是容器!)也是是监视 Ingress 
对象并将数据写入容器卷的容器。 现在,见识一下新架构: ![Ingress NGINX post chroot](ingress-post-chroot.png) 这一切意味着什么?一个基本的总结是:我们将 NGINX 服务隔离为控制器容器内的容器。 虽然这并不完全正确,但要了解这里所做的事情,最好了解 Linux 容器(以及内核命名空间等底层机制)是如何工作的。 你可以在 Kubernetes 词汇表中阅读有关 cgroup 的信息:[`cgroup`](/zh-cn/docs/reference/glossary/?fundamental=true#term-cgroup), 并在 NGINX 项目文章[什么是命名空间和 cgroup,以及它们如何工作?](https://www.nginx.com/blog/what-are-namespaces-cgroups-how-do-they-work/) 中了解有关 cgroup 与命名空间交互的更多信息。(当你阅读时,请记住 Linux 内核命名空间与 [Kubernetes 命名空间](/zh-cn/docs/concepts/overview/working-with-objects/namespaces/)不同)。 ## 跳过谈话,我需要什么才能使用这种新方法? 虽然这增加了安全性,但我们在这个版本中把这个功能作为一个选项,这样你就可以有时间在你的环境中做出正确的调整。 此新功能仅在 Ingress-NGINX 控制器的 v1.2.0 版本中可用。 要使用这个功能,在你的部署中有两个必要的改变: * 将后缀 \"-chroot\" 添加到容器镜像名称中。例如:`gcr.io/k8s-staging-ingress-nginx/controller-chroot:v1.2.0` * 在你的 Ingress 控制器的 Pod 模板中,找到添加 `NET_BIND_SERVICE` 权能的位置并添加 `SYS_CHROOT` 权能。 编辑清单后,你将看到如下代码段: ```yaml capabilities: drop: - ALL add: - NET_BIND_SERVICE - SYS_CHROOT ``` 如果你使用官方 Helm Chart 部署控制器,则在 `values.yaml` 中更改以下设置: ```yaml controller: image: chroot: true ``` Ingress 控制器通常部署在集群作用域(IngressClass API 是集群作用域的)。 如果你管理 Ingress-NGINX 控制器但你不是整个集群的操作员, 请在部署中启用它**之前**与集群管理员确认你是否可以使用 `SYS_CHROOT` 功能。 ## 好吧,但这如何能提高我的 Ingress 控制器的安全性呢? 以下面的配置片段为例,想象一下,由于某种原因,它被添加到你的 `nginx.conf` 中: ``` location /randomthing/ { alias /; autoindex on; } ``` 如果你部署了这种配置,有人可以调用 `http://website.example/randomthing` 并获取对 Ingress 控制器的整个文件系统的一些列表(和访问权限)。 现在,你能在下面的列表中发现 chroot 处理过和未经 chroot 处理过的 Nginx 之间的区别吗? | 不额外调用 `chroot()` | 额外调用 `chroot()` | |----------------------------|--------| | `bin` | `bin` | | `dev` | `dev` | | `etc` | `etc` | | `home` | | | `lib` | `lib` | | `media` | | | `mnt` | | | `opt` | `opt` | | `proc` | `proc` | | `root` | | | `run` | `run` | | `sbin` | | | `srv` | | | `sys` | | | `tmp` | `tmp` | | `usr` | `usr` | | `var` | `var` | | `dbg` | | | `nginx-ingress-controller` | | | `wait-shutdown` | | 左侧的那个没有 chroot 处理。所以 NGINX 可以完全访问文件系统。右侧的那个经过 chroot 处理, 因此创建了一个新文件系统,其中只有使 NGINX 工作所需的文件。 ## 此版本中的其他安全改进如何? 
我们知道新的 `chroot()` 机制有助于解决部分风险,但仍然有人可以尝试注入命令来读取,例如 `nginx.conf` 文件并提取敏感信息。 所以,这个版本的另一个变化(可选择取消)是 **深度探测(Deep Inspector)**。 我们知道某些指令或正则表达式可能对 NGINX 造成危险,因此深度探测器会检查 Ingress 对象中的所有字段 (在其协调期间,并且还使用[验证准入 webhook](/zh-cn/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook)) 验证是否有任何字段包含这些危险指令。 Ingress 控制器已经通过注解做了这个工作,我们的目标是把现有的验证转移到深度探测中,作为未来版本的一部分。 你可以在 [https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/inspector/rules.go](https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/inspector/rules.go) 中查看现有规则。 由于检查和匹配相关 Ingress 对象中的所有字符串的性质,此新功能可能会消耗更多 CPU。 你可以通过使用命令行参数 `--deep-inspect=false` 运行 Ingress 控制器来禁用它。 ## 下一步是什么? 这不是我们的最终目标。我们的最终目标是拆分控制平面和数据平面进程。 事实上,这样做也将帮助我们实现 [Gateway](https://gateway-api.sigs.k8s.io/) API 实现, 因为一旦它“知道”要提供什么,我们可能会有不同的控制器 数据平面(我们需要一些帮助!!) Kubernetes 中的其他一些项目已经采用了这种方法(如 [KPNG](https://github.com/kubernetes-sigs/kpng), 建议替换 `kube-proxy`),我们计划与他们保持一致,并为 Ingress-NGINX 获得相同的体验。 ## 延伸阅读 如果你想了解如何在 Ingress NGINX 中完成 chrooting,请查看 [https://github.com/kubernetes/ingress-nginx/pull/8337](https://github.com/kubernetes/ingress-nginx/pull/8337)。 包含所有更改的版本 v1.2.0 可以在以下位置找到 [https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.2.0](https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.2.0) "} {"_id":"doc-en-website-ef2ed9af4d216dc9ef98508052f0d88382a8afe6e8ad43b67700170dfd81ac9e","title":"","text":"nohup.out # Hugo output public/ resources/ /public/ /resources/ .hugo_build.lock # Netlify Functions build output package-lock.json functions/ node_modules/ /functions/ /node_modules/ # Generated files when building with make container-build .config/"} {"_id":"doc-en-website-c7939ac7ad92039bf0817079ee7d94383a8dac8b0faee33a1053d735790af437","title":"","text":"`annotate` | kubectl annotate (-f FILENAME | TYPE NAME | TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags] | Add or update the annotations of one or more resources. 
Operation | Syntax | Description
-------------------- | -------------------- | --------------------
`alpha` | `kubectl alpha SUBCOMMAND [flags]` | List the available commands that correspond to alpha features, which are not enabled in Kubernetes clusters by default.
`annotate` | `kubectl annotate (-f FILENAME \| TYPE NAME \| TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]` | Add or update the annotations of one or more resources.
`api-resources` | `kubectl api-resources [flags]` | List the API resources that are available.
`api-versions` | `kubectl api-versions [flags]` | List the API versions that are available.
`apply` | `kubectl apply -f FILENAME [flags]` | Apply a configuration change to a resource from a file or stdin.
`attach` | `kubectl attach POD -c CONTAINER [-i] [-t] [flags]` | Attach to a running container to view the output stream or interact with the container (stdin).
`auth` | `kubectl auth [flags] [options]` | Inspect authorization.
`autoscale` | `kubectl autoscale (-f FILENAME \| TYPE NAME \| TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [flags]` | Automatically scale the set of pods that are managed by a replication controller.
`certificate` | `kubectl certificate SUBCOMMAND [options]` | Modify certificate resources.
`cluster-info` | `kubectl cluster-info [flags]` | Display endpoint information about the master and services in the cluster.
`completion` | `kubectl completion SHELL [options]` | Output shell completion code for the specified shell (bash or zsh).
`config` | `kubectl config SUBCOMMAND [flags]` | Modify kubeconfig files. See the individual subcommands for details.
`convert` | `kubectl convert -f FILENAME [options]` | Convert config files between different API versions. Both YAML and JSON formats are accepted. Note - requires the `kubectl-convert` plugin to be installed.
`cordon` | `kubectl cordon NODE [options]` | Mark a node as unschedulable.
`cp` | `kubectl cp [options]` | Copy files and directories to and from containers.
`create` | `kubectl create -f FILENAME [flags]` | Create one or more resources from a file or stdin.
`delete` | `kubectl delete (-f FILENAME \| TYPE [NAME \| /NAME \| -l label \| --all]) [flags]` | Delete resources either from a file or stdin, or by specifying label selectors, names, resource selectors, or resources.
`describe` | `kubectl describe (-f FILENAME \| TYPE [NAME_PREFIX \| /NAME \| -l label]) [flags]` | Display the detailed state of one or more resources.
`diff` | `kubectl diff -f FILENAME [flags]` | Diff a file or stdin against the live configuration (**BETA**).
`drain` | `kubectl drain NODE [options]` | Drain a node in preparation for maintenance.
`edit` | `kubectl edit (-f FILENAME \| TYPE NAME \| TYPE/NAME) [flags]` | Edit and update the definition of one or more resources on the server by using the default editor.
`exec` | `kubectl exec POD [-c CONTAINER] [-i] [-t] [flags] [-- COMMAND [args...]]` | Execute a command against a container in a pod.
`explain` | `kubectl explain [--recursive=false] [flags]` | Get documentation of various resources, for instance pods, nodes, services, etc.
`expose` | `kubectl expose (-f FILENAME \| TYPE NAME \| TYPE/NAME) [--port=port] [--protocol=TCP\|UDP] [--target-port=number-or-name] [--name=name] [--external-ip=external-ip-of-service] [--type=type] [flags]` | Expose a replication controller, service, or pod as a new Kubernetes service.
`get` | `kubectl get (-f FILENAME \| TYPE [NAME \| /NAME \| -l label]) [--watch] [--sort-by=FIELD] [[-o \| --output]=OUTPUT_FORMAT] [flags]` | List one or more resources.
`kustomize` | `kubectl kustomize [flags] [options]` | List a set of API resources generated from instructions in a kustomization.yaml file. The argument must be the path to a directory containing the file, or a git repository URL with a path suffix specifying the same with respect to the repository root.
`label` | `kubectl label (-f FILENAME \| TYPE NAME \| TYPE/NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--overwrite] [--all] [--resource-version=version] [flags]` | Add or update the labels of one or more resources.
`logs` | `kubectl logs POD [-c CONTAINER] [--follow] [flags]` | Print the logs for a container in a pod.
`options` | `kubectl options` | List global command-line options, which apply to all commands.
`replace` | `kubectl replace -f FILENAME` | Replace a resource from a file or stdin.
`rollout` | `kubectl rollout SUBCOMMAND [options]` | Manage the rollout of a resource. Valid resource types include: Deployments, DaemonSets and StatefulSets.
`run` | `kubectl run NAME --image=image [--env="key=value"] [--port=port] [--dry-run=server \| client \| none] [--overrides=inline-json] [flags]` | Run a specified image on the cluster.
`scale` | `kubectl scale (-f FILENAME \| TYPE NAME \| TYPE/NAME) --replicas=COUNT [--resource-version=version] [--current-replicas=count] [flags]` | Update the size of the specified replication controller.
`set` | `kubectl set SUBCOMMAND [options]` | Configure application resources.
`taint` | `kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]` | Update the taints on one or more nodes.

Patch your Deployment again with this new patch:

```shell
kubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-retainkeys.yaml
```

Examine the content of the Deployment:

See [Enforcing Pod Security at the Namespace Level](/docs/concepts/security/pod-security-admission) for more information.

### rbac.authorization.kubernetes.io/autoupdate

Example: `rbac.authorization.kubernetes.io/autoupdate: "false"`

Used on: ClusterRole, ClusterRoleBinding, Role, RoleBinding

When this annotation is set to `"true"` on default RBAC objects created by the kube-apiserver, they are automatically updated at server start to add missing permissions and subjects (extra permissions and subjects are left in place). To prevent autoupdating a particular role or rolebinding, set this annotation to `"false"`.

If you create your own RBAC objects and set this annotation to `"false"`, `kubectl auth reconcile` (which allows reconciling arbitrary RBAC objects in a {{< glossary_tooltip text="manifest" term_id="manifest" >}}) respects this annotation and does not automatically add missing permissions and subjects.

### kubernetes.io/psp (deprecated) {#kubernetes-io-psp}

Example: `kubernetes.io/psp: restricted`

### Submitting Documentation Pull Requests

If you're fixing an issue in the existing documentation, you should submit a PR against the main branch.
Follow [these instructions to create a documentation pull request against the kubernetes.io repository](https://kubernetes.io/docs/contribute/new-content/open-a-pr/). For more information, see [contributing to Kubernetes docs](https://kubernetes.io/docs/contribute/).

[plugin](/docs/concepts/storage/volumes/#types-of-volumes). The following broad classes of Kubernetes volume plugins are supported on Windows:

* [`FlexVolume plugins`](/docs/concepts/storage/volumes/#flexvolume-deprecated)
  * Please note that FlexVolumes have been deprecated as of 1.23
* [`CSI Plugins`](/docs/concepts/storage/volumes/#csi)

## What is a Pod?

{{< note >}}
While Kubernetes supports more {{< glossary_tooltip text="container runtimes" term_id="container-runtime" >}} than just Docker, [Docker](https://www.docker.com/) is the most commonly known runtime, and it helps to describe Pods using some terminology from Docker. You need to install a [container runtime](/docs/setup/production-environment/container-runtimes/) into each node in the cluster so that Pods can run there.
{{< /note >}}

The shared context of a Pod is a set of Linux namespaces, cgroups, and

{{< note >}}
If you use a Docker credentials store, you won't see that `auth` entry but a `credsStore` entry with the name of the store as value.
In that case, you can create a secret directly. See [Create a Secret by providing credentials on the command line](#create-a-secret-by-providing-credentials-on-the-command-line).
{{< /note >}}

## Create a Secret based on existing credentials {#registry-secret-existing-credentials}

---
title: ReplicaSet
id: replica-set
date: 2018-04-12
full_link: /docs/concepts/workloads/controllers/replicaset/
short_description: >
  A ReplicaSet ensures that a specified number of Pod replicas are running at any one time.
aka:
tags:
- fundamental
- core-object
- workload
---

A ReplicaSet's purpose is to maintain a set of replica Pods running at any given time. Workload objects, such as a {{< glossary_tooltip text="Deployment" term_id="deployment" >}}, use ReplicaSets to ensure that the configured number of {{< glossary_tooltip term_id="pod" text="Pods" >}} are running in your cluster, based on the spec of the ReplicaSet.

durable external storage, or provide ephemeral storage, or they might offer a read-only interface to information using a filesystem paradigm.

Kubernetes also includes support for [FlexVolume](/docs/concepts/storage/volumes/#flexvolume-deprecated) plugins, which are deprecated since Kubernetes v1.23 (in favour of CSI). FlexVolume plugins allow users to mount volume types that aren't natively supported by Kubernetes.
When

* [`gcePersistentDisk`](#gcepersistentdisk)
* [`vsphereVolume`](#vspherevolume)

### flexVolume (deprecated) {#flexvolume}

{{< feature-state for_k8s_version="v1.23" state="deprecated" >}}

Details of the metric data that Kubernetes components export.

---

## Metrics (auto-generated 2022 Nov 01)

This page details the metrics that different Kubernetes components export. You can query the metrics endpoint for these components using an HTTP scrape, and fetch the current metrics data in Prometheus format.
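As a quick illustration of what such a scrape contains, the sketch below parses a minimal subset of the Prometheus text exposition format (one `name{labels} value` sample per line, `# HELP` and `# TYPE` comments skipped). The sample payload is invented for the example; a real component exposes many more series.

```python
def parse_metrics(text: str) -> dict:
    """Parse a minimal subset of the Prometheus exposition format.

    Each non-comment line is 'name{labels} value' or 'name value';
    the value is always the last space-separated token.
    """
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        series, _, value = line.rpartition(" ")
        samples[series] = float(value)
    return samples

# A hypothetical scrape fragment, for illustration only:
scrape = """\
# HELP apiserver_storage_objects Number of stored objects at the time of last check split by kind.
# TYPE apiserver_storage_objects gauge
apiserver_storage_objects{resource="pods"} 42
apiserver_storage_objects{resource="secrets"} 7
"""
print(parse_metrics(scrape)['apiserver_storage_objects{resource="pods"}'])  # -> 42.0
```

In practice you would use a Prometheus server or an existing client library rather than hand-parsing, but the format itself is this simple to read.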
### Stable metrics

Name | Stability Level | Type | Help | Labels
---- | ---- | ---- | ---- | ----
 | STABLE | Histogram | Admission controller latency histogram in seconds, identified by name and broken out for each operation and API resource and type (validate or admit). | name, operation, rejected, type
`apiserver_admission_step_admission_duration_seconds` | STABLE | Histogram | Admission sub-step latency histogram in seconds, broken out for each operation and API resource and step type (validate or admit). | operation, rejected, type
`apiserver_admission_webhook_admission_duration_seconds` | STABLE | Histogram | Admission webhook latency histogram in seconds, identified by name and broken out for each operation and API resource and type (validate or admit). | name, operation, rejected, type
`apiserver_current_inflight_requests` | STABLE | Gauge | Maximal number of currently used inflight request limit of this apiserver per request kind in last second. | request_kind
`apiserver_longrunning_requests` | STABLE | Gauge | Gauge of all active long-running apiserver requests broken out by verb, group, version, resource, scope and component. Not all requests are tracked this way. | component, group, resource, scope, subresource, verb, version
`apiserver_request_duration_seconds` | STABLE | Histogram | Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component. | component, dry_run, group, resource, scope, subresource, verb, version
`apiserver_request_total` | STABLE | Counter | Counter of apiserver requests broken out for each verb, dry run value, group, version, resource, scope, component, and HTTP response code. | code, component, dry_run, group, resource, scope, subresource, verb, version
`apiserver_requested_deprecated_apis` | STABLE | Gauge | Gauge of deprecated APIs that have been requested, broken out by API group, version, resource, subresource, and removed_release. | group, removed_release, resource, subresource, version
`apiserver_response_sizes` | STABLE | Histogram | Response size distribution in bytes for each group, version, verb, resource, subresource, scope and component. | component, group, resource, scope, subresource, verb, version
`apiserver_storage_objects` | STABLE | Gauge | Number of stored objects at the time of last check split by kind. | resource
`cronjob_controller_job_creation_skew_duration_seconds` | STABLE | Histogram | Time between when a cronjob is scheduled to be run, and when the corresponding job is created |
`job_controller_job_pods_finished_total` | STABLE | Counter | The number of finished Pods that are fully tracked | completion_mode, result
`job_controller_job_sync_duration_seconds` | STABLE | Histogram | The time it took to sync a job | action, completion_mode, result
`job_controller_job_syncs_total` | STABLE | Counter | The number of job syncs | action, completion_mode, result
`job_controller_jobs_finished_total` | STABLE | Counter | The number of finished jobs | completion_mode, reason, result
`node_collector_evictions_total` | STABLE | Counter | Number of Node evictions that happened since current instance of NodeController started. | zone
`scheduler_framework_extension_point_duration_seconds` | STABLE | Histogram | Latency for running all plugins of a specific extension point. | extension_point, profile, status
`scheduler_pending_pods` | STABLE | Gauge | Number of pending pods, by the queue type. 'active' means number of pods in activeQ; 'backoff' means number of pods in backoffQ; 'unschedulable' means number of pods in unschedulablePods. | queue
`scheduler_pod_scheduling_attempts` | STABLE | Histogram | Number of attempts to successfully schedule a pod. |
`scheduler_pod_scheduling_duration_seconds` | STABLE | Histogram | E2e latency for a pod being scheduled which may include multiple scheduling attempts. | attempts
`scheduler_preemption_attempts_total` | STABLE | Counter | Total preemption attempts in the cluster till now |
`scheduler_preemption_victims` | STABLE | Histogram | Number of selected preemption victims |
`scheduler_queue_incoming_pods_total` | STABLE | Counter | Number of pods added to scheduling queues by event and queue type. | event, queue
`scheduler_schedule_attempts_total` | STABLE | Counter | Number of attempts to schedule pods, by the result. 'unschedulable' means a pod could not be scheduled, while 'error' means an internal scheduler problem. | profile, result
`scheduler_scheduling_attempt_duration_seconds` | STABLE | Histogram | Scheduling attempt latency in seconds (scheduling algorithm + binding) | profile, result

### Alpha metrics

Name | Stability Level | Type | Help | Labels
---- | ---- | ---- | ---- | ----
 | ALPHA | Counter | Counter of OpenAPI v2 spec regeneration count broken down by causing APIService name and reason. | apiservice, reason
`aggregator_openapi_v2_regeneration_duration` | ALPHA | Gauge | Gauge of OpenAPI v2 spec regeneration duration in seconds. | reason
`aggregator_unavailable_apiservice` | ALPHA | Custom | Gauge of APIServices which are marked as unavailable broken down by APIService name. | name
`aggregator_unavailable_apiservice_total` | ALPHA | Counter | Counter of APIServices which are marked as unavailable broken down by APIService name and reason. | name, reason
`apiextensions_openapi_v2_regeneration_count` | ALPHA | Counter | Counter of OpenAPI v2 spec regeneration count broken down by causing CRD name and reason. | crd, reason
`apiextensions_openapi_v3_regeneration_count` | ALPHA | Counter | Counter of OpenAPI v3 spec regeneration count broken down by group, version, causing CRD and reason. | crd, group, reason, version
`apiserver_admission_step_admission_duration_seconds_summary` | ALPHA | Summary | Admission sub-step latency summary in seconds, broken out for each operation and API resource and step type (validate or admit). | operation, rejected, type
`apiserver_admission_webhook_fail_open_count` | ALPHA | Counter | Admission webhook fail open count, identified by name and broken out for each admission type (validating or mutating). | name, type
`apiserver_admission_webhook_rejection_count` | ALPHA | Counter | Admission webhook rejection count, identified by name and broken out for each admission type (validating or admit) and operation. Additional labels specify an error type (calling_webhook_error or apiserver_internal_error if an error occurred; no_error otherwise) and optionally a non-zero rejection code if the webhook rejects the request with an HTTP status code (honored by the apiserver when the code is greater or equal to 400). Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded. | error_type, name, operation, rejection_code, type
`apiserver_admission_webhook_request_total` | ALPHA | Counter | Admission webhook request total, identified by name and broken out for each admission type (validating or mutating) and operation. Additional labels specify whether the request was rejected or not and an HTTP status code. Codes greater than 600 are truncated to 600, to keep the metrics cardinality bounded. | code, name, operation, rejected, type
`apiserver_audit_error_total` | ALPHA | Counter | Counter of audit events that failed to be audited properly. Plugin identifies the plugin affected by the error. | plugin
`apiserver_audit_event_total` | ALPHA | Counter | Counter of audit events generated and sent to the audit backend. |
`apiserver_audit_level_total` | ALPHA | Counter | Counter of policy levels for audit events (1 per request). | level
`apiserver_audit_requests_rejected_total` | ALPHA | Counter | Counter of apiserver requests rejected due to an error in audit logging backend. |
`apiserver_cache_list_fetched_objects_total` | ALPHA | Counter | Number of objects read from watch cache in the course of serving a LIST request | index, resource_prefix
`apiserver_cache_list_returned_objects_total` | ALPHA | Counter | Number of objects returned for a LIST request from watch cache | resource_prefix
`apiserver_cache_list_total` | ALPHA | Counter | Number of LIST requests served from watch cache | index, resource_prefix
`apiserver_cel_compilation_duration_seconds` | ALPHA | Histogram | |
`apiserver_cel_evaluation_duration_seconds` | ALPHA | Histogram | |
`apiserver_certificates_registry_csr_honored_duration_total` | ALPHA | Counter | Total number of issued CSRs with a requested duration that was honored, sliced by signer (only kubernetes.io signer names are specifically identified) | signerName
`apiserver_certificates_registry_csr_requested_duration_total` | ALPHA | Counter | Total number of issued CSRs with a requested duration, sliced by signer (only kubernetes.io signer names are specifically identified) | signerName
`apiserver_client_certificate_expiration_seconds` | ALPHA | Histogram | Distribution of the remaining lifetime on the certificate used to authenticate a request. |
`apiserver_crd_webhook_conversion_duration_seconds` | ALPHA | Histogram | CRD webhook conversion duration in seconds | crd_name, from_version, succeeded, to_version
`apiserver_current_inqueue_requests` | ALPHA | Gauge | Maximal number of queued requests in this apiserver per request kind in last second. | request_kind
`apiserver_delegated_authn_request_duration_seconds` | ALPHA | Histogram | Request latency in seconds. Broken down by status code. | code
`apiserver_delegated_authn_request_total` | ALPHA | Counter | Number of HTTP requests partitioned by status code. | code
`apiserver_delegated_authz_request_duration_seconds` | ALPHA | Histogram | Request latency in seconds. Broken down by status code. | code
`apiserver_delegated_authz_request_total` | ALPHA | Counter | Number of HTTP requests partitioned by status code. | code
`apiserver_egress_dialer_dial_duration_seconds` | ALPHA | Histogram | Dial latency histogram in seconds, labeled by the protocol (http-connect or grpc), transport (tcp or uds) | protocol, transport
`apiserver_egress_dialer_dial_failure_count` | ALPHA | Counter | Dial failure count, labeled by the protocol (http-connect or grpc), transport (tcp or uds), and stage (connect or proxy). The stage indicates at which stage the dial failed | protocol, stage, transport
`apiserver_egress_dialer_dial_start_total` | ALPHA | Counter | Dial starts, labeled by the protocol (http-connect or grpc) and transport (tcp or uds). | protocol, transport
`apiserver_envelope_encryption_dek_cache_fill_percent` | ALPHA | Gauge | Percent of the cache slots currently occupied by cached DEKs. |
`apiserver_envelope_encryption_dek_cache_inter_arrival_time_seconds` | ALPHA | Histogram | Time (in seconds) of inter arrival of transformation requests. | transformation_type
`apiserver_flowcontrol_current_executing_requests` | ALPHA | Gauge | Number of requests in initial (for a WATCH) or any (for a non-WATCH) execution stage in the API Priority and Fairness subsystem | flow_schema, priority_level
`apiserver_flowcontrol_current_inqueue_requests` | ALPHA | Gauge | Number of requests currently pending in queues of the API Priority and Fairness subsystem | flow_schema, priority_level
`apiserver_flowcontrol_current_limit_seats` | ALPHA | Gauge | current derived number of execution seats available to each priority level | priority_level
`apiserver_flowcontrol_current_r` | ALPHA | Gauge | R(time of last change) | priority_level
`apiserver_flowcontrol_demand_seats` | ALPHA | TimingRatioHistogram | Observations, at the end of every nanosecond, of (the number of seats each priority level could use) / (nominal number of seats for that level) | priority_level
`apiserver_flowcontrol_demand_seats_average` | ALPHA | Gauge | Time-weighted average, over last adjustment period, of demand_seats | priority_level
`apiserver_flowcontrol_demand_seats_high_watermark` | ALPHA | Gauge | High watermark, over last adjustment period, of demand_seats | priority_level
`apiserver_flowcontrol_demand_seats_smoothed` | ALPHA | Gauge | Smoothed seat demands | priority_level
`apiserver_flowcontrol_demand_seats_stdev` | ALPHA | Gauge | Time-weighted standard deviation, over last adjustment period, of demand_seats | priority_level
`apiserver_flowcontrol_dispatch_r` | ALPHA | Gauge | R(time of last dispatch) | priority_level
`apiserver_flowcontrol_dispatched_requests_total` | ALPHA | Counter | Number of requests executed by API Priority and Fairness subsystem | flow_schema, priority_level
`apiserver_flowcontrol_epoch_advance_total` | ALPHA | Counter | Number of times the queueset's progress meter jumped backward | priority_level, success
`apiserver_flowcontrol_latest_s` | ALPHA | Gauge | S(most recently dispatched request) | priority_level
`apiserver_flowcontrol_lower_limit_seats` | ALPHA | Gauge | Configured lower bound on number of execution seats available to each priority level | priority_level
`apiserver_flowcontrol_next_discounted_s_bounds` | ALPHA | Gauge | min and max, over queues, of S(oldest waiting request in queue) - estimated work in progress | bound, priority_level
`apiserver_flowcontrol_next_s_bounds` | ALPHA | Gauge | min and max, over queues, of S(oldest waiting request in queue) | bound, priority_level
`apiserver_flowcontrol_nominal_limit_seats` | ALPHA | Gauge | Nominal number of execution seats configured for each priority level | priority_level
`apiserver_flowcontrol_priority_level_request_utilization` | ALPHA | TimingRatioHistogram | Observations, at the end of every nanosecond, of number of requests (as a fraction of the relevant limit) waiting or in any stage of execution (but only initial stage for WATCHes) | phase, priority_level
`apiserver_flowcontrol_priority_level_seat_utilization` | ALPHA | TimingRatioHistogram | Observations, at the end of every nanosecond, of utilization of seats for any stage of execution (but only initial stage for WATCHes) | priority_level (const label: phase:executing)
`apiserver_flowcontrol_read_vs_write_current_requests` | ALPHA | TimingRatioHistogram | Observations, at the end of every nanosecond, of the number of requests (as a fraction of the relevant limit) waiting or in regular stage of execution | phase, request_kind
`apiserver_flowcontrol_rejected_requests_total` | ALPHA | Counter | Number of requests rejected by API Priority and Fairness subsystem | flow_schema, priority_level, reason
`apiserver_flowcontrol_request_concurrency_in_use` | ALPHA | Gauge | Concurrency (number of seats) occupied by the currently executing (initial stage for a WATCH, any stage otherwise) requests in the API Priority and Fairness subsystem | flow_schema, priority_level
`apiserver_flowcontrol_request_concurrency_limit` | ALPHA | Gauge | Shared concurrency limit in the API Priority and Fairness subsystem | priority_level
`apiserver_flowcontrol_request_dispatch_no_accommodation_total` | ALPHA | Counter | Number of times a dispatch attempt resulted in a non accommodation due to lack of available seats | flow_schema, priority_level
`apiserver_flowcontrol_request_execution_seconds` | ALPHA | Histogram | Duration of initial stage (for a WATCH) or any (for a non-WATCH) stage of request execution in the API Priority and Fairness subsystem | flow_schema, priority_level, type
`apiserver_flowcontrol_request_queue_length_after_enqueue` | ALPHA | Histogram | Length of queue in the API Priority and Fairness subsystem, as seen by each request after it is enqueued | flow_schema, priority_level
`apiserver_flowcontrol_request_wait_duration_seconds` | ALPHA | Histogram | Length of time a request spent waiting in its queue | execute, flow_schema, priority_level
`apiserver_flowcontrol_seat_fair_frac` | ALPHA | Gauge | Fair fraction of server's concurrency to allocate to each priority level that can use it |
`apiserver_flowcontrol_target_seats` | ALPHA | Gauge | Seat allocation targets | priority_level
`apiserver_flowcontrol_upper_limit_seats` | ALPHA | Gauge | Configured upper bound on number of execution seats available to each priority level | priority_level
`apiserver_flowcontrol_watch_count_samples` | ALPHA | Histogram | count of watchers for mutating requests in API Priority and Fairness | flow_schema, priority_level
`apiserver_flowcontrol_work_estimated_seats` | ALPHA | Histogram | Number of estimated seats (maximum of initial and final seats) associated with requests in API Priority and Fairness | flow_schema, priority_level
`apiserver_init_events_total` | ALPHA | Counter | Counter of init events processed in watch cache broken by resource type. | resource
`apiserver_kube_aggregator_x509_insecure_sha1_total` | ALPHA | Counter | Counts the number of requests to servers with insecure SHA1 signatures in their serving certificate OR the number of connection failures due to the insecure SHA1 signatures (either/or, based on the runtime environment) |
`apiserver_kube_aggregator_x509_missing_san_total` | ALPHA | Counter | Counts the number of requests to servers missing SAN extension in their serving certificate OR the number of connection failures due to the lack of x509 certificate SAN extension missing (either/or, based on the runtime environment) |
`apiserver_request_aborts_total` | ALPHA | Counter | Number of requests which apiserver aborted possibly due to a timeout, for each group, version, verb, resource, subresource and scope | group, resource, scope, subresource, verb, version
`apiserver_request_body_sizes` | ALPHA | Histogram | Apiserver request body sizes broken out by size. | resource, verb
`apiserver_request_filter_duration_seconds` | ALPHA | Histogram | Request filter latency distribution in seconds, for each filter type | filter
`apiserver_request_post_timeout_total` | ALPHA | Counter | Tracks the activity of the request handlers after the associated requests have been timed out by the apiserver | source, status
`apiserver_request_sli_duration_seconds` | ALPHA | Histogram | Response latency distribution (not counting webhook duration) in seconds for each verb, group, version, resource, subresource, scope and component. | component, group, resource, scope, subresource, verb, version
`apiserver_request_slo_duration_seconds` | ALPHA | Histogram | Response latency distribution (not counting webhook duration) in seconds for each verb, group, version, resource, subresource, scope and component. Deprecated in 1.27.0. | component, group, resource, scope, subresource, verb, version
`apiserver_request_terminations_total` | ALPHA | Counter | Number of requests which apiserver terminated in self-defense. | code, component, group, resource, scope, subresource, verb, version
`apiserver_request_timestamp_comparison_time` | ALPHA | Histogram | Time taken for comparison of old vs new objects in UPDATE or PATCH requests | code_path
`apiserver_selfrequest_total` | ALPHA | Counter | Counter of apiserver self-requests broken out for each verb, API resource and subresource. | resource, subresource, verb
`apiserver_storage_data_key_generation_duration_seconds` | ALPHA | Histogram | Latencies in seconds of data encryption key(DEK) generation operations. |
`apiserver_storage_data_key_generation_failures_total` | ALPHA | Counter | Total number of failed data encryption key(DEK) generation operations. |
`apiserver_storage_db_total_size_in_bytes` | ALPHA | Gauge | Total size of the storage database file physically allocated in bytes. | endpoint
`apiserver_storage_envelope_transformation_cache_misses_total` | ALPHA | Counter | Total number of cache misses while accessing key decryption key(KEK). |
`apiserver_storage_list_evaluated_objects_total` | ALPHA | Counter | Number of objects tested in the course of serving a LIST request from storage | resource
`apiserver_storage_list_fetched_objects_total` | ALPHA | Counter | Number of objects read from storage in the course of serving a LIST request | resource
`apiserver_storage_list_returned_objects_total` | ALPHA | Counter | Number of objects returned for a LIST request from storage | resource
`apiserver_storage_list_total` | ALPHA | Counter | Number of LIST requests served from storage | resource
`apiserver_storage_transformation_duration_seconds` | ALPHA | Histogram | Latencies in seconds of value transformation operations. | transformation_type
`apiserver_storage_transformation_operations_total` | ALPHA | Counter | Total number of transformations. | status, transformation_type, transformer_prefix
`apiserver_terminated_watchers_total` | ALPHA | Counter | Counter of watchers closed due to unresponsiveness broken by resource type. | resource
`apiserver_tls_handshake_errors_total` | ALPHA | Counter | Number of requests dropped with 'TLS handshake error from' error |
`apiserver_validating_admission_policy_check_duration_seconds` | ALPHA | Histogram | Validation admission latency for individual validation expressions in seconds, labeled by policy and param resource, further including binding, state and enforcement action taken. | enforcement_action, params, policy, policy_binding, state, validation_expression
`apiserver_validating_admission_policy_check_total` | ALPHA | Counter | Validation admission policy check total, labeled by policy and param resource, and further identified by binding, validation expression, enforcement action taken, and state. | enforcement_action, policy, policy_binding, state
    enforcement_action
    params
    policy
    policy_binding
    state
    validation_expression
    None
    apiserver_validating_admission_policy_definition_total ALPHA Counter Validation admission policy count total, labeled by state and enforcement action.
    enforcement_action
    state
    None apiserver_watch_cache_events_dispatched_total ALPHA Counter Counter of events dispatched in watch cache broken by resource type.
    resource
    None apiserver_watch_cache_initializations_total ALPHA Counter Counter of watch cache initializations broken by resource type.
    resource
    None apiserver_watch_events_sizes ALPHA Histogram Watch event size distribution in bytes
    group
    kind
    version
    None apiserver_watch_events_total ALPHA Counter Number of events sent in watch clients
    group
    kind
    version
    None apiserver_webhooks_x509_insecure_sha1_total ALPHA Counter Counts the number of requests to servers with insecure SHA1 signatures in their serving certificate OR the number of connection failures due to the insecure SHA1 signatures (either/or, based on the runtime environment) None None apiserver_webhooks_x509_missing_san_total ALPHA Counter Counts the number of requests to servers missing SAN extension in their serving certificate OR the number of connection failures due to the lack of x509 certificate SAN extension missing (either/or, based on the runtime environment) None None attachdetach_controller_forced_detaches ALPHA Counter Number of times the A/D Controller performed a forced detach None None attachdetach_controller_total_volumes ALPHA Custom Number of volumes in A/D Controller
    plugin_name
    state
    None authenticated_user_requests ALPHA Counter Counter of authenticated requests broken out by username.
    username
    None authentication_attempts ALPHA Counter Counter of authenticated attempts.
    result
    None authentication_duration_seconds ALPHA Histogram Authentication duration in seconds broken out by result.
    result
    None authentication_token_cache_active_fetch_count ALPHA Gauge
    status
    None authentication_token_cache_fetch_total ALPHA Counter
    status
    None authentication_token_cache_request_duration_seconds ALPHA Histogram
    status
    None authentication_token_cache_request_total ALPHA Counter
    status
    None cloudprovider_aws_api_request_duration_seconds ALPHA Histogram Latency of AWS API calls
    request
    None cloudprovider_aws_api_request_errors ALPHA Counter AWS API errors
    request
    None cloudprovider_aws_api_throttled_requests_total ALPHA Counter AWS API throttled requests
    operation_name
    None cloudprovider_azure_api_request_duration_seconds ALPHA Histogram Latency of an Azure API call
    request
    resource_group
    source
    subscription_id
    None cloudprovider_azure_api_request_errors ALPHA Counter Number of errors for an Azure API call
    request
    resource_group
    source
    subscription_id
    None cloudprovider_azure_api_request_ratelimited_count ALPHA Counter Number of rate limited Azure API calls
    request
    resource_group
    source
    subscription_id
    None cloudprovider_azure_api_request_throttled_count ALPHA Counter Number of throttled Azure API calls
    request
    resource_group
    source
    subscription_id
    None cloudprovider_azure_op_duration_seconds ALPHA Histogram Latency of an Azure service operation
    request
    resource_group
    source
    subscription_id
    None cloudprovider_azure_op_failure_count ALPHA Counter Number of failed Azure service operations
    request
    resource_group
    source
    subscription_id
    None cloudprovider_gce_api_request_duration_seconds ALPHA Histogram Latency of a GCE API call
    region
    request
    version
    zone
    None cloudprovider_gce_api_request_errors ALPHA Counter Number of errors for an API call
    region
    request
    version
    zone
    None cloudprovider_vsphere_api_request_duration_seconds ALPHA Histogram Latency of vsphere api call
    request
    None cloudprovider_vsphere_api_request_errors ALPHA Counter vsphere Api errors
    request
    None cloudprovider_vsphere_operation_duration_seconds ALPHA Histogram Latency of vsphere operation call
    operation
    None cloudprovider_vsphere_operation_errors ALPHA Counter vsphere operation errors
    operation
    None cloudprovider_vsphere_vcenter_versions ALPHA Custom Versions for connected vSphere vCenters
    hostname
    version
    build
    None container_cpu_usage_seconds_total ALPHA Custom Cumulative cpu time consumed by the container in core-seconds
    container
    pod
    namespace
    None container_memory_working_set_bytes ALPHA Custom Current working set of the container in bytes
    container
    pod
    namespace
    None container_start_time_seconds ALPHA Custom Start time of the container since unix epoch in seconds
    container
    pod
    namespace
    None cronjob_controller_cronjob_job_creation_skew_duration_seconds ALPHA Histogram Time between when a cronjob is scheduled to be run, and when the corresponding job is created None None csi_operations_seconds ALPHA Histogram Container Storage Interface operation duration with gRPC error code status total
    driver_name
    grpc_status_code
    method_name
    migrated
    None endpoint_slice_controller_changes ALPHA Counter Number of EndpointSlice changes
    operation
    None endpoint_slice_controller_desired_endpoint_slices ALPHA Gauge Number of EndpointSlices that would exist with perfect endpoint allocation None None endpoint_slice_controller_endpoints_added_per_sync ALPHA Histogram Number of endpoints added on each Service sync None None endpoint_slice_controller_endpoints_desired ALPHA Gauge Number of endpoints desired None None endpoint_slice_controller_endpoints_removed_per_sync ALPHA Histogram Number of endpoints removed on each Service sync None None endpoint_slice_controller_endpointslices_changed_per_sync ALPHA Histogram Number of EndpointSlices changed on each Service sync
    topology
    None endpoint_slice_controller_num_endpoint_slices ALPHA Gauge Number of EndpointSlices None None endpoint_slice_controller_syncs ALPHA Counter Number of EndpointSlice syncs
    result
    None endpoint_slice_mirroring_controller_addresses_skipped_per_sync ALPHA Histogram Number of addresses skipped on each Endpoints sync due to being invalid or exceeding MaxEndpointsPerSubset None None endpoint_slice_mirroring_controller_changes ALPHA Counter Number of EndpointSlice changes
    operation
    None endpoint_slice_mirroring_controller_desired_endpoint_slices ALPHA Gauge Number of EndpointSlices that would exist with perfect endpoint allocation None None endpoint_slice_mirroring_controller_endpoints_added_per_sync ALPHA Histogram Number of endpoints added on each Endpoints sync None None endpoint_slice_mirroring_controller_endpoints_desired ALPHA Gauge Number of endpoints desired None None endpoint_slice_mirroring_controller_endpoints_removed_per_sync ALPHA Histogram Number of endpoints removed on each Endpoints sync None None endpoint_slice_mirroring_controller_endpoints_sync_duration ALPHA Histogram Duration of syncEndpoints() in seconds None None endpoint_slice_mirroring_controller_endpoints_updated_per_sync ALPHA Histogram Number of endpoints updated on each Endpoints sync None None endpoint_slice_mirroring_controller_num_endpoint_slices ALPHA Gauge Number of EndpointSlices None None ephemeral_volume_controller_create_failures_total ALPHA Counter Number of PersistenVolumeClaims creation requests None None ephemeral_volume_controller_create_total ALPHA Counter Number of PersistenVolumeClaims creation requests None None etcd_bookmark_counts ALPHA Gauge Number of etcd bookmarks (progress notify events) split by kind.
    resource
    None etcd_lease_object_counts ALPHA Histogram Number of objects attached to a single etcd lease. None None etcd_request_duration_seconds ALPHA Histogram Etcd request latency in seconds for each operation and object type.
    operation
    type
    None etcd_version_info ALPHA Gauge Etcd server's binary version
    binary_version
    None field_validation_request_duration_seconds ALPHA Histogram Response latency distribution in seconds for each field validation value and whether field validation is enabled or not
    enabled
    field_validation
    None garbagecollector_controller_resources_sync_error_total ALPHA Counter Number of garbage collector resources sync errors None None get_token_count ALPHA Counter Counter of total Token() requests to the alternate token source None None get_token_fail_count ALPHA Counter Counter of failed Token() requests to the alternate token source None None job_controller_job_finished_total ALPHA Counter The number of finished job
    completion_mode
    reason
    result
    None job_controller_job_pods_finished_total ALPHA Counter The number of finished Pods that are fully tracked
    completion_mode
    result
    None job_controller_job_sync_duration_seconds ALPHA Histogram The time it took to sync a job
    action
    completion_mode
    result
    None job_controller_job_sync_total ALPHA Counter The number of job syncs
    action
    completion_mode
    result
    None
    job_controller_pod_failures_handled_by_failure_policy_total ALPHA Counter `The number of failed Pods handled by failure policy with, \t\t\trespect to the failure policy action applied based on the matched, \t\t\trule. Possible values of the action label correspond to the, \t\t\tpossible values for the failure policy rule action, which are:, \t\t\t\"FailJob\", \"Ignore\" and \"Count\".`
    action
    None job_controller_terminated_pods_tracking_finalizer_total ALPHA Counter `The number of terminated pods (phase=Failed|Succeeded), that have the finalizer batch.kubernetes.io/job-tracking, The event label can be \"add\" or \"delete\".`
    event
    None kube_apiserver_clusterip_allocator_allocated_ips ALPHA Gauge Gauge measuring the number of allocated IPs for Services
    cidr
    None kube_apiserver_clusterip_allocator_allocation_errors_total ALPHA Counter Number of errors trying to allocate Cluster IPs
    cidr
    scope
    None kube_apiserver_clusterip_allocator_allocation_total ALPHA Counter Number of Cluster IPs allocations
    cidr
    scope
    None kube_apiserver_clusterip_allocator_available_ips ALPHA Gauge Gauge measuring the number of available IPs for Services
    cidr
    None kube_apiserver_pod_logs_pods_logs_backend_tls_failure_total ALPHA Counter Total number of requests for pods/logs that failed due to kubelet server TLS verification None None kube_apiserver_pod_logs_pods_logs_insecure_backend_total ALPHA Counter Total number of requests for pods/logs sliced by usage type: enforce_tls, skip_tls_allowed, skip_tls_denied
    usage
    None kube_pod_resource_limit ALPHA Custom Resources limit for workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any.
    namespace
    pod
    node
    scheduler
    priority
    resource
    unit
    None kube_pod_resource_request ALPHA Custom Resources requested by workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any.
    namespace
    pod
    node
    scheduler
    priority
    resource
    unit
    None kubelet_certificate_manager_client_expiration_renew_errors ALPHA Counter Counter of certificate renewal errors. None None kubelet_certificate_manager_client_ttl_seconds ALPHA Gauge Gauge of the TTL (time-to-live) of the Kubelet's client certificate. The value is in seconds until certificate expiry (negative if already expired). If client certificate is invalid or unused, the value will be +INF. None None kubelet_certificate_manager_server_rotation_seconds ALPHA Histogram Histogram of the number of seconds the previous certificate lived before being rotated. None None kubelet_certificate_manager_server_ttl_seconds ALPHA Gauge Gauge of the shortest TTL (time-to-live) of the Kubelet's serving certificate. The value is in seconds until certificate expiry (negative if already expired). If serving certificate is invalid or unused, the value will be +INF. None None kubelet_cgroup_manager_duration_seconds ALPHA Histogram Duration in seconds for cgroup manager operations. Broken down by method.
    operation_type
    None kubelet_container_log_filesystem_used_bytes ALPHA Custom Bytes used by the container's logs on the filesystem.
    uid
    namespace
    pod
    container
    None kubelet_containers_per_pod_count ALPHA Histogram The number of containers per pod. None None kubelet_cpu_manager_pinning_errors_total ALPHA Counter The number of cpu core allocations which required pinning failed. None None kubelet_cpu_manager_pinning_requests_total ALPHA Counter The number of cpu core allocations which required pinning. kubelet_credential_provider_plugin_duration ALPHA Histogram Duration of execution in seconds for credential provider plugin
    plugin_name
    kubelet_credential_provider_plugin_errors ALPHA Counter Number of errors from credential provider plugin
    plugin_name
    None None kubelet_device_plugin_alloc_duration_seconds ALPHA Histogram Duration in seconds to serve a device plugin Allocation request. Broken down by resource name.
    resource_name
    None kubelet_device_plugin_registration_total ALPHA Counter Cumulative number of device plugin registrations. Broken down by resource name.
    resource_name
    None kubelet_eviction_stats_age_seconds ALPHA Histogram Time between when stats are collected, and when pod is evicted based on those stats by eviction signal
    eviction_signal
    None kubelet_evictions ALPHA Counter Cumulative number of pod evictions by eviction signal
    eviction_signal
    None kubelet_graceful_shutdown_end_time_seconds ALPHA Gauge Last graceful shutdown start time since unix epoch in seconds None None kubelet_graceful_shutdown_start_time_seconds ALPHA Gauge Last graceful shutdown start time since unix epoch in seconds None None kubelet_http_inflight_requests ALPHA Gauge Number of the inflight http requests
    long_running
    method
    path
    server_type
    None kubelet_http_requests_duration_seconds ALPHA Histogram Duration in seconds to serve http requests
    long_running
    method
    path
    server_type
    None kubelet_http_requests_total ALPHA Counter Number of the http requests received since the server started
    long_running
    method
    path
    server_type
    None kubelet_kubelet_credential_provider_plugin_duration ALPHA Histogram Duration of execution in seconds for credential provider plugin
    plugin_name
    None kubelet_kubelet_credential_provider_plugin_errors ALPHA Counter Number of errors from credential provider plugin
    plugin_name
    None
    kubelet_lifecycle_handler_http_fallbacks_total ALPHA Counter The number of times lifecycle handlers successfully fell back to http from https. None None kubelet_managed_ephemeral_containers ALPHA Gauge Current number of ephemeral containers in pods managed by this kubelet. Ephemeral containers will be ignored if disabled by the EphemeralContainers feature gate, and this number will be 0. None None kubelet_node_name ALPHA Gauge The node's name. The count is always 1.
    node
    None kubelet_pleg_discard_events ALPHA Counter The number of discard events in PLEG. None None kubelet_pleg_last_seen_seconds ALPHA Gauge Timestamp in seconds when PLEG was last seen active. None None kubelet_pleg_relist_duration_seconds ALPHA Histogram Duration in seconds for relisting pods in PLEG. None None kubelet_pleg_relist_interval_seconds ALPHA Histogram Interval in seconds between relisting in PLEG. None None kubelet_pod_resources_endpoint_errors_get_allocatable ALPHA Counter Number of requests to the PodResource GetAllocatableResources endpoint which returned error. Broken down by server api version.
    server_api_version
    None kubelet_pod_resources_endpoint_errors_list ALPHA Counter Number of requests to the PodResource List endpoint which returned error. Broken down by server api version.
    server_api_version
    None kubelet_pod_resources_endpoint_requests_get_allocatable ALPHA Counter Number of requests to the PodResource GetAllocatableResources endpoint. Broken down by server api version.
    server_api_version
    None kubelet_pod_resources_endpoint_requests_list ALPHA Counter Number of requests to the PodResource List endpoint. Broken down by server api version.
    server_api_version
    None kubelet_pod_resources_endpoint_requests_total ALPHA Counter Cumulative number of requests to the PodResource endpoint. Broken down by server api version.
    server_api_version
    None kubelet_pod_start_duration_seconds ALPHA Histogram Duration in seconds from kubelet seeing a pod for the first time to the pod starting to run kubelet_pod_start_sli_duration_seconds ALPHA Histogram Duration in seconds to start a pod, excluding time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via watch None None kubelet_pod_status_sync_duration_seconds ALPHA Histogram Duration in seconds to sync a pod status update. Measures time from detection of a change to pod status until the API is successfully updated for that pod, even if multiple intevening changes to pod status occur. None None kubelet_pod_worker_duration_seconds ALPHA Histogram Duration in seconds to sync a single pod. Broken down by operation type: create, update, or sync
    operation_type
    None kubelet_pod_worker_start_duration_seconds ALPHA Histogram Duration in seconds from kubelet seeing a pod to starting a worker. None None kubelet_preemptions ALPHA Counter Cumulative number of pod preemptions by preemption resource
    preemption_signal
    None kubelet_run_podsandbox_duration_seconds ALPHA Histogram Duration in seconds of the run_podsandbox operations. Broken down by RuntimeClass.Handler.
    runtime_handler
    None kubelet_run_podsandbox_errors_total ALPHA Counter Cumulative number of the run_podsandbox operation errors by RuntimeClass.Handler.
    runtime_handler
    None kubelet_running_containers ALPHA Gauge Number of containers currently running
    container_state
    None kubelet_running_pods ALPHA Gauge Number of pods that have a running pod sandbox None None kubelet_runtime_operations_duration_seconds ALPHA Histogram Duration in seconds of runtime operations. Broken down by operation type.
    operation_type
    None kubelet_runtime_operations_errors_total ALPHA Counter Cumulative number of runtime operation errors by operation type.
    operation_type
    None kubelet_runtime_operations_total ALPHA Counter Cumulative number of runtime operations by operation type.
    operation_type
    None kubelet_server_expiration_renew_errors ALPHA Counter Counter of certificate renewal errors. None None kubelet_started_containers_errors_total ALPHA Counter Cumulative number of errors when starting containers
    code
    container_type
    None kubelet_started_containers_total ALPHA Counter Cumulative number of containers started
    container_type
    None kubelet_started_host_process_containers_errors_total ALPHA Counter Cumulative number of errors when starting hostprocess containers. This metric will only be collected on Windows and requires WindowsHostProcessContainers feature gate to be enabled.
    code
    container_type
    None kubelet_started_host_process_containers_total ALPHA Counter Cumulative number of hostprocess containers started. This metric will only be collected on Windows and requires WindowsHostProcessContainers feature gate to be enabled.
    container_type
    None kubelet_started_pods_errors_total ALPHA Counter Cumulative number of errors when starting pods None None kubelet_started_pods_total ALPHA Counter Cumulative number of pods started None None kubelet_volume_metric_collection_duration_seconds ALPHA Histogram Duration in seconds to calculate volume stats
    metric_source
    None kubelet_volume_stats_available_bytes ALPHA Custom Number of available bytes in the volume
    namespace
    persistentvolumeclaim
    None kubelet_volume_stats_capacity_bytes ALPHA Custom Capacity in bytes of the volume
    namespace
    persistentvolumeclaim
    None kubelet_volume_stats_health_status_abnormal ALPHA Custom Abnormal volume health status. The count is either 1 or 0. 1 indicates the volume is unhealthy, 0 indicates volume is healthy
    namespace
    persistentvolumeclaim
    None kubelet_volume_stats_inodes ALPHA Custom Maximum number of inodes in the volume
    namespace
    persistentvolumeclaim
    None kubelet_volume_stats_inodes_free ALPHA Custom Number of free inodes in the volume
    namespace
    persistentvolumeclaim
    None kubelet_volume_stats_inodes_used ALPHA Custom Number of used inodes in the volume
    namespace
    persistentvolumeclaim
    None kubelet_volume_stats_used_bytes ALPHA Custom Number of used bytes in the volume
    namespace
    persistentvolumeclaim
    None kubeproxy_network_programming_duration_seconds ALPHA Histogram In Cluster Network Programming Latency in seconds None None kubeproxy_sync_proxy_rules_duration_seconds ALPHA Histogram SyncProxyRules latency in seconds None None kubeproxy_sync_proxy_rules_endpoint_changes_pending ALPHA Gauge Pending proxy rules Endpoint changes None None kubeproxy_sync_proxy_rules_endpoint_changes_total ALPHA Counter Cumulative proxy rules Endpoint changes kubeproxy_sync_proxy_rules_iptables_partial_restore_failures_total ALPHA Counter Cumulative proxy iptables partial restore failures None None kubeproxy_sync_proxy_rules_iptables_restore_failures_total ALPHA Counter Cumulative proxy iptables restore failures None None kubeproxy_sync_proxy_rules_iptables_total ALPHA Gauge Number of proxy iptables rules programmed
    table
    None kubeproxy_sync_proxy_rules_last_queued_timestamp_seconds ALPHA Gauge The last time a sync of proxy rules was queued None None kubeproxy_sync_proxy_rules_last_timestamp_seconds ALPHA Gauge The last time proxy rules were successfully synced None None kubeproxy_sync_proxy_rules_no_local_endpoints_total ALPHA Gauge Number of services with a Local traffic policy and no endpoints
    traffic_policy
    None kubeproxy_sync_proxy_rules_service_changes_pending ALPHA Gauge Pending proxy rules Service changes None None kubeproxy_sync_proxy_rules_service_changes_total ALPHA Counter Cumulative proxy rules Service changes None None kubernetes_build_info ALPHA Gauge A metric with a constant '1' value labeled by major, minor, git version, git commit, git tree state, build date, Go version, and compiler from which Kubernetes was built, and platform on which it is running.
    build_date
    compiler
    git_commit
    git_tree_state
    git_version
    go_version
    major
    minor
    platform
    None kubernetes_feature_enabled ALPHA Gauge This metric records the data about the stage and enablement of a k8s feature.
    name
    stage
    None kubernetes_healthcheck ALPHA Gauge This metric records the result of a single healthcheck.
    name
    type
    None kubernetes_healthchecks_total ALPHA Counter This metric records the results of all healthcheck.
    name
    status
    type
    None leader_election_master_status ALPHA Gauge Gauge of if the reporting system is master of the relevant lease, 0 indicates backup, 1 indicates master. 'name' is the string used to identify the lease. Please make sure to group by name.
    name
    None node_authorizer_graph_actions_duration_seconds ALPHA Histogram Histogram of duration of graph actions in node authorizer.
    operation
    None node_collector_evictions_number ALPHA Counter Number of Node evictions that happened since current instance of NodeController started, This metric is replaced by node_collector_evictions_total.
    zone
    1.24.0 None node_collector_unhealthy_nodes_in_zone ALPHA Gauge Gauge measuring number of not Ready Nodes per zones.
    zone
    None node_collector_zone_health ALPHA Gauge Gauge measuring percentage of healthy nodes per zone.
    zone
    None node_collector_zone_size ALPHA Gauge Gauge measuring number of registered Nodes per zones.
    zone
    None node_cpu_usage_seconds_total ALPHA Custom Cumulative cpu time consumed by the node in core-seconds None None node_ipam_controller_cidrset_allocation_tries_per_request ALPHA Histogram Number of endpoints added on each Service sync
    clusterCIDR
    None node_ipam_controller_cidrset_cidrs_allocations_total ALPHA Counter Counter measuring total number of CIDR allocations.
    clusterCIDR
    None node_ipam_controller_cidrset_cidrs_releases_total ALPHA Counter Counter measuring total number of CIDR releases.
    clusterCIDR
    None node_ipam_controller_cidrset_usage_cidrs ALPHA Gauge Gauge measuring percentage of allocated CIDRs.
    clusterCIDR
    None node_ipam_controller_multicidrset_allocation_tries_per_request ALPHA Histogram Histogram measuring CIDR allocation tries per request.
    clusterCIDR
    None node_ipam_controller_multicidrset_cidrs_allocations_total ALPHA Counter Counter measuring total number of CIDR allocations.
    clusterCIDR
    None node_ipam_controller_multicidrset_cidrs_releases_total ALPHA Counter Counter measuring total number of CIDR releases.
    clusterCIDR
    None node_ipam_controller_multicidrset_usage_cidrs ALPHA Gauge Gauge measuring percentage of allocated CIDRs.
    clusterCIDR
    None node_memory_working_set_bytes ALPHA Custom Current working set of the node in bytes None None number_of_l4_ilbs ALPHA Gauge Number of L4 ILBs
    feature
    None plugin_manager_total_plugins ALPHA Custom Number of plugins in Plugin Manager
    socket_path
    state
    None pod_cpu_usage_seconds_total ALPHA Custom Cumulative cpu time consumed by the pod in core-seconds
    pod
    namespace
    pod_gc_collector_force_delete_pod_errors_total ALPHA Counter Number of errors encountered when forcefully deleting the pods since the Pod GC Controller started. pod_gc_collector_force_delete_pods_total ALPHA Counter Number of pods that are being forcefully deleted since the Pod GC Controller started. None pod_memory_working_set_bytes ALPHA Custom Current working set of the pod in bytes
    pod
    namespace
    None pod_security_errors_total ALPHA Counter Number of errors preventing normal evaluation. Non-fatal errors may result in the latest restricted profile being used for evaluation.
    fatal
    request_operation
    resource
    subresource
    None pod_security_evaluations_total ALPHA Counter Number of policy evaluations that occurred, not counting ignored or exempt requests.
    decision
    mode
    policy_level
    policy_version
    request_operation
    resource
    subresource
    None pod_security_exemptions_total ALPHA Counter Number of exempt requests, not counting ignored or out of scope requests.
    request_operation
    resource
    subresource
    None prober_probe_duration_seconds ALPHA Histogram Duration in seconds for a probe response.
    container
    namespace
    pod
    probe_type
    None prober_probe_total ALPHA Counter Cumulative number of a liveness, readiness or startup probe for a container by result.
    container
    namespace
    pod
    pod_uid
    probe_type
    result
    None pv_collector_bound_pv_count ALPHA Custom Gauge measuring number of persistent volume currently bound
    storage_class
    None pv_collector_bound_pvc_count ALPHA Custom Gauge measuring number of persistent volume claim currently bound
    namespace
    None pv_collector_total_pv_count ALPHA Custom Gauge measuring total number of persistent volumes
    plugin_name
    volume_mode
    None pv_collector_unbound_pv_count ALPHA Custom Gauge measuring number of persistent volume currently unbound
    storage_class
None

| Name | Stability Level | Type | Labels | Deprecated | Help |
| ---- | --------------- | ---- | ------ | ---------- | ---- |
| pv_collector_unbound_pvc_count | ALPHA | Custom | namespace | None | Gauge measuring number of persistent volume claims currently unbound |
| replicaset_controller_sorting_deletion_age_ratio | ALPHA | Histogram | None | None | The ratio of chosen deleted pod's ages to the current youngest pod's age (at the time). Should be <2. The intent of this metric is to measure the rough efficacy of the LogarithmicScaleDown feature gate's effect on the sorting (and deletion) of pods when a replicaset scales down. This only considers Ready pods when calculating and reporting. |
| rest_client_exec_plugin_call_total | ALPHA | Counter | call_status, code | None | Number of calls to an exec plugin, partitioned by the type of event encountered (no_error, plugin_execution_error, plugin_not_found_error, client_internal_error) and an optional exit code. The exit code will be set to 0 if and only if the plugin call was successful. |
| rest_client_exec_plugin_certificate_rotation_age | ALPHA | Histogram | None | None | Histogram of the number of seconds the last auth exec plugin client certificate lived before being rotated. If auth exec plugin client certificates are unused, the histogram will contain no data. |
| rest_client_exec_plugin_ttl_seconds | ALPHA | Gauge | None | None | Gauge of the shortest TTL (time-to-live) of the client certificate(s) managed by the auth exec plugin. The value is in seconds until certificate expiry (negative if already expired). If auth exec plugins are unused or manage no TLS certificates, the value will be +INF. |
| rest_client_rate_limiter_duration_seconds | ALPHA | Histogram | host, verb | None | Client side rate limiter latency in seconds. Broken down by verb and host. |
| rest_client_request_duration_seconds | ALPHA | Histogram | host, verb | None | Request latency in seconds. Broken down by verb and host. |
| rest_client_request_size_bytes | ALPHA | Histogram | host, verb | None | Request size in bytes. Broken down by verb and host. |
| rest_client_requests_total | ALPHA | Counter | code, host, method | None | Number of HTTP requests, partitioned by status code, method, and host. |
| rest_client_response_size_bytes | ALPHA | Histogram | host, verb | None | Response size in bytes. Broken down by verb and host. |
| retroactive_storageclass_errors_total | ALPHA | Counter | None | None | Total number of failed retroactive StorageClass assignments to persistent volume claims |
| retroactive_storageclass_total | ALPHA | Counter | None | None | Total number of retroactive StorageClass assignments to persistent volume claims |
| root_ca_cert_publisher_sync_duration_seconds | ALPHA | Histogram | code | None | Number of namespace syncs that happened in root ca cert publisher. |
| root_ca_cert_publisher_sync_total | ALPHA | Counter | code | None | Number of namespace syncs that happened in root ca cert publisher. |
| running_managed_controllers | ALPHA | Gauge | manager, name | None | Indicates where instances of a controller are currently running |
| scheduler_e2e_scheduling_duration_seconds | ALPHA | Histogram | profile, result | 1.23.0 | E2e scheduling latency in seconds (scheduling algorithm + binding). This metric is replaced by scheduling_attempt_duration_seconds. |
| scheduler_goroutines | ALPHA | Gauge | operation | None | Number of running goroutines split by the work they do such as binding. |
| scheduler_permit_wait_duration_seconds | ALPHA | Histogram | result | None | Duration of waiting on permit. |
| scheduler_plugin_execution_duration_seconds | ALPHA | Histogram | extension_point, plugin, status | None | Duration for running a plugin at a specific extension point. |
| scheduler_scheduler_cache_size | ALPHA | Gauge | type | None | Number of nodes, pods, and assumed (bound) pods in the scheduler cache. |
| scheduler_scheduler_goroutines | ALPHA | Gauge | work | 1.26.0 | Number of running goroutines split by the work they do such as binding. This metric is replaced by the \"goroutines\" metric. |
| scheduler_scheduling_algorithm_duration_seconds | ALPHA | Histogram | None | None | Scheduling algorithm latency in seconds |
| scheduler_unschedulable_pods | ALPHA | Gauge | plugin, profile | None | The number of unschedulable pods broken down by plugin name. A pod will increment the gauge for all plugins that caused it to not schedule, so this metric has meaning only when broken down by plugin. |
| scheduler_volume_binder_cache_requests_total | ALPHA | Counter | operation | None | Total number of requests to the volume binding cache |
| scheduler_volume_scheduling_stage_error_total | ALPHA | Counter | operation | None | Volume scheduling stage error count |
| scrape_error | ALPHA | Custom | None | None | 1 if there was an error while getting container metrics, 0 otherwise |
| service_controller_nodesync_latency_seconds | ALPHA | Histogram | None | None | A metric measuring the latency for nodesync which updates loadbalancer hosts on cluster node updates. |
| service_controller_update_loadbalancer_host_latency_seconds | ALPHA | Histogram | None | None | A metric measuring the latency for updating each load balancer host. |
| serviceaccount_legacy_tokens_total | ALPHA | Counter | None | None | Cumulative legacy service account tokens used |
| serviceaccount_stale_tokens_total | ALPHA | Counter | None | None | Cumulative stale projected service account tokens used |
| serviceaccount_valid_tokens_total | ALPHA | Counter | None | None | Cumulative valid projected service account tokens used |
| storage_count_attachable_volumes_in_use | ALPHA | Custom | node, volume_plugin | None | Measure number of volumes in use |
| storage_operation_duration_seconds | ALPHA | Histogram | migrated, operation_name, status, volume_plugin | None | Storage operation duration |
| ttl_after_finished_controller_job_deletion_duration_seconds | ALPHA | Histogram | None | None | The time it took to delete the job since it became eligible for deletion |
| volume_manager_selinux_container_errors_total | ALPHA | Gauge | None | None | Number of errors when kubelet cannot compute SELinux context for a container. Kubelet can't start such a Pod then and it will retry, therefore the value of this metric may not represent the actual number of containers. |
| volume_manager_selinux_container_warnings_total | ALPHA | Gauge | None | None | Number of errors when kubelet cannot compute SELinux context for a container that are ignored. They will become real errors when the SELinuxMountReadWriteOncePod feature is expanded to all volume access modes. |
| volume_manager_selinux_pod_context_mismatch_errors_total | ALPHA | Gauge | None | None | Number of errors when a Pod defines different SELinux contexts for its containers that use the same volume. Kubelet can't start such a Pod then and it will retry, therefore the value of this metric may not represent the actual number of Pods. |
| volume_manager_selinux_pod_context_mismatch_warnings_total | ALPHA | Gauge | None | None | Number of errors when a Pod defines different SELinux contexts for its containers that use the same volume. They are not errors yet, but they will become real errors when the SELinuxMountReadWriteOncePod feature is expanded to all volume access modes. |
| volume_manager_selinux_volume_context_mismatch_errors_total | ALPHA | Gauge | None | None | Number of errors when a Pod uses a volume that is already mounted with a different SELinux context than the Pod needs. Kubelet can't start such a Pod then and it will retry, therefore the value of this metric may not represent the actual number of Pods. |
| volume_manager_selinux_volume_context_mismatch_warnings_total | ALPHA | Gauge | None | None | Number of errors when a Pod uses a volume that is already mounted with a different SELinux context than the Pod needs. They are not errors yet, but they will become real errors when the SELinuxMountReadWriteOncePod feature is expanded to all volume access modes. |
| volume_manager_selinux_volumes_admitted_total | ALPHA | Gauge | None | None | Number of volumes whose SELinux context was fine and will be mounted with the mount -o context option. |
| volume_manager_total_volumes | ALPHA | Custom | plugin_name, state | None | Number of volumes in Volume Manager |
| volume_operation_total_errors | ALPHA | Counter | operation_name, plugin_name | None | Total volume operation errors |
| volume_operation_total_seconds | ALPHA | Histogram | operation_name, plugin_name | None | Storage operation end to end duration in seconds |
| watch_cache_capacity | ALPHA | Gauge | resource | None | Total capacity of watch cache broken down by resource type. |
| watch_cache_capacity_decrease_total | ALPHA | Counter | resource | None | Total number of watch cache capacity decrease events broken down by resource type. |
| watch_cache_capacity_increase_total | ALPHA | Counter | resource | None | Total number of watch cache capacity increase events broken down by resource type. |
| workqueue_adds_total | ALPHA | Counter | name | None | Total number of adds handled by workqueue |
| workqueue_depth | ALPHA | Gauge | name | None | Current depth of workqueue |
| workqueue_longest_running_processor_seconds | ALPHA | Gauge | name | None | How many seconds the longest running workqueue processor has been running. |
| workqueue_queue_duration_seconds | ALPHA | Histogram | name | None | How long in seconds an item stays in workqueue before being requested. |
| workqueue_retries_total | ALPHA | Counter | name | None | Total number of retries handled by workqueue |
| workqueue_unfinished_work_seconds | ALPHA | Gauge | name | None | How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases. |
| workqueue_work_duration_seconds | ALPHA | Histogram | name | None | How long in seconds processing an item from workqueue takes. |
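The `workqueue_*` series above follow standard Prometheus conventions: a Histogram such as `workqueue_work_duration_seconds` is exposed as `_bucket`, `_sum`, and `_count` series, and a mean can be derived from the latter two. As a minimal sketch (the scrape text and its values below are invented for illustration, and label parsing is deliberately omitted):

```python
# Sketch: deriving a mean from the _sum/_count series of a Prometheus
# Histogram metric such as workqueue_work_duration_seconds.
# The sample scrape below is made up; real scrapes also carry labels,
# HELP/TYPE comments, and _bucket series, which this sketch ignores.

sample_scrape = '''
workqueue_work_duration_seconds_sum 12.5
workqueue_work_duration_seconds_count 50
workqueue_depth 3
'''

def parse_metrics(text):
    # Map metric name -> float value, skipping blanks and comment lines.
    metrics = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        name, value = line.rsplit(' ', 1)
        metrics[name] = float(value)
    return metrics

m = parse_metrics(sample_scrape)
# Mean seconds spent processing one workqueue item:
mean = m['workqueue_work_duration_seconds_sum'] / m['workqueue_work_duration_seconds_count']
print(mean)  # 0.25
```

In practice you would compute this with PromQL, e.g. `rate(workqueue_work_duration_seconds_sum[5m]) / rate(workqueue_work_duration_seconds_count[5m])`, rather than parsing a scrape by hand.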
    None "} {"_id":"doc-en-website-0688949ec10aeed6aa63e0410e0835179066f5de5ba669dd6af0de3e55bbc8af","title":"","text":"outline: none; padding: .5em 0 .5em 0; } /* CSS for 'figure' full-screen display */ /* Define styles for full-screen overlay */ .figure-fullscreen-overlay { position: fixed; inset: 0; z-index: 9999; background-color: rgba(255, 255, 255, 0.95); /* White background with some transparency */ display: flex; justify-content: center; align-items: center; padding: calc(5% + 20px); box-sizing: border-box; } /* CSS class to scale the image when zoomed */ .figure-zoomed { transform: scale(1.2); } /* Define styles for full-screen image */ .figure-fullscreen-img { max-width: 100%; max-height: 100%; object-fit: contain; /* Maintain aspect ratio and fit within the container */ } /* Define styles for close button */ .figure-close-button { position: absolute; top: 1%; right: 2%; cursor: pointer; font-size: calc(5vw + 10px); color: #333; } No newline at end of file"} {"_id":"doc-en-website-3b114fc94aaaaee63748d6e23ded1a7e61f1ed70c5b35946181979ed4417e010","title":"","text":"This document outlines the various components you need to have for a complete and working Kubernetes cluster. 
{{< figure src=\"/images/docs/components-of-kubernetes.svg\" alt=\"Components of Kubernetes\" caption=\"The components of a Kubernetes cluster\" class=\"diagram-large\" >}} {{< figure src=\"/images/docs/components-of-kubernetes.svg\" alt=\"Components of Kubernetes\" caption=\"The components of a Kubernetes cluster\" class=\"diagram-large\" clicktozoom=\"true\" >}} ## Control Plane Components"} {"_id":"doc-en-website-4abf4dc7f91dda4bb6ea5ff6fea5778115878d7d9bbda250239286798702599d","title":"","text":" {{- end -}} {{- if .HasShortcode \"figure\" -}} {{- end -}} {{- if eq (lower .Params.cid) \"community\" -}} {{- if eq .Params.community_styles_migrated true -}} "} {"_id":"doc-en-website-5825865e8d621d67190e82ccbcfe9179f095195bae66faeceb09402459c7ff40","title":"","text":" {{ $src := (.Page.Resources.GetMatch (printf \"**%s*\" (.Get \"src\"))) }} {{ $clickToZoom := .Get \"clicktozoom\" }} {{- if .Get \"link\" -}} {{- end }} \"{{ {{- if .Get \"link\" }}{{ end -}} {{- if or (or (.Get \"title\") (.Get \"caption\")) (.Get \"attr\") -}}
    {{ with (.Get \"title\") -}}

    {{ . }}

    {{- end -}} {{- if or (.Get \"caption\") (.Get \"attr\") -}}

    {{- .Get \"caption\" | markdownify -}} {{- with .Get \"attrlink\" }} {{- end -}} {{- .Get \"attr\" | markdownify -}} {{- if .Get \"attrlink\" }}{{ end }}

    {{- end }}
    {{- end }}
    No newline at end of file"} {"_id":"doc-en-website-6265e5e494308cf9e4d8c4753118133706435a3cacfd878b985fc668519584e0","title":"","text":" // The page and script is loaded successfully $(document).ready(function() { // Function to handle hover over
<figure> elements function handleFigureHover() { // Only change cursor to zoom-in if figure has 'clickable-zoom' class if ($(this).hasClass('clickable-zoom') && !$(this).hasClass('figure-fullscreen-content')) { $(this).css('cursor', 'zoom-in'); } } // Attach hover event to
<figure> elements with 'clickable-zoom' class $('figure.clickable-zoom').hover(handleFigureHover, function() { // Mouse out - revert cursor back to default $(this).css('cursor', 'default'); }); // Attach click event to
<figure> elements with 'clickable-zoom' class $('figure.clickable-zoom').click(function() { var $figure = $(this); // Check if the figure has 'clickable-zoom' class if ($figure.hasClass('clickable-zoom')) { var $img = $figure.find('img'); // Get the <img> element within the clicked <figure>
// Toggle 'figure-zoomed' class to scale the image $img.toggleClass('figure-zoomed'); // Create a full-screen overlay var $fullscreenOverlay = $('<div class=\"figure-fullscreen-overlay\"></div>
    '); // Clone the element to display in full-screen var $fullscreenImg = $img.clone(); $fullscreenImg.addClass('figure-fullscreen-img'); // Append the full-screen image to the overlay $fullscreenOverlay.append($fullscreenImg); // Create a close button for the full-screen overlay var $closeButton = $('×'); $closeButton.click(function() { // Remove the full-screen overlay when close button is clicked $fullscreenOverlay.remove(); $('body').css('overflow', 'auto'); // Restore scrolling to the underlying page // Remove 'figure-zoomed' class to reset image scale $img.removeClass('figure-zoomed'); }); $fullscreenOverlay.append($closeButton); // Append the overlay to the body $('body').append($fullscreenOverlay); // Disable scrolling on the underlying page $('body').css('overflow', 'hidden'); // Close full-screen figure when clicking outside of it (on the overlay) $fullscreenOverlay.click(function(event) { if (event.target === this) { // Clicked on the overlay area (outside the full-screen image) $fullscreenOverlay.remove(); $('body').css('overflow', 'auto'); // Restore scrolling to the underlying page // Remove 'figure-zoomed' class to reset image scale $img.removeClass('figure-zoomed'); } }); } }); }); "} {"_id":"doc-en-website-d721a717cbc30c664d98795040573fc95946bfc93be03ec2603def08307bf3f1","title":"","text":" Kubernetes runs your workload by placing containers into Pods to run on _Nodes_. Kubernetes runs your {{< glossary_tooltip text=\"workload\" term_id=\"workload\" >}} by placing containers into Pods to run on _Nodes_. A node may be a virtual or physical machine, depending on the cluster. Each node is managed by the {{< glossary_tooltip text=\"control plane\" term_id=\"control-plane\" >}}"} {"_id":"doc-en-website-4b4b2da2cdc0869fb3504472cd7a08f503f0dcff735e96658ddfd558ca2fa791","title":"","text":"section [Graceful Node Shutdown](#graceful-node-shutdown) for more details. 
When a node is shutdown but not detected by kubelet's Node Shutdown Manager, the pods that are part of a StatefulSet will be stuck in terminating status on that are part of a {{< glossary_tooltip text=\"StatefulSet\" term_id=\"statefulset\" >}} will be stuck in terminating status on the shutdown node and cannot move to a new running node. This is because kubelet on the shutdown node is not available to delete the pods so the StatefulSet cannot create a new pod with the same name. If there are volumes used by the pods, the"} {"_id":"doc-en-website-704d30e735ba0f6816a9e81fe4cff493a71a13e0c2f2f6d6592b7b7156ec6201","title":"","text":"To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service` with either `NoExecute` or `NoSchedule` effect to a Node marking it out-of-service. If the `NodeOutOfServiceVolumeDetach`[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled on `kube-controller-manager`, and a Node is marked out-of-service with this taint, the is enabled on {{< glossary_tooltip text=\"kube-controller-manager\" term_id=\"kube-controller-manager\" >}}, and a Node is marked out-of-service with this taint, the pods on the node will be forcefully deleted if there are no matching tolerations on it and volume detach operations for the pods terminating on the node will happen immediately. This allows the Pods on the out-of-service node to recover quickly on a different node."} {"_id":"doc-en-website-669395a645ff20eb3e2402ed3d52c4b95daf3c54bade9a0baab8cf342e3398d8","title":"","text":"## {{% heading \"whatsnext\" %}} * Learn about the [components](/docs/concepts/overview/components/#node-components) that make up a node. * Read the [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param \"version\" >}}/#node-v1-core). * Read the [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. 
* Read about [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). Learn more about the following: * [Components](/docs/concepts/overview/components/#node-components) that make up a node. * [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param \"version\" >}}/#node-v1-core). * [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document. * [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/). * [Node Resource Managers](/docs/concepts/policy/node-resource-managers/). * [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/). "} {"_id":"doc-en-website-081f50b30ae4de9990ea9dca44346c52118b1ceb6d9d06be9392ebc46827ad9a","title":"","text":"- operation --- एग्रीगेशन लेयर आपको अपने क्लस्टर में अतिरिक्त कुबेरनेट्स-शैली API स्थापित करने देता है।``` एग्रीगेशन लेयर आपको अपने क्लस्टर में अतिरिक्त कुबेरनेट्स-शैली API स्थापित करने देता है। जब आपने {{< glossary_tooltip text=\"कुबेरनेट्स API सर्वर\" term_id=\"kube-apiserver\" >}} को [अतिरिक्त API का समर्थन](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) करने के लिए कॉन्फ़िगर किया हो, आप कुबेरनेट्स एपीआई में URL पथ का \"दावा\" करने के लिए `APIService` ऑब्जेक्ट जोड़ सकते हैं। जब आपने {{< glossary_tooltip text=\"कुबेरनेट्स API सर्वर\" term_id=\"kube-apiserver\" >}} को [अतिरिक्त API का समर्थन](/docs/tasks/extend-kubernetes/configure-aggregation-layer/) करने के लिए कॉन्फ़िगर किया हो, आप कुबेरनेट्स API में URL पाथ का \"दावा\" करने के लिए `APIService` ऑब्जेक्ट जोड़ सकते हैं। "} {"_id":"doc-en-website-3001dd541e5885925af44331cf1b5f53ebb4e7f8a4cf04ce8eb9816072025fd3","title":"","text":" --- title: 动态资源分配 content_type: concept weight: 65 --- {{< feature-state for_k8s_version=\"v1.26\" state=\"alpha\" >}} 动态资源分配是一个用于在 Pod 之间和 Pod 内部容器之间请求和共享资源的新 API。 它是对为通用资源所提供的持久卷 API 的泛化。第三方资源驱动程序负责跟踪和分配资源。 不同类型的资源支持用任意参数进行定义和初始化。 ## {{% 
heading \"prerequisites\" %}} Kubernetes v{{< skew currentVersion >}} 包含用于动态资源分配的集群级 API 支持, 但它需要被[显式启用](#enabling-dynamic-resource-allocation)。 你还必须为此 API 要管理的特定资源安装资源驱动程序。 如果你未运行 Kubernetes v{{< skew currentVersion>}}, 请查看对应版本的 Kubernetes 文档。 ## API {#api} 新的 `resource.k8s.io/v1alpha1` {{< glossary_tooltip text=\"API 组\" term_id=\"api-group\" >}}提供四种新类型: ResourceClass : 定义由哪个资源驱动程序处理某种资源,并为其提供通用参数。 集群管理员在安装资源驱动程序时创建 ResourceClass。 ResourceClaim : 定义工作负载所需的特定资源实例。 由用户创建(手动管理生命周期,可以在不同的 Pod 之间共享), 或者由控制平面基于 ResourceClaimTemplate 为特定 Pod 创建 (自动管理生命周期,通常仅由一个 Pod 使用)。 ResourceClaimTemplate : 定义用于创建 ResourceClaim 的 spec 和一些元数据。 部署工作负载时由用户创建。 PodScheduling : 供控制平面和资源驱动程序内部使用, 在需要为 Pod 分配 ResourceClaim 时协调 Pod 调度。 ResourceClass 和 ResourceClaim 的参数存储在单独的对象中, 通常使用安装资源驱动程序时创建的 {{< glossary_tooltip term_id=\"CustomResourceDefinition\" text=\"CRD\" >}} 所定义的类型。 `core/v1` 的 `PodSpec` 在新的 `resourceClaims` 字段中定义 Pod 所需的 ResourceClaim。 该列表中的条目引用 ResourceClaim 或 ResourceClaimTemplate。 当引用 ResourceClaim 时,使用此 PodSpec 的所有 Pod (例如 Deployment 或 StatefulSet 中的 Pod)共享相同的 ResourceClaim 实例。 引用 ResourceClaimTemplate 时,每个 Pod 都有自己的实例。 容器资源的 `resources.claims` 列表定义容器可以访问的资源实例, 从而可以实现在一个或多个容器之间共享资源。 下面是一个虚构的资源驱动程序的示例。 该示例将为此 Pod 创建两个 ResourceClaim 对象,每个容器都可以访问其中一个。 ```yaml apiVersion: resource.k8s.io/v1alpha1 kind: ResourceClass name: resource.example.com driverName: resource-driver.example.com --- apiVersion: cats.resource.example.com/v1 kind: ClaimParameters name: large-black-cat-claim-parameters spec: color: black size: large --- apiVersion: resource.k8s.io/v1alpha1 kind: ResourceClaimTemplate metadata: name: large-black-cat-claim-template spec: spec: resourceClassName: resource.example.com parametersRef: apiGroup: cats.resource.example.com kind: ClaimParameters name: large-black-cat-claim-parameters –-- apiVersion: v1 kind: Pod metadata: name: pod-with-cats spec: containers: - name: container0 image: ubuntu:20.04 command: [\"sleep\", \"9999\"] resources: claims: - name: cat-0 - name: 
container1 image: ubuntu:20.04 command: [\"sleep\", \"9999\"] resources: claims: - name: cat-1 resourceClaims: - name: cat-0 source: resourceClaimTemplateName: large-black-cat-claim-template - name: cat-1 source: resourceClaimTemplateName: large-black-cat-claim-template ``` ## 调度 {#scheduling} 与原生资源(CPU、RAM)和扩展资源(由设备插件管理,并由 kubelet 公布)不同, 调度器不知道集群中有哪些动态资源, 也不知道如何将它们拆分以满足特定 ResourceClaim 的要求。 资源驱动程序负责这些任务。 资源驱动程序在为 ResourceClaim 保留资源后将其标记为“已分配(Allocated)”。 然后告诉调度器集群中可用的 ResourceClaim 的位置。 ResourceClaim 可以在创建时就进行分配(“立即分配”),不用考虑哪些 Pod 将使用它。 默认情况下采用延迟分配,直到需要 ResourceClaim 的 Pod 被调度时 (即“等待第一个消费者”)再进行分配。 在这种模式下,调度器检查 Pod 所需的所有 ResourceClaim,并创建一个 PodScheduling 对象, 通知负责这些 ResourceClaim 的资源驱动程序,告知它们调度器认为适合该 Pod 的节点。 资源驱动程序通过排除没有足够剩余资源的节点来响应调度器。 一旦调度器有了这些信息,它就会选择一个节点,并将该选择存储在 PodScheduling 对象中。 然后,资源驱动程序为分配其 ResourceClaim,以便资源可用于该节点。 完成后,Pod 就会被调度。 作为此过程的一部分,ResourceClaim 会为 Pod 保留。 目前,ResourceClaim 可以由单个 Pod 独占使用或不限数量的多个 Pod 使用。 除非 Pod 的所有资源都已分配和保留,否则 Pod 不会被调度到节点,这是一个重要特性。 这避免了 Pod 被调度到一个节点但无法在那里运行的情况, 这种情况很糟糕,因为被挂起 Pod 也会阻塞为其保留的其他资源,如 RAM 或 CPU。 ## 限制 {#limitations} 调度器插件必须参与调度那些使用 ResourceClaim 的 Pod。 通过设置 `nodeName` 字段绕过调度器会导致 kubelet 拒绝启动 Pod, 因为 ResourceClaim 没有被保留或甚至根本没有被分配。 未来可能[去除该限制](https://github.com/kubernetes/kubernetes/issues/114005)。 ## 启用动态资源分配 {#enabling-dynamic-resource-allocation} 动态资源分配是一个 **alpha 特性**,只有在启用 `DynamicResourceAllocation` [特性门控](/zh-cn/docs/reference/command-line-tools-reference/feature-gates/) 和 `resource.k8s.io/v1alpha1` {{< glossary_tooltip text=\"API 组\" term_id=\"api-group\" >}} 时才启用。 有关详细信息,参阅 `--feature-gates` 和 `--runtime-config` [kube-apiserver 参数](/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/)。 kube-scheduler、kube-controller-manager 和 kubelet 也需要设置该特性门控。 快速检查 Kubernetes 集群是否支持该功能的方法是列出 ResourceClass 对象: ```shell kubectl get resourceclasses ``` 如果你的集群支持动态资源分配,则响应是 ResourceClass 对象列表或: ``` No resources found ``` 如果不支持,则会输出如下错误: ``` error: the server doesn't have a resource type \"resourceclasses\" ``` 
kube-scheduler 的默认配置仅在启用特性门控时才启用 \"DynamicResources\" 插件。 自定义配置可能需要被修改才能启用它。 除了在集群中启用该功能外,还必须安装资源驱动程序。 欲了解详细信息,请参阅驱动程序的文档。 ## {{% heading \"whatsnext\" %}} - 了解更多该设计的信息, 参阅[动态资源分配 KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/3063-dynamic-resource-allocation/README.md)。 No newline at end of file"} {"_id":"doc-en-website-4b08d961452b82a5b7683208ef488955251810a8107c40fdaea02b3700388e16","title":"","text":" --- title: ファイナライザー(Finalizers) content_type: concept weight: 80 --- {{}} ファイナライザーを利用すると、対象のリソースを削除する前に特定のクリーンアップを行うように{{}}に警告することで、{{}}を管理することができます。 大抵の場合ファイナライザーは実行されるコードを指定することはありません。 その代わり、一般的にはアノテーションのように特定のリソースに関するキーのリストになります。 Kubernetesはいくつかのファイナライザーを自動的に追加しますが、自分で追加することもできます。 ## ファイナライザーはどのように動作するか マニフェストファイルを使ってリソースを作るとき、`metadata.finalizers`フィールドの中でファイナライザーを指定することができます。 リソースを削除しようとするとき、削除リクエストを扱うAPIサーバーは`finalizers`フィールドの値を確認し、以下のように扱います。 * 削除を開始した時間をオブジェクトの`metadata.deletionTimestamp`フィールドに設定します。 * `metadata.finalizers`フィールドが空になるまでオブジェクトが削除されるのを阻止します。 * ステータスコード`202`(HTTP \"Accepted\")を返します。 ファイナライザーを管理しているコントローラーは、オブジェクトの削除がリクエストされたことを示す`metadata.deletionTimestamp`がオブジェクトに設定されたことを検知します。 するとコントローラーはリソースに指定されたファイナライザーの要求を満たそうとします。 ファイナライザーの条件が満たされるたびに、そのコントローラーはリソースの`finalizers`フィールドの対象のキーを削除します。 `finalizers`フィールドが空になったとき、`deletionTimestamp`フィールドが設定されたオブジェクトは自動的に削除されます。管理外のリソース削除を防ぐためにファイナライザーを利用することもできます。 ファイナライザーの一般的な例は`kubernetes.io/pv-protection`で、これは `PersistentVolume`オブジェクトが誤って削除されるのを防ぐためのものです。 `PersistentVolume`オブジェクトをPodが利用中の場合、Kubernetesは`pv-protection`ファイナライザーを追加します。 `PersistentVolume`を削除しようとすると`Terminating`ステータスになりますが、ファイナライザーが存在しているためコントローラーはボリュームを削除することができません。 Podが`PersistentVolume`の利用を停止するとKubernetesは`pv-protection`ファイナライザーを削除し、コントローラーがボリュームを削除します。 ## オーナーリファレンス、ラベル、ファイナライザー {#owners-labels-finalizers} {{}}のように、 [オーナーリファレンス](/docs/concepts/overview/working-with-objects/owners-dependents/)はKubernetesのオブジェクト間の関係性を説明しますが、利用される目的が異なります。 {{}} がPodのようなオブジェクトを管理するとき、関連するオブジェクトのグループの変更を追跡するためにラベルを利用します。 
例えば、{{}}がいくつかのPodを作成するとき、JobコントローラーはそれらのPodにラベルを付け、クラスター内の同じラベルを持つPodの変更を追跡します。 Jobコントローラーは、Podを作成したJobを指す*オーナーリファレンス*もそれらのPodに追加します。 Podが実行されているときにJobを削除すると、Kubernetesはオーナーリファレンス(ラベルではない)を使って、クリーンアップする必要のあるPodをクラスター内から探し出します。 また、Kubernetesは削除対象のリソースのオーナーリファレンスを認識して、ファイナライザーを処理します。 状況によっては、ファイナライザーが依存オブジェクトの削除をブロックしてしまい、対象のオーナーオブジェクトが完全に削除されず予想以上に長時間残ってしまうことがあります。 このような状況では、対象のオーナーと依存オブジェクトの、ファイナライザーとオーナーリファレンスを確認して問題を解決する必要があります。 {{}} オブジェクトが削除中の状態で詰まってしまった場合、削除を続行するために手動でファイナライザーを削除することは避けてください。 通常、ファイナライザーは理由があってリソースに追加されているものであるため、強制的に削除してしまうとクラスターで何らかの問題を引き起こすことがあります。 そのファイナライザーの目的を理解しているかつ、別の方法で達成できる場合にのみ行うべきです(例えば、依存オブジェクトを手動で削除するなど)。 {{}} ## {{% heading \"whatsnext\" %}} * Kubernetesブログの[ファイナライザーを利用した削除の制御](/blog/2021/05/14/using-finalizers-to-control-deletion/)をお読みください。 "} {"_id":"doc-en-website-05ceb5fbd84c9d9c7078ff5037745551ab57775c0cc9efe0f3ee231a38391d29","title":"","text":" --- title: ファイナライザー id: finalizer date: 2021-07-07 full_link: /ja/docs/concepts/overview/working-with-objects/finalizers/ short_description: > 削除対象としてマークされたオブジェクトを完全に削除する前に、特定の条件が満たされるまでKubernetesを待機させるための名前空間付きのキーです。 aka: tags: - fundamental - operation --- ファイナライザーは、削除対象としてマークされたリソースを完全に削除する前に、特定の条件が満たされるまでKubernetesを待機させるための名前空間付きのキーです。 ファイナライザーは、削除されたオブジェクトが所有していたリソースをクリーンアップするように{{}}に警告します。 Kubernetesにファイナライザーが指定されたオブジェクトを削除するように指示すると、Kubernetes APIはそのオブジェクトに`.metadata.deletionTimestamp`を追加し削除対象としてマークして、ステータスコード`202`(HTTP \"Accepted\")を返します。 コントロールプレーンやその他のコンポーネントがファイナライザーによって定義されたアクションを実行している間、対象のオブジェクトは終了中の状態のまま残っています。 それらのアクションが完了したら、そのコントローラーは関係しているファイナライザーを対象のオブジェクトから削除します。 `metadata.finalizers`フィールドが空になったら、Kubernetesは削除が完了したと判断しオブジェクトを削除します。 ファイナライザーはリソースの{{}}を管理するために使うことができます。 例えば、コントローラーが対象のリソースを削除する前に関連するリソースやインフラをクリーンアップするためにファイナライザーを定義することができます。 "} {"_id":"doc-en-website-567e1378a89a28e6ec5c5c35e63be530f2584559b12c50298e0cbb3be746a52d","title":"","text":"## Start a Kubelet process configured via the config file {{< note >}} If you use kubeadm to initialize your 
cluster, use the kubelet-config while creating your cluster with `kubeadmin init`. If you use kubeadm to initialize your cluster, use the kubelet-config while creating your cluster with `kubeadm init`. See [configuring kubelet using kubeadm](/docs/setup/production-environment/tools/kubeadm/kubelet-integration/) for details. {{< /note >}}"} {"_id":"doc-en-website-f82c4532a3eba5519d155522400220c3667ff0f167b0ba4d005ae478bc38c970","title":"","text":" --- title: cgroup v2について content_type: concept weight: 50 --- Linuxでは、{{< glossary_tooltip text=\"コントロールグループ\" term_id=\"cgroup\" >}}がプロセスに割り当てられるリソースを制限しています。 コンテナ化されたワークロードの、CPU/メモリーの要求と制限を含む[Podとコンテナのリソース管理](/docs/concepts/configuration/manage-resources-containers/)を強制するために、 {{< glossary_tooltip text=\"kubelet\" term_id=\"kubelet\" >}}と基盤となるコンテナランタイムはcgroupをインターフェースとして接続する必要があります。 Linuxではcgroup v1とcgroup v2の2つのバージョンのcgroupがあります。 cgroup v2は新世代の`cgroup` APIです。 ## cgroup v2とは何か? {#cgroup-v2} {{< feature-state for_k8s_version=\"v1.25\" state=\"stable\" >}} cgroup v2はLinuxの`cgroup` APIの次のバージョンです。 cgroup v2はリソース管理機能を強化した統合制御システムを提供しています。 以下のように、cgroup v2はcgroup v1からいくつかの点を改善しています。 - 統合された単一階層設計のAPI - より安全なコンテナへのサブツリーの移譲 - [Pressure Stall Information](https://www.kernel.org/doc/html/latest/accounting/psi.html)などの新機能 - 強化されたリソース割り当て管理と複数リソース間の隔離 - 異なるタイプのメモリー割り当ての統一(ネットワークメモリー、カーネルメモリーなど) - ページキャッシュの書き戻しといった、非即時のリソース変更 Kubernetesのいくつかの機能では、強化されたリソース管理と隔離のためにcgroup v2のみを使用しています。 例えば、[MemoryQoS](/blog/2021/11/26/qos-memory-resources/)機能はメモリーQoSを改善し、cgroup v2の基本的な機能に依存しています。 ## cgroup v2を使う {#using-cgroupv2} cgroup v2を使うおすすめの方法は、デフォルトでcgroup v2が有効で使うことができるLinuxディストリビューションを使うことです。 あなたのディストリビューションがcgroup v2を使っているかどうかを確認するためには、[Linux Nodeのcgroupバージョンを特定する](#check-cgroup-version)を参照してください。 ### 必要要件 {#requirements} cgroup v2を使うには以下のような必要要件があります。 * OSディストリビューションでcgroup v2が有効であること * Linuxカーネルバージョンが5.8以上であること * コンテナランタイムがcgroup v2をサポートしていること。例えば、 * [containerd](https://containerd.io/) v1.4以降 * [cri-o](https://cri-o.io/) v1.20以降 * 
kubeletとコンテナランタイムが[systemd cgroupドライバー](/docs/setup/production-environment/container-runtimes#systemd-cgroup-driver)を使うように設定されていること ### Linuxディストリビューションのcgroup v2サポート cgroup v2を使っているLinuxディストリビューションの一覧は[cgroup v2ドキュメント](https://github.com/opencontainers/runc/blob/main/docs/cgroup-v2.md)をご覧ください。 * Container-Optimized OS (M97以降) * Ubuntu (21.10以降, 22.04以降推奨) * Debian GNU/Linux (Debian 11 bullseye以降) * Fedora (31以降) * Arch Linux (April 2021以降) * RHEL and RHEL-like distributions (9以降) あなたのディストリビューションがcgroup v2を使っているかどうかを確認するためには、あなたのディストリビューションのドキュメントを参照するか、[Linux Nodeのcgroupバージョンを特定する](#check-cgroup-version)の説明に従ってください。 カーネルのcmdlineの起動時引数を修正することで、手動であなたのLinuxディストリビューションのcgroup v2を有効にすることもできます。 あなたのディストリビューションがGRUBを使っている場合は、 `/etc/default/grub`の中の`GRUB_CMDLINE_LINUX`に`systemd.unified_cgroup_hierarchy=1`を追加し、`sudo update-grub`を実行してください。 ただし、おすすめの方法はデフォルトですでにcgroup v2が有効になっているディストリビューションを使うことです。 ### cgroup v2への移行 {#migrating-cgroupv2} cgroup v2に移行するには、[必要要件](#requirements)を満たすことを確認し、 cgroup v2がデフォルトで有効であるカーネルバージョンにアップグレードします。 kubeletはOSがcgroup v2で動作していることを自動的に検出し、それに応じて処理を行うため、追加設定は必要ありません。 ノード上やコンテナ内からユーザーが直接cgroupファイルシステムにアクセスしない限り、cgroup v2に切り替えたときのユーザー体験に目立った違いはないはずです。 cgroup v2はcgroup v1とは違うAPIを利用しているため、cgroupファイルシステムに直接アクセスしているアプリケーションはcgroup v2をサポートしている新しいバージョンに更新する必要があります。例えば、 * サードパーティーの監視またはセキュリティエージェントはcgroupファイルシステムに依存していることがあります。 エージェントをcgroup v2をサポートしているバージョンに更新してください。 * Podやコンテナを監視するために[cAdvisor](https://github.com/google/cadvisor)をスタンドアローンのDaemonSetとして起動している場合、v0.43.0以上に更新してください。 * JDKを利用している場合、[cgroup v2を完全にサポートしている](https://bugs.openjdk.org/browse/JDK-8230305)JDK 11.0.16以降、またはJDK15以降を利用することが望ましいです。 ## Linux Nodeのcgroupバージョンを特定する {#check-cgroup-version} cgroupバージョンは利用されているLinuxディストリビューションと、OSで設定されているデフォルトのcgroupバージョンに依存します。 あなたのディストリビューションがどちらのcgroupバージョンを利用しているのかを確認するには、`stat -fc %T /sys/fs/cgroup/`コマンドをノード上で実行してください。 ```shell stat -fc %T /sys/fs/cgroup/ ``` cgroup v2では、`cgroup2fs`と出力されます。 cgroup v1では、`tmpfs`と出力されます。 ## {{% heading \"whatsnext\" %}} - 
[cgroups](https://man7.org/linux/man-pages/man7/cgroups.7.html)についてもっと学習しましょう。 - [コンテナランタイム](/ja/docs/concepts/architecture/cri)についてもっと学習しましょう。 - [cgroupドライバー](/docs/setup/production-environment/container-runtimes#cgroup-drivers)についてもっと学習しましょう。 "} {"_id":"doc-en-website-fdab5a0ab83395cf2b4e956a08b8e6fc36cefb59fbb140c51df8e51d9c60a99e","title":"","text":"Being vulnerable does not necessarily mean that your service will be exploited. Though your services are vulnerable in some ways unknown to you, offenders still need to identify these vulnerabilities and then exploit them. If offenders fail to exploit your service vulnerabilities, you win! In other words, having a vulnerability that can’t be exploited, represents a risk that can’t be realized. {{< figure src=\"Example.png\" alt=\"Image of an example of offender gaining foothold in a service\" class=\"diagram-large\" caption=\"Figure 1. An Offender gaining foothold in a vulnerable service\" >}} {{< figure src=\"security_behavior_figure_1.svg\" alt=\"Image of an example of offender gaining foothold in a service\" class=\"diagram-large\" caption=\"Figure 1. An Offender gaining foothold in a vulnerable service\" >}} The above diagram shows an example in which the offender does not yet have a foothold in the service; that is, it is assumed that your service does not run code controlled by the offender on day 1. In our example the service has vulnerabilities in the API exposed to clients. To gain an initial foothold the offender uses a malicious client to try and exploit one of the service API vulnerabilities. The malicious client sends an exploit that triggers some unplanned behavior of the service."} {"_id":"doc-en-website-cf836933919f9422ea51d8024cbd772b5f2035419483e06f97d9d015bdbb60f0","title":"","text":"Kubernetes is often used to support workloads designed with microservice architecture. By design, microservices aim to follow the UNIX philosophy of \"Do One Thing And Do It Well\". 
Each microservice has a bounded context and a clear interface. In other words, you can expect the microservice clients to send relatively regular requests and the microservice to present a relatively regular behavior as a response to these requests. Consequently, a microservice architecture is an excellent candidate for security-behavior monitoring. {{< figure src=\"Microservices.png\" alt=\"Image showing why microservices are well suited for security-behavior monitoring\" class=\"diagram-large\" caption=\"Figure 2. Microservices are well suited for security-behavior monitoring\" >}} {{< figure src=\"security_behavior_figure_2.svg\" alt=\"Image showing why microservices are well suited for security-behavior monitoring\" class=\"diagram-large\" caption=\"Figure 2. Microservices are well suited for security-behavior monitoring\" >}} The diagram above clarifies how dividing a monolithic service to a set of microservices improves our ability to perform security-behavior monitoring and control. In a monolithic service approach, different client requests are intertwined, resulting in a diminished ability to identify irregular client behaviors. Without prior knowledge, an observer of the intertwined client requests will find it hard to distinguish between types of requests and their related characteristics. Further, internal client requests are not exposed to the observer. 
Lastly, the aggregated behavior of the monolithic service is a compound of the many different internal behaviors of its components, making it hard to identify irregular service behavior."} {"_id":"doc-en-website-4300cc7fc2e193c3076a63b2f33760b5ee834ad7d44de4be2e2db1300ba949dc","title":"","text":"## Clean up Now delete the clusters which you created above by running the following commands: ```shell kind delete cluster --name psa-with-cluster-pss ``` ```shell kind delete cluster --name psa-wo-cluster-pss ``` ## {{% heading \"whatsnext\" %}}"} {"_id":"doc-en-website-965f69e587057b03dce4f4778c9349694aedaf108a451f4a828715218572de2f","title":"","text":"## Clean up Now delete the cluster which you created above by running the following command: ```shell kind delete cluster --name psa-ns-level ``` ## {{% heading \"whatsnext\" %}}"} {"_id":"doc-en-website-4d4806e8ed7a60ee86ebb12caf39f55a3d278d32a13b4688e8e35f6c3dc341fc","title":"","text":" --- title: Common Expression Language in Kubernetes reviewers: - jpbetz - cici37 content_type: concept weight: 35 min-kubernetes-server-version: 1.25 --- The [Common Expression Language (CEL)](https://github.com/google/cel-go) is used in the Kubernetes API to declare validation rules, policy rules, and other constraints or conditions. CEL expressions are evaluated directly in the {{< glossary_tooltip text=\"API server\" term_id=\"kube-apiserver\" >}}, making CEL a convenient alternative to out-of-process mechanisms, such as webhooks, for many extensibility use cases. 
Your CEL expressions continue to execute so long as the control plane's API server component remains available. ## Language overview The [CEL language](https://github.com/google/cel-spec/blob/master/doc/langdef.md) has a straightforward syntax that is similar to the expressions in C, C++, Java, JavaScript and Go. CEL was designed to be embedded into applications. Each CEL \"program\" is a single expression that evaluates to a single value. CEL expressions are typically short \"one-liners\" that inline well into the string fields of Kubernetes API resources. Inputs to a CEL program are \"variables\". Each Kubernetes API field that contains CEL declares in the API documentation which variables are available to use for that field. For example, in the `x-kubernetes-validations[i].rules` field of CustomResourceDefinitions, the `self` and `oldSelf` variables are available and refer to the current and previous state of the custom resource data to be validated by the CEL expression. Other Kubernetes API fields may declare different variables. See the API documentation of the API fields to learn which variables are available for that field. 
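The `self`/`oldSelf` pairing is what enables transition rules over the current and previous object state. CEL itself is not needed to see the shape of such a rule; the plain Python sketch below (function names invented here for illustration) captures the idea:

```python
# Illustrative sketch only: `old` plays the role of CEL's `oldSelf`
# (previous state) and `new` plays the role of `self` (current state).

def immutable(old, new):
    # CEL equivalent: self == oldSelf (the field may never change)
    return new == old

def monotonic(old, new):
    # CEL equivalent: self >= oldSelf (e.g. a counter that may only grow)
    return new >= old

print(immutable("vpc-1", "vpc-1"))  # True
print(monotonic(3, 5))              # True
print(monotonic(5, 3))              # False
```

In real CRD validation rules, the same predicates are written directly as CEL expressions in `x-kubernetes-validations[i].rule`.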
Example CEL expressions: {{< table caption=\"Examples of CEL expressions and the purpose of each\" >}} | Rule | Purpose | |------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------| | `self.minReplicas <= self.replicas && self.replicas <= self.maxReplicas` | Validate that the three fields defining replicas are ordered appropriately | | `'Available' in self.stateCounts` | Validate that an entry with the 'Available' key exists in a map | | `(self.list1.size() == 0) != (self.list2.size() == 0)` | Validate that one of two lists is non-empty, but not both | | `self.envars.filter(e, e.name == 'MY_ENV').all(e, e.value.matches('^[a-zA-Z]*$'))` | Validate the 'value' field of a listMap entry where key field 'name' is 'MY_ENV' | | `has(self.expired) && self.created + self.ttl < self.expired` | Validate that the 'expired' date is after a 'created' date plus a 'ttl' duration | | `self.health.startsWith('ok')` | Validate that a 'health' string field has the prefix 'ok' | | `self.widgets.exists(w, w.key == 'x' && w.foo < 10)` | Validate that the 'foo' property of a listMap item with a key 'x' is less than 10 | | `type(self) == string ? 
self == '99%' : self == 42` | Validate an int-or-string field for both the int and string cases | | `self.metadata.name == 'singleton'` | Validate that an object's name matches a specific value (making it a singleton) | | `self.set1.all(e, !(e in self.set2))` | Validate that two listSets are disjoint | | `self.names.size() == self.details.size() && self.names.all(n, n in self.details)` | Validate that the 'details' map is keyed by the items in the 'names' listSet | ## CEL community libraries Kubernetes CEL expressions have access to the following CEL community libraries: - CEL standard functions, defined in the [list of standard definitions](https://github.com/google/cel-spec/blob/master/doc/langdef.md#list-of-standard-definitions) - CEL standard [macros](https://github.com/google/cel-spec/blob/v0.7.0/doc/langdef.md#macros) - CEL [extended string function library](https://pkg.go.dev/github.com/google/cel-go/ext#Strings) ## Kubernetes CEL libraries In addition to the CEL community libraries, Kubernetes includes CEL libraries that are available everywhere CEL is used in Kubernetes. ### Kubernetes list library The list library includes `indexOf` and `lastIndexOf`, which work like the strings functions of the same names. These functions return the first or last positional index of the provided element in the list. The list library also includes `min`, `max` and `sum`. Sum is supported on all number types as well as the duration type. Min and max are supported on all comparable types. `isSorted` is also provided as a convenience function and is supported on all comparable types. 
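To pin down the semantics of the list-library helpers described above, here is an illustrative plain-Python sketch (not the real implementation, which lives in the CEL runtime):

```python
# Toy Python equivalents of the Kubernetes CEL list library helpers.

def index_of(lst, elem):
    # CEL: lst.indexOf(elem) -- first positional index, -1 if absent
    return lst.index(elem) if elem in lst else -1

def last_index_of(lst, elem):
    # CEL: lst.lastIndexOf(elem) -- last positional index, -1 if absent
    return len(lst) - 1 - lst[::-1].index(elem) if elem in lst else -1

def is_sorted(lst):
    # CEL: lst.isSorted() -- true when every adjacent pair is ordered
    return all(a <= b for a, b in zip(lst, lst[1:]))

print(is_sorted(["alpha", "beta", "gamma"]))  # True
print(index_of([1, 2, 1], 1))                 # 0
print(last_index_of([1, 2, 1], 1))            # 2
print(sum([25, 25, 50]) == 100)               # True (CEL: weights.sum())
```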
Examples: {{< table caption=\"Examples of CEL expressions using list library functions\" >}} | CEL Expression | Purpose | |------------------------------------------------------------------------------------|-----------------------------------------------------------| | `names.isSorted()` | Verify that a list of names is kept in alphabetical order | | `items.map(x, x.weight).sum() == 1.0` | Verify that the \"weights\" of a list of objects sum to 1.0 | | `lowPriorities.map(x, x.priority).max() < highPriorities.map(x, x.priority).min()` | Verify that two sets of priorities do not overlap | | `names.indexOf('should-be-first') == 0` | Require that the first name in a list is a specific value | See the [Kubernetes List Library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#Lists) godoc for more information. ### Kubernetes regex library In addition to the `matches` function provided by the CEL standard library, the regex library provides `find` and `findAll`, enabling a much wider range of regex operations. Examples: {{< table caption=\"Examples of CEL expressions using regex library functions\" >}} | CEL Expression | Purpose | |-------------------------------------------------------------|----------------------------------------------------------| | `\"abc 123\".find('[0-9]*')` | Find the first number in a string | | `\"1, 2, 3, 4\".findAll('[0-9]*').map(x, int(x)).sum() < 100` | Verify that the numbers in a string sum to less than 100 | See the [Kubernetes regex library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#Regex) godoc for more information. ### Kubernetes URL library To make it easier and safer to process URLs, the following functions have been added: - `isURL(string)` checks if a string is a valid URL according to [Go's net/url](https://pkg.go.dev/net/url#URL) package. The string must be an absolute URL. 
- `url(string) URL` converts a string to a URL or results in an error if the string is not a valid URL. Once parsed via the `url` function, the resulting URL object has `getScheme`, `getHost`, `getHostname`, `getPort`, `getEscapedPath` and `getQuery` accessors. Examples: {{< table caption=\"Examples of CEL expressions using URL library functions\" >}} | CEL Expression | Purpose | |-----------------------------------------------------------------|------------------------------------------------| | `url('https://example.com:80/').getHost()` | Get the 'example.com:80' host part of the URL. | | `url('https://example.com/path with spaces/').getEscapedPath()` | Returns '/path%20with%20spaces/' | See the [Kubernetes URL library](https://pkg.go.dev/k8s.io/apiextensions-apiserver/pkg/apiserver/schema/cel/library#URLs) godoc for more information. ## Type checking CEL is a [gradually typed language](https://github.com/google/cel-spec/blob/master/doc/langdef.md#gradual-type-checking). Some Kubernetes API fields contain fully type checked CEL expressions. For example, [CustomResourceDefinitions Validation Rules](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules) are fully type checked. Some Kubernetes API fields contain partially type checked CEL expressions. A partially type checked expression is an expression in which some of the variables are statically typed but others are dynamically typed. For example, in the CEL expressions of [ValidatingAdmissionPolicies](/docs/reference/access-authn-authz/validating-admission-policy/) the `request` variable is typed, but the `object` variable is dynamically typed. As a result, an expression containing `request.namex` would fail type checking because the `namex` field is not defined. However, `object.namex` would pass type checking even when the `namex` field is not defined for the resource kinds that `object` refers to, because `object` is dynamically typed. 
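As a rough analogy for the static/dynamic split (not CEL itself), dynamically typed access behaves like reading from a Python dict: a missing key only fails at evaluation time, so an expression guards the access before using it. The field and variable names below are illustrative only:

```python
# Rough analogy (sketch): statically typed access is checked up front,
# dynamically typed access only fails when actually evaluated.

request = {"name": "special"}  # shape known in advance in this analogy
obj = {}                       # dynamically typed: shape unknown up front

# Guarded access, in the spirit of checking a field before reading it:
value = obj["namex"] if "namex" in obj else request["name"]
print(value)  # special
```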
The `has()` macro in CEL may be used in CEL expressions to check if a field of a dynamically typed variable is accessible before attempting to access the field's value. For example: ```cel has(object.namex) ? object.namex == 'special' : request.name == 'special' ``` ## Type system integration {{< table caption=\"Table showing the relationship between OpenAPIv3 types and CEL types\" >}} | OpenAPIv3 type | CEL type | |----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------| | 'object' with Properties | object / \"message type\" (`type()` evaluates to `selfType.path.to.object.from.self`) | | 'object' with AdditionalProperties | map | | 'object' with x-kubernetes-embedded-type | object / \"message type\", 'apiVersion', 'kind', 'metadata.name' and 'metadata.generateName' are implicitly included in schema | | 'object' with x-kubernetes-preserve-unknown-fields | object / \"message type\", unknown fields are NOT accessible in CEL expressions | | x-kubernetes-int-or-string | union of int or string, `self.intOrString < 100 || self.intOrString == '50%'` evaluates to true for both `50` and `\"50%\"` | | 'array' | list | | 'array' with x-kubernetes-list-type=map | list with map based Equality & unique key guarantees | | 'array' with x-kubernetes-list-type=set | list with set based Equality & unique entry guarantees | | 'boolean' | boolean | | 'number' (all formats) | double | | 'integer' (all formats) | int (64) | | _no equivalent_ | uint (64) | | 'null' | null_type | | 'string' | string | | 'string' with format=byte (base64 encoded) | bytes | | 'string' with format=date | timestamp (google.protobuf.Timestamp) | | 'string' with format=datetime | timestamp (google.protobuf.Timestamp) | | 'string' with format=duration | duration (google.protobuf.Duration) | Also see: [CEL types](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#values), [OpenAPI 
types](https://swagger.io/specification/#data-types), [Kubernetes Structural Schemas](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema). Equality comparison for arrays with `x-kubernetes-list-type` of `set` or `map` ignores element order. For example `[1, 2] == [2, 1]` if the arrays represent Kubernetes `set` values. Concatenation on arrays with `x-kubernetes-list-type` uses the semantics of the list type: - `set`: `X + Y` performs a union where the array positions of all elements in `X` are preserved and non-intersecting elements in `Y` are appended, retaining their partial order. - `map`: `X + Y` performs a merge where the array positions of all keys in `X` are preserved but the values are overwritten by values in `Y` when the key sets of `X` and `Y` intersect. Elements in `Y` with non-intersecting keys are appended, retaining their partial order. ## Escaping Only Kubernetes resource property names of the form `[a-zA-Z_.-/][a-zA-Z0-9_.-/]*` are accessible from CEL. Accessible property names are escaped according to the following rules when accessed in the expression: {{< table caption=\"Table of CEL identifier escaping rules\" >}} | escape sequence | property name equivalent | |-------------------|----------------------------------------------------------------------------------------------| | `__underscores__` | `__` | | `__dot__` | `.` | | `__dash__` | `-` | | `__slash__` | `/` | | `__{keyword}__` | [CEL **RESERVED** keyword](https://github.com/google/cel-spec/blob/v0.6.0/doc/langdef.md#syntax) | When you escape any of CEL's **RESERVED** keywords, the escape must match the exact property name and use the underscore escaping (for example, `int` in the word `sprint` would not be escaped, nor would it need to be). 
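The escaping rules in the table above can be sketched as a small Python function. This is a toy version: the reserved-keyword set below is only a sample subset, and the authoritative escaping is defined by the Kubernetes CEL implementation.

```python
# Toy sketch of CEL property-name escaping. RESERVED here is a small
# sample subset, not the full CEL reserved-keyword list.
RESERVED = {"namespace", "if", "in", "true", "false", "null"}

def escape_property(name: str) -> str:
    if name in RESERVED:
        # whole-name match against a reserved keyword: __{keyword}__
        return f"__{name}__"
    # otherwise escape the special characters, double underscores first
    return (name.replace("__", "__underscores__")
                .replace(".", "__dot__")
                .replace("-", "__dash__")
                .replace("/", "__slash__"))

print(escape_property("x-prop"))     # x__dash__prop
print(escape_property("redact__d"))  # redact__underscores__d
print(escape_property("namespace"))  # __namespace__
```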
Examples on escaping: {{< table caption=\"Examples of escaped CEL identifiers\" >}} | property name | rule with escaped property name | |---------------|-----------------------------------| | `namespace` | `self.__namespace__ > 0` | | `x-prop` | `self.x__dash__prop > 0` | | `redact__d` | `self.redact__underscores__d > 0` | | `string` | `self.startsWith('kube')` | ## Resource constraints CEL is non-Turing complete and offers a variety of production safety controls to limit execution time. CEL's _resource constraint_ features provide feedback to developers about expression complexity and help protect the API server from excessive resource consumption during evaluation. CEL's resource constraint features are used to prevent CEL evaluation from consuming excessive API server resources. A key element of the resource constraint features is a _cost unit_ that CEL defines as a way of tracking CPU utilization. Cost units are independent of system load and hardware. Cost units are also deterministic; for any given CEL expression and input data, evaluation of the expression by the CEL interpreter will always result in the same cost. Many of CEL's core operations have fixed costs. The simplest operations, such as comparisons (e.g. `<`), have a cost of 1. Some have a higher fixed cost; for example, list literal declarations have a fixed base cost of 40 cost units. Calls to functions implemented in native code approximate cost based on the time complexity of the operation. For example: operations that use regular expressions, such as `matches` and `find`, are estimated using an approximated cost of `length(regexString)*length(inputString)`. The approximated cost reflects the worst case time complexity of Go's RE2 implementation. ### Runtime cost budget All CEL expressions evaluated by Kubernetes are constrained by a runtime cost budget. The runtime cost budget is an estimate of actual CPU utilization computed by incrementing a cost unit counter while interpreting a CEL expression. 
If the CEL interpreter executes too many instructions, the runtime cost budget will be exceeded, execution of the expression will be halted, and an error will result. Some Kubernetes resources define an additional runtime cost budget that bounds the execution of multiple expressions. If the sum total of the cost of the expressions exceeds the budget, execution of the expressions will be halted, and an error will result. For example, the validation of a custom resource has a _per-validation_ runtime cost budget for all [Validation Rules](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules) evaluated to validate the custom resource. ### Estimated cost limits For some Kubernetes resources, the API server may also check whether the worst case estimated running time of a CEL expression would be prohibitively expensive to execute. If so, the API server prevents the CEL expression from being written to API resources by rejecting create or update operations containing the CEL expression. This feature offers a stronger assurance that CEL expressions written to the API resource can be evaluated at runtime without exceeding the runtime cost budget.
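The runtime cost budget described above amounts to metered interpretation: charge each operation as it runs and halt once the budget is spent. A toy Python sketch of that idea, with all costs and names invented for illustration (Kubernetes' real cost model is defined by the CEL runtime):

```python
# Toy sketch of a cost-metered evaluation loop.

class BudgetExceeded(Exception):
    pass

class Meter:
    def __init__(self, budget: int):
        self.budget = budget
        self.spent = 0

    def charge(self, cost: int):
        self.spent += cost
        if self.spent > self.budget:
            raise BudgetExceeded(f"cost {self.spent} > budget {self.budget}")

def all_match(items, predicate, meter: Meter):
    # Charge one cost unit per element visited, like a per-iteration cost.
    for item in items:
        meter.charge(1)
        if not predicate(item):
            return False
    return True

meter = Meter(budget=5)
print(all_match([1, 2, 3], lambda x: x > 0, meter))  # True, 3 units spent
try:
    all_match(range(100), lambda x: x >= 0, Meter(budget=5))
except BudgetExceeded as exc:
    print("halted:", exc)
```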
It accepts the values `ignore`, `warn`, and `strict` while also accepting the values `true` (equivalent to `strict`) and `false` (equivalent to `ignore`). The default validation setting for `kubectl` is `--validate=true`. `Strict` : Strict field validation, errors on validation failure `Warn` : Field validation is performed, but errors are exposed as warnings rather than failing the request `Ignore` : No server side field validation is performed When `kubectl` cannot connect to an API server that supports field validation, it will fall back to using client-side validation. Kubernetes 1.27 and later versions always offer field validation; older Kubernetes releases might not. If your cluster is older than v1.27, check the documentation for your version of Kubernetes. ## {{% heading \"whatsnext\" %}} If you're new to Kubernetes, read more about the following:"} {"_id":"doc-en-website-4d3f22438345987e35bab2b17474cb500c8f5f3f1577fa60240918af324339c4","title":"","text":"Kubernetes provides great primitives for deploying applications to a cluster: it can be as simple as `kubectl create -f app.yaml`. Deploying apps across multiple clusters has never been that simple. How should app workloads be distributed? Should the app resources be replicated into all clusters, replicated into selected clusters, or partitioned into clusters? How is access to the clusters managed? What happens if some of the resources that a user wants to distribute pre-exist, in some or all of the clusters, in some form? In SIG Multicluster, our journey has revealed that there are multiple possible models to solve these problems and there probably is no single best-fit, all-scenario solution. [Kubernetes Cluster Federation (KubeFed for short)](https://github.com/kubernetes-sigs/kubefed), however, is the single biggest Kubernetes open source sub-project, and has seen the maximum interest and contribution from the community in this problem space. 
The project initially reused the Kubernetes API to do away with any added usage complexity for an existing Kubernetes user. This approach was not viable, because of the problems summarised below: * Difficulties in re-implementing the Kubernetes API at the cluster level, as federation-specific extensions were stored in annotations. * Limited flexibility in federated types, placement and reconciliation, due to 1:1 emulation of the Kubernetes API."} {"_id":"doc-en-website-a9eb094f672bea588c8bcab68a6b3ba7edb4dbe33096aff19bded4993d23a52b","title":"","text":" ## {{% heading \"prerequisites\" %}} These instructions are for Kubernetes {{< skew currentVersion >}}. If you want to check the integrity of components for a different version of Kubernetes, check the documentation for that Kubernetes release. 
You will need to have the following tools installed: - `cosign` ([install guide](https://docs.sigstore.dev/cosign/installation/)) - `curl` (often provided by your operating system) - `jq` ([download jq](https://stedolan.github.io/jq/download/)) ## Verifying binary signatures"} {"_id":"doc-en-website-cae2beab1679829d2986af11fd6b587d17b12c3fd07a41d60516752e284ae48c","title":"","text":"done ``` Then verify the blob by using `cosign verify-blob`: ```shell cosign verify-blob \"$BINARY\" --signature \"$BINARY\".sig --certificate \"$BINARY\".cert --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com --certificate-oidc-issuer https://accounts.google.com ``` {{< note >}} Cosign 2.0 requires the `--certificate-identity` and `--certificate-oidc-issuer` options. To learn more about keyless signing, please refer to [Keyless Signatures](https://docs.sigstore.dev/cosign/keyless). Previous versions of Cosign required that you set `COSIGN_EXPERIMENTAL=1`. For additional information, please refer to the [sigstore Blog](https://blog.sigstore.dev/cosign-2-0-released/). {{< /note >}} ## Verifying image signatures"} {"_id":"doc-en-website-f736fdc7120e5d1dbc447e8623a4e8f1bcd273c528e52fbbb587f6cf049f62e1","title":"","text":"For a complete list of images that are signed please refer to [Releases](/releases/download/). 
Pick one image from this list and verify its signature using the `cosign verify` command: ```shell cosign verify registry.k8s.io/kube-apiserver-amd64:v{{< skew currentPatchVersion >}} --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com --certificate-oidc-issuer https://accounts.google.com | jq . ``` ### Verifying images for all control plane components To verify all signed control plane images for the latest stable version (v{{< skew currentPatchVersion >}}), please run the following commands: ```shell curl -Ls \"https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release\" | grep \"SPDXID: SPDXRef-Package-registry.k8s.io\" | grep -v sha256 | cut -d- -f3- | sed 's/-///' | sed 's/-v1/:v1/' | sort > images.txt input=images.txt while IFS= read -r image do cosign verify \"$image\" --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com --certificate-oidc-issuer https://accounts.google.com | jq . done < \"$input\" ``` 
Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example: ```console registry-url/image-name@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2 ``` For more information, please refer to the [Image Pull Policy](/docs/concepts/containers/images/#image-pull-policy) section. ## Verifying Image Signatures with Admission Controller For non-control plane images (for example, the [conformance image](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/image/README.md)), signatures can also be verified at deploy time using the [sigstore policy-controller](https://docs.sigstore.dev/policy-controller/overview) admission controller. 
Here are some helpful resources to get started with `policy-controller`: - [Installation](https://github.com/sigstore/helm-charts/tree/main/charts/policy-controller) - [Configuration Options](https://github.com/sigstore/policy-controller/tree/main/config)"} {"_id":"doc-en-website-ab27081bb50f7bfc95e79cd395d63c22e260d90b9329577eaf2ac45aa9cbb679","title":"","text":"Install CNI plugins (required for most pod network): ```bash CNI_PLUGINS_VERSION=\"v1.1.1\" CNI_PLUGINS_VERSION=\"v1.2.0\" ARCH=\"amd64\" DEST=\"/opt/cni/bin\" sudo mkdir -p \"$DEST\""} {"_id":"doc-en-website-36f169108d78fb8ccce0341435ace8eaf34e9b7af78a07d37c27658e41d13084","title":"","text":"Install crictl (required for kubeadm / Kubelet Container Runtime Interface (CRI)) ```bash CRICTL_VERSION=\"v1.25.0\" CRICTL_VERSION=\"v1.26.0\" ARCH=\"amd64\" curl -L \"https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz\" | sudo tar -C $DOWNLOAD_DIR -xz ```"} {"_id":"doc-en-website-ee9adc95945126562c29c7dc742e5a7c8321fd43c1950d1e8a88eb9bf80a2dee","title":"","text":"## {{% heading \"whatsnext\" %}} * [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) * [Using kubeadm to Create a Cluster](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) No newline at end of file"} {"_id":"doc-en-website-216c06647e309ad88ba522994934c32b236018f2feb2281902882510e4852c64","title":"","text":"{{< glossary_definition term_id=\"kube-controller-manager\" length=\"all\" >}} Some types of these controllers are: There are many different types of controllers. Some examples of them are: * Node controller: Responsible for noticing and responding when nodes go down. 
* Job controller: Watches for Job objects that represent one-off tasks, then creates"} {"_id":"doc-en-website-57f163a005bc96b9039b51d37ea5c1ac2a7650cb11720c08c5ac922f33477364","title":"","text":"* EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods). * ServiceAccount controller: Creates default ServiceAccounts for new namespaces. The above is not an exhaustive list. ### cloud-controller-manager {{< glossary_definition term_id=\"cloud-controller-manager\" length=\"short\" >}}"} {"_id":"doc-en-website-b17c120155e880f4d6f913ecbc6b7b2a5a6e5ebc4a3a7d580daa75e6f40f4ddd","title":"","text":"* Etcd's official [documentation](https://etcd.io/docs/). * Several [container runtimes](/docs/setup/production-environment/container-runtimes/) in Kubernetes. * Integrating with cloud providers using [cloud-controller-manager](/docs/concepts/architecture/cloud-controller/). * [kubectl](/docs/reference/generated/kubectl/kubectl-commands) commands."} {"_id":"doc-en-website-98c084c367e4b969e571e3ec3f6c1270d1244bb10ececbdc9d29eac04a83da44","title":"","text":"each kube-apiserver. You can inspect Leases owned by each kube-apiserver by checking for lease objects in the `kube-system` namespace with the name `kube-apiserver-`. 
Alternatively you can use the label selector `apiserver.kubernetes.io/identity=kube-apiserver`: ```shell kubectl -n kube-system get lease -l apiserver.kubernetes.io/identity=kube-apiserver ``` ``` NAME HOLDER AGE apiserver-07a5ea9b9b072c4a5f3d1c3702 apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 5m33s apiserver-7be9e061c59d368b3ddaf1376e apiserver-7be9e061c59d368b3ddaf1376e_84f2a85d-37c1-4b14-b6b9-603e62e4896f 4m23s apiserver-1dfef752bcb36637d2763d1868 apiserver-1dfef752bcb36637d2763d1868_c5ffa286-8a9a-45d4-91e7-61118ed58d2e 4m43s ``` The SHA256 hash used in the lease name is based on the OS hostname as seen by that API server. 
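Because the lease name embeds a hash derived from the OS hostname, each host always produces the same lease name. The sketch below only demonstrates that determinism; the exact truncation and encoding kube-apiserver applies are implementation details, and `lease_suffix` is a name invented here:

```python
import hashlib

# Sketch only: a deterministic per-host suffix derived from the hostname.
# This is NOT the exact algorithm kube-apiserver uses to build lease names.
def lease_suffix(hostname: str, length: int = 28) -> str:
    return hashlib.sha256(hostname.encode()).hexdigest()[:length]

print(lease_suffix("master-1") == lease_suffix("master-1"))  # True
print(lease_suffix("master-1") != lease_suffix("master-2"))  # True
```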
Each kube-apiserver should be"} {"_id":"doc-en-website-17cac63a22f8e2ae0824511ddf4f4a3b498bbac84a3e59536f9ab80d19217f35","title":"","text":"hostname used by kube-apiserver by checking the value of the `kubernetes.io/hostname` label: ```shell kubectl -n kube-system get lease apiserver-07a5ea9b9b072c4a5f3d1c3702 -o yaml ``` ```yaml apiVersion: coordination.k8s.io/v1 kind: Lease metadata: creationTimestamp: \"2023-07-02T13:16:48Z\" labels: apiserver.kubernetes.io/identity: kube-apiserver kubernetes.io/hostname: master-1 name: apiserver-07a5ea9b9b072c4a5f3d1c3702 namespace: kube-system resourceVersion: \"334899\" uid: 90870ab5-1ba9-4523-b215-e4d4e662acb1 spec: holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05 leaseDurationSeconds: 3600 renewTime: \"2023-07-04T21:58:48.065888Z\" ``` Expired leases from kube-apiservers that no longer exist are garbage collected by new kube-apiservers after 1 hour."} {"_id":"doc-en-website-be0a9200af300d00feefa66e03408185a3705d9309dba3d4269ecdfecd1cc755","title":"","text":"You should now be able to curl the nginx Service on `:` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](/docs/reference/networking/virtual-ips/). 
## Accessing the Service"} {"_id":"doc-en-website-209ce1efaf840757b877fb6efbee461c237ac23e4af26801d2cacbfd34f2fa56","title":"","text":"| Session affinity | Ensures that connections from a particular client are passed to the same Pod each time. | Windows Server 2022 | Set `service.spec.sessionAffinity` to \"ClientIP\" | | Direct Server Return (DSR) | Load balancing mode where the IP address fixups and the LBNAT occur at the container vSwitch port directly; service traffic arrives with the source IP set as the originating pod IP. | Windows Server 2019 | Set the following flags in kube-proxy: `--feature-gates=\"WinDSR=true\" --enable-dsr=true` | | Preserve-Destination | Skips DNAT of service traffic, thereby preserving the virtual IP of the target service in packets reaching the backend Pod. Also disables node-node forwarding. | Windows Server, version 1903 | Set `\"preserve-destination\": \"true\"` in service annotations and enable DSR in kube-proxy. | | IPv4/IPv6 dual-stack networking | Native IPv4-to-IPv4 in parallel with IPv6-to-IPv6 communications to, from, and within a cluster | Windows Server 2019 | See [IPv4/IPv6 dual-stack](/docs/concepts/services-networking/dual-stack/#windows-support) | | Client IP preservation | Ensures that source IP of incoming ingress traffic gets preserved. Also disables node-node forwarding. 
| Windows Server 2019 | Set `service.spec.externalTrafficPolicy` to \"Local\" and enable DSR in kube-proxy | {{< /table >}}"} {"_id":"doc-en-website-5c5d92c1dc35b0ef783b0bd4f4d16ede1f52447b5881fbef17cee2bfcbe35737","title":"","text":"I componenti del Control Plane sono responsabili di tutte le decisioni globali sul cluster (ad esempio, lo scheduling) oltre che a rilevare e rispondere agli eventi del cluster (ad esempio, l'avvio di un nuovo {{< glossary_tooltip text=\"pod\" term_id=\"pod\">}} quando il valore `replicas` di un deployment non è soddisfatto). I componenti della Control Plane possono essere eseguiti su qualsiasi nodo del cluster stesso. Solitamente, per semplicità, gli script di installazione tendono a eseguire tutti i componenti della Control Plane sulla stessa macchina, separando la Control Plane dai workload dell'utente. Vedi [creare un cluster in High-Availability](/docs/admin/high-availability/) per un esempio di un'installazione multi-master. Vedi [creare un cluster in High-Availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) per un esempio di un'installazione multi-master. 
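The session affinity and client IP preservation rows above map to ordinary Service fields. A minimal sketch, assuming DSR has already been enabled in kube-proxy as described in the table, and using a hypothetical app name:

```yaml
# Sketch of a Service that preserves the client source IP on Windows nodes.
# The name and selector are illustrative, not from this page.
apiVersion: v1
kind: Service
metadata:
  name: example-win-svc
spec:
  selector:
    app: example-win-app
  ports:
  - port: 80
    targetPort: 80
  sessionAffinity: ClientIP        # connections from one client go to the same Pod
  externalTrafficPolicy: Local     # preserve source IP; also disables node-node forwarding
```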
### kube-apiserver"} {"_id":"doc-en-website-5531ca22e32bb60ec66c171a29265f9365c26aebac31f0a68bb4288c564afde4","title":"","text":"body.cid-community #navigation-items { padding: 0.25em; width: 100vw; width: 100%; max-width: initial; margin-top: 2.5em;"} {"_id":"doc-en-website-cc316be2ee541111e7e1dbe460b2ad9971ed27efe278e0cc4a44c31b1b53cbbf","title":"","text":"body.cid-community #gallery { display: flex; max-width: 100vw; max-width: 100%; gap: 0.75rem; justify-content: center; margin-left: auto;"} {"_id":"doc-en-website-12eb2a9f8c06e34aacab434505f57495a4a66920a6f1c113ee1315f5c2b8682d","title":"","text":"body.cid-community .community-section#events { width: 100vw; width: 100%; max-width: initial; margin-bottom: 0;"} {"_id":"doc-en-website-f5074c39ee5b5011b55116591f8198cfcbe301bc67adcfb84b0542a00e58f5ef","title":"","text":"} body.cid-community .community-section#values { width: 100vw; width: 100%; max-width: initial; background-image: url('/images/community/event-bg.jpg'); color: #fff;"} {"_id":"doc-en-website-1c1e62c57e4d248c89b51df858fa735d7b7fdfbdab858f0002cab9353f5a719a","title":"","text":"} body.cid-community .community-section#meetups { width: 100vw; width: 100%; max-width: initial; margin-top: 0;"} {"_id":"doc-en-website-09fb4e08e48b83d49a2399d2a090dd08d4a2d7a5ae15a24437058312d6f513a4","title":"","text":"background-repeat: no-repeat, repeat; background-size: auto 100%, cover; color: #fff; width: 100vw; /* fallback in case calc() fails */ padding: 5vw; padding-bottom: 1em;"} {"_id":"doc-en-website-bd6602592ecec530901c3b2e0cbdad52693ec2508d2853e7cde497ae3d08d4a4","title":"","text":"} body.cid-community #videos { width: 100vw; width: 100%; max-width: initial; padding: 0.5em 5vw 5% 5vw; /* fallback in case calc() fails */ background-color: #eeeeee;"} {"_id":"doc-en-website-6545c590c5afe2afdb48d47e2d3205b01ba32b0133adb260c6b128ef6e2ac07e","title":"","text":"body.cid-community .community-section.community-frame { width: 100vw; width: 100%; } body.cid-community 
.community-section.community-frame .twittercol1 {"} {"_id":"doc-en-website-691afa853754362b7946e497200ef0a767bea8a8a91c289bc899bdff220b4c6c","title":"","text":"body.cid-community .community-section#meetups p:last-of-type { margin-bottom: 6em; /* extra space for background */ } } @media only screen and (max-width: 767px) { body.cid-community .community-section h2:before, body.cid-community .community-section h2:after { display: none; } } No newline at end of file"} {"_id":"doc-en-website-ec260bda0ba3da2cf69368e5853ccc999a25cfe395500366503f60d1d2c1b89d","title":"","text":"## Labels, annotations and taints used on API objects ### apf.kubernetes.io/autoupdate-spec Type: Annotation Example: `apf.kubernetes.io/autoupdate-spec: \"true\"` Used on: [`FlowSchema` and `PriorityLevelConfiguration` Objects](/concepts/cluster-administration/flow-control/#defaults) If this annotation is set to true on a FlowSchema or PriorityLevelConfiguration, the `spec` for that object is managed by the kube-apiserver. If the API server does not recognize an APF object, and you annotate it for automatic update, the API server deletes the entire object. Otherwise, the API server does not manage the object spec. For more details, read [Maintenance of the Mandatory and Suggested Configuration Objects](/docs/concepts/cluster-administration/flow-control/#maintenance-of-the-mandatory-and-suggested-configuration-objects). ### app.kubernetes.io/component Type: Label"} {"_id":"doc-en-website-d4eb48e5924da80c21a6e697f4938122fa89ce099e6a7f417c83ee246015c19f","title":"","text":"over the package builds. This means that anything before v1.24.0 will only be available in the Google-hosted repository. - There's a dedicated package repository for each Kubernetes minor version. When upgrading to to a different minor release, you must bear in mind that When upgrading to a different minor release, you must bear in mind that the package repository details also change. 
{{< /note >}}"} {"_id":"doc-en-website-d12e0d488e0d963536ec39ab01b4cd439f51bf8cb8e3ca90771e775330c17b20","title":"","text":"Limiting port ranges of communication | This recommendation may be a bit self-explanatory, but wherever possible you should only expose the ports on your service that are absolutely essential for communication or metric gathering. | 3rd Party Dependency Security | It is a good practice to regularly scan your application's third party libraries for known security vulnerabilities. Each programming language has a tool for performing this check automatically. | Static Code Analysis | Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors. Some of the tools can be found at: https://owasp.org/www-community/Source_Code_Analysis_Tools | Dynamic probing attacks | There are a few automated tools that you can run against your service to try some of the well known service attacks. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the [OWASP Zed Attack proxy](https://owasp.org/www-project-zap/) tool. | Dynamic probing attacks | There are a few automated tools that you can run against your service to try some of the well known service attacks. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the [OWASP Zed Attack proxy](https://www.zaproxy.org/) tool. | {{< /table >}}"} {"_id":"doc-en-website-14714522090872de0b8e2d3bda05904011941848f3d280137ca940d20d90c6bc","title":"","text":"

    If you haven't worked through the earlier sections, start from Using minikube to create a cluster.

    Scaling is accomplished by changing the number of replicas in a Deployment.

    NOTE: If you are trying this after the previous section, you may need to start from creating a cluster, as the services may have been deleted.
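As a sketch of the scaling step described above, assuming a hypothetical Deployment named `example-deployment` already exists in the cluster:

```shell
# Scale the Deployment by changing its replica count; the name is illustrative.
kubectl scale deployments/example-deployment --replicas=4

# Verify the new replica count and the resulting Pods.
kubectl get deployments
kubectl get pods -o wide
```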

    "} {"_id":"doc-en-website-2299da7e9885d2132bc76afced0d4e43ba0da6ba102c412400f9e04fb04402b3","title":"","text":"To reflect the change on kubeadm nodes you must do the following: - Log in to a kubeadm node - Run `kubeadm upgrade node phase kubelet-config` to download the latest `kubelet-config` ConfigMap contents into the local file `/var/lib/kubelet/config.conf` ConfigMap contents into the local file `/var/lib/kubelet/config.yaml` - Edit the file `/var/lib/kubelet/kubeadm-flags.env` to apply additional configuration with flags - Restart the kubelet service with `systemctl restart kubelet`"} {"_id":"doc-en-website-1db74325695238932198b20d60f104c0e9fd1a80af3dd1dd1e85933372543f95","title":"","text":"{{< note >}} During `kubeadm upgrade`, kubeadm downloads the `KubeletConfiguration` from the `kubelet-config` ConfigMap and overwrite the contents of `/var/lib/kubelet/config.conf`. `kubelet-config` ConfigMap and overwrite the contents of `/var/lib/kubelet/config.yaml`. This means that node local configuration must be applied either by flags in `/var/lib/kubelet/kubeadm-flags.env` or by manually updating the contents of `/var/lib/kubelet/config.conf` after `kubeadm upgrade`, and then restarting the kubelet. `/var/lib/kubelet/config.yaml` after `kubeadm upgrade`, and then restarting the kubelet. {{< /note >}} ### Applying kube-proxy configuration changes"} {"_id":"doc-en-website-f1d9522a48cf7ebdb8614ed03297e9673fa0df066f52ce402b3c5c1e69a7973c","title":"","text":"#### Persisting kubelet reconfiguration Any changes to the `KubeletConfiguration` stored in `/var/lib/kubelet/config.conf` will be overwritten on Any changes to the `KubeletConfiguration` stored in `/var/lib/kubelet/config.yaml` will be overwritten on `kubeadm upgrade` by downloading the contents of the cluster wide `kubelet-config` ConfigMap. 
To persist kubelet node specific configuration either the file `/var/lib/kubelet/config.conf` To persist kubelet node specific configuration either the file `/var/lib/kubelet/config.yaml` has to be updated manually post-upgrade or the file `/var/lib/kubelet/kubeadm-flags.env` can include flags. The kubelet flags override the associated `KubeletConfiguration` options, but note that some of the flags are deprecated. A kubelet restart will be required after changing `/var/lib/kubelet/config.conf` or A kubelet restart will be required after changing `/var/lib/kubelet/config.yaml` or `/var/lib/kubelet/kubeadm-flags.env`. ## {{% heading \"whatsnext\" %}}"} {"_id":"doc-en-website-8ab866172c4f0464dec553350d03d49935d71db07cc569f419ebe2e0756951a1","title":"","text":"and bare metal workloads. * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes. * [Gateway API](/docs/concepts/services-networking/gateway/) is an open source project managed by the [SIG Network](https://github.com/kubernetes/community/tree/master/sig-network) community and provides an expressive, extensible, and role-oriented API for modeling service networking. * [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod. * [Multus](https://github.com/k8snetworkplumbingwg/multus-cni) is a Multi plugin for"} {"_id":"doc-en-website-0b578cee38ec03582ce932b95389e80655eb6690e1d69db09a8bc59557b78b05","title":"","text":"to be reachable from outside your cluster. - [Ingress](/docs/concepts/services-networking/ingress/) provides extra functionality specifically for exposing HTTP applications, websites and APIs. - [Gateway API](/docs/concepts/services-networking/gateway/) is an {{}} that provides an expressive, extensible, and role-oriented family of API kinds for modeling service networking. 
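Putting the steps above together, a post-upgrade sketch for persisting node-local kubelet settings, using the files named above, might look like:

```shell
# /var/lib/kubelet/config.yaml is overwritten from the cluster-wide
# kubelet-config ConfigMap during `kubeadm upgrade`, so node-local
# changes must be re-applied afterwards:
sudo vi /var/lib/kubelet/config.yaml        # manual post-upgrade edits, or...
sudo vi /var/lib/kubelet/kubeadm-flags.env  # ...flags, which override config.yaml

# A kubelet restart is required after changing either file.
sudo systemctl restart kubelet
```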
- You can also use Services to [publish services only for consumption inside your cluster](/docs/concepts/services-networking/service-traffic-policy/)."} {"_id":"doc-en-website-09da951a450f6e7ab1e9dd26324f3f9c3fd1f0e2e3fe42126ac4c581666648e4","title":"","text":" --- title: Gateway API content_type: concept description: >- Gateway API is a family of API kinds that provide dynamic infrastructure provisioning and advanced traffic routing. weight: 55 --- Make network services available by using an extensible, role-oriented, protocol-aware configuration mechanism. [Gateway API](https://gateway-api.sigs.k8s.io/) is an {{}} containing API [kinds](https://gateway-api.sigs.k8s.io/references/spec/) that provide dynamic infrastructure provisioning and advanced traffic routing. ## Design principles The following principles shaped the design and architecture of Gateway API: * __Role-oriented:__ Gateway API kinds are modeled after organizational roles that are responsible for managing Kubernetes service networking: * __Infrastructure Provider:__ Manages infrastructure that allows multiple isolated clusters to serve multiple tenants, e.g. a cloud provider. * __Cluster Operator:__ Manages clusters and is typically concerned with policies, network access, application permissions, etc. * __Application Developer:__ Manages an application running in a cluster and is typically concerned with application-level configuration and [Service](/docs/concepts/services-networking/service/) composition. * __Portable:__ Gateway API specifications are defined as [custom resources](docs/concepts/extend-kubernetes/api-extension/custom-resources) and are supported by many [implementations](https://gateway-api.sigs.k8s.io/implementations/). 
* __Expressive:__ Gateway API kinds support functionality for common traffic routing use cases such as header-based matching, traffic weighting, and others that were only possible in [Ingress](/docs/concepts/services-networking/ingress/) by using custom annotations. * __Extensible:__ Gateway allows for custom resources to be linked at various layers of the API. This makes granular customization possible at the appropriate places within the API structure. ## Resource model Gateway API has three stable API kinds: * __GatewayClass:__ Defines a set of gateways with common configuration and managed by a controller that implements the class. * __Gateway:__ Defines an instance of traffic handling infrastructure, such as cloud load balancer. * __HTTPRoute:__ Defines HTTP-specific rules for mapping traffic from a Gateway listener to a representation of backend network endpoints. These endpoints are often represented as a {{}}. Gateway API is organized into different API kinds that have interdependent relationships to support the role-oriented nature of organizations. A Gateway object is associated with exactly one GatewayClass; the GatewayClass describes the gateway controller responsible for managing Gateways of this class. One or more route kinds such as HTTPRoute, are then associated to Gateways. A Gateway can filter the routes that may be attached to its `listeners`, forming a bidirectional trust model with routes. The following figure illustrates the relationships of the three stable Gateway API kinds: {{< figure src=\"/docs/images/gateway-kind-relationships.svg\" alt=\"A figure illustrating the relationships of the three stable Gateway API kinds\" class=\"diagram-medium\" >}} ### GatewayClass {#api-kind-gateway-class} Gateways can be implemented by different controllers, often with different configurations. A Gateway must reference a GatewayClass that contains the name of the controller that implements the class. 
A minimal GatewayClass example: ```yaml apiVersion: gateway.networking.k8s.io/v1 kind: GatewayClass metadata: name: example-class spec: controllerName: example.com/gateway-controller ``` In this example, a controller that has implemented Gateway API is configured to manage GatewayClasses with the controller name `example.com/gateway-controller`. Gateways of this class will be managed by the implementation's controller. See the [GatewayClass](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.GatewayClass) reference for a full definition of this API kind. ### Gateway {#api-kind-gateway} A Gateway describes an instance of traffic handling infrastructure. It defines a network endpoint that can be used for processing traffic, i.e. filtering, balancing, splitting, etc. for backends such as a Service. For example, a Gateway may represent a cloud load balancer or an in-cluster proxy server that is configured to accept HTTP traffic. A minimal Gateway resource example: ```yaml apiVersion: gateway.networking.k8s.io/v1 kind: Gateway metadata: name: example-gateway spec: gatewayClassName: example-class listeners: - name: http protocol: HTTP port: 80 ``` In this example, an instance of traffic handling infrastructure is programmed to listen for HTTP traffic on port 80. Since the `addresses` field is unspecified, an address or hostname is assigned to the Gateway by the implementation's controller. This address is used as a network endpoint for processing traffic of backend network endpoints defined in routes. See the [Gateway](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.Gateway) reference for a full definition of this API kind. ### HTTPRoute {#api-kind-httproute} The HTTPRoute kind specifies routing behavior of HTTP requests from a Gateway listener to backend network endpoints. For a Service backend, an implementation may represent the backend network endpoint as a Service IP or the backing Endpoints of the Service. 
An HTTPRoute represents configuration that is applied to the underlying Gateway implementation. For example, defining a new HTTPRoute may result in configuring additional traffic routes in a cloud load balancer or in-cluster proxy server. A minimal HTTPRoute example: ```yaml apiVersion: gateway.networking.k8s.io/v1 kind: HTTPRoute metadata: name: example-httproute spec: parentRefs: - name: example-gateway hostnames: - \"www.example.com\" rules: - matches: - path: type: PathPrefix value: /login backendRefs: - name: example-svc port: 8080 ``` In this example, HTTP traffic from Gateway `example-gateway` with the Host: header set to `www.example.com` and the request path specified as `/login` will be routed to Service `example-svc` on port `8080`. See the [HTTPRoute](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.HTTPRoute) reference for a full definition of this API kind. ## Request flow Here is a simple example of HTTP traffic being routed to a Service by using a Gateway and an HTTPRoute: {{< figure src=\"/docs/images/gateway-request-flow.svg\" alt=\"A diagram that provides an example of HTTP traffic being routed to a Service by using a Gateway and an HTTPRoute\" class=\"diagram-medium\" >}} In this example, the request flow for a Gateway implemented as a reverse proxy is: 1. The client starts to prepare an HTTP request for the URL `http://www.example.com` 2. The client's DNS resolver queries for the destination name and learns a mapping to one or more IP addresses associated with the Gateway. 3. The client sends a request to the Gateway IP address; the reverse proxy receives the HTTP request and uses the Host: header to match a configuration that was derived from the Gateway and attached HTTPRoute. 4. Optionally, the reverse proxy can perform request header and/or path matching based on match rules of the HTTPRoute. 5. 
Optionally, the reverse proxy can modify the request; for example, to add or emove headers, based on filter rules of the HTTPRoute. 6. Lastly, the reverse proxy forwards the request to one or more backends. ## Conformance Gateway API covers a broad set of features and is widely implemented. This combination requires clear conformance definitions and tests to ensure that the API provides a consistent experience wherever it is used. See the [conformance](https://gateway-api.sigs.k8s.io/concepts/conformance/) documentation to understand details such as release channels, support levels, and running conformance tests. ## Migrating from Ingress Gateway API is the successor to the [Ingress](/docs/concepts/services-networking/ingress/) API. However, it does not include the Ingress kind. As a result, a one-time conversion from your existing Ingress resources to Gateway API resources is necessary. Refer to the [ingress migration](https://gateway-api.sigs.k8s.io/guides/migrating-from-ingress/#migrating-from-ingress) guide for details on migrating Ingress resources to Gateway API resources. ## {{% heading \"whatsnext\" %}} Instead of Gateway API resources being natively implemented by Kubernetes, the specifications are defined as [Custom Resources](docs/concepts/extend-kubernetes/api-extension/custom-resources) supported by a wide range of [implementations](https://gateway-api.sigs.k8s.io/implementations/). [Install](https://gateway-api.sigs.k8s.io/guides/#installing-gateway-api) the Gateway API CRDs or follow the installation instructions of your selected implementation. After installing an implementation, use the [Getting Started](https://gateway-api.sigs.k8s.io/guides/) guide to help you quickly start working with Gateway API. {{< note >}} Make sure to review the documentation of your selected implementation to understand any caveats. 
{{< /note >}} Refer to the [API specification](https://gateway-api.sigs.k8s.io/reference/spec/) for additional details of all Gateway API kinds. "} {"_id":"doc-en-website-a3a55de52e73a7c2c8e2ab90634394374f61f7b708e2ae999d711b8b5b94cc72","title":"","text":"{{< feature-state for_k8s_version=\"v1.19\" state=\"stable\" >}} {{< glossary_definition term_id=\"ingress\" length=\"all\" >}} {{< note >}} Ingress is frozen. New features are being added to the [Gateway API](/docs/concepts/services-networking/gateway/). {{< /note >}} ## Terminology"} {"_id":"doc-en-website-a94fe4231fd770e7e8f3e8c513178271090cb6dc75717af2b80d4838fbf3f482","title":"","text":"* Read about [Ingress](/docs/concepts/services-networking/ingress/), which exposes HTTP and HTTPS routes from outside the cluster to Services within your cluster. * Read about [Gateway](https://gateway-api.sigs.k8s.io/), an extension to * Read about [Gateway](/docs/concepts/services-networking/gateway/), an extension to Kubernetes that provides more flexibility than Ingress. For more context, read the following:"} {"_id":"doc-en-website-8bdf1321df8a453fb7ea987114531581630f4874cfbdc8daeef2787af164a770","title":"","text":"
    cluster
    GatewayClass
    Gateway
    HTTPRoute
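The kind names listed above chain together by reference: a Gateway names its GatewayClass, and an HTTPRoute attaches to the Gateway through `parentRefs`. A minimal sketch using the illustrative names from this page:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class   # binds the Gateway to the class above
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-httproute
spec:
  parentRefs:
  - name: example-gateway           # attaches the route to the Gateway above
```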
    No newline at end of file"} {"_id":"doc-en-website-9dffc8197ebd66b2870fd87f610ab62847e37e642a4ea17f5556f7583a7b0a42","title":"","text":" No newline at end of file"} {"_id":"doc-en-website-25a0f1c6b2ebaffe218c8e172ced51a3ae37b810f351c4f02a08d8fec8389856","title":"","text":" --- title: Gateway API id: gateway-api date: 2023-10-19 full_link: /docs/concepts/services-networking/gateway/ short_description: > An API for modeling service networking in Kubernetes. aka: tags: - networking - architecture - extension --- A family of API kinds for modeling service networking in Kubernetes. Gateway API provides a family of extensible, role-oriented, protocol-aware API kinds for modeling service networking in Kubernetes. "} {"_id":"doc-en-website-c8f7f8dfca78f2636877f6559136a8ced6a0606a7f93dba45abb1b0ef23dcbbf","title":"","text":" --- title: Autoscaling Workloads description: >- With autoscaling, you can automatically update your workloads in one way or another. This allows your cluster to react to changes in resource demand more elastically and efficiently. content_type: concept weight: 40 --- In Kubernetes, you can _scale_ a workload depending on the current demand of resources. This allows your cluster to react to changes in resource demand more elastically and efficiently. When you scale a workload, you can either increase or decrease the number of replicas managed by the workload, or adjust the resources available to the replicas in-place. The first approach is referred to as _horizontal scaling_, while the second is referred to as _vertical scaling_. There are manual and automatic ways to scale your workloads, depending on your use case. ## Scaling workloads manually Kubernetes supports _manual scaling_ of workloads. Horizontal scaling can be done using the `kubectl` CLI. For vertical scaling, you need to _patch_ the resource definition of your workload. See below for examples of both strategies. 
- **Horizontal scaling**: [Running multiple instances of your app](/docs/tutorials/kubernetes-basics/scale/scale-intro/) - **Vertical scaling**: [Resizing CPU and memory resources assigned to containers](/docs/tasks/configure-pod-container/resize-container-resources) ## Scaling workloads automatically Kubernetes also supports _automatic scaling_ of workloads, which is the focus of this page. The concept of _Autoscaling_ in Kubernetes refers to the ability to automatically update an object that manages a set of Pods (for example a {{< glossary_tooltip text=\"Deployment\" term_id=\"deployment\" >}}. ### Scaling workloads horizontally In Kubernetes, you can automatically scale a workload horizontally using a _HorizontalPodAutoscaler_ (HPA). It is implemented as a Kubernetes API resource and a {{< glossary_tooltip text=\"controller\" term_id=\"controller\" >}} and periodically adjusts the number of {{< glossary_tooltip text=\"replicas\" term_id=\"replica\" >}} in a workload to match observed resource utilization such as CPU or memory usage. There is a [walkthrough tutorial](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough) of configuring a HorizontalPodAutoscaler for a Deployment. ### Scaling workloads vertically {{< feature-state for_k8s_version=\"v1.25\" state=\"stable\" >}} You can automatically scale a workload vertically using a _VerticalPodAutoscaler_ (VPA). Different to the HPA, the VPA doesn't come with Kubernetes by default, but is a separate project that can be found [on GitHub](https://github.com/kubernetes/autoscaler/tree/9f87b78df0f1d6e142234bb32e8acbd71295585a/vertical-pod-autoscaler). Once installed, it allows you to create {{< glossary_tooltip text=\"CustomResourceDefinitions\" term_id=\"customresourcedefinition\" >}} (CRDs) for your workloads which define _how_ and _when_ to scale the resources of the managed replicas. 
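As a sketch of the VPA objects described above, assuming the VPA components from the kubernetes/autoscaler project are installed and using a hypothetical workload name:

```yaml
# Sketch of a VerticalPodAutoscaler object; not part of core Kubernetes.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment   # the workload whose resource requests VPA manages
  updatePolicy:
    updateMode: "Auto"         # one of Auto, Recreate, Initial, Off
```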
{{< note >}} You will need to have the [Metrics Server](https://github.com/kubernetes-sigs/metrics-server) installed to your cluster for the HPA to work. {{< /note >}} At the moment, the VPA can operate in four different modes: {{< table caption=\"Different modes of the VPA\" >}} Mode | Description :----|:----------- `Auto` | Currently `Recreate`, might change to in-place updates in the future `Recreate` | The VPA assigns resource requests on pod creation as well as updates them on existing pods by evicting them when the requested resources differ significantly from the new recommendation `Initial` | The VPA only assigns resource requests on pod creation and never changes them later. `Off` | The VPA does not automatically change the resource requirements of the pods. The recommendations are calculated and can be inspected in the VPA object. {{< /table >}} #### Requirements for in-place resizing {{< feature-state for_k8s_version=\"v1.27\" state=\"alpha\" >}} Resizing a workload in-place **without** restarting the {{< glossary_tooltip text=\"Pods\" term_id=\"pod\" >}} or its {{< glossary_tooltip text=\"Containers\" term_id=\"container\" >}} requires Kubernetes version 1.27 or later.
    Additionally, the `InPlacePodVerticalScaling` feature gate needs to be enabled. {{< feature-gate-description name=\"InPlacePodVerticalScaling\" >}} ### Autoscaling based on cluster size For workloads that need to be scaled based on the size of the cluster (for example `cluster-dns` or other system components), you can use the _Cluster Proportional Autoscaler_.
    Just like the VPA, it is not part of the Kubernetes core, but hosted in its own repository [on GitHub](https://github.com/kubernetes-sigs/cluster-proportional-autoscaler). The Cluster Proportional Autoscaler watches the number of schedulable {{< glossary_tooltip text=\"nodes\" term_id=\"node\" >}} and cores and scales the number of replicas of the target workload accordingly. ### Event driven Autoscaling It is also possible to scale workloads based on events, for example using the [_Kubernetes Event Driven Autoscaler_ (**KEDA**)](https://keda.sh/). KEDA is a CNCF graduated enabling you to scale your workloads based on the number of events to be processed, for example the amount of messages in a queue. There exists a wide range of adapters for different event sources to choose from. ### Autoscaling based on schedules Another strategy for scaling your workloads is to **schedule** the scaling operations, for example in order to reduce resource consumption during off-peak hours. Similar to event driven autoscaling, such behavior can be achieved using KEDA in conjunction with its [`Cron` scaler](https://keda.sh/docs/2.13/scalers/cron/). The `Cron` scaler allows you to define schedules (and time zones) for scaling your workloads in or out. ## Scaling cluster infrastructure If scaling workloads isn't enough to meet your needs, you can also scale your cluster infrastructure itself. Scaling the cluster infrastructure normally means adding or removing {{< glossary_tooltip text=\"nodes\" term_id=\"node\" >}}. This can be done using one of two available autoscalers: - [**Cluster Autoscaler**](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) - [**Karpenter**](https://github.com/kubernetes-sigs/karpenter?tab=readme-ov-file) Both scalers work by watching for pods marked as _unschedulable_ or _underutilized_ nodes and then adding or removing nodes as needed. 
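The schedule-based approach mentioned above can be sketched as a KEDA ScaledObject using the `Cron` scaler. The field values are illustrative; check the KEDA documentation for your installed version:

```yaml
# Sketch: scale out during business hours, scale back in afterwards.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-cron-scaler
spec:
  scaleTargetRef:
    name: example-deployment    # hypothetical workload
  triggers:
  - type: cron
    metadata:
      timezone: Europe/Berlin   # IANA time zone name
      start: 0 8 * * *          # scale out at 08:00...
      end: 0 18 * * *           # ...and back in at 18:00
      desiredReplicas: "10"
```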
## {{% heading \"whatsnext\" %}} - Learn more about scaling horizontally - [Scale a StatefulSet](/docs/tasks/run-application/scale-stateful-set/) - [HorizontalPodAutoscaler Walkthrough](/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) - [Resize Container Resources In-Place](/docs/tasks/configure-pod-container/resize-container-resources/) - [Autoscale the DNS Service in a Cluster](/docs/tasks/administer-cluster/dns-horizontal-autoscaling/)
    "} {"_id":"doc-en-website-baf563b1a37db6ba47190962ddbaff1d127f1391e99287fcb315db48456402e5","title":"","text":"You can use {{< glossary_tooltip term_id=\"containerd\" text=\"ContainerD\" >}} 1.4.0+ as the container runtime for Kubernetes nodes that run Windows. Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#install-containerd). Learn how to [install ContainerD on a Windows node](/docs/setup/production-environment/container-runtimes/#containerd). {{< note >}} There is a [known limitation](/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations) when using GMSA with containerd to access Windows network shares, which requires a"} {"_id":"doc-en-website-bdff4cecbe12876089707ebb017e0321038ca86aa708c81503b6a58556266daf","title":"","text":"- `kubectl edit cm kubelet-config-x.yy -n kube-system`を呼び出します(`x.yy`はKubernetesのバージョンに置き換えてください)。 - 既存の`cgroupDriver`の値を修正するか、以下のような新しいフィールドを追加します。 ``yaml ```yaml cgroupDriver: systemd ```"} {"_id":"doc-en-website-1c1c8061dcbb7ebfca0b898f3f6e5647e2519f93a084df76e680fe799b198493","title":"","text":"communicates with). 1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster to high availability you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load-balancer. to [high availability](/docs/setup/production-environment/tools/kubeadm/high-availability/) you should specify the `--control-plane-endpoint` to set the shared endpoint for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load-balancer. 1. Choose a Pod network add-on, and verify whether it requires any arguments to be passed to `kubeadm init`. 
Depending on which third-party provider you choose, you might need to set the `--pod-network-cidr` to"} {"_id":"doc-en-website-6d76fcb81177ebd0b4df82581b33f30bbb1996b0ceb1ea6f1fbae2a5ba51e0a0","title":"","text":"kubectl apply -f ``` {{< note >}} Only a few CNI plugins support Windows. More details and setup instructions can be found in [Adding Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/#network-config). {{< /note >}} You can install only one Pod network per cluster. Once a Pod network has been installed, you can confirm that it is working by"} {"_id":"doc-en-website-3f9d45485ef224d91d1e40d14103378a891eaa5edb56e9ae7b411fa5061cf320","title":"","text":"kubectl label nodes --all node.kubernetes.io/exclude-from-external-load-balancers- ``` ### Adding more control plane nodes See [Creating Highly Available Clusters with kubeadm](/docs/setup/production-environment/tools/kubeadm/high-availability/) for steps on creating a high availability kubeadm cluster by adding more control plane nodes. ### Adding worker nodes {#join-nodes} The worker nodes are where your workloads run. The following pages show how to add Linux and Windows worker nodes to the cluster by using the `kubeadm join` command: * [Adding Linux worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/) * [Adding Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/) ### (Optional) Controlling your cluster from machines other than the control-plane node"} {"_id":"doc-en-website-90671e9b5ee6082be990c8bbd998ed0e0438a0d9496eedd4eda58aeb1c8fae43","title":"","text":"* A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager. * 2 GB or more of RAM per machine (any less will leave little room for your apps). * 2 CPUs or more for control plane machines. * Full network connectivity between all machines in the cluster (public or private network is fine). * Unique hostname, MAC address, and product_uuid for every node. See [here](#verify-mac-address) for more details. * Certain ports are open on your machines. See [here](#check-required-ports) for more details."} {"_id":"doc-en-website-74ae4b8a1eccc9b0a667d015711f435b35d38d1961c8195c024fe4222d78f838","title":"","text":" --- title: Adding Linux worker nodes content_type: task weight: 50 --- This page explains how to add Linux worker nodes to a kubeadm cluster. ## {{% heading \"prerequisites\" %}} * Each joining worker node has installed the required components from [Installing kubeadm](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/), such as kubeadm, the kubelet and a {{< glossary_tooltip term_id=\"container-runtime\" text=\"container runtime\" >}}. * A running kubeadm cluster created by `kubeadm init` and following the steps in the document [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/). 
* You need superuser access to the node. ## Adding Linux worker nodes To add new Linux worker nodes to your cluster do the following for each machine: 1. Connect to the machine by using SSH or another method. 1. Run the command that was output by `kubeadm init`. For example: ```bash sudo kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> ``` ### Additional information for kubeadm join {{< note >}} To specify an IPv6 tuple for `<control-plane-host>:<control-plane-port>`, the IPv6 address must be enclosed in square brackets, for example: `[2001:db8::101]:2073`. {{< /note >}} If you do not have the token, you can get it by running the following command on the control plane node: ```bash sudo kubeadm token list ``` The output is similar to this: ```console TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS 8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system: signing token generated by bootstrappers: 'kubeadm init'. kubeadm: default-node-token ``` By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control plane node: ```bash sudo kubeadm token create ``` The output is similar to this: ```console 5didvk.d09sbcov8ph2amjw ``` If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following commands on the control plane node: ```bash sudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' ``` The output is similar to: ```console 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78 ``` The output of the `kubeadm join` command should look something like: ``` [preflight] Running pre-flight checks ... (log output of join workflow) ... Node join complete: * Certificate signing request sent to control-plane and response received. * Kubelet informed of new secure connection details. 
Run 'kubectl get nodes' on control-plane to see this machine join. ``` A few seconds later, you should notice this node in the output from `kubectl get nodes` (for example, run `kubectl` on a control plane node). {{< note >}} As the cluster nodes are usually initialized sequentially, the CoreDNS Pods are likely to all run on the first control plane node. To provide higher availability, please rebalance the CoreDNS Pods with `kubectl -n kube-system rollout restart deployment coredns` after at least one new node is joined. {{< /note >}} ## {{% heading \"whatsnext\" %}} * See how to [add Windows worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/). "} {"_id":"doc-en-website-690fb8285765ea3b031a4767f55a86bf991c0354744369b48eb7b9309cc8dc27","title":"","text":" --- title: Adding Windows worker nodes content_type: task weight: 50 --- This page explains how to add Windows worker nodes to a kubeadm cluster. ## {{% heading \"prerequisites\" %}} * A running [Windows Server 2022](https://www.microsoft.com/cloud-platform/windows-server-pricing) (or higher) instance with administrative access. * A running kubeadm cluster created by `kubeadm init` and following the steps in the document [Creating a cluster with kubeadm](/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/). ## Adding Windows worker nodes {{< note >}} To facilitate the addition of Windows worker nodes to a cluster, PowerShell scripts from the repository https://sigs.k8s.io/sig-windows-tools are used. {{< /note >}} Do the following for each machine: 1. Open a PowerShell session on the machine. 1. Make sure you are Administrator or a privileged user. Then proceed with the steps outlined below. 
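The `--discovery-token-ca-cert-hash` value accepted by `kubeadm join` is a SHA-256 digest of the cluster CA's DER-encoded public key. The pipeline shown in the Linux instructions above can be tried offline; a minimal sketch, assuming `openssl` is installed and using a throwaway certificate in `/tmp` as a stand-in for the real `/etc/kubernetes/pki/ca.crt`:

```shell
# Create a throwaway CA certificate (stand-in for /etc/kubernetes/pki/ca.crt).
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null
# Hash the DER-encoded public key with SHA-256, as the documented pipeline does.
openssl x509 -pubkey -in /tmp/demo-ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The printed 64-character hex digest has the same shape as the value appended after `sha256:` on a real cluster, where it is computed from the actual cluster CA.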
### Install containerd {{% thirdparty-content %}} To install containerd, first run the following command: ```PowerShell curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/Install-Containerd.ps1 ``` Then run the following command, but first replace `CONTAINERD_VERSION` with a recent release from the [containerd repository](https://github.com/containerd/containerd/releases). The version must not have a `v` prefix. For example, use `1.7.22` instead of `v1.7.22`: ```PowerShell .\\Install-Containerd.ps1 -ContainerDVersion CONTAINERD_VERSION ``` * Adjust any other parameters for `Install-Containerd.ps1` such as `netAdapterName` as you need them. * Set `skipHypervisorSupportCheck` if your machine does not support Hyper-V and cannot host Hyper-V isolated containers. * If you change the `Install-Containerd.ps1` optional parameters `CNIBinPath` and/or `CNIConfigPath` you will need to configure the installed Windows CNI plugin with matching values. ### Install kubeadm and kubelet Run the following commands to install kubeadm and the kubelet: ```PowerShell curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/PrepareNode.ps1 .\\PrepareNode.ps1 -KubernetesVersion v{{< skew currentVersion >}} ``` * Adjust the parameter `KubernetesVersion` of `PrepareNode.ps1` if needed. ### Run `kubeadm join` Run the command that was output by `kubeadm init`. For example: ```bash kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> ``` #### Additional information about kubeadm join {{< note >}} To specify an IPv6 tuple for `<control-plane-host>:<control-plane-port>`, the IPv6 address must be enclosed in square brackets, for example: `[2001:db8::101]:2073`. 
{{< /note >}} If you do not have the token, you can get it by running the following command on the control plane node: ```bash kubeadm token list ``` The output is similar to this: ```console TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS 8ewj1p.9r9hcjoqgajrj4gi 23h 2018-06-12T02:51:28Z authentication, The default bootstrap system: signing token generated by bootstrappers: 'kubeadm init'. kubeadm: default-node-token ``` By default, node join tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control plane node: ```bash kubeadm token create ``` The output is similar to this: ```console 5didvk.d09sbcov8ph2amjw ``` If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following commands on the control plane node: ```bash sudo cat /etc/kubernetes/pki/ca.crt | openssl x509 -pubkey | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' ``` The output is similar to: ```console 8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78 ``` The output of the `kubeadm join` command should look something like: ``` [preflight] Running pre-flight checks ... (log output of join workflow) ... Node join complete: * Certificate signing request sent to control-plane and response received. * Kubelet informed of new secure connection details. Run 'kubectl get nodes' on control-plane to see this machine join. ``` A few seconds later, you should notice this node in the output from `kubectl get nodes` (for example, run `kubectl` on a control plane node). ### Network configuration CNI setup on clusters mixed with Linux and Windows nodes requires more steps than just running `kubectl apply` on a manifest file. Additionally, the CNI plugin running on control plane nodes must be prepared to support the CNI plugin running on Windows worker nodes. 
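The bootstrap tokens shown above (for example `8ewj1p.9r9hcjoqgajrj4gi`) always have the form of six lowercase alphanumeric characters, a dot, and sixteen more. As a local illustration of the format only (a usable token must be issued by `kubeadm token create` on the control plane):

```shell
# Generate a token-shaped string matching [a-z0-9]{6}.[a-z0-9]{16}.
# Format illustration only; this is not a registered bootstrap token.
token_id=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
token_secret=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo ${token_id}.${token_secret}
```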
{{% thirdparty-content %}} Only a few CNI plugins currently support Windows. Below you can find individual setup instructions for them: * [Flannel](https://sigs.k8s.io/sig-windows-tools/guides/flannel.md) * [Calico](https://docs.tigera.io/calico/latest/getting-started/kubernetes/windows-calico/) ### Install kubectl for Windows (optional) {#install-kubectl} See [Install and Set Up kubectl on Windows](/docs/tasks/tools/install-kubectl-windows/). ## {{% heading \"whatsnext\" %}} * See how to [add Linux worker nodes](/docs/tasks/administer-cluster/kubeadm/adding-linux-nodes/). "} {"_id":"doc-en-website-95bf5b6581ccd8f08b4406d99eed5a42f3125b186b23dd922b250fc3114bea5b","title":"","text":"## Enabling sidecar containers Enabled by default with Kubernetes 1.29, a [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) named `SidecarContainers` allows you to specify a `restartPolicy` for containers listed in a Pod's `initContainers` field. These restartable _sidecar_ containers are independent with"} {"_id":"doc-en-website-cac84d356604925c87564e7e3738fc0c3711382c5500db54eea9e1aed222ac9e","title":"","text":"Here's an example of a Job with two containers, one of which is a sidecar: {{% code_sample language=\"yaml\" file=\"application/job/job-sidecar.yaml\" %}} ## Differences from regular containers"} {"_id":"doc-en-website-43f5e1657155ab3f2498daf97599635965fb04dab30c939d23efa1d2f94043ca","title":"","text":"--- Allow setting the `restartPolicy` of an init container to `Always` so that the container becomes a sidecar container (restartable init containers). 
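A minimal sketch of what a restartable init container looks like in a Pod manifest (the Pod name, container names, and images here are illustrative, not from the official examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo            # illustrative name
spec:
  initContainers:
  - name: log-tailer            # hypothetical sidecar
    image: busybox:1.36
    command: ['sh', '-c', 'tail -F /var/log/app/app.log']
    restartPolicy: Always       # this makes the init container a sidecar
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'while true; do date >> /var/log/app/app.log; sleep 1; done']
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}
```

Because `restartPolicy: Always` is set on the init container, it keeps running (and is restarted on exit) alongside the main containers instead of having to complete before they start.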
See [Sidecar containers and restartPolicy](/docs/concepts/workloads/pods/sidecar-containers/) for more details. "} {"_id":"doc-en-website-93b79602ec24df3aaa136e5f3339c4593ebe2ec62b5f61c2358bc79556e72d2e","title":"","text":"Selector: tier=frontend Labels: app=guestbook tier=frontend Annotations: Replicas: 3 current / 3 desired Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: tier=frontend Containers: php-redis: Image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5 Port: Host Port: Environment: "} {"_id":"doc-en-website-76344a47c32457b2d3d90fb247a6d0ece18107d81de7ca0d9eb5c31f5ded4883","title":"","text":"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-gbgfx Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-rwz57 Normal SuccessfulCreate 13s replicaset-controller Created pod: frontend-wkl7w ``` And lastly you can check for the Pods brought up:"} {"_id":"doc-en-website-0d2e352a4bd8c5f599576e8080a1b09d5f3f7f1ce18bb97419b3445d3f0cf38e","title":"","text":"``` NAME READY STATUS RESTARTS AGE frontend-gbgfx 1/1 Running 0 10m frontend-rwz57 1/1 Running 0 10m frontend-wkl7w 1/1 Running 0 
10m ``` You can also verify that the owner reference of these pods is set to the frontend ReplicaSet. To do this, get the yaml of one of the Pods running: ```shell kubectl get pods frontend-gbgfx -o yaml ``` The output will look similar to this, with the frontend ReplicaSet's info set in the metadata's ownerReferences field:"} {"_id":"doc-en-website-1d517b3d9bfeb6f85251d33cda9998a6c57438b4ca8cb467d8a99692fa3d7bf5","title":"","text":"apiVersion: v1 kind: Pod metadata: creationTimestamp: \"2024-02-28T22:30:44Z\" generateName: frontend- labels: tier: frontend name: frontend-gbgfx namespace: default ownerReferences: - apiVersion: apps/v1"} {"_id":"doc-en-website-1b5c540be47900f6c5d76ef87015c975d1d24fc692090c096af14de0b89f76dd","title":"","text":"controller: true kind: ReplicaSet name: frontend uid: e129deca-f864-481b-bb16-b27abfd92292 ... ```"} {"_id":"doc-en-website-53e8018d2b6e3a7cc46452da07efa3c06efb9bdcb7b13afd67b7d53a6ea59d77","title":"","text":"spec: containers: - name: php-redis image: us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5 "} {"_id":"doc-en-website-a56f25759da204bb730143f2a73853fa5b2f0e57c0a480c09213df28fb4d355f","title":"","text":"## {{% heading \"prerequisites\" %}} {{< include \"task-tutorial-prereqs-node-upgrade.md\" >}} {{< version-check >}} * Familiarize yourself with [the process for upgrading the rest of your kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). 
You will want to upgrade the control plane nodes before upgrading your Linux worker nodes."} {"_id":"doc-en-website-2d4ba3fe1272f449fd8febc84bfa24ecde412fcc5370c7261587b115dcfcc0a6","title":"","text":"## {{% heading \"prerequisites\" %}} {{< include \"task-tutorial-prereqs-node-upgrade.md\" >}} {{< version-check >}} * Familiarize yourself with [the process for upgrading the rest of your kubeadm cluster](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade). You will want to upgrade the control plane nodes before upgrading your Windows nodes."} {"_id":"doc-en-website-1f75261602bcd19ea0b8a82fd25920bd8f5bd719824882ab1e6e494a5db9a718","title":"","text":" You need to have shell access to all the nodes, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. "} {"_id":"doc-en-website-97b8d4a85829b49821eb3c3142cc331b7402421f55dcab9d9009548a48038bbd","title":"","text":"string

    controlPlaneEndpoint sets a stable IP address or DNS name for the control plane; it can be a valid IP address or an RFC-1123 DNS subdomain, both with an optional TCP port. In case the controlPlaneEndpoint is not specified, the advertiseAddress + bindPort are used; in case the controlPlaneEndpoint is specified but without a TCP port,

    localAPIEndpoint represents the endpoint of the API server instance that's deployed on this control plane node. In HA setups, this differs from ClusterConfiguration.controlPlaneEndpoint in the sense that controlPlaneEndpoint is the global endpoint for the cluster, which then loadbalances the requests to each individual API server. This configuration object lets you customize what IP/DNS name and port the local API server advertises it's accessible on. By default, kubeadm tries to auto-detect the IP of the default

    {{< cncf-landscape helpers=false category=\"special--kubernetes-training-partner\" >}}
    "} {"_id":"doc-en-website-b5ec237e31ffa38b52fcf5a52ea047eddaaf6c97a825d421cb57cf3a63b0bfef","title":"","text":"In Kubernetes v{{< skew currentVersion >}}, the value of `.spec.os.name` does not affect how the {{< glossary_tooltip text=\"kube-scheduler\" term_id=\"kube-scheduler\" >}} picks a node for the Pod to run on. In any cluster where there is more than one operating system for running nodes, you should set the [kubernetes.io/os](/docs/reference/labels-annotations-taints/#kubernetes-io-os) label correctly on each node, and define pods with a `nodeSelector` based on the operating system"} {"_id":"doc-en-website-3b954c7463db847a316403c240e7e4788ee76af4ce646d2d07d88b87544ee491","title":"","text":"* [Borg](https://research.google.com/pubs/pub43438.html) * [Marathon](https://mesosphere.github.io/marathon/docs/rest-api.html) * [Omega](https://research.google/pubs/pub41684/) * [Tupperware](https://engineering.fb.com/data-center-engineering/tupperware/). "} {"_id":"doc-en-website-1f81aee199112957e9408b88c15ff3346d01e4ba95be7ac123174943c44ab23a","title":"","text":" Buy your ticket now! 21 - 23 August | Hong Kong - name: KubeCon 2024 NA startTime: 2024-09-30T00:00:00 #Added in https://github.com/kubernetes/website/pull/48086 endTime: 2024-11-15T18:00:00 style: >- background: linear-gradient(90deg, rgba(64,66,169,1) 60%, rgba(0,81,181,1) 100%); color: #ffffff; title: | KubeCon + CloudNativeCon 2024 message: | Join us for three days of incredible opportunities to collaborate, learn and share with the cloud native community.
    Buy your ticket now! 12 - 15 November | Salt Lake City
    "} {"_id":"doc-en-website-c473fa730e2e4421e79d99b0db8815c98fd22cf0accbd11c15181550b40a6026","title":"","text":" "}