{"_id":"doc-en-kubernetes-5dd7f9d7b2c41055c3179e98f959d918190addabd96cb3e159fee76199a87051","title":"","text":" # Identifiers and Names in Kubernetes A summary of the goals and recommendations for identifiers and names in Kubernetes. Described in [GitHub issue #199](https://github.com/GoogleCloudPlatform/kubernetes/issues/199). ## Definitions identifier : an opaque machine-generated value guaranteed to be unique in a certain space name : a human-readable string intended to help an end user distinguish between similar but distinct entities [rfc1035](http://www.ietf.org/rfc/rfc1035.txt)/[rfc1123](http://www.ietf.org/rfc/rfc1123.txt) label (DNS_LABEL) : An alphanumeric (a-z, A-Z, and 0-9) string less than 64 characters, with the '-' character allowed anywhere except the first or last character, suitable for use as a hostname or segment in a domain name. [rfc1035](http://www.ietf.org/rfc/rfc1035.txt)/[rfc1123](http://www.ietf.org/rfc/rfc1123.txt) subdomain (DNS_SUBDOMAIN) : One or more rfc1035/rfc1123 labels separated by '.' with a maximum length of 255 characters namespace string (NAMESPACE) : An rfc1035/rfc1123 subdomain no longer than 191 characters (255-63-1) source namespace string : The namespace string of a source of pod definitions on a host [rfc4122](http://www.ietf.org/rfc/rfc4122.txt) universally unique identifier (UUID) : A 128-bit generated value that is extremely unlikely to collide across time and space and requires no central coordination pod unique name : the combination of a pod's source namespace string and name string on a host pod unique identifier : the identifier associated with a single execution of a pod on a host, which changes on each restart. 
Must be a UUID ## Objectives for names and identifiers 1) Uniquely identify an instance of a pod on the apiserver and on the kubelet 2) Uniquely identify an instance of a container within a pod on the apiserver and on the kubelet 3) Uniquely identify a single execution of a container in time for logging or reporting 4) The structure of a pod specification should stay largely the same throughout the entire system 5) Provide human-friendly, memorable, semantically meaningful, short-ish references in container and pod operations 6) Provide predictable container and pod references in operations and/or configuration files 7) Allow idempotent creation of API resources (#148) 8) Allow DNS names to be automatically generated for individual containers or pods (#146) ## Implications 1) Each container name within a container manifest must be unique within that manifest 2) Each pod instance on the apiserver must have a unique identifier across space and time (UUID) 1) The apiserver may set this identifier if not specified by a client 2) This identifier will persist even if moved across hosts 3) Each pod instance on the apiserver must have a name string which is human-friendly, dns-friendly (DNS_LABEL), and unique in the apiserver space 1) The apiserver may set this name string if not specified by a client 4) Each apiserver must have a configured namespace string (NAMESPACE) that is unique across all apiservers that share its configured minions 5) Each source of pod configuration to a kubelet must have a source namespace string (NAMESPACE) that is unique across all sources available to that kubelet 6) All pod instances on a host must have a name string which is human-friendly, dns-friendly, and unique per namespace string (DNS_LABEL) 7) The combination of the name string and source namespace string on a kubelet must be unique and is referred to as the pod unique name 8) When starting an instance of a pod on a kubelet the first time, a new pod unique identifier (UUID) should be 
assigned to that pod instance 1) If that pod is restarted, it must retain the pod unique identifier it previously had 2) If the pod is stopped and a new instance with the same pod unique name is started, it must be assigned a new pod unique identifier 9) The kubelet should use the pod unique name and pod unique identifier to produce a Docker container name (--name) "} {"_id":"doc-en-kubernetes-d0b2a1694e137a25f0c9d773183746b4cbbfdc14bd5ba535205288f1c6c3991e","title":"","text":" # Maintainers Eric Paris "} {"_id":"doc-en-kubernetes-a70238982f99fadc61e7ce926d51d6c8b6faa8bfc5da0332d4165249799ad153","title":"","text":" #!bash # # bash completion file for core kubecfg commands # # This script provides completion of non replication controller options # # To enable the completions either: # - place this file in /etc/bash_completion.d # or # - copy this file and add the line below to your .bashrc after # bash completion features are loaded # . kubecfg # # Note: # Currently, the completions will not work if the apiserver daemon is not # running on localhost on the standard port 8080 __contains_word () { local w word=$1; shift for w in \"$@\"; do [[ $w = \"$word\" ]] && return done return 1 } # This should be provided by the bash-completions, but give a really simple # stoopid version just in case. It works most of the time. if ! declare -F _get_comp_words_by_ref >/dev/null 2>&1; then _get_comp_words_by_ref () { while [ $# -gt 0 ]; do case \"$1\" in cur) cur=${COMP_WORDS[COMP_CWORD]} ;; prev) prev=${COMP_WORDS[COMP_CWORD-1]} ;; words) words=(\"${COMP_WORDS[@]}\") ;; cword) cword=$COMP_CWORD ;; -n) shift # we don't handle excludes ;; esac shift done } fi __has_service() { local i for ((i=0; i < cword; i++)); do local word=${words[i]} # strip everything after a / so things like pods/[id] match word=${word%%/*} if __contains_word \"${word}\" \"${services[@]}\" && ! 
__contains_word \"${words[i-1]}\" \"${opts[@]}\"; then return 0 fi done return 1 } # call kubecfg list $1, # exclude blank lines # skip the header stuff kubecfg prints on the first 2 lines # append $1/ to the first column and use that in compgen __kubecfg_parse_list() { local kubecfg_output if kubecfg_output=$(kubecfg list \"$1\" 2>/dev/null); then out=($(echo \"${kubecfg_output}\" | awk -v prefix=\"$1\" '/^$/ {next} NR > 2 {print prefix\"/\"$1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } _kubecfg_specific_service_match() { case \"$cur\" in pods/*) __kubecfg_parse_list pods ;; minions/*) __kubecfg_parse_list minions ;; replicationControllers/*) __kubecfg_parse_list replicationControllers ;; services/*) __kubecfg_parse_list services ;; *) if __has_service; then return 0 fi compopt -o nospace COMPREPLY=( $( compgen -S / -W \"${services[*]}\" -- \"$cur\" ) ) ;; esac } _kubecfg_service_match() { if __has_service; then return 0 fi COMPREPLY=( $( compgen -W \"${services[*]}\" -- \"$cur\" ) ) } _kubecfg() { local opts=( -h -c ) local create_services=(pods replicationControllers services) local update_services=(replicationControllers) local all_services=(pods replicationControllers services minions) local services=(\"${all_services[@]}\") local json_commands=(create update) local all_commands=(create update get list delete stop rm rollingupdate resize) local commands=(\"${all_commands[@]}\") COMPREPLY=() local command local cur prev words cword _get_comp_words_by_ref -n : cur prev words cword if __contains_word \"$prev\" \"${opts[@]}\"; then case $prev in -c) _filedir '@(json|yml|yaml)' return 0 ;; -h) return 0 ;; esac fi if [[ \"$cur\" = -* ]]; then COMPREPLY=( $(compgen -W \"${opts[*]}\" -- \"$cur\") ) return 0 fi # if you passed -c, you are limited to create or update if __contains_word \"-c\" \"${words[@]}\"; then services=(\"${create_services[@]}\" \"${update_services[@]}\") commands=(\"${json_commands[@]}\") fi # figure out which command they are 
running, remembering that arguments to # options don't count as the command! So a hostname named 'create' won't # trip things up local i for ((i=0; i < cword; i++)); do if __contains_word \"${words[i]}\" \"${commands[@]}\" && ! __contains_word \"${words[i-1]}\" \"${opts[@]}\"; then command=${words[i]} break fi done # tell the list of possible commands if [[ -z ${command} ]]; then COMPREPLY=( $( compgen -W \"${commands[*]}\" -- \"$cur\" ) ) return 0 fi # remove services which you can't update given your command if [[ ${command} == \"create\" ]]; then services=(\"${create_services[@]}\") elif [[ ${command} == \"update\" ]]; then services=(\"${update_services[@]}\") fi case $command in create | list) _kubecfg_service_match ;; update | get | delete) _kubecfg_specific_service_match ;; *) ;; esac return 0 } complete -F _kubecfg kubecfg # ex: ts=4 sw=4 et filetype=sh "} {"_id":"doc-en-kubernetes-745d381fa41da15f155ef8da60ae55fcca0f06dadec83246b90b044f9bd74fa6","title":"","text":"} if status == health.Healthy { result = append(result, minion) } else { glog.Errorf(\"%s failed a health check, ignoring.\", minion) } } return result, nil"} {"_id":"doc-en-kubernetes-fc8264fb7df08567e2d75f8f24c25060160872ae681efacdb33ce35d071441bd","title":"","text":"func TestErrorsToAPIStatus(t *testing.T) { cases := map[error]api.Status{ NewAlreadyExistsErr(\"foo\", \"bar\"): api.Status{ NewAlreadyExistsErr(\"foo\", \"bar\"): { Status: api.StatusFailure, 
Code: http.StatusConflict, Reason: \"already_exists\","} {"_id":"doc-en-kubernetes-e0b3f5c2bdc6e8c560cf17ee8d76c4cb251d10eff3595b5e6d934d97d3fecc66","title":"","text":"ID: \"bar\", }, }, NewConflictErr(\"foo\", \"bar\", errors.New(\"failure\")): api.Status{ NewConflictErr(\"foo\", \"bar\", errors.New(\"failure\")): { Status: api.StatusFailure, Code: http.StatusConflict, Reason: \"conflict\","} {"_id":"doc-en-kubernetes-98ebd6b80d6b9f12ac1b115a06371f14a92cdb96c50946e24109e61012d9efca","title":"","text":"} func TestSyncCreateTimeout(t *testing.T) { testOver := make(chan struct{}) defer close(testOver) storage := SimpleRESTStorage{ injectedFunction: func(obj interface{}) (interface{}, error) { time.Sleep(5 * time.Millisecond) // Eliminate flakes by ensuring the create operation takes longer than this test. <-testOver return obj, nil }, }"} {"_id":"doc-en-kubernetes-69e82608088ae166fb63f35140cfbc8c4e5e7037b4dcc6b261029d67a4c00790","title":"","text":"} func TestOpGet(t *testing.T) { simpleStorage := &SimpleRESTStorage{} testOver := make(chan struct{}) defer close(testOver) simpleStorage := &SimpleRESTStorage{ injectedFunction: func(obj interface{}) (interface{}, error) { // Eliminate flakes by ensuring the create operation takes longer than this test. 
<-testOver return obj, nil }, } handler := New(map[string]RESTStorage{ \"foo\": simpleStorage, }, codec, \"/prefix/version\")"} {"_id":"doc-en-kubernetes-bee0dfa0415af80051e30433b1014e5509d82a22f43a90957446c5df8414bd2b","title":"","text":"for _, service := range services { activeServices.Insert(service.ID) info, exists := proxier.getServiceInfo(service.ID) if exists && info.port == service.Port { if exists && info.active && info.port == service.Port { continue } if exists { if exists && info.port != service.Port { proxier.StopProxy(service.ID) } glog.Infof(\"Adding a new service %s on port %d\", service.ID, service.Port)"} {"_id":"doc-en-kubernetes-03513259d836d47077e1b4fe013229e0eeeb501e38350b0eccfa472ed8d62a98","title":"","text":"} } func TestProxyUpdateDeleteUpdate(t *testing.T) { lb := NewLoadBalancerRR() lb.OnUpdate([]api.Endpoints{{JSONBase: api.JSONBase{ID: \"echo\"}, Endpoints: []string{net.JoinHostPort(\"127.0.0.1\", port)}}}) p := NewProxier(lb) proxyPort, err := p.addServiceOnUnusedPort(\"echo\") if err != nil { t.Fatalf(\"error adding new service: %#v\", err) } conn, err := net.Dial(\"tcp\", net.JoinHostPort(\"127.0.0.1\", proxyPort)) if err != nil { t.Fatalf(\"error connecting to proxy: %v\", err) } conn.Close() p.OnUpdate([]api.Service{}) if err := waitForClosedPort(p, proxyPort); err != nil { t.Fatalf(err.Error()) } proxyPortNum, _ := strconv.Atoi(proxyPort) p.OnUpdate([]api.Service{ {JSONBase: api.JSONBase{ID: \"echo\"}, Port: proxyPortNum}, }) testEchoConnection(t, \"127.0.0.1\", proxyPort) } func TestProxyUpdatePort(t *testing.T) { lb := NewLoadBalancerRR() lb.OnUpdate([]api.Endpoints{{JSONBase: api.JSONBase{ID: \"echo\"}, Endpoints: []string{net.JoinHostPort(\"127.0.0.1\", port)}}})"} {"_id":"doc-en-kubernetes-5f6b46d36e3704a92aa12d61d2641b75938a38d1e9db1833bbd7f60535ed958d","title":"","text":"} testEchoConnection(t, \"127.0.0.1\", newPort) } func TestProxyUpdatePortLetsGoOfOldPort(t *testing.T) { lb := NewLoadBalancerRR() 
lb.OnUpdate([]api.Endpoints{{JSONBase: api.JSONBase{ID: \"echo\"}, Endpoints: []string{net.JoinHostPort(\"127.0.0.1\", port)}}}) p := NewProxier(lb) proxyPort, err := p.addServiceOnUnusedPort(\"echo\") if err != nil { t.Fatalf(\"error adding new service: %#v\", err) } // add a new dummy listener in order to get a port that is free l, _ := net.Listen(\"tcp\", \":0\") _, newPort, _ := net.SplitHostPort(l.Addr().String()) portNum, _ := strconv.Atoi(newPort) l.Close() // Wait for the socket to actually get free. if err := waitForClosedPort(p, newPort); err != nil { t.Fatalf(err.Error()) } if proxyPort == newPort { t.Errorf(\"expected difference, got %s %s\", newPort, proxyPort) } p.OnUpdate([]api.Service{ {JSONBase: api.JSONBase{ID: \"echo\"}, Port: portNum}, }) if err := waitForClosedPort(p, proxyPort); err != nil { t.Fatalf(err.Error()) } testEchoConnection(t, \"127.0.0.1\", newPort) proxyPortNum, _ := strconv.Atoi(proxyPort) p.OnUpdate([]api.Service{ {JSONBase: api.JSONBase{ID: \"echo\"}, Port: proxyPortNum}, }) if err := waitForClosedPort(p, newPort); err != nil { t.Fatalf(err.Error()) } testEchoConnection(t, \"127.0.0.1\", proxyPort) } "} {"_id":"doc-en-kubernetes-a5df071292da4751cd50d7569ef1932600ef8908f38a48124af25581a2f4dd3c","title":"","text":"trap \"teardown\" EXIT POD_ID_LIST=$($CLOUDCFG -json -l name=myNginx list pods | jq \".items[].id\") POD_ID_LIST=$($CLOUDCFG '-template={{range.Items}}{{.ID}} {{end}}' -l name=myNginx list pods) # Container turn up on a clean cluster can take a while for the docker image pull. 
ALL_RUNNING=0 while [ $ALL_RUNNING -ne 1 ]; do"} {"_id":"doc-en-kubernetes-45f4ec9c4c966259384749e6d930a90f2d36f4eda8e3a67079db6cfe942c3555","title":"","text":"sleep 5 ALL_RUNNING=1 for id in $POD_ID_LIST; do CURRENT_STATUS=$(remove-quotes $($CLOUDCFG -json get \"pods/$(remove-quotes ${id})\" | jq '.currentState.info[\"mynginx\"].State.Running and .currentState.info[\"net\"].State.Running')) CURRENT_STATUS=$($CLOUDCFG -template '{{and .CurrentState.Info.mynginx.State.Running .CurrentState.Info.net.State.Running}}' get pods/$id) if [ \"$CURRENT_STATUS\" != \"true\" ]; then ALL_RUNNING=0 fi"} {"_id":"doc-en-kubernetes-6e2bda87e8c7f1ac4e6bab4e661bd8aa237e5e1a8fe59dde0b7c6d74659b0c34","title":"","text":"sleep 5 POD_LIST_1=$($CLOUDCFG -json list pods | jq \".items[].id\") POD_LIST_1=$($CLOUDCFG '-template={{range.Items}}{{.ID}} {{end}}' list pods) echo \"Pods running: ${POD_LIST_1}\" $CLOUDCFG stop redisSlaveController"} {"_id":"doc-en-kubernetes-648f4f93b9e8c651b3e943319bf38b385595c6af065de8d48cb05602612e8bb9","title":"","text":"$CLOUDCFG delete services/redismaster $CLOUDCFG delete pods/redis-master-2 POD_LIST_2=$($CLOUDCFG -json list pods | jq \".items[].id\") POD_LIST_2=$($CLOUDCFG '-template={{range.Items}}{{.ID}} {{end}}' list pods) echo \"Pods running after shutdown: ${POD_LIST_2}\" exit 0"} {"_id":"doc-en-kubernetes-5e8c4d1ed3c082bc29e8fd243c98005ab4f37bcc7223ebf6c8d65da27ab06e3c","title":"","text":" #!/bin/bash # Copyright 2014 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. # Launches an nginx container and verifies it can be reached. Assumes that # we're being called by hack/e2e-test.sh (we use some env vars it sets up). # Exit on error set -e source \"${KUBE_REPO_ROOT}/cluster/kube-env.sh\" source \"${KUBE_REPO_ROOT}/cluster/$KUBERNETES_PROVIDER/util.sh\" function validate() { POD_ID_LIST=$($CLOUDCFG '-template={{range.Items}}{{.ID}} {{end}}' -l name=$controller list pods) # Container turn up on a clean cluster can take a while for the docker image pull. ALL_RUNNING=0 while [ $ALL_RUNNING -ne 1 ]; do echo \"Waiting for all containers in pod to come up.\" sleep 5 ALL_RUNNING=1 for id in $POD_ID_LIST; do CURRENT_STATUS=$($CLOUDCFG -template '{{and .CurrentState.Info.datacontroller.State.Running .CurrentState.Info.net.State.Running}}' get pods/$id) if [ \"$CURRENT_STATUS\" != \"true\" ]; then ALL_RUNNING=0 fi done done ids=($POD_ID_LIST) if [ ${#ids[@]} -ne $1 ]; then echo \"Unexpected number of pods: ${#ids[@]}\" exit 1 fi } controller=dataController # Launch a container $CLOUDCFG -p 8080:80 run brendanburns/data 2 $controller function teardown() { echo \"Cleaning up test artifacts\" $CLOUDCFG stop $controller $CLOUDCFG rm $controller } trap \"teardown\" EXIT validate 2 $CLOUDCFG resize $controller 1 validate 1 $CLOUDCFG resize $controller 2 validate 2 # TODO: test rolling update here, but to do so, we need to make the update blocking # $CLOUDCFG -u=20s rollingupdate $controller # # Wait for the replica controller to recreate # sleep 10 # # validate 2 exit 0 "} {"_id":"doc-en-kubernetes-e93d0912014231880624770df242c0fba29a38c39974928ffe51a12efc7d001e","title":"","text":"LEAVE_UP=${2:-0} TEAR_DOWN=${3:-0} HAVE_JQ=$(which jq) if [[ -z ${HAVE_JQ} ]]; then echo \"Please install jq, e.g.: 'sudo apt-get install jq' or, \" echo \"'sudo yum install jq' or, \" echo \"if you're on a mac with homebrew, 'brew install jq'.\" exit 1 fi # Exit on 
error set -e"} {"_id":"doc-en-kubernetes-e4d50f5d87bfdfd1daf6fe4b2eb739aecb09d1fff0a82b55268e35ac6511abb4","title":"","text":"for (( i=0; i <${NUM_MINIONS}; i++)) do KUBE_MINION_IP_ADDRESSES[$i]=\"${MINION_IP_BASE}$[$i+2]\" MINION_NAMES[$i]=\"${MINION_IP_BASE}$[$i+2]\" done VAGRANT_MINION_NAMES[$i]=\"minion-$[$i+1]\" done "} {"_id":"doc-en-kubernetes-3790b2eb6255d6d16694cf30d96e497f273431a7167b431c4b10880786cd294e","title":"","text":"source $(dirname ${BASH_SOURCE})/${KUBE_CONFIG_FILE-\"config-default.sh\"} function detect-master () { echo \"KUBE_MASTER_IP: $KUBE_MASTER_IP\" echo \"KUBE_MASTER: $KUBE_MASTER\" echo \"KUBE_MASTER_IP: $KUBE_MASTER_IP\" echo \"KUBE_MASTER: $KUBE_MASTER\" } # Get minion IP addresses and store in KUBE_MINION_IP_ADDRESSES[] function detect-minions { echo \"Minions already detected\" echo \"Minions already detected\" } # Verify prereqs on host machine"} {"_id":"doc-en-kubernetes-cba7902427a9b9b98d6a55fc48c3ec056d7cd6ea5a0735514bdbf4e9d40b745e","title":"","text":"} # Instantiate a kubernetes cluster function kube-up { vagrant up function kube-up { get-password vagrant up echo \"Each machine instance has been created.\" echo \" Now waiting for the Salt provisioning process to complete on each machine.\" echo \" This can take some time based on your network, disk, and cpu speed.\" echo \" It is possible for an error to occur during Salt provision of cluster and this could loop forever.\" # verify master has all required daemons echo \"Validating master\" MACHINE=\"master\" REQUIRED_DAEMON=(\"salt-master\" \"salt-minion\" \"apiserver\" \"nginx\" \"controller-manager\" \"scheduler\") VALIDATED=\"1\" until [ \"$VALIDATED\" -eq \"0\" ]; do VALIDATED=\"0\" for daemon in ${REQUIRED_DAEMON[@]}; do vagrant ssh $MACHINE -c \"which $daemon\" >/dev/null 2>&1 || { printf \".\"; VALIDATED=\"1\"; sleep 2; } done done # verify each minion has all required daemons for (( i=0; i<${#MINION_NAMES[@]}; i++)); do echo \"Validating 
${VAGRANT_MINION_NAMES[$i]}\" MACHINE=${VAGRANT_MINION_NAMES[$i]} REQUIRED_DAEMON=(\"salt-minion\" \"kubelet\" \"docker\") VALIDATED=\"1\" until [ \"$VALIDATED\" -eq \"0\" ]; do VALIDATED=\"0\" for daemon in ${REQUIRED_DAEMON[@]}; do vagrant ssh $MACHINE -c \"which $daemon\" >/dev/null 2>&1 || { printf \".\"; VALIDATED=\"1\"; sleep 2; } done done done echo echo \"Waiting for each minion to be registered with cloud provider\" for (( i=0; i<${#MINION_NAMES[@]}; i++)); do COUNT=\"0\" until [ \"$COUNT\" -eq \"1\" ]; do $(dirname $0)/kubecfg.sh -template '{{range.Items}}{{.ID}}:{{end}}' list minions > /tmp/minions COUNT=$(grep -c ${MINION_NAMES[i]} /tmp/minions) || { printf \".\"; sleep 2; COUNT=\"0\"; } done done echo echo \"Kubernetes cluster created.\" echo echo \"Kubernetes cluster is running. Access the master at:\" echo echo \" https://${user}:${passwd}@${KUBE_MASTER_IP}\" } # Delete a kubernetes cluster function kube-down { vagrant destroy -f vagrant destroy -f } # Update a kubernetes cluster with latest source function kube-push { vagrant provision vagrant provision } # Execute prior to running tests to build a release if required for env function test-build-release { echo \"Vagrant provider can skip release build\" echo \"Vagrant provider can skip release build\" } # Execute prior to running tests to initialize required structure function test-setup { echo \"Vagrant test setup complete\" echo \"Vagrant test setup complete\" } # Execute after running tests to perform any required clean-up function test-teardown { echo \"Vagrant ignores tear-down\" echo \"Vagrant ignores tear-down\" } # Set the {user} and {password} environment values required to interact with provider function get-password { export user=vagrant export passwd=vagrant echo \"Using credentials: $user:$passwd\" export user=vagrant export passwd=vagrant echo \"Using credentials: $user:$passwd\" }"} 
{"_id":"doc-en-kubernetes-a2705aed7e9007718f2ef62b604c4ccd0380b585086a92b695ecbe989d1d0d66","title":"","text":"'roles:kubernetes-pool-vsphere': - match: grain - static-routes 'roles:kubernetes-pool-vagrant': - match: grain - vagrant "} {"_id":"doc-en-kubernetes-04e5bd356c557b98d8e015e8de5daab053b70313bf448a52cf22b41022efeb45","title":"","text":" vagrant: user.present: - optional_groups: - docker - remove_groups: False - require: - pkg: docker-io "} {"_id":"doc-en-kubernetes-137926659c3f142ee294c3f02865497aea6ae6f6fdf025f24431d037f7c82edd","title":"","text":"etcd_servers: $MASTER_IP roles: - kubernetes-pool - kubernetes-pool-vagrant cbr-cidr: $MINION_IP_RANGE minion_ip: $MINION_IP EOF"} {"_id":"doc-en-kubernetes-f3959c6de3539f23b7d34fce4582fef8eb76afd91fb0856f55c645adaf76ec9c","title":"","text":"return map[string][]api.Pod{}, err } for _, scheduledPod := range pods { host := scheduledPod.CurrentState.Host host := scheduledPod.DesiredState.Host machineToPods[host] = append(machineToPods[host], scheduledPod) } return machineToPods, nil"} {"_id":"doc-en-kubernetes-46b84c8226a3127ae7cfce6a2900a9480392f426f142978e647fd0c5a0aa1385","title":"","text":"{CPU: 2000}, }, }, Host: \"machine1\", } cpuAndMemory := api.PodState{ Manifest: api.ContainerManifest{"} {"_id":"doc-en-kubernetes-372c9d87de6ea0cb06368e2b96c94eca4d59b034eddb2c083764aabbf65ed16a","title":"","text":"{CPU: 2000, Memory: 3000}, }, }, Host: \"machine2\", } tests := []struct { pod api.Pod"} {"_id":"doc-en-kubernetes-34a15a11d883ec2f0419d3400e0d9e285eda5150fbd4a825490cf895a25a93c5","title":"","text":"expectedList: []HostPriority{{\"machine1\", 0}, {\"machine2\", 0}}, test: \"no resources requested\", pods: []api.Pod{ {CurrentState: machine1State, Labels: labels2}, {CurrentState: machine1State, Labels: labels1}, {CurrentState: machine2State, Labels: labels1}, {CurrentState: machine2State, Labels: labels1}, {DesiredState: machine1State, Labels: labels2}, {DesiredState: machine1State, Labels: labels1}, 
{DesiredState: machine2State, Labels: labels1}, {DesiredState: machine2State, Labels: labels1}, }, }, {"} {"_id":"doc-en-kubernetes-dcf0382711b8744399fbfdf088830137ab3c091eba6b4a21deceaa2a9aed1bc9","title":"","text":"expectedList: []HostPriority{{\"machine1\", 37 /* int(75% / 2) */}, {\"machine2\", 62 /* int( 75% + 50% / 2) */}}, test: \"no resources requested\", pods: []api.Pod{ {DesiredState: cpuOnly, CurrentState: machine1State}, {DesiredState: cpuAndMemory, CurrentState: machine2State}, {DesiredState: cpuOnly}, {DesiredState: cpuAndMemory}, }, }, {"} {"_id":"doc-en-kubernetes-a4a702dd6323660ceb6ec0175d4b7ac1325e2ce32c22c8373aee95551a31c4ab","title":"","text":"expectedList: []HostPriority{{\"machine1\", 0}, {\"machine2\", 0}}, test: \"zero minion resources\", pods: []api.Pod{ {DesiredState: cpuOnly, CurrentState: machine1State}, {DesiredState: cpuAndMemory, CurrentState: machine2State}, {DesiredState: cpuOnly}, {DesiredState: cpuAndMemory}, }, }, }"} {"_id":"doc-en-kubernetes-7698bb34ffd9ec8a6d11e5fd2cee506f6fa6228ef67d6aaf07fe94d8c787c421","title":"","text":"// Addf logs info immediately. func (passthroughLogger) Addf(format string, data ...interface{}) { glog.Infof(format, data...) 
glog.InfoDepth(1, fmt.Sprintf(format, data...)) } // DefaultStacktracePred is the default implementation of StacktracePred."} {"_id":"doc-en-kubernetes-ae3ad765b06428df0b1a5022cd02406f89d573d141b4922d9688e14d7bd0c4d2","title":"","text":"// Log is intended to be called once at the end of your request handler, via defer func (rl *respLogger) Log() { latency := time.Since(rl.startTime) glog.V(2).Infof(\"%s %s: (%v) %v%v%v\", rl.req.Method, rl.req.RequestURI, latency, rl.status, rl.statusStack, rl.addedInfo) if glog.V(2) { glog.InfoDepth(1, fmt.Sprintf(\"%s %s: (%v) %v%v%v\", rl.req.Method, rl.req.RequestURI, latency, rl.status, rl.statusStack, rl.addedInfo)) } } // Header implements http.ResponseWriter."} {"_id":"doc-en-kubernetes-61d2511b799716e3fd58db162df560893be639b45fe46cdd70cc01a9fe45639f","title":"","text":"echo \" echo 'Waiting for metadata MINION_IP_RANGE...'\" echo \" sleep 3\" echo \"done\" echo \"\" echo \"# Remove docker artifacts on minion nodes\" echo \"iptables -t nat -F\" echo \"ifconfig docker0 down\" echo \"brctl delbr docker0\" echo \"\" echo \"EXTRA_DOCKER_OPTS='${EXTRA_DOCKER_OPTS}'\" echo \"ENABLE_DOCKER_REGISTRY_CACHE='${ENABLE_DOCKER_REGISTRY_CACHE:-false}'\" grep -v \"^#\" \"${KUBE_ROOT}/cluster/gce/templates/common.sh\""} {"_id":"doc-en-kubernetes-b3bc097dba99bc444f17af7a6f314791a179e75e7dda6e34b492903e2787f6bd","title":"","text":"return readDockerConfigFileFromBytes(contents) } // HttpError wraps a non-StatusOK error code as an error. 
type HttpError struct { StatusCode int Url string } // Error implements error func (he *HttpError) Error() string { return fmt.Sprintf(\"http status code: %d while fetching url %s\", he.StatusCode, he.Url) } func ReadUrl(url string, client *http.Client, header *http.Header) (body []byte, err error) { req, err := http.NewRequest(\"GET\", url, nil) if err != nil { glog.Errorf(\"while creating request to read %s: %v\", url, err) return nil, err } if header != nil {"} {"_id":"doc-en-kubernetes-6b5c235cd01619ad8e36ace30dd56047eada04de549b796b673e294b4294c902","title":"","text":"} resp, err := client.Do(req) if err != nil { glog.Errorf(\"while trying to read %s: %v\", url, err) return nil, err } defer resp.Body.Close() if resp.StatusCode != http.StatusOK { err := fmt.Errorf(\"http status code: %d while fetching url %s\", resp.StatusCode, url) glog.Errorf(\"while trying to read %s: %v\", url, err) glog.V(2).Infof(\"body of failing http response: %v\", resp.Body) return nil, err return nil, &HttpError{ StatusCode: resp.StatusCode, Url: url, } } contents, err := ioutil.ReadAll(resp.Body) if err != nil { glog.Errorf(\"while trying to read %s: %v\", url, err) return nil, err }"} {"_id":"doc-en-kubernetes-1ce57414992a7ad2b8a61e9d6358e24d096c58632cbf3d94233c449cf6b79991","title":"","text":"func (g *dockerConfigKeyProvider) Provide() credentialprovider.DockerConfig { // Read the contents of the google-dockercfg metadata key and // parse them as an alternate .dockercfg if cfg, err := credentialprovider.ReadDockerConfigFileFromUrl(dockerConfigKey, g.Client, metadataHeader); err == nil { if cfg, err := credentialprovider.ReadDockerConfigFileFromUrl(dockerConfigKey, g.Client, metadataHeader); err != nil { glog.Errorf(\"while reading 'google-dockercfg' metadata: %v\", err) } else { return cfg }"} {"_id":"doc-en-kubernetes-c49221226005afebb55e5b8c82f053a20a3e44eeaaba4e089fc1c0ea9e33bb0b","title":"","text":"// Provide implements DockerConfigProvider func (g *dockerConfigUrlKeyProvider) 
Provide() credentialprovider.DockerConfig {
	// Read the contents of the google-dockercfg-url key and load a .dockercfg from there
	if url, err := credentialprovider.ReadUrl(dockerConfigUrlKey, g.Client, metadataHeader); err != nil {
		glog.Errorf("while reading 'google-dockercfg-url' metadata: %v", err)
	} else {
		if strings.HasPrefix(string(url), "http") {
			if cfg, err := credentialprovider.ReadDockerConfigFileFromUrl(string(url), g.Client, nil); err != nil {
				glog.Errorf("while reading 'google-dockercfg-url'-specified url: %s, %v", string(url), err)
			} else {
				return cfg
			}
		} else {

tokenJsonBlob, err := credentialprovider.ReadUrl(metadataToken, g.Client, metadataHeader)
if err != nil {
	glog.Errorf("while reading access token endpoint: %v", err)
	return cfg
}
email, err := credentialprovider.ReadUrl(metadataEmail, g.Client, metadataHeader)
if err != nil {
	glog.Errorf("while reading email endpoint: %v", err)
	return cfg
}

# Optional: Enable node logging.
ENABLE_NODE_LOGGING=true LOGGING_DESTINATION=elasticsearch # options: elasticsearch, gcp # Don't require https for registries in our local RFC1918 network EXTRA_DOCKER_OPTS=\"--insecure-registry 10.0.0.0/8\" "} {"_id":"doc-en-kubernetes-9d730cd43a86d1d194c08a666b7c4ea237baff9e736083247c4b512d4bfaccff","title":"","text":"LOGGING_DESTINATION=elasticsearch # options: elasticsearch, gcp ENABLE_CLUSTER_MONITORING=false # Don't require https for registries in our local RFC1918 network EXTRA_DOCKER_OPTS=\"--insecure-registry 10.0.0.0/8\" "} {"_id":"doc-en-kubernetes-e2b8f67c6c6914e7094a331cffc57628b1c018982fe24aca865cb2743d3b3c14","title":"","text":"cloud: gce EOF DOCKER_OPTS=\"\" if [[ -n \"${EXTRA_DOCKER_OPTS-}\" ]]; then DOCKER_OPTS=\"${EXTRA_DOCKER_OPTS}\" fi # Decide if enable the cache if [[ \"${ENABLE_DOCKER_REGISTRY_CACHE}\" == \"true\" ]]; then if [[ \"${ENABLE_DOCKER_REGISTRY_CACHE}\" == \"true\" ]]; then REGION=$(echo \"${ZONE}\" | cut -f 1,2 -d -) echo \"Enable docker registry cache at region: \" $REGION DOCKER_OPTS=\"--registry-mirror=\"https://${REGION}.docker-cache.clustermaster.net\"\" DOCKER_OPTS=\"${DOCKER_OPTS} --registry-mirror='https://${REGION}.docker-cache.clustermaster.net'\" fi cat <>/etc/salt/minion.d/grains.conf if [[ -n \"{DOCKER_OPTS}\" ]]; then cat <>/etc/salt/minion.d/grains.conf docker_opts: $DOCKER_OPTS EOF fi"} {"_id":"doc-en-kubernetes-5d98b5dccb9509ca8d0bdfb67b6e15c16ed4a7753c80e3325ec0e19fe46263d7","title":"","text":"echo \"ZONE='${ZONE}'\" echo \"MASTER_NAME='${MASTER_NAME}'\" echo \"MINION_IP_RANGE='${MINION_IP_RANGES[$i]}'\" echo \"EXTRA_DOCKER_OPTS='${EXTRA_DOCKER_OPTS}'\" echo \"ENABLE_DOCKER_REGISTRY_CACHE='${ENABLE_DOCKER_REGISTRY_CACHE:-false}'\" grep -v \"^#\" \"${KUBE_ROOT}/cluster/gce/templates/common.sh\" grep -v \"^#\" \"${KUBE_ROOT}/cluster/gce/templates/salt-minion.sh\""} {"_id":"doc-en-kubernetes-fe8f820400f1edc5d05f4f6403cece3f14d841eec9396013f16753e8c8aa16bf","title":"","text":" DOCKER_OPTS=\"\" {% if 
grains.docker_opts is defined %} {% set docker_opts = grains.docker_opts %} {% else %} {% set docker_opts = \"\" %} DOCKER_OPTS=\"${DOCKER_OPTS} {{grains.docker_opts}}\" {% endif %} DOCKER_OPTS=\"{{docker_opts}} --bridge cbr0 --iptables=false --ip-masq=false -r=false\" DOCKER_OPTS=\"${DOCKER_OPTS} --bridge cbr0 --iptables=false --ip-masq=false -r=false\" "} {"_id":"doc-en-kubernetes-fe567d9cf78409720bcbb87b1ad6339cb18b53dd88041da1d575419b6cc2df05","title":"","text":" # The Kubernetes API Primary system and API concepts are documented in the [User guide](user-guide.md). Overall API conventions are described in the [API conventions doc](api-conventions.md). Complete API details are documented via [Swagger](http://swagger.io/). The Kubernetes apiserver (aka \"master\") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swaggerui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/). Remote access to the API is discussed in the [access doc](accessing_the_api.md). The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](kubectl.md) command-line tool can be used to create, update, delete, and get API objects. Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources. Kubernetes itself is decomposed into multiple components, which interact through its API. ## API changes In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. 
However, we do not intend to break compatibility with existing clients for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy.

What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes.md).

## API versioning

Fine-grained resource evolution alone makes it difficult to eliminate fields or restructure resource representations. Therefore, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true.

Distinct API versions present clearer, more consistent views of system resources and behavior than intermingled, independently evolved resources. They also provide a more straightforward mechanism for controlling access to end-of-lifed and/or experimental APIs. The [API and release versioning proposal](versioning.md) describes the current thinking on the API version evolution process.

## v1beta1 and v1beta2 are deprecated; please move to v1beta3 ASAP

As of April 1, 2015, the Kubernetes v1beta3 API has been enabled by default, and the v1beta1 and v1beta2 APIs are deprecated. v1beta3 should be considered the v1 release-candidate API, and the v1 API is expected to be substantially similar. As "pre-release" APIs, v1beta1, v1beta2, and v1beta3 will be eliminated once the v1 API is available, by the end of June 2015.

## v1beta3 conversion tips

We're working to convert all documentation and examples to v1beta3.
Most examples already contain a v1beta3 subdirectory with the API objects translated to v1beta3. A simple [API conversion tool](cluster_management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec.

Some important differences between v1beta1/2 and v1beta3:

* The resource `id` is now called `name`.
* `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata`.
* `desiredState` is now called `spec`, and `currentState` is now called `status`.
* `/minions` has been moved to `/nodes`, and the resource has kind `Node`.
* The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}`.
* The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`.
* To watch for changes to a resource, open an HTTP or Websocket connection to the collection URL and provide the `?watch=true` URL parameter along with the desired `resourceVersion` parameter to watch from.
* The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`.
* Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](resources.md#resource-quantities) rather than fixed scales (e.g., milli-cores).
* Restart policy is represented simply as a string (e.g., "Always") rather than as a nested map ("always{}").
* The volume `source` is inlined into `volume` rather than nested.

`service`, a client can simply connect to $MYAPP_SERVICE_HOST on port $MYAPP_SERVICE_PORT.
## Service without selector

Services, in addition to providing a clean abstraction to access pods, can also abstract any kind of backend:

- you want to have an external database cluster in production, but in test you use your own databases.
- you want to point your service to a service in another [`namespace`](namespaces.md) or on another cluster.
- you are migrating your workload to Kubernetes and some of your backends run outside of Kubernetes.

In any of these scenarios you can define a service without a selector:

```json
{
  "kind": "Service",
  "apiVersion": "v1beta1",
  "id": "myapp",
  "port": 8765
}
```

then you can explicitly map the service to a specific endpoint(s):

```json
{
  "kind": "Endpoints",
  "apiVersion": "v1beta1",
  "id": "myapp",
  "endpoints": ["173.194.112.206:80"]
}
```

Access to a service without a selector works the same as if it had a selector. The traffic will be routed to the endpoints defined by the user (`173.194.112.206:80` in the case of this example).

## How do they work?

Each node in a Kubernetes cluster runs a `service proxy`. This application

EmptyDir *EmptyDir `json:"emptyDir"`
// GCEPersistentDisk represents a GCE Disk resource that is attached to a
// kubelet's host machine and then exposed to the pod.
GCEPersistentDisk *GCEPersistentDisk `json:"gcePersistentDisk"`
// GitRepo represents a git repository at a particular revision.
GitRepo *GitRepo `json:"gitRepo"`
}

// +build !windows

/*
Copyright 2014 Google Inc. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package gce_pd

import (
	"os"
	"syscall"
)

// Determine if a directory is a mountpoint, by comparing the device for the directory
// with the device for its parent. If they are the same, it's not a mountpoint; if they're
// different, it is.
func isMountPoint(file string) (bool, error) {
	stat, err := os.Stat(file)
	if err != nil {
		return false, err
	}
	rootStat, err := os.Lstat(file + "/..")
	if err != nil {
		return false, err
	}
	// If the directory has the same device as parent, then it's not a mountpoint.
	return stat.Sys().(*syscall.Stat_t).Dev != rootStat.Sys().(*syscall.Stat_t).Dev, nil
}

// +build !linux
// +build windows

/*
Copyright 2014 Google Inc. All rights reserved.

}
if event.Type == watch.Error {
	util.HandleError(fmt.Errorf("error from watch during sync: %v", errors.FromObject(event.Object)))
	// Clear the resource version, this may cause us to skip some elements on the watch,
	// but we'll catch them on the synchronize() call, so it works out.
	*resourceVersion = ""
	continue
}
glog.V(4).Infof("Got watch: %#v", event)
rc, ok := event.Object.(*api.ReplicationController)
if !ok {
	if status, ok := event.Object.(*api.Status); ok {
		if status.Status == api.StatusFailure {
			glog.Errorf("failed to watch: %v", status)
			// Clear resource version here, as above, this won't hurt consistency, but we
			// should consider introspecting more carefully here. (or make the apiserver smarter)
			// "why not both?"
			*resourceVersion = ""
			continue
		}
	}
	util.HandleError(fmt.Errorf("unexpected object: %#v", event.Object))
	continue
}

/*
Copyright 2015 Google Inc. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package e2e import ( \"fmt\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" . \"github.com/onsi/ginkgo\" ) var _ = Describe(\"Cadvisor\", func() { var c *client.Client BeforeEach(func() { var err error c, err = loadClient() expectNoError(err) }) It(\"cadvisor should be healthy on every node.\", func() { CheckCadvisorHealthOnAllNodes(c) }) }) func CheckCadvisorHealthOnAllNodes(c *client.Client) { By(\"getting list of nodes\") nodeList, err := c.Nodes().List() expectNoError(err) for _, node := range nodeList.Items { // cadvisor is not accessible directly unless its port (4194 by default) is exposed. // Here, we access '/stats/' REST endpoint on the kubelet which polls cadvisor internally. 
statsResource := fmt.Sprintf(\"api/v1beta1/proxy/minions/%s/stats/\", node.Name) By(fmt.Sprintf(\"Querying stats from node %s using url %s\", node.Name, statsResource)) _, err = c.Get().AbsPath(statsResource).Timeout(1 * time.Second).Do().Raw() expectNoError(err) } } "} {"_id":"doc-en-kubernetes-16f48068bcbca3bf360a382ec6a85c2d5022a889ab7256594b881254c3965af9","title":"","text":"t.Fatalf(\"Expected %#v, Got %#v\", expected, update) } case <-time.After(2 * time.Millisecond): case <-time.After(time.Second): t.Errorf(\"Expected update, timeout instead\") } }"} {"_id":"doc-en-kubernetes-97e3688f0778a62ea4ed90bf19a2fc7f7a5a3e031710b474dbddf49dd3fed16f","title":"","text":"t.Errorf(\"Unexpected UID: %s\", update.Pods[0].ObjectMeta.UID) } case <-time.After(2 * time.Millisecond): case <-time.After(time.Second): t.Errorf(\"Expected update, timeout instead\") } }"} {"_id":"doc-en-kubernetes-a16830786ad922ae371a556faf596d13c124c91ae32b8e6ca7d5a57ee9b59ce6","title":"","text":"# shasum # 6. Update this file with new tar version and new hash {% set etcd_version=\"v0.4.6\" %} {% set etcd_version=\"v2.0.0\" %} {% set etcd_tar_url=\"https://storage.googleapis.com/kubernetes-release/etcd/etcd-%s-linux-amd64.tar.gz\" | format(etcd_version) %} {% set etcd_tar_hash=\"sha1=5db514e30b9f340eda00671230d5136855ae14d7\" %} {% set etcd_tar_hash=\"sha1=b3cd41d1748bf882a58a98c9585fd5849b943811\" %} etcd-tar: archive:"} {"_id":"doc-en-kubernetes-683e6658148ffee96574ea294ab137472d5de55ee1fce20d75fcd621f3d6478f","title":"","text":"DESC=\"The etcd key-value share configuration service\" NAME=etcd DAEMON=/usr/local/bin/$NAME DAEMON_ARGS=\"-peer-addr $HOSTNAME:7001 -name $HOSTNAME\" # DAEMON_ARGS=\"-peer-addr $HOSTNAME:7001 -name $HOSTNAME\" host_ip=$(hostname -i) DAEMON_ARGS=\"-addr ${host_ip}:4001 -bind-addr ${host_ip}:4001 -data-dir /var/etcd -initial-advertise-peer-urls http://${HOSTNAME}:2380 -name ${HOSTNAME} -initial-cluster ${HOSTNAME}=http://${HOSTNAME}:2380\" 
DAEMON_LOG_FILE=/var/log/$NAME.log PIDFILE=/var/run/$NAME.pid SCRIPTNAME=/etc/init.d/$NAME"} {"_id":"doc-en-kubernetes-4897f940020ebbc6d789c1b604af7826318164ed84f0fcb9f572550b09d8a6ca","title":"","text":"exit 1 fi version=$(etcd -version | cut -d \" \" -f 3) if [[ \"${version}\" < \"2.0.0\" ]]; then kube::log::usage \"etcd version 2.0.0 or greater required.\" exit 1 fi # Start etcd ETCD_DIR=$(mktemp -d -t test-etcd.XXXXXX) etcd -name test -data-dir ${ETCD_DIR} -addr ${host}:${port} >/dev/null 2>/dev/null & kube::log::usage \"etcd -data-dir ${ETCD_DIR} -addr ${host}:${port} >/dev/null 2>/dev/null\" etcd -data-dir ${ETCD_DIR} -addr ${host}:${port} >/dev/null 2>/dev/null & ETCD_PID=$! kube::util::wait_for_url \"http://${host}:${port}/v2/keys/\" \"etcd: \" echo \"Waiting for etcd to come up.\" while true; do if curl -L http://127.0.0.1:4001/v2/keys/test -XPUT -d value=\"test\"; then break fi done kube::util::wait_for_url \"http://${host}:${port}/v2/keys/test\" \"etcd: \" } kube::etcd::cleanup() {"} {"_id":"doc-en-kubernetes-7cfd15081eef595237856b11fd1d8b4236ac17462447857f3ecdac70ffe3764a","title":"","text":"KUBE_ROOT=$(dirname \"${BASH_SOURCE}\")/../.. 
ETCD_VERSION=${ETCD_VERSION:-v2.0.0}
cd "${KUBE_ROOT}/third_party"
curl -sL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz

# The IP of the master
export MASTER_IP="10.245.1.2"
export INSTANCE_PREFIX="kubernetes"
export MASTER_NAME="${INSTANCE_PREFIX}-master"

# Map out the IPs, names and container subnets of each minion

dns_replicas: '$(echo "$DNS_REPLICAS" | sed -e "s/'/''/g")'
dns_server: '$(echo "$DNS_SERVER_IP" | sed -e "s/'/''/g")'
dns_domain: '$(echo "$DNS_DOMAIN" | sed -e "s/'/''/g")'
instance_prefix: '$(echo "$INSTANCE_PREFIX" | sed -e "s/'/''/g")'
EOF

# Configure the salt-master

# KUBE_PASSWORD
function get-password {
  # go template to extract the auth-path of the current-context user
  local template='{{with $ctx := index . "current-context"}}{{$user := index . "contexts" $ctx "user"}}{{index . "users" $user "auth-path"}}{{end}}'
  local file=$("${KUBE_ROOT}/cluster/kubectl.sh" config view -o template --template="${template}")
  if [[ !
-z \"$file\" && -r \"$file\" ]]; then KUBE_USER=$(cat \"$file\" | python -c 'import json,sys;print json.load(sys.stdin)[\"User\"]') KUBE_PASSWORD=$(cat \"$file\" | python -c 'import json,sys;print json.load(sys.stdin)[\"Password\"]') return"} {"_id":"doc-en-kubernetes-9239655b41b76696f68403bf206d47e9ae1a33eb75da975d968e4ae07d6e8c28","title":"","text":"Start the server: ```sh cluster/kubectl.sh proxy -www=$PWD/www cluster/kubectl.sh proxy --www=$PWD/www ``` The UI should now be running on [localhost](http://localhost:8001/static/index.html#/groups//selector)"} {"_id":"doc-en-kubernetes-6484546cff8dc36e27c9001cd1d1cb7e711f9fdc498f6376049f601a5982a344","title":"","text":"// PodSpec is a description of a pod type PodSpec struct { Volumes []Volume `json:\"volumes\"` Volumes []Volume `json:\"volumes\"` // Required: there must be at least one container in a pod. Containers []Container `json:\"containers\"` RestartPolicy RestartPolicy `json:\"restartPolicy,omitempty\"` // Required: Set DNS policy."} {"_id":"doc-en-kubernetes-85ed6f05622e5d94e6078804248e939e6930f1133f0ffced90b92effd011e26e","title":"","text":"// PodSpec is a description of a pod type PodSpec struct { Volumes []Volume `json:\"volumes\" description:\"list of volumes that can be mounted by containers belonging to the pod\"` Containers []Container `json:\"containers\" description:\"list of containers belonging to the pod; containers cannot currently be added or removed\"` Volumes []Volume `json:\"volumes\" description:\"list of volumes that can be mounted by containers belonging to the pod\"` // Required: there must be at least one container in a pod. 
Containers []Container `json:\"containers\" description:\"list of containers belonging to the pod; containers cannot currently be added or removed; there must be at least one container in a Pod\"` RestartPolicy RestartPolicy `json:\"restartPolicy,omitempty\" description:\"restart policy for all containers within the pod; one of RestartPolicyAlways, RestartPolicyOnFailure, RestartPolicyNever\"` // Optional: Set DNS policy. Defaults to \"ClusterFirst\" DNSPolicy DNSPolicy `json:\"dnsPolicy,omitempty\" description:\"DNS policy for containers within the pod; one of 'ClusterFirst' or 'Default'\"`"} {"_id":"doc-en-kubernetes-8f5013ede8864d4a4f033efd03c33f59b04400995c2fee31addc5a9a891f4f2d","title":"","text":"// PodSpec is a description of a pod type PodSpec struct { Volumes []Volume `json:\"volumes\" description:\"list of volumes that can be mounted by containers belonging to the pod\"` Containers []Container `json:\"containers\" description:\"list of containers belonging to the pod; cannot be updated; containers cannot currently be added or removed\"` Volumes []Volume `json:\"volumes\" description:\"list of volumes that can be mounted by containers belonging to the pod\"` // Required: there must be at least one container in a pod. Containers []Container `json:\"containers\" description:\"list of containers belonging to the pod; cannot be updated; containers cannot currently be added or removed; there must be at least one container in a Pod\"` RestartPolicy RestartPolicy `json:\"restartPolicy,omitempty\" description:\"restart policy for all containers within the pod; one of RestartPolicyAlways, RestartPolicyOnFailure, RestartPolicyNever\"` // Optional: Set DNS policy. 
Defaults to \"ClusterFirst\" DNSPolicy DNSPolicy `json:\"dnsPolicy,omitempty\" description:\"DNS policy for containers within the pod; one of 'ClusterFirst' or 'Default'\"`"} {"_id":"doc-en-kubernetes-70e1511db8a028ea8c2bc85a3c0c946e9059511717d715dadfc9db9d818cb540","title":"","text":"case \"name\": return \"name\", value, nil case \"DesiredState.Host\": return \"Status.Host\", value, nil return \"status.host\", value, nil case \"DesiredState.Status\": podStatus := PodStatus(value) var internalValue newer.PodPhase newer.Scheme.Convert(&podStatus, &internalValue) return \"Status.Phase\", string(internalValue), nil return \"status.phase\", string(internalValue), nil default: return \"\", \"\", fmt.Errorf(\"field label not supported: %s\", label) }"} {"_id":"doc-en-kubernetes-9ea43e3f33eeb3743dd7f22ee659669331a52a93e2eb10bfdfb6d516602d326a","title":"","text":"switch label { case \"name\": fallthrough case \"Status.Phase\": case \"status.phase\": fallthrough case \"Status.Host\": case \"status.host\": return label, value, nil default: return \"\", \"\", fmt.Errorf(\"field label not supported: %s\", label)"} {"_id":"doc-en-kubernetes-13947009bc9f7b46fc829ab3204d7997466abf7c3810b798776b65c05d4e7241","title":"","text":"label: \"label=qux\", expectedIDs: util.NewStringSet(\"qux\"), }, { field: \"Status.Phase=Failed\", field: \"status.phase=Failed\", expectedIDs: util.NewStringSet(\"baz\"), }, { field: \"Status.Host=barhost\", field: \"status.host=barhost\", expectedIDs: util.NewStringSet(\"bar\"), }, { field: \"Status.Host=\", field: \"status.host=\", expectedIDs: util.NewStringSet(\"foo\", \"baz\", \"qux\", \"zot\"), }, { field: \"Status.Host!=\", field: \"status.host!=\", expectedIDs: util.NewStringSet(\"bar\"), }, }"} {"_id":"doc-en-kubernetes-1c5c249d845c52b7f0673d42c2280fe388cae07d2186046b45c0b90b81ec0ff7","title":"","text":"func PodToSelectableFields(pod *api.Pod) labels.Set { return labels.Set{ \"name\": pod.Name, \"Status.Phase\": string(pod.Status.Phase), 
\"Status.Host\": pod.Status.Host, \"status.phase\": string(pod.Status.Phase), \"status.host\": pod.Status.Host, } }"} {"_id":"doc-en-kubernetes-3cffa85d874fce7a91b30059510244f0bfd0ee6dcca9b7598659c0f0d33d1e5e","title":"","text":"case \"v1beta1\", \"v1beta2\": return \"DesiredState.Host\" default: return \"Status.Host\" return \"status.host\" } }"} {"_id":"doc-en-kubernetes-d974c948cd5e284562a0eb13316116d1c4c247db3126c1374594721930e1ada1","title":"","text":"id: fluentd-to-gcp containers: - name: fluentd-gcp-container image: kubernetes/fluentd-gcp:1.0 image: kubernetes/fluentd-gcp:1.1 volumeMounts: - name: containers mountPath: /var/lib/docker/containers"} {"_id":"doc-en-kubernetes-fa76cdaaa31be376377ee41866579e41ccf0a8555e0690328e46ba3e07efcb70","title":"","text":".PHONY:\tbuild push TAG = 1.0 TAG = 1.1 build: sudo docker build -t kubernetes/fluentd-gcp:$(TAG) ."} {"_id":"doc-en-kubernetes-1a9de9bdee26d83ba471fedfe873f0807689d9d7ba9fd258368953654fa49de9","title":"","text":"path /var/lib/docker/containers/*/*-json.log pos_file /var/lib/docker/containers/containers.log.pos time_format %Y-%m-%dT%H:%M:%S tag docker.container.* tag docker.* read_from_head true type google_cloud flush_interval 5s # Never wait longer than 5 minutes between retries."} {"_id":"doc-en-kubernetes-bc33c6c589a1b7b1392f2ca9bb3a18ec63283e63a2570d5983352667921d9e33","title":"","text":"import ( \"fmt\" \"io/ioutil\" \"net/http\" \"sort\" \"strings\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\""} {"_id":"doc-en-kubernetes-792316d300c2c14b4ff41fefe9f89b3357d5505d15217c97651bddd5de293fc7","title":"","text":"}() }, 240.0) It(\"should be able to create a functioning external load balancer\", func() { serviceName := \"external-lb-test\" ns := api.NamespaceDefault labels := map[string]string{ \"key0\": \"value0\", } service := &api.Service{ ObjectMeta: api.ObjectMeta{ Name: serviceName, }, Spec: api.ServiceSpec{ Port: 80, Selector: labels, TargetPort: util.NewIntOrStringFromInt(80), 
CreateExternalLoadBalancer: true, }, } By(\"cleaning up previous service \" + serviceName + \" from namespace \" + ns) c.Services(ns).Delete(serviceName) By(\"creating service \" + serviceName + \" with external load balancer in namespace \" + ns) result, err := c.Services(ns).Create(service) Expect(err).NotTo(HaveOccurred()) defer func(ns, serviceName string) { // clean up when we're done By(\"deleting service \" + serviceName + \" in namespace \" + ns) err := c.Services(ns).Delete(serviceName) Expect(err).NotTo(HaveOccurred()) }(ns, serviceName) if len(result.Spec.PublicIPs) != 1 { Failf(\"got unexpected number (%d) of public IPs for externally load balanced service: %v\", result.Spec.PublicIPs, result) } ip := result.Spec.PublicIPs[0] port := result.Spec.Port pod := &api.Pod{ TypeMeta: api.TypeMeta{ Kind: \"Pod\", APIVersion: \"v1beta1\", }, ObjectMeta: api.ObjectMeta{ Name: \"elb-test-\" + string(util.NewUUID()), Labels: labels, }, Spec: api.PodSpec{ Containers: []api.Container{ { Name: \"webserver\", Image: \"kubernetes/test-webserver\", }, }, }, } By(\"creating pod to be part of service \" + serviceName) podClient := c.Pods(api.NamespaceDefault) defer func() { By(\"deleting pod \" + pod.Name) defer GinkgoRecover() podClient.Delete(pod.Name) }() if _, err := podClient.Create(pod); err != nil { Failf(\"Failed to create pod %s: %v\", pod.Name, err) } expectNoError(waitForPodRunning(c, pod.Name)) By(\"hitting the pod through the service's external load balancer\") var resp *http.Response for t := time.Now(); time.Since(t) < 4*time.Minute; time.Sleep(5 * time.Second) { resp, err = http.Get(fmt.Sprintf(\"http://%s:%d\", ip, port)) if err == nil { break } } Expect(err).NotTo(HaveOccurred()) defer resp.Body.Close() body, err := ioutil.ReadAll(resp.Body) Expect(err).NotTo(HaveOccurred()) if resp.StatusCode != 200 { Failf(\"received non-success return status %q trying to access pod through load balancer; got body: %s\", resp.Status, string(body)) } if 
!strings.Contains(string(body), \"test-webserver\") { Failf(\"received response body without expected substring 'test-webserver': %s\", string(body)) } }) It(\"should correctly serve identically named services in different namespaces on different external IP addresses\", func() { serviceNames := []string{\"services-namespace-test0\"} // Could add more here, but then it takes longer. namespaces := []string{\"namespace0\", \"namespace1\"} // As above."} {"_id":"doc-en-kubernetes-72de335dd829a82140eb7c112dc208a30920157e1177a07be64d42712c85e186","title":"","text":"- user: root - group: root - mode: 644 /etc/monit/conf.d/kube-proxy: file: - managed - source: salt://monit/kube-proxy - user: root - group: root - mode: 644 {% endif %} monit-service: service: - running - name: monit - name: monit - watch: - pkg: monit - pkg: monit - file: /etc/monit/conf.d/* {% endif %}"} {"_id":"doc-en-kubernetes-3701f5e9bc547f0927e7e901d14d5fff36a9c803b0f591700698c66174802914","title":"","text":" check process kube-proxy with pidfile /var/run/kube-proxy.pid group kube-proxy start program = \"/etc/init.d/kube-proxy start\" stop program = \"/etc/init.d/kube-proxy stop\" if does not exist then restart if failed port 10249 protocol HTTP request \"/healthz\" with timeout 10 seconds then restart "} {"_id":"doc-en-kubernetes-b9dbe25ff76cefd6cbdd9391e75cf09df87912fa1e04ae8351971c113fe8ae15","title":"","text":"// Start syncing node list from cloudprovider. if syncNodeList && nc.isRunningCloudProvider() { go util.Forever(func() { if err = nc.SyncCloudNodes(); err != nil { if err := nc.SyncCloudNodes(); err != nil { glog.Errorf(\"Error syncing cloud: %v\", err) } }, period)"} {"_id":"doc-en-kubernetes-442221afe3f5835f71768ab201c205c080345d3dfc3e3e1b33c8a72c2b868606","title":"","text":"// Start syncing or monitoring node status. 
if syncNodeStatus { go util.Forever(func() { if err = nc.SyncProbedNodeStatus(); err != nil { if err := nc.SyncProbedNodeStatus(); err != nil { glog.Errorf(\"Error syncing status: %v\", err) } }, period) } else { go util.Forever(func() { if err = nc.MonitorNodeStatus(); err != nil { if err := nc.MonitorNodeStatus(); err != nil { glog.Errorf(\"Error monitoring node status: %v\", err) } }, nodeMonitorPeriod)"} {"_id":"doc-en-kubernetes-64f9b86f5ec6dfefe00b1c68860c6178cc467215741af45f44f7faf5a2753ac8","title":"","text":"}, { \"ImportPath\": \"github.com/docker/spdystream\", \"Rev\": \"e731c8f9f19ffd7e51a469a2de1580c1dfbb4fae\" \"Rev\": \"99515db39d3dad9607e0293f18152f3d59da76dc\" }, { \"ImportPath\": \"github.com/elazarl/go-bindata-assetfs\","} {"_id":"doc-en-kubernetes-053f8acab286a5f198c08967aa78b86517c62614d9910c64e78f4c53d8852bce","title":"","text":"if timer != nil { timer.Stop() } // Start a goroutine to drain resetChan. This is needed because we've seen // some unit tests with large numbers of goroutines get into a situation // where resetChan fills up, at least 1 call to Write() is still trying to // send to resetChan, the connection gets closed, and this case statement // attempts to grab the write lock that Write() already has, causing a // deadlock. // // See https://github.com/docker/spdystream/issues/49 for more details. 
go func() { for _ = range resetChan { } }() i.writeLock.Lock() close(resetChan) i.resetChan = nil i.writeLock.Unlock() break Loop } }"} {"_id":"doc-en-kubernetes-2c22877a1660934c80a26cc59f123aad314705efbd22f89e40a359fceb84225b","title":"","text":"client.BindClientConfigFlags(fs, &s.ClientConfig) fs.StringVar(&s.AlgorithmProvider, \"algorithm_provider\", s.AlgorithmProvider, \"The scheduling algorithm provider to use\") fs.StringVar(&s.PolicyConfigFile, \"policy_config_file\", s.PolicyConfigFile, \"File with scheduler policy configuration\") fs.BoolVar(&s.EnableProfiling, \"profiling\", false, \"Enable profiling via web interface host:port/debug/pprof/\") fs.BoolVar(&s.EnableProfiling, \"profiling\", true, \"Enable profiling via web interface host:port/debug/pprof/\") } // Run runs the specified SchedulerServer. This should never exit."} {"_id":"doc-en-kubernetes-83c484a5085fa9d343f73b464d909c72a7ccd5d390861b384eb9e569aa4520bb","title":"","text":"} go func() { mux := http.NewServeMux() if s.EnableProfiling { http.HandleFunc(\"/debug/pprof/\", pprof.Index) http.HandleFunc(\"/debug/pprof/profile\", pprof.Profile) http.HandleFunc(\"/debug/pprof/symbol\", pprof.Symbol) mux.HandleFunc(\"/debug/pprof/\", pprof.Index) mux.HandleFunc(\"/debug/pprof/profile\", pprof.Profile) mux.HandleFunc(\"/debug/pprof/symbol\", pprof.Symbol) } http.Handle(\"/metrics\", prometheus.Handler()) http.ListenAndServe(net.JoinHostPort(s.Address.String(), strconv.Itoa(s.Port)), nil) mux.Handle(\"/metrics\", prometheus.Handler()) server := &http.Server{ Addr: net.JoinHostPort(s.Address.String(), strconv.Itoa(s.Port)), Handler: mux, } glog.Fatal(server.ListenAndServe()) }() configFactory := factory.NewConfigFactory(kubeClient)"} {"_id":"doc-en-kubernetes-a8e7be87469f3db4b87b353f0c5442e105958d66686a072ebeba0ec2f9c5238f","title":"","text":"allErrs := errs.ValidationErrorList{} allErrs = append(allErrs, ValidateObjectMetaUpdate(&oldNamespace.ObjectMeta, 
&newNamespace.ObjectMeta).Prefix(\"metadata\")...) newNamespace.Spec = oldNamespace.Spec if newNamespace.DeletionTimestamp.IsZero() { if newNamespace.Status.Phase != api.NamespaceActive { allErrs = append(allErrs, errs.NewFieldInvalid(\"Status.Phase\", newNamespace.Status.Phase, \"A namespace may only be in active status if it does not have a deletion timestamp.\")) } } else { if newNamespace.Status.Phase != api.NamespaceTerminating { allErrs = append(allErrs, errs.NewFieldInvalid(\"Status.Phase\", newNamespace.Status.Phase, \"A namespace may only be in terminating status if it has a deletion timestamp.\")) } } return allErrs }"} {"_id":"doc-en-kubernetes-e2e629b206b5725045db59a13a56bbb1e07d0387eef65c90c0f5625c5a672282","title":"","text":"} func TestValidateNamespaceStatusUpdate(t *testing.T) { now := util.Now() tests := []struct { oldNamespace api.Namespace namespace api.Namespace valid bool }{ {api.Namespace{}, api.Namespace{}, true}, {api.Namespace{}, api.Namespace{ Status: api.NamespaceStatus{ Phase: api.NamespaceActive, }, }, true}, {api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\"}}, api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\"}, Name: \"foo\", DeletionTimestamp: &now}, Status: api.NamespaceStatus{ Phase: api.NamespaceTerminating, },"} {"_id":"doc-en-kubernetes-bef27bef5c19cde72d5a3fb04023d8a16b2c68eb91d6f8f4b4b7eeaffab2c5a2","title":"","text":"Name: \"foo\"}}, api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\"}, Status: api.NamespaceStatus{ Phase: api.NamespaceTerminating, }, }, false}, {api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\"}}, api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"bar\"}, Status: api.NamespaceStatus{ Phase: api.NamespaceTerminating,"} {"_id":"doc-en-kubernetes-4523f688787ea72a5eee2a31a0921a8a91901ec4be3d23639647dc18b5f3c47c","title":"","text":"for i, test := range tests { test.namespace.ObjectMeta.ResourceVersion = \"1\" test.oldNamespace.ObjectMeta.ResourceVersion = \"1\" errs := 
ValidateNamespaceStatusUpdate(&test.oldNamespace, &test.namespace) errs := ValidateNamespaceStatusUpdate(&test.namespace, &test.oldNamespace) if test.valid && len(errs) > 0 { t.Errorf(\"%d: Unexpected error: %v\", i, errs) t.Logf(\"%#v vs %#v\", test.oldNamespace.ObjectMeta, test.namespace.ObjectMeta)"} {"_id":"doc-en-kubernetes-18bb6e3dc3203f351dbdc9bcd95190486b4cd69262f5530f27a34e45ef25a378","title":"","text":"namespace := nsObj.(*api.Namespace) // upon first request to delete, we switch the phase to start namespace termination if namespace.DeletionTimestamp == nil { if namespace.DeletionTimestamp.IsZero() { now := util.Now() namespace.DeletionTimestamp = &now namespace.Status.Phase = api.NamespaceTerminating"} {"_id":"doc-en-kubernetes-b51a6f6ad150d42c3ad2c2aac15dafd3233c0e6c88413c75cde1e341a9077c5d","title":"","text":"// prior to final deletion, we must ensure that finalizers is empty if len(namespace.Spec.Finalizers) != 0 { err = fmt.Errorf(\"Unable to delete namespace %v because finalizers is not empty %v\", namespace.Name, namespace.Spec.Finalizers) err = fmt.Errorf(\"Namespace %v termination is in progress, waiting for %v\", namespace.Name, namespace.Spec.Finalizers) return nil, err } return r.Etcd.Delete(ctx, name, nil) }"} {"_id":"doc-en-kubernetes-54ae05a93453d5828d9cb0fcc62d9ce1e8af2c9df4a3b559e1cba98cfa50d57e","title":"","text":"if err != nil { t.Fatalf(\"unexpected error: %v\", err) } } func TestDeleteNamespaceWithIncompleteFinalizers(t *testing.T) { now := util.Now() fakeEtcdClient, helper := newHelper(t) fakeEtcdClient.ChangeIndex = 1 fakeEtcdClient.Data[\"/registry/namespaces/foo\"] = tools.EtcdResponseWithError{ R: &etcd.Response{ Node: &etcd.Node{ Value: runtime.EncodeOrDie(latest.Codec, &api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\", DeletionTimestamp: &now, }, Spec: api.NamespaceSpec{ Finalizers: []api.FinalizerName{api.FinalizerKubernetes}, }, Status: api.NamespaceStatus{Phase: api.NamespaceActive}, }), ModifiedIndex: 1, 
CreatedIndex: 1, }, }, } storage, _, _ := NewStorage(helper) _, err := storage.Delete(api.NewDefaultContext(), \"foo\", nil) if err == nil { t.Fatalf(\"expected error: %v\", err) } } // TODO: when we add life-cycle, this will go to Terminating, and then we need to test Terminating to gone func TestDeleteNamespaceWithCompleteFinalizers(t *testing.T) { now := util.Now() fakeEtcdClient, helper := newHelper(t) fakeEtcdClient.ChangeIndex = 1 fakeEtcdClient.Data[\"/registry/namespaces/foo\"] = tools.EtcdResponseWithError{ R: &etcd.Response{ Node: &etcd.Node{ Value: runtime.EncodeOrDie(latest.Codec, &api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\", DeletionTimestamp: &now, }, Spec: api.NamespaceSpec{ Finalizers: []api.FinalizerName{}, }, Status: api.NamespaceStatus{Phase: api.NamespaceActive}, }), ModifiedIndex: 1, CreatedIndex: 1, }, }, } storage, _, _ := NewStorage(helper) _, err := storage.Delete(api.NewDefaultContext(), \"foo\", nil) if err != nil { t.Fatalf(\"unexpected error: %v\", err) } }"} {"_id":"doc-en-kubernetes-1b33fed19f7ef15b640606726c76c5869145649cdb356928e732a50e9f6cfb9c","title":"","text":"\"testing\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util\" ) func TestNamespaceStrategy(t *testing.T) {"} {"_id":"doc-en-kubernetes-f6f7c6384c04bab0a1dd81fa2e794f07858a9d7b47b566b9cf486eeda7169e53","title":"","text":"if StatusStrategy.AllowCreateOnUpdate() { t.Errorf(\"Namespaces should not allow create on update\") } now := util.Now() oldNamespace := &api.Namespace{ ObjectMeta: api.ObjectMeta{Name: \"foo\", ResourceVersion: \"10\"}, Spec: api.NamespaceSpec{Finalizers: []api.FinalizerName{\"kubernetes\"}}, Status: api.NamespaceStatus{Phase: api.NamespaceActive}, } namespace := &api.Namespace{ ObjectMeta: api.ObjectMeta{Name: \"foo\", ResourceVersion: \"9\"}, ObjectMeta: api.ObjectMeta{Name: \"foo\", ResourceVersion: \"9\", DeletionTimestamp: &now}, Status: api.NamespaceStatus{Phase: 
api.NamespaceTerminating}, } StatusStrategy.PrepareForUpdate(namespace, oldNamespace)"} {"_id":"doc-en-kubernetes-221c64b381a3e63c9a8504c0237f740ee33c62c5df53a0e58a4646ceb83c7e2a","title":"","text":"package gce_cloud import ( \"crypto/md5\" \"fmt\" \"io\" \"io/ioutil\""} {"_id":"doc-en-kubernetes-7c38531342edb56a5c8d7f0af677b0f3de4b3de5c422f21ebc94dfdcff3de8f3","title":"","text":"\"google.golang.org/cloud/compute/metadata\" ) const LOAD_BALANCER_NAME_MAX_LENGTH = 63 // GCECloud is an implementation of Interface, TCPLoadBalancer and Instances for Google Compute Engine. type GCECloud struct { service *compute.Service"} {"_id":"doc-en-kubernetes-9f0f66f1e2af5ad5cd1119dd9952c9a96f14b7b79017b33715264980b7ff9712","title":"","text":"} } func normalizeName(name string) string { // If it's short enough, just leave it. if len(name) < LOAD_BALANCER_NAME_MAX_LENGTH-6 { return name } // Hash and truncate hash := md5.Sum([]byte(name)) truncated := name[0 : LOAD_BALANCER_NAME_MAX_LENGTH-6] shortHash := hash[0:6] return fmt.Sprintf(\"%s%s\", truncated, string(shortHash)) } // CreateTCPLoadBalancer is an implementation of TCPLoadBalancer.CreateTCPLoadBalancer. // TODO(a-robinson): Don't just ignore specified IP addresses. Check if they're // owned by the project and available to be used, and use them if they are. 
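The `normalizeName` helper quoted below truncates over-long GCE load-balancer names and appends a digest-based suffix, but note it appends the raw md5 bytes via `string(shortHash)`, which are not printable characters. A hypothetical variant (not the provider's code) that hex-encodes the suffix so the result stays safe in DNS-style names:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

const maxNameLen = 63 // GCE resource-name limit, as in the constant above

// shortenName truncates names longer than maxNameLen and appends six
// hex characters of the md5 digest, so distinct long names remain
// distinguishable after truncation. (Illustrative helper only.)
func shortenName(name string) string {
	if len(name) <= maxNameLen {
		return name
	}
	sum := md5.Sum([]byte(name))
	return name[:maxNameLen-6] + hex.EncodeToString(sum[:])[:6]
}

func main() {
	long := "a-load-balancer-name-that-is-much-too-long-to-fit-the-sixty-three-character-gce-limit"
	fmt.Println(shortenName(long), len(shortenName(long)))
}
```

Hex encoding trades two digest characters of entropy per byte for printability; six hex characters still give 16 million suffixes, plenty to disambiguate truncated names in one project.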
func (gce *GCECloud) CreateTCPLoadBalancer(name, region string, externalIP net.IP, ports []int, hosts []string, affinityType api.AffinityType) (string, error) { func (gce *GCECloud) CreateTCPLoadBalancer(origName, region string, externalIP net.IP, ports []int, hosts []string, affinityType api.AffinityType) (string, error) { name := normalizeName(origName) err := gce.makeTargetPool(name, region, hosts, translateAffinityType(affinityType)) if err != nil { if !isHTTPErrorCode(err, http.StatusConflict) {"} {"_id":"doc-en-kubernetes-0589cd5e0961021b139354ddc49080f986af63d6dd6d63a065c8eb71ea824f30","title":"","text":"} // UpdateTCPLoadBalancer is an implementation of TCPLoadBalancer.UpdateTCPLoadBalancer. func (gce *GCECloud) UpdateTCPLoadBalancer(name, region string, hosts []string) error { func (gce *GCECloud) UpdateTCPLoadBalancer(origName, region string, hosts []string) error { name := normalizeName(origName) pool, err := gce.service.TargetPools.Get(gce.projectID, region, name).Do() if err != nil { return err"} {"_id":"doc-en-kubernetes-be640f38bac80d03141af601bc3f04dc941ec4aba3d66645438ffee1b9dd4be4","title":"","text":"} // DeleteTCPLoadBalancer is an implementation of TCPLoadBalancer.DeleteTCPLoadBalancer. func (gce *GCECloud) DeleteTCPLoadBalancer(name, region string) error { func (gce *GCECloud) DeleteTCPLoadBalancer(origName, region string) error { name := normalizeName(origName) op, err := gce.service.ForwardingRules.Delete(gce.projectID, region, name).Do() if err != nil && isHTTPErrorCode(err, http.StatusNotFound) { glog.Infof(\"Forwarding rule %s already deleted. Continuing to delete target pool.\", name)"} {"_id":"doc-en-kubernetes-f0899a70d4d7701f5e40d31d72321d33920f5c1a816eb7c327d52b70543cc99c","title":"","text":" #!/bin/bash # Copyright 2015 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Create generic token following GCE standard create_token() { echo $(cat /dev/urandom | base64 | tr -d \"=+/\" | dd bs=32 count=1 2> /dev/null) } get_tokens_from_csv() { KUBE_BEARER_TOKEN=$(awk -F, '/admin/ {print $1}' ${KUBE_TEMP}/${1}_tokens.csv) KUBELET_TOKEN=$(awk -F, '/kubelet/ {print $1}' ${KUBE_TEMP}/${1}_tokens.csv) KUBE_PROXY_TOKEN=$(awk -F, '/kube_proxy/ {print $1}' ${KUBE_TEMP}/${1}_tokens.csv) } generate_admin_token() { echo \"$(create_token),admin,admin\" >> ${KUBE_TEMP}/known_tokens.csv } # Creates a csv file each time called (i.e one per kubelet). generate_kubelet_tokens() { echo \"$(create_token),kubelet,kubelet\" > ${KUBE_TEMP}/${1}_tokens.csv echo \"$(create_token),kube_proxy,kube_proxy\" >> ${KUBE_TEMP}/${1}_tokens.csv } "} {"_id":"doc-en-kubernetes-234fe7de88afd6e051cb253fc0993146b8838439e8d09380ad65bb1527880bfa","title":"","text":" #cloud-config write_files: - path: /etc/cloud.conf permissions: 0600 content: | [Global] auth-url = OS_AUTH_URL username = OS_USERNAME api-key = OS_PASSWORD tenant-id = OS_TENANT_NAME region = OS_REGION_NAME [LoadBalancer] subnet-id = 11111111-1111-1111-1111-111111111111 - path: /opt/bin/git-kubernetes-nginx.sh permissions: 0755 content: | #!/bin/bash git clone https://github.com/thommay/kubernetes_nginx /opt/kubernetes_nginx /usr/bin/cp /opt/.kubernetes_auth /opt/kubernetes_nginx/.kubernetes_auth /opt/kubernetes_nginx/git-kubernetes-nginx.sh - path: /opt/bin/download-release.sh permissions: 0755 content: | #!/bin/bash # This temp URL is only good for the length of time specified at cluster creation time. 
# Afterward, it will result in a 403. OBJECT_URL=\"CLOUD_FILES_URL\" if [ ! -s /opt/kubernetes.tar.gz ] then echo \"Downloading release ($OBJECT_URL)\" wget \"${OBJECT_URL}\" -O /opt/kubernetes.tar.gz echo \"Unpacking release\" rm -rf /opt/kubernetes || false tar xzf /opt/kubernetes.tar.gz -C /opt/ else echo \"kubernetes release found. Skipping download.\" fi - path: /opt/.kubernetes_auth permissions: 0600 content: | KUBE_USER:KUBE_PASSWORD coreos: etcd2: discovery: https://discovery.etcd.io/DISCOVERY_ID advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 initial-advertise-peer-urls: http://$private_ipv4:2380 listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001 flannel: ip_masq: true interface: eth2 fleet: public-ip: $private_ipv4 metadata: kubernetes_role=master update: reboot-strategy: off units: - name: etcd2.service command: start - name: fleet.service command: start - name: flanneld.service drop-ins: - name: 50-flannel.conf content: | [Unit] Requires=etcd2.service After=etcd2.service [Service] ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{\"Network\":\"KUBE_NETWORK\", \"Backend\": {\"Type\": \"host-gw\"}}' command: start - name: generate-serviceaccount-key.service command: start content: | [Unit] Description=Generate service-account key file [Service] ExecStartPre=-/usr/bin/mkdir -p /var/run/kubernetes/ ExecStart=/bin/openssl genrsa -out /var/run/kubernetes/kube-serviceaccount.key 2048 2>/dev/null RemainAfterExit=yes Type=oneshot - name: docker.service command: start drop-ins: - name: 51-docker-mirror.conf content: | [Unit] # making sure that flanneld finished startup, otherwise containers # won't land in flannel's network... 
Requires=flanneld.service After=flanneld.service Restart=Always - name: download-release.service command: start content: | [Unit] Description=Downloads Kubernetes Release After=network-online.target Requires=network-online.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/bin/bash /opt/bin/download-release.sh - name: kube-apiserver.service command: start content: | [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=download-release.service Requires=download-release.service Requires=generate-serviceaccount-key.service After=generate-serviceaccount-key.service [Service] ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kube-apiserver /opt/bin/kube-apiserver ExecStartPre=/usr/bin/mkdir -p /var/lib/kube-apiserver ExecStart=/opt/bin/kube-apiserver --address=127.0.0.1 --cloud-provider=rackspace --cloud-config=/etc/cloud.conf --etcd-servers=http://127.0.0.1:4001 --logtostderr=true --port=8080 --service-cluster-ip-range=SERVICE_CLUSTER_IP_RANGE --token-auth-file=/var/lib/kube-apiserver/known_tokens.csv --v=2 --service-account-key-file=/var/run/kubernetes/kube-serviceaccount.key --service-account-lookup=true --admission-control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultTolerationSeconds,ResourceQuota Restart=always RestartSec=5 - name: apiserver-advertiser.service command: start content: | [Unit] Description=Kubernetes Apiserver Advertiser After=etcd2.service Requires=etcd2.service After=master-apiserver.service [Service] ExecStart=/bin/sh -c 'etcdctl set /corekube/apiservers/$public_ipv4 $public_ipv4' Restart=always RestartSec=120 - name: kube-controller-manager.service command: start content: | [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=kube-apiserver.service 
Requires=kube-apiserver.service [Service] ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kube-controller-manager /opt/bin/kube-controller-manager ExecStart=/opt/bin/kube-controller-manager --cloud-provider=rackspace --cloud-config=/etc/cloud.conf --logtostderr=true --master=127.0.0.1:8080 --v=2 --service-account-private-key-file=/var/run/kubernetes/kube-serviceaccount.key --root-ca-file=/run/kubernetes/apiserver.crt Restart=always RestartSec=5 - name: kube-scheduler.service command: start content: | [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=kube-apiserver.service Requires=kube-apiserver.service [Service] ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kube-scheduler /opt/bin/kube-scheduler ExecStart=/opt/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080 Restart=always RestartSec=5 #Running nginx service with --net=\"host\" is a necessary evil until running all k8s services in docker. 
- name: kubernetes-nginx.service command: start content: | [Unit] Description=Kubernetes Nginx Service After=network-online.target Requires=network-online.target After=docker.service Requires=docker.service [Service] ExecStartPre=/opt/bin/git-kubernetes-nginx.sh ExecStartPre=-/usr/bin/docker rm kubernetes_nginx ExecStart=/usr/bin/docker run --rm --net=\"host\" -p \"443:443\" -t --name \"kubernetes_nginx\" kubernetes_nginx ExecStop=/usr/bin/docker stop kubernetes_nginx Restart=always RestartSec=15 "} {"_id":"doc-en-kubernetes-d74319ac30c5857ce1cd653923a9913ee799a474230ff450b2e5deef7e55d6e3","title":"","text":" #cloud-config write_files: - path: /opt/bin/regen-apiserver-list.sh permissions: 0755 content: | #!/bin/sh m=$(echo $(etcdctl ls --recursive /corekube/apiservers | cut -d/ -f4 | sort) | tr ' ' ,) mkdir -p /run/kubelet echo \"APISERVER_IPS=$m\" > /run/kubelet/apiservers.env echo \"FIRST_APISERVER_URL=https://${m%%,*}:6443\" >> /run/kubelet/apiservers.env - path: /opt/bin/download-release.sh permissions: 0755 content: | #!/bin/bash # This temp URL is only good for the length of time specified at cluster creation time. # Afterward, it will result in a 403. OBJECT_URL=\"CLOUD_FILES_URL\" if [ ! -s /opt/kubernetes.tar.gz ] then echo \"Downloading release ($OBJECT_URL)\" wget \"${OBJECT_URL}\" -O /opt/kubernetes.tar.gz echo \"Unpacking release\" rm -rf /opt/kubernetes || false tar xzf /opt/kubernetes.tar.gz -C /opt/ else echo \"kubernetes release found. 
Skipping download.\" fi - path: /run/config-kubelet.sh permissions: 0755 content: | #!/bin/bash -e set -x /usr/bin/mkdir -p /var/lib/kubelet cat > /var/lib/kubelet/kubeconfig << EOF apiVersion: v1 kind: Config users: - name: kubelet user: token: KUBELET_TOKEN clusters: - name: local cluster: insecure-skip-tls-verify: true contexts: - context: cluster: local user: kubelet name: service-account-context current-context: service-account-context EOF - path: /run/config-kube-proxy.sh permissions: 0755 content: | #!/bin/bash -e set -x /usr/bin/mkdir -p /var/lib/kube-proxy cat > /var/lib/kube-proxy/kubeconfig << EOF apiVersion: v1 kind: Config users: - name: kube-proxy user: token: KUBE_PROXY_TOKEN clusters: - name: local cluster: insecure-skip-tls-verify: true contexts: - context: cluster: local user: kube-proxy name: service-account-context current-context: service-account-context EOF coreos: etcd2: discovery: https://discovery.etcd.io/DISCOVERY_ID advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 initial-advertise-peer-urls: http://$private_ipv4:2380 listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001 flannel: ip_masq: true interface: eth2 fleet: public-ip: $private_ipv4 metadata: kubernetes_role=minion update: reboot-strategy: off units: - name: etcd2.service command: start - name: fleet.service command: start - name: flanneld.service drop-ins: - name: 50-flannel.conf content: | [Unit] Requires=etcd2.service After=etcd2.service [Service] ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{\"Network\":\"KUBE_NETWORK\", \"Backend\": {\"Type\": \"host-gw\"}}' command: start - name: docker.service command: start drop-ins: - name: 51-docker-mirror.conf content: | [Unit] # making sure that flanneld finished startup, otherwise containers # won't land in flannel's network... 
Requires=flanneld.service After=flanneld.service Restart=Always - name: download-release.service command: start content: | [Unit] Description=Downloads Kubernetes Release After=network-online.target Requires=network-online.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/bin/bash /opt/bin/download-release.sh - name: kubelet.service command: start content: | [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=docker.service Requires=docker.service After=download-release.service Requires=download-release.service After=apiserver-finder.service Requires=apiserver-finder.service [Service] EnvironmentFile=/run/kubelet/apiservers.env ExecStartPre=/run/config-kubelet.sh ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kubelet /opt/bin/kubelet ExecStart=/opt/bin/kubelet --address=$private_ipv4 --api-servers=${FIRST_APISERVER_URL} --cluster-dns=DNS_SERVER_IP --cluster-domain=DNS_DOMAIN --healthz-bind-address=$private_ipv4 --hostname-override=$private_ipv4 --logtostderr=true --v=2 Restart=always RestartSec=5 KillMode=process - name: kube-proxy.service command: start content: | [Unit] Description=Kubernetes Proxy Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=docker.service Requires=docker.service After=download-release.service Requires=download-release.service After=apiserver-finder.service Requires=apiserver-finder.service [Service] EnvironmentFile=/run/kubelet/apiservers.env ExecStartPre=/run/config-kube-proxy.sh ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kube-proxy /opt/bin/kube-proxy ExecStart=/opt/bin/kube-proxy --bind-address=$private_ipv4 --kubeconfig=/var/lib/kube-proxy/kubeconfig --logtostderr=true --hostname-override=$private_ipv4 --master=${FIRST_APISERVER_URL} Restart=always RestartSec=5 - name: apiserver-finder.service command: start content: | [Unit] 
Description=Kubernetes Apiserver finder After=network-online.target Requires=network-online.target After=etcd2.service Requires=etcd2.service [Service] ExecStartPre=/opt/bin/regen-apiserver-list.sh ExecStart=/usr/bin/etcdctl exec-watch --recursive /corekube/apiservers -- /opt/bin/regen-apiserver-list.sh Restart=always RestartSec=30 - name: cbr0.netdev command: start content: | [NetDev] Kind=bridge Name=cbr0 - name: cbr0.network command: start content: | [Match] Name=cbr0 [Network] Address=10.240.INDEX.1/24 - name: nat.service command: start content: | [Unit] Description=NAT container->outside traffic [Service] ExecStart=/usr/sbin/iptables -t nat -A POSTROUTING -o eth0 -s 10.240.INDEX.0/24 -j MASQUERADE ExecStart=/usr/sbin/iptables -t nat -A POSTROUTING -o eth1 -s 10.240.INDEX.0/24 -j MASQUERADE RemainAfterExit=yes Type=oneshot "} {"_id":"doc-en-kubernetes-890b30325ea00471731c55586f129e9f88b4c5191e31ff8f1c55e9b0eb69c6e6","title":"","text":" #!/bin/bash # Copyright 2014 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Sane defaults for dev environments. 
The following variables can be easily overridden # by setting each as an ENV variable ahead of time: # KUBE_IMAGE, KUBE_MASTER_FLAVOR, KUBE_NODE_FLAVOR, NUM_NODES, NOVA_NETWORK and SSH_KEY_NAME # Shared KUBE_IMAGE=\"${KUBE_IMAGE-3eba4fbb-51da-4233-b699-8a4030561add}\" # CoreOS (Stable) SSH_KEY_NAME=\"${SSH_KEY_NAME-id_kubernetes}\" NOVA_NETWORK_LABEL=\"kubernetes-pool-net\" NOVA_NETWORK_CIDR=\"${NOVA_NETWORK-192.168.0.0/24}\" INSTANCE_PREFIX=\"kubernetes\" # Master KUBE_MASTER_FLAVOR=\"${KUBE_MASTER_FLAVOR-general1-1}\" MASTER_NAME=\"${INSTANCE_PREFIX}-master\" MASTER_TAG=\"tags=${INSTANCE_PREFIX}-master\" # Node KUBE_NODE_FLAVOR=\"${KUBE_NODE_FLAVOR-general1-2}\" NUM_NODES=\"${NUM_NODES-4}\" NODE_TAG=\"tags=${INSTANCE_PREFIX}-node\" NODE_NAMES=($(eval echo ${INSTANCE_PREFIX}-node-{1..${NUM_NODES}})) KUBE_NETWORK=\"10.240.0.0/16\" SERVICE_CLUSTER_IP_RANGE=\"10.0.0.0/16\" # formerly PORTAL_NET # Optional: Enable node logging. ENABLE_NODE_LOGGING=false LOGGING_DESTINATION=elasticsearch # Optional: When set to true, Elasticsearch and Kibana will be set up as part of the cluster bring up. ENABLE_CLUSTER_LOGGING=false ELASTICSEARCH_LOGGING_REPLICAS=1 # Optional: Cluster monitoring to set up as part of the cluster bring up: # none - No cluster monitoring setup # influxdb - Heapster, InfluxDB, and Grafana # google - Heapster, Google Cloud Monitoring, and Google Cloud Logging ENABLE_CLUSTER_MONITORING=\"${KUBE_ENABLE_CLUSTER_MONITORING:-influxdb}\" # Optional: Install cluster DNS. ENABLE_CLUSTER_DNS=\"${KUBE_ENABLE_CLUSTER_DNS:-true}\" DNS_SERVER_IP=\"10.0.0.10\" DNS_DOMAIN=\"cluster.local\" # Optional: Enable DNS horizontal autoscaler ENABLE_DNS_HORIZONTAL_AUTOSCALER=\"${KUBE_ENABLE_DNS_HORIZONTAL_AUTOSCALER:-false}\" # Optional: Install Kubernetes UI ENABLE_CLUSTER_UI=\"${KUBE_ENABLE_CLUSTER_UI:-true}\" "} {"_id":"doc-en-kubernetes-59137f87b20aa7e948fbc3602a242c043eac9e77f42cbb0bd09bea40bcbc4e10","title":"","text":" #!/bin/bash # Copyright 2014 The Kubernetes Authors. 
# # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Bring up a Kubernetes cluster. # # If the full release name (gs:///) is passed in then we take # that directly. If not then we assume we are doing development stuff and take # the defaults in the release config. # exit on any error set -e source $(dirname $0)/../kube-util.sh echo \"Starting cluster using provider: $KUBERNETES_PROVIDER\" verify-prereqs kube-up # skipping validation for now, since machines show up as private IPs # source $(dirname $0)/validate-cluster.sh echo \"Done\" "} {"_id":"doc-en-kubernetes-5cc07505eede528b025ddf42d44e1c479ad850e9dfc786f78b8090e80c1bbb63","title":"","text":" #!/bin/bash # Copyright 2014 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # A library of helper functions for deploying on Rackspace # Use the config file specified in $KUBE_CONFIG_FILE, or default to # config-default.sh. KUBE_ROOT=$(dirname \"${BASH_SOURCE}\")/../.. 
source $(dirname ${BASH_SOURCE})/${KUBE_CONFIG_FILE-\"config-default.sh\"} source \"${KUBE_ROOT}/cluster/common.sh\" source \"${KUBE_ROOT}/cluster/rackspace/authorization.sh\" verify-prereqs() { # Make sure that prerequisites are installed. for x in nova swiftly; do if [ \"$(which $x)\" == \"\" ]; then echo \"cluster/rackspace/util.sh: Can't find $x in PATH, please fix and retry.\" exit 1 fi done if [[ -z \"${OS_AUTH_URL-}\" ]]; then echo \"cluster/rackspace/util.sh: OS_AUTH_URL not set.\" echo -e \"\texport OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/\" return 1 fi if [[ -z \"${OS_USERNAME-}\" ]]; then echo \"cluster/rackspace/util.sh: OS_USERNAME not set.\" echo -e \"\texport OS_USERNAME=myusername\" return 1 fi if [[ -z \"${OS_PASSWORD-}\" ]]; then echo \"cluster/rackspace/util.sh: OS_PASSWORD not set.\" echo -e \"\texport OS_PASSWORD=myapikey\" return 1 fi } rax-ssh-key() { if [ ! -f $HOME/.ssh/${SSH_KEY_NAME} ]; then echo \"cluster/rackspace/util.sh: Generating SSH KEY ${HOME}/.ssh/${SSH_KEY_NAME}\" ssh-keygen -f ${HOME}/.ssh/${SSH_KEY_NAME} -N '' > /dev/null fi if ! $(nova keypair-list | grep $SSH_KEY_NAME > /dev/null 2>&1); then echo \"cluster/rackspace/util.sh: Uploading key to Rackspace:\" echo -e \"\tnova keypair-add ${SSH_KEY_NAME} --pub-key ${HOME}/.ssh/${SSH_KEY_NAME}.pub\" nova keypair-add ${SSH_KEY_NAME} --pub-key ${HOME}/.ssh/${SSH_KEY_NAME}.pub > /dev/null 2>&1 else echo \"cluster/rackspace/util.sh: SSH key ${SSH_KEY_NAME}.pub already uploaded\" fi } rackspace-set-vars() { CLOUDFILES_CONTAINER=\"kubernetes-releases-${OS_USERNAME}\" CONTAINER_PREFIX=${CONTAINER_PREFIX-devel/} find-release-tars } # Retrieves a tempurl from cloudfiles to make the release object publicly accessible temporarily. 
find-object-url() { rackspace-set-vars KUBE_TAR=${CLOUDFILES_CONTAINER}/${CONTAINER_PREFIX}/kubernetes-server-linux-amd64.tar.gz # Create temp URL good for 24 hours RELEASE_TMP_URL=$(swiftly -A ${OS_AUTH_URL} -U ${OS_USERNAME} -K ${OS_PASSWORD} tempurl GET ${KUBE_TAR} 86400 ) echo \"cluster/rackspace/util.sh: Object temp URL:\" echo -e \"\t${RELEASE_TMP_URL}\" } ensure_dev_container() { SWIFTLY_CMD=\"swiftly -A ${OS_AUTH_URL} -U ${OS_USERNAME} -K ${OS_PASSWORD}\" if ! ${SWIFTLY_CMD} get ${CLOUDFILES_CONTAINER} > /dev/null 2>&1 ; then echo \"cluster/rackspace/util.sh: Container doesn't exist. Creating container ${CLOUDFILES_CONTAINER}\" ${SWIFTLY_CMD} put ${CLOUDFILES_CONTAINER} > /dev/null 2>&1 fi } # Copy kubernetes-server-linux-amd64.tar.gz to cloud files object store copy_dev_tarballs() { echo \"cluster/rackspace/util.sh: Uploading to Cloud Files\" ${SWIFTLY_CMD} put -i ${SERVER_BINARY_TAR} ${CLOUDFILES_CONTAINER}/${CONTAINER_PREFIX}/kubernetes-server-linux-amd64.tar.gz > /dev/null 2>&1 echo \"Release pushed.\" } prep_known_tokens() { for (( i=0; i<${#NODE_NAMES[@]}; i++)); do generate_kubelet_tokens ${NODE_NAMES[i]} cat ${KUBE_TEMP}/${NODE_NAMES[i]}_tokens.csv >> ${KUBE_TEMP}/known_tokens.csv done # Generate tokens for other \"service accounts\". Append to known_tokens. # # NB: If this list ever changes, this script actually has to # change to detect the existence of this file, kill any deleted # old tokens and add any new tokens (to handle the upgrade case). 
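The scripts above write known_tokens-style rows ("token,user,group") and read them back with awk field matching in `get_tokens_from_csv`. An equivalent lookup sketch in Go (hypothetical helper for illustration; note the awk version matches the user as a pattern anywhere on the line, while this matches the user field exactly):

```go
package main

import (
	"fmt"
	"strings"
)

// tokenFor scans known_tokens-style CSV rows of the form
// "token,user,group" and returns the token for the given user,
// mirroring the awk lookups in get_tokens_from_csv.
func tokenFor(csv, user string) string {
	for _, line := range strings.Split(csv, "\n") {
		fields := strings.Split(line, ",")
		if len(fields) == 3 && fields[1] == user {
			return fields[0]
		}
	}
	return ""
}

func main() {
	csv := "abc123,kubelet,kubelet\ndef456,kube_proxy,kube_proxy"
	fmt.Println(tokenFor(csv, "kube_proxy")) // prints "def456"
}
```

Exact field matching avoids the upgrade-case hazard the NB comment warns about: a pattern match like /kubelet/ would also hit a hypothetical "kubelet2" row, while a field comparison cannot.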
local -r service_accounts=(\"system:scheduler\" \"system:controller_manager\" \"system:logging\" \"system:monitoring\" \"system:dns\") for account in \"${service_accounts[@]}\"; do echo \"$(create_token),${account},${account}\" >> ${KUBE_TEMP}/known_tokens.csv done generate_admin_token } rax-boot-master() { DISCOVERY_URL=$(curl https://discovery.etcd.io/new?size=1) DISCOVERY_ID=$(echo \"${DISCOVERY_URL}\" | cut -f 4 -d /) echo \"cluster/rackspace/util.sh: etcd discovery URL: ${DISCOVERY_URL}\" # Copy cloud-config to KUBE_TEMP and work some sed magic sed -e \"s|DISCOVERY_ID|${DISCOVERY_ID}|\" -e \"s|CLOUD_FILES_URL|${RELEASE_TMP_URL//&/\\&}|\" -e \"s|KUBE_USER|${KUBE_USER}|\" -e \"s|KUBE_PASSWORD|${KUBE_PASSWORD}|\" -e \"s|SERVICE_CLUSTER_IP_RANGE|${SERVICE_CLUSTER_IP_RANGE}|\" -e \"s|KUBE_NETWORK|${KUBE_NETWORK}|\" -e \"s|OS_AUTH_URL|${OS_AUTH_URL}|\" -e \"s|OS_USERNAME|${OS_USERNAME}|\" -e \"s|OS_PASSWORD|${OS_PASSWORD}|\" -e \"s|OS_TENANT_NAME|${OS_TENANT_NAME}|\" -e \"s|OS_REGION_NAME|${OS_REGION_NAME}|\" $(dirname $0)/rackspace/cloud-config/master-cloud-config.yaml > $KUBE_TEMP/master-cloud-config.yaml MASTER_BOOT_CMD=\"nova boot --key-name ${SSH_KEY_NAME} --flavor ${KUBE_MASTER_FLAVOR} --image ${KUBE_IMAGE} --meta ${MASTER_TAG} --meta ETCD=${DISCOVERY_ID} --user-data ${KUBE_TEMP}/master-cloud-config.yaml --config-drive true --nic net-id=${NETWORK_UUID} ${MASTER_NAME}\" echo \"cluster/rackspace/util.sh: Booting ${MASTER_NAME} with following command:\" echo -e \"\\t$MASTER_BOOT_CMD\" $MASTER_BOOT_CMD } rax-boot-nodes() { cp $(dirname $0)/rackspace/cloud-config/node-cloud-config.yaml ${KUBE_TEMP}/node-cloud-config.yaml for (( i=0; i<${#NODE_NAMES[@]}; i++)); do get_tokens_from_csv ${NODE_NAMES[i]} sed -e \"s|DISCOVERY_ID|${DISCOVERY_ID}|\" -e \"s|CLOUD_FILES_URL|${RELEASE_TMP_URL//&/\\&}|\" -e \"s|DNS_SERVER_IP|${DNS_SERVER_IP:-}|\" -e \"s|DNS_DOMAIN|${DNS_DOMAIN:-}|\" -e \"s|ENABLE_CLUSTER_DNS|${ENABLE_CLUSTER_DNS:-false}|\" -e
\"s|ENABLE_NODE_LOGGING|${ENABLE_NODE_LOGGING:-false}|\" -e \"s|INDEX|$((i + 1))|g\" -e \"s|KUBELET_TOKEN|${KUBELET_TOKEN}|\" -e \"s|KUBE_NETWORK|${KUBE_NETWORK}|\" -e \"s|KUBELET_TOKEN|${KUBELET_TOKEN}|\" -e \"s|KUBE_PROXY_TOKEN|${KUBE_PROXY_TOKEN}|\" -e \"s|LOGGING_DESTINATION|${LOGGING_DESTINATION:-}|\" $(dirname $0)/rackspace/cloud-config/node-cloud-config.yaml > $KUBE_TEMP/node-cloud-config-$(($i + 1)).yaml NODE_BOOT_CMD=\"nova boot --key-name ${SSH_KEY_NAME} --flavor ${KUBE_NODE_FLAVOR} --image ${KUBE_IMAGE} --meta ${NODE_TAG} --user-data ${KUBE_TEMP}/node-cloud-config-$(( i +1 )).yaml --config-drive true --nic net-id=${NETWORK_UUID} ${NODE_NAMES[$i]}\" echo \"cluster/rackspace/util.sh: Booting ${NODE_NAMES[$i]} with following command:\" echo -e \"t$NODE_BOOT_CMD\" $NODE_BOOT_CMD done } rax-nova-network() { if ! $(nova network-list | grep $NOVA_NETWORK_LABEL > /dev/null 2>&1); then SAFE_CIDR=$(echo $NOVA_NETWORK_CIDR | tr -d '') NETWORK_CREATE_CMD=\"nova network-create $NOVA_NETWORK_LABEL $SAFE_CIDR\" echo \"cluster/rackspace/util.sh: Creating cloud network with following command:\" echo -e \"t${NETWORK_CREATE_CMD}\" $NETWORK_CREATE_CMD else echo \"cluster/rackspace/util.sh: Using existing cloud network $NOVA_NETWORK_LABEL\" fi } detect-nodes() { KUBE_NODE_IP_ADDRESSES=() for (( i=0; i<${#NODE_NAMES[@]}; i++)); do local node_ip=$(nova show --minimal ${NODE_NAMES[$i]} | grep accessIPv4 | awk '{print $4}') echo \"cluster/rackspace/util.sh: Found ${NODE_NAMES[$i]} at ${node_ip}\" KUBE_NODE_IP_ADDRESSES+=(\"${node_ip}\") done if [ -z \"$KUBE_NODE_IP_ADDRESSES\" ]; then echo \"cluster/rackspace/util.sh: Could not detect Kubernetes node nodes. 
Make sure you've launched a cluster with 'kube-up.sh'\" exit 1 fi } detect-master() { KUBE_MASTER=${MASTER_NAME} echo \"Waiting for ${MASTER_NAME} IP Address.\" echo echo \" This will continually check to see if the master node has an IP address.\" echo KUBE_MASTER_IP=$(nova show $KUBE_MASTER --minimal | grep accessIPv4 | awk '{print $4}') while [ \"${KUBE_MASTER_IP-|}\" == \"|\" ]; do KUBE_MASTER_IP=$(nova show $KUBE_MASTER --minimal | grep accessIPv4 | awk '{print $4}') printf \".\" sleep 2 done echo \"${KUBE_MASTER} IP Address is ${KUBE_MASTER_IP}\" } # $1 should be the network you would like to get an IP address for detect-master-nova-net() { KUBE_MASTER=${MASTER_NAME} MASTER_IP=$(nova show $KUBE_MASTER --minimal | grep $1 | awk '{print $5}') } kube-up() { SCRIPT_DIR=$(CDPATH=\"\" cd $(dirname $0); pwd) rackspace-set-vars ensure_dev_container copy_dev_tarballs # Find the release to use. Generally it will be passed when doing a 'prod' # install and will default to the release/config.sh version when doing a # developer up. find-object-url # Create a temp directory to hold scripts that will be uploaded to master/nodes KUBE_TEMP=$(mktemp -d -t kubernetes.XXXXXX) trap \"rm -rf ${KUBE_TEMP}\" EXIT load-or-gen-kube-basicauth python2.7 $(dirname $0)/../third_party/htpasswd/htpasswd.py -b -c ${KUBE_TEMP}/htpasswd $KUBE_USER $KUBE_PASSWORD HTPASSWD=$(cat ${KUBE_TEMP}/htpasswd) rax-nova-network NETWORK_UUID=$(nova network-list | grep -i ${NOVA_NETWORK_LABEL} | awk '{print $2}') # create and upload ssh key if necessary rax-ssh-key echo \"cluster/rackspace/util.sh: Starting Cloud Servers\" prep_known_tokens rax-boot-master rax-boot-nodes detect-master # TODO look for a better way to get the known_tokens to the master. This is needed over file injection since the files were too large on a 4 node cluster. 
$(scp -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} ${KUBE_TEMP}/known_tokens.csv core@${KUBE_MASTER_IP}:/home/core/known_tokens.csv) $(sleep 2) $(ssh -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} core@${KUBE_MASTER_IP} sudo /usr/bin/mkdir -p /var/lib/kube-apiserver) $(ssh -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} core@${KUBE_MASTER_IP} sudo mv /home/core/known_tokens.csv /var/lib/kube-apiserver/known_tokens.csv) $(ssh -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} core@${KUBE_MASTER_IP} sudo chown root:root /var/lib/kube-apiserver/known_tokens.csv) $(ssh -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} core@${KUBE_MASTER_IP} sudo systemctl restart kube-apiserver) FAIL=0 for job in `jobs -p` do wait $job || let \"FAIL+=1\" done if (( $FAIL != 0 )); then echo \"${FAIL} commands failed. Exiting.\" exit 2 fi echo \"Waiting for cluster initialization.\" echo echo \" This will continually check to see if the API for kubernetes is reachable.\" echo \" This might loop forever if there was some uncaught error during start\" echo \" up.\" echo #This will fail until apiserver salt is updated until $(curl --insecure --user ${KUBE_USER}:${KUBE_PASSWORD} --max-time 5 --fail --output /dev/null --silent https://${KUBE_MASTER_IP}/healthz); do printf \".\" sleep 2 done echo \"Kubernetes cluster created.\" export KUBE_CERT=\"\" export KUBE_KEY=\"\" export CA_CERT=\"\" export CONTEXT=\"rackspace_${INSTANCE_PREFIX}\" create-kubeconfig # Don't bail on errors, we want to be able to print some info. set +e detect-nodes # ensures KUBECONFIG is set get-kubeconfig-basicauth echo \"All nodes may not be online yet, this is okay.\" echo echo \"Kubernetes cluster is running. The master is running at:\" echo echo \" https://${KUBE_MASTER_IP}\" echo echo \"The user name and password to use is located in ${KUBECONFIG:-$DEFAULT_KUBECONFIG}.\" echo echo \"Security note: The server above uses a self signed certificate. 
This is\" echo \" subject to \"Man in the middle\" type attacks.\" echo } # Perform preparations required to run e2e tests function prepare-e2e() { echo \"Rackspace doesn't need special preparations for e2e tests\" } "} {"_id":"doc-en-kubernetes-4c9dc51b3201feb404937f941349480ed18fc2def1112cedddbfe043047e10e6","title":"","text":"return nil } glog.Infof(\"Setting Proxy IP to %v\", hostIP) return CreateProxier(loadBalancer, listenIP, iptables, hostIP) return createProxier(loadBalancer, listenIP, iptables, hostIP) } func CreateProxier(loadBalancer LoadBalancer, listenIP net.IP, iptables iptables.Interface, hostIP net.IP) *Proxier { func createProxier(loadBalancer LoadBalancer, listenIP net.IP, iptables iptables.Interface, hostIP net.IP) *Proxier { glog.Infof(\"Initializing iptables\") // Clean up old messes. Ignore erors. iptablesDeleteOld(iptables)"} {"_id":"doc-en-kubernetes-3447c1047c5b50c341e6518a25b0d5b781ccc191d100f4607e4d90a34aa51c01","title":"","text":"}}, }}) p := CreateProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) p := createProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) waitForNumProxyLoops(t, p, 0) svcInfoP, err := p.addServiceOnPort(serviceP, \"TCP\", 0, time.Second)"} {"_id":"doc-en-kubernetes-2692ea95d752e566ea380005856e9dd5663cbcdaa8e7b697eb6ad757627a431f","title":"","text":"serviceQ := ServicePortName{types.NamespacedName{\"testnamespace\", \"echo\"}, \"q\"} serviceX := ServicePortName{types.NamespacedName{\"testnamespace\", \"echo\"}, \"x\"} p := CreateProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) p := createProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) waitForNumProxyLoops(t, p, 0) p.OnUpdate([]api.Service{{"} {"_id":"doc-en-kubernetes-08371bdcb170d85f10964f5beebcbc7e6395698bcc3b36f2007dbbfe894f9887","title":"","text":"}, }) p := CreateProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, 
net.ParseIP(\"127.0.0.1\")) p := createProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) waitForNumProxyLoops(t, p, 0) svcInfo, err := p.addServiceOnPort(service, \"UDP\", 0, time.Second)"} {"_id":"doc-en-kubernetes-9918f6181845d4d93d373e98806ba39e99c86437fbb3fb95fe4b92f814c95a8f","title":"","text":"}, }) p := CreateProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) p := createProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) waitForNumProxyLoops(t, p, 0) svcInfo, err := p.addServiceOnPort(service, \"TCP\", 0, time.Second)"} {"_id":"doc-en-kubernetes-f9874b24eec205868f1f38d0ed2b3fdc7edcb3e69b50bd6ef86bb89285c53d83","title":"","text":"\"github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/types\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util\" utilyaml \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/yaml\" \"github.com/ghodss/yaml\" \"github.com/golang/glog\""} {"_id":"doc-en-kubernetes-3430a45b60fa8a9bce11719db20d0e292df191242d85696bd5438217be95cbec","title":"","text":"type defaultFunc func(pod *api.Pod) error func tryDecodeSinglePod(data []byte, defaultFn defaultFunc) (parsed bool, pod *api.Pod, err error) { obj, err := api.Scheme.Decode(data) // JSON is valid YAML, so this should work for everything. 
json, err := utilyaml.ToJSON(data) if err != nil { return false, nil, err } obj, err := api.Scheme.Decode(json) if err != nil { return false, pod, err }"} {"_id":"doc-en-kubernetes-539cf25e446796fb3f0dae58672c223b43b6a20296af51f15f49acd677bff184","title":"","text":"} func tryDecodePodList(data []byte, defaultFn defaultFunc) (parsed bool, pods api.PodList, err error) { obj, err := api.Scheme.Decode(data) json, err := utilyaml.ToJSON(data) if err != nil { return false, api.PodList{}, err } obj, err := api.Scheme.Decode(json) if err != nil { return false, pods, err }"} {"_id":"doc-en-kubernetes-fa416530ef5408b8f05d686a75822492eadb2e92808e4558d8cef5c473ed1410","title":"","text":" /* Copyright 2014 Google Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package config import ( \"reflect\" \"testing\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api/testapi\" \"github.com/ghodss/yaml\" ) func noDefault(*api.Pod) error { return nil } func TestDecodeSinglePod(t *testing.T) { pod := &api.Pod{ TypeMeta: api.TypeMeta{ APIVersion: \"\", }, ObjectMeta: api.ObjectMeta{ Name: \"test\", UID: \"12345\", Namespace: \"mynamespace\", }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyAlways, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{ Name: \"image\", Image: \"test/image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePath: \"/dev/termination-log\", }}, }, } json, err := testapi.Codec().Encode(pod) if err != nil { t.Errorf(\"unexpected error: %v\", err) } parsed, podOut, err := tryDecodeSinglePod(json, noDefault) if testapi.Version() == \"v1beta1\" { // v1beta1 conversion leaves empty lists that should be nil podOut.Spec.Containers[0].Resources.Limits = nil podOut.Spec.Containers[0].Resources.Requests = nil } if !parsed { t.Errorf(\"expected to have parsed file: (%s)\", string(json)) } if err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, string(json)) } if !reflect.DeepEqual(pod, podOut) { t.Errorf(\"expected:\\n%#v\\ngot:\\n%#v\\n%s\", pod, podOut, string(json)) } externalPod, err := testapi.Converter().ConvertToVersion(pod, \"v1beta3\") if err != nil { t.Errorf(\"unexpected error: %v\", err) } yaml, err := yaml.Marshal(externalPod) if err != nil { t.Errorf(\"unexpected error: %v\", err) } parsed, podOut, err = tryDecodeSinglePod(yaml, noDefault) if !parsed { t.Errorf(\"expected to have parsed file: (%s)\", string(yaml)) } if err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, string(yaml)) } if !reflect.DeepEqual(pod, podOut) { t.Errorf(\"expected:\\n%#v\\ngot:\\n%#v\\n%s\", pod, podOut, string(yaml)) } } func TestDecodePodList(t *testing.T) { pod := &api.Pod{ TypeMeta: api.TypeMeta{ APIVersion: \"\", }, ObjectMeta: api.ObjectMeta{ Name:
\"test\", UID: \"12345\", Namespace: \"mynamespace\", }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyAlways, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{ Name: \"image\", Image: \"test/image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePath: \"/dev/termination-log\", }}, }, } podList := &api.PodList{ Items: []api.Pod{*pod}, } json, err := testapi.Codec().Encode(podList) if err != nil { t.Errorf(\"unexpected error: %v\", err) } parsed, podListOut, err := tryDecodePodList(json, noDefault) if testapi.Version() == \"v1beta1\" { // v1beta1 conversion leaves empty lists that should be nil podListOut.Items[0].Spec.Containers[0].Resources.Limits = nil podListOut.Items[0].Spec.Containers[0].Resources.Requests = nil } if !parsed { t.Errorf(\"expected to have parsed file: (%s)\", string(json)) } if err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, string(json)) } if !reflect.DeepEqual(podList, &podListOut) { t.Errorf(\"expected:n%#vngot:n%#vn%s\", podList, &podListOut, string(json)) } externalPodList, err := testapi.Converter().ConvertToVersion(podList, \"v1beta3\") if err != nil { t.Errorf(\"unexpected error: %v\", err) } yaml, err := yaml.Marshal(externalPodList) if err != nil { t.Errorf(\"unexpected error: %v\", err) } parsed, podListOut, err = tryDecodePodList(yaml, noDefault) if !parsed { t.Errorf(\"expected to have parsed file: (%s)\", string(yaml)) } if err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, string(yaml)) } if !reflect.DeepEqual(podList, &podListOut) { t.Errorf(\"expected:n%#vngot:n%#vn%s\", pod, &podListOut, string(yaml)) } } "} {"_id":"doc-en-kubernetes-79c4280a882321b8dcfa3c1401200cbcd1da5d8baccc1d70fa767cc195e7bf1f","title":"","text":" # Admission Controllers ## What are they? An admission control plug-in is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object. 
The plug-in code is in the API server process and must be compiled into the binary in order to be used at this time. Each admission control plug-in is run in sequence before a request is accepted into the cluster. If any of the plug-ins in the sequence rejects the request, the entire request is rejected immediately and an error is returned to the end-user. Admission control plug-ins may mutate the incoming object in some cases to apply system-configured defaults. In addition, admission control plug-ins may mutate related resources as part of request processing to do things like increment quota usage. ## Why do I need them? Many advanced features in Kubernetes require an admission control plug-in to be enabled in order to properly support the feature. As a result, a Kubernetes API server that is not properly configured with the right set of admission control plug-ins is an incomplete server and will not support all the features you expect. ## How do I turn on an admission control plug-in? The Kubernetes API server supports a flag, ```admission_control```, that takes a comma-delimited, ordered list of admission control choices to invoke prior to modifying objects in the cluster. ## What does each plug-in do? ### AlwaysAdmit This plug-in will accept all incoming requests made to the Kubernetes API server. ### AlwaysDeny This plug-in will reject all mutating requests made to the Kubernetes API server. It's largely intended for testing purposes and is not recommended for usage in a real deployment. ### DenyExecOnPrivileged This plug-in will intercept all requests to exec a command in a pod if that pod has a privileged container. If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec commands in those containers, we strongly encourage enabling this plug-in. ### ServiceAccount This plug-in limits admission of Pod creation requests based on the Pod's ```ServiceAccount```. 1.
If the pod does not have a ```ServiceAccount```, it modifies the pod's ```ServiceAccount``` to \"default\". 2. It ensures that the ```ServiceAccount``` referenced by a pod exists. 3. If ```LimitSecretReferences``` is true, it rejects the pod if the pod references ```Secret``` objects which the pod's ```ServiceAccount``` does not reference. 4. If the pod does not contain any ```ImagePullSecrets```, the ```ImagePullSecrets``` of the ```ServiceAccount``` are added to the pod. 5. If ```MountServiceAccountToken``` is true, it adds a ```VolumeMount``` with the pod's ```ServiceAccount``` API token secret to containers in the pod. We strongly recommend using this plug-in if you intend to make use of Kubernetes ```ServiceAccount``` objects. ### SecurityContextDeny This plug-in will deny any ```SecurityContext``` that defines options that were not available on the ```Container```. ### ResourceQuota This plug-in will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota``` objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints. It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is so that quota is not prematurely incremented only for the request to be rejected later in admission control. ### LimitRanger This plug-in will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. ### NamespaceExists This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace``` and reject the request if the ```Namespace``` was not previously created.
We strongly recommend running this plug-in to ensure integrity of your data. ### NamespaceAutoProvision (deprecated) This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace``` and create a new ```Namespace``` if one did not already exist previously. We strongly recommend ```NamespaceExists``` over ```NamespaceAutoProvision```. ### NamespaceLifecycle This plug-in enforces that a ```Namespace``` that is undergoing termination cannot have new content created in it. A ```Namespace``` deletion kicks off a sequence of operations that remove all content (pods, services, etc.) in that namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in. Once ```NamespaceAutoProvision``` is deprecated, we anticipate ```NamespaceLifecycle``` and ```NamespaceExists``` will be merged into a single plug-in that enforces the life-cycle of a ```Namespace``` in Kubernetes. ## Is there a recommended set of plug-ins to use? Yes. For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins: ```shell --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota ``` "} {"_id":"doc-en-kubernetes-e06112a56a76529d48d6921c89004d4d28702de270d0190309013ca2dca3f8c2","title":"","text":"KUBE_PROMPT_FOR_UPDATE=y KUBE_SKIP_UPDATE=${KUBE_SKIP_UPDATE-\"n\"} # Suffix to append to the staging path used for the server tars. Useful if # multiple versions of the server are being used in the same project # simultaneously (e.g. on Jenkins). KUBE_GCS_STAGING_PATH_SUFFIX=${KUBE_GCS_STAGING_PATH_SUFFIX-\"\"} # How long (in seconds) to wait for cluster initialization. 
KUBE_CLUSTER_INITIALIZATION_TIMEOUT=${KUBE_CLUSTER_INITIALIZATION_TIMEOUT:-300}"} {"_id":"doc-en-kubernetes-91ef31c9ba0c2473d0e9da5ed4a4643c4cb38ca8a81fd5df7dfe87230fc05b92","title":"","text":"gsutil mb \"${staging_bucket}\" fi local -r staging_path=\"${staging_bucket}/devel${KUBE_GCS_STAGING_PATH_SUFFIX}\" local -r staging_path=\"${staging_bucket}/${INSTANCE_PREFIX}-devel\" SERVER_BINARY_TAR_HASH=$(sha1sum-file \"${SERVER_BINARY_TAR}\") SALT_TAR_HASH=$(sha1sum-file \"${SALT_TAR}\")"} {"_id":"doc-en-kubernetes-5faed00eba9de025efb411dc4db0f452e8c9be1840c5183c9253147a0efde608","title":"","text":"${GCE_SLOW_TESTS[@]:+${GCE_SLOW_TESTS[@]}} )\"} : ${KUBE_GCE_INSTANCE_PREFIX:=\"e2e-gce-${NODE_NAME}-${EXECUTOR_NUMBER}\"} : ${KUBE_GCS_STAGING_PATH_SUFFIX:=\"-${NODE_NAME}-${EXECUTOR_NUMBER}\"} : ${PROJECT:=\"kubernetes-jenkins-pull\"} : ${ENABLE_DEPLOYMENTS:=true} # Override GCE defaults"} {"_id":"doc-en-kubernetes-2ec150b3d8333d0433b816cb43a5ff4257e730671431ce9d6b75f5ccaa2a7e42","title":"","text":"export KUBE_GCE_ZONE=${E2E_ZONE} export KUBE_GCE_NETWORK=${E2E_NETWORK} export KUBE_GCE_INSTANCE_PREFIX=${KUBE_GCE_INSTANCE_PREFIX:-} export KUBE_GCS_STAGING_PATH_SUFFIX=${KUBE_GCS_STAGING_PATH_SUFFIX:-} export KUBE_GCE_NODE_PROJECT=${KUBE_GCE_NODE_PROJECT:-} export KUBE_GCE_NODE_IMAGE=${KUBE_GCE_NODE_IMAGE:-} export KUBE_OS_DISTRIBUTION=${KUBE_OS_DISTRIBUTION:-}"} {"_id":"doc-en-kubernetes-a9eafe3272debeff63d61e53045a3064909a24dfb61be6506741a494fab9a116","title":"","text":"KUBE_PROXY_TOKEN: $(yaml-quote ${KUBE_PROXY_TOKEN:-}) ADMISSION_CONTROL: $(yaml-quote ${ADMISSION_CONTROL:-}) MASTER_IP_RANGE: $(yaml-quote ${MASTER_IP_RANGE}) CA_CERT: $(yaml-quote ${CA_CERT_BASE64}) CA_CERT: $(yaml-quote ${CA_CERT_BASE64:-}) EOF if [[ \"${master}\" == \"true\" ]]; then"} {"_id":"doc-en-kubernetes-b3893d0cd50188792747a411e44b1be2c928909f6df0acc3db87c34d993db92f","title":"","text":"// available volumes await a claim case api.VolumeAvailable: // TODO: remove api.VolumePending phase altogether 
_, exists, err := volumeIndex.Get(volume) if err != nil { return err } if !exists { volumeIndex.Add(volume) } if volume.Spec.ClaimRef != nil { _, err := binderClient.GetPersistentVolumeClaim(volume.Spec.ClaimRef.Namespace, volume.Spec.ClaimRef.Name) if err == nil {"} {"_id":"doc-en-kubernetes-46da62bf3382b5305f544e79819f393501c0e121186ec9d1d36b19c5d1036f96","title":"","text":"} } func TestMissingFromIndex(t *testing.T) { api.ForTesting_ReferencesAllowBlankSelfLinks = true o := testclient.NewObjects(api.Scheme, api.Scheme) if err := testclient.AddObjectsFromPath(\"../../examples/persistent-volumes/claims/claim-01.yaml\", o, api.Scheme); err != nil { t.Fatal(err) } if err := testclient.AddObjectsFromPath(\"../../examples/persistent-volumes/volumes/local-01.yaml\", o, api.Scheme); err != nil { t.Fatal(err) } client := &testclient.Fake{ReactFn: testclient.ObjectReaction(o, latest.RESTMapper)} pv, err := client.PersistentVolumes().Get(\"any\") if err != nil { t.Errorf(\"Unexpected error getting PV from client: %v\", err) } claim, err := client.PersistentVolumeClaims(\"ns\").Get(\"any\") if err != nil { t.Errorf(\"Unexpected error getting PVC from client: %v\", err) } volumeIndex := NewPersistentVolumeOrderedIndex() mockClient := &mockBinderClient{ volume: pv, claim: claim, } // the default value of the PV is Pending. // if it has previously been processed by the binder, its status in etcd would be Available. // Only Pending volumes were being indexed and made ready for claims.
pv.Status.Phase = api.VolumeAvailable // adds the volume to the index, making the volume available syncVolume(volumeIndex, mockClient, pv) if pv.Status.Phase != api.VolumeAvailable { t.Errorf(\"Expected phase %s but got %s\", api.VolumeAvailable, pv.Status.Phase) } // an initial sync for a claim will bind it to an unbound volume, triggers state change err = syncClaim(volumeIndex, mockClient, claim) if err != nil { t.Fatalf(\"Expected claim to be bound, instead got an error: %+v\\n\", err) } // state change causes another syncClaim to update statuses syncClaim(volumeIndex, mockClient, claim) // claim updated volume's status, causing an update and syncVolume call syncVolume(volumeIndex, mockClient, pv) if pv.Spec.ClaimRef == nil { t.Errorf(\"Expected ClaimRef but got nil for pv.Status.ClaimRef: %+v\\n\", pv) } if pv.Status.Phase != api.VolumeBound { t.Errorf(\"Expected phase %s but got %s\", api.VolumeBound, pv.Status.Phase) } if claim.Status.Phase != api.ClaimBound { t.Errorf(\"Expected phase %s but got %s\", api.ClaimBound, claim.Status.Phase) } if len(claim.Status.AccessModes) != len(pv.Spec.AccessModes) { t.Errorf(\"Expected %d access modes but got %d\", len(pv.Spec.AccessModes), len(claim.Status.AccessModes)) } if claim.Status.AccessModes[0] != pv.Spec.AccessModes[0] { t.Errorf(\"Expected access mode %s but got %s\", claim.Status.AccessModes[0], pv.Spec.AccessModes[0]) } // pretend the user deleted their claim mockClient.claim = nil syncVolume(volumeIndex, mockClient, pv) if pv.Status.Phase != api.VolumeReleased { t.Errorf(\"Expected phase %s but got %s\", api.VolumeReleased, pv.Status.Phase) } } type mockBinderClient struct { volume *api.PersistentVolume claim *api.PersistentVolumeClaim"} {"_id":"doc-en-kubernetes-c525715b0238f5e5f4f523c2822e7128591ff26a4699b7f0b4ff011c4034daa1","title":"","text":" /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package operationmanager import ( \"fmt\" \"sync\" ) // Operation Manager is a thread-safe interface for keeping track of multiple pending async operations. type OperationManager interface { // Called when the operation with the given ID has started. // Creates a new channel with specified buffer size tracked with the specified ID. // Returns a read-only version of the newly created channel. // Returns an error if an entry with the specified ID already exists (previous entry must be removed by calling Close). Start(id string, bufferSize uint) (<-chan interface{}, error) // Called when the operation with the given ID has terminated. // Closes and removes the channel associated with ID. // Returns an error if no associated channel exists. Close(id string) error // Attempts to send msg to the channel associated with ID. // Returns an error if no associated channel exists. Send(id string, msg interface{}) error } // Returns a new instance of a channel manager. func NewOperationManager() OperationManager { return &operationManager{ chanMap: make(map[string]chan interface{}), } } type operationManager struct { sync.RWMutex chanMap map[string]chan interface{} } // Called when the operation with the given ID has started. // Creates a new channel with specified buffer size tracked with the specified ID. // Returns a read-only version of the newly created channel. // Returns an error if an entry with the specified ID already exists (previous entry must be removed by calling Close). 
func (cm *operationManager) Start(id string, bufferSize uint) (<-chan interface{}, error) { cm.Lock() defer cm.Unlock() if _, exists := cm.chanMap[id]; exists { return nil, fmt.Errorf(\"id %q already exists\", id) } cm.chanMap[id] = make(chan interface{}, bufferSize) return cm.chanMap[id], nil } // Called when the operation with the given ID has terminated. // Closes and removes the channel associated with ID. // Returns an error if no associated channel exists. func (cm *operationManager) Close(id string) error { cm.Lock() defer cm.Unlock() if _, exists := cm.chanMap[id]; !exists { return fmt.Errorf(\"id %q not found\", id) } close(cm.chanMap[id]) delete(cm.chanMap, id) return nil } // Attempts to send msg to the channel associated with ID. // Returns an error if no associated channel exists. func (cm *operationManager) Send(id string, msg interface{}) error { cm.RLock() defer cm.RUnlock() if _, exists := cm.chanMap[id]; !exists { return fmt.Errorf(\"id %q not found\", id) } cm.chanMap[id] <- msg return nil } "} {"_id":"doc-en-kubernetes-a4a883cf3b183ce3156589275caf96982d8119e388734cc9eceb23949671d1d2","title":"","text":" /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ // Channel Manager keeps track of multiple channels package operationmanager import ( \"testing\" ) func TestStart(t *testing.T) { // Arrange cm := NewOperationManager() chanId := \"testChanId\" testMsg := \"test message\" // Act ch, startErr := cm.Start(chanId, 1 /* bufferSize */) sigErr := cm.Send(chanId, testMsg) // Assert if startErr != nil { t.Fatalf(\"Unexpected error on Start. Expected: Actual: <%v>\", startErr) } if sigErr != nil { t.Fatalf(\"Unexpected error on Send. Expected: Actual: <%v>\", sigErr) } if actual := <-ch; actual != testMsg { t.Fatalf(\"Unexpected testMsg value. Expected: <%v> Actual: <%v>\", testMsg, actual) } } func TestStartIdExists(t *testing.T) { // Arrange cm := NewOperationManager() chanId := \"testChanId\" // Act _, startErr1 := cm.Start(chanId, 1 /* bufferSize */) _, startErr2 := cm.Start(chanId, 1 /* bufferSize */) // Assert if startErr1 != nil { t.Fatalf(\"Unexpected error on Start1. Expected: Actual: <%v>\", startErr1) } if startErr2 == nil { t.Fatalf(\"Expected error on Start2. Expected: Actual: \") } } func TestStartAndAdd2Chans(t *testing.T) { // Arrange cm := NewOperationManager() chanId1 := \"testChanId1\" chanId2 := \"testChanId2\" testMsg1 := \"test message 1\" testMsg2 := \"test message 2\" // Act ch1, startErr1 := cm.Start(chanId1, 1 /* bufferSize */) ch2, startErr2 := cm.Start(chanId2, 1 /* bufferSize */) sigErr1 := cm.Send(chanId1, testMsg1) sigErr2 := cm.Send(chanId2, testMsg2) // Assert if startErr1 != nil { t.Fatalf(\"Unexpected error on Start1. Expected: Actual: <%v>\", startErr1) } if startErr2 != nil { t.Fatalf(\"Unexpected error on Start2. Expected: Actual: <%v>\", startErr2) } if sigErr1 != nil { t.Fatalf(\"Unexpected error on Send1. Expected: Actual: <%v>\", sigErr1) } if sigErr2 != nil { t.Fatalf(\"Unexpected error on Send2. Expected: Actual: <%v>\", sigErr2) } if actual := <-ch1; actual != testMsg1 { t.Fatalf(\"Unexpected testMsg value. 
Expected: <%v> Actual: <%v>\", testMsg1, actual) } if actual := <-ch2; actual != testMsg2 { t.Fatalf(\"Unexpected testMsg value. Expected: <%v> Actual: <%v>\", testMsg2, actual) } } func TestStartAndAdd2ChansAndClose(t *testing.T) { // Arrange cm := NewOperationManager() chanId1 := \"testChanId1\" chanId2 := \"testChanId2\" testMsg1 := \"test message 1\" testMsg2 := \"test message 2\" // Act ch1, startErr1 := cm.Start(chanId1, 1 /* bufferSize */) ch2, startErr2 := cm.Start(chanId2, 1 /* bufferSize */) sigErr1 := cm.Send(chanId1, testMsg1) sigErr2 := cm.Send(chanId2, testMsg2) cm.Close(chanId1) sigErr3 := cm.Send(chanId1, testMsg1) // Assert if startErr1 != nil { t.Fatalf(\"Unexpected error on Start1. Expected: Actual: <%v>\", startErr1) } if startErr2 != nil { t.Fatalf(\"Unexpected error on Start2. Expected: Actual: <%v>\", startErr2) } if sigErr1 != nil { t.Fatalf(\"Unexpected error on Send1. Expected: Actual: <%v>\", sigErr1) } if sigErr2 != nil { t.Fatalf(\"Unexpected error on Send2. Expected: Actual: <%v>\", sigErr2) } if sigErr3 == nil { t.Fatalf(\"Expected error on Send3. Expected: Actual: \", sigErr2) } if actual := <-ch1; actual != testMsg1 { t.Fatalf(\"Unexpected testMsg value. Expected: <%v> Actual: <%v>\", testMsg1, actual) } if actual := <-ch2; actual != testMsg2 { t.Fatalf(\"Unexpected testMsg value. 
Expected: <%v> Actual: <%v>\", testMsg2, actual) } } "} {"_id":"doc-en-kubernetes-06c38035d3e3ad22def3ac553544caf143063d6448d863db6449f24ff2b9ce8f","title":"","text":"package gce_pd import ( \"errors\" \"fmt\" \"os\" \"path\" \"path/filepath\" \"strings\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/cloudprovider\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/cloudprovider/gce\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/exec\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/mount\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/operationmanager\" \"github.com/golang/glog\" ) const ( diskByIdPath = \"/dev/disk/by-id/\" diskGooglePrefix = \"google-\" diskScsiGooglePrefix = \"scsi-0Google_PersistentDisk_\" diskPartitionSuffix = \"-part\" diskSDPath = \"/dev/sd\" diskSDPattern = \"/dev/sd*\" maxChecks = 10 maxRetries = 10 checkSleepDuration = time.Second ) // Singleton operation manager for managing detach clean up go routines var detachCleanupManager = operationmanager.NewOperationManager() type GCEDiskUtil struct{} // Attaches a disk specified by a volume.GCEPersistentDisk to the current kubelet. // Mounts the disk to it's global path. func (util *GCEDiskUtil) AttachAndMountDisk(pd *gcePersistentDisk, globalPDPath string) error { func (diskUtil *GCEDiskUtil) AttachAndMountDisk(pd *gcePersistentDisk, globalPDPath string) error { glog.V(5).Infof(\"AttachAndMountDisk(pd, %q) where pd is %#vrn\", globalPDPath, pd) // Terminate any in progress verify detach go routines, this will block until the goroutine is ready to exit because the channel is unbuffered detachCleanupManager.Send(pd.pdName, true) sdBefore, err := filepath.Glob(diskSDPattern) if err != nil { glog.Errorf(\"Error filepath.Glob(\"%s\"): %vrn\", diskSDPattern, err) } sdBeforeSet := util.NewStringSet(sdBefore...) 
gce, err := cloudprovider.GetCloudProvider(\"gce\", nil) if err != nil { return err } if err := gce.(*gce_cloud.GCECloud).AttachDisk(pd.pdName, pd.readOnly); err != nil { return err } devicePath, err := verifyAttached(pd, sdBeforeSet, gce) if err != nil { return err } // Only mount the PD globally once."} {"_id":"doc-en-kubernetes-eae09b24dc0853551cbcc46d33e90c73a064d72a720c7c95fa6a7e78de906d3e","title":"","text":"func (util *GCEDiskUtil) DetachDisk(pd *gcePersistentDisk) error { // Unmount the global PD mount, which should be the only one. globalPDPath := makeGlobalPDName(pd.plugin.host, pd.pdName) glog.V(5).Infof(\"DetachDisk(pd) where pd is %#v and the globalPDPath is %q\", pd, globalPDPath) // Terminate any in-progress verify-detach goroutines; this blocks until the goroutine is ready to exit, because the channel is unbuffered detachCleanupManager.Send(pd.pdName, true) if err := pd.mounter.Unmount(globalPDPath); err != nil { return err }"} {"_id":"doc-en-kubernetes-2fe6704755207fd584ed730e615faa8a9cb609d942f64ff07d10fb8e5fce4165","title":"","text":"if err := gce.(*gce_cloud.GCECloud).DetachDisk(pd.pdName); err != nil { return err } // Verify disk detached, retry if needed. 
go verifyDetached(pd, gce) return nil } // Verifies that the disk device has been successfully attached, and retries if it fails. func verifyAttached(pd *gcePersistentDisk, sdBeforeSet util.StringSet, gce cloudprovider.Interface) (string, error) { devicePaths := getDiskByIdPaths(pd) for numRetries := 0; numRetries < maxRetries; numRetries++ { for numChecks := 0; numChecks < maxChecks; numChecks++ { if err := udevadmChangeToNewDrives(sdBeforeSet); err != nil { // udevadm errors should not block disk attachment, log and continue glog.Errorf(\"%v\", err) } for _, path := range devicePaths { if pathExists, err := pathExists(path); err != nil { return \"\", err } else if pathExists { // A device path has successfully been created for the PD glog.V(5).Infof(\"Successfully attached GCE PD %q.\", pd.pdName) return path, nil } } // Sleep then check again glog.V(5).Infof(\"Waiting for GCE PD %q to attach.\", pd.pdName) time.Sleep(checkSleepDuration) } // Try attaching the disk again glog.Warningf(\"Timed out waiting for GCE PD %q to attach. Retrying attach.\", pd.pdName) if err := gce.(*gce_cloud.GCECloud).AttachDisk(pd.pdName, pd.readOnly); err != nil { return \"\", err } } return \"\", fmt.Errorf(\"Could not attach GCE PD %q. Timeout waiting for mount paths to be created.\", pd.pdName) } // Verifies that the specified persistent disk device has been successfully detached, and retries if it fails. // This function is intended to be called asynchronously as a goroutine. func verifyDetached(pd *gcePersistentDisk, gce cloudprovider.Interface) { defer util.HandleCrash() // Setting bufferSize to 0 so that senders block until we receive. This avoids the need for a separate exit check. 
ch, err := detachCleanupManager.Start(pd.pdName, 0 /* bufferSize */) if err != nil { glog.Errorf(\"Error adding %q to detachCleanupManager: %v\", pd.pdName, err) return } defer detachCleanupManager.Close(pd.pdName) devicePaths := getDiskByIdPaths(pd) for numRetries := 0; numRetries < maxRetries; numRetries++ { for numChecks := 0; numChecks < maxChecks; numChecks++ { select { case <-ch: glog.Warningf(\"Terminating GCE PD %q detach verification. Another attach/detach call was made for this PD.\", pd.pdName) return default: allPathsRemoved := true for _, path := range devicePaths { if err := udevadmChangeToDrive(path); err != nil { // udevadm errors should not block disk detachment, log and continue glog.Errorf(\"%v\", err) } if exists, err := pathExists(path); err != nil { glog.Errorf(\"Error checking path: %v\", err) return } else { allPathsRemoved = allPathsRemoved && !exists } } if allPathsRemoved { // All paths to the PD have been successfully removed glog.V(5).Infof(\"Successfully detached GCE PD %q.\", pd.pdName) return } // Sleep then check again glog.V(5).Infof(\"Waiting for GCE PD %q to detach.\", pd.pdName) time.Sleep(checkSleepDuration) } } // Try detaching disk again glog.Warningf(\"Timed out waiting for GCE PD %q to detach. Retrying detach.\", pd.pdName) if err := gce.(*gce_cloud.GCECloud).DetachDisk(pd.pdName); err != nil { glog.Errorf(\"Error on retry detach PD %q: %v\", pd.pdName, err) return } } glog.Errorf(\"Could not detach GCE PD %q. One or more mount paths were not removed.\", pd.pdName) } // Returns a list of all /dev/disk/by-id/* paths for the given PD. 
func getDiskByIdPaths(pd *gcePersistentDisk) []string { devicePaths := []string{ path.Join(diskByIdPath, diskGooglePrefix+pd.pdName), path.Join(diskByIdPath, diskScsiGooglePrefix+pd.pdName), } if pd.partition != \"\" { for i, path := range devicePaths { devicePaths[i] = path + diskPartitionSuffix + pd.partition } } return devicePaths } // Checks if the specified path exists func pathExists(path string) (bool, error) { _, err := os.Stat(path) if err == nil { return true, nil } else if os.IsNotExist(err) { return false, nil } else { return false, err } } // Calls \"udevadm trigger --action=change\" for newly created \"/dev/sd*\" drives (those that exist only in the after set). // This is a workaround for Issue #7972. Once the underlying issue has been resolved, this may be removed. func udevadmChangeToNewDrives(sdBeforeSet util.StringSet) error { sdAfter, err := filepath.Glob(diskSDPattern) if err != nil { return fmt.Errorf(\"Error filepath.Glob(%q): %v\", diskSDPattern, err) } for _, sd := range sdAfter { if !sdBeforeSet.Has(sd) { return udevadmChangeToDrive(sd) } } return nil } // Calls \"udevadm trigger --action=change\" on the specified drive. // drivePath must be the block device path to trigger on, in the format \"/dev/sd*\", or a symlink to it. // This is a workaround for Issue #7972. Once the underlying issue has been resolved, this may be removed. 
func udevadmChangeToDrive(drivePath string) error { glog.V(5).Infof(\"udevadmChangeToDrive: drive=%q\", drivePath) // Evaluate symlink, if any drive, err := filepath.EvalSymlinks(drivePath) if err != nil { return fmt.Errorf(\"udevadmChangeToDrive: filepath.EvalSymlinks(%q) failed with %v.\", drivePath, err) } glog.V(5).Infof(\"udevadmChangeToDrive: symlink path is %q\", drive) // Check to make sure input is \"/dev/sd*\" if !strings.Contains(drive, diskSDPath) { return fmt.Errorf(\"udevadmChangeToDrive: expected input in the form %q but drive is %q.\", diskSDPattern, drive) } // Call \"udevadm trigger --action=change --property-match=DEVNAME=/dev/sd...\" _, err = exec.New().Command( \"udevadm\", \"trigger\", \"--action=change\", fmt.Sprintf(\"--property-match=DEVNAME=%s\", drive)).CombinedOutput() if err != nil { return fmt.Errorf(\"udevadmChangeToDrive: udevadm trigger failed for drive %q with %v.\", drive, err) } return nil }"} {"_id":"doc-en-kubernetes-02394522b4c086c8ae1fbee571a0c1ec3554c23be7b74645e7bf5c28425ad8de","title":"","text":"var _ = Describe(\"Probing container\", func() { framework := Framework{BaseName: \"container-probe\"} var podClient client.PodInterface probe := webserverProbeBuilder{} BeforeEach(func() { framework.beforeEach()"} {"_id":"doc-en-kubernetes-709c2e2d7d4c07c9e81c5ed03d075ca9967615f17e770dedbd88ac171d494193","title":"","text":"expectNoError(err) startTime := time.Now() Expect(wait.Poll(poll, 90*time.Second, func() (bool, error) { p, err := podClient.Get(p.Name) if err != nil { return false, err } ready := api.IsPodReady(p) if !ready { Logf(\"pod is not yet ready; pod has phase %q.\", p.Status.Phase) return false, nil } return true, nil })).NotTo(HaveOccurred(), \"pod never became ready\") if time.Since(startTime) < 30*time.Second { Failf(\"Pod became ready before its initial delay\")"}
{"_id":"doc-en-kubernetes-bdbd229e761a48d32c7cc732b77efd50ac5b1371ac849096c360ef4df6abce2e","title":"","text":"isReady, err := podRunningReady(p) expectNoError(err) Expect(isReady).To(BeTrue(), \"pod should be ready\") restartCount := getRestartCount(p) Expect(restartCount == 0).To(BeTrue(), \"pod should have a restart count of 0 but got %v\", restartCount) }) It(\"with readiness probe that fails should never be ready and never restart\", func() {"} {"_id":"doc-en-kubernetes-3f5411c63aee66c723080ba3a7cc71d4579cac651f48bdca6fd8fbae3a536c47","title":"","text":"expectNoError(err) isReady, err := podRunningReady(p) Expect(isReady).NotTo(BeTrue(), \"pod should be not ready\") restartCount := getRestartCount(p) Expect(restartCount == 0).To(BeTrue(), \"pod should have a restart count of 0 but got %v\", restartCount) }) })"} {"_id":"doc-en-kubernetes-32f61a2f2c222e316f9166ad574f362eb0411d2a666c4ad79a9d2c7cf1e8b01f","title":"","text":"func makePodSpec(readinessProbe, livenessProbe *api.Probe) *api.Pod { pod := &api.Pod{ ObjectMeta: api.ObjectMeta{Name: \"test-webserver-\" + string(util.NewUUID())}, Spec: api.PodSpec{ Containers: []api.Container{ { Name: \"test-webserver\", Image: \"gcr.io/google_containers/test-webserver\", LivenessProbe: livenessProbe, ReadinessProbe: readinessProbe, },"} {"_id":"doc-en-kubernetes-86eff17049f2f0326241622643b4c947d261ff0eef5c6961b0dc37945723c667","title":"","text":"return pod } type webserverProbeBuilder struct { failing bool initialDelay bool } func (b webserverProbeBuilder) withFailing() webserverProbeBuilder { b.failing = true return b } func (b webserverProbeBuilder) withInitialDelay() webserverProbeBuilder { b.initialDelay = true return b } func (b webserverProbeBuilder) build() *api.Probe { probe := &api.Probe{ Handler: api.Handler{ HTTPGet: &api.HTTPGetAction{"} {"_id":"doc-en-kubernetes-2af6a3a28337c77b2015a7dc3e4e63318a903b941f29da457b8d0ee76ae23e03","title":"","text":"// Remove unidentified containers. for _, container := range unidentifiedContainers { glog.Infof(\"Removing unidentified dead container %q with ID %q\", container.name, container.id) err = cgc.dockerClient.RemoveContainer(docker.RemoveContainerOptions{ID: container.id, RemoveVolumes: true}) if err != nil { glog.Warningf(\"Failed to remove unidentified dead container %q: %v\", container.name, err) }"} {"_id":"doc-en-kubernetes-a7bbe657281ccfb73f86bf819ae876fbfa00a9554e6de8a8edde6299c9159ad8","title":"","text":"// Remove from oldest to newest (last to first). numToKeep := len(containers) - toRemove for i := numToKeep; i < len(containers); i++ { err := cgc.dockerClient.RemoveContainer(docker.RemoveContainerOptions{ID: containers[i].id, RemoveVolumes: true}) if err != nil { glog.Warningf(\"Failed to remove dead container %q: %v\", containers[i].name, err) }"} {"_id":"doc-en-kubernetes-95dd5f33db85c896c5e60ea2b5b219565cd75e89bd3e2b2246ffdcfa2ef6db7f","title":"","text":"In one terminal, run `gcloud compute ssh master --ssh-flag=\"-L 8080:127.0.0.1:8080\"` and in a second run `gcloud compute ssh master --ssh-flag=\"-R 8080:127.0.0.1:8080\"`. ### OpenStack These instructions are for running on the command line. Most of this you can also do through the Horizon dashboard. 
These instructions were tested on the Icehouse release on a Metacloud distribution of OpenStack, but should be similar if not the same across other versions/distributions of OpenStack. #### Make sure you can connect with OpenStack Make sure the environment variables are set for OpenStack, such as: ```sh OS_TENANT_ID OS_PASSWORD OS_AUTH_URL OS_USERNAME OS_TENANT_NAME ``` Test this works with something like: ``` nova list ``` #### Get a Suitable CoreOS Image You'll need a [suitable version of the CoreOS image for OpenStack](https://coreos.com/os/docs/latest/booting-on-openstack.html). Once you download that, upload it to glance. An example is shown below: ```sh glance image-create --name CoreOS723 --container-format bare --disk-format qcow2 --file coreos_production_openstack_image.img --is-public True ``` #### Create security group ```sh nova secgroup-create kubernetes \"Kubernetes Security Group\" nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0 nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0 ``` #### Provision the Master ```sh nova boot --image <image_name> --key-name <keypair_name> --flavor <flavor_id> --security-group kubernetes --user-data files/master.yaml kube-master ``` ```<image_name>``` is the CoreOS image name. In our example we can use the image we created in the previous step and put in 'CoreOS723'. ```<keypair_name>``` is the keypair name that you already generated to access the instance. ```<flavor_id>``` is the flavor ID you use to size the instance. Run ```nova flavor-list``` to get the IDs. 3 on the system this was tested with gives the m1.large size. The important part is to ensure you have the files/master.yaml, as this is what will do all the post-boot configuration. This path is relative, so we are assuming in this example that you are running the nova command in a directory where there is a subdirectory called files that has the master.yaml file in it. Absolute paths also work. 
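The credential check described at the top of this section can be scripted as a small pre-flight test before running any `nova` or `glance` commands. This is a sketch only — the helper name `check_openstack_env` and the dummy values are illustrative, not part of the official guide:

```sh
# Sketch of a pre-flight check for the OpenStack credentials listed above.
# check_openstack_env prints any missing OS_* variables and returns non-zero
# when the environment is incomplete.
check_openstack_env() {
  missing=""
  for v in OS_AUTH_URL OS_USERNAME OS_PASSWORD OS_TENANT_ID OS_TENANT_NAME; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      missing="$missing $v"
    fi
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}

# Example with dummy credentials (replace with your own, e.g. via an openrc file):
OS_AUTH_URL="http://keystone.example.com:5000/v2.0"
OS_USERNAME="demo"
OS_PASSWORD="secret"
OS_TENANT_ID="abc123"
OS_TENANT_NAME="demo-tenant"
check_openstack_env
```

Running it with all five variables set prints `ok`; otherwise it lists which variables still need to be exported.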
Next, assign it a public IP address: ``` nova floating-ip-list ``` Get an IP address that's free and run: ``` nova floating-ip-associate kube-master <ip_address> ``` where ```<ip_address>``` is the IP address that was available from the ```nova floating-ip-list``` command. #### Provision Worker Nodes Edit ```node.yaml``` and replace all instances of ```<master_private_ip>``` with the private IP address of the master node. You can get this by running ```nova show kube-master```, assuming you named your instance kube-master. This is not the floating IP address you just assigned it. ```sh nova boot --image <image_name> --key-name <keypair_name> --flavor <flavor_id> --security-group kubernetes --user-data files/node.yaml minion01 ``` This is basically the same as the master node, but with the node.yaml post-boot script instead of the master's. ### VMware Fusion #### Create the master config-drive"} {"_id":"doc-en-kubernetes-bff761ca084cc59b8560d6888874fcb466797d1e3d96658bcf5f6917aa08cebd","title":"","text":"Manager: m, } // Export the HTTP endpoint if a port was specified. err = cadvisorClient.exportHTTP(port) if err != nil { return nil, err } return cadvisorClient, nil }"} {"_id":"doc-en-kubernetes-bef7a0a0b9d6383f980516f985adc985a60d2fc3003a029bc2b4894f4315115d","title":"","text":"} func (cc *cadvisorClient) exportHTTP(port uint) error { // Register the handlers regardless as this registers the prometheus // collector properly. mux := http.NewServeMux() err := cadvisorHttp.RegisterHandlers(mux, cc, \"\", \"\", \"\", \"\", \"/metrics\") if err != nil { return err } 
// Only start the http server if port > 0. if port > 0 { serv := &http.Server{ Addr: fmt.Sprintf(\":%d\", port), Handler: mux, } // TODO(vmarmol): Remove this when the cAdvisor port is once again free. // If export failed, retry in the background until we are able to bind. // This allows an existing cAdvisor to be killed before this one registers. go func() { defer util.HandleCrash() err := serv.ListenAndServe() for err != nil { glog.Infof(\"Failed to register cAdvisor on port %d, retrying. Error: %v\", port, err) time.Sleep(time.Minute) err = serv.ListenAndServe() } }() } return nil }"} {"_id":"doc-en-kubernetes-b041b68e4ba1cfd8eaaf694db2f71f1cfa035aa7d484f0061a53a9be00b03e82","title":"","text":"KUBE_TEST_API_VERSIONS=${KUBE_TEST_API_VERSIONS:-\"v1,experimental/v1alpha1\"} # Give integration tests longer to run KUBE_TIMEOUT=${KUBE_TIMEOUT:--timeout 240s} KUBE_INTEGRATION_TEST_MAX_CONCURRENCY=${KUBE_INTEGRATION_TEST_MAX_CONCURRENCY:-\"-1\"} LOG_LEVEL=${LOG_LEVEL:-2}"} {"_id":"doc-en-kubernetes-8afe28bf0b2780d4439e6534f1cb824e93ac12596263051dbe57d53365747e98","title":"","text":"# KUBE_RACE=\"-race\" KUBE_GOFLAGS=\"-tags 'integration no-docker' \" KUBE_RACE=\"\" KUBE_TIMEOUT=\"${KUBE_TIMEOUT}\" KUBE_TEST_API_VERSIONS=\"$1\" KUBE_API_VERSIONS=\"v1,experimental/v1alpha1\" \"${KUBE_ROOT}/hack/test-go.sh\" test/integration"} {"_id":"doc-en-kubernetes-935a80e9380fb718b6022ed1a33deb275424039d5d4580430665f5fad71fa4a9","title":"","text":"func rcByNameContainer(name string, replicas int, image string, labels map[string]string, c api.Container) *api.ReplicationController { // Add \"name\": name to the labels, overwriting if it exists. 
labels[\"name\"] = name gracePeriod := int64(0) return &api.ReplicationController{ TypeMeta: unversioned.TypeMeta{ Kind: \"ReplicationController\","} {"_id":"doc-en-kubernetes-e136b371e0079de48de6de26778afff56dd84dd7b1fb7223abd1eaab28d4c4ee","title":"","text":"Labels: labels, }, Spec: api.PodSpec{ Containers: []api.Container{c}, TerminationGracePeriodSeconds: &gracePeriod, }, }, },"} {"_id":"doc-en-kubernetes-f3dfe3bbb20982c454bb6cb44b63075a1252dd5c764644ce47d62feb98b744f8","title":"","text":"// In case of failure or too long waiting time, an error is returned. func waitForRCPodToDisappear(c *client.Client, ns, rcName, podName string) error { label := labels.SelectorFromSet(labels.Set(map[string]string{\"name\": rcName})) // NodeController evicts pod after 5 minutes, so we need a timeout greater than that. // Additionally, there can be a non-zero grace period, so we are setting 10 minutes // to be on the safe side. return waitForPodToDisappear(c, ns, podName, label, 20*time.Second, 10*time.Minute) } // waitForService waits until the service appears (exist == true), or disappears (exist == false)"} {"_id":"doc-en-kubernetes-534dbc6650b34e7f5dcb8c8837ce0894e9567283c054b5aecf11e6fd4cd22f59","title":"","text":"return fmt.Errorf(\"pod %s is not running and cannot be attached to; current phase is %s\", p.PodName, pod.Status.Phase) } // TODO: refactor with terminal helpers from the edit utility once that is merged var stdin io.Reader tty := p.TTY"} {"_id":"doc-en-kubernetes-2f7723bd834959ab791ba457ef1fdf85602b0e568ec9a6f7c2f335a53e2e7541","title":"","text":"Name(pod.Name). Namespace(pod.Namespace). SubResource(\"attach\"). 
Param(\"container\", p.GetContainerName(pod)) return p.Attach.Attach(req, p.Config, stdin, p.Out, p.Err, tty) } // GetContainerName returns the name of the container to attach to, with a fallback. func (p *AttachOptions) GetContainerName(pod *api.Pod) string { if len(p.ContainerName) > 0 { return p.ContainerName } glog.V(4).Infof(\"defaulting container name to %s\", pod.Spec.Containers[0].Name) return pod.Spec.Containers[0].Name } "} {"_id":"doc-en-kubernetes-fc49718a23c59eabe25086439c45ec61a9eb2decc56a14d09c817366d6e8b300","title":"","text":"return err } if status == api.PodSucceeded || status == api.PodFailed { return handleLog(c, pod.Namespace, pod.Name, &api.PodLogOptions{Container: opts.GetContainerName(pod)}, opts.Out) } opts.Client = c opts.PodName = pod.Name opts.Namespace = pod.Namespace if err := opts.Run(); err != nil { fmt.Fprintf(opts.Out, \"Error attaching, falling back to logs: %v\n\", err) return handleLog(c, pod.Namespace, pod.Name, &api.PodLogOptions{Container: opts.GetContainerName(pod)}, opts.Out) } return nil } func getRestartPolicy(cmd *cobra.Command, interactive bool) (api.RestartPolicy, error) {"} {"_id":"doc-en-kubernetes-b05a52db897a9e88e73403c317b6c0ebdacf4b13b37357be0ae15904fc49d948","title":"","text":"if containerChanges.StartInfraContainer && (len(containerChanges.ContainersToStart) > 0) { glog.V(4).Infof(\"Creating pod infra container for %q\", podFullName) podInfraContainerID, err = dm.createPodInfraContainer(pod) if err != nil { glog.Errorf(\"Failed to create pod infra container: %v; Skipping pod %q\", err, podFullName) return err } // Call the networking plugin err = dm.networkPlugin.SetUpPod(pod.Namespace, pod.Name, podInfraContainerID) if err != nil { 
glog.Errorf(\"Failed to create pod infra container: %v; Skipping pod %q\", err, podFullName) // Delete infra container if delErr := dm.KillContainerInPod(kubecontainer.ContainerID{ ID: string(podInfraContainerID), Type: \"docker\"}, nil, pod); delErr != nil { glog.Warningf(\"Clear infra container failed for pod %q: %v\", podFullName, delErr) } return err }"} {"_id":"doc-en-kubernetes-d17f0d3d055e0d41a00234c8e4268df879f0dbcffebdbcd166374b528301854f","title":"","text":"package flocker import ( \"io/ioutil\" \"os\" \"testing\" flockerClient \"github.com/ClusterHQ/flocker-go\""} {"_id":"doc-en-kubernetes-801e85fc3efb9c48ee35aa40b2866b84d45ce6d9df2ad3b730824eaeb69f888f","title":"","text":"const pluginName = \"kubernetes.io/flocker\" func newInitializedVolumePlugMgr(t *testing.T) (volume.VolumePluginMgr, string) { plugMgr := volume.VolumePluginMgr{} dir, err := ioutil.TempDir(\"\", \"flocker\") assert.NoError(t, err) plugMgr.InitPlugins(ProbeVolumePlugins(), volume.NewFakeVolumeHost(dir, nil, nil)) return plugMgr, dir } func TestGetByName(t *testing.T) { assert := assert.New(t) plugMgr, _ := newInitializedVolumePlugMgr(t) plug, err := plugMgr.FindPluginByName(pluginName) assert.NotNil(plug, \"Can't find the plugin by name\")"} {"_id":"doc-en-kubernetes-83a3639b9e979760d052c677e590a3771ee40309575c1235f6b7ffab9ddc6061","title":"","text":"func TestCanSupport(t *testing.T) { assert := assert.New(t) plugMgr, _ := newInitializedVolumePlugMgr(t) plug, err := plugMgr.FindPluginByName(pluginName) assert.NoError(err)"} {"_id":"doc-en-kubernetes-c8787d0a0fa0bd3bd32ca090d61337ea76a0092dd149b706653fa0b322a80046","title":"","text":"func TestNewBuilder(t *testing.T) { assert := assert.New(t) plugMgr, _ := newInitializedVolumePlugMgr(t) plug, err := plugMgr.FindPluginByName(pluginName) assert.NoError(err)"} {"_id":"doc-en-kubernetes-6acb5dd0a7f69ec89e843d0e92e60f58bf978f0ae06134cef69b79b98895a710","title":"","text":"assert := assert.New(t) plugMgr, rootDir := newInitializedVolumePlugMgr(t) if rootDir != \"\" { defer os.RemoveAll(rootDir) } plug, err := plugMgr.FindPluginByName(flockerPluginName) assert.NoError(err)"} {"_id":"doc-en-kubernetes-f740146422fa9d92b4512091e13b031ffaa922c4132cc8fe40f7222b881d84d0","title":"","text":"kube::test::get_object_assert 'rc mock2' \"{{${labels_field}.status}}\" 'replaced' fi fi # Command: kubectl edit multiple resources temp_editor=\"${KUBE_TEMP}/tmp-editor.sh\" echo -e '#!/bin/bash\nsed -i \"s/status: replaced/status: edited/g\" $@' > \"${temp_editor}\" chmod +x \"${temp_editor}\" EDITOR=\"${temp_editor}\" kubectl edit \"${kube_flags[@]}\" -f \"${file}\" # Post-condition: mock service (and mock2) and mock rc (and mock2) are edited if [ \"$has_svc\" = true ]; then kube::test::get_object_assert 'services mock' \"{{${labels_field}.status}}\" 'edited' if [ \"$two_svcs\" = true ]; then kube::test::get_object_assert 'services mock2' \"{{${labels_field}.status}}\" 'edited' fi fi if [ \"$has_rc\" = true ]; then kube::test::get_object_assert 'rc mock' \"{{${labels_field}.status}}\" 'edited' if [ \"$two_rcs\" = true ]; then kube::test::get_object_assert 'rc mock2' \"{{${labels_field}.status}}\" 'edited' fi fi # cleaning rm \"${temp_editor}\" # Command # We need to set --overwrite, because otherwise, if the first attempt to run \"kubectl label\" # fails on some, but not all, of the resources, retries will fail because it tries to modify"} {"_id":"doc-en-kubernetes-ba767ac21fd089e7575dff088c09c108566a651f3d84cd52a921a99d67c1a972","title":"","text":"} func (a genericAccessor) SetAnnotations(annotations map[string]string) { if a.annotations == nil { emptyAnnotations := make(map[string]string) 
a.annotations = &emptyAnnotations } *a.annotations = annotations }"} {"_id":"doc-en-kubernetes-ad36e5a41968266b5d487d1b5190950091462c03fc5c10c9979416c596cba087","title":"","text":"defaultVersion := cmdutil.OutputVersion(cmd, clientConfig.Version) results := editResults{} for { obj, err := resource.AsVersionedObject(infos, false, defaultVersion) objs, err := resource.AsVersionedObjects(infos, defaultVersion) if err != nil { return preservedFile(err, results.file, out) } // if input object is a list, traverse and edit each item one at a time for _, obj := range objs { // TODO: add an annotating YAML printer that can print inline comments on each field, // including descriptions or validation errors // generate the file to edit buf := &bytes.Buffer{} if err := results.header.writeTo(buf); err != nil { return preservedFile(err, results.file, out) } if err := printer.PrintObj(obj, buf); err != nil { return preservedFile(err, results.file, out) } original := buf.Bytes() // TODO: add an annotating YAML printer that can print inline comments on each field, // including descriptions or validation errors // generate the file to edit buf := &bytes.Buffer{} if err := results.header.writeTo(buf); err != nil { return preservedFile(err, results.file, out) } if err := printer.PrintObj(obj, buf); err != nil { return preservedFile(err, results.file, out) } original := buf.Bytes() // launch the editor edit := editor.NewDefaultEditor() edited, file, err := edit.LaunchTempFile(\"kubectl-edit-\", ext, buf) if err != nil { return preservedFile(err, results.file, out) } // launch the editor edit := editor.NewDefaultEditor() edited, file, err := edit.LaunchTempFile(\"kubectl-edit-\", ext, buf) if err != nil { return preservedFile(err, results.file, out) } // cleanup any file from the previous pass if len(results.file) > 0 { os.Remove(results.file) } // cleanup any file from the previous pass if len(results.file) > 0 { os.Remove(results.file) } glog.V(4).Infof(\"User edited:n%s\", 
string(edited)) fmt.Printf(\"User edited:n%s\", string(edited)) lines, err := hasLines(bytes.NewBuffer(edited)) if err != nil { return preservedFile(err, file, out) } if bytes.Equal(original, edited) { if len(results.edit) > 0 { preservedFile(nil, file, out) } else { os.Remove(file) glog.V(4).Infof(\"User edited:n%s\", string(edited)) lines, err := hasLines(bytes.NewBuffer(edited)) if err != nil { return preservedFile(err, file, out) } fmt.Fprintln(out, \"Edit cancelled, no changes made.\") return nil } if !lines { if len(results.edit) > 0 { preservedFile(nil, file, out) } else { os.Remove(file) // Compare content without comments if bytes.Equal(stripComments(original), stripComments(edited)) { if len(results.edit) > 0 { preservedFile(nil, file, out) } else { os.Remove(file) } fmt.Fprintln(out, \"Edit cancelled, no changes made.\") continue } if !lines { if len(results.edit) > 0 { preservedFile(nil, file, out) } else { os.Remove(file) } fmt.Fprintln(out, \"Edit cancelled, saved file was empty.\") continue } fmt.Fprintln(out, \"Edit cancelled, saved file was empty.\") return nil } results = editResults{ file: file, } results = editResults{ file: file, } // parse the edited file updates, err := rmap.InfoForData(edited, \"edited-file\") if err != nil { return preservedFile(err, file, out) } // parse the edited file updates, err := rmap.InfoForData(edited, \"edited-file\") if err != nil { return fmt.Errorf(\"The edited file had a syntax error: %v\", err) } // annotate the edited object for kubectl apply if err := kubectl.UpdateApplyAnnotation(updates); err != nil { return preservedFile(err, file, out) } // annotate the edited object for kubectl apply if err := kubectl.UpdateApplyAnnotation(updates); err != nil { return preservedFile(err, file, out) } visitor := resource.NewFlattenListVisitor(updates, rmap) visitor := resource.NewFlattenListVisitor(updates, rmap) // need to make sure the original namespace wasn't changed while editing if err = 
visitor.Visit(resource.RequireNamespace(cmdNamespace)); err != nil { return preservedFile(err, file, out) } // need to make sure the original namespace wasn't changed while editing if err = visitor.Visit(resource.RequireNamespace(cmdNamespace)); err != nil { return preservedFile(err, file, out) } // use strategic merge to create a patch originalJS, err := yaml.ToJSON(original) if err != nil { return preservedFile(err, file, out) } editedJS, err := yaml.ToJSON(edited) if err != nil { return preservedFile(err, file, out) } patch, err := strategicpatch.CreateStrategicMergePatch(originalJS, editedJS, obj) // TODO: change all jsonmerge to strategicpatch // for checking preconditions preconditions := []jsonmerge.PreconditionFunc{} if err != nil { glog.V(4).Infof(\"Unable to calculate diff, no merge is possible: %v\", err) return preservedFile(err, file, out) } else { preconditions = append(preconditions, jsonmerge.RequireKeyUnchanged(\"apiVersion\")) preconditions = append(preconditions, jsonmerge.RequireKeyUnchanged(\"kind\")) preconditions = append(preconditions, jsonmerge.RequireMetadataKeyUnchanged(\"name\")) results.version = defaultVersion } // use strategic merge to create a patch originalJS, err := yaml.ToJSON(original) if err != nil { return preservedFile(err, file, out) } editedJS, err := yaml.ToJSON(edited) if err != nil { return preservedFile(err, file, out) } patch, err := strategicpatch.CreateStrategicMergePatch(originalJS, editedJS, obj) // TODO: change all jsonmerge to strategicpatch // for checking preconditions preconditions := []jsonmerge.PreconditionFunc{} if err != nil { glog.V(4).Infof(\"Unable to calculate diff, no merge is possible: %v\", err) return preservedFile(err, file, out) } else { preconditions = append(preconditions, jsonmerge.RequireKeyUnchanged(\"apiVersion\")) preconditions = append(preconditions, jsonmerge.RequireKeyUnchanged(\"kind\")) preconditions = append(preconditions, jsonmerge.RequireMetadataKeyUnchanged(\"name\")) 
results.version = defaultVersion } if hold, msg := jsonmerge.TestPreconditionsHold(patch, preconditions); !hold { fmt.Fprintf(out, \"error: %s\", msg) return preservedFile(nil, file, out) } if hold, msg := jsonmerge.TestPreconditionsHold(patch, preconditions); !hold { fmt.Fprintf(out, \"error: %s\", msg) return preservedFile(nil, file, out) } err = visitor.Visit(func(info *resource.Info, err error) error { patched, err := resource.NewHelper(info.Client, info.Mapping).Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) if err != nil { fmt.Fprintln(out, results.addError(err, info)) err = visitor.Visit(func(info *resource.Info, err error) error { patched, err := resource.NewHelper(info.Client, info.Mapping).Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) if err != nil { fmt.Fprintln(out, results.addError(err, info)) return nil } info.Refresh(patched, true) cmdutil.PrintSuccess(mapper, false, out, info.Mapping.Resource, info.Name, \"edited\") return nil }) if err != nil { return preservedFile(err, file, out) } info.Refresh(patched, true) cmdutil.PrintSuccess(mapper, false, out, info.Mapping.Resource, info.Name, \"edited\") return nil }) if err != nil { return preservedFile(err, file, out) } if results.retryable > 0 { fmt.Fprintf(out, \"You can run `kubectl replace -f %s` to try this update again.n\", file) return errExit } if results.conflict > 0 { fmt.Fprintf(out, \"You must update your local resource version and run `kubectl replace -f %s` to overwrite the remote changes.n\", file) return errExit if results.retryable > 0 { fmt.Fprintf(out, \"You can run `kubectl replace -f %s` to try this update again.n\", file) return errExit } if results.conflict > 0 { fmt.Fprintf(out, \"You must update your local resource version and run `kubectl replace -f %s` to overwrite the remote changes.n\", file) return errExit } if len(results.edit) == 0 { if results.notfound == 0 { os.Remove(file) } else { fmt.Fprintf(out, \"The edits you made on 
deleted resources have been saved to %qn\", file) } } } if len(results.edit) == 0 { if results.notfound == 0 { os.Remove(file) } else { fmt.Fprintf(out, \"The edits you made on deleted resources have been saved to %qn\", file) } return nil }"} {"_id":"doc-en-kubernetes-04bdb3d2bb63267cc2d28461f18221496da6ec0095e36e4d303c9ec939a231a7","title":"","text":"} return false, nil } // stripComments will transform a YAML file into JSON, thus dropping any comments // in it. Note that if the given file has a syntax error, the transformation will // fail and we will manually drop all comments from the file. func stripComments(file []byte) []byte { stripped, err := yaml.ToJSON(file) if err != nil { stripped = manualStrip(file) } return stripped } // manualStrip is used for dropping comments from a YAML file func manualStrip(file []byte) []byte { stripped := []byte{} for _, line := range bytes.Split(file, []byte(\"n\")) { if bytes.HasPrefix(bytes.TrimSpace(line), []byte(\"#\")) { continue } stripped = append(stripped, line...) stripped = append(stripped, 'n') } return stripped } "} {"_id":"doc-en-kubernetes-a88437e65a3a8055f5357af04ee62d0026637c40bdca373d5645eb251088725e","title":"","text":"command := exec.Command(\"mount\", mountArgs...) 
output, err := command.CombinedOutput() if err != nil { return fmt.Errorf(\"Mount failed: %v\nMounting arguments: %s %s %s %v\nOutput: %s\n\", err, source, target, fstype, options, string(output)) } return err"} {"_id":"doc-en-kubernetes-3b7a9165243d003230e6f2b8d01c3faa3345eb780025309de7344e15deeba32a","title":"","text":"command := exec.Command(\"umount\", target) output, err := command.CombinedOutput() if err != nil { return fmt.Errorf(\"Unmount failed: %v\nUnmounting arguments: %s\nOutput: %s\n\", err, target, string(output)) } return nil }"} {"_id":"doc-en-kubernetes-52fcfbd57004797122bff061220df57223172da5a11b583f491199c8465debdc","title":"","text":"# google - Heapster, Google Cloud Monitoring, and Google Cloud Logging # googleinfluxdb - Enable influxdb and google (except GCM) # standalone - Heapster only. Metrics available via Heapster REST API. ENABLE_CLUSTER_MONITORING=\"${KUBE_ENABLE_CLUSTER_MONITORING:-influxdb}\" # Optional: Enable node logging.
ENABLE_NODE_LOGGING=\"${KUBE_ENABLE_NODE_LOGGING:-true}\""} {"_id":"doc-en-kubernetes-bb3448a2f8231e96978bd32caa3dfe4cef5816826281a0af9bc6816a74da177e","title":"","text":"AUTOSCALER_MIN_NODES=\"${KUBE_AUTOSCALER_MIN_NODES:-1}\" AUTOSCALER_MAX_NODES=\"${KUBE_AUTOSCALER_MAX_NODES:-${NUM_MINIONS}}\" TARGET_NODE_UTILIZATION=\"${KUBE_TARGET_NODE_UTILIZATION:-0.7}\" ENABLE_CLUSTER_MONITORING=googleinfluxdb fi # Optional: Enable deployment experimental feature, not ready for production use."} {"_id":"doc-en-kubernetes-f659f8b8c8d5851aa52621379991adbeae0a55441c3bc028f358bd52a40d22aa","title":"","text":"# Kubernetes Cluster Admin Guide: Cluster Components **Table of Contents** - [Kubernetes Cluster Admin Guide: Cluster Components](#kubernetes-cluster-admin-guide-cluster-components) - [Master Components](#master-components) - [kube-apiserver](#kube-apiserver) - [etcd](#etcd) - [kube-controller-manager](#kube-controller-manager) - [kube-scheduler](#kube-scheduler) - [addons](#addons) - [DNS](#dns) - [User interface](#user-interface) - [Container Resource Monitoring](#container-resource-monitoring) - [Cluster-level Logging](#cluster-level-logging) - [Node components](#node-components) - [kubelet](#kubelet) - [kube-proxy](#kube-proxy) - [docker](#docker) - [rkt](#rkt) - [monit](#monit) - [fluentd](#fluentd) This document outlines the various binary components that need to run to deliver a functioning Kubernetes cluster."} {"_id":"doc-en-kubernetes-f6f45ce8d69583315b55867b88f1db6794354dc05c3fbd3c33d9982a5cec832a","title":"","text":"Addon objects are created in the \"kube-system\" namespace. Example addons are: * [DNS](http://releases.k8s.io/HEAD/cluster/addons/dns/) provides cluster local DNS. * [kube-ui](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/) provides a graphical UI for the cluster. * [fluentd-elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/) provides log storage. 
Also see the [gcp version](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/). * [cluster-monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/) provides monitoring for the cluster. #### DNS While the other addons are not strictly required, all Kubernetes clusters should have [cluster DNS](dns.md), as many examples rely on it. Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. #### User interface The kube-ui provides a read-only overview of the cluster state. Access [the UI using kubectl proxy](../user-guide/connecting-to-applications-proxy.md#connecting-to-the-kube-ui-service-from-your-local-workstation) #### Container Resource Monitoring [Container Resource Monitoring](../user-guide/monitoring.md) records generic time-series metrics about containers in a central database, and provides a UI for browsing that data. #### Cluster-level Logging [Container Logging](../user-guide/monitoring.md) saves container logs to a central log store with search/browsing interface. There are two implementations: * [Cluster-level logging to Google Cloud Logging]( docs/user-guide/logging.md#cluster-level-logging-to-google-cloud-logging) * [Cluster-level Logging with Elasticsearch and Kibana]( docs/user-guide/logging.md#cluster-level-logging-with-elasticsearch-and-kibana) ## Node components"} {"_id":"doc-en-kubernetes-c00006965ef4bfc4ffcc3f9521519cd0b3dcdc0b42b97d0405c6385d72b23cc6","title":"","text":"`monit` is a lightweight process babysitting system for keeping kubelet and docker running. ### fluentd `fluentd` is a daemon which helps provide [cluster-level logging](#cluster-level-logging). 
"} {"_id":"doc-en-kubernetes-906aecf5a78ae3aee282a885feeac406b1acfd719ea9462f0314052f03183e77","title":"","text":"- [Scheduler pod template](#scheduler-pod-template) - [Controller Manager Template](#controller-manager-template) - [Starting and Verifying Apiserver, Scheduler, and Controller Manager](#starting-and-verifying-apiserver-scheduler-and-controller-manager) - [Logging](#logging) - [Monitoring](#monitoring) - [DNS](#dns) - [Starting Cluster Services](#starting-cluster-services) - [Troubleshooting](#troubleshooting) - [Running validate-cluster](#running-validate-cluster) - [Inspect pods and services](#inspect-pods-and-services)
These are sometimes called *addons*, and [an overview of their purpose is in the admin guide]( ../../docs/admin/cluster-components.md#addons). Notes for setting up each cluster service are given below: * Cluster DNS: * required for many kubernetes examples * [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/dns/) * [Admin Guide](../admin/dns.md) * Cluster-level Logging * Multiple implementations with different storage backends and UIs. * [Elasticsearch Backend Setup Instructions](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/) * [Google Cloud Logging Backend Setup Instructions](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/). * Both require running fluentd on each node. * [User Guide](../user-guide/logging.md) * Container Resource Monitoring * [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/) * GUI * [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/) cluster. ## Troubleshooting"} {"_id":"doc-en-kubernetes-d666f3b9348166c8e27cf0d6f177ecf3cb28d250971037c2ff5b8133da1ce2a0","title":"","text":"# Connecting to applications: kubectl proxy and apiserver proxy - [Connecting to applications: kubectl proxy and apiserver proxy](#connecting-to-applications-kubectl-proxy-and-apiserver-proxy) - [Getting the apiserver proxy URL of kube-ui](#getting-the-apiserver-proxy-url-of-kube-ui) - [Connecting to the kube-ui service from your local workstation](#connecting-to-the-kube-ui-service-from-your-local-workstation) You have seen the [basics](accessing-the-cluster.md) about `kubectl proxy` and `apiserver proxy`. 
This guide shows how to use them together to access a service([kube-ui](ui.md)) running on the Kubernetes cluster from your workstation."} {"_id":"doc-en-kubernetes-8de89b8c6a4e02604401db70412de1e7eca7d6c16dfb784922be1063a768154a","title":"","text":"

PLEASE NOTE: This document applies to the HEAD of the source tree

If you are using a released version of Kubernetes, you should refer to the docs that go with that version. The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/proposals/choosing-scheduler.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io).

--

Scheduler Performance Test
======

Motivation
------

We already have a performance testing system, Kubemark. However, Kubemark requires setting up and bootstrapping a whole cluster, which takes a lot of time. We want a standard way to reproduce scheduling-latency metrics and to benchmark the scheduler as simply and quickly as possible, with the following goals:

- Save time on testing
  - The test and benchmark can be run on a single box. We set up only the components necessary for scheduling, without booting up a cluster.
- Profile runtime metrics to find bottlenecks
  - Write scheduler integration tests that focus on performance measurement. Take advantage of Go profiling tools to collect fine-grained metrics such as CPU, memory, and block profiles.
- Reproduce test results easily
  - We want a known place to run performance-related tests for the scheduler. Developers should be able to run one script to collect all the information they need.

Currently the test suite has the following:

- density test (a new Go test)
  - schedules 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes
  - prints the scheduling rate every second, so you can see how the rate changes as more pods are scheduled
- benchmark
  - makes use of `go test -bench` and reports nanoseconds/op
  - schedules b.N pods when the cluster has N nodes and P scheduled pods. Since one round takes a relatively long time to finish, b.N is small: 10 - 100
How To Run
------

```
cd kubernetes/test/component/scheduler/perf
./test-performance.sh
```
"} {"_id":"doc-en-kubernetes-63a9f94c7e22642097e3c1d84560088b8c72d74d97b74eba79b0875f49c2b35e","title":"","text":" /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package benchmark import ( \"testing\" \"time\" ) // BenchmarkScheduling100Nodes0Pods benchmarks the scheduling rate // when the cluster has 100 nodes and 0 scheduled pods func BenchmarkScheduling100Nodes0Pods(b *testing.B) { benchmarkScheduling(100, 0, b) } // BenchmarkScheduling100Nodes1000Pods benchmarks the scheduling rate // when the cluster has 100 nodes and 1000 scheduled pods func BenchmarkScheduling100Nodes1000Pods(b *testing.B) { benchmarkScheduling(100, 1000, b) } // BenchmarkScheduling1000Nodes0Pods benchmarks the scheduling rate // when the cluster has 1000 nodes and 0 scheduled pods func BenchmarkScheduling1000Nodes0Pods(b *testing.B) { benchmarkScheduling(1000, 0, b) } // BenchmarkScheduling1000Nodes1000Pods benchmarks the scheduling rate // when the cluster has 1000 nodes and 1000 scheduled pods func BenchmarkScheduling1000Nodes1000Pods(b *testing.B) { benchmarkScheduling(1000, 1000, b) } // benchmarkScheduling benchmarks scheduling rate with specific number of nodes // and specific number of pods already scheduled. Since an operation takes relatively // long time, b.N should be small: 10 - 100. 
func benchmarkScheduling(numNodes, numScheduledPods int, b *testing.B) { schedulerConfigFactory, finalFunc := mustSetupScheduler() defer finalFunc() c := schedulerConfigFactory.Client makeNodes(c, numNodes) makePods(c, numScheduledPods) for { scheduled := schedulerConfigFactory.ScheduledPodLister.Store.List() if len(scheduled) >= numScheduledPods { break } time.Sleep(1 * time.Second) } // start benchmark b.ResetTimer() makePods(c, b.N) for { // This can potentially affect performance of scheduler, since List() is done under mutex. // TODO: Setup watch on apiserver and wait until all pods scheduled. scheduled := schedulerConfigFactory.ScheduledPodLister.Store.List() if len(scheduled) >= numScheduledPods+b.N { break } // Note: This might introduce slight deviation in accuracy of benchmark results. // Since the total amount of time is relatively large, it might not be a concern. time.Sleep(100 * time.Millisecond) } } "} {"_id":"doc-en-kubernetes-d319af9459a1bddce3ef672a40a372b1a46ae2f612490b0de8a51ed5ef2c1406","title":"","text":" /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package benchmark import ( \"fmt\" \"testing\" \"time\" ) // TestSchedule100Node3KPods schedules 3k pods on 100 nodes. func TestSchedule100Node3KPods(t *testing.T) { schedulePods(100, 3000) } // TestSchedule1000Node30KPods schedules 30k pods on 1000 nodes. 
func TestSchedule1000Node30KPods(t *testing.T) { schedulePods(1000, 30000) } // schedulePods schedules a specific number of pods on a specific number of nodes. // This is used to learn the scheduling throughput on various // sizes of cluster, and how it changes as more and more pods are scheduled. // It won't stop until all pods are scheduled. func schedulePods(numNodes, numPods int) { schedulerConfigFactory, destroyFunc := mustSetupScheduler() defer destroyFunc() c := schedulerConfigFactory.Client makeNodes(c, numNodes) makePods(c, numPods) prev := 0 start := time.Now() for { // This can potentially affect performance of scheduler, since List() is done under mutex. // Listing 10000 pods is an expensive operation, so running it frequently may impact scheduler. // TODO: Setup watch on apiserver and wait until all pods scheduled. scheduled := schedulerConfigFactory.ScheduledPodLister.Store.List() fmt.Printf(\"%ds\trate: %d\ttotal: %d\n\", time.Since(start)/time.Second, len(scheduled)-prev, len(scheduled)) if len(scheduled) >= numPods { return } prev = len(scheduled) time.Sleep(1 * time.Second) } } "} {"_id":"doc-en-kubernetes-1f73cfb3474ce8780fbdde136e86c355075a32f7e6d90dc9ffd41e79f895da29","title":"","text":" #!/usr/bin/env bash # Copyright 2014 The Kubernetes Authors All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
set -o errexit set -o nounset set -o pipefail pushd \"../../../..\" source \"./hack/lib/util.sh\" source \"./hack/lib/logging.sh\" source \"./hack/lib/etcd.sh\" popd cleanup() { kube::etcd::cleanup kube::log::status \"performance test cleanup complete\" } trap cleanup EXIT kube::etcd::start kube::log::status \"performance test start\" # TODO: set log-dir and prof output dir. DIR_BASENAME=$(basename `pwd`) go test -c -o \"${DIR_BASENAME}.test\" # We are using the benchmark suite to do profiling. Because it only runs a few pods and # theoretically it has less variance. \"./${DIR_BASENAME}.test\" -test.bench=. -test.run=xxxx -test.cpuprofile=prof.out -logtostderr=false kube::log::status \"benchmark tests finished\" # Running density tests. It might take a long time. \"./${DIR_BASENAME}.test\" -test.run=. -test.timeout=60m kube::log::status \"density tests finished\" "} {"_id":"doc-en-kubernetes-e876ffb1d503f96c82449b539492e05a30e494a6d3c73115a786c6b8442ab652","title":"","text":" /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package benchmark import ( \"net/http\" \"net/http/httptest\" \"github.com/golang/glog\" \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/resource\" \"k8s.io/kubernetes/pkg/api/testapi\" \"k8s.io/kubernetes/pkg/client/record\" client \"k8s.io/kubernetes/pkg/client/unversioned\" \"k8s.io/kubernetes/pkg/master\" \"k8s.io/kubernetes/plugin/pkg/scheduler\" _ \"k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider\" \"k8s.io/kubernetes/plugin/pkg/scheduler/factory\" \"k8s.io/kubernetes/test/integration/framework\" ) // mustSetupScheduler starts the following components: // - k8s api server (a.k.a. master) // - scheduler // It returns scheduler config factory and destroyFunc which should be used to // remove resources after finished. // Notes on rate limiter: // - The BindPodsRateLimiter is nil, meaning no rate limits. // - client rate limit is set to 5000. func mustSetupScheduler() (schedulerConfigFactory *factory.ConfigFactory, destroyFunc func()) { framework.DeleteAllEtcdKeys() var m *master.Master masterConfig := framework.NewIntegrationTestMasterConfig() m = master.New(masterConfig) s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { m.Handler.ServeHTTP(w, req) })) c := client.NewOrDie(&client.Config{ Host: s.URL, GroupVersion: testapi.Default.GroupVersion(), QPS: 5000.0, Burst: 5000, }) schedulerConfigFactory = factory.NewConfigFactory(c, nil) schedulerConfig, err := schedulerConfigFactory.Create() if err != nil { panic(\"Couldn't create scheduler config\") } eventBroadcaster := record.NewBroadcaster() schedulerConfig.Recorder = eventBroadcaster.NewRecorder(api.EventSource{Component: \"scheduler\"}) eventBroadcaster.StartRecordingToSink(c.Events(\"\")) scheduler.New(schedulerConfig).Run() destroyFunc = func() { glog.Infof(\"destroying\") close(schedulerConfig.StopEverything) s.Close() glog.Infof(\"destroyed\") } return } func makeNodes(c client.Interface, nodeCount int) { glog.Infof(\"making %d nodes\", 
nodeCount) baseNode := &api.Node{ ObjectMeta: api.ObjectMeta{ GenerateName: \"scheduler-test-node-\", }, Spec: api.NodeSpec{ ExternalID: \"foobar\", }, Status: api.NodeStatus{ Capacity: api.ResourceList{ api.ResourcePods: *resource.NewQuantity(32, resource.DecimalSI), api.ResourceCPU: resource.MustParse(\"4\"), api.ResourceMemory: resource.MustParse(\"32Gi\"), }, Phase: api.NodeRunning, Conditions: []api.NodeCondition{ {Type: api.NodeReady, Status: api.ConditionTrue}, }, }, } for i := 0; i < nodeCount; i++ { if _, err := c.Nodes().Create(baseNode); err != nil { panic(\"error creating node: \" + err.Error()) } } } // makePods will setup specified number of scheduled pods. // Currently it goes through scheduling path and it's very slow to setup large number of pods. // TODO: Setup pods evenly on all nodes and quickly/non-linearly. func makePods(c client.Interface, podCount int) { glog.Infof(\"making %d pods\", podCount) basePod := &api.Pod{ ObjectMeta: api.ObjectMeta{ GenerateName: \"scheduler-test-pod-\", }, Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"pause\", Image: \"gcr.io/google_containers/pause:1.0\", Resources: api.ResourceRequirements{ Limits: api.ResourceList{ api.ResourceCPU: resource.MustParse(\"100m\"), api.ResourceMemory: resource.MustParse(\"500Mi\"), }, Requests: api.ResourceList{ api.ResourceCPU: resource.MustParse(\"100m\"), api.ResourceMemory: resource.MustParse(\"500Mi\"), }, }, }}, }, } threads := 30 remaining := make(chan int, 1000) go func() { for i := 0; i < podCount; i++ { remaining <- i } close(remaining) }() for i := 0; i < threads; i++ { go func() { for { _, ok := <-remaining if !ok { return } for { _, err := c.Pods(\"default\").Create(basePod) if err == nil { break } } } }() } } "} {"_id":"doc-en-kubernetes-362efe3c289a2c4a353dcba34808269a82ddc616124e333476a68af0cc74238c","title":"","text":"\"fmt\" \"net/http\" \"net/http/httptest\" \"sync\" \"testing\" \"time\""} 
{"_id":"doc-en-kubernetes-680297b49a215c9cfb1528ef4979c0d18e911bef81f67b1fd58565fd68978733","title":"","text":"} } } func BenchmarkScheduling(b *testing.B) { framework.DeleteAllEtcdKeys() var m *master.Master s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { m.Handler.ServeHTTP(w, req) })) defer s.Close() masterConfig := framework.NewIntegrationTestMasterConfig() m = master.New(masterConfig) c := client.NewOrDie(&client.Config{ Host: s.URL, GroupVersion: testapi.Default.GroupVersion(), QPS: 5000.0, Burst: 5000, }) schedulerConfigFactory := factory.NewConfigFactory(c, nil) schedulerConfig, err := schedulerConfigFactory.Create() if err != nil { b.Fatalf(\"Couldn't create scheduler config: %v\", err) } eventBroadcaster := record.NewBroadcaster() schedulerConfig.Recorder = eventBroadcaster.NewRecorder(api.EventSource{Component: \"scheduler\"}) eventBroadcaster.StartRecordingToSink(c.Events(\"\")) scheduler.New(schedulerConfig).Run() defer close(schedulerConfig.StopEverything) makeNNodes(c, 1000) N := b.N b.ResetTimer() makeNPods(c, N) for { objs := schedulerConfigFactory.ScheduledPodLister.Store.List() if len(objs) >= N { fmt.Printf(\"%v pods scheduled.n\", len(objs)) /* // To prove that this actually works: for _, o := range objs { fmt.Printf(\"%sn\", o.(*api.Pod).Spec.NodeName) } */ break } time.Sleep(time.Millisecond) } b.StopTimer() } func makeNNodes(c client.Interface, N int) { baseNode := &api.Node{ ObjectMeta: api.ObjectMeta{ GenerateName: \"scheduler-test-node-\", }, Spec: api.NodeSpec{ ExternalID: \"foobar\", }, Status: api.NodeStatus{ Capacity: api.ResourceList{ api.ResourcePods: *resource.NewQuantity(32, resource.DecimalSI), api.ResourceCPU: resource.MustParse(\"4\"), api.ResourceMemory: resource.MustParse(\"32Gi\"), }, Phase: api.NodeRunning, Conditions: []api.NodeCondition{ {Type: api.NodeReady, Status: api.ConditionTrue}, }, }, } for i := 0; i < N; i++ { if _, err := c.Nodes().Create(baseNode); err != nil { 
panic(\"error creating node: \" + err.Error()) } } } func makeNPods(c client.Interface, N int) { basePod := &api.Pod{ ObjectMeta: api.ObjectMeta{ GenerateName: \"scheduler-test-pod-\", }, Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"pause\", Image: \"gcr.io/google_containers/pause:1.0\", Resources: api.ResourceRequirements{ Limits: api.ResourceList{ api.ResourceCPU: resource.MustParse(\"100m\"), api.ResourceMemory: resource.MustParse(\"500Mi\"), }, Requests: api.ResourceList{ api.ResourceCPU: resource.MustParse(\"100m\"), api.ResourceMemory: resource.MustParse(\"500Mi\"), }, }, }}, }, } wg := sync.WaitGroup{} threads := 30 wg.Add(threads) remaining := make(chan int, N) go func() { for i := 0; i < N; i++ { remaining <- i } close(remaining) }() for i := 0; i < threads; i++ { go func() { defer wg.Done() for { _, ok := <-remaining if !ok { return } for { _, err := c.Pods(\"default\").Create(basePod) if err == nil { break } } } }() } wg.Wait() } "} {"_id":"doc-en-kubernetes-1f04cbfb325986150059894c05be7f10e540b95dc139bf8c3ae60ecb20a8c994","title":"","text":"func parseEnvs(envArray []string) ([]api.EnvVar, error) { envs := []api.EnvVar{} for _, env := range envArray { parts := strings.Split(env, \"=\") if len(parts) != 2 || !validation.IsCIdentifier(parts[0]) || len(parts[1]) == 0 { pos := strings.Index(env, \"=\") if pos == -1 { return nil, fmt.Errorf(\"invalid env: %v\", env) } envVar := api.EnvVar{Name: parts[0], Value: parts[1]} name := env[:pos] value := env[pos+1:] if len(name) == 0 || !validation.IsCIdentifier(name) || len(value) == 0 { return nil, fmt.Errorf(\"invalid env: %v\", env) } envVar := api.EnvVar{Name: name, Value: value} envs = append(envs, envVar) } return envs, nil"} {"_id":"doc-en-kubernetes-f170dd73baf91cbb8b47cb5be66947022a231dc71697b76165f3c81a356329e9","title":"","text":"} } } func TestParseEnv(t *testing.T) { tests := []struct { envArray []string expected []api.EnvVar expectErr bool test string }{ { envArray: []string{ 
\"THIS_ENV=isOK\", \"HAS_COMMAS=foo,bar\", \"HAS_EQUALS=jJnro54iUu75xNy==\", }, expected: []api.EnvVar{ { Name: \"THIS_ENV\", Value: \"isOK\", }, { Name: \"HAS_COMMAS\", Value: \"foo,bar\", }, { Name: \"HAS_EQUALS\", Value: \"jJnro54iUu75xNy==\", }, }, expectErr: false, test: \"test case 1\", }, { envArray: []string{ \"WITH_OUT_EQUALS\", }, expected: []api.EnvVar{}, expectErr: true, test: \"test case 2\", }, { envArray: []string{ \"WITH_OUT_VALUES=\", }, expected: []api.EnvVar{}, expectErr: true, test: \"test case 3\", }, { envArray: []string{ \"=WITH_OUT_NAME\", }, expected: []api.EnvVar{}, expectErr: true, test: \"test case 4\", }, } for _, test := range tests { envs, err := parseEnvs(test.envArray) if !test.expectErr && err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, test.test) } if test.expectErr && err != nil { continue } if !reflect.DeepEqual(envs, test.expected) { t.Errorf(\"\\nexpected:\\n%#v\\nsaw:\\n%#v (%s)\", test.expected, envs, test.test) } } } "} {"_id":"doc-en-kubernetes-011c6819da0d37c896c1382263d6beb095d37fa9a578eace4ab8c28bb736487d","title":"","text":"eventCorrelator := NewEventCorrelator(util.RealClock{}) return eventBroadcaster.StartEventWatcher( func(event *api.Event) { // Make a copy before modification, because there could be multiple listeners. // Events are safe to copy like this. eventCopy := *event event = &eventCopy result, err := eventCorrelator.EventCorrelate(event) if err != nil { util.HandleError(err) } if result.Skip { return } tries := 0 for { if recordEvent(sink, result.Event, result.Patch, result.Event.Count > 1, eventCorrelator) { break } tries++ if tries >= maxTriesPerEvent { glog.Errorf(\"Unable to write event '%#v' (retry limit exceeded!)\", event) break } // Randomize the first sleep so that various clients won't all be // synced up if the master goes down. 
if tries == 1 { time.Sleep(time.Duration(float64(sleepDuration) * randGen.Float64())) } else { time.Sleep(sleepDuration) } } recordToSink(sink, event, eventCorrelator, randGen) }) } func recordToSink(sink EventSink, event *api.Event, eventCorrelator *EventCorrelator, randGen *rand.Rand) { // Make a copy before modification, because there could be multiple listeners. // Events are safe to copy like this. eventCopy := *event event = &eventCopy result, err := eventCorrelator.EventCorrelate(event) if err != nil { util.HandleError(err) } if result.Skip { return } tries := 0 for { if recordEvent(sink, result.Event, result.Patch, result.Event.Count > 1, eventCorrelator) { break } tries++ if tries >= maxTriesPerEvent { glog.Errorf(\"Unable to write event '%#v' (retry limit exceeded!)\", event) break } // Randomize the first sleep so that various clients won't all be // synced up if the master goes down. if tries == 1 { time.Sleep(time.Duration(float64(sleepDuration) * randGen.Float64())) } else { time.Sleep(sleepDuration) } } } func isKeyNotFoundError(err error) bool { statusErr, _ := err.(*errors.StatusError) // At the moment the server is returning 500 instead of a more specific"} {"_id":"doc-en-kubernetes-57ec4e332b5084ce82e4167e8a76c8b2120d36db60d48e45d20fd251394eaa1f","title":"","text":"import ( \"encoding/json\" \"fmt\" \"runtime\" \"math/rand\" \"strconv\" \"testing\" \"time\""} {"_id":"doc-en-kubernetes-e91b43f04052cb0e0f61297cd1bb6cdcc764f939b0a483ad7c047b0bd717bd8e","title":"","text":"} func TestWriteEventError(t *testing.T) { ref := &api.ObjectReference{ Kind: \"Pod\", Name: \"foo\", Namespace: \"baz\", UID: \"bar\", APIVersion: \"version\", } type entry struct { timesToSendError int attemptsMade int attemptsWanted int err error }"} {"_id":"doc-en-kubernetes-30c38c98fd647e0012edaa7c556e7762ff4f622e8236e90d7b1fd82dd9b6247a","title":"","text":"err: fmt.Errorf(\"A weird error\"), }, } done := make(chan struct{}) eventBroadcaster := NewBroadcaster() defer 
eventBroadcaster.StartRecordingToSink( &testEventSink{ eventCorrelator := NewEventCorrelator(util.RealClock{}) randGen := rand.New(rand.NewSource(time.Now().UnixNano())) for caseName, ent := range table { attempts := 0 sink := &testEventSink{ OnCreate: func(event *api.Event) (*api.Event, error) { if event.Message == \"finished\" { close(done) return event, nil } item, ok := table[event.Message] if !ok { t.Errorf(\"Unexpected event: %#v\", event) return event, nil } item.attemptsMade++ if item.attemptsMade < item.timesToSendError { return nil, item.err attempts++ if attempts < ent.timesToSendError { return nil, ent.err } return event, nil }, }, ).Stop() clock := &util.FakeClock{time.Now()} recorder := recorderWithFakeClock(api.EventSource{Component: \"eventTest\"}, eventBroadcaster, clock) for caseName := range table { clock.Step(1 * time.Second) recorder.Event(ref, api.EventTypeNormal, \"Reason\", caseName) runtime.Gosched() } recorder.Event(ref, api.EventTypeNormal, \"Reason\", \"finished\") <-done for caseName, item := range table { if e, a := item.attemptsWanted, item.attemptsMade; e != a { t.Errorf(\"case %v: wanted %v, got %v attempts\", caseName, e, a) } ev := &api.Event{} recordToSink(sink, ev, eventCorrelator, randGen) if attempts != ent.attemptsWanted { t.Errorf(\"case %v: wanted %d, got %d attempts\", caseName, ent.attemptsWanted, attempts) } } }"} {"_id":"doc-en-kubernetes-1aa67e7da57a6926f5eb84cc30c032433baff7c08ec249903c7f9422a484f221","title":"","text":"var BindingSaturationReportInterval = 1 * time.Second var ( E2eSchedulingLatency = prometheus.NewSummary( prometheus.SummaryOpts{ E2eSchedulingLatency = prometheus.NewHistogram( prometheus.HistogramOpts{ Subsystem: schedulerSubsystem, Name: \"e2e_scheduling_latency_microseconds\", Help: \"E2e scheduling latency (scheduling algorithm + binding)\", MaxAge: time.Hour, Buckets: prometheus.ExponentialBuckets(1000, 2, 15), }, ) SchedulingAlgorithmLatency = prometheus.NewSummary( prometheus.SummaryOpts{ 
SchedulingAlgorithmLatency = prometheus.NewHistogram( prometheus.HistogramOpts{ Subsystem: schedulerSubsystem, Name: \"scheduling_algorithm_latency_microseconds\", Help: \"Scheduling algorithm latency\", MaxAge: time.Hour, Buckets: prometheus.ExponentialBuckets(1000, 2, 15), }, ) BindingLatency = prometheus.NewSummary( prometheus.SummaryOpts{ BindingLatency = prometheus.NewHistogram( prometheus.HistogramOpts{ Subsystem: schedulerSubsystem, Name: \"binding_latency_microseconds\", Help: \"Binding latency\", MaxAge: time.Hour, Buckets: prometheus.ExponentialBuckets(1000, 2, 15), }, ) BindingRateLimiterSaturation = prometheus.NewGauge("} {"_id":"doc-en-kubernetes-bb1327adc32cb590cccc093a74a391d75fa1a2c72ee21374c94eb147af07655f","title":"","text":"glog.V(3).Infof(\"Attempting to schedule: %+v\", pod) start := time.Now() defer func() { metrics.E2eSchedulingLatency.Observe(metrics.SinceInMicroseconds(start)) }() dest, err := s.config.Algorithm.Schedule(pod, s.config.NodeLister) metrics.SchedulingAlgorithmLatency.Observe(metrics.SinceInMicroseconds(start)) if err != nil { glog.V(1).Infof(\"Failed to schedule: %+v\", pod) s.config.Recorder.Eventf(pod, api.EventTypeWarning, \"FailedScheduling\", \"%v\", err) s.config.Error(pod, err) return } metrics.SchedulingAlgorithmLatency.Observe(metrics.SinceInMicroseconds(start)) b := &api.Binding{ ObjectMeta: api.ObjectMeta{Namespace: pod.Namespace, Name: pod.Name}, Target: api.ObjectReference{"} {"_id":"doc-en-kubernetes-8f07e6057d1d00a5e65c8d782c645c5555ee43241a25b1b7b0b9ebe5a73042e5","title":"","text":"s.config.Modeler.LockedAction(func() { bindingStart := time.Now() err := s.config.Binder.Bind(b) metrics.BindingLatency.Observe(metrics.SinceInMicroseconds(bindingStart)) if err != nil { glog.V(1).Infof(\"Failed to bind pod: %+v\", err) s.config.Recorder.Eventf(pod, api.EventTypeNormal, \"FailedScheduling\", \"Binding rejected: %v\", err) s.config.Error(pod, err) return } 
metrics.BindingLatency.Observe(metrics.SinceInMicroseconds(bindingStart)) s.config.Recorder.Eventf(pod, api.EventTypeNormal, \"Scheduled\", \"Successfully assigned %v to %v\", pod.Name, dest) // tell the model to assume that this binding took effect. assumed := *pod assumed.Spec.NodeName = dest s.config.Modeler.AssumePod(&assumed) }) metrics.E2eSchedulingLatency.Observe(metrics.SinceInMicroseconds(start)) }"} {"_id":"doc-en-kubernetes-ebc66d77e24ab237c98b3959ba34cfd7e3f3de49272b21894bcb582cbeb1e76d","title":"","text":"apiVersion: v1 kind: ReplicationController metadata: name: heapster-v11 name: heapster-v12 namespace: kube-system labels: k8s-app: heapster version: v11 version: v12 kubernetes.io/cluster-service: \"true\" spec: replicas: 1 selector: k8s-app: heapster version: v11 version: v12 template: metadata: labels: k8s-app: heapster version: v11 version: v12 kubernetes.io/cluster-service: \"true\" spec: containers: - image: gcr.io/google_containers/heapster:v0.18.4 - image: gcr.io/google_containers/heapster:v0.18.5 name: heapster resources: # keep request = limit to keep this container in guaranteed class"} {"_id":"doc-en-kubernetes-ad312c9b8752a10351c04c7fff0d1a3dbb6c3a97d2acaedd78373c30ebe599b2","title":"","text":"By(\"restricting to a time range\") time.Sleep(1500 * time.Millisecond) // ensure that startup logs on the node are seen as older than 1s out = runKubectlOrDie(\"log\", pod.Name, containerName, nsFlag, \"--since=1s\") recent := len(strings.Split(out, \"\\n\")) out = runKubectlOrDie(\"log\", pod.Name, containerName, nsFlag, \"--since=24h\") older := len(strings.Split(out, \"\\n\")) Expect(recent).To(BeNumerically(\"<\", older)) recent_out := runKubectlOrDie(\"log\", pod.Name, containerName, nsFlag, \"--since=1s\") recent := len(strings.Split(recent_out, \"\\n\")) older_out := runKubectlOrDie(\"log\", pod.Name, containerName, nsFlag, \"--since=24h\") older := len(strings.Split(older_out, \"\\n\")) Expect(recent).To(BeNumerically(\"<\", older), \"expected 
recent(%v) to be less than older(%v)\\nrecent lines:\\n%v\\nolder lines:\\n%v\\n\", recent, older, recent_out, older_out) }) }) })"} {"_id":"doc-en-kubernetes-d35fa71ee8824da4b40155b19d1ccd789d268fb5501f57faebeb2059c716bfd1","title":"","text":"readonly KUBE_NODE_BINARIES=(\"${KUBE_NODE_TARGETS[@]##*/}\") readonly KUBE_NODE_BINARIES_WIN=(\"${KUBE_NODE_BINARIES[@]/%/.exe}\") if [[ \"${KUBE_FASTBUILD:-}\" == \"true\" ]]; then if [[ -n \"${KUBE_BUILD_PLATFORMS:-}\" ]]; then readonly KUBE_SERVER_PLATFORMS=(${KUBE_BUILD_PLATFORMS}) readonly KUBE_NODE_PLATFORMS=(${KUBE_BUILD_PLATFORMS}) readonly KUBE_TEST_PLATFORMS=(${KUBE_BUILD_PLATFORMS}) readonly KUBE_CLIENT_PLATFORMS=(${KUBE_BUILD_PLATFORMS}) elif [[ \"${KUBE_FASTBUILD:-}\" == \"true\" ]]; then readonly KUBE_SERVER_PLATFORMS=(linux/amd64) readonly KUBE_NODE_PLATFORMS=(linux/amd64) if [[ \"${KUBE_BUILDER_OS:-}\" == \"darwin\"* ]]; then"} {"_id":"doc-en-kubernetes-d7360ea6d6c919fd207a3604048922d88ad5448942a504e7efea9b7df4d6267d","title":"","text":"pendingActionSet.Insert( strings.Join([]string{\"delete-collection\", \"daemonsets\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"deployments\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"replicasets\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"jobs\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"horizontalpodautoscalers\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"ingresses\", \"\"}, \"-\"),"} {"_id":"doc-en-kubernetes-7b0dd7f5dbd61c679b18815a61960cfd66d03ff942834df5cc612747b6780028","title":"","text":"mockClient := fake.NewSimpleClientset(testInput.testNamespace) if containsVersion(versions, \"extensions/v1beta1\") { resources := []unversioned.APIResource{} for _, resource := range []string{\"daemonsets\", \"deployments\", \"jobs\", \"horizontalpodautoscalers\", \"ingresses\"} { for _, resource := range []string{\"daemonsets\", \"deployments\", \"replicasets\", \"jobs\", 
\"horizontalpodautoscalers\", \"ingresses\"} { resources = append(resources, unversioned.APIResource{Name: resource}) } mockClient.Resources = map[string]*unversioned.APIResourceList{"} {"_id":"doc-en-kubernetes-20886b1a54b6a6fea2936c5a404a98df3b86a18a193dca840028f706711462b5","title":"","text":"return estimate, err } } if containsResource(resources, \"replicasets\") { err = deleteReplicaSets(kubeClient.Extensions(), namespace) if err != nil { return estimate, err } } } return estimate, nil }"} {"_id":"doc-en-kubernetes-d558f7e679a7a08fbb9fc9335effb80ce7a7605162c83bf10c9e5e7c2ae3f585","title":"","text":"return expClient.Deployments(ns).DeleteCollection(nil, api.ListOptions{}) } func deleteReplicaSets(expClient extensions_unversioned.ExtensionsInterface, ns string) error { return expClient.ReplicaSets(ns).DeleteCollection(nil, api.ListOptions{}) } func deleteIngress(expClient extensions_unversioned.ExtensionsInterface, ns string) error { return expClient.Ingresses(ns).DeleteCollection(nil, api.ListOptions{}) }"} {"_id":"doc-en-kubernetes-fa29dbaf1f68f1094b74edc90b9e4c7d724f5a9da1b782e0631544c5a07fdd02","title":"","text":"func TestClient(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() ns := api.NamespaceDefault framework.DeleteAllEtcdKeys()"} {"_id":"doc-en-kubernetes-f4d028b5b1309560e99b7143f8a068140491d08730d8e43ee670a7a791366fce","title":"","text":"func TestSingleWatch(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() ns := \"blargh\" deleteAllEtcdKeys()"} {"_id":"doc-en-kubernetes-44d505dfad11f25e8977d0627b021585e0d05a9abfe242ed22c7db76f758817c","title":"","text":"framework.DeleteAllEtcdKeys() defer framework.DeleteAllEtcdKeys() _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() ns := api.NamespaceDefault client := client.NewOrDie(&client.Config{Host: s.URL, ContentConfig: 
client.ContentConfig{GroupVersion: testapi.Default.GroupVersion()}})"} {"_id":"doc-en-kubernetes-e4c3d6fa881908190861d750462cda2a30321e914c2da87cb87e21c85f2b0d43","title":"","text":"func TestExperimentalPrefix(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() resp, err := http.Get(s.URL + \"/apis/extensions/\") if err != nil {"} {"_id":"doc-en-kubernetes-c5cfacfc94666286392440ca5c99f749f87451cebfe9173a090c5d7ac3ad1fb6","title":"","text":"func TestWatchSucceedsWithoutArgs(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() resp, err := http.Get(s.URL + \"/api/v1/namespaces?watch=1\") if err != nil {"} {"_id":"doc-en-kubernetes-30af27f79b5fa46dada9eeef3c474ec72be9930047e15433de9c7b88e483ccc3","title":"","text":"func TestAccept(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() resp, err := http.Get(s.URL + \"/api/\") if err != nil {"} {"_id":"doc-en-kubernetes-a072704031fb7f1d2c2a9b10f14f471ebfa5954d666f119cb7e47b20395ccfe5","title":"","text":"func TestMasterProcessMetrics(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() metrics, err := scrapeMetrics(s) if err != nil {"} {"_id":"doc-en-kubernetes-0d675fe0f2ce234d3bb0767d7e74694adbb0ec9b1cd4dd072842ed25e8ee9952","title":"","text":"func TestApiserverMetrics(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() // Make a request to the apiserver to ensure there's at least one data point // for the metrics we're expecting -- otherwise, they won't be exported."} {"_id":"doc-en-kubernetes-47cea94ac5e65434685c617e73d9b7f1c5452ec6468319488d1e653ea664cfff","title":"","text":"func TestPersistentVolumeRecycler(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix 
#19254 // defer s.Close() deleteAllEtcdKeys() // Use higher QPS and Burst, there is a test for race condition below, which"} {"_id":"doc-en-kubernetes-708ab6b13e7b4be2147026b890fa9e148594788b5838ece69d729c71f50b4a45","title":"","text":"\"apiVersion\": \"v1\", \"kind\": \"ResourceQuota\", \"metadata\": { \"name\": \"quota\", \"name\": \"quota\" }, \"spec\": { \"hard\": {"} {"_id":"doc-en-kubernetes-869d1eb1156893b288bffb3f7fbcc8a5f8ec07d69880b865fbe21e8891c23e31","title":"","text":"\"pods\": \"10\", \"services\": \"5\", \"replicationcontrollers\":\"20\", \"resourcequotas\":\"1\", }, \"resourcequotas\":\"1\" } } } EOF"} {"_id":"doc-en-kubernetes-29265b8887cef3d94bd88b793941777bce7d10c31b014ef95aa5cda24f76744b","title":"","text":"trigger scaling up and down the number of pods in your application. * New GUI (dashboard) allows you to get started quickly and enables the same functionality found in the CLI as a more approachable and discoverable way of interacting with the system. Note: the GUI is eanbled by default for new cluster creation, however, it does not interacting with the system. Note: the GUI is enabled by default in 1.2 clusters. \"XXX \"Dashboard ## Other notable improvements"} {"_id":"doc-en-kubernetes-55141c12fad505112d65cce8fce967b4d367785b296f55f4f89760a8986281c7","title":"","text":"local output=`check-minion ${minion_ip}` echo $output if [[ \"${output}\" != \"working\" ]]; then if (( attempt > 9 )); then if (( attempt > 20 )); then echo echo -e \"${color_red}Your cluster is unlikely to work correctly.\" >&2 echo \"Please run ./cluster/kube-down.sh and re-create the\" >&2"} {"_id":"doc-en-kubernetes-bd257930c8ce2489746dd8e23105bcd86cba557753c9c2c2fe19297750ee378d","title":"","text":"// ServerResourcesForGroupVersion returns the supported resources for a group and version. 
func (d *DiscoveryClient) ServerResourcesForGroupVersion(groupVersion string) (resources *unversioned.APIResourceList, err error) { url := url.URL{} if groupVersion == \"v1\" { if len(groupVersion) == 0 { return nil, fmt.Errorf(\"groupVersion shouldn't be empty\") } else if groupVersion == \"v1\" { url.Path = \"/api/\" + groupVersion } else { url.Path = \"/apis/\" + groupVersion"} {"_id":"doc-en-kubernetes-a7a9ebbf68a63530274871dc065bcb1affd06fc90f6193644a7640863bf7c661","title":"","text":"\"k8s.io/kubernetes/pkg/client/typed/discovery\" \"k8s.io/kubernetes/pkg/client/typed/dynamic\" \"k8s.io/kubernetes/pkg/runtime\" utilerrors \"k8s.io/kubernetes/pkg/util/errors\" \"k8s.io/kubernetes/pkg/util/sets\" \"github.com/golang/glog\""} {"_id":"doc-en-kubernetes-c5cc1a2ea43566482c53f5c70c517dbdeab000bfcbae9adfd76be20699245532","title":"","text":"if err != nil { return results, err } allErrs := []error{} for _, apiGroup := range serverGroupList.Groups { preferredVersion := apiGroup.PreferredVersion apiResourceList, err := discoveryClient.ServerResourcesForGroupVersion(preferredVersion.GroupVersion) if err != nil { return results, err allErrs = append(allErrs, err) continue } groupVersion := unversioned.GroupVersion{Group: apiGroup.Name, Version: preferredVersion.Version} for _, apiResource := range apiResourceList.APIResources {"} {"_id":"doc-en-kubernetes-2ec66b9ccd444984accc7065b122700d44eea6a74184d8617985ff3549462d69","title":"","text":"results = append(results, groupVersion.WithResource(apiResource.Name)) } } return results, nil return results, utilerrors.NewAggregate(allErrs) }"} {"_id":"doc-en-kubernetes-f7c42836c5492a3992c6a08f63fbbd9a95873def027a376b07f15bef44b77eb0","title":"","text":"* `-f`: Resource file * also used for `--follow` in `logs`, but should be deprecated in favor of `-F` * `-n`: Namespace scope * `-l`: Label selector * also used for `--labels` in `expose`, but should be deprecated * `-L`: Label columns"} 
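The ServerPreferredResources diff above swaps a fail-fast `return results, err` for collecting per-group errors and continuing, so one broken API group no longer hides every other group's resources. A minimal standalone sketch of that collect-and-continue pattern, using the standard library's `errors.Join` in place of Kubernetes' `utilerrors.NewAggregate`, with hypothetical `fetchGroup`/`preferredResources` helpers:

```go
package main

import (
	"errors"
	"fmt"
)

// fetchGroup stands in for discoveryClient.ServerResourcesForGroupVersion:
// some groups fail while others still succeed.
func fetchGroup(name string) ([]string, error) {
	if name == "broken/v1" {
		return nil, fmt.Errorf("group %s: server returned 500", name)
	}
	return []string{name + "/pods"}, nil
}

// preferredResources mirrors the shape of the change: record each error and
// keep going, returning the partial results plus one aggregated error.
func preferredResources(groups []string) ([]string, error) {
	var results []string
	var errs []error
	for _, g := range groups {
		rs, err := fetchGroup(g)
		if err != nil {
			errs = append(errs, err) // collect instead of returning early
			continue
		}
		results = append(results, rs...)
	}
	return results, errors.Join(errs...) // nil when no errors were collected
}

func main() {
	res, err := preferredResources([]string{"apps/v1", "broken/v1", "batch/v1"})
	fmt.Println(res, err != nil)
}
```

Callers still see the failure, but as an aggregate alongside the usable partial list, which is what lets commands like namespace cleanup proceed past one unhealthy group.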
{"_id":"doc-en-kubernetes-7082f7d23d04d4fbd4a9f593f9567d4396bce6c43ca6e71291ab9925b473b192","title":"","text":"kubectl create \"${kube_flags[@]}\" --namespace=other -f docs/admin/limitrange/valid-pod.yaml # Post-condition: valid-pod POD is created kube::test::get_object_assert 'pods --namespace=other' \"{{range.items}}{{$id_field}}:{{end}}\" 'valid-pod:' # Post-condition: verify shorthand `-n other` has the same results as `--namespace=other` kube::test::get_object_assert 'pods -n other' \"{{range.items}}{{$id_field}}:{{end}}\" 'valid-pod:' ### Delete POD valid-pod in specific namespace # Pre-condition: valid-pod POD exists"} {"_id":"doc-en-kubernetes-48d66ad774532d0dbbe8b6adffab7748c21f8c94ec0bf36e7b950e96f6131805","title":"","text":"return ContextOverrideFlags{ ClusterName: FlagInfo{prefix + FlagClusterName, \"\", \"\", \"The name of the kubeconfig cluster to use\"}, AuthInfoName: FlagInfo{prefix + FlagAuthInfoName, \"\", \"\", \"The name of the kubeconfig user to use\"}, Namespace: FlagInfo{prefix + FlagNamespace, \"\", \"\", \"If present, the namespace scope for this CLI request\"}, Namespace: FlagInfo{prefix + FlagNamespace, \"n\", \"\", \"If present, the namespace scope for this CLI request\"}, } }"} {"_id":"doc-en-kubernetes-38a9426b08ffadf2279099038a2e5af7946b7df1054e32f3a362b1769c1d1fbf","title":"","text":"kube::test::get_object_assert pods \"{{range.items}}{{$id_field}}:{{end}}\" '' # Command kubectl get pods --sort-by=\"{metadata.name}\" kubectl get pods --sort-by=\"{metadata.creationTimestamp}\" ############################ # Kubectl --all-namespaces #"} {"_id":"doc-en-kubernetes-aca8342ee1c5e44a8f739e20985c76ca873ac501f954372c53d32afd83853bcc","title":"","text":"\"sort\" \"k8s.io/kubernetes/pkg/api/meta\" \"k8s.io/kubernetes/pkg/api/unversioned\" \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/pkg/runtime\" \"k8s.io/kubernetes/pkg/util/integer\" \"k8s.io/kubernetes/pkg/util/jsonpath\" \"github.com/golang/glog\""} 
{"_id":"doc-en-kubernetes-7e1b9fdf7c7136990101bfea47a8d7b5bbca77e41bcf8ceedd8c2cc8faa0065c","title":"","text":"return i.String() < j.String(), nil case reflect.Ptr: return isLess(i.Elem(), j.Elem()) case reflect.Struct: // special case handling lessFuncList := []structLessFunc{timeLess} if ok, less := structLess(i, j, lessFuncList); ok { return less, nil } // fallback to the fields comparision for idx := 0; idx < i.NumField(); idx++ { less, err := isLess(i.Field(idx), j.Field(idx)) if err != nil || !less { return less, err } } return true, nil case reflect.Array, reflect.Slice: // note: the length of i and j may be different for idx := 0; idx < integer.IntMin(i.Len(), j.Len()); idx++ { less, err := isLess(i.Index(idx), j.Index(idx)) if err != nil || !less { return less, err } } return true, nil default: return false, fmt.Errorf(\"unsortable type: %v\", i.Kind()) } } // structLessFunc checks whether i and j could be compared(the first return value), // and if it could, return whether i is less than j(the second return value) type structLessFunc func(i, j reflect.Value) (bool, bool) // structLess returns whether i and j could be compared with the given function list func structLess(i, j reflect.Value, lessFuncList []structLessFunc) (bool, bool) { for _, lessFunc := range lessFuncList { if ok, less := lessFunc(i, j); ok { return ok, less } } return false, false } // compare two unversioned.Time values. 
func timeLess(i, j reflect.Value) (bool, bool) { if i.Type() != reflect.TypeOf(unversioned.Unix(0, 0)) { return false, false } return true, i.MethodByName(\"Before\").Call([]reflect.Value{j})[0].Bool() } func (r *RuntimeSort) Less(i, j int) bool { iObj := r.objs[i] jObj := r.objs[j]"} {"_id":"doc-en-kubernetes-0049f1d4d9f09ba884ed929e8105d3f80339d3d25434589bdec3084d7ae698f3","title":"","text":"\"testing\" internal \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/unversioned\" api \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/pkg/runtime\" )"} {"_id":"doc-en-kubernetes-a7b4e88da2e9cd1cccc0828cca6147047ef34b1b671cd6c1574c9ab06659cd1a","title":"","text":"field: \"{.metadata.name}\", }, { name: \"random-order-timestamp\", obj: &api.PodList{ Items: []api.Pod{ { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(300, 0), }, }, { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(100, 0), }, }, { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(200, 0), }, }, }, }, sort: &api.PodList{ Items: []api.Pod{ { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(100, 0), }, }, { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(200, 0), }, }, { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(300, 0), }, }, }, }, field: \"{.metadata.creationTimestamp}\", }, { name: \"random-order-numbers\", obj: &api.ReplicationControllerList{ Items: []api.ReplicationController{"} {"_id":"doc-en-kubernetes-480c578a70c385c4a31aa3b2f03cea53c276fbcba22e13088f675d78dfeae4f3","title":"","text":" /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Code generated by protoc-gen-gogo. // source: k8s.io/kubernetes/pkg/apis/apps/v1alpha1/generated.proto // DO NOT EDIT! /* Package v1alpha1 is a generated protocol buffer package. It is generated from these files: k8s.io/kubernetes/pkg/apis/apps/v1alpha1/generated.proto It has these top-level messages: PetSet PetSetList PetSetSpec PetSetStatus */ package v1alpha1 import proto \"github.com/gogo/protobuf/proto\" import fmt \"fmt\" import math \"math\" import _ \"github.com/gogo/protobuf/gogoproto\" import _ \"k8s.io/kubernetes/pkg/api/resource\" import k8s_io_kubernetes_pkg_api_unversioned \"k8s.io/kubernetes/pkg/api/unversioned\" import k8s_io_kubernetes_pkg_api_v1 \"k8s.io/kubernetes/pkg/api/v1\" import _ \"k8s.io/kubernetes/pkg/util/intstr\" import io \"io\" // Reference imports to suppress errors if they are not otherwise used. 
var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf func (m *PetSet) Reset() { *m = PetSet{} } func (m *PetSet) String() string { return proto.CompactTextString(m) } func (*PetSet) ProtoMessage() {} func (m *PetSetList) Reset() { *m = PetSetList{} } func (m *PetSetList) String() string { return proto.CompactTextString(m) } func (*PetSetList) ProtoMessage() {} func (m *PetSetSpec) Reset() { *m = PetSetSpec{} } func (m *PetSetSpec) String() string { return proto.CompactTextString(m) } func (*PetSetSpec) ProtoMessage() {} func (m *PetSetStatus) Reset() { *m = PetSetStatus{} } func (m *PetSetStatus) String() string { return proto.CompactTextString(m) } func (*PetSetStatus) ProtoMessage() {} func init() { proto.RegisterType((*PetSet)(nil), \"k8s.io.kubernetes.pkg.apis.apps.v1alpha1.PetSet\") proto.RegisterType((*PetSetList)(nil), \"k8s.io.kubernetes.pkg.apis.apps.v1alpha1.PetSetList\") proto.RegisterType((*PetSetSpec)(nil), \"k8s.io.kubernetes.pkg.apis.apps.v1alpha1.PetSetSpec\") proto.RegisterType((*PetSetStatus)(nil), \"k8s.io.kubernetes.pkg.apis.apps.v1alpha1.PetSetStatus\") } func (m *PetSet) Marshal() (data []byte, err error) { size := m.Size() data = make([]byte, size) n, err := m.MarshalTo(data) if err != nil { return nil, err } return data[:n], nil } func (m *PetSet) MarshalTo(data []byte) (int, error) { var i int _ = i var l int _ = l data[i] = 0xa i++ i = encodeVarintGenerated(data, i, uint64(m.ObjectMeta.Size())) n1, err := m.ObjectMeta.MarshalTo(data[i:]) if err != nil { return 0, err } i += n1 data[i] = 0x12 i++ i = encodeVarintGenerated(data, i, uint64(m.Spec.Size())) n2, err := m.Spec.MarshalTo(data[i:]) if err != nil { return 0, err } i += n2 data[i] = 0x1a i++ i = encodeVarintGenerated(data, i, uint64(m.Status.Size())) n3, err := m.Status.MarshalTo(data[i:]) if err != nil { return 0, err } i += n3 return i, nil } func (m *PetSetList) Marshal() (data []byte, err error) { size := m.Size() data = make([]byte, size) n, err := m.MarshalTo(data) if err 
!= nil { return nil, err } return data[:n], nil } func (m *PetSetList) MarshalTo(data []byte) (int, error) { var i int _ = i var l int _ = l data[i] = 0xa i++ i = encodeVarintGenerated(data, i, uint64(m.ListMeta.Size())) n4, err := m.ListMeta.MarshalTo(data[i:]) if err != nil { return 0, err } i += n4 if len(m.Items) > 0 { for _, msg := range m.Items { data[i] = 0x12 i++ i = encodeVarintGenerated(data, i, uint64(msg.Size())) n, err := msg.MarshalTo(data[i:]) if err != nil { return 0, err } i += n } } return i, nil } func (m *PetSetSpec) Marshal() (data []byte, err error) { size := m.Size() data = make([]byte, size) n, err := m.MarshalTo(data) if err != nil { return nil, err } return data[:n], nil } func (m *PetSetSpec) MarshalTo(data []byte) (int, error) { var i int _ = i var l int _ = l if m.Replicas != nil { data[i] = 0x8 i++ i = encodeVarintGenerated(data, i, uint64(*m.Replicas)) } if m.Selector != nil { data[i] = 0x12 i++ i = encodeVarintGenerated(data, i, uint64(m.Selector.Size())) n5, err := m.Selector.MarshalTo(data[i:]) if err != nil { return 0, err } i += n5 } data[i] = 0x1a i++ i = encodeVarintGenerated(data, i, uint64(m.Template.Size())) n6, err := m.Template.MarshalTo(data[i:]) if err != nil { return 0, err } i += n6 if len(m.VolumeClaimTemplates) > 0 { for _, msg := range m.VolumeClaimTemplates { data[i] = 0x22 i++ i = encodeVarintGenerated(data, i, uint64(msg.Size())) n, err := msg.MarshalTo(data[i:]) if err != nil { return 0, err } i += n } } data[i] = 0x2a i++ i = encodeVarintGenerated(data, i, uint64(len(m.ServiceName))) i += copy(data[i:], m.ServiceName) return i, nil } func (m *PetSetStatus) Marshal() (data []byte, err error) { size := m.Size() data = make([]byte, size) n, err := m.MarshalTo(data) if err != nil { return nil, err } return data[:n], nil } func (m *PetSetStatus) MarshalTo(data []byte) (int, error) { var i int _ = i var l int _ = l if m.ObservedGeneration != nil { data[i] = 0x8 i++ i = encodeVarintGenerated(data, i, 
uint64(*m.ObservedGeneration)) } data[i] = 0x10 i++ i = encodeVarintGenerated(data, i, uint64(m.Replicas)) return i, nil } func encodeFixed64Generated(data []byte, offset int, v uint64) int { data[offset] = uint8(v) data[offset+1] = uint8(v >> 8) data[offset+2] = uint8(v >> 16) data[offset+3] = uint8(v >> 24) data[offset+4] = uint8(v >> 32) data[offset+5] = uint8(v >> 40) data[offset+6] = uint8(v >> 48) data[offset+7] = uint8(v >> 56) return offset + 8 } func encodeFixed32Generated(data []byte, offset int, v uint32) int { data[offset] = uint8(v) data[offset+1] = uint8(v >> 8) data[offset+2] = uint8(v >> 16) data[offset+3] = uint8(v >> 24) return offset + 4 } func encodeVarintGenerated(data []byte, offset int, v uint64) int { for v >= 1<<7 { data[offset] = uint8(v&0x7f | 0x80) v >>= 7 offset++ } data[offset] = uint8(v) return offset + 1 } func (m *PetSet) Size() (n int) { var l int _ = l l = m.ObjectMeta.Size() n += 1 + l + sovGenerated(uint64(l)) l = m.Spec.Size() n += 1 + l + sovGenerated(uint64(l)) l = m.Status.Size() n += 1 + l + sovGenerated(uint64(l)) return n } func (m *PetSetList) Size() (n int) { var l int _ = l l = m.ListMeta.Size() n += 1 + l + sovGenerated(uint64(l)) if len(m.Items) > 0 { for _, e := range m.Items { l = e.Size() n += 1 + l + sovGenerated(uint64(l)) } } return n } func (m *PetSetSpec) Size() (n int) { var l int _ = l if m.Replicas != nil { n += 1 + sovGenerated(uint64(*m.Replicas)) } if m.Selector != nil { l = m.Selector.Size() n += 1 + l + sovGenerated(uint64(l)) } l = m.Template.Size() n += 1 + l + sovGenerated(uint64(l)) if len(m.VolumeClaimTemplates) > 0 { for _, e := range m.VolumeClaimTemplates { l = e.Size() n += 1 + l + sovGenerated(uint64(l)) } } l = len(m.ServiceName) n += 1 + l + sovGenerated(uint64(l)) return n } func (m *PetSetStatus) Size() (n int) { var l int _ = l if m.ObservedGeneration != nil { n += 1 + sovGenerated(uint64(*m.ObservedGeneration)) } n += 1 + sovGenerated(uint64(m.Replicas)) return n } func sovGenerated(x 
uint64) (n int) { for { n++ x >>= 7 if x == 0 { break } } return n } func sozGenerated(x uint64) (n int) { return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } func (m *PetSet) Unmarshal(data []byte) error { l := len(data) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf(\"proto: PetSet: wiretype end group for non-group\") } if fieldNum <= 0 { return fmt.Errorf(\"proto: PetSet: illegal tag %d (wire type %d)\", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field ObjectMeta\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.ObjectMeta.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field Spec\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Spec.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field 
Status\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Status.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(data[iNdEx:]) if err != nil { return err } if skippy < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *PetSetList) Unmarshal(data []byte) error { l := len(data) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf(\"proto: PetSetList: wiretype end group for non-group\") } if fieldNum <= 0 { return fmt.Errorf(\"proto: PetSetList: illegal tag %d (wire type %d)\", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field ListMeta\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.ListMeta.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return 
fmt.Errorf(\"proto: wrong wireType = %d for field Items\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } m.Items = append(m.Items, PetSet{}) if err := m.Items[len(m.Items)-1].Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(data[iNdEx:]) if err != nil { return err } if skippy < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *PetSetSpec) Unmarshal(data []byte) error { l := len(data) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf(\"proto: PetSetSpec: wiretype end group for non-group\") } if fieldNum <= 0 { return fmt.Errorf(\"proto: PetSetSpec: illegal tag %d (wire type %d)\", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { return fmt.Errorf(\"proto: wrong wireType = %d for field Replicas\", wireType) } var v int32 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ v |= (int32(b) & 0x7F) << shift if b < 0x80 { break } } m.Replicas = &v case 2: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field Selector\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if 
shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if m.Selector == nil { m.Selector = &k8s_io_kubernetes_pkg_api_unversioned.LabelSelector{} } if err := m.Selector.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field Template\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Template.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field VolumeClaimTemplates\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } m.VolumeClaimTemplates = append(m.VolumeClaimTemplates, k8s_io_kubernetes_pkg_api_v1.PersistentVolumeClaim{}) if err := m.VolumeClaimTemplates[len(m.VolumeClaimTemplates)-1].Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field ServiceName\", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 
ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ stringLen |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex > l { return io.ErrUnexpectedEOF } m.ServiceName = string(data[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(data[iNdEx:]) if err != nil { return err } if skippy < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *PetSetStatus) Unmarshal(data []byte) error { l := len(data) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf(\"proto: PetSetStatus: wiretype end group for non-group\") } if fieldNum <= 0 { return fmt.Errorf(\"proto: PetSetStatus: illegal tag %d (wire type %d)\", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { return fmt.Errorf(\"proto: wrong wireType = %d for field ObservedGeneration\", wireType) } var v int64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ v |= (int64(b) & 0x7F) << shift if b < 0x80 { break } } m.ObservedGeneration = &v case 2: if wireType != 0 { return fmt.Errorf(\"proto: wrong wireType = %d for field Replicas\", wireType) } m.Replicas = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ m.Replicas |= (int32(b) & 0x7F) << shift if 
b < 0x80 { break } } default: iNdEx = preIndex skippy, err := skipGenerated(data[iNdEx:]) if err != nil { return err } if skippy < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func skipGenerated(data []byte) (n int, err error) { l := len(data) iNdEx := 0 for iNdEx < l { var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } wireType := int(wire & 0x7) switch wireType { case 0: for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } iNdEx++ if data[iNdEx-1] < 0x80 { break } } return iNdEx, nil case 1: iNdEx += 8 return iNdEx, nil case 2: var length int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ length |= (int(b) & 0x7F) << shift if b < 0x80 { break } } iNdEx += length if length < 0 { return 0, ErrInvalidLengthGenerated } return iNdEx, nil case 3: for { var innerWire uint64 var start int = iNdEx for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ innerWire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } innerWireType := int(innerWire & 0x7) if innerWireType == 4 { break } next, err := skipGenerated(data[start:]) if err != nil { return 0, err } iNdEx = start + next } return iNdEx, nil case 4: return iNdEx, nil case 5: iNdEx += 4 return iNdEx, nil default: return 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType) } } panic(\"unreachable\") } var ( ErrInvalidLengthGenerated = fmt.Errorf(\"proto: negative length found during unmarshaling\") 
ErrIntOverflowGenerated = fmt.Errorf(\"proto: integer overflow\") ) "} {"_id":"doc-en-kubernetes-f78551fdd9a516f286253b61bc6635bab2691d8df7070e2079c175b1dd4307b4","title":"","text":" /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // This file was autogenerated by go-to-protobuf. Do not edit it manually! syntax = 'proto2'; package k8s.io.kubernetes.pkg.apis.apps.v1alpha1; import \"k8s.io/kubernetes/pkg/api/resource/generated.proto\"; import \"k8s.io/kubernetes/pkg/api/unversioned/generated.proto\"; import \"k8s.io/kubernetes/pkg/api/v1/generated.proto\"; import \"k8s.io/kubernetes/pkg/util/intstr/generated.proto\"; // Package-wide variables from generator \"generated\". option go_package = \"v1alpha1\"; // PetSet represents a set of pods with consistent identities. // Identities are defined as: // - Network: A single stable DNS and hostname. // - Storage: As many VolumeClaims as requested. // The PetSet guarantees that a given network identity will always // map to the same storage identity. PetSet is currently in alpha // and subject to change without notice. message PetSet { optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1; // Spec defines the desired identities of pets in this set. optional PetSetSpec spec = 2; // Status is the current status of Pets in this PetSet. This data // may be out of date by some window of time. optional PetSetStatus status = 3; } // PetSetList is a collection of PetSets. 
message PetSetList { optional k8s.io.kubernetes.pkg.api.unversioned.ListMeta metadata = 1; repeated PetSet items = 2; } // A PetSetSpec is the specification of a PetSet. message PetSetSpec { // Replicas is the desired number of replicas of the given Template. // These are replicas in the sense that they are instantiations of the // same Template, but individual replicas also have a consistent identity. // If unspecified, defaults to 1. // TODO: Consider a rename of this field. optional int32 replicas = 1; // Selector is a label query over pods that should match the replica count. // If empty, defaulted to labels on the pod template. // More info: http://releases.k8s.io/HEAD/docs/user-guide/labels.md#label-selectors optional k8s.io.kubernetes.pkg.api.unversioned.LabelSelector selector = 2; // Template is the object that describes the pod that will be created if // insufficient replicas are detected. Each pod stamped out by the PetSet // will fulfill this Template, but have a unique identity from the rest // of the PetSet. optional k8s.io.kubernetes.pkg.api.v1.PodTemplateSpec template = 3; // VolumeClaimTemplates is a list of claims that pets are allowed to reference. // The PetSet controller is responsible for mapping network identities to // claims in a way that maintains the identity of a pet. Every claim in // this list must have at least one matching (by name) volumeMount in one // container in the template. A claim in this list takes precedence over // any volumes in the template, with the same name. // TODO: Define the behavior if a claim already exists with the same name. repeated k8s.io.kubernetes.pkg.api.v1.PersistentVolumeClaim volumeClaimTemplates = 4; // ServiceName is the name of the service that governs this PetSet. // This service must exist before the PetSet, and is responsible for // the network identity of the set. 
Pets get DNS/hostnames that follow the // pattern: pet-specific-string.serviceName.default.svc.cluster.local // where \"pet-specific-string\" is managed by the PetSet controller. optional string serviceName = 5; } // PetSetStatus represents the current state of a PetSet. message PetSetStatus { // ObservedGeneration is the most recent generation observed by this PetSet. optional int64 observedGeneration = 1; // Replicas is the number of actual replicas. optional int32 replicas = 2; } "} {"_id":"doc-en-kubernetes-4c8b3952c76ccd4bec02c50ef2e9530c51c6ed23236f4f3b7bde1efe3e1eb3bc","title":"","text":"if !options.Local { helper := resource.NewHelper(client, mapping) patchedObject, err := helper.Patch(namespace, name, patchType, patchBytes) _, err := helper.Patch(namespace, name, patchType, patchBytes) if err != nil { return err } if cmdutil.ShouldRecord(cmd, info) { if err := cmdutil.RecordChangeCause(patchedObject, f.Command()); err == nil { // don't return an error on failure. The patch itself succeeded, it's only the hint for that change that failed // don't bother checking for failures of this replace, because a failure to indicate the hint doesn't fail the command // also, don't force the replacement. If the replacement fails on a resourceVersion conflict, then it means this // record hint is likely to be invalid anyway, so avoid the bad hint resource.NewHelper(client, mapping).Replace(namespace, name, false, patchedObject) // don't return an error on failure. The patch itself succeeded, it's only the hint for that change that failed // don't bother checking for failures of this replace, because a failure to indicate the hint doesn't fail the command // also, don't force the replacement. 
If the replacement fails on a resourceVersion conflict, then it means this // record hint is likely to be invalid anyway, so avoid the bad hint patch, err := cmdutil.ChangeResourcePatch(info, f.Command()) if err == nil { helper.Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) } } count++"} {"_id":"doc-en-kubernetes-552d5a4f04229d194333866a2519858fcfa775d7c94f914661cab31949614a3f","title":"","text":"} // TODO: if we ever want to go generic, this allows a clean -o yaml without trying to print columns or anything // rawExtension := &runtime.Unknown{ // \tRaw: originalPatchedObjJS, //\tRaw: originalPatchedObjJS, // } printer, err := f.PrinterForMapping(cmd, mapping, false)"} {"_id":"doc-en-kubernetes-923cb8ada8358a523094eae019170a1e5e8ca2b7f6bba855c866e25d4dc620d5","title":"","text":"} func TestDelNode(t *testing.T) { defer func() { now = time.Now }() var tick int64 now = func() time.Time { t := time.Unix(tick, 0) tick++ return t } evictor := NewRateLimitedTimedQueue(flowcontrol.NewFakeAlwaysRateLimiter()) evictor.Add(\"first\") evictor.Add(\"second\")"} {"_id":"doc-en-kubernetes-b67a2b56f7ff6553e006aec95fcf10eec7d4745e8dff4dd997ef8fb60423670a","title":"","text":"tar xf elasticsearch-1.5.2.tar.gz && rm elasticsearch-1.5.2.tar.gz RUN mkdir -p /elasticsearch-1.5.2/config/templates COPY elasticsearch.yml /elasticsearch-1.5.2/config/elasticsearch.yml COPY template-k8s-logstash.json /elasticsearch-1.5.2/config/templates/template-k8s-logstash.json COPY run.sh / COPY elasticsearch_logging_discovery /"} {"_id":"doc-en-kubernetes-d349bc58d26d053ff0a1ab139347e36e4e8201866e5ef02e1ea52776812895ac","title":"","text":" { \"template_k8s_logstash\" : { \"template\" : \"logstash-*\", \"settings\" : { \"index.refresh_interval\" : \"5s\" }, \"mappings\" : { \"_default_\" : { \"dynamic_templates\" : [ { \"kubernetes_field\" : { \"path_match\" : \"kubernetes.*\", \"mapping\" : { \"type\" : \"string\", \"index\" : \"not_analyzed\" } } } ] } } } } "} 
{"_id":"doc-en-kubernetes-ba84c6f4da529065660bb3cfffed18866b79cbdae8000f458458ad8be2d7962e","title":"","text":"kube::etcd::install() { ( cd \"${KUBE_ROOT}/third_party\" curl -fsSL --retry 3 https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/etcd-v${ETCD_VERSION}-linux-amd64.tar.gz | tar xzf - ln -fns \"etcd-v${ETCD_VERSION}-linux-amd64\" etcd if [[ $(uname) == \"Darwin\" ]]; then download_file=\"etcd-v${ETCD_VERSION}-darwin-amd64.zip\" curl -fsSLO --retry 3 https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/\"${download_file}\" unzip -o \"${download_file}\" ln -fns \"etcd-v${ETCD_VERSION}-darwin-amd64\" etcd rm \"${download_file}\" else curl -fsSL --retry 3 https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/etcd-v${ETCD_VERSION}-linux-amd64.tar.gz | tar xzf - ln -fns \"etcd-v${ETCD_VERSION}-linux-amd64\" etcd fi kube::log::info \"etcd v${ETCD_VERSION} installed. To use:\" kube::log::info \"export PATH=${PATH}:$(pwd)/etcd\" )"} {"_id":"doc-en-kubernetes-6b84738a94a23af0f56a7c8cf31220024d7d74d7993be985041a14472692bb8b","title":"","text":" /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package core import ( \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/fields\" \"k8s.io/kubernetes/pkg/labels\" \"k8s.io/kubernetes/pkg/runtime\" ) func addDefaultingFuncs(scheme *runtime.Scheme) { scheme.AddDefaultingFuncs( func(obj *api.ListOptions) { if obj.LabelSelector == nil { obj.LabelSelector = labels.Everything() } if obj.FieldSelector == nil { obj.FieldSelector = fields.Everything() } }, ) } func addConversionFuncs(scheme *runtime.Scheme) { scheme.AddConversionFuncs( api.Convert_unversioned_TypeMeta_To_unversioned_TypeMeta, api.Convert_unversioned_ListMeta_To_unversioned_ListMeta, api.Convert_intstr_IntOrString_To_intstr_IntOrString, api.Convert_unversioned_Time_To_unversioned_Time, api.Convert_Slice_string_To_unversioned_Time, api.Convert_string_To_labels_Selector, api.Convert_string_To_fields_Selector, api.Convert_Pointer_bool_To_bool, api.Convert_bool_To_Pointer_bool, api.Convert_Pointer_string_To_string, api.Convert_string_To_Pointer_string, api.Convert_labels_Selector_To_string, api.Convert_fields_Selector_To_string, api.Convert_resource_Quantity_To_resource_Quantity, ) } "} {"_id":"doc-en-kubernetes-7594f092355a5bc5de3628251b80f8ee7d09e6e990b3b8284693c9a26ddfe3ac","title":"","text":" /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package core import ( \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/runtime\" ) func addDeepCopyFuncs(scheme *runtime.Scheme) { if err := scheme.AddGeneratedDeepCopyFuncs( api.DeepCopy_api_DeleteOptions, api.DeepCopy_api_ExportOptions, api.DeepCopy_api_List, api.DeepCopy_api_ListOptions, api.DeepCopy_api_ObjectMeta, api.DeepCopy_api_ObjectReference, api.DeepCopy_api_OwnerReference, api.DeepCopy_api_Service, api.DeepCopy_api_ServiceList, api.DeepCopy_api_ServicePort, api.DeepCopy_api_ServiceSpec, api.DeepCopy_api_ServiceStatus, ); err != nil { // if one of the deep copy functions is malformed, detect it immediately. panic(err) } } "} {"_id":"doc-en-kubernetes-4bb9d79fd048482eaecc5f7b756e34b0dbbbd4b13e8aa133619f7ccf0277763a","title":"","text":"&unversioned.APIGroup{}, &unversioned.APIResourceList{}, ) addDeepCopyFuncs(scheme) addDefaultingFuncs(scheme) addConversionFuncs(scheme) }"} {"_id":"doc-en-kubernetes-9193350c46f90fcba863fe730753ca88a2c7e627b9f6e798745899178328a0f9","title":"","text":"import ( \"fmt\" \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/pkg/runtime\" )"} {"_id":"doc-en-kubernetes-607e84ced79aa2e76fd42d37a38cdd726e203cf299d4c370a9d77a45aac10bba","title":"","text":"func addConversionFuncs(scheme *runtime.Scheme) { // Add non-generated conversion functions err := scheme.AddConversionFuncs( v1.Convert_api_ServiceSpec_To_v1_ServiceSpec, v1.Convert_v1_DeleteOptions_To_api_DeleteOptions, v1.Convert_api_DeleteOptions_To_v1_DeleteOptions, v1.Convert_v1_ExportOptions_To_api_ExportOptions, v1.Convert_api_ExportOptions_To_v1_ExportOptions, v1.Convert_v1_List_To_api_List, v1.Convert_api_List_To_v1_List, v1.Convert_v1_ListOptions_To_api_ListOptions, v1.Convert_api_ListOptions_To_v1_ListOptions, v1.Convert_v1_ObjectFieldSelector_To_api_ObjectFieldSelector, v1.Convert_api_ObjectFieldSelector_To_v1_ObjectFieldSelector, v1.Convert_v1_ObjectMeta_To_api_ObjectMeta, v1.Convert_api_ObjectMeta_To_v1_ObjectMeta, 
v1.Convert_v1_ObjectReference_To_api_ObjectReference, v1.Convert_api_ObjectReference_To_v1_ObjectReference, v1.Convert_v1_OwnerReference_To_api_OwnerReference, v1.Convert_api_OwnerReference_To_v1_OwnerReference, v1.Convert_v1_Service_To_api_Service, v1.Convert_api_Service_To_v1_Service, v1.Convert_v1_ServiceList_To_api_ServiceList, v1.Convert_api_ServiceList_To_v1_ServiceList, v1.Convert_v1_ServicePort_To_api_ServicePort, v1.Convert_api_ServicePort_To_v1_ServicePort, v1.Convert_v1_ServiceProxyOptions_To_api_ServiceProxyOptions, v1.Convert_api_ServiceProxyOptions_To_v1_ServiceProxyOptions, v1.Convert_v1_ServiceSpec_To_api_ServiceSpec, v1.Convert_api_ServiceSpec_To_v1_ServiceSpec, v1.Convert_v1_ServiceStatus_To_api_ServiceStatus, v1.Convert_api_ServiceStatus_To_v1_ServiceStatus, ) if err != nil { // If one of the conversion functions is malformed, detect it immediately."} {"_id":"doc-en-kubernetes-730e16fcf0872634daf133e83372c62bab09568914217adae36e316b4301ad9a","title":"","text":"for _, kind := range []string{ \"Service\", } { err = api.Scheme.AddFieldLabelConversionFunc(\"v1\", kind, err = scheme.AddFieldLabelConversionFunc(\"v1\", kind, func(label, value string) (string, string, error) { switch label { case \"metadata.namespace\","} {"_id":"doc-en-kubernetes-71d428ebfbc7fa4fb0807eb410ca7f2bf521c711473b4462db452802b18c4739","title":"","text":" /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package v1 import ( \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/pkg/runtime\" ) func addDeepCopyFuncs(scheme *runtime.Scheme) { if err := scheme.AddGeneratedDeepCopyFuncs( v1.DeepCopy_v1_DeleteOptions, v1.DeepCopy_v1_ExportOptions, v1.DeepCopy_v1_List, v1.DeepCopy_v1_ListOptions, v1.DeepCopy_v1_ObjectMeta, v1.DeepCopy_v1_ObjectReference, v1.DeepCopy_v1_OwnerReference, v1.DeepCopy_v1_Service, v1.DeepCopy_v1_ServiceList, v1.DeepCopy_v1_ServicePort, v1.DeepCopy_v1_ServiceSpec, v1.DeepCopy_v1_ServiceStatus, ); err != nil { // if one of the deep copy functions is malformed, detect it immediately. panic(err) } } "} {"_id":"doc-en-kubernetes-0c27dc9269463d68be33f02fb16abc2a50d57f28e79ff20b73b070221e6ffb90","title":"","text":"addKnownTypes(scheme) addConversionFuncs(scheme) addDefaultingFuncs(scheme) addDeepCopyFuncs(scheme) } // Adds the list of known types to api.Scheme."} {"_id":"doc-en-kubernetes-7cd2e9a903ffb272eb913a3337912ccb3388d38e4f7b89795a67b5b463cf17f1","title":"","text":"func assertManagedStatus( config *KubeletManagedHostConfig, podName string, expectedIsManaged bool, name string) { // See https://github.com/kubernetes/kubernetes/issues/27023 // TODO: workaround for https://github.com/kubernetes/kubernetes/issues/34256 // // Retry until timeout for the right contents of /etc/hosts to show // up. There may be a low probability race here. We still fail the // test if retry was necessary, but at least we will know whether or // not it resolves or seems to be a permanent condition. // // If /etc/hosts is properly mounted, then this will succeed // immediately. // Retry until timeout for the contents of /etc/hosts to show // up. Note: if /etc/hosts is properly mounted, then this will // succeed immediately. 
const retryTimeout = 30 * time.Second retryCount := 0 etcHostsContent := \"\" matched := false for startTime := time.Now(); time.Since(startTime) < retryTimeout; { etcHostsContent = config.getEtcHostsContent(podName, name) isManaged := strings.Contains(etcHostsContent, etcHostsPartialContent) if expectedIsManaged == isManaged { matched = true break return } glog.Errorf( glog.Warningf( \"For pod: %s, name: %s, expected %t, actual %t (/etc/hosts was %q), retryCount: %d\", podName, name, expectedIsManaged, isManaged, etcHostsContent, retryCount)"} {"_id":"doc-en-kubernetes-9b19c9bacfa8a6878cf3519a632b8f7464e463be24090eac789eda0a8c7f263b","title":"","text":"time.Sleep(100 * time.Millisecond) } if retryCount > 0 { if matched { conditionText := \"should\" if !expectedIsManaged { conditionText = \"should not\" } framework.Failf( \"/etc/hosts file %s be kubelet managed (name: %s, retries: %d). /etc/hosts contains %q\", conditionText, name, retryCount, etcHostsContent) } else { framework.Failf( \"had to retry %d times to get matching content in /etc/hosts (name: %s)\", retryCount, name) } if expectedIsManaged { framework.Failf( \"/etc/hosts file should be kubelet managed (name: %s, retries: %d). /etc/hosts contains %q\", name, retryCount, etcHostsContent) } else { framework.Failf( \"/etc/hosts file should not be kubelet managed (name: %s, retries: %d). 
/etc/hosts contains %q\", name, retryCount, etcHostsContent) } }"} {"_id":"doc-en-kubernetes-df84244d39276a9a203153ee2d6c383335e287c0a4e6d0269e1f13c2d4ed4bf7","title":"","text":"var endpointColumns = []string{\"NAME\", \"ENDPOINTS\", \"AGE\"} var nodeColumns = []string{\"NAME\", \"STATUS\", \"AGE\"} var daemonSetColumns = []string{\"NAME\", \"DESIRED\", \"CURRENT\", \"NODE-SELECTOR\", \"AGE\"} var eventColumns = []string{\"FIRSTSEEN\", \"LASTSEEN\", \"COUNT\", \"NAME\", \"KIND\", \"SUBOBJECT\", \"TYPE\", \"REASON\", \"SOURCE\", \"MESSAGE\"} var eventColumns = []string{\"LASTSEEN\", \"FIRSTSEEN\", \"COUNT\", \"NAME\", \"KIND\", \"SUBOBJECT\", \"TYPE\", \"REASON\", \"SOURCE\", \"MESSAGE\"} var limitRangeColumns = []string{\"NAME\", \"AGE\"} var resourceQuotaColumns = []string{\"NAME\", \"AGE\"} var namespaceColumns = []string{\"NAME\", \"STATUS\", \"AGE\"}"} {"_id":"doc-en-kubernetes-cad1fa7eff47c69fcfd59f8fc7aa244645ffa6e26f50bc30b33322d5eb0ab108","title":"","text":"if _, err := fmt.Fprintf( w, \"%st%st%dt%st%st%st%st%st%st%s\", FirstTimestamp, LastTimestamp, FirstTimestamp, event.Count, event.InvolvedObject.Name, event.InvolvedObject.Kind,"} {"_id":"doc-en-kubernetes-73295edc882161b8be65028842cfb0cb0d438b3beb196432dc1c73796c43fca1","title":"","text":"var k8sBinDir = flag.String(\"k8s-bin-dir\", \"\", \"Directory containing k8s kubelet and kube-apiserver binaries.\") var buildTargets = []string{ \"cmd/kubelet\", \"cmd/kube-apiserver\", \"test/e2e_node/e2e_node.test\", } func buildGo() { glog.Infof(\"Building k8s binaries...\") k8sRoot, err := getK8sRootDir() if err != nil { glog.Fatalf(\"Failed to locate kubernetes root directory %v.\", err) } cmd := exec.Command(filepath.Join(k8sRoot, \"hack/build-go.sh\"), buildTargets...) 
cmd := exec.Command(filepath.Join(k8sRoot, \"hack/build-go.sh\")) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err = cmd.Run()"} {"_id":"doc-en-kubernetes-92743c6358e0bad6ac9d7d238a92ee22571b88ffaefaedc04c71c8086c6a496e","title":"","text":"return \"\", err } if _, err := os.Stat(filepath.Join(*k8sBinDir, bin)); err != nil { return \"\", fmt.Errorf(\"Could not find %s under directory %s.\", bin, absPath) return \"\", fmt.Errorf(\"Could not find kube-apiserver under directory %s.\", absPath) } return filepath.Join(absPath, bin), nil }"} {"_id":"doc-en-kubernetes-dab75e57b1154a44c86579560ea00a1304ab777c7f22cf03803fba62c4e37550","title":"","text":"kubectl replace \"${kube_flags[@]}\" --force -f /tmp/tmp-valid-pod.json # Post-condition: spec.container.name = \"replaced-k8s-serve-hostname\" kube::test::get_object_assert 'pod valid-pod' \"{{(index .spec.containers 0).name}}\" 'replaced-k8s-serve-hostname' ## check replace --grace-period requires --force output_message=$(! kubectl replace \"${kube_flags[@]}\" --grace-period=1 -f /tmp/tmp-valid-pod.json 2>&1) kube::test::if_has_string \"${output_message}\" '--grace-period must have --force specified' ## check replace --timeout requires --force output_message=$(! kubectl replace \"${kube_flags[@]}\" --timeout=1s -f /tmp/tmp-valid-pod.json 2>&1) kube::test::if_has_string \"${output_message}\" '--timeout must have --force specified' #cleaning rm /tmp/tmp-valid-pod.json"} {"_id":"doc-en-kubernetes-b8ab92bba877a2e277509e29b9f50eba1c508735654361e9f650e8fd7bd542b1","title":"","text":"\"github.com/spf13/cobra\" \"github.com/golang/glog\" \"k8s.io/kubernetes/pkg/api/errors\" \"k8s.io/kubernetes/pkg/kubectl\" cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" \"k8s.io/kubernetes/pkg/kubectl/resource\" \"k8s.io/kubernetes/pkg/runtime\" \"k8s.io/kubernetes/pkg/util/wait\" ) // ReplaceOptions is the start of the data required to perform the operation. 
As new fields are added, add them here instead of"} {"_id":"doc-en-kubernetes-532d88d15463bb362b69ff6d72f0a2a13d139e5b4795fea68bb8c08a78cea5dc","title":"","text":"return forceReplace(f, out, cmd, args, shortOutput, options) } if cmdutil.GetFlagInt(cmd, \"grace-period\") >= 0 { return fmt.Errorf(\"--grace-period must have --force specified\") } if cmdutil.GetFlagDuration(cmd, \"timeout\") != 0 { return fmt.Errorf(\"--timeout must have --force specified\") } mapper, typer := f.Object(cmdutil.GetIncludeThirdPartyAPIs(cmd)) r := resource.NewBuilder(mapper, typer, resource.ClientMapperFunc(f.ClientForMapping), f.Decoder(true)). Schema(schema)."} {"_id":"doc-en-kubernetes-fc3384fd8a13d7e0f644848a247d424d80fef3ecf578423d688de74fb533c9c0","title":"","text":"} //Replace will create a resource if it doesn't exist already, so ignore not found error ignoreNotFound := true timeout := cmdutil.GetFlagDuration(cmd, \"timeout\") // By default use a reaper to delete all related resources. if cmdutil.GetFlagBool(cmd, \"cascade\") { glog.Warningf(\"\"cascade\" is set, kubectl will delete and re-create all resources managed by this resource (e.g. Pods created by a ReplicationController). 
Consider using \"kubectl rolling-update\" if you want to update a ReplicationController together with its Pods.\") err = ReapResult(r, f, out, cmdutil.GetFlagBool(cmd, \"cascade\"), ignoreNotFound, cmdutil.GetFlagDuration(cmd, \"timeout\"), cmdutil.GetFlagInt(cmd, \"grace-period\"), shortOutput, mapper, false) err = ReapResult(r, f, out, cmdutil.GetFlagBool(cmd, \"cascade\"), ignoreNotFound, timeout, cmdutil.GetFlagInt(cmd, \"grace-period\"), shortOutput, mapper, false) } else { err = DeleteResult(r, out, ignoreNotFound, shortOutput, mapper) }"} {"_id":"doc-en-kubernetes-24e12552e7d4bc11a4bae587cf87efb5ebd98d10e90aacd6e973fec59f38aece","title":"","text":"return err } if timeout == 0 { timeout = kubectl.Timeout } r.Visit(func(info *resource.Info, err error) error { if err != nil { return err } return wait.PollImmediate(kubectl.Interval, timeout, func() (bool, error) { if err := info.Get(); !errors.IsNotFound(err) { return false, err } return true, nil }) }) r = resource.NewBuilder(mapper, typer, resource.ClientMapperFunc(f.UnstructuredClientForMapping), runtime.UnstructuredJSONScheme). Schema(schema). 
ContinueOnError()."} {"_id":"doc-en-kubernetes-08fd3593414206bf9927f1077b4ff16d2bca233d5963e85ca60d73830883ca48","title":"","text":"f, tf, codec, _ := NewAPIFactory() ns := dynamic.ContentConfig().NegotiatedSerializer tf.Printer = &testPrinter{} deleted := false tf.Client = &fake.RESTClient{ NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case p == \"/namespaces/test/replicationcontrollers/redis-master\" && (m == http.MethodGet || m == http.MethodPut || m == http.MethodDelete): case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodDelete: deleted = true fallthrough case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodPut: return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodGet: statusCode := http.StatusOK if deleted { statusCode = http.StatusNotFound } return &http.Response{StatusCode: statusCode, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/replicationcontrollers\" && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil default:"} {"_id":"doc-en-kubernetes-9ce6eebd0a6751e9f10424517d6511cd07cf2ee4f8d08dab07f363cf779479f4","title":"","text":"f, tf, codec, _ := NewAPIFactory() ns := dynamic.ContentConfig().NegotiatedSerializer tf.Printer = &testPrinter{} redisMasterDeleted := false frontendDeleted := false tf.Client = &fake.RESTClient{ NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case p == \"/namespaces/test/replicationcontrollers/redis-master\" && (m == http.MethodGet || m == http.MethodPut || m == http.MethodDelete): 
case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodDelete: redisMasterDeleted = true fallthrough case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodPut: return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodGet: statusCode := http.StatusOK if redisMasterDeleted { statusCode = http.StatusNotFound } return &http.Response{StatusCode: statusCode, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/replicationcontrollers\" && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/services/frontend\" && (m == http.MethodGet || m == http.MethodPut || m == http.MethodDelete): case p == \"/namespaces/test/services/frontend\" && m == http.MethodDelete: frontendDeleted = true fallthrough case p == \"/namespaces/test/services/frontend\" && m == http.MethodPut: return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &svc.Items[0])}, nil case p == \"/namespaces/test/services/frontend\" && m == http.MethodGet: statusCode := http.StatusOK if frontendDeleted { statusCode = http.StatusNotFound } return &http.Response{StatusCode: statusCode, Header: defaultHeader(), Body: objBody(codec, &svc.Items[0])}, nil case p == \"/namespaces/test/services\" && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &svc.Items[0])}, nil default:"} {"_id":"doc-en-kubernetes-fca4dad94514e5fc4023681bd6f559e7580d9a775d21b7fc0364bdcf0d05bb9b","title":"","text":"f, tf, codec, _ := NewAPIFactory() ns := dynamic.ContentConfig().NegotiatedSerializer tf.Printer = &testPrinter{} created := map[string]bool{} tf.Client = 
&fake.RESTClient{ NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers/\") && (m == http.MethodGet || m == http.MethodPut || m == http.MethodDelete): case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers/\") && m == http.MethodPut: created[p] = true return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers/\") && m == http.MethodGet: statusCode := http.StatusNotFound if created[p] { statusCode = http.StatusOK } return &http.Response{StatusCode: statusCode, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers/\") && m == http.MethodDelete: delete(created, p) return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers\") && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil"} {"_id":"doc-en-kubernetes-7465e2e443409b35723bbed5e985b218c2aa9b654177a4817dce96b91d9781b9","title":"","text":"NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodDelete: case p == \"/namespaces/test/replicationcontrollers/redis-master\" && (m == http.MethodGet || m == http.MethodDelete): return &http.Response{StatusCode: http.StatusNotFound, Header: defaultHeader(), Body: stringBody(\"\")}, nil case p == \"/namespaces/test/replicationcontrollers\" && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, 
Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil"} {"_id":"doc-en-kubernetes-87483e40644167a1658ef107f45d7914b2ad8c0af8314d640ad4742b4fb7c835","title":"","text":"$host_kubectl create secret generic ${name} --from-file=\"${dir}/kubeconfig\" --namespace=\"${FEDERATION_NAMESPACE}\" done $template \"${manifests_root}/federation-apiserver-\"{deployment,secrets}\".yaml\" | $host_kubectl create -f - $template \"${manifests_root}/federation-controller-manager-deployment.yaml\" | $host_kubectl create -f - for file in federation-etcd-pvc.yaml federation-apiserver-{deployment,secrets}.yaml federation-controller-manager-deployment.yaml; do $template \"${manifests_root}/${file}\" | $host_kubectl create -f - done # Update the users kubeconfig to include federation-apiserver credentials. CONTEXT=federation-cluster "} {"_id":"doc-en-kubernetes-b82dd4044c6d2fe920fd4e2df0d5b7dd75d385481a6dc0a7f042aadfc815ab17","title":"","text":"readOnly: true - name: etcd image: quay.io/coreos/etcd:v2.3.3 command: - /etcd - --data-dir - /var/etcd/data volumeMounts: - mountPath: /var/etcd name: varetcd volumes: - name: federation-apiserver-secrets secret: secretName: federation-apiserver-secrets - name: varetcd persistentVolumeClaim: claimName: {{.FEDERATION_APISERVER_DEPLOYMENT_NAME}}-etcd-claim "} {"_id":"doc-en-kubernetes-91dea5dc515e3fa361024740b93baf9c4f3ad1480272f6c7073bca63016f08f0","title":"","text":" apiVersion: v1 kind: PersistentVolumeClaim metadata: name: {{.FEDERATION_APISERVER_DEPLOYMENT_NAME}}-etcd-claim annotations: volume.alpha.kubernetes.io/storage-class: \"yes\" namespace: {{.FEDERATION_NAMESPACE}} labels: app: federated-cluster spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi "} {"_id":"doc-en-kubernetes-e38dd6a6e96e736291bda9d7780e969a109cfe99d69c22cccfe74c96318796e7","title":"","text":"Expect(container.Create()).To(Succeed()) defer container.Delete() By(\"check the pod phase\") Eventually(container.GetPhase, retryTimeout, 
pollInterval).Should(Equal(testCase.phase)) Consistently(container.GetPhase, consistentCheckTimeout, pollInterval).Should(Equal(testCase.phase)) // We need to check container state first. The default pod status is pending, If we check // pod phase first, and the expected pod phase is Pending, the container status may not // even show up when we check it. By(\"check the container state\") status, err := container.GetStatus() Expect(err).NotTo(HaveOccurred()) Expect(GetContainerState(status.State)).To(Equal(testCase.state)) getState := func() (ContainerState, error) { status, err := container.GetStatus() if err != nil { return ContainerStateUnknown, err } return GetContainerState(status.State), nil } Eventually(getState, retryTimeout, pollInterval).Should(Equal(testCase.state)) Consistently(getState, consistentCheckTimeout, pollInterval).Should(Equal(testCase.state)) By(\"check the pod phase\") Expect(container.GetPhase()).To(Equal(testCase.phase)) By(\"it should be possible to delete\") Expect(container.Delete()).To(Succeed())"} {"_id":"doc-en-kubernetes-675824e77431d37f2d4aaf04c6b142fd6ed6fa2751ac6658ff0ee368d3b24adc","title":"","text":"kube::release::package_kube_manifests_tarball & kube::util::wait-for-jobs || { kube::log::error \"previous tarball phase failed\"; return 1; } kube::release::package_full_tarball & # _full depends on all the previous phases kube::release::package_final_tarball & # _final depends on some of the previous phases kube::release::package_test_tarball & # _test doesn't depend on anything kube::util::wait-for-jobs || { kube::log::error \"previous tarball phase failed\"; return 1; } }"} {"_id":"doc-en-kubernetes-1274bdfc7e0b08f1ac0effa73c36d9c3e4c0d18adc2061de0a80013af14e606a","title":"","text":"kube::release::create_tarball \"${package_name}\" \"${release_stage}/..\" } # This is all the stuff you need to run/install kubernetes. 
This includes: # - precompiled binaries for client # This is all the platform-independent stuff you need to run/install kubernetes. # Arch-specific binaries will need to be downloaded separately (possibly by # using the bundled cluster/get-kube-binaries.sh script). # Included in this tarball: # - Cluster spin up/down scripts and configs for various cloud providers # - tarballs for server binary and salt configs that are ready to be uploaded # - Tarballs for salt configs that are ready to be uploaded # to master by whatever means appropriate. function kube::release::package_full_tarball() { kube::log::status \"Building tarball: full\" # - Examples (which may or may not still work) # - The remnants of the docs/ directory function kube::release::package_final_tarball() { kube::log::status \"Building tarball: final\" # This isn't a \"full\" tarball anymore, but the release lib still expects # artifacts under \"full/kubernetes/\" local release_stage=\"${RELEASE_STAGE}/full/kubernetes\" rm -rf \"${release_stage}\" mkdir -p \"${release_stage}\" # Copy all of the client binaries in here, but not test or server binaries. # The server binaries are included with the server binary tarball. local platform for platform in \"${KUBE_CLIENT_PLATFORMS[@]}\"; do local client_bins=(\"${KUBE_CLIENT_BINARIES[@]}\") if [[ \"${platform%/*}\" == \"windows\" ]]; then client_bins=(\"${KUBE_CLIENT_BINARIES_WIN[@]}\") fi mkdir -p \"${release_stage}/platforms/${platform}\" cp \"${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}\" \"${release_stage}/platforms/${platform}\" done mkdir -p \"${release_stage}/client\" cat < \"${release_stage}/client/README\" Client binaries are no longer included in the Kubernetes final tarball. Run cluster/get-kube-binaries.sh to download client and server binaries. EOF # We want everything in /cluster except saltbase. 
That is only needed on the # server."} {"_id":"doc-en-kubernetes-3170fb7f02dc7755b373285863ac00ddaa47c08487393e642e3a3dca95eb7593","title":"","text":"mkdir -p \"${release_stage}/server\" cp \"${RELEASE_DIR}/kubernetes-salt.tar.gz\" \"${release_stage}/server/\" cp \"${RELEASE_DIR}\"/kubernetes-server-*.tar.gz \"${release_stage}/server/\" cp \"${RELEASE_DIR}/kubernetes-manifests.tar.gz\" \"${release_stage}/server/\" cat < \"${release_stage}/server/README\" Server binary tarballs are no longer included in the Kubernetes final tarball. Run cluster/get-kube-binaries.sh to download client and server binaries. EOF mkdir -p \"${release_stage}/third_party\" cp -R \"${KUBE_ROOT}/third_party/htpasswd\" \"${release_stage}/third_party/htpasswd\""} {"_id":"doc-en-kubernetes-192782e70865eb918e6c7a61365c424de8a811f00bc804547686eb681701b61b","title":"","text":"\"os/exec\" \"path\" \"path/filepath\" \"reflect\" \"strconv\" \"strings\" \"syscall\""} {"_id":"doc-en-kubernetes-e29eb9f5b45e979e922a5ceacfcaae023c569a02eadb22832c4bb9888167d1c6","title":"","text":"cmd.Cmd.Stdout = outfile cmd.Cmd.Stderr = outfile // Killing the sudo command should kill the server as well. attrs := &syscall.SysProcAttr{} // Hack to set linux-only field without build tags. 
deathSigField := reflect.ValueOf(attrs).Elem().FieldByName(\"Pdeathsig\") if deathSigField.IsValid() { deathSigField.Set(reflect.ValueOf(syscall.SIGKILL)) } else { cmdErrorChan <- fmt.Errorf(\"Failed to set Pdeathsig field (non-linux build)\") return } cmd.Cmd.SysProcAttr = attrs // Run the command err = cmd.Run() if err != nil {"} {"_id":"doc-en-kubernetes-df1a74591f1bae243d9b31a68c5d4dc241b531e54cf4b2a56410b3237791fe1c","title":"","text":"const timeout = 10 * time.Second for _, signal := range []string{\"-TERM\", \"-KILL\"} { glog.V(2).Infof(\"Killing process %d (%s) with %s\", pid, name, signal) _, err := exec.Command(\"sudo\", \"kill\", signal, strconv.Itoa(pid)).Output() cmd := exec.Command(\"sudo\", \"kill\", signal, strconv.Itoa(pid)) // Run the 'kill' command in a separate process group so sudo doesn't ignore it cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true} _, err := cmd.Output() if err != nil { glog.Errorf(\"Error signaling process %d (%s) with %s: %v\", pid, name, signal, err) continue"} {"_id":"doc-en-kubernetes-2c34673f1128ad4c99654a24149be721ebdb0a8507b6771ad2341f1f5725ff8a","title":"","text":"cmd.Cmd.Stdout = outfile cmd.Cmd.Stderr = outfile // Killing the sudo command should kill the server as well. // Death of this test process should kill the server as well. attrs := &syscall.SysProcAttr{} // Hack to set linux-only field without build tags. 
deathSigField := reflect.ValueOf(attrs).Elem().FieldByName(\"Pdeathsig\") if deathSigField.IsValid() { deathSigField.Set(reflect.ValueOf(syscall.SIGKILL)) deathSigField.Set(reflect.ValueOf(syscall.SIGTERM)) } else { cmdErrorChan <- fmt.Errorf(\"Failed to set Pdeathsig field (non-linux build)\") return"} {"_id":"doc-en-kubernetes-d48bc27e64c54a9f5e850899b2021dc0e74990571c18f5b2a72e072eb3b5c29d","title":"","text":"const timeout = 10 * time.Second for _, signal := range []string{\"-TERM\", \"-KILL\"} { glog.V(2).Infof(\"Killing process %d (%s) with %s\", pid, name, signal) _, err := exec.Command(\"sudo\", \"kill\", signal, strconv.Itoa(pid)).Output() cmd := exec.Command(\"sudo\", \"kill\", signal, strconv.Itoa(pid)) // Run the 'kill' command in a separate process group so sudo doesn't ignore it attrs := &syscall.SysProcAttr{} // Hack to set unix-only field without build tags. setpgidField := reflect.ValueOf(attrs).Elem().FieldByName(\"Setpgid\") if setpgidField.IsValid() { setpgidField.Set(reflect.ValueOf(true)) } else { return fmt.Errorf(\"Failed to set Setpgid field (non-unix build)\") } cmd.SysProcAttr = attrs _, err := cmd.Output() if err != nil { glog.Errorf(\"Error signaling process %d (%s) with %s: %v\", pid, name, signal, err) continue"} {"_id":"doc-en-kubernetes-391fb937369b2ba03bdf1df5a52593de1b3863c8eb27d79ca1528d7547f084a1","title":"","text":"workerControllerJson := mkpath(\"storm-worker-controller.json\") nsFlag := fmt.Sprintf(\"--namespace=%v\", ns) zookeeperPod := \"zookeeper\" nimbusPod := \"nimbus\" By(\"starting Zookeeper\") framework.RunKubectlOrDie(\"create\", \"-f\", zookeeperPodJson, nsFlag) framework.RunKubectlOrDie(\"create\", \"-f\", zookeeperServiceJson, nsFlag) err := framework.WaitForPodNameRunningInNamespace(c, zookeeperPod, ns) err := f.WaitForPodRunningSlow(zookeeperPod) Expect(err).NotTo(HaveOccurred()) By(\"checking if zookeeper is up and running\")"} 
{"_id":"doc-en-kubernetes-870f2abc301ab35fecb026005d1e81e3dd7e93b5f8320215cb2b83d0f9d351c1","title":"","text":"By(\"starting Nimbus\") framework.RunKubectlOrDie(\"create\", \"-f\", nimbusPodJson, nsFlag) framework.RunKubectlOrDie(\"create\", \"-f\", nimbusServiceJson, nsFlag) err = framework.WaitForPodNameRunningInNamespace(c, \"nimbus\", ns) err = f.WaitForPodRunningSlow(nimbusPod) Expect(err).NotTo(HaveOccurred()) err = framework.WaitForEndpoint(c, ns, \"nimbus\")"} {"_id":"doc-en-kubernetes-7feae0898f875f506bbb7d894001c3c90dbca14aa42b5df29ea0ba1452f9e2d3","title":"","text":"# OS options for minions KUBE_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION:-jessie}\" KUBE_MASTER_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION}\" KUBE_NODE_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION}\" MASTER_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION}\" NODE_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION}\" KUBE_NODE_IMAGE=\"${KUBE_NODE_IMAGE:-}\" COREOS_CHANNEL=\"${COREOS_CHANNEL:-alpha}\" CONTAINER_RUNTIME=\"${KUBE_CONTAINER_RUNTIME:-docker}\""} {"_id":"doc-en-kubernetes-50a4e1d31fc54a627222a6d12c44ae5160b4d1683babc7bc04e43fb6be5aa895","title":"","text":"if err := p.SyncPVCs(pet); err != nil { return err } if exists { // if pet failed - we need to remove old one because of consistent naming if exists && realPet.pod.Status.Phase == api.PodFailed { glog.V(4).Infof(\"Delete evicted pod %v\", realPet.pod.Name) if err := p.petClient.Delete(realPet); err != nil { return err } } else if exists { if !p.isHealthy(realPet.pod) { glog.Infof(\"PetSet %v waiting on unhealthy pet %v\", pet.parent.Name, realPet.pod.Name) }"} {"_id":"doc-en-kubernetes-41c7831f97f42c1048ac0edcaac53f5d4ca7dec7dba6b8fc7630067fc506bd02","title":"","text":"\"k8s.io/kubernetes/pkg/controller/petset\" \"k8s.io/kubernetes/pkg/labels\" \"k8s.io/kubernetes/pkg/runtime\" \"k8s.io/kubernetes/pkg/types\" \"k8s.io/kubernetes/pkg/util/sets\" \"k8s.io/kubernetes/pkg/util/wait\" utilyaml \"k8s.io/kubernetes/pkg/util/yaml\" \"k8s.io/kubernetes/pkg/watch\" 
\"k8s.io/kubernetes/test/e2e/framework\" ) const ( petsetPoll = 10 * time.Second // Some pets install base packages via wget petsetTimeout = 10 * time.Minute petsetTimeout = 10 * time.Minute // Timeout for pet pods to change state petPodTimeout = 5 * time.Minute zookeeperManifestPath = \"test/e2e/testing-manifests/petset/zookeeper\" mysqlGaleraManifestPath = \"test/e2e/testing-manifests/petset/mysql-galera\" redisManifestPath = \"test/e2e/testing-manifests/petset/redis\""} {"_id":"doc-en-kubernetes-7e03950edfab3b0154b44aac0b275c03d68ad4cb0bab416f009856c0fb20bac6","title":"","text":"}) }) var _ = framework.KubeDescribe(\"Pet set recreate [Slow] [Feature:PetSet]\", func() { f := framework.NewDefaultFramework(\"pet-set-recreate\") var c *client.Client var ns string labels := map[string]string{ \"foo\": \"bar\", \"baz\": \"blah\", } headlessSvcName := \"test\" podName := \"test-pod\" petSetName := \"web\" petPodName := \"web-0\" BeforeEach(func() { framework.SkipUnlessProviderIs(\"gce\", \"vagrant\") By(\"creating service \" + headlessSvcName + \" in namespace \" + f.Namespace.Name) headlessService := createServiceSpec(headlessSvcName, \"\", true, labels) _, err := f.Client.Services(f.Namespace.Name).Create(headlessService) framework.ExpectNoError(err) c = f.Client ns = f.Namespace.Name }) AfterEach(func() { if CurrentGinkgoTestDescription().Failed { dumpDebugInfo(c, ns) } By(\"Deleting all petset in ns \" + ns) deleteAllPetSets(c, ns) }) It(\"should recreate evicted petset\", func() { By(\"looking for a node to schedule pet set and pod\") nodes := framework.GetReadySchedulableNodesOrDie(f.Client) node := nodes.Items[0] By(\"creating pod with conflicting port in namespace \" + f.Namespace.Name) conflictingPort := api.ContainerPort{HostPort: 21017, ContainerPort: 21017, Name: \"conflict\"} pod := &api.Pod{ ObjectMeta: api.ObjectMeta{ Name: podName, }, Spec: api.PodSpec{ Containers: []api.Container{ { Name: \"nginx\", Image: \"gcr.io/google_containers/nginx-slim:0.7\", 
Ports: []api.ContainerPort{conflictingPort}, }, }, NodeName: node.Name, }, } pod, err := f.Client.Pods(f.Namespace.Name).Create(pod) framework.ExpectNoError(err) By(\"creating petset with conflicting port in namespace \" + f.Namespace.Name) ps := newPetSet(petSetName, f.Namespace.Name, headlessSvcName, 1, nil, nil, labels) petContainer := &ps.Spec.Template.Spec.Containers[0] petContainer.Ports = append(petContainer.Ports, conflictingPort) ps.Spec.Template.Spec.NodeName = node.Name _, err = f.Client.Apps().PetSets(f.Namespace.Name).Create(ps) framework.ExpectNoError(err) By(\"waiting until pod \" + podName + \" will start running in namespace \" + f.Namespace.Name) if err := f.WaitForPodRunning(podName); err != nil { framework.Failf(\"Pod %v did not start running: %v\", podName, err) } var initialPetPodUID types.UID By(\"waiting until pet pod \" + petPodName + \" will be recreated and deleted at least once in namespace \" + f.Namespace.Name) w, err := f.Client.Pods(f.Namespace.Name).Watch(api.SingleObject(api.ObjectMeta{Name: petPodName})) framework.ExpectNoError(err) // we need to get UID from pod in any state and wait until pet set controller will remove pod at least once _, err = watch.Until(petPodTimeout, w, func(event watch.Event) (bool, error) { pod := event.Object.(*api.Pod) switch event.Type { case watch.Deleted: framework.Logf(\"Observed delete event for pet pod %v in namespace %v\", pod.Name, pod.Namespace) if initialPetPodUID == \"\" { return false, nil } return true, nil } framework.Logf(\"Observed pet pod in namespace: %v, name: %v, uid: %v, status phase: %v.
Waiting for petset controller to delete.\", pod.Namespace, pod.Name, pod.UID, pod.Status.Phase) initialPetPodUID = pod.UID return false, nil }) if err != nil { framework.Failf(\"Pod %v expected to be re-created at least once\", petPodName) } By(\"removing pod with conflicting port in namespace \" + f.Namespace.Name) err = f.Client.Pods(f.Namespace.Name).Delete(pod.Name, api.NewDeleteOptions(0)) framework.ExpectNoError(err) By(\"waiting until pet pod \" + petPodName + \" will be recreated in namespace \" + f.Namespace.Name + \" and will be in running state\") // we may catch delete event, that's why we are waiting for running phase like this, and not with watch.Until Eventually(func() error { petPod, err := f.Client.Pods(f.Namespace.Name).Get(petPodName) if err != nil { return err } if petPod.Status.Phase != api.PodRunning { return fmt.Errorf(\"Pod %v is not in running phase: %v\", petPod.Name, petPod.Status.Phase) } else if petPod.UID == initialPetPodUID { return fmt.Errorf(\"Pod %v wasn't recreated: %v == %v\", petPod.Name, petPod.UID, initialPetPodUID) } return nil }, petPodTimeout, 2*time.Second).Should(BeNil()) }) }) func dumpDebugInfo(c *client.Client, ns string) { pl, _ := c.Pods(ns).List(api.ListOptions{LabelSelector: labels.Everything()}) for _, p := range pl.Items {"} {"_id":"doc-en-kubernetes-372696e4ae48d7218ba6a463315c3689a491efa5d974d204606a29e2317863e9","title":"","text":"return err } if waitForReplicas != nil { watchOptions := api.ListOptions{FieldSelector: fields.OneTermEqualSelector(\"metadata.name\", name), ResourceVersion: updatedResourceVersion} watcher, err := scaler.c.ReplicationControllers(namespace).Watch(watchOptions) checkRC := func(rc *api.ReplicationController) bool { if uint(rc.Spec.Replicas) != newSize { // the size is changed by another party. Don't need to wait for the new change to complete.
return true } return rc.Status.ObservedGeneration >= rc.Generation && rc.Status.Replicas == rc.Spec.Replicas } // If number of replicas doesn't change, then the update may not even // be sent to the underlying database (we don't send no-op changes). // In such case, the resource version will have the value of the most // recent update (which may be far in the past) so we may get \"too old // RV\" error from watch or potentially no ReplicationController events // will be delivered, since it may already be in the expected state. // To protect from these two, we first issue Get() to ensure that we // are not already in the expected state. currentRC, err := scaler.c.ReplicationControllers(namespace).Get(name) if err != nil { return err } _, err = watch.Until(waitForReplicas.Timeout, watcher, func(event watch.Event) (bool, error) { if event.Type != watch.Added && event.Type != watch.Modified { return false, nil if !checkRC(currentRC) { watchOptions := api.ListOptions{ FieldSelector: fields.OneTermEqualSelector(\"metadata.name\", name), ResourceVersion: updatedResourceVersion, } rc := event.Object.(*api.ReplicationController) if uint(rc.Spec.Replicas) != newSize { // the size is changed by another party. Don't need to wait for the new change to complete.
return true, nil watcher, err := scaler.c.ReplicationControllers(namespace).Watch(watchOptions) if err != nil { return err } return rc.Status.ObservedGeneration >= rc.Generation && rc.Status.Replicas == rc.Spec.Replicas, nil }) if err == wait.ErrWaitTimeout { return fmt.Errorf(\"timed out waiting for %q to be synced\", name) _, err = watch.Until(waitForReplicas.Timeout, watcher, func(event watch.Event) (bool, error) { if event.Type != watch.Added && event.Type != watch.Modified { return false, nil } return checkRC(event.Object.(*api.ReplicationController)), nil }) if err == wait.ErrWaitTimeout { return fmt.Errorf(\"timed out waiting for %q to be synced\", name) } return err } return err } return nil }"} {"_id":"doc-en-kubernetes-057311196765cb0e7f92e30190d09f1e0654a8d0171a68f24d80a16bb92963fb","title":"","text":"}, }, StopError: nil, ExpectedActions: []string{\"get\", \"list\", \"get\", \"update\", \"watch\", \"delete\"}, ExpectedActions: []string{\"get\", \"list\", \"get\", \"update\", \"get\", \"delete\"}, }, { Name: \"NoOverlapping\","} {"_id":"doc-en-kubernetes-a6858dc85a4620c11bc5ae5f281d6f3be7f0f950858388f6b0ac32cd30ad20a0","title":"","text":"}, }, StopError: nil, ExpectedActions: []string{\"get\", \"list\", \"get\", \"update\", \"watch\", \"delete\"}, ExpectedActions: []string{\"get\", \"list\", \"get\", \"update\", \"get\", \"delete\"}, }, { Name: \"OverlappingError\","} {"_id":"doc-en-kubernetes-1078fcfa6f04862105f5d4b723fa0a4691f40d1f604ef88f619c7cde3ff9573a","title":"","text":"healthchecker.mutationRequestChannel <- req } func updateServiceListener(serviceName types.NamespacedName, listenPort int, addOrDelete bool) bool { func updateServiceListener(serviceName types.NamespacedName, listenPort int, add bool) bool { responseChannel := make(chan bool) req := &proxyListenerRequest{ serviceName: serviceName, listenPort: uint16(listenPort), add: addOrDelete, add: add, responseChannel: responseChannel, } healthchecker.listenerRequestChannel <- req"} 
{"_id":"doc-en-kubernetes-bb51bc9254123b7761a574e47491d69dfea23818243c44d8c206f4ea80342050","title":"","text":"return updateServiceListener(serviceName, listenPort, true) } // DeleteServiceListener Request addition of a listener for a service's health check // DeleteServiceListener Request deletion of a listener for a service's health check func DeleteServiceListener(serviceName types.NamespacedName, listenPort int) bool { return updateServiceListener(serviceName, listenPort, false) }"} {"_id":"doc-en-kubernetes-31df6b808d4b9905f46e876ffbf52f475d37394499a6c75b7167b29566548cd5","title":"","text":"s, ok := h.serviceEndpointsMap.Get(serviceName) if !ok { glog.V(4).Infof(\"Service %s not found or has no local endpoints\", serviceName) sendHealthCheckResponse(rw, http.StatusServiceUnavailable, \"No Service Endpoints Not Found\") sendHealthCheckResponse(rw, http.StatusServiceUnavailable, \"No Service Endpoints Found\") return } numEndpoints := len(*s.(*serviceEndpointsList).endpoints)"} {"_id":"doc-en-kubernetes-2bc4a8495519bd2f663285c4c76327ddf902f29dc9fccc139a4ab84a1e64e466","title":"","text":"func run(s *options.KubeletServer, kubeDeps *kubelet.KubeletDeps) (err error) { // TODO: this should be replaced by a --standalone flag standaloneMode := (len(s.APIServerList) == 0) standaloneMode := (len(s.APIServerList) == 0 && !s.RequireKubeConfig) if s.ExitOnLockContention && s.LockFilePath == \"\" { return errors.New(\"cannot exit on lock file contention: no lock file specified\")"} {"_id":"doc-en-kubernetes-01ef60197d8e84b4347b7181ab26fe925d485cbb60047f66e3e057eda471c6aa","title":"","text":"go nc.podController.Run(wait.NeverStop) go nc.daemonSetController.Run(wait.NeverStop) if nc.internalPodInformer != nil { nc.internalPodInformer.Run(wait.NeverStop) go nc.internalPodInformer.Run(wait.NeverStop) } // Incorporate the results of node status pushed from kubelet to master."} 
{"_id":"doc-en-kubernetes-42850ff9756da0c17fa67f011430d3ea785fbe082db3bbd7494ce8f81af3da06","title":"","text":"Items []Foo `json:\"items\"` } var _ = Describe(\"ThirdParty resources\", func() { // This test is marked flaky pending namespace controller observing dynamic creation of new third party types. var _ = Describe(\"ThirdParty resources [Flaky] [Disruptive]\", func() { f := framework.NewDefaultFramework(\"thirdparty\")"} {"_id":"doc-en-kubernetes-210cb30b15b726d52d6e06c7e590c95dff272da12af70f7a2b2fc3f8c3b56269","title":"","text":"return result, len(result) > 0 } // TODO: Most probably splitting this method into a separate thread will visibly // improve throughput of our watch machinery. So what we should do is to: // - OnEvent handler simply put an element to channel // - processEvent be another goroutine processing events from that channel // Additionally, if we make this channel buffered, cacher will be more resistant // to single watchers being slow - see cacheWatcher::add method. func (c *Cacher) processEvent(event watchCacheEvent) { triggerValues, supported := c.triggerValues(&event)"} {"_id":"doc-en-kubernetes-86a694806be89e8df4cda5c663e442b7f04fc7c5364abbab95b27dea443c78f9","title":"","text":"// OK, block sending, but only for up to 5 seconds. // cacheWatcher.add is called very often, so arrange // to reuse timers instead of constantly allocating.
startTime := time.Now() const timeout = 5 * time.Second t, ok := timerPool.Get().(*time.Timer) if ok {"} {"_id":"doc-en-kubernetes-24341fa31b308bd04c31a6b67cb937fee7c74a4b0409372f0ec89eeb9c6101fe","title":"","text":"c.forget(false) c.stop() } glog.V(2).Infof(\"cacheWatcher add function blocked processing for %v\", time.Since(startTime)) } func (c *cacheWatcher) sendWatchCacheEvent(event watchCacheEvent) {"} {"_id":"doc-en-kubernetes-5d2d33bf722f9b016cc51dd49e5cd27969538c609e9777516c02c37bf189f2f6","title":"","text":"import ( \"fmt\" \"net/http\" \"reflect\" \"sync\" \"sync/atomic\" \"time\""} {"_id":"doc-en-kubernetes-74e2beac7ce474c9f671ffedcb405b5c147d9190685e423dc39ffb2a72a93cd0","title":"","text":"// Injectable for testing. Send the event down the outgoing channel. emit func(watch.Event) // HighWaterMarks for performance debugging. incomingHWM HighWaterMark outgoingHWM HighWaterMark cache etcdCache }"} {"_id":"doc-en-kubernetes-36a4bbe9bc7844d84dd648e4ba69c465bff9f20df0ce1c03d13a8aa5440c66cf","title":"","text":"cancel: nil, } w.emit = func(e watch.Event) { if curLen := int64(len(w.outgoing)); w.outgoingHWM.Update(curLen) { // Monitor if this gets backed up, and how much. glog.V(1).Infof(\"watch (%v): %v objects queued in outgoing channel.\", reflect.TypeOf(e.Object).String(), curLen) } // Give up on user stop, without this we leak a lot of goroutines in tests. select { case w.outgoing <- e:"} {"_id":"doc-en-kubernetes-57249fa8ab0d6c3002c064ffc44013b5601b842441a5ba5fc4decb20902746d1","title":"","text":"incoming <- &copied } var ( watchChannelHWM HighWaterMark ) // translate pulls stuff from etcd, converts, and pushes out the outgoing channel. Meant to be // called as a goroutine. 
func (w *etcdWatcher) translate() {"} {"_id":"doc-en-kubernetes-c526fc68ea52a14e8a000d72c3d96d4be1fb045d964c47975187e146c173bef6","title":"","text":"return case res, ok := <-w.etcdIncoming: if ok { if curLen := int64(len(w.etcdIncoming)); watchChannelHWM.Update(curLen) { if curLen := int64(len(w.etcdIncoming)); w.incomingHWM.Update(curLen) { // Monitor if this gets backed up, and how much. glog.V(2).Infof(\"watch: %v objects queued in channel.\", curLen) glog.V(1).Infof(\"watch: %v objects queued in incoming channel.\", curLen) } w.sendResult(res) }"} {"_id":"doc-en-kubernetes-8cdafe1ddcf562ec3b12e99276abdc75619a42aef3c9455ecef1a087c95698f7","title":"","text":"if res == nil { continue } if len(wc.resultChan) == outgoingBufSize { glog.Warningf(\"Fast watcher, slow processing. Number of buffered events: %d.\"+ \"Probably caused by slow dispatching events to watchers\", outgoingBufSize) } // If user couldn't receive results fast enough, we also block incoming events from watcher. // Because storing events in local will cause more memory usage. // The worst case would be closing the fast watcher."} {"_id":"doc-en-kubernetes-b6d9a5ba520878d10bc162a27089c2a79a0bcedd5e25a741744bec539639e68b","title":"","text":"func (wc *watchChan) sendEvent(e *event) { if len(wc.incomingEventChan) == incomingBufSize { glog.V(2).Infof(\"Fast watcher, slow processing. Number of buffered events: %d.\"+ glog.Warningf(\"Fast watcher, slow processing. Number of buffered events: %d.\"+ \"Probably caused by slow decoding, user not receiving fast, or other processing logic\", incomingBufSize) }"} {"_id":"doc-en-kubernetes-fdb9b5a89d9402a08714bc5c0e58e67e8e808cb40e473fcd93dd08c43c802034","title":"","text":"} framework.ExpectNoError(framework.RunRC(controllerRcConfig)) // Make sure endpoints are propagated. // TODO(piosz): replace sleep with endpoints watch. time.Sleep(10 * time.Second) // Wait for endpoints to propagate for the controller service. 
framework.ExpectNoError(framework.WaitForServiceEndpointsNum( c, ns, controllerName, 1, startServiceInterval, startServiceTimeout)) }"} {"_id":"doc-en-kubernetes-514977c4e8486ea4f5edac451ccef2485ced7d5e64590c4ea5fe28fd4994c1e9","title":"","text":"} // errorBadPodsStates creates an error message of basic info of bad pods for debugging. func errorBadPodsStates(badPods []api.Pod, desiredPods int, ns string, timeout time.Duration) string { errStr := fmt.Sprintf(\"%d / %d pods in namespace %q are NOT in the desired state in %v\\n\", len(badPods), desiredPods, ns, timeout) func errorBadPodsStates(badPods []api.Pod, desiredPods int, ns, desiredState string, timeout time.Duration) string { errStr := fmt.Sprintf(\"%d / %d pods in namespace %q are NOT in %s state in %v\\n\", len(badPods), desiredPods, ns, desiredState, timeout) // Print bad pods info only if there are fewer than 10 bad pods if len(badPods) > 10 { return errStr + \"There are too many bad pods. Please check log for details.\""} {"_id":"doc-en-kubernetes-0837eebf78e6dc66a6c73a0006dcdc31ec68269dcf3cbc6ccb8390dc4c4a5f16","title":"","text":"// pods have been created. 
func WaitForPodsSuccess(c *client.Client, ns string, successPodLabels map[string]string, timeout time.Duration) error { successPodSelector := labels.SelectorFromSet(successPodLabels) start, badPods := time.Now(), []api.Pod{} start, badPods, desiredPods := time.Now(), []api.Pod{}, 0 if wait.PollImmediate(30*time.Second, timeout, func() (bool, error) { podList, err := c.Pods(ns).List(api.ListOptions{LabelSelector: successPodSelector})"} {"_id":"doc-en-kubernetes-d9b3c7a637c6a7767640c61bc93fe2966c80e59b68963ae53e2ca280ff34e6b7","title":"","text":"return true, nil } badPods = []api.Pod{} desiredPods = len(podList.Items) for _, pod := range podList.Items { if pod.Status.Phase != api.PodSucceeded { badPods = append(badPods, pod)"} {"_id":"doc-en-kubernetes-8d588c5438db40949bffbe28c513a1ac0500827391efdff44af3c63df145bf0c","title":"","text":"}) != nil { logPodStates(badPods) LogPodsWithLabels(c, ns, successPodLabels) return fmt.Errorf(\"Not all pods in namespace %q are successful within %v\", ns, timeout) return errors.New(errorBadPodsStates(badPods, desiredPods, ns, \"SUCCESS\", timeout)) } return nil }"} {"_id":"doc-en-kubernetes-66d708e834da016ea50597c47822a9006d82b327e89f45caf258e48aff3f1fae","title":"","text":"logPodStates(badPods) return false, nil }) != nil { return errors.New(errorBadPodsStates(badPods, desiredPods, ns, timeout)) return errors.New(errorBadPodsStates(badPods, desiredPods, ns, \"RUNNING and READY\", timeout)) } wg.Wait() if waitForSuccessError != nil {"} {"_id":"doc-en-kubernetes-95036b5248838550bd8fdef086828b184f72225af2a5e337306341be0d36cc51","title":"","text":"} // Create the container. 
fClock.SetTime(time.Now()) fClock.SetTime(time.Now().Add(-1 * time.Hour)) *expected.CreatedAt = fClock.Now().Unix() id, err := ds.CreateContainer(\"sandboxid\", config, sConfig) // Set the id manually since we don't know the id until it's created."} {"_id":"doc-en-kubernetes-e66464d4e02cf3b6f4ea48c9b2166fc7c0c9bed79da187ff6f2f05dadcdeb1bb","title":"","text":"assert.Equal(t, expected, status) // Advance the clock and stop the container. fClock.SetTime(time.Now()) fClock.SetTime(time.Now().Add(1 * time.Hour)) *expected.FinishedAt = fClock.Now().Unix() *expected.State = runtimeApi.ContainerState_EXITED *expected.Reason = \"Completed\""} {"_id":"doc-en-kubernetes-2d121d6789e8a78ee133ee52af88fcc42b73b72dafb647ddc4a8d9bfd25aa559","title":"","text":") func newTestDockerSevice() (*dockerService, *dockertools.FakeDockerClient, *clock.FakeClock) { c := dockertools.NewFakeDockerClient() fakeClock := clock.NewFakeClock(time.Time{}) c := dockertools.NewFakeDockerClientWithClock(fakeClock) return &dockerService{client: c}, c, fakeClock }"} {"_id":"doc-en-kubernetes-a2c7c64765547e2371eadafa8e0b33d0785044878c6b1bac0688ad4c84d7b702","title":"","text":"// `HostIP: s.API.AdvertiseAddrs[0]`, if there is only one address` {Name: \"http\", ContainerPort: 9898, HostPort: 9898}, }, SecurityContext: &api.SecurityContext{ SELinuxOptions: &api.SELinuxOptions{ // TODO: This implies our discovery container is not being restricted by // SELinux. This is not optimal and would be nice to adjust in future // so it can read /tmp/secret, but for now this avoids recommending // setenforce 0 system-wide. Type: \"unconfined_t\", }, }, }}, Volumes: []api.Volume{{ Name: kubeDiscoverySecretName,"} {"_id":"doc-en-kubernetes-e7c1004ed4b51780834c1318cb64d19b227e5f08774395cc829a6cf080457639","title":"","text":"} const yamlSeparator = \"\\n---\" const separator = \"---\\n\" const separator = \"---\" // splitYAMLDocument is a bufio.SplitFunc for splitting YAML streams into individual documents. 
func splitYAMLDocument(data []byte, atEOF bool) (advance int, token []byte, err error) {"} {"_id":"doc-en-kubernetes-f941b63bd56e132f75194e8a86bff63ce56626198217e90cbe7f0c4a350e196b","title":"","text":"return nil, err } if string(line) == separator || err == io.EOF { sep := len([]byte(separator)) if i := bytes.Index(line, []byte(separator)); i == 0 { // We have a potential document terminator i += sep after := line[i:] if len(strings.TrimRightFunc(string(after), unicode.IsSpace)) == 0 { if buffer.Len() != 0 { return buffer.Bytes(), nil } if err == io.EOF { return nil, err } } } if err == io.EOF { if buffer.Len() != 0 { // If we're at EOF, we have a final, non-terminated line. Return it. return buffer.Bytes(), nil } if err == io.EOF { return nil, err } } else { buffer.Write(line) return nil, err } buffer.Write(line) } }"} {"_id":"doc-en-kubernetes-f9b2e9c57cbd98427c8fa3760507913827104b2fcd532229af2b842521581bcc","title":"","text":"s := NewYAMLToJSONDecoder(bytes.NewReader([]byte(`--- stuff: 1 --- --- `))) obj := generic{} if err := s.Decode(&obj); err != nil {"} {"_id":"doc-en-kubernetes-ffaf8a140cd35bed91f9d6527505813dcbbff2c8c825995a926a1cd96cd916a3","title":"","text":"obj := generic{} err := s.Decode(&obj) if err == nil { t.Fatal(\"expected error with yaml: prefix, got no error\") t.Fatal(\"expected error with yaml: violate, got no error\") } fmt.Printf(\"err: %s\\n\", err.Error()) if !strings.HasPrefix(err.Error(), \"yaml: line 1:\") { t.Fatalf(\"expected %q to have 'yaml: line 1:' prefix\", err.Error()) if !strings.HasPrefix(err.Error(), \"yaml: line 2:\") { t.Fatalf(\"expected %q to have 'yaml: line 2:' found a tab character\", err.Error()) } }"} {"_id":"doc-en-kubernetes-adbe884ea999bc3ee0e9a676bd0db2789f2f1b3dd692eba314f33a7580e46f20","title":"","text":"docker build -t $(PREFIX)/logs-generator:$(TAG) . 
push: gcloud docker push $(PREFIX)/logs-generator:$(TAG) gcloud docker -- push $(PREFIX)/logs-generator:$(TAG) clean: rm -f logs-generator No newline at end of file"} {"_id":"doc-en-kubernetes-9fe93ec340d941191bad7e64e1e623bc30cb214b725e2645b9d4a0ca77ab051f","title":"","text":"RUN mkdir -p ${K8S_PATCHED_GOROOT} && curl -sSL https://github.com/golang/go/archive/go${K8S_PATCHED_GOLANG_VERSION}.tar.gz | tar -xz -C ${K8S_PATCHED_GOROOT} --strip-components=1 # We need a patched go1.7.1 for linux/arm (https://github.com/kubernetes/kubernetes/issues/29904) # We need go1.7.1 for all darwin builds (https://github.com/kubernetes/kubernetes/issues/32999) COPY golang-patches/CL28857-go1.7.1-luxas.patch ${K8S_PATCHED_GOROOT}/ RUN cd ${K8S_PATCHED_GOROOT} && patch -p1 < CL28857-go1.7.1-luxas.patch && cd src && GOROOT_FINAL=${K8S_PATCHED_GOROOT} GOROOT_BOOTSTRAP=/usr/local/go ./make.bash && for platform in linux/arm; do GOOS=${platform%/*} GOARCH=${platform##*/} GOROOT=${K8S_PATCHED_GOROOT} go install std; done && for platform in linux/arm darwin/386 darwin/amd64; do GOOS=${platform%/*} GOARCH=${platform##*/} GOROOT=${K8S_PATCHED_GOROOT} go install std; done "} {"_id":"doc-en-kubernetes-f90d092e8898bb6ab708e74f476ac00fff59c5eb0d1f98823e935c618b4fd572","title":"","text":" v1.6.3-7 v1.6.3-8 "} {"_id":"doc-en-kubernetes-ce9fe7343dfe52de71b0da108f81dc55cb1baa10cdfaafd5f03348b8b077a888","title":"","text":"if [[ ${platform} == \"linux/arm\" ]]; then export CGO_ENABLED=1 export CC=arm-linux-gnueabi-gcc # See https://github.com/kubernetes/kubernetes/issues/29904 export GOROOT=${K8S_PATCHED_GOROOT} elif [[ ${platform} == \"linux/arm64\" ]]; then export CGO_ENABLED=1"} {"_id":"doc-en-kubernetes-eead7ff778238e376af51b20160a78431c2c191a6bb6e41fad32640712ed95ee","title":"","text":"elif [[ ${platform} == \"linux/ppc64le\" ]]; then export CGO_ENABLED=1 export CC=powerpc64le-linux-gnu-gcc elif [[ ${platform} == \"darwin/\"* ]]; then # See https://github.com/kubernetes/kubernetes/issues/32999 
export GOROOT=${K8S_PATCHED_GOROOT} fi fi }"} {"_id":"doc-en-kubernetes-cbb040239283ea18cf9ab32b1e75f90a0ccf38643fe4174cf3b1b2b63923c342","title":"","text":"r.setLastSyncResourceVersion(resourceVersion) resyncerrc := make(chan error, 1) cancelCh := make(chan struct{}) defer close(cancelCh) go func() { for { select { case <-resyncCh: case <-stopCh: return case <-cancelCh: return } glog.V(4).Infof(\"%s: forcing resync\", r.name) if err := r.store.Resync(); err != nil {"} {"_id":"doc-en-kubernetes-a40452c6f08ae9945d88ea6b40869c5f5b825492a2168c1f231ebaf2e7a3dcf7","title":"","text":"if lbaas.opts.ManageSecurityGroups { err := lbaas.ensureSecurityGroup(clusterName, apiService, nodes, loadbalancer) if err != nil { // cleanup what was created so far _ = lbaas.EnsureLoadBalancerDeleted(ctx, clusterName, apiService) return status, err return status, fmt.Errorf(\"Error reconciling security groups for LB service %v/%v: %v\", apiService.Namespace, apiService.Name, err) } }"} {"_id":"doc-en-kubernetes-c2e867ad64f8ea88eaaf5bc87d2784c55deff17a8b30952afe68375105853914","title":"","text":"// It doesn't update, probe or get the pod. type defaultPetHealthChecker struct{} // isHealthy returns true if the pod is running and has the // \"pod.alpha.kubernetes.io/initialized\" set to \"true\". // isHealthy returns true if the pod is ready & running. If the pod has the // \"pod.alpha.kubernetes.io/initialized\" annotation set to \"false\", pod state is ignored. func (d *defaultPetHealthChecker) isHealthy(pod *api.Pod) bool { if pod == nil || pod.Status.Phase != api.PodRunning { return false } podReady := api.IsPodReady(pod) // User may have specified a pod readiness override through a debug annotation. 
initialized, ok := pod.Annotations[StatefulSetInitAnnotation] if !ok { glog.Infof(\"StatefulSet pod %v in %v, waiting on annotation %v\", api.PodRunning, pod.Name, StatefulSetInitAnnotation) return false } b, err := strconv.ParseBool(initialized) if err != nil { return false if ok { if initAnnotation, err := strconv.ParseBool(initialized); err != nil { glog.Infof(\"Failed to parse %v annotation on pod %v: %v\", StatefulSetInitAnnotation, pod.Name, err) } else if !initAnnotation { glog.Infof(\"StatefulSet pod %v waiting on annotation %v\", pod.Name, StatefulSetInitAnnotation) podReady = initAnnotation } } return b && api.IsPodReady(pod) return podReady } // isDying returns true if the pod has a non-nil deletion timestamp. Since the"} {"_id":"doc-en-kubernetes-de363bcb06f8592aeac5873ab4b315bfdd46ba2c2d044e675df112fb6933d697","title":"","text":"ObjectMeta: api.ObjectMeta{ Name: name, Namespace: ns, Annotations: map[string]string{ \"pod.alpha.kubernetes.io/initialized\": \"false\", }, }, Spec: apps.StatefulSetSpec{ Selector: &unversioned.LabelSelector{"} {"_id":"doc-en-kubernetes-f9c6da0c05897cef87a31e1bff94acdb9b1bbedf46bb11295a486585d1c6fbb4","title":"","text":"create-static-ip \"${MASTER_NAME}-ip\" \"${REGION}\" MASTER_RESERVED_IP=$(gcloud compute addresses describe \"${MASTER_NAME}-ip\" --project \"${PROJECT}\" --region \"${REGION}\" -q --format='value(address)') KUBELET_APISERVER=\"${MASTER_RESERVED_IP}\" if [[ \"${REGISTER_MASTER_KUBELET:-}\" == \"true\" ]]; then KUBELET_APISERVER=\"${MASTER_RESERVED_IP}\" fi KUBERNETES_MASTER_NAME=\"${MASTER_RESERVED_IP}\" create-certs \"${MASTER_RESERVED_IP}\""} {"_id":"doc-en-kubernetes-50b9a7c370d3eb3f366f87e457218a29328ef302afbedabe46215d34cd1d2ece","title":"","text":"} // TODO(freehan): allow user to update loadbalancerSourceRanges allErrs = append(allErrs, ValidateImmutableField(service.Spec.LoadBalancerSourceRanges, oldService.Spec.LoadBalancerSourceRanges, field.NewPath(\"spec\", \"loadBalancerSourceRanges\"))...) 
// Only allow removing LoadBalancerSourceRanges when change service type from LoadBalancer // to non-LoadBalancer or adding LoadBalancerSourceRanges when change service type from // non-LoadBalancer to LoadBalancer. if service.Spec.Type != api.ServiceTypeLoadBalancer && oldService.Spec.Type != api.ServiceTypeLoadBalancer || service.Spec.Type == api.ServiceTypeLoadBalancer && oldService.Spec.Type == api.ServiceTypeLoadBalancer { allErrs = append(allErrs, ValidateImmutableField(service.Spec.LoadBalancerSourceRanges, oldService.Spec.LoadBalancerSourceRanges, field.NewPath(\"spec\", \"loadBalancerSourceRanges\"))...) } allErrs = append(allErrs, validateServiceFields(service)...) allErrs = append(allErrs, validateServiceAnnotations(service, oldService)...)"} {"_id":"doc-en-kubernetes-5bfe1d64d0b7c4102ab796f2658b1627731f5d5917fe593f94edf99ca37abbdf","title":"","text":"return errStr + buf.String() } // check if a Pod is controlled by a Replication Controller in the List func hasReplicationControllersForPod(rcs *api.ReplicationControllerList, pod api.Pod) bool { for _, rc := range rcs.Items { selector := labels.SelectorFromSet(rc.Spec.Selector) if selector.Matches(labels.Set(pod.ObjectMeta.Labels)) { return true } } return false } // WaitForPodsSuccess waits till all labels matching the given selector enter // the Success state. The caller is expected to only invoke this method once the // pods have been created."} {"_id":"doc-en-kubernetes-726563985792d604f7ead873d180d4375bfe5d2848ea0bb96840b774a7560980","title":"","text":"// WaitForPodsRunningReady waits up to timeout to ensure that all pods in // namespace ns are either running and ready, or failed but controlled by a // replication controller. Also, it ensures that at least minPods are running // and ready. It has separate behavior from other 'wait for' pods functions in // that it requires the list of pods on every iteration. This is useful, for // controller. 
Also, it ensures that at least minPods are running and // ready. It has separate behavior from other 'wait for' pods functions in // that it requests the list of pods on every iteration. This is useful, for // example, in cluster startup, because the number of pods increases while // waiting. // If ignoreLabels is not empty, pods matching this selector are ignored and"} {"_id":"doc-en-kubernetes-c8d6b7fdbf4360f9b76daf9ea0ccb6840d763ce0c63c1b1c723783bcc4f1f9a1","title":"","text":"}() if wait.PollImmediate(Poll, timeout, func() (bool, error) { // We get the new list of pods and replication controllers in every // iteration because more pods come online during startup and we want to // ensure they are also checked. // We get the new list of pods, replication controllers, and // replica sets in every iteration because more pods come // online during startup and we want to ensure they are also // checked. replicas, replicaOk := int32(0), int32(0) rcList, err := c.Core().ReplicationControllers(ns).List(api.ListOptions{}) if err != nil { Logf(\"Error getting replication controllers in namespace '%s': %v\", ns, err) return false, nil } replicas := int32(0) for _, rc := range rcList.Items { replicas += rc.Spec.Replicas replicaOk += rc.Status.ReadyReplicas } rsList, err := c.Extensions().ReplicaSets(ns).List(api.ListOptions{}) if err != nil { Logf(\"Error getting replication sets in namespace %q: %v\", ns, err) return false, nil } for _, rs := range rsList.Items { replicas += rs.Spec.Replicas replicaOk += rs.Status.ReadyReplicas } podList, err := c.Core().Pods(ns).List(api.ListOptions{})"} {"_id":"doc-en-kubernetes-0825a0dd692c079d1da03223ad7d08de5e2c932913ab63308109fdca310c3b15","title":"","text":"Logf(\"Error getting pods in namespace '%s': %v\", ns, err) return false, nil } nOk, replicaOk := int32(0), int32(0) nOk := int32(0) badPods = []api.Pod{} desiredPods = len(podList.Items) for _, pod := range podList.Items {"} 
{"_id":"doc-en-kubernetes-d7e752deaabeb7eff68cce3162a076c64474ef9475e2effbcac8d0307bcaf698","title":"","text":"} if res, err := testutils.PodRunningReady(&pod); res && err == nil { nOk++ if hasReplicationControllersForPod(rcList, pod) { replicaOk++ } } else { if pod.Status.Phase != api.PodFailed { Logf(\"The status of Pod %s is %s (Ready = false), waiting for it to be either Running (with Ready = true) or Failed\", pod.ObjectMeta.Name, pod.Status.Phase) badPods = append(badPods, pod) } else if !hasReplicationControllersForPod(rcList, pod) { Logf(\"Pod %s is Failed, but it's not controlled by a ReplicationController\", pod.ObjectMeta.Name) } else if _, ok := pod.Annotations[api.CreatedByAnnotation]; !ok { Logf(\"Pod %s is Failed, but it's not controlled by a controller\", pod.ObjectMeta.Name) badPods = append(badPods, pod) } //ignore failed pods that are controlled by a replication controller //ignore failed pods that are controlled by some controller } }"} {"_id":"doc-en-kubernetes-43446979b46ffe63db5b2053eff81032fd28ef8a4139cd2c6e6409992d5d9231","title":"","text":"--env \"KUBE_FASTBUILD=${KUBE_FASTBUILD:-false}\" --env \"KUBE_BUILDER_OS=${OSTYPE:-notdetected}\" --env \"KUBE_BUILD_PPC64LE=${KUBE_BUILD_PPC64LE}\" # TODO(IBM): remove --env \"KUBE_VERBOSE=${KUBE_VERBOSE}\" ) # If we have stdin we can run interactive. This allows things like 'shell.sh'"} {"_id":"doc-en-kubernetes-e7771e99215c122bd960731283c2e0a1f4c2b6dc0e12358a0a7e5dca3b25558c","title":"","text":"[[ -n ${1-} ]] || { kube::log::error_exit \"!!! Internal error. 
No platform set in kube::golang::set_platform_envs\" } # make sure we have a clean slate first kube::golang::unset_platform_envs export GOOS=${platform%/*} export GOARCH=${platform##*/}"} {"_id":"doc-en-kubernetes-7b680628dddfd37c4c1b079e4878b61b19523c4da5d00f14f4673048fffe4a90","title":"","text":"local -a nonstatics=() local -a tests=() V=2 kube::log::info \"Env for ${platform}: GOOS=${GOOS-} GOARCH=${GOARCH-} GOROOT=${GOROOT-} CGO_ENABLED=${CGO_ENABLED-} CC=${CC-}\" for binary in \"${binaries[@]}\"; do if [[ \"${binary}\" =~ \".test\"$ ]]; then"} {"_id":"doc-en-kubernetes-b177eebe06dd86135b814e983e3909f958ee575146038d9c886d87a31405bd6f","title":"","text":"else for platform in \"${platforms[@]}\"; do kube::log::status \"Building go targets for ${platform}:\" \"${targets[@]}\" kube::golang::set_platform_envs \"${platform}\" kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-} ( kube::golang::set_platform_envs \"${platform}\" kube::golang::build_binaries_for_platform ${platform} ${use_go_build:-} ) done fi )"} {"_id":"doc-en-kubernetes-d46313af722cf4338965e0d95d9267df3f67b5fa39178a3c58f783ee42115a1d","title":"","text":"[[ -n ${1-} ]] || { kube::log::error_exit \"!!! Internal error. 
No platform set in kube::golang::set_platform_envs\" } # make sure we have a clean slate first kube::golang::unset_platform_envs export GOOS=${platform%/*} export GOARCH=${platform##*/}"} {"_id":"doc-en-kubernetes-3b2f06347dca38a37fd861767405a1328262cdfaf8e7007db05fda9a664552d7","title":"","text":"for _, mountPoint := range mountPoints { const sysfsDevice = \"sysfs\" if mountPoint.Device != sysfsDevice { if mountPoint.Type != sysfsDevice { continue } // Check whether sysfs is 'rw'"} {"_id":"doc-en-kubernetes-e9158ec3181c21699dc01cf69b474d62bce23525b2e93350a8138a642759ab6e","title":"","text":"\"//vendor:k8s.io/apimachinery/pkg/runtime/schema\", \"//vendor:k8s.io/apimachinery/pkg/types\", \"//vendor:k8s.io/apimachinery/pkg/util/intstr\", \"//vendor:k8s.io/apimachinery/pkg/util/sets\", \"//vendor:k8s.io/apimachinery/pkg/util/uuid\", \"//vendor:k8s.io/apimachinery/pkg/watch\", \"//vendor:k8s.io/client-go/pkg/api\","} {"_id":"doc-en-kubernetes-bf16839ed6bf932dcf707ddf00ab1b8d9830e927ae1673da1361e6c374ee24c5","title":"","text":"\"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apimachinery/pkg/util/uuid\" \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/test/e2e/framework\""} {"_id":"doc-en-kubernetes-18d39022ab9bee8f6592cbc21ce55316c7f182458a80d6ce8671acc092a64c73","title":"","text":"}) Context(\"\", func() { It(\"pod infra containers oom-score-adj should be -998 and best effort container's should be 1000\", func() { var err error // Take a snapshot of existing pause processes. These were // created before this test, and may not be infra // containers. They should be excluded from the test. existingPausePIDs, err := getPidsForProcess(\"pause\", \"\") Expect(err).To(BeNil(), \"failed to list all pause processes on the node\") existingPausePIDSet := sets.NewInt(existingPausePIDs...) 
podClient := f.PodClient() podName := \"besteffort\" + string(uuid.NewUUID()) podClient.Create(&v1.Pod{"} {"_id":"doc-en-kubernetes-c489e291b34a833492d91b207fb262c947227008d2d2a658d53efd59acf8b34a","title":"","text":"return fmt.Errorf(\"failed to get list of pause pids: %v\", err) } for _, pid := range pausePids { if existingPausePIDSet.Has(pid) { // Not created by this test. Ignore it. continue } if err := validateOOMScoreAdjSetting(pid, -998); err != nil { return err }"} {"_id":"doc-en-kubernetes-b6905192256b8c478f5c39410c75f93e02fe1b746feac6d75db7896787306db6","title":"","text":"\"fmt\" \"os\" \"path\" \"strings\" \"github.com/golang/glog\" \"k8s.io/kubernetes/pkg/api/resource\""} {"_id":"doc-en-kubernetes-0fb987d86186121faa0e23db0fe744567a9963c5cf0beb0f724137e8c09ac012","title":"","text":"if err != nil { return nil, err } volumePath = strings.Replace(volumePath, \"040\", \" \", -1) glog.V(5).Infof(\"vSphere volume path is %q\", volumePath) vsphereVolume := &v1.Volume{ Name: volumeName,"} {"_id":"doc-en-kubernetes-6e94eb507cdefe046b645f78591bee857508ff5220baaf141131aecb10b15c57","title":"","text":"\"//pkg/registry/rbac/rolebinding/etcd:go_default_library\", \"//pkg/registry/rbac/rolebinding/policybased:go_default_library\", \"//pkg/util/runtime:go_default_library\", \"//pkg/util/wait:go_default_library\", \"//plugin/pkg/auth/authorizer/rbac/bootstrappolicy:go_default_library\", \"//vendor:github.com/golang/glog\", ],"} {"_id":"doc-en-kubernetes-f2f6c47d546d84798950b888afe622a749a7521985a15fa404e29a7fba4bfa18","title":"","text":"import ( \"fmt\" \"sync\" \"time\" \"github.com/golang/glog\""} {"_id":"doc-en-kubernetes-089634bdc5e4420857951709f9da55da61312b8541e0fb7f4926475d766f2c01","title":"","text":"rolebindingetcd \"k8s.io/kubernetes/pkg/registry/rbac/rolebinding/etcd\" rolebindingpolicybased \"k8s.io/kubernetes/pkg/registry/rbac/rolebinding/policybased\" utilruntime \"k8s.io/kubernetes/pkg/util/runtime\" \"k8s.io/kubernetes/pkg/util/wait\" 
\"k8s.io/kubernetes/plugin/pkg/auth/authorizer/rbac/bootstrappolicy\" )"} {"_id":"doc-en-kubernetes-68631a912a8da368a1a862c61c859649af4bb74cae434e7a3b46fe7fe2d9dde6","title":"","text":"} func PostStartHook(hookContext genericapiserver.PostStartHookContext) error { clientset, err := rbacclient.NewForConfig(hookContext.LoopbackClientConfig) if err != nil { utilruntime.HandleError(fmt.Errorf(\"unable to initialize clusterroles: %v\", err)) return nil } existingClusterRoles, err := clientset.ClusterRoles().List(api.ListOptions{}) if err != nil { utilruntime.HandleError(fmt.Errorf(\"unable to initialize clusterroles: %v\", err)) return nil } // if clusterroles already exist, then assume we don't have work to do because we've already // initialized or another API server has started this task if len(existingClusterRoles.Items) > 0 { return nil } for _, clusterRole := range append(bootstrappolicy.ClusterRoles(), bootstrappolicy.ControllerRoles()...) { if _, err := clientset.ClusterRoles().Create(&clusterRole); err != nil { // don't fail on failures, try to create as many as you can // initializing roles is really important. On some e2e runs, we've seen cases where etcd is down when the server // starts, the roles don't initialize, and nothing works. 
err := wait.Poll(1*time.Second, 30*time.Second, func() (done bool, err error) { clientset, err := rbacclient.NewForConfig(hookContext.LoopbackClientConfig) if err != nil { utilruntime.HandleError(fmt.Errorf(\"unable to initialize clusterroles: %v\", err)) continue return false, nil } glog.Infof(\"Created clusterrole.%s/%s\", rbac.GroupName, clusterRole.Name) } existingClusterRoleBindings, err := clientset.ClusterRoleBindings().List(api.ListOptions{}) if err != nil { utilruntime.HandleError(fmt.Errorf(\"unable to initialize clusterrolebindings: %v\", err)) return nil } // if clusterrolebindings already exist, then assume we don't have work to do because we've already // initialized or another API server has started this task if len(existingClusterRoleBindings.Items) > 0 { return nil } existingClusterRoles, err := clientset.ClusterRoles().List(api.ListOptions{}) if err != nil { utilruntime.HandleError(fmt.Errorf(\"unable to initialize clusterroles: %v\", err)) return false, nil } // only initialized on empty etcd if len(existingClusterRoles.Items) == 0 { for _, clusterRole := range append(bootstrappolicy.ClusterRoles(), bootstrappolicy.ControllerRoles()...) { if _, err := clientset.ClusterRoles().Create(&clusterRole); err != nil { // don't fail on failures, try to create as many as you can utilruntime.HandleError(fmt.Errorf(\"unable to initialize clusterroles: %v\", err)) continue } glog.Infof(\"Created clusterrole.%s/%s\", rbac.GroupName, clusterRole.Name) } } for _, clusterRoleBinding := range append(bootstrappolicy.ClusterRoleBindings(), bootstrappolicy.ControllerRoleBindings()...) 
{ if _, err := clientset.ClusterRoleBindings().Create(&clusterRoleBinding); err != nil { // don't fail on failures, try to create as many as you can existingClusterRoleBindings, err := clientset.ClusterRoleBindings().List(api.ListOptions{}) if err != nil { utilruntime.HandleError(fmt.Errorf(\"unable to initialize clusterrolebindings: %v\", err)) continue return false, nil } // only initialized on empty etcd if len(existingClusterRoleBindings.Items) == 0 { for _, clusterRoleBinding := range append(bootstrappolicy.ClusterRoleBindings(), bootstrappolicy.ControllerRoleBindings()...) { if _, err := clientset.ClusterRoleBindings().Create(&clusterRoleBinding); err != nil { // don't fail on failures, try to create as many as you can utilruntime.HandleError(fmt.Errorf(\"unable to initialize clusterrolebindings: %v\", err)) continue } glog.Infof(\"Created clusterrolebinding.%s/%s\", rbac.GroupName, clusterRoleBinding.Name) } } glog.Infof(\"Created clusterrolebinding.%s/%s\", rbac.GroupName, clusterRoleBinding.Name) return true, nil }) // if we're never able to make it through initialization, kill the API server if err != nil { return fmt.Errorf(\"unable to initialize roles: %v\", err) } return nil"} {"_id":"doc-en-kubernetes-8eabfca24d654a2831fef622d55b761b83a97c4cfd13f998e34f97e37aa0a133","title":"","text":"package controller import ( \"hash/adler32\" \"hash/fnv\" \"sync\" \"github.com/golang/groupcache/lru\""} {"_id":"doc-en-kubernetes-327215bc767ff14d50cf2cc7882931bfb5310b6f20b672bbaa07e9b127783ac4","title":"","text":"// Since we match objects by namespace and Labels/Selector, if two objects have the same namespace and labels, // they will have the same key. 
func keyFunc(obj objectWithMeta) uint64 { hash := adler32.New() hash := fnv.New32a() hashutil.DeepHashObject(hash, &equivalenceLabelObj{ namespace: obj.GetNamespace(), labels: obj.GetLabels(),"} {"_id":"doc-en-kubernetes-b2f8ccf7c423a40169bd46ad0a9ab75e6a96a596d339e4551cb4afe2ff01e023","title":"","text":"import ( \"bufio\" \"fmt\" \"hash/adler32\" \"hash/fnv\" \"io\" \"os\" \"os/exec\""} {"_id":"doc-en-kubernetes-3b4cb1c664e9aa8a9a1be8b3e8383af7908cc7f36689008a0b74563c08216d4b","title":"","text":"} func readProcMountsFrom(file io.Reader, out *[]MountPoint) (uint32, error) { hash := adler32.New() hash := fnv.New32a() scanner := bufio.NewReader(file) for { line, err := scanner.ReadString('\\n')"} {"_id":"doc-en-kubernetes-90f320cc7c0981fc3dbb6edb6d3461b26ffbbf603a8a5ccdcca319e0715dcae1","title":"","text":"/dev/1 /path/to/1 type1\tflags 1 1 /dev/2 /path/to/2 type2 flags,1,2=3 2 2 ` // NOTE: readProcMountsFrom has been updated to use fnv.New32a() hash, err := readProcMountsFrom(strings.NewReader(successCase), nil) if err != nil { t.Errorf(\"expected success\") } if hash != 0xa3522051 { t.Errorf(\"expected 0xa3522051, got %#x\", hash) if hash != 0xa290ff0b { t.Errorf(\"expected 0xa290ff0b, got %#x\", hash) } mounts := []MountPoint{} hash, err = readProcMountsFrom(strings.NewReader(successCase), &mounts) if err != nil { t.Errorf(\"expected success\") } if hash != 0xa3522051 { t.Errorf(\"expected 0xa3522051, got %#x\", hash) if hash != 0xa290ff0b { t.Errorf(\"expected 0xa290ff0b, got %#x\", hash) } if len(mounts) != 3 { t.Fatalf(\"expected 3 mounts, got %d\", len(mounts))"} {"_id":"doc-en-kubernetes-cb73ca9feb406ec003936f4821ec49a26e1b0b16540d1811dbcbb362b33b9ed9","title":"","text":"package core import ( \"hash/adler32\" \"hash/fnv\" \"github.com/golang/groupcache/lru\""} {"_id":"doc-en-kubernetes-5e67570a43755f33bedc5a2c414bf5b80aa4a83b82a6c2090a90c9705fd21d33","title":"","text":"// hashEquivalencePod returns the hash of equivalence pod. 
func (ec *EquivalenceCache) hashEquivalencePod(pod *v1.Pod) uint64 { equivalencePod := ec.getEquivalencePod(pod) hash := adler32.New() hash := fnv.New32a() hashutil.DeepHashObject(hash, equivalencePod) return uint64(hash.Sum32()) }"} {"_id":"doc-en-kubernetes-eee82a5bc100a471f31aab1ec4381533187566f7c306b0d094ee63cefb6a0acb","title":"","text":"type cacheWatcher struct { sync.Mutex copier runtime.ObjectCopier input chan *watchCacheEvent input chan watchCacheEvent result chan watch.Event done chan struct{} filter watchFilterFunc"} {"_id":"doc-en-kubernetes-1933b00d6182cb765df6698be539487f436e32d96324601096b33817b7d444c8","title":"","text":"func newCacheWatcher(copier runtime.ObjectCopier, resourceVersion uint64, chanSize int, initEvents []*watchCacheEvent, filter watchFilterFunc, forget func(bool)) *cacheWatcher { watcher := &cacheWatcher{ copier: copier, input: make(chan *watchCacheEvent, chanSize), input: make(chan watchCacheEvent, chanSize), result: make(chan watch.Event, chanSize), done: make(chan struct{}), filter: filter,"} {"_id":"doc-en-kubernetes-ce5016b48d265bd4d65752d4213b5269922efe93c61c9309eb48675e73d98912","title":"","text":"func (c *cacheWatcher) add(event *watchCacheEvent, budget *timeBudget) { // Try to send the event immediately, without blocking. 
select { case c.input <- *event: return default: }"} {"_id":"doc-en-kubernetes-747365d3f5f7926cd1d98818acf30d90a396bb398d6d61438e2025fdfdf08777","title":"","text":"defer timerPool.Put(t) select { case c.input <- *event: stopped := t.Stop() if !stopped { // Consume triggered (but not yet received) timer event"} {"_id":"doc-en-kubernetes-da6869b51895439ac559459c36f19a5c1b858c8b2063648939a1324dc0d2eacd","title":"","text":"} // only send events newer than resourceVersion if event.ResourceVersion > resourceVersion { c.sendWatchCacheEvent(&event) } } }"} {"_id":"doc-en-kubernetes-2fce45bad966e626c6f47f68768f6361dd3ef22156f32d2c97931045243cb93e","title":"","text":"return utilerrors.NewAggregate(errList) } // Find process(es) with a specified name (regexp match) // and return their pid(s) func PidOf(name string) ([]int, error) { if len(name) == 0 {"} {"_id":"doc-en-kubernetes-8c4e72cf0c37e68a37436542b0110ec91de9d933d5549d747fcd18d762387178","title":"","text":"exit 1 fi echo \"$(date +'%Y-%m-%d %H:%M:%S') Detecting if migration is needed\" if [ \"${TARGET_STORAGE}\" != \"etcd2\" -a \"${TARGET_STORAGE}\" != \"etcd3\" ]; then echo \"Not supported version of storage: ${TARGET_STORAGE}\" exit 1"} {"_id":"doc-en-kubernetes-4b88d472d246da849417baddee119268e3b81e17eaf33b409e3e3d9c32a18dc4","title":"","text":"CURRENT_VERSION=${step} echo \"${CURRENT_VERSION}/${CURRENT_STORAGE}\" > \"${DATA_DIRECTORY}/${VERSION_FILE}\" fi if [ \"$(echo ${CURRENT_VERSION} | cut -c1-2)\" = \"3.\" -a \"${CURRENT_VERSION}\" = \"${step}\" -a \"${CURRENT_STORAGE}\" = \"etcd2\" -a \"${TARGET_STORAGE}\" = \"etcd3\" ]; then # If it is the first 3.x release in the list and we are migrating # also from 'etcd2' to 'etcd3', do the migration now.
echo \"Performing etcd2 -> etcd3 migration\""} {"_id":"doc-en-kubernetes-39ee29be4005e5459009e2c0dd89de21054fb996375b15fef0e56d08ea1740a0","title":"","text":"CURRENT_VERSION=\"2.3.7\" echo \"${CURRENT_VERSION}/${CURRENT_STORAGE}\" > \"${DATA_DIRECTORY}/${VERSION_FILE}\" fi echo \"$(date +'%Y-%m-%d %H:%M:%S') Migration finished\" "} {"_id":"doc-en-kubernetes-f7d62132b4db76685889587226bdf9877544f351fe75f65732023169f32c0783","title":"","text":"} } // WaitForStatus waits for the ss.Status.Replicas to be equal to expectedReplicas func (s *StatefulSetTester) WaitForStatus(ss *apps.StatefulSet, expectedReplicas int32) { Logf(\"Waiting for statefulset status.replicas updated to %d\", expectedReplicas) ns, name := ss.Namespace, ss.Name"} {"_id":"doc-en-kubernetes-e61461d917d216bf6a9da6b499ee8cb4911f13ec4698fd199601f55f6b258be4","title":"","text":"if err != nil { return false, err } if *ssGet.Status.ObservedGeneration < ss.Generation { return false, nil } if ssGet.Status.Replicas != expectedReplicas { Logf(\"Waiting for stateful set status to become %d, currently %d\", expectedReplicas, ssGet.Status.Replicas) return false, nil"} {"_id":"doc-en-kubernetes-221c65bce613f956c9b7306d452aa44016fea4ad6d9d9422aa936df685543fe4","title":"","text":"if err := sst.Scale(&ss, 0); err != nil { errList = append(errList, fmt.Sprintf(\"%v\", err)) } sst.WaitForStatus(&ss, 0) Logf(\"Deleting statefulset %v\", ss.Name) if err := c.Apps().StatefulSets(ss.Namespace).Delete(ss.Name, nil); err != nil { errList = append(errList, fmt.Sprintf(\"%v\", err))"} {"_id":"doc-en-kubernetes-df9dad900fd539730bb82f8014f2c5d0b6f0a0a7cab673719982221cdf04214b","title":"","text":"By(\"Before scale up finished setting 2nd pod to be not ready by breaking readiness probe\") sst.BreakProbe(ss, testProbe) sst.WaitForStatus(ss, 0) sst.WaitForRunningAndNotReady(2, ss) By(\"Continue scale operation after the
2nd pod, and scaling down to 1 replica\")"} {"_id":"doc-en-kubernetes-e2788355e7491b0a1f10b051eb0f6a134fe6bd176d44984216d661d56c416938","title":"","text":"By(\"Confirming that stateful set scale up will halt with unhealthy stateful pod\") sst.BreakProbe(ss, testProbe) sst.WaitForRunningAndNotReady(*ss.Spec.Replicas, ss) sst.WaitForStatus(ss, 0) sst.UpdateReplicas(ss, 3) sst.ConfirmStatefulPodCount(1, ss, 10*time.Second)"} {"_id":"doc-en-kubernetes-5c5d00e36ce3969e4a100605509c39dcf4b5baec36827316ca42443274cd66dd","title":"","text":"Expect(err).NotTo(HaveOccurred()) sst.BreakProbe(ss, testProbe) sst.WaitForStatus(ss, 0) sst.WaitForRunningAndNotReady(3, ss) sst.UpdateReplicas(ss, 0) sst.ConfirmStatefulPodCount(3, ss, 10*time.Second)"} {"_id":"doc-en-kubernetes-331f31cb383acd049c9dc648455c530d45d3ace2ee570b6da0e26fe249719aca","title":"","text":"}) // Rc output := framework.RunKubectlOrDie(\"describe\", \"rc\", \"redis-master\", nsFlag) requiredStrings := [][]string{ {\"Name:\", \"redis-master\"}, {\"Namespace:\", ns},"} {"_id":"doc-en-kubernetes-15f1f38ffddfbd92977657ecbc5e796e1e2402f7c10072d58ff6870c4c36cdab","title":"","text":"{\"Pod Template:\"}, {\"Image:\", redisImage}, {\"Events:\"}} checkKubectlOutputWithRetry(requiredStrings, \"describe\", \"rc\", \"redis-master\", nsFlag) // Service output := framework.RunKubectlOrDie(\"describe\", \"service\", \"redis-master\", nsFlag) requiredStrings = [][]string{ {\"Name:\", \"redis-master\"}, {\"Namespace:\", ns},"} {"_id":"doc-en-kubernetes-27e1f3a0fb8e0d6d2b84ffcca5efff3378814bcf11ab4379acc09ac96526e066","title":"","text":"}) // Checks whether the output split by line contains the required elements.
func checkOutputReturnError(output string, required [][]string) error { outputLines := strings.Split(output, \"\n\") currentLine := 0 for _, requirement := range required {"} {"_id":"doc-en-kubernetes-585e3c00c9dd583ab3b283ffaf7d98b2279bc5b9719be57c224f22efd8b45481","title":"","text":"currentLine++ } if currentLine == len(outputLines) { return fmt.Errorf(\"failed to find %s in %s\", requirement[0], output) } for _, item := range requirement[1:] { if !strings.Contains(outputLines[currentLine], item) { return fmt.Errorf(\"failed to find %s in %s\", item, outputLines[currentLine]) } } } return nil } func checkOutput(output string, required [][]string) { err := checkOutputReturnError(output, required) if err != nil { framework.Failf(\"%v\", err) } } func checkKubectlOutputWithRetry(required [][]string, args ...string) { var pollErr error wait.PollImmediate(time.Second, time.Minute, func() (bool, error) { output := framework.RunKubectlOrDie(args...)
err := checkOutputReturnError(output, required) if err != nil { pollErr = err return false, nil } pollErr = nil return true, nil }) if pollErr != nil { framework.Failf(\"%v\", pollErr) } return } func getAPIVersions(apiEndpoint string) (*metav1.APIVersions, error) {"} {"_id":"doc-en-kubernetes-adb0c010b376639250b0bcd03642d07e0135b06d38e87f27780a6077c8fef454","title":"","text":"} // Error caught by validation _, maxUnavailable, _ := ResolveFenceposts(deployment.Spec.Strategy.RollingUpdate.MaxSurge, deployment.Spec.Strategy.RollingUpdate.MaxUnavailable, *(deployment.Spec.Replicas)) if maxUnavailable > *deployment.Spec.Replicas { return *deployment.Spec.Replicas } return maxUnavailable }"} {"_id":"doc-en-kubernetes-d49eb9b532431c5cb40e37a8658f279cdb8ba5e83aeb3d692861b6908a66233e","title":"","text":"} } } func TestMaxUnavailable(t *testing.T) { deployment := func(replicas int32, maxUnavailable intstr.IntOrString) extensions.Deployment { return extensions.Deployment{ Spec: extensions.DeploymentSpec{ Replicas: func(i int32) *int32 { return &i }(replicas), Strategy: extensions.DeploymentStrategy{ RollingUpdate: &extensions.RollingUpdateDeployment{ MaxSurge: func(i int) *intstr.IntOrString { x := intstr.FromInt(i); return &x }(int(1)), MaxUnavailable: &maxUnavailable, }, Type: extensions.RollingUpdateDeploymentStrategyType, }, }, } } tests := []struct { name string deployment extensions.Deployment expected int32 }{ { name: \"maxUnavailable less than replicas\", deployment: deployment(10, intstr.FromInt(5)), expected: int32(5), }, { name: \"maxUnavailable equal replicas\", deployment: deployment(10, intstr.FromInt(10)), expected: int32(10), }, { name: \"maxUnavailable greater than replicas\", deployment: deployment(5, intstr.FromInt(10)), expected: int32(5), }, { name: \"maxUnavailable with replicas is 0\", deployment: deployment(0, intstr.FromInt(10)), expected: int32(0), }, { name: \"maxUnavailable with Recreate deployment strategy\", deployment: extensions.Deployment{ 
Spec: extensions.DeploymentSpec{ Strategy: extensions.DeploymentStrategy{ Type: extensions.RecreateDeploymentStrategyType, }, }, }, expected: int32(0), }, { name: \"maxUnavailable less than replicas with percents\", deployment: deployment(10, intstr.FromString(\"50%\")), expected: int32(5), }, { name: \"maxUnavailable equal replicas with percents\", deployment: deployment(10, intstr.FromString(\"100%\")), expected: int32(10), }, { name: \"maxUnavailable greater than replicas with percents\", deployment: deployment(5, intstr.FromString(\"100%\")), expected: int32(5), }, } for _, test := range tests { t.Log(test.name) maxUnavailable := MaxUnavailable(test.deployment) if test.expected != maxUnavailable { t.Fatalf(\"expected:%v, got:%v\", test.expected, maxUnavailable) } } } "} {"_id":"doc-en-kubernetes-caa27a13b30305c0ea696e07c1c4b7547a099699c8ce96d0b673aad133fcd7b4","title":"","text":"package handlers import ( \"encoding/json\" \"fmt\" \"k8s.io/apimachinery/pkg/conversion/unstructured\" \"k8s.io/apimachinery/pkg/runtime\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/apimachinery/pkg/util/json\" \"k8s.io/apimachinery/pkg/util/strategicpatch\" \"github.com/evanphx/json-patch\""} {"_id":"doc-en-kubernetes-5cd4b8ef247018b50fef5bb87f26b5c9b37fce693b6e7ff16a433a1eec5643ad","title":"","text":"} func TestNumberConversion(t *testing.T) { codec := api.Codecs.LegacyCodec(schema.GroupVersion{Version: \"v1\"}) currentVersionedObject := &v1.Service{ TypeMeta: metav1.TypeMeta{Kind: \"Service\", APIVersion: \"v1\"}, ObjectMeta: metav1.ObjectMeta{Name: \"test-service\"}, Spec: v1.ServiceSpec{ Ports: []v1.ServicePort{ { Port: 80, Protocol: \"TCP\", NodePort: 31678, }, }, }, } versionedObjToUpdate := &v1.Service{} versionedObj := &v1.Service{} patchJS := []byte(`{\"spec\":{\"ports\":[{\"port\":80,\"nodePort\":31789}]}}`) _, _, err := strategicPatchObject(codec, currentVersionedObject, patchJS, versionedObjToUpdate, versionedObj) if err != nil { t.Fatal(err) } ports := 
versionedObjToUpdate.Spec.Ports if len(ports) != 1 || ports[0].Port != 80 || ports[0].NodePort != 31789 { t.Fatal(errors.New(\"Ports failed to merge because of number conversion issue\")) } } func TestPatchResourceWithVersionConflict(t *testing.T) { namespace := \"bar\" name := \"foo\""} {"_id":"doc-en-kubernetes-fc5e7118017922dd322a7f5d28d9f02e6c848765c74111cdfaa2e46e8b208d07","title":"","text":"\"//vendor:k8s.io/apimachinery/pkg/runtime/serializer/streaming\", \"//vendor:k8s.io/apimachinery/pkg/types\", \"//vendor:k8s.io/apimachinery/pkg/util/httpstream\", \"//vendor:k8s.io/apimachinery/pkg/util/json\", \"//vendor:k8s.io/apimachinery/pkg/util/mergepatch\", \"//vendor:k8s.io/apimachinery/pkg/util/net\", \"//vendor:k8s.io/apimachinery/pkg/util/runtime\","} {"_id":"doc-en-kubernetes-c71c02347886caae4c96258af58bb7beb9c4bb61cc641b8273dcb13f328ad230","title":"","text":"// Returns selectors of services, RCs and RSs matching the given pod. func getSelectors(pod *v1.Pod, sl algorithm.ServiceLister, cl algorithm.ControllerLister, rsl algorithm.ReplicaSetLister, ssl algorithm.StatefulSetLister) []labels.Selector { selectors := make([]labels.Selector, 0, 3) var selectors []labels.Selector if services, err := sl.GetPodServices(pod); err == nil { for _, service := range services { selectors = append(selectors, labels.SelectorFromSet(service.Spec.Selector))"} {"_id":"doc-en-kubernetes-49fba98956d3de036187082d6db5f8a939e22fe65f3398d986de719585dbb70a","title":"","text":"return antiAffinity.CalculateAntiAffinityPriority } // Classifies nodes into ones with labels and without labels. 
func (s *ServiceAntiAffinity) getNodeClassificationByLabels(nodes []*v1.Node) (map[string]string, []string) { labeledNodes := map[string]string{} nonLabeledNodes := []string{} for _, node := range nodes { if labels.Set(node.Labels).Has(s.label) { label := labels.Set(node.Labels).Get(s.label) labeledNodes[node.Name] = label } else { nonLabeledNodes = append(nonLabeledNodes, node.Name) } } return labeledNodes, nonLabeledNodes } // CalculateAntiAffinityPriority spreads pods by minimizing the number of pods belonging to the same service // on machines with the same value for a particular label. // The label to be considered is provided to the struct (ServiceAntiAffinity)."} {"_id":"doc-en-kubernetes-bde1cb7287ab8c5ce4267e5b00aaf1bafc24be69601e04408930fb0339436235","title":"","text":"} // separate out the nodes that have the label from the ones that don't labeledNodes, nonLabeledNodes := s.getNodeClassificationByLabels(nodes) podCounts := map[string]int{} for _, pod := range nsServicePods { label, exists := labeledNodes[pod.Spec.NodeName]"} {"_id":"doc-en-kubernetes-83069e4c4d7abe023824c517d3a470d7ca4f3ff969eb9a94fcb57df8ffeb028d","title":"","text":"} podCounts[label]++ } numServicePods := len(nsServicePods) result := []schedulerapi.HostPriority{} //score int - scale of 0-maxPriority"} {"_id":"doc-en-kubernetes-57bf4388b9ba1f2b4271dae13ba724b0e14a4afcf7189cfb42af98cb9734f759","title":"","text":"result = append(result, schedulerapi.HostPriority{Host: node, Score: int(fScore)}) } // add the nodes without the label with a score of 0 for _, node := range nonLabeledNodes { result = append(result, schedulerapi.HostPriority{Host: node, Score: 0}) } return result, nil }"}
{"_id":"doc-en-kubernetes-3ece106d1a7e9b44ddc82d2c9485299f1acfb5e56d7c59c34414a3ea55b22ae8","title":"","text":"} } func TestGetNodeClassificationByLabels(t *testing.T) { const machine01 = \"machine01\" const machine02 = \"machine02\" const zoneA = \"zoneA\" zone1 := map[string]string{ \"zone\": zoneA, } labeledNodes := map[string]map[string]string{ machine01: zone1, } expectedNonLabeledNodes := []string{machine02} serviceAffinity := ServiceAntiAffinity{label: \"zone\"} newLabeledNodes, noNonLabeledNodes := serviceAffinity.getNodeClassificationByLabels(makeLabeledNodeList(labeledNodes)) noLabeledNodes, newnonLabeledNodes := serviceAffinity.getNodeClassificationByLabels(makeNodeList(expectedNonLabeledNodes)) label, _ := newLabeledNodes[machine01] if label != zoneA && len(noNonLabeledNodes) != 0 { t.Errorf(\"Expected only labeled node with label zoneA and no noNonLabeledNodes\") } if len(noLabeledNodes) != 0 && newnonLabeledNodes[0] != machine02 { t.Errorf(\"Expected only non labeled nodes\") } } func makeLabeledNodeList(nodeMap map[string]map[string]string) []*v1.Node { nodes := make([]*v1.Node, 0, len(nodeMap)) for nodeName, labels := range nodeMap {"} {"_id":"doc-en-kubernetes-28a49384ed5115e52fd0ea2ccd9a71595e97ec2b4446a4fbafc5fc14b0a4bc59","title":"","text":"openssh-client nfs-common socat udev util-linux COPY cni-bin/bin /opt/cni/bin"} {"_id":"doc-en-kubernetes-b6ab816fb56fac291aeed1968a82aef4f505127371face0f524719ea49fbb103","title":"","text":"REGISTRY?=staging-k8s.gcr.io IMAGE?=debian-hyperkube-base TAG=0.10 ARCH?=amd64 CACHEBUST?=1"} {"_id":"doc-en-kubernetes-71bc8f1127d80f290e7fac12c90bf604bea92672710ee17670f9f331a1dc193a","title":"","text":"docker_pull( name = \"debian-hyperkube-base-amd64\", digest = \"sha256:cc782ed16599000ca4c85d47ec6264753747ae1e77520894dca84b104a7621e2\", registry = \"k8s.gcr.io\", repository = \"debian-hyperkube-base-amd64\", tag = \"0.10\", # ignored, but kept here for documentation ) docker_pull("} {"_id":"doc-en-kubernetes-5e2d050c071ba81e141422f1e404c82c926fc4b70e9a4d6b9f920a289bbd9983","title":"","text":"OUT_DIR?=_output HYPERKUBE_BIN?=$(OUT_DIR)/dockerized/bin/linux/$(ARCH)/hyperkube BASEIMAGE=k8s.gcr.io/debian-hyperkube-base-$(ARCH):0.10 TEMP_DIR:=$(shell mktemp -d -t hyperkubeXXXXXX) all: build"} {"_id":"doc-en-kubernetes-b964eccb51ada510c153fe24c3d4d3fb4105dffd49dc05aa334d3d592a43d066","title":"","text":"fi # Network plugin if [[ -n \"${NETWORK_PROVIDER:-}\" ]]; then flags+=\" --cni-bin-dir=/opt/kubernetes/bin\" flags+=\" --network-plugin=${NETWORK_PROVIDER}\" fi if [[ -n \"${NON_MASQUERADE_CIDR:-}\" ]]; then"} {"_id":"doc-en-kubernetes-b635007556eeaacc69982ac0ab4ddf8742a493fe7f2afd5201661e7db9506d4a","title":"","text":"fi # Network plugin if [[ -n \"${NETWORK_PROVIDER:-}\" || -n \"${NETWORK_POLICY_PROVIDER:-}\" ]]; then flags+=\" --cni-bin-dir=/home/kubernetes/bin\" if [[ \"${NETWORK_POLICY_PROVIDER:-}\" == \"calico\" ]]; then # Calico uses CNI always.
flags+=\" --network-plugin=cni\""} {"_id":"doc-en-kubernetes-50f8703c4a5f723954013e2051756d0aceb51dcd9e4be680012e9c9df6bb67c2","title":"","text":"{% if pillar.get('network_provider', '').lower() == 'opencontrail' %} {% set network_plugin = \"--network-plugin=opencontrail\" %} {% elif pillar.get('network_provider', '').lower() == 'cni' %} {% set network_plugin = \"--network-plugin=cni --cni-bin-dir=/etc/cni/net.d/\" %} {%elif pillar.get('network_policy_provider', '').lower() == 'calico' and grains['roles'][0] != 'kubernetes-master' -%} {% set network_plugin = \"--network-plugin=cni --cni-conf-dir=/etc/cni/net.d/ --cni-bin-dir=/home/kubernetes/bin/\" %} {% elif pillar.get('network_provider', '').lower() == 'kubenet' %} {% set network_plugin = \"--network-plugin=kubenet\" -%} {% endif -%}"} {"_id":"doc-en-kubernetes-7dd2242fc76c4a9dee9c6493f19e36ccf24a8a50da917ffb9f2a273fbf44879d","title":"","text":"// Network plugin settings. Shared by both docker and rkt. fs.StringVar(&s.NetworkPluginName, \"network-plugin\", s.NetworkPluginName, \"The name of the network plugin to be invoked for various events in kubelet/pod lifecycle\") //TODO(#46410): Remove the network-plugin-dir flag. fs.StringVar(&s.NetworkPluginDir, \"network-plugin-dir\", s.NetworkPluginDir, \"The full path of the directory in which to search for network plugins or CNI config\") fs.MarkDeprecated(\"network-plugin-dir\", \"Use --cni-bin-dir instead. This flag will be removed in a future version.\") fs.StringVar(&s.CNIConfDir, \"cni-conf-dir\", s.CNIConfDir, \"The full path of the directory in which to search for CNI config files. Default: /etc/cni/net.d\") fs.StringVar(&s.CNIBinDir, \"cni-bin-dir\", s.CNIBinDir, \"The full path of the directory in which to search for CNI plugin binaries.
Default: /opt/cni/bin\") fs.Int32Var(&s.NetworkPluginMTU, \"network-plugin-mtu\", s.NetworkPluginMTU, \"The MTU to be passed to the network plugin, to override the default. Set to 0 to use the default 1460 MTU.\")"} {"_id":"doc-en-kubernetes-02d331889051503c3595531204e80be4c8638dbbf690ec963ecca96e449dc311","title":"","text":"KUBELET_FLAGS=${KUBELET_FLAGS:-\"\"} # Name of the network plugin, eg: \"kubenet\" NET_PLUGIN=${NET_PLUGIN:-\"\"} # Place the config files and binaries required by NET_PLUGIN in these directories, # eg: \"/etc/cni/net.d\" for config files, and \"/opt/cni/bin\" for binaries. CNI_CONF_DIR=${CNI_CONF_DIR:-\"\"} CNI_BIN_DIR=${CNI_BIN_DIR:-\"\"} SERVICE_CLUSTER_IP_RANGE=${SERVICE_CLUSTER_IP_RANGE:-10.0.0.0/24} FIRST_SERVICE_CLUSTER_IP=${FIRST_SERVICE_CLUSTER_IP:-10.0.0.1} # if enabled, must set CGROUP_ROOT"} {"_id":"doc-en-kubernetes-5b75f63eed85f57de464bcadf0bf5ae4f94a43e0b890e8f9dc90caefea778c40","title":"","text":"auth_args=\"${auth_args} --client-ca-file=${CLIENT_CA_FILE}\" fi cni_conf_dir_args=\"\" if [[ -n \"${CNI_CONF_DIR}\" ]]; then cni_conf_dir_args=\"--cni-conf-dir=${CNI_CONF_DIR}\" fi cni_bin_dir_args=\"\" if [[ -n \"${CNI_BIN_DIR}\" ]]; then cni_bin_dir_args=\"--cni-bin-dir=${CNI_BIN_DIR}\" fi container_runtime_endpoint_args=\"\""} {"_id":"doc-en-kubernetes-56cfb9b1c676b6185ffc527c4af2ec2d4beb04fef46c7df1f403129bb737193f","title":"","text":"--pod-manifest-path=\"${POD_MANIFEST_PATH}\" ${auth_args} ${dns_args} ${cni_conf_dir_args} ${cni_bin_dir_args} ${net_plugin_args} ${container_runtime_endpoint_args} ${image_service_endpoint_args} "} {"_id":"doc-en-kubernetes-6bf6c9512127d8bd2d5a896cb3c760a312be0db7632788fe3e3c5b30f231be5c","title":"","text":"# Do not use any
network plugin by default. User could override the flags with # test_args. test_args='--kubelet-flags=\"--network-plugin= --cni-bin-dir=\" '$test_args # Runtime flags test_args='--kubelet-flags=\"--container-runtime='$runtime'\" '$test_args"} {"_id":"doc-en-kubernetes-a12931dc28d8850022123c13a0aa2063f950eebb091e4147b3c17d97afa8cbe9","title":"","text":"# plugin by default. NETWORK_PLUGIN=${NETWORK_PLUGIN:-\"\"} # CNI_CONF_DIR is the path to network plugin config files. CNI_CONF_DIR=${CNI_CONF_DIR:-\"\"} # CNI_BIN_DIR is the path to network plugin binaries. CNI_BIN_DIR=${CNI_BIN_DIR:-\"\"} # start_kubelet starts kubelet and redirect kubelet log to $LOG_DIR/kubelet.log. kubelet_log=kubelet.log"} {"_id":"doc-en-kubernetes-07036046397465555e2ea9b022a09a87965cf9273466dcc8a07bc4c88805a30a","title":"","text":"--system-cgroups=/system --cgroup-root=/ --network-plugin=$NETWORK_PLUGIN --cni-conf-dir=$CNI_CONF_DIR --cni-bin-dir=$CNI_BIN_DIR --v=$log_level --logtostderr"} {"_id":"doc-en-kubernetes-03b94f0564070a4257037a8023fd4442a2fb04d4ca2f4e8dd8c30414e9201401","title":"","text":"clusters = f.GetRegisteredClusters() ns = f.FederationNamespace.Name // create backend service service = createLBServiceOrFail(f.FederationClientset, ns, FederatedIngressServiceName, clusters) // create the TLS secret secret = createTLSSecretOrFail(f.FederationClientset, ns, FederatedIngressTLSSecretName) // wait for services objects sync"} {"_id":"doc-en-kubernetes-4586dde5001f3d905589c8cfccfbcb5905f6032800e19dabb23aa78e40f84c54","title":"","text":"backendPods = createBackendPodsOrFail(clusters, nsName, FederatedServicePodName) service = createLBServiceOrFail(f.FederationClientset, nsName, FederatedServiceName, clusters) obj, err := scheme.Scheme.DeepCopy(service) // Cloning shouldn't fail. On the off-chance it does, we // should shallow copy service to serviceShard before"} {"_id":"doc-en-kubernetes-03ec4f3dd199c07d18f66dbf601205a56c5942dd4fc53c84219ec0d5fed31a47","title":"","text":"return clientset.CoreV1().Services(namespace).Create(service) } func createLBService(clientset *fedclientset.Clientset, namespace, name string, clusters fedframework.ClusterSlice) (*v1.Service, error) { if clientset == nil || len(namespace) == 0 { return nil, fmt.Errorf(\"Internal error: invalid parameters passed to createService: clientset: %v, namespace: %v\", clientset, namespace) }"} {"_id":"doc-en-kubernetes-d70d40f41d30b8b5849eb8704391fe51bc8c8d54c84f73824169419e5ee7db14","title":"","text":"// Tests can be run in parallel, so we need a different nodePort for // each test. // we add all the \"available\" ports to an array availablePorts := make([]int32, FederatedSvcNodePortLast-FederatedSvcNodePortFirst) for i := range availablePorts { availablePorts[i] = int32(FederatedSvcNodePortFirst + i) } var err error var service *v1.Service retry := 10 // the function should retry the service creation on a different port only 10 times.
// while the availablePorts list is not empty, try to create the service for len(availablePorts) > 0 && retry > 0 { // select the id of an available port i := rand.Intn(len(availablePorts)) By(fmt.Sprintf(\"try creating federated service %q in namespace %q with nodePort %d\", name, namespace, availablePorts[i])) service, err = createServiceWithNodePort(clientset, namespace, name, availablePorts[i]) if err == nil { // check if the service has been created properly in all clusters. // if the service is not present in one of the clusters, we should clean up all services if err = checkServicesCreation(namespace, name, clusters); err == nil { // everything was created properly, so return the federated service. return service, nil } } // in case of error, clean up everything if service != nil { if err = deleteService(clientset, namespace, name, nil); err != nil { framework.ExpectNoError(err, \"Deleting service %q after a partial createService() error\", service.Name) return nil, err } cleanupServiceShardsAndProviderResources(namespace, service, clusters) } // creation failed, so try another port // first remove from availablePorts the port with which the creation failed availablePorts = append(availablePorts[:i], availablePorts[i+1:]...) retry-- } return nil, err } func createServiceWithNodePort(clientset *fedclientset.Clientset, namespace, name string, nodePort int32) (*v1.Service, error) { service := &v1.Service{ ObjectMeta: metav1.ObjectMeta{ Name: name,"} {"_id":"doc-en-kubernetes-ac4d7111808cacddffea38c1c4fdca5c1de8949c74378c86218bffbbca0f0921","title":"","text":"return clientset.CoreV1().Services(namespace).Create(service) } // checkServicesCreation checks if the service has been created successfully in all the clusters. // if the service is not present in at least one of the clusters, this function returns an error.
func checkServicesCreation(namespace, serviceName string, clusters fedframework.ClusterSlice) error { framework.Logf(\"check if service %q has been created in %d clusters\", serviceName, len(clusters)) for _, cluster := range clusters { name := cluster.Name err := wait.PollImmediate(framework.Poll, fedframework.FederatedDefaultTestTimeout, func() (bool, error) { var err error _, err = cluster.Clientset.CoreV1().Services(namespace).Get(serviceName, metav1.GetOptions{}) if err != nil && !errors.IsNotFound(err) { // Get failed with an error, try again. framework.Logf(\"Failed to find service %q in namespace %q, in cluster %q: %v. Trying again in %s\", serviceName, namespace, name, err, framework.Poll) return false, err } else if errors.IsNotFound(err) { framework.Logf(\"Service %q in namespace %q in cluster %q not found. Trying again in %s\", serviceName, namespace, name, framework.Poll) return false, nil } By(fmt.Sprintf(\"Service %q in namespace %q in cluster %q found\", serviceName, namespace, name)) return true, nil }) if err != nil { return err } } return nil } func createServiceOrFail(clientset *fedclientset.Clientset, namespace, name string) *v1.Service { service, err := createService(clientset, namespace, name) framework.ExpectNoError(err, \"Creating service %q in namespace %q\", service.Name, namespace)"} {"_id":"doc-en-kubernetes-5be095fb89afbcb374034497f94ee5c9183987c97747cd7615c136d0f262cf84","title":"","text":"return service } func createLBServiceOrFail(clientset *fedclientset.Clientset, namespace, name string, clusters fedframework.ClusterSlice) *v1.Service { service, err := createLBService(clientset, namespace, name, clusters) framework.ExpectNoError(err, \"Creating service %q in namespace %q\", service.Name, namespace) By(fmt.Sprintf(\"Successfully created federated service (type: load
balancer) %q in namespace %q\", name, namespace)) return service"} {"_id":"doc-en-kubernetes-665455d5e4110c8e13f3191dd979e16bf5dfecd8003a6d800d20a37b35d25873","title":"","text":"Fail(fmt.Sprintf(\"Internal error: invalid parameters passed to deleteServiceOrFail: clientset: %v, namespace: %v, service: %v\", clientset, namespace, serviceName)) } framework.Logf(\"Deleting service %q in namespace %v\", serviceName, namespace) err := deleteService(clientset, namespace, serviceName, orphanDependents) if err != nil { framework.ExpectNoError(err, \"Error deleting service %q from namespace %q\", serviceName, namespace) } } func deleteService(clientset *fedclientset.Clientset, namespace string, serviceName string, orphanDependents *bool) error { err := clientset.CoreV1().Services(namespace).Delete(serviceName, &metav1.DeleteOptions{OrphanDependents: orphanDependents}) if err != nil { return err } // Wait for the service to be deleted.
err = wait.Poll(5*time.Second, fedframework.FederatedDefaultTestTimeout, func() (bool, error) { _, err := clientset.Core().Services(namespace).Get(serviceName, metav1.GetOptions{})"} {"_id":"doc-en-kubernetes-867d0a8831a25c2e0876abc265097bb13bf402256160f935f5f2a38e05c4ed66","title":"","text":"} return false, err }) if err != nil { framework.DescribeSvc(namespace) framework.Failf(\"Error in deleting service %s: %v\", serviceName, err) } return err } func cleanupServiceShardsAndProviderResources(namespace string, service *v1.Service, clusters fedframework.ClusterSlice) {"} {"_id":"doc-en-kubernetes-170c0452b14d8fdce27c3b9035b11bfd062cdbc37b3f28a7d33a92fb312484a6","title":"","text":"if [ \"${DRY_RUN}\" = false ]; then ls \"${CLIENT_REPO}\" | { grep -v '_tmp' || true; } | xargs rm -rf mv \"${CLIENT_REPO_TEMP}\"/* \"${CLIENT_REPO}\" git checkout HEAD -- $(find \"${CLIENT_REPO}\" -name BUILD) fi"} {"_id":"doc-en-kubernetes-531080c7d0d49ca94cca87a27a474740fcd4e41b15e3c7a649295427c5be7119","title":"","text":"var ( // TODO: Deprecate gitMajor and gitMinor, use only gitVersion instead. 
gitMajor string = \"0\" // major version, always numeric gitMinor string = \"8.3\" // minor version, numeric possibly followed by \"+\" gitVersion string = \"v0.8.3\" // version from git, output of $(git describe) gitCommit string = \"\" // sha1 from git, output of $(git rev-parse HEAD) gitTreeState string = \"not a git tree\" // state of git tree, either \"clean\" or \"dirty\" )"} {"_id":"doc-en-kubernetes-9b32fa979a405642195e76a01a23aa117f2b38cf55f193263b26af258a975529","title":"","text":"package spdy import ( \"bufio\" \"fmt\" \"io\" \"net\" \"net/http\" \"strings\" \"sync/atomic\" \"k8s.io/apimachinery/pkg/util/httpstream\" \"k8s.io/apimachinery/pkg/util/runtime\""} {"_id":"doc-en-kubernetes-1bfabb8f9988be4d28b8dc2293e4f2d194e78a2d59901a4914e76687ba874bd6","title":"","text":"type responseUpgrader struct { } // connWrapper is used to wrap a hijacked connection and its bufio.Reader. All // calls will be handled directly by the underlying net.Conn with the exception // of Read and Close calls, which will consider data in the bufio.Reader. This // ensures that data already inside the used bufio.Reader instance is also // read.
type connWrapper struct { net.Conn closed int32 bufReader *bufio.Reader } func (w *connWrapper) Read(b []byte) (n int, err error) { if atomic.LoadInt32(&w.closed) == 1 { return 0, io.EOF } return w.bufReader.Read(b) } func (w *connWrapper) Close() error { err := w.Conn.Close() atomic.StoreInt32(&w.closed, 1) return err } // NewResponseUpgrader returns a new httpstream.ResponseUpgrader that is // capable of upgrading HTTP responses using SPDY/3.1 via the // spdystream package."} {"_id":"doc-en-kubernetes-ff7cb02b61e50fb85db39f680dc4d68e4f3bfee79799bad3fe93902c730de6a4","title":"","text":"w.Header().Add(httpstream.HeaderUpgrade, HeaderSpdy31) w.WriteHeader(http.StatusSwitchingProtocols) conn, bufrw, err := hijacker.Hijack() if err != nil { runtime.HandleError(fmt.Errorf(\"unable to upgrade: error hijacking response: %v\", err)) return nil } connWithBuf := &connWrapper{Conn: conn, bufReader: bufrw.Reader} spdyConn, err := NewServerConnection(connWithBuf, newStreamHandler) if err != nil { runtime.HandleError(fmt.Errorf(\"unable to upgrade: error creating SPDY server connection: %v\", err)) return nil"} {"_id":"doc-en-kubernetes-3ad30ea9c1f61bfdab01798eeb9717434acff09fd7b2da6bd0b6a82ec3ef40e4","title":"","text":"} var left, right interface{} switch { case len(lefts) == 0: continue case len(lefts) > 1: return input, fmt.Errorf(\"can only compare one element at a time\") } left = lefts[0].Interface()"} {"_id":"doc-en-kubernetes-782fbd79a5f19e220622de9146ca8d8b5b9e0fbe4ed353adf0592d54ee12611a","title":"","text":"if err != nil { return input, err } switch { case len(rights) == 0: continue case len(rights) > 1: return input, fmt.Errorf(\"can only compare one element at a time\") } right = rights[0].Interface()"} {"_id":"doc-en-kubernetes-e5d802c6cf888415f538289a0bc7dadb78e4459b452b7d38d3103b9c9a28ac6a","title":"","text":") type
jsonpathTest struct { name string template string input interface{} expect string expectError bool } func testJSONPath(tests []jsonpathTest, allowMissingKeys bool, t *testing.T) {"} {"_id":"doc-en-kubernetes-5fac69deede58d3e9b276f8aa336df98d424d9c575616cdca25ffc21560a7dc3","title":"","text":"} buf := new(bytes.Buffer) err = j.Execute(buf, test.input) if test.expectError { if err == nil { t.Errorf(\"in %s, expected execute error\", test.name) } continue } else if err != nil { t.Errorf(\"in %s, execute error %v\", test.name, err) } out := buf.String()"} {"_id":"doc-en-kubernetes-22b1d22546bc1653e9126bd97131a92ee8b0dbab9fb68f55add708d963bcd723","title":"","text":"} storeTests := []jsonpathTest{ {\"plain\", \"hello jsonpath\", nil, \"hello jsonpath\", false}, {\"recursive\", \"{..}\", []int{1, 2, 3}, \"[1 2 3]\", false}, {\"filter\", \"{[?(@<5)]}\", []int{2, 6, 3, 7}, \"2 3\", false}, {\"quote\", `{\"{\"}`, nil, \"{\", false}, {\"union\", \"{[1,3,4]}\", []int{0, 1, 2, 3, 4}, \"1 3 4\", false}, {\"array\", \"{[0:2]}\", []string{\"Monday\", \"Tuesday\"}, \"Monday Tuesday\", false}, {\"variable\", \"hello {.Name}\", storeData, \"hello jsonpath\", false}, {\"dict/\", \"{$.Labels.web/html}\", storeData, \"15\", false}, {\"dict/\", \"{$.Employees.jason}\", storeData, \"manager\", false}, {\"dict/\", \"{$.Employees.dan}\", storeData, \"clerk\", false}, {\"dict-\", \"{.Labels.k8s-app}\", storeData, \"20\", false}, {\"nest\", \"{.Bicycle[*].Color}\", storeData, \"red green\", false}, {\"allarray\", \"{.Book[*].Author}\", storeData, \"Nigel Rees Evelyn Waugh Herman Melville\", false}, {\"allfields\", \"{.Bicycle.*}\", storeData, \"{red 19.95 true} {green 20.01 false}\", false}, {\"recurfields\", \"{..Price}\", storeData, \"8.95 12.99 8.99 19.95 20.01\", false}, {\"lastarray\", \"{.Book[-1:]}\", storeData, \"{Category: fiction, Author: Herman Melville, Title: Moby Dick, Price: 8.99}\", false}, {\"recurarray\", \"{..Book[2]}\", storeData, \"{Category: fiction, Author: Herman Melville, Title: Moby Dick, Price: 8.99}\", false}, {\"bool\", \"{.Bicycle[?(@.IsNew==true)]}\", storeData, \"{red 19.95 true}\", false}, } testJSONPath(storeTests, false, t) missingKeyTests := []jsonpathTest{ {\"nonexistent field\", \"{.hello}\", storeData, \"\", false}, } testJSONPath(missingKeyTests, true, t) failStoreTests := []jsonpathTest{ {\"invalid identifier\", \"{hello}\", storeData, \"unrecognized identifier hello\", false}, {\"nonexistent field\", \"{.hello}\", storeData, \"hello is not found\", false}, {\"invalid array\", \"{.Labels[0]}\", storeData, \"map[string]int is not array or slice\", false}, {\"invalid filter operator\", \"{.Book[?(@.Price<>10)]}\", storeData, \"unrecognized filter operator <>\", false}, {\"redundant end\", \"{range .Labels.*}{@}{end}{end}\", storeData, \"not in range, nothing to end\", false}, } testFailJSONPath(failStoreTests, t) }"} {"_id":"doc-en-kubernetes-6e0de02ba8f70ba9eba8b1d8dccd95f5aae13ba65a786ccabfdc7971442ad44a","title":"","text":"t.Error(err) } pointsTests := []jsonpathTest{ {\"exists filter\", \"{[?(@.z)].id}\", pointsData, \"i2 i5\", false}, {\"bracket key\", \"{[0]['id']}\", pointsData, \"i1\", false}, } testJSONPath(pointsTests, false, t) }"} {"_id":"doc-en-kubernetes-d2879fe79ff80d12ad41d8480d3708ae73b3fda3966c16febef35b7d47ae7c83","title":"","text":"} nodesTests := []jsonpathTest{ {\"range item\", `{range .items[*]}{.metadata.name}, {end}{.kind}`, nodesData, \"127.0.0.1, 127.0.0.2, List\", false}, {\"range item with quote\", `{range .items[*]}{.metadata.name}{\"t\"}{end}`, nodesData, \"127.0.0.1t127.0.0.2t\", false}, {\"range address\", `{.items[*].status.addresses[*].address}`, nodesData, \"127.0.0.1 127.0.0.2 127.0.0.3\", false}, {\"double range\", `{range .items[*]}{range .status.addresses[*]}{.address}, {end}{end}`, nodesData, \"127.0.0.1, 127.0.0.2, 127.0.0.3, \", false}, {\"item name\", `{.items[*].metadata.name}`, nodesData, \"127.0.0.1 127.0.0.2\", false}, {\"union nodes capacity\", `{.items[*]['metadata.name', 'status.capacity']}`, nodesData, \"127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8]\", false}, {\"range nodes capacity\", `{range .items[*]}[{.metadata.name}, {.status.capacity}] {end}`, nodesData, \"[127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]] \", false}, {\"user password\", `{.users[?(@.name==\"e2e\")].user.password}`, &nodesData, \"secret\", false}, {\"hostname\", `{.items[0].metadata.labels.kubernetes.io/hostname}`, &nodesData, \"127.0.0.1\", false}, {\"hostname filter\", `{.items[?(@.metadata.labels.kubernetes.io/hostname==\"127.0.0.1\")].kind}`, &nodesData, \"None\", false}, {\"bool item\", `{.items[?(@..ready==true)].metadata.name}`, &nodesData, \"127.0.0.1\", false}, } testJSONPath(nodesTests, false, t) randomPrintOrderTests := []jsonpathTest{ {\"recursive name\", \"{..name}\", nodesData, `127.0.0.1 127.0.0.2 myself e2e`, false}, } testJSONPathSortOutput(randomPrintOrderTests, t) } func TestFilterPartialMatchesSometimesMissingAnnotations(t *testing.T) { // for https://issues.k8s.io/45546 var input = []byte(`{ \"kind\":
\"List\", \"items\": [ { \"kind\": \"Pod\", \"metadata\": { \"name\": \"pod1\", \"annotations\": { \"color\": \"blue\" } } }, { \"kind\": \"Pod\", \"metadata\": { \"name\": \"pod2\" } }, { \"kind\": \"Pod\", \"metadata\": { \"name\": \"pod3\", \"annotations\": { \"color\": \"green\" } } }, { \"kind\": \"Pod\", \"metadata\": { \"name\": \"pod4\", \"annotations\": { \"color\": \"blue\" } } } ] }`) var data interface{} err := json.Unmarshal(input, &data) if err != nil { t.Fatal(err) } testJSONPath( []jsonpathTest{ { \"filter, should only match a subset, some items don't have annotations, tolerate missing items\", `{.items[?(@.metadata.annotations.color==\"blue\")].metadata.name}`, data, \"pod1 pod4\", false, // expect no error }, }, true, // allow missing keys t, ) testJSONPath( []jsonpathTest{ { \"filter, should only match a subset, some items don't have annotations, error on missing items\", `{.items[?(@.metadata.annotations.color==\"blue\")].metadata.name}`, data, \"\", true, // expect an error }, }, false, // don't allow missing keys t, ) } "} {"_id":"doc-en-kubernetes-bfe1b19731f72a976b3e8efbc981b4dffc10e4f7577a683c0b684753b066a84a","title":"","text":"\"integration\", ], deps = [ \"//vendor/k8s.io/apimachinery/pkg/api/errors:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/api/meta:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured:go_default_library\","} {"_id":"doc-en-kubernetes-791b886fae8650e8dbfe2cec1193e625632765874cdfb30b967cee383d811749","title":"","text":"\"testing\" \"time\" \"k8s.io/apimachinery/pkg/api/errors\" \"k8s.io/apimachinery/pkg/api/meta\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured\""} {"_id":"doc-en-kubernetes-b5bca62adf07d8d0ae02e4c4a8e49b0c558da7e33be723500f26bb83e0060b90","title":"","text":"if e, a := createdNoxuInstance, gottenNoxuInstance2; !reflect.DeepEqual(e, a) { 
t.Errorf(\"expected %v, got %v\", e, a) } } func TestDeRegistrationAndReRegistration(t *testing.T) { stopCh, apiExtensionClient, clientPool, err := testserver.StartDefaultServer() if err != nil { t.Fatal(err) } defer close(stopCh) noxuDefinition := testserver.NewNoxuCustomResourceDefinition() ns := \"not-the-default\" sameInstanceName := \"foo\" func() { noxuVersionClient, err := testserver.CreateNewCustomResourceDefinition(noxuDefinition, apiExtensionClient, clientPool) if err != nil { t.Fatal(err) } noxuNamespacedResourceClient := NewNamespacedCustomResourceClient(ns, noxuVersionClient, noxuDefinition) if _, err := instantiateCustomResource(t, testserver.NewNoxuInstance(ns, sameInstanceName), noxuNamespacedResourceClient, noxuDefinition); err != nil { t.Fatal(err) } // Remove sameInstanceName since at the moment there's no finalizers. // TODO: as soon finalizers will be implemented Delete can be removed. if err := noxuNamespacedResourceClient.Delete(sameInstanceName, nil); err != nil { t.Fatal(err) } if err := testserver.DeleteCustomResourceDefinition(noxuDefinition, apiExtensionClient); err != nil { t.Fatal(err) } if _, err := testserver.GetCustomResourceDefinition(noxuDefinition, apiExtensionClient); err == nil || !errors.IsNotFound(err) { t.Fatalf(\"expected a NotFound error, got:%v\", err) } if _, err = noxuNamespacedResourceClient.List(metav1.ListOptions{}); err == nil || !errors.IsNotFound(err) { t.Fatalf(\"expected a NotFound error, got:%v\", err) } if _, err = noxuNamespacedResourceClient.Get(\"foo\"); err == nil || !errors.IsNotFound(err) { t.Fatalf(\"expected a NotFound error, got:%v\", err) } }() func() { if _, err := testserver.GetCustomResourceDefinition(noxuDefinition, apiExtensionClient); err == nil || !errors.IsNotFound(err) { t.Fatalf(\"expected a NotFound error, got:%v\", err) } noxuVersionClient, err := testserver.CreateNewCustomResourceDefinition(noxuDefinition, apiExtensionClient, clientPool) if err != nil { t.Fatal(err) } 
noxuNamespacedResourceClient := NewNamespacedCustomResourceClient(ns, noxuVersionClient, noxuDefinition) initialList, err := noxuNamespacedResourceClient.List(metav1.ListOptions{}) if err != nil { t.Fatal(err) } if _, err = noxuNamespacedResourceClient.Get(sameInstanceName); err == nil || !errors.IsNotFound(err) { t.Fatalf(\"expected a NotFound error, got:%v\", err) } if e, a := 0, len(initialList.(*unstructured.UnstructuredList).Items); e != a { t.Fatalf(\"expected %v, got %v\", e, a) } createdNoxuInstance, err := instantiateCustomResource(t, testserver.NewNoxuInstance(ns, sameInstanceName), noxuNamespacedResourceClient, noxuDefinition) if err != nil { t.Fatal(err) } gottenNoxuInstance, err := noxuNamespacedResourceClient.Get(sameInstanceName) if err != nil { t.Fatal(err) } if e, a := createdNoxuInstance, gottenNoxuInstance; !reflect.DeepEqual(e, a) { t.Fatalf(\"expected %v, got %v\", e, a) } listWithItem, err := noxuNamespacedResourceClient.List(metav1.ListOptions{}) if err != nil { t.Fatal(err) } if e, a := 1, len(listWithItem.(*unstructured.UnstructuredList).Items); e != a { t.Fatalf(\"expected %v, got %v\", e, a) } if e, a := *createdNoxuInstance, listWithItem.(*unstructured.UnstructuredList).Items[0]; !reflect.DeepEqual(e, a) { t.Fatalf(\"expected %v, got %v\", e, a) } if err := noxuNamespacedResourceClient.Delete(sameInstanceName, nil); err != nil { t.Fatal(err) } if _, err = noxuNamespacedResourceClient.Get(sameInstanceName); err == nil || !errors.IsNotFound(err) { t.Fatalf(\"expected a NotFound error, got:%v\", err) } listWithoutItem, err := noxuNamespacedResourceClient.List(metav1.ListOptions{}) if err != nil { t.Fatal(err) } if e, a := 0, len(listWithoutItem.(*unstructured.UnstructuredList).Items); e != a { t.Fatalf(\"expected %v, got %v\", e, a) } }() }"} {"_id":"doc-en-kubernetes-aba02a0cd96568479804b673efd782382f478625d90ba7b358b5140362f4db29","title":"","text":"tags = [\"automanaged\"], deps = [ 
\"//vendor/github.com/pborman/uuid:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/api/errors:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/runtime/schema:go_default_library\","} {"_id":"doc-en-kubernetes-4b9cf6a935f4b15793f73f9eb20bb3f3fee208ccacad280d58b4e1e967c2d674","title":"","text":"import ( \"time\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured\" \"k8s.io/apimachinery/pkg/runtime/schema\""} {"_id":"doc-en-kubernetes-22b1db6a2ccd28c172fe1a70b2068a865acfe3cd18ff76a312db9b7e6ea22c31","title":"","text":"} return dynamicClient, nil } func DeleteCustomResourceDefinition(customResource *apiextensionsv1alpha1.CustomResourceDefinition, apiExtensionsClient clientset.Interface) error { if err := apiExtensionsClient.Apiextensions().CustomResourceDefinitions().Delete(customResource.Name, nil); err != nil { return err } err := wait.PollImmediate(30*time.Millisecond, 30*time.Second, func() (bool, error) { if _, err := apiExtensionsClient.Discovery().ServerResourcesForGroupVersion(customResource.Spec.Group + \"/\" + customResource.Spec.Version); err != nil { if errors.IsNotFound(err) { return true, nil } return false, err } return false, nil }) return err } func GetCustomResourceDefinition(customResource *apiextensionsv1alpha1.CustomResourceDefinition, apiExtensionsClient clientset.Interface) (*apiextensionsv1alpha1.CustomResourceDefinition, error) { return apiExtensionsClient.Apiextensions().CustomResourceDefinitions().Get(customResource.Name, metav1.GetOptions{}) } "} {"_id":"doc-en-kubernetes-13d43e0b9429d374a1ecee7548696e08ff7dbef21af74952a25c54e14d90fbd8","title":"","text":"library = \":go_default_library\", tags = [\"automanaged\"], deps = [ \"//pkg/cloudprovider/providers/azure:go_default_library\", 
\"//pkg/cloudprovider/providers/fake:go_default_library\", \"//pkg/util/mount:go_default_library\", \"//pkg/volume:go_default_library\", \"//pkg/volume/testing:go_default_library\","} {"_id":"doc-en-kubernetes-84050a42b0d800bed44d88bc9fafce6bc54434924330b08b98af6bd39d352919","title":"","text":"\"k8s.io/kubernetes/pkg/volume\" \"github.com/golang/glog\" \"k8s.io/kubernetes/pkg/cloudprovider\" \"k8s.io/kubernetes/pkg/cloudprovider/providers/azure\" \"k8s.io/kubernetes/pkg/volume/util\" ) // ProbeVolumePlugins is the primary endpoint for volume plugins func ProbeVolumePlugins() []volume.VolumePlugin { return []volume.VolumePlugin{&azureFilePlugin{nil}} }"} {"_id":"doc-en-kubernetes-07475e01541eb8fd1f84b96503756581ea4480adca9e8b3be4161c20fe5c8f04","title":"","text":"if accountName, accountKey, err = b.util.GetAzureCredentials(b.plugin.host, b.pod.Namespace, b.secretName); err != nil { return err } os.MkdirAll(dir, 0700) source := fmt.Sprintf(\"//%s.file.%s/%s\", accountName, getStorageEndpointSuffix(b.plugin.host.GetCloudProvider()), b.shareName) // parameters suggested by https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/ options := []string{fmt.Sprintf(\"vers=3.0,username=%s,password=%s,dir_mode=0700,file_mode=0700\", accountName, accountKey)} if b.readOnly { options = append(options, \"ro\") }"} {"_id":"doc-en-kubernetes-07b47a8b0a758f74bd00ba0161082ff1015deebf2c5002b203975a65eb605e65","title":"","text":"return nil, false, fmt.Errorf(\"Spec does not reference an AzureFile volume type\") } func getAzureCloud(cloudProvider cloudprovider.Interface) (*azure.Cloud, error) { azure, ok := cloudProvider.(*azure.Cloud) if !ok || azure == nil { return nil,
fmt.Errorf(\"Failed to get Azure Cloud Provider. GetCloudProvider returned %v instead\", cloudProvider) } return azure, nil } func getStorageEndpointSuffix(cloudprovider cloudprovider.Interface) string { const publicCloudStorageEndpointSuffix = \"core.windows.net\" azure, err := getAzureCloud(cloudprovider) if err != nil { glog.Warningf(\"No Azure cloud provider found. Using the Azure public cloud endpoint: %s\", publicCloudStorageEndpointSuffix) return publicCloudStorageEndpointSuffix } return azure.Environment.StorageEndpointSuffix } "} {"_id":"doc-en-kubernetes-b92ecebd906b685d20ba057d5a97c177b054766b6fe824d82851d03e29691d70","title":"","text":"\"io/ioutil\" \"os\" \"path\" \"strings\" \"testing\" \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/client-go/kubernetes/fake\" \"k8s.io/kubernetes/pkg/cloudprovider/providers/azure\" fakecloud \"k8s.io/kubernetes/pkg/cloudprovider/providers/fake\" \"k8s.io/kubernetes/pkg/util/mount\" \"k8s.io/kubernetes/pkg/volume\" volumetest \"k8s.io/kubernetes/pkg/volume/testing\""} {"_id":"doc-en-kubernetes-d115fa853b6a3446076c80169fa92d2147abcc1396fe005459d188d84cb3d468","title":"","text":"return false } func getAzureTestCloud(t *testing.T) *azure.Cloud { config := `{ \"aadClientId\": \"--aad-client-id--\", \"aadClientSecret\": \"--aad-client-secret--\" }` configReader := strings.NewReader(config) cloud, err := azure.NewCloud(configReader) if err != nil { t.Error(err) } azureCloud, ok := cloud.(*azure.Cloud) if !ok { t.Error(\"NewCloud returned incorrect type\") } return azureCloud } func getTestTempDir(t *testing.T) string { tmpDir, err := ioutil.TempDir(os.TempDir(), \"azurefileTest\") if err != nil { t.Fatalf(\"can't make a temp dir: %v\", err) } return tmpDir } func TestPluginAzureCloudProvider(t *testing.T) { tmpDir := getTestTempDir(t) defer os.RemoveAll(tmpDir) testPlugin(t, tmpDir,
volumetest.NewFakeVolumeHostWithCloudProvider(tmpDir, nil, nil, getAzureTestCloud(t))) } func TestPluginWithoutCloudProvider(t *testing.T) { tmpDir := getTestTempDir(t) defer os.RemoveAll(tmpDir) testPlugin(t, tmpDir, volumetest.NewFakeVolumeHost(tmpDir, nil, nil)) } func TestPluginWithOtherCloudProvider(t *testing.T) { tmpDir := getTestTempDir(t) defer os.RemoveAll(tmpDir) cloud := &fakecloud.FakeCloud{} testPlugin(t, tmpDir, volumetest.NewFakeVolumeHostWithCloudProvider(tmpDir, nil, nil, cloud)) } func testPlugin(t *testing.T, tmpDir string, volumeHost volume.VolumeHost) { plugMgr := volume.VolumePluginMgr{} plugMgr.InitPlugins(ProbeVolumePlugins(), volumeHost) plug, err := plugMgr.FindPluginByName(\"kubernetes.io/azure-file\") if err != nil {"} {"_id":"doc-en-kubernetes-839719faf7240f60727f8b56965ca53afdb0c7d07adfb16e97c13731a4be27b7","title":"","text":"} func NewFakeVolumeHost(rootDir string, kubeClient clientset.Interface, plugins []VolumePlugin) *fakeVolumeHost { return newFakeVolumeHost(rootDir, kubeClient, plugins, nil) } func NewFakeVolumeHostWithCloudProvider(rootDir string, kubeClient clientset.Interface, plugins []VolumePlugin, cloud cloudprovider.Interface) *fakeVolumeHost { return newFakeVolumeHost(rootDir, kubeClient, plugins, cloud) } func newFakeVolumeHost(rootDir string, kubeClient clientset.Interface, plugins []VolumePlugin, cloud cloudprovider.Interface) *fakeVolumeHost { host := &fakeVolumeHost{rootDir: rootDir, kubeClient: kubeClient, cloud: cloud} host.mounter = &mount.FakeMounter{} host.writer = &io.StdWriter{} host.pluginMgr.InitPlugins(plugins, host)"} {"_id":"doc-en-kubernetes-9006ad8a1ef4066b91045f0f74c3c93e7d6b06df33a12c442b1043b7deaf0565","title":"","text":"} // SetRef stores a reference to a pod's container, associating it with the given container ID.
// TODO: move this to client-go v1.ObjectReference func (c *RefManager) SetRef(id ContainerID, ref *v1.ObjectReference) { c.Lock() defer c.Unlock()"} {"_id":"doc-en-kubernetes-8d8c32bf201464bd35fc1b84398f2283e87ea9bb2b98cc33651f8a3e1c5938a0","title":"","text":"} // GetRef returns the container reference of the given ID, or (nil, false) if none is stored. // TODO: move this to client-go v1.ObjectReference func (c *RefManager) GetRef(id ContainerID) (ref *v1.ObjectReference, ok bool) { c.RLock() defer c.RUnlock()"} {"_id":"doc-en-kubernetes-5156073453ff201965d0fade08d39b3c032a58211f3ed42dbfab323186f6959d","title":"","text":"glog.Warningf(\"No ref for container %q\", containerID) return } m.recorder.Event(events.ToObjectReference(ref), eventType, reason, message) } // executePreStopHook runs the pre-stop lifecycle hooks if applicable and returns the duration it takes."} {"_id":"doc-en-kubernetes-0ecedff605dd11b36796cae4ae85ebdb5de3bf532adb761596b0ee74871ca1f5","title":"","text":"if err != nil { glog.V(1).Infof(\"%s probe for %q errored: %v\", probeType, ctrName, err) if hasRef { pb.recorder.Eventf(events.ToObjectReference(ref), v1.EventTypeWarning, events.ContainerUnhealthy, \"%s probe errored: %v\", probeType, err) } } else { // result != probe.Success glog.V(1).Infof(\"%s probe for %q failed (%v): %s\", probeType, ctrName, result, output) if hasRef { pb.recorder.Eventf(events.ToObjectReference(ref), v1.EventTypeWarning, events.ContainerUnhealthy, \"%s probe failed: %s\", probeType, output) } } return results.Failure, err"} {"_id":"doc-en-kubernetes-6a07357654c1ea58647a5cecc619b4ea3fd0107d714faeaf8a5ee93cb72d639e","title":"","text":"uuid := utilstrings.ShortenString(id.uuid, 8) switch reason
{ case \"Created\": r.recorder.Eventf(events.ToObjectReference(ref), v1.EventTypeNormal, events.CreatedContainer, \"Created with rkt id %v\", uuid) case \"Started\": r.recorder.Eventf(events.ToObjectReference(ref), v1.EventTypeNormal, events.StartedContainer, \"Started with rkt id %v\", uuid) case \"Failed\": r.recorder.Eventf(events.ToObjectReference(ref), v1.EventTypeWarning, events.FailedToStartContainer, \"Failed to start with rkt id %v with error %v\", uuid, failure) case \"Killing\": r.recorder.Eventf(events.ToObjectReference(ref), v1.EventTypeNormal, events.KillingContainer, \"Killing with rkt id %v\", uuid) default: glog.Errorf(\"rkt: Unexpected event %q\", reason) }"} {"_id":"doc-en-kubernetes-0e3f104991f633a9fda07684064de39e8d05d3174df8fd2e6967032326bd9303","title":"","text":"if len(namespace) == 0 { namespace = metav1.NamespaceDefault } selfLink = fmt.Sprintf(\"/api/\"+api.Registry.GroupOrDie(api.GroupName).GroupVersion.Version+\"/namespaces/%s/pods/%s\", namespace, name) return selfLink }"} {"_id":"doc-en-kubernetes-acd8a1d668b355f64914113fac6806075f1045e1331905dadd9b40c90e814486","title":"","text":"} } } func TestGetSelfLink(t *testing.T) { var testCases = []struct { desc string name string namespace string expectedSelfLink string }{ { desc: \"No namespace specified\", name: \"foo\", namespace: \"\", expectedSelfLink: \"/api/v1/namespaces/default/pods/foo\", }, { desc:
\"Namespace specified\", name: \"foo\", namespace: \"bar\", expectedSelfLink: \"/api/v1/namespaces/bar/pods/foo\", }, } for _, testCase := range testCases { selfLink := getSelfLink(testCase.name, testCase.namespace) if testCase.expectedSelfLink != selfLink { t.Errorf(\"%s: getSelfLink error, expected: %s, got: %s\", testCase.desc, testCase.expectedSelfLink, selfLink) } } } "} {"_id":"doc-en-kubernetes-c7840a21fb40ecf57f57663c7c74607f80f814573f8a842de0f03083ed3212a3","title":"","text":"\"//pkg/client/clientset_generated/internalclientset:go_default_library\", \"//pkg/client/clientset_generated/internalclientset/typed/core/internalversion:go_default_library\", \"//pkg/kubeapiserver/admission:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/api/errors:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", \"//vendor/k8s.io/apiserver/pkg/admission:go_default_library\", ],"} {"_id":"doc-en-kubernetes-bc108532072832232937c90ce0bb40e3e66dd8a671f3dbdb86af202e9d555e29","title":"","text":"\"fmt\" \"io\" \"k8s.io/apimachinery/pkg/api/errors\" \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apiserver/pkg/admission\" \"k8s.io/kubernetes/pkg/api\""} {"_id":"doc-en-kubernetes-229b931a7c397926a3a20fbcd96879580026f2581b80ebeaa83c25a6df324db6","title":"","text":"return nil case admission.Delete: // get the existing pod // get the existing pod from the server cache existingPod, err := c.podsGetter.Pods(a.GetNamespace()).Get(a.GetName(), v1.GetOptions{ResourceVersion: \"0\"}) if errors.IsNotFound(err) { // wasn't found in the server cache, do a live lookup before forbidding existingPod, err = c.podsGetter.Pods(a.GetNamespace()).Get(a.GetName(), v1.GetOptions{}) } if err != nil { return admission.NewForbidden(a, err) }"} {"_id":"doc-en-kubernetes-a025134d6c7a30fceb3f4e3e651ee74d6e8cacaecd34dc04f5e359b6471a9355","title":"","text":"tolerations: - operator: \"Exists\" effect: \"NoExecute\" - key: \"CriticalAddonsOnly\" operator: \"Exists\" "} 
{"_id":"doc-en-kubernetes-111713cb7356cd7c453110e1466e10bc754216687fa5957606c70b9d1abfe452","title":"","text":"if kind.Version == runtime.APIVersionInternal { continue } if kind == metav1.Unversioned.WithKind(\"Status\") { // this is added below as unversioned continue } metaOnlyObject := gvkToMetadataOnlyObject(kind) scheme.AddKnownTypeWithName(kind, metaOnlyObject) } scheme.AddUnversionedTypes(metav1.Unversioned, &metav1.Status{}) return serializer.NewCodecFactory(scheme) }"} {"_id":"doc-en-kubernetes-6503abb79b502b3fcd0eaacc588f5b1f8e57325350c6fb3cb6c0628ba4bc714b","title":"","text":"// SchemeGroupVersion is group version used to register these objects var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: \"v1\"} // Unversioned is group version for unversioned API objects // TODO: this should be v1 probably var Unversioned = schema.GroupVersion{Group: \"\", Version: \"v1\"} // WatchEventKind is name reserved for serializing watch events. const WatchEventKind = \"WatchEvent\""} {"_id":"doc-en-kubernetes-a970a199e15095005ca6ed46d570ac457a40f0b6035476f48733542704d86a25","title":"","text":"Convert_versioned_Event_to_versioned_InternalEvent, ) // Register Unversioned types under their own special group scheme.AddUnversionedTypes(Unversioned, &Status{}, &APIVersions{}, &APIGroupList{}, &APIGroup{}, &APIResourceList{}, ) // register manually. This usually goes through the SchemeBuilder, which we cannot use here. scheme.AddGeneratedDeepCopyFuncs(GetGeneratedDeepCopyFuncs()...)
AddConversionFuncs(scheme)"} {"_id":"doc-en-kubernetes-9a153b357cc35eb655f40b08cf93663fddd64e022964311f9ac7a2c0c620001d","title":"","text":"t := reflect.TypeOf(obj).Elem() gvk := version.WithKind(t.Name()) s.unversionedTypes[t] = gvk if old, ok := s.unversionedKinds[gvk.Kind]; ok && t != old { panic(fmt.Sprintf(\"%v.%v has already been registered as unversioned kind %q - kind name must be unique\", old.PkgPath(), old.Name(), gvk)) } s.unversionedKinds[gvk.Kind] = t }"} {"_id":"doc-en-kubernetes-2d08aa9f0188966efa13d324f24ce950aa0d1bd94a340f8bbc1bc380e8e49233","title":"","text":"} s.AddUnversionedTypes(gv, &InternalSimple{}) s.AddUnversionedTypes(gv, &InternalSimple{}) if len(s.KnownTypes(gv)) != 1 { t.Errorf(\"expected only one %v type after double registration with custom name\", gv) }"} {"_id":"doc-en-kubernetes-151400ee2ac8006e17a560dec7f9035ca86a68f609c580e83c1a547771403711","title":"","text":"} } // EmbeddableTypeMeta passes GetObjectKind to the type which embeds it.
type EmbeddableTypeMeta runtime.TypeMeta func (tm *EmbeddableTypeMeta) GetObjectKind() schema.ObjectKind { return (*runtime.TypeMeta)(tm) } func TestConflictingAddKnownTypes(t *testing.T) { s := runtime.NewScheme() gv := schema.GroupVersion{Group: \"foo\", Version: \"v1\"}"} {"_id":"doc-en-kubernetes-d1b4b2809f4ef26dbfbaa1393eb684b6c970e54d1b52775d37dcfb588512f47f","title":"","text":"panicked <- true } }() s.AddUnversionedTypes(gv, &InternalSimple{}) // redefine InternalSimple with the same name, but obviously as a different type type InternalSimple struct { EmbeddableTypeMeta `json:\",inline\"` TestString string `json:\"testString\"` } s.AddUnversionedTypes(gv, &InternalSimple{}) panicked <- false }()"} {"_id":"doc-en-kubernetes-009146622dd48374d238d05664955a82a0bbfd0fbef9855b3390cb30af84612d","title":"","text":"// SchemeGroupVersion is group version used to register these objects var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: runtime.APIVersionInternal} // Unversioned is group version for unversioned API objects // TODO: this should be v1 probably var Unversioned = schema.GroupVersion{Group: \"\", Version: \"v1\"} // ParameterCodec handles versioning of objects that are converted to query parameters. 
var ParameterCodec = runtime.NewParameterCodec(Scheme)"} {"_id":"doc-en-kubernetes-b5562393a85947e79f35de68ac280cfe5e22e548fcb1c7757b800a08aa13b4a0","title":"","text":"&ConfigMapList{}, ) // Register Unversioned types under their own special group scheme.AddUnversionedTypes(Unversioned, &metav1.Status{}, &metav1.APIVersions{}, &metav1.APIGroupList{}, &metav1.APIGroup{}, &metav1.APIResourceList{}, ) return nil }"} {"_id":"doc-en-kubernetes-43faa21049c32252f66fcc770fb34392eb55ced2cedfd85ad254b551b1f45dd5","title":"","text":"} } func TestStatus(t *testing.T) { _, s, closeFn := framework.RunAMaster(nil) defer closeFn() u := s.URL + \"/apis/batch/v1/namespaces/default/jobs/foo\" resp, err := http.Get(u) if err != nil { t.Fatalf(\"unexpected error getting %s: %v\", u, err) } if resp.StatusCode != http.StatusNotFound { t.Fatalf(\"got status %v instead of 404\", resp.StatusCode) } defer resp.Body.Close() data, _ := ioutil.ReadAll(resp.Body) decodedData := map[string]interface{}{} if err := json.Unmarshal(data, &decodedData); err != nil { t.Logf(\"body: %s\", string(data)) t.Fatalf(\"got error decoding data: %v\", err) } t.Logf(\"body: %s\", string(data)) if got, expected := decodedData[\"apiVersion\"], \"v1\"; got != expected { t.Errorf(\"unexpected apiVersion %q, expected %q\", got, expected) } if got, expected := decodedData[\"kind\"], \"Status\"; got != expected { t.Errorf(\"unexpected kind %q, expected %q\", got, expected) } if got, expected := decodedData[\"status\"], \"Failure\"; got != expected { t.Errorf(\"unexpected status %q, expected %q\", got, expected) } if got, expected := decodedData[\"code\"], float64(404); got != expected { t.Errorf(\"unexpected code %v, expected %v\", got, expected) } } func TestWatchSucceedsWithoutArgs(t *testing.T) { _, s, closeFn := framework.RunAMaster(nil) defer closeFn()"} {"_id":"doc-en-kubernetes-84ef4579f6a2170e18ae397f0c6fccbc6f305673ebc923fdf4fa333ec18b046c","title":"","text":"PodWaiting PodStatus = \"Waiting\" // PodRunning 
means that the pod is up and running. PodRunning PodStatus = \"Running\" // PodTerminated means that the pod has stopped. // PodTerminated means that the pod has stopped with error(s) PodTerminated PodStatus = \"Terminated\" // PodUnknown means that we failed to obtain info about the pod. PodUnknown PodStatus = \"Unknown\" // PodSucceeded means that the pod has stopped without error(s) PodSucceeded PodStatus = \"Succeeded\" ) type ContainerStateWaiting struct {"} {"_id":"doc-en-kubernetes-d758b89a7c4df0d90f8c9800580a7dbe1bc5a6ff227e79e56827f6fa675d34da","title":"","text":"case newer.PodRunning: *out = PodRunning case newer.PodSucceeded: *out = PodTerminated *out = PodSucceeded case newer.PodFailed: *out = PodTerminated case newer.PodUnknown:"} {"_id":"doc-en-kubernetes-4cdf0aaab158f2550906fadc4c5762565e830f863689cb7ede6e0e869511f102","title":"","text":"case PodTerminated: // Older API versions did not contain enough info to map to PodSucceeded *out = newer.PodFailed case PodSucceeded: *out = newer.PodSucceeded case PodUnknown: *out = newer.PodUnknown default:"} {"_id":"doc-en-kubernetes-ae6ff399305e2fb905afa1c4df90e085bc2176a6012404b1b80f2a05572e66ab","title":"","text":"PodTerminated PodStatus = \"Terminated\" // PodUnknown means that we failed to obtain info about the pod. PodUnknown PodStatus = \"Unknown\" // PodSucceeded means that the pod has stopped without error(s) PodSucceeded PodStatus = \"Succeeded\" ) type ContainerStateWaiting struct {"} {"_id":"doc-en-kubernetes-eb169a476fce4d5432553b5c0645a066d73ed0e9f0f8cb8788cebd332514e818","title":"","text":"package e2e import ( \"io/ioutil\" \"net/http\" \"strings\" \"time\""} {"_id":"doc-en-kubernetes-acd5865f5e7734cabf958dda044a84bd3f5d8a47dc594641844eaace3e1ea07d","title":"","text":"\"k8s.io/kubernetes/pkg/api/v1\" extensions \"k8s.io/kubernetes/pkg/apis/extensions/v1beta1\" \"k8s.io/kubernetes/test/e2e/framework\" \"k8s.io/kubernetes/test/e2e/generated\" . \"github.com/onsi/ginkgo\" . 
\"github.com/onsi/gomega\""} {"_id":"doc-en-kubernetes-60c55c17e4dd0a60d8299e116d857d77c5522ba1ce63b27f267414809192d0f7","title":"","text":"// Nvidia driver installation can take upwards of 5 minutes. driverInstallTimeout = 10 * time.Minute // Nvidia COS driver installer daemonset. cosNvidiaDriverInstallerPath = \"cluster/gce/gci/nvidia-gpus/cos-installer-daemonset.yaml\" cosNvidiaDriverInstallerUrl = \"https://raw.githubusercontent.com/ContainerEngine/accelerators/stable/cos-nvidia-gpu-installer/daemonset.yaml\" ) func makeCudaAdditionTestPod() *v1.Pod {"} {"_id":"doc-en-kubernetes-f947f5149bafc3a338a4211596482f0676eb9efda7a34520e81c86263da60550","title":"","text":"// GPU drivers might have already been installed. if !areGPUsAvailableOnAllSchedulableNodes(f) { // Install Nvidia Drivers. ds := dsFromManifest(cosNvidiaDriverInstallerPath) ds := dsFromManifest(cosNvidiaDriverInstallerUrl) ds.Namespace = f.Namespace.Name _, err := f.ClientSet.Extensions().DaemonSets(f.Namespace.Name).Create(ds) framework.ExpectNoError(err, \"failed to create daemonset\")"} {"_id":"doc-en-kubernetes-de986b276b53cdd91e5c979f42724c35b84232662d716477c20188d82a9c8cec","title":"","text":"} // dsFromManifest reads a .json/yaml file and returns the daemonset in it. 
func dsFromManifest(fileName string) *extensions.DaemonSet { func dsFromManifest(url string) *extensions.DaemonSet { var controller extensions.DaemonSet framework.Logf(\"Parsing ds from %v\", fileName) data := generated.ReadOrDie(fileName) framework.Logf(\"Parsing ds from %v\", url) var response *http.Response var err error for i := 1; i <= 5; i++ { response, err = http.Get(url) if err == nil && response.StatusCode == 200 { break } time.Sleep(time.Duration(i) * time.Second) } Expect(err).NotTo(HaveOccurred()) Expect(response.StatusCode).To(Equal(200)) defer response.Body.Close() data, err := ioutil.ReadAll(response.Body) Expect(err).NotTo(HaveOccurred()) json, err := utilyaml.ToJSON(data) Expect(err).NotTo(HaveOccurred())"} {"_id":"doc-en-kubernetes-a184669b4f8aef70c7f97a8283e42a75f89ba185e8981003aabe1b660b97789c","title":"","text":"runcmd: - modprobe configs - docker run -v /dev:/dev -v /home/kubernetes/bin/nvidia:/rootfs/nvidia -v /etc/os-release:/rootfs/etc/os-release -v /proc/sysrq-trigger:/sysrq -e LAKITU_KERNEL_SHA1=26481563cb3788ad254c2bf2126b843c161c7e48 -e BASE_DIR=/rootfs/nvidia --privileged gcr.io/google_containers/cos-nvidia-driver-install@sha256:ad83ede6e0c6d768bf7cf69a7dec972aa5e8f88778142ca46afd3286ad58cfc8 - docker run -v /dev:/dev -v /home/kubernetes/bin/nvidia:/rootfs/nvidia -v /etc/os-release:/rootfs/etc/os-release -v /proc/sysrq-trigger:/sysrq -e BASE_DIR=/rootfs/nvidia --privileged gcr.io/google_containers/cos-nvidia-driver-install@sha256:cb55c7971c337fece62f2bfe858662522a01e43ac9984a2dd1dd5c71487d225c - mount /tmp /tmp -o remount,exec,suid - usermod -a -G docker jenkins - mkdir -p /var/lib/kubelet"} {"_id":"doc-en-kubernetes-05af8398773a76624b30fb4b11bb9909f037c7d540122074af0a73af49209119","title":"","text":"image: e2e-node-containervm-v20161208-image # docker 1.11.2 project: kubernetes-node-e2e-images gci: image_regex: cos-beta-59-9460-20-0 # docker 1.11.2 image_regex: cos-stable-59-9460-60-0 # docker 1.11.2 project: cos-cloud metadata: 
\"user-data heketiAnn = \"heketi-dynamic-provisioner\" glusterTypeAnn = \"gluster.org/type\" glusterDescAnn = \"Gluster: Dynamically provisioned PV\" linuxGlusterMountBinary = \"mount.glusterfs\" autoUnmountBinaryVer = \"3.11\" )"} {"_id":"doc-en-kubernetes-a0ee4ca6fec483693c46daaa9f60b9b9c509398f131af4632f15776bfd00f5c4","title":"","text":"if kubeClient == nil { return nil, fmt.Errorf(\"glusterfs: failed to get kube client to initialize mounter\") } ep, err := kubeClient.Core().Endpoints(ns).Get(epName, metav1.GetOptions{}) if err != nil && errors.IsNotFound(err) { claim := spec.PersistentVolume.Spec.ClaimRef.Name checkEpName := dynamicEpSvcPrefix + claim if epName != checkEpName { return nil, fmt.Errorf(\"failed to get endpoint %s, error %v\", epName, err) } glog.Errorf(\"glusterfs: failed to get endpoint %s[%v]\", epName, err) if spec != nil && spec.PersistentVolume.Annotations[volumehelper.VolumeDynamicallyCreatedByKey] == heketiAnn { class, err := volutil.GetClassForVolume(plugin.host.GetKubeClient(), spec.PersistentVolume) if err != nil { return nil, fmt.Errorf(\"glusterfs: failed to get storageclass, error: %v\", err) } cfg, err := parseClassParameters(class.Parameters, plugin.host.GetKubeClient()) if err != nil { return nil, fmt.Errorf(\"glusterfs: failed to parse parameters, error: %v\", err) } scConfig := *cfg cli := gcli.NewClient(scConfig.url, scConfig.user, scConfig.secretValue) if cli == nil { return nil, fmt.Errorf(\"glusterfs: failed to create heketi client, error: %v\", err) } volumeID := dstrings.TrimPrefix(source.Path, volPrefix) volInfo, err := cli.VolumeInfo(volumeID) if err != nil { return nil, fmt.Errorf(\"glusterfs: failed to get volume info, error: %v\", err) } endpointIPs, err := getClusterNodes(cli, volInfo.Cluster) if err != nil { return nil, fmt.Errorf(\"glusterfs: failed to get cluster nodes, error: %v\", err) } // Give an attempt to recreate endpoint/service. 
_, _, err = plugin.createEndpointService(ns, epName, endpointIPs, claim) if err != nil && !errors.IsAlreadyExists(err) { glog.Errorf(\"glusterfs: failed to recreate endpoint/service, error: %v\", err) return nil, fmt.Errorf(\"failed to recreate endpoint/service, error: %v\", err) } glog.V(3).Infof(\"glusterfs: endpoint/service [%v] successfully recreated \", epName) } else { return nil, err } if err != nil { glog.Errorf(\"glusterfs: failed to get endpoints %s[%v]\", epName, err) return nil, err } glog.V(1).Infof(\"glusterfs: endpoints %v\", ep) return plugin.newMounterInternal(spec, ep, pod, plugin.host.GetMounter(), exec.New()) }"} {"_id":"doc-en-kubernetes-25cce2fdda31093d14b2127007b67fdd036bfca4fb1bb33353b5eb910ed01666","title":"","text":"return newGidTable, nil } //createEndpointService create an endpoint and service in provided namespace. func (plugin *glusterfsPlugin) createEndpointService(namespace string, epServiceName string, hostips []string, pvcname string) (endpoint *v1.Endpoints, service *v1.Service, err error) { addrlist := make([]v1.EndpointAddress, len(hostips)) for i, v := range hostips { addrlist[i].IP = v } endpoint = &v1.Endpoints{ ObjectMeta: metav1.ObjectMeta{ Namespace: namespace, Name: epServiceName, Labels: map[string]string{ \"gluster.kubernetes.io/provisioned-for-pvc\": pvcname, }, }, Subsets: []v1.EndpointSubset{{ Addresses: addrlist, Ports: []v1.EndpointPort{{Port: 1, Protocol: \"TCP\"}}, }}, } kubeClient := plugin.host.GetKubeClient() if kubeClient == nil { return nil, nil, fmt.Errorf(\"glusterfs: failed to get kube client when creating endpoint service\") } _, err = kubeClient.Core().Endpoints(namespace).Create(endpoint) if err != nil && errors.IsAlreadyExists(err) { glog.V(1).Infof(\"glusterfs: endpoint [%s] already exist in namespace [%s]\", endpoint, namespace) err = nil } if err != nil { glog.Errorf(\"glusterfs: failed to create endpoint: %v\", err) return nil, nil, fmt.Errorf(\"error creating endpoint: %v\", err) } service = 
&v1.Service{ ObjectMeta: metav1.ObjectMeta{ Name: epServiceName, Namespace: namespace, Labels: map[string]string{ \"gluster.kubernetes.io/provisioned-for-pvc\": pvcname, }, }, Spec: v1.ServiceSpec{ Ports: []v1.ServicePort{ {Protocol: \"TCP\", Port: 1}}}} _, err = kubeClient.Core().Services(namespace).Create(service) if err != nil && errors.IsAlreadyExists(err) { glog.V(1).Infof(\"glusterfs: service [%s] already exist in namespace [%s]\", service, namespace) err = nil } if err != nil { glog.Errorf(\"glusterfs: failed to create service: %v\", err) return nil, nil, fmt.Errorf(\"error creating service: %v\", err) } return endpoint, service, nil } // deleteEndpointService delete the endpoint and service from the provided namespace. func (plugin *glusterfsPlugin) deleteEndpointService(namespace string, epServiceName string) (err error) { kubeClient := plugin.host.GetKubeClient() if kubeClient == nil { return fmt.Errorf(\"glusterfs: failed to get kube client when deleting endpoint service\") } err = kubeClient.Core().Services(namespace).Delete(epServiceName, nil) if err != nil { glog.Errorf(\"glusterfs: error deleting service %s/%s: %v\", namespace, epServiceName, err) return fmt.Errorf(\"error deleting service %s/%s: %v\", namespace, epServiceName, err) } glog.V(1).Infof(\"glusterfs: service/endpoint %s/%s deleted successfully\", namespace, epServiceName) return nil } func (d *glusterfsVolumeDeleter) getGid() (int, bool, error) { gidStr, ok := d.spec.Annotations[volumehelper.VolumeGidAnnotationKey]"} {"_id":"doc-en-kubernetes-aff627441cbac6b116f7ece4effb376c66fac7b19eef52c93116cbc04654d0c8","title":"","text":"dynamicEndpoint = pvSpec.Glusterfs.EndpointsName } glog.V(3).Infof(\"glusterfs: dynamic namespace and endpoint : [%v/%v]\", dynamicNamespace, dynamicEndpoint) err = d.plugin.deleteEndpointService(dynamicNamespace, dynamicEndpoint) err = d.deleteEndpointService(dynamicNamespace, dynamicEndpoint) if err != nil { glog.Errorf(\"glusterfs: error when deleting 
endpoint/service :%v\", err) } else {"} {"_id":"doc-en-kubernetes-84507db8a1f646ae69c5cea1bad9be7ad4459ab3a81911a212fcc2570a927e68","title":"","text":"gidStr := strconv.FormatInt(int64(gid), 10) pv.Annotations = map[string]string{ volumehelper.VolumeGidAnnotationKey: gidStr, volumehelper.VolumeDynamicallyCreatedByKey: heketiAnn, glusterTypeAnn: \"file\", \"Description\": glusterDescAnn, v1.MountOptionAnnotation: \"auto_unmount\", volumehelper.VolumeGidAnnotationKey: gidStr, \"kubernetes.io/createdby\": \"heketi-dynamic-provisioner\", \"gluster.org/type\": \"file\", \"Description\": \"Gluster: Dynamically provisioned PV\", v1.MountOptionAnnotation: \"auto_unmount\", } pv.Spec.Capacity = v1.ResourceList{"} {"_id":"doc-en-kubernetes-1e158d7daba06266d0d40a677afcaab057f820980e044b80ca2ad496120d82e2","title":"","text":"return pv, nil } func (p *glusterfsVolumeProvisioner) GetClusterNodes(cli *gcli.Client, cluster string) (dynamicHostIps []string, err error) { clusterinfo, err := cli.ClusterInfo(cluster) if err != nil { glog.Errorf(\"glusterfs: failed to get cluster details: %v\", err) return nil, fmt.Errorf(\"failed to get cluster details: %v\", err) } // For the dynamically provisioned volume, we gather the list of node IPs // of the cluster on which provisioned volume belongs to, as there can be multiple // clusters. 
for _, node := range clusterinfo.Nodes { nodei, err := cli.NodeInfo(string(node)) if err != nil { glog.Errorf(\"glusterfs: failed to get hostip: %v\", err) return nil, fmt.Errorf(\"failed to get hostip: %v\", err) } ipaddr := dstrings.Join(nodei.NodeAddRequest.Hostnames.Storage, \"\") dynamicHostIps = append(dynamicHostIps, ipaddr) } glog.V(3).Infof(\"glusterfs: hostlist :%v\", dynamicHostIps) if len(dynamicHostIps) == 0 { glog.Errorf(\"glusterfs: no hosts found: %v\", err) return nil, fmt.Errorf(\"no hosts found: %v\", err) } return dynamicHostIps, nil } func (p *glusterfsVolumeProvisioner) CreateVolume(gid int) (r *v1.GlusterfsVolumeSource, size int, err error) { var clusterIDs []string capacity := p.options.PVC.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)]"} {"_id":"doc-en-kubernetes-8a48704c26ac1514971a0729a2002652765ef4e88eeec62f239fb2a58b81d277","title":"","text":"return nil, 0, fmt.Errorf(\"error creating volume %v\", err) } glog.V(1).Infof(\"glusterfs: volume with size: %d and name: %s created\", volume.Size, volume.Name) dynamicHostIps, err := getClusterNodes(cli, volume.Cluster) dynamicHostIps, err := p.GetClusterNodes(cli, volume.Cluster) if err != nil { glog.Errorf(\"glusterfs: error [%v] when getting cluster nodes for volume %s\", err, volume) return nil, 0, fmt.Errorf(\"error [%v] when getting cluster nodes for volume %s\", err, volume)"} {"_id":"doc-en-kubernetes-476fe2e03e6f5263f64418fe88d5baac2a76935ec3a4f17fcbf756ad501f9b52","title":"","text":"// of volume creation. 
epServiceName := dynamicEpSvcPrefix + p.options.PVC.Name epNamespace := p.options.PVC.Namespace endpoint, service, err := p.plugin.createEndpointService(epNamespace, epServiceName, dynamicHostIps, p.options.PVC.Name) endpoint, service, err := p.createEndpointService(epNamespace, epServiceName, dynamicHostIps, p.options.PVC.Name) if err != nil { glog.Errorf(\"glusterfs: failed to create endpoint/service: %v\", err) deleteErr := cli.VolumeDelete(volume.Id)"} {"_id":"doc-en-kubernetes-886a22c54d0bb552d0aacc19364b639b73f6633669f646a1ac11e9f7bed58175","title":"","text":"}, sz, nil } func (p *glusterfsVolumeProvisioner) createEndpointService(namespace string, epServiceName string, hostips []string, pvcname string) (endpoint *v1.Endpoints, service *v1.Service, err error) { addrlist := make([]v1.EndpointAddress, len(hostips)) for i, v := range hostips { addrlist[i].IP = v } endpoint = &v1.Endpoints{ ObjectMeta: metav1.ObjectMeta{ Namespace: namespace, Name: epServiceName, Labels: map[string]string{ \"gluster.kubernetes.io/provisioned-for-pvc\": pvcname, }, }, Subsets: []v1.EndpointSubset{{ Addresses: addrlist, Ports: []v1.EndpointPort{{Port: 1, Protocol: \"TCP\"}}, }}, } kubeClient := p.plugin.host.GetKubeClient() if kubeClient == nil { return nil, nil, fmt.Errorf(\"glusterfs: failed to get kube client when creating endpoint service\") } _, err = kubeClient.Core().Endpoints(namespace).Create(endpoint) if err != nil && errors.IsAlreadyExists(err) { glog.V(1).Infof(\"glusterfs: endpoint [%s] already exist in namespace [%s]\", endpoint, namespace) err = nil } if err != nil { glog.Errorf(\"glusterfs: failed to create endpoint: %v\", err) return nil, nil, fmt.Errorf(\"error creating endpoint: %v\", err) } service = &v1.Service{ ObjectMeta: metav1.ObjectMeta{ Name: epServiceName, Namespace: namespace, Labels: map[string]string{ \"gluster.kubernetes.io/provisioned-for-pvc\": pvcname, }, }, Spec: v1.ServiceSpec{ Ports: []v1.ServicePort{ {Protocol: \"TCP\", Port: 1}}}} _, err = 
kubeClient.Core().Services(namespace).Create(service) if err != nil && errors.IsAlreadyExists(err) { glog.V(1).Infof(\"glusterfs: service [%s] already exist in namespace [%s]\", service, namespace) err = nil } if err != nil { glog.Errorf(\"glusterfs: failed to create service: %v\", err) return nil, nil, fmt.Errorf(\"error creating service: %v\", err) } return endpoint, service, nil } func (d *glusterfsVolumeDeleter) deleteEndpointService(namespace string, epServiceName string) (err error) { kubeClient := d.plugin.host.GetKubeClient() if kubeClient == nil { return fmt.Errorf(\"glusterfs: failed to get kube client when deleting endpoint service\") } err = kubeClient.Core().Services(namespace).Delete(epServiceName, nil) if err != nil { glog.Errorf(\"glusterfs: error deleting service %s/%s: %v\", namespace, epServiceName, err) return fmt.Errorf(\"error deleting service %s/%s: %v\", namespace, epServiceName, err) } glog.V(1).Infof(\"glusterfs: service/endpoint %s/%s deleted successfully\", namespace, epServiceName) return nil } // parseSecret finds a given Secret instance and reads user password from it. func parseSecret(namespace, secretName string, kubeClient clientset.Interface) (string, error) { secretMap, err := volutil.GetSecretForPV(namespace, secretName, glusterfsPluginName, kubeClient)"} {"_id":"doc-en-kubernetes-228e05814cf2aad90801badf92dc3782de1c25074b273c830fdd4acd721611cf","title":"","text":"return &cfg, nil } // getClusterNodes() returns the cluster nodes of a given cluster func getClusterNodes(cli *gcli.Client, cluster string) (dynamicHostIps []string, err error) { clusterinfo, err := cli.ClusterInfo(cluster) if err != nil { glog.Errorf(\"glusterfs: failed to get cluster details: %v\", err) return nil, fmt.Errorf(\"failed to get cluster details: %v\", err) } // For the dynamically provisioned volume, we gather the list of node IPs // of the cluster on which provisioned volume belongs to, as there can be multiple // clusters. 
for _, node := range clusterinfo.Nodes { nodei, err := cli.NodeInfo(string(node)) if err != nil { glog.Errorf(\"glusterfs: failed to get hostip: %v\", err) return nil, fmt.Errorf(\"failed to get hostip: %v\", err) } ipaddr := dstrings.Join(nodei.NodeAddRequest.Hostnames.Storage, \"\") dynamicHostIps = append(dynamicHostIps, ipaddr) } glog.V(3).Infof(\"glusterfs: hostlist :%v\", dynamicHostIps) if len(dynamicHostIps) == 0 { glog.Errorf(\"glusterfs: no hosts found: %v\", err) return nil, fmt.Errorf(\"no hosts found: %v\", err) } return dynamicHostIps, nil } "} {"_id":"doc-en-kubernetes-71699812004e0e9dee6d6091470567e6c87562fe68a9c605d1b97f7f0ea88444","title":"","text":"DNS_SERVER_IP=${KUBE_DNS_SERVER_IP:-10.0.0.10} DNS_DOMAIN=${KUBE_DNS_NAME:-\"cluster.local\"} KUBECTL=${KUBECTL:-cluster/kubectl.sh} WAIT_FOR_URL_API_SERVER=${WAIT_FOR_URL_API_SERVER:-10} WAIT_FOR_URL_API_SERVER=${WAIT_FOR_URL_API_SERVER:-20} ENABLE_DAEMON=${ENABLE_DAEMON:-false} HOSTNAME_OVERRIDE=${HOSTNAME_OVERRIDE:-\"127.0.0.1\"} CLOUD_PROVIDER=${CLOUD_PROVIDER:-\"\"}"} {"_id":"doc-en-kubernetes-4fc7ae07d00085597d24b0d1e36ebb22b4636f5f5a53029fa25a34356a909f13","title":"","text":"# this uses the API port because if you don't have any authenticator, you can't seem to use the secure port at all. # this matches what happened with the combination in 1.4. 
# TODO change this conditionally based on whether API_PORT is on or off kube::util::wait_for_url \"http://${API_HOST_IP}:${API_SECURE_PORT}/healthz\" \"apiserver: \" 1 ${WAIT_FOR_URL_API_SERVER} kube::util::wait_for_url \"https://${API_HOST_IP}:${API_SECURE_PORT}/healthz\" \"apiserver: \" 1 ${WAIT_FOR_URL_API_SERVER} || { echo \"check apiserver logs: ${APISERVER_LOG}\" ; exit 1 ; } # Create kubeconfigs for all components, using client certs"} {"_id":"doc-en-kubernetes-a24a1710fcb2dc466c7a3362f9bcf2083c53845ac798b6f81789583b9875d525","title":"","text":"apiVersion: apps/v1beta1 kind: Deployment metadata: name: event-exporter-v0.1.0 name: event-exporter-v0.1.4 namespace: kube-system labels: k8s-app: event-exporter"} {"_id":"doc-en-kubernetes-6a1b0872dab68d089605077754a375404f032cef3d97587b961f0aee741c4b5d","title":"","text":"containers: # TODO: Add resources in 1.8 - name: event-exporter image: gcr.io/google-containers/event-exporter:v0.1.0-r2 image: gcr.io/google-containers/event-exporter:v0.1.4 command: - '/event-exporter' - name: prometheus-to-sd-exporter"} {"_id":"doc-en-kubernetes-ef26f3e4b06449df6ceea29cd2aa9fede26d3c2f30d0e02cd3baa58abb3061f8","title":"","text":"} } // For a given node checks its conditions and tries to update it. Returns grace period to which given node // is entitled, state of current and last observed Ready Condition, and an error if it occurred. // tryUpdateNodeStatus checks a given node's conditions and tries to update it. Returns grace period to // which given node is entitled, state of current and last observed Ready Condition, and an error if it occurred. func (nc *NodeController) tryUpdateNodeStatus(node *v1.Node) (time.Duration, v1.NodeCondition, *v1.NodeCondition, error) { var err error var gracePeriod time.Duration"} {"_id":"doc-en-kubernetes-75e524eaab1877659566fab24cb7633e6cad40c6dcbff1d2d8b918820124b8fe","title":"","text":"// otherwise we leave it as it is. 
if savedCondition.LastTransitionTime != observedCondition.LastTransitionTime { glog.V(3).Infof(\"ReadyCondition for Node %s transitioned from %v to %v\", node.Name, savedCondition.Status, observedCondition) transitionTime = nc.now() } else { transitionTime = savedNodeStatus.readyTransitionTimestamp"} {"_id":"doc-en-kubernetes-032f6f1ef48e3ec85e0a958ac5e1bc08c5bfa953a08df0b7d9b79cb327ec4efb","title":"","text":"} // remaining node conditions should also be set to Unknown remainingNodeConditionTypes := []v1.NodeConditionType{v1.NodeOutOfDisk, v1.NodeMemoryPressure, v1.NodeDiskPressure} remainingNodeConditionTypes := []v1.NodeConditionType{ v1.NodeOutOfDisk, v1.NodeMemoryPressure, v1.NodeDiskPressure, // We don't change 'NodeInodePressure' condition, as it'll be removed in future. // v1.NodeInodePressure, // We don't change 'NodeNetworkUnavailable' condition, as it's managed on a control plane level. // v1.NodeNetworkUnavailable, } nowTimestamp := nc.now() for _, nodeConditionType := range remainingNodeConditionTypes { _, currentCondition := nodeutil.GetNodeCondition(&node.Status, nodeConditionType)"} {"_id":"doc-en-kubernetes-017e8174cf0fd099a501457935f1fd4e47ce116d6eff109d47b2bc248a9e2a5e","title":"","text":"return 0 } // This function is expected to get a slice of NodeReadyConditions for all Nodes in a given zone. // ComputeZoneState returns a slice of NodeReadyConditions for all Nodes in a given zone. 
// The zone is considered: // - fullyDisrupted if there're no Ready Nodes, // - partiallyDisrupted if at least nc.unhealthyZoneThreshold percent of Nodes are not Ready,"} {"_id":"doc-en-kubernetes-a81c381b3a4d3137bc09a23c89a41d26c4c9389a9d6bc933a0dbfa71cbb51141","title":"","text":"\"math\" \"time\" meta_v1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/kubernetes/test/e2e/framework\" \"k8s.io/kubernetes/test/e2e/instrumentation\""} {"_id":"doc-en-kubernetes-29a5b0d39e30d5c6ac636391bd0b18c93ea05fabb72f7af7dfe0774ec76c4b2b","title":"","text":"nodes := framework.GetReadySchedulableNodesOrDie(f.ClientSet).Items maxPodCount := 10 jobDuration := 1 * time.Hour jobDuration := 30 * time.Minute linesPerPodPerSecond := 100 testDuration := 21 * time.Hour // TODO(crassirostris): Increase to 21 hrs testDuration := 3 * time.Hour ingestionTimeout := testDuration + 30*time.Minute allowedRestarts := int(math.Ceil(float64(testDuration) / float64(time.Hour) * maxAllowedRestartsPerHour))"} {"_id":"doc-en-kubernetes-81c670719f03800a4973bde84fb889bd2326b190b8ed48e9a0f482463ec392dd","title":"","text":"podRunCount := maxPodCount*(int(testDuration/jobDuration)-1) + 1 linesPerPod := linesPerPodPerSecond * int(jobDuration.Seconds()) // pods is a flat array of all pods to be run and to expect in Stackdriver. pods := []*loggingPod{} // podsByRun is a two-dimensional array of pods, first dimension is the run // index, the second dimension is the node index. Since we want to create // an equal load on all nodes, for the same run we have one pod per node. 
podsByRun := [][]*loggingPod{} for runIdx := 0; runIdx < podRunCount; runIdx++ { podsInRun := []*loggingPod{} for nodeIdx, node := range nodes { podName := fmt.Sprintf(\"job-logs-generator-%d-%d-%d-%d\", maxPodCount, linesPerPod, runIdx, nodeIdx) pods = append(pods, newLoggingPod(podName, node.Name, linesPerPod, jobDuration)) pod := newLoggingPod(podName, node.Name, linesPerPod, jobDuration) pods = append(pods, pod) podsInRun = append(podsInRun, pod) } podsByRun = append(podsByRun, podsInRun) } By(\"Running short-living pods\") go func() { for _, pod := range pods { pod.Start(f) for runIdx := 0; runIdx < podRunCount; runIdx++ { // Starting one pod on each node. for _, pod := range podsByRun[runIdx] { pod.Start(f) } time.Sleep(podRunDelay) defer f.PodClient().Delete(pod.Name, &meta_v1.DeleteOptions{}) } // Waiting until the last pod has completed time.Sleep(jobDuration - podRunDelay + lastPodIngestionSlack)"} {"_id":"doc-en-kubernetes-19950dcdb18bb7795f2b29a1c22669059b7e58f1de0ba90164ec95531ad2572a","title":"","text":"return nil } list, err := c.lister.PodPresets(pod.GetNamespace()).List(labels.Everything()) list, err := c.lister.PodPresets(a.GetNamespace()).List(labels.Everything()) // Ignore if exclusion annotation is present if podAnnotations := pod.GetAnnotations(); podAnnotations != nil {"} {"_id":"doc-en-kubernetes-b64027a45e12b27607cb788153d3122afd9f6dba4e924b314f58ab1a6dacc3d0","title":"","text":"} } func TestAdmitEmptyPodNamespace(t *testing.T) { containerName := \"container\" pod := &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: \"mypod\", Labels: map[string]string{ \"security\": \"S2\", }, }, Spec: api.PodSpec{ Containers: []api.Container{ { Name: containerName, Env: []api.EnvVar{{Name: \"abc\", Value: \"value2\"}, {Name: \"ABCD\", Value: \"value3\"}}, }, }, }, } pip := &settings.PodPreset{ ObjectMeta: v1.ObjectMeta{ Name: \"hello\", Namespace: \"different\", // (pod will be submitted to namespace 'namespace') }, Spec: settings.PodPresetSpec{ Selector: 
v1.LabelSelector{ MatchExpressions: []v1.LabelSelectorRequirement{ { Key: \"security\", Operator: v1.LabelSelectorOpIn, Values: []string{\"S2\"}, }, }, }, Volumes: []api.Volume{{Name: \"vol\", VolumeSource: api.VolumeSource{EmptyDir: &api.EmptyDirVolumeSource{}}}}, Env: []api.EnvVar{{Name: \"abcd\", Value: \"value\"}, {Name: \"ABC\", Value: \"value\"}}, EnvFrom: []api.EnvFromSource{ { ConfigMapRef: &api.ConfigMapEnvSource{ LocalObjectReference: api.LocalObjectReference{Name: \"abc\"}, }, }, { Prefix: \"pre_\", ConfigMapRef: &api.ConfigMapEnvSource{ LocalObjectReference: api.LocalObjectReference{Name: \"abc\"}, }, }, }, }, } originalPod, err := api.Scheme.Copy(pod) if err != nil { t.Fatal(err) } err = admitPod(pod, pip) if err != nil { t.Fatal(err) } // verify PodSpec has not been mutated if !reflect.DeepEqual(pod, originalPod) { t.Fatalf(\"Expected pod spec of '%v' to be unchanged\", pod.Name) } } func admitPod(pod *api.Pod, pip *settings.PodPreset) error { informerFactory := informers.NewSharedInformerFactory(nil, controller.NoResyncPeriodFunc()) store := informerFactory.Settings().InternalVersion().PodPresets().Informer().GetStore()"} {"_id":"doc-en-kubernetes-bd1cfc54022e025b37d4c7aa0c024ab27b9e75d48b9e87501a7813d3f8ef160a","title":"","text":"\"spec.nodeName\", \"spec.restartPolicy\", \"spec.serviceAccountName\", \"spec.schedulerName\", \"status.phase\", \"status.hostIP\", \"status.podIP\":"} {"_id":"doc-en-kubernetes-c22f1a23145b81cfcb81d1405cca297044a40d8f7c476063bc9497cba43505c9","title":"","text":"// amount of allocations needed to create the fields.Set. If you add any // field here or the number of object-meta related fields changes, this should // be adjusted. 
podSpecificFieldsSet := make(fields.Set, 6) podSpecificFieldsSet := make(fields.Set, 7) podSpecificFieldsSet[\"spec.nodeName\"] = pod.Spec.NodeName podSpecificFieldsSet[\"spec.restartPolicy\"] = string(pod.Spec.RestartPolicy) podSpecificFieldsSet[\"spec.schedulerName\"] = string(pod.Spec.SchedulerName) podSpecificFieldsSet[\"status.phase\"] = string(pod.Status.Phase) podSpecificFieldsSet[\"status.podIP\"] = string(pod.Status.PodIP) return generic.AddObjectMetaFieldsSet(podSpecificFieldsSet, &pod.ObjectMeta, true)"} {"_id":"doc-en-kubernetes-06164a6bdc966227b908e623a1f610425cf7d433dbbe1e4a689cdbd65ee8ee10","title":"","text":"}, { in: &api.Pod{ Spec: api.PodSpec{SchedulerName: \"scheduler1\"}, }, fieldSelector: fields.ParseSelectorOrDie(\"spec.schedulerName=scheduler1\"), expectMatch: true, }, { in: &api.Pod{ Spec: api.PodSpec{SchedulerName: \"scheduler1\"}, }, fieldSelector: fields.ParseSelectorOrDie(\"spec.schedulerName=scheduler2\"), expectMatch: false, }, { in: &api.Pod{ Status: api.PodStatus{Phase: api.PodRunning}, }, fieldSelector: fields.ParseSelectorOrDie(\"status.phase=Running\"),"} {"_id":"doc-en-kubernetes-ea6d74c31fc73e46067a575369032db99eb763ea3ee10151703df3044ea4cdee","title":"","text":"return zones, nil } // ListInstanceNames returns a string of instance names separated by spaces. 
func (gce *GCECloud) ListInstanceNames(project, zone string) (string, error) { res, err := gce.service.Instances.List(project, zone).Fields(\"items(name)\").Do() if err != nil { return \"\", err } var output string for _, item := range res.Items { output += item.Name + \" \" } return output, nil } // DeleteInstance deletes an instance specified by project, zone, and name func (gce *GCECloud) DeleteInstance(project, zone, name string) (*compute.Operation, error) { return gce.service.Instances.Delete(project, zone, name).Do() } // Implementation of Instances.CurrentNodeName func (gce *GCECloud) CurrentNodeName(hostname string) (types.NodeName, error) { return types.NodeName(hostname), nil"} {"_id":"doc-en-kubernetes-a31ca3d92b89ecee20b6ee5d5b44ffa728c6cf5e7c0bf870fd7e590718376f1a","title":"","text":"import ( \"fmt\" mathrand \"math/rand\" \"os/exec\" \"strings\" \"time\""} {"_id":"doc-en-kubernetes-2c0f79813cf984ea5bd75067b8de08b3a6bd1dfa096fc7550d391c4853760124","title":"","text":"// Verify that disk shows up in node 0's volumeInUse list framework.ExpectNoError(waitForPDInVolumesInUse(nodeClient, diskName, host0Name, nodeStatusTimeout, true /* should exist*/)) output, err := exec.Command(\"gcloud\", \"compute\", \"instances\", \"list\", \"--project=\"+framework.TestContext.CloudConfig.ProjectID).CombinedOutput() gceCloud, err := framework.GetGCECloud() framework.ExpectNoError(err, fmt.Sprintf(\"Unable to create gcloud client err=%v\", err)) output, err := gceCloud.ListInstanceNames(framework.TestContext.CloudConfig.ProjectID, framework.TestContext.CloudConfig.Zone) framework.ExpectNoError(err, fmt.Sprintf(\"Unable to get list of node instances err=%v output=%s\", err, output)) Expect(true, strings.Contains(string(output), string(host0Name))) By(\"deleting host0\") resp, err := gceCloud.DeleteInstance(framework.TestContext.CloudConfig.ProjectID, framework.TestContext.CloudConfig.Zone, string(host0Name)) framework.ExpectNoError(err, fmt.Sprintf(\"Failed to delete 
host0pod: err=%v response=%#v\", err, resp)) output, err = exec.Command(\"gcloud\", \"compute\", \"instances\", \"delete\", string(host0Name), \"--project=\"+framework.TestContext.CloudConfig.ProjectID, \"--zone=\"+framework.TestContext.CloudConfig.Zone).CombinedOutput() framework.ExpectNoError(err, fmt.Sprintf(\"Failed to delete host0pod: err=%v output=%s\", err, output)) output, err = exec.Command(\"gcloud\", \"compute\", \"instances\", \"list\", \"--project=\"+framework.TestContext.CloudConfig.ProjectID).CombinedOutput() output, err = gceCloud.ListInstanceNames(framework.TestContext.CloudConfig.ProjectID, framework.TestContext.CloudConfig.Zone) framework.ExpectNoError(err, fmt.Sprintf(\"Unable to get list of node instances err=%v output=%s\", err, output)) Expect(false, strings.Contains(string(output), string(host0Name)))"} {"_id":"doc-en-kubernetes-10e1cbb6f4c9c7bc01dde93ca17e7bc3e9d98d89cfe242388bee7745a5f6c14b","title":"","text":"//NewRandomNameCustomResourceDefinition generates a CRD with random name to avoid name conflict in e2e tests func NewRandomNameCustomResourceDefinition(scope apiextensionsv1beta1.ResourceScope) *apiextensionsv1beta1.CustomResourceDefinition { gName := names.SimpleNameGenerator.GenerateName(\"foo\") // ensure the singular doesn't end in an s for now gName := names.SimpleNameGenerator.GenerateName(\"foo\") + \"a\" return &apiextensionsv1beta1.CustomResourceDefinition{ ObjectMeta: metav1.ObjectMeta{Name: gName + \"s.mygroup.example.com\"}, Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{"} {"_id":"doc-en-kubernetes-e984c5c53b273829c30d7f0519ad57ef22e7bfb4edf32f3f85b3069b60ab401c","title":"","text":"glog.V(4).Infof(\"Disk %s is already attached to instance %s\", volumeID, instanceID) return volume.ID, nil } glog.V(2).Infof(\"Disk %s is attached to a different instance (%s), detaching\", volumeID, volume.AttachedServerId) err = os.DetachDisk(volume.AttachedServerId, volumeID) if err != nil { return \"\", err } errmsg := 
fmt.Sprintf(\"Disk %s is attached to a different instance (%s)\", volumeID, volume.AttachedServerId) glog.V(2).Infof(errmsg) return \"\", errors.New(errmsg) } startTime := time.Now()"} {"_id":"doc-en-kubernetes-bf2d4509c8ee0f642c293274515f8998d4351bd71d9de9bb5efa73cc1899117c","title":"","text":"go_library( name = \"go_default_library\", srcs = [ \"conversion.go\", \"deepcopy.go\", \"doc.go\", \"generated.pb.go\","} {"_id":"doc-en-kubernetes-5e61d83f8b4ef181404bcb262b4e26538fef242255597ca53643e6cc67e0eec1","title":"","text":" /* Copyright 2017 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package v1alpha1 import \"k8s.io/apimachinery/pkg/conversion\" // Convert_Slice_string_To_v1alpha1_IncludeObjectPolicy allows converting a URL query parameter value func Convert_Slice_string_To_v1alpha1_IncludeObjectPolicy(input *[]string, out *IncludeObjectPolicy, s conversion.Scope) error { if len(*input) > 0 { *out = IncludeObjectPolicy((*input)[0]) } return nil } "} {"_id":"doc-en-kubernetes-2f6fe25458bdf8b438601742accd6a7646974e78d3612cd10a47db449a1200fc","title":"","text":"&PartialObjectMetadataList{}, ) if err := scheme.AddConversionFuncs( Convert_Slice_string_To_v1alpha1_IncludeObjectPolicy, ); err != nil { panic(err) } // register manually. This usually goes through the SchemeBuilder, which we cannot use here. //scheme.AddGeneratedDeepCopyFuncs(GetGeneratedDeepCopyFuncs()...) 
}"} {"_id":"doc-en-kubernetes-33a22704cb55211e138ff5146992d91cf86ecdf1ed28fdf17cb97a1223f2b507","title":"","text":"}, }, }, { accept: runtime.ContentTypeJSON + \";as=Table;v=v1alpha1;g=meta.k8s.io\", params: url.Values{\"includeObject\": []string{\"Metadata\"}}, expected: &metav1alpha1.Table{ TypeMeta: metav1.TypeMeta{Kind: \"Table\", APIVersion: \"meta.k8s.io/v1alpha1\"}, ColumnDefinitions: []metav1alpha1.TableColumnDefinition{ {Name: \"Name\", Type: \"string\", Description: metaDoc[\"name\"]}, {Name: \"Created At\", Type: \"date\", Description: metaDoc[\"creationTimestamp\"]}, }, Rows: []metav1alpha1.TableRow{ {Cells: []interface{}{\"foo1\", now.Time.UTC().Format(time.RFC3339)}, Object: runtime.RawExtension{Raw: encodedBody}}, }, }, }, } for i, test := range tests { u, err := url.Parse(server.URL + \"/\" + prefix + \"/\" + testGroupVersion.Group + \"/\" + testGroupVersion.Version + \"/namespaces/default/simple/id\")"} {"_id":"doc-en-kubernetes-7911c1932e04d63c0fe4a9605419a98cc51d5fc35f1198dfb1a704fdf553b58a","title":"","text":"} if test.statusCode != 0 { if resp.StatusCode != test.statusCode { t.Errorf(\"%d: unexpected response: %#v\", resp) t.Errorf(\"%d: unexpected response: %#v\", i, resp) } continue } if resp.StatusCode != http.StatusOK { t.Errorf(\"%d: unexpected response: %#v\", resp) t.Errorf(\"%d: unexpected response: %#v\", i, resp) } var itemOut metav1alpha1.Table if _, err = extractBody(resp, &itemOut); err != nil { body, err := extractBody(resp, &itemOut) if err != nil { t.Fatal(err) } if !reflect.DeepEqual(test.expected, &itemOut) { t.Log(body) t.Errorf(\"%d: did not match: %s\", i, diff.ObjectReflectDiff(test.expected, &itemOut)) } }"} {"_id":"doc-en-kubernetes-2961bc0ca863d036fcd132c9142f0d6e9348dac7d6be7020aca3397fe888d9eb","title":"","text":"t.Errorf(\"unexpected error: %v\", err) } if len(times) != 0 { t.Errorf(\"expected 0 start times, got: , got: %v\", times) t.Errorf(\"expected 0 start times, got: %v\", times) } } {"} 
{"_id":"doc-en-kubernetes-4040f51aff129e6290fbe30dca665b4a519d6c43d00c17e5f2ef24599f029591","title":"","text":"t.Errorf(\"unexpected error: %v\", err) } if len(times) != 1 { t.Errorf(\"expected 2 start times, got: , got: %v\", times) t.Errorf(\"expected 1 start times, got: %v\", times) } else if !times[0].Equal(T2) { t.Errorf(\"expected: %v, got: %v\", T1, times[0]) }"} {"_id":"doc-en-kubernetes-795fc6bd0f0b6caba1c6e935766e0d53ec706798b68901677de3d5bdcd82a2cf","title":"","text":"t.Errorf(\"unexpected error: %v\", err) } if len(times) != 2 { t.Errorf(\"expected 2 start times, got: , got: %v\", times) t.Errorf(\"expected 2 start times, got: %v\", times) } else { if !times[0].Equal(T1) { t.Errorf(\"expected: %v, got: %v\", T1, times[0])"} {"_id":"doc-en-kubernetes-8f615b3f814cb8d55c1fb13f3541c1f5c0c8f04c5aa3f6d62aed08846ca36cf3","title":"","text":"set = obj.(*apps.StatefulSet) } if set.Status.Replicas != 3 { t.Error(\"Falied to scale statefulset to 3 replicas\") t.Errorf(\"set.Status.Replicas = %v; want 3\", set.Status.Replicas) } }"} {"_id":"doc-en-kubernetes-f7f23bb4df2d6b66d809d077c49ddf0cb037dec47901f99dd86a919d533bb396","title":"","text":"set = obj.(*apps.StatefulSet) } if set.Status.Replicas != 3 { t.Error(\"Falied to scale statefulset to 3 replicas\") t.Errorf(\"set.Status.Replicas = %v; want 3\", set.Status.Replicas) } *set.Spec.Replicas = 0 if err := scaleDownStatefulSetController(set, ssc, spc); err != nil {"} {"_id":"doc-en-kubernetes-11ccd90bbc5644a2ec5a3e4c840259808df42f5d4bd065d2a36a52a5325ad90b","title":"","text":"set = obj.(*apps.StatefulSet) } if set.Status.Replicas != 3 { t.Error(\"Falied to scale statefulset to 3 replicas\") t.Errorf(\"set.Status.Replicas = %v; want 3\", set.Status.Replicas) } pods, err := spc.addTerminatingPod(set, 3) if err != nil {"} {"_id":"doc-en-kubernetes-235471ec556de537890511d51fce450c1698e551e904edeb6163e49fa11540f1","title":"","text":"set = obj.(*apps.StatefulSet) } if set.Status.Replicas != 0 { t.Error(\"Falied to scale 
statefulset to 3 replicas\") t.Errorf(\"set.Status.Replicas = %v; want 0\", set.Status.Replicas) } }"} {"_id":"doc-en-kubernetes-4e9ad890b14de49c42b0ff38413925cceeb77ee9b743d786d43f5929eb2d7776","title":"","text":"set = obj.(*apps.StatefulSet) } if set.Status.Replicas != 3 { t.Error(\"Falied to scale statefulset to 3 replicas\") t.Errorf(\"set.Status.Replicas = %v; want 3\", set.Status.Replicas) } *set.Spec.Replicas = 5 fakeResourceVersion(set)"} {"_id":"doc-en-kubernetes-1a32f5e57311e5beed7b47a3ea261fa816ac0963b95a90366fe59b3395a978ba","title":"","text":"} func sleepUpTo(d time.Duration) { time.Sleep(time.Duration(rand.Int63n(d.Nanoseconds()))) if d.Nanoseconds() > 0 { time.Sleep(time.Duration(rand.Int63n(d.Nanoseconds()))) } } func createAllResources(configs []testutils.RunObjectConfig, creatingTime time.Duration) {"} {"_id":"doc-en-kubernetes-0488c1df890f349769f24b41eadc6e66660b4f690ea2e35ede119638386b533f","title":"","text":"dockercontainer \"github.com/docker/docker/api/types/container\" dockerfilters \"github.com/docker/docker/api/types/filters\" \"github.com/golang/glog\" \"k8s.io/api/core/v1\" runtimeapi \"k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime\" )"} {"_id":"doc-en-kubernetes-5f17ebf387b9c5829933399653790df061306a0667483bdbc7ebb26b66102bbf","title":"","text":"return 0 } func (ds *dockerService) getSecurityOpts(containerName string, sandboxConfig *runtimeapi.PodSandboxConfig, separator rune) ([]string, error) { hasSeccompSetting := false annotations := sandboxConfig.GetAnnotations() if _, ok := annotations[v1.SeccompContainerAnnotationKeyPrefix+containerName]; !ok { _, hasSeccompSetting = annotations[v1.SeccompPodAnnotationKey] } else { hasSeccompSetting = true func (ds *dockerService) getSecurityOpts(seccompProfile string, separator rune) ([]string, error) { if seccompProfile != \"\" { glog.Warningf(\"seccomp annotations are not supported on windows\") } if hasSeccompSetting { glog.Warningf(\"seccomp annotations found, but it is not 
supported on windows\") } return nil, nil }"} {"_id":"doc-en-kubernetes-1d4511c22236fb00e6e652fda276ce041f093582765a8e7ca5cf57d635debfc8","title":"","text":"existed = true } // If address exists, get it by IP, because name might be different. // This can specifically happen if the IP was changed from ephemeral to static, // which results in a new name for the IP. if existingIP != \"\" { addr, err := s.GetRegionAddressByIP(region, existingIP) if err != nil { return \"\", false, fmt.Errorf(\"error getting static IP address: %v\", err) } return addr.Address, existed, nil } // Otherwise, get address by name addr, err := s.GetRegionAddress(name, region) if err != nil { return \"\", false, fmt.Errorf(\"error getting static IP address: %v\", err)"} {"_id":"doc-en-kubernetes-3499ae2e2f45d2cfc620ab36ef01420ea536c90358e4be69231e2e6528570536","title":"","text":"if err != nil || !existed || ip != ipPrime { t.Fatalf(`ensureStaticIP(%v, %v, %v, %v, %v) = %v, %v, %v; want %v, true, nil`, gce, ipName, serviceName, gce.region, ip, ipPrime, existed, err, ip) } // Ensure call with different name ipName = \"another-name-for-static-ip\" ipPrime, existed, err = ensureStaticIP(gce, ipName, serviceName, gce.region, ip, cloud.NetworkTierDefault) if err != nil || !existed || ip != ipPrime { t.Fatalf(`ensureStaticIP(%v, %v, %v, %v, %v) = %v, %v, %v; want %v, true, nil`, gce, ipName, serviceName, gce.region, ip, ipPrime, existed, err, ip) } } func TestEnsureStaticIPWithTier(t *testing.T) {"} {"_id":"doc-en-kubernetes-e124472ef0d00f0397005a403b7af0551029ec99067c23a16214e22592489761","title":"","text":"--network ${NETWORK} --region ${REGION} --range ${NODE_IP_RANGE} --secondary-range \"pods-default=${CLUSTER_IP_RANGE}\" --secondary-range \"pods-default=${CLUSTER_IP_RANGE}\" --secondary-range \"services-default=${SERVICE_CLUSTER_IP_RANGE}\" echo \"Created subnetwork ${IP_ALIAS_SUBNETWORK}\" else if ! 
echo ${subnet} | grep --quiet secondaryIpRanges ${subnet}; then"} {"_id":"doc-en-kubernetes-0fbf5871f5dc3ddea71b7531e19a6b4f6acc2431aea91fb9288823b5c27788c9","title":"","text":"exit 1 fi fi # Services subnetwork. local subnet=$(gcloud beta compute networks subnets describe --project \"${PROJECT}\" --region ${REGION} ${SERVICE_CLUSTER_IP_SUBNETWORK} 2>/dev/null) if [[ -z ${subnet} ]]; then if [[ ${SERVICE_CLUSTER_IP_SUBNETWORK} != ${INSTANCE_PREFIX}-subnet-services ]]; then echo \"${color_red}Subnetwork ${NETWORK}:${SERVICE_CLUSTER_IP_SUBNETWORK} does not exist${color_norm}\" exit 1 fi echo \"Creating subnet for reserving service cluster IPs ${NETWORK}:${SERVICE_CLUSTER_IP_SUBNETWORK}\" gcloud beta compute networks subnets create ${SERVICE_CLUSTER_IP_SUBNETWORK} --description \"Automatically generated subnet for ${INSTANCE_PREFIX} cluster. This will be removed on cluster teardown.\" --project \"${PROJECT}\" --network ${NETWORK} --region ${REGION} --range ${SERVICE_CLUSTER_IP_RANGE} echo \"Created subnetwork ${SERVICE_CLUSTER_IP_SUBNETWORK}\" else echo \"Subnet ${SERVICE_CLUSTER_IP_SUBNETWORK} already exists\" fi } function delete-firewall-rules() {"} {"_id":"doc-en-kubernetes-32c5bb2c503839bc226f37399d6bd5b2094189a030d3661ba2c41e1d2ddd1273","title":"","text":"${IP_ALIAS_SUBNETWORK} fi fi if [[ ${SERVICE_CLUSTER_IP_SUBNETWORK} == ${INSTANCE_PREFIX}-subnet-services ]]; then echo \"Removing auto-created subnet ${NETWORK}:${SERVICE_CLUSTER_IP_SUBNETWORK}\" if [[ -n $(gcloud beta compute networks subnets describe --project \"${PROJECT}\" --region ${REGION} ${SERVICE_CLUSTER_IP_SUBNETWORK} 2>/dev/null) ]]; then gcloud --quiet beta compute networks subnets delete --project \"${PROJECT}\" --region ${REGION} ${SERVICE_CLUSTER_IP_SUBNETWORK} fi fi } # Generates SSL certificates for etcd cluster. 
Uses cfssl program."} {"_id":"doc-en-kubernetes-dee94ba3fe9e8ad838e8c2c65f2751ff9b5667c193d8735c08be32ab5c8812a8","title":"","text":"testCannotConnect(f, f.Namespace, \"client-b\", service, 81) }) }) It(\"should allow egress access on one named port [Feature:NetworkPolicy]\", func() { clientPodName := \"client-a\" protocolUDP := v1.ProtocolUDP policy := &networkingv1.NetworkPolicy{ ObjectMeta: metav1.ObjectMeta{ Name: \"allow-client-a-via-named-port-egress-rule\", }, Spec: networkingv1.NetworkPolicySpec{ // Apply this policy to client-a PodSelector: metav1.LabelSelector{ MatchLabels: map[string]string{ \"pod-name\": clientPodName, }, }, // Allow traffic to only one named port: \"serve-80\". Egress: []networkingv1.NetworkPolicyEgressRule{{ Ports: []networkingv1.NetworkPolicyPort{ { Port: &intstr.IntOrString{Type: intstr.String, StrVal: \"serve-80\"}, }, // Allow DNS look-ups { Protocol: &protocolUDP, Port: &intstr.IntOrString{Type: intstr.Int, IntVal: 53}, }, }, }}, }, } policy, err := f.ClientSet.NetworkingV1().NetworkPolicies(f.Namespace.Name).Create(policy) Expect(err).NotTo(HaveOccurred()) defer cleanupNetworkPolicy(f, policy) By(\"Creating client-a which should be able to contact the server.\", func() { testCanConnect(f, f.Namespace, clientPodName, service, 80) }) By(\"Creating client-a which should not be able to contact the server on port 81.\", func() { testCannotConnect(f, f.Namespace, clientPodName, service, 81) }) }) }) })"} {"_id":"doc-en-kubernetes-3ea8a6149105d556ae984fd1456cf9ce1d1be51a455ffcbd6f4cbd761f76c3b5","title":"","text":"// hostname or not, we only check for the suffix match. if strings.HasSuffix(image, tag) || strings.HasSuffix(tag, image) { return true } else { // TODO: We need to remove this hack when project atomic based // docker distro(s) like centos/fedora/rhel image fix problems on // their end. 
// Say the tag is \"docker.io/busybox:latest\" // and the image is \"docker.io/library/busybox:latest\" t, err := dockerref.ParseNormalizedNamed(tag) if err != nil { continue } // the parsed/normalized tag will look like // reference.taggedReference { // \t namedRepository: reference.repository { // \t domain: \"docker.io\", // \t path: \"library/busybox\" //\t}, // \ttag: \"latest\" // } // If it does not have tags then we bail out t2, ok := t.(dockerref.Tagged) if !ok { continue } // normalized tag would look like \"docker.io/library/busybox:latest\" // note the library gets added in the string normalizedTag := t2.String() if normalizedTag == \"\" { continue } if strings.HasSuffix(image, normalizedTag) || strings.HasSuffix(normalizedTag, image) { return true } } } }"} {"_id":"doc-en-kubernetes-c721f3d097e7988b89459e532a6a3d11b319532a5db421d6ee75f55f038b321f","title":"","text":"Output: true, }, { Inspected: dockertypes.ImageInspect{ ID: \"sha256:9bbdf247c91345f0789c10f50a57e36a667af1189687ad1de88a6243d05a2227\", RepoTags: []string{\"docker.io/busybox:latest\"}, }, Image: \"docker.io/library/busybox:latest\", Output: true, }, { // RepoDigest match is required Inspected: dockertypes.ImageInspect{ ID: \"\","} {"_id":"doc-en-kubernetes-90a127760981a422f37973fc2ed35862b5ccf592c750a5e22d4eaa7e3121a2b5","title":"","text":"} updated := accessor.GetInitializers() // controllers deployed with an empty initializers.pending have their initializers set to nil // but should be able to update without changing their manifest if updated != nil && len(updated.Pending) == 0 && updated.Result == nil { accessor.SetInitializers(nil) updated = nil } existingAccessor, err := meta.Accessor(a.GetOldObject()) if err != nil { // if the old object does not have an accessor, but the new one does, error out"} {"_id":"doc-en-kubernetes-8d77189393d6ab6e83dc75280e5ec175ef0732a278efcebe807bd35e5251704c","title":"","text":"glog.V(5).Infof(\"Modifying uninitialized resource %s\", a.GetResource()) 
if updated != nil && len(updated.Pending) == 0 && updated.Result == nil { accessor.SetInitializers(nil) } // because we are called before validation, we need to ensure the update transition is valid. if errs := validation.ValidateInitializersUpdate(updated, existing, initializerFieldPath); len(errs) > 0 { return errors.NewInvalid(a.GetKind().GroupKind(), a.GetName(), errs)"} {"_id":"doc-en-kubernetes-a89a86b550173428c12324af56a445d50c9106b81ba06578dd841a1a0b9c0fd6","title":"","text":"newInitializers: &metav1.Initializers{Pending: []metav1.Initializer{{Name: \"init.k8s.io\"}}}, err: \"field is immutable once initialization has completed\", }, { name: \"empty initializer list is treated as nil initializer\", oldInitializers: nil, newInitializers: &metav1.Initializers{}, verifyUpdatedObj: func(obj runtime.Object) (bool, string) { accessor, err := meta.Accessor(obj) if err != nil { return false, \"cannot get accessor\" } if accessor.GetInitializers() != nil { return false, \"expect nil initializers\" } return true, \"\" }, err: \"\", }, } plugin := initializer{"} {"_id":"doc-en-kubernetes-639c781f5a0b80f81b6a39c1c3458a004518a198d0c23ffd96c50f3041f5c30a","title":"","text":"\"flag\" \"fmt\" \"net\" \"sort\" \"strings\" \"github.com/golang/glog\""} {"_id":"doc-en-kubernetes-dd0b19685f43a185fe11d580c2d439dfa4e87fc3b3c92232b17839e9bfd83e6f","title":"","text":"// String is the method to format the flag's value, part of the flag.Value interface. 
func (c *cidrs) String() string { return strings.Join(c.ipn.StringSlice(), \",\") s := c.ipn.StringSlice() sort.Strings(s) return strings.Join(s, \",\") } // Set supports a value of CSV or the flag repeated multiple times"} {"_id":"doc-en-kubernetes-4871295a0f9f05c0f96a6b20d582d9405c338c119ad7289d8d0dab0eae458c8b","title":"","text":"for format := range jsonFormats { formats = append(formats, format) } sort.Strings(formats) return formats }"} {"_id":"doc-en-kubernetes-46dd2b6ecc3315894a96f6ff76214b59aec279b6d27cbfcc3c9dc9d8bd0d2400","title":"","text":"printFlags := JSONPathPrintFlags{ TemplateArgument: templateArg, } if !sort.StringsAreSorted(printFlags.AllowedFormats()) { t.Fatalf(\"allowed formats are not sorted\") } p, err := printFlags.ToPrinter(tc.outputFormat) if tc.expectNoMatch {"} {"_id":"doc-en-kubernetes-df6d2ff1a8637e12cb455fef3fe97fc629f652bd88b7e44ad3b96f234ee063b4","title":"","text":"TemplateArgument: &tc.templateArg, AllowMissingKeys: tc.allowMissingKeys, } if !sort.StringsAreSorted(printFlags.AllowedFormats()) { t.Fatalf(\"allowed formats are not sorted\") } outputFormat := \"jsonpath\" p, err := printFlags.ToPrinter(outputFormat)"} {"_id":"doc-en-kubernetes-94fb390b199f3e59402151ea026768c466ec840fa8cb46bd6f052f1699e5d7c5","title":"","text":"import ( \"fmt\" \"io/ioutil\" \"sort\" \"strings\" \"github.com/spf13/cobra\""} {"_id":"doc-en-kubernetes-85b073b847bcf21d50c96fecea75f9ae52bb8859946d9db9db8db8757e300a25","title":"","text":"for format := range templateFormats { formats = append(formats, format) } sort.Strings(formats) return formats }"} {"_id":"doc-en-kubernetes-f741b7cd02657c79471641d0407a12ce85c141e7271c06a4a74897d43614e3a3","title":"","text":"\"fmt\" \"io/ioutil\" \"os\" \"sort\" \"strings\" \"testing\""} {"_id":"doc-en-kubernetes-93d52b241c0339d706ab9e98cccf6722bdb565f709f4d697db5efc073e10ff18","title":"","text":"printFlags := GoTemplatePrintFlags{ TemplateArgument: templateArg, } if !sort.StringsAreSorted(printFlags.AllowedFormats()) { 
t.Fatalf(\"allowed formats are not sorted\") } p, err := printFlags.ToPrinter(tc.outputFormat) if tc.expectNoMatch {"} {"_id":"doc-en-kubernetes-df4158af818002642a28b24a0ad4eb4e7f0ae4154c6d7f4e4398f1710208b454","title":"","text":"TemplateArgument: &tc.templateArg, AllowMissingKeys: tc.allowMissingKeys, } if !sort.StringsAreSorted(printFlags.AllowedFormats()) { t.Fatalf(\"allowed formats are not sorted\") } outputFormat := \"template\" p, err := printFlags.ToPrinter(outputFormat)"} {"_id":"doc-en-kubernetes-ce440a92dbe280fdf26e3519176d77c55176ad2810c3445224ef83df34b60eee","title":"","text":"ETCD_QUORUM_READ: $(yaml-quote ${ETCD_QUORUM_READ}) EOF fi if [ -n \"${CLUSTER_SIGNING_DURATION:-}\" ]; then cat >>$file < else # Node-only env vars."} {"_id":"doc-en-kubernetes-a04b7d54f0809a543afa23664b88430d20fa4bcc3a9ec89187e8d89c03ea6a91","title":"","text":"# Optional: [Experiment Only] Run kube-proxy as a DaemonSet if set to true, run as static pods otherwise. KUBE_PROXY_DAEMONSET=\"${KUBE_PROXY_DAEMONSET:-false}\" # true, false # Optional: duration of cluster signed certificates. 
CLUSTER_SIGNING_DURATION=\"${CLUSTER_SIGNING_DURATION:-}\" # Optional: enable pod priority ENABLE_POD_PRIORITY=\"${ENABLE_POD_PRIORITY:-}\" if [[ \"${ENABLE_POD_PRIORITY}\" == \"true\" ]]; then"} {"_id":"doc-en-kubernetes-026986dee71135ecc84c7789c61f3de18c4050e1bedf2b5d14ba7e995fa89660","title":"","text":"if [[ -n \"${VOLUME_PLUGIN_DIR:-}\" ]]; then params+=\" --flex-volume-plugin-dir=${VOLUME_PLUGIN_DIR}\" fi if [[ -n \"${CLUSTER_SIGNING_DURATION:-}\" ]]; then params+=\" --experimental-cluster-signing-duration=$CLUSTER_SIGNING_DURATION\" fi local -r kube_rc_docker_tag=$(cat /home/kubernetes/kube-docker-files/kube-controller-manager.docker_tag) local container_env=\"\" if [[ -n \"${ENABLE_CACHE_MUTATION_DETECTOR:-}\" ]]; then"} {"_id":"doc-en-kubernetes-94832abb959d6ef89823cf4debcfe80db092467c5488f6146f6ec9f59439eded","title":"","text":"if [[ -z \"${KUBE_MASTER_IP-}\" ]]; then local master_address_name=\"${MASTER_NAME}-ip\" echo \"Looking for address '${master_address_name}'\" >&2 KUBE_MASTER_IP=$(gcloud compute addresses describe \"${master_address_name}\" --project \"${PROJECT}\" --region \"${REGION}\" -q --format='value(address)') fi if [[ -z \"${KUBE_MASTER_IP-}\" ]]; then echo \"Could not detect Kubernetes master node. Make sure you've launched a cluster with 'kube-up.sh'\" >&2 exit 1 if ! KUBE_MASTER_IP=$(gcloud compute addresses describe \"${master_address_name}\" --project \"${PROJECT}\" --region \"${REGION}\" -q --format='value(address)') || [[ -z \"${KUBE_MASTER_IP-}\" ]]; then echo \"Could not detect Kubernetes master node. Make sure you've launched a cluster with 'kube-up.sh'\" >&2 exit 1 fi fi echo \"Using master: $KUBE_MASTER (external IP: $KUBE_MASTER_IP)\" >&2 }"} {"_id":"doc-en-kubernetes-6440c49b0a5fd9a260072773bd2b96d68ccb409c5b4c6debc6d77a8444b151d4","title":"","text":"} // update our local observations based on the amount reported to have been reclaimed. 
// note: this is optimistic, other things could have been still consuming the pressured resource in the interim. signal := resourceToSignal[resourceToReclaim] value, ok := observations[signal] if !ok { glog.Errorf(\"eviction manager: unable to find value associated with signal %v\", signal) continue for _, signal := range resourceClaimToSignal[resourceToReclaim] { value, ok := observations[signal] if !ok { glog.Errorf(\"eviction manager: unable to find value associated with signal %v\", signal) continue } value.available.Add(*reclaimed) } value.available.Add(*reclaimed) // evaluate all current thresholds to see if with adjusted observations, we think we have met min reclaim goals if len(thresholdsMet(m.thresholdsMet, observations, true)) == 0 { return true"} {"_id":"doc-en-kubernetes-e14713ea05852a077d2fdfb36cf02e28599e45e9d883b35b12e6e4ad97a91d56","title":"","text":"} } } // TestAllocatableNodeFsPressure func TestAllocatableNodeFsPressure(t *testing.T) { podMaker := makePodWithDiskStats summaryStatsMaker := makeDiskStats podsToMake := []podToMake{ {name: \"guaranteed-low\", requests: newEphemeralStorageResourceList(\"200Mi\", \"100m\", \"1Gi\"), limits: newEphemeralStorageResourceList(\"200Mi\", \"100m\", \"1Gi\"), rootFsUsed: \"200Mi\"}, {name: \"guaranteed-high\", requests: newEphemeralStorageResourceList(\"800Mi\", \"100m\", \"1Gi\"), limits: newEphemeralStorageResourceList(\"800Mi\", \"100m\", \"1Gi\"), rootFsUsed: \"800Mi\"}, {name: \"burstable-low\", requests: newEphemeralStorageResourceList(\"300Mi\", \"100m\", \"100Mi\"), limits: newEphemeralStorageResourceList(\"300Mi\", \"200m\", \"1Gi\"), logsFsUsed: \"300Mi\"}, {name: \"burstable-high\", requests: newEphemeralStorageResourceList(\"800Mi\", \"100m\", \"100Mi\"), limits: newEphemeralStorageResourceList(\"800Mi\", \"200m\", \"1Gi\"), rootFsUsed: \"800Mi\"}, {name: \"best-effort-low\", requests: newEphemeralStorageResourceList(\"300Mi\", \"\", \"\"), limits: newEphemeralStorageResourceList(\"300Mi\", \"\", 
\"\"), logsFsUsed: \"300Mi\"}, {name: \"best-effort-high\", requests: newEphemeralStorageResourceList(\"800Mi\", \"\", \"\"), limits: newEphemeralStorageResourceList(\"800Mi\", \"\", \"\"), rootFsUsed: \"800Mi\"}, } pods := []*v1.Pod{} podStats := map[*v1.Pod]statsapi.PodStats{} for _, podToMake := range podsToMake { pod, podStat := podMaker(podToMake.name, podToMake.requests, podToMake.limits, podToMake.rootFsUsed, podToMake.logsFsUsed, podToMake.perLocalVolumeUsed) pods = append(pods, pod) podStats[pod] = podStat } podToEvict := pods[5] activePodsFunc := func() []*v1.Pod { return pods } fakeClock := clock.NewFakeClock(time.Now()) podKiller := &mockPodKiller{} diskInfoProvider := &mockDiskInfoProvider{dedicatedImageFs: false} capacityProvider := newMockCapacityProvider(newEphemeralStorageResourceList(\"6Gi\", \"1000m\", \"10Gi\"), newEphemeralStorageResourceList(\"1Gi\", \"1000m\", \"10Gi\")) diskGC := &mockDiskGC{imageBytesFreed: int64(0), err: nil} nodeRef := &v1.ObjectReference{Kind: \"Node\", Name: \"test\", UID: types.UID(\"test\"), Namespace: \"\"} config := Config{ MaxPodGracePeriodSeconds: 5, PressureTransitionPeriod: time.Minute * 5, Thresholds: []evictionapi.Threshold{ { Signal: evictionapi.SignalAllocatableNodeFsAvailable, Operator: evictionapi.OpLessThan, Value: evictionapi.ThresholdValue{ Quantity: quantityMustParse(\"1Ki\"), }, }, }, } summaryProvider := &fakeSummaryProvider{result: summaryStatsMaker(\"3Gi\", \"6Gi\", podStats)} manager := &managerImpl{ clock: fakeClock, killPodFunc: podKiller.killPodNow, imageGC: diskGC, containerGC: diskGC, config: config, recorder: &record.FakeRecorder{}, summaryProvider: summaryProvider, nodeRef: nodeRef, nodeConditionsLastObservedAt: nodeConditionsObservedAt{}, thresholdsFirstObservedAt: thresholdsObservedAt{}, } // create a best effort pod to test admission bestEffortPodToAdmit, _ := podMaker(\"best-admit\", newEphemeralStorageResourceList(\"\", \"\", \"\"), newEphemeralStorageResourceList(\"\", \"\", \"\"), 
\"0Gi\", \"\", \"\") burstablePodToAdmit, _ := podMaker(\"burst-admit\", newEphemeralStorageResourceList(\"1Gi\", \"\", \"\"), newEphemeralStorageResourceList(\"1Gi\", \"\", \"\"), \"1Gi\", \"\", \"\") // synchronize manager.synchronize(diskInfoProvider, activePodsFunc, capacityProvider) // we should not have disk pressure if manager.IsUnderDiskPressure() { t.Fatalf(\"Manager should not report disk pressure\") } // try to admit our pods (they should succeed) expected := []bool{true, true} for i, pod := range []*v1.Pod{bestEffortPodToAdmit, burstablePodToAdmit} { if result := manager.Admit(&lifecycle.PodAdmitAttributes{Pod: pod}); expected[i] != result.Admit { t.Fatalf(\"Admit pod: %v, expected: %v, actual: %v\", pod, expected[i], result.Admit) } } // induce disk pressure! fakeClock.Step(1 * time.Minute) pod, podStat := podMaker(\"guaranteed-high-2\", newEphemeralStorageResourceList(\"2000Mi\", \"100m\", \"1Gi\"), newEphemeralStorageResourceList(\"2000Mi\", \"100m\", \"1Gi\"), \"2000Mi\", \"\", \"\") podStats[pod] = podStat pods = append(pods, pod) summaryProvider.result = summaryStatsMaker(\"6Gi\", \"6Gi\", podStats) manager.synchronize(diskInfoProvider, activePodsFunc, capacityProvider) // we should have disk pressure if !manager.IsUnderDiskPressure() { t.Fatalf(\"Manager should report disk pressure\") } // check the right pod was killed if podKiller.pod != podToEvict { t.Fatalf(\"Manager chose to kill pod: %v, but should have chosen %v\", podKiller.pod, podToEvict.Name) } observedGracePeriod := *podKiller.gracePeriodOverride if observedGracePeriod != int64(0) { t.Fatalf(\"Manager chose to kill pod with incorrect grace period. 
Expected: %d, actual: %d\", 0, observedGracePeriod) } // reset state podKiller.pod = nil podKiller.gracePeriodOverride = nil // try to admit our pod (should fail) expected = []bool{false, false} for i, pod := range []*v1.Pod{bestEffortPodToAdmit, burstablePodToAdmit} { if result := manager.Admit(&lifecycle.PodAdmitAttributes{Pod: pod}); expected[i] != result.Admit { t.Fatalf(\"Admit pod: %v, expected: %v, actual: %v\", pod, expected[i], result.Admit) } } // reduce disk pressure fakeClock.Step(1 * time.Minute) pods[5] = pods[len(pods)-1] pods = pods[:len(pods)-1] // we should have disk pressure (because transition period not yet met) if !manager.IsUnderDiskPressure() { t.Fatalf(\"Manager should report disk pressure\") } // try to admit our pod (should fail) expected = []bool{false, false} for i, pod := range []*v1.Pod{bestEffortPodToAdmit, burstablePodToAdmit} { if result := manager.Admit(&lifecycle.PodAdmitAttributes{Pod: pod}); expected[i] != result.Admit { t.Fatalf(\"Admit pod: %v, expected: %v, actual: %v\", pod, expected[i], result.Admit) } } // move the clock past transition period to ensure that we stop reporting pressure fakeClock.Step(5 * time.Minute) manager.synchronize(diskInfoProvider, activePodsFunc, capacityProvider) // we should not have disk pressure (because transition period met) if manager.IsUnderDiskPressure() { t.Fatalf(\"Manager should not report disk pressure\") } // no pod should have been killed if podKiller.pod != nil { t.Fatalf(\"Manager chose to kill pod: %v when no pod should have been killed\", podKiller.pod.Name) } // all pods should admit now expected = []bool{true, true} for i, pod := range []*v1.Pod{bestEffortPodToAdmit, burstablePodToAdmit} { if result := manager.Admit(&lifecycle.PodAdmitAttributes{Pod: pod}); expected[i] != result.Admit { t.Fatalf(\"Admit pod: %v, expected: %v, actual: %v\", pod, expected[i], result.Admit) } } } func TestNodeReclaimForAllocatableFuncs(t *testing.T) { podMaker := makePodWithDiskStats 
summaryStatsMaker := makeDiskStats podsToMake := []podToMake{ {name: \"guaranteed-low\", requests: newEphemeralStorageResourceList(\"200Mi\", \"100m\", \"1Gi\"), limits: newEphemeralStorageResourceList(\"200Mi\", \"100m\", \"1Gi\"), rootFsUsed: \"200Mi\"}, {name: \"guaranteed-high\", requests: newEphemeralStorageResourceList(\"800Mi\", \"100m\", \"1Gi\"), limits: newEphemeralStorageResourceList(\"800Mi\", \"100m\", \"1Gi\"), rootFsUsed: \"800Mi\"}, {name: \"burstable-low\", requests: newEphemeralStorageResourceList(\"300Mi\", \"100m\", \"100Mi\"), limits: newEphemeralStorageResourceList(\"300Mi\", \"200m\", \"1Gi\"), logsFsUsed: \"300Mi\"}, {name: \"burstable-high\", requests: newEphemeralStorageResourceList(\"800Mi\", \"100m\", \"100Mi\"), limits: newEphemeralStorageResourceList(\"800Mi\", \"200m\", \"1Gi\"), rootFsUsed: \"800Mi\"}, {name: \"best-effort-low\", requests: newEphemeralStorageResourceList(\"300Mi\", \"\", \"\"), limits: newEphemeralStorageResourceList(\"300Mi\", \"\", \"\"), logsFsUsed: \"300Mi\"}, {name: \"best-effort-high\", requests: newEphemeralStorageResourceList(\"800Mi\", \"\", \"\"), limits: newEphemeralStorageResourceList(\"800Mi\", \"\", \"\"), rootFsUsed: \"800Mi\"}, } pods := []*v1.Pod{} podStats := map[*v1.Pod]statsapi.PodStats{} for _, podToMake := range podsToMake { pod, podStat := podMaker(podToMake.name, podToMake.requests, podToMake.limits, podToMake.rootFsUsed, podToMake.logsFsUsed, podToMake.perLocalVolumeUsed) pods = append(pods, pod) podStats[pod] = podStat } podToEvict := pods[5] activePodsFunc := func() []*v1.Pod { return pods } fakeClock := clock.NewFakeClock(time.Now()) podKiller := &mockPodKiller{} diskInfoProvider := &mockDiskInfoProvider{dedicatedImageFs: false} capacityProvider := newMockCapacityProvider(newEphemeralStorageResourceList(\"6Gi\", \"1000m\", \"10Gi\"), newEphemeralStorageResourceList(\"1Gi\", \"1000m\", \"10Gi\")) imageGcFree := resource.MustParse(\"800Mi\") diskGC := &mockDiskGC{imageBytesFreed: 
imageGcFree.Value(), err: nil} nodeRef := &v1.ObjectReference{Kind: \"Node\", Name: \"test\", UID: types.UID(\"test\"), Namespace: \"\"} config := Config{ MaxPodGracePeriodSeconds: 5, PressureTransitionPeriod: time.Minute * 5, Thresholds: []evictionapi.Threshold{ { Signal: evictionapi.SignalAllocatableNodeFsAvailable, Operator: evictionapi.OpLessThan, Value: evictionapi.ThresholdValue{ Quantity: quantityMustParse(\"10Mi\"), }, }, }, } summaryProvider := &fakeSummaryProvider{result: summaryStatsMaker(\"6Gi\", \"6Gi\", podStats)} manager := &managerImpl{ clock: fakeClock, killPodFunc: podKiller.killPodNow, imageGC: diskGC, containerGC: diskGC, config: config, recorder: &record.FakeRecorder{}, summaryProvider: summaryProvider, nodeRef: nodeRef, nodeConditionsLastObservedAt: nodeConditionsObservedAt{}, thresholdsFirstObservedAt: thresholdsObservedAt{}, } // synchronize manager.synchronize(diskInfoProvider, activePodsFunc, capacityProvider) // we should not have disk pressure if manager.IsUnderDiskPressure() { t.Errorf(\"Manager should not report disk pressure\") } // induce hard threshold fakeClock.Step(1 * time.Minute) pod, podStat := podMaker(\"guaranteed-high-2\", newEphemeralStorageResourceList(\"2000Mi\", \"100m\", \"1Gi\"), newEphemeralStorageResourceList(\"2000Mi\", \"100m\", \"1Gi\"), \"2000Mi\", \"\", \"\") podStats[pod] = podStat pods = append(pods, pod) summaryProvider.result = summaryStatsMaker(\"6Gi\", \"6Gi\", podStats) manager.synchronize(diskInfoProvider, activePodsFunc, capacityProvider) // we should have disk pressure if !manager.IsUnderDiskPressure() { t.Fatalf(\"Manager should report disk pressure since hard threshold was met\") } // verify image gc was invoked if !diskGC.imageGCInvoked || !diskGC.containerGCInvoked { t.Fatalf(\"Manager should have invoked image gc\") } // verify a pod was killed because image gc was not sufficient if podKiller.pod == nil { t.Fatalf(\"Manager should have killed a pod, but no pod was killed\") } // check the right pod was 
killed if podKiller.pod != podToEvict { t.Fatalf(\"Manager chose to kill pod: %v, but should have chosen %v\", podKiller.pod.Name, podToEvict.Name) } observedGracePeriod := *podKiller.gracePeriodOverride if observedGracePeriod != int64(0) { t.Fatalf(\"Manager chose to kill pod with incorrect grace period. Expected: %d, actual: %d\", 0, observedGracePeriod) } // reset state diskGC.imageGCInvoked = false diskGC.containerGCInvoked = false podKiller.pod = nil podKiller.gracePeriodOverride = nil // reduce disk pressure fakeClock.Step(1 * time.Minute) pods[5] = pods[len(pods)-1] pods = pods[:len(pods)-1] // we should have disk pressure (because transition period not yet met) if !manager.IsUnderDiskPressure() { t.Fatalf(\"Manager should report disk pressure\") } // move the clock past transition period to ensure that we stop reporting pressure fakeClock.Step(5 * time.Minute) manager.synchronize(diskInfoProvider, activePodsFunc, capacityProvider) // we should not have disk pressure (because transition period met) if manager.IsUnderDiskPressure() { t.Fatalf(\"Manager should not report disk pressure\") } // no pod should have been killed if podKiller.pod != nil { t.Fatalf(\"Manager chose to kill pod: %v when no pod should have been killed\", podKiller.pod.Name) } // no image gc should have occurred if diskGC.imageGCInvoked || diskGC.containerGCInvoked { t.Errorf(\"Manager chose to perform image gc when it was not needed\") } // no pod should have been killed if podKiller.pod != nil { t.Errorf(\"Manager chose to kill pod: %v when no pod should have been killed\", podKiller.pod.Name) } } "} {"_id":"doc-en-kubernetes-fd82dd50b6bcf95cef44ad4f7e94b9e2e2914f8c49b6e666cd3c5bfbeeca5126","title":"","text":"signalToNodeCondition map[evictionapi.Signal]v1.NodeConditionType // signalToResource maps a Signal to its associated Resource. 
signalToResource map[evictionapi.Signal]v1.ResourceName // resourceToSignal maps a Resource to its associated Signal resourceToSignal map[v1.ResourceName]evictionapi.Signal // resourceClaimToSignal maps a Resource that can be reclaimed to its associated Signal resourceClaimToSignal map[v1.ResourceName][]evictionapi.Signal ) func init() {"} {"_id":"doc-en-kubernetes-d07baa431fd1083d65aee2586889f47e96b4c02e18917206d603412412f5e325","title":"","text":"signalToResource[evictionapi.SignalNodeFsAvailable] = resourceNodeFs signalToResource[evictionapi.SignalNodeFsInodesFree] = resourceNodeFsInodes resourceToSignal = map[v1.ResourceName]evictionapi.Signal{} for key, value := range signalToResource { resourceToSignal[value] = key } // Hard-code here to make sure resourceNodeFs maps to evictionapi.SignalNodeFsAvailable // (TODO) resourceToSignal is a map from resource name to a list of signals resourceToSignal[resourceNodeFs] = evictionapi.SignalNodeFsAvailable // maps resource to signals (the following resource could be reclaimed) resourceClaimToSignal = map[v1.ResourceName][]evictionapi.Signal{} resourceClaimToSignal[resourceNodeFs] = []evictionapi.Signal{evictionapi.SignalNodeFsAvailable} resourceClaimToSignal[resourceImageFs] = []evictionapi.Signal{evictionapi.SignalImageFsAvailable} resourceClaimToSignal[resourceNodeFsInodes] = []evictionapi.Signal{evictionapi.SignalNodeFsInodesFree} resourceClaimToSignal[resourceImageFsInodes] = []evictionapi.Signal{evictionapi.SignalImageFsInodesFree} } // validSignal returns true if the signal is supported."} {"_id":"doc-en-kubernetes-bf1e669406473df5d1140f1a553db16dca03942432fee2c9d85b86bcacbf72d6","title":"","text":"return res } func newEphemeralStorageResourceList(ephemeral, cpu, memory string) v1.ResourceList { res := v1.ResourceList{} if ephemeral != \"\" { res[v1.ResourceEphemeralStorage] = resource.MustParse(ephemeral) } if cpu != \"\" { res[v1.ResourceCPU] = resource.MustParse(cpu) } if memory != \"\" { res[v1.ResourceMemory] 
= resource.MustParse(memory) } return res } func newResourceRequirements(requests, limits v1.ResourceList) v1.ResourceRequirements { res := v1.ResourceRequirements{} res.Requests = requests"} {"_id":"doc-en-kubernetes-b7aa6d9434c50c7f1a7e78a1cdb22be3995f9e47b2f8c4e7afba250ea5fda447","title":"","text":"ginkgo := filepath.Join(outputDir, \"ginkgo\") test := filepath.Join(outputDir, \"e2e_node.test\") if *systemSpecName == \"\" { runCommand(ginkgo, *ginkgoFlags, test, \"--\", *testFlags) return args := []string{*ginkgoFlags, test, \"--\", *testFlags} if *systemSpecName != \"\" { rootDir, err := builder.GetK8sRootDir() if err != nil { glog.Fatalf(\"Failed to get k8s root directory: %v\", err) } systemSpecFile := filepath.Join(rootDir, systemSpecPath, *systemSpecName+\".yaml\") args = append(args, fmt.Sprintf(\"--system-spec-name=%s --system-spec-file=%s\", *systemSpecName, systemSpecFile)) } rootDir, err := builder.GetK8sRootDir() if err != nil { glog.Fatalf(\"Failed to get k8s root directory: %v\", err) if err := runCommand(ginkgo, args...); err != nil { glog.Exitf(\"Test failed: %v\", err) } systemSpecFile := filepath.Join(rootDir, systemSpecPath, *systemSpecName+\".yaml\") runCommand(ginkgo, *ginkgoFlags, test, \"--\", fmt.Sprintf(\"--system-spec-name=%s --system-spec-file=%s\", *systemSpecName, systemSpecFile), *testFlags) return }"} {"_id":"doc-en-kubernetes-efecb8bc24c66fbea89a9b7b2c04a9e048dbf063988fec5a45155c37cf02855b","title":"","text":"\"//vendor/google.golang.org/api/googleapi:go_default_library\", \"//vendor/k8s.io/api/batch/v1:go_default_library\", \"//vendor/k8s.io/api/core/v1:go_default_library\", \"//vendor/k8s.io/api/policy/v1beta1:go_default_library\", \"//vendor/k8s.io/api/rbac/v1beta1:go_default_library\", \"//vendor/k8s.io/api/storage/v1:go_default_library\", \"//vendor/k8s.io/api/storage/v1beta1:go_default_library\","} {"_id":"doc-en-kubernetes-5c5fccf8838252cc231e69e116dd177b928351a14bb6da2170f37413c6b4524f","title":"","text":". 
\"github.com/onsi/ginkgo\" . \"github.com/onsi/gomega\" \"k8s.io/api/core/v1\" policy \"k8s.io/api/policy/v1beta1\" \"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/apimachinery/pkg/util/uuid\" \"k8s.io/apimachinery/pkg/util/wait\" clientset \"k8s.io/client-go/kubernetes\" v1core \"k8s.io/client-go/kubernetes/typed/core/v1\" \"k8s.io/kubernetes/pkg/api/testapi\""} {"_id":"doc-en-kubernetes-00e9a79930da90ae9ce7d60ffa247b9924b2abf6d81e3e15d0cb81678629db4b","title":"","text":"gcePDDetachPollTime = 10 * time.Second nodeStatusTimeout = 10 * time.Minute nodeStatusPollTime = 1 * time.Second podEvictTimeout = 2 * time.Minute maxReadRetry = 3 minNodes = 2 ) var _ = SIGDescribe(\"Pod Disks\", func() { var ( ns string cs clientset.Interface podClient v1core.PodInterface nodeClient v1core.NodeInterface host0Name types.NodeName"} {"_id":"doc-en-kubernetes-4402d491801f62d390f12c43ba4861a74ca1ec3d60bd270668a6292a0e9b8ff0","title":"","text":"BeforeEach(func() { framework.SkipUnlessNodeCountIsAtLeast(minNodes) cs = f.ClientSet ns = f.Namespace.Name podClient = f.ClientSet.Core().Pods(f.Namespace.Name) nodeClient = f.ClientSet.Core().Nodes() nodes = framework.GetReadySchedulableNodesOrDie(f.ClientSet) podClient = cs.Core().Pods(ns) nodeClient = cs.Core().Nodes() nodes = framework.GetReadySchedulableNodesOrDie(cs) Expect(len(nodes.Items)).To(BeNumerically(\">=\", minNodes), fmt.Sprintf(\"Requires at least %d nodes\", minNodes)) host0Name = types.NodeName(nodes.Items[0].ObjectMeta.Name) host1Name = types.NodeName(nodes.Items[1].ObjectMeta.Name)"} {"_id":"doc-en-kubernetes-8a8702c8bf76bf3595cbdb5e90050141f2ee4221ab28bbe7178615b23713fe6c","title":"","text":"} }) Context(\"detach from a disrupted node [Slow] [Disruptive]\", func() { Context(\"detach in a disrupted environment [Slow] [Disruptive]\", func() { const ( deleteNode = 1 // delete physical node deleteNodeObj = 2 // delete node's api object only 
evictPod = 3 // evict host0Pod on node0 ) type testT struct { descr string // It description nodeOp int // disruptive operation performed on target node descr string // It description disruptOp int // disruptive operation performed on target node } tests := []testT{ { descr: \"node is deleted\", nodeOp: deleteNode, descr: \"node is deleted\", disruptOp: deleteNode, }, { descr: \"node's API object is deleted\", nodeOp: deleteNodeObj, descr: \"node's API object is deleted\", disruptOp: deleteNodeObj, }, { descr: \"pod is evicted\", disruptOp: evictPod, }, } for _, t := range tests { nodeOp := t.nodeOp disruptOp := t.disruptOp It(fmt.Sprintf(\"when %s\", t.descr), func() { framework.SkipUnlessProviderIs(\"gce\") origNodeCnt := len(nodes.Items) // healthy nodes running kubelet"} {"_id":"doc-en-kubernetes-ccde8a5160dee571884a22dc34a09569874621e3623858178de8d50fcba79194","title":"","text":"diskName, err := framework.CreatePDWithRetry() framework.ExpectNoError(err, \"Error creating a pd\") targetNode := &nodes.Items[0] targetNode := &nodes.Items[0] // for node delete ops host0Pod := testPDPod([]string{diskName}, host0Name, false, 1) containerName := \"mycontainer\" defer func() { By(\"defer: cleaning up PD-RW test env\") framework.Logf(\"defer cleanup errors can usually be ignored\") if nodeOp == deleteNode { podClient.Delete(host0Pod.Name, metav1.NewDeleteOptions(0)) } By(\"defer: delete host0Pod\") podClient.Delete(host0Pod.Name, metav1.NewDeleteOptions(0)) By(\"defer: detach and delete PDs\") detachAndDeletePDs(diskName, []types.NodeName{host0Name}) if nodeOp == deleteNodeObj { targetNode.ObjectMeta.SetResourceVersion(\"0\") // need to set the resource version or else the Create() fails _, err := nodeClient.Create(targetNode) framework.ExpectNoError(err, \"defer: Unable to re-create the deleted node\") if disruptOp == deleteNode || disruptOp == deleteNodeObj { if disruptOp == deleteNodeObj { targetNode.ObjectMeta.SetResourceVersion(\"0\") // need to set the resource 
version or else the Create() fails By(\"defer: re-create host0 node object\") _, err := nodeClient.Create(targetNode) framework.ExpectNoError(err, fmt.Sprintf(\"defer: Unable to re-create the deleted node object %q\", targetNode.Name)) } By(\"defer: verify the number of ready nodes\") numNodes := countReadyNodes(cs, host0Name) // if this defer is reached due to an Expect then nested // Expects are lost, so use Failf here if numNodes != origNodeCnt { framework.Failf(\"defer: Requires current node count (%d) to return to original node count (%d)\", numNodes, origNodeCnt) } } numNodes := countReadyNodes(f.ClientSet, host0Name) Expect(numNodes).To(Equal(origNodeCnt), fmt.Sprintf(\"defer: Requires current node count (%d) to return to original node count (%d)\", numNodes, origNodeCnt)) }() By(\"creating host0Pod on node0\") _, err = podClient.Create(host0Pod) framework.ExpectNoError(err, fmt.Sprintf(\"Failed to create host0Pod: %v\", err)) By(\"waiting for host0Pod to be running\") framework.ExpectNoError(f.WaitForPodRunningSlow(host0Pod.Name)) By(\"writing content to host0Pod\")"} {"_id":"doc-en-kubernetes-3a6b1f48929585865eae903b5c108daaf849474c8b2cba29b9f56637ac0bb610","title":"","text":"By(\"verifying PD is present in node0's VolumeInUse list\") framework.ExpectNoError(waitForPDInVolumesInUse(nodeClient, diskName, host0Name, nodeStatusTimeout, true /* should exist*/)) if nodeOp == deleteNode { if disruptOp == deleteNode { By(\"getting gce instances\") gceCloud, err := framework.GetGCECloud() framework.ExpectNoError(err, fmt.Sprintf(\"Unable to create gcloud client err=%v\", err))"} {"_id":"doc-en-kubernetes-0350eeecc160d452781d8d58cf9c30ee0996fd7eb196440e1a23aa8a31fc7a03","title":"","text":"By(\"deleting host0\") resp, err := gceCloud.DeleteInstance(framework.TestContext.CloudConfig.ProjectID, framework.TestContext.CloudConfig.Zone, string(host0Name)) framework.ExpectNoError(err, fmt.Sprintf(\"Failed to delete host0Pod: err=%v response=%#v\", err, resp)) 
By(\"expecting host0 node to be recreated\") numNodes := countReadyNodes(f.ClientSet, host0Name) By(\"expecting host0 node to be re-created\") numNodes := countReadyNodes(cs, host0Name) Expect(numNodes).To(Equal(origNodeCnt), fmt.Sprintf(\"Requires current node count (%d) to return to original node count (%d)\", numNodes, origNodeCnt)) output, err = gceCloud.ListInstanceNames(framework.TestContext.CloudConfig.ProjectID, framework.TestContext.CloudConfig.Zone) framework.ExpectNoError(err, fmt.Sprintf(\"Unable to get list of node instances err=%v output=%s\", err, output)) Expect(false, strings.Contains(string(output), string(host0Name))) } else if nodeOp == deleteNodeObj { } else if disruptOp == deleteNodeObj { By(\"deleting host0's node api object\") framework.ExpectNoError(nodeClient.Delete(string(host0Name), metav1.NewDeleteOptions(0)), \"Unable to delete host0's node object\") By(\"deleting host0Pod\") framework.ExpectNoError(podClient.Delete(host0Pod.Name, metav1.NewDeleteOptions(0)), \"Unable to delete host0Pod\") } else if disruptOp == evictPod { evictTarget := &policy.Eviction{ ObjectMeta: metav1.ObjectMeta{ Name: host0Pod.Name, Namespace: ns, }, } By(\"evicting host0Pod\") err = wait.PollImmediate(framework.Poll, podEvictTimeout, func() (bool, error) { err = cs.CoreV1().Pods(ns).Evict(evictTarget) if err != nil { return false, nil } else { return true, nil } }) Expect(err).NotTo(HaveOccurred(), fmt.Sprintf(\"failed to evict host0Pod after %v\", podEvictTimeout)) } By(\"waiting for pd to detach from host0\")"} {"_id":"doc-en-kubernetes-ce5b1f06889176d6d93326c844d38ab479efd5b503ec9251ca81e8c5cac45bae","title":"","text":"RUN echo CACHEBUST>/dev/null && clean-install iptables e2fsprogs ebtables ethtool ca-certificates "} {"_id":"doc-en-kubernetes-3a904304d6af11bd7ebd679be08d5dc3d915f3a62076b6a108f26ad56c84c648","title":"","text":"REGISTRY?=gcr.io/google-containers IMAGE?=debian-hyperkube-base TAG=0.2 TAG=0.3 ARCH?=amd64 CACHEBUST?=1"} 
{"_id":"doc-en-kubernetes-7eb759d928b619ad8d24f58a91aa5a1a1c84845c963070732a9e6f8c605fb3ac","title":"","text":"ARCH?=amd64 HYPERKUBE_BIN?=_output/dockerized/bin/linux/$(ARCH)/hyperkube BASEIMAGE=gcr.io/google-containers/debian-hyperkube-base-$(ARCH):0.2 BASEIMAGE=gcr.io/google-containers/debian-hyperkube-base-$(ARCH):0.3 TEMP_DIR:=$(shell mktemp -d -t hyperkubeXXXXXX) all: build"} {"_id":"doc-en-kubernetes-2fafe9b6116bda8684ea5d3336b34c8c9e183a44ad04a16725e1cd577382a691","title":"","text":"} if len(o.Output) > 0 { return o.PrintObject(o.Cmd, o.Local, o.Mapper, obj, o.Out) if err := o.PrintObject(o.Cmd, o.Local, o.Mapper, obj, o.Out); err != nil { return err } continue } cmdutil.PrintSuccess(o.Mapper, o.ShortOutput, o.Out, info.Mapping.Resource, info.Name, false, \"env updated\")"} {"_id":"doc-en-kubernetes-e568b65ab794edf6f546d443987159c53c9cbace483065b5d60214711781c982","title":"","text":"t.Errorf(\"did not set env: %s\", buf.String()) } } func TestSetMultiResourcesEnvLocal(t *testing.T) { f, tf, codec, ns := cmdtesting.NewAPIFactory() tf.Client = &fake.RESTClient{ GroupVersion: legacyscheme.Registry.GroupOrDie(api.GroupName).GroupVersion, NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { t.Fatalf(\"unexpected request: %s %#vn%#v\", req.Method, req.URL, req) return nil, nil }), } tf.Namespace = \"test\" tf.ClientConfig = &restclient.Config{ContentConfig: restclient.ContentConfig{GroupVersion: &legacyscheme.Registry.GroupOrDie(api.GroupName).GroupVersion}} buf := bytes.NewBuffer([]byte{}) cmd := NewCmdEnv(f, os.Stdin, buf, buf) cmd.SetOutput(buf) cmd.Flags().Set(\"output\", \"name\") cmd.Flags().Set(\"local\", \"true\") mapper, typer := f.Object() tf.Printer = &printers.NamePrinter{Decoders: []runtime.Decoder{codec}, Typer: typer, Mapper: mapper} opts := EnvOptions{FilenameOptions: resource.FilenameOptions{ Filenames: []string{\"../../../../test/fixtures/pkg/kubectl/cmd/set/multi-resource-yaml.yaml\"}}, Out: 
buf, Local: true} err := opts.Complete(f, cmd, []string{\"env=prod\"}) if err == nil { err = opts.RunEnv(f) } if err != nil { t.Fatalf(\"unexpected error: %v\", err) } expectedOut := \"replicationcontrollers/first-rc\nreplicationcontrollers/second-rc\n\" if buf.String() != expectedOut { t.Errorf(\"expected out:\n%s\nbut got:\n%s\", expectedOut, buf.String()) } } "} {"_id":"doc-en-kubernetes-7ee1da84ef5ae7e8426967b2f790dcd1a47933df4fce5a8a708665607c9065ca","title":"","text":"} } } func TestSetMultiResourcesImageLocal(t *testing.T) { f, tf, codec, ns := cmdtesting.NewAPIFactory() tf.Client = &fake.RESTClient{ GroupVersion: legacyscheme.Registry.GroupOrDie(api.GroupName).GroupVersion, NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { t.Fatalf(\"unexpected request: %s %#v\n%#v\", req.Method, req.URL, req) return nil, nil }), } tf.Namespace = \"test\" tf.ClientConfig = &restclient.Config{ContentConfig: restclient.ContentConfig{GroupVersion: &legacyscheme.Registry.GroupOrDie(api.GroupName).GroupVersion}} buf := bytes.NewBuffer([]byte{}) cmd := NewCmdImage(f, buf, buf) cmd.SetOutput(buf) cmd.Flags().Set(\"output\", \"name\") cmd.Flags().Set(\"local\", \"true\") mapper, typer := f.Object() tf.Printer = &printers.NamePrinter{Decoders: []runtime.Decoder{codec}, Typer: typer, Mapper: mapper} opts := ImageOptions{FilenameOptions: resource.FilenameOptions{ Filenames: []string{\"../../../../test/fixtures/pkg/kubectl/cmd/set/multi-resource-yaml.yaml\"}}, Out: buf, Local: true} err := opts.Complete(f, cmd, []string{\"*=thingy\"}) if err == nil { err = opts.Validate() } if err == nil { err = opts.Run() } if err != nil { t.Fatalf(\"unexpected error: %v\", err) } expectedOut := \"replicationcontrollers/first-rc\nreplicationcontrollers/second-rc\n\" if buf.String() != expectedOut { t.Errorf(\"expected out:\n%s\nbut got:\n%s\", expectedOut, buf.String()) } } "} 
{"_id":"doc-en-kubernetes-fc04115ed3fd992ea148d1a324c6962a777ff384053c1cd295b70dde19aaf93c","title":"","text":"} if o.Local || cmdutil.GetDryRunFlag(o.Cmd) { return o.PrintObject(o.Cmd, o.Local, o.Mapper, info.Object, o.Out) if err := o.PrintObject(o.Cmd, o.Local, o.Mapper, info.Object, o.Out); err != nil { return err } continue } obj, err := resource.NewHelper(info.Client, info.Mapping).Patch(info.Namespace, info.Name, types.StrategicMergePatchType, patch.Patch)"} {"_id":"doc-en-kubernetes-8869fa672e0000e1be855aff272c509593970cedcff32eb2e1ccb94dab61451d","title":"","text":"t.Errorf(\"did not set resources: %s\", buf.String()) } } func TestSetMultiResourcesLimitsLocal(t *testing.T) { f, tf, codec, ns := cmdtesting.NewAPIFactory() tf.Client = &fake.RESTClient{ GroupVersion: legacyscheme.Registry.GroupOrDie(api.GroupName).GroupVersion, NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { t.Fatalf(\"unexpected request: %s %#vn%#v\", req.Method, req.URL, req) return nil, nil }), } tf.Namespace = \"test\" tf.ClientConfig = &restclient.Config{ContentConfig: restclient.ContentConfig{GroupVersion: &legacyscheme.Registry.GroupOrDie(api.GroupName).GroupVersion}} buf := bytes.NewBuffer([]byte{}) cmd := NewCmdResources(f, buf, buf) cmd.SetOutput(buf) cmd.Flags().Set(\"output\", \"name\") cmd.Flags().Set(\"local\", \"true\") mapper, typer := f.Object() tf.Printer = &printers.NamePrinter{Decoders: []runtime.Decoder{codec}, Typer: typer, Mapper: mapper} opts := ResourcesOptions{FilenameOptions: resource.FilenameOptions{ Filenames: []string{\"../../../../test/fixtures/pkg/kubectl/cmd/set/multi-resource-yaml.yaml\"}}, Out: buf, Local: true, Limits: \"cpu=200m,memory=512Mi\", Requests: \"cpu=200m,memory=512Mi\", ContainerSelector: \"*\"} err := opts.Complete(f, cmd, []string{}) if err == nil { err = opts.Validate() } if err == nil { err = opts.Run() } if err != nil { t.Fatalf(\"unexpected error: %v\", err) } expectedOut := 
\"replicationcontrollers/first-rc\nreplicationcontrollers/second-rc\n\" if buf.String() != expectedOut { t.Errorf(\"expected out:\n%s\nbut got:\n%s\", expectedOut, buf.String()) } } "} {"_id":"doc-en-kubernetes-cd4f240d3200b15d44b875690374da13b099afb464f7ef1074c850342422741e","title":"","text":"} if o.Local || o.DryRun { return o.PrintObject(o.Mapper, info.Object, o.Out) if err := o.PrintObject(o.Mapper, info.Object, o.Out); err != nil { return err } continue } obj, err := resource.NewHelper(info.Client, info.Mapping).Patch(info.Namespace, info.Name, types.StrategicMergePatchType, patch.Patch)"} {"_id":"doc-en-kubernetes-743f201778f12c7ba2bb19be0ac35a77bc184d36d466adccf87bca7f3e6e92b6","title":"","text":" apiVersion: v1 kind: ReplicationController metadata: name: first-rc spec: replicas: 1 selector: app: mock template: metadata: labels: app: mock spec: containers: - name: mock-container image: gcr.io/google-containers/pause:2.0 --- apiVersion: v1 kind: ReplicationController metadata: name: second-rc spec: replicas: 1 selector: app: mock template: metadata: labels: app: mock spec: containers: - name: mock-container image: gcr.io/google-containers/pause:2.0 No newline at end of file"} {"_id":"doc-en-kubernetes-c704a67ad002abcf1406f63f6b67f7d37e3113960b328ddc381855a5eef5d385","title":"","text":"name = \"all-srcs\", srcs = [ \":package-srcs\", \"//staging/src/k8s.io/apiserver/plugin/pkg/audit/buffered:all-srcs\", \"//staging/src/k8s.io/apiserver/plugin/pkg/audit/log:all-srcs\", \"//staging/src/k8s.io/apiserver/plugin/pkg/audit/webhook:all-srcs\", ],"} {"_id":"doc-en-kubernetes-513c48ed330d9ae0d9ef08a07e8dad77b099360a97d138df348c2a52bbbe4606","title":"","text":" load(\"@io_bazel_rules_go//go:def.bzl\", \"go_library\", \"go_test\") go_library( name = \"go_default_library\", srcs = [ \"buffered.go\", \"doc.go\", ], importpath = \"k8s.io/apiserver/plugin/pkg/audit/buffered\", visibility = [\"//visibility:public\"], deps = [ 
\"//vendor/k8s.io/apimachinery/pkg/util/runtime:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/util/wait:go_default_library\", \"//vendor/k8s.io/apiserver/pkg/apis/audit:go_default_library\", \"//vendor/k8s.io/apiserver/pkg/audit:go_default_library\", \"//vendor/k8s.io/client-go/util/flowcontrol:go_default_library\", ], ) go_test( name = \"go_default_test\", srcs = [\"buffered_test.go\"], embed = [\":go_default_library\"], deps = [ \"//vendor/github.com/stretchr/testify/require:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/util/wait:go_default_library\", \"//vendor/k8s.io/apiserver/pkg/apis/audit:go_default_library\", \"//vendor/k8s.io/apiserver/pkg/audit:go_default_library\", ], ) filegroup( name = \"package-srcs\", srcs = glob([\"**\"]), tags = [\"automanaged\"], visibility = [\"//visibility:private\"], ) filegroup( name = \"all-srcs\", srcs = [\":package-srcs\"], tags = [\"automanaged\"], visibility = [\"//visibility:public\"], ) "} {"_id":"doc-en-kubernetes-1b0dd6ec2b11c0fc3df8399d8720bd1ebebad2dcc84d63d27365374633578ab7","title":"","text":" /* Copyright 2018 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package buffered import ( \"fmt\" \"sync\" \"time\" \"k8s.io/apimachinery/pkg/util/runtime\" \"k8s.io/apimachinery/pkg/util/wait\" auditinternal \"k8s.io/apiserver/pkg/apis/audit\" \"k8s.io/apiserver/pkg/audit\" \"k8s.io/client-go/util/flowcontrol\" ) // The plugin name reported in error metrics. 
const pluginName = \"buffered\" const ( // Default configuration values for ModeBatch. defaultBatchBufferSize = 10000 // Buffer up to 10000 events before starting discarding. defaultBatchMaxSize = 400 // Only send up to 400 events at a time. defaultBatchMaxWait = 30 * time.Second // Send events at least twice a minute. defaultBatchThrottleQPS = 10 // Limit the send rate by 10 QPS. defaultBatchThrottleBurst = 15 // Allow up to 15 QPS burst. ) // BatchConfig represents batching delegate audit backend configuration. type BatchConfig struct { // BufferSize defines a size of the buffering queue. BufferSize int // MaxBatchSize defines maximum size of a batch. MaxBatchSize int // MaxBatchWait indicates the maximum interval between two batches. MaxBatchWait time.Duration // ThrottleEnable defines whether throttling will be applied to the batching process. ThrottleEnable bool // ThrottleQPS defines the allowed rate of batches per second sent to the delegate backend. ThrottleQPS float32 // ThrottleBurst defines the maximum rate of batches per second sent to the delegate backend in case // the capacity defined by ThrottleQPS was not utilized. ThrottleBurst int } // NewDefaultBatchConfig returns new Config objects populated by default values. func NewDefaultBatchConfig() BatchConfig { return BatchConfig{ BufferSize: defaultBatchBufferSize, MaxBatchSize: defaultBatchMaxSize, MaxBatchWait: defaultBatchMaxWait, ThrottleEnable: true, ThrottleQPS: defaultBatchThrottleQPS, ThrottleBurst: defaultBatchThrottleBurst, } } type bufferedBackend struct { // The delegate backend that actually exports events. delegateBackend audit.Backend // Channel to buffer events before sending to the delegate backend. buffer chan *auditinternal.Event // Maximum number of events in a batch sent to the delegate backend. maxBatchSize int // Amount of time to wait after sending a batch to the delegate backend before sending another one. 
// // Receiving maxBatchSize events will always trigger sending a batch, regardless of the amount of time passed. maxBatchWait time.Duration // Channel to signal that the batching routine has processed all remaining events and exited. // Once `shutdownCh` is closed no new events will be sent to the delegate backend. shutdownCh chan struct{} // WaitGroup to control the concurrency of sending batches to the delegate backend. // Worker routine calls Add before sending a batch and // then spawns a routine that calls Done after batch was processed by the delegate backend. // This WaitGroup is used to wait for all sending routines to finish before shutting down audit backend. wg sync.WaitGroup // Limits the number of batches sent to the delegate backend per second. throttle flowcontrol.RateLimiter } var _ audit.Backend = &bufferedBackend{} // NewBackend returns a buffered audit backend that wraps delegate backend. func NewBackend(delegate audit.Backend, config BatchConfig) audit.Backend { var throttle flowcontrol.RateLimiter if config.ThrottleEnable { throttle = flowcontrol.NewTokenBucketRateLimiter(config.ThrottleQPS, config.ThrottleBurst) } return &bufferedBackend{ delegateBackend: delegate, buffer: make(chan *auditinternal.Event, config.BufferSize), maxBatchSize: config.MaxBatchSize, maxBatchWait: config.MaxBatchWait, shutdownCh: make(chan struct{}), wg: sync.WaitGroup{}, throttle: throttle, } } func (b *bufferedBackend) Run(stopCh <-chan struct{}) error { go func() { // Signal that the working routine has exited. defer close(b.shutdownCh) b.processIncomingEvents(stopCh) // Handle the events that were received after the last buffer // scraping and before this line. Since the buffer is closed, no new // events will come through. allEventsProcessed := false timer := make(chan time.Time) for !allEventsProcessed { allEventsProcessed = func() bool { // Recover from any panic in order to try to process all remaining events. 
// Note, that in case of a panic, the return value will be false and // the loop execution will continue. defer runtime.HandleCrash() events := b.collectEvents(timer, wait.NeverStop) b.processEvents(events) return len(events) == 0 }() } }() return b.delegateBackend.Run(stopCh) } // Shutdown blocks until stopCh passed to the Run method is closed and all // events added prior to that moment are batched and sent to the delegate backend. func (b *bufferedBackend) Shutdown() { // Wait until the routine spawned in Run method exits. <-b.shutdownCh // Wait until all sending routines exit. b.wg.Wait() b.delegateBackend.Shutdown() } // processIncomingEvents runs a loop that collects events from the buffer. When // b.stopCh is closed, processIncomingEvents stops and closes the buffer. func (b *bufferedBackend) processIncomingEvents(stopCh <-chan struct{}) { defer close(b.buffer) t := time.NewTimer(b.maxBatchWait) defer t.Stop() for { func() { // Recover from any panics caused by this function so a panic in the // goroutine can't bring down the main routine. defer runtime.HandleCrash() t.Reset(b.maxBatchWait) b.processEvents(b.collectEvents(t.C, stopCh)) }() select { case <-stopCh: return default: } } } // collectEvents attempts to collect some number of events in a batch. // // The following things can cause collectEvents to stop and return the list // of events: // // * Maximum number of events for a batch. // * Timer has passed. // * Buffer channel is closed and empty. // * stopCh is closed. func (b *bufferedBackend) collectEvents(timer <-chan time.Time, stopCh <-chan struct{}) []*auditinternal.Event { var events []*auditinternal.Event L: for i := 0; i < b.maxBatchSize; i++ { select { case ev, ok := <-b.buffer: // Buffer channel was closed and no new events will follow. if !ok { break L } events = append(events, ev) case <-timer: // Timer has expired. Send currently accumulated batch. break L case <-stopCh: // Backend has been stopped. Send currently accumulated batch. 
break L } } return events } // processEvents process the batch events in a goroutine using delegateBackend's ProcessEvents. func (b *bufferedBackend) processEvents(events []*auditinternal.Event) { if len(events) == 0 { return } // TODO(audit): Should control the number of active goroutines // if one goroutine takes 5 seconds to finish, the number of goroutines can be 5 * defaultBatchThrottleQPS if b.throttle != nil { b.throttle.Accept() } b.wg.Add(1) go func() { defer b.wg.Done() defer runtime.HandleCrash() // Execute the real processing in a goroutine to keep it from blocking. // This lets the batching routine continue draining the queue immediately. b.delegateBackend.ProcessEvents(events...) }() } func (b *bufferedBackend) ProcessEvents(ev ...*auditinternal.Event) { // The following mechanism is in place to support the situation when audit // events are still coming after the backend was stopped. var sendErr error var evIndex int // If the delegateBackend was shutdown and the buffer channel was closed, an // attempt to add an event to it will result in panic that we should // recover from. defer func() { if err := recover(); err != nil { sendErr = fmt.Errorf(\"panic when processing events: %v\", err) } if sendErr != nil { audit.HandlePluginError(pluginName, sendErr, ev[evIndex:]...) } }() for i, e := range ev { evIndex = i // Per the audit.Backend interface these events are reused after being // sent to the Sink. Deep copy and send the copy to the queue. event := e.DeepCopy() select { case b.buffer <- event: default: sendErr = fmt.Errorf(\"audit buffer queue blocked\") return } } } "} {"_id":"doc-en-kubernetes-01b93570357cc47f8ff567766f0d2185b27962dc0fc61ff6b644c51f38993870","title":"","text":" /* Copyright 2018 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package buffered import ( \"testing\" \"time\" \"github.com/stretchr/testify/require\" \"k8s.io/apimachinery/pkg/util/wait\" auditinternal \"k8s.io/apiserver/pkg/apis/audit\" \"k8s.io/apiserver/pkg/audit\" ) var ( closedStopCh = func() <-chan struct{} { ch := make(chan struct{}) close(ch) return ch }() infiniteTimeCh <-chan time.Time = make(chan time.Time) closedTimeCh = func() <-chan time.Time { ch := make(chan time.Time) close(ch) return ch }() ) func newEvents(number int) []*auditinternal.Event { events := make([]*auditinternal.Event, number) for i := range events { events[i] = &auditinternal.Event{} } return events } func TestBufferedBackendCollectEvents(t *testing.T) { config := NewDefaultBatchConfig() testCases := []struct { desc string timer <-chan time.Time stopCh <-chan struct{} numEvents int wantBatchSize int }{ { desc: \"max batch size encountered\", timer: infiniteTimeCh, stopCh: wait.NeverStop, numEvents: config.MaxBatchSize + 1, wantBatchSize: config.MaxBatchSize, }, { desc: \"timer expired\", timer: closedTimeCh, stopCh: wait.NeverStop, }, { desc: \"channel closed\", timer: infiniteTimeCh, stopCh: closedStopCh, }, } for _, tc := range testCases { tc := tc t.Run(tc.desc, func(t *testing.T) { t.Parallel() backend := NewBackend(&fakeBackend{}, config).(*bufferedBackend) backend.ProcessEvents(newEvents(tc.numEvents)...)
batch := backend.collectEvents(tc.timer, tc.stopCh) require.Equal(t, tc.wantBatchSize, len(batch), \"unexpected batch size\") }) } } func TestBufferedBackendProcessEventsAfterStop(t *testing.T) { t.Parallel() backend := NewBackend(&fakeBackend{}, NewDefaultBatchConfig()).(*bufferedBackend) backend.Run(closedStopCh) backend.Shutdown() backend.ProcessEvents(newEvents(1)...) batch := backend.collectEvents(infiniteTimeCh, wait.NeverStop) require.Equal(t, 0, len(batch), \"processed events after the backend has been stopped\") } func TestBufferedBackendProcessEventsBufferFull(t *testing.T) { t.Parallel() config := NewDefaultBatchConfig() config.BufferSize = 1 backend := NewBackend(&fakeBackend{}, config).(*bufferedBackend) backend.ProcessEvents(newEvents(2)...) require.Equal(t, 1, len(backend.buffer), \"buffer contains more elements than it should\") } func TestBufferedBackendShutdownWaitsForDelegatedCalls(t *testing.T) { t.Parallel() delegatedCallStartCh := make(chan struct{}) delegatedCallEndCh := make(chan struct{}) delegateBackend := &fakeBackend{ onRequest: func(_ []*auditinternal.Event) { close(delegatedCallStartCh) <-delegatedCallEndCh }, } config := NewDefaultBatchConfig() backend := NewBackend(delegateBackend, config) // Run backend, process events, wait for them to be batched and for delegated call to start. stopCh := make(chan struct{}) backend.Run(stopCh) backend.ProcessEvents(newEvents(config.MaxBatchSize)...) <-delegatedCallStartCh // Start shutdown procedure. shutdownEndCh := make(chan struct{}) go func() { close(stopCh) backend.Shutdown() close(shutdownEndCh) }() // Wait for some time and then check whether Shutdown has exited. Can give false positive, // but never false negative. time.Sleep(100 * time.Millisecond) select { case <-shutdownEndCh: t.Fatalf(\"Shutdown exited before delegated call ended\") default: } // Wait for Shutdown to exit after delegated call has exited.
close(delegatedCallEndCh) <-shutdownEndCh } type fakeBackend struct { onRequest func(events []*auditinternal.Event) } var _ audit.Backend = &fakeBackend{} func (b *fakeBackend) Run(stopCh <-chan struct{}) error { return nil } func (b *fakeBackend) Shutdown() { return } func (b *fakeBackend) ProcessEvents(ev ...*auditinternal.Event) { if b.onRequest != nil { b.onRequest(ev) } } "} {"_id":"doc-en-kubernetes-f388952a25d80a9a0d787867a377daee33952ae719b6804db8385bfe612f47ff","title":"","text":" /* Copyright 2018 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Package buffered provides an implementation for the audit.Backend interface // that batches incoming audit events and sends batches to the delegate audit.Backend. package buffered // import \"k8s.io/apiserver/plugin/pkg/audit/buffered\" "} {"_id":"doc-en-kubernetes-1fec9abbd042a0ed6cc7b456b18d1a9664f3fed1b65947524e02a649a6acac5b","title":"","text":"// ServiceAnnotationLoadBalancerInternal is the annotation used on the service // to indicate that we want an internal loadbalancer service. // If the value of ServiceAnnotationLoadBalancerInternal is false, it indicates that we want an external loadbalancer service. Default to true. // If the value of ServiceAnnotationLoadBalancerInternal is false, it indicates that we want an external loadbalancer service. Default to false. 
ServiceAnnotationLoadBalancerInternal = \"service.beta.kubernetes.io/openstack-internal-load-balancer\" )"} {"_id":"doc-en-kubernetes-0beb699cc4a304e0afe65bd6163d5dd6810447dbdee22c1e95b9fe1ec26a8039","title":"","text":"glog.V(4).Infof(\"EnsureLoadBalancer using floatingPool: %v\", floatingPool) var internalAnnotation bool internal := getStringFromServiceAnnotation(apiService, ServiceAnnotationLoadBalancerInternal, \"true\") internal := getStringFromServiceAnnotation(apiService, ServiceAnnotationLoadBalancerInternal, \"false\") switch internal { case \"true\": glog.V(4).Infof(\"Ensure an internal loadbalancer service.\")"} {"_id":"doc-en-kubernetes-853fd4d7cced2fe78c50020cf6d5261e6ae6e5162748d8bfd6fd31e5cd86ae50","title":"","text":"glog.V(4).Infof(\"Ensure an external loadbalancer service.\") internalAnnotation = false } else { return nil, fmt.Errorf(\"floating-network-id or loadbalancer.openstack.org/floating-network-id should be specified when service.beta.kubernetes.io/openstack-internal-load-balancer is false\") return nil, fmt.Errorf(\"floating-network-id or loadbalancer.openstack.org/floating-network-id should be specified when ensuring an external loadbalancer service.\") } default: return nil, fmt.Errorf(\"unknow service.beta.kubernetes.io/openstack-internal-load-balancer annotation: %v, specify \"true\" or \"false\".\","} {"_id":"doc-en-kubernetes-d85accc32c8664e5fd7d93a734b39718ee6681ce884a7faa0fd367f6f02837e8","title":"","text":"pod, err = c.CoreV1().Pods(ns).Get(pod.Name, metav1.GetOptions{}) Expect(err).NotTo(HaveOccurred()) // Wait for `VolumeStatsAggPeriod' to grab metrics time.Sleep(1 * time.Minute) // Grab kubelet metrics from the node the pod was scheduled on kubeMetrics, err := metricsGrabber.GrabFromKubelet(pod.Spec.NodeName) Expect(err).NotTo(HaveOccurred(), \"Error getting kubelet metrics : %v\", err) framework.Logf(\"Deleting pod %q/%q\", pod.Namespace, pod.Name) framework.ExpectNoError(framework.DeletePodWithWait(f, c, pod)) // Verify volume 
stat metrics were collected for the referenced PVC volumeStatKeys := []string{ kubeletmetrics.VolumeStatsUsedBytesKey,"} {"_id":"doc-en-kubernetes-fb8cda6b4773a819bbd47122213084f5b4efe1e52a0d5f7825c9e22ae3a6bad9","title":"","text":"kubeletmetrics.VolumeStatsInodesFreeKey, kubeletmetrics.VolumeStatsInodesUsedKey, } // Poll kubelet metrics waiting for the volume to be picked up // by the volume stats collector var kubeMetrics metrics.KubeletMetrics waitErr := wait.Poll(30*time.Second, 5*time.Minute, func() (bool, error) { framework.Logf(\"Grabbing Kubelet metrics\") // Grab kubelet metrics from the node the pod was scheduled on var err error kubeMetrics, err = metricsGrabber.GrabFromKubelet(pod.Spec.NodeName) if err != nil { framework.Logf(\"Error fetching kubelet metrics\") return false, err } key := volumeStatKeys[0] kubeletKeyName := fmt.Sprintf(\"%s_%s\", kubeletmetrics.KubeletSubsystem, key) if !findVolumeStatMetric(kubeletKeyName, pvc.Namespace, pvc.Name, kubeMetrics) { return false, nil } return true, nil }) Expect(waitErr).NotTo(HaveOccurred(), \"Error finding volume metrics : %v\", waitErr) for _, key := range volumeStatKeys { kubeletKeyName := fmt.Sprintf(\"%s_%s\", kubeletmetrics.KubeletSubsystem, key) verifyVolumeStatMetric(kubeletKeyName, pvc.Namespace, pvc.Name, kubeMetrics) found := findVolumeStatMetric(kubeletKeyName, pvc.Namespace, pvc.Name, kubeMetrics) Expect(found).To(BeTrue(), \"PVC %s, Namespace %s not found for %s\", pvc.Name, pvc.Namespace, kubeletKeyName) } framework.Logf(\"Deleting pod %q/%q\", pod.Namespace, pod.Name) framework.ExpectNoError(framework.DeletePodWithWait(f, c, pod)) }) })"} {"_id":"doc-en-kubernetes-9ad1c603d451b66a69d187a6d8c509cbf3d1a5daa22323cd47950d1db2d9ba7a","title":"","text":"return result } // Verifies the specified metrics are in `kubeletMetrics` func verifyVolumeStatMetric(metricKeyName string, namespace string, pvcName string, kubeletMetrics metrics.KubeletMetrics) { // Finds the sample in the specified metric from 
`KubeletMetrics` tagged with // the specified namespace and pvc name func findVolumeStatMetric(metricKeyName string, namespace string, pvcName string, kubeletMetrics metrics.KubeletMetrics) bool { found := false errCount := 0 framework.Logf(\"Looking for sample in metric `%s` tagged with namespace `%s`, PVC `%s`\", metricKeyName, namespace, pvcName) if samples, ok := kubeletMetrics[metricKeyName]; ok { for _, sample := range samples { framework.Logf(\"Found sample %s\", sample.String()) samplePVC, ok := sample.Metric[\"persistentvolumeclaim\"] if !ok { framework.Logf(\"Error getting pvc for metric %s, sample %s\", metricKeyName, sample.String())"} {"_id":"doc-en-kubernetes-079ac775f1c75fe9f8dbb63497758fb4f5a9d2a114fec7a4eb47806473f5167f","title":"","text":"} } Expect(errCount).To(Equal(0), \"Found invalid samples\") Expect(found).To(BeTrue(), \"PVC %s, Namespace %s not found for %s\", pvcName, namespace, metricKeyName) return found }"} {"_id":"doc-en-kubernetes-c66c0a96b5b60318ae93c6874a704e1df64e07e9d1a5368461eaf6a9cbcfbf42","title":"","text":"PortOpenCheck{port: 10252}, HTTPProxyCheck{Proto: \"https\", Host: cfg.API.AdvertiseAddress, Port: int(cfg.API.BindPort)}, DirAvailableCheck{Path: filepath.Join(kubeadmconstants.KubernetesDir, kubeadmconstants.ManifestsSubDirName)}, DirAvailableCheck{Path: \"/var/lib/kubelet\"}, FileContentCheck{Path: bridgenf, Content: []byte{'1'}}, SwapCheck{}, InPathCheck{executable: \"ip\", mandatory: true},"} {"_id":"doc-en-kubernetes-12ec821089bd0d7b353e75c1f213250ccccfe5b66da3488e43f39ef14a984d92","title":"","text":"ServiceCheck{Service: \"docker\", CheckIfActive: true}, PortOpenCheck{port: 10250}, DirAvailableCheck{Path: filepath.Join(kubeadmconstants.KubernetesDir, kubeadmconstants.ManifestsSubDirName)}, DirAvailableCheck{Path: \"/var/lib/kubelet\"}, FileAvailableCheck{Path: cfg.CACertPath}, FileAvailableCheck{Path: filepath.Join(kubeadmconstants.KubernetesDir, kubeadmconstants.KubeletKubeConfigFileName)}, FileContentCheck{Path: 
bridgenf, Content: []byte{'1'}},"} {"_id":"doc-en-kubernetes-402589e4bafdcf8a3d902eb342bc69c0d1941a6427c7b777aaee009f4a987beb","title":"","text":"RequireKubeConfig: false, KubeConfig: flag.NewStringFlag(\"/var/lib/kubelet/kubeconfig\"), ContainerRuntimeOptions: *NewContainerRuntimeOptions(), CertDirectory: \"/var/run/kubernetes\", CertDirectory: \"/var/lib/kubelet/pki\", RootDirectory: v1alpha1.DefaultRootDir, // DEPRECATED: auto detecting cloud providers goes against the initiative // for out-of-tree cloud providers as we'll now depend on cAdvisor integrations"} {"_id":"doc-en-kubernetes-09ba942f4100f76508438efa1282a69e54cdcf21172437fdce6cedf3ab005788","title":"","text":"\"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/runtime:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/types:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/util/wait:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/watch:go_default_library\", \"//vendor/k8s.io/apiserver/pkg/util/feature:go_default_library\","} {"_id":"doc-en-kubernetes-f83e3d708392a283ebba98a2de2bfc65a8d14a1574be05b79b70b9c814dbf6fc","title":"","text":"metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured\" \"k8s.io/apimachinery/pkg/runtime\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/apimachinery/pkg/watch\" \"k8s.io/client-go/dynamic\""} {"_id":"doc-en-kubernetes-3a1fd1ecf5986d6e0ae6fce4fea3c3fd4ef399e9f999b0a3d9caec7910b524ba","title":"","text":"} } func TestPatch(t *testing.T) { stopCh, apiExtensionClient, clientPool, err := testserver.StartDefaultServer() if err != nil { t.Fatal(err) } defer close(stopCh) noxuDefinition := testserver.NewNoxuCustomResourceDefinition(apiextensionsv1beta1.ClusterScoped) noxuVersionClient, err := 
testserver.CreateNewCustomResourceDefinition(noxuDefinition, apiExtensionClient, clientPool) if err != nil { t.Fatal(err) } ns := \"not-the-default\" noxuNamespacedResourceClient := noxuVersionClient.Resource(&metav1.APIResource{ Name: noxuDefinition.Spec.Names.Plural, Namespaced: true, }, ns) noxuInstanceToCreate := testserver.NewNoxuInstance(ns, \"foo\") createdNoxuInstance, err := noxuNamespacedResourceClient.Create(noxuInstanceToCreate) if err != nil { t.Fatal(err) } patch := []byte(`{\"num\": {\"num2\":999}}`) createdNoxuInstance, err = noxuNamespacedResourceClient.Patch(\"foo\", types.MergePatchType, patch) if err != nil { t.Fatalf(\"unexpected error: %v\", err) } // a patch with no change createdNoxuInstance, err = noxuNamespacedResourceClient.Patch(\"foo\", types.MergePatchType, patch) if err != nil { t.Fatalf(\"unexpected error: %v\", err) } // an empty patch createdNoxuInstance, err = noxuNamespacedResourceClient.Patch(\"foo\", types.MergePatchType, []byte(`{}`)) if err != nil { t.Fatalf(\"unexpected error: %v\", err) } originalJSON, err := runtime.Encode(unstructured.UnstructuredJSONScheme, createdNoxuInstance) if err != nil { t.Fatalf(\"unexpected error: %v\", err) } gottenNoxuInstance, err := runtime.Decode(unstructured.UnstructuredJSONScheme, originalJSON) if err != nil { t.Fatalf(\"unexpected error: %v\", err) } // Check if int is preserved. 
unstructuredObj := gottenNoxuInstance.(*unstructured.Unstructured).Object num := unstructuredObj[\"num\"].(map[string]interface{}) num1 := num[\"num1\"].(int64) num2 := num[\"num2\"].(int64) if num1 != 9223372036854775807 || num2 != 999 { t.Errorf(\"Expected %v, got %v, %v\", `9223372036854775807, 999`, num1, num2) } } func TestCrossNamespaceListWatch(t *testing.T) { stopCh, apiExtensionClient, clientPool, err := testserver.StartDefaultServer() if err != nil {"} {"_id":"doc-en-kubernetes-d31cf03a946e8f5368d7bd0d2de9546c6a35ea19f752affb73016800d3a6e4c7","title":"","text":"if err != nil { return err } mustCheckData = false continue if !bytes.Equal(data, origState.data) { // original data changed, restart loop mustCheckData = false continue } } return decode(s.codec, s.versioner, origState.data, out, origState.rev) }"} {"_id":"doc-en-kubernetes-16de65748754c10be5aea03f696ceae821440f39b46be88dfb88e7f5f3bfd141","title":"","text":"CPUManager utilfeature.Feature = \"CPUManager\" // owner: @derekwaynecarr // alpha: v1.8 // beta: v1.10 // // Enable pods to consume pre-allocated huge pages of varying page sizes HugePages utilfeature.Feature = \"HugePages\""} {"_id":"doc-en-kubernetes-cb26900187cb2692ae0cd452a8da882793d8852d9a6a194253280397133632e4","title":"","text":"RotateKubeletClientCertificate: {Default: true, PreRelease: utilfeature.Beta}, PersistentLocalVolumes: {Default: false, PreRelease: utilfeature.Alpha}, LocalStorageCapacityIsolation: {Default: false, PreRelease: utilfeature.Alpha}, HugePages: {Default: false, PreRelease: utilfeature.Alpha}, HugePages: {Default: true, PreRelease: utilfeature.Beta}, DebugContainers: {Default: false, PreRelease: utilfeature.Alpha}, PodPriority: {Default: false, PreRelease: utilfeature.Alpha}, EnableEquivalenceClassCache: {Default: false, PreRelease: utilfeature.Alpha},"} {"_id":"doc-en-kubernetes-fb60e4b400c73f7472d474b8b92ebfd9ef489c04eea1534dd2f8e23c40208d6c","title":"","text":"RUN ln -s /bin/sh /bin/bash RUN echo 
CACHEBUST>/dev/null && clean-install iptables ca-certificates ceph-common cifs-utils conntrack e2fsprogs ebtables ethtool kmod ca-certificates conntrack util-linux socat git glusterfs-client iptables jq kmod openssh-client nfs-common glusterfs-client cifs-utils ceph-common socat util-linux COPY cni-bin/bin /opt/cni/bin"} {"_id":"doc-en-kubernetes-09b8c32084a23050b6fe994daba65b193098c7df396e022a20f70a59219c6411","title":"","text":"REGISTRY?=gcr.io/google-containers IMAGE?=debian-hyperkube-base TAG=0.5 TAG=0.6 ARCH?=amd64 CACHEBUST?=1"} {"_id":"doc-en-kubernetes-5d574613f7b06edd538737cf928a8da24c5eb576603a49d7434b9dae456d940b","title":"","text":"docker_pull( name = \"debian-hyperkube-base-amd64\", digest = \"sha256:d216b425004fcb6d8047f74e81b30e7ead55f73e73511ca53a329c358786b6c9\", digest = \"sha256:10546d592e58d5fdb2e25d79f291b8ac62c8d3a3d83337ad7309cca766dbebce\", registry = \"gcr.io\", repository = \"google-containers/debian-hyperkube-base-amd64\", tag = \"0.5\", # ignored, but kept here for documentation tag = \"0.6\", # ignored, but kept here for documentation ) docker_pull("} {"_id":"doc-en-kubernetes-913e01543bce7403108d4ba05afe6cbf509d89e951861a78cca54c32db274f05","title":"","text":"ARCH?=amd64 HYPERKUBE_BIN?=_output/dockerized/bin/linux/$(ARCH)/hyperkube BASEIMAGE=gcr.io/google-containers/debian-hyperkube-base-$(ARCH):0.5 BASEIMAGE=gcr.io/google-containers/debian-hyperkube-base-$(ARCH):0.6 TEMP_DIR:=$(shell mktemp -d -t hyperkubeXXXXXX) all: build"} {"_id":"doc-en-kubernetes-4de2367d6c65b3e06a6fe128ae52c399590d1dabc06ddc20ae10bc229b92d6f8","title":"","text":"var _ volume.Mounter = &azureDiskMounter{} func (m *azureDiskMounter) GetAttributes() volume.Attributes { volumeSource, _ := getVolumeSource(m.spec) readOnly := false volumeSource, err := getVolumeSource(m.spec) if err != nil && volumeSource.ReadOnly != nil { readOnly = *volumeSource.ReadOnly } return volume.Attributes{ ReadOnly: *volumeSource.ReadOnly, Managed: !*volumeSource.ReadOnly, ReadOnly: 
readOnly, Managed: !readOnly, SupportsSELinux: true, } }"} {"_id":"doc-en-kubernetes-7b0a3428ffc4ef2466ba31fe67dfb8fb342a52b585493cbee6dc39d36af122ad","title":"","text":"options := []string{\"bind\"} if *volumeSource.ReadOnly { if volumeSource.ReadOnly != nil && *volumeSource.ReadOnly { options = append(options, \"ro\") }"} {"_id":"doc-en-kubernetes-8ee820d48e4a1aa63eaa618ba2c7b2aff2b0af599bd6c86c696b53cf5d35f2ad","title":"","text":"return mountErr } if !*volumeSource.ReadOnly { if volumeSource.ReadOnly == nil || !*volumeSource.ReadOnly { volume.SetVolumeOwnership(m, fsGroup) }"} {"_id":"doc-en-kubernetes-b27bc9bb4470d84252cbd63b1b8133f462885f9732d3b581e6ee6438f5d428c1","title":"","text":"annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: [\"kube-dns\"] topologyKey: kubernetes.io/hostname tolerations: - key: \"CriticalAddonsOnly\" operator: \"Exists\""} {"_id":"doc-en-kubernetes-17a4ba8b747bfbbe90fce83ed979c1f6b5a2f91e095ca264fe6e42e7dfef59ce","title":"","text":"CreateLoadBalancer(*elb.CreateLoadBalancerInput) (*elb.CreateLoadBalancerOutput, error) DeleteLoadBalancer(*elb.DeleteLoadBalancerInput) (*elb.DeleteLoadBalancerOutput, error) DescribeLoadBalancers(*elb.DescribeLoadBalancersInput) (*elb.DescribeLoadBalancersOutput, error) AddTags(*elb.AddTagsInput) (*elb.AddTagsOutput, error) RegisterInstancesWithLoadBalancer(*elb.RegisterInstancesWithLoadBalancerInput) (*elb.RegisterInstancesWithLoadBalancerOutput, error) DeregisterInstancesFromLoadBalancer(*elb.DeregisterInstancesFromLoadBalancerInput) (*elb.DeregisterInstancesFromLoadBalancerOutput, error) CreateLoadBalancerPolicy(*elb.CreateLoadBalancerPolicyInput) (*elb.CreateLoadBalancerPolicyOutput, error)"} {"_id":"doc-en-kubernetes-6508560b46659764bc43854af999e2b91b33d2879238abfbd3a1971bb15cf121","title":"","text":"return ret, 
nil } func (c *Cloud) addLoadBalancerTags(loadBalancerName string, requested map[string]string) error { var tags []*elb.Tag for k, v := range requested { tag := &elb.Tag{ Key: aws.String(k), Value: aws.String(v), } tags = append(tags, tag) } request := &elb.AddTagsInput{} request.LoadBalancerNames = []*string{&loadBalancerName} request.Tags = tags _, err := c.elb.AddTags(request) if err != nil { return fmt.Errorf(\"error adding tags to load balancer: %v\", err) } return nil } // Retrieves instance's vpc id from metadata func (c *Cloud) findVPCID() (string, error) { macs, err := c.metadata.GetMetadata(\"network/interfaces/macs/\")"} {"_id":"doc-en-kubernetes-3b3baafeb630d41d9656b0c60db0ef0a76269859ba08f72f3e7679e89847564a","title":"","text":"panic(\"Not implemented\") } func (elb *FakeELB) AddTags(input *elb.AddTagsInput) (*elb.AddTagsOutput, error) { panic(\"Not implemented\") } func (elb *FakeELB) RegisterInstancesWithLoadBalancer(*elb.RegisterInstancesWithLoadBalancerInput) (*elb.RegisterInstancesWithLoadBalancerOutput, error) { panic(\"Not implemented\") }"} {"_id":"doc-en-kubernetes-e48a1a826a87fb3705c41edb502cd79a1ba3abe07174afddd8062e482da08e64","title":"","text":"} } } { // Add additional tags glog.V(2).Infof(\"Creating additional load balancer tags for %s\", loadBalancerName) tags := getLoadBalancerAdditionalTags(annotations) if len(tags) > 0 { err := c.addLoadBalancerTags(loadBalancerName, tags) if err != nil { return nil, fmt.Errorf(\"unable to create additional load balancer tags: %v\", err) } } } } // Whether the ELB was new or existing, sync attributes regardless. 
This accounts for things"} {"_id":"doc-en-kubernetes-4c7e06690a0a0849e382705b8690304780c0edebfe98ebfcdec5c0f0cedee04e","title":"","text":"}) } func (m *MockedFakeELB) AddTags(input *elb.AddTagsInput) (*elb.AddTagsOutput, error) { args := m.Called(input) return args.Get(0).(*elb.AddTagsOutput), nil } func TestReadAWSCloudConfig(t *testing.T) { tests := []struct { name string"} {"_id":"doc-en-kubernetes-6b67bc51a5a0a2a8a6ebc23e170b1b2db7760f66c7c6df251822447fa34205f4","title":"","text":"} } // Test that we can add a load balancer tag func TestAddLoadBalancerTags(t *testing.T) { loadBalancerName := \"test-elb\" awsServices := newMockedFakeAWSServices(TestClusterId) c, _ := newAWSCloud(strings.NewReader(\"[global]\"), awsServices) want := make(map[string]string) want[\"tag1\"] = \"val1\" expectedAddTagsRequest := &elb.AddTagsInput{ LoadBalancerNames: []*string{&loadBalancerName}, Tags: []*elb.Tag{ { Key: aws.String(\"tag1\"), Value: aws.String(\"val1\"), }, }, } awsServices.elb.(*MockedFakeELB).On(\"AddTags\", expectedAddTagsRequest).Return(&elb.AddTagsOutput{}) err := c.addLoadBalancerTags(loadBalancerName, want) assert.Nil(t, err, \"Error adding load balancer tags: %v\", err) awsServices.elb.(*MockedFakeELB).AssertExpectations(t) } func newMockedFakeAWSServices(id string) *FakeAWSServices { s := NewFakeAWSServices(id) s.ec2 = &MockedFakeEC2{FakeEC2Impl: s.ec2.(*FakeEC2Impl)}"} {"_id":"doc-en-kubernetes-e109a2fbfce288406a296774d7b3a4f5ecd070fb0d7e7d4a1acda38f2aaa252b","title":"","text":"ec2Device := \"/dev/xvd\" + string(mountDevice) if !alreadyAttached { available, err := c.checkIfAvailable(disk, \"attaching\", awsInstance.awsID) if !available { attachEnded = true return \"\", err } request := &ec2.AttachVolumeInput{ Device: aws.String(ec2Device), InstanceId: aws.String(awsInstance.awsID),"} {"_id":"doc-en-kubernetes-04be82110cbf88dd459082f0928194ed446e38de1924990dd3c12e017e2b7c5b","title":"","text":"if err != nil { return false, err } available, err := 
c.checkIfAvailable(awsDisk, \"deleting\", \"\") if !available { return false, err } return awsDisk.deleteVolume() } func (c *Cloud) checkIfAvailable(disk *awsDisk, opName string, instance string) (bool, error) { info, err := disk.describeVolume() if err != nil { glog.Errorf(\"Error describing volume %q: %q\", disk.awsID, err) // if for some reason we can not describe volume we will return error return false, err } volumeState := aws.StringValue(info.State) opError := fmt.Sprintf(\"Error %s EBS volume %q\", opName, disk.awsID) if len(instance) != 0 { opError = fmt.Sprintf(\"%q to instance %q\", opError, instance) } // Only available volumes can be attached or deleted if volumeState != \"available\" { // Volume is attached somewhere else and we can not attach it here if len(info.Attachments) > 0 { attachment := info.Attachments[0] attachErr := fmt.Errorf(\"%s since volume is currently attached to %q\", opError, aws.StringValue(attachment.InstanceId)) glog.Error(attachErr) return false, attachErr } attachErr := fmt.Errorf(\"%s since volume is in %q state\", opError, volumeState) glog.Error(attachErr) return false, attachErr } return true, nil } func (c *Cloud) GetLabelsForVolume(pv *v1.PersistentVolume) (map[string]string, error) { // Ignore any volumes that are being provisioned if pv.Spec.AWSElasticBlockStore.VolumeID == volume.ProvisionedVolumeName {"} {"_id":"doc-en-kubernetes-8651d4b721c6bafa4306e2b3718dfa90c347fad4bec4755e7efb22e42698133e","title":"","text":"\"//pkg/kubelet/container:go_default_library\", \"//pkg/kubelet/events:go_default_library\", \"//pkg/kubelet/images:go_default_library\", \"//pkg/kubelet/kuberuntime/logs:go_default_library\", \"//pkg/kubelet/lifecycle:go_default_library\", \"//pkg/kubelet/metrics:go_default_library\", \"//pkg/kubelet/prober/results:go_default_library\","} {"_id":"doc-en-kubernetes-cf711d69c90b9e2152d313d48274011a019522ee3fb8444e12fda381497fa316","title":"","text":"\"//pkg/util/tail:go_default_library\", 
\"//pkg/util/version:go_default_library\", \"//vendor/github.com/armon/circbuf:go_default_library\", \"//vendor/github.com/docker/docker/pkg/jsonlog:go_default_library\", \"//vendor/github.com/fsnotify/fsnotify:go_default_library\", \"//vendor/github.com/golang/glog:go_default_library\", \"//vendor/github.com/google/cadvisor/info/v1:go_default_library\", \"//vendor/google.golang.org/grpc:go_default_library\","} {"_id":"doc-en-kubernetes-ac5f3e5e69be3a1788922d687979c10af22297611c02563cfd2145ea83a79760","title":"","text":"\"kuberuntime_container_test.go\", \"kuberuntime_gc_test.go\", \"kuberuntime_image_test.go\", \"kuberuntime_logs_test.go\", \"kuberuntime_manager_test.go\", \"kuberuntime_sandbox_test.go\", \"labels_test.go\","} {"_id":"doc-en-kubernetes-caf43fc0ef890d26f9f8f5dfaadcbf8ed1a45f0d32ef3913adc4f3a860fafd65","title":"","text":"filegroup( name = \"all-srcs\", srcs = [\":package-srcs\"], srcs = [ \":package-srcs\", \"//pkg/kubelet/kuberuntime/logs:all-srcs\", ], tags = [\"automanaged\"], )"} {"_id":"doc-en-kubernetes-890ca39b270da9db92004019ce8e1dd82ff716e684aea7217dcc5bfc4c5d8be3","title":"","text":"package kuberuntime import ( \"bufio\" \"bytes\" \"encoding/json\" \"errors\" \"fmt\" \"io\" \"math\" \"os\" \"time\" \"github.com/docker/docker/pkg/jsonlog\" \"github.com/fsnotify/fsnotify\" \"github.com/golang/glog\" \"k8s.io/api/core/v1\" runtimeapi \"k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime\" \"k8s.io/kubernetes/pkg/util/tail\" \"k8s.io/kubernetes/pkg/kubelet/kuberuntime/logs\" ) // Notice that the current kuberuntime logs implementation doesn't handle // log rotation. // * It will not retrieve logs in rotated log file. // * If log rotation happens when following the log: // * If the rotation is using create mode, we'll still follow the old file. // * If the rotation is using copytruncate, we'll be reading at the original position and get nothing. // TODO(random-liu): Support log rotation. // streamType is the type of the stream. 
type streamType string const ( stderrType streamType = \"stderr\" stdoutType streamType = \"stdout\" // timeFormat is the time format used in the log. timeFormat = time.RFC3339Nano // blockSize is the block size used in tail. blockSize = 1024 // stateCheckPeriod is the period to check container state while following // the container log. Kubelet should not keep following the log when the // container is not running. stateCheckPeriod = 5 * time.Second ) var ( // eol is the end-of-line sign in the log. eol = []byte{'\n'} // delimiter is the delimiter for timestamp and streamtype in log line. delimiter = []byte{' '} ) // logMessage is the internal log type. type logMessage struct { timestamp time.Time stream streamType log []byte } // reset resets the log to nil. func (l *logMessage) reset() { l.timestamp = time.Time{} l.stream = \"\" l.log = nil } // logOptions is the internal type of all log options. type logOptions struct { tail int64 bytes int64 since time.Time follow bool timestamp bool } // newLogOptions converts the v1.PodLogOptions to internal logOptions. func newLogOptions(apiOpts *v1.PodLogOptions, now time.Time) *logOptions { opts := &logOptions{ tail: -1, // -1 by default which means read all logs. bytes: -1, // -1 by default which means read all logs. follow: apiOpts.Follow, timestamp: apiOpts.Timestamps, } if apiOpts.TailLines != nil { opts.tail = *apiOpts.TailLines } if apiOpts.LimitBytes != nil { opts.bytes = *apiOpts.LimitBytes } if apiOpts.SinceSeconds != nil { opts.since = now.Add(-time.Duration(*apiOpts.SinceSeconds) * time.Second) } if apiOpts.SinceTime != nil && apiOpts.SinceTime.After(opts.since) { opts.since = apiOpts.SinceTime.Time } return opts } // ReadLogs reads the container log and redirects it into stdout and stderr. // Note that containerID is only needed when following the log, or else // just pass in empty string \"\".
func (m *kubeGenericRuntimeManager) ReadLogs(path, containerID string, apiOpts *v1.PodLogOptions, stdout, stderr io.Writer) error { f, err := os.Open(path) if err != nil { return fmt.Errorf(\"failed to open log file %q: %v\", path, err) } defer f.Close() // Convert v1.PodLogOptions into internal log options. opts := newLogOptions(apiOpts, time.Now()) // Search start point based on tail line. start, err := tail.FindTailLineStartIndex(f, opts.tail) if err != nil { return fmt.Errorf(\"failed to tail %d lines of log file %q: %v\", opts.tail, path, err) } if _, err := f.Seek(start, os.SEEK_SET); err != nil { return fmt.Errorf(\"failed to seek %d in log file %q: %v\", start, path, err) } // Start parsing the logs. r := bufio.NewReader(f) // Do not create watcher here because it is not needed if `Follow` is false. var watcher *fsnotify.Watcher var parse parseFunc writer := newLogWriter(stdout, stderr, opts) msg := &logMessage{} for { l, err := r.ReadBytes(eol[0]) if err != nil { if err != io.EOF { // This is a real error return fmt.Errorf(\"failed to read log file %q: %v\", path, err) } if !opts.follow { // Return directly when we reach the end of the file and are not following. if len(l) > 0 { glog.Warningf(\"Incomplete line in log file %q: %q\", path, l) } glog.V(2).Infof(\"Finish parsing log file %q\", path) return nil } // Reset seek so that if this is an incomplete line, // it will be read again. if _, err := f.Seek(-int64(len(l)), os.SEEK_CUR); err != nil { return fmt.Errorf(\"failed to reset seek in log file %q: %v\", path, err) } if watcher == nil { // Initialize the watcher if it has not been initialized yet. if watcher, err = fsnotify.NewWatcher(); err != nil { return fmt.Errorf(\"failed to create fsnotify watcher: %v\", err) } defer watcher.Close() if err := watcher.Add(f.Name()); err != nil { return fmt.Errorf(\"failed to watch file %q: %v\", f.Name(), err) } } // Wait until the next log change. 
if found, err := m.waitLogs(containerID, watcher); !found { return err } continue } if parse == nil { // Initialize the log parsing function. parse, err = getParseFunc(l) if err != nil { return fmt.Errorf(\"failed to get parse function: %v\", err) } } // Parse the log line. msg.reset() if err := parse(l, msg); err != nil { glog.Errorf(\"Failed with err %v when parsing log for log file %q: %q\", err, path, l) continue } // Write the log line into the stream. if err := writer.write(msg); err != nil { if err == errMaximumWrite { glog.V(2).Infof(\"Finish parsing log file %q, hit bytes limit %d(bytes)\", path, opts.bytes) return nil } glog.Errorf(\"Failed with err %v when writing log for log file %q: %+v\", err, path, msg) return err } } } // waitLogs waits for the next log write. It returns a boolean and an error. The boolean // indicates whether a new log line was found; the error is any error that occurred while waiting for new logs. func (m *kubeGenericRuntimeManager) waitLogs(id string, w *fsnotify.Watcher) (bool, error) { errRetry := 5 for { select { case e := <-w.Events: switch e.Op { case fsnotify.Write: return true, nil default: glog.Errorf(\"Unexpected fsnotify event: %v, retrying...\", e) } case err := <-w.Errors: glog.Errorf(\"Fsnotify watch error: %v, %d error retries remaining\", err, errRetry) if errRetry == 0 { return false, err } errRetry-- case <-time.After(stateCheckPeriod): s, err := m.runtimeService.ContainerStatus(id) if err != nil { return false, err } // Only keep following the container log while the container is running. if s.State != runtimeapi.ContainerState_CONTAINER_RUNNING { glog.Errorf(\"Container %q is not running (state=%q)\", id, s.State) // Do not return error because it's normal that the container stops // during waiting. return false, nil } } } } // parseFunc is a function parsing one log line to the internal log type. // Notice that the caller must make sure logMessage is not nil. 
type parseFunc func([]byte, *logMessage) error var parseFuncs []parseFunc = []parseFunc{ parseCRILog, // CRI log format parse function parseDockerJSONLog, // Docker JSON log format parse function } // parseCRILog parses logs in CRI log format. CRI Log format example: // 2016-10-06T00:17:09.669794202Z stdout log content 1 // 2016-10-06T00:17:09.669794203Z stderr log content 2 func parseCRILog(log []byte, msg *logMessage) error { var err error // Parse timestamp idx := bytes.Index(log, delimiter) if idx < 0 { return fmt.Errorf(\"timestamp is not found\") } msg.timestamp, err = time.Parse(timeFormat, string(log[:idx])) if err != nil { return fmt.Errorf(\"unexpected timestamp format %q: %v\", timeFormat, err) } // Parse stream type log = log[idx+1:] idx = bytes.Index(log, delimiter) if idx < 0 { return fmt.Errorf(\"stream type is not found\") } msg.stream = streamType(log[:idx]) if msg.stream != stdoutType && msg.stream != stderrType { return fmt.Errorf(\"unexpected stream type %q\", msg.stream) } // Get log content msg.log = log[idx+1:] return nil } // parseDockerJSONLog parses logs in Docker JSON log format. Docker JSON log format // example: // {\"log\":\"content 1\",\"stream\":\"stdout\",\"time\":\"2016-10-20T18:39:20.57606443Z\"} // {\"log\":\"content 2\",\"stream\":\"stderr\",\"time\":\"2016-10-20T18:39:20.57606444Z\"} func parseDockerJSONLog(log []byte, msg *logMessage) error { var l = &jsonlog.JSONLog{} l.Reset() // TODO: JSON decoding is fairly expensive, we should evaluate this. if err := json.Unmarshal(log, l); err != nil { return fmt.Errorf(\"failed with %v to unmarshal log %q\", err, l) } msg.timestamp = l.Created msg.stream = streamType(l.Stream) msg.log = []byte(l.Log) return nil } // getParseFunc returns proper parse function based on the sample log line passed in. 
func getParseFunc(log []byte) (parseFunc, error) { for _, p := range parseFuncs { if err := p(log, &logMessage{}); err == nil { return p, nil } } return nil, fmt.Errorf(\"unsupported log format: %q\", log) } // logWriter controls the writing into the stream based on the log options. type logWriter struct { stdout io.Writer stderr io.Writer opts *logOptions remain int64 } // errMaximumWrite is returned when all bytes have been written. var errMaximumWrite = errors.New(\"maximum write\") // errShortWrite is returned when the message is not fully written. var errShortWrite = errors.New(\"short write\") func newLogWriter(stdout io.Writer, stderr io.Writer, opts *logOptions) *logWriter { w := &logWriter{ stdout: stdout, stderr: stderr, opts: opts, remain: math.MaxInt64, // initialize it as infinity } if opts.bytes >= 0 { w.remain = opts.bytes } return w } opts := logs.NewLogOptions(apiOpts, time.Now()) // writeLogs writes logs into stdout, stderr. func (w *logWriter) write(msg *logMessage) error { if msg.timestamp.Before(w.opts.since) { // Skip the line because it's older than since return nil } line := msg.log if w.opts.timestamp { prefix := append([]byte(msg.timestamp.Format(timeFormat)), delimiter[0]) line = append(prefix, line...) } // If the line is longer than the remaining bytes, cut it. if int64(len(line)) > w.remain { line = line[:w.remain] } // Get the proper stream to write to. 
var stream io.Writer switch msg.stream { case stdoutType: stream = w.stdout case stderrType: stream = w.stderr default: return fmt.Errorf(\"unexpected stream type %q\", msg.stream) } n, err := stream.Write(line) w.remain -= int64(n) if err != nil { return err } // If the line has not been fully written, return errShortWrite if n < len(line) { return errShortWrite } // If there are no more bytes left, return errMaximumWrite if w.remain <= 0 { return errMaximumWrite } return nil return logs.ReadLogs(path, containerID, opts, m.runtimeService, stdout, stderr) }"} {"_id":"doc-en-kubernetes-ed93fbd6c9c69e8102eda01d0710f659118fe4ed61c6707f567c8bd7fbc324d6","title":"","text":" /* Copyright 2016 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package kuberuntime import ( \"bytes\" \"testing\" \"time\" \"github.com/stretchr/testify/assert\" \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) func TestLogOptions(t *testing.T) { var ( line = int64(8) bytes = int64(64) timestamp = metav1.Now() sinceseconds = int64(10) ) for c, test := range []struct { apiOpts *v1.PodLogOptions expect *logOptions }{ { // empty options apiOpts: &v1.PodLogOptions{}, expect: &logOptions{tail: -1, bytes: -1}, }, { // test tail lines apiOpts: &v1.PodLogOptions{TailLines: &line}, expect: &logOptions{tail: line, bytes: -1}, }, { // test limit bytes apiOpts: &v1.PodLogOptions{LimitBytes: &bytes}, expect: &logOptions{tail: -1, bytes: bytes}, }, { // test since timestamp apiOpts: &v1.PodLogOptions{SinceTime: &timestamp}, expect: &logOptions{tail: -1, bytes: -1, since: timestamp.Time}, }, { // test since seconds apiOpts: &v1.PodLogOptions{SinceSeconds: &sinceseconds}, expect: &logOptions{tail: -1, bytes: -1, since: timestamp.Add(-10 * time.Second)}, }, } { t.Logf(\"TestCase #%d: %+v\", c, test) opts := newLogOptions(test.apiOpts, timestamp.Time) assert.Equal(t, test.expect, opts) } } func TestParseLog(t *testing.T) { timestamp, err := time.Parse(timeFormat, \"2016-10-20T18:39:20.57606443Z\") assert.NoError(t, err) msg := &logMessage{} for c, test := range []struct { line string msg *logMessage err bool }{ { // Docker log format stdout line: `{\"log\":\"docker stdout test log\",\"stream\":\"stdout\",\"time\":\"2016-10-20T18:39:20.57606443Z\"}` + \"\\n\", msg: &logMessage{ timestamp: timestamp, stream: stdoutType, log: []byte(\"docker stdout test log\"), }, }, { // Docker log format stderr line: `{\"log\":\"docker stderr test log\",\"stream\":\"stderr\",\"time\":\"2016-10-20T18:39:20.57606443Z\"}` + \"\\n\", msg: &logMessage{ timestamp: timestamp, stream: stderrType, log: []byte(\"docker stderr test log\"), }, }, { // CRI log format stdout line: \"2016-10-20T18:39:20.57606443Z stdout cri stdout test log\\n\", msg: &logMessage{ timestamp: timestamp, stream: stdoutType, log: []byte(\"cri stdout test log\\n\"), }, }, { // CRI log format stderr line: \"2016-10-20T18:39:20.57606443Z stderr cri stderr test log\\n\", msg: &logMessage{ timestamp: timestamp, stream: stderrType, log: []byte(\"cri stderr test log\\n\"), }, }, { // Unsupported Log format line: \"unsupported log format test log\\n\", msg: &logMessage{}, err: true, }, } { t.Logf(\"TestCase #%d: %+v\", c, test) parse, err := getParseFunc([]byte(test.line)) if test.err { assert.Error(t, err) continue } assert.NoError(t, err) err = parse([]byte(test.line), msg) assert.NoError(t, err) assert.Equal(t, test.msg, msg) } } func TestWriteLogs(t *testing.T) { timestamp := time.Unix(1234, 4321) log := \"abcdefg\\n\" for c, test := range []struct { stream streamType since time.Time timestamp bool expectStdout string expectStderr string }{ { // stderr log stream: stderrType, expectStderr: log, }, { // stdout log stream: stdoutType, expectStdout: log, }, { // since is after timestamp stream: stdoutType, since: timestamp.Add(1 * time.Second), }, { // timestamp enabled stream: stderrType, timestamp: true, expectStderr: timestamp.Format(timeFormat) + \" \" + log, }, } { t.Logf(\"TestCase #%d: %+v\", c, test) msg := &logMessage{ timestamp: timestamp, stream: test.stream, log: []byte(log), } stdoutBuf := bytes.NewBuffer(nil) stderrBuf := bytes.NewBuffer(nil) w := newLogWriter(stdoutBuf, stderrBuf, &logOptions{since: test.since, timestamp: test.timestamp, bytes: -1}) err := w.write(msg) assert.NoError(t, err) assert.Equal(t, test.expectStdout, stdoutBuf.String()) assert.Equal(t, test.expectStderr, stderrBuf.String()) } } func TestWriteLogsWithBytesLimit(t *testing.T) { timestamp := time.Unix(1234, 4321) timestampStr := timestamp.Format(timeFormat) log := \"abcdefg\\n\" for c, test := range []struct { stdoutLines int stderrLines int bytes int timestamp bool expectStdout string expectStderr string }{ { // limit bytes less than one line stdoutLines: 3, bytes: 3, expectStdout: \"abc\", }, { // limit bytes across lines stdoutLines: 3, bytes: len(log) + 3, expectStdout: \"abcdefg\\nabc\", }, { // limit bytes more than all lines stdoutLines: 3, bytes: 3 * len(log), expectStdout: \"abcdefg\\nabcdefg\\nabcdefg\\n\", }, { // limit bytes for stderr stderrLines: 3, bytes: len(log) + 3, expectStderr: \"abcdefg\\nabc\", }, { // limit bytes for both stdout and stderr, stdout first. stdoutLines: 1, stderrLines: 2, bytes: len(log) + 3, expectStdout: \"abcdefg\\n\", expectStderr: \"abc\", }, { // limit bytes with timestamp stdoutLines: 3, timestamp: true, bytes: len(timestampStr) + 1 + len(log) + 2, expectStdout: timestampStr + \" \" + log + timestampStr[:2], }, } { t.Logf(\"TestCase #%d: %+v\", c, test) msg := &logMessage{ timestamp: timestamp, log: []byte(log), } stdoutBuf := bytes.NewBuffer(nil) stderrBuf := bytes.NewBuffer(nil) w := newLogWriter(stdoutBuf, stderrBuf, &logOptions{timestamp: test.timestamp, bytes: int64(test.bytes)}) for i := 0; i < test.stdoutLines; i++ { msg.stream = stdoutType if err := w.write(msg); err != nil { assert.EqualError(t, err, errMaximumWrite.Error()) } } for i := 0; i < test.stderrLines; i++ { msg.stream = stderrType if err := w.write(msg); err != nil { assert.EqualError(t, err, errMaximumWrite.Error()) } } assert.Equal(t, test.expectStdout, stdoutBuf.String()) assert.Equal(t, test.expectStderr, stderrBuf.String()) } } "} {"_id":"doc-en-kubernetes-cdf965b3e8cb847e6dbf0e8ccc2a11c3314d29ace67a110009c797cb239f4afc","title":"","text":" load(\"@io_bazel_rules_go//go:def.bzl\", \"go_library\", \"go_test\") go_library( name = \"go_default_library\", srcs = [\"logs.go\"], importpath = \"k8s.io/kubernetes/pkg/kubelet/kuberuntime/logs\", visibility = [\"//visibility:public\"], deps = [ \"//pkg/kubelet/apis/cri:go_default_library\", \"//pkg/kubelet/apis/cri/v1alpha1/runtime:go_default_library\", \"//pkg/util/tail:go_default_library\", \"//vendor/github.com/docker/docker/pkg/jsonlog:go_default_library\", 
\"//vendor/github.com/fsnotify/fsnotify:go_default_library\", \"//vendor/github.com/golang/glog:go_default_library\", \"//vendor/k8s.io/api/core/v1:go_default_library\", ], ) go_test( name = \"go_default_test\", srcs = [\"logs_test.go\"], importpath = \"k8s.io/kubernetes/pkg/kubelet/kuberuntime/logs\", library = \":go_default_library\", deps = [ \"//vendor/github.com/stretchr/testify/assert:go_default_library\", \"//vendor/k8s.io/api/core/v1:go_default_library\", \"//vendor/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", ], ) filegroup( name = \"package-srcs\", srcs = glob([\"**\"]), tags = [\"automanaged\"], visibility = [\"//visibility:private\"], ) filegroup( name = \"all-srcs\", srcs = [\":package-srcs\"], tags = [\"automanaged\"], visibility = [\"//visibility:public\"], ) "} {"_id":"doc-en-kubernetes-dc26b117d9268f2359423549c1164b3dc76060c6bfe562a34ecde3a1b2a147a5","title":"","text":" /* Copyright 2017 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package logs import ( \"bufio\" \"bytes\" \"encoding/json\" \"errors\" \"fmt\" \"io\" \"math\" \"os\" \"time\" \"github.com/docker/docker/pkg/jsonlog\" \"github.com/fsnotify/fsnotify\" \"github.com/golang/glog\" \"k8s.io/api/core/v1\" internalapi \"k8s.io/kubernetes/pkg/kubelet/apis/cri\" runtimeapi \"k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime\" \"k8s.io/kubernetes/pkg/util/tail\" ) // Notice that the current CRI logs implementation doesn't handle // log rotation. 
// * It will not retrieve logs in rotated log file. // * If log rotation happens when following the log: // * If the rotation is using create mode, we'll still follow the old file. // * If the rotation is using copytruncate, we'll be reading at the original position and get nothing. // TODO(random-liu): Support log rotation. // streamType is the type of the stream. type streamType string const ( stderrType streamType = \"stderr\" stdoutType streamType = \"stdout\" // timeFormat is the time format used in the log. timeFormat = time.RFC3339Nano // blockSize is the block size used in tail. blockSize = 1024 // stateCheckPeriod is the period to check container state while following // the container log. Kubelet should not keep following the log when the // container is not running. stateCheckPeriod = 5 * time.Second ) var ( // eol is the end-of-line sign in the log. eol = []byte{'\\n'} // delimiter is the delimiter for timestamp and stream type in log line. delimiter = []byte{' '} ) // logMessage is the CRI internal log type. type logMessage struct { timestamp time.Time stream streamType log []byte } // reset resets the log to nil. func (l *logMessage) reset() { l.timestamp = time.Time{} l.stream = \"\" l.log = nil } // LogOptions is the CRI internal type of all log options. type LogOptions struct { tail int64 bytes int64 since time.Time follow bool timestamp bool } // NewLogOptions converts the v1.PodLogOptions to CRI internal LogOptions. func NewLogOptions(apiOpts *v1.PodLogOptions, now time.Time) *LogOptions { opts := &LogOptions{ tail: -1, // -1 by default which means read all logs. bytes: -1, // -1 by default which means read all logs. 
follow: apiOpts.Follow, timestamp: apiOpts.Timestamps, } if apiOpts.TailLines != nil { opts.tail = *apiOpts.TailLines } if apiOpts.LimitBytes != nil { opts.bytes = *apiOpts.LimitBytes } if apiOpts.SinceSeconds != nil { opts.since = now.Add(-time.Duration(*apiOpts.SinceSeconds) * time.Second) } if apiOpts.SinceTime != nil && apiOpts.SinceTime.After(opts.since) { opts.since = apiOpts.SinceTime.Time } return opts } // parseFunc is a function parsing one log line to the internal log type. // Notice that the caller must make sure logMessage is not nil. type parseFunc func([]byte, *logMessage) error var parseFuncs = []parseFunc{ parseCRILog, // CRI log format parse function parseDockerJSONLog, // Docker JSON log format parse function } // parseCRILog parses logs in CRI log format. CRI Log format example: // 2016-10-06T00:17:09.669794202Z stdout log content 1 // 2016-10-06T00:17:09.669794203Z stderr log content 2 func parseCRILog(log []byte, msg *logMessage) error { var err error // Parse timestamp idx := bytes.Index(log, delimiter) if idx < 0 { return fmt.Errorf(\"timestamp is not found\") } msg.timestamp, err = time.Parse(timeFormat, string(log[:idx])) if err != nil { return fmt.Errorf(\"unexpected timestamp format %q: %v\", timeFormat, err) } // Parse stream type log = log[idx+1:] idx = bytes.Index(log, delimiter) if idx < 0 { return fmt.Errorf(\"stream type is not found\") } msg.stream = streamType(log[:idx]) if msg.stream != stdoutType && msg.stream != stderrType { return fmt.Errorf(\"unexpected stream type %q\", msg.stream) } // Get log content msg.log = log[idx+1:] return nil } // parseDockerJSONLog parses logs in Docker JSON log format. 
Docker JSON log format // example: // {\"log\":\"content 1\",\"stream\":\"stdout\",\"time\":\"2016-10-20T18:39:20.57606443Z\"} // {\"log\":\"content 2\",\"stream\":\"stderr\",\"time\":\"2016-10-20T18:39:20.57606444Z\"} func parseDockerJSONLog(log []byte, msg *logMessage) error { var l = &jsonlog.JSONLog{} l.Reset() // TODO: JSON decoding is fairly expensive, we should evaluate this. if err := json.Unmarshal(log, l); err != nil { return fmt.Errorf(\"failed with %v to unmarshal log %q\", err, l) } msg.timestamp = l.Created msg.stream = streamType(l.Stream) msg.log = []byte(l.Log) return nil } // getParseFunc returns proper parse function based on the sample log line passed in. func getParseFunc(log []byte) (parseFunc, error) { for _, p := range parseFuncs { if err := p(log, &logMessage{}); err == nil { return p, nil } } return nil, fmt.Errorf(\"unsupported log format: %q\", log) } // logWriter controls the writing into the stream based on the log options. type logWriter struct { stdout io.Writer stderr io.Writer opts *LogOptions remain int64 } // errMaximumWrite is returned when all bytes have been written. var errMaximumWrite = errors.New(\"maximum write\") // errShortWrite is returned when the message is not fully written. var errShortWrite = errors.New(\"short write\") func newLogWriter(stdout io.Writer, stderr io.Writer, opts *LogOptions) *logWriter { w := &logWriter{ stdout: stdout, stderr: stderr, opts: opts, remain: math.MaxInt64, // initialize it as infinity } if opts.bytes >= 0 { w.remain = opts.bytes } return w } // writeLogs writes logs into stdout, stderr. func (w *logWriter) write(msg *logMessage) error { if msg.timestamp.Before(w.opts.since) { // Skip the line because it's older than since return nil } line := msg.log if w.opts.timestamp { prefix := append([]byte(msg.timestamp.Format(timeFormat)), delimiter[0]) line = append(prefix, line...) } // If the line is longer than the remaining bytes, cut it. 
if int64(len(line)) > w.remain { line = line[:w.remain] } // Get the proper stream to write to. var stream io.Writer switch msg.stream { case stdoutType: stream = w.stdout case stderrType: stream = w.stderr default: return fmt.Errorf(\"unexpected stream type %q\", msg.stream) } n, err := stream.Write(line) w.remain -= int64(n) if err != nil { return err } // If the line has not been fully written, return errShortWrite if n < len(line) { return errShortWrite } // If there are no more bytes left, return errMaximumWrite if w.remain <= 0 { return errMaximumWrite } return nil } // ReadLogs reads the container log and redirects it into stdout and stderr. // Note that containerID is only needed when following the log; otherwise // just pass in an empty string \"\". func ReadLogs(path, containerID string, opts *LogOptions, runtimeService internalapi.RuntimeService, stdout, stderr io.Writer) error { f, err := os.Open(path) if err != nil { return fmt.Errorf(\"failed to open log file %q: %v\", path, err) } defer f.Close() // Search start point based on tail line. start, err := tail.FindTailLineStartIndex(f, opts.tail) if err != nil { return fmt.Errorf(\"failed to tail %d lines of log file %q: %v\", opts.tail, path, err) } if _, err := f.Seek(start, os.SEEK_SET); err != nil { return fmt.Errorf(\"failed to seek %d in log file %q: %v\", start, path, err) } // Start parsing the logs. r := bufio.NewReader(f) // Do not create watcher here because it is not needed if `Follow` is false. var watcher *fsnotify.Watcher var parse parseFunc writer := newLogWriter(stdout, stderr, opts) msg := &logMessage{} for { l, err := r.ReadBytes(eol[0]) if err != nil { if err != io.EOF { // This is a real error return fmt.Errorf(\"failed to read log file %q: %v\", path, err) } if !opts.follow { // Return directly when we reach the end of the file and are not following. 
if len(l) > 0 { glog.Warningf(\"Incomplete line in log file %q: %q\", path, l) } glog.V(2).Infof(\"Finish parsing log file %q\", path) return nil } // Reset seek so that if this is an incomplete line, // it will be read again. if _, err := f.Seek(-int64(len(l)), os.SEEK_CUR); err != nil { return fmt.Errorf(\"failed to reset seek in log file %q: %v\", path, err) } if watcher == nil { // Initialize the watcher if it has not been initialized yet. if watcher, err = fsnotify.NewWatcher(); err != nil { return fmt.Errorf(\"failed to create fsnotify watcher: %v\", err) } defer watcher.Close() if err := watcher.Add(f.Name()); err != nil { return fmt.Errorf(\"failed to watch file %q: %v\", f.Name(), err) } } // Wait until the next log change. if found, err := waitLogs(containerID, watcher, runtimeService); !found { return err } continue } if parse == nil { // Initialize the log parsing function. parse, err = getParseFunc(l) if err != nil { return fmt.Errorf(\"failed to get parse function: %v\", err) } } // Parse the log line. msg.reset() if err := parse(l, msg); err != nil { glog.Errorf(\"Failed with err %v when parsing log for log file %q: %q\", err, path, l) continue } // Write the log line into the stream. if err := writer.write(msg); err != nil { if err == errMaximumWrite { glog.V(2).Infof(\"Finish parsing log file %q, hit bytes limit %d(bytes)\", path, opts.bytes) return nil } glog.Errorf(\"Failed with err %v when writing log for log file %q: %+v\", err, path, msg) return err } } } // waitLogs waits for the next log write. It returns a boolean and an error. The boolean // indicates whether a new log line was found; the error is any error that occurred while waiting for new logs. 
func waitLogs(id string, w *fsnotify.Watcher, runtimeService internalapi.RuntimeService) (bool, error) { errRetry := 5 for { select { case e := <-w.Events: switch e.Op { case fsnotify.Write: return true, nil default: glog.Errorf(\"Unexpected fsnotify event: %v, retrying...\", e) } case err := <-w.Errors: glog.Errorf(\"Fsnotify watch error: %v, %d error retries remaining\", err, errRetry) if errRetry == 0 { return false, err } errRetry-- case <-time.After(stateCheckPeriod): s, err := runtimeService.ContainerStatus(id) if err != nil { return false, err } // Only keep following container log when it is running. if s.State != runtimeapi.ContainerState_CONTAINER_RUNNING { glog.Errorf(\"Container %q is not running (state=%q)\", id, s.State) // Do not return error because it's normal that the container stops // during waiting. return false, nil } } } } "} {"_id":"doc-en-kubernetes-496b691a7c58ceb0420d0221c4ad88e1ffe59683ff00954872a4c206662f02f9","title":"","text":" /* Copyright 2017 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package logs import ( \"bytes\" \"testing\" \"time\" \"github.com/stretchr/testify/assert\" \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) func TestLogOptions(t *testing.T) { var ( line = int64(8) bytes = int64(64) timestamp = metav1.Now() sinceseconds = int64(10) ) for c, test := range []struct { apiOpts *v1.PodLogOptions expect *LogOptions }{ { // empty options apiOpts: &v1.PodLogOptions{}, expect: &LogOptions{tail: -1, bytes: -1}, }, { // test tail lines apiOpts: &v1.PodLogOptions{TailLines: &line}, expect: &LogOptions{tail: line, bytes: -1}, }, { // test limit bytes apiOpts: &v1.PodLogOptions{LimitBytes: &bytes}, expect: &LogOptions{tail: -1, bytes: bytes}, }, { // test since timestamp apiOpts: &v1.PodLogOptions{SinceTime: &timestamp}, expect: &LogOptions{tail: -1, bytes: -1, since: timestamp.Time}, }, { // test since seconds apiOpts: &v1.PodLogOptions{SinceSeconds: &sinceseconds}, expect: &LogOptions{tail: -1, bytes: -1, since: timestamp.Add(-10 * time.Second)}, }, } { t.Logf(\"TestCase #%d: %+v\", c, test) opts := NewLogOptions(test.apiOpts, timestamp.Time) assert.Equal(t, test.expect, opts) } } func TestParseLog(t *testing.T) { timestamp, err := time.Parse(timeFormat, \"2016-10-20T18:39:20.57606443Z\") assert.NoError(t, err) msg := &logMessage{} for c, test := range []struct { line string msg *logMessage err bool }{ { // Docker log format stdout line: `{\"log\":\"docker stdout test log\",\"stream\":\"stdout\",\"time\":\"2016-10-20T18:39:20.57606443Z\"}` + \"\\n\", msg: &logMessage{ timestamp: timestamp, stream: stdoutType, log: []byte(\"docker stdout test log\"), }, }, { // Docker log format stderr line: `{\"log\":\"docker stderr test log\",\"stream\":\"stderr\",\"time\":\"2016-10-20T18:39:20.57606443Z\"}` + \"\\n\", msg: &logMessage{ timestamp: timestamp, stream: stderrType, log: []byte(\"docker stderr test log\"), }, }, { // CRI log format stdout line: \"2016-10-20T18:39:20.57606443Z stdout cri stdout test log\\n\", msg: &logMessage{ timestamp: timestamp, stream: stdoutType, log: []byte(\"cri stdout test log\\n\"), }, }, { // CRI log format stderr line: \"2016-10-20T18:39:20.57606443Z stderr cri stderr test log\\n\", msg: &logMessage{ timestamp: timestamp, stream: stderrType, log: []byte(\"cri stderr test log\\n\"), }, }, { // Unsupported Log format line: \"unsupported log format test log\\n\", msg: &logMessage{}, err: true, }, } { t.Logf(\"TestCase #%d: %+v\", c, test) parse, err := getParseFunc([]byte(test.line)) if test.err { assert.Error(t, err) continue } assert.NoError(t, err) err = parse([]byte(test.line), msg) assert.NoError(t, err) assert.Equal(t, test.msg, msg) } } func TestWriteLogs(t *testing.T) { timestamp := time.Unix(1234, 4321) log := \"abcdefg\\n\" for c, test := range []struct { stream streamType since time.Time timestamp bool expectStdout string expectStderr string }{ { // stderr log stream: stderrType, expectStderr: log, }, { // stdout log stream: stdoutType, expectStdout: log, }, { // since is after timestamp stream: stdoutType, since: timestamp.Add(1 * time.Second), }, { // timestamp enabled stream: stderrType, timestamp: true, expectStderr: timestamp.Format(timeFormat) + \" \" + log, }, } { t.Logf(\"TestCase #%d: %+v\", c, test) msg := &logMessage{ timestamp: timestamp, stream: test.stream, log: []byte(log), } stdoutBuf := bytes.NewBuffer(nil) stderrBuf := bytes.NewBuffer(nil) w := newLogWriter(stdoutBuf, stderrBuf, &LogOptions{since: test.since, timestamp: test.timestamp, bytes: -1}) err := w.write(msg) assert.NoError(t, err) assert.Equal(t, test.expectStdout, stdoutBuf.String()) assert.Equal(t, test.expectStderr, stderrBuf.String()) } } func TestWriteLogsWithBytesLimit(t *testing.T) { timestamp := time.Unix(1234, 4321) timestampStr := timestamp.Format(timeFormat) log := \"abcdefg\\n\" for c, test := range []struct { stdoutLines int stderrLines int bytes int timestamp bool expectStdout string expectStderr string }{ { // limit bytes less than one line stdoutLines: 3, bytes: 3, expectStdout: \"abc\", }, { // limit bytes across lines stdoutLines: 3, bytes: len(log) + 3, expectStdout: \"abcdefg\\nabc\", }, { // limit bytes more than all lines stdoutLines: 3, bytes: 3 * len(log), expectStdout: \"abcdefg\\nabcdefg\\nabcdefg\\n\", }, { // limit bytes for stderr stderrLines: 3, bytes: len(log) + 3, expectStderr: \"abcdefg\\nabc\", }, { // limit bytes for both stdout and stderr, stdout first. stdoutLines: 1, stderrLines: 2, bytes: len(log) + 3, expectStdout: \"abcdefg\\n\", expectStderr: \"abc\", }, { // limit bytes with timestamp stdoutLines: 3, timestamp: true, bytes: len(timestampStr) + 1 + len(log) + 2, expectStdout: timestampStr + \" \" + log + timestampStr[:2], }, } { t.Logf(\"TestCase #%d: %+v\", c, test) msg := &logMessage{ timestamp: timestamp, log: []byte(log), } stdoutBuf := bytes.NewBuffer(nil) stderrBuf := bytes.NewBuffer(nil) w := newLogWriter(stdoutBuf, stderrBuf, &LogOptions{timestamp: test.timestamp, bytes: int64(test.bytes)}) for i := 0; i < test.stdoutLines; i++ { msg.stream = stdoutType if err := w.write(msg); err != nil { assert.EqualError(t, err, errMaximumWrite.Error()) } } for i := 0; i < test.stderrLines; i++ { msg.stream = stderrType if err := w.write(msg); err != nil { assert.EqualError(t, err, errMaximumWrite.Error()) } } assert.Equal(t, test.expectStdout, stdoutBuf.String()) assert.Equal(t, test.expectStderr, stderrBuf.String()) } } "} {"_id":"doc-en-kubernetes-0b2e617d9dc2d0e8e9cae5a24f2b7cdf9af079961d2003f4ad79e5df0fd255c9","title":"","text":"hash := HashControllerRevision(revision, collisionCount) // Update the revision's name and labels clone.Name = ControllerRevisionName(parent.GetName(), hash) created, err := rh.client.AppsV1().ControllerRevisions(parent.GetNamespace()).Create(clone) ns := parent.GetNamespace() created, err := rh.client.AppsV1().ControllerRevisions(ns).Create(clone) if errors.IsAlreadyExists(err) { exists, err := rh.client.AppsV1().ControllerRevisions(ns).Get(clone.Name, metav1.GetOptions{}) if err != nil { 
return nil, err } if bytes.Equal(exists.Data.Raw, clone.Data.Raw) { return exists, nil } *collisionCount++ continue }"} {"_id":"doc-en-kubernetes-b7e23e4f16f7fd7b2f7aab14b0379d04a92ba69558d8ad6d54bcd9760d85452b","title":"","text":"history := NewHistory(client, informer.Lister()) var collisionCount int32 for i := range test.existing { _, err := history.CreateControllerRevision(test.existing[i].parent, test.existing[i].revision, &collisionCount) for _, item := range test.existing { _, err := client.AppsV1().ControllerRevisions(item.parent.GetNamespace()).Create(item.revision) if err != nil { t.Fatal(err) }"} {"_id":"doc-en-kubernetes-9e507651e005f6fb71cf5052bf2002e4cd637d7ce5d0275f6ba0d2b7bab9aba4","title":"","text":"t.Errorf(\"%s: on name collision wanted new name %s got %s\", test.name, expectedName, created.Name) } // Second name collision should have incremented collisionCount to 2 // Second name collision will be caused by an identical revision, so no need to do anything _, err = history.CreateControllerRevision(test.parent, test.revision, &collisionCount) if err != nil { t.Errorf(\"%s: %s\", test.name, err) } if collisionCount != 2 { if collisionCount != 1 { t.Errorf(\"%s: on second name collision wanted collisionCount 1 got %d\", test.name, collisionCount) } }"} {"_id":"doc-en-kubernetes-0e1540588338c060ccdc433b9906355784ccd323753a20fe58f9db14e9292d88","title":"","text":"t.Fatal(err) } ss1Rev1.Namespace = ss1.Namespace ss1Rev2, err := NewControllerRevision(ss1, parentKind, ss1.Spec.Template.Labels, rawTemplate(&ss1.Spec.Template), 2, ss1.Status.CollisionCount) // Create a new revision with the same name and hash label as an existing revision, but with // a different template. This could happen as a result of a hash collision, but in this test // this situation is created by setting name and hash label to values known to be in use by // an existing revision. 
modTemplate := ss1.Spec.Template.DeepCopy() modTemplate.Labels[\"foo\"] = \"not_bar\" ss1Rev2, err := NewControllerRevision(ss1, parentKind, ss1.Spec.Template.Labels, rawTemplate(modTemplate), 2, ss1.Status.CollisionCount) ss1Rev2.Name = ss1Rev1.Name ss1Rev2.Labels[ControllerRevisionHashLabel] = ss1Rev1.Labels[ControllerRevisionHashLabel] if err != nil { t.Fatal(err) }"} {"_id":"doc-en-kubernetes-91a097ec01c8c1a0396ac5d5b7506136ed36af892c39024039b19e15f6e12947","title":"","text":"}{ { parent: ss1, revision: ss1Rev1, revision: ss1Rev2, }, }, rename: true,"} {"_id":"doc-en-kubernetes-29ec6bab57948ea7cea1c13d9cbad02f44d730db039e14132773c87c4eddb092","title":"","text":"end end pods_json = `kubectl get pods -o json` namespace = ENV['KUBECTL_PLUGINS_CURRENT_NAMESPACE'] || 'default' pods_json = `kubectl --namespace #{namespace} get pods -o json` pods_parsed = JSON.parse(pods_json) puts \"The Magnificent Aging Plugin.\""} {"_id":"doc-en-kubernetes-ff260104d88412aae1bc707ebdbf760d001fac4612544a4a01a01685043c7f7d","title":"","text":"shortDesc: \"Aging shows pods by age\" longDesc: > Aging shows pods from the current namespace by age. Once we have plugin support for global flags through env vars (planned for V1) we'll be able to switch between namespaces using the --namespace flag. 
command: ./aging.rb"} {"_id":"doc-en-kubernetes-daf9073b07cdb93fca17f55666b5048e42b6ff33f403daddb9217b7155bb2e01","title":"","text":"\"sync\" \"time\" \"github.com/aws/aws-sdk-go/aws\" \"github.com/aws/aws-sdk-go/aws/awserr\" \"github.com/aws/aws-sdk-go/aws/request\" \"github.com/golang/glog\""} {"_id":"doc-en-kubernetes-d4a5c1b4fb84ccf12c880dda856944d18db2dca26c3aea998789412ee3d4ff90","title":"","text":"if delay > 0 { glog.Warningf(\"Inserting delay before AWS request (%s) to avoid RequestLimitExceeded: %s\", describeRequest(r), delay.String()) r.Config.SleepDelay(delay) if sleepFn := r.Config.SleepDelay; sleepFn != nil { // Support SleepDelay for backwards compatibility sleepFn(delay) } else if err := aws.SleepWithContext(r.Context(), delay); err != nil { r.Error = awserr.New(request.CanceledErrorCode, \"request context canceled\", err) r.Retryable = aws.Bool(false) return } // Avoid clock skew problems r.Time = now"} {"_id":"doc-en-kubernetes-ff481b2b814e08a2bc8fafd76c578a67f2190f9a3e29eb6ff11384ac46802789","title":"","text":"proxyTransport http.RoundTripper } // ServiceNodePort includes protocol and port number of a service NodePort. type ServiceNodePort struct { // The IP protocol for this port. Supports \"TCP\" and \"UDP\". Protocol api.Protocol // The port on each node on which this service is exposed. // Default is to auto-allocate a port if the ServiceType of this Service requires one. NodePort int32 } // NewStorage returns a new REST. 
func NewStorage(registry Registry, endpoints endpoint.Registry, serviceIPs ipallocator.Interface, serviceNodePorts portallocator.Interface, proxyTransport http.RoundTripper) *ServiceRest {"} {"_id":"doc-en-kubernetes-252c9563e9778d7c54d3889188ae004c5a0ba37b398c1c860ef823c86448a852","title":"","text":"// This is O(N), but we expect haystack to be small; // so small that we expect a linear search to be faster func contains(haystack []int, needle int) bool { func containsNumber(haystack []int, needle int) bool { for _, v := range haystack { if v == needle { return true"} {"_id":"doc-en-kubernetes-ce3a0c00c5baa977594479108afda15204d629d85175235c8e7cdf85f6f9eb21","title":"","text":"return false } // This is O(N), but we expect serviceNodePorts to be small; // so small that we expect a linear search to be faster func containsNodePort(serviceNodePorts []ServiceNodePort, serviceNodePort ServiceNodePort) bool { for _, snp := range serviceNodePorts { if snp == serviceNodePort { return true } } return false } func CollectServiceNodePorts(service *api.Service) []int { servicePorts := []int{} for i := range service.Spec.Ports {"} {"_id":"doc-en-kubernetes-68b0636f7a84ff65de0aee522f3b0711f856d9700bbfa3586c4cf65afd6c0681","title":"","text":"} func (rs *REST) updateNodePorts(oldService, newService *api.Service, nodePortOp *portallocator.PortAllocationOperation) error { oldNodePorts := CollectServiceNodePorts(oldService) oldNodePortsNumbers := CollectServiceNodePorts(oldService) newNodePorts := []ServiceNodePort{} portAllocated := map[int]bool{} newNodePorts := []int{} for i := range newService.Spec.Ports { servicePort := &newService.Spec.Ports[i] nodePort := int(servicePort.NodePort) if nodePort != 0 { if !contains(oldNodePorts, nodePort) { err := nodePortOp.Allocate(nodePort) nodePort := ServiceNodePort{Protocol: servicePort.Protocol, NodePort: servicePort.NodePort} if nodePort.NodePort != 0 { if !containsNumber(oldNodePortsNumbers, int(nodePort.NodePort)) && 
!portAllocated[int(nodePort.NodePort)] { err := nodePortOp.Allocate(int(nodePort.NodePort)) if err != nil { el := field.ErrorList{field.Invalid(field.NewPath(\"spec\", \"ports\").Index(i).Child(\"nodePort\"), nodePort, err.Error())} el := field.ErrorList{field.Invalid(field.NewPath(\"spec\", \"ports\").Index(i).Child(\"nodePort\"), nodePort.NodePort, err.Error())} return errors.NewInvalid(api.Kind(\"Service\"), newService.Name, el) } portAllocated[int(nodePort.NodePort)] = true } } else { nodePort, err := nodePortOp.AllocateNext() nodePortNumber, err := nodePortOp.AllocateNext() if err != nil { // TODO: what error should be returned here? It's not a // field-level validation failure (the field is valid), and it's // not really an internal error. return errors.NewInternalError(fmt.Errorf(\"failed to allocate a nodePort: %v\", err)) } servicePort.NodePort = int32(nodePort) servicePort.NodePort = int32(nodePortNumber) nodePort.NodePort = servicePort.NodePort } // Detect duplicate node ports; this should have been caught by validation, so we panic if contains(newNodePorts, nodePort) { panic(\"duplicate node port\") if containsNodePort(newNodePorts, nodePort) { return fmt.Errorf(\"duplicate nodePort: %v\", nodePort) } newNodePorts = append(newNodePorts, nodePort) } newNodePortsNumbers := CollectServiceNodePorts(newService) // The comparison loops are O(N^2), but we don't expect N to be huge // (there's a hard-limit at 2^16, because they're ports; and even 4 ports would be a lot) for _, oldNodePort := range oldNodePorts { if contains(newNodePorts, oldNodePort) { for _, oldNodePortNumber := range oldNodePortsNumbers { if containsNumber(newNodePortsNumbers, oldNodePortNumber) { continue } nodePortOp.ReleaseDeferred(oldNodePort) nodePortOp.ReleaseDeferred(int(oldNodePortNumber)) } return nil"} {"_id":"doc-en-kubernetes-b080d6c8f7b40cd77c4fd9ab1c835834f143899a755485c4b51a81d2e86e75b1","title":"","text":"}, expectSpecifiedNodePorts: []int{}, }, { name: \"Add new ServicePort 
with a different protocol without changing port numbers\", oldService: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, SessionAffinity: api.ServiceAffinityNone, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{ { Name: \"port-tcp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolTCP, NodePort: 30053, }, }, }, }, newService: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, SessionAffinity: api.ServiceAffinityNone, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{ { Name: \"port-tcp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolTCP, NodePort: 30053, }, { Name: \"port-udp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolUDP, NodePort: 30053, }, }, }, }, expectSpecifiedNodePorts: []int{30053, 30053}, }, { name: \"Change service type from ClusterIP to NodePort with same NodePort number but different protocols\", oldService: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, SessionAffinity: api.ServiceAffinityNone, Type: api.ServiceTypeClusterIP, Ports: []api.ServicePort{{ Port: 53, Protocol: api.ProtocolTCP, TargetPort: intstr.FromInt(6502), }}, }, }, newService: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, SessionAffinity: api.ServiceAffinityNone, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{ { Name: \"port-tcp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolTCP, NodePort: 30053, }, { Name: \"port-udp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolUDP, NodePort: 30053, }, }, }, }, expectSpecifiedNodePorts: []int{30053, 30053}, }, } for _, test := range testCases {"} 
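The test cases in the record above exercise the rule that a NodePort is identified by the pair (protocol, port), so a TCP and a UDP ServicePort may share node port 30053 without colliding. A minimal, self-contained sketch of that keying, using a simplified `ServiceNodePort` struct (the real type uses `api.Protocol` and lives in the service REST storage):

```go
package main

import "fmt"

// ServiceNodePort keys a node port by (protocol, port), so the same port
// number under different protocols is not a duplicate.
type ServiceNodePort struct {
	Protocol string
	NodePort int32
}

// containsNodePort is a linear scan; the port list is expected to be tiny.
func containsNodePort(ports []ServiceNodePort, p ServiceNodePort) bool {
	for _, q := range ports {
		if q == p {
			return true
		}
	}
	return false
}

func main() {
	allocated := []ServiceNodePort{{Protocol: "TCP", NodePort: 30053}}

	// Same number, different protocol: not a duplicate.
	fmt.Println(containsNodePort(allocated, ServiceNodePort{Protocol: "UDP", NodePort: 30053})) // false
	// Same number, same protocol: a genuine duplicate.
	fmt.Println(containsNodePort(allocated, ServiceNodePort{Protocol: "TCP", NodePort: 30053})) // true
}
```

This mirrors why the diff earlier replaces the integer-only `contains` check with `containsNodePort` when deciding whether a port needs a fresh allocation.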
{"_id":"doc-en-kubernetes-68258f10780a7f043c59edbd5a28e35ae37bf7063cee987a6ebd3f6f25e5914e","title":"","text":"} }) It(\"should be able to update NodePorts with two same port numbers but different protocols\", func() { serviceName := \"nodeport-update-service\" ns := f.Namespace.Name jig := framework.NewServiceTestJig(cs, serviceName) By(\"creating a TCP service \" + serviceName + \" with type=ClusterIP in namespace \" + ns) tcpService := jig.CreateTCPServiceOrFail(ns, nil) defer func() { framework.Logf(\"Cleaning up the updating NodePorts test service\") err := cs.Core().Services(ns).Delete(serviceName, nil) Expect(err).NotTo(HaveOccurred()) }() jig.SanityCheckService(tcpService, v1.ServiceTypeClusterIP) svcPort := int(tcpService.Spec.Ports[0].Port) framework.Logf(\"service port TCP: %d\", svcPort) // Change the services to NodePort and add a UDP port. By(\"changing the TCP service to type=NodePort and add a UDP port\") newService := jig.UpdateServiceOrFail(ns, tcpService.Name, func(s *v1.Service) { s.Spec.Type = v1.ServiceTypeNodePort s.Spec.Ports = []v1.ServicePort{ { Name: \"tcp-port\", Port: 80, Protocol: v1.ProtocolTCP, }, { Name: \"udp-port\", Port: 80, Protocol: v1.ProtocolUDP, }, } }) jig.SanityCheckService(newService, v1.ServiceTypeNodePort) if len(newService.Spec.Ports) != 2 { framework.Failf(\"new service should have two Ports\") } for _, port := range newService.Spec.Ports { if port.NodePort == 0 { framework.Failf(\"new service failed to allocate NodePort for Port %s\", port.Name) } framework.Logf(\"new service allocates NodePort %d for Port %s\", port.NodePort, port.Name) } }) It(\"should be able to change the type from ExternalName to ClusterIP\", func() { serviceName := \"externalname-service\" ns := f.Namespace.Name"} {"_id":"doc-en-kubernetes-0b36a9029f3b31cf1efb96380ec2757059ab5767488a7a2f6abd00e9303137a1","title":"","text":" # How to become a contributor and submit your own code ## Contributor License Agreements We'd love to accept your patches! 
Before we can take them, we have to jump a couple of legal hurdles. Please fill out either the individual or corporate Contributor License Agreement (CLA). * If you are an individual writing original source code and you're sure you own the intellectual property, then you'll need to sign an [individual CLA](http://code.google.com/legal/individual-cla-v1.0.html). * If you work for a company that wants to allow you to contribute your work, then you'll need to sign a [corporate CLA](http://code.google.com/legal/corporate-cla-v1.0.html). Follow either of the two links above to access the appropriate CLA and instructions for how to sign and return it. Once we receive it, we'll be able to accept your pull requests. ## Contributing A Patch 1. Submit an issue describing your proposed change to the repo in question. 1. The repo owner will respond to your issue promptly. 1. If your proposed change is accepted, and you haven't already done so, sign a Contributor License Agreement (see details above). 1. Fork the desired repo, develop and test your code changes. 1. Submit a pull request. "} {"_id":"doc-en-kubernetes-8cc9689b5dc657e0097753fb920670e6efd9c37715ee358d92867c1404cfe80f","title":"","text":"return o.writeConfigFile() } proxyServer, err := NewProxyServer(o.config, o.CleanupAndExit, o.CleanupIPVS, o.scheme, o.master) proxyServer, err := NewProxyServer(o) if err != nil { return err }"} {"_id":"doc-en-kubernetes-56b980af3277d0c9d927e736c28859c9e3c0b5ac7f7164299deccf525dd35972","title":"","text":") // NewProxyServer returns a new ProxyServer. 
func NewProxyServer(config *proxyconfigapi.KubeProxyConfiguration, cleanupAndExit bool, cleanupIPVS bool, scheme *runtime.Scheme, master string) (*ProxyServer, error) { func NewProxyServer(o *Options) (*ProxyServer, error) { return newProxyServer(o.config, o.CleanupAndExit, o.CleanupIPVS, o.scheme, o.master) } func newProxyServer( config *proxyconfigapi.KubeProxyConfiguration, cleanupAndExit bool, cleanupIPVS bool, scheme *runtime.Scheme, master string) (*ProxyServer, error) { if config == nil { return nil, errors.New(\"config is required\") }"} {"_id":"doc-en-kubernetes-fd8bf1916df498e1b4b1cd970c1a65c816709d6c7a2417efd99c94f3262aab9d","title":"","text":"} options.CleanupAndExit = true proxyserver, err := NewProxyServer(options.config, options.CleanupAndExit, options.CleanupIPVS, options.scheme, options.master) proxyserver, err := NewProxyServer(options) assert.Nil(t, err, \"unexpected error in NewProxyServer, addr: %s\", addr) assert.NotNil(t, proxyserver, \"nil proxy server obj, addr: %s\", addr)"} {"_id":"doc-en-kubernetes-3cbe17ac3c5a011f23555eb207e21a45d87f97885911bfdaebb3b32c0858347c","title":"","text":") // NewProxyServer returns a new ProxyServer. 
func NewProxyServer(config *proxyconfigapi.KubeProxyConfiguration, cleanupAndExit bool, scheme *runtime.Scheme, master string) (*ProxyServer, error) { func NewProxyServer(o *Options) (*ProxyServer, error) { return newProxyServer(o.config, o.CleanupAndExit, o.scheme, o.master) } func newProxyServer(config *proxyconfigapi.KubeProxyConfiguration, cleanupAndExit bool, scheme *runtime.Scheme, master string) (*ProxyServer, error) { if config == nil { return nil, errors.New(\"config is required\") }"} {"_id":"doc-en-kubernetes-744f0fded90d2ca194f6de479e3b39eb5bb79a95bc5038a665e626bd23cdad4d","title":"","text":"\"command\": [ \"/bin/sh\", \"-c\", \"if [ -e /usr/local/bin/migrate-if-needed.sh ]; then /usr/local/bin/migrate-if-needed.sh 1>>/var/log/etcd{{ suffix }}.log 2>&1; fi; /usr/local/bin/etcd --name etcd-{{ hostname }} --listen-peer-urls {{ etcd_protocol }}://{{ hostname }}:{{ server_port }} --initial-advertise-peer-urls {{ etcd_protocol }}://{{ hostname }}:{{ server_port }} --advertise-client-urls http://127.0.0.1:{{ port }} --listen-client-urls http://127.0.0.1:{{ port }} {{ quota_bytes }} --data-dir /var/etcd/data{{ suffix }} --initial-cluster-state {{ cluster_state }} --initial-cluster {{ etcd_cluster }} {{ etcd_creds }} 1>>/var/log/etcd{{ suffix }}.log 2>&1\" \"if [ -e /usr/local/bin/migrate-if-needed.sh ]; then /usr/local/bin/migrate-if-needed.sh 1>>/var/log/etcd{{ suffix }}.log 2>&1; fi; exec /usr/local/bin/etcd --name etcd-{{ hostname }} --listen-peer-urls {{ etcd_protocol }}://{{ hostname }}:{{ server_port }} --initial-advertise-peer-urls {{ etcd_protocol }}://{{ hostname }}:{{ server_port }} --advertise-client-urls http://127.0.0.1:{{ port }} --listen-client-urls http://127.0.0.1:{{ port }} {{ quota_bytes }} --data-dir /var/etcd/data{{ suffix }} --initial-cluster-state {{ cluster_state }} --initial-cluster {{ etcd_cluster }} {{ etcd_creds }} 1>>/var/log/etcd{{ suffix }}.log 2>&1\" ], \"env\": [ { \"name\": \"TARGET_STORAGE\","} 
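The two records above show `NewProxyServer` being refactored from a long positional signature to one that takes a single `*Options`. A minimal sketch of that options-struct constructor pattern, with stand-in `Config`/`Options`/`ProxyServer` types (the real ones carry many more fields):

```go
package main

import (
	"errors"
	"fmt"
)

// Config stands in for proxyconfigapi.KubeProxyConfiguration.
type Config struct{ BindAddress string }

// Options bundles what used to be several positional parameters, so the
// public constructor keeps a stable signature as flags come and go.
type Options struct {
	config         *Config
	CleanupAndExit bool
	master         string
}

type ProxyServer struct{ Address string }

// NewProxyServer unpacks Options and delegates, mirroring the refactor.
func NewProxyServer(o *Options) (*ProxyServer, error) {
	return newProxyServer(o.config, o.CleanupAndExit, o.master)
}

// newProxyServer keeps the original positional form as the internal helper.
func newProxyServer(config *Config, cleanupAndExit bool, master string) (*ProxyServer, error) {
	if config == nil {
		return nil, errors.New("config is required")
	}
	return &ProxyServer{Address: config.BindAddress}, nil
}

func main() {
	if _, err := NewProxyServer(&Options{}); err != nil {
		fmt.Println("error:", err) // nil config is rejected
	}
	s, _ := NewProxyServer(&Options{config: &Config{BindAddress: "0.0.0.0"}})
	fmt.Println(s.Address)
}
```

The payoff is visible in the test diff above: call sites shrink to `NewProxyServer(options)` and no longer break when a parameter is added or removed.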
{"_id":"doc-en-kubernetes-cf71883e258a2412e331a02bbe251905d2270f7f56b2e0a0e770b1f85130e04f","title":"","text":"command: - /bin/bash - -c - /opt/kube-addons.sh 1>>/var/log/kube-addon-manager.log 2>&1 - exec /opt/kube-addons.sh 1>>/var/log/kube-addon-manager.log 2>&1 resources: requests: cpu: 5m"} {"_id":"doc-en-kubernetes-fc3af2581bb964259abdf4ed52f7d84dbcadf70129d54f5a1bca300dce9ccb11","title":"","text":"\"command\": [ \"/bin/sh\", \"-c\", \"/usr/local/bin/kube-apiserver {{params}} --allow-privileged={{pillar['allow_privileged']}} 1>>/var/log/kube-apiserver.log 2>&1\" \"exec /usr/local/bin/kube-apiserver {{params}} --allow-privileged={{pillar['allow_privileged']}} 1>>/var/log/kube-apiserver.log 2>&1\" ], {{container_env}} \"livenessProbe\": {"} {"_id":"doc-en-kubernetes-ecfb08c588e1e4f605e8af7a4b7e09e84b90b41490bcea5f75f61b8991f6e453","title":"","text":"\"command\": [ \"/bin/sh\", \"-c\", \"/usr/local/bin/kube-controller-manager {{params}} 1>>/var/log/kube-controller-manager.log 2>&1\" \"exec /usr/local/bin/kube-controller-manager {{params}} 1>>/var/log/kube-controller-manager.log 2>&1\" ], {{container_env}} \"livenessProbe\": {"} {"_id":"doc-en-kubernetes-bf68959bee023a7a2bbdfb8c2337dd302625ba3dd90aae157da14ea9e36ec4f6","title":"","text":"command: - /bin/sh - -c - kube-proxy {{api_servers_with_port}} {{kubeconfig}} {{cluster_cidr}} --resource-container=\"\" --oom-score-adj=-998 {{params}} 1>>/var/log/kube-proxy.log 2>&1 - exec kube-proxy {{api_servers_with_port}} {{kubeconfig}} {{cluster_cidr}} --resource-container=\"\" --oom-score-adj=-998 {{params}} 1>>/var/log/kube-proxy.log 2>&1 {{container_env}} {{kube_cache_mutation_detector_env_name}} {{kube_cache_mutation_detector_env_value}}"} {"_id":"doc-en-kubernetes-3c9546a4f9e2e4801d7dcd42743f861fbca7866fd50fbe5b68ca32c5ae530cce","title":"","text":"\"command\": [ \"/bin/sh\", \"-c\", \"/usr/local/bin/kube-scheduler {{params}} 1>>/var/log/kube-scheduler.log 2>&1\" \"exec /usr/local/bin/kube-scheduler {{params}} 
1>>/var/log/kube-scheduler.log 2>&1\" ], \"livenessProbe\": { \"httpGet\": {"} {"_id":"doc-en-kubernetes-f7926838194f200df8d09d5e46a343b2d4318082d0103429f99b5f2828d5571c","title":"","text":"# TODO: split this out into args when we no longer need to pipe stdout to a file #6428 - sh - -c - '/glbc --verbose=true --apiserver-host=http://localhost:8080 --default-backend-service=kube-system/default-http-backend --sync-period=600s --running-in-cluster=false --use-real-cloud=true --config-file-path=/etc/gce.conf --healthz-port=8086 1>>/var/log/glbc.log 2>&1' - 'exec /glbc --verbose=true --apiserver-host=http://localhost:8080 --default-backend-service=kube-system/default-http-backend --sync-period=600s --running-in-cluster=false --use-real-cloud=true --config-file-path=/etc/gce.conf --healthz-port=8086 1>>/var/log/glbc.log 2>&1' volumes: - hostPath: path: /etc/gce.conf"} {"_id":"doc-en-kubernetes-705fadbaa28812ad1789cbe6e02b7e123a08ced848d30799064babe17ba8aa4b","title":"","text":"# TODO: split this out into args when we no longer need to pipe stdout to a file #6428 - sh - -c - '/rescheduler --running-in-cluster=false 1>>/var/log/rescheduler.log 2>&1' - 'exec /rescheduler --running-in-cluster=false 1>>/var/log/rescheduler.log 2>&1' volumes: - hostPath: path: /var/log/rescheduler.log"} {"_id":"doc-en-kubernetes-4e5230f8b0f2f7aa213d913275ff1d851a84d56b1257fdc49e91745f7a709803","title":"","text":"// Start starts resource collector and connects to the standalone Cadvisor pod // then repeatedly runs collectStats. 
func (r *ResourceCollector) Start() { // Get the cgroup container names for kubelet and docker // Get the cgroup container names for kubelet and runtime kubeletContainer, err := getContainerNameForProcess(kubeletProcessName, \"\") dockerContainer, err := getContainerNameForProcess(dockerProcessName, dockerPidFile) runtimeContainer, err := getContainerNameForProcess(framework.TestContext.ContainerRuntimeProcessName, framework.TestContext.ContainerRuntimePidFile) if err == nil { systemContainers = map[string]string{ stats.SystemContainerKubelet: kubeletContainer, stats.SystemContainerRuntime: dockerContainer, stats.SystemContainerRuntime: runtimeContainer, } } else { framework.Failf(\"Failed to get docker container name in test-e2e-node resource collector.\") framework.Failf(\"Failed to get runtime container name in test-e2e-node resource collector.\") } wait.Poll(1*time.Second, 1*time.Minute, func() (bool, error) {"} {"_id":"doc-en-kubernetes-ed2336ec4fc2332a11321d29fbdf8e46c6020e276458a63130bbc783a2b60e6e","title":"","text":"func formatResourceUsageStats(containerStats framework.ResourceUsagePerContainer) string { // Example output: // // Resource usage for node \"e2e-test-foo-node-abcde\": // container cpu(cores) memory(MB) // \"/\" 0.363 2942.09 // \"/docker-daemon\" 0.088 521.80 // \"/kubelet\" 0.086 424.37 // \"/system\" 0.007 119.88 // Resource usage: //container cpu(cores) memory_working_set(MB) memory_rss(MB) //\"kubelet\" 0.068 27.92 15.43 //\"runtime\" 0.664 89.88 68.13 buf := &bytes.Buffer{} w := tabwriter.NewWriter(buf, 1, 0, 1, ' ', 0) fmt.Fprintf(w, \"container\tcpu(cores)\tmemory_working_set(MB)\tmemory_rss(MB)\n\")
container 5th% 50th% 90th% 95th% // \"/\" 0.051 0.159 0.387 0.455 // \"/runtime\" 0.000 0.000 0.146 0.166"} {"_id":"doc-en-kubernetes-621052aa95c06830c11c54b153ecae405d0997eaf5efead46f9d72d95b7a3711","title":"","text":"return resourceSeries } // Code for getting container name of docker, copied from pkg/kubelet/cm/container_manager_linux.go // since they are not exposed const ( kubeletProcessName = \"kubelet\" dockerProcessName = \"docker\" dockerPidFile = \"/var/run/docker.pid\" containerdProcessName = \"docker-containerd\" containerdPidFile = \"/run/docker/libcontainerd/docker-containerd.pid\" ) const kubeletProcessName = \"kubelet\" func getPidsForProcess(name, pidFile string) ([]int, error) { if len(pidFile) > 0 {
Priority admission controller populates the value from the given PriorityClass name\"))"} {"_id":"doc-en-kubernetes-f3aaec3f3d869fea32fc11dd5ec048cd031d43f7801679d580bd4f02efc9fd66","title":"","text":"PriorityClassName: \"system-cluster-critical\", }, }, // pod[5]: mirror Pod with a system priority class name { ObjectMeta: metav1.ObjectMeta{ Name: \"mirror-pod-w-system-priority\", Namespace: \"namespace\", Annotations: map[string]string{api.MirrorPodAnnotationKey: \"\"}, }, Spec: api.PodSpec{ Containers: []api.Container{ { Name: containerName, }, }, PriorityClassName: \"system-cluster-critical\", }, }, // pod[6]: mirror Pod with integer value of priority { ObjectMeta: metav1.ObjectMeta{ Name: \"mirror-pod-w-integer-priority\", Namespace: \"namespace\", Annotations: map[string]string{api.MirrorPodAnnotationKey: \"\"}, }, Spec: api.PodSpec{ Containers: []api.Container{ { Name: containerName, }, }, PriorityClassName: \"default1\", Priority: &intPriority, }, }, } // Enable PodPriority feature gate. 
utilfeature.DefaultFeatureGate.Set(fmt.Sprintf(\"%s=true\", features.PodPriority))"} {"_id":"doc-en-kubernetes-980ea58bccd4952f09d3af4c4dce58a4713ecd3e14c219e73f0167c66b106b79","title":"","text":"0, true, }, { \"mirror pod with system priority class\", []*scheduling.PriorityClass{}, *pods[5], SystemCriticalPriority, false, }, { \"mirror pod with integer priority\", []*scheduling.PriorityClass{}, *pods[6], 0, true, }, } for _, test := range tests {"} {"_id":"doc-en-kubernetes-0c9ca3587e96b1c07d244c5c3cb19c0f04bc880c3d17e99c64514eb40c496eb2","title":"","text":"} }) // TODO: move this under volumeType loop Context(\"when one pod requests one prebound PVC\", func() { var testVol *localTestVolume"} {"_id":"doc-en-kubernetes-2da9c5047ff5958a2b240aa981775e40454a467efe67e42c43d9035ad7a47260","title":"","text":"}) }) Context(\"when pod using local volume with non-existent path\", func() { ep := &eventPatterns{ reason: \"FailedMount\", pattern: make([]string, 2)} ep.pattern = append(ep.pattern, \"MountVolume.SetUp failed\") ep.pattern = append(ep.pattern, \"does not exist\") It(\"should not be able to mount\", func() { testVol := &localTestVolume{ node: config.node0, hostDir: \"/non-existent/location/nowhere\", localVolumeType: testVolType, } By(\"Creating local PVC and PV\") createLocalPVCsPVs(config, []*localTestVolume{testVol}, testMode) pod, err := createLocalPod(config, testVol) Expect(err).To(HaveOccurred()) checkPodEvents(config, pod.Name, ep) verifyLocalVolume(config, testVol) cleanupLocalPVCsPVs(config, []*localTestVolume{testVol}) }) }) }) } Context(\"when pod's node is different from PV's NodeAffinity\", func() { Context(\"when local volume cannot be mounted [Slow]\", func() { // TODO: // - make the pod create timeout shorter // - check for these errors in unit tests instead It(\"should fail mount due to non-existent path\", func() { ep := &eventPatterns{ reason: \"FailedMount\", pattern: make([]string, 2)} ep.pattern = append(ep.pattern, \"MountVolume.SetUp
failed\") ep.pattern = append(ep.pattern, \"does not exist\") testVol := &localTestVolume{ node: config.node0, hostDir: \"/non-existent/location/nowhere\", localVolumeType: DirectoryLocalVolumeType, } By(\"Creating local PVC and PV\") createLocalPVCsPVs(config, []*localTestVolume{testVol}, immediateMode) pod, err := createLocalPod(config, testVol) Expect(err).To(HaveOccurred()) checkPodEvents(config, pod.Name, ep) verifyLocalVolume(config, testVol) cleanupLocalPVCsPVs(config, []*localTestVolume{testVol}) }) BeforeEach(func() { if len(config.nodes) < 2 { framework.Skipf(\"Runs only when number of nodes >= 2\") } }) It(\"should fail mount due to wrong node\", func() { if len(config.nodes) < 2 { framework.Skipf(\"Runs only when number of nodes >= 2\") } ep := &eventPatterns{ reason: \"FailedScheduling\", pattern: make([]string, 2)} ep.pattern = append(ep.pattern, \"MatchNodeSelector\") ep.pattern = append(ep.pattern, \"VolumeNodeAffinityConflict\") ep := &eventPatterns{ reason: \"FailedMount\", pattern: make([]string, 2)} ep.pattern = append(ep.pattern, \"NodeSelectorTerm\") ep.pattern = append(ep.pattern, \"MountVolume.NodeAffinity check failed\") It(\"should not be able to mount due to different NodeAffinity\", func() { testPodWithNodeName(config, testVolType, ep, config.nodes[1].Name, makeLocalPodWithNodeAffinity, testMode) }) testVols := setupLocalVolumesPVCsPVs(config, DirectoryLocalVolumeType, config.node0, 1, immediateMode) testVol := testVols[0] It(\"should not be able to mount due to different NodeSelector\", func() { testPodWithNodeName(config, testVolType, ep, config.nodes[1].Name, makeLocalPodWithNodeSelector, testMode) }) pod := makeLocalPodWithNodeName(config, testVol, config.nodes[1].Name) pod, err := config.client.CoreV1().Pods(config.ns).Create(pod) Expect(err).NotTo(HaveOccurred()) }) err = framework.WaitForPodNameRunningInNamespace(config.client, pod.Name, pod.Namespace) Expect(err).To(HaveOccurred()) checkPodEvents(config, pod.Name, ep) 
Context(\"when pod's node is different from PV's NodeName\", func() { cleanupLocalVolumes(config, []*localTestVolume{testVol}) }) }) BeforeEach(func() { if len(config.nodes) < 2 { framework.Skipf(\"Runs only when number of nodes >= 2\") } }) Context(\"when pod's node is different from PV's NodeAffinity\", func() { var ( testVol *localTestVolume volumeType localVolumeType ) BeforeEach(func() { if len(config.nodes) < 2 { framework.Skipf(\"Runs only when number of nodes >= 2\") } ep := &eventPatterns{ reason: \"FailedMount\", pattern: make([]string, 2)} ep.pattern = append(ep.pattern, \"NodeSelectorTerm\") ep.pattern = append(ep.pattern, \"MountVolume.NodeAffinity check failed\") volumeType = DirectoryLocalVolumeType setupStorageClass(config, &immediateMode) testVols := setupLocalVolumesPVCsPVs(config, volumeType, config.node0, 1, immediateMode) testVol = testVols[0] }) It(\"should not be able to mount due to different NodeName\", func() { testPodWithNodeName(config, testVolType, ep, config.nodes[1].Name, makeLocalPodWithNodeName, testMode) }) }) AfterEach(func() { cleanupLocalVolumes(config, []*localTestVolume{testVol}) cleanupStorageClass(config) }) } It(\"should not be able to mount due to different NodeAffinity\", func() { testPodWithNodeConflict(config, volumeType, config.nodes[1].Name, makeLocalPodWithNodeAffinity, immediateMode) }) It(\"should not be able to mount due to different NodeSelector\", func() { testPodWithNodeConflict(config, volumeType, config.nodes[1].Name, makeLocalPodWithNodeSelector, immediateMode) }) }) Context(\"when using local volume provisioner\", func() { var volumePath string"} {"_id":"doc-en-kubernetes-b07e5ca3940ab5bb2ddab3cc773f24bc154e7bcd3673df915d8e6214a2d41179","title":"","text":"type makeLocalPodWith func(config *localTestConfig, volume *localTestVolume, nodeName string) *v1.Pod func testPodWithNodeName(config *localTestConfig, testVolType localVolumeType, ep *eventPatterns, nodeName string, makeLocalPodFunc makeLocalPodWith, 
bindingMode storagev1.VolumeBindingMode) { func testPodWithNodeConflict(config *localTestConfig, testVolType localVolumeType, nodeName string, makeLocalPodFunc makeLocalPodWith, bindingMode storagev1.VolumeBindingMode) { By(fmt.Sprintf(\"local-volume-type: %s\", testVolType)) testVols := setupLocalVolumesPVCsPVs(config, testVolType, config.node0, 1, bindingMode) testVol := testVols[0]"} {"_id":"doc-en-kubernetes-2001ed9cebd868a45b35804b626f643da85b11a5a8a1c9ebfaa9bd104d7937b7","title":"","text":"pod := makeLocalPodFunc(config, testVol, nodeName) pod, err := config.client.CoreV1().Pods(config.ns).Create(pod) Expect(err).NotTo(HaveOccurred()) err = framework.WaitForPodRunningInNamespace(config.client, pod) Expect(err).To(HaveOccurred()) checkPodEvents(config, pod.Name, ep) err = framework.WaitForPodNameUnschedulableInNamespace(config.client, pod.Name, pod.Namespace) Expect(err).NotTo(HaveOccurred()) cleanupLocalVolumes(config, []*localTestVolume{testVol}) }"} {"_id":"doc-en-kubernetes-dafe23ea2f65b107cb022770a1c171fa1f8b017fd10f7527e0a09cd1ae1941e2","title":"","text":"mountArgs = append(mountArgs, mountpoint) mountArgs = append(mountArgs, \"-r\") mountArgs = append(mountArgs, cephfsVolume.path) mountArgs = append(mountArgs, \"--id\") mountArgs = append(mountArgs, cephfsVolume.id) glog.V(4).Infof(\"Mounting cmd ceph-fuse with arguments (%s)\", mountArgs) command := exec.Command(\"ceph-fuse\", mountArgs...)"} {"_id":"doc-en-kubernetes-9c8754dcd56b90b115cc0f8238fe8c038f38d7d71e898258e37de7a3c607e6d5","title":"","text":"export ARTIFACTS_DIR=${WORKSPACE}/artifacts # Save the verbose stdout as well. export KUBE_KEEP_VERBOSE_TEST_OUTPUT=y export KUBE_TIMEOUT='-timeout 300s' export KUBE_INTEGRATION_TEST_MAX_CONCURRENCY=4 export LOG_LEVEL=4"} {"_id":"doc-en-kubernetes-4494f7379ee41113840627169a8110fe566c020fd93e9e9ddfdcd6f0e45c53d0","title":"","text":"} if a.usernameClaim == \"email\" { // Check the email_verified claim to ensure the email is valid. 
// If the email_verified claim is present, ensure the email is valid. // https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims var emailVerified bool if err := c.unmarshalClaim(\"email_verified\", &emailVerified); err != nil { return nil, false, fmt.Errorf(\"oidc: parse 'email_verified' claim: %v\", err) } if !emailVerified { return nil, false, fmt.Errorf(\"oidc: email not verified\") if hasEmailVerified := c.hasClaim(\"email_verified\"); hasEmailVerified { var emailVerified bool if err := c.unmarshalClaim(\"email_verified\", &emailVerified); err != nil { return nil, false, fmt.Errorf(\"oidc: parse 'email_verified' claim: %v\", err) } // If the email_verified claim is present we have to verify it is set to `true`. if !emailVerified { return nil, false, fmt.Errorf(\"oidc: email not verified\") } } }"} {"_id":"doc-en-kubernetes-c9c5ae89ad1ba4f378086a2862693ffddae5e1eec062fab705a9e8d50a247895","title":"","text":"} return json.Unmarshal([]byte(val), v) } func (c claims) hasClaim(name string) bool { if _, ok := c[name]; !ok { return false } return true } "} {"_id":"doc-en-kubernetes-e51e1989d6bd054eafa95d209e56244f06a870bfdd41bc115c9b99088f4a20c3","title":"","text":"wantErr: true, }, { // If \"email_verified\" isn't present, assume false // If \"email_verified\" isn't present, assume true name: \"no-email-verified-claim\", options: Options{ IssuerURL: \"https://auth.example.com\","} {"_id":"doc-en-kubernetes-b8cf9787e6d2de45ad3f73e95f2e9eaab836a15db7d982201c9c9d079a887fe7","title":"","text":"\"email\": \"jane@example.com\", \"exp\": %d }`, valid.Unix()), want: &user.DefaultInfo{ Name: \"jane@example.com\", }, }, { name: \"invalid-email-verified-claim\", options: Options{ IssuerURL: \"https://auth.example.com\", ClientID: \"my-client\", UsernameClaim: \"email\", now: func() time.Time { return now }, }, signingKey: loadRSAPrivKey(t, \"testdata/rsa_1.pem\", jose.RS256), pubKeys: []*jose.JSONWebKey{ loadRSAKey(t, \"testdata/rsa_1.pem\", jose.RS256), }, // 
string value for \"email_verified\" claims: fmt.Sprintf(`{ \"iss\": \"https://auth.example.com\", \"aud\": \"my-client\", \"email\": \"jane@example.com\", \"email_verified\": \"false\", \"exp\": %d }`, valid.Unix()), wantErr: true, }, {"} {"_id":"doc-en-kubernetes-b7c2218e62967fe1c4513ccb70443824d9e391160e1b20020cf1ab0f61556891","title":"","text":"return nil, err } // TODO: we should fix this up better (PR 59732) o.config.LeaderElection.LeaderElect = true return o, nil }"} {"_id":"doc-en-kubernetes-c8f3d297b5605611c738f7c8279d5f63fa7aa948f6cd50931afc9f571a2c173e","title":"","text":"import ( \"context\" \"fmt\" \"os\" \"strings\" \"k8s.io/api/core/v1\" \"k8s.io/kubernetes/pkg/cloudprovider\""} {"_id":"doc-en-kubernetes-80d3a35b5338dff690a5df11f51dde0e1b57419a3860775deb5e64ee8cea5082","title":"","text":"func (az *Cloud) isCurrentInstance(name types.NodeName) (bool, error) { nodeName := mapNodeNameToVMName(name) metadataName, err := az.metadata.Text(\"instance/compute/name\") if err != nil { return false, err } if az.VMType == vmTypeVMSS { // VMSS vmName is not same with hostname, use hostname instead. metadataName, err = os.Hostname() if err != nil { return false, err } } metadataName = strings.ToLower(metadataName) return (metadataName == nodeName), err } // InstanceID returns the cloud provider ID of the specified instance. // Note that if the instance does not exist or is no longer running, we must return (\"\", cloudprovider.InstanceNotFound) func (az *Cloud) InstanceID(ctx context.Context, name types.NodeName) (string, error) { nodeName := mapNodeNameToVMName(name) if az.UseInstanceMetadata { isLocalInstance, err := az.isCurrentInstance(name) if err != nil { return \"\", err } if isLocalInstance { nodeName := mapNodeNameToVMName(name) return az.getMachineID(nodeName), nil // Not local instance, get instanceID from Azure ARM API. 
if !isLocalInstance { return az.vmSet.GetInstanceIDByNodeName(nodeName) } // Compose instanceID based on nodeName for standard instance. if az.VMType == vmTypeStandard { return az.getStandardMachineID(nodeName), nil } // Get scale set name and instanceID from vmName for vmss. metadataName, err := az.metadata.Text(\"instance/compute/name\") if err != nil { return \"\", err } ssName, instanceID, err := extractVmssVMName(metadataName) if err != nil { return \"\", err } // Compose instanceID based on ssName and instanceID for vmss instance. return az.getVmssMachineID(ssName, instanceID), nil } return az.vmSet.GetInstanceIDByNodeName(string(name)) return az.vmSet.GetInstanceIDByNodeName(nodeName) } // InstanceTypeByProviderID returns the cloudprovider instance type of the node with the specified unique providerID"} {"_id":"doc-en-kubernetes-8ef28ee99b2d212e20b36b805de7afa150350e23c428ae1b2f358314f0c322bb","title":"","text":"var errNotInVMSet = errors.New(\"vm is not in the vmset\") var providerIDRE = regexp.MustCompile(`^` + CloudProviderName + `://(?:.*)/Microsoft.Compute/virtualMachines/(.+)$`) // returns the full identifier of a machine func (az *Cloud) getMachineID(machineName string) string { // getStandardMachineID returns the full identifier of a virtual machine. func (az *Cloud) getStandardMachineID(machineName string) string { return fmt.Sprintf( machineIDTemplate, az.SubscriptionID,"} {"_id":"doc-en-kubernetes-f246c2f079a8cc6c97ac08056aa38c614e8ccd60ce71bc41ed23638986162727","title":"","text":"// ErrorNotVmssInstance indicates an instance is not belonging to any vmss. 
ErrorNotVmssInstance = errors.New(\"not a vmss instance\") scaleSetNameRE = regexp.MustCompile(`.*/subscriptions/(?:.*)/Microsoft.Compute/virtualMachineScaleSets/(.+)/virtualMachines(?:.*)`) vmssMachineIDTemplate = \"/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Compute/virtualMachineScaleSets/%s/virtualMachines/%s\" ) // scaleSet implements VMSet interface for Azure scale set."
string"} {"_id":"doc-en-kubernetes-a531bedda1e711f2d684c3f28eb9a6858108c351962eadd1099091e2042d97ca","title":"","text":"} for _, c := range cases { ssName, instanceID, err := ss.extractVmssVMName(c.vmName) ssName, instanceID, err := extractVmssVMName(c.vmName) if c.expectError { assert.Error(t, err, c.description) continue"} {"_id":"doc-en-kubernetes-a7fac5612b82bbef51d74194c04de305ed598c3abc899d264c264c3363d9a108","title":"","text":"// RangeSize returns the size of a range in valid addresses. func RangeSize(subnet *net.IPNet) int64 { ones, bits := subnet.Mask.Size() if bits == 32 && (bits-ones) >= 31 || bits == 128 && (bits-ones) >= 63 { if bits == 32 && (bits-ones) >= 31 || bits == 128 && (bits-ones) >= 127 { return 0 } max := int64(1) << uint(bits-ones) return max // For IPv6, the max size will be limited to 65536 // This is due to the allocator keeping track of all the // allocated IP's in a bitmap. This will keep the size of // the bitmap to 64k. if bits == 128 && (bits-ones) >= 16 { return int64(1) << uint(16) } else { return int64(1) << uint(bits-ones) } } // GetIndexedIP returns a net.IP that is subnet.IP + index in the contiguous IP space."} {"_id":"doc-en-kubernetes-4e576db01c795b1853e02fe900c1dcfacde28f11ced5a799f07e114a2f1dbc70","title":"","text":") func TestAllocate(t *testing.T) { _, cidr, err := net.ParseCIDR(\"192.168.1.0/24\") if err != nil { t.Fatal(err) } r := NewCIDRRange(cidr) t.Logf(\"base: %v\", r.base.Bytes()) if f := r.Free(); f != 254 { t.Errorf(\"unexpected free %d\", f) } if f := r.Used(); f != 0 { t.Errorf(\"unexpected used %d\", f) testCases := []struct { name string cidr string free int released string outOfRange1 string outOfRange2 string outOfRange3 string alreadyAllocated string }{ { name: \"IPv4\", cidr: \"192.168.1.0/24\", free: 254, released: \"192.168.1.5\", outOfRange1: \"192.168.0.1\", outOfRange2: \"192.168.1.0\", outOfRange3: \"192.168.1.255\", alreadyAllocated: \"192.168.1.1\", }, { name: \"IPv6\", cidr: 
\"2001:db8:1::/48\", free: 65534, released: \"2001:db8:1::5\", outOfRange1: \"2001:db8::1\", outOfRange2: \"2001:db8:1::\", outOfRange3: \"2001:db8:1::ffff\", alreadyAllocated: \"2001:db8:1::1\", }, } found := sets.NewString() count := 0 for r.Free() > 0 { ip, err := r.AllocateNext() for _, tc := range testCases { _, cidr, err := net.ParseCIDR(tc.cidr) if err != nil { t.Fatalf(\"error @ %d: %v\", count, err) t.Fatal(err) } count++ if !cidr.Contains(ip) { t.Fatalf(\"allocated %s which is outside of %s\", ip, cidr) r := NewCIDRRange(cidr) t.Logf(\"base: %v\", r.base.Bytes()) if f := r.Free(); f != tc.free { t.Errorf(\"Test %s unexpected free %d\", tc.name, f) } if found.Has(ip.String()) { t.Fatalf(\"allocated %s twice @ %d\", ip, count) if f := r.Used(); f != 0 { t.Errorf(\"Test %s unexpected used %d\", tc.name, f) } found := sets.NewString() count := 0 for r.Free() > 0 { ip, err := r.AllocateNext() if err != nil { t.Fatalf(\"Test %s error @ %d: %v\", tc.name, count, err) } count++ if !cidr.Contains(ip) { t.Fatalf(\"Test %s allocated %s which is outside of %s\", tc.name, ip, cidr) } if found.Has(ip.String()) { t.Fatalf(\"Test %s allocated %s twice @ %d\", tc.name, ip, count) } found.Insert(ip.String()) } if _, err := r.AllocateNext(); err != ErrFull { t.Fatal(err) } found.Insert(ip.String()) } if _, err := r.AllocateNext(); err != ErrFull { t.Fatal(err) } released := net.ParseIP(\"192.168.1.5\") if err := r.Release(released); err != nil { t.Fatal(err) } if f := r.Free(); f != 1 { t.Errorf(\"unexpected free %d\", f) } if f := r.Used(); f != 253 { t.Errorf(\"unexpected free %d\", f) } ip, err := r.AllocateNext() if err != nil { t.Fatal(err) } if !released.Equal(ip) { t.Errorf(\"unexpected %s : %s\", ip, released) } released := net.ParseIP(tc.released) if err := r.Release(released); err != nil { t.Fatal(err) } if f := r.Free(); f != 1 { t.Errorf(\"Test %s unexpected free %d\", tc.name, f) } if f := r.Used(); f != (tc.free - 1) { t.Errorf(\"Test %s unexpected free %d\", 
tc.name, f) } ip, err := r.AllocateNext() if err != nil { t.Fatal(err) } if !released.Equal(ip) { t.Errorf(\"Test %s unexpected %s : %s\", tc.name, ip, released) } if err := r.Release(released); err != nil { t.Fatal(err) } err = r.Allocate(net.ParseIP(\"192.168.0.1\")) if _, ok := err.(*ErrNotInRange); !ok { t.Fatal(err) } if err := r.Allocate(net.ParseIP(\"192.168.1.1\")); err != ErrAllocated { t.Fatal(err) } err = r.Allocate(net.ParseIP(\"192.168.1.0\")) if _, ok := err.(*ErrNotInRange); !ok { t.Fatal(err) } err = r.Allocate(net.ParseIP(\"192.168.1.255\")) if _, ok := err.(*ErrNotInRange); !ok { t.Fatal(err) } if f := r.Free(); f != 1 { t.Errorf(\"unexpected free %d\", f) } if f := r.Used(); f != 253 { t.Errorf(\"unexpected free %d\", f) } if err := r.Allocate(released); err != nil { t.Fatal(err) } if f := r.Free(); f != 0 { t.Errorf(\"unexpected free %d\", f) } if f := r.Used(); f != 254 { t.Errorf(\"unexpected free %d\", f) if err := r.Release(released); err != nil { t.Fatal(err) } err = r.Allocate(net.ParseIP(tc.outOfRange1)) if _, ok := err.(*ErrNotInRange); !ok { t.Fatal(err) } if err := r.Allocate(net.ParseIP(tc.alreadyAllocated)); err != ErrAllocated { t.Fatal(err) } err = r.Allocate(net.ParseIP(tc.outOfRange2)) if _, ok := err.(*ErrNotInRange); !ok { t.Fatal(err) } err = r.Allocate(net.ParseIP(tc.outOfRange3)) if _, ok := err.(*ErrNotInRange); !ok { t.Fatal(err) } if f := r.Free(); f != 1 { t.Errorf(\"Test %s unexpected free %d\", tc.name, f) } if f := r.Used(); f != (tc.free - 1) { t.Errorf(\"Test %s unexpected free %d\", tc.name, f) } if err := r.Allocate(released); err != nil { t.Fatal(err) } if f := r.Free(); f != 0 { t.Errorf(\"Test %s unexpected free %d\", tc.name, f) } if f := r.Used(); f != tc.free { t.Errorf(\"Test %s unexpected free %d\", tc.name, f) } } }"} {"_id":"doc-en-kubernetes-d39fc4ffcc6084f194dec70694370e77adc9ebdd6cd4b802eaa0b92ad0f0dc6f","title":"","text":"}, { name: \"supported IPv6 cidr\", cidr: \"2001:db8::/98\", addrs: 1073741824, 
cidr: \"2001:db8::/48\", addrs: 65536, }, { name: \"unsupported IPv6 mask\", cidr: \"2001:db8::/65\", cidr: \"2001:db8::/1\", addrs: 0, }, }"} {"_id":"doc-en-kubernetes-1d24a659c889b986ef2a13a349ccc97fcb9fe5ddf64189e908b9c728598c40ee","title":"","text":"// DefaultClusterDNSIP defines default DNS IP DefaultClusterDNSIP = \"10.96.0.10\" // DefaultKubernetesVersion defines default kubernetes version DefaultKubernetesVersion = \"stable-1.9\" DefaultKubernetesVersion = \"stable-1.10\" // DefaultAPIBindPort defines default API port DefaultAPIBindPort = 6443 // DefaultAuthorizationModes defines default authorization modes"} {"_id":"doc-en-kubernetes-3d36433b0a004468937813ba8af69af1e1388989dfb07a6b285f33a605bf1645","title":"","text":"return fmt.Errorf(\"waiting for kubelet timed out\") } func RestartApiserver(c discovery.ServerVersionInterface) error { func RestartApiserver(cs clientset.Interface) error { // TODO: Make it work for all providers. if !ProviderIs(\"gce\", \"gke\", \"aws\") { return fmt.Errorf(\"unsupported provider: %s\", TestContext.Provider) } if ProviderIs(\"gce\", \"aws\") { return sshRestartMaster() initialRestartCount, err := getApiserverRestartCount(cs) if err != nil { return fmt.Errorf(\"failed to get apiserver's restart count: %v\", err) } if err := sshRestartMaster(); err != nil { return fmt.Errorf(\"failed to restart apiserver: %v\", err) } return waitForApiserverRestarted(cs, initialRestartCount) } // GKE doesn't allow ssh access, so use a same-version master // upgrade to teardown/recreate master. v, err := c.ServerVersion() v, err := cs.Discovery().ServerVersion() if err != nil { return err }"} {"_id":"doc-en-kubernetes-b7c500c8f86990f336d9d472bf27d7ebdbf87af5f9c0794497578b9683924604","title":"","text":"return fmt.Errorf(\"waiting for apiserver timed out\") } // WaitForApiserverRestarted waits until apiserver's restart count increased. 
func WaitForApiserverRestarted(c clientset.Interface, initialRestartCount int32) error { // waitForApiserverRestarted waits until apiserver's restart count increased. func waitForApiserverRestarted(c clientset.Interface, initialRestartCount int32) error { for start := time.Now(); time.Since(start) < time.Minute; time.Sleep(5 * time.Second) { restartCount, err := GetApiserverRestartCount(c) restartCount, err := getApiserverRestartCount(c) if err != nil { Logf(\"Failed to get apiserver's restart count: %v\", err) continue"} {"_id":"doc-en-kubernetes-06472af30c5c1fe6cbe371bcaf6f07ceb18fd8e2ee5de8002e8a453e7313b8a7","title":"","text":"return fmt.Errorf(\"timed out waiting for apiserver to be restarted\") } func GetApiserverRestartCount(c clientset.Interface) (int32, error) { func getApiserverRestartCount(c clientset.Interface) (int32, error) { label := labels.SelectorFromSet(labels.Set(map[string]string{\"component\": \"kube-apiserver\"})) listOpts := metav1.ListOptions{LabelSelector: label.String()} pods, err := c.CoreV1().Pods(metav1.NamespaceSystem).List(listOpts)"} {"_id":"doc-en-kubernetes-919fd37fcf804ff8c704ca14d4a4ab403bab771779c38d5cc67e21eb1f9a483c","title":"","text":"framework.ExpectNoError(framework.VerifyServeHostnameServiceUp(cs, ns, host, podNames1, svc1IP, servicePort)) // Restart apiserver initialRestartCount, err := framework.GetApiserverRestartCount(cs) Expect(err).NotTo(HaveOccurred(), \"failed to get apiserver's restart count\") By(\"Restarting apiserver\") if err := framework.RestartApiserver(cs.Discovery()); err != nil { if err := framework.RestartApiserver(cs); err != nil { framework.Failf(\"error restarting apiserver: %v\", err) } By(\"Waiting for apiserver to be restarted\") if err := framework.WaitForApiserverRestarted(cs, initialRestartCount); err != nil { framework.Failf(\"error while waiting for apiserver to be restarted: %v\", err) } By(\"Waiting for apiserver to come up by polling /healthz\") if err := framework.WaitForApiserverUp(cs); 
err != nil { framework.Failf(\"error while waiting for apiserver up: %v\", err)"} {"_id":"doc-en-kubernetes-a8b43b656d38c3de0bf5645bfbb9c2d359e773bc0798df7b2df577bbf6301557","title":"","text":"return nil } func waitForMultiPathToExist(devicePaths []string, maxRetries int, deviceUtil volumeutil.DeviceUtil) string { if 0 == len(devicePaths) { return \"\" } for i := 0; i < maxRetries; i++ { for _, path := range devicePaths { // There shouldn't be any empty device paths. However, adding this check // to be on the safe side and avoid the possibility of an empty entry. if path == \"\" { continue } // check if the dev is using mpio and if so mount it via the dm-XX device if mappedDevicePath := deviceUtil.FindMultipathDeviceForDevice(path); mappedDevicePath != \"\" { return mappedDevicePath } } if i == maxRetries-1 { break } time.Sleep(time.Second) } return \"\" } // AttachDisk returns devicePath of volume if attach succeeded otherwise returns error func (util *ISCSIUtil) AttachDisk(b iscsiDiskMounter) (string, error) { var devicePath string"} {"_id":"doc-en-kubernetes-6188f5739f5024a85ada1fc3f706e7b50690772597d0292fd092153239018a40","title":"","text":"glog.Errorf(\"iscsi: last error occurred during iscsi init:\\n%v\", lastErr) } //Make sure we use a valid devicepath to find mpio device. devicePath = devicePaths[0] for _, path := range devicePaths { // There shouldn't be any empty device paths. However, adding this check // to be on the safe side and avoid the possibility of an empty entry. 
if path == \"\" { continue } // check if the dev is using mpio and if so mount it via the dm-XX device if mappedDevicePath := b.deviceUtil.FindMultipathDeviceForDevice(path); mappedDevicePath != \"\" { devicePath = mappedDevicePath break } // Try to find a multipath device for the volume if 1 < len(bkpPortal) { // If the PV has 2 or more portals, wait up to 10 seconds for the multipath // device to appear devicePath = waitForMultiPathToExist(devicePaths, 10, b.deviceUtil) } else { // For PVs with 1 portal, just try one time to find the multipath device. This // avoids a long pause when the multipath device will never get created, and // matches legacy behavior. devicePath = waitForMultiPathToExist(devicePaths, 1, b.deviceUtil) } // When no multipath device is found, just use the first (and presumably only) device if devicePath == \"\" { devicePath = devicePaths[0] } glog.V(5).Infof(\"iscsi: AttachDisk devicePath: %s\", devicePath)"} {"_id":"doc-en-kubernetes-f23c1599f9137b27ac8b315291e238f37a1bc19f23f3335ea45766768b81dbbd","title":"","text":"# When a 'git archive' is exported, the '$Format:%D$' below will look # something like 'HEAD -> release-1.8, tag: v1.8.3' where then 'tag: ' # can be extracted from it. 
if [[ '$Format:%D$' =~ tag: (v[^ ]+) ]]; then if [[ '$Format:%D$' =~ tag: (v[^ ,]+) ]]; then KUBE_GIT_VERSION=\"${BASH_REMATCH[1]}\" fi fi"} {"_id":"doc-en-kubernetes-d8b5b7b988afec59cb144f82f6ded70ab20fcd0503af6cd9b83fc8c6e56348dc","title":"","text":"} if !proxier.lbWhiteListCIDRSet.isEmpty() || !proxier.lbWhiteListIPSet.isEmpty() { // link kube-services chain -> kube-fire-wall chain args := []string{\"-m\", \"set\", \"--match-set\", proxier.lbIngressSet.Name, \"dst,dst\", \"-j\", string(KubeFireWallChain)} if _, err := proxier.iptables.EnsureRule(utiliptables.Append, utiliptables.TableNAT, kubeServicesChain, args...); err != nil { glog.Errorf(\"Failed to ensure that ipset %s chain %s jumps to %s: %v\", proxier.lbIngressSet.Name, kubeServicesChain, KubeFireWallChain, err) args := []string{ \"-A\", string(kubeServicesChain), \"-m\", \"set\", \"--match-set\", proxier.lbIngressSet.Name, \"dst,dst\", \"-j\", string(KubeFireWallChain), } writeLine(proxier.natRules, args...) if !proxier.lbWhiteListCIDRSet.isEmpty() { args = append(args[:0], \"-A\", string(KubeFireWallChain),"} {"_id":"doc-en-kubernetes-cb587eb71b08fdf3c04dd1339074e019337bb8d9c2ff042cb4631abcddb39e70","title":"","text":"\"fmt\" \"net\" \"reflect\" \"strings\" \"testing\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\""} {"_id":"doc-en-kubernetes-c3be7299c05eef6b667287d1018676222ff29060d0795250627a26330de752bd","title":"","text":"t.Errorf(\"Expect node port type service, got none\") } } func TestLoadBalanceSourceRanges(t *testing.T) { ipt := iptablestest.NewFake() ipvs := ipvstest.NewFake() ipset := ipsettest.NewFake(testIPSetVersion) fp := NewFakeProxier(ipt, ipvs, ipset, nil) svcIP := \"10.20.30.41\" svcPort := 80 svcLBIP := \"1.2.3.4\" svcLBSource := \"10.0.0.0/8\" svcPortName := proxy.ServicePortName{ NamespacedName: makeNSN(\"ns1\", \"svc1\"), Port: \"p80\", } epIP := \"10.180.0.1\" makeServiceMap(fp, makeTestService(svcPortName.Namespace, svcPortName.Name, func(svc *api.Service) { svc.Spec.Type = 
\"LoadBalancer\" svc.Spec.ClusterIP = svcIP svc.Spec.Ports = []api.ServicePort{{ Name: svcPortName.Port, Port: int32(svcPort), Protocol: api.ProtocolTCP, }} svc.Status.LoadBalancer.Ingress = []api.LoadBalancerIngress{{ IP: svcLBIP, }} svc.Spec.LoadBalancerSourceRanges = []string{ svcLBSource, } }), ) makeEndpointsMap(fp, makeTestEndpoints(svcPortName.Namespace, svcPortName.Name, func(ept *api.Endpoints) { ept.Subsets = []api.EndpointSubset{{ Addresses: []api.EndpointAddress{{ IP: epIP, NodeName: strPtr(testHostname), }}, Ports: []api.EndpointPort{{ Name: svcPortName.Port, Port: int32(svcPort), }}, }} }), ) fp.syncProxyRules() // Check ipvs service and destinations services, err := ipvs.GetVirtualServers() if err != nil { t.Errorf(\"Failed to get ipvs services, err: %v\", err) } found := false for _, svc := range services { fmt.Printf(\"address: %s:%d, %s\", svc.Address.String(), svc.Port, svc.Protocol) if svc.Address.Equal(net.ParseIP(svcLBIP)) && svc.Port == uint16(svcPort) && svc.Protocol == string(api.ProtocolTCP) { destinations, _ := ipvs.GetRealServers(svc) if len(destinations) != 1 { t.Errorf(\"Unexpected %d destinations, expect 1 destination\", len(destinations)) } for _, ep := range destinations { if ep.Address.String() == epIP && ep.Port == uint16(svcPort) { found = true } } } } if !found { t.Errorf(\"Did not find expected load balancer service\") } // Check ipset entry expectIPSet := map[string]*utilipset.Entry{ KubeLoadBalancerSet: { IP: svcLBIP, Port: svcPort, Protocol: strings.ToLower(string(api.ProtocolTCP)), SetType: utilipset.HashIPPort, }, KubeLoadBalancerMasqSet: { IP: svcLBIP, Port: svcPort, Protocol: strings.ToLower(string(api.ProtocolTCP)), SetType: utilipset.HashIPPort, }, KubeLoadBalancerSourceCIDRSet: { IP: svcLBIP, Port: svcPort, Protocol: strings.ToLower(string(api.ProtocolTCP)), Net: svcLBSource, SetType: utilipset.HashIPPortNet, }, } for set, entry := range expectIPSet { ents, err := ipset.ListEntries(set) if err != nil || len(ents) != 1 { 
t.Errorf(\"Check ipset entries failed for ipset: %q\", set) continue } if ents[0] != entry.String() { t.Errorf(\"Check ipset entries failed for ipset: %q\", set) } } // Check iptables chain and rules kubeSvcRules := ipt.GetRules(string(kubeServicesChain)) kubeFWRules := ipt.GetRules(string(KubeFireWallChain)) if !hasJump(kubeSvcRules, string(KubeMarkMasqChain), KubeLoadBalancerMasqSet) { t.Errorf(\"Didn't find jump from chain %v match set %v to MASQUERADE\", kubeServicesChain, KubeLoadBalancerMasqSet) } if !hasJump(kubeSvcRules, string(KubeFireWallChain), KubeLoadBalancerSet) { t.Errorf(\"Didn't find jump from chain %v match set %v to %v\", kubeServicesChain, KubeLoadBalancerSet, KubeFireWallChain) } if !hasJump(kubeFWRules, \"ACCEPT\", KubeLoadBalancerSourceCIDRSet) { t.Errorf(\"Didn't find jump from chain %v match set %v to ACCEPT\", kubeServicesChain, KubeLoadBalancerSourceCIDRSet) } } func TestOnlyLocalLoadBalancing(t *testing.T) { ipt := iptablestest.NewFake()"} {"_id":"doc-en-kubernetes-7370a595c266f58e16dfadcc43e67e51e6781c1ebbd5ad0b3f2237523c96ac67","title":"","text":"} } } func hasJump(rules []iptablestest.Rule, destChain, ipSet string) bool { match := false for _, r := range rules { if r[iptablestest.Jump] == destChain { match = true if ipSet != \"\" { if strings.Contains(r[iptablestest.MatchSet], ipSet) { return true } match = false } } } return match } "} {"_id":"doc-en-kubernetes-469d90a0027521be7f34c93b119d7a460cfc9ca845b80a3804fd3868551580ae","title":"","text":"Reject = \"REJECT\" ToDest = \"--to-destination \" Recent = \"recent \" MatchSet = \"--match-set \" ) type Rule map[string]string"} {"_id":"doc-en-kubernetes-fbfbad523648010f75f75eda52015a203819bfd213b9fdbf65c0683bfbc1e410","title":"","text":"for _, l := range strings.Split(string(f.Lines), \"n\") { if strings.Contains(l, fmt.Sprintf(\"-A %v\", chainName)) { newRule := Rule(map[string]string{}) for _, arg := range []string{Destination, Source, DPort, Protocol, Jump, ToDest, Recent} { for _, 
arg := range []string{Destination, Source, DPort, Protocol, Jump, ToDest, Recent, MatchSet} { tok := getToken(l, arg) if tok != \"\" { newRule[arg] = tok"} {"_id":"doc-en-kubernetes-086eef9604a2614cb1940562ab07df053b6265e68437ee83fa88c287bd8f437c","title":"","text":"} type gcepdSource struct { diskName string pvc *v1.PersistentVolumeClaim } func initGCEPD() volSource {"} {"_id":"doc-en-kubernetes-afb8259b72328e33b04ceb5df41ec289c627dc4f135cbc191e848845e9f32bfd","title":"","text":"func (s *gcepdSource) createVolume(f *framework.Framework) volInfo { var err error framework.Logf(\"Creating GCE PD volume\") s.diskName, err = framework.CreatePDWithRetry() framework.ExpectNoError(err, \"Error creating PD\") framework.Logf(\"Creating GCE PD volume via dynamic provisioning\") testCase := storageClassTest{ name: \"subpath\", claimSize: \"2G\", } pvc := newClaim(testCase, f.Namespace.Name, \"subpath\") s.pvc, err = framework.CreatePVC(f.ClientSet, f.Namespace.Name, pvc) framework.ExpectNoError(err, \"Error creating PVC\") return volInfo{ source: &v1.VolumeSource{ GCEPersistentDisk: &v1.GCEPersistentDiskVolumeSource{PDName: s.diskName}, PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ ClaimName: s.pvc.Name, }, }, } } func (s *gcepdSource) cleanupVolume(f *framework.Framework) { if s.diskName != \"\" { err := framework.DeletePDWithRetry(s.diskName) framework.ExpectNoError(err, \"Error deleting PD\") if s.pvc != nil { err := f.ClientSet.CoreV1().PersistentVolumeClaims(f.Namespace.Name).Delete(s.pvc.Name, nil) framework.ExpectNoError(err, \"Error deleting PVC\") } }"} {"_id":"doc-en-kubernetes-dda332f7e4ffb2a04ba19eb7acbd49cca29605c4f8dc2f351aabc94ff9c1cc21","title":"","text":"- --namespace=kube-system - --configmap=kube-dns-autoscaler # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base - --target=Deployment/kube-dns - --target={{.Target}} # When cluster is using large nodes(with more cores), \"coresPerReplica\" should dominate. 
# If using small nodes, \"nodesPerReplica\" should dominate. - --default-params={\"linear\":{\"coresPerReplica\":256,\"nodesPerReplica\":16,\"preventSinglePointFailure\":true}}"} {"_id":"doc-en-kubernetes-5f3008db0eff7e075d131cc6b0c9322c67d6ffc88d207fc590b6a6dc6265991b","title":"","text":"apiVersion: v1 kind: Service metadata: name: coredns name: kube-dns namespace: kube-system labels: k8s-app: coredns k8s-app: kube-dns kubernetes.io/cluster-service: \"true\" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: \"CoreDNS\" spec: selector: k8s-app: coredns k8s-app: kube-dns clusterIP: __PILLAR__DNS__SERVER__ ports: - name: dns"} {"_id":"doc-en-kubernetes-cabf65f86133816f68e4373bd44d1565de74e5133727e5e758aa26b43a50c30b","title":"","text":"apiVersion: v1 kind: Service metadata: name: coredns name: kube-dns namespace: kube-system labels: k8s-app: coredns k8s-app: kube-dns kubernetes.io/cluster-service: \"true\" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: \"CoreDNS\" spec: selector: k8s-app: coredns k8s-app: kube-dns clusterIP: {{ pillar['dns_server'] }} ports: - name: dns"} {"_id":"doc-en-kubernetes-268596d9580dbed0c12fb681ab4717168615bf8a3526ed0048473dfb50c8afcb","title":"","text":"name: coredns namespace: kube-system labels: k8s-app: coredns k8s-app: kube-dns kubernetes.io/cluster-service: \"true\" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: \"CoreDNS\" spec: replicas: 2 # replicas: not specified here: # 1. In order to make Addon Manager do not reconcile this replicas parameter. # 2. Default is 1. # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on. 
strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: coredns k8s-app: kube-dns template: metadata: labels: k8s-app: coredns k8s-app: kube-dns spec: serviceAccountName: coredns tolerations:"} {"_id":"doc-en-kubernetes-ac9cb48b0587730256f82a94efd5587742ce13747c2e20a4af539d1bc54af396","title":"","text":"apiVersion: v1 kind: Service metadata: name: coredns name: kube-dns namespace: kube-system labels: k8s-app: coredns k8s-app: kube-dns kubernetes.io/cluster-service: \"true\" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: \"CoreDNS\" spec: selector: k8s-app: coredns k8s-app: kube-dns clusterIP: $DNS_SERVER_IP ports: - name: dns"} {"_id":"doc-en-kubernetes-52f1e74a604d1e606f381316298f2e75501bb8ae9e37b3268eec658f50eb6826","title":"","text":"readonly UUID_MNT_PREFIX=\"/mnt/disks/by-uuid/google-local-ssds\" readonly UUID_BLOCK_PREFIX=\"/dev/disk/by-uuid/google-local-ssds\" readonly COREDNS_AUTOSCALER=\"Deployment/coredns\" readonly KUBEDNS_AUTOSCALER=\"Deployment/kube-dns\" # Use --retry-connrefused opt only if it's supported by curl. 
getDomainNameLabel(pip *network.PublicIPAddress) string { if pip == nil || pip.PublicIPAddressPropertiesFormat == nil || pip.PublicIPAddressPropertiesFormat.DNSSettings == nil { return \"\" } return to.String(pip.PublicIPAddressPropertiesFormat.DNSSettings.DomainNameLabel) } func getIdleTimeout(s *v1.Service) (*int32, error) { const ( min = 4"} {"_id":"doc-en-kubernetes-49444ccfa8ddd1192a5f11e96e4311d967b74ccb6e22ecae0b7e2c6973fd25e4","title":"","text":"existingPIPs []network.PublicIPAddress expectedPIP *network.PublicIPAddress expectedID string expectedDNS string expectedError bool }{ {"} {"_id":"doc-en-kubernetes-8b1ec78870345e77b86b1e9a4d599f7f9e0f0f4a0d9b0e76372c5830e161d057","title":"","text":"expectedID: \"/subscriptions/subscription/resourceGroups/rg/providers/\" + \"Microsoft.Network/publicIPAddresses/pip1\", }, { desc: \"ensurePublicIPExists shall update existed PIP's dns label\", existingPIPs: []network.PublicIPAddress{{ Name: to.StringPtr(\"pip1\"), PublicIPAddressPropertiesFormat: &network.PublicIPAddressPropertiesFormat{ DNSSettings: &network.PublicIPAddressDNSSettings{ DomainNameLabel: to.StringPtr(\"previousdns\"), }, }, }}, expectedPIP: &network.PublicIPAddress{ Name: to.StringPtr(\"pip1\"), ID: to.StringPtr(\"/subscriptions/subscription/resourceGroups/rg\" + \"/providers/Microsoft.Network/publicIPAddresses/pip1\"), PublicIPAddressPropertiesFormat: &network.PublicIPAddressPropertiesFormat{ DNSSettings: &network.PublicIPAddressDNSSettings{ DomainNameLabel: to.StringPtr(\"newdns\"), }, }, }, expectedDNS: \"newdns\", }, } for i, test := range testCases {"} {"_id":"doc-en-kubernetes-60356dcfe85c14d8fc32f3bf7b5abf78c94bf0447a7054063e6939a3b2ca6721","title":"","text":"t.Fatalf(\"TestCase[%d] meets unexpected error: %v\", i, err) } } pip, err := az.ensurePublicIPExists(&service, \"pip1\", \"\", \"\", false) pip, err := az.ensurePublicIPExists(&service, \"pip1\", test.expectedDNS, \"\", false) if test.expectedID != \"\" { assert.Equal(t, test.expectedID, 
to.String(pip.ID), \"TestCase[%d]: %s\", i, test.desc) } else {"} {"_id":"doc-en-kubernetes-1aebe7aa63a887ba25fb7e0c20be3cb3a249b2e40c794a580d0825b79ad0db1d","title":"","text":"\"io\" \"io/ioutil\" \"os\" \"regexp\" \"time\" \"github.com/Azure/azure-sdk-for-go/services/containerregistry/mgmt/2017-10-01/containerregistry\""} {"_id":"doc-en-kubernetes-e3ff496c688cc87a5e2fa64aa47becde4a842a37251429104b5a5fe51fe39224","title":"","text":"maxReadLength = 10 * 1 << 20 // 10MB ) var containerRegistryUrls = []string{\"*.azurecr.io\", \"*.azurecr.cn\", \"*.azurecr.de\", \"*.azurecr.us\"} var ( containerRegistryUrls = []string{\"*.azurecr.io\", \"*.azurecr.cn\", \"*.azurecr.de\", \"*.azurecr.us\"} acrRE = regexp.MustCompile(`.*.azurecr.io|.*.azurecr.cn|.*.azurecr.de|.*.azurecr.us`) ) // init registers the various means by which credentials may // be resolved on Azure."} {"_id":"doc-en-kubernetes-f2e8616ab2f7c5b9f223dd75e830fb0206885ae6aab407957a5b1ed1053e9ff3","title":"","text":"} func (a *acrProvider) Provide(image string) credentialprovider.DockerConfig { klog.V(4).Infof(\"try to provide secret for image %s\", image) cfg := credentialprovider.DockerConfig{} ctx, cancel := getContextWithCancel() defer cancel() if a.config.UseManagedIdentityExtension { klog.V(4).Infof(\"listing registries\") result, err := a.registryClient.List(ctx) if err != nil { klog.Errorf(\"Failed to list registries: %v\", err) return cfg } for ix := range result { loginServer := getLoginServer(result[ix]) klog.V(2).Infof(\"loginServer: %s\", loginServer) cred, err := getACRDockerEntryFromARMToken(a, loginServer) if err != nil { continue if loginServer := parseACRLoginServerFromImage(image); loginServer == \"\" { klog.V(4).Infof(\"image(%s) is not from ACR, skip MSI authentication\", image) } else { if cred, err := getACRDockerEntryFromARMToken(a, loginServer); err == nil { cfg[loginServer] = *cred } cfg[loginServer] = *cred } } else { // Add our entry for each of the supported container registry URLs"} 
{"_id":"doc-en-kubernetes-e17fb485afeb23fb2d70dac684306f2521e565dc5131cc19717ce10ac757d53e","title":"","text":"} func getACRDockerEntryFromARMToken(a *acrProvider, loginServer string) (*credentialprovider.DockerConfigEntry, error) { // Run EnsureFresh to make sure the token is valid and does not expire if err := a.servicePrincipalToken.EnsureFresh(); err != nil { klog.Errorf(\"Failed to ensure fresh service principal token: %v\", err) return nil, err } armAccessToken := a.servicePrincipalToken.OAuthToken() klog.V(4).Infof(\"discovering auth redirects for: %s\", loginServer)"} {"_id":"doc-en-kubernetes-5ab69eb35adc0dadc12bfe20e20f0531b8a55ecf11d4ea97b0d72cfe6560f70f","title":"","text":"}, nil } // parseACRLoginServerFromImage takes image as parameter and returns login server of it. // Parameter `image` is expected in following format: foo.azurecr.io/bar/imageName:version // If the provided image is not an acr image, this function will return an empty string. func parseACRLoginServerFromImage(image string) string { match := acrRE.FindAllString(image, -1) if len(match) == 1 { return match[0] } return \"\" } func (a *acrProvider) LazyProvide(image string) *credentialprovider.DockerConfigEntry { return nil }"} {"_id":"doc-en-kubernetes-50969f894a1a0c27750ecf804f108ac9ab9eddf419335dd41f4002ad1f0b4f10","title":"","text":"} } } func TestParseACRLoginServerFromImage(t *testing.T) { tests := []struct { image string expected string }{ { image: \"invalidImage\", expected: \"\", }, { image: \"docker.io/library/busybox:latest\", expected: \"\", }, { image: \"foo.azurecr.io/bar/image:version\", expected: \"foo.azurecr.io\", }, { image: \"foo.azurecr.cn/bar/image:version\", expected: \"foo.azurecr.cn\", }, { image: \"foo.azurecr.de/bar/image:version\", expected: \"foo.azurecr.de\", }, { image: \"foo.azurecr.us/bar/image:version\", expected: \"foo.azurecr.us\", }, } for _, test := range tests { if loginServer := parseACRLoginServerFromImage(test.image); loginServer != test.expected 
{ t.Errorf(\"function parseACRLoginServerFromImage returns \\\"%s\\\" for image %s, expected \\\"%s\\\"\", loginServer, test.image, test.expected) } } } "} {"_id":"doc-en-kubernetes-ecc8841359217335701171eb35a235eceacf5e7e7d2e828e9a16269f873838b0","title":"","text":"\"github.com/golang/glog\" \"github.com/xanzy/go-cloudstack/cloudstack\" \"k8s.io/api/core/v1\" cloudprovider \"k8s.io/cloud-provider\" ) type loadBalancer struct {"} {"_id":"doc-en-kubernetes-e542e0f6543131f40640ce9755f32ef3be06c82fcb70fb02bb7fbf3c30779aae","title":"","text":"// GetLoadBalancerName retrieves the name of the LoadBalancer. func (cs *CSCloud) GetLoadBalancerName(ctx context.Context, clusterName string, service *v1.Service) string { return cs.GetLoadBalancerName(ctx, clusterName, service) return cloudprovider.DefaultLoadBalancerName(service) } // getLoadBalancer retrieves the IP address and ID and all the existing rules it can find."} {"_id":"doc-en-kubernetes-e2e836054ec97697b1f7bae68182a579b48add44fe6dd5ff896a2efdcdc99382","title":"","text":"\"//pkg/api/legacyscheme:go_default_library\", \"//pkg/apis/core:go_default_library\", \"//pkg/capabilities:go_default_library\", \"//pkg/client/chaosclient:go_default_library\", \"//pkg/cloudprovider:go_default_library\", \"//pkg/cloudprovider/providers:go_default_library\", \"//pkg/credentialprovider:go_default_library\","} {"_id":"doc-en-kubernetes-3c49035bc7cc34be769eb78be0c7bc0146075b352b717ca4f2f2105bad903823","title":"","text":"\"k8s.io/kubernetes/pkg/api/legacyscheme\" api \"k8s.io/kubernetes/pkg/apis/core\" \"k8s.io/kubernetes/pkg/capabilities\" \"k8s.io/kubernetes/pkg/client/chaosclient\" \"k8s.io/kubernetes/pkg/cloudprovider\" \"k8s.io/kubernetes/pkg/credentialprovider\" \"k8s.io/kubernetes/pkg/features\""} {"_id":"doc-en-kubernetes-0e87698f6433d77fe28a15dc0ba5b9130a6225e745d31228a0491cac243f53ec","title":"","text":"clientConfig.QPS = float32(s.KubeAPIQPS) clientConfig.Burst = int(s.KubeAPIBurst) addChaosToClientConfig(s, clientConfig) return
clientConfig, nil } // addChaosToClientConfig injects random errors into client connections if configured. func addChaosToClientConfig(s *options.KubeletServer, config *restclient.Config) { if s.ChaosChance != 0.0 { config.WrapTransport = func(rt http.RoundTripper) http.RoundTripper { seed := chaosclient.NewSeed(1) // TODO: introduce a standard chaos package with more tunables - this is just a proof of concept // TODO: introduce random latency and stalls return chaosclient.NewChaosRoundTripper(rt, chaosclient.LogChaos, seed.P(s.ChaosChance, chaosclient.ErrSimulatedConnectionResetByPeer)) } } } // RunKubelet is responsible for setting up and running a kubelet. It is used in three different applications: // 1 Integration tests // 2 Kubelet binary"} {"_id":"doc-en-kubernetes-707d907c31ab6ca84d382ec1f2cac534acfb0c11fe22ac7b0e444f7ac4dbb6dd","title":"","text":"\"//pkg/auth/authorizer/abac:all-srcs\", \"//pkg/auth/nodeidentifier:all-srcs\", \"//pkg/capabilities:all-srcs\", \"//pkg/client/chaosclient:all-srcs\", \"//pkg/client/clientset_generated/internalclientset:all-srcs\", \"//pkg/client/conditions:all-srcs\", \"//pkg/client/informers/informers_generated/internalversion:all-srcs\","} {"_id":"doc-en-kubernetes-dc761b30ce2de0daa9efe06c3764d41840acba14bfc07a5a5cfac02316b8733a","title":"","text":" package(default_visibility = [\"//visibility:public\"]) load( \"@io_bazel_rules_go//go:def.bzl\", \"go_library\", \"go_test\", ) go_library( name = \"go_default_library\", srcs = [\"chaosclient.go\"], importpath = \"k8s.io/kubernetes/pkg/client/chaosclient\", deps = [\"//staging/src/k8s.io/apimachinery/pkg/util/net:go_default_library\"], ) go_test( name = \"go_default_test\", srcs = [\"chaosclient_test.go\"], embed = [\":go_default_library\"], ) filegroup( name = \"package-srcs\", srcs = glob([\"**\"]), tags = [\"automanaged\"], visibility = [\"//visibility:private\"], ) filegroup( name = \"all-srcs\", srcs = [\":package-srcs\"], tags = [\"automanaged\"], ) "} 
{"_id":"doc-en-kubernetes-b6fa9889c278e856af95f30786654e6dd79f0a603cef0fd3a0db10e4b1d8ef04","title":"","text":" reviewers: - smarterclayton - liggitt - davidopp - eparis - resouer "} {"_id":"doc-en-kubernetes-c39fbc8d5f585bdd97fd3a1acf48901e4c5b2414b1df1aebef8dce9c5f55c6f3","title":"","text":" /* Copyright 2015 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Package chaosclient makes it easy to simulate network latency, misbehaving // servers, and random errors from servers. It is intended to stress test components // under failure conditions and expose weaknesses in the error handling logic // of the codebase. package chaosclient import ( \"errors\" \"fmt\" \"log\" \"math/rand\" \"net/http\" \"reflect\" \"runtime\" \"k8s.io/apimachinery/pkg/util/net\" ) // chaosrt provides the ability to perform simulations of HTTP client failures // under the Golang http.Transport interface. type chaosrt struct { rt http.RoundTripper notify ChaosNotifier c []Chaos } // Chaos intercepts requests to a remote HTTP endpoint and can inject arbitrary // failures. type Chaos interface { // Intercept should return true if the normal flow should be skipped, and the // return response and error used instead. Modifications to the request will // be ignored, but may be used to make decisions about types of failures. Intercept(req *http.Request) (bool, *http.Response, error) } // ChaosNotifier notifies another component that the ChaosRoundTripper has simulated // a failure. 
type ChaosNotifier interface { // OnChaos is invoked when a chaotic outcome was triggered. fn is the // source of Chaos and req was the outgoing request OnChaos(req *http.Request, c Chaos) } // ChaosFunc takes an http.Request and decides whether to alter the response. It // returns true if it wishes to mutate the response, with a http.Response or // error. type ChaosFunc func(req *http.Request) (bool, *http.Response, error) // Intercept calls the nested method `Intercept` func (fn ChaosFunc) Intercept(req *http.Request) (bool, *http.Response, error) { return fn.Intercept(req) } func (fn ChaosFunc) String() string { return runtime.FuncForPC(reflect.ValueOf(fn).Pointer()).Name() } // NewChaosRoundTripper creates an http.RoundTripper that will intercept requests // based on the provided Chaos functions. The notifier is invoked when a Chaos // Intercept is fired. func NewChaosRoundTripper(rt http.RoundTripper, notify ChaosNotifier, c ...Chaos) http.RoundTripper { return &chaosrt{rt, notify, c} } // RoundTrip gives each ChaosFunc an opportunity to intercept the request. The first // interceptor wins. func (rt *chaosrt) RoundTrip(req *http.Request) (*http.Response, error) { for _, c := range rt.c { if intercept, resp, err := c.Intercept(req); intercept { rt.notify.OnChaos(req, c) return resp, err } } return rt.rt.RoundTrip(req) } var _ = net.RoundTripperWrapper(&chaosrt{}) func (rt *chaosrt) WrappedRoundTripper() http.RoundTripper { return rt.rt } // Seed represents a consistent stream of chaos. type Seed struct { *rand.Rand } // NewSeed creates an object that assists in generating random chaotic events // based on a deterministic seed. func NewSeed(seed int64) Seed { return Seed{rand.New(rand.NewSource(seed))} } type pIntercept struct { Chaos s Seed p float64 } // P returns a ChaosFunc that fires with a probability of p (p between 0.0 // and 1.0 with 0.0 meaning never and 1.0 meaning always). 
func (s Seed) P(p float64, c Chaos) Chaos { return pIntercept{c, s, p} } // Intercept intercepts requests with the provided probability p. func (c pIntercept) Intercept(req *http.Request) (bool, *http.Response, error) { if c.s.Float64() < c.p { return c.Chaos.Intercept(req) } return false, nil, nil } func (c pIntercept) String() string { return fmt.Sprintf(\"P{%f %s}\", c.p, c.Chaos) } // ErrSimulatedConnectionResetByPeer emulates the golang net error when a connection // is reset by a peer. // TODO: make this more accurate // TODO: add other error types // TODO: add a helper for returning multiple errors randomly. var ErrSimulatedConnectionResetByPeer = Error{errors.New(\"connection reset by peer\")} // Error returns the nested error when C() is invoked. type Error struct { error } // Intercept returns the nested error func (e Error) Intercept(_ *http.Request) (bool, *http.Response, error) { return true, nil, e.error } // LogChaos is the default ChaosNotifier and writes a message to the Golang log. var LogChaos = ChaosNotifier(logChaos{}) type logChaos struct{} func (logChaos) OnChaos(req *http.Request, c Chaos) { log.Printf(\"Triggered chaotic behavior for %s %s: %v\", req.Method, req.URL.String(), c) } "} {"_id":"doc-en-kubernetes-a4c16b5469fd17cb8ef530ab482c31acc2b20298d99bfbe5ab7d6d9286c6c1c3","title":"","text":" /* Copyright 2015 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package chaosclient import ( \"net/http\" \"net/http/httptest\" \"net/url\" \"testing\" ) type TestLogChaos struct { *testing.T } func (t TestLogChaos) OnChaos(req *http.Request, c Chaos) { t.Logf(\"CHAOS: chaotic behavior for %s %s: %v\", req.Method, req.URL.String(), c) } func unwrapURLError(err error) error { if urlErr, ok := err.(*url.Error); ok && urlErr != nil { return urlErr.Err } return err } func TestChaos(t *testing.T) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { w.WriteHeader(http.StatusOK) })) defer server.Close() client := http.Client{ Transport: NewChaosRoundTripper(http.DefaultTransport, TestLogChaos{t}, ErrSimulatedConnectionResetByPeer), } resp, err := client.Get(server.URL) if unwrapURLError(err) != ErrSimulatedConnectionResetByPeer.error { t.Fatalf(\"expected reset by peer: %v\", err) } if resp != nil { t.Fatalf(\"expected no response object: %#v\", resp) } } func TestPartialChaos(t *testing.T) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { w.WriteHeader(http.StatusOK) })) defer server.Close() seed := NewSeed(1) client := http.Client{ Transport: NewChaosRoundTripper( http.DefaultTransport, TestLogChaos{t}, seed.P(0.5, ErrSimulatedConnectionResetByPeer), ), } success, fail := 0, 0 for { _, err := client.Get(server.URL) if err != nil { fail++ } else { success++ } if success > 1 && fail > 1 { break } } } "} {"_id":"doc-en-kubernetes-d15cd06c83a61943f01d8f2fa440ac993ff9d6b39629d293d7b545aa1cae37ae","title":"","text":"k8s.io/kubernetes/pkg/apis/rbac/validation,erictune,0, k8s.io/kubernetes/pkg/apis/storage/validation,caesarxuchao,1, k8s.io/kubernetes/pkg/auth/authorizer/abac,liggitt,0, k8s.io/kubernetes/pkg/client/chaosclient,deads2k,1, k8s.io/kubernetes/pkg/client/legacylisters,jsafrane,1, k8s.io/kubernetes/pkg/client/listers/batch/internalversion,mqliang,0, k8s.io/kubernetes/pkg/client/listers/extensions/internalversion,eparis,1,"} 
{"_id":"doc-en-kubernetes-2a98a9d50643677c32eb18abbdb357e815ccb5f3d650086e193c4cebc2ec85c3","title":"","text":"// must be unmounted prior to detach. // If a volume with the name volumeName does not exist in the list of // attached volumes, an error is returned. SetVolumeGloballyMounted(volumeName v1.UniqueVolumeName, globallyMounted bool) error SetVolumeGloballyMounted(volumeName v1.UniqueVolumeName, globallyMounted bool, devicePath string) error // DeletePodFromVolume removes the given pod from the given volume in the // cache indicating the volume has been successfully unmounted from the pod."} {"_id":"doc-en-kubernetes-7c5172710a6b366946e544290e250c11cc7efa836673d317eb30764412f5818b","title":"","text":"} func (asw *actualStateOfWorld) MarkDeviceAsMounted( volumeName v1.UniqueVolumeName) error { return asw.SetVolumeGloballyMounted(volumeName, true /* globallyMounted */) volumeName v1.UniqueVolumeName, devicePath string) error { return asw.SetVolumeGloballyMounted(volumeName, true /* globallyMounted */, devicePath) } func (asw *actualStateOfWorld) MarkDeviceAsUnmounted( volumeName v1.UniqueVolumeName) error { return asw.SetVolumeGloballyMounted(volumeName, false /* globallyMounted */) return asw.SetVolumeGloballyMounted(volumeName, false /* globallyMounted */, \"\" /*devicePath*/) } // addVolume adds the given volume to the cache indicating the specified"} {"_id":"doc-en-kubernetes-822f103de9dc2ec5d578035b11839384cb2f49e2ad361a635c7cac411f0063aa","title":"","text":"} func (asw *actualStateOfWorld) SetVolumeGloballyMounted( volumeName v1.UniqueVolumeName, globallyMounted bool) error { volumeName v1.UniqueVolumeName, globallyMounted bool, devicePath string) error { asw.Lock() defer asw.Unlock()"} {"_id":"doc-en-kubernetes-1fe7a32ff7a41c268027ccbcfb528cf014e64c1e01b1ca3cd7df6f104b5d6722","title":"","text":"} volumeObj.globallyMounted = globallyMounted volumeObj.devicePath = devicePath asw.attachedVolumes[volumeName] = volumeObj return nil }"}
{"_id":"doc-en-kubernetes-0ddfac4983c09eb8429c5395ebad167dd7bb82556e95d70fb56dcc06652ecdae","title":"","text":"} // Act err = asw.MarkDeviceAsMounted(generatedVolumeName) err = asw.MarkDeviceAsMounted(generatedVolumeName, devicePath) // Assert if err != nil {"} {"_id":"doc-en-kubernetes-16fef5296348557a7f3ff2a03842638cb1d81c0102679ec7ceab4eacec0c20a9","title":"","text":"continue } if volume.pluginIsAttachable { err = rc.actualStateOfWorld.MarkDeviceAsMounted(volume.volumeName) err = rc.actualStateOfWorld.MarkDeviceAsMounted(volume.volumeName, volume.devicePath) if err != nil { glog.Errorf(\"Could not mark device is mounted to actual state of world: %v\", err) continue"} {"_id":"doc-en-kubernetes-349700f9e3117a1a8e78dcf719dc4bbe33dd1b59a258373d834fdef3af4209b4","title":"","text":"MarkVolumeAsUnmounted(podName volumetypes.UniquePodName, volumeName v1.UniqueVolumeName) error // Marks the specified volume as having been globally mounted. MarkDeviceAsMounted(volumeName v1.UniqueVolumeName) error MarkDeviceAsMounted(volumeName v1.UniqueVolumeName, devicePath string) error // Marks the specified volume as having its global mount unmounted. MarkDeviceAsUnmounted(volumeName v1.UniqueVolumeName) error"} {"_id":"doc-en-kubernetes-8d86d174329f8e7d5ee1501a755b0f2f9b6d38f7f0eea05322e5666ca69a5bde","title":"","text":"// Update actual state of world to reflect volume is globally mounted markDeviceMountedErr := actualStateOfWorld.MarkDeviceAsMounted( volumeToMount.VolumeName) volumeToMount.VolumeName, devicePath) if markDeviceMountedErr != nil { // On failure, return error. Caller will log and retry. 
return volumeToMount.GenerateErrorDetailed(\"MountVolume.MarkDeviceAsMounted failed\", markDeviceMountedErr)"} {"_id":"doc-en-kubernetes-22464ba153b548617c1b6e9d118cb09106870f3cf4f9427879c0766bf7064646","title":"","text":"// Update actual state of world to reflect volume is globally mounted markDeviceMappedErr := actualStateOfWorld.MarkDeviceAsMounted( volumeToMount.VolumeName) volumeToMount.VolumeName, devicePath) if markDeviceMappedErr != nil { // On failure, return error. Caller will log and retry. return volumeToMount.GenerateErrorDetailed(\"MapVolume.MarkDeviceAsMounted failed\", markDeviceMappedErr)"} {"_id":"doc-en-kubernetes-56a6ef697beab01e96c837c120ac625b9f941d042ecc0bc9a9f0bdd01e1a4161","title":"","text":"* kube-apiserver: the `Priority` admission plugin is now enabled by default when using `--enable-admission-plugins`. If using `--admission-control` to fully specify the set of admission plugins, the `Priority` admission plugin should be added if using the `PodPriority` feature, which is enabled by default in 1.11. ([#65739](https://github.com/kubernetes/kubernetes/pull/65739), [@liggitt](https://github.com/liggitt)) * The `system-node-critical` and `system-cluster-critical` priority classes are now limited to the `kube-system` namespace by the `PodPriority` admission plugin. ([#65593](https://github.com/kubernetes/kubernetes/pull/65593), [@bsalamat](https://github.com/bsalamat)) * kubernetes-worker juju charm: Added support for setting the --enable-ssl-chain-completion option on the ingress proxy. \"action required\": if your installation relies on supplying incomplete certificate chains and using OCSP to fill them in, you must set \"ingress-ssl-chain-completion\" to \"true\" in your juju configuration. ([#63845](https://github.com/kubernetes/kubernetes/pull/63845), [@paulgear](https://github.com/paulgear)) * In anticipation of CSI 1.0 in the next release, Kubernetes 1.12 calls the CSI `NodeGetInfo` RPC instead of `NodeGetId` RPC. 
Ensure your CSI Driver implements `NodeGetInfo(...)` before upgrading to 1.12. [@saad-ali](https://github.com/kubernetes/kubernetes/issues/68688) * Kubernetes 1.12 also enables [Kubelet device plugin registration](https://github.com/kubernetes/features/issues/595) feature by default. Before upgrading to 1.12, ensure the `driver-registrar` CSI sidecar container for your CSI driver is configured to handle plugin registration (set the `--kubelet-registration-path` parameter on `driver-registrar` to expose a new unix domain socket to handle Kubelet Plugin Registration). ### Other notable changes"} {"_id":"doc-en-kubernetes-220a0cf07d816a166dbf08eb8387227178420ee74456d68ec76db5a15b43a78f","title":"","text":"set -e source \"${KUBE_ROOT}/hack/lib/init.sh\" kube::util::ensure-gnu-sed function usage { echo \"This script starts a local kube cluster. \""} {"_id":"doc-en-kubernetes-e07f302731ad78c90d8821091a5ed9b3f61427167001f4ae3c7b840189661a72","title":"","text":"# foo: true # bar: false for gate in $(echo ${FEATURE_GATES} | tr ',' ' '); do echo $gate | sed -e 's/\\(.*\\)=\\(.*\\)/ \\1: \\2/' echo $gate | ${SED} -e 's/\\(.*\\)=\\(.*\\)/ \\1: \\2/' done fi >>/tmp/kube-proxy.yaml"} {"_id":"doc-en-kubernetes-a69affbbc5c958570b2b56dc29fbc34b3b3d716b6da9d9ab63c2951dd99b9c52","title":"","text":"function start_kubedns { if [[ \"${ENABLE_CLUSTER_DNS}\" = true ]]; then cp \"${KUBE_ROOT}/cluster/addons/dns/kube-dns/kube-dns.yaml.in\" kube-dns.yaml sed -i -e \"s/{{ pillar['dns_domain'] }}/${DNS_DOMAIN}/g\" kube-dns.yaml sed -i -e \"s/{{ pillar['dns_server'] }}/${DNS_SERVER_IP}/g\" kube-dns.yaml ${SED} -i -e \"s/{{ pillar['dns_domain'] }}/${DNS_DOMAIN}/g\" kube-dns.yaml ${SED} -i -e \"s/{{ pillar['dns_server'] }}/${DNS_SERVER_IP}/g\" kube-dns.yaml # TODO update to dns role once we have one.
# use kubectl to create kubedns addon ${KUBECTL} --kubeconfig=\"${CERT_DIR}/admin.kubeconfig\" --namespace=kube-system create -f kube-dns.yaml"} {"_id":"doc-en-kubernetes-17a5129d9aee90b56084ddb1d7829317d74aea1e8db88bc080f6029761fa1aaf","title":"","text":"} if len(instances) == 0 { glog.Warningf(\"the instance %s does not exist anymore\", providerID) return true, nil // returns false, because otherwise node is not deleted from cluster // false means that it will continue to check InstanceExistsByProviderID return false, nil } if len(instances) > 1 { return false, fmt.Errorf(\"multiple instances found for instance: %s\", instanceID)"} {"_id":"doc-en-kubernetes-3f40507417225133fc15b5431023ca5898ab4e404a3ce6f029440b877b7c95e9","title":"","text":"if instance.State != nil { state := aws.StringValue(instance.State.Name) // valid state for detaching volumes if state == ec2.InstanceStateNameStopped || state == ec2.InstanceStateNameTerminated { if state == ec2.InstanceStateNameStopped { return true, nil } }"} {"_id":"doc-en-kubernetes-bddc622e5822f3ef794148af10ba92e96814a468bb32c67dc99057b7e1588f17","title":"","text":"// hoping external CCM or admin can set it. Returning // default values from here will mean, no one can // override them. 
glog.Errorf(\"failed to get azure cloud in GetVolumeLimits, plugin.host: %s\", plugin.host.GetHostName()) return volumeLimits, nil return nil, fmt.Errorf(\"failed to get azure cloud in GetVolumeLimits, plugin.host: %s\", plugin.host.GetHostName()) } instances, ok := az.Instances()"} {"_id":"doc-en-kubernetes-dbb4032b8df9fc40976fc7df215f1ea8ff53add3f13cbeac8fc444b5bf461b13","title":"","text":"LogSSHResult(result) if result.Code != 0 || err != nil { return nil, fmt.Errorf(\"failed running %q: %v (exit code %d)\", cmd, err, result.Code) return nil, fmt.Errorf(\"failed running %q: %v (exit code %d, stderr %v)\", cmd, err, result.Code, result.Stderr) } return &result, nil"} {"_id":"doc-en-kubernetes-1ce92c7fe6f91d42961b73a51afd7509152cc14780da0e272f4b2eff59be4c7e","title":"","text":"zero := int64(0) // Some distributions (Ubuntu 16.04 etc.) don't support the proc file. _, err := framework.IssueSSHCommandWithResult( \"ls /proc/net/nf_conntrack\", framework.TestContext.Provider, clientNodeInfo.node) if err != nil && strings.Contains(err.Error(), \"No such file or directory\") { framework.Skipf(\"The node %s does not support /proc/net/nf_conntrack\", clientNodeInfo.name) } framework.ExpectNoError(err) clientPodSpec := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: \"e2e-net-client\","} {"_id":"doc-en-kubernetes-125df28c642b7eb936b111f34778fcc0c323dbe8dd1dd83ec1fa66da5578706f","title":"","text":"// SetCRDCondition sets the status condition. It either overwrites the existing one or creates a new one. 
func SetCRDCondition(crd *CustomResourceDefinition, newCondition CustomResourceDefinitionCondition) { newCondition.LastTransitionTime = metav1.NewTime(time.Now()) existingCondition := FindCRDCondition(crd, newCondition.Type) if existingCondition == nil { newCondition.LastTransitionTime = metav1.NewTime(time.Now()) crd.Status.Conditions = append(crd.Status.Conditions, newCondition) return } if existingCondition.Status != newCondition.Status { existingCondition.Status = newCondition.Status if existingCondition.Status != newCondition.Status || existingCondition.LastTransitionTime.IsZero() { existingCondition.LastTransitionTime = newCondition.LastTransitionTime } existingCondition.Status = newCondition.Status existingCondition.Reason = newCondition.Reason existingCondition.Message = newCondition.Message }"} {"_id":"doc-en-kubernetes-eef38b083633348d10e9ee018b017447c7fb3f8858a8064102551f33877e8422","title":"","text":"}, }, }, { name: \"set new condition which doesn't have lastTransitionTime set\", crdCondition: []CustomResourceDefinitionCondition{ { Type: Established, Status: ConditionTrue, Reason: \"Accepted\", Message: \"the initial names have been accepted\", LastTransitionTime: metav1.Date(2018, 1, 1, 0, 0, 0, 0, time.UTC), }, }, newCondition: CustomResourceDefinitionCondition{ Type: Established, Status: ConditionFalse, Reason: \"NotAccepted\", Message: \"Not accepted\", }, expectedcrdCondition: []CustomResourceDefinitionCondition{ { Type: Established, Status: ConditionFalse, Reason: \"NotAccepted\", Message: \"Not accepted\", LastTransitionTime: metav1.Date(2018, 1, 2, 0, 0, 0, 0, time.UTC), }, }, }, { name: \"append new condition which doesn't have lastTransitionTime set\", crdCondition: []CustomResourceDefinitionCondition{ { Type: Established, Status: ConditionTrue, Reason: \"Accepted\", Message: \"the initial names have been accepted\", LastTransitionTime: metav1.Date(2018, 1, 1, 0, 0, 0, 0, time.UTC), }, }, newCondition: CustomResourceDefinitionCondition{ Type: 
Terminating, Status: ConditionFalse, Reason: \"NeverEstablished\", Message: \"resource was never established\", }, expectedcrdCondition: []CustomResourceDefinitionCondition{ { Type: Established, Status: ConditionTrue, Reason: \"Accepted\", Message: \"the initial names have been accepted\", LastTransitionTime: metav1.Date(2018, 1, 1, 0, 0, 0, 0, time.UTC), }, { Type: Terminating, Status: ConditionFalse, Reason: \"NeverEstablished\", Message: \"resource was never established\", LastTransitionTime: metav1.Date(2018, 2, 1, 0, 0, 0, 0, time.UTC), }, }, }, } for _, tc := range tests { crd := generateCRDwithCondition(tc.crdCondition)"} {"_id":"doc-en-kubernetes-b58a5fb4bd995f9f6d7f0e03bc86c9cb414b5f16be4bdebaf3b6add04085c6a8","title":"","text":"if !IsCRDConditionEquivalent(&tc.expectedcrdCondition[i], &crd.Status.Conditions[i]) { t.Errorf(\"%v expected %v, got %v\", tc.name, tc.expectedcrdCondition, crd.Status.Conditions) } if crd.Status.Conditions[i].LastTransitionTime.IsZero() { t.Errorf(\"%q[%d] lastTransitionTime should not be null: %v\", tc.name, i, crd.Status.Conditions) } } } }"} {"_id":"doc-en-kubernetes-e6a60db10a9be8407fb3825141863922a07fb53284c91e5c01cfd2661ddd8c40","title":"","text":"le.maybeReportTransition() desc := le.config.Lock.Describe() if err == nil { glog.V(5).Infof(\"successfully renewed lease %v\", desc) return } le.config.Lock.RecordEvent(\"stopped leading\")"} {"_id":"doc-en-kubernetes-29de97240d7a10f7ef624203ca661e07a0431e33cc2ed890ba3af9ae586be447","title":"","text":"providers[name] = factory } // GetProviders returns the names of all currently registered providers. func GetProviders() []string { mutex.Lock() defer mutex.Unlock() var providerNames []string for name := range providers { providerNames = append(providerNames, name) } return providerNames } func init() { // \"local\" or \"skeleton\" can always be used. 
RegisterProvider(\"local\", func() (ProviderInterface, error) {"} {"_id":"doc-en-kubernetes-99c4c93d3c6d8012cef00c15ef0e1058168a071191d33f12843c0ee2810ee25d","title":"","text":"RegisterProvider(\"skeleton\", func() (ProviderInterface, error) { return NullProvider{}, nil }) // The empty string also works, but triggers a warning. RegisterProvider(\"\", func() (ProviderInterface, error) { Logf(\"The --provider flag is not set. Treating as a conformance test. Some tests may not be run.\") return NullProvider{}, nil }) } // SetupProviderConfig validates the chosen provider and creates"} {"_id":"doc-en-kubernetes-7dd70bac96f71bc3257af2c97318482ef425c2698ba1fa9ffdda73a923427fcd","title":"","text":"\"fmt\" \"io/ioutil\" \"os\" \"sort\" \"strings\" \"time\" \"github.com/onsi/ginkgo/config\""} {"_id":"doc-en-kubernetes-d1cc82801e6e626c849b1beecd39daea491429f7c706dc4aec3bb367c67887c3","title":"","text":"flag.StringVar(&TestContext.KubeVolumeDir, \"volume-dir\", \"/var/lib/kubelet\", \"Path to the directory containing the kubelet volumes.\") flag.StringVar(&TestContext.CertDir, \"cert-dir\", \"\", \"Path to the directory containing the certs. Default is empty, which doesn't use certs.\") flag.StringVar(&TestContext.RepoRoot, \"repo-root\", \"../../\", \"Root directory of kubernetes repository, for finding test files.\") flag.StringVar(&TestContext.Provider, \"provider\", \"\", \"The name of the Kubernetes provider (gce, gke, local, skeleton (the fallback if not set), etc.)\") flag.StringVar(&TestContext.Tooling, \"tooling\", \"\", \"The tooling in use (kops, gke, etc.)\") flag.StringVar(&TestContext.KubectlPath, \"kubectl-path\", \"kubectl\", \"The kubectl binary to use. 
For development, you might use 'cluster/kubectl.sh' here.\") flag.StringVar(&TestContext.OutputDir, \"e2e-output-dir\", \"/tmp\", \"Output directory for interesting/useful test data, like performance data, benchmarks, and other metrics.\")"} {"_id":"doc-en-kubernetes-3124eee514432c0429af63b0ab4ebe9c9c9b06b066f3504991cb7d4e3d771f07","title":"","text":"} // Make sure that all test runs have a valid TestContext.CloudConfig.Provider. // TODO: whether and how long this code is needed is getting discussed // in https://github.com/kubernetes/kubernetes/issues/70194. if TestContext.Provider == \"\" { // Some users of the e2e.test binary pass --provider=. // We need to support that, changing it would break those usages. Logf(\"The --provider flag is not set. Continuing as if --provider=skeleton had been used.\") TestContext.Provider = \"skeleton\" } var err error TestContext.CloudConfig.Provider, err = SetupProviderConfig(TestContext.Provider) if err == nil { return } if os.IsNotExist(errors.Cause(err)) { // Provide a more helpful error message when the provider is unknown. var providers []string for _, name := range GetProviders() { // The empty string is accepted, but looks odd in the output below unless we quote it. if name == \"\" { name = `\"\"` } providers = append(providers, name) } sort.Strings(providers) klog.Errorf(\"Unknown provider %q. 
The following providers are known: %v\", TestContext.Provider, strings.Join(providers, \" \")) } else { klog.Errorf(\"Failed to setup provider config for %q: %v\", TestContext.Provider, err) } os.Exit(1) } }"} {"_id":"doc-en-kubernetes-f49c9900400c30aa4775b00ea9fb1cc29dbdb798c10e6a9d886919ada25d0edf","title":"","text":"fi } create_csi_crd() { echo \"create_csi_crd $1\" YAML_FILE=${KUBE_ROOT}/cluster/addons/storage-crds/$1.yaml if [ -e $YAML_FILE ]; then echo \"Create $1 crd\" ${KUBECTL} --kubeconfig=\"${CERT_DIR}/admin.kubeconfig\" create -f $YAML_FILE else echo \"No $1 available.\" fi } function print_success { if [[ \"${START_MODE}\" != \"kubeletonly\" ]]; then if [[ \"${ENABLE_DAEMON}\" = false ]]; then"} {"_id":"doc-en-kubernetes-d6ef28cf1336cc67d04e4280ac9995ecd9e782a7f778877d52acabe986246609","title":"","text":"create_storage_class fi if [[ \"${FEATURE_GATES:-}\" == \"AllAlpha=true\" || \"${FEATURE_GATES:-}\" =~ \"CSIDriverRegistry=true\" ]]; then create_csi_crd \"csidriver\" fi if [[ \"${FEATURE_GATES:-}\" == \"AllAlpha=true\" || \"${FEATURE_GATES:-}\" =~ \"CSINodeInfo=true\" ]]; then create_csi_crd \"csinodeinfo\" fi print_success if [[ \"${ENABLE_DAEMON}\" = false ]]; then"} {"_id":"doc-en-kubernetes-36d1af981a2f7a57e56c0f0505737bcf02eee1720f902b1faa15386fba0c2527","title":"","text":"if err != nil { return false, fmt.Errorf(\"got error while getting pod events: %s\", err) } if len(events.Items) == 0 { // no events have occurred yet return false, nil } for _, event := range events.Items { if strings.Contains(event.Message, msg) { return true, nil } } return false, nil } }"} {"_id":"doc-en-kubernetes-0a352af816a32a2c80d4209d9be04de73012cbc63c9d2c41388b6f4f9d33e893","title":"","text":"defer func() { framework.DeletePodWithWait(f, f.ClientSet, pod) }() err = framework.WaitForPodRunningInNamespace(f.ClientSet, pod) Expect(err).To(HaveOccurred(), \"while waiting for pod to be running\") By(\"Checking for 
subpath error event\") selector := fields.Set{"} {"_id":"doc-en-kubernetes-70583c087c4cd896be9ecc635a74d36e7845959bd66534b055dc431dd82b57ec","title":"","text":"\"//staging/src/k8s.io/apimachinery/pkg/api/meta:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/apis/testapigroup/v1:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/runtime:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/runtime/schema:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/runtime/serializer:go_default_library\","} {"_id":"doc-en-kubernetes-cdb2c4511f17e094806e2e334ef0e3a77ab73e04e7fcf40e04726d7f537dbf61","title":"","text":"\"//staging/src/k8s.io/apiserver/pkg/apis/example/v1:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/endpoints/request:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/registry/rest:go_default_library\", \"//staging/src/k8s.io/client-go/kubernetes/scheme:go_default_library\", \"//vendor/github.com/evanphx/json-patch:go_default_library\", \"//vendor/github.com/google/gofuzz:go_default_library\", \"//vendor/k8s.io/utils/trace:go_default_library\","} {"_id":"doc-en-kubernetes-15b772c5e1fc947582c5e39f1ea17577ab3a44afa1f2340054486a40d35f3e7d","title":"","text":"} } // transformDecodeError adds additional information into a bad-request api error when a decode fails. 
func transformDecodeError(typer runtime.ObjectTyper, baseErr error, into runtime.Object, gvk *schema.GroupVersionKind, body []byte) error { objGVKs, _, err := typer.ObjectKinds(into) if err != nil { return errors.NewBadRequest(err.Error()) } objGVK := objGVKs[0] if gvk != nil && len(gvk.Kind) > 0 {"} {"_id":"doc-en-kubernetes-250c8253cd30053bbfd15615d7bbbc7be40dda9fa60f398b220bd16e512780ca","title":"","text":"apierrors \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured\" testapigroupv1 \"k8s.io/apimachinery/pkg/apis/testapigroup/v1\" \"k8s.io/apimachinery/pkg/runtime\" \"k8s.io/apimachinery/pkg/runtime/schema\" \"k8s.io/apimachinery/pkg/runtime/serializer\""} {"_id":"doc-en-kubernetes-6a0898daa82b8b906c547fbee05894a764e57462587318f6a2970aac3c0e79b6","title":"","text":"examplev1 \"k8s.io/apiserver/pkg/apis/example/v1\" \"k8s.io/apiserver/pkg/endpoints/request\" \"k8s.io/apiserver/pkg/registry/rest\" clientgoscheme \"k8s.io/client-go/kubernetes/scheme\" utiltrace \"k8s.io/utils/trace\" )"} {"_id":"doc-en-kubernetes-a14fcfa40f75cd92a0f2e4cf3f99f7a8e3447924dcf26ced31e77024444e3488","title":"","text":"func (f mutateObjectUpdateFunc) Admit(a admission.Attributes, o admission.ObjectInterfaces) (err error) { return f(a.GetObject(), a.GetOldObject()) } func TestTransformDecodeErrorEnsuresBadRequestError(t *testing.T) { testCases := []struct { name string typer runtime.ObjectTyper decodedGVK *schema.GroupVersionKind decodeIntoObject runtime.Object baseErr error expectedErr error }{ { name: \"decoding normal objects fails and returns a bad-request error\", typer: clientgoscheme.Scheme, decodedGVK: &schema.GroupVersionKind{ Group: testapigroupv1.GroupName, Version: \"v1\", Kind: \"Carp\", }, decodeIntoObject: &testapigroupv1.Carp{}, // which client-go's scheme doesn't recognize baseErr: fmt.Errorf(\"plain error\"), }, { name: \"decoding objects with unknown GVK fails and returns 
a bad-request error\", typer: alwaysErrorTyper{}, decodedGVK: nil, decodeIntoObject: &testapigroupv1.Carp{}, // which client-go's scheme doesn't recognize baseErr: nil, }, } for _, testCase := range testCases { err := transformDecodeError(testCase.typer, testCase.baseErr, testCase.decodeIntoObject, testCase.decodedGVK, []byte(``)) if apiStatus, ok := err.(apierrors.APIStatus); !ok || apiStatus.Status().Code != http.StatusBadRequest { t.Errorf(\"expected bad request error but got: %v\", err) } } } var _ runtime.ObjectTyper = alwaysErrorTyper{} type alwaysErrorTyper struct{} func (alwaysErrorTyper) ObjectKinds(runtime.Object) ([]schema.GroupVersionKind, bool, error) { return nil, false, fmt.Errorf(\"always error\") } func (alwaysErrorTyper) Recognizes(gvk schema.GroupVersionKind) bool { return false } "} {"_id":"doc-en-kubernetes-8437e21b771918767a9e687a9aa2e60258135c5634b9b4103b43784eb4d30874","title":"","text":"\"fmt\" \"io\" \"net\" \"strings\" \"time\" csipbv1 \"github.com/container-storage-interface/spec/lib/go/csi\" \"google.golang.org/grpc\" api \"k8s.io/api/core/v1\" utilversion \"k8s.io/apimachinery/pkg/util/version\" \"k8s.io/apimachinery/pkg/util/wait\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" \"k8s.io/klog\" \"k8s.io/kubernetes/pkg/features\""} {"_id":"doc-en-kubernetes-fe6122b6ab510cc11bd43fcaf6ae15629a6f4d158cddc6bbf4ee66b1dea00ecc","title":"","text":"err error, ) const ( initialDuration = 1 * time.Second factor = 2.0 steps = 5 ) // newV1NodeClient creates a new NodeClient with the internally used gRPC // connection set up. 
It also returns a closer which must be called to close // the gRPC connection when the NodeClient is not used anymore."} {"_id":"doc-en-kubernetes-900dccc9f43d8c12678606485e8f2d513fc8623da1641dbeb4276358955bc275","title":"","text":"accessibleTopology map[string]string, err error) { klog.V(4).Info(log(\"calling NodeGetInfo rpc\")) // TODO retries should happen at a lower layer (issue #73371) backoff := wait.Backoff{Duration: initialDuration, Factor: factor, Steps: steps} err = wait.ExponentialBackoff(backoff, func() (bool, error) { var getNodeInfoError error if c.nodeV1ClientCreator != nil { nodeID, maxVolumePerNode, accessibleTopology, getNodeInfoError = c.nodeGetInfoV1(ctx) } else if c.nodeV0ClientCreator != nil { nodeID, maxVolumePerNode, accessibleTopology, getNodeInfoError = c.nodeGetInfoV0(ctx) } if nodeID != \"\" { return true, nil } // kubelet plugin registration service not implemented is a terminal error, no need to retry if strings.Contains(getNodeInfoError.Error(), \"no handler registered for plugin type\") { return false, getNodeInfoError } // Continue with exponential backoff return false, nil }) return nodeID, maxVolumePerNode, accessibleTopology, err }"} {"_id":"doc-en-kubernetes-7c98007d5fdee52516222849fd0297fbbc790519dc41270900f58b681a998457","title":"","text":"return err } // TODO (verult) retry with exponential backoff, possibly added in csi client library. 
ctx, cancel := context.WithTimeout(context.Background(), csiTimeout) defer cancel()"} {"_id":"doc-en-kubernetes-9c57d4c5798d552bc1aa6e9638450c0650e3d4524c8df4f6e5e357dc66670a54","title":"","text":"if err != nil { klog.Error(log(\"registrationHandler.RegisterPlugin failed at CSI.NodeGetInfo: %v\", err)) if unregErr := unregisterDriver(pluginName); unregErr != nil { klog.Error(log(\"registrationHandler.RegisterPlugin failed to unregister plugin due to previous error: %v\", unregErr)) return unregErr } return err"} {"_id":"doc-en-kubernetes-8359b74da41c66ae6cd4b76528d2bbfdec4160fe4da5db8601f78081ad4c5cc5","title":"","text":"return fmt.Errorf(\"PersistentVolumeClaims %v not all in phase %s within %v\", pvcNames, phase, timeout) } // findAvailableNamespaceName finds a random namespace name starting with baseName. func findAvailableNamespaceName(baseName string, c clientset.Interface) (string, error) { var name string err := wait.PollImmediate(Poll, 30*time.Second, func() (bool, error) { name = fmt.Sprintf(\"%v-%v\", baseName, randomSuffix()) _, err := c.CoreV1().Namespaces().Get(name, metav1.GetOptions{}) if err == nil { // Already taken return false, nil } if apierrs.IsNotFound(err) { return true, nil } Logf(\"Unexpected error while getting namespace: %v\", err) return false, nil }) return name, err } // CreateTestingNS should be used by every test, note that we append a common prefix to the provided test name. // Please see NewFramework instead of using this directly. 
func CreateTestingNS(baseName string, c clientset.Interface, labels map[string]string) (*v1.Namespace, error) {"} {"_id":"doc-en-kubernetes-63ab684902c6393fb60627763290fe3fc53a3311863fe9c305a6362e3ad9c9d1","title":"","text":"} labels[\"e2e-run\"] = string(RunId) // We don't use ObjectMeta.GenerateName feature, as in case of API call // failure we don't know whether the namespace was created and what is its // name. name, err := findAvailableNamespaceName(baseName, c) if err != nil { return nil, err } namespaceObj := &v1.Namespace{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"\", Labels: labels, }, Status: v1.NamespaceStatus{}, }"} {"_id":"doc-en-kubernetes-04af920a3ae753b670023e2b7c66d786aeb988a6c17da3a72b3b66ac3680b728","title":"","text":"type azureToken struct { token adal.Token environment string clientID string tenantID string apiserverID string"} {"_id":"doc-en-kubernetes-d98977a4a7aff4fbf48b06c7a750f6387502edb1155a019645821ae7d3b091a6","title":"","text":"if refreshToken == \"\" { return nil, fmt.Errorf(\"no refresh token in cfg: %s\", cfgRefreshToken) } environment := ts.cfg[cfgEnvironment] if environment == \"\" { return nil, fmt.Errorf(\"no environment in cfg: %s\", cfgEnvironment) } clientID := ts.cfg[cfgClientID] if clientID == \"\" { return nil, fmt.Errorf(\"no client ID in cfg: %s\", cfgClientID)"} {"_id":"doc-en-kubernetes-4cd66cd0e266dc3e8ee903a380497901267a667b37564741f9e8e9a9fb29ae7c","title":"","text":"Resource: fmt.Sprintf(\"spn:%s\", apiserverID), Type: tokenType, }, environment: environment, clientID: clientID, tenantID: tenantID, apiserverID: apiserverID,"} {"_id":"doc-en-kubernetes-539cb8ff49fb6dc334c43e94d8431a8b1920cbf9e1aef004f9459fd2f3e8bbd9","title":"","text":"newCfg := make(map[string]string) newCfg[cfgAccessToken] = token.token.AccessToken newCfg[cfgRefreshToken] = token.token.RefreshToken newCfg[cfgEnvironment] = token.environment 
newCfg[cfgClientID] = token.clientID newCfg[cfgTenantID] = token.tenantID newCfg[cfgApiserverID] = token.apiserverID"} {"_id":"doc-en-kubernetes-089e36ff9fc7ffa879f373a8bb280f2f9bcb62de055b4fbf22da9b68eedcb56f","title":"","text":"} func (ts *azureTokenSource) refreshToken(token *azureToken) (*azureToken, error) { env, err := azure.EnvironmentFromName(token.environment) if err != nil { return nil, err } oauthConfig, err := adal.NewOAuthConfig(env.ActiveDirectoryEndpoint, token.tenantID) if err != nil { return nil, fmt.Errorf(\"building the OAuth configuration for token refresh: %v\", err) }"} {"_id":"doc-en-kubernetes-4b532f899f842a5efe75694da58080f7574178696ed27a0d4d7771387ffc9574","title":"","text":"return &azureToken{ token: spt.Token(), environment: token.environment, clientID: token.clientID, tenantID: token.tenantID, apiserverID: token.apiserverID,"} {"_id":"doc-en-kubernetes-4f5725782c06d956e8a7efe811c576b2eae37b1947a654457d8e0db04c07f978","title":"","text":"return &azureToken{ token: *token, environment: ts.environment.Name, clientID: ts.clientID, tenantID: ts.tenantID, apiserverID: ts.apiserverID,"} {"_id":"doc-en-kubernetes-a8cce56750e35e9e54be170bb03134eb7ee5b948a1b45c76b4d3eb2234ca94d8","title":"","text":"wantCfg := token2Cfg(token) persistedCfg := persiter.Cache() wantCfgLen := len(wantCfg) persistedCfgLen := len(persistedCfg) if wantCfgLen != persistedCfgLen { t.Errorf(\"wantCfgLen and persistedCfgLen do not match, wantCfgLen=%v, persistedCfgLen=%v\", wantCfgLen, persistedCfgLen) } for k, v := range persistedCfg { if strings.Compare(v, wantCfg[k]) != 0 { t.Errorf(\"Token() persisted cfg %s: got %v, want %v\", k, v, wantCfg[k])"} {"_id":"doc-en-kubernetes-0c47b7e79f07d71bcecc90cc1dbde0efef3827e04b06331f903ccb3b3cbc9fac","title":"","text":"func (ts *fakeTokenSource) Token() (*azureToken, error) { return &azureToken{ token: newFackeAzureToken(ts.accessToken, 
ts.expiresOn), environment: \"testenv\", clientID: \"fake\", tenantID: \"fake\", apiserverID: \"fake\","} {"_id":"doc-en-kubernetes-7d0d990cd50cf046087b3d52a631733ed0ca7f53332b30e1d5d95be2bfc78f17","title":"","text":"cfg := make(map[string]string) cfg[cfgAccessToken] = token.token.AccessToken cfg[cfgRefreshToken] = token.token.RefreshToken cfg[cfgEnvironment] = token.environment cfg[cfgClientID] = token.clientID cfg[cfgTenantID] = token.tenantID cfg[cfgApiserverID] = token.apiserverID"} {"_id":"doc-en-kubernetes-f040fe5644e671e982af9893b2534d7e5b775117b27f587115806b55f7e62e03","title":"","text":"nodestatus.PIDPressureCondition(kl.clock.Now, kl.evictionManager.IsUnderPIDPressure, kl.recordNodeStatusEvent), nodestatus.ReadyCondition(kl.clock.Now, kl.runtimeState.runtimeErrors, kl.runtimeState.networkErrors, validateHostFunc, kl.containerManager.Status, kl.recordNodeStatusEvent), nodestatus.VolumesInUse(kl.volumeManager.ReconcilerStatesHasBeenSynced, kl.volumeManager.GetVolumesInUse), nodestatus.RemoveOutOfDiskCondition(), // TODO(mtaufen): I decided not to move this setter for now, since all it does is send an event // and record state back to the Kubelet runtime object. 
In the future, I'd like to isolate // these side-effects by decoupling the decisions to send events and partial status recording"} {"_id":"doc-en-kubernetes-4ade082c4e52ff48f60d0fe5ac10ed4ca6b7322ed883510496959d159519c47b","title":"","text":"return nil } } // RemoveOutOfDiskCondition removes stale OutOfDisk condition // OutOfDisk condition has been removed from kubelet in 1.12 func RemoveOutOfDiskCondition() Setter { return func(node *v1.Node) error { var conditions []v1.NodeCondition for i := range node.Status.Conditions { if node.Status.Conditions[i].Type != v1.NodeOutOfDisk { conditions = append(conditions, node.Status.Conditions[i]) } } node.Status.Conditions = conditions return nil } } "} {"_id":"doc-en-kubernetes-66c091b0793ca0ccaf9c5770f8d241a285c18559e4579850116bd63bfa9f078d","title":"","text":"} } func TestRemoveOutOfDiskCondition(t *testing.T) { now := time.Now() var cases = []struct { desc string inputNode *v1.Node expectNode *v1.Node }{ { desc: \"should remove stale OutOfDiskCondition from node status\", inputNode: &v1.Node{ Status: v1.NodeStatus{ Conditions: []v1.NodeCondition{ *makeMemoryPressureCondition(false, now, now), { Type: v1.NodeOutOfDisk, Status: v1.ConditionFalse, }, *makeDiskPressureCondition(false, now, now), }, }, }, expectNode: &v1.Node{ Status: v1.NodeStatus{ Conditions: []v1.NodeCondition{ *makeMemoryPressureCondition(false, now, now), *makeDiskPressureCondition(false, now, now), }, }, }, }, } for _, tc := range cases { t.Run(tc.desc, func(t *testing.T) { // construct setter setter := RemoveOutOfDiskCondition() // call setter on node if err := setter(tc.inputNode); err != nil { t.Fatalf(\"unexpected error: %v\", err) } // check expected node assert.True(t, apiequality.Semantic.DeepEqual(tc.expectNode, tc.inputNode), \"Diff: %s\", diff.ObjectDiff(tc.expectNode, tc.inputNode)) }) } } // Test Helpers: // sortableNodeAddress is a type for sorting []v1.NodeAddress"} 
{"_id":"doc-en-kubernetes-ee257af44e77a909d912997ff6303098bdbecc5162591f3e7a2d324dd78bdf08","title":"","text":"go_test( name = \"go_default_test\", srcs = [ \"eviction_test.go\", \"storage_test.go\", ], embed = [\":go_default_library\"], deps = [ \"//pkg/apis/core:go_default_library\", \"//pkg/apis/policy:go_default_library\", \"//pkg/client/clientset_generated/internalclientset/fake:go_default_library\", \"//pkg/registry/registrytest:go_default_library\", \"//pkg/securitycontext:go_default_library\", \"//staging/src/k8s.io/api/core/v1:go_default_library\","} {"_id":"doc-en-kubernetes-eec4aa3499e09cb0a93f8ba9e4cb5fc2cc2aec9abd0fdc9abd118dd3542ef32c","title":"","text":"// At this point there was either no PDB or we succeeded in decrementing // Try the delete deletionOptions := eviction.DeleteOptions if deletionOptions == nil { // default to non-nil to trigger graceful deletion deletionOptions = &metav1.DeleteOptions{} } _, _, err = r.store.Delete(ctx, eviction.Name, deletionOptions) if err != nil { return nil, err }"} {"_id":"doc-en-kubernetes-956940777e2c22051beeaa34d172dfbe6228c3a199b16221a45c1f790e7819b3","title":"","text":" /* Copyright 2019 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package storage import ( \"testing\" apierrors \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/runtime\" genericapirequest \"k8s.io/apiserver/pkg/endpoints/request\" api \"k8s.io/kubernetes/pkg/apis/core\" \"k8s.io/kubernetes/pkg/apis/policy\" \"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/fake\" ) func TestEviction(t *testing.T) { testcases := []struct { name string pdbs []runtime.Object pod *api.Pod eviction *policy.Eviction expectError bool expectDeleted bool }{ { name: \"no pdbs, unscheduled pod, nil delete options, deletes immediately\", pdbs: nil, pod: validNewPod(), eviction: &policy.Eviction{ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}}, expectDeleted: true, }, { name: \"no pdbs, scheduled pod, nil delete options, deletes gracefully\", pdbs: nil, pod: func() *api.Pod { pod := validNewPod(); pod.Spec.NodeName = \"foo\"; return pod }(), eviction: &policy.Eviction{ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}}, expectDeleted: false, // not deleted immediately because of graceful deletion }, { name: \"no pdbs, scheduled pod, empty delete options, deletes gracefully\", pdbs: nil, pod: func() *api.Pod { pod := validNewPod(); pod.Spec.NodeName = \"foo\"; return pod }(), eviction: &policy.Eviction{ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}, DeleteOptions: &metav1.DeleteOptions{}}, expectDeleted: false, // not deleted immediately because of graceful deletion }, { name: \"no pdbs, scheduled pod, graceless delete options, deletes immediately\", pdbs: nil, pod: func() *api.Pod { pod := validNewPod(); pod.Spec.NodeName = \"foo\"; return pod }(), eviction: &policy.Eviction{ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}, DeleteOptions: metav1.NewDeleteOptions(0)}, expectDeleted: true, }, { name: \"matching pdbs with no disruptions allowed\", pdbs: 
[]runtime.Object{&policy.PodDisruptionBudget{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}, Spec: policy.PodDisruptionBudgetSpec{Selector: &metav1.LabelSelector{MatchLabels: map[string]string{\"a\": \"true\"}}}, Status: policy.PodDisruptionBudgetStatus{PodDisruptionsAllowed: 0}, }}, pod: func() *api.Pod { pod := validNewPod() pod.Labels = map[string]string{\"a\": \"true\"} pod.Spec.NodeName = \"foo\" return pod }(), eviction: &policy.Eviction{ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}, DeleteOptions: metav1.NewDeleteOptions(0)}, expectError: true, }, { name: \"matching pdbs with disruptions allowed\", pdbs: []runtime.Object{&policy.PodDisruptionBudget{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}, Spec: policy.PodDisruptionBudgetSpec{Selector: &metav1.LabelSelector{MatchLabels: map[string]string{\"a\": \"true\"}}}, Status: policy.PodDisruptionBudgetStatus{PodDisruptionsAllowed: 1}, }}, pod: func() *api.Pod { pod := validNewPod() pod.Labels = map[string]string{\"a\": \"true\"} pod.Spec.NodeName = \"foo\" return pod }(), eviction: &policy.Eviction{ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}, DeleteOptions: metav1.NewDeleteOptions(0)}, expectDeleted: true, }, { name: \"non-matching pdbs\", pdbs: []runtime.Object{&policy.PodDisruptionBudget{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}, Spec: policy.PodDisruptionBudgetSpec{Selector: &metav1.LabelSelector{MatchLabels: map[string]string{\"b\": \"true\"}}}, Status: policy.PodDisruptionBudgetStatus{PodDisruptionsAllowed: 0}, }}, pod: func() *api.Pod { pod := validNewPod() pod.Labels = map[string]string{\"a\": \"true\"} pod.Spec.NodeName = \"foo\" return pod }(), eviction: &policy.Eviction{ObjectMeta: metav1.ObjectMeta{Name: \"foo\", Namespace: \"default\"}, DeleteOptions: metav1.NewDeleteOptions(0)}, expectDeleted: true, }, } for _, tc := range testcases { t.Run(tc.name, func(t *testing.T) { 
testContext := genericapirequest.WithNamespace(genericapirequest.NewContext(), metav1.NamespaceDefault) storage, _, _, server := newStorage(t) defer server.Terminate(t) defer storage.Store.DestroyFunc() if tc.pod != nil { if _, err := storage.Create(testContext, tc.pod, nil, &metav1.CreateOptions{}); err != nil { t.Error(err) } } client := fake.NewSimpleClientset(tc.pdbs...) evictionRest := newEvictionStorage(storage.Store, client.Policy()) _, err := evictionRest.Create(testContext, tc.eviction, nil, &metav1.CreateOptions{}) if (err != nil) != tc.expectError { t.Errorf(\"expected error=%v, got %v\", tc.expectError, err) return } if tc.expectError { return } if tc.pod != nil { existingPod, err := storage.Get(testContext, tc.pod.Name, &metav1.GetOptions{}) if tc.expectDeleted { if !apierrors.IsNotFound(err) { t.Errorf(\"expected to be deleted, lookup returned %#v\", existingPod) } return } else if apierrors.IsNotFound(err) { t.Errorf(\"expected graceful deletion, got %v\", err) return } if err != nil { t.Errorf(\"%#v\", err) return } if existingPod.(*api.Pod).DeletionTimestamp == nil { t.Errorf(\"expected gracefully deleted pod with deletionTimestamp set, got %#v\", existingPod) } } }) } } "} {"_id":"doc-en-kubernetes-e99d757f7cb833dd982e7a60aeb8d36d0d6cbba03e6139eaf704359c2a5e9556","title":"","text":"} createNode2NormalPodEviction := func(client clientset.Interface) func() error { return func() error { zero := int64(0) return client.Policy().Evictions(\"ns\").Evict(&policy.Eviction{ TypeMeta: metav1.TypeMeta{ APIVersion: \"policy/v1beta1\","} {"_id":"doc-en-kubernetes-c4fe8052c3b0459d1f7ccb441047c8ad17706af9e1687a73bb029459b6d36e87","title":"","text":"Name: \"node2normalpod\", Namespace: \"ns\", }, DeleteOptions: &metav1.DeleteOptions{GracePeriodSeconds: &zero}, }) } } createNode2MirrorPodEviction := func(client clientset.Interface) func() error { return func() error { zero := int64(0) return client.Policy().Evictions(\"ns\").Evict(&policy.Eviction{ TypeMeta: 
metav1.TypeMeta{ APIVersion: \"policy/v1beta1\","} {"_id":"doc-en-kubernetes-b2191605d871de0a265683ec884acc97ba38a17259c9cf0590f0c81ae48e97e1","title":"","text":"Name: \"node2mirrorpod\", Namespace: \"ns\", }, DeleteOptions: &metav1.DeleteOptions{GracePeriodSeconds: &zero}, }) } }"} {"_id":"doc-en-kubernetes-387370c29d14d810fd240ace1aacaa9f4b658c6f342bbe0bb70eb475f1705fe2","title":"","text":"import ( \"context\" \"fmt\" \"path\" \"sync\" \"sync/atomic\" \"time\""} {"_id":"doc-en-kubernetes-1b08f8fa97e9fd0473c753acf9ddae7d6aaae16fcca42ce43d82e13b82fa119b","title":"","text":"client := clientValue.Load().(*clientv3.Client) ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() // See https://github.com/etcd-io/etcd/blob/master/etcdctl/ctlv3/command/ep_command.go#L118 _, err := client.Get(ctx, path.Join(c.Prefix, \"health\")) if err == nil { return nil } return fmt.Errorf(\"error getting data from etcd: %v\", err) }, nil }"} {"_id":"doc-en-kubernetes-ce015c87df116d623a6afdf5fd18940767194a6a6da911a161c6f57803974af3","title":"","text":"} } // Mount : mounts source to target with given options. 
// currently only supports cifs(smb), bind mount(for disk) func (mounter *Mounter) Mount(source string, target string, fstype string, options []string) error { target = normalizeWindowsPath(target) if source == \"tmpfs\" { klog.V(3).Infof(\"mounting source (%q), target (%q), with options (%q)\", source, target, options) return os.MkdirAll(target, 0755) }"}
{"_id":"doc-en-kubernetes-ce51aec141a822e4ed3997accf85cc21f71f76028d3fa6bd46cce5daeae04b41","title":"","text":"return err } klog.V(4).Infof(\"mount options(%q) source:%q, target:%q, fstype:%q, begin to mount\", options, source, target, fstype) bindSource := source // tell it's going to mount azure disk or azure file according to options if bind, _, _ := isBind(options); bind {"}
{"_id":"doc-en-kubernetes-a2cb8bdcc942501e7b45d0ac639f4a515307a9d86fd5e4fb21186ab882e3ab22","title":"","text":"bindSource = normalizeWindowsPath(source) } else { if len(options) < 2 { klog.Warningf(\"mount options(%q) command number(%d) less than 2, source:%q, target:%q, skip mounting\", options, len(options), source, target) return nil } // currently only cifs mount is supported if strings.ToLower(fstype) != \"cifs\" { return fmt.Errorf(\"only cifs mount is supported now, fstype: %q, mounting source (%q), target (%q), with options (%q)\", fstype, source, target, options) } bindSource = source if output, err := newSMBMapping(options[0], options[1], source); err != nil { if isSMBMappingExist(source) { klog.V(2).Infof(\"SMB Mapping(%s) already exists, now begin to remove and remount\", source) if output, err := removeSMBMapping(source); err != nil { return fmt.Errorf(\"Remove-SmbGlobalMapping failed: %v, output: %q\", err, output) } if output, err := newSMBMapping(options[0], options[1], source); err != nil { return fmt.Errorf(\"New-SmbGlobalMapping remount failed: %v, output: %q\", err, output) } } else { return fmt.Errorf(\"New-SmbGlobalMapping failed: %v, output: %q\", err, output) } } }"}
{"_id":"doc-en-kubernetes-71a6d2fdd07d97361f94081c54ca0df81d389a8f4c1f6b69fcbe8f487c666ce3","title":"","text":"return nil } // do the SMB mount with username, password, remotepath // return (output, error) func newSMBMapping(username, password, remotepath string) (string, error) { if username == \"\" || password == \"\" || remotepath == \"\" { return \"\", fmt.Errorf(\"invalid parameter(username: %s, password: %s, remotepath: %s)\", username, password, remotepath) } // use PowerShell Environment Variables to store user input string to prevent command line injection // https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-5.1 cmdLine := `$PWord = ConvertTo-SecureString -String $Env:smbpassword -AsPlainText -Force` + `;$Credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $Env:smbuser, $PWord` + `;New-SmbGlobalMapping -RemotePath $Env:smbremotepath -Credential $Credential` cmd := exec.Command(\"powershell\", \"/c\", cmdLine) cmd.Env = append(os.Environ(), fmt.Sprintf(\"smbuser=%s\", username), fmt.Sprintf(\"smbpassword=%s\", password), fmt.Sprintf(\"smbremotepath=%s\", remotepath)) output, err := cmd.CombinedOutput() return string(output), err } // check whether remotepath is already mounted func isSMBMappingExist(remotepath string) bool { cmd := exec.Command(\"powershell\", \"/c\", `Get-SmbGlobalMapping -RemotePath $Env:smbremotepath`) cmd.Env = append(os.Environ(), fmt.Sprintf(\"smbremotepath=%s\", remotepath)) _, err := cmd.CombinedOutput() return err == nil } // remove SMB mapping func removeSMBMapping(remotepath string) (string, error) { cmd := exec.Command(\"powershell\", \"/c\", `Remove-SmbGlobalMapping -RemotePath $Env:smbremotepath -Force`) cmd.Env = append(os.Environ(), fmt.Sprintf(\"smbremotepath=%s\", remotepath)) output, err := cmd.CombinedOutput() return string(output), err } // Unmount unmounts the target.
func (mounter *Mounter) Unmount(target string) error { klog.V(4).Infof(\"azureMount: Unmount target (%q)\", target)"} {"_id":"doc-en-kubernetes-d47984bf983e95b842ef4be00fe353e1bb8948996c3e90d2fe94e042b8f2dfd2","title":"","text":"} } } func TestNewSMBMapping(t *testing.T) { tests := []struct { username string password string remotepath string expectError bool }{ { \"\", \"password\", `remotepath`, true, }, { \"username\", \"\", `remotepath`, true, }, { \"username\", \"password\", \"\", true, }, } for _, test := range tests { _, err := newSMBMapping(test.username, test.password, test.remotepath) if test.expectError { assert.NotNil(t, err, \"Expect error during newSMBMapping(%s, %s, %s, %v)\", test.username, test.password, test.remotepath) } else { assert.Nil(t, err, \"Expect error is nil during newSMBMapping(%s, %s, %s, %v)\", test.username, test.password, test.remotepath) } } } "} {"_id":"doc-en-kubernetes-326b1166293395eac03d777910db2588a3f1dcb02baf622fbd4dbb5e4098eb1f","title":"","text":"name = \"go_default_library\", srcs = [ \"density.go\", \"dns.go\", \"framework.go\", \"gmsa.go\", \"hybrid_network.go\","} {"_id":"doc-en-kubernetes-f0c794665e94cef32132ecb9f8c8484990771cdda799d35619c3cbe4c90571a1","title":"","text":" /* Copyright 2019 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package windows import ( \"regexp\" \"strings\" v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/kubernetes/test/e2e/framework\" e2elog \"k8s.io/kubernetes/test/e2e/framework/log\" imageutils \"k8s.io/kubernetes/test/utils/image\" \"github.com/onsi/ginkgo\" ) const dnsTestPodHostName = \"dns-querier-1\" const dnsTestServiceName = \"dns-test-service\" var _ = SIGDescribe(\"DNS\", func() { ginkgo.BeforeEach(func() { framework.SkipUnlessNodeOSDistroIs(\"windows\") }) f := framework.NewDefaultFramework(\"dns\") ginkgo.It(\"should support configurable pod DNS servers\", func() { ginkgo.By(\"Preparing a test DNS service with injected DNS names...\") testInjectedIP := \"1.1.1.1\" testSearchPath := \"resolv.conf.local\" ginkgo.By(\"Creating a pod with dnsPolicy=None and customized dnsConfig...\") testUtilsPod := generateDNSUtilsPod() testUtilsPod.Spec.DNSPolicy = v1.DNSNone testUtilsPod.Spec.DNSConfig = &v1.PodDNSConfig{ Nameservers: []string{testInjectedIP}, Searches: []string{testSearchPath}, } testUtilsPod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(testUtilsPod) framework.ExpectNoError(err) e2elog.Logf(\"Created pod %v\", testUtilsPod) defer func() { e2elog.Logf(\"Deleting pod %s...\", testUtilsPod.Name) if err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Delete(testUtilsPod.Name, metav1.NewDeleteOptions(0)); err != nil { e2elog.Failf(\"Failed to delete pod %s: %v\", testUtilsPod.Name, err) } }() framework.ExpectNoError(f.WaitForPodRunning(testUtilsPod.Name), \"failed to wait for pod %s to be running\", testUtilsPod.Name) ginkgo.By(\"Verifying customized DNS option is configured on pod...\") cmd := []string{\"ipconfig\", \"/all\"} stdout, _, err := f.ExecWithOptions(framework.ExecOptions{ Command: cmd, Namespace: f.Namespace.Name, PodName: testUtilsPod.Name, ContainerName: \"util\", CaptureStdout: true, CaptureStderr: true, }) framework.ExpectNoError(err) e2elog.Logf(\"ipconfig /all:n%s\", stdout) dnsRegex, err := 
regexp.Compile(`DNS Servers[\s*.]*:(\s*[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})+`) if dnsRegex.MatchString(stdout) { match := dnsRegex.FindString(stdout) if !strings.Contains(match, testInjectedIP) { e2elog.Failf(\"customized DNS options not found in ipconfig /all, got: %s\", match) } } else { e2elog.Failf(\"cannot find DNS server info in ipconfig /all output: \n%s\", stdout) } // TODO: Add more test cases for other DNSPolicies. }) }) func generateDNSUtilsPod() *v1.Pod { return &v1.Pod{ TypeMeta: metav1.TypeMeta{ Kind: \"Pod\", }, ObjectMeta: metav1.ObjectMeta{ GenerateName: \"e2e-dns-utils-\", }, Spec: v1.PodSpec{ Containers: []v1.Container{ { Name: \"util\", Image: imageutils.GetE2EImage(imageutils.Dnsutils), Command: []string{\"sleep\", \"10000\"}, }, }, }, } } "} {"_id":"doc-en-kubernetes-5c406def9f19d087b43622eba3441511fcbc5e75949e569c5e6683a747c5dd5b","title":"","text":"if spec.IsKubeletExpandable() { // for kubelet expandable volumes, return a noop plugin that // returns success for expand on the controller klog.V(4).Infof(\"FindExpandablePluginBySpec(%s) -> returning noopExpandableVolumePluginInstance\", spec.Name()) return &noopExpandableVolumePluginInstance{spec}, nil } klog.V(4).Infof(\"FindExpandablePluginBySpec(%s) -> err:%v\", spec.Name(), err) return nil, err }"} {"_id":"doc-en-kubernetes-21e119333236d4f8db2ed7f7b4cb2f60caaf61469f0957c8b4c7922fd65b0be7","title":"","text":"optional APIServiceStatus status = 3; } // APIServiceCondition describes the state of an APIService at a particular point message APIServiceCondition { // Type is the type of the condition.
optional string type = 1;"} {"_id":"doc-en-kubernetes-4267b3ec6dbc8c0d9d2ff70a382a0d4f2bde6b5200d2abbf5ffeb35ee35c28ae","title":"","text":"# Admission Controllers to invoke prior to persisting objects in cluster ENABLE_ADMISSION_PLUGINS=\"LimitRanger,ResourceQuota\" DISABLE_ADMISSION_PLUGINS=\"ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook\" DISABLE_ADMISSION_PLUGINS=\"ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,StorageObjectInUseProtection\" # Include RBAC (to exercise bootstrapping), and AlwaysAllow to allow all actions AUTHORIZATION_MODE=\"RBAC,AlwaysAllow\""} {"_id":"doc-en-kubernetes-6a354443d63dcc7852dfc234aff0605708480aaff761d6bac5fcd6004a11d5e8","title":"","text":"ETCDCTL=$(which etcdctl) KUBECTL=\"${KUBE_OUTPUT_HOSTBIN}/kubectl\" UPDATE_ETCD_OBJECTS_SCRIPT=\"${KUBE_ROOT}/cluster/update-storage-objects.sh\" DISABLE_ADMISSION_PLUGINS=\"ServiceAccount,NamespaceLifecycle,LimitRanger,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PersistentVolumeLabel,DefaultStorageClass\" DISABLE_ADMISSION_PLUGINS=\"ServiceAccount,NamespaceLifecycle,LimitRanger,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection\" function startApiServer() { local storage_versions=${1:-\"\"}"} {"_id":"doc-en-kubernetes-6bcb65b488b20e53230f85dbdfe443195b733d9c5a2d44a4a3b8683a9f562d70","title":"","text":"// DefaultOffAdmissionPlugins get admission plugins off by default for kube-apiserver. 
func DefaultOffAdmissionPlugins() sets.String { defaultOnPlugins := sets.NewString( lifecycle.PluginName, //NamespaceLifecycle limitranger.PluginName, //LimitRanger serviceaccount.PluginName, //ServiceAccount setdefault.PluginName, //DefaultStorageClass resize.PluginName, //PersistentVolumeClaimResize defaulttolerationseconds.PluginName, //DefaultTolerationSeconds mutatingwebhook.PluginName, //MutatingAdmissionWebhook validatingwebhook.PluginName, //ValidatingAdmissionWebhook resourcequota.PluginName, //ResourceQuota storageobjectinuseprotection.PluginName, //StorageObjectInUseProtection ) if utilfeature.DefaultFeatureGate.Enabled(features.PodPriority) {"} {"_id":"doc-en-kubernetes-7d6dad6af470623bf82d49ff3be1f0dc5b9ed002cc522acd8d85a3c575c266a8","title":"","text":"var hns HostNetworkService hns = hnsV1{} supportedFeatures := hcn.GetSupportedFeatures() if supportedFeatures.RemoteSubnet { hns = hnsV2{} }"} {"_id":"doc-en-kubernetes-461c1cfc2820adde1e2bb4addde373da2c67ab09a9a4561e14fe1f21b7d1bde2","title":"","text":"} By(\"creating the pod with failed condition\") var podClient *framework.PodClient podClient = f.PodClient() pod = podClient.Create(pod) err :=
framework.WaitTimeoutForPodRunningInNamespace(f.ClientSet, pod.Name, pod.Namespace, framework.PodStartShortTimeout) Expect(err).To(HaveOccurred(), \"while waiting for pod to be running\") By(\"updating the pod\") podClient.Update(podName, func(pod *v1.Pod) {"} {"_id":"doc-en-kubernetes-6f81eb5f18c35d95c1265384fae11abfc82c1db09cff6bbc4ae106e0ac551783","title":"","text":"} By(\"creating the pod\") pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(pod) var podClient *framework.PodClient podClient = f.PodClient() pod = podClient.Create(pod) By(\"waiting for pod running\") err = framework.WaitTimeoutForPodRunningInNamespace(f.ClientSet, pod.Name, pod.Namespace, framework.PodStartShortTimeout) err := framework.WaitTimeoutForPodRunningInNamespace(f.ClientSet, pod.Name, pod.Namespace, framework.PodStartShortTimeout) Expect(err).NotTo(HaveOccurred(), \"while waiting for pod to be running\") By(\"creating a file in subpath\")"} {"_id":"doc-en-kubernetes-43d645af64ccd6471e724a890fbf07dca6cd627bad6c8b9bccaa30b3789f5ff1","title":"","text":"} By(\"updating the annotation value\") var podClient *framework.PodClient podClient = f.PodClient() podClient.Update(podName, func(pod *v1.Pod) { pod.ObjectMeta.Annotations[\"mysubpath\"] = \"mynewpath\" })"} {"_id":"doc-en-kubernetes-8f706dc69fcfd3384580aaa8f75853060d1fb2ee8f56c0b2ff0ccfec0b60f580","title":"","text":"// Start pod By(fmt.Sprintf(\"Creating pod %s\", pod.Name)) pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(pod) Expect(err).ToNot(HaveOccurred(), \"while creating pod\") var podClient *framework.PodClient podClient = f.PodClient() pod = podClient.Create(pod) defer func() { framework.DeletePodWithWait(f, f.ClientSet, pod) }() err = framework.WaitForPodRunningInNamespace(f.ClientSet, pod) err := framework.WaitForPodRunningInNamespace(f.ClientSet, pod) Expect(err).ToNot(HaveOccurred(), \"while waiting for pod to be running\") var podClient *framework.PodClient podClient = f.PodClient() By(\"updating 
the pod\") podClient.Update(podName, func(pod *v1.Pod) { pod.ObjectMeta.Annotations = map[string]string{\"mysubpath\": \"newsubpath\"}"} {"_id":"doc-en-kubernetes-c65e24624e3311d679b55f03d170478739e91461856ba324dd88a1d791668ca8","title":"","text":"func testPodFailSubpath(f *framework.Framework, pod *v1.Pod) { pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(pod) Expect(err).ToNot(HaveOccurred(), \"while creating pod\") var podClient *framework.PodClient podClient = f.PodClient() pod = podClient.Create(pod) defer func() { framework.DeletePodWithWait(f, f.ClientSet, pod) }() err = framework.WaitTimeoutForPodRunningInNamespace(f.ClientSet, pod.Name, pod.Namespace, framework.PodStartShortTimeout) err := framework.WaitTimeoutForPodRunningInNamespace(f.ClientSet, pod.Name, pod.Namespace, framework.PodStartShortTimeout) Expect(err).To(HaveOccurred(), \"while waiting for pod to be running\") }"} {"_id":"doc-en-kubernetes-4ced30d4993e469fe05e14443f0304caf18841914751d61e033ea4b715722700","title":"","text":"var _ = utils.SIGDescribe(\"CSI mock volume\", func() { type testParameters struct { attachable bool disableAttach bool attachLimit int registerDriver bool podInfoVersion *string"} {"_id":"doc-en-kubernetes-8c5fe8aafb80b59cc31861b9d1de23e5abbf43274d1f1989e3a03b8d4d1d5192","title":"","text":"csics := f.CSIClientSet var err error m.driver = drivers.InitMockCSIDriver(tp.registerDriver, tp.attachable, tp.podInfoVersion, tp.attachLimit) m.driver = drivers.InitMockCSIDriver(tp.registerDriver, !tp.disableAttach, tp.podInfoVersion, tp.attachLimit) config, testCleanup := m.driver.PrepareTest(f) m.testCleanups = append(m.testCleanups, testCleanup) m.config = config"} {"_id":"doc-en-kubernetes-b4942d1e3ae82d318fcc3eec30b30ac3a1ccba14a26dc76c4ddc3f41de3e611b","title":"","text":"// The CSIDriverRegistry feature gate is needed for this test in Kubernetes 1.12. 
Context(\"CSI attach test using mock driver [Feature:CSIDriverRegistry]\", func() { tests := []struct { name string driverAttachable bool deployDriverCRD bool name string disableAttach bool deployDriverCRD bool }{ { name: \"should not require VolumeAttach for drivers without attachment\", driverAttachable: false, deployDriverCRD: true, name: \"should not require VolumeAttach for drivers without attachment\", disableAttach: true, deployDriverCRD: true, }, { name: \"should require VolumeAttach for drivers with attachment\", driverAttachable: true, deployDriverCRD: true, name: \"should require VolumeAttach for drivers with attachment\", deployDriverCRD: true, }, { name: \"should preserve attachment policy when no CSIDriver present\", driverAttachable: true, deployDriverCRD: false, name: \"should preserve attachment policy when no CSIDriver present\", deployDriverCRD: false, }, } for _, t := range tests { test := t It(t.name, func() { var err error init(testParameters{registerDriver: test.deployDriverCRD, attachable: test.driverAttachable}) init(testParameters{registerDriver: test.deployDriverCRD, disableAttach: test.disableAttach}) defer cleanup() _, claim, pod := createPod()"} {"_id":"doc-en-kubernetes-d24b0902f4623ec0e625ea5ea18ab5cacbc10065e5aad2dbe9c4f94d19887768","title":"","text":"_, err = m.cs.StorageV1beta1().VolumeAttachments().Get(attachmentName, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { if test.driverAttachable { if !test.disableAttach { framework.ExpectNoError(err, \"Expected VolumeAttachment but none was found\") } } else { framework.ExpectNoError(err, \"Failed to find VolumeAttachment\") } } if !test.driverAttachable { if test.disableAttach { Expect(err).To(HaveOccurred(), \"Unexpected VolumeAttachment found\") } })"} {"_id":"doc-en-kubernetes-ed9879deabe4187cdbf8eb4c52a8d5634645821a5a521791ad3abbbf63208bfd","title":"","text":"// define volume limit to be 2 for this test var err error init(testParameters{attachable: true, 
nodeSelectorKey: \"node-attach-limit-csi\", attachLimit: 2}) init(testParameters{nodeSelectorKey: \"node-attach-limit-csi\", attachLimit: 2}) defer cleanup() nodeName := m.config.ClientNodeName attachKey := v1.ResourceName(volumeutil.GetCSIAttachLimitKey(m.provisioner))"} {"_id":"doc-en-kubernetes-84331981d2f8dbdadd0c4a3931e6fbf822afd1ab6156e104d1aaa822b5c74faa","title":"","text":"func TestGetZone(t *testing.T) { cloud := &Cloud{ Config: Config{ Location: \"eastus\", Location: \"eastus\", UseInstanceMetadata: true, }, } testcases := []struct {"} {"_id":"doc-en-kubernetes-e2a48b3c53b9b3fb3c4a6c3128cf0e8c188c016ea7e8e2ccabe26df16cfe5be4","title":"","text":"import ( \"context\" \"fmt\" \"os\" \"strconv\" \"strings\""} {"_id":"doc-en-kubernetes-f433fe13f3a25062648137a953850f7f4c58234d922a154d27be5976d1e0d199","title":"","text":"// GetZone returns the Zone containing the current availability zone and locality region that the program is running in. // If the node is not running with availability zones, then it will fall back to fault domain. 
func (az *Cloud) GetZone(ctx context.Context) (cloudprovider.Zone, error) { if az.UseInstanceMetadata { metadata, err := az.metadata.GetMetadata() if err != nil { return cloudprovider.Zone{}, err } if metadata.Compute == nil { return cloudprovider.Zone{}, fmt.Errorf(\"failure of getting compute information from instance metadata\") } zone := \"\" if metadata.Compute.Zone != \"\" { zoneID, err := strconv.Atoi(metadata.Compute.Zone) if err != nil { return cloudprovider.Zone{}, fmt.Errorf(\"failed to parse zone ID %q: %v\", metadata.Compute.Zone, err) } zone = az.makeZone(zoneID) } else { klog.V(3).Infof(\"Availability zone is not enabled for the node, falling back to fault domain\") zone = metadata.Compute.FaultDomain } return cloudprovider.Zone{ FailureDomain: zone, Region: az.Location, }, nil } // if UseInstanceMetadata is false, get Zone name by calling ARM hostname, err := os.Hostname() if err != nil { return cloudprovider.Zone{}, fmt.Errorf(\"failure getting hostname from kernel\") } return az.vmSet.GetZoneByNodeName(strings.ToLower(hostname)) } // GetZoneByProviderID implements Zones.GetZoneByProviderID"}
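The GetZone chunk above reads the availability zone from instance metadata, parsing the numeric zone ID and falling back to the fault domain when zones are disabled. A minimal sketch of that fallback rule, with `compute` and `pickZone` as hypothetical stand-ins and the `"<location>-<id>"` zone format assumed (makeZone's body is not shown here):

```go
package main

import (
	"fmt"
	"strconv"
)

// compute is a hypothetical stand-in for the instance-metadata fields
// consulted by GetZone above.
type compute struct {
	Zone        string // numeric availability-zone ID, "" when zones are disabled
	FaultDomain string // fallback used when no availability zone is set
}

// pickZone sketches the fallback rule: parse the numeric zone ID when one is
// present, otherwise fall back to the node's fault domain.
func pickZone(location string, c compute) (string, error) {
	if c.Zone == "" {
		return c.FaultDomain, nil
	}
	id, err := strconv.Atoi(c.Zone)
	if err != nil {
		return "", fmt.Errorf("failed to parse zone ID %q: %v", c.Zone, err)
	}
	return fmt.Sprintf("%s-%d", location, id), nil
}

func main() {
	z, _ := pickZone("eastus", compute{Zone: "2"})
	fmt.Println(z) // eastus-2
	z, _ = pickZone("eastus", compute{FaultDomain: "1"})
	fmt.Println(z) // 1
}
```

As in the chunk, a malformed zone ID surfaces as an error rather than silently falling back, so misconfigured metadata is visible to the caller.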
{"_id":"doc-en-kubernetes-cf358c6a382ddc504b10f10196871867a9bf38db84ea07ab7988ebdb636bcb22","title":"","text":"\"//pkg/kubelet/apis/resourcemetrics/v1alpha1:go_default_library\", \"//pkg/kubelet/container:go_default_library\", \"//pkg/kubelet/prober:go_default_library\", \"//pkg/kubelet/server/metrics:go_default_library\", \"//pkg/kubelet/server/portforward:go_default_library\", \"//pkg/kubelet/server/remotecommand:go_default_library\", \"//pkg/kubelet/server/stats:go_default_library\","} {"_id":"doc-en-kubernetes-b22ea17444458c161bf326b91420aae2b36f81f949ea032516ae1a60ab15dae9","title":"","text":"name = \"all-srcs\", srcs = [ \":package-srcs\", \"//pkg/kubelet/server/metrics:all-srcs\", \"//pkg/kubelet/server/portforward:all-srcs\", \"//pkg/kubelet/server/remotecommand:all-srcs\", \"//pkg/kubelet/server/stats:all-srcs\","} {"_id":"doc-en-kubernetes-2d18338f3f3b69a1af2638fe20e1bd88d65259605571be561834ed2070651fe8","title":"","text":" package(default_visibility = [\"//visibility:public\"]) load( \"@io_bazel_rules_go//go:def.bzl\", \"go_library\", ) go_library( name = \"go_default_library\", srcs = [\"metrics.go\"], importpath = \"k8s.io/kubernetes/pkg/kubelet/server/metrics\", deps = [ \"//vendor/github.com/prometheus/client_golang/prometheus:go_default_library\", ], ) filegroup( name = \"package-srcs\", srcs = glob([\"**\"]), tags = [\"automanaged\"], visibility = [\"//visibility:private\"], ) filegroup( name = \"all-srcs\", srcs = [\":package-srcs\"], tags = [\"automanaged\"], ) "} {"_id":"doc-en-kubernetes-0a59063922f00854bdcad24ea52a5f6feec5abea4759f984d1bd0288b4a95da6","title":"","text":" /* Copyright 2019 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package metrics import ( \"sync\" \"time\" \"github.com/prometheus/client_golang/prometheus\" ) const ( kubeletSubsystem = \"kubelet\" ) var ( // HTTPRequests tracks the number of the http requests received since the server started. HTTPRequests = prometheus.NewCounterVec( prometheus.CounterOpts{ Subsystem: kubeletSubsystem, Name: \"http_requests_total\", Help: \"Number of the http requests received since the server started\", }, // server_type aims to differentiate the readonly server and the readwrite server. // long_running marks whether the request is long-running or not. // Currently, long-running requests include exec/attach/portforward/debug. []string{\"method\", \"path\", \"host\", \"server_type\", \"long_running\"}, ) // HTTPRequestsDuration tracks the duration in seconds to serve http requests. HTTPRequestsDuration = prometheus.NewHistogramVec( prometheus.HistogramOpts{ Subsystem: kubeletSubsystem, Name: \"http_requests_duration_seconds\", Help: \"Duration in seconds to serve http requests\", // Use DefBuckets for now, will customize the buckets if necessary. Buckets: prometheus.DefBuckets, }, []string{\"method\", \"path\", \"host\", \"server_type\", \"long_running\"}, ) // HTTPInflightRequests tracks the number of the inflight http requests. HTTPInflightRequests = prometheus.NewGaugeVec( prometheus.GaugeOpts{ Subsystem: kubeletSubsystem, Name: \"http_inflight_requests\", Help: \"Number of the inflight http requests\", }, []string{\"method\", \"path\", \"host\", \"server_type\", \"long_running\"}, ) ) var registerMetrics sync.Once // Register all metrics. 
func Register() { registerMetrics.Do(func() { prometheus.MustRegister(HTTPRequests) prometheus.MustRegister(HTTPRequestsDuration) prometheus.MustRegister(HTTPInflightRequests) }) } // SinceInSeconds gets the time since the specified start in seconds. func SinceInSeconds(start time.Time) float64 { return time.Since(start).Seconds() } "} {"_id":"doc-en-kubernetes-2f9bde220860cfcada40e43e2392a5c5e55fafde418b8eff29cecb23461db0e0","title":"","text":"\"k8s.io/kubernetes/pkg/kubelet/apis/resourcemetrics/v1alpha1\" kubecontainer \"k8s.io/kubernetes/pkg/kubelet/container\" \"k8s.io/kubernetes/pkg/kubelet/prober\" servermetrics \"k8s.io/kubernetes/pkg/kubelet/server/metrics\" \"k8s.io/kubernetes/pkg/kubelet/server/portforward\" remotecommandserver \"k8s.io/kubernetes/pkg/kubelet/server/remotecommand\" \"k8s.io/kubernetes/pkg/kubelet/server/stats\""} {"_id":"doc-en-kubernetes-698fc4b9b6356d57203b189d222080f840a4fda4f408eba97e096aff4aa2abf1","title":"","text":"proxyStream(response.ResponseWriter, request.Request, url) } // trimURLPath trims a URL path. // For paths in the format of \"/metrics/xxx\", \"metrics/xxx\" is returned; // For all other paths, the first part of the path is returned. func trimURLPath(path string) string { parts := strings.SplitN(strings.TrimPrefix(path, \"/\"), \"/\", 3) if len(parts) == 0 { return path } if parts[0] == \"metrics\" && len(parts) > 1 { return fmt.Sprintf(\"%s/%s\", parts[0], parts[1]) } return parts[0] } // isLongRunningRequest determines whether the request is long-running or not. func isLongRunningRequest(path string) bool { longRunningRequestPaths := []string{\"exec\", \"attach\", \"portforward\", \"debug\"} for _, p := range longRunningRequestPaths { if p == path { return true } } return false } // ServeHTTP responds to HTTP requests on the Kubelet. 
func (s *Server) ServeHTTP(w http.ResponseWriter, req *http.Request) { defer httplog.NewLogged(req, &w).StacktraceWhen("} {"_id":"doc-en-kubernetes-b25d6f1ffbdf9be89767429efbf03854ab969f11a993ab7146f51ad701361689","title":"","text":"http.StatusSwitchingProtocols, ), ).Log() // monitor http requests var serverType string if s.auth == nil { serverType = \"readonly\" } else { serverType = \"readwrite\" } method, path, host := req.Method, trimURLPath(req.URL.Path), req.URL.Host longRunning := strconv.FormatBool(isLongRunningRequest(path)) servermetrics.HTTPRequests.WithLabelValues(method, path, host, serverType, longRunning).Inc() servermetrics.HTTPInflightRequests.WithLabelValues(method, path, host, serverType, longRunning).Inc() defer servermetrics.HTTPInflightRequests.WithLabelValues(method, path, host, serverType, longRunning).Dec() startTime := time.Now() defer servermetrics.HTTPRequestsDuration.WithLabelValues(method, path, host, serverType, longRunning).Observe(servermetrics.SinceInSeconds(startTime)) s.restfulCont.ServeHTTP(w, req) }"} {"_id":"doc-en-kubernetes-f29cd3a5c75b09dc27dc6f956e91ba0df3dc6eb2ed66d33d083397a8d18b4a34","title":"","text":"assert.Equal(t, http.StatusOK, resp.StatusCode) } func TestTrimURLPath(t *testing.T) { tests := []struct { path, expected string }{ {\"\", \"\"}, {\"//\", \"\"}, {\"/pods\", \"pods\"}, {\"pods\", \"pods\"}, {\"pods/\", \"pods\"}, {\"good/\", \"good\"}, {\"pods/probes\", \"pods\"}, {\"metrics\", \"metrics\"}, {\"metrics/resource\", \"metrics/resource\"}, {\"metrics/hello\", \"metrics/hello\"}, } for _, test := range tests { assert.Equal(t, test.expected, trimURLPath(test.path), fmt.Sprintf(\"path is: %s\", test.path)) } } "} {"_id":"doc-en-kubernetes-2a7ba1ec855649a9ebd07d5999c32b0b31c7e346070915ea7788f9a3ca370173","title":"","text":"return &container, nil } } for _, container := range pod.Spec.InitContainers { if container.Name == containerName { return &container, nil } } return nil, fmt.Errorf(\"container %s not 
found\", containerName) }"} {"_id":"doc-en-kubernetes-a6ee950038b7c4a2dc665d4d1fb6c43c5203da58be90719a9c10c66684a830b6","title":"","text":"pod: getPod(\"foo\", \"\", \"\", \"10Mi\", \"100Mi\"), expectedValue: \"104857600\", }, { fs: &v1.ResourceFieldSelector{ Resource: \"limits.cpu\", }, cName: \"init-foo\", pod: getPod(\"foo\", \"\", \"9\", \"\", \"\"), expectedValue: \"9\", }, { fs: &v1.ResourceFieldSelector{ Resource: \"requests.cpu\", }, cName: \"init-foo\", pod: getPod(\"foo\", \"\", \"\", \"\", \"\"), expectedValue: \"0\", }, { fs: &v1.ResourceFieldSelector{ Resource: \"requests.cpu\", }, cName: \"init-foo\", pod: getPod(\"foo\", \"8\", \"\", \"\", \"\"), expectedValue: \"8\", }, { fs: &v1.ResourceFieldSelector{ Resource: \"requests.cpu\", }, cName: \"init-foo\", pod: getPod(\"foo\", \"100m\", \"\", \"\", \"\"), expectedValue: \"1\", }, { fs: &v1.ResourceFieldSelector{ Resource: \"requests.cpu\", Divisor: resource.MustParse(\"100m\"), }, cName: \"init-foo\", pod: getPod(\"foo\", \"1200m\", \"\", \"\", \"\"), expectedValue: \"12\", }, { fs: &v1.ResourceFieldSelector{ Resource: \"requests.memory\", }, cName: \"init-foo\", pod: getPod(\"foo\", \"\", \"\", \"100Mi\", \"\"), expectedValue: \"104857600\", }, { fs: &v1.ResourceFieldSelector{ Resource: \"requests.memory\", Divisor: resource.MustParse(\"1Mi\"), }, cName: \"init-foo\", pod: getPod(\"foo\", \"\", \"\", \"100Mi\", \"1Gi\"), expectedValue: \"100\", }, { fs: &v1.ResourceFieldSelector{ Resource: \"limits.memory\", }, cName: \"init-foo\", pod: getPod(\"foo\", \"\", \"\", \"10Mi\", \"100Mi\"), expectedValue: \"104857600\", }, } as := assert.New(t) for idx, tc := range cases {"} {"_id":"doc-en-kubernetes-f87c74aca0410e393c09815892e79546e62f6fe97b270bacd65d93f136c8fa0c","title":"","text":"Resources: resources, }, }, InitContainers: []v1.Container{ { Name: \"init-\" + cname, Resources: resources, }, }, }, } }"} 
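The resource-field-selector cases above expect divisor-based rendering: requests.cpu of 1200m with a 100m divisor yields "12", and 100m with the default divisor of one whole CPU rounds up to "1". A small illustration of that rounding (not the apimachinery `resource.Quantity` implementation), working in milli-units for simplicity:

```go
package main

import "fmt"

// ceilDiv illustrates how the downward-API cases above render resource
// values: the quantity is divided by the divisor and rounded up. Values are
// in milli-units here, so requests.cpu=1200m with divisor 100m gives 12, and
// 100m with the default divisor of one whole CPU (1000m) rounds up to 1.
func ceilDiv(valueMilli, divisorMilli int64) int64 {
	return (valueMilli + divisorMilli - 1) / divisorMilli
}

func main() {
	fmt.Println(ceilDiv(1200, 100))  // 12
	fmt.Println(ceilDiv(100, 1000))  // 1
	fmt.Println(ceilDiv(9000, 1000)) // 9
}
```

The round-up matches the "100m → 1" expectation in the test table: a fractional CPU request is never reported as zero.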
{"_id":"doc-en-kubernetes-e9e931422b114d50e6619238332ce3eabae1651b4b1681ab5195fa05e42d26e8","title":"","text":"// BusyBoxImage is the image URI of BusyBox. BusyBoxImage = imageutils.GetE2EImage(imageutils.BusyBox) // AgnHostImage is the image URI of AgnHost AgnHostImage = imageutils.GetE2EImage(imageutils.Agnhost) // For parsing Kubectl version for version-skewed testing. gitVersionRegexp = regexp.MustCompile(`GitVersion:\"(v.+?)\"`)"} {"_id":"doc-en-kubernetes-38541e4b7f3a9134ab574553218ddcbbe90a45ca08ce40892f6b4cc8f70ed200","title":"","text":"Containers: []v1.Container{ { Name: \"detector\", Image: imageutils.GetE2EImage(imageutils.Agnhost), Command: []string{\"/bin/sleep\", \"3600\"}, Image: AgnHostImage, Command: []string{\"pause\"}, }, }, },"} {"_id":"doc-en-kubernetes-0a6ca716dd54949193ed4602fc5e5ff01a687ee4157411a6e32f484236806fb3","title":"","text":"} } // PingCommand is the type to hold a ping command. type PingCommand string const ( // IPv4PingCommand is a ping command for IPv4. IPv4PingCommand PingCommand = \"ping\" // IPv6PingCommand is a ping command for IPv6. IPv6PingCommand PingCommand = \"ping6\" ) // CheckConnectivityToHost launches a pod to test connectivity to the specified // host. An error will be returned if the host is not reachable from the pod. // // An empty nodeName lets the scheduler choose where the pod is executed. 
func CheckConnectivityToHost(f *Framework, nodeName, podName, host string, pingCmd PingCommand, timeout int) error { func CheckConnectivityToHost(f *Framework, nodeName, podName, host string, port, timeout int) error { contName := fmt.Sprintf(\"%s-container\", podName) command := []string{ string(pingCmd), \"-c\", \"3\", // send 3 pings \"-W\", \"2\", // wait at most 2 seconds for a reply \"nc\", \"-vz\", \"-w\", strconv.Itoa(timeout), host, strconv.Itoa(port), } pod := &v1.Pod{"} {"_id":"doc-en-kubernetes-c9b51d34e6cfe070ce252a096a84d72f2dc77439a9b4906a130335390063237e","title":"","text":"Containers: []v1.Container{ { Name: contName, Image: BusyBoxImage, Image: AgnHostImage, Command: command, }, },"} {"_id":"doc-en-kubernetes-fbd8bb3b44d07c113bf3ba8e3b6889a7265fcaacff6705b689779678df3811f2","title":"","text":"Containers: []v1.Container{ { Name: \"agnhost\", Image: imageutils.GetE2EImage(imageutils.Agnhost), Image: AgnHostImage, Args: args, }, },"} {"_id":"doc-en-kubernetes-b6e93be832656e556d1475c53afa9b56eeae10c6070757aceee1be6af87f1a77","title":"","text":"}) ginkgo.It(\"should provide Internet connection for containers [Feature:Networking-IPv4]\", func() { ginkgo.By(\"Running container which tries to ping 8.8.8.8\") ginkgo.By(\"Running container which tries to connect to 8.8.8.8\") framework.ExpectNoError( framework.CheckConnectivityToHost(f, \"\", \"ping-test\", \"8.8.8.8\", framework.IPv4PingCommand, 30)) framework.CheckConnectivityToHost(f, \"\", \"connectivity-test\", \"8.8.8.8\", 53, 30)) }) ginkgo.It(\"should provide Internet connection for containers [Feature:Networking-IPv6][Experimental]\", func() { ginkgo.By(\"Running container which tries to ping 2001:4860:4860::8888\") ginkgo.By(\"Running container which tries to connect to 2001:4860:4860::8888\") framework.ExpectNoError( framework.CheckConnectivityToHost(f, \"\", \"ping-test\", \"2001:4860:4860::8888\", framework.IPv6PingCommand, 30)) framework.CheckConnectivityToHost(f, \"\", \"connectivity-test\", 
\"2001:4860:4860::8888\", 53, 30)) }) // First test because it has no dependencies on variables created later on."} {"_id":"doc-en-kubernetes-729758a472fc8c64e0f6a2b8154f624a2f48ebb17d8715e463798e2fe6b3e6a0","title":"","text":"fs.BoolVar(&o.CleanupAndExit, \"cleanup-iptables\", o.CleanupAndExit, \"If true cleanup iptables and ipvs rules and exit.\") fs.MarkDeprecated(\"cleanup-iptables\", \"This flag is replaced by --cleanup.\") fs.BoolVar(&o.CleanupAndExit, \"cleanup\", o.CleanupAndExit, \"If true cleanup iptables and ipvs rules and exit.\") fs.BoolVar(&o.CleanupIPVS, \"cleanup-ipvs\", o.CleanupIPVS, \"If true make kube-proxy cleanup ipvs rules before running. Default is true\") fs.BoolVar(&o.CleanupIPVS, \"cleanup-ipvs\", o.CleanupIPVS, \"If true and --cleanup is specified, kube-proxy will also flush IPVS rules, in addition to normal cleanup.\") // All flags below here are deprecated and will eventually be removed."} {"_id":"doc-en-kubernetes-f2ce1b2b9670fa7fb0d75067b3738b6fbf600ed064736e51ffa2540be4615659","title":"","text":"// NewProxyServer returns a new ProxyServer. func NewProxyServer(o *Options) (*ProxyServer, error) { return newProxyServer(o.config, o.CleanupAndExit, o.CleanupIPVS, o.scheme, o.master) return newProxyServer(o.config, o.CleanupAndExit, o.scheme, o.master) } func newProxyServer( config *proxyconfigapi.KubeProxyConfiguration, cleanupAndExit bool, cleanupIPVS bool, scheme *runtime.Scheme, master string) (*ProxyServer, error) {"} {"_id":"doc-en-kubernetes-98bf17442d230a31d2ebe08a2930add9e13a3cbe8eedbd7fc5108faa845081a1","title":"","text":"proxier = proxierIPTables serviceEventHandler = proxierIPTables endpointsEventHandler = proxierIPTables // No turning back. Remove artifacts that might still exist from the userspace Proxier. klog.V(0).Info(\"Tearing down inactive rules.\") // TODO this has side effects that should only happen when Run() is invoked. 
userspace.CleanupLeftovers(iptInterface) // IPVS Proxier will generate some iptables rules, need to clean them before switching to other proxy mode. // Besides, ipvs proxier will create some ipvs rules as well. Because there is no way to tell if a given // ipvs rule is created by IPVS proxier or not. Users should explicitly specify `--cleanup-ipvs=true` to flush // all ipvs rules when kube-proxy starts up. Users performing this operation should do so with caution. if canUseIPVS { ipvs.CleanupLeftovers(ipvsInterface, iptInterface, ipsetInterface, cleanupIPVS) } } else if proxyMode == proxyModeIPVS { klog.V(0).Info(\"Using ipvs Proxier.\") proxierIPVS, err := ipvs.NewProxier("} {"_id":"doc-en-kubernetes-a080d8d485cdcd0784c7a2dbcbae37e1c4c15723238ad107b4320698b7c5ef3a","title":"","text":"proxier = proxierIPVS serviceEventHandler = proxierIPVS endpointsEventHandler = proxierIPVS klog.V(0).Info(\"Tearing down inactive rules.\") // TODO this has side effects that should only happen when Run() is invoked. userspace.CleanupLeftovers(iptInterface) iptables.CleanupLeftovers(iptInterface) } else { klog.V(0).Info(\"Using userspace Proxier.\") // This is a proxy.LoadBalancer which NewProxier needs but has methods we don't need for"} {"_id":"doc-en-kubernetes-9f81c325c18f1c8f2ceb3e4d61af37518016f9a1c8224aa4135a332863aa3b7b","title":"","text":"} serviceEventHandler = proxierUserspace proxier = proxierUserspace // Remove artifacts from the iptables and ipvs Proxier, if not on Windows. klog.V(0).Info(\"Tearing down inactive rules.\") // TODO this has side effects that should only happen when Run() is invoked. iptables.CleanupLeftovers(iptInterface) // IPVS Proxier will generate some iptables rules, need to clean them before switching to other proxy mode. // Besides, ipvs proxier will create some ipvs rules as well. Because there is no way to tell if a given // ipvs rule is created by IPVS proxier or not. 
Users should explicitly specify `--cleanup-ipvs=true` to flush // all ipvs rules when kube-proxy starts up. Users performing this operation should do so with caution. if canUseIPVS { ipvs.CleanupLeftovers(ipvsInterface, iptInterface, ipsetInterface, cleanupIPVS) } } iptInterface.AddReloadFunc(proxier.Sync)"} {"_id":"doc-en-kubernetes-9470bad82f9139750eece0dad1dd0f49b47ed91faf683330dcef6924295cbbcc","title":"","text":"} proxier = proxierUserspace serviceEventHandler = proxierUserspace klog.V(0).Info(\"Tearing down pure-winkernel proxy rules.\") winkernel.CleanupLeftovers() } return &ProxyServer{"} {"_id":"doc-en-kubernetes-4f67df2e87ba5a21f575627e7bc0b28ea95028a8fa45d13551a882b3e2d25308","title":"","text":"\"testing\" ) func TestLoopbackHostPort(t *testing.T) { func TestLoopbackHostPortIPv4(t *testing.T) { _, ipv6only, err := isIPv6LoopbackSupported() if err != nil { t.Fatalf(\"fail to enumerate network interface, %s\", err) } if ipv6only { t.Fatalf(\"no ipv4 loopback interface\") } host, port, err := LoopbackHostPort(\"1.2.3.4:443\") if err != nil { t.Fatalf(\"unexpected error: %v\", err)"} {"_id":"doc-en-kubernetes-6719055e8e5a07bd8cc7a4f33037185ef3b63593d6b789e79040e7f99e1bbdc3","title":"","text":"if port != \"443\" { t.Fatalf(\"expected 443 as port, got %q\", port) } } func TestLoopbackHostPortIPv6(t *testing.T) { ipv6, _, err := isIPv6LoopbackSupported() if err != nil { t.Fatalf(\"fail to enumerate network interface, %s\", err) } if !ipv6 { t.Fatalf(\"no ipv6 loopback interface\") } host, port, err = LoopbackHostPort(\"[ff06:0:0:0:0:0:0:c3]:443\") host, port, err := LoopbackHostPort(\"[ff06:0:0:0:0:0:0:c3]:443\") if err != nil { t.Fatalf(\"unexpected error: %v\", err) }"} {"_id":"doc-en-kubernetes-b4c475c125eeddaa879057e46795442ca846cf3c997e8dcb4b8ce6c1a3cdc2f3","title":"","text":"if ip := net.ParseIP(host); ip == nil || !ip.IsLoopback() || ip.To4() != nil { t.Fatalf(\"expected IPv6 host to be loopback, got %q\", host) } if port != \"443\" { t.Fatalf(\"expected 443 as port, 
got %q\", port) } } func isIPv6LoopbackSupported() (ipv6 bool, ipv6only bool, err error) { addrs, err := net.InterfaceAddrs() if err != nil { return false, false, err } ipv4 := false for _, address := range addrs { ipnet, ok := address.(*net.IPNet) if !ok || !ipnet.IP.IsLoopback() { continue } if ipnet.IP.To4() == nil { ipv6 = true continue } ipv4 = true } ipv6only = ipv6 && !ipv4 return ipv6, ipv6only, nil } "} {"_id":"doc-en-kubernetes-276652e7609fc82a75614e54fdfd3dd55e8a50a6e0361b75370693ff857a1493","title":"","text":"syncPeriod time.Duration minSyncPeriod time.Duration // Values are CIDR's to exclude when cleaning up IPVS rules. excludeCIDRs []string excludeCIDRs []*net.IPNet // Set to true to set sysctls arp_ignore and arp_announce strictARP bool iptables utiliptables.Interface"} {"_id":"doc-en-kubernetes-8778cbfd5138b1022caa64d5f6c959eef9f99c43c74950f161f7943069e2327b","title":"","text":"// Proxier implements ProxyProvider var _ proxy.ProxyProvider = &Proxier{} // ParseExcludedCIDRs parses the input strings and returns net.IPNet // The validation has been done earlier so the error condition will never happen under normal conditions func ParseExcludedCIDRs(excludeCIDRStrs []string) []*net.IPNet { var cidrExclusions []*net.IPNet for _, excludedCIDR := range excludeCIDRStrs { _, n, err := net.ParseCIDR(excludedCIDR) if err == nil { cidrExclusions = append(cidrExclusions, n) } } return cidrExclusions } // NewProxier returns a new Proxier given an iptables and ipvs Interface instance. // Because of the iptables and ipvs logic, it is assumed that there is only a single Proxier active on a machine. 
// An error will be returned if it fails to update or acquire the initial lock."} {"_id":"doc-en-kubernetes-110b7379388b284a8be1f3b22e905b132886953cc89e12b2a47388c79f85b0e3","title":"","text":"exec utilexec.Interface, syncPeriod time.Duration, minSyncPeriod time.Duration, excludeCIDRs []string, excludeCIDRStrs []string, strictARP bool, masqueradeAll bool, masqueradeBit int,"} {"_id":"doc-en-kubernetes-1ff21f193dc858938fb18637b2cdef1756d4e1a5008f0492ae12d4d46c7f9d70","title":"","text":"endpointsChanges: proxy.NewEndpointChangeTracker(hostname, nil, &isIPv6, recorder), syncPeriod: syncPeriod, minSyncPeriod: minSyncPeriod, excludeCIDRs: excludeCIDRs, excludeCIDRs: ParseExcludedCIDRs(excludeCIDRStrs), iptables: ipt, masqueradeAll: masqueradeAll, masqueradeMark: masqueradeMark,"} {"_id":"doc-en-kubernetes-2556c988271d18e50517b935ec199a925fd072af9c1ec5f387e32164befe3e32","title":"","text":"func (proxier *Proxier) isIPInExcludeCIDRs(ip net.IP) bool { // make sure it does not fall within an excluded CIDR range. for _, excludedCIDR := range proxier.excludeCIDRs { // Any validation of this CIDR already should have occurred. 
_, n, _ := net.ParseCIDR(excludedCIDR) if n.Contains(ip) { if excludedCIDR.Contains(ip) { return true } }"} {"_id":"doc-en-kubernetes-6781bd85107ba1641d962293fc825f091d7291be94c3462d246057063034be8c","title":"","text":"return nil } func NewFakeProxier(ipt utiliptables.Interface, ipvs utilipvs.Interface, ipset utilipset.Interface, nodeIPs []net.IP, excludeCIDRs []string) *Proxier { func NewFakeProxier(ipt utiliptables.Interface, ipvs utilipvs.Interface, ipset utilipset.Interface, nodeIPs []net.IP, excludeCIDRs []*net.IPNet) *Proxier { fcmd := fakeexec.FakeCmd{ CombinedOutputScript: []fakeexec.FakeCombinedOutputAction{ func() ([]byte, error) { return []byte(\"dummy device have been created\"), nil },"} {"_id":"doc-en-kubernetes-70760f710d8128b2b3746527a77bfa0d34e7ce32891620aaa3b5e1d5d384e11e","title":"","text":"ipt := iptablestest.NewFake() ipvs := ipvstest.NewFake() ipset := ipsettest.NewFake(testIPSetVersion) fp := NewFakeProxier(ipt, ipvs, ipset, nil, []string{\"3.3.3.0/24\", \"4.4.4.0/24\"}) fp := NewFakeProxier(ipt, ipvs, ipset, nil, ParseExcludedCIDRs([]string{\"3.3.3.0/24\", \"4.4.4.0/24\"})) // All ipvs services that were processed in the latest sync loop. 
activeServices := map[string]bool{\"ipvs0\": true, \"ipvs1\": true}"} {"_id":"doc-en-kubernetes-019f7e15ebbc2d48f8b95e52215350c0b75f01f4d547a3661f23a5054f90a52c","title":"","text":"ipvs := ipvstest.NewFake() ipset := ipsettest.NewFake(testIPSetVersion) gtm := NewGracefulTerminationManager(ipvs) fp := NewFakeProxier(ipt, ipvs, ipset, nil, []string{\"4.4.4.4/32\"}) fp := NewFakeProxier(ipt, ipvs, ipset, nil, ParseExcludedCIDRs([]string{\"4.4.4.4/32\"})) fp.gracefuldeleteManager = gtm vs := &utilipvs.VirtualServer{"} {"_id":"doc-en-kubernetes-7845aba1be8efb1ec01cd4ccf8758abb72f064e70bd476b1200a6cdc354b4a00","title":"","text":"ipt := iptablestest.NewFake() ipvs := ipvstest.NewFake() ipset := ipsettest.NewFake(testIPSetVersion) fp := NewFakeProxier(ipt, ipvs, ipset, nil, []string{\"3000::/64\", \"4000::/64\"}) fp := NewFakeProxier(ipt, ipvs, ipset, nil, ParseExcludedCIDRs([]string{\"3000::/64\", \"4000::/64\"})) fp.nodeIP = net.ParseIP(\"::1\") // All ipvs services that were processed in the latest sync loop."} {"_id":"doc-en-kubernetes-6c704c71637da422eb25b0a00e9fa47999bc4f7e195acd0783dd70dbefe75b4f","title":"","text":"package ipvs import ( \"fmt\" \"sync\" \"time\" \"fmt\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/klog\" utilipvs \"k8s.io/kubernetes/pkg/util/ipvs\""} {"_id":"doc-en-kubernetes-8b0ac55d57483e1baa49f2e98ca6b7162ded15b5d7a5ce9a70f20d69fe7d6598","title":"","text":"} for _, rs := range rss { if rsToDelete.RealServer.Equal(rs) { // Delete RS with no connections // For UDP, ActiveConn is always 0 // For TCP, InactiveConn are connections not in ESTABLISHED state if rs.ActiveConn+rs.InactiveConn != 0 { // For UDP traffic, no graceful termination, we immediately delete the RS // (existing connections will be deleted on the next packet because sysctlExpireNoDestConn=1) // For other protocols, don't delete until all connections have expired) if rsToDelete.VirtualServer.Protocol != \"udp\" && rs.ActiveConn+rs.InactiveConn != 0 { klog.Infof(\"Not deleting, RS 
%v: %v ActiveConn, %v InactiveConn\", rsToDelete.String(), rs.ActiveConn, rs.InactiveConn) return false, nil }"} {"_id":"doc-en-kubernetes-70da332ac0f79e4bd7dc3b13567889fe3294088f50b90cba4aaace6c05c0e75a","title":"","text":" { \"Rules\": [ { \"SelectorRegexp\": \"k8s[.]io/kubernetes/pkg/\", \"AllowedPrefixes\": [ \"k8s.io/kubernetes/pkg/api/legacyscheme\", \"k8s.io/kubernetes/pkg/api/service\", \"k8s.io/kubernetes/pkg/api/v1/pod\", \"k8s.io/kubernetes/pkg/apis/apps\", \"k8s.io/kubernetes/pkg/apis/apps/validation\", \"k8s.io/kubernetes/pkg/apis/autoscaling\", \"k8s.io/kubernetes/pkg/apis/batch\", \"k8s.io/kubernetes/pkg/apis/core\", \"k8s.io/kubernetes/pkg/apis/core/helper\", \"k8s.io/kubernetes/pkg/apis/core/install\", \"k8s.io/kubernetes/pkg/apis/core/pods\", \"k8s.io/kubernetes/pkg/apis/core/v1\", \"k8s.io/kubernetes/pkg/apis/core/v1/helper\", \"k8s.io/kubernetes/pkg/apis/core/v1/helper/qos\", \"k8s.io/kubernetes/pkg/apis/core/validation\", \"k8s.io/kubernetes/pkg/apis/extensions\", \"k8s.io/kubernetes/pkg/apis/networking\", \"k8s.io/kubernetes/pkg/apis/policy\", \"k8s.io/kubernetes/pkg/apis/policy/validation\", \"k8s.io/kubernetes/pkg/apis/scheduling\", \"k8s.io/kubernetes/pkg/apis/storage/v1/util\", \"k8s.io/kubernetes/pkg/capabilities\", \"k8s.io/kubernetes/pkg/client/conditions\", \"k8s.io/kubernetes/pkg/controller\", \"k8s.io/kubernetes/pkg/controller/deployment/util\", \"k8s.io/kubernetes/pkg/controller/nodelifecycle\", \"k8s.io/kubernetes/pkg/controller/nodelifecycle/scheduler\", \"k8s.io/kubernetes/pkg/controller/service\", \"k8s.io/kubernetes/pkg/controller/util/node\", \"k8s.io/kubernetes/pkg/controller/volume/persistentvolume/util\", \"k8s.io/kubernetes/pkg/controller/volume/scheduling\", \"k8s.io/kubernetes/pkg/features\", \"k8s.io/kubernetes/pkg/fieldpath\", \"k8s.io/kubernetes/pkg/kubectl\", \"k8s.io/kubernetes/pkg/kubectl/apps\", \"k8s.io/kubernetes/pkg/kubectl/describe\", \"k8s.io/kubernetes/pkg/kubectl/describe/versioned\", 
\"k8s.io/kubernetes/pkg/kubectl/scheme\", \"k8s.io/kubernetes/pkg/kubectl/util\", \"k8s.io/kubernetes/pkg/kubectl/util/certificate\", \"k8s.io/kubernetes/pkg/kubectl/util/deployment\", \"k8s.io/kubernetes/pkg/kubectl/util/event\", \"k8s.io/kubernetes/pkg/kubectl/util/fieldpath\", \"k8s.io/kubernetes/pkg/kubectl/util/podutils\", \"k8s.io/kubernetes/pkg/kubectl/util/qos\", \"k8s.io/kubernetes/pkg/kubectl/util/rbac\", \"k8s.io/kubernetes/pkg/kubectl/util/resource\", \"k8s.io/kubernetes/pkg/kubectl/util/slice\", \"k8s.io/kubernetes/pkg/kubectl/util/storage\", \"k8s.io/kubernetes/pkg/kubelet/apis\", \"k8s.io/kubernetes/pkg/kubelet/apis/config\", \"k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1\", \"k8s.io/kubernetes/pkg/kubelet/checkpoint\", \"k8s.io/kubernetes/pkg/kubelet/checkpointmanager\", \"k8s.io/kubernetes/pkg/kubelet/checkpointmanager/checksum\", \"k8s.io/kubernetes/pkg/kubelet/checkpointmanager/errors\", \"k8s.io/kubernetes/pkg/kubelet/configmap\", \"k8s.io/kubernetes/pkg/kubelet/container\", \"k8s.io/kubernetes/pkg/kubelet/dockershim/metrics\", \"k8s.io/kubernetes/pkg/kubelet/events\", \"k8s.io/kubernetes/pkg/kubelet/lifecycle\", \"k8s.io/kubernetes/pkg/kubelet/metrics\", \"k8s.io/kubernetes/pkg/kubelet/pod\", \"k8s.io/kubernetes/pkg/kubelet/secret\", \"k8s.io/kubernetes/pkg/kubelet/sysctl\", \"k8s.io/kubernetes/pkg/kubelet/types\", \"k8s.io/kubernetes/pkg/kubelet/util\", \"k8s.io/kubernetes/pkg/kubelet/util/format\", \"k8s.io/kubernetes/pkg/kubelet/util/manager\", \"k8s.io/kubernetes/pkg/kubelet/util/store\", \"k8s.io/kubernetes/pkg/master/ports\", \"k8s.io/kubernetes/pkg/registry/core/service/allocator\", \"k8s.io/kubernetes/pkg/registry/core/service/portallocator\", \"k8s.io/kubernetes/pkg/scheduler/algorithm\", \"k8s.io/kubernetes/pkg/scheduler/algorithm/predicates\", \"k8s.io/kubernetes/pkg/scheduler/algorithm/priorities/util\", \"k8s.io/kubernetes/pkg/scheduler/api\", \"k8s.io/kubernetes/pkg/scheduler/metrics\", 
\"k8s.io/kubernetes/pkg/scheduler/nodeinfo\", \"k8s.io/kubernetes/pkg/scheduler/util\", \"k8s.io/kubernetes/pkg/scheduler/volumebinder\", \"k8s.io/kubernetes/pkg/security/apparmor\", \"k8s.io/kubernetes/pkg/security/podsecuritypolicy/seccomp\", \"k8s.io/kubernetes/pkg/security/podsecuritypolicy/util\", \"k8s.io/kubernetes/pkg/serviceaccount\", \"k8s.io/kubernetes/pkg/ssh\", \"k8s.io/kubernetes/pkg/util/filesystem\", \"k8s.io/kubernetes/pkg/util/hash\", \"k8s.io/kubernetes/pkg/util/labels\", \"k8s.io/kubernetes/pkg/util/metrics\", \"k8s.io/kubernetes/pkg/util/mount\", \"k8s.io/kubernetes/pkg/util/node\", \"k8s.io/kubernetes/pkg/util/parsers\", \"k8s.io/kubernetes/pkg/util/resizefs\", \"k8s.io/kubernetes/pkg/util/slice\", \"k8s.io/kubernetes/pkg/util/system\", \"k8s.io/kubernetes/pkg/util/taints\", \"k8s.io/kubernetes/pkg/volume\", \"k8s.io/kubernetes/pkg/volume/util\", \"k8s.io/kubernetes/pkg/volume/util/fs\", \"k8s.io/kubernetes/pkg/volume/util/fsquota\", \"k8s.io/kubernetes/pkg/volume/util/recyclerclient\", \"k8s.io/kubernetes/pkg/volume/util/subpath\", \"k8s.io/kubernetes/pkg/volume/util/types\", \"k8s.io/kubernetes/pkg/volume/util/volumepathhandler\" ], \"ForbiddenPrefixes\": [] }, { \"SelectorRegexp\": \"k8s[.]io/kubernetes/test/\", \"AllowedPrefixes\": [ \"k8s.io/kubernetes/test/e2e/framework/auth\", \"k8s.io/kubernetes/test/e2e/framework/ginkgowrapper\", \"k8s.io/kubernetes/test/e2e/framework/log\", \"k8s.io/kubernetes/test/e2e/framework/metrics\", \"k8s.io/kubernetes/test/e2e/framework/node\", \"k8s.io/kubernetes/test/e2e/framework/pod\", \"k8s.io/kubernetes/test/e2e/framework/resource\", \"k8s.io/kubernetes/test/e2e/framework/ssh\", \"k8s.io/kubernetes/test/e2e/framework/testfiles\", \"k8s.io/kubernetes/test/e2e/manifest\", \"k8s.io/kubernetes/test/e2e/perftype\", \"k8s.io/kubernetes/test/utils\", \"k8s.io/kubernetes/test/utils/image\" ], \"ForbiddenPrefixes\": [] }, { \"SelectorRegexp\": \"k8s[.]io/kubernetes/third_party/\", \"AllowedPrefixes\": [ 
\"k8s.io/kubernetes/third_party/forked/golang/expansion\" ], \"ForbiddenPrefixes\": [] }, { \"SelectorRegexp\": \"k8s[.]io/utils/\", \"AllowedPrefixes\": [ \"k8s.io/utils/buffer\", \"k8s.io/utils/exec\", \"k8s.io/utils/integer\", \"k8s.io/utils/net\", \"k8s.io/utils/nsenter\", \"k8s.io/utils/path\", \"k8s.io/utils/pointer\", \"k8s.io/utils/strings\", \"k8s.io/utils/trace\" ], \"ForbiddenPrefixes\": [] }, { \"SelectorRegexp\": \"k8s[.]io/(api/|apimachinery/|apiextensions-apiserver/|apiserver/)\", \"AllowedPrefixes\": [ \"\" ] }, { \"SelectorRegexp\": \"k8s[.]io/client-go/\", \"AllowedPrefixes\": [ \"\" ] } ] } "} {"_id":"doc-en-kubernetes-2c9f433b6998d85f769acdc68231fd03e391ecd5050a68960a4520e149a067d3","title":"","text":"IFS=\" \" read -ra KUBE_TEST_SERVER_TARGETS <<< \"$(kube::golang::server_test_targets)\" readonly KUBE_TEST_SERVER_TARGETS readonly KUBE_TEST_SERVER_BINARIES=(\"${KUBE_TEST_SERVER_TARGETS[@]##*/}\") readonly KUBE_TEST_SERVER_PLATFORMS=(\"${KUBE_SERVER_PLATFORMS[@]}\") readonly KUBE_TEST_SERVER_PLATFORMS=(\"${KUBE_SERVER_PLATFORMS[@]:+\"${KUBE_SERVER_PLATFORMS[@]}\"}\") # Gigabytes necessary for parallel platform builds. 
# As of January 2018, RAM usage is exceeding 30G"} {"_id":"doc-en-kubernetes-ad697a3f7fca92c071311a76cb6a85c18f199bc5ac946f703f84c2210e84ab5e","title":"","text":"\"//vendor/github.com/onsi/ginkgo/reporters:go_default_library\", \"//vendor/github.com/onsi/gomega:go_default_library\", \"//vendor/k8s.io/klog:go_default_library\", \"//vendor/k8s.io/utils/net:go_default_library\", ], )"} {"_id":"doc-en-kubernetes-802164a7743aa18ee19ca4fc8067d54d882cc190495e79228edd3fdc8726711a","title":"","text":"e2epod \"k8s.io/kubernetes/test/e2e/framework/pod\" \"k8s.io/kubernetes/test/e2e/manifest\" testutils \"k8s.io/kubernetes/test/utils\" utilnet \"k8s.io/utils/net\" // ensure auth plugins are loaded _ \"k8s.io/client-go/plugin/pkg/client/auth\""} {"_id":"doc-en-kubernetes-c48e64e4995fdc9c558f74e4fd0877cee269a65d1d0aba6233429e7174be4344","title":"","text":"e2elog.Logf(\"kube-apiserver version: %s\", serverVersion.GitVersion) } // Obtain the default IP family of the cluster // Some e2e tests are designed to work on IPv4 only; this global variable // allows adapting those tests to work on both IPv4 and IPv6 // TODO(dual-stack): dual stack clusters should pass full e2e testing at least with the primary IP family // the dual stack clusters can be ipv4-ipv6 or ipv6-ipv4, order matters, // and services use the primary IP family by default // If we need to provide additional context for dual-stack, we can detect it // because pods have two addresses (one per family) framework.TestContext.IPFamily = getDefaultClusterIPFamily(c) e2elog.Logf(\"Cluster IP family: %s\", framework.TestContext.IPFamily) // Reference common test to make the import valid. 
commontest.CurrentSuite = commontest.E2E"} {"_id":"doc-en-kubernetes-9b7b2c2d2f80d319fe4f20e51930f4569a6b35bb41feb115a9cea020ce4e9ba4","title":"","text":"e2elog.Logf(\"Output of clusterapi-tester:n%v\", logs) } } // getDefaultClusterIPFamily obtains the default IP family of the cluster // using the Cluster IP address of the kubernetes service created in the default namespace // This unequivocally identifies the default IP family because services are single family func getDefaultClusterIPFamily(c clientset.Interface) string { // Get the ClusterIP of the kubernetes service created in the default namespace svc, err := c.CoreV1().Services(metav1.NamespaceDefault).Get(\"kubernetes\", metav1.GetOptions{}) if err != nil { e2elog.Failf(\"Failed to get kubernetes service ClusterIP: %v\", err) } if utilnet.IsIPv6String(svc.Spec.ClusterIP) { return \"ipv6\" } return \"ipv4\" } "} {"_id":"doc-en-kubernetes-0cd1b83e4b51d2beeea4122edb8f4132e4eda3f1eeb8e7a3447ea02515e15add","title":"","text":"\"//vendor/golang.org/x/net/websocket:go_default_library\", \"//vendor/k8s.io/klog:go_default_library\", \"//vendor/k8s.io/utils/exec:go_default_library\", \"//vendor/k8s.io/utils/net:go_default_library\", ], )"} {"_id":"doc-en-kubernetes-2627b4a123e0ca5209f27e2eee104645722b7ec371ff1dd3a41791636626c125","title":"","text":"\"time\" \"github.com/onsi/ginkgo\" \"k8s.io/api/core/v1\" v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/labels\" \"k8s.io/apimachinery/pkg/util/intstr\""} {"_id":"doc-en-kubernetes-36e0e6e1e73a02898aacaa0c1881cf55c94cfed4feb5ffab9e57afc7c1850906","title":"","text":"e2enode \"k8s.io/kubernetes/test/e2e/framework/node\" e2epod \"k8s.io/kubernetes/test/e2e/framework/pod\" imageutils \"k8s.io/kubernetes/test/utils/image\" k8utilnet \"k8s.io/utils/net\" ) const ("} {"_id":"doc-en-kubernetes-eb2cfd00fedba1f594ef8cfa0d681c9799907f9d8070274533050f3437517790","title":"","text":"var netexecImageName = 
imageutils.GetE2EImage(imageutils.Agnhost) // TranslateIPv4ToIPv6 maps an IPv4 address into a valid IPv6 address // adding the well known prefix \"0::ffff:\" https://tools.ietf.org/html/rfc2765 // if the ip is IPv4 and the cluster IPFamily is IPv6, otherwise returns the same ip func TranslateIPv4ToIPv6(ip string) string { if TestContext.IPFamily == \"ipv6\" && !k8utilnet.IsIPv6String(ip) { ip = \"0::ffff:\" + ip } return ip } // NewNetworkingTestConfig creates and sets up a new test config helper. func NewNetworkingTestConfig(f *Framework) *NetworkingTestConfig { config := &NetworkingTestConfig{f: f, Namespace: f.Namespace.Name, HostNetwork: true}"} {"_id":"doc-en-kubernetes-c6211447ec695e53357eff7fb7331b1bca7ada265278c4867c6e4291a58cf13c","title":"","text":"// The configuration of NodeKiller. NodeKiller NodeKillerConfig // The Default IP Family of the cluster (\"ipv4\" or \"ipv6\") IPFamily string } // NodeKillerConfig describes configuration of NodeKiller -- a utility to"} {"_id":"doc-en-kubernetes-b897724f3829acbc9d7cb9fdfa6f5965b13cebce7d9c4a6915451445096e8b9f","title":"","text":"// create pod which using hostport on the specified node according to the nodeSelector func createHostPortPodOnNode(f *framework.Framework, podName, ns, hostIP string, port int32, protocol v1.Protocol, nodeSelector map[string]string, expectScheduled bool) { hostIP = framework.TranslateIPv4ToIPv6(hostIP) createPausePod(f, pausePodConfig{ Name: podName, Ports: []v1.ContainerPort{"} {"_id":"doc-en-kubernetes-ab3e82462059f73d9fd258c46a15e2a4b37e5f7d037f3912201cd89c8f663e7d","title":"","text":"if service.Spec.ClusterIP != \"\" { allErrs = append(allErrs, field.Forbidden(specPath.Child(\"clusterIP\"), \"must be empty for ExternalName services\")) } if len(service.Spec.ExternalName) > 0 { allErrs = append(allErrs, ValidateDNS1123Subdomain(service.Spec.ExternalName, specPath.Child(\"externalName\"))...) 
// The value (a CNAME) may have a trailing dot to denote it as fully qualified cname := strings.TrimSuffix(service.Spec.ExternalName, \".\") if len(cname) > 0 { allErrs = append(allErrs, ValidateDNS1123Subdomain(cname, specPath.Child(\"externalName\"))...) } else { allErrs = append(allErrs, field.Required(specPath.Child(\"externalName\"), \"\")) }"} {"_id":"doc-en-kubernetes-b6453372f90721db4fb533c1a4e75ff799d077bc81ac044c746996e5c388174f","title":"","text":"numErrs: 0, }, { name: \"valid ExternalName (trailing dot)\", tweakSvc: func(s *core.Service) { s.Spec.Type = core.ServiceTypeExternalName s.Spec.ClusterIP = \"\" s.Spec.ExternalName = \"foo.bar.example.com.\" }, numErrs: 0, }, { name: \"invalid ExternalName clusterIP (valid IP)\", tweakSvc: func(s *core.Service) { s.Spec.Type = core.ServiceTypeExternalName"} {"_id":"doc-en-kubernetes-711309b8a197481470234435c3fb1566d17746156bc15633764895be869cafde","title":"","text":"// this is a blocking call and should only return when the pod and its containers are killed. err := c.killPodFunc(pod, status, nil) if err != nil { return fmt.Errorf(\"preemption: pod %s failed to evict %v\", format.Pod(pod), err) klog.Warningf(\"preemption: pod %s failed to evict %v\", format.Pod(pod), err) // In future syncPod loops, the kubelet will retry the pod deletion steps that it was stuck on. 
continue } klog.Infof(\"preemption: pod %s evicted successfully\", format.Pod(pod)) }"} {"_id":"doc-en-kubernetes-850b6a8124b71cca5892a7aba48f0f9c649ef28f504e0f2836f3897ec607789f","title":"","text":") type fakePodKiller struct { killedPods []*v1.Pod killedPods []*v1.Pod errDuringPodKilling bool } func newFakePodKiller() *fakePodKiller { return &fakePodKiller{killedPods: []*v1.Pod{}} func newFakePodKiller(errPodKilling bool) *fakePodKiller { return &fakePodKiller{killedPods: []*v1.Pod{}, errDuringPodKilling: errPodKilling} } func (f *fakePodKiller) clear() {"} {"_id":"doc-en-kubernetes-2a8096b623bc3a2dec32c09d70148f82f9fa42bb49abf47c3763472814cbb214","title":"","text":"} func (f *fakePodKiller) killPodNow(pod *v1.Pod, status v1.PodStatus, gracePeriodOverride *int64) error { if f.errDuringPodKilling { f.killedPods = []*v1.Pod{} return fmt.Errorf(\"problem killing pod %v\", pod) } f.killedPods = append(f.killedPods, pod) return nil }"} {"_id":"doc-en-kubernetes-4f620cf759f8722ab8856392056ff7f4fd933bc5e037c926fcceb7c3d29e6dc2","title":"","text":"} } func TestEvictPodsToFreeRequestsWithError(t *testing.T) { defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.ExperimentalCriticalPodAnnotation, true)() type testRun struct { testName string inputPods []*v1.Pod insufficientResources admissionRequirementList expectErr bool expectedOutput []*v1.Pod } podProvider := newFakePodProvider() podKiller := newFakePodKiller(true) criticalPodAdmissionHandler := getTestCriticalPodAdmissionHandler(podProvider, podKiller) allPods := getTestPods() runs := []testRun{ { testName: \"multiple pods eviction error\", inputPods: []*v1.Pod{ allPods[critical], allPods[bestEffort], allPods[burstable], allPods[highRequestBurstable], allPods[guaranteed], allPods[highRequestGuaranteed]}, insufficientResources: getAdmissionRequirementList(0, 550, 0), expectErr: false, expectedOutput: nil, }, } for _, r := range runs { podProvider.setPods(r.inputPods) outErr := 
criticalPodAdmissionHandler.evictPodsToFreeRequests(allPods[critical], r.insufficientResources) outputPods := podKiller.getKilledPods() if !r.expectErr && outErr != nil { t.Errorf(\"evictPodsToFreeRequests returned an unexpected error during the %s test. Err: %v\", r.testName, outErr) } else if r.expectErr && outErr == nil { t.Errorf(\"evictPodsToFreeRequests expected an error but returned a successful output=%v during the %s test.\", outputPods, r.testName) } else if !podListEqual(r.expectedOutput, outputPods) { t.Errorf(\"evictPodsToFreeRequests expected %v but got %v during the %s test.\", r.expectedOutput, outputPods, r.testName) } podKiller.clear() } } func TestEvictPodsToFreeRequests(t *testing.T) { defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.ExperimentalCriticalPodAnnotation, true)() type testRun struct {"} {"_id":"doc-en-kubernetes-1bf499caec70808c6aa4d545ed5021a8572fc499bdd53998d002a0bc1d820b6d","title":"","text":"expectedOutput []*v1.Pod } podProvider := newFakePodProvider() podKiller := newFakePodKiller() podKiller := newFakePodKiller(false) criticalPodAdmissionHandler := getTestCriticalPodAdmissionHandler(podProvider, podKiller) allPods := getTestPods() runs := []testRun{"} {"_id":"doc-en-kubernetes-596c09ffcc4e884183e1645e13dac17c96ae28b0610d23bfc959ba43ee9b8e12","title":"","text":"if err := opts.Validate(args); err != nil { klog.Fatalf(\"failed validate: %v\", err) } klog.Fatal(opts.Run()) if err := opts.Run(); err != nil { klog.Exit(err) } }, }"} {"_id":"doc-en-kubernetes-cb0107493533f820b34c539192006535d74417fa6f51a81419bf6e5cbe3d2296","title":"","text":"}, }, { // a role for the csi external attacher ObjectMeta: metav1.ObjectMeta{Name: \"system:csi-external-attacher\"}, Rules: []rbacv1.PolicyRule{ rbacv1helpers.NewRule(\"get\", \"list\", \"watch\", \"update\", \"patch\").Groups(legacyGroup).Resources(\"persistentvolumes\").RuleOrDie(), rbacv1helpers.NewRule(\"get\", \"list\", 
\"watch\").Groups(legacyGroup).Resources(\"nodes\").RuleOrDie(), rbacv1helpers.NewRule(\"get\", \"list\", \"watch\", \"update\", \"patch\").Groups(storageGroup).Resources(\"volumeattachments\").RuleOrDie(), rbacv1helpers.NewRule(\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\").Groups(legacyGroup).Resources(\"events\").RuleOrDie(), }, }, { // a role making the csrapprover controller approve a node client CSR ObjectMeta: metav1.ObjectMeta{Name: \"system:certificates.k8s.io:certificatesigningrequests:nodeclient\"}, Rules: []rbacv1.PolicyRule{"} {"_id":"doc-en-kubernetes-01088ddce912f218dcbcf091f77a80ddf0c6b0f8ae14ba3434a6bb7f818d5397","title":"","text":"Rules: kubeSchedulerRules, }) externalProvisionerRules := []rbacv1.PolicyRule{ rbacv1helpers.NewRule(\"create\", \"delete\", \"get\", \"list\", \"watch\").Groups(legacyGroup).Resources(\"persistentvolumes\").RuleOrDie(), rbacv1helpers.NewRule(\"get\", \"list\", \"watch\", \"update\", \"patch\").Groups(legacyGroup).Resources(\"persistentvolumeclaims\").RuleOrDie(), rbacv1helpers.NewRule(\"list\", \"watch\").Groups(storageGroup).Resources(\"storageclasses\").RuleOrDie(), rbacv1helpers.NewRule(\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\").Groups(legacyGroup).Resources(\"events\").RuleOrDie(), rbacv1helpers.NewRule(\"get\", \"list\", \"watch\").Groups(legacyGroup).Resources(\"nodes\").RuleOrDie(), } if utilfeature.DefaultFeatureGate.Enabled(features.CSINodeInfo) { externalProvisionerRules = append(externalProvisionerRules, rbacv1helpers.NewRule(\"get\", \"watch\", \"list\").Groups(\"storage.k8s.io\").Resources(\"csinodes\").RuleOrDie()) } roles = append(roles, rbacv1.ClusterRole{ // a role for the csi external provisioner ObjectMeta: metav1.ObjectMeta{Name: \"system:csi-external-provisioner\"}, Rules: externalProvisionerRules, }) addClusterRoleLabel(roles) return roles }"} 
{"_id":"doc-en-kubernetes-a35e2f6d2278a3d4338b67f593ae860d215763fd0fd83cb1828782d189d661f4","title":"","text":"creationTimestamp: null labels: kubernetes.io/bootstrapping: rbac-defaults name: system:csi-external-attacher rules: - apiGroups: - \"\" resources: - persistentvolumes verbs: - get - list - patch - update - watch - apiGroups: - \"\" resources: - nodes verbs: - get - list - watch - apiGroups: - storage.k8s.io resources: - volumeattachments verbs: - get - list - patch - update - watch - apiGroups: - \"\" resources: - events verbs: - create - get - list - patch - update - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" creationTimestamp: null labels: kubernetes.io/bootstrapping: rbac-defaults name: system:csi-external-provisioner rules: - apiGroups: - \"\" resources: - persistentvolumes verbs: - create - delete - get - list - watch - apiGroups: - \"\" resources: - persistentvolumeclaims verbs: - get - list - patch - update - watch - apiGroups: - storage.k8s.io resources: - storageclasses verbs: - list - watch - apiGroups: - \"\" resources: - events verbs: - create - get - list - patch - update - watch - apiGroups: - \"\" resources: - nodes verbs: - get - list - watch - apiGroups: - storage.k8s.io resources: - csinodes verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" creationTimestamp: null labels: kubernetes.io/bootstrapping: rbac-defaults name: system:discovery rules: - nonResourceURLs:"} {"_id":"doc-en-kubernetes-018ecba037828f5f59349091e3f0f6427d58a2f18f64e7a933c38b6586b32e59","title":"","text":"cgroupRoots = append(cgroupRoots, cm.NodeAllocatableRoot(s.CgroupRoot, s.CgroupDriver)) kubeletCgroup, err := cm.GetKubeletContainer(s.KubeletCgroups) if err != nil { return fmt.Errorf(\"failed to get the kubelet's cgroup: %v\", err) } if kubeletCgroup != 
\"\" { klog.Warningf(\"failed to get the kubelet's cgroup: %v. Kubelet system container metrics may be missing.\", err) } else if kubeletCgroup != \"\" { cgroupRoots = append(cgroupRoots, kubeletCgroup) } runtimeCgroup, err := cm.GetRuntimeContainer(s.ContainerRuntime, s.RuntimeCgroups) if err != nil { return fmt.Errorf(\"failed to get the container runtime's cgroup: %v\", err) } if runtimeCgroup != \"\" { klog.Warningf(\"failed to get the container runtime's cgroup: %v. Runtime system container metrics may be missing.\", err) } else if runtimeCgroup != \"\" { // RuntimeCgroups is optional, so ignore if it isn't specified cgroupRoots = append(cgroupRoots, runtimeCgroup) }"} {"_id":"doc-en-kubernetes-05cbbb1130d078f0cdb84100ef880fca820725df0ad18718541203fea4a908fe","title":"","text":"}, []string{\"runtime_handler\"}, ) // RunningPodCount is a gauge that tracks the number of Pods currently running RunningPodCount = metrics.NewGauge( &metrics.GaugeOpts{ Subsystem: KubeletSubsystem, Name: \"running_pod_count\", Help: \"Number of pods currently running\", StabilityLevel: metrics.ALPHA, }, ) // RunningContainerCount is a gauge that tracks the number of containers currently running RunningContainerCount = metrics.NewGaugeVec( &metrics.GaugeOpts{ Subsystem: KubeletSubsystem, Name: \"running_container_count\", Help: \"Number of containers currently running\", StabilityLevel: metrics.ALPHA, }, []string{\"container_state\"}, ) ) var registerMetrics sync.Once"} {"_id":"doc-en-kubernetes-17ff7a75a9db9e4ca6bc70a7ec1d793ea1974b069ef8822fa0a7e286df6efebc","title":"","text":"legacyregistry.MustRegister(CgroupManagerDuration) legacyregistry.MustRegister(PodWorkerStartDuration) legacyregistry.MustRegister(ContainersPerPodCount) legacyregistry.RawMustRegister(newPodAndContainerCollector(containerCache)) legacyregistry.MustRegister(PLEGRelistDuration) legacyregistry.MustRegister(PLEGDiscardEvents) legacyregistry.MustRegister(PLEGRelistInterval)"} 
{"_id":"doc-en-kubernetes-34e02a5b7da66294cb151af0a44c835598af80991b68a79683dc0a8d48e622e2","title":"","text":"legacyregistry.MustRegister(DeprecatedEvictionStatsAge) legacyregistry.MustRegister(DeprecatedDevicePluginRegistrationCount) legacyregistry.MustRegister(DeprecatedDevicePluginAllocationLatency) legacyregistry.MustRegister(RunningContainerCount) legacyregistry.MustRegister(RunningPodCount) if utilfeature.DefaultFeatureGate.Enabled(features.DynamicKubeletConfig) { legacyregistry.MustRegister(AssignedConfig) legacyregistry.MustRegister(ActiveConfig)"} {"_id":"doc-en-kubernetes-1719f8b5484dfc629f70b6ffffdab030df7896d0598f7417cff1da05b6362a47","title":"","text":"return time.Since(start).Seconds() } func newPodAndContainerCollector(containerCache kubecontainer.RuntimeCache) *podAndContainerCollector { return &podAndContainerCollector{ containerCache: containerCache, } } // Custom collector for current pod and container counts. type podAndContainerCollector struct { // Cache for accessing information about running containers. containerCache kubecontainer.RuntimeCache } // TODO(vmarmol): Split by source? var ( runningPodCountDesc = prometheus.NewDesc( prometheus.BuildFQName(\"\", KubeletSubsystem, \"running_pod_count\"), \"Number of pods currently running\", nil, nil) runningContainerCountDesc = prometheus.NewDesc( prometheus.BuildFQName(\"\", KubeletSubsystem, \"running_container_count\"), \"Number of containers currently running\", nil, nil) ) // Describe implements Prometheus' Describe method from the Collector interface. It sends all // available descriptions to the provided channel and returns once the last description has been sent. func (pc *podAndContainerCollector) Describe(ch chan<- *prometheus.Desc) { ch <- runningPodCountDesc ch <- runningContainerCountDesc } // Collect implements Prometheus' Collect method from the Collector interface. It's called by the Prometheus // registry when collecting metrics. 
func (pc *podAndContainerCollector) Collect(ch chan<- prometheus.Metric) { runningPods, err := pc.containerCache.GetPods() if err != nil { klog.Warningf(\"Failed to get running container information while collecting metrics: %v\", err) return } runningContainers := 0 for _, p := range runningPods { runningContainers += len(p.Containers) } ch <- prometheus.MustNewConstMetric( runningPodCountDesc, prometheus.GaugeValue, float64(len(runningPods))) ch <- prometheus.MustNewConstMetric( runningContainerCountDesc, prometheus.GaugeValue, float64(runningContainers)) } const configMapAPIPathFmt = \"/api/v1/namespaces/%s/configmaps/%s\" func configLabels(source *corev1.NodeConfigSource) (map[string]string, error) {"} {"_id":"doc-en-kubernetes-9543782f9f12fa4559a6dede0c9ffea8636fbf6ca800c33763147df6802a3ef4","title":"","text":"deps = [ \"//pkg/kubelet/container:go_default_library\", \"//pkg/kubelet/container/testing:go_default_library\", \"//pkg/kubelet/metrics:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/types:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/clock:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/diff:go_default_library\", \"//vendor/github.com/prometheus/client_model/go:go_default_library\", \"//vendor/github.com/stretchr/testify/assert:go_default_library\", ], )"} {"_id":"doc-en-kubernetes-b47795c0ab3f44c72016f8b2f55103796171d6d1da611c35c00d395c7a95701e","title":"","text":"g.updateRelistTime(timestamp) pods := kubecontainer.Pods(podList) // update running pod and container count updateRunningPodAndContainerMetrics(pods) g.podRecords.setCurrent(pods) // Compare the old and the current pods, and generate events."} {"_id":"doc-en-kubernetes-21f921969f0d4dfc592aceadceba3bf91413c5fd9364d3b30e36bcce31afdbdf","title":"","text":"return state } func updateRunningPodAndContainerMetrics(pods []*kubecontainer.Pod) { // Set the number of running pods in the parameter metrics.RunningPodCount.Set(float64(len(pods))) // 
intermediate map to store the count of each \"container_state\" containerStateCount := make(map[string]int) for _, pod := range pods { containers := pod.Containers for _, container := range containers { // update the corresponding \"container_state\" in map to set value for the gaugeVec metrics containerStateCount[string(container.State)]++ } } for key, value := range containerStateCount { metrics.RunningContainerCount.WithLabelValues(key).Set(float64(value)) } } func (pr podRecords) getOld(id types.UID) *kubecontainer.Pod { r, ok := pr[id] if !ok {"} {"_id":"doc-en-kubernetes-f9092873f54c1ef07c56a1364024bc84dfd5c8f62ea70ed79349cf7eba056d37","title":"","text":"\"testing\" \"time\" dto \"github.com/prometheus/client_model/go\" \"github.com/stretchr/testify/assert\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/apimachinery/pkg/util/clock\" \"k8s.io/apimachinery/pkg/util/diff\" kubecontainer \"k8s.io/kubernetes/pkg/kubelet/container\" containertest \"k8s.io/kubernetes/pkg/kubelet/container/testing\" \"k8s.io/kubernetes/pkg/kubelet/metrics\" ) const ("} {"_id":"doc-en-kubernetes-f8a6e0531ecdb6da2735962cfb98eb0c57cec7b361bde59f22317f92bc86ec5a","title":"","text":"assert.Exactly(t, []*PodLifecycleEvent{event}, actualEvents) } } func TestRunningPodAndContainerCount(t *testing.T) { fakeRuntime := &containertest.FakeRuntime{} runtimeCache, _ := kubecontainer.NewRuntimeCache(fakeRuntime) metrics.Register(runtimeCache) testPleg := newTestGenericPLEG() pleg, runtime := testPleg.pleg, testPleg.runtime runtime.AllPodList = []*containertest.FakePod{ {Pod: &kubecontainer.Pod{ ID: \"1234\", Containers: []*kubecontainer.Container{ createTestContainer(\"c1\", kubecontainer.ContainerStateRunning), createTestContainer(\"c2\", kubecontainer.ContainerStateUnknown), createTestContainer(\"c3\", kubecontainer.ContainerStateUnknown), }, }}, {Pod: &kubecontainer.Pod{ ID: \"4567\", Containers: []*kubecontainer.Container{ createTestContainer(\"c1\", kubecontainer.ContainerStateExited), }, }}, } 
pleg.relist() // assert for container count with label \"running\" actualMetricRunningContainerCount := &dto.Metric{} expectedMetricRunningContainerCount := float64(1) metrics.RunningContainerCount.WithLabelValues(string(kubecontainer.ContainerStateRunning)).Write(actualMetricRunningContainerCount) assert.Equal(t, expectedMetricRunningContainerCount, actualMetricRunningContainerCount.GetGauge().GetValue()) // assert for container count with label \"unknown\" actualMetricUnknownContainerCount := &dto.Metric{} expectedMetricUnknownContainerCount := float64(2) metrics.RunningContainerCount.WithLabelValues(string(kubecontainer.ContainerStateUnknown)).Write(actualMetricUnknownContainerCount) assert.Equal(t, expectedMetricUnknownContainerCount, actualMetricUnknownContainerCount.GetGauge().GetValue()) // assert for running pod count actualMetricRunningPodCount := &dto.Metric{} metrics.RunningPodCount.Write(actualMetricRunningPodCount) expectedMetricRunningPodCount := float64(2) assert.Equal(t, expectedMetricRunningPodCount, actualMetricRunningPodCount.GetGauge().GetValue()) } "} {"_id":"doc-en-kubernetes-5dafb51231dc9dbd08233e7f2eba6d8689b1e61e7e379035727f5e3dfd5e629b","title":"","text":") const regexDescribe = \"Describe|KubeDescribe|SIGDescribe\" const regexContext = \"Context\" const regexContext = \"^Context$\" type visitor struct { FileSet *token.FileSet lastDescribe describe cMap ast.CommentMap FileSet *token.FileSet describes []describe cMap ast.CommentMap //list of all the conformance tests in the path tests []conformanceData } //describe contains text associated with ginkgo describe container type describe struct { rparen token.Pos text string lastContext context }"} {"_id":"doc-en-kubernetes-fab0943675aa8278428d8a70eecfaf133a4ea9c8b51e3c3694181ce6b2204f70","title":"","text":"return } err := validateTestName(v.getDescription(at.Value)) description := v.getDescription(at.Value) err := validateTestName(description) if err != nil { v.failf(at, err.Error()) return"} 
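The relist path above aggregates per-state container counts into an intermediate map before exporting them through the `running_container_count` gauge vector. A stdlib-only sketch of that aggregation, assuming simplified stand-ins: the `Pod` and `Container` types here are illustrative substitutes for `kubecontainer.Pod` and `kubecontainer.Container`:

```go
package main

import "fmt"

// Container is a simplified stand-in for kubecontainer.Container.
type Container struct {
	Name  string
	State string // e.g. "running", "exited", "unknown"
}

// Pod is a simplified stand-in for kubecontainer.Pod.
type Pod struct {
	ID         string
	Containers []*Container
}

// countContainerStates mirrors the intermediate map built in
// updateRunningPodAndContainerMetrics: one count per container state,
// later used to set a per-label gauge value.
func countContainerStates(pods []*Pod) map[string]int {
	counts := make(map[string]int)
	for _, pod := range pods {
		for _, c := range pod.Containers {
			counts[c.State]++
		}
	}
	return counts
}

func main() {
	// Same shape as the fixture in TestRunningPodAndContainerCount above.
	pods := []*Pod{
		{ID: "1234", Containers: []*Container{
			{Name: "c1", State: "running"},
			{Name: "c2", State: "unknown"},
			{Name: "c3", State: "unknown"},
		}},
		{ID: "4567", Containers: []*Container{
			{Name: "c1", State: "exited"},
		}},
	}
	counts := countContainerStates(pods)
	fmt.Println(len(pods), counts["running"], counts["unknown"], counts["exited"])
}
```

The pod count is just `len(pods)`, which is why the test above expects a running-pod gauge of 2 even though one pod's only container has exited.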
{"_id":"doc-en-kubernetes-a1d6c4fa2cc096d0825a3a755ed85f4646a31f019598316d0c7740dc3cf5e00b","title":"","text":"} func (v *visitor) getDescription(value string) string { if len(v.lastDescribe.lastContext.text) > 0 { return strings.Trim(v.lastDescribe.text, \"\\\"\") + \" \" + strings.Trim(v.lastDescribe.lastContext.text, \"\\\"\") + \" \" + strings.Trim(value, \"\\\"\") tokens := []string{} for _, describe := range v.describes { tokens = append(tokens, describe.text) if len(describe.lastContext.text) > 0 { tokens = append(tokens, describe.lastContext.text) } } tokens = append(tokens, value) trimmed := []string{} for _, token := range tokens { trimmed = append(trimmed, strings.Trim(token, \"\\\"\")) } return strings.Trim(v.lastDescribe.text, \"\\\"\") + \" \" + strings.Trim(value, \"\\\"\") return strings.Join(trimmed, \" \") } var ("} {"_id":"doc-en-kubernetes-1b7c5cf55e72b22da9f0dfe6cb118423eb0e4a45f9139cbc8117b22bfd9efa9f","title":"","text":"// It() with a manually embedded [Conformance] tag, which it will complain // about. 
func (v *visitor) Visit(node ast.Node) (w ast.Visitor) { lastDescribe := len(v.describes) - 1 switch t := node.(type) { case *ast.CallExpr: if name := v.matchFuncName(t, regexDescribe); name != \"\" && len(t.Args) >= 2 { v.lastDescribe = describe{text: name} v.describes = append(v.describes, describe{text: name, rparen: t.Rparen}) } else if name := v.matchFuncName(t, regexContext); name != \"\" && len(t.Args) >= 2 { v.lastDescribe.lastContext = context{text: name} if lastDescribe > -1 { v.describes[lastDescribe].lastContext = context{text: name} } } else if v.isConformanceCall(t) { totalConfTests++ v.emit(t.Args[0])"} {"_id":"doc-en-kubernetes-9db5b58bda35d43d953b371c718e8fe720cbb0f795ca04b80f10d1df15faee76","title":"","text":"return nil } } // If we're past the position of the last describe's rparen, pop the describe off if lastDescribe > -1 && node != nil { if node.Pos() > v.describes[lastDescribe].rparen { v.describes = v.describes[:lastDescribe] } } return v }"} {"_id":"doc-en-kubernetes-4d6a8e4f16dd704e186269fbbedbfbf667796bc05cdd2f97c9c7eac4fcd342ab","title":"","text":"Description: `By default the stdout and stderr from the process being executed in a pod MUST be sent to the pod's logs.` + \"\n\n\"}}, }, // SIGDescribe + KubeDescribe + It, Describe + KubeDescribe + It {\"e2e/foo.go\", ` var _ = framework.SIGDescribe(\"Feature\", func() { KubeDescribe(\"Described by\", func() { // Description: description1 framework.ConformanceIt(\"A ConformanceIt\", func() {}) }) Describe(\"Also described via\", func() { KubeDescribe(\"A nested\", func() { // Description: description2 framework.ConformanceIt(\"ConformanceIt\", func() {}) }) }) })`, []conformanceData{ {URL: \"https://github.com/kubernetes/kubernetes/tree/master/e2e/foo.go#L6\", TestName: \"Feature Described by A ConformanceIt\", Description: \"description1\n\n\"}, {URL: \"https://github.com/kubernetes/kubernetes/tree/master/e2e/foo.go#L11\", TestName: \"Feature Also described via A nested ConformanceIt\", 
Description: \"description2\n\n\"}, }}, // KubeDescribe + Context + It {\"e2e/foo.go\", ` var _ = framework.KubeDescribe(\"Feature\", func() {"} {"_id":"doc-en-kubernetes-73af88422fc561709f5702363187384a43a09f9a1def9281ff34984b6828f93f","title":"","text":"*confDoc = true tests := scanfile(test.filename, code) if !reflect.DeepEqual(tests, test.output) { t.Errorf(\"code:\n%s\ngot %v\nwant %v\", t.Errorf(\"code:\n%s\ngot %+v\nwant %+v\", code, tests, test.output) } }"} {"_id":"doc-en-kubernetes-98724a3dd6567fbc9b6a7f091b3c3d71aeb828059179a74ba47c6fcb0149a1ef","title":"","text":"// TODO (vladimirvivien) would be nice to name socket with a .sock extension // for consistency. csiAddrTemplate = \"/var/lib/kubelet/plugins/%v/csi.sock\" csiTimeout = 15 * time.Second csiTimeout = 2 * time.Minute volNameSep = \"^\" volDataFileName = \"vol_data.json\" fsTypeBlockName = \"block\""} {"_id":"doc-en-kubernetes-3deefce19ef18251f431424bc0cf2b2c141a3750a807b9294aabebb343930379","title":"","text":"export DOCKER_CLI_EXPERIMENTAL := enabled # golang version should match the golang version from https://github.com/coreos/etcd/releases for the current ETCD_VERSION. GOLANG_VERSION?=1.10.4 GOARM=7 GOARM?=7 TEMP_DIR:=$(shell mktemp -d) ifeq ($(ARCH),amd64)"} {"_id":"doc-en-kubernetes-5199e8051f4a8a10507238d8502b89fb3cb75ca158d43eef07f28d02ee63e4a3","title":"","text":"# Download etcd in a golang container and cross-compile it statically # For each release create a tmp dir 'etcd_release_tmp_dir' and unpack the release tar there. 
arch_prefix=\"\" ifeq ($(ARCH),arm) arch_prefix=\"GOARM=$(GOARM)\" endif for version in $(BUNDLED_ETCD_VERSIONS); do etcd_release_tmp_dir=$(shell mktemp -d); docker run --interactive -v $${etcd_release_tmp_dir}:/etcdbin golang:$(GOLANG_VERSION) /bin/bash -c \"git clone https://github.com/coreos/etcd /go/src/github.com/coreos/etcd && cd /go/src/github.com/coreos/etcd && git checkout v$${version} && GOARM=$(GOARM) GOARCH=$(ARCH) ./build && $(arch_prefix) GOARCH=$(ARCH) ./build && cp -f bin/$(ARCH)/etcd* bin/etcd* /etcdbin; echo 'done'\"; cp $$etcd_release_tmp_dir/etcd $$etcd_release_tmp_dir/etcdctl $(TEMP_DIR)/; cp $(TEMP_DIR)/etcd $(TEMP_DIR)/etcd-$$version; "} {"_id":"doc-en-kubernetes-afeb873ad604deb3d1433110822bf5a88e7d9b03a642a7bf74222d89273cfc53","title":"","text":"include ../../hack/make-rules/Makefile.manifest REGISTRY ?= gcr.io/kubernetes-e2e-test-images GOARM=7 GOARM ?= 7 QEMUVERSION=v2.9.1 GOLANG_VERSION=1.12.6 export"} {"_id":"doc-en-kubernetes-7b88e26f2bfc203b949f7c5346ae3746b2f67b1ab60783015d80cad07a8bf13a","title":"","text":"if [[ $(id -u) != 0 ]]; then sudo=sudo fi \"${sudo}\" \"${KUBE_ROOT}/third_party/multiarch/qemu-user-static/register/register.sh\" --reset ${sudo} \"${KUBE_ROOT}/third_party/multiarch/qemu-user-static/register/register.sh\" --reset curl -sSL https://github.com/multiarch/qemu-user-static/releases/download/\"${QEMUVERSION}\"/x86_64_qemu-\"${QEMUARCHS[$arch]}\"-static.tar.gz | tar -xz -C \"${temp_dir}\" # Ensure we don't get surprised by umask settings chmod 0755 \"${temp_dir}/qemu-${QEMUARCHS[$arch]}-static\""} {"_id":"doc-en-kubernetes-7d5c958d2524a624d4bcceb9b75231c79efe99baa0737098c8882844da9ba670","title":"","text":"# This function is for building the go code bin() { local arch_prefix=\"\" if [[ \"${ARCH}\" == \"arm\" ]]; then arch_prefix=\"GOARM=${GOARM:-7}\" fi for SRC in $@; do docker run --rm -it -v \"${TARGET}:${TARGET}:Z\" -v \"${KUBE_ROOT}\":/go/src/k8s.io/kubernetes:Z golang:\"${GOLANG_VERSION}\" /bin/bash -c \" cd 
/go/src/k8s.io/kubernetes/test/images/${SRC_DIR} && CGO_ENABLED=0 GOARM=${GOARM} GOARCH=${ARCH} go build -a -installsuffix cgo --ldflags '-w' -o ${TARGET}/${SRC} ./$(dirname \"${SRC}\")\" CGO_ENABLED=0 ${arch_prefix} GOARCH=${ARCH} go build -a -installsuffix cgo --ldflags '-w' -o ${TARGET}/${SRC} ./$(dirname \"${SRC}\")\" done }"} {"_id":"doc-en-kubernetes-fc48e5f8a1a62067e66c99601a3167b06928d218c7eda19efabc6f9eaa85f28f","title":"","text":"SRCS=regression-issue-74839 ARCH ?= amd64 TARGET ?= $(CURDIR) GOARM = 7 GOARM ?= 7 GOLANG_VERSION ?= latest SRC_DIR = $(notdir $(shell pwd))"} {"_id":"doc-en-kubernetes-5ef2b01372a2f75b1530dcf928370fc3f705c29b1e9c11b68d7eed4d253ef79f","title":"","text":"// get an expired CSR (simulating historical output) server.backdate = 2 * time.Hour server.expectUserAgent = \"FirstClient\" server.SetExpectUserAgent(\"FirstClient\") ok, err := r.RotateCerts() if !ok || err != nil { t.Fatalf(\"unexpected rotation err: %t %v\", ok, err)"} {"_id":"doc-en-kubernetes-a86f9f82169b15d142963d9cbbe6ce4daacdd81c26b9518ed7b22ead5b1772e3","title":"","text":"// if m.Current() == nil, then we try again and get a valid // client server.backdate = 0 server.expectUserAgent = \"FirstClient\" server.SetExpectUserAgent(\"FirstClient\") if ok, err := r.RotateCerts(); !ok || err != nil { t.Fatalf(\"unexpected rotation err: %t %v\", ok, err) }"} {"_id":"doc-en-kubernetes-defaaf467cd86634adb5e63054402ed4ca6984ff4b3dd65672fdc46555014446","title":"","text":"} // if m.Current() != nil, then we should use the second client server.expectUserAgent = \"SecondClient\" server.SetExpectUserAgent(\"SecondClient\") if ok, err := r.RotateCerts(); !ok || err != nil { t.Fatalf(\"unexpected rotation err: %t %v\", ok, err) }"} {"_id":"doc-en-kubernetes-34068def6f22dabd8bab672186b322e3c545bda9bd0ccb9cb1ed2ca3bfbb9287","title":"","text":"serverCA *x509.Certificate backdate time.Duration userAgentLock sync.Mutex expectUserAgent string lock sync.Mutex csr 
*certapi.CertificateSigningRequest } func (s *csrSimulator) SetExpectUserAgent(a string) { s.userAgentLock.Lock() defer s.userAgentLock.Unlock() s.expectUserAgent = a } func (s *csrSimulator) ExpectUserAgent() string { s.userAgentLock.Lock() defer s.userAgentLock.Unlock() return s.expectUserAgent } func (s *csrSimulator) ServeHTTP(w http.ResponseWriter, req *http.Request) { s.lock.Lock() defer s.lock.Unlock()"} {"_id":"doc-en-kubernetes-30f64af020860e042ceac227845873f39f0320232eabe65ad983ae62a460f02a","title":"","text":"q := req.URL.Query() q.Del(\"timeout\") q.Del(\"timeoutSeconds\") q.Del(\"allowWatchBookmarks\") req.URL.RawQuery = q.Encode() t.Logf(\"Request %q %q %q\", req.Method, req.URL, req.UserAgent()) if len(s.expectUserAgent) > 0 && req.UserAgent() != s.expectUserAgent { if a := s.ExpectUserAgent(); len(a) > 0 && req.UserAgent() != a { t.Errorf(\"Unexpected user agent: %s\", req.UserAgent()) }"} {"_id":"doc-en-kubernetes-a6183e92efe4e8867ac74ee1c9c13ba5ec072f29be9361c40892375548aaa126","title":"","text":"if sp.failScore { return 0, framework.NewStatus(framework.Error, fmt.Sprintf(\"injecting failure for pod %v\", p.Name)) } score := 10 if nodeName == sp.highScoreNode { if sp.numCalled == 1 { // The first node is scored the highest, the rest is scored lower. sp.highScoreNode = nodeName score = 100 } return score, nil"} {"_id":"doc-en-kubernetes-5061ef39869e7c4b7c8b25a2e4816212f77bc35dd6f603efab4f0d382e4695f0","title":"","text":"cs := context.clientSet // Add multiple nodes, one of them will be scored much higher than the others. 
nodes, err := createNodes(cs, \"test-node\", nil, 10) _, err := createNodes(cs, \"test-node\", nil, 10) if err != nil { t.Fatalf(\"Cannot create nodes: %v\", err) } scPlugin.highScoreNode = nodes[3].Name for i, fail := range []bool{false, true} { scPlugin.failScore = fail"} {"_id":"doc-en-kubernetes-2277bb6b3d2e068cd7ca7d81c0459848f9fbd782d438e4b62fca4a558e9658c9","title":"","text":"if err := saveVolumeData(dataDir, volDataFileName, volData); err != nil { klog.Error(log(\"failed to save volume info data: %v\", err)) if err := os.RemoveAll(dataDir); err != nil { klog.Error(log(\"failed to remove dir after error [%s]: %v\", dataDir, err)) return nil, err if removeErr := os.RemoveAll(dataDir); removeErr != nil { klog.Error(log(\"failed to remove dir after error [%s]: %v\", dataDir, removeErr)) } return nil, err }"} {"_id":"doc-en-kubernetes-eaa826d28808d3a1dd132bb1b3e5e9e24b7ae07fc799f6caf1b97a5420d32784","title":"","text":"// FlattenListVisitor flattens any objects that runtime.ExtractList recognizes as a list // - has an \"Items\" public field that is a slice of runtime.Objects or objects satisfying // that interface - into multiple Infos. An error on any sub item (for instance, if a List // contains an object that does not have a registered client or resource) will terminate // the visit. // TODO: allow errors to be aggregated? // that interface - into multiple Infos. Returns nil in the case of no errors. // When an error is hit on sub items (for instance, if a List contains an object that does // not have a registered client or resource), returns an aggregate error. 
type FlattenListVisitor struct { visitor Visitor typer runtime.ObjectTyper"} {"_id":"doc-en-kubernetes-4e054c9fcf038c2771f36245f66f256e14ac770ee2a8efde46c7b74b118f1acb","title":"","text":"if info.Mapping != nil && !info.Mapping.GroupVersionKind.Empty() { preferredGVKs = append(preferredGVKs, info.Mapping.GroupVersionKind) } errs := []error{} for i := range items { item, err := v.mapper.infoForObject(items[i], v.typer, preferredGVKs) if err != nil { return err errs = append(errs, err) continue } if len(info.ResourceVersion) != 0 { item.ResourceVersion = info.ResourceVersion } if err := fn(item, nil); err != nil { return err errs = append(errs, err) } } return nil return utilerrors.NewAggregate(errs) }) }"} {"_id":"doc-en-kubernetes-8edad56653fceaf1e013a4656519a883ca149bdd3de1ba9660eb9fb8e388b7de","title":"","text":"import ( \"bytes\" \"errors\" \"fmt\" \"io\" \"io/ioutil\" \"strings\" \"testing\" \"time\""} {"_id":"doc-en-kubernetes-17d915fe4cc20cfba59f24b78016185f6e6c448928a63ed9757172fa828ad589","title":"","text":"t.Fatal(spew.Sdump(test.Infos)) } } func TestFlattenListVisitorWithVisitorError(t *testing.T) { b := newDefaultBuilder(). FilenameParam(false, &FilenameOptions{Recursive: false, Filenames: []string{\"../../artifacts/deeply-nested.yaml\"}}). 
Flatten() test := &testVisitor{InjectErr: errors.New(\"visitor error\")} err := b.Do().Visit(test.Handle) if err == nil || !strings.Contains(err.Error(), \"visitor error\") { t.Fatal(err) } if len(test.Infos) != 6 { t.Fatal(spew.Sdump(test.Infos)) } } "} {"_id":"doc-en-kubernetes-85585bad3884406a52bbbeefc29e8aa1c6cb204fb1544355dfe0eb5545463aea","title":"","text":"testEndpointReachability(clusterIP, sp.Port, sp.Protocol, execPod) } } func testReachabilityOverNodePorts(nodes *v1.NodeList, sp v1.ServicePort, pod *v1.Pod) { internalAddrs := e2enode.CollectAddresses(nodes, v1.NodeInternalIP) externalAddrs := e2enode.CollectAddresses(nodes, v1.NodeExternalIP) for _, internalAddr := range internalAddrs { // If the node's internal address points to localhost, then we are not // able to test the service reachability via that address if isInvalidOrLocalhostAddress(internalAddr) { e2elog.Logf(\"skipping testEndpointReachability() for internal address %s\", internalAddr) continue } testEndpointReachability(internalAddr, sp.NodePort, sp.Protocol, pod) } for _, externalAddr := range externalAddrs {"} {"_id":"doc-en-kubernetes-34ff268fd37f741f9f03f9e11328dff361b69ced2cce89d554a792d34eb3d11d","title":"","text":"} } // isInvalidOrLocalhostAddress returns `true` if the provided `ip` is either not // parsable or the loopback address. Otherwise it will return `false`. func isInvalidOrLocalhostAddress(ip string) bool { parsedIP := net.ParseIP(ip) if parsedIP == nil || parsedIP.IsLoopback() { return true } return false } // testEndpointReachability tests reachability to endpoints (i.e. IP, ServiceName) and ports. Test request is initiated from specified execPod. 
// TCP and UDP protocol based services are supported at this moment // TODO: add support to test SCTP Protocol based services.\"}
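The `isInvalidOrLocalhostAddress` helper above filters out node addresses that cannot be probed from outside the node before testing NodePort reachability. A self-contained sketch of the same check using only the standard library's `net` package:

```go
package main

import (
	"fmt"
	"net"
)

// isInvalidOrLocalhostAddress reports whether ip is either not parsable
// or a loopback address, mirroring the e2e service helper above.
func isInvalidOrLocalhostAddress(ip string) bool {
	parsed := net.ParseIP(ip)
	return parsed == nil || parsed.IsLoopback()
}

func main() {
	// Loopback (IPv4 and IPv6) and unparsable strings are skipped;
	// routable addresses are kept for the reachability test.
	for _, ip := range []string{"127.0.0.1", "::1", "10.0.0.7", "not-an-ip"} {
		fmt.Printf("%s -> skip=%v\n", ip, isInvalidOrLocalhostAddress(ip))
	}
}
```

`net.ParseIP` returns nil for malformed input, so the nil check and `IsLoopback` together cover both failure modes the test wants to avoid.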
{"_id":"doc-en-kubernetes-cff78a710cb05b150c423100ee6755bd7596de68884f304c721415a321d928e9","title":"","text":"slowWebhookCleanup() }) ginkgo.It(\"patching/updating a validating webhook should work\", func() { client := f.ClientSet admissionClient := client.AdmissionregistrationV1() ginkgo.By(\"Creating a validating webhook configuration\") hook, err := createValidatingWebhookConfiguration(f, &admissionregistrationv1.ValidatingWebhookConfiguration{ ObjectMeta: metav1.ObjectMeta{ Name: f.UniqueName, }, Webhooks: []admissionregistrationv1.ValidatingWebhook{ newDenyConfigMapWebhookFixture(f, context), }, }) framework.ExpectNoError(err, \"Creating validating webhook configuration\") defer func() { err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Delete(hook.Name, nil) framework.ExpectNoError(err, \"Deleting validating webhook configuration\") }() ginkgo.By(\"Creating a configMap that does not comply to the validation webhook rules\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := namedNonCompliantConfigMap(string(uuid.NewUUID()), f) _, err = client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err == nil { err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") return false, nil } if !strings.Contains(err.Error(), \"denied\") { return false, err } return true, nil }) ginkgo.By(\"Updating a validating webhook configuration's rules to not include the create operation\") err = retry.RetryOnConflict(retry.DefaultRetry, func() error { h, err := admissionClient.ValidatingWebhookConfigurations().Get(f.UniqueName, metav1.GetOptions{}) framework.ExpectNoError(err, \"Getting validating webhook configuration\") h.Webhooks[0].Rules[0].Operations = []admissionregistrationv1.OperationType{admissionregistrationv1.Update} _, err = admissionClient.ValidatingWebhookConfigurations().Update(h) return err }) 
framework.ExpectNoError(err, \"Updating validating webhook configuration\") ginkgo.By(\"Creating a configMap that does not comply to the validation webhook rules\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := namedNonCompliantConfigMap(string(uuid.NewUUID()), f) _, err = client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err != nil { if !strings.Contains(err.Error(), \"denied\") { return false, err } return false, nil } err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") return true, nil }) framework.ExpectNoError(err, \"Waiting for configMap in namespace %s to be allowed creation since webhook was updated to not validate create\", f.Namespace.Name) ginkgo.By(\"Patching a validating webhook configuration's rules to include the create operation\") hook, err = admissionClient.ValidatingWebhookConfigurations().Patch( f.UniqueName, types.JSONPatchType, []byte(`[{\"op\": \"replace\", \"path\": \"/webhooks/0/rules/0/operations\", \"value\": [\"CREATE\"]}]`)) framework.ExpectNoError(err, \"Patching validating webhook configuration\") ginkgo.By(\"Creating a configMap that does not comply to the validation webhook rules\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := namedNonCompliantConfigMap(string(uuid.NewUUID()), f) _, err = client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err == nil { err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") return false, nil } if !strings.Contains(err.Error(), \"denied\") { return false, err } return true, nil }) framework.ExpectNoError(err, \"Waiting for configMap in namespace %s to be denied creation by validating webhook\", f.Namespace.Name) }) ginkgo.It(\"patching/updating a mutating webhook should work\", func() { client := f.ClientSet 
admissionClient := client.AdmissionregistrationV1() ginkgo.By(\"Creating a mutating webhook configuration\") hook, err := createMutatingWebhookConfiguration(f, &admissionregistrationv1.MutatingWebhookConfiguration{ ObjectMeta: metav1.ObjectMeta{ Name: f.UniqueName, }, Webhooks: []admissionregistrationv1.MutatingWebhook{ newMutateConfigMapWebhookFixture(f, context, 1), }, }) framework.ExpectNoError(err, \"Creating mutating webhook configuration\") defer func() { err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().Delete(hook.Name, nil) framework.ExpectNoError(err, \"Deleting mutating webhook configuration\") }() hook, err = admissionClient.MutatingWebhookConfigurations().Get(f.UniqueName, metav1.GetOptions{}) framework.ExpectNoError(err, \"Getting mutating webhook configuration\") ginkgo.By(\"Updating a mutating webhook configuration's rules to not include the create operation\") hook.Webhooks[0].Rules[0].Operations = []admissionregistrationv1.OperationType{admissionregistrationv1.Update} hook, err = admissionClient.MutatingWebhookConfigurations().Update(hook) framework.ExpectNoError(err, \"Updating mutating webhook configuration\") ginkgo.By(\"Creating a configMap that should not be mutated\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := namedToBeMutatedConfigMap(string(uuid.NewUUID()), f) created, err := client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err != nil { return false, err } err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") _, ok := created.Data[\"mutation-stage-1\"] return !ok, nil }) framework.ExpectNoError(err, \"Waiting for configMap in namespace %s to not be mutated\", f.Namespace.Name) ginkgo.By(\"Patching a mutating webhook configuration's rules to include the create operation\") hook, err = admissionClient.MutatingWebhookConfigurations().Patch( f.UniqueName,
types.JSONPatchType, []byte(`[{\"op\": \"replace\", \"path\": \"/webhooks/0/rules/0/operations\", \"value\": [\"CREATE\"]}]`)) framework.ExpectNoError(err, \"Patching mutating webhook configuration\") ginkgo.By(\"Creating a configMap that should be mutated\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := namedToBeMutatedConfigMap(string(uuid.NewUUID()), f) created, err := client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err != nil { return false, err } err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") _, ok := created.Data[\"mutation-stage-1\"] return ok, nil }) framework.ExpectNoError(err, \"Waiting for configMap in namespace %s to be mutated\", f.Namespace.Name) }) ginkgo.It(\"listing validating webhooks should work\", func() { testListSize := 10 testUUID := string(uuid.NewUUID()) for i := 0; i < testListSize; i++ { name := fmt.Sprintf(\"%s-%d\", f.UniqueName, i) _, err := createValidatingWebhookConfiguration(f, &admissionregistrationv1.ValidatingWebhookConfiguration{ ObjectMeta: metav1.ObjectMeta{ Name: name, Labels: map[string]string{\"e2e-list-test-uuid\": testUUID}, }, Webhooks: []admissionregistrationv1.ValidatingWebhook{ newDenyConfigMapWebhookFixture(f, context), }, }) framework.ExpectNoError(err, \"Creating validating webhook configuration\") } selectorListOpts := metav1.ListOptions{LabelSelector: \"e2e-list-test-uuid=\" + testUUID} ginkgo.By(\"Listing all of the created validation webhooks\") list, err := client.AdmissionregistrationV1beta1().ValidatingWebhookConfigurations().List(selectorListOpts) framework.ExpectNoError(err, \"Listing validating webhook configurations\") framework.ExpectEqual(len(list.Items), testListSize) ginkgo.By(\"Creating a configMap that does not comply to the validation webhook rules\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := 
namedNonCompliantConfigMap(string(uuid.NewUUID()), f) _, err = client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err == nil { err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") return false, nil } if !strings.Contains(err.Error(), \"denied\") { return false, err } return true, nil }) framework.ExpectNoError(err, \"Waiting for configMap in namespace %s to be denied creation by validating webhook\", f.Namespace.Name) ginkgo.By(\"Deleting the collection of validation webhooks\") err = client.AdmissionregistrationV1beta1().ValidatingWebhookConfigurations().DeleteCollection(nil, selectorListOpts) framework.ExpectNoError(err, \"Deleting collection of validating webhook configurations\") ginkgo.By(\"Creating a configMap that does not comply to the validation webhook rules\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := namedNonCompliantConfigMap(string(uuid.NewUUID()), f) _, err = client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err != nil { if !strings.Contains(err.Error(), \"denied\") { return false, err } return false, nil } err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") return true, nil }) framework.ExpectNoError(err, \"Waiting for configMap in namespace %s to be allowed creation since there are no webhooks\", f.Namespace.Name) }) ginkgo.It(\"listing mutating webhooks should work\", func() { testListSize := 10 testUUID := string(uuid.NewUUID()) for i := 0; i < testListSize; i++ { name := fmt.Sprintf(\"%s-%d\", f.UniqueName, i) _, err := createMutatingWebhookConfiguration(f, &admissionregistrationv1.MutatingWebhookConfiguration{ ObjectMeta: metav1.ObjectMeta{ Name: name, Labels: map[string]string{\"e2e-list-test-uuid\": testUUID}, }, Webhooks: []admissionregistrationv1.MutatingWebhook{ 
newMutateConfigMapWebhookFixture(f, context, 1), }, }) framework.ExpectNoError(err, \"Creating mutating webhook configuration\") } selectorListOpts := metav1.ListOptions{LabelSelector: \"e2e-list-test-uuid=\" + testUUID} ginkgo.By(\"Listing all of the created mutating webhooks\") list, err := client.AdmissionregistrationV1beta1().MutatingWebhookConfigurations().List(selectorListOpts) framework.ExpectNoError(err, \"Listing mutating webhook configurations\") framework.ExpectEqual(len(list.Items), testListSize) ginkgo.By(\"Creating a configMap that should be mutated\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := namedToBeMutatedConfigMap(string(uuid.NewUUID()), f) created, err := client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err != nil { return false, err } err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") _, ok := created.Data[\"mutation-stage-1\"] return ok, nil }) framework.ExpectNoError(err, \"Waiting for configMap in namespace %s to be mutated\", f.Namespace.Name) ginkgo.By(\"Deleting the collection of mutating webhooks\") err = client.AdmissionregistrationV1beta1().MutatingWebhookConfigurations().DeleteCollection(nil, selectorListOpts) framework.ExpectNoError(err, \"Deleting collection of mutating webhook configurations\") ginkgo.By(\"Creating a configMap that should not be mutated\") err = wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) { cm := namedToBeMutatedConfigMap(string(uuid.NewUUID()), f) created, err := client.CoreV1().ConfigMaps(f.Namespace.Name).Create(cm) if err != nil { return false, err } err = client.CoreV1().ConfigMaps(f.Namespace.Name).Delete(cm.Name, nil) framework.ExpectNoError(err, \"Deleting successfully created configMap\") _, ok := created.Data[\"mutation-stage-1\"] return !ok, nil }) framework.ExpectNoError(err, \"Waiting for configMap in namespace
%s to not be mutated\", f.Namespace.Name) }) // TODO: add more e2e tests for mutating webhooks // 1. mutating webhook that mutates pod // 2. mutating webhook that sends empty patch"} {"_id":"doc-en-kubernetes-bb843dac607a7643afcf538ab1cf279ceef6bdb1550b6e7ad469e98636c770e7","title":"","text":"MatchLabels: map[string]string{f.UniqueName: \"true\"}, } sideEffectsNone := admissionregistrationv1.SideEffectClassNone _, err := createValidatingWebhookConfiguration(f, &admissionregistrationv1.ValidatingWebhookConfiguration{ ObjectMeta: metav1.ObjectMeta{ Name: configName, }, Webhooks: []admissionregistrationv1.ValidatingWebhook{ { Name: \"deny-unwanted-pod-container-name-and-label.k8s.io\", Rules: []admissionregistrationv1.RuleWithOperations{{ Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create}, Rule: admissionregistrationv1.Rule{ APIGroups: []string{\"\"}, APIVersions: []string{\"v1\"}, Resources: []string{\"pods\"}, }, }}, ClientConfig: admissionregistrationv1.WebhookClientConfig{ Service: &admissionregistrationv1.ServiceReference{ Namespace: namespace, Name: serviceName, Path: strPtr(\"/pods\"), Port: pointer.Int32Ptr(servicePort), }, CABundle: context.signingCert, }, SideEffects: &sideEffectsNone, AdmissionReviewVersions: []string{\"v1beta1\"}, // Scope the webhook to just this namespace NamespaceSelector: &metav1.LabelSelector{ MatchLabels: map[string]string{f.UniqueName: \"true\"}, }, }, { Name: \"deny-unwanted-configmap-data.k8s.io\", Rules: []admissionregistrationv1.RuleWithOperations{{ Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create, admissionregistrationv1.Update, admissionregistrationv1.Delete}, Rule: admissionregistrationv1.Rule{ APIGroups: []string{\"\"}, APIVersions: []string{\"v1\"}, Resources: []string{\"configmaps\"}, }, }}, // The webhook skips the namespace that has label \"skip-webhook-admission\":\"yes\" NamespaceSelector: &metav1.LabelSelector{ MatchLabels:
map[string]string{f.UniqueName: \"true\"}, MatchExpressions: []metav1.LabelSelectorRequirement{ { Key: skipNamespaceLabelKey, Operator: metav1.LabelSelectorOpNotIn, Values: []string{skipNamespaceLabelValue}, }, }, }, ClientConfig: admissionregistrationv1.WebhookClientConfig{ Service: &admissionregistrationv1.ServiceReference{ Namespace: namespace, Name: serviceName, Path: strPtr(\"/configmaps\"), Port: pointer.Int32Ptr(servicePort), }, CABundle: context.signingCert, }, SideEffects: &sideEffectsNone, AdmissionReviewVersions: []string{\"v1beta1\"}, }, newDenyPodWebhookFixture(f, context), newDenyConfigMapWebhookFixture(f, context), // Server cannot talk to this webhook, so it always fails. // Because this webhook is configured fail-open, request should be admitted after the call fails. failOpenHook,"} {"_id":"doc-en-kubernetes-063911b5d8a9cba0c6605b7fdfedb77a10164731f5f961c646b3086932e9638a","title":"","text":"ginkgo.By(\"Registering the mutating configmap webhook via the AdmissionRegistration API\") namespace := f.Namespace.Name sideEffectsNone := admissionregistrationv1.SideEffectClassNone _, err := createMutatingWebhookConfiguration(f, &admissionregistrationv1.MutatingWebhookConfiguration{ ObjectMeta: metav1.ObjectMeta{ Name: configName, }, Webhooks: []admissionregistrationv1.MutatingWebhook{ { Name: \"adding-configmap-data-stage-1.k8s.io\", Rules: []admissionregistrationv1.RuleWithOperations{{ Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create}, Rule: admissionregistrationv1.Rule{ APIGroups: []string{\"\"}, APIVersions: []string{\"v1\"}, Resources: []string{\"configmaps\"}, }, }}, ClientConfig: admissionregistrationv1.WebhookClientConfig{ Service: &admissionregistrationv1.ServiceReference{ Namespace: namespace, Name: serviceName, Path: strPtr(\"/mutating-configmaps\"), Port: pointer.Int32Ptr(servicePort), }, CABundle: context.signingCert, }, SideEffects: &sideEffectsNone, AdmissionReviewVersions: []string{\"v1beta1\"}, // Scope the 
webhook to just this namespace NamespaceSelector: &metav1.LabelSelector{ MatchLabels: map[string]string{f.UniqueName: \"true\"}, }, }, { Name: \"adding-configmap-data-stage-2.k8s.io\", Rules: []admissionregistrationv1.RuleWithOperations{{ Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create}, Rule: admissionregistrationv1.Rule{ APIGroups: []string{\"\"}, APIVersions: []string{\"v1\"}, Resources: []string{\"configmaps\"}, }, }}, ClientConfig: admissionregistrationv1.WebhookClientConfig{ Service: &admissionregistrationv1.ServiceReference{ Namespace: namespace, Name: serviceName, Path: strPtr(\"/mutating-configmaps\"), Port: pointer.Int32Ptr(servicePort), }, CABundle: context.signingCert, }, SideEffects: &sideEffectsNone, AdmissionReviewVersions: []string{\"v1beta1\"}, // Scope the webhook to just this namespace NamespaceSelector: &metav1.LabelSelector{ MatchLabels: map[string]string{f.UniqueName: \"true\"}, }, }, newMutateConfigMapWebhookFixture(f, context, 1), newMutateConfigMapWebhookFixture(f, context, 2), }, }) framework.ExpectNoError(err, \"registering mutating webhook config %s with namespace %s\", configName, namespace)"} {"_id":"doc-en-kubernetes-ecbc37ada3b8e35d7e239afefb030a9d6d5d8925915ef70bb0662247b6e153e2","title":"","text":"} func nonCompliantConfigMap(f *framework.Framework) *v1.ConfigMap { return namedNonCompliantConfigMap(disallowedConfigMapName, f) } func namedNonCompliantConfigMap(name string, f *framework.Framework) *v1.ConfigMap { return &v1.ConfigMap{ ObjectMeta: metav1.ObjectMeta{ Name: disallowedConfigMapName, Name: name, }, Data: map[string]string{ \"webhook-e2e-test\": \"webhook-disallow\","} {"_id":"doc-en-kubernetes-f96dccccfbfcd785aa192ed156ec9f4df5fa381770f852a50ecfcd3004fdd3e6","title":"","text":"} func toBeMutatedConfigMap(f *framework.Framework) *v1.ConfigMap { return namedToBeMutatedConfigMap(\"to-be-mutated\", f) } func namedToBeMutatedConfigMap(name string, f *framework.Framework) *v1.ConfigMap { 
return &v1.ConfigMap{ ObjectMeta: metav1.ObjectMeta{ Name: \"to-be-mutated\", Name: name, }, Data: map[string]string{ \"mutation-start\": \"yes\","} {"_id":"doc-en-kubernetes-d4d8b822c7d52a09d9e4420481990ec7067e329d0f1a86dab19de94b9727d7e3","title":"","text":"} return f.ClientSet.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(config) } func newDenyPodWebhookFixture(f *framework.Framework, context *certContext) admissionregistrationv1.ValidatingWebhook { sideEffectsNone := admissionregistrationv1.SideEffectClassNone return admissionregistrationv1.ValidatingWebhook{ Name: \"deny-unwanted-pod-container-name-and-label.k8s.io\", Rules: []admissionregistrationv1.RuleWithOperations{{ Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create}, Rule: admissionregistrationv1.Rule{ APIGroups: []string{\"\"}, APIVersions: []string{\"v1\"}, Resources: []string{\"pods\"}, }, }}, ClientConfig: admissionregistrationv1.WebhookClientConfig{ Service: &admissionregistrationv1.ServiceReference{ Namespace: f.Namespace.Name, Name: serviceName, Path: strPtr(\"/pods\"), Port: pointer.Int32Ptr(servicePort), }, CABundle: context.signingCert, }, SideEffects: &sideEffectsNone, AdmissionReviewVersions: []string{\"v1beta1\"}, // Scope the webhook to just this namespace NamespaceSelector: &metav1.LabelSelector{ MatchLabels: map[string]string{f.UniqueName: \"true\"}, }, } } func newDenyConfigMapWebhookFixture(f *framework.Framework, context *certContext) admissionregistrationv1.ValidatingWebhook { sideEffectsNone := admissionregistrationv1.SideEffectClassNone return admissionregistrationv1.ValidatingWebhook{ Name: \"deny-unwanted-configmap-data.k8s.io\", Rules: []admissionregistrationv1.RuleWithOperations{{ Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create, admissionregistrationv1.Update, admissionregistrationv1.Delete}, Rule: admissionregistrationv1.Rule{ APIGroups: []string{\"\"}, APIVersions: []string{\"v1\"}, 
Resources: []string{\"configmaps\"}, }, }}, // The webhook skips the namespace that has label \"skip-webhook-admission\":\"yes\" NamespaceSelector: &metav1.LabelSelector{ MatchLabels: map[string]string{f.UniqueName: \"true\"}, MatchExpressions: []metav1.LabelSelectorRequirement{ { Key: skipNamespaceLabelKey, Operator: metav1.LabelSelectorOpNotIn, Values: []string{skipNamespaceLabelValue}, }, }, }, ClientConfig: admissionregistrationv1.WebhookClientConfig{ Service: &admissionregistrationv1.ServiceReference{ Namespace: f.Namespace.Name, Name: serviceName, Path: strPtr(\"/configmaps\"), Port: pointer.Int32Ptr(servicePort), }, CABundle: context.signingCert, }, SideEffects: &sideEffectsNone, AdmissionReviewVersions: []string{\"v1beta1\"}, } } func newMutateConfigMapWebhookFixture(f *framework.Framework, context *certContext, stage int) admissionregistrationv1.MutatingWebhook { sideEffectsNone := admissionregistrationv1.SideEffectClassNone return admissionregistrationv1.MutatingWebhook{ Name: fmt.Sprintf(\"adding-configmap-data-stage-%d.k8s.io\", stage), Rules: []admissionregistrationv1.RuleWithOperations{{ Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create}, Rule: admissionregistrationv1.Rule{ APIGroups: []string{\"\"}, APIVersions: []string{\"v1\"}, Resources: []string{\"configmaps\"}, }, }}, ClientConfig: admissionregistrationv1.WebhookClientConfig{ Service: &admissionregistrationv1.ServiceReference{ Namespace: f.Namespace.Name, Name: serviceName, Path: strPtr(\"/mutating-configmaps\"), Port: pointer.Int32Ptr(servicePort), }, CABundle: context.signingCert, }, SideEffects: &sideEffectsNone, AdmissionReviewVersions: []string{\"v1beta1\"}, // Scope the webhook to just this namespace NamespaceSelector: &metav1.LabelSelector{ MatchLabels: map[string]string{f.UniqueName: \"true\"}, }, } } "} {"_id":"doc-en-kubernetes-d345479daea5ef8fb6d3903341be39c6ee0c697d5287b15837c4e20ff123c17a","title":"","text":"done for var in 
\"${__kubectl_override_flag_list[@]##*-}\"; do if eval \"test -n \"$${var}\"\"; then eval \"echo ${${var}}\" eval \"echo -n ${${var}}' '\" fi done }"} {"_id":"doc-en-kubernetes-5301d12ecb5793f55a562462ee27143bb098ec062d51b1de18bc0c08318d3493","title":"","text":"\"crypto/x509\" \"fmt\" \"io/ioutil\" \"strings\" \"github.com/Azure/go-autorest/autorest/adal\" \"github.com/Azure/go-autorest/autorest/azure\""} {"_id":"doc-en-kubernetes-1081b2353ac78799b817223fb8453e79d2927d69ac2960d24721ad9c2fadf582","title":"","text":"var ( // ErrorNoAuth indicates that no credentials are provided. ErrorNoAuth = fmt.Errorf(\"no credentials provided for Azure cloud provider\") // ADFSIdentitySystem indicates value of tenantId for ADFS on Azure Stack. ADFSIdentitySystem = \"ADFS\" ) // AzureAuthConfig holds auth related part of cloud config"} {"_id":"doc-en-kubernetes-db69d787daff1c2d141c6a8e729d15d6d71de7519d298287e840e1cfb9f3a9cf","title":"","text":"UserAssignedIdentityID string `json:\"userAssignedIdentityID,omitempty\" yaml:\"userAssignedIdentityID,omitempty\"` // The ID of the Azure Subscription that the cluster is deployed in SubscriptionID string `json:\"subscriptionId,omitempty\" yaml:\"subscriptionId,omitempty\"` // Identity system value for the deployment. This gets populate for Azure Stack case. 
IdentitySystem string `json:\"identitySystem,omitempty\" yaml:\"identitySystem,omitempty\"` } // GetServicePrincipalToken creates a new service principal token based on the configuration func GetServicePrincipalToken(config *AzureAuthConfig, env *azure.Environment) (*adal.ServicePrincipalToken, error) { var tenantID string if strings.EqualFold(config.IdentitySystem, ADFSIdentitySystem) { tenantID = \"adfs\" } else { tenantID = config.TenantID } if config.UseManagedIdentityExtension { klog.V(2).Infoln(\"azure: using managed identity extension to retrieve access token\") msiEndpoint, err := adal.GetMSIVMEndpoint()"} {"_id":"doc-en-kubernetes-717aeb1e2766b8bc5e10fa126010f777c51f9d1b18919e5007ac682678a14964","title":"","text":"env.ServiceManagementEndpoint) } oauthConfig, err := adal.NewOAuthConfig(env.ActiveDirectoryEndpoint, config.TenantID) oauthConfig, err := adal.NewOAuthConfig(env.ActiveDirectoryEndpoint, tenantID) if err != nil { return nil, fmt.Errorf(\"creating the OAuth config: %v\", err) }"} {"_id":"doc-en-kubernetes-91c2b75998672acc8f1ddce016ff53528397d0f5736d230ebc9f7708c4f5fc9e","title":"","text":"\"//staging/src/k8s.io/client-go/tools/watch:go_default_library\", \"//test/e2e/framework:go_default_library\", \"//test/e2e/framework/events:go_default_library\", \"//test/e2e/framework/kubelet:go_default_library\", \"//test/e2e/framework/network:go_default_library\", \"//test/e2e/framework/node:go_default_library\", \"//test/e2e/framework/pod:go_default_library\","} {"_id":"doc-en-kubernetes-ec10b2d0f17744380055a9d71c94ff91d45c844d8d7a20607b36986774f1969f","title":"","text":"podutil \"k8s.io/kubernetes/pkg/api/v1/pod\" \"k8s.io/kubernetes/pkg/kubelet\" \"k8s.io/kubernetes/test/e2e/framework\" e2ekubelet \"k8s.io/kubernetes/test/e2e/framework/kubelet\" e2epod \"k8s.io/kubernetes/test/e2e/framework/pod\" e2ewebsocket \"k8s.io/kubernetes/test/e2e/framework/websocket\" imageutils \"k8s.io/kubernetes/test/utils/image\""} 
{"_id":"doc-en-kubernetes-a2c6d6bb057469475f10372f729cce7ed770b791599f84bd8a69fa237636dd3e","title":"","text":"err = podClient.Delete(context.TODO(), pod.Name, *metav1.NewDeleteOptions(30)) framework.ExpectNoError(err, \"failed to delete pod\") ginkgo.By(\"verifying the kubelet observed the termination notice\") err = wait.Poll(time.Second*5, time.Second*30, func() (bool, error) { podList, err := e2ekubelet.GetKubeletPods(f.ClientSet, pod.Spec.NodeName) if err != nil { framework.Logf(\"Unable to retrieve kubelet pods for node %v: %v\", pod.Spec.NodeName, err) return false, nil } for _, kubeletPod := range podList.Items { if pod.Name != kubeletPod.Name { continue } if kubeletPod.ObjectMeta.DeletionTimestamp == nil { framework.Logf(\"deletion has not yet been observed\") return false, nil } return true, nil } framework.Logf(\"no pod exists with the name we were looking for, assuming the termination request was observed and completed\") return true, nil }) framework.ExpectNoError(err, \"kubelet never observed the termination notice\") ginkgo.By(\"verifying pod deletion was observed\") deleted := false var lastPod *v1.Pod"} {"_id":"doc-en-kubernetes-bae8b1d7bdb7855b749998d9aa73199752abc1027d54aeefabc87435a1ca47a3","title":"","text":"\"//test/e2e/framework:go_default_library\", \"//test/e2e/framework/gpu:go_default_library\", \"//test/e2e/framework/job:go_default_library\", \"//test/e2e/framework/kubelet:go_default_library\", \"//test/e2e/framework/node:go_default_library\", \"//test/e2e/framework/pod:go_default_library\", \"//test/e2e/framework/providers/gce:go_default_library\","} {"_id":"doc-en-kubernetes-8c6db7a01aebadc43034727366944bbfbe84e138f874f1db5aba2f6acb50568b","title":"","text":"clientset \"k8s.io/client-go/kubernetes\" podutil \"k8s.io/kubernetes/pkg/api/v1/pod\" \"k8s.io/kubernetes/test/e2e/framework\" e2ekubelet \"k8s.io/kubernetes/test/e2e/framework/kubelet\" e2enode \"k8s.io/kubernetes/test/e2e/framework/node\" e2epod 
\"k8s.io/kubernetes/test/e2e/framework/pod\" e2erc \"k8s.io/kubernetes/test/e2e/framework/rc\""} {"_id":"doc-en-kubernetes-27073f64cdc56dde473df22411206fa917d87e59f7cfbd224f6d9c332cb97a57","title":"","text":"framework.ExpectNoError(err) for _, node := range nodeList.Items { framework.Logf(\"nLogging pods the kubelet thinks is on node %v before test\", node.Name) printAllKubeletPods(cs, node.Name) framework.Logf(\"nLogging pods the apiserver thinks is on node %v before test\", node.Name) printAllPodsOnNode(cs, node.Name) } })"} {"_id":"doc-en-kubernetes-6b6b85d207d3c29a16d06b87ca27e5ac030bf7beecab3e8e2a259bc38bbae59b","title":"","text":"}) }) // printAllKubeletPods outputs status of all kubelet pods into log. func printAllKubeletPods(c clientset.Interface, nodeName string) { podList, err := e2ekubelet.GetKubeletPods(c, nodeName) // printAllPodsOnNode outputs status of all kubelet pods into log. func printAllPodsOnNode(c clientset.Interface, nodeName string) { podList, err := c.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{FieldSelector: \"spec.nodeName=\" + nodeName}) if err != nil { framework.Logf(\"Unable to retrieve kubelet pods for node %v: %v\", nodeName, err) framework.Logf(\"Unable to retrieve pods for node %v: %v\", nodeName, err) return } for _, p := range podList.Items {"} {"_id":"doc-en-kubernetes-223148f8829a9050c570356fcdca47c3c208e100d426d2ad6c341d34d235fec7","title":"","text":") const ( dockerProcessName = \"docker\" dockerProcessName = \"dockerd\" dockerPidFile = \"/var/run/docker.pid\" containerdProcessName = \"docker-containerd\" containerdPidFile = \"/run/docker/libcontainerd/docker-containerd.pid\""} {"_id":"doc-en-kubernetes-2f6ca3ee32b4f31119f96bad679aa789db5ffd5e5ce4bcace37ff58c66f1d3f7","title":"","text":"result := c.client.clientBase.Get(). AbsPath(path). Namespace(c.namespace). NamespaceIfScoped(c.namespace, c.namespace != \"\"). Resource(gvr.Resource). Name(name). 
SubResource(\"scale\")."} {"_id":"doc-en-kubernetes-67b71e496c5b99b911220ec5aa173fdafd20e31acff4461b186b750c145a1504","title":"","text":"result := c.client.clientBase.Put(). AbsPath(path). Namespace(c.namespace). NamespaceIfScoped(c.namespace, c.namespace != \"\"). Resource(gvr.Resource). Name(scale.Name). SubResource(\"scale\")."} {"_id":"doc-en-kubernetes-ef1ab425471dbd7efd689f8c7f7c063ee4c2f3d1fa28a7ef8c1b4b9d1e8cb519","title":"","text":"groupVersion := gvr.GroupVersion() result := c.client.clientBase.Patch(pt). AbsPath(c.client.apiPathFor(groupVersion)). Namespace(c.namespace). NamespaceIfScoped(c.namespace, c.namespace != \"\"). Resource(gvr.Resource). Name(name). SubResource(\"scale\")."} {"_id":"doc-en-kubernetes-f727f46b58c8f33260668ef9ce374816b504cc56fc8d8079f08eee1c290faa31","title":"","text":") // ScalesGetter can produce a ScaleInterface // for a particular namespace. type ScalesGetter interface { // Scales produces a ScaleInterface for a particular namespace. // Set namespace to the empty string for non-namespaced resources. 
Scales(namespace string) ScaleInterface }"} {"_id":"doc-en-kubernetes-da53197942f8e1d3aa406e4ec8e05b4e57562dedfc18b94f1c6fee7b996faa74","title":"","text":"Containers: []v1.Container{ { Name: \"nfs-provisioner\", Image: \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.0-k8s1.12\", Image: \"quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2\", SecurityContext: &v1.SecurityContext{ Capabilities: &v1.Capabilities{ Add: []v1.Capability{\"DAC_READ_SEARCH\"},"} {"_id":"doc-en-kubernetes-de6a983284ca21c3857998246dbeb8dc1511dd2bfa3517570f26435b49b00c7d","title":"","text":"# Print the supported API Resources with more information kubectl api-resources -o wide # Print the supported API Resources sorted by a column kubectl api-resources --sort-by=name # Print the supported namespaced resources kubectl api-resources --namespaced=true"} {"_id":"doc-en-kubernetes-23079a0d960b329ff93e08a12e595e91fe957458555e25c53d368f19a9cd8199","title":"","text":"// As new fields are added, add them here instead of referencing the cmd.Flags() type APIResourceOptions struct { Output string SortBy string APIGroup string Namespaced bool Verbs []string"} {"_id":"doc-en-kubernetes-ed6cff1b472171be0a7869d6243f4a9382b45ae145cf7c634e0241df0dceb5ab","title":"","text":"cmd.Flags().StringVar(&o.APIGroup, \"api-group\", o.APIGroup, \"Limit to resources in the specified API group.\") cmd.Flags().BoolVar(&o.Namespaced, \"namespaced\", o.Namespaced, \"If false, non-namespaced resources will be returned, otherwise returning namespaced resources by default.\") cmd.Flags().StringSliceVar(&o.Verbs, \"verbs\", o.Verbs, \"Limit to resources that support the specified verbs.\") cmd.Flags().StringVar(&o.SortBy, \"sort-by\", o.SortBy, \"If non-empty, sort nodes list using specified field. 
The field can be either 'name' or 'kind'.\") cmd.Flags().BoolVar(&o.Cached, \"cached\", o.Cached, \"Use the cached list of resources if available.\") return cmd }"} {"_id":"doc-en-kubernetes-9738223e353c36ea9c88e585693671ab37857ce8a8db3063c57a74744237d6ad","title":"","text":"if !supportedOutputTypes.Has(o.Output) { return fmt.Errorf(\"--output %v is not available\", o.Output) } supportedSortTypes := sets.NewString(\"\", \"name\", \"kind\") if len(o.SortBy) > 0 { if !supportedSortTypes.Has(o.SortBy) { return fmt.Errorf(\"--sort-by accepts only name or kind\") } } return nil }"} {"_id":"doc-en-kubernetes-501720d154d1297c03a5518f84bbfb564baba0f1e5f667c0fc3ff3e9d53c8e94","title":"","text":"} } sort.Stable(sortableGroupResource(resources)) sort.Stable(sortableResource{resources, o.SortBy}) for _, r := range resources { switch o.Output { case \"name\":"} {"_id":"doc-en-kubernetes-df497ea4eb12cc0f35f6113a363f1a949e1138f5035dd9e40e070f0de9bd699d","title":"","text":"return err } type sortableGroupResource []groupResource type sortableResource struct { resources []groupResource sortBy string } func (s sortableGroupResource) Len() int { return len(s) } func (s sortableGroupResource) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s sortableGroupResource) Less(i, j int) bool { ret := strings.Compare(s[i].APIGroup, s[j].APIGroup) func (s sortableResource) Len() int { return len(s.resources) } func (s sortableResource) Swap(i, j int) { s.resources[i], s.resources[j] = s.resources[j], s.resources[i] } func (s sortableResource) Less(i, j int) bool { ret := strings.Compare(s.compareValues(i, j)) if ret > 0 { return false } else if ret == 0 { return strings.Compare(s[i].APIResource.Name, s[j].APIResource.Name) < 0 return strings.Compare(s.resources[i].APIResource.Name, s.resources[j].APIResource.Name) < 0 } return true } func (s sortableResource) compareValues(i, j int) (string, string) { switch s.sortBy { case \"name\": return s.resources[i].APIResource.Name, 
s.resources[j].APIResource.Name case \"kind\": return s.resources[i].APIResource.Kind, s.resources[j].APIResource.Kind } return s.resources[i].APIGroup, s.resources[j].APIGroup } "} {"_id":"doc-en-kubernetes-a2355c65d4b3b0f45ea0b0766ff77c39c46ad975400f4448afa00af51b2e0914","title":"","text":"import ( \"fmt\" \"path\" \"time\" \"github.com/onsi/ginkgo\" \"k8s.io/api/core/v1\" v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/util/uuid\" \"k8s.io/kubernetes/test/e2e/framework\""} {"_id":"doc-en-kubernetes-ca25ca0ec56892838190e3fa62c05bd5ffaa3fea4a2c31b681f8b3e6ac72c273","title":"","text":"framework.ExpectNoError(err, \"failed to get pod %s\", pod.Name) ginkgo.By(\"Reading file content from the nginx-container\") resultString, err = framework.LookForStringInFile(f.Namespace.Name, pod.Name, busyBoxMainContainerName, busyBoxMainVolumeFilePath, message, 30*time.Second) framework.ExpectNoError(err, \"failed to match expected string %s with %s\", message, resultString) result := f.ExecShellInContainer(pod.Name, busyBoxMainContainerName, fmt.Sprintf(\"cat %s\", busyBoxMainVolumeFilePath)) framework.ExpectEqual(result, message, \"failed to match expected string %s with %s\", message, resultString) }) })"} {"_id":"doc-en-kubernetes-6269d544de6cc10dac81751d18279f9bcc8aab2e8788e78a754268e0ccf49ec1","title":"","text":"- [Metrics Changes](#metrics-changes) - [Added metrics](#added-metrics) - [Removed metrics](#removed-metrics) - [Depreciated/changed metrics](#depreciatedchanged-metrics) - [Deprecated/changed metrics](#deprecatedchanged-metrics) - [Notable Features](#notable-features) - [Beta](#beta) - [Alpha](#alpha)"} {"_id":"doc-en-kubernetes-9a0b96ebb0144138b512c972906f53e88688c4c372ce2b5e0bc138d3c7015d83","title":"","text":"- Removed cadvisor metric labels `pod_name` and `container_name` to match instrumentation guidelines. Any Prometheus queries that match `pod_name` and `container_name` labels (e.g. 
cadvisor or kubelet probe metrics) must be updated to use `pod` and `container` instead. ([#80376](https://github.com/kubernetes/kubernetes/pull/80376), [@ehashman](https://github.com/ehashman)) ### Depreciated/changed metrics ### Deprecated/changed metrics - kube-controller-manager and cloud-controller-manager metrics are now marked as with the ALPHA stability level. ([#81624](https://github.com/kubernetes/kubernetes/pull/81624), [@logicalhan](https://github.com/logicalhan)) - kube-proxy metrics are now marked as with the ALPHA stability level. ([#81626](https://github.com/kubernetes/kubernetes/pull/81626), [@logicalhan](https://github.com/logicalhan))"} {"_id":"doc-en-kubernetes-6d4b8fac3241b8f0a5a599f2b25836cc8066f8b89525c1213365f6e86bc82891","title":"","text":"embed = [\":go_default_library\"], deps = [ \"//staging/src/k8s.io/apimachinery/pkg/util/sets:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/endpoints/metrics:go_default_library\", \"//staging/src/k8s.io/component-base/metrics/legacyregistry:go_default_library\", \"//staging/src/k8s.io/component-base/metrics/testutil:go_default_library\", ], )"} {"_id":"doc-en-kubernetes-45114805302a162ed45756b78d6af76d8f4e24b96350f86fd5495c0d7e20ead9","title":"","text":"deps = [ \"//staging/src/k8s.io/apimachinery/pkg/util/sets:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/wait:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/endpoints/metrics:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/server/httplog:go_default_library\", \"//vendor/k8s.io/klog:go_default_library\", ],"} {"_id":"doc-en-kubernetes-c485fd806bea4e9814a0c49b01a1951237e239b0bad447731e299d024a2b6e7f","title":"","text":"\"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/apiserver/pkg/endpoints/metrics\" \"k8s.io/apiserver/pkg/server/httplog\" \"k8s.io/klog\" )"} 
{"_id":"doc-en-kubernetes-69cde6a70cbe6cff70ed51d4439584a01a5330de7ac1639e3c044610e2f36d25","title":"","text":"klog.V(5).Infof(\"Installing health checkers for (%v): %v\", path, formatQuoted(checkerNames(checks...)...)) mux.Handle(path, handleRootHealthz(checks...)) mux.Handle(path, metrics.InstrumentHandlerFunc(\"GET\", /* group = */ \"\", /* version = */ \"\", /* resource = */ \"\", /* subresource = */ path, /* scope = */ \"\", /* component = */ \"\", handleRootHealthz(checks...))) for _, check := range checks { mux.Handle(fmt.Sprintf(\"%s/%v\", path, check.Name()), adaptCheckToHandler(check.Check)) }"} {"_id":"doc-en-kubernetes-fc77ba563ad84f6f588971abcd7e73d0631ff948013e4ab91e84dd81e0a6f920","title":"","text":"\"net/http/httptest\" \"net/url\" \"reflect\" \"strings\" \"testing\" \"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apiserver/pkg/endpoints/metrics\" \"k8s.io/component-base/metrics/legacyregistry\" \"k8s.io/component-base/metrics/testutil\" ) func TestInstallHandler(t *testing.T) {"} {"_id":"doc-en-kubernetes-7d659a7870c402c05aeb8187f442d7e5aa862ac06f08b9804b5b453aff606ef1","title":"","text":"} } func TestMetrics(t *testing.T) { mux := http.NewServeMux() InstallHandler(mux) InstallLivezHandler(mux) InstallReadyzHandler(mux) metrics.Register() metrics.Reset() paths := []string{\"/healthz\", \"/livez\", \"/readyz\"} for _, path := range paths { req, err := http.NewRequest(\"GET\", fmt.Sprintf(\"http://example.com%s\", path), nil) if err != nil { t.Errorf(\"%v\", err) } mux.ServeHTTP(httptest.NewRecorder(), req) } expected := strings.NewReader(` # HELP apiserver_request_total [ALPHA] Counter of apiserver requests broken out for each verb, dry run value, group, version, resource, scope, component, client, and HTTP response contentType and code. 
# TYPE apiserver_request_total counter apiserver_request_total{client=\"unknown\",code=\"200\",component=\"\",contentType=\"text/plain; charset=utf-8\",dry_run=\"\",group=\"\",resource=\"\",scope=\"\",subresource=\"/healthz\",verb=\"GET\",version=\"\"} 1 apiserver_request_total{client=\"unknown\",code=\"200\",component=\"\",contentType=\"text/plain; charset=utf-8\",dry_run=\"\",group=\"\",resource=\"\",scope=\"\",subresource=\"/livez\",verb=\"GET\",version=\"\"} 1 apiserver_request_total{client=\"unknown\",code=\"200\",component=\"\",contentType=\"text/plain; charset=utf-8\",dry_run=\"\",group=\"\",resource=\"\",scope=\"\",subresource=\"/readyz\",verb=\"GET\",version=\"\"} 1 `) if err := testutil.GatherAndCompare(legacyregistry.DefaultGatherer, expected, \"apiserver_request_total\"); err != nil { t.Error(err) } } func createGetRequestWithUrl(rawUrlString string) *http.Request { url, _ := url.Parse(rawUrlString) return &http.Request{"} {"_id":"doc-en-kubernetes-670b673ef85999f2ba7f5e05ac69cdacf3dbe8e315ff89c46dffc4836fd0bf09","title":"","text":"return err } return wait.Poll(time.Second, b.bindTimeout, func() (bool, error) { err = wait.Poll(time.Second, b.bindTimeout, func() (bool, error) { b, err := b.checkBindings(assumedPod, bindings, claimsToProvision) return b, err }) if err != nil { pvcName := \"\" if len(claimsToProvision) > 0 { pvcName = claimsToProvision[0].Name } return fmt.Errorf(\"Failed to bind volumes: provisioning failed for PVC %q: %v\", pvcName, err) } return nil } func getPodName(pod *v1.Pod) string {"} {"_id":"doc-en-kubernetes-5d5563484c7f345d3e867772fe5336b8c26db5a760b0e59335ae2a48c719ca88","title":"","text":"} } // If the kubelet config controller is available, and dynamic config is enabled, start the config and status sync loops if utilfeature.DefaultFeatureGate.Enabled(features.DynamicKubeletConfig) && len(s.DynamicConfigDir.Value()) > 0 && kubeDeps.KubeletConfigController != nil && !standaloneMode && !s.RunOnce { if err := 
kubeDeps.KubeletConfigController.StartSync(kubeDeps.KubeClient, kubeDeps.EventClient, string(nodeName)); err != nil { return err } } if kubeDeps.Auth == nil { auth, err := BuildAuth(nodeName, kubeDeps.KubeClient, s.KubeletConfiguration) if err != nil {"} {"_id":"doc-en-kubernetes-4ec5eeb72e899bfc3aa2b3a65ec438a4af4716c5ff6d5c0a9b8f73149aff7efa","title":"","text":"return err } // If the kubelet config controller is available, and dynamic config is enabled, start the config and status sync loops if utilfeature.DefaultFeatureGate.Enabled(features.DynamicKubeletConfig) && len(s.DynamicConfigDir.Value()) > 0 && kubeDeps.KubeletConfigController != nil && !standaloneMode && !s.RunOnce { if err := kubeDeps.KubeletConfigController.StartSync(kubeDeps.KubeClient, kubeDeps.EventClient, string(nodeName)); err != nil { return err } } if s.HealthzPort > 0 { mux := http.NewServeMux() healthz.InstallHandler(mux)"} {"_id":"doc-en-kubernetes-79bcc461bc8d80201179cb07876ba1d6533924ce2fc4644ea6cb250d6c006bdc","title":"","text":"StabilityLevel: metrics.ALPHA, }, ) PreemptionVictims = metrics.NewGauge( &metrics.GaugeOpts{ Subsystem: SchedulerSubsystem, Name: \"pod_preemption_victims\", Help: \"Number of selected preemption victims\", PreemptionVictims = metrics.NewHistogram( &metrics.HistogramOpts{ Subsystem: SchedulerSubsystem, Name: \"pod_preemption_victims\", Help: \"Number of selected preemption victims\", // we think #victims>50 is pretty rare, therefore [50, +Inf) is considered a single bucket. 
Buckets: metrics.LinearBuckets(5, 5, 10), StabilityLevel: metrics.ALPHA, }) PreemptionAttempts = metrics.NewCounter("} {"_id":"doc-en-kubernetes-446c0fe158a7ca6d4ebe84bb45be7b738d139e3cf5ad857da2ed3f68f90812c3","title":"","text":"sched.Recorder.Eventf(victim, preemptor, v1.EventTypeNormal, \"Preempted\", \"Preempting\", \"Preempted by %v/%v on node %v\", preemptor.Namespace, preemptor.Name, nodeName) } metrics.PreemptionVictims.Set(float64(len(victims))) metrics.PreemptionVictims.Observe(float64(len(victims))) } // Clearing nominated pods should happen outside of \"if node != nil\". Node could // be nil when a pod with nominated node name is eligible to preempt again,"} {"_id":"doc-en-kubernetes-072defee6a578fe8ac77148940f4a60b6ebdbd3da59f10a285e0948f3c98abf9","title":"","text":"func ValidatePolicy(policy schedulerapi.Policy) error { var validationErrors []error priorities := make(map[string]schedulerapi.PriorityPolicy, len(policy.Priorities)) for _, priority := range policy.Priorities { if priority.Weight <= 0 || priority.Weight >= framework.MaxWeight { validationErrors = append(validationErrors, fmt.Errorf(\"Priority %s should have a positive weight applied to it or it has overflown\", priority.Name)) } validationErrors = append(validationErrors, validatePriorityRedeclared(priorities, priority)) } predicates := make(map[string]schedulerapi.PredicatePolicy, len(policy.Predicates)) for _, predicate := range policy.Predicates { validationErrors = append(validationErrors, validatePredicateRedeclared(predicates, predicate)) } binders := 0"} {"_id":"doc-en-kubernetes-2abc58f0a2c67744c2d5497413448cf556e1656ad4757c063e3b0d5a395b6b71","title":"","text":"return utilerrors.NewAggregate(validationErrors) } // validatePriorityRedeclared checks if any custom priorities have been declared multiple times in the policy config // by examining the specified priority arguments func validatePriorityRedeclared(priorities map[string]schedulerapi.PriorityPolicy, priority 
schedulerapi.PriorityPolicy) error { var priorityType string if priority.Argument != nil { if priority.Argument.LabelPreference != nil { priorityType = \"LabelPreference\" } else if priority.Argument.RequestedToCapacityRatioArguments != nil { priorityType = \"RequestedToCapacityRatioArguments\" } else if priority.Argument.ServiceAntiAffinity != nil { priorityType = \"ServiceAntiAffinity\" } else { return fmt.Errorf(\"No priority arguments set for priority %s\", priority.Name) } if existing, alreadyDeclared := priorities[priorityType]; alreadyDeclared { return fmt.Errorf(\"Priority '%s' redeclares custom priority '%s', from:'%s'\", priority.Name, priorityType, existing.Name) } priorities[priorityType] = priority } return nil } // validatePredicateRedeclared checks if any custom predicates have been declared multiple times in the policy config // by examining the specified predicate arguments func validatePredicateRedeclared(predicates map[string]schedulerapi.PredicatePolicy, predicate schedulerapi.PredicatePolicy) error { var predicateType string if predicate.Argument != nil { if predicate.Argument.LabelsPresence != nil { predicateType = \"LabelsPresence\" } else if predicate.Argument.ServiceAffinity != nil { predicateType = \"ServiceAffinity\" } else { return fmt.Errorf(\"No priority arguments set for priority %s\", predicate.Name) } if existing, alreadyDeclared := predicates[predicateType]; alreadyDeclared { return fmt.Errorf(\"Predicate '%s' redeclares custom predicate '%s', from:'%s'\", predicate.Name, predicateType, existing.Name) } predicates[predicateType] = predicate } return nil } // validateExtendedResourceName checks whether the specified name is a valid // extended resource name. 
func validateExtendedResourceName(name v1.ResourceName) []error {"} {"_id":"doc-en-kubernetes-0602033de7d9444ee8d3d0365086065def9d3d7eb8db28b32c88dad97462363c","title":"","text":"}}, expected: errors.New(\"kubernetes.io/foo is an invalid extended resource name\"), }, { name: \"invalid redeclared custom predicate\", policy: api.Policy{ Predicates: []api.PredicatePolicy{ {Name: \"customPredicate1\", Argument: &api.PredicateArgument{ServiceAffinity: &api.ServiceAffinity{Labels: []string{\"label1\"}}}}, {Name: \"customPredicate2\", Argument: &api.PredicateArgument{ServiceAffinity: &api.ServiceAffinity{Labels: []string{\"label2\"}}}}, }, }, expected: errors.New(\"Predicate 'customPredicate2' redeclares custom predicate 'ServiceAffinity', from:'customPredicate1'\"), }, { name: \"invalid redeclared custom priority\", policy: api.Policy{ Priorities: []api.PriorityPolicy{ {Name: \"customPriority1\", Weight: 1, Argument: &api.PriorityArgument{ServiceAntiAffinity: &api.ServiceAntiAffinity{Label: \"label1\"}}}, {Name: \"customPriority2\", Weight: 1, Argument: &api.PriorityArgument{ServiceAntiAffinity: &api.ServiceAntiAffinity{Label: \"label2\"}}}, }, }, expected: errors.New(\"Priority 'customPriority2' redeclares custom priority 'ServiceAntiAffinity', from:'customPriority1'\"), }, } for _, test := range tests {"} {"_id":"doc-en-kubernetes-a53151258f592821194517c0be3f5383ce4f4f332a20ddb028132953bb571217","title":"","text":"function wait_node_ready(){ # check the nodes information after kubelet daemon start local nodes_stats=\"${KUBECTL} --kubeconfig '${CERT_DIR}/admin.kubeconfig' get nodes\" local node_name=$KUBELET_HOST local node_name=$HOSTNAME_OVERRIDE local system_node_wait_time=30 local interval_time=2 kube::util::wait_for_success \"$system_node_wait_time\" \"$interval_time\" \"$nodes_stats | grep $node_name\""} {"_id":"doc-en-kubernetes-20d33e0ad15d715bd6c39311b857475ef9d0c660af4ccfe9884a6c1df5e19454","title":"","text":"disablePreemption bool percentageOfNodesToScore 
int32 enableNonPreempting bool lastProcessedNodeIndex int nextStartNodeIndex int } // snapshot snapshots scheduler cache and node infos for all fit and priority"} {"_id":"doc-en-kubernetes-0272028c35d14e49c7ca92d41fdec1b420b26a999a538465c87890a28771c34c","title":"","text":"checkNode := func(i int) { // We check the nodes starting from where we left off in the previous scheduling cycle, // this is to make sure all nodes have the same chance of being examined across pods. nodeInfo := g.nodeInfoSnapshot.NodeInfoList[(g.lastProcessedNodeIndex+i)%allNodes] nodeInfo := g.nodeInfoSnapshot.NodeInfoList[(g.nextStartNodeIndex+i)%allNodes] fits, failedPredicates, status, err := g.podFitsOnNode( ctx, state,"} {"_id":"doc-en-kubernetes-dac35ff61264ecad9a24c4b7f81deeeceec24a470ac2cb9239ec204aa7373af6","title":"","text":"// are found. workqueue.ParallelizeUntil(ctx, 16, allNodes, checkNode) processedNodes := int(filteredLen) + len(filteredNodesStatuses) + len(failedPredicateMap) g.lastProcessedNodeIndex = (g.lastProcessedNodeIndex + processedNodes) % allNodes g.nextStartNodeIndex = (g.nextStartNodeIndex + processedNodes) % allNodes filtered = filtered[:filteredLen] if err := errCh.ReceiveError(); err != nil {"} {"_id":"doc-en-kubernetes-617a9cc25eb0e814c94d130b5a9b4928ae000298639b2f8909e631b330a1a29a","title":"","text":"} } } func TestFairEvaluationForNodes(t *testing.T) { defer algorithmpredicates.SetPredicatesOrderingDuringTest(order)() predicates := map[string]algorithmpredicates.FitPredicate{\"true\": truePredicate} numAllNodes := 500 nodeNames := make([]string, 0, numAllNodes) for i := 0; i < numAllNodes; i++ { nodeNames = append(nodeNames, strconv.Itoa(i)) } nodes := makeNodeList(nodeNames) g := makeScheduler(predicates, nodes) // To make numAllNodes % nodesToFind != 0 g.percentageOfNodesToScore = 30 nodesToFind := int(g.numFeasibleNodesToFind(int32(numAllNodes))) // Iterating over all nodes more than twice for i := 0; i < 2*(numAllNodes/nodesToFind+1); i++ { nodesThatFit, 
_, _, err := g.findNodesThatFit(context.Background(), framework.NewCycleState(), &v1.Pod{}) if err != nil { t.Errorf(\"unexpected error: %v\", err) } if len(nodesThatFit) != nodesToFind { t.Errorf(\"got %d nodes filtered, want %d\", len(nodesThatFit), nodesToFind) } if g.nextStartNodeIndex != (i+1)*nodesToFind%numAllNodes { t.Errorf(\"got %d lastProcessedNodeIndex, want %d\", g.nextStartNodeIndex, (i+1)*nodesToFind%numAllNodes) } } } "} {"_id":"doc-en-kubernetes-64aed225b31f5566fdcd522d235959dbfa6315616ecabcf577dc3ca95ee1f1d7","title":"","text":"\"//pkg/api/v1/endpoints:go_default_library\", \"//staging/src/k8s.io/api/core/v1:go_default_library\", \"//staging/src/k8s.io/api/discovery/v1alpha1:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/api/equality:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/api/errors:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/runtime:go_default_library\","} {"_id":"doc-en-kubernetes-3a2f39f9abc333fc5d2af805bd7a4774965e2f6b5759e12c20950eedd1dd77f9","title":"","text":"import ( corev1 \"k8s.io/api/core/v1\" discovery \"k8s.io/api/discovery/v1alpha1\" apiequality \"k8s.io/apimachinery/pkg/api/equality\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" corev1client \"k8s.io/client-go/kubernetes/typed/core/v1\""} {"_id":"doc-en-kubernetes-29d3000b1c61d5cc8a6a262939249245b4bbe901dffef43c7587c9b61e54e5e7","title":"","text":"// returned. 
func (adapter *EndpointsAdapter) Create(namespace string, endpoints *corev1.Endpoints) (*corev1.Endpoints, error) { endpoints, err := adapter.endpointClient.Endpoints(namespace).Create(endpoints) if err == nil && adapter.endpointSliceClient != nil { _, err = adapter.ensureEndpointSliceFromEndpoints(namespace, endpoints) if err == nil { err = adapter.EnsureEndpointSliceFromEndpoints(namespace, endpoints) } return endpoints, err }"} {"_id":"doc-en-kubernetes-e80ff961b20b1b5a1e447b9624aca9f22c977f1df75594679a1eb60d05525d27","title":"","text":"// updated. The updated Endpoints object or an error will be returned. func (adapter *EndpointsAdapter) Update(namespace string, endpoints *corev1.Endpoints) (*corev1.Endpoints, error) { endpoints, err := adapter.endpointClient.Endpoints(namespace).Update(endpoints) if err == nil && adapter.endpointSliceClient != nil { _, err = adapter.ensureEndpointSliceFromEndpoints(namespace, endpoints) if err == nil { err = adapter.EnsureEndpointSliceFromEndpoints(namespace, endpoints) } return endpoints, err } // ensureEndpointSliceFromEndpoints accepts a namespace and Endpoints resource // and creates or updates a corresponding EndpointSlice. The EndpointSlice // and/or an error will be returned. func (adapter *EndpointsAdapter) ensureEndpointSliceFromEndpoints(namespace string, endpoints *corev1.Endpoints) (*discovery.EndpointSlice, error) { // EnsureEndpointSliceFromEndpoints accepts a namespace and Endpoints resource // and creates or updates a corresponding EndpointSlice if an endpointSliceClient // exists. An error will be returned if it fails to sync the EndpointSlice. 
func (adapter *EndpointsAdapter) EnsureEndpointSliceFromEndpoints(namespace string, endpoints *corev1.Endpoints) error { if adapter.endpointSliceClient == nil { return nil } endpointSlice := endpointSliceFromEndpoints(endpoints) _, err := adapter.endpointSliceClient.EndpointSlices(namespace).Get(endpointSlice.Name, metav1.GetOptions{}) currentEndpointSlice, err := adapter.endpointSliceClient.EndpointSlices(namespace).Get(endpointSlice.Name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { return adapter.endpointSliceClient.EndpointSlices(namespace).Create(endpointSlice) if _, err = adapter.endpointSliceClient.EndpointSlices(namespace).Create(endpointSlice); errors.IsAlreadyExists(err) { err = nil } } return nil, err return err } if apiequality.Semantic.DeepEqual(currentEndpointSlice.Endpoints, endpointSlice.Endpoints) && apiequality.Semantic.DeepEqual(currentEndpointSlice.Ports, endpointSlice.Ports) && apiequality.Semantic.DeepEqual(currentEndpointSlice.Labels, endpointSlice.Labels) { return nil } return adapter.endpointSliceClient.EndpointSlices(namespace).Update(endpointSlice) _, err = adapter.endpointSliceClient.EndpointSlices(namespace).Update(endpointSlice) return err } // endpointSliceFromEndpoints generates an EndpointSlice from an Endpoints"} {"_id":"doc-en-kubernetes-1569220cdd6645154e4e59ada26c2ee9510c3a953d16557a170bc8354fd53dd0","title":"","text":"Subsets: []corev1.EndpointSubset{subset}, }, epSlice } func TestEndpointsAdapterEnsureEndpointSliceFromEndpoints(t *testing.T) { endpoints1, epSlice1 := generateEndpointsAndSlice(\"foo\", \"testing\", []int{80, 443}, []string{\"10.1.2.3\", \"10.1.2.4\"}) endpoints2, epSlice2 := generateEndpointsAndSlice(\"foo\", \"testing\", []int{80, 443}, []string{\"10.1.2.3\", \"10.1.2.4\", \"10.1.2.5\"}) testCases := map[string]struct { endpointSlicesEnabled bool expectedError error expectedEndpointSlice *discovery.EndpointSlice endpointSlices []*discovery.EndpointSlice namespaceParam string endpointsParam 
*corev1.Endpoints }{ \"existing-endpointslice-no-change\": { endpointSlicesEnabled: true, expectedError: nil, expectedEndpointSlice: epSlice1, endpointSlices: []*discovery.EndpointSlice{epSlice1}, namespaceParam: \"testing\", endpointsParam: endpoints1, }, \"existing-endpointslice-change\": { endpointSlicesEnabled: true, expectedError: nil, expectedEndpointSlice: epSlice2, endpointSlices: []*discovery.EndpointSlice{epSlice1}, namespaceParam: \"testing\", endpointsParam: endpoints2, }, \"missing-endpointslice\": { endpointSlicesEnabled: true, expectedError: nil, expectedEndpointSlice: epSlice1, endpointSlices: []*discovery.EndpointSlice{}, namespaceParam: \"testing\", endpointsParam: endpoints1, }, \"endpointslices-disabled\": { endpointSlicesEnabled: false, expectedError: nil, expectedEndpointSlice: nil, endpointSlices: []*discovery.EndpointSlice{}, namespaceParam: \"testing\", endpointsParam: endpoints1, }, } for name, testCase := range testCases { t.Run(name, func(t *testing.T) { client := fake.NewSimpleClientset() epAdapter := EndpointsAdapter{endpointClient: client.CoreV1()} if testCase.endpointSlicesEnabled { epAdapter.endpointSliceClient = client.DiscoveryV1alpha1() } for _, endpointSlice := range testCase.endpointSlices { _, err := client.DiscoveryV1alpha1().EndpointSlices(endpointSlice.Namespace).Create(endpointSlice) if err != nil { t.Fatalf(\"Error creating EndpointSlice: %v\", err) } } err := epAdapter.EnsureEndpointSliceFromEndpoints(testCase.namespaceParam, testCase.endpointsParam) if !apiequality.Semantic.DeepEqual(testCase.expectedError, err) { t.Errorf(\"Expected error: %v, got: %v\", testCase.expectedError, err) } endpointSlice, err := client.DiscoveryV1alpha1().EndpointSlices(testCase.namespaceParam).Get(testCase.endpointsParam.Name, metav1.GetOptions{}) if err != nil && !errors.IsNotFound(err) { t.Fatalf(\"Error getting Endpoint Slice: %v\", err) } if !apiequality.Semantic.DeepEqual(endpointSlice, testCase.expectedEndpointSlice) { 
t.Errorf(\"Expected Endpoint Slice: %v, got: %v\", testCase.expectedEndpointSlice, endpointSlice) } }) } } "} {"_id":"doc-en-kubernetes-4f066e1ef1219d8400b200e9240cc51649f113dc853472d51a8c46601b646fba","title":"","text":"// Next, we compare the current list of endpoints with the list of master IP keys formatCorrect, ipCorrect, portsCorrect := checkEndpointSubsetFormatWithLease(e, masterIPs, endpointPorts, reconcilePorts) if formatCorrect && ipCorrect && portsCorrect { return nil return r.epAdapter.EnsureEndpointSliceFromEndpoints(corev1.NamespaceDefault, e) } if !formatCorrect {"} {"_id":"doc-en-kubernetes-ad37c38d08e42ab61110b1b4d3870be9b90fe2526bdbdd29f964ad6463465a40","title":"","text":"return err } if ipCorrect && portsCorrect { return nil return r.epAdapter.EnsureEndpointSliceFromEndpoints(metav1.NamespaceDefault, e) } if !ipCorrect { // We *always* add our own IP address."} {"_id":"doc-en-kubernetes-f3c5418cd6fa9e0e73ab047dbd49a31e8cffc1f72b47067856b47942bce3dded","title":"","text":"func (m *podContainerManagerImpl) Destroy(podCgroup CgroupName) error { // Try killing all the processes attached to the pod cgroup if err := m.tryKillingCgroupProcesses(podCgroup); err != nil { klog.V(3).Infof(\"failed to kill all the processes attached to the %v cgroups\", podCgroup) klog.Warningf(\"failed to kill all the processes attached to the %v cgroups\", podCgroup) return fmt.Errorf(\"failed to kill all the processes attached to the %v cgroups : %v\", podCgroup, err) }"} {"_id":"doc-en-kubernetes-b75df5bece3611a0fee15b3caaa28446979609edc16f2d3ac4db541ba87fe508","title":"","text":"ResourceParameters: &ResourceConfig{}, } if err := m.cgroupManager.Destroy(containerConfig); err != nil { klog.Warningf(\"failed to delete cgroup paths for %v : %v\", podCgroup, err) return fmt.Errorf(\"failed to delete cgroup paths for %v : %v\", podCgroup, err) } return nil"} {"_id":"doc-en-kubernetes-ef04b78697ac4420c37ef3b798738cdcdb9117a5bf6d15705f992467744eff05","title":"","text":"// 
TODO(liggitt): drop this once golang json parser limits stack depth (https://github.com/golang/go/issues/31789) if len(p.patchBytes) > 1024*1024 { v := []interface{}{} if err := json.Unmarshal(p.patchBytes, v); err != nil { if err := json.Unmarshal(p.patchBytes, &v); err != nil { return nil, errors.NewBadRequest(fmt.Sprintf(\"error decoding patch: %v\", err)) } }"} {"_id":"doc-en-kubernetes-73dfa9e845bdb53882e727b1cc9cfb99c64cb0aeb85ff7c306aea52c3ab94c5c","title":"","text":"// TODO(liggitt): drop this once golang json parser limits stack depth (https://github.com/golang/go/issues/31789) if len(p.patchBytes) > 1024*1024 { v := map[string]interface{}{} if err := json.Unmarshal(p.patchBytes, v); err != nil { if err := json.Unmarshal(p.patchBytes, &v); err != nil { return nil, errors.NewBadRequest(fmt.Sprintf(\"error decoding patch: %v\", err)) } }"} {"_id":"doc-en-kubernetes-4ddeaf9e40f6bc028710a41a25e0fe607dd85ee90f329a08becf55950d437312","title":"","text":"t.Errorf(\"expected success or bad request err, got %v\", err) } }) t.Run(\"JSONPatchType should handle a valid patch just under the max limit\", func(t *testing.T) { patchBody := []byte(`[{\"op\":\"add\",\"path\":\"/foo\",\"value\":0` + strings.Repeat(\" \", 3*1024*1024-100) + `}]`) err = rest.Patch(types.JSONPatchType).AbsPath(fmt.Sprintf(\"/api/v1/namespaces/default/secrets/test\")). 
Body(patchBody).Do().Error() if err != nil { t.Errorf(\"unexpected error: %v\", err) } }) t.Run(\"MergePatchType should handle a patch just under the max limit\", func(t *testing.T) { patchBody := []byte(`{\"value\":` + strings.Repeat(\"[\", 3*1024*1024/2-100) + strings.Repeat(\"]\", 3*1024*1024/2-100) + `}`) err = rest.Patch(types.MergePatchType).AbsPath(fmt.Sprintf(\"/api/v1/namespaces/default/secrets/test\"))."} {"_id":"doc-en-kubernetes-b963884e350ff8046d6ffbe0280ff926096fc2e75c1627b21c26f0ab25d165af","title":"","text":"t.Errorf(\"expected success or bad request err, got %v\", err) } }) t.Run(\"MergePatchType should handle a valid patch just under the max limit\", func(t *testing.T) { patchBody := []byte(`{\"value\":0` + strings.Repeat(\" \", 3*1024*1024-100) + `}`) err = rest.Patch(types.MergePatchType).AbsPath(fmt.Sprintf(\"/api/v1/namespaces/default/secrets/test\")). Body(patchBody).Do().Error() if err != nil { t.Errorf(\"unexpected error: %v\", err) } }) t.Run(\"StrategicMergePatchType should handle a patch just under the max limit\", func(t *testing.T) { patchBody := []byte(`{\"value\":` + strings.Repeat(\"[\", 3*1024*1024/2-100) + strings.Repeat(\"]\", 3*1024*1024/2-100) + `}`) err = rest.Patch(types.StrategicMergePatchType).AbsPath(fmt.Sprintf(\"/api/v1/namespaces/default/secrets/test\"))."} {"_id":"doc-en-kubernetes-e6a488a252f930ffbc5e20149add79bd3159ad09f48221db456bf02ecc7a9922","title":"","text":"t.Errorf(\"expected success or bad request err, got %v\", err) } }) t.Run(\"StrategicMergePatchType should handle a valid patch just under the max limit\", func(t *testing.T) { patchBody := []byte(`{\"value\":0` + strings.Repeat(\" \", 3*1024*1024-100) + `}`) err = rest.Patch(types.StrategicMergePatchType).AbsPath(fmt.Sprintf(\"/api/v1/namespaces/default/secrets/test\")). 
Body(patchBody).Do().Error() if err != nil { t.Errorf(\"unexpected error: %v\", err) } }) t.Run(\"ApplyPatchType should handle a patch just under the max limit\", func(t *testing.T) { patchBody := []byte(`{\"value\":` + strings.Repeat(\"[\", 3*1024*1024/2-100) + strings.Repeat(\"]\", 3*1024*1024/2-100) + `}`) err = rest.Patch(types.ApplyPatchType).Param(\"fieldManager\", \"test\").AbsPath(fmt.Sprintf(\"/api/v1/namespaces/default/secrets/test\"))."} {"_id":"doc-en-kubernetes-d60f5361ad8552ff5c09ea2aee9725c23fc3508ed916d7092305db3ff96dbb04","title":"","text":"t.Errorf(\"expected success or bad request err, got %#v\", err) } }) t.Run(\"ApplyPatchType should handle a valid patch just under the max limit\", func(t *testing.T) { patchBody := []byte(`{\"apiVersion\":\"v1\",\"kind\":\"Secret\"` + strings.Repeat(\" \", 3*1024*1024-100) + `}`) err = rest.Patch(types.ApplyPatchType).Param(\"fieldManager\", \"test\").AbsPath(fmt.Sprintf(\"/api/v1/namespaces/default/secrets/test\")). Body(patchBody).Do().Error() if err != nil { t.Errorf(\"unexpected error: %v\", err) } }) t.Run(\"Delete should limit the request body size\", func(t *testing.T) { err = c.Delete().AbsPath(fmt.Sprintf(\"/api/v1/namespaces/default/secrets/test\")). Body(hugeData).Do().Error()"} {"_id":"doc-en-kubernetes-2ed962974f9e2a41f59af0b16789012ddc7520ab4b86611085e9092696a42420","title":"","text":"} } // Enforce CLI specified namespace on server request. if o.EnforceNamespace { o.VisitedNamespaces.Insert(o.Namespace) } // Generates the objects using the resource builder if they have not // already been stored by calling \"SetObjects()\" in the pre-processor. infos, err := o.GetObjects()"} {"_id":"doc-en-kubernetes-ce88f1d888398b1723ec1d056b142b5201b82198851e9eb266e49b3e08578d2f","title":"","text":"} if errors.IsConflict(err) { err = fmt.Errorf(`%v Please review the fields above--they currently have other managers. 
Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply"} {"_id":"doc-en-kubernetes-d447385b890f931146592aec1d279ac84350269ede9d27ef96095cb34b3aabe6","title":"","text":"* You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See http://k8s.io/docs/reference/using-api/api-concepts/#conflicts`, err) } return err"} {"_id":"doc-en-kubernetes-d7ee5bdc648b2e686f153cd7212792672c86bbea5698ecd3b8a15d7adf9ddd9a","title":"","text":"} } func TestApplyPruneObjects(t *testing.T) { cmdtesting.InitTestErrorHandler(t) nameRC, currentRC := readAndAnnotateReplicationController(t, filenameRC) pathRC := \"/namespaces/test/replicationcontrollers/\" + nameRC for _, fn := range testingOpenAPISchemaFns { t.Run(\"test apply returns correct output\", func(t *testing.T) { tf := cmdtesting.NewTestFactory().WithNamespace(\"test\") defer tf.Cleanup() tf.UnstructuredClient = &fake.RESTClient{ NegotiatedSerializer: resource.UnstructuredPlusDefaultContentConfig().NegotiatedSerializer, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case p == pathRC && m == \"GET\": bodyRC := ioutil.NopCloser(bytes.NewReader(currentRC)) return &http.Response{StatusCode: http.StatusOK, Header: cmdtesting.DefaultHeader(), Body: bodyRC}, nil case p == pathRC && m == \"PATCH\": validatePatchApplication(t, req) bodyRC := ioutil.NopCloser(bytes.NewReader(currentRC)) return &http.Response{StatusCode: http.StatusOK, Header: cmdtesting.DefaultHeader(), Body: bodyRC}, nil default: t.Fatalf(\"unexpected request: %#v\n%#v\", req.URL, req) return nil, nil } }), } tf.OpenAPISchemaFunc = fn tf.ClientConfigVal = cmdtesting.DefaultClientConfig() ioStreams, _, buf, errBuf := genericclioptions.NewTestIOStreams() cmd := NewCmdApply(\"kubectl\", tf,
ioStreams) cmd.Flags().Set(\"filename\", filenameRC) cmd.Flags().Set(\"prune\", \"true\") cmd.Flags().Set(\"namespace\", \"test\") cmd.Flags().Set(\"output\", \"yaml\") cmd.Flags().Set(\"all\", \"true\") cmd.Run(cmd, []string{}) if !strings.Contains(buf.String(), \"test-rc\") { t.Fatalf(\"unexpected output: %s\nexpected to contain: %s\", buf.String(), \"test-rc\") } if errBuf.String() != \"\" { t.Fatalf(\"unexpected error output: %s\", errBuf.String()) } }) } } func TestApplyObjectOutput(t *testing.T) { cmdtesting.InitTestErrorHandler(t) nameRC, currentRC := readAndAnnotateReplicationController(t, filenameRC)
LoadBalancingRulePropertiesFormat: &network.LoadBalancingRulePropertiesFormat{ IdleTimeoutInMinutes: to.Int32Ptr(2), }, }, expected: false, }, { msg: \"rule names match while idletimeout nil should return true\", existingRule: []network.LoadBalancingRule{ { Name: to.StringPtr(\"httpRule\"), LoadBalancingRulePropertiesFormat: &network.LoadBalancingRulePropertiesFormat{}, }, }, curRule: network.LoadBalancingRule{ Name: to.StringPtr(\"httpRule\"), LoadBalancingRulePropertiesFormat: &network.LoadBalancingRulePropertiesFormat{ IdleTimeoutInMinutes: to.Int32Ptr(2), }, }, expected: true, }, { msg: \"rule names match while LoadDistribution unmatch should return false\", existingRule: []network.LoadBalancingRule{ {"} {"_id":"doc-en-kubernetes-8f7d889af3fb19085c7630e326fa852a8dc6c0e4cf8a395b5744b6a5284b7bfa","title":"","text":"# Translate a published version / (e.g. \"release/stable\") to version number. set_binary_version \"${release}\" if [[ -z \"${KUBERNETES_SKIP_RELEASE_VALIDATION-}\" ]]; then if [[ ${KUBE_VERSION} =~ ${KUBE_CI_VERSION_REGEX} ]]; then if [[ ${KUBE_VERSION} =~ ${KUBE_RELEASE_VERSION_REGEX} ]]; then # Use KUBERNETES_RELEASE_URL for Releases and Pre-Releases # ie. 1.18.0 or 1.19.0-beta.0 KUBERNETES_RELEASE_URL=\"${KUBERNETES_RELEASE_URL}\" elif [[ ${KUBE_VERSION} =~ ${KUBE_CI_VERSION_REGEX} ]]; then # Override KUBERNETES_RELEASE_URL to point to the CI bucket; # this will be used by get-kube-binaries.sh. # ie. v1.19.0-beta.0.318+b618411f1edb98 KUBERNETES_RELEASE_URL=\"${KUBERNETES_CI_RELEASE_URL}\" elif ! [[ ${KUBE_VERSION} =~ ${KUBE_RELEASE_VERSION_REGEX} ]]; then else echo \"Version doesn't match regexp\" >&2 exit 1 fi"} {"_id":"doc-en-kubernetes-450485b507cab32ababb87b967fa70364fc95122c493305b1711d07b7815dd18","title":"","text":"# See the License for the specific language governing permissions and # limitations under the License. 
# This script generates `/pkg/kubelet/apis/podresources/v1alpha1/api.pb.go` # from the protobuf file `/pkg/kubelet/apis/podresources/v1alpha1/api.proto` # for pods. # Usage: `hack/update-generated-pod-resources.sh`. set -o errexit set -o nounset set -o pipefail"} {"_id":"doc-en-kubernetes-f405ca742e5d3d1c3c38cee8a75423f6697d724b5a80b24f0d585910ac6e485a","title":"","text":"# See the License for the specific language governing permissions and # limitations under the License. # This script generates `*/api.pb.go` from the protobuf file `*/api.proto`. # Usage: # hack/update-generated-protobuf-dockerized.sh \"${APIROOTS}\" # An example APIROOT is: \"k8s.io/api/admissionregistration/v1\" set -o errexit set -o nounset set -o pipefail"} {"_id":"doc-en-kubernetes-9895ac4046e5d5225abf97168583dae16c8498d163fca0813781b650d9db13ce","title":"","text":"# See the License for the specific language governing permissions and # limitations under the License. # This script generates all go files from the corresponding protobuf files. # Usage: `hack/update-generated-protobuf.sh`. set -o errexit set -o nounset set -o pipefail"} {"_id":"doc-en-kubernetes-e44b1165f54a8c25d540b19ff63913bbdce4a2574f1a4b338091a6eedc86e094","title":"","text":"# See the License for the specific language governing permissions and # limitations under the License. # This script builds the protoc-gen-gogo binary at runtime and generates # `*/api.pb.go` from the protobuf file `*/api.proto`. set -o errexit set -o nounset set -o pipefail"} {"_id":"doc-en-kubernetes-3bf207ba365062c68f3268b1162028992068989ffc586fe06ffa5591f302e246","title":"","text":"# See the License for the specific language governing permissions and # limitations under the License. # This script builds the protoc-gen-gogo binary at runtime and generates # `*/api.pb.go` from the protobuf file `*/api.proto`.
# Usage: # hack/update-generated-runtime.sh set -o errexit set -o nounset set -o pipefail"} {"_id":"doc-en-kubernetes-23a965315d9c022d034fcb0a4dd4bf67ebc2d21431a5b3023cae44316cbe0afd","title":"","text":"# See the License for the specific language governing permissions and # limitations under the License. # Generates `types_swagger_doc_generated.go` files for API group # This script generates `types_swagger_doc_generated.go` files for API group # versions. That file contains functions on API structs that return # the comments that should be surfaced for the corresponding API type # in our API docs."} {"_id":"doc-en-kubernetes-c7b0f69b34324804e65e16c8ecfaf717fed1ad8662dd7d169bd26019dca3d0ed","title":"","text":"func VerifyExecInPodSucceed(f *framework.Framework, pod *v1.Pod, shExec string) { _, err := PodExec(f, pod, shExec) if err != nil { if err, ok := err.(uexec.CodeExitError); ok { exitCode := err.ExitStatus() if exiterr, ok := err.(uexec.CodeExitError); ok { exitCode := exiterr.ExitStatus() framework.ExpectNoError(err, \"%q should succeed, but failed with exit code %d and error message %q\", shExec, exitCode, err) shExec, exitCode, exiterr) } else { framework.ExpectNoError(err, \"%q should succeed, but failed with error message %q\","} {"_id":"doc-en-kubernetes-0fea54305987694793abaa1c224ea0c4475213476a32f3faaac224d6a5276713","title":"","text":"func VerifyExecInPodFail(f *framework.Framework, pod *v1.Pod, shExec string, exitCode int) { _, err := PodExec(f, pod, shExec) if err != nil { if err, ok := err.(clientexec.ExitError); ok { actualExitCode := err.ExitStatus() if exiterr, ok := err.(clientexec.ExitError); ok { actualExitCode := exiterr.ExitStatus() framework.ExpectEqual(actualExitCode, exitCode, \"%q should fail with exit code %d, but failed with exit code %d and error message %q\", shExec, exitCode, actualExitCode, err) shExec, exitCode, actualExitCode, exiterr) } else { framework.ExpectNoError(err, \"%q should fail with exit code %d, but failed with error 
message %q\","} {"_id":"doc-en-kubernetes-8ab9556751547aa424e8a4a37d2406e9dbaad02961c21b76897629f627df8f99","title":"","text":"\"extender_test.go\", \"framework_test.go\", \"main_test.go\", \"plugins_test.go\", \"predicates_test.go\", \"preemption_test.go\", \"priorities_test.go\","} {"_id":"doc-en-kubernetes-e4560e5c3f852e1448d92bd26ea2210028a5760f89eef9bab15cd9109fe67a3a","title":"","text":" /* Copyright 2020 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package scheduler import ( \"testing\" \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/resource\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" featuregatetesting \"k8s.io/component-base/featuregate/testing\" \"k8s.io/kubernetes/pkg/features\" ) func TestNodeResourceLimits(t *testing.T) { defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.ResourceLimitsPriorityFunction, true)() context := initTest(t, \"node-resource-limits\") defer cleanupTest(t, context) // Add one node expectedNode, err := createNode(context.clientSet, \"test-node1\", &v1.ResourceList{ v1.ResourcePods: *resource.NewQuantity(32, resource.DecimalSI), v1.ResourceCPU: *resource.NewMilliQuantity(2000, resource.DecimalSI), v1.ResourceMemory: *resource.NewQuantity(2000, resource.DecimalSI), }) if err != nil { t.Fatalf(\"Cannot create node: %v\", err) } // Add another node with less resource _, err = createNode(context.clientSet, \"test-node2\", &v1.ResourceList{ v1.ResourcePods: *resource.NewQuantity(32, 
resource.DecimalSI), v1.ResourceCPU: *resource.NewMilliQuantity(1000, resource.DecimalSI), v1.ResourceMemory: *resource.NewQuantity(1000, resource.DecimalSI), }) if err != nil { t.Fatalf(\"Cannot create node: %v\", err) } podName := \"pod-with-resource-limits\" pod, err := runPausePod(context.clientSet, initPausePod(context.clientSet, &pausePodConfig{ Name: podName, Namespace: context.ns.Name, Resources: &v1.ResourceRequirements{Requests: v1.ResourceList{ v1.ResourceCPU: *resource.NewMilliQuantity(500, resource.DecimalSI), v1.ResourceMemory: *resource.NewQuantity(500, resource.DecimalSI)}, }, })) if err != nil { t.Fatalf(\"Error running pause pod: %v\", err) } if pod.Spec.NodeName != expectedNode.Name { t.Errorf(\"pod %v got scheduled on an unexpected node: %v. Expected node: %v.\", podName, pod.Spec.NodeName, expectedNode.Name) } else { t.Logf(\"pod %v got successfully scheduled on node %v.\", podName, pod.Spec.NodeName) } } "} {"_id":"doc-en-kubernetes-3ae82e0bbc3c8d9a54af45edf3383f74c5037f0275a175946c70e687b17049fe","title":"","text":"// TolerationsTolerateTaintsWithFilter checks if given tolerations tolerates // all the taints that apply to the filter in given taint list. // DEPRECATED: Please use FindMatchingUntoleratedTaint instead. 
func TolerationsTolerateTaintsWithFilter(tolerations []v1.Toleration, taints []v1.Taint, applyFilter taintsFilterFunc) bool { if len(taints) == 0 { return true } _, isUntolerated := FindMatchingUntoleratedTaint(taints, tolerations, applyFilter) return !isUntolerated } for i := range taints { if applyFilter != nil && !applyFilter(&taints[i]) { continue // FindMatchingUntoleratedTaint checks if the given tolerations tolerates // all the filtered taints, and returns the first taint without a toleration func FindMatchingUntoleratedTaint(taints []v1.Taint, tolerations []v1.Toleration, inclusionFilter taintsFilterFunc) (v1.Taint, bool) { filteredTaints := getFilteredTaints(taints, inclusionFilter) for _, taint := range filteredTaints { if !TolerationsTolerateTaint(tolerations, &taint) { return taint, true } } return v1.Taint{}, false } if !TolerationsTolerateTaint(tolerations, &taints[i]) { return false // getFilteredTaints returns a list of taints satisfying the filter predicate func getFilteredTaints(taints []v1.Taint, inclusionFilter taintsFilterFunc) []v1.Taint { if inclusionFilter == nil { return taints } filteredTaints := []v1.Taint{} for _, taint := range taints { if !inclusionFilter(&taint) { continue } filteredTaints = append(filteredTaints, taint) } return true return filteredTaints } // Returns true and list of Tolerations matching all Taints if all are tolerated, or false otherwise."} {"_id":"doc-en-kubernetes-fc3d43ace78b9fc22d1f99f61a92546c4432b87381073c20e0158e1eb368cdf0","title":"","text":"return framework.NewStatus(framework.Error, err.Error()) } if v1helper.TolerationsTolerateTaintsWithFilter(pod.Spec.Tolerations, taints, func(t *v1.Taint) bool { filterPredicate := func(t *v1.Taint) bool { // PodToleratesNodeTaints is only interested in NoSchedule and NoExecute taints. 
return t.Effect == v1.TaintEffectNoSchedule || t.Effect == v1.TaintEffectNoExecute }) { } taint, isUntolerated := v1helper.FindMatchingUntoleratedTaint(taints, pod.Spec.Tolerations, filterPredicate) if !isUntolerated { return nil } return framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReasonNotMatch) errReason := fmt.Sprintf(\"node(s) had taint {%s: %s}, that the pod didn't tolerate\", taint.Key, taint.Value) return framework.NewStatus(framework.UnschedulableAndUnresolvable, errReason) } // postFilterState computed at PostFilter and used at Score."} {"_id":"doc-en-kubernetes-0849c110ea21df8bc827c31633303dbe259e2ed5df2325a3035069eed5de1547","title":"","text":"} func TestTaintTolerationFilter(t *testing.T) { unschedulable := framework.NewStatus(framework.UnschedulableAndUnresolvable, ErrReasonNotMatch) tests := []struct { name string pod *v1.Pod"} {"_id":"doc-en-kubernetes-a967c9c630df81013b54636f76e528459f9cf5ca5130cc544e6e6f0aba7014be","title":"","text":"wantStatus *framework.Status }{ { name: \"A pod having no tolerations can't be scheduled onto a node with nonempty taints\", pod: podWithTolerations(\"pod1\", []v1.Toleration{}), node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"dedicated\", Value: \"user1\", Effect: \"NoSchedule\"}}), wantStatus: unschedulable, name: \"A pod having no tolerations can't be scheduled onto a node with nonempty taints\", pod: podWithTolerations(\"pod1\", []v1.Toleration{}), node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"dedicated\", Value: \"user1\", Effect: \"NoSchedule\"}}), wantStatus: framework.NewStatus(framework.UnschedulableAndUnresolvable, \"node(s) had taint {dedicated: user1}, that the pod didn't tolerate\"), }, { name: \"A pod which can be scheduled on a dedicated node assigned to user1 with effect NoSchedule\","} {"_id":"doc-en-kubernetes-c4897998f52c98fc539084d11a4bf341fbffa8f4884db26b3cc58daf48a56cd4","title":"","text":"node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"dedicated\", Value: \"user1\", 
Effect: \"NoSchedule\"}}), }, { name: \"A pod which can't be scheduled on a dedicated node assigned to user2 with effect NoSchedule\", pod: podWithTolerations(\"pod1\", []v1.Toleration{{Key: \"dedicated\", Operator: \"Equal\", Value: \"user2\", Effect: \"NoSchedule\"}}), node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"dedicated\", Value: \"user1\", Effect: \"NoSchedule\"}}), wantStatus: unschedulable, name: \"A pod which can't be scheduled on a dedicated node assigned to user2 with effect NoSchedule\", pod: podWithTolerations(\"pod1\", []v1.Toleration{{Key: \"dedicated\", Operator: \"Equal\", Value: \"user2\", Effect: \"NoSchedule\"}}), node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"dedicated\", Value: \"user1\", Effect: \"NoSchedule\"}}), wantStatus: framework.NewStatus(framework.UnschedulableAndUnresolvable, \"node(s) had taint {dedicated: user1}, that the pod didn't tolerate\"), }, { name: \"A pod can be scheduled onto the node, with a toleration uses operator Exists that tolerates the taints on the node\","} {"_id":"doc-en-kubernetes-d58bff5f9ecde2aa18df816711ca1a096a3bc826b5b9bc7b7fe2ecedd6007e69","title":"","text":"{ name: \"A pod has a toleration that keys and values match the taint on the node, but (non-empty) effect doesn't match, \" + \"can't be scheduled onto the node\", pod: podWithTolerations(\"pod1\", []v1.Toleration{{Key: \"foo\", Operator: \"Equal\", Value: \"bar\", Effect: \"PreferNoSchedule\"}}), node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"foo\", Value: \"bar\", Effect: \"NoSchedule\"}}), wantStatus: unschedulable, pod: podWithTolerations(\"pod1\", []v1.Toleration{{Key: \"foo\", Operator: \"Equal\", Value: \"bar\", Effect: \"PreferNoSchedule\"}}), node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"foo\", Value: \"bar\", Effect: \"NoSchedule\"}}), wantStatus: framework.NewStatus(framework.UnschedulableAndUnresolvable, \"node(s) had taint {foo: bar}, that the pod didn't tolerate\"), }, { name: \"The pod has a toleration that keys and 
values match the taint on the node, the effect of toleration is empty, \" +"} {"_id":"doc-en-kubernetes-5fce6bc877a2641a0153a71ca65db4d46f373f23e1fe0a81af577858284d2ca8","title":"","text":"}, { name: \"The pod has a toleration that key and value don't match the taint on the node, \" + \"but the effect of taint on node is PreferNochedule. Pod can be scheduled onto the node\", \"but the effect of taint on node is PreferNoSchedule. Pod can be scheduled onto the node\", pod: podWithTolerations(\"pod1\", []v1.Toleration{{Key: \"dedicated\", Operator: \"Equal\", Value: \"user2\", Effect: \"NoSchedule\"}}), node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"dedicated\", Value: \"user1\", Effect: \"PreferNoSchedule\"}}), }, { name: \"The pod has no toleration, \" + \"but the effect of taint on node is PreferNochedule. Pod can be scheduled onto the node\", \"but the effect of taint on node is PreferNoSchedule. Pod can be scheduled onto the node\", pod: podWithTolerations(\"pod1\", []v1.Toleration{}), node: nodeWithTaints(\"nodeA\", []v1.Taint{{Key: \"dedicated\", Value: \"user1\", Effect: \"PreferNoSchedule\"}}), },"} {"_id":"doc-en-kubernetes-b25a13a220109ab56b45447b902aaaba9ddccb83302eb97c3e6618bd0312bb51","title":"","text":"// can reasonably expect seems questionable. 
{Group: \"extensions\", Version: \"v1beta1\"}: {group: 17900, version: 1}, // to my knowledge, nothing below here collides {Group: \"apps\", Version: \"v1\"}: {group: 17800, version: 15}, {Group: \"events.k8s.io\", Version: \"v1beta1\"}: {group: 17750, version: 5}, {Group: \"authentication.k8s.io\", Version: \"v1\"}: {group: 17700, version: 15}, {Group: \"authentication.k8s.io\", Version: \"v1beta1\"}: {group: 17700, version: 9}, {Group: \"authorization.k8s.io\", Version: \"v1\"}: {group: 17600, version: 15}, {Group: \"authorization.k8s.io\", Version: \"v1beta1\"}: {group: 17600, version: 9}, {Group: \"autoscaling\", Version: \"v1\"}: {group: 17500, version: 15}, {Group: \"autoscaling\", Version: \"v2beta1\"}: {group: 17500, version: 9}, {Group: \"autoscaling\", Version: \"v2beta2\"}: {group: 17500, version: 1}, {Group: \"batch\", Version: \"v1\"}: {group: 17400, version: 15}, {Group: \"batch\", Version: \"v1beta1\"}: {group: 17400, version: 9}, {Group: \"batch\", Version: \"v2alpha1\"}: {group: 17400, version: 9}, {Group: \"certificates.k8s.io\", Version: \"v1beta1\"}: {group: 17300, version: 9}, {Group: \"networking.k8s.io\", Version: \"v1\"}: {group: 17200, version: 15}, {Group: \"networking.k8s.io\", Version: \"v1beta1\"}: {group: 17200, version: 9}, {Group: \"policy\", Version: \"v1beta1\"}: {group: 17100, version: 9}, {Group: \"rbac.authorization.k8s.io\", Version: \"v1\"}: {group: 17000, version: 15}, {Group: \"rbac.authorization.k8s.io\", Version: \"v1beta1\"}: {group: 17000, version: 12}, {Group: \"rbac.authorization.k8s.io\", Version: \"v1alpha1\"}: {group: 17000, version: 9}, {Group: \"settings.k8s.io\", Version: \"v1alpha1\"}: {group: 16900, version: 9}, {Group: \"storage.k8s.io\", Version: \"v1\"}: {group: 16800, version: 15}, {Group: \"storage.k8s.io\", Version: \"v1beta1\"}: {group: 16800, version: 9}, {Group: \"storage.k8s.io\", Version: \"v1alpha1\"}: {group: 16800, version: 1}, {Group: \"apiextensions.k8s.io\", Version: \"v1\"}: {group: 16700, 
version: 15}, {Group: \"apiextensions.k8s.io\", Version: \"v1beta1\"}: {group: 16700, version: 9}, {Group: \"admissionregistration.k8s.io\", Version: \"v1\"}: {group: 16700, version: 15}, {Group: \"admissionregistration.k8s.io\", Version: \"v1beta1\"}: {group: 16700, version: 12}, {Group: \"scheduling.k8s.io\", Version: \"v1\"}: {group: 16600, version: 15}, {Group: \"scheduling.k8s.io\", Version: \"v1beta1\"}: {group: 16600, version: 12}, {Group: \"scheduling.k8s.io\", Version: \"v1alpha1\"}: {group: 16600, version: 9}, {Group: \"coordination.k8s.io\", Version: \"v1\"}: {group: 16500, version: 15}, {Group: \"coordination.k8s.io\", Version: \"v1beta1\"}: {group: 16500, version: 9}, {Group: \"auditregistration.k8s.io\", Version: \"v1alpha1\"}: {group: 16400, version: 1}, {Group: \"node.k8s.io\", Version: \"v1alpha1\"}: {group: 16300, version: 1}, {Group: \"node.k8s.io\", Version: \"v1beta1\"}: {group: 16300, version: 9}, {Group: \"discovery.k8s.io\", Version: \"v1beta1\"}: {group: 16200, version: 12}, {Group: \"discovery.k8s.io\", Version: \"v1alpha1\"}: {group: 16200, version: 9}, {Group: \"apps\", Version: \"v1\"}: {group: 17800, version: 15}, {Group: \"events.k8s.io\", Version: \"v1beta1\"}: {group: 17750, version: 5}, {Group: \"authentication.k8s.io\", Version: \"v1\"}: {group: 17700, version: 15}, {Group: \"authentication.k8s.io\", Version: \"v1beta1\"}: {group: 17700, version: 9}, {Group: \"authorization.k8s.io\", Version: \"v1\"}: {group: 17600, version: 15}, {Group: \"authorization.k8s.io\", Version: \"v1beta1\"}: {group: 17600, version: 9}, {Group: \"autoscaling\", Version: \"v1\"}: {group: 17500, version: 15}, {Group: \"autoscaling\", Version: \"v2beta1\"}: {group: 17500, version: 9}, {Group: \"autoscaling\", Version: \"v2beta2\"}: {group: 17500, version: 1}, {Group: \"batch\", Version: \"v1\"}: {group: 17400, version: 15}, {Group: \"batch\", Version: \"v1beta1\"}: {group: 17400, version: 9}, {Group: \"batch\", Version: \"v2alpha1\"}: {group: 17400, 
version: 9}, {Group: \"certificates.k8s.io\", Version: \"v1beta1\"}: {group: 17300, version: 9}, {Group: \"networking.k8s.io\", Version: \"v1\"}: {group: 17200, version: 15}, {Group: \"networking.k8s.io\", Version: \"v1beta1\"}: {group: 17200, version: 9}, {Group: \"policy\", Version: \"v1beta1\"}: {group: 17100, version: 9}, {Group: \"rbac.authorization.k8s.io\", Version: \"v1\"}: {group: 17000, version: 15}, {Group: \"rbac.authorization.k8s.io\", Version: \"v1beta1\"}: {group: 17000, version: 12}, {Group: \"rbac.authorization.k8s.io\", Version: \"v1alpha1\"}: {group: 17000, version: 9}, {Group: \"settings.k8s.io\", Version: \"v1alpha1\"}: {group: 16900, version: 9}, {Group: \"storage.k8s.io\", Version: \"v1\"}: {group: 16800, version: 15}, {Group: \"storage.k8s.io\", Version: \"v1beta1\"}: {group: 16800, version: 9}, {Group: \"storage.k8s.io\", Version: \"v1alpha1\"}: {group: 16800, version: 1}, {Group: \"apiextensions.k8s.io\", Version: \"v1\"}: {group: 16700, version: 15}, {Group: \"apiextensions.k8s.io\", Version: \"v1beta1\"}: {group: 16700, version: 9}, {Group: \"admissionregistration.k8s.io\", Version: \"v1\"}: {group: 16700, version: 15}, {Group: \"admissionregistration.k8s.io\", Version: \"v1beta1\"}: {group: 16700, version: 12}, {Group: \"scheduling.k8s.io\", Version: \"v1\"}: {group: 16600, version: 15}, {Group: \"scheduling.k8s.io\", Version: \"v1beta1\"}: {group: 16600, version: 12}, {Group: \"scheduling.k8s.io\", Version: \"v1alpha1\"}: {group: 16600, version: 9}, {Group: \"coordination.k8s.io\", Version: \"v1\"}: {group: 16500, version: 15}, {Group: \"coordination.k8s.io\", Version: \"v1beta1\"}: {group: 16500, version: 9}, {Group: \"auditregistration.k8s.io\", Version: \"v1alpha1\"}: {group: 16400, version: 1}, {Group: \"node.k8s.io\", Version: \"v1alpha1\"}: {group: 16300, version: 1}, {Group: \"node.k8s.io\", Version: \"v1beta1\"}: {group: 16300, version: 9}, {Group: \"discovery.k8s.io\", Version: \"v1beta1\"}: {group: 16200, version: 12}, 
{Group: \"discovery.k8s.io\", Version: \"v1alpha1\"}: {group: 16200, version: 9}, {Group: \"flowcontrol.apiserver.k8s.io\", Version: \"v1alpha1\"}: {group: 16100, version: 9}, // Append a new group to the end of the list if unsure. // You can use min(existing group)-100 as the initial value for a group. // Version can be set to 9 (to have space around) for a new group."} {"_id":"doc-en-kubernetes-2fe7e53d41c635b6a9d24a8d2772cc32417f7dd1545b8273a7ba327d45819103","title":"","text":"}, // -- // k8s.io/kubernetes/pkg/apis/flowcontrol/v1alpha1 gvr(\"flowcontrol.apiserver.k8s.io\", \"v1alpha1\", \"flowschemas\"): { Stub: `{\"metadata\": {\"name\": \"va1\"}, \"spec\": {\"priorityLevelConfiguration\": {\"name\": \"name1\"}}}`, ExpectedEtcdPath: \"/registry/flowschemas/va1\", }, // -- // k8s.io/kubernetes/pkg/apis/flowcontrol/v1alpha1 gvr(\"flowcontrol.apiserver.k8s.io\", \"v1alpha1\", \"prioritylevelconfigurations\"): { Stub: `{\"metadata\": {\"name\": \"conf1\"}, \"spec\": {\"type\": \"Limited\", \"limited\": {\"assuredConcurrencyShares\":3, \"limitResponse\": {\"type\": \"Reject\"}}}}`, ExpectedEtcdPath: \"/registry/prioritylevelconfigurations/conf1\", }, // -- // k8s.io/kubernetes/pkg/apis/storage/v1beta1 gvr(\"storage.k8s.io\", \"v1beta1\", \"volumeattachments\"): { Stub: `{\"metadata\": {\"name\": \"va2\"}, \"spec\": {\"attacher\": \"gce\", \"nodeName\": \"localhost\", \"source\": {\"persistentVolumeName\": \"pv2\"}}}`,"} {"_id":"doc-en-kubernetes-ffce7929d4b2ba7cc1a0cb8456bdd662c249946920387ff35538153cc32ad832","title":"","text":"OPENSSL_BIN=$(command -v openssl) } # Query the API server for client certificate authentication capabilities function kube::util::test_client_certificate_authentication_enabled { local output kube::util::test_openssl_installed output=$(echo | \"${OPENSSL_BIN}\" s_client -connect \"127.0.0.1:${SECURE_API_PORT}\" 2> /dev/null | grep -A3 'Acceptable client certificate CA names') if [[ \"${output}\" != *\"/CN=127.0.0.1\"* ]] && [[ \"${output}\" 
!= *\"CN = 127.0.0.1\"* ]]; then echo \"API server not configured for client certificate authentication\" echo \"Output of from acceptable client certificate check: ${output}\" exit 1 fi } # creates a client CA, args are sudo, dest-dir, ca-id, purpose # purpose is dropped in after \"key encipherment\", you usually want # '\"client auth\"'"} {"_id":"doc-en-kubernetes-a94781fe035c6f4fb55ac560ad09f9b8d878245c94cd835a872399a720623772","title":"","text":"--storage-media-type=\"${KUBE_TEST_API_STORAGE_TYPE-}\" --cert-dir=\"${TMPDIR:-/tmp/}\" --service-cluster-ip-range=\"10.0.0.0/24\" --client-ca-file=hack/testdata/ca.crt --token-auth-file=hack/testdata/auth-tokens.csv 1>&2 & export APISERVER_PID=$!"} {"_id":"doc-en-kubernetes-1f7fbf72dc0af7c50e2c4ea309783ce49923977ee349a9f9732cc33a3016ad7d","title":"","text":" -----BEGIN CERTIFICATE----- MIICpjCCAY4CCQCZBiNB23olFzANBgkqhkiG9w0BAQsFADAUMRIwEAYDVQQDDAkx MjcuMC4wLjEwIBcNMjAwNjE0MTk0OTM4WhgPMjI5NDAzMzAxOTQ5MzhaMBQxEjAQ BgNVBAMMCTEyNy4wLjAuMTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB AMxIjMd58IhiiyK4VjmuCWBUZksSs1CcQuo5HSpOqogVZ+vR5mdJDZ56Pw/NSM5c RqOB3cvjGrxYQe/lKvo9D3UmWLcRKtxdlWxCfPekioJ25/dhGOxtBQcjtp/TSqTM txprwT4fvsVwiwaURFoCOivF4xjQFG0K1i3/m7CiMHODy67M1EfJDrM7Vv5XPIuJ VF8HhWBH2HiM25ak34XhxVTX8K97k6wO9OZ5GMqbYuVobTZrSRdiv8s95rkmik6P jn0ePKqSz6cXNXgXqTl11WtsuoGgjOdB8j/noqTF3m3z17sSBqqG/xBFuSFoNceA yBDb9ohbs8oY3NIZzyMrt8MCAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAFgcaqRgv qylx4ogL5iUr0K2e/8YzsvH7zLHG6xnr7HxpR/p0lQt3dPlppECZMGDKElbCgU8f xVDdZ3FOxHTJ51Vnq/U5xJo+UOMJ4sS8fEH8cfNliSsvmSKzjxpPKqbCJ7VTnkW8 lonedCPRksnhlD1U8CF21rEjKsXcLoX5PsxlS4DX3PtO0+e8aUh9F4XyZagpejq8 0ttXkWd3IyYrpFRGDlFDxIiKx7pf+mG6JZ/ms6jloBSwwcz/Nkn5FMxiq75bQuOH EV+99S2du/X2bRmD1JxCiMDw8cMacIFBr6BYXsvKOlivwfHBWk8U0f+lVi60jWje PpKFRd1mYuEZgw== -----END CERTIFICATE----- "} {"_id":"doc-en-kubernetes-92ce169c54acf357dce7e2e3961009df3b2a3aed6a5e2ea9ba6a5cac8c5f6be0","title":"","text":"// UpdateTransportConfig updates the transport.Config to use credentials // returned by 
the plugin. func (a *Authenticator) UpdateTransportConfig(c *transport.Config) error { // If a bearer token is present in the request - avoid the GetCert callback when // setting up the transport, as that triggers the exec action if the server is // also configured to allow client certificates for authentication. For requests // like \"kubectl get --token (token) pods\" we should assume the intention is to // use the provided token for authentication. if c.HasTokenAuth() { return nil } c.Wrap(func(rt http.RoundTripper) http.RoundTripper { return &roundTripper{a, rt} })"} {"_id":"doc-en-kubernetes-2f35858ae116578b4761da69a27012f29a93dde7faa482bde4eddfb634a7f6b7","title":"","text":"get(t, http.StatusOK) } func TestTokenPresentCancelsExecAction(t *testing.T) { a, err := newAuthenticator(newCache(), &api.ExecConfig{ Command: \"./testdata/test-plugin.sh\", APIVersion: \"client.authentication.k8s.io/v1alpha1\", }) if err != nil { t.Fatal(err) } // UpdateTransportConfig returns error on existing TLS certificate callback, unless a bearer token is present in the // transport config, in which case it takes precedence cert := func() (*tls.Certificate, error) { return nil, nil } tc := &transport.Config{BearerToken: \"token1\", TLS: transport.TLSConfig{Insecure: true, GetCert: cert}} if err := a.UpdateTransportConfig(tc); err != nil { t.Error(\"Expected presence of bearer token in config to cancel exec action\") } } func TestTLSCredentials(t *testing.T) { now := time.Now()"} {"_id":"doc-en-kubernetes-652868b7d7f90ee9cb17ab783c3394c60c4a429cd8cac7c76d51789f9a7e494a","title":"","text":" #!/usr/bin/env bash # Copyright 2020 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -o errexit set -o nounset set -o pipefail run_exec_credentials_tests() { set -o nounset set -o errexit kube::log::status \"Testing kubectl with configured exec credentials plugin\" cat > \"${TMPDIR:-/tmp}\"/invalid_exec_plugin.yaml << EOF apiVersion: v1 clusters: - cluster: name: test contexts: - context: cluster: test user: invalid_token_user name: test current-context: test kind: Config preferences: {} users: - name: invalid_token_user user: exec: apiVersion: client.authentication.k8s.io/v1beta1 # Any invalid exec credential plugin will do to demonstrate command: ls EOF ### Provided --token should take precedence, thus not triggering the (invalid) exec credential plugin # Pre-condition: Client certificate authentication enabled on the API server kube::util::test_client_certificate_authentication_enabled # Command output=$(kubectl \"${kube_flags_with_token[@]:?}\" --kubeconfig=\"${TMPDIR:-/tmp}\"/invalid_exec_plugin.yaml get namespace kube-system -o name || true) if [[ \"${output}\" == \"namespace/kube-system\" ]]; then kube::log::status \"exec credential plugin not triggered since kubectl was called with provided --token\" else kube::log::status \"Unexpected output when providing --token for authentication - exec credential plugin likely triggered. 
Output: ${output}\" exit 1 fi # Post-condition: None ### Without provided --token, the exec credential plugin should be triggered # Pre-condition: Client certificate authentication enabled on the API server - already checked by positive test above # Command output2=$(kubectl \"${kube_flags_without_token[@]:?}\" --kubeconfig=\"${TMPDIR:-/tmp}\"/invalid_exec_plugin.yaml get namespace kube-system -o name 2>&1 || true) if [[ \"${output2}\" =~ \"json parse error\" ]]; then kube::log::status \"exec credential plugin triggered since kubectl was called without provided --token\" else kube::log::status \"Unexpected output when not providing --token for authentication - exec credential plugin not triggered. Output: ${output2}\" exit 1 fi # Post-condition: None rm \"${TMPDIR:-/tmp}\"/invalid_exec_plugin.yaml set +o nounset set +o errexit } "} {"_id":"doc-en-kubernetes-0d398164d07851ea670f3f4007623067c1ea545236957340c1b14046c594cb9f","title":"","text":"# source \"${KUBE_ROOT}/hack/lib/test.sh\" source \"${KUBE_ROOT}/test/cmd/apply.sh\" source \"${KUBE_ROOT}/test/cmd/apps.sh\" source \"${KUBE_ROOT}/test/cmd/authentication.sh\" source \"${KUBE_ROOT}/test/cmd/authorization.sh\" source \"${KUBE_ROOT}/test/cmd/batch.sh\" source \"${KUBE_ROOT}/test/cmd/certificate.sh\""} {"_id":"doc-en-kubernetes-2c90dab34971eac96519a918ee7caaf7a394dde783bc3a3d53d7f6b9e405e092","title":"","text":"'-s' \"http://127.0.0.1:${API_PORT}\" ) # token defined in hack/testdata/auth-tokens.csv kube_flags_with_token=( '-s' \"https://127.0.0.1:${SECURE_API_PORT}\" '--token=admin-token' '--insecure-skip-tls-verify=true' kube_flags_without_token=( '-s' \"https://127.0.0.1:${SECURE_API_PORT}\" '--insecure-skip-tls-verify=true' ) # token defined in hack/testdata/auth-tokens.csv kube_flags_with_token=( \"${kube_flags_without_token[@]}\" '--token=admin-token' ) if [[ -z \"${ALLOW_SKEW:-}\" ]]; then kube_flags+=('--match-server-version') kube_flags_with_token+=('--match-server-version')"} 
{"_id":"doc-en-kubernetes-4709195ade87d1c81bfe035cfe42a36fb2d31f117fb6cc1ca9a7df7fcd403a54","title":"","text":"record_command run_nodes_tests fi ######################## # Authentication ######################## record_command run_exec_credentials_tests ######################## # authorization.k8s.io #"} {"_id":"doc-en-kubernetes-be154280fbc5f3c17ed3cbbd234943c6b4c0e94016a737a13ea06649549f2491","title":"","text":"echo \"Running GKE internal configuration script\" . \"${KUBE_HOME}/bin/gke-internal-configure-helper.sh\" gke-internal-master-start elif [[ -n \"${KUBE_BEARER_TOKEN:-}\" ]]; then echo \"setting up local admin kubeconfig\" create-kubeconfig \"local-admin\" \"${KUBE_BEARER_TOKEN}\" echo \"export KUBECONFIG=/etc/srv/kubernetes/local-admin/kubeconfig\" > /etc/profile.d/kubeconfig.sh fi }"} {"_id":"doc-en-kubernetes-4199feaea31b98961f9adb935bfeaf2f89b41e194e3d50aed913be127cb172c2","title":"","text":"// the various afterEaches afterEaches map[string]AfterEachActionFunc // beforeEachStarted indicates that BeforeEach has started beforeEachStarted bool // configuration for framework's client Options Options"} {"_id":"doc-en-kubernetes-d7555334cd9d3ee3b3c9b22e37e169ccb525beb158ebb0da9f7a9ffd55ac79c2","title":"","text":"// BeforeEach gets a client and makes a namespace. func (f *Framework) BeforeEach() { f.beforeEachStarted = true // The fact that we need this feels like a bug in ginkgo. // https://github.com/onsi/ginkgo/issues/222 f.cleanupHandle = AddCleanupAction(f.AfterEach)"} {"_id":"doc-en-kubernetes-6df302733686ab045686921f35d464b30352a3789f5be170dd4a29781f2e4e62","title":"","text":"// AfterEach deletes the namespace, after reading its events. func (f *Framework) AfterEach() { // If BeforeEach never started AfterEach should be skipped. // Currently some tests under e2e/storage have this condition. if !f.beforeEachStarted { return } RemoveCleanupAction(f.cleanupHandle) // This should not happen. Given ClientSet is a public field a test must have updated it! 
// Error out early before any API calls during cleanup. if f.ClientSet == nil { Failf(\"The framework ClientSet must not be nil at this point\") } // DeleteNamespace at the very end in defer, to avoid any // expectation failures preventing deleting the namespace. defer func() {"} {"_id":"doc-en-kubernetes-e4a7355151d6250448555f7fc9df36bc54e17d02766e53f95b8d769411c98c7d","title":"","text":"} for n := range p.visitedNamespaces { if len(o.Namespace) != 0 && n != o.Namespace { continue } for _, m := range namespacedRESTMappings { if err := p.prune(n, m); err != nil { return fmt.Errorf(\"error pruning namespaced object %v: %v\", m.GroupVersionKind, err)"} {"_id":"doc-en-kubernetes-fd5527661bb7851c9a6f715a391c5b55b9fb44088dd3105212a1cdc2d183d186","title":"","text":"# cleanup kubectl delete svc prune-svc 2>&1 \"${kube_flags[@]:?}\" ## kubectl apply --prune can prune resources not in the defaulted namespace # Pre-Condition: namespace nsb exists; no POD exists kubectl create ns nsb kube::test::get_object_assert pods \"{{range.items}}{{${id_field:?}}}:{{end}}\" '' # apply a into namespace nsb kubectl apply --namespace nsb -f hack/testdata/prune/a.yaml \"${kube_flags[@]:?}\" kube::test::get_object_assert 'pods a -n nsb' \"{{${id_field:?}}}\" 'a' # apply b with namespace kubectl apply --namespace nsb -f hack/testdata/prune/b.yaml \"${kube_flags[@]:?}\" kube::test::get_object_assert 'pods b -n nsb' \"{{${id_field:?}}}\" 'b' # apply --prune must prune a kubectl apply --prune --all -f hack/testdata/prune/b.yaml # check wrong pod doesn't exist output_message=$(! kubectl get pods a -n nsb 2>&1 \"${kube_flags[@]:?}\") kube::test::if_has_string \"${output_message}\" 'pods \"a\" not found' # check right pod exists kube::test::get_object_assert 'pods b -n nsb' \"{{${id_field:?}}}\" 'b' # cleanup kubectl delete ns nsb ## kubectl apply -n must fail if input file contains namespace other than the one given in -n output_message=$(! 
kubectl apply -n foo -f hack/testdata/prune/b.yaml 2>&1 \"${kube_flags[@]:?}\") kube::test::if_has_string \"${output_message}\" 'the namespace from the provided object \"nsb\" does not match the namespace \"foo\".' ## kubectl apply -f some.yml --force # Pre-condition: no service exists"} {"_id":"doc-en-kubernetes-eb78ef48d36d92d2d60b75459752731eedf8c34ad519f3fa0c70d2d31a2f0e5e","title":"","text":"if err != nil { return err } if cml.cm.Annotations == nil { cml.cm.Annotations = make(map[string]string) } cml.cm.Annotations[LeaderElectionRecordAnnotationKey] = string(recordBytes) cml.cm, err = cml.Client.ConfigMaps(cml.ConfigMapMeta.Namespace).Update(cml.cm) return err"} {"_id":"doc-en-kubernetes-35f8ef6033d0baea94ce57a1ca54b860d5114b8b07c253e6a3c41839b745f07e","title":"","text":"\"fmt\" \"io\" \"net\" \"strings\" \"sync\" \"time\""} {"_id":"doc-en-kubernetes-63199030474f215438269bc015ea20517745ce832d56bc568f13d0b1060130b4","title":"","text":"if nodeID != \"\" { return true, nil } // kubelet plugin registration service not implemented is a terminal error, no need to retry if strings.Contains(getNodeInfoError.Error(), \"no handler registered for plugin type\") { return false, getNodeInfoError if getNodeInfoError != nil { klog.Warningf(\"Error calling CSI NodeGetInfo(): %v\", getNodeInfoError.Error()) } // Continue with exponential backoff return false, nil"} {"_id":"doc-en-kubernetes-f87e634c0100745834ea3b14130f6ea88882419ea1a25afe15ac20246b4e1e6d","title":"","text":"csipbv1 \"github.com/container-storage-interface/spec/lib/go/csi\" api \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/resource\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/kubernetes/pkg/volume\" \"k8s.io/kubernetes/pkg/volume/csi/fake\" volumetypes \"k8s.io/kubernetes/pkg/volume/util/types\""} {"_id":"doc-en-kubernetes-dbef861ecc1eaaae344351a0bbf3dcc79d74e2d3340c09bc74faf37c5b6716a9","title":"","text":"expectedMaxVolumePerNode int64 expectedAccessibleTopology map[string]string mustFail bool 
mustTimeout bool err error }{ {"} {"_id":"doc-en-kubernetes-32225258249a66f8878e774c55af4001f0ba55a28b4b562e43b996edb730b033","title":"","text":"mustFail: true, err: errors.New(\"grpc error\"), }, { name: \"test empty nodeId\", mustTimeout: true, expectedNodeID: \"\", expectedMaxVolumePerNode: 16, expectedAccessibleTopology: map[string]string{\"com.example.csi-topology/zone\": \"zone1\"}, }, } for _, tc := range testCases {"} {"_id":"doc-en-kubernetes-107790bc4006221b922c19e2cc8d7e22d936d9c8aa6177851983d8bc33783b57","title":"","text":"} nodeID, maxVolumePerNode, accessibleTopology, err := client.NodeGetInfo(context.Background()) checkErr(t, tc.mustFail, err) if tc.mustTimeout { if wait.ErrWaitTimeout.Error() != err.Error() { t.Errorf(\"should have timed out : %s\", tc.name) } } else { checkErr(t, tc.mustFail, err) } if nodeID != tc.expectedNodeID { t.Errorf(\"expected nodeID: %v; got: %v\", tc.expectedNodeID, nodeID)"} {"_id":"doc-en-kubernetes-c6bfe16ca608432eadacb34ef855356e710b1addb68f7a8ed30b79ae5869339a","title":"","text":"exit 1 fi # The Travis continuous build uses a head go release that doesn't report # a version number, so we skip this check on Travis. It's unnecessary # there anyway. if [ \"${TRAVIS}\" != \"true\" ]; then GO_VERSION=($(go version)) if [[ ${GO_VERSION[2]} < \"go1.2\" ]]; then echo \"Detected go version: ${GO_VERSION}.\" echo \"Kubernetes requires go version 1.2 or greater.\" echo \"Please install Go version 1.2 or later\" exit 1 fi fi pushd $(dirname \"${BASH_SOURCE}\")/.. 
>/dev/null KUBE_REPO_ROOT=\"${PWD}\" KUBE_TARGET=\"${KUBE_REPO_ROOT}/output/go\""} {"_id":"doc-en-kubernetes-9fa0e1aaa6135f2e34f257f350af1152f3eeb43c4483643583950f14e9bdb37b","title":"","text":"config.PreFilter.Enabled = append(config.PreFilter.Enabled, f) config.Filter.Enabled = append(config.Filter.Enabled, f) config.PreScore.Enabled = append(config.PreScore.Enabled, f) s := schedulerapi.Plugin{Name: podtopologyspread.Name, Weight: 1} // Weight is doubled because: // - This is a score coming from user preference. // - It makes its signal comparable to NodeResourcesLeastAllocated. s := schedulerapi.Plugin{Name: podtopologyspread.Name, Weight: 2} config.Score.Enabled = append(config.Score.Enabled, s) }"} {"_id":"doc-en-kubernetes-bb65167780875d86ac887f82ef6660af7b4c0a90de48f937d8b7742028d7e77d","title":"","text":"{Name: nodepreferavoidpods.Name, Weight: 10000}, {Name: defaultpodtopologyspread.Name, Weight: 1}, {Name: tainttoleration.Name, Weight: 1}, {Name: podtopologyspread.Name, Weight: 1}, {Name: podtopologyspread.Name, Weight: 2}, }, }, Reserve: &schedulerapi.PluginSet{"} {"_id":"doc-en-kubernetes-e923f9c6f85410a6562057b52b8d118063afec3e05b84bebce02356d9516e084","title":"","text":"{Name: nodepreferavoidpods.Name, Weight: 10000}, {Name: defaultpodtopologyspread.Name, Weight: 1}, {Name: tainttoleration.Name, Weight: 1}, {Name: podtopologyspread.Name, Weight: 1}, {Name: podtopologyspread.Name, Weight: 2}, {Name: noderesources.ResourceLimitsName, Weight: 1}, }, },"} {"_id":"doc-en-kubernetes-deca14d0110197a7dc51927c0610c692ca00203c9ff0e0cac8cb42691c027d21","title":"","text":"{Name: \"NodePreferAvoidPods\", Weight: 10000}, {Name: \"DefaultPodTopologySpread\", Weight: 1}, {Name: \"TaintToleration\", Weight: 1}, {Name: \"PodTopologySpread\", Weight: 1}, {Name: \"PodTopologySpread\", Weight: 2}, }, \"BindPlugin\": {{Name: \"DefaultBinder\"}}, \"ReservePlugin\": {{Name: \"VolumeBinding\"}},"} 
{"_id":"doc-en-kubernetes-00630fe3953bdbe1a1a5a5dd3aaf30508c98b47d7d7cf76582f2954e4166a3e6","title":"","text":"{Name: \"NodePreferAvoidPods\", Weight: 10000}, {Name: \"DefaultPodTopologySpread\", Weight: 1}, {Name: \"TaintToleration\", Weight: 1}, {Name: \"PodTopologySpread\", Weight: 1}, {Name: \"PodTopologySpread\", Weight: 2}, }, \"ReservePlugin\": {{Name: \"VolumeBinding\"}}, \"UnreservePlugin\": {{Name: \"VolumeBinding\"}},"} {"_id":"doc-en-kubernetes-6615fadb28b5831db1d91160c73f94b48c5d2f956e47eebad21e106f9323635b","title":"","text":"\"k8s.io/apimachinery/pkg/util/sets\" ) // ciphers maps strings into tls package cipher constants in // https://golang.org/pkg/crypto/tls/#pkg-constants // to be replaced by tls.CipherSuites() when the project migrates to go1.14. var ciphers = map[string]uint16{ \"TLS_RSA_WITH_3DES_EDE_CBC_SHA\": tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA, \"TLS_RSA_WITH_AES_128_CBC_SHA\": tls.TLS_RSA_WITH_AES_128_CBC_SHA, \"TLS_RSA_WITH_AES_256_CBC_SHA\": tls.TLS_RSA_WITH_AES_256_CBC_SHA, \"TLS_RSA_WITH_AES_128_GCM_SHA256\": tls.TLS_RSA_WITH_AES_128_GCM_SHA256, \"TLS_RSA_WITH_AES_256_GCM_SHA384\": tls.TLS_RSA_WITH_AES_256_GCM_SHA384, \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA\": tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA\": tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, \"TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA\": tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA\": tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA\": tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\": tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\": tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\": tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\": tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305\": 
tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305\": tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, \"TLS_AES_128_GCM_SHA256\": tls.TLS_AES_128_GCM_SHA256, \"TLS_CHACHA20_POLY1305_SHA256\": tls.TLS_CHACHA20_POLY1305_SHA256, \"TLS_AES_256_GCM_SHA384\": tls.TLS_AES_256_GCM_SHA384, } var ( // ciphers maps strings into tls package cipher constants in // https://golang.org/pkg/crypto/tls/#pkg-constants ciphers = map[string]uint16{} insecureCiphers = map[string]uint16{} ) // to be replaced by tls.InsecureCipherSuites() when the project migrates to go1.14. var insecureCiphers = map[string]uint16{ \"TLS_RSA_WITH_RC4_128_SHA\": tls.TLS_RSA_WITH_RC4_128_SHA, \"TLS_RSA_WITH_AES_128_CBC_SHA256\": tls.TLS_RSA_WITH_AES_128_CBC_SHA256, \"TLS_ECDHE_ECDSA_WITH_RC4_128_SHA\": tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, \"TLS_ECDHE_RSA_WITH_RC4_128_SHA\": tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA, \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256\": tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\": tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, func init() { for _, suite := range tls.CipherSuites() { ciphers[suite.Name] = suite.ID } // keep legacy names for backward compatibility ciphers[\"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305\"] = tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 ciphers[\"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305\"] = tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 for _, suite := range tls.InsecureCipherSuites() { insecureCiphers[suite.Name] = suite.ID } } // InsecureTLSCiphers returns the cipher suites implemented by crypto/tls which have"} {"_id":"doc-en-kubernetes-55537165e9a1395bc93c67301ecad54fe7a91a7d642fd3e5c26f31746f3f16d2","title":"","text":" //go:build go1.14 // +build go1.14 /* Copyright 2020 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package flag import ( \"crypto/tls\" ) func init() { // support official IANA names as well on go1.14 ciphers[\"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\"] = tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 ciphers[\"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\"] = tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 } "} {"_id":"doc-en-kubernetes-d128e1514991df576633943c57e544f6701ce565c9f6435e9e60acffccf32984","title":"","text":"import ( \"crypto/tls\" \"fmt\" \"go/importer\" \"reflect\" \"strings\" \"testing\" )"} {"_id":"doc-en-kubernetes-4fb88d7bddee2420e2d2ada7b6d3b70a101b151123d0be195ffa1f16c350fe2d","title":"","text":"expected: nil, expected_error: true, }, { // All existing cipher suites flag: []string{ \"TLS_RSA_WITH_3DES_EDE_CBC_SHA\", \"TLS_RSA_WITH_AES_128_CBC_SHA\", \"TLS_RSA_WITH_AES_256_CBC_SHA\", \"TLS_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_AES_128_GCM_SHA256\", \"TLS_CHACHA20_POLY1305_SHA256\", \"TLS_AES_256_GCM_SHA384\", \"TLS_RSA_WITH_RC4_128_SHA\", 
\"TLS_RSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_ECDSA_WITH_RC4_128_SHA\", \"TLS_ECDHE_RSA_WITH_RC4_128_SHA\", \"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256\", }, expected: []uint16{ tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA, tls.TLS_RSA_WITH_AES_128_CBC_SHA, tls.TLS_RSA_WITH_AES_256_CBC_SHA, tls.TLS_RSA_WITH_AES_128_GCM_SHA256, tls.TLS_RSA_WITH_AES_256_GCM_SHA384, tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, tls.TLS_AES_128_GCM_SHA256, tls.TLS_CHACHA20_POLY1305_SHA256, tls.TLS_AES_256_GCM_SHA384, tls.TLS_RSA_WITH_RC4_128_SHA, tls.TLS_RSA_WITH_AES_128_CBC_SHA256, tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA, tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, }, }, } for i, test := range tests { uIntFlags, err := TLSCipherSuites(test.flag) if reflect.DeepEqual(uIntFlags, test.expected) == false { if !reflect.DeepEqual(uIntFlags, test.expected) { t.Errorf(\"%d: expected %+v, got %+v\", i, test.expected, uIntFlags) } if test.expected_error && err == nil {"} {"_id":"doc-en-kubernetes-6c7ccdc7cedfbbe88c762918116cbdfb4575833f0369ff9a958d7ef4c256c18a","title":"","text":"} } } func TestConstantMaps(t *testing.T) { pkg, err := importer.Default().Import(\"crypto/tls\") if err != nil { fmt.Printf(\"error: %sn\", err.Error()) return } discoveredVersions := map[string]bool{} discoveredCiphers := map[string]bool{} for _, declName := range pkg.Scope().Names() { if 
strings.HasPrefix(declName, \"VersionTLS\") { discoveredVersions[declName] = true } if strings.HasPrefix(declName, \"TLS_\") && !strings.HasPrefix(declName, \"TLS_FALLBACK_\") { discoveredCiphers[declName] = true } } acceptedCiphers := allCiphers() for k := range discoveredCiphers { if _, ok := acceptedCiphers[k]; !ok { t.Errorf(\"discovered cipher tls.%s not in ciphers map\", k) } } for k := range acceptedCiphers { if _, ok := discoveredCiphers[k]; !ok { t.Errorf(\"ciphers map has %s not in tls package\", k) } } for k := range discoveredVersions { if _, ok := versions[k]; !ok { t.Errorf(\"discovered version tls.%s not in version map\", k) } } for k := range versions { if _, ok := discoveredVersions[k]; !ok { t.Errorf(\"versions map has %s not in tls package\", k) } } } "} {"_id":"doc-en-kubernetes-73c1a615a098195931aa07c1bd3ed57d8b2295fed8d99bb36b338aa95f91db1f","title":"","text":"dc := l.config.Framework.DynamicClient vsc := sDriver.GetSnapshotClass(l.config) dataSource, cleanupFunc := prepareSnapshotDataSourceForProvisioning(l.config.ClientNodeSelection, l.cs, dc, l.pvc, l.sc, vsc) testConfig := convertTestConfig(l.config) expectedContent := fmt.Sprintf(\"Hello from namespace %s\", f.Namespace.Name) dataSource, cleanupFunc := prepareSnapshotDataSourceForProvisioning(f, testConfig, l.cs, dc, l.pvc, l.sc, vsc, pattern.VolMode, expectedContent) defer cleanupFunc() l.pvc.Spec.DataSource = dataSource l.testCase.PvCheck = func(claim *v1.PersistentVolumeClaim) { ginkgo.By(\"checking whether the created volume has the pre-populated data\") command := fmt.Sprintf(\"grep '%s' /mnt/test/initialData\", claim.Namespace) RunInPodWithVolume(l.cs, claim.Namespace, claim.Name, \"pvc-snapshot-tester\", command, l.config.ClientNodeSelection) tests := []volume.Test{ { Volume: *createVolumeSource(claim.Name, false /* readOnly */), Mode: pattern.VolMode, File: \"index.html\", ExpectedContent: expectedContent, }, } volume.TestVolumeClient(f, testConfig, nil, \"\", tests) } 
l.testCase.TestDynamicProvisioning() })"} {"_id":"doc-en-kubernetes-ce78ad8eb7148c5b307d25e7e44cfd04510f761f8febb90b45f43e1e51e9c29e","title":"","text":"} func prepareSnapshotDataSourceForProvisioning( node e2epod.NodeSelection, f *framework.Framework, config volume.TestConfig, client clientset.Interface, dynamicClient dynamic.Interface, initClaim *v1.PersistentVolumeClaim, class *storagev1.StorageClass, snapshotClass *unstructured.Unstructured, mode v1.PersistentVolumeMode, injectContent string, ) (*v1.TypedLocalObjectReference, func()) { var err error if class != nil {"} {"_id":"doc-en-kubernetes-e5dceb4f67002741f15bf0925d0b13abfb7d4a392a22e809578a945a7e5fc02e","title":"","text":"framework.ExpectNoError(err) // write namespace to the /mnt/test (= the volume). ginkgo.By(\"[Initialize dataSource]write data to volume\") command := fmt.Sprintf(\"echo '%s' > /mnt/test/initialData\", updatedClaim.GetNamespace()) RunInPodWithVolume(client, updatedClaim.Namespace, updatedClaim.Name, \"pvc-snapshot-writer\", command, node) err = e2epv.WaitForPersistentVolumeClaimPhase(v1.ClaimBound, client, updatedClaim.Namespace, updatedClaim.Name, framework.Poll, framework.ClaimProvisionTimeout) framework.ExpectNoError(err) ginkgo.By(\"[Initialize dataSource]checking the initClaim\") // Get new copy of the initClaim _, err = client.CoreV1().PersistentVolumeClaims(updatedClaim.Namespace).Get(context.TODO(), updatedClaim.Name, metav1.GetOptions{}) framework.ExpectNoError(err) tests := []volume.Test{ { Volume: *createVolumeSource(updatedClaim.Name, false /* readOnly */), Mode: mode, File: \"index.html\", ExpectedContent: injectContent, }, } volume.InjectContent(f, config, nil, \"\", tests) ginkgo.By(\"[Initialize dataSource]creating a SnapshotClass\") snapshotClass, err = dynamicClient.Resource(snapshotClassGVR).Create(snapshotClass, metav1.CreateOptions{}) framework.ExpectNoError(err) ginkgo.By(\"[Initialize dataSource]creating a snapshot\") snapshot := getSnapshot(updatedClaim.Name, 
updatedClaim.Namespace, snapshotClass.GetName())"} {"_id":"doc-en-kubernetes-8cf9f6596b7a989a7e424ffe472288d562d33407ca6526ff9064f0654ab85506","title":"","text":"name: csi-data-dir - name: hostpath image: quay.io/k8scsi/hostpathplugin:v1.4.0-rc1 image: quay.io/k8scsi/hostpathplugin:v1.4.0-rc2 args: - \"--drivername=hostpath.csi.k8s.io\" - \"--v=5\""} {"_id":"doc-en-kubernetes-16ea4d21b317d9cbef400aa3d027a4e5aadbbe17631b1d1944ec60787c1800bd","title":"","text":"ginkgo.It(\"should never report success for a pending container\", func() { ginkgo.By(\"creating pods that should always exit 1 and terminating the pod after a random delay\") var reBug88766 = regexp.MustCompile(`ContainerCannotRun.*rootfs_linux.go.*kubernetes.io~secret.*no such file or directory`) var reBug88766 = regexp.MustCompile(`rootfs_linux.*kubernetes.io~secret.*no such file or directory`) var ( lock sync.Mutex"} {"_id":"doc-en-kubernetes-b13977e5dd832c7d7f671e0f825322199a5bf1c6d4ec112d486129a93f3e302e","title":"","text":"switch { case t.ExitCode == 1: // expected case t.ExitCode == 128 && reBug88766.MatchString(t.Message): case t.ExitCode == 128 && t.Reason == \"ContainerCannotRun\" && reBug88766.MatchString(t.Message): // pod volume teardown races with container start in CRI, which reports a failure framework.Logf(\"pod %s on node %s failed with the symptoms of https://github.com/kubernetes/kubernetes/issues/88766\") default:"} {"_id":"doc-en-kubernetes-4340ce61bb0c350ea9809e436d1b629f8d751075c7b6ffcd1843844263ca7abe","title":"","text":"switch { case t.ExitCode == 1: // expected case t.ExitCode == 128 && t.Reason == \"ContainerCannotRun\" && reBug88766.MatchString(t.Message): case t.ExitCode == 128 && (t.Reason == \"StartError\" || t.Reason == \"ContainerCannotRun\") && reBug88766.MatchString(t.Message): // pod volume teardown races with container start in CRI, which reports a failure framework.Logf(\"pod %s on node %s failed with the symptoms of 
https://github.com/kubernetes/kubernetes/issues/88766\") default:"} {"_id":"doc-en-kubernetes-bd4a84219da020c6231b1f6cd75870cc682ce2845576e01f51b3de9b6e8d5b53","title":"","text":"defer runtime.HandleCrash() for event := range outStream { if event.ContainerName == recordEventContainerName { if event.VictimContainerName == recordEventContainerName { klog.V(1).Infof(\"Got sys oom event: %v\", event) eventMsg := \"System OOM encountered\" if event.ProcessName != \"\" && event.Pid != 0 {"} {"_id":"doc-en-kubernetes-d91b3037e35d6d2f8da5ea09b08408a215ec3d82b3f5309cf830de6c98f2641b","title":"","text":"Pid: 1000, ProcessName: \"fakeProcess\", TimeOfDeath: time.Now(), ContainerName: recordEventContainerName, VictimContainerName: \"some-container\", ContainerName: recordEventContainerName + \"some-container\", VictimContainerName: recordEventContainerName, }, } numExpectedOomEvents := len(oomInstancesToStream)"} {"_id":"doc-en-kubernetes-8cb2062934168fd7ea2436a9298cdcfacc43a89b64bee492d1befdddf58a7a84","title":"","text":"Pid: 1000, ProcessName: \"fakeProcess\", TimeOfDeath: time.Now(), ContainerName: recordEventContainerName, VictimContainerName: \"some-container\", ContainerName: recordEventContainerName + \"some-container\", VictimContainerName: recordEventContainerName, }, { Pid: 1000, ProcessName: \"fakeProcess\", TimeOfDeath: time.Now(), ContainerName: \"/dont-record-oom-event\", VictimContainerName: \"some-container\", ContainerName: recordEventContainerName + \"kubepods/some-container\", VictimContainerName: recordEventContainerName + \"kubepods\", }, } numExpectedOomEvents := len(oomInstancesToStream) - numOomEventsWithIncorrectContainerName"} {"_id":"doc-en-kubernetes-95b313faaa79b8aba832b5e0b978467c56635f2bba62277e8480914530ba251d","title":"","text":"Pid: eventPid, ProcessName: processName, TimeOfDeath: time.Now(), ContainerName: recordEventContainerName, VictimContainerName: \"some-container\", ContainerName: recordEventContainerName + \"some-container\", 
VictimContainerName: recordEventContainerName, }, } numExpectedOomEvents := len(oomInstancesToStream)"} {"_id":"doc-en-kubernetes-07ddc511062230e204d96f97c07d178c4ddabe449ea759780ecf5f4d8118f136","title":"","text":"} var cgroupRoots []string cgroupRoots = append(cgroupRoots, cm.NodeAllocatableRoot(s.CgroupRoot, s.CgroupDriver)) nodeAllocatableRoot := cm.NodeAllocatableRoot(s.CgroupRoot, s.CgroupsPerQOS, s.CgroupDriver) cgroupRoots = append(cgroupRoots, nodeAllocatableRoot) kubeletCgroup, err := cm.GetKubeletContainer(s.KubeletCgroups) if err != nil { klog.Warningf(\"failed to get the kubelet's cgroup: %v. Kubelet system container metrics may be missing.\", err)"} {"_id":"doc-en-kubernetes-92485987d907c4634e38c53190fa82ee8760a44a70341bbe9331d30e14cd481d","title":"","text":"} // NodeAllocatableRoot returns the literal cgroup path for the node allocatable cgroup func NodeAllocatableRoot(cgroupRoot, cgroupDriver string) string { root := ParseCgroupfsToCgroupName(cgroupRoot) nodeAllocatableRoot := NewCgroupName(root, defaultNodeAllocatableCgroupName) func NodeAllocatableRoot(cgroupRoot string, cgroupsPerQOS bool, cgroupDriver string) string { nodeAllocatableRoot := ParseCgroupfsToCgroupName(cgroupRoot) if cgroupsPerQOS { nodeAllocatableRoot = NewCgroupName(nodeAllocatableRoot, defaultNodeAllocatableCgroupName) } if libcontainerCgroupManagerType(cgroupDriver) == libcontainerSystemd { return nodeAllocatableRoot.ToSystemd() }"} {"_id":"doc-en-kubernetes-6158a7acbcd7a5f6735863c60a489b6c6a005150f5a5fe166f7de5aae4afbfb4","title":"","text":"} // NodeAllocatableRoot returns the literal cgroup path for the node allocatable cgroup func NodeAllocatableRoot(cgroupRoot, cgroupDriver string) string { func NodeAllocatableRoot(cgroupRoot string, cgroupsPerQOS bool, cgroupDriver string) string { return \"\" }"} {"_id":"doc-en-kubernetes-a0cb1092c4b04ade1d5d2f6165579296152c918506d5fd27263c1dbab5d2fb65","title":"","text":"\"//staging/src/k8s.io/apimachinery/pkg/types:go_default_library\", 
\"//staging/src/k8s.io/apimachinery/pkg/util/runtime:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/sets:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/strategicpatch:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/wait:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/util/feature:go_default_library\", \"//staging/src/k8s.io/client-go/informers:go_default_library\","} {"_id":"doc-en-kubernetes-75dda362ac28a2112e8d803b32f7bf25f886b7ec7981d3a08e06b6aac8a59545","title":"","text":"import ( \"context\" \"encoding/json\" \"fmt\" \"io/ioutil\" \"math/rand\""} {"_id":"doc-en-kubernetes-ed1e4b4bd3d9eb64ea4ae07f2213f9bfbc118683f23fdb63ea5a3be88ed34808","title":"","text":"v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/runtime\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/apimachinery/pkg/util/strategicpatch\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/client-go/informers\" coreinformers \"k8s.io/client-go/informers/core/v1\""} {"_id":"doc-en-kubernetes-32416125416e9216d581219364dafff38cec0df6208834c873bfee7556b5d743","title":"","text":"func (p *podConditionUpdaterImpl) update(pod *v1.Pod, condition *v1.PodCondition) error { klog.V(3).Infof(\"Updating pod condition for %s/%s to (%s==%s, Reason=%s)\", pod.Namespace, pod.Name, condition.Type, condition.Status, condition.Reason) if podutil.UpdatePodCondition(&pod.Status, condition) { _, err := p.Client.CoreV1().Pods(pod.Namespace).UpdateStatus(context.TODO(), pod, metav1.UpdateOptions{}) oldData, err := json.Marshal(pod) if err != nil { return err } return nil if !podutil.UpdatePodCondition(&pod.Status, condition) { return nil } newData, err := json.Marshal(pod) if err != nil { return err } patchBytes, err := strategicpatch.CreateTwoWayMergePatch(oldData, newData, &v1.Pod{}) if err != nil { return fmt.Errorf(\"failed to create merge patch for pod %q/%q: %v\", pod.Namespace, pod.Name, err) } _, err = 
p.Client.CoreV1().Pods(pod.Namespace).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType, patchBytes, metav1.PatchOptions{}, \"status\") return err } type podPreemptorImpl struct {"} {"_id":"doc-en-kubernetes-0db79fc88a2cafc81468140ddda0f15a2ecefff9724a6ef741bf11ca2d9b6fe4","title":"","text":"} func (p *podPreemptorImpl) setNominatedNodeName(pod *v1.Pod, nominatedNodeName string) error { klog.V(3).Infof(\"Setting nominated node name for %s/%s to \"%s\"\", pod.Namespace, pod.Name, nominatedNodeName) if pod.Status.NominatedNodeName == nominatedNodeName { return nil } podCopy := pod.DeepCopy() oldData, err := json.Marshal(podCopy) if err != nil { return err } podCopy.Status.NominatedNodeName = nominatedNodeName _, err := p.Client.CoreV1().Pods(pod.Namespace).UpdateStatus(context.TODO(), podCopy, metav1.UpdateOptions{}) newData, err := json.Marshal(podCopy) if err != nil { return err } patchBytes, err := strategicpatch.CreateTwoWayMergePatch(oldData, newData, &v1.Pod{}) if err != nil { return fmt.Errorf(\"failed to create merge patch for pod %q/%q: %v\", pod.Namespace, pod.Name, err) } _, err = p.Client.CoreV1().Pods(pod.Namespace).Patch(context.TODO(), pod.Name, types.StrategicMergePatchType, patchBytes, metav1.PatchOptions{}, \"status\") return err }"} {"_id":"doc-en-kubernetes-fa87e7900f5577d412c0eb651bfc572f025957916ed4f4f846df0f9bb36ef4fa","title":"","text":"\"os\" \"path\" \"reflect\" \"regexp\" \"sort\" \"strings\" \"sync\""} {"_id":"doc-en-kubernetes-d083f3aae93c17570b32f7e948993041c5a63a43f49aa1f3c92d95bf2eb92246","title":"","text":"} } } func TestSetNominatedNodeName(t *testing.T) { tests := []struct { name string currentNominatedNodeName string newNominatedNodeName string expectedPatchRequests int expectedPatchData string }{ { name: \"Should make patch request to set node name\", currentNominatedNodeName: \"\", newNominatedNodeName: \"node1\", expectedPatchRequests: 1, expectedPatchData: `{\"status\":{\"nominatedNodeName\":\"node1\"}}`, }, { name: 
\"Should make patch request to clear node name\", currentNominatedNodeName: \"node1\", newNominatedNodeName: \"\", expectedPatchRequests: 1, expectedPatchData: `{\"status\":{\"nominatedNodeName\":null}}`, }, { name: \"Should not make patch request if nominated node is already set to the specified value\", currentNominatedNodeName: \"node1\", newNominatedNodeName: \"node1\", expectedPatchRequests: 0, }, { name: \"Should not make patch request if nominated node is already cleared\", currentNominatedNodeName: \"\", newNominatedNodeName: \"\", expectedPatchRequests: 0, }, } for _, test := range tests { t.Run(test.name, func(t *testing.T) { actualPatchRequests := 0 var actualPatchData string cs := &clientsetfake.Clientset{} cs.AddReactor(\"patch\", \"pods\", func(action clienttesting.Action) (bool, runtime.Object, error) { actualPatchRequests++ patch := action.(clienttesting.PatchAction) actualPatchData = string(patch.GetPatch()) // For this test, we don't care about the result of the patched pod, just that we got the expected // patch request, so just returning &v1.Pod{} here is OK because scheduler doesn't use the response. 
return true, &v1.Pod{}, nil }) pod := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Status: v1.PodStatus{NominatedNodeName: test.currentNominatedNodeName}, } preemptor := &podPreemptorImpl{Client: cs} if err := preemptor.setNominatedNodeName(pod, test.newNominatedNodeName); err != nil { t.Fatalf(\"Error calling setNominatedNodeName: %v\", err) } if actualPatchRequests != test.expectedPatchRequests { t.Fatalf(\"Actual patch requests (%d) does not equal expected patch requests (%d)\", actualPatchRequests, test.expectedPatchRequests) } if test.expectedPatchRequests > 0 && actualPatchData != test.expectedPatchData { t.Fatalf(\"Patch data mismatch: Actual was %v, but expected %v\", actualPatchData, test.expectedPatchData) } }) } } func TestUpdatePodCondition(t *testing.T) { tests := []struct { name string currentPodConditions []v1.PodCondition newPodCondition *v1.PodCondition expectedPatchRequests int expectedPatchDataPattern string }{ { name: \"Should make patch request to add pod condition when there are none currently\", currentPodConditions: []v1.PodCondition{}, newPodCondition: &v1.PodCondition{ Type: \"newType\", Status: \"newStatus\", LastProbeTime: metav1.NewTime(time.Date(2020, 5, 13, 1, 1, 1, 1, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 12, 1, 1, 1, 1, time.UTC)), Reason: \"newReason\", Message: \"newMessage\", }, expectedPatchRequests: 1, expectedPatchDataPattern: `{\"status\":{\"conditions\":[{\"lastProbeTime\":\"2020-05-13T01:01:01Z\",\"lastTransitionTime\":\".*\",\"message\":\"newMessage\",\"reason\":\"newReason\",\"status\":\"newStatus\",\"type\":\"newType\"}]}}`, }, { name: \"Should make patch request to add a new pod condition when there is already one with another type\", currentPodConditions: []v1.PodCondition{ { Type: \"someOtherType\", Status: \"someOtherTypeStatus\", LastProbeTime: metav1.NewTime(time.Date(2020, 5, 11, 0, 0, 0, 0, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 10, 0, 0, 0, 0, 
time.UTC)), Reason: \"someOtherTypeReason\", Message: \"someOtherTypeMessage\", }, }, newPodCondition: &v1.PodCondition{ Type: \"newType\", Status: \"newStatus\", LastProbeTime: metav1.NewTime(time.Date(2020, 5, 13, 1, 1, 1, 1, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 12, 1, 1, 1, 1, time.UTC)), Reason: \"newReason\", Message: \"newMessage\", }, expectedPatchRequests: 1, expectedPatchDataPattern: `{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"someOtherType\"},{\"type\":\"newType\"}],\"conditions\":[{\"lastProbeTime\":\"2020-05-13T01:01:01Z\",\"lastTransitionTime\":\".*\",\"message\":\"newMessage\",\"reason\":\"newReason\",\"status\":\"newStatus\",\"type\":\"newType\"}]}}`, }, { name: \"Should make patch request to update an existing pod condition\", currentPodConditions: []v1.PodCondition{ { Type: \"currentType\", Status: \"currentStatus\", LastProbeTime: metav1.NewTime(time.Date(2020, 5, 13, 0, 0, 0, 0, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 12, 0, 0, 0, 0, time.UTC)), Reason: \"currentReason\", Message: \"currentMessage\", }, }, newPodCondition: &v1.PodCondition{ Type: \"currentType\", Status: \"newStatus\", LastProbeTime: metav1.NewTime(time.Date(2020, 5, 13, 1, 1, 1, 1, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 12, 1, 1, 1, 1, time.UTC)), Reason: \"newReason\", Message: \"newMessage\", }, expectedPatchRequests: 1, expectedPatchDataPattern: `{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"currentType\"}],\"conditions\":[{\"lastProbeTime\":\"2020-05-13T01:01:01Z\",\"lastTransitionTime\":\".*\",\"message\":\"newMessage\",\"reason\":\"newReason\",\"status\":\"newStatus\",\"type\":\"currentType\"}]}}`, }, { name: \"Should make patch request to update an existing pod condition, but the transition time should remain unchanged because the status is the same\", currentPodConditions: []v1.PodCondition{ { Type: \"currentType\", Status: \"currentStatus\", LastProbeTime: 
metav1.NewTime(time.Date(2020, 5, 13, 0, 0, 0, 0, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 12, 0, 0, 0, 0, time.UTC)), Reason: \"currentReason\", Message: \"currentMessage\", }, }, newPodCondition: &v1.PodCondition{ Type: \"currentType\", Status: \"currentStatus\", LastProbeTime: metav1.NewTime(time.Date(2020, 5, 13, 1, 1, 1, 1, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 12, 0, 0, 0, 0, time.UTC)), Reason: \"newReason\", Message: \"newMessage\", }, expectedPatchRequests: 1, expectedPatchDataPattern: `{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"currentType\"}],\"conditions\":[{\"lastProbeTime\":\"2020-05-13T01:01:01Z\",\"message\":\"newMessage\",\"reason\":\"newReason\",\"type\":\"currentType\"}]}}`, }, { name: \"Should not make patch request if pod condition already exists and is identical\", currentPodConditions: []v1.PodCondition{ { Type: \"currentType\", Status: \"currentStatus\", LastProbeTime: metav1.NewTime(time.Date(2020, 5, 13, 0, 0, 0, 0, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 12, 0, 0, 0, 0, time.UTC)), Reason: \"currentReason\", Message: \"currentMessage\", }, }, newPodCondition: &v1.PodCondition{ Type: \"currentType\", Status: \"currentStatus\", LastProbeTime: metav1.NewTime(time.Date(2020, 5, 13, 0, 0, 0, 0, time.UTC)), LastTransitionTime: metav1.NewTime(time.Date(2020, 5, 12, 0, 0, 0, 0, time.UTC)), Reason: \"currentReason\", Message: \"currentMessage\", }, expectedPatchRequests: 0, }, } for _, test := range tests { t.Run(test.name, func(t *testing.T) { actualPatchRequests := 0 var actualPatchData string cs := &clientsetfake.Clientset{} cs.AddReactor(\"patch\", \"pods\", func(action clienttesting.Action) (bool, runtime.Object, error) { actualPatchRequests++ patch := action.(clienttesting.PatchAction) actualPatchData = string(patch.GetPatch()) // For this test, we don't care about the result of the patched pod, just that we got the expected // patch request, so 
just returning &v1.Pod{} here is OK because scheduler doesn't use the response. return true, &v1.Pod{}, nil }) pod := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Status: v1.PodStatus{Conditions: test.currentPodConditions}, } updater := &podConditionUpdaterImpl{Client: cs} if err := updater.update(pod, test.newPodCondition); err != nil { t.Fatalf(\"Error calling update: %v\", err) } if actualPatchRequests != test.expectedPatchRequests { t.Fatalf(\"Actual patch requests (%d) does not equal expected patch requests (%d)\", actualPatchRequests, test.expectedPatchRequests) } regex, err := regexp.Compile(test.expectedPatchDataPattern) if err != nil { t.Fatalf(\"Error compiling regexp for %v: %v\", test.expectedPatchDataPattern, err) } if test.expectedPatchRequests > 0 && !regex.MatchString(actualPatchData) { t.Fatalf(\"Patch data mismatch: Actual was %v, but expected to match regexp %v\", actualPatchData, test.expectedPatchDataPattern) } }) } } "} {"_id":"doc-en-kubernetes-7113a84af3e8f2361f4b1dabfd119c6bd3d5af2aaef05ff6cc9826fabb44528c","title":"","text":"framework \"k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1\" \"k8s.io/kubernetes/plugin/pkg/admission/priority\" testutils \"k8s.io/kubernetes/test/integration/util\" utils \"k8s.io/kubernetes/test/utils\" \"k8s.io/kubernetes/test/utils\" ) var lowPriority, mediumPriority, highPriority = int32(100), int32(200), int32(300)"} {"_id":"doc-en-kubernetes-9fa550cd56fde647acbbcfbeb7366ed71712a79662357b6cca53816054735678","title":"","text":"Labels: map[string]string{\"pod\": name}, Resources: defaultPodRes, }) // Setting grace period to zero. Otherwise, we may never see the actual deletion // of the pods in integration tests. pod.Spec.TerminationGracePeriodSeconds = &grace return pod }"} {"_id":"doc-en-kubernetes-642f052f0cb8b16a19f242af429fd44cb2fab6253c6182082b191fd33cf9738d","title":"","text":"} // Step 5. Check that nominated node name of the high priority pod is set. 
if err := waitForNominatedNodeName(cs, highPriPod); err != nil { t.Errorf(\"NominatedNodeName annotation was not set for pod %v/%v: %v\", medPriPod.Namespace, medPriPod.Name, err) t.Errorf(\"NominatedNodeName annotation was not set for pod %v/%v: %v\", highPriPod.Namespace, highPriPod.Name, err) } // And the nominated node name of the medium priority pod is cleared. if err := wait.Poll(100*time.Millisecond, wait.ForeverTestTimeout, func() (bool, error) {"} {"_id":"doc-en-kubernetes-3e708e0786301606b2891ccc96a71d9fb3cf156b4bfd5e8f74ffeb0840ddfc86","title":"","text":"podSpec.EphemeralContainers = nil } if (!utilfeature.DefaultFeatureGate.Enabled(features.VolumeSubpath) || !utilfeature.DefaultFeatureGate.Enabled(features.VolumeSubpathEnvExpansion)) && !subpathExprInUse(oldPodSpec) { // drop subpath env expansion from the pod if either of the subpath features is disabled and the old spec did not specify subpath env expansion if !utilfeature.DefaultFeatureGate.Enabled(features.VolumeSubpath) && !subpathExprInUse(oldPodSpec) { // drop subpath env expansion from the pod if subpath feature is disabled and the old spec did not specify subpath env expansion VisitContainers(podSpec, AllContainers, func(c *api.Container, containerType ContainerType) bool { for i := range c.VolumeMounts { c.VolumeMounts[i].SubPathExpr = \"\""} {"_id":"doc-en-kubernetes-021f53738c2b4ca56970222e0a3c72d68689504e7dbac9188a359ffba5b32a2b","title":"","text":"} t.Run(fmt.Sprintf(\"feature enabled=%v, old pod %v, new pod %v\", enabled, oldPodInfo.description, newPodInfo.description), func(t *testing.T) { defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.VolumeSubpathEnvExpansion, enabled)() var oldPodSpec *api.PodSpec if oldPod != nil {"} {"_id":"doc-en-kubernetes-452a706cf53cc356105437790e5016c3d926759c016b802b4f1ebe760dd770f7","title":"","text":"} func TestValidateSubpathMutuallyExclusive(t *testing.T) { // Enable feature VolumeSubpathEnvExpansion and 
VolumeSubpath defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.VolumeSubpathEnvExpansion, true)() // Enable feature VolumeSubpath defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.VolumeSubpath, true)() volumes := []core.Volume{"} {"_id":"doc-en-kubernetes-cf2ba9a1038c82b0b72a2ac59a6f288d78e28e5a1c8c5462d890dee115dbe1ec","title":"","text":"} func TestValidateDisabledSubpathExpr(t *testing.T) { // Enable feature VolumeSubpathEnvExpansion defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.VolumeSubpathEnvExpansion, true)() volumes := []core.Volume{ {Name: \"abc\", VolumeSource: core.VolumeSource{PersistentVolumeClaim: &core.PersistentVolumeClaimVolumeSource{ClaimName: \"testclaim1\"}}},"} {"_id":"doc-en-kubernetes-17dfa8b63183135c62429b65a3ff62c93a116978ac68f015b9192b8735715001","title":"","text":"// while making decisions. BalanceAttachedNodeVolumes featuregate.Feature = \"BalanceAttachedNodeVolumes\" // owner: @kevtaylor // alpha: v1.14 // beta: v1.15 // ga: v1.17 // // Allow subpath environment variable substitution // Only applicable if the VolumeSubpath feature is also enabled VolumeSubpathEnvExpansion featuregate.Feature = \"VolumeSubpathEnvExpansion\" // owner: @vladimirvivien // alpha: v1.11 // beta: v1.14"} {"_id":"doc-en-kubernetes-bbd8cb150cf000f93af2cb1d30f843b55174d1adbdd850d35819aa46d1236c09","title":"","text":"VolumeSubpath: {Default: true, PreRelease: featuregate.GA}, ConfigurableFSGroupPolicy: {Default: false, PreRelease: featuregate.Alpha}, BalanceAttachedNodeVolumes: {Default: false, PreRelease: featuregate.Alpha}, VolumeSubpathEnvExpansion: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.19, CSIBlockVolume: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.20 CSIInlineVolume: {Default: true, PreRelease: featuregate.Beta}, RuntimeClass: {Default: true, 
PreRelease: featuregate.Beta},"} {"_id":"doc-en-kubernetes-113aeead39e8af4504fecb1c2529c94203bfbe9d643650eedefa712e4e83d1f5","title":"","text":"return nil, cleanupAction, fmt.Errorf(\"volume subpaths are disabled\") } if !utilfeature.DefaultFeatureGate.Enabled(features.VolumeSubpathEnvExpansion) { return nil, cleanupAction, fmt.Errorf(\"volume subpath expansion is disabled\") } subPath, err = kubecontainer.ExpandContainerVolumeMounts(mount, expandEnvs) if err != nil {"} {"_id":"doc-en-kubernetes-5154883151e32e437f6fdfde58697009998e3f68950f68a11609cb30ebceccdc","title":"","text":"var ( versionFlag = Version(versionFlagName, VersionFalse, \"Print version information and quit\") programName = \"Kubernetes\" ) // AddFlags registers this package's flags on arbitrary FlagSets, such that they point to the"} {"_id":"doc-en-kubernetes-96b3b8aa8f19305a76ed9386fe361906db382f7cf14d9bf7390bea79042b4793","title":"","text":"fmt.Printf(\"%#vn\", version.Get()) os.Exit(0) } else if *versionFlag == VersionTrue { fmt.Printf(\"Kubernetes %sn\", version.Get()) fmt.Printf(\"%s %sn\", programName, version.Get()) os.Exit(0) } }"} {"_id":"doc-en-kubernetes-258d6152845ba33e836c22f06d734ee3fc8cd4e62ccee8fa8b5fc811993015f5","title":"","text":"\"syscall\" \"os\" \"time\" v1 \"k8s.io/api/core/v1\" utilfeature \"k8s.io/apiserver/pkg/util/feature\""} {"_id":"doc-en-kubernetes-3087642969987a4b3ac6acee5bdca1719265f718657ea273a3d111c0982977ba","title":"","text":"fsGroupPolicyEnabled := utilfeature.DefaultFeatureGate.Enabled(features.ConfigurableFSGroupPolicy) klog.Warningf(\"Setting volume ownership for %s and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699\", mounter.GetPath()) timer := time.AfterFunc(30*time.Second, func() { klog.Warningf(\"Setting volume ownership for %s and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699\", mounter.GetPath()) }) defer timer.Stop() // This code exists for legacy purposes, so as old behaviour is entirely preserved when feature gate is disabled // TODO: remove this when ConfigurableFSGroupPolicy turns GA."} {"_id":"doc-en-kubernetes-6af6c8b320510f096283b30fecd3040a7522030eaaf7d63f61cc56ac7c03e528","title":"","text":"// NOTE: Adding suffix 'i' as result should be comparable with a medium size. // pagesize mount option is specified without a suffix, // e.g. pagesize=2M or pagesize=1024M for x86 CPUs pageSize, err := resource.ParseQuantity(strings.TrimPrefix(opt, prefix) + \"i\") trimmedOpt := strings.TrimPrefix(opt, prefix) if !strings.HasSuffix(trimmedOpt, \"i\") { trimmedOpt = trimmedOpt + \"i\" } pageSize, err := resource.ParseQuantity(trimmedOpt) if err != nil { return nil, fmt.Errorf(\"error getting page size from '%s' mount option: %v\", opt, err) }"} {"_id":"doc-en-kubernetes-727324fd90255ab3a214966b71e41f96d2d25f58ce6c6c858b33b237f9a87db1","title":"","text":"Opts: []string{\"rw\", \"relatime\", \"pagesize=2M\"}, }, { Device: \"/dev/hugepages\", Type: \"hugetlbfs\", Path: \"/mnt/hugepages-2Mi\", Opts: []string{\"rw\", \"relatime\", \"pagesize=2Mi\"}, }, { Device: \"sysfs\", Type: \"sysfs\", Path: \"/sys\","} {"_id":"doc-en-kubernetes-ee26942c125e2e577fea779aea767c1dbe3402fa6ebbc440ec1198c2d9c6fdd8","title":"","text":"\"//staging/src/k8s.io/apimachinery/pkg/types:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/sets:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/audit:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/authentication/user:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/endpoints/request:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/features:go_default_library\", \"//staging/src/k8s.io/apiserver/pkg/util/feature:go_default_library\","} 
{"_id":"doc-en-kubernetes-fd07735b8e180817fb64a159cf6c33b71e64da17393dc78575f40b0ab4a0f6b0","title":"","text":"\"k8s.io/apimachinery/pkg/types\" utilsets \"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apiserver/pkg/audit\" \"k8s.io/apiserver/pkg/authentication/user\" \"k8s.io/apiserver/pkg/endpoints/request\" \"k8s.io/apiserver/pkg/features\" utilfeature \"k8s.io/apiserver/pkg/util/feature\""} {"_id":"doc-en-kubernetes-95074a2521f332feaa0089cc80ca032a405d42a8727b880f104623b2cd4f6ba6","title":"","text":"}, []string{\"verb\", \"group\", \"version\", \"resource\", \"subresource\", \"scope\", \"component\", \"code\"}, ) apiSelfRequestCounter = compbasemetrics.NewCounterVec( &compbasemetrics.CounterOpts{ Name: \"apiserver_selfrequest_total\", Help: \"Counter of apiserver self-requests broken out for each verb, API resource and subresource.\", StabilityLevel: compbasemetrics.ALPHA, }, []string{\"verb\", \"resource\", \"subresource\"}, ) kubectlExeRegexp = regexp.MustCompile(`^.*((?i:kubectl.exe))`) metrics = []resettableCollector{"} {"_id":"doc-en-kubernetes-ff67ff88265b275efc52277b668bd4b7edc21b30da155870dcbb30ee3e1399a1","title":"","text":"currentInflightRequests, currentInqueueRequests, requestTerminationsTotal, apiSelfRequestCounter, } // these are the known (e.g. 
whitelisted/known) content types which we will report for"} {"_id":"doc-en-kubernetes-553bb1fea3112e943434c5ebced1e8badc845d8a13c5617d6288a8ef231a7165","title":"","text":"elapsedSeconds := elapsed.Seconds() cleanContentType := cleanContentType(contentType) requestCounter.WithLabelValues(reportedVerb, dryRun, group, version, resource, subresource, scope, component, cleanContentType, codeToString(httpCode)).Inc() // MonitorRequest happens after authentication, so we can trust the username given by the request info, ok := request.UserFrom(req.Context()) if ok && info.GetName() == user.APIServerUser { apiSelfRequestCounter.WithLabelValues(reportedVerb, resource, subresource).Inc() } if deprecated { deprecatedRequestGauge.WithLabelValues(group, version, resource, subresource, removedRelease).Set(1) audit.AddAuditAnnotation(req.Context(), deprecatedAnnotationKey, \"true\")"} {"_id":"doc-en-kubernetes-38f83b88b1f9d1b3a21fcfcc972cd64a3d672e8c36ef9f40fa2c0ffaa0ddb4bd","title":"","text":"buildBackOffDuration = time.Minute syncLoopFrequency = 10 * time.Second maxBackOffTolerance = time.Duration(1.3 * float64(kubelet.MaxContainerBackOff)) podRetryPeriod = 1 * time.Second podRetryTimeout = 1 * time.Minute ) // testHostIP tests that a pod gets a host IP"} {"_id":"doc-en-kubernetes-11e064269755632799c90cdf5078418dc624d48638deb05bc93ae7204be98867","title":"","text":"validatePodReadiness(false) }) ginkgo.It(\"should delete a collection of pods\", func() { podTestNames := []string{\"test-pod-1\", \"test-pod-2\", \"test-pod-3\"} ginkgo.By(\"Create set of pods\") // create a set of pods in test namespace for _, podTestName := range podTestNames { _, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(context.TODO(), &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: podTestName, Labels: map[string]string{ \"type\": \"Testing\"}, }, Spec: v1.PodSpec{ Containers: []v1.Container{{ Image: imageutils.GetE2EImage(imageutils.Agnhost), Name: \"token-test\", }}, RestartPolicy: 
v1.RestartPolicyNever, }}, metav1.CreateOptions{}) framework.ExpectNoError(err, \"failed to create pod\") framework.Logf(\"created %v\", podTestName) } // wait as required for all 3 pods to be found ginkgo.By(\"waiting for all 3 pods to be located\") err := wait.PollImmediate(podRetryPeriod, podRetryTimeout, checkPodListQuantity(f, \"type=Testing\", 3)) framework.ExpectNoError(err, \"3 pods not found\") // delete Collection of pods with a label in the current namespace err = f.ClientSet.CoreV1().Pods(f.Namespace.Name).DeleteCollection(context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{ LabelSelector: \"type=Testing\"}) framework.ExpectNoError(err, \"failed to delete collection of pods\") // wait for all pods to be deleted ginkgo.By(\"waiting for all pods to be deleted\") err = wait.PollImmediate(podRetryPeriod, podRetryTimeout, checkPodListQuantity(f, \"type=Testing\", 0)) framework.ExpectNoError(err, \"found a pod(s)\") }) }) func checkPodListQuantity(f *framework.Framework, label string, quantity int) func() (bool, error) { return func() (bool, error) { var err error list, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).List(context.TODO(), metav1.ListOptions{ LabelSelector: label}) if err != nil { return false, err } if len(list.Items) != quantity { framework.Logf(\"Pod quantity %d is different from expected quantity %d\", len(list.Items), quantity) return false, err } return true, nil } } "} {"_id":"doc-en-kubernetes-a82d9b9a4d81e30cafa0437ea5a95d5f70da8e3eddb40246ee9d5722b110fb7c","title":"","text":"and FOOSERVICE_PORT_8765_TCP_ADDR that are populated with proper values. release: v1.9 file: test/e2e/common/pods.go - testname: Pods, delete a collection codename: '[k8s.io] Pods should delete a collection of pods [Conformance]' description: A set of pods is created with a label selector which MUST be found when listed. The set of pods is deleted and MUST NOT show up when listed by its label selector. 
release: v1.19 file: test/e2e/common/pods.go - testname: Pods, assigned hostip codename: '[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]' description: Create a Pod. Pod status MUST return successfully and contains a valid"} {"_id":"doc-en-kubernetes-32482c9fdbf4c1d2ed232b519d570fb4579d161e244e9dd0a180b062eb313f94","title":"","text":"}) ginkgo.It(\"should delete a collection of pods\", func() { /* Release : v1.19 Testname: Pods, delete a collection Description: A set of pods is created with a label selector which MUST be found when listed. The set of pods is deleted and MUST NOT show up when listed by its label selector. */ framework.ConformanceIt(\"should delete a collection of pods\", func() { podTestNames := []string{\"test-pod-1\", \"test-pod-2\", \"test-pod-3\"} ginkgo.By(\"Create set of pods\")"} {"_id":"doc-en-kubernetes-736105b5935b462d42d1eb28b53af7fff1f0674ac283240d976d1fcd91aeb25a","title":"","text":"errLeaseFailed = \"AcquireDiskLeaseFailed\" errLeaseIDMissing = \"LeaseIdMissing\" errContainerNotFound = \"ContainerNotFound\" errDiskBlobNotFound = \"DiskBlobNotFound\" errStatusCode400 = \"statuscode=400\" errInvalidParameter = `code=\"invalidparameter\"` errTargetInstanceIds = `target=\"instanceids\"` sourceSnapshot = \"snapshot\" sourceVolume = \"volume\""} {"_id":"doc-en-kubernetes-68f09be650347e4603cd29896f4bc1ec73e17d525e7331267adc1d1483ebf3ed","title":"","text":"c.diskAttachDetachMap.Delete(strings.ToLower(diskURI)) c.vmLockMap.UnlockEntry(strings.ToLower(string(nodeName))) if err != nil && retry.IsErrorRetriable(err) && c.cloud.CloudProviderBackoff { klog.V(2).Infof(\"azureDisk - update backing off: detach disk(%s, %s), err: %v\", diskName, diskURI, err) retryErr := kwait.ExponentialBackoff(c.cloud.RequestBackoff(), func() (bool, error) { c.vmLockMap.LockEntry(strings.ToLower(string(nodeName))) c.diskAttachDetachMap.Store(strings.ToLower(diskURI), \"detaching\") err := vmset.DetachDisk(diskName, diskURI, nodeName) 
c.diskAttachDetachMap.Delete(strings.ToLower(diskURI)) c.vmLockMap.UnlockEntry(strings.ToLower(string(nodeName))) retriable := false if err != nil && retry.IsErrorRetriable(err) { retriable = true if err != nil { if isInstanceNotFoundError(err) { // if host doesn't exist, no need to detach klog.Warningf(\"azureDisk - got InstanceNotFoundError(%v), DetachDisk(%s) will assume disk is already detached\", err, diskURI) return nil } if retry.IsErrorRetriable(err) && c.cloud.CloudProviderBackoff { klog.Warningf(\"azureDisk - update backing off: detach disk(%s, %s), err: %v\", diskName, diskURI, err) retryErr := kwait.ExponentialBackoff(c.cloud.RequestBackoff(), func() (bool, error) { c.vmLockMap.LockEntry(strings.ToLower(string(nodeName))) c.diskAttachDetachMap.Store(strings.ToLower(diskURI), \"detaching\") err := vmset.DetachDisk(diskName, diskURI, nodeName) c.diskAttachDetachMap.Delete(strings.ToLower(diskURI)) c.vmLockMap.UnlockEntry(strings.ToLower(string(nodeName))) retriable := false if err != nil && retry.IsErrorRetriable(err) { retriable = true } return !retriable, err }) if retryErr != nil { err = retryErr klog.V(2).Infof(\"azureDisk - update abort backoff: detach disk(%s, %s), err: %v\", diskName, diskURI, err) } return !retriable, err }) if retryErr != nil { err = retryErr klog.V(2).Infof(\"azureDisk - update abort backoff: detach disk(%s, %s), err: %v\", diskName, diskURI, err) } } if err != nil {"} {"_id":"doc-en-kubernetes-b53b1fdb18d26cd8bcdf1c2db27c8ddbcf9804464af6174b9c5f084313aac6b6","title":"","text":"SourceResourceID: &sourceResourceID, }, nil } func isInstanceNotFoundError(err error) bool { errMsg := strings.ToLower(err.Error()) return strings.Contains(errMsg, errStatusCode400) && strings.Contains(errMsg, errInvalidParameter) && strings.Contains(errMsg, errTargetInstanceIds) } "} {"_id":"doc-en-kubernetes-7e6b4ed613ff168077390325b44e6e42d40657af0f39b473a1cd92a971200a42","title":"","text":"assert.Equal(t, 1, len(filteredDisks)) assert.Equal(t, 
newDiskName, *filteredDisks[0].Name) } func TestIsInstanceNotFoundError(t *testing.T) { testCases := []struct { errMsg string expectedResult bool }{ { errMsg: \"\", expectedResult: false, }, { errMsg: \"other error\", expectedResult: false, }, { errMsg: \"not an active Virtual Machine scale set vm\", expectedResult: false, }, { errMsg: `compute.VirtualMachineScaleSetVMsClient#Update: Failure sending request: StatusCode=400 -- Original Error: Code=\"InvalidParameter\" Message=\"The provided instanceId 1181 is not an active Virtual Machine Scale Set VM instanceId.\" Target=\"instanceIds\"`, expectedResult: true, }, } for i, test := range testCases { result := isInstanceNotFoundError(fmt.Errorf(test.errMsg)) assert.Equal(t, test.expectedResult, result, \"TestCase[%d]\", i, result) } } "} {"_id":"doc-en-kubernetes-30ecc1d86dadb150301cf919fb406558b15997be4ad6fa817f89898093052cc2","title":"","text":"echo \"Annotating node objects with ${annotation_size_bytes} byte label\" label=$( (< /dev/urandom tr -dc 'a-zA-Z0-9' | fold -w \"$annotation_size_bytes\"; true) | head -n 1) \"${KUBECTL}\" --kubeconfig=\"${LOCAL_KUBECONFIG}\" get nodes -o name | xargs -n10 -P100 -r -I% \"${KUBECTL}\" --kubeconfig=\"${LOCAL_KUBECONFIG}\" annotate --overwrite % label=\"$label\" | xargs -n10 -P100 -r -I% \"${KUBECTL}\" --kubeconfig=\"${LOCAL_KUBECONFIG}\" annotate --overwrite % label=\"$label\" > /dev/null echo \"Annotating node objects completed\" }"} {"_id":"doc-en-kubernetes-04c704fcc85d97565ce2b708453222440d0562453423d6c3dee2e0fefd3bd3e1","title":"","text":"return len(podSpec.SchedulingGates) != 0 } // SeccompAnnotationForField takes a pod seccomp profile field and returns the // converted annotation value func SeccompAnnotationForField(field *api.SeccompProfile) string { // If only seccomp fields are specified, add the corresponding annotations. 
// This ensures that the fields are enforced even if the node version // trails the API version switch field.Type { case api.SeccompProfileTypeUnconfined: return v1.SeccompProfileNameUnconfined case api.SeccompProfileTypeRuntimeDefault: return v1.SeccompProfileRuntimeDefault case api.SeccompProfileTypeLocalhost: if field.LocalhostProfile != nil { return v1.SeccompLocalhostProfileNamePrefix + *field.LocalhostProfile } } // we can only reach this code path if the LocalhostProfile is nil but the // provided field type is SeccompProfileTypeLocalhost or if an unrecognized // type is specified return \"\" } func hasInvalidLabelValueInAffinitySelector(spec *api.PodSpec) bool { if spec.Affinity != nil { if spec.Affinity.PodAffinity != nil {"} {"_id":"doc-en-kubernetes-fbd40fb5d560fda1c423093bb155f5cad783b40d500c723a0f53dcedefbfa282","title":"","text":"// use of pod seccomp annotation without accompanying field if podSpec.SecurityContext == nil || podSpec.SecurityContext.SeccompProfile == nil { if _, exists := meta.Annotations[api.SeccompPodAnnotationKey]; exists { warnings = append(warnings, fmt.Sprintf(`%s: deprecated since v1.19, non-functional in a future release; use the \"seccompProfile\" field instead`, fieldPath.Child(\"metadata\", \"annotations\").Key(api.SeccompPodAnnotationKey))) warnings = append(warnings, fmt.Sprintf(`%s: non-functional in v1.27+; use the \"seccompProfile\" field instead`, fieldPath.Child(\"metadata\", \"annotations\").Key(api.SeccompPodAnnotationKey))) } }"} {"_id":"doc-en-kubernetes-0352be6d2bffb2bdb4d8112abfb7f483fdd5aa61ed2eff5fb72b0ea91a8f7f70","title":"","text":"// use of container seccomp annotation without accompanying field if c.SecurityContext == nil || c.SecurityContext.SeccompProfile == nil { if _, exists := meta.Annotations[api.SeccompContainerAnnotationKeyPrefix+c.Name]; exists { warnings = append(warnings, fmt.Sprintf(`%s: deprecated since v1.19, non-functional in a future release; use the \"seccompProfile\" field instead`, 
fieldPath.Child(\"metadata\", \"annotations\").Key(api.SeccompContainerAnnotationKeyPrefix+c.Name))) warnings = append(warnings, fmt.Sprintf(`%s: non-functional in v1.27+; use the \"seccompProfile\" field instead`, fieldPath.Child(\"metadata\", \"annotations\").Key(api.SeccompContainerAnnotationKeyPrefix+c.Name))) } }"} {"_id":"doc-en-kubernetes-a3a6964f0146fcef733def508a848f3a42f28e4e30ca740a4a47dfe75261eda2","title":"","text":"}, expected: []string{ `metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the \"priorityClassName\" field instead`, `metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: deprecated since v1.19, non-functional in a future release; use the \"seccompProfile\" field instead`, `metadata.annotations[container.seccomp.security.alpha.kubernetes.io/foo]: deprecated since v1.19, non-functional in a future release; use the \"seccompProfile\" field instead`, `metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: non-functional in v1.27+; use the \"seccompProfile\" field instead`, `metadata.annotations[container.seccomp.security.alpha.kubernetes.io/foo]: non-functional in v1.27+; use the \"seccompProfile\" field instead`, `metadata.annotations[security.alpha.kubernetes.io/sysctls]: non-functional in v1.11+; use the \"sysctls\" field instead`, `metadata.annotations[security.alpha.kubernetes.io/unsafe-sysctls]: non-functional in v1.11+; use the \"sysctls\" field instead`, },"} {"_id":"doc-en-kubernetes-ab98912cc1318d2b9548464a4612af1dcf31ba4fce38adec6e6ebe27a6e3bd56","title":"","text":"\"strings\" \"time\" v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/fields\""} {"_id":"doc-en-kubernetes-29a2d9ca3da7616e4413bf81f08ba21d54fc0cb682fd68616ee22d630a5568a3","title":"","text":"podutil.DropDisabledPodFields(pod, nil) applySeccompVersionSkew(pod) applyWaitingForSchedulingGatesCondition(pod) }"} 
{"_id":"doc-en-kubernetes-bd26f052962fbcaa40f5b2d20fa39fa9f8432a8fd74214655c1fad6157c195bf","title":"","text":"Message: \"Scheduling is blocked due to non-empty scheduling gates\", }) } // applySeccompVersionSkew implements the version skew behavior described in: // https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/135-seccomp#version-skew-strategy // Note that we dropped copying the field to annotation synchronization in // v1.25 with the functional removal of the annotations. func applySeccompVersionSkew(pod *api.Pod) { // get possible annotation and field annotation, hasAnnotation := pod.Annotations[v1.SeccompPodAnnotationKey] hasField := false if pod.Spec.SecurityContext != nil && pod.Spec.SecurityContext.SeccompProfile != nil { hasField = true } // sync field and annotation if hasAnnotation && !hasField { newField := seccompFieldForAnnotation(annotation) if newField != nil { if pod.Spec.SecurityContext == nil { pod.Spec.SecurityContext = &api.PodSecurityContext{} } pod.Spec.SecurityContext.SeccompProfile = newField } } // Handle the containers of the pod podutil.VisitContainers(&pod.Spec, podutil.AllFeatureEnabledContainers(), func(ctr *api.Container, _ podutil.ContainerType) bool { // get possible annotation and field key := api.SeccompContainerAnnotationKeyPrefix + ctr.Name annotation, hasAnnotation := pod.Annotations[key] hasField := false if ctr.SecurityContext != nil && ctr.SecurityContext.SeccompProfile != nil { hasField = true } // sync field and annotation if hasAnnotation && !hasField { newField := seccompFieldForAnnotation(annotation) if newField != nil { if ctr.SecurityContext == nil { ctr.SecurityContext = &api.SecurityContext{} } ctr.SecurityContext.SeccompProfile = newField } } return true }) } // seccompFieldForAnnotation takes a pod annotation and returns the converted // seccomp profile field. 
func seccompFieldForAnnotation(annotation string) *api.SeccompProfile { // If only seccomp annotations are specified, copy the values into the // corresponding fields. This ensures that existing applications continue // to enforce seccomp, and prevents the kubelet from needing to resolve // annotations & fields. if annotation == v1.SeccompProfileNameUnconfined { return &api.SeccompProfile{Type: api.SeccompProfileTypeUnconfined} } if annotation == api.SeccompProfileRuntimeDefault || annotation == api.DeprecatedSeccompProfileDockerDefault { return &api.SeccompProfile{Type: api.SeccompProfileTypeRuntimeDefault} } if strings.HasPrefix(annotation, v1.SeccompLocalhostProfileNamePrefix) { localhostProfile := strings.TrimPrefix(annotation, v1.SeccompLocalhostProfileNamePrefix) if localhostProfile != \"\" { return &api.SeccompProfile{ Type: api.SeccompProfileTypeLocalhost, LocalhostProfile: &localhostProfile, } } } // we can only reach this code path if the localhostProfile name has a zero // length or if the annotation has an unrecognized value return nil } "} {"_id":"doc-en-kubernetes-f6c3eefda0d3112d06b4e1e5d0fd609edea03743f42965e42e2ad93ff9b91618","title":"","text":"\"github.com/google/go-cmp/cmp\" \"github.com/google/go-cmp/cmp/cmpopts\" \"github.com/stretchr/testify/require\" v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" \"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\""} {"_id":"doc-en-kubernetes-8661db54ce9d7ae94849eafec40b3e4ea1cc152437906abcae9be39f192cb2e3","title":"","text":"} } func TestApplySeccompVersionSkew(t *testing.T) { const containerName = \"container\" testProfile := \"test\" for _, test := range []struct { description string pod *api.Pod validation func(*testing.T, *api.Pod) }{ { description: \"Security context nil\", pod: &api.Pod{}, validation: func(t *testing.T, pod *api.Pod) { require.NotNil(t, pod) }, }, { description: \"Security context not nil\", pod: &api.Pod{ Spec: 
api.PodSpec{SecurityContext: &api.PodSecurityContext{}}, }, validation: func(t *testing.T, pod *api.Pod) { require.NotNil(t, pod) }, }, { description: \"Field set and no annotation present\", pod: &api.Pod{ Spec: api.PodSpec{ SecurityContext: &api.PodSecurityContext{ SeccompProfile: &api.SeccompProfile{ Type: api.SeccompProfileTypeUnconfined, }, }, }, }, validation: func(t *testing.T, pod *api.Pod) { require.Len(t, pod.Annotations, 0) }, }, { description: \"Annotation 'unconfined' and no field present\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompPodAnnotationKey: v1.SeccompProfileNameUnconfined, }, }, Spec: api.PodSpec{}, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeUnconfined, pod.Spec.SecurityContext.SeccompProfile.Type) require.Nil(t, pod.Spec.SecurityContext.SeccompProfile.LocalhostProfile) }, }, { description: \"Annotation 'runtime/default' and no field present\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompPodAnnotationKey: v1.SeccompProfileRuntimeDefault, }, }, Spec: api.PodSpec{SecurityContext: &api.PodSecurityContext{}}, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeRuntimeDefault, pod.Spec.SecurityContext.SeccompProfile.Type) require.Nil(t, pod.Spec.SecurityContext.SeccompProfile.LocalhostProfile) }, }, { description: \"Annotation 'docker/default' and no field present\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompPodAnnotationKey: v1.DeprecatedSeccompProfileDockerDefault, }, }, Spec: api.PodSpec{SecurityContext: &api.PodSecurityContext{}}, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeRuntimeDefault, pod.Spec.SecurityContext.SeccompProfile.Type) require.Nil(t, pod.Spec.SecurityContext.SeccompProfile.LocalhostProfile) }, }, { description: \"Annotation 'localhost/test' and no field present\", pod: 
&api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompPodAnnotationKey: v1.SeccompLocalhostProfileNamePrefix + testProfile, }, }, Spec: api.PodSpec{SecurityContext: &api.PodSecurityContext{}}, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeLocalhost, pod.Spec.SecurityContext.SeccompProfile.Type) require.Equal(t, testProfile, *pod.Spec.SecurityContext.SeccompProfile.LocalhostProfile) }, }, { description: \"Annotation 'localhost/' has zero length\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompPodAnnotationKey: v1.SeccompLocalhostProfileNamePrefix, }, }, Spec: api.PodSpec{SecurityContext: &api.PodSecurityContext{}}, }, validation: func(t *testing.T, pod *api.Pod) { require.Nil(t, pod.Spec.SecurityContext.SeccompProfile) }, }, { description: \"Security context nil (container)\", pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{{}}, }, }, validation: func(t *testing.T, pod *api.Pod) { require.NotNil(t, pod) }, }, { description: \"Security context not nil (container)\", pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{{ SecurityContext: &api.SecurityContext{}, }}, }, }, validation: func(t *testing.T, pod *api.Pod) { require.NotNil(t, pod) }, }, { description: \"Field set and no annotation present (container)\", pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{ { Name: containerName, SecurityContext: &api.SecurityContext{ SeccompProfile: &api.SeccompProfile{ Type: api.SeccompProfileTypeUnconfined, }, }, }, }, }, }, validation: func(t *testing.T, pod *api.Pod) { require.Len(t, pod.Annotations, 0) }, }, { description: \"Multiple containers with fields (container)\", pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{ { Name: containerName + \"1\", SecurityContext: &api.SecurityContext{ SeccompProfile: &api.SeccompProfile{ Type: api.SeccompProfileTypeUnconfined, }, }, }, { Name: containerName + \"2\", }, { Name: 
containerName + \"3\", SecurityContext: &api.SecurityContext{ SeccompProfile: &api.SeccompProfile{ Type: api.SeccompProfileTypeRuntimeDefault, }, }, }, }, }, }, validation: func(t *testing.T, pod *api.Pod) { require.Len(t, pod.Annotations, 0) }, }, { description: \"Annotation 'unconfined' and no field present (container)\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompContainerAnnotationKeyPrefix + containerName: v1.SeccompProfileNameUnconfined, }, }, Spec: api.PodSpec{ Containers: []api.Container{{ Name: containerName, }}, }, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeUnconfined, pod.Spec.Containers[0].SecurityContext.SeccompProfile.Type) require.Nil(t, pod.Spec.Containers[0].SecurityContext.SeccompProfile.LocalhostProfile) }, }, { description: \"Annotation 'runtime/default' and no field present (container)\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompContainerAnnotationKeyPrefix + containerName: v1.SeccompProfileRuntimeDefault, }, }, Spec: api.PodSpec{ Containers: []api.Container{{ Name: containerName, SecurityContext: &api.SecurityContext{}, }}, }, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeRuntimeDefault, pod.Spec.Containers[0].SecurityContext.SeccompProfile.Type) require.Nil(t, pod.Spec.Containers[0].SecurityContext.SeccompProfile.LocalhostProfile) }, }, { description: \"Annotation 'docker/default' and no field present (container)\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompContainerAnnotationKeyPrefix + containerName: v1.DeprecatedSeccompProfileDockerDefault, }, }, Spec: api.PodSpec{ Containers: []api.Container{{ Name: containerName, SecurityContext: &api.SecurityContext{}, }}, }, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeRuntimeDefault, 
pod.Spec.Containers[0].SecurityContext.SeccompProfile.Type) require.Nil(t, pod.Spec.Containers[0].SecurityContext.SeccompProfile.LocalhostProfile) }, }, { description: \"Multiple containers by annotations (container)\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompContainerAnnotationKeyPrefix + containerName + \"1\": v1.SeccompLocalhostProfileNamePrefix + testProfile, v1.SeccompContainerAnnotationKeyPrefix + containerName + \"3\": v1.SeccompProfileRuntimeDefault, }, }, Spec: api.PodSpec{ Containers: []api.Container{ {Name: containerName + \"1\"}, {Name: containerName + \"2\"}, {Name: containerName + \"3\"}, }, }, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeLocalhost, pod.Spec.Containers[0].SecurityContext.SeccompProfile.Type) require.Equal(t, testProfile, *pod.Spec.Containers[0].SecurityContext.SeccompProfile.LocalhostProfile) require.Equal(t, api.SeccompProfileTypeRuntimeDefault, pod.Spec.Containers[2].SecurityContext.SeccompProfile.Type) }, }, { description: \"Annotation 'localhost/test' and no field present (container)\", pod: &api.Pod{ ObjectMeta: metav1.ObjectMeta{ Annotations: map[string]string{ v1.SeccompContainerAnnotationKeyPrefix + containerName: v1.SeccompLocalhostProfileNamePrefix + testProfile, }, }, Spec: api.PodSpec{ Containers: []api.Container{{ Name: containerName, SecurityContext: &api.SecurityContext{}, }}, }, }, validation: func(t *testing.T, pod *api.Pod) { require.Equal(t, api.SeccompProfileTypeLocalhost, pod.Spec.Containers[0].SecurityContext.SeccompProfile.Type) require.Equal(t, testProfile, *pod.Spec.Containers[0].SecurityContext.SeccompProfile.LocalhostProfile) }, }, } { output := &api.Pod{ ObjectMeta: metav1.ObjectMeta{Annotations: map[string]string{}}, } for i, ctr := range test.pod.Spec.Containers { output.Spec.Containers = append(output.Spec.Containers, api.Container{}) if ctr.SecurityContext != nil && ctr.SecurityContext.SeccompProfile != nil { 
output.Spec.Containers[i].SecurityContext = &api.SecurityContext{ SeccompProfile: &api.SeccompProfile{ Type: api.SeccompProfileType(ctr.SecurityContext.SeccompProfile.Type), LocalhostProfile: ctr.SecurityContext.SeccompProfile.LocalhostProfile, }, } } } applySeccompVersionSkew(test.pod) test.validation(t, test.pod) } } func newPodWithHugePageValue(resourceName api.ResourceName, value resource.Quantity) *api.Pod { return &api.Pod{"} {"_id":"doc-en-kubernetes-af532b0662b3fb32cd8400ce4524f4bfbf5ac5caf8455958531f029eddce26bd","title":"","text":"\"//staging/src/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/net:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/runtime:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/sets:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/wait:go_default_library\", \"//staging/src/k8s.io/client-go/kubernetes/typed/core/v1:go_default_library\", \"//staging/src/k8s.io/client-go/tools/record:go_default_library\","} {"_id":"doc-en-kubernetes-f0ca2c87c1f9890ea3d6b385e3be0c7ea12642650ac0fb57e159d06c3d6b97bc","title":"","text":"metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/util/net\" \"k8s.io/apimachinery/pkg/util/runtime\" \"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apimachinery/pkg/util/wait\" corev1client \"k8s.io/client-go/kubernetes/typed/core/v1\" \"k8s.io/client-go/tools/record\""} {"_id":"doc-en-kubernetes-20644f7f031ef6c8714dc2df2cab74c03aea4af1b81f26171d757f8b2fc3c201","title":"","text":"return nil } // collectServiceNodePorts returns nodePorts specified in the Service. // Please note that: // 1. same nodePort with *same* protocol will be duplicated as it is // 2. 
same nodePort with *different* protocol will be deduplicated func collectServiceNodePorts(service *corev1.Service) []int { servicePorts := []int{} for i := range service.Spec.Ports { servicePort := &service.Spec.Ports[i] if servicePort.NodePort != 0 { servicePorts = append(servicePorts, int(servicePort.NodePort)) var servicePorts []int // map from nodePort to set of protocols seen := make(map[int]sets.String) for _, port := range service.Spec.Ports { nodePort := int(port.NodePort) if nodePort == 0 { continue } proto := string(port.Protocol) s := seen[nodePort] if s == nil { // have not seen this nodePort before s = sets.NewString(proto) servicePorts = append(servicePorts, nodePort) } else if s.Has(proto) { // same nodePort with same protocol servicePorts = append(servicePorts, nodePort) } else { // same nodePort with different protocol s.Insert(proto) } seen[nodePort] = s } if service.Spec.HealthCheckNodePort != 0 { servicePorts = append(servicePorts, int(service.Spec.HealthCheckNodePort)) healthPort := int(service.Spec.HealthCheckNodePort) if healthPort != 0 { s := seen[healthPort] // TODO: is it safe to assume the protocol is always TCP? 
if s == nil || s.Has(string(corev1.ProtocolTCP)) { servicePorts = append(servicePorts, healthPort) } } return servicePorts"} {"_id":"doc-en-kubernetes-81608425dfcbd2e96c854a2c42df027809867c7435e640ade348ea730f341be9","title":"","text":"import ( \"fmt\" \"reflect\" \"sort\" \"strings\" \"testing\""} {"_id":"doc-en-kubernetes-04ca28e209619b51de2fc00ab3155bf930e55991a0c8a1b712ecaf344bf00a5a","title":"","text":"t.Errorf(\"unexpected portallocator state: %d free\", free) } } func TestCollectServiceNodePorts(t *testing.T) { tests := []struct { name string serviceSpec corev1.ServiceSpec expected []int }{ { name: \"no duplicated nodePorts\", serviceSpec: corev1.ServiceSpec{ Ports: []corev1.ServicePort{ {NodePort: 111, Protocol: corev1.ProtocolTCP}, {NodePort: 112, Protocol: corev1.ProtocolUDP}, {NodePort: 113, Protocol: corev1.ProtocolUDP}, }, }, expected: []int{111, 112, 113}, }, { name: \"duplicated nodePort with TCP protocol\", serviceSpec: corev1.ServiceSpec{ Ports: []corev1.ServicePort{ {NodePort: 111, Protocol: corev1.ProtocolTCP}, {NodePort: 111, Protocol: corev1.ProtocolTCP}, {NodePort: 112, Protocol: corev1.ProtocolUDP}, }, }, expected: []int{111, 111, 112}, }, { name: \"duplicated nodePort with UDP protocol\", serviceSpec: corev1.ServiceSpec{ Ports: []corev1.ServicePort{ {NodePort: 111, Protocol: corev1.ProtocolUDP}, {NodePort: 111, Protocol: corev1.ProtocolUDP}, {NodePort: 112, Protocol: corev1.ProtocolTCP}, }, }, expected: []int{111, 111, 112}, }, { name: \"duplicated nodePort with different protocol\", serviceSpec: corev1.ServiceSpec{ Ports: []corev1.ServicePort{ {NodePort: 111, Protocol: corev1.ProtocolTCP}, {NodePort: 112, Protocol: corev1.ProtocolTCP}, {NodePort: 111, Protocol: corev1.ProtocolUDP}, }, }, expected: []int{111, 112}, }, { name: \"no duplicated port(with health check port)\", serviceSpec: corev1.ServiceSpec{ Ports: []corev1.ServicePort{ {NodePort: 111, Protocol: corev1.ProtocolTCP}, {NodePort: 112, Protocol: corev1.ProtocolUDP}, }, 
HealthCheckNodePort: 113, }, expected: []int{111, 112, 113}, }, { name: \"nodePort has different protocol with duplicated health check port\", serviceSpec: corev1.ServiceSpec{ Ports: []corev1.ServicePort{ {NodePort: 111, Protocol: corev1.ProtocolUDP}, {NodePort: 112, Protocol: corev1.ProtocolTCP}, }, HealthCheckNodePort: 111, }, expected: []int{111, 112}, }, { name: \"nodePort has same protocol as duplicated health check port\", serviceSpec: corev1.ServiceSpec{ Ports: []corev1.ServicePort{ {NodePort: 111, Protocol: corev1.ProtocolUDP}, {NodePort: 112, Protocol: corev1.ProtocolTCP}, }, HealthCheckNodePort: 112, }, expected: []int{111, 112, 112}, }, } for _, tc := range tests { t.Run(tc.name, func(t *testing.T) { ports := collectServiceNodePorts(&corev1.Service{ ObjectMeta: metav1.ObjectMeta{Namespace: \"one\", Name: \"one\"}, Spec: tc.serviceSpec, }) sort.Ints(ports) if !reflect.DeepEqual(tc.expected, ports) { t.Fatalf(\"Invalid result\\nexpected: %v\\ngot: %v\", tc.expected, ports) } }) } } "} {"_id":"doc-en-kubernetes-9fbbb84effe798a2de84ca922d663adf1b33bdb6fa315bc0c822772b5f92020a","title":"","text":"// GetArticleForNoun returns the article needed for the given noun. func GetArticleForNoun(noun string, padding string) string { if noun[len(noun)-2:] != \"ss\" && noun[len(noun)-1:] == \"s\" { if !strings.HasSuffix(noun, \"ss\") && strings.HasSuffix(noun, \"s\") { // Plurals don't have an article. 
// Don't catch words like class return fmt.Sprintf(\"%v\", padding)"} {"_id":"doc-en-kubernetes-ee39e1a28d62a8c5ed373220fb8d0325c568da694b330437b8c69f95ce4b4f7c","title":"","text":"padding: \" \", want: \" a \", }, { noun: \"S\", padding: \" \", want: \" a \", }, { noun: \"O\", padding: \" \", want: \" an \", }, } for _, tt := range tests { if got := GetArticleForNoun(tt.noun, tt.padding); got != tt.want {"} {"_id":"doc-en-kubernetes-e208a5036af79ee7fb2a405a6fd5336cb6bc317c7941b358d6628504e8e9928a","title":"","text":"# Set default values with override if [[ \"${CONTAINER_RUNTIME}\" == \"docker\" ]]; then export CONTAINER_RUNTIME_ENDPOINT=${KUBE_CONTAINER_RUNTIME_ENDPOINT:-unix:///var/run/docker.sock} export CONTAINER_RUNTIME_ENDPOINT=${KUBE_CONTAINER_RUNTIME_ENDPOINT:-unix:///var/run/dockershim.sock} export CONTAINER_RUNTIME_NAME=${KUBE_CONTAINER_RUNTIME_NAME:-docker} export LOAD_IMAGE_COMMAND=${KUBE_LOAD_IMAGE_COMMAND:-} elif [[ \"${CONTAINER_RUNTIME}\" == \"containerd\" ]]; then"} {"_id":"doc-en-kubernetes-2359a283211f452be8e2b746458287b5643b2f421310d3df06d6ca6354ddf2f7","title":"","text":"err := cs.CoreV1().Pods(f.Namespace.Name).Delete(context.TODO(), podName, metav1.DeleteOptions{}) framework.ExpectNoError(err, \"failed to delete pod: %s in namespace: %s\", podName, f.Namespace.Name) }() ginkgo.By(\"dumping iptables rules on the node\") // wait until host port manager syncs rules cmd = \"sudo iptables-save\" framework.Logf(\"Executing cmd %q on node %v\", cmd, node.Name) result, err := hostExec.IssueCommandWithResult(cmd, node) if err != nil { framework.Failf(\"Interrogation of iptables rules failed on node %v\", node.Name) if framework.TestContext.ClusterIsIPv6() { cmd = \"sudo ip6tables-save\" } err = wait.PollImmediate(framework.Poll, framework.PollShortTimeout, func() (bool, error) { framework.Logf(\"Executing cmd %q on node %v\", cmd, node.Name) result, err := hostExec.IssueCommandWithResult(cmd, node) if err != nil { framework.Logf(\"Interrogation of 
iptables rules failed on node %v\", node.Name) return false, nil } ginkgo.By(\"checking that iptables contains the necessary iptables rules\") found := false for _, line := range strings.Split(result, \"\\n\") { if strings.Contains(line, \"-p sctp\") && strings.Contains(line, \"--dport 5060\") { found = true break for _, line := range strings.Split(result, \"\\n\") { if strings.Contains(line, \"-p sctp\") && strings.Contains(line, \"--dport 5060\") { return true, nil } } } if !found { framework.Logf(\"retrying ... no hostport sctp iptables rules found on node %v\", node.Name) return false, nil }) if err != nil { framework.Failf(\"iptables rules are not set for a pod with sctp hostport\") } ginkgo.By(\"validating sctp module is still not loaded\")"} {"_id":"doc-en-kubernetes-3da5748301900987f14d0c77a246e058d45eafe54714dc39372d290841d4a6f0","title":"","text":"err = e2enetwork.WaitForService(f.ClientSet, ns, serviceName, true, 5*time.Second, e2eservice.TestTimeout) framework.ExpectNoError(err, fmt.Sprintf(\"error while waiting for service:%s err: %v\", serviceName, err)) ginkgo.By(\"dumping iptables rules on a node\") hostExec := utils.NewHostExec(f) defer hostExec.Cleanup() node, err := e2enode.GetRandomReadySchedulableNode(cs) framework.ExpectNoError(err) cmd := \"sudo iptables-save\" framework.Logf(\"Executing cmd %q on node %v\", cmd, node.Name) result, err := hostExec.IssueCommandWithResult(cmd, node) if err != nil { framework.Failf(\"Interrogation of iptables rules failed on node %v\", node.Name) if framework.TestContext.ClusterIsIPv6() { cmd = \"sudo ip6tables-save\" } err = wait.PollImmediate(framework.Poll, e2eservice.KubeProxyLagTimeout, func() (bool, error) { framework.Logf(\"Executing cmd %q on node %v\", cmd, node.Name) result, err := hostExec.IssueCommandWithResult(cmd, node) if err != nil { framework.Logf(\"Interrogation of iptables rules failed on node %v\", node.Name) return false, nil } ginkgo.By(\"checking that iptables contains the necessary iptables 
rules\") kubeService := false for _, line := range strings.Split(result, \"n\") { if strings.Contains(line, \"-A KUBE-SERVICES\") && strings.Contains(line, \"-p sctp\") { kubeService = true break for _, line := range strings.Split(result, \"n\") { if strings.Contains(line, \"-A KUBE-SERVICES\") && strings.Contains(line, \"-p sctp\") { return true, nil } } } if !kubeService { framework.Logf(\"retrying ... no iptables rules found for service with sctp ports on node %v\", node.Name) return false, nil }) if err != nil { framework.Failf(\"iptables rules are not set for a clusterip service with sctp ports\") } ginkgo.By(\"validating sctp module is still not loaded\")"} {"_id":"doc-en-kubernetes-9e65032d5be8d1eb334d7d135d9d0bd3b466ac135631274abed97cccfa3bb82f","title":"","text":"# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE # INSTRUCTIONS AT https://kubernetes.io/security/ cjcullen joelsmith liggitt philips caesarxuchao deads2k lavalamp sttts tallclair "} {"_id":"doc-en-kubernetes-d20d3c4c7e590cfd60ac1f75c4b53673cd9bbca32e3294e6fb83435e05856d3b","title":"","text":"# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE # INSTRUCTIONS AT https://kubernetes.io/security/ cjcullen joelsmith liggitt philips cheftako deads2k lavalamp sttts tallclair "} {"_id":"doc-en-kubernetes-b73b667c6757a1f028e21b0a4cb6fed6dbc778c532e2f694d597aa0f9eb8d08d","title":"","text":"\"port\": 10001, \"labels\": { \"name\": \"redisslave\" } }, \"selector\": { \"name\": \"redisslave\" }"} {"_id":"doc-en-kubernetes-f8bad76e6eb67ed972dc87604f81ab99a4bcebe934e7e5bad9316d817dfbec77","title":"","text":"import ( \"context\" \"strings\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" utilversion \"k8s.io/apimachinery/pkg/util/version\" \"k8s.io/apiserver/pkg/endpoints/discovery\""} {"_id":"doc-en-kubernetes-58cfa79db15c0869c9d140c472c12933cb529008f1a39edb0c8cb31a44324efd","title":"","text":"framework.ExpectNotEqual(len(list.Groups), 0, \"Missing 
APIGroups\") for _, group := range list.Groups { if strings.HasSuffix(group.Name, \".example.com\") { // ignore known example dynamic API groups that are added/removed during the e2e test run continue } framework.Logf(\"Checking APIGroup: %v\", group.Name) // locate APIGroup endpoint"} {"_id":"doc-en-kubernetes-0132744d9d2e28d96bc6f1c64a6478043b0dd60ad6e5263b52be76874a3e48dc","title":"","text":"func restartKubelet() { kubeletServiceName := findRunningKubletServiceName() stdout, err := exec.Command(\"sudo\", \"systemctl\", \"restart\", kubeletServiceName).CombinedOutput() // reset the kubelet service start-limit-hit stdout, err := exec.Command(\"sudo\", \"systemctl\", \"reset-failed\", kubeletServiceName).CombinedOutput() framework.ExpectNoError(err, \"Failed to reset kubelet start-limit-hit with systemctl: %v, %v\", err, stdout) stdout, err = exec.Command(\"sudo\", \"systemctl\", \"restart\", kubeletServiceName).CombinedOutput() framework.ExpectNoError(err, \"Failed to restart kubelet with systemctl: %v, %v\", err, stdout) }"} {"_id":"doc-en-kubernetes-b9318860de90b898b69d42bdeee89ea15d57774005ad0579f9bf0c29afa12030","title":"","text":"{\"allarray\", \"{.Book[*].Author}\", storeData, \"Nigel Rees Evelyn Waugh Herman Melville\", false}, {\"allfields\", `{range .Bicycle[*]}{ \"{\" }{ @.* }{ \"} \" }{end}`, storeData, \"{red 19.95 true} {green 20.01 false} \", false}, {\"recurfields\", \"{..Price}\", storeData, \"8.95 12.99 8.99 19.95 20.01\", false}, {\"recurdotfields\", \"{...Price}\", storeData, \"8.95 12.99 8.99 19.95 20.01\", false}, {\"superrecurfields\", \"{............................................................Price}\", storeData, \"\", true}, {\"allstructsSlice\", \"{.Bicycle}\", storeData, `[{\"Color\":\"red\",\"Price\":19.95,\"IsNew\":true},{\"Color\":\"green\",\"Price\":20.01,\"IsNew\":false}]`, false}, {\"allstructs\", `{range .Bicycle[*]}{ @ }{ \" \" }{end}`, storeData,"} 
{"_id":"doc-en-kubernetes-43fd9eb6e7a13940084c13bf05333138c25462b4851581f0ff54d08ab5958181","title":"","text":"return p.parseInsideAction(cur) } // parseRecursive scans the recursive desent operator .. // parseRecursive scans the recursive descent operator .. func (p *Parser) parseRecursive(cur *ListNode) error { if lastIndex := len(cur.Nodes) - 1; lastIndex >= 0 && cur.Nodes[lastIndex].Type() == NodeRecursive { return fmt.Errorf(\"invalid multiple recursive descent\") } p.pos += len(\"..\") p.consumeText() cur.append(newRecursive())"} {"_id":"doc-en-kubernetes-ac934444bfd00aea426a1e3b801c8603a852b256df8dd454a8ad2cf3de005d84","title":"","text":"{\"invalid number\", \"{+12.3.0}\", \"cannot parse number +12.3.0\"}, {\"unterminated array\", \"{[1}\", \"unterminated array\"}, {\"unterminated filter\", \"{[?(.price]}\", \"unterminated filter\"}, {\"invalid multiple recursive descent\", \"{........}\", \"invalid multiple recursive descent\"}, } for _, test := range failParserTests { _, err := Parse(test.name, test.text)"} {"_id":"doc-en-kubernetes-a3b143ba921e1193599078c7f5940ac74174aa0d3bf0c2f620423817f589da97","title":"","text":"kube-up echo \"... calling validate-cluster\" >&2 if [[ \"${EXIT_ON_WEAK_ERROR}\" == \"true\" ]]; then validate-cluster else validate-cluster validate_result=\"$?\" if [[ ${validate_result} != \"0\" ]]; then if [[ \"${validate_result}\" == \"1\" ]]; then exit 1 elif [[ \"${validate_result}\" == \"2\" ]]; then echo \"...ignoring non-fatal errors in validate-cluster\" >&2 else echo \"Got unknown validate result: ${validate_result}\" fi # Override errexit (validate-cluster) && validate_result=\"$?\" || validate_result=\"$?\" # We have two different failure modes from validate cluster: # - 1: fatal error - cluster won't be working correctly # - 2: weak error - something went wrong, but cluster probably will be working correctly # We always exit in case 1), but if EXIT_ON_WEAK_ERROR != true, then we don't fail on 2). 
if [[ \"${validate_result}\" == \"1\" ]]; then exit 1 elif [[ \"${validate_result}\" == \"2\" ]]; then if [[ \"${EXIT_ON_WEAK_ERROR}\" == \"true\" ]]; then exit 1; else echo \"...ignoring non-fatal errors in validate-cluster\" >&2 fi fi"} {"_id":"doc-en-kubernetes-b6fc330b7a1641d2d3fa2eb78449889e9e971ea0d4f8ef5fd20007759c467dfb","title":"","text":"return listener, nil } func (c *Cloud) getSubnetCidrs(subnetIDs []string) ([]string, error) { request := &ec2.DescribeSubnetsInput{} for _, subnetID := range subnetIDs { request.SubnetIds = append(request.SubnetIds, aws.String(subnetID)) } subnets, err := c.ec2.DescribeSubnets(request) if err != nil { return nil, fmt.Errorf(\"error querying Subnet for ELB: %q\", err) } if len(subnets) != len(subnetIDs) { return nil, fmt.Errorf(\"error querying Subnet for ELB, got %d subnets for %v\", len(subnets), subnetIDs) } cidrs := make([]string, 0, len(subnets)) for _, subnet := range subnets { cidrs = append(cidrs, aws.StringValue(subnet.CidrBlock)) } return cidrs, nil } // EnsureLoadBalancer implements LoadBalancer.EnsureLoadBalancer func (c *Cloud) EnsureLoadBalancer(ctx context.Context, clusterName string, apiService *v1.Service, nodes []*v1.Node) (*v1.LoadBalancerStatus, error) { annotations := apiService.Annotations"} {"_id":"doc-en-kubernetes-090fed8acd0ff6b39d8aeda956a9e7448079b4997d39051d24b611c708a3c53e","title":"","text":"return nil, err } subnetCidrs, err := c.getSubnetCidrs(subnetIDs) if err != nil { klog.Errorf(\"Error getting subnet cidrs: %q\", err) return nil, err } sourceRangeCidrs := []string{} for cidr := range sourceRanges { sourceRangeCidrs = append(sourceRangeCidrs, cidr)"} {"_id":"doc-en-kubernetes-a0968b689bf2059cc67b7c71cd506f0da73696f2c75452eb1e99a835bbc8c79f","title":"","text":"sourceRangeCidrs = append(sourceRangeCidrs, \"0.0.0.0/0\") } err = c.updateInstanceSecurityGroupsForNLB(loadBalancerName, instances, sourceRangeCidrs, v2Mappings) err = c.updateInstanceSecurityGroupsForNLB(loadBalancerName, 
instances, subnetCidrs, sourceRangeCidrs, v2Mappings) if err != nil { klog.Warningf(\"Error opening ingress rules for the load balancer to the instances: %q\", err) return nil, err"} {"_id":"doc-en-kubernetes-55409f2a86edc4f1333cfe145283718da839863f03b43ac4228c4d85969644b8","title":"","text":"} } return c.updateInstanceSecurityGroupsForNLB(loadBalancerName, nil, nil, nil) return c.updateInstanceSecurityGroupsForNLB(loadBalancerName, nil, nil, nil, nil) } lb, err := c.describeLoadBalancer(loadBalancerName)"} {"_id":"doc-en-kubernetes-d3d991764ee10b0c03eb8f33c8e96ca4e8a2b94d11f7358feb9e16b40b464b05","title":"","text":"return targetGroup, nil } func (c *Cloud) getVpcCidrBlocks() ([]string, error) { vpcs, err := c.ec2.DescribeVpcs(&ec2.DescribeVpcsInput{ VpcIds: []*string{aws.String(c.vpcID)}, }) if err != nil { return nil, fmt.Errorf(\"error querying VPC for ELB: %q\", err) } if len(vpcs.Vpcs) != 1 { return nil, fmt.Errorf(\"error querying VPC for ELB, got %d vpcs for %s\", len(vpcs.Vpcs), c.vpcID) } cidrBlocks := make([]string, 0, len(vpcs.Vpcs[0].CidrBlockAssociationSet)) for _, cidr := range vpcs.Vpcs[0].CidrBlockAssociationSet { if aws.StringValue(cidr.CidrBlockState.State) != ec2.VpcCidrBlockStateCodeAssociated { continue } cidrBlocks = append(cidrBlocks, aws.StringValue(cidr.CidrBlock)) } return cidrBlocks, nil } // updateInstanceSecurityGroupsForNLB will adjust securityGroup's settings to allow inbound traffic into instances from clientCIDRs and portMappings. // TIP: if either instances or clientCIDRs or portMappings are nil, then the securityGroup rules for lbName are cleared. 
func (c *Cloud) updateInstanceSecurityGroupsForNLB(lbName string, instances map[InstanceID]*ec2.Instance, clientCIDRs []string, portMappings []nlbPortMapping) error { func (c *Cloud) updateInstanceSecurityGroupsForNLB(lbName string, instances map[InstanceID]*ec2.Instance, subnetCIDRs []string, clientCIDRs []string, portMappings []nlbPortMapping) error { if c.cfg.Global.DisableSecurityGroupIngress { return nil }"} {"_id":"doc-en-kubernetes-2b4d8407d314e56b14282bf83c542d23350c34330490668a30497162550b6ae1","title":"","text":"} clientRuleAnnotation := fmt.Sprintf(\"%s=%s\", NLBClientRuleDescription, lbName) healthRuleAnnotation := fmt.Sprintf(\"%s=%s\", NLBHealthCheckRuleDescription, lbName) vpcCIDRs, err := c.getVpcCidrBlocks() if err != nil { return err } for sgID, sg := range clusterSGs { sgPerms := NewIPPermissionSet(sg.IpPermissions...).Ungroup() if desiredSGIDs.Has(sgID) { if err := c.updateInstanceSecurityGroupForNLBTraffic(sgID, sgPerms, healthRuleAnnotation, \"tcp\", healthCheckPorts, vpcCIDRs); err != nil { if err := c.updateInstanceSecurityGroupForNLBTraffic(sgID, sgPerms, healthRuleAnnotation, \"tcp\", healthCheckPorts, subnetCIDRs); err != nil { return err } if err := c.updateInstanceSecurityGroupForNLBTraffic(sgID, sgPerms, clientRuleAnnotation, clientProtocol, clientPorts, clientCIDRs); err != nil {"} {"_id":"doc-en-kubernetes-a776dc86655480f0640c5b3be229ea190ed56dd62ca3ced0f0eb95cb11cfc1db","title":"","text":"if err != nil { errors = append(errors, err) } default: } }"} {"_id":"doc-en-kubernetes-bff72c1c63d59e1d46f645012cd13be5396fc76c06905476e87956ea2a6f48d5","title":"","text":"package algorithmprovider import ( \"sort\" \"strings\" \"fmt\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" \"k8s.io/klog/v2\""} {"_id":"doc-en-kubernetes-a9a029785ffb93f1470ce9ca611fd917960bfe1abf2a8e32e259808d736e40ae","title":"","text":"// ListAlgorithmProviders lists registered algorithm providers. 
func ListAlgorithmProviders() string { r := NewRegistry() var providers []string for k := range r { providers = append(providers, k) } sort.Strings(providers) return strings.Join(providers, \" | \") return fmt.Sprintf(\"%s | %s\", ClusterAutoscalerProvider, schedulerapi.SchedulerDefaultProviderName) } func getDefaultConfig() *schedulerapi.Plugins {"} {"_id":"doc-en-kubernetes-152f6fe2a56500eb2c573cad395c033479c45b2259968ba578ec758b8f4f6780","title":"","text":"close(timeoutCh) select { case <-done: if !watcher.IsStopped() { eventCh := watcher.ResultChan() select { case _, opened := <-eventCh: if opened { t.Errorf(\"Watcher received unexpected event\") } if !watcher.IsStopped() { t.Errorf(\"Watcher is not stopped\") } case <-time.After(wait.ForeverTestTimeout): t.Errorf(\"Leaked watch on timeout\") } case <-time.After(wait.ForeverTestTimeout):"} {"_id":"doc-en-kubernetes-96c10a01e84121159d2aeea8a9af7e74fcece96b1a07089d231b1127c29aa5c4","title":"","text":"} if container == nil { // The container is not available yet, e.g. during validation of // PodPreset. Stop validation now, Pod validation will refuse final // The container is not available yet. // Stop validation now, Pod validation will refuse final // Pods with Bidirectional propagation in non-privileged containers. 
return allErrs }"} {"_id":"doc-en-kubernetes-204fdb3cfff1123b9ed8007395bfedeafe62453e2e49d2157956bf594ec2d541","title":"","text":" { \"kind\": \"PodPreset\", \"apiVersion\": \"settings.k8s.io/v1alpha1\", \"metadata\": { \"name\": \"2\", \"generateName\": \"3\", \"namespace\": \"4\", \"selfLink\": \"5\", \"uid\": \"7\", \"resourceVersion\": \"11042405498087606203\", \"generation\": 8071137005907523419, \"creationTimestamp\": null, \"deletionGracePeriodSeconds\": -4955867275792137171, \"labels\": { \"7\": \"8\" }, \"annotations\": { \"9\": \"10\" }, \"ownerReferences\": [ { \"apiVersion\": \"11\", \"kind\": \"12\", \"name\": \"13\", \"uid\": \"Dz廔ȇ{sŊƏp\", \"controller\": false, \"blockOwnerDeletion\": true } ], \"finalizers\": [ \"14\" ], \"clusterName\": \"15\", \"managedFields\": [ { \"manager\": \"16\", \"operation\": \"鐊唊飙Ş-U圴÷a/ɔ}摁(湗Ć]\", \"apiVersion\": \"17\", \"fieldsType\": \"18\" } ] }, \"spec\": { \"selector\": { \"matchLabels\": { \"8---jop9641lg.p-g8c2-k-912e5-c-e63-n-3n/E9.8ThjT9s-j41-0-6p-JFHn7y-74.-0MUORQQ.N2.3\": \"68._bQw.-dG6c-.6--_x.--0wmZk1_8._3s_-_Bq.m_4\" }, \"matchExpressions\": [ { \"key\": \"p503---477-49p---o61---4fy--9---7--9-9s-0-u5lj2--10pq-0-7-9-2-0/fP81.-.9Vdx.TB_M-H_5_.t..bG0\", \"operator\": \"In\", \"values\": [ \"D07.a_.y_y_o0_5qN2_---_M.N_._a6.9bHjdH.-.5_.I8__n\" ] } ] }, \"env\": [ { \"name\": \"25\", \"value\": \"26\", \"valueFrom\": { \"fieldRef\": { \"apiVersion\": \"27\", \"fieldPath\": \"28\" }, \"resourceFieldRef\": { \"containerName\": \"29\", \"resource\": \"30\", \"divisor\": \"91\" }, \"configMapKeyRef\": { \"name\": \"31\", \"key\": \"32\", \"optional\": false }, \"secretKeyRef\": { \"name\": \"33\", \"key\": \"34\", \"optional\": true } } } ], \"envFrom\": [ { \"prefix\": \"35\", \"configMapRef\": { \"name\": \"36\", \"optional\": true }, \"secretRef\": { \"name\": \"37\", \"optional\": false } } ], \"volumes\": [ { \"name\": \"38\", \"hostPath\": { \"path\": \"39\", \"type\": \"3fƻfʣ繡楙¯ĦE\" }, \"emptyDir\": { 
\"sizeLimit\": \"700\" }, \"gcePersistentDisk\": { \"pdName\": \"40\", \"fsType\": \"41\", \"partition\": -1215463021, \"readOnly\": true }, \"awsElasticBlockStore\": { \"volumeID\": \"42\", \"fsType\": \"43\", \"partition\": 1686297225, \"readOnly\": true }, \"gitRepo\": { \"repository\": \"44\", \"revision\": \"45\", \"directory\": \"46\" }, \"secret\": { \"secretName\": \"47\", \"items\": [ { \"key\": \"48\", \"path\": \"49\", \"mode\": -815194340 } ], \"defaultMode\": -999327618, \"optional\": false }, \"nfs\": { \"server\": \"50\", \"path\": \"51\", \"readOnly\": true }, \"iscsi\": { \"targetPortal\": \"52\", \"iqn\": \"53\", \"lun\": -388204860, \"iscsiInterface\": \"54\", \"fsType\": \"55\", \"readOnly\": true, \"portals\": [ \"56\" ], \"secretRef\": { \"name\": \"57\" }, \"initiatorName\": \"58\" }, \"glusterfs\": { \"endpoints\": \"59\", \"path\": \"60\" }, \"persistentVolumeClaim\": { \"claimName\": \"61\" }, \"rbd\": { \"monitors\": [ \"62\" ], \"image\": \"63\", \"fsType\": \"64\", \"pool\": \"65\", \"user\": \"66\", \"keyring\": \"67\", \"secretRef\": { \"name\": \"68\" }, \"readOnly\": true }, \"flexVolume\": { \"driver\": \"69\", \"fsType\": \"70\", \"secretRef\": { \"name\": \"71\" }, \"options\": { \"72\": \"73\" } }, \"cinder\": { \"volumeID\": \"74\", \"fsType\": \"75\", \"secretRef\": { \"name\": \"76\" } }, \"cephfs\": { \"monitors\": [ \"77\" ], \"path\": \"78\", \"user\": \"79\", \"secretFile\": \"80\", \"secretRef\": { \"name\": \"81\" }, \"readOnly\": true }, \"flocker\": { \"datasetName\": \"82\", \"datasetUUID\": \"83\" }, \"downwardAPI\": { \"items\": [ { \"path\": \"84\", \"fieldRef\": { \"apiVersion\": \"85\", \"fieldPath\": \"86\" }, \"resourceFieldRef\": { \"containerName\": \"87\", \"resource\": \"88\", \"divisor\": \"965\" }, \"mode\": 345648859 } ], \"defaultMode\": 1169718433 }, \"fc\": { \"targetWWNs\": [ \"89\" ], \"lun\": -460478410, \"fsType\": \"90\", \"wwids\": [ \"91\" ] }, \"azureFile\": { \"secretName\": \"92\", 
\"shareName\": \"93\", \"readOnly\": true }, \"configMap\": { \"name\": \"94\", \"items\": [ { \"key\": \"95\", \"path\": \"96\", \"mode\": -513127725 } ], \"defaultMode\": -958191807, \"optional\": true }, \"vsphereVolume\": { \"volumePath\": \"97\", \"fsType\": \"98\", \"storagePolicyName\": \"99\", \"storagePolicyID\": \"100\" }, \"quobyte\": { \"registry\": \"101\", \"volume\": \"102\", \"user\": \"103\", \"group\": \"104\", \"tenant\": \"105\" }, \"azureDisk\": { \"diskName\": \"106\", \"diskURI\": \"107\", \"cachingMode\": \"穠C]躢|)黰eȪ嵛4$%Qɰ\", \"fsType\": \"108\", \"readOnly\": false, \"kind\": \"Ï抴ŨfZhUʎ浵ɲõTou0026蕭k\" }, \"photonPersistentDisk\": { \"pdID\": \"109\", \"fsType\": \"110\" }, \"projected\": { \"sources\": [ { \"secret\": { \"name\": \"111\", \"items\": [ { \"key\": \"112\", \"path\": \"113\", \"mode\": -163325250 } ], \"optional\": false }, \"downwardAPI\": { \"items\": [ { \"path\": \"114\", \"fieldRef\": { \"apiVersion\": \"115\", \"fieldPath\": \"116\" }, \"resourceFieldRef\": { \"containerName\": \"117\", \"resource\": \"118\", \"divisor\": \"85\" }, \"mode\": -1996616480 } ] }, \"configMap\": { \"name\": \"119\", \"items\": [ { \"key\": \"120\", \"path\": \"121\", \"mode\": -1120128337 } ], \"optional\": false }, \"serviceAccountToken\": { \"audience\": \"122\", \"expirationSeconds\": -1239370187818888272, \"path\": \"123\" } } ], \"defaultMode\": 1366821517 }, \"portworxVolume\": { \"volumeID\": \"124\", \"fsType\": \"125\", \"readOnly\": true }, \"scaleIO\": { \"gateway\": \"126\", \"system\": \"127\", \"secretRef\": { \"name\": \"128\" }, \"sslEnabled\": true, \"protectionDomain\": \"129\", \"storagePool\": \"130\", \"storageMode\": \"131\", \"volumeName\": \"132\", \"fsType\": \"133\" }, \"storageos\": { \"volumeName\": \"134\", \"volumeNamespace\": \"135\", \"fsType\": \"136\", \"secretRef\": { \"name\": \"137\" } }, \"csi\": { \"driver\": \"138\", \"readOnly\": true, \"fsType\": \"139\", \"volumeAttributes\": { \"140\": \"141\" }, 
\"nodePublishSecretRef\": { \"name\": \"142\" } }, \"ephemeral\": { \"volumeClaimTemplate\": { \"metadata\": { \"name\": \"143\", \"generateName\": \"144\", \"namespace\": \"145\", \"selfLink\": \"146\", \"uid\": \"y綸_Ú8參遼ū\", \"resourceVersion\": \"16267283576845911679\", \"generation\": 2131277878630553496, \"creationTimestamp\": null, \"deletionGracePeriodSeconds\": -2351574817327272831, \"labels\": { \"148\": \"149\" }, \"annotations\": { \"150\": \"151\" }, \"ownerReferences\": [ { \"apiVersion\": \"152\", \"kind\": \"153\", \"name\": \"154\", \"uid\": \"臷Ľð»ųKĵu00264ʑ%:;栍dʪ\", \"controller\": false, \"blockOwnerDeletion\": false } ], \"finalizers\": [ \"155\" ], \"clusterName\": \"156\", \"managedFields\": [ { \"manager\": \"157\", \"operation\": \"ɍi縱ù墴1Rƥ贫\", \"apiVersion\": \"158\", \"fieldsType\": \"159\" } ] }, \"spec\": { \"accessModes\": [ \"掊°nʮ閼咎櫸eʔŊƞ究:hoĂɋ瀐\" ], \"selector\": { \"matchLabels\": { \"d.iUaC_wYSJfB._.zS-._..3le-Q4-R-083.SD..P.---5.-Z3P__D__6t-2.-m\": \"wE._._3.-.83_iQ\" }, \"matchExpressions\": [ { \"key\": \"x--r7v66bm71u-n4f9wk-3--652x01--p--n4-4-t--2/C.A-j..9dfn3Y8d_0_.---M_4FpF_W-1._-vL_i.-_-a--G-I.-_Y33--.8U.6\", \"operator\": \"In\", \"values\": [ \"A.0.__cd..lv-_aLQbI2_-.XFw.8._..._Wxpe..J7r6\" ] } ] }, \"resources\": { \"limits\": { \"u003c鴒翁杙Ŧ癃8鸖ɱJȉ罴\": \"587\" }, \"requests\": { \"Ó6dz娝嘚庎D}埽uʎȺ眖R#yV\": \"156\" } }, \"volumeName\": \"166\", \"storageClassName\": \"167\", \"volumeMode\": \"瘦ɖ緕ȚÍ勅跦Opwǩ曬逴\", \"dataSource\": { \"apiGroup\": \"168\", \"kind\": \"169\", \"name\": \"170\" } } }, \"readOnly\": true } } ], \"volumeMounts\": [ { \"name\": \"171\", \"readOnly\": true, \"mountPath\": \"172\", \"subPath\": \"173\", \"mountPropagation\": \"œȠƬQg鄠[颐o啛更偢ɇ卷荙JLĹ]\", \"subPathExpr\": \"174\" } ] } } No newline at end of file"} {"_id":"doc-en-kubernetes-e1b8f5ce86e875f6c810973200e0d14bb3190ea9350dc1798fc9e36f8eda5340","title":"","text":" apiVersion: settings.k8s.io/v1alpha1 kind: PodPreset metadata: annotations: \"9\": \"10\" 
clusterName: \"15\" creationTimestamp: null deletionGracePeriodSeconds: -4955867275792137171 finalizers: - \"14\" generateName: \"3\" generation: 8071137005907523419 labels: \"7\": \"8\" managedFields: - apiVersion: \"17\" fieldsType: \"18\" manager: \"16\" operation: 鐊唊飙Ş-U圴÷a/ɔ}摁(湗Ć] name: \"2\" namespace: \"4\" ownerReferences: - apiVersion: \"11\" blockOwnerDeletion: true controller: false kind: \"12\" name: \"13\" uid: Dz廔ȇ{sŊƏp resourceVersion: \"11042405498087606203\" selfLink: \"5\" uid: \"7\" spec: env: - name: \"25\" value: \"26\" valueFrom: configMapKeyRef: key: \"32\" name: \"31\" optional: false fieldRef: apiVersion: \"27\" fieldPath: \"28\" resourceFieldRef: containerName: \"29\" divisor: \"91\" resource: \"30\" secretKeyRef: key: \"34\" name: \"33\" optional: true envFrom: - configMapRef: name: \"36\" optional: true prefix: \"35\" secretRef: name: \"37\" optional: false selector: matchExpressions: - key: p503---477-49p---o61---4fy--9---7--9-9s-0-u5lj2--10pq-0-7-9-2-0/fP81.-.9Vdx.TB_M-H_5_.t..bG0 operator: In values: - D07.a_.y_y_o0_5qN2_---_M.N_._a6.9bHjdH.-.5_.I8__n matchLabels: 8---jop9641lg.p-g8c2-k-912e5-c-e63-n-3n/E9.8ThjT9s-j41-0-6p-JFHn7y-74.-0MUORQQ.N2.3: 68._bQw.-dG6c-.6--_x.--0wmZk1_8._3s_-_Bq.m_4 volumeMounts: - mountPath: \"172\" mountPropagation: œȠƬQg鄠[颐o啛更偢ɇ卷荙JLĹ] name: \"171\" readOnly: true subPath: \"173\" subPathExpr: \"174\" volumes: - awsElasticBlockStore: fsType: \"43\" partition: 1686297225 readOnly: true volumeID: \"42\" azureDisk: cachingMode: 穠C]躢|)黰eȪ嵛4$%Qɰ diskName: \"106\" diskURI: \"107\" fsType: \"108\" kind: Ï抴ŨfZhUʎ浵ɲõTo&蕭k readOnly: false azureFile: readOnly: true secretName: \"92\" shareName: \"93\" cephfs: monitors: - \"77\" path: \"78\" readOnly: true secretFile: \"80\" secretRef: name: \"81\" user: \"79\" cinder: fsType: \"75\" secretRef: name: \"76\" volumeID: \"74\" configMap: defaultMode: -958191807 items: - key: \"95\" mode: -513127725 path: \"96\" name: \"94\" optional: true csi: driver: \"138\" fsType: 
\"139\" nodePublishSecretRef: name: \"142\" readOnly: true volumeAttributes: \"140\": \"141\" downwardAPI: defaultMode: 1169718433 items: - fieldRef: apiVersion: \"85\" fieldPath: \"86\" mode: 345648859 path: \"84\" resourceFieldRef: containerName: \"87\" divisor: \"965\" resource: \"88\" emptyDir: sizeLimit: \"700\" ephemeral: readOnly: true volumeClaimTemplate: metadata: annotations: \"150\": \"151\" clusterName: \"156\" creationTimestamp: null deletionGracePeriodSeconds: -2351574817327272831 finalizers: - \"155\" generateName: \"144\" generation: 2131277878630553496 labels: \"148\": \"149\" managedFields: - apiVersion: \"158\" fieldsType: \"159\" manager: \"157\" operation: ɍi縱ù墴1Rƥ贫 name: \"143\" namespace: \"145\" ownerReferences: - apiVersion: \"152\" blockOwnerDeletion: false controller: false kind: \"153\" name: \"154\" uid: 臷Ľð»ųKĵ&4ʑ%:;栍dʪ resourceVersion: \"16267283576845911679\" selfLink: \"146\" uid: y綸_Ú8參遼ū spec: accessModes: - 掊°nʮ閼咎櫸eʔŊƞ究:hoĂɋ瀐 dataSource: apiGroup: \"168\" kind: \"169\" name: \"170\" resources: limits: <鴒翁杙Ŧ癃8鸖ɱJȉ罴: \"587\" requests: Ó6dz娝嘚庎D}埽uʎȺ眖R#yV: \"156\" selector: matchExpressions: - key: x--r7v66bm71u-n4f9wk-3--652x01--p--n4-4-t--2/C.A-j..9dfn3Y8d_0_.---M_4FpF_W-1._-vL_i.-_-a--G-I.-_Y33--.8U.6 operator: In values: - A.0.__cd..lv-_aLQbI2_-.XFw.8._..._Wxpe..J7r6 matchLabels: d.iUaC_wYSJfB._.zS-._..3le-Q4-R-083.SD..P.---5.-Z3P__D__6t-2.-m: wE._._3.-.83_iQ storageClassName: \"167\" volumeMode: 瘦ɖ緕ȚÍ勅跦Opwǩ曬逴 volumeName: \"166\" fc: fsType: \"90\" lun: -460478410 targetWWNs: - \"89\" wwids: - \"91\" flexVolume: driver: \"69\" fsType: \"70\" options: \"72\": \"73\" secretRef: name: \"71\" flocker: datasetName: \"82\" datasetUUID: \"83\" gcePersistentDisk: fsType: \"41\" partition: -1215463021 pdName: \"40\" readOnly: true gitRepo: directory: \"46\" repository: \"44\" revision: \"45\" glusterfs: endpoints: \"59\" path: \"60\" hostPath: path: \"39\" type: 3fƻfʣ繡楙¯ĦE iscsi: fsType: \"55\" initiatorName: \"58\" iqn: \"53\" 
iscsiInterface: \"54\" lun: -388204860 portals: - \"56\" readOnly: true secretRef: name: \"57\" targetPortal: \"52\" name: \"38\" nfs: path: \"51\" readOnly: true server: \"50\" persistentVolumeClaim: claimName: \"61\" photonPersistentDisk: fsType: \"110\" pdID: \"109\" portworxVolume: fsType: \"125\" readOnly: true volumeID: \"124\" projected: defaultMode: 1366821517 sources: - configMap: items: - key: \"120\" mode: -1120128337 path: \"121\" name: \"119\" optional: false downwardAPI: items: - fieldRef: apiVersion: \"115\" fieldPath: \"116\" mode: -1996616480 path: \"114\" resourceFieldRef: containerName: \"117\" divisor: \"85\" resource: \"118\" secret: items: - key: \"112\" mode: -163325250 path: \"113\" name: \"111\" optional: false serviceAccountToken: audience: \"122\" expirationSeconds: -1239370187818888272 path: \"123\" quobyte: group: \"104\" registry: \"101\" tenant: \"105\" user: \"103\" volume: \"102\" rbd: fsType: \"64\" image: \"63\" keyring: \"67\" monitors: - \"62\" pool: \"65\" readOnly: true secretRef: name: \"68\" user: \"66\" scaleIO: fsType: \"133\" gateway: \"126\" protectionDomain: \"129\" secretRef: name: \"128\" sslEnabled: true storageMode: \"131\" storagePool: \"130\" system: \"127\" volumeName: \"132\" secret: defaultMode: -999327618 items: - key: \"48\" mode: -815194340 path: \"49\" optional: false secretName: \"47\" storageos: fsType: \"136\" secretRef: name: \"137\" volumeName: \"134\" volumeNamespace: \"135\" vsphereVolume: fsType: \"98\" storagePolicyID: \"100\" storagePolicyName: \"99\" volumePath: \"97\" "} {"_id":"doc-en-kubernetes-9c32799a595e38a66b9e214e9215a1421da0854d18c8ba6009587856518616a5","title":"","text":"\"ResourceQuota\", \"Role\", \"PriorityClass\", \"PodPreset\", \"AuditSink\", )"} {"_id":"doc-en-kubernetes-a2615d99e302f31d92054e248471930d2c2303d96966feb85e326f3b8acfaf59","title":"","text":"} var _ framework.FilterPlugin = &CSILimits{} var _ framework.EnqueueExtensions = &CSILimits{} // CSIName is the name of the 
plugin used in the plugin registry and configurations. const CSIName = \"NodeVolumeLimits\""} {"_id":"doc-en-kubernetes-7dcd8a2daebd0abaa5cce8f5c0f1e8dbfec9e48550ef4a0018db40b286cb670e","title":"","text":"return CSIName } // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. func (pl *CSILimits) EventsToRegister() []framework.ClusterEvent { return []framework.ClusterEvent{ {Resource: framework.CSINode, ActionType: framework.Add}, {Resource: framework.Pod, ActionType: framework.Delete}, } } // Filter invoked at the filter extension point. func (pl *CSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { // If the new pod doesn't have any volume attached to it, the predicate will always be true"} {"_id":"doc-en-kubernetes-75ba70bd223c63c8178c6a04906b71f37242db219cf51f71cda8068b467f0fa3","title":"","text":"} var _ framework.FilterPlugin = &nonCSILimits{} var _ framework.EnqueueExtensions = &nonCSILimits{} // newNonCSILimitsWithInformerFactory returns a plugin with filter name and informer factory. func newNonCSILimitsWithInformerFactory("} {"_id":"doc-en-kubernetes-3c6936e2f5d138320ca27843db3357f3b2084b06bf5e626ccd7ab2c6b3fd816c","title":"","text":"return pl.name } // EventsToRegister returns the possible events that may make a Pod // failed by this plugin schedulable. func (pl *nonCSILimits) EventsToRegister() []framework.ClusterEvent { return []framework.ClusterEvent{ {Resource: framework.Node, ActionType: framework.Add}, {Resource: framework.Pod, ActionType: framework.Delete}, } } // Filter invoked at the filter extension point. 
func (pl *nonCSILimits) Filter(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status { // If a pod doesn't have any volume attached to it, the predicate will always be true."} {"_id":"doc-en-kubernetes-d4d49d0bd8ef4029f6bf031ccbe975d6c5275e03990d2f76eff021e9699e4ba2","title":"","text":"request, retry.DoExponentialBackoffRetry(&sendBackoff), ) if response == nil && err == nil { return response, retry.NewError(false, fmt.Errorf(\"Empty response and no HTTP code\")) } return response, retry.GetError(response, err) }"} {"_id":"doc-en-kubernetes-a932de7f601b2ce44f9b3a0df075c96a2647220b6b124bef583c6bcff3fde7dc","title":"","text":"if m.adapter.cgroupManagerType == libcontainerSystemd { updateSystemdCgroupInfo(libcontainerCgroupConfig, cgroupConfig.Name) } else { libcontainerCgroupConfig.Path = cgroupConfig.Name.ToCgroupfs() } manager, err := m.adapter.newManager(libcontainerCgroupConfig, cgroupPaths)"} {"_id":"doc-en-kubernetes-b2221d938525126b8fefac3ad05570cef7e40dcf0b629f8ea7aafa5f66e112c8","title":"","text":"// propagateControllers on a unified hierarchy enables all the supported controllers for the specified cgroup func propagateControllers(path string) error { if err := os.MkdirAll(filepath.Join(cmutil.CgroupRoot, path), 0755); err != nil { return fmt.Errorf(\"failed to create cgroup %q : %v\", path, err) }"} {"_id":"doc-en-kubernetes-e5dba9993f71556af19136f43117f0ad80fbf9b3ba85c14f29de9656194ec536","title":"","text":"} current := cmutil.CgroupRoot relPath, err := filepath.Rel(cmutil.CgroupRoot, path) if err != nil { return fmt.Errorf(\"failed to get relative path to cgroup root from %q: %v\", path, err) } // Write the controllers list to each \"cgroup.subtree_control\"
file until it reaches the parent cgroup. // For the /foo/bar/baz cgroup, controllers must be enabled sequentially in the files: // - /sys/fs/cgroup/foo/cgroup.subtree_control // - /sys/fs/cgroup/foo/bar/cgroup.subtree_control for _, p := range strings.Split(filepath.Dir(relPath), \"/\") { current = filepath.Join(current, p) if err := ioutil.WriteFile(filepath.Join(current, \"cgroup.subtree_control\"), []byte(controllers), 0755); err != nil { return fmt.Errorf(\"failed to enable controllers on %q: %v\", cmutil.CgroupRoot, err)"} {"_id":"doc-en-kubernetes-8db86bc2b2f146ac58c6dc7215a3881ea5e894a8943171b5570d8858a04b21c5","title":"","text":"klog.V(6).Infof(\"Optional subsystem not supported: hugetlb\") } manager, err := cgroupfs2.NewManager(cgroupConfig, filepath.Join(cmutil.CgroupRoot, cgroupConfig.Path), false) if err != nil { return fmt.Errorf(\"failed to create cgroup v2 manager: %v\", err) }"} {"_id":"doc-en-kubernetes-a20a4c596998a3ae709d3ced222753090e18e3e81aa2bc685fb81916b84c2f3c","title":"","text":"unified := libcontainercgroups.IsCgroup2UnifiedMode() if unified { libcontainerCgroupConfig.Path = cgroupConfig.Name.ToCgroupfs() } else { libcontainerCgroupConfig.Paths = m.buildCgroupPaths(cgroupConfig.Name) }"} {"_id":"doc-en-kubernetes-e4ac7aad7fbd90d7a501cb685b200ef8983c655cbbaa9f0bf4253692842577d9","title":"","text":"if m.adapter.cgroupManagerType == libcontainerSystemd { updateSystemdCgroupInfo(libcontainerCgroupConfig, cgroupConfig.Name) } else { libcontainerCgroupConfig.Path = cgroupConfig.Name.ToCgroupfs() } if
utilfeature.DefaultFeatureGate.Enabled(kubefeatures.SupportPodPidsLimit) && cgroupConfig.ResourceParameters != nil && cgroupConfig.ResourceParameters.PidsLimit != nil {"} {"_id":"doc-en-kubernetes-daa501b199fa6570e4802de4391d6d64f107da113ae8669f2213deddae7f6978","title":"","text":"} } // There is an issue in golang where EvalSymlinks fails on Windows when passed a // UNC share root path without a trailing backslash. // Ex: \\\\SERVER\\share will fail to resolve but \\\\SERVER\\share\\ will resolve // containerD on Windows calls EvalSymlinks so we'll add the backslash when making the symlink if it is missing. // https://github.com/golang/go/pull/42096 fixes this issue in golang but a fix will not be available until // golang v1.16 mklinkSource := bindSource if !strings.HasSuffix(mklinkSource, \"\\\\\") { mklinkSource = mklinkSource + \"\\\\\" } output, err := exec.Command(\"cmd\", \"/c\", \"mklink\", \"/D\", target, mklinkSource).CombinedOutput() if err != nil { klog.Errorf(\"mklink failed: %v, source(%q) target(%q) output: %q\", err, mklinkSource, target, string(output)) return err } klog.V(2).Infof(\"mklink source(%q) on target(%q) successfully, output: %q\", mklinkSource, target, string(output)) return nil }"} {"_id":"doc-en-kubernetes-adf3a0d07b8bd2acc9f80a26fed1559978b64adecd07aac5d1d716d339e7ebdd","title":"","text":"// Name of the workload. Name string // Values of parameters used in the workloadTemplate. Params params } type params struct { params map[string]int // isUsed field records whether params is used or not. isUsed map[string]bool } // UnmarshalJSON is a custom unmarshaler for params.
// // from(json): // \t{ // \t\t\"initNodes\": 500, // \t\t\"initPods\": 50 // \t} // // to: //\tparams{ //\t\tparams: map[string]int{ //\t\t\t\"initNodes\": 500, //\t\t\t\"initPods\": 50, //\t\t}, //\t\tisUsed: map[string]bool{}, // empty map //\t} // func (p *params) UnmarshalJSON(b []byte) error { aux := map[string]int{} if err := json.Unmarshal(b, &aux); err != nil { return err } p.params = aux p.isUsed = map[string]bool{} return nil } // get returns param. func (p params) get(key string) (int, error) { p.isUsed[key] = true param, ok := p.params[key] if ok { return param, nil } return 0, fmt.Errorf(\"parameter %s is undefined\", key) } // unusedParams returns the names of unused parameters. func (w workload) unusedParams() []string { var ret []string for name := range w.Params.params { if !w.Params.isUsed[name] { ret = append(ret, name) } } return ret } // op is a dummy struct which stores the real op in itself."} {"_id":"doc-en-kubernetes-8889a662a5e42f4447fa19f66502b06fd62c21c156634100629ac07ddd4d4f08","title":"","text":"func (cno createNodesOp) patchParams(w *workload) (realOp, error) { if cno.CountParam != \"\" { var err error cno.Count, err = w.Params.get(cno.CountParam[1:]) if err != nil { return nil, err } } return &cno, (&cno).isValid(false)"} {"_id":"doc-en-kubernetes-662c7c7e26cbd2736249d648219fc2b6afe0748e94a842d9422847dadecf59b2","title":"","text":"func (cmo createNamespacesOp) patchParams(w *workload) (realOp, error) { if cmo.CountParam != \"\" { var err error cmo.Count, err = w.Params.get(cmo.CountParam[1:]) if err != nil { return nil, err } } return &cmo, (&cmo).isValid(false)"}
{"_id":"doc-en-kubernetes-9788a4a4bdfdb59cbc13733ab10c5527d68ec96963e67e9c4435b8e9523ae385","title":"","text":"func (cpo createPodsOp) patchParams(w *workload) (realOp, error) { if cpo.CountParam != \"\" { var err error cpo.Count, err = w.Params.get(cpo.CountParam[1:]) if err != nil { return nil, err } } return &cpo, (&cpo).isValid(false)"} {"_id":"doc-en-kubernetes-c9cfb12696929fa8868fad4347314134fca80526aca62114bc3ce500a4086a56","title":"","text":"func (cpso createPodSetsOp) patchParams(w *workload) (realOp, error) { if cpso.CountParam != \"\" { var err error cpso.Count, err = w.Params.get(cpso.CountParam[1:]) if err != nil { return nil, err } } return &cpso, (&cpso).isValid(true)"} {"_id":"doc-en-kubernetes-4098952c908d30cd004be3980784915e2d73f5b3e3b484a709a5b3afbe43b9a9","title":"","text":"b.Fatalf(\"op %d: invalid op %v\", opIndex, concreteOp) } } // check unused params and inform users unusedParams := w.unusedParams() if len(unusedParams) != 0 { b.Fatalf(\"the parameters %v are defined on workload %s, but unused.\\nPlease make sure there are no typos.\", unusedParams, w.Name) } // Some tests have unschedulable pods. Do not add an implicit barrier at the // end as we do not want to wait for them.
return dataItems"} {"_id":"doc-en-kubernetes-72b947ef23e13129ad71526f2b799c76c4bd87319f4f4c788eb465a506095a81","title":"","text":"var output string output, err = remote.SSH(name, \"sh\", \"-c\", \"'systemctl list-units --type=service --state=running | grep -e docker -e containerd -e crio'\") if err != nil { err = fmt.Errorf(\"instance %s not running docker/containerd/crio daemon - Command failed: %s\", name, output) continue } if !strings.Contains(output, \"docker.service\") && !strings.Contains(output, \"containerd.service\") && !strings.Contains(output, \"crio.service\") { err = fmt.Errorf(\"instance %s not running docker/containerd/crio daemon: %s\", name, output) continue } instanceRunning = true"} {"_id":"doc-en-kubernetes-e5fce742c53fb0f52364b5e3ecb3385fe9a68877d456362049d34946be5b13bc","title":"","text":" 1.5 1.6 "} {"_id":"doc-en-kubernetes-4945be8bae2a3392af66a5318b0034a8df8168ba3b23a48b365b75a5dfef08e4","title":"","text":"\"http://metadata.google.internal\", \"http://169.254.169.254/\", \"http://metadata.google.internal/\", \"http://metadata.google.internal/0.1\", \"http://metadata.google.internal/0.1/\", \"http://metadata.google.internal/computeMetadata\", \"http://metadata.google.internal/computeMetadata/v1\", // Allowed API versions."} {"_id":"doc-en-kubernetes-162678183284232606abe99dd606022465cb369e3e637372a1c15b2b619ab0c5","title":"","text":"\"http://metadata.google.internal/computeMetadata/v1/instance/tags?wait_for_change=true&timeout_sec=0\", \"http://metadata.google.internal/computeMetadata/v1/instance/tags?wait_for_change=true&last_etag=d34db33f\", } legacySuccessEndpoints = []string{ // Discovery
\"http://metadata.google.internal/0.1/meta-data\", \"http://metadata.google.internal/computeMetadata/v1beta1\", // Allowed API versions. \"http://metadata.google.internal/0.1/meta-data/\", \"http://metadata.google.internal/computeMetadata/v1beta1/\", // Service account token endpoints. \"http://metadata.google.internal/0.1/meta-data/service-accounts/default/acquire\", \"http://metadata.google.internal/computeMetadata/v1beta1/instance/service-accounts/default/token\", // Known query params. \"http://metadata.google.internal/0.1/meta-data/service-accounts/default/acquire?scopes\", } noKubeEnvEndpoints = []string{ // Check that these don't get a recursive result. \"http://metadata.google.internal/computeMetadata/v1/instance/?recursive%3Dtrue\", // urlencoded"} {"_id":"doc-en-kubernetes-37ceb11673d55a75f460911c28548238ae26894e63e73aa67534ff352a64adf0","title":"","text":"\"http://metadata.google.internal/0.2/\", \"http://metadata.google.internal/computeMetadata/v2/\", // kube-env. \"http://metadata.google.internal/0.1/meta-data/attributes/kube-env\", \"http://metadata.google.internal/computeMetadata/v1beta1/instance/attributes/kube-env\", \"http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env\", // VM identity. \"http://metadata.google.internal/0.1/meta-data/service-accounts/default/identity\", \"http://metadata.google.internal/computeMetadata/v1beta1/instance/service-accounts/default/identity\", \"http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity\", // Forbidden recursive queries. 
\"http://metadata.google.internal/computeMetadata/v1/instance/?recursive=true\","} {"_id":"doc-en-kubernetes-6e15a6ae5921d117408957eb119df714f44162ed4bceeb3a2b43e0c3f228b230","title":"","text":"} } legacyEndpointExpectedStatus := 200 if err := checkURL(\"http://metadata.google.internal/computeMetadata/v1/instance/attributes/disable-legacy-endpoints\", h, 200, \"true\", \"\"); err == nil { // If `disable-legacy-endpoints` is set to true, queries to unconcealed legacy endpoints will return a 403. legacyEndpointExpectedStatus = 403 } for _, e := range legacySuccessEndpoints { if err := checkURL(e, h, legacyEndpointExpectedStatus, \"\", \"\"); err != nil { log.Printf(\"Wrong response for %v: %v\", e, err) success = 1 } } xForwardedForHeader := map[string][]string{ \"X-Forwarded-For\": {\"Somebody-somewhere\"}, }"} {"_id":"doc-en-kubernetes-9f2b52bbddfe25f85dcb208fb6acc0b55837e9c0f9a19813f7f629067c69b82f","title":"","text":"match: k8s.gcr.io/pause:d+.d+ - path: test/utils/image/manifest.go match: configs[Pause] = Config{gcRegistry, \"pause\", \"d+.d+\"} # metadata-concealment: bump this one first - name: \"metadata-concealment\" version: \"1.6\" refPaths: - path: test/images/metadata-concealment/VERSION match: d.d # then after merge and successful postsubmit image push / promotion, bump this - name: \"metadata-concealment: dependents\" version: \"1.6\" refPaths: - path: test/utils/image/manifest.go match: configs[CheckMetadataConcealment] = Config{promoterE2eRegistry, \"metadata-concealment\", \"d+.d+\"} "} {"_id":"doc-en-kubernetes-2af6f584df20a174859d5c1c27a1795381d76e85e59a083ef3bbb87dc3b875c1","title":"","text":"configs[APIServer] = Config{e2eRegistry, \"sample-apiserver\", \"1.17\"} configs[AppArmorLoader] = Config{e2eRegistry, \"apparmor-loader\", \"1.0\"} configs[BusyBox] = Config{dockerLibraryRegistry, \"busybox\", \"1.29\"} configs[CheckMetadataConcealment] = Config{e2eRegistry, \"metadata-concealment\", \"1.2\"} configs[CheckMetadataConcealment] = 
Config{promoterE2eRegistry, \"metadata-concealment\", \"1.6\"} configs[CudaVectorAdd] = Config{e2eRegistry, \"cuda-vector-add\", \"1.0\"} configs[CudaVectorAdd2] = Config{e2eRegistry, \"cuda-vector-add\", \"2.0\"} configs[DebianIptables] = Config{buildImageRegistry, \"debian-iptables\", \"buster-v1.3.0\"}"} {"_id":"doc-en-kubernetes-abc60e2427ecbd2618aeac36bfaeab76d1c1e74ce1382a18e4a46799b6bc0515","title":"","text":"ginkgo.By(\"wait for the deleted pod to be cleaned up from the state file\") waitForStateFileCleanedUp() ginkgo.By(\"the deleted pod has already been deleted from the state file\") }) ginkgo.AfterEach(func() { setOldKubeletConfig(f, oldCfg) }) }"} {"_id":"doc-en-kubernetes-94d2369f15199129458f92f6e51d07ee84bd685fd74193014eed1d26a387c725","title":"","text":"// Run the tests runTopologyManagerPolicySuiteTests(f) } // restore kubelet config setOldKubeletConfig(f, oldCfg) // Delete state file to allow repeated runs deleteStateFile() }) ginkgo.It(\"run Topology Manager node alignment test suite\", func() {"} {"_id":"doc-en-kubernetes-2674c5354346531e3647bfe6aab575877f482bce036a8ea025bfc50a26c03ac3","title":"","text":"runTopologyManagerNodeAlignmentSuiteTests(f, sd, reservedSystemCPUs, policy, numaNodes, coreCount) } // restore kubelet config setOldKubeletConfig(f, oldCfg) // Delete state file to allow repeated runs deleteStateFile() }) ginkgo.It(\"run the Topology Manager pod scope alignment test suite\", func() {"} {"_id":"doc-en-kubernetes-5efe5c18c4d38d7266c5309de2421216a1405c5d4ff8d8f403d33033d480b3e9","title":"","text":"reservedSystemCPUs := configureTopologyManagerInKubelet(f, oldCfg, policy, scope, configMap, numaNodes) runTMScopeResourceAlignmentTestSuite(f, configMap, reservedSystemCPUs, policy, numaNodes, coreCount) }) ginkgo.AfterEach(func() { // restore kubelet config setOldKubeletConfig(f, oldCfg) deleteStateFile() }) }"} 
{"_id":"doc-en-kubernetes-a859a8cf2dfd815b3ed383566b6f71f674ba137e40902c19efb1459249aeac4f","title":"","text":"ginkgo.Context(\"With kubeconfig updated to static CPU Manager policy run the Topology Manager tests\", func() { runTopologyManagerTests(f) }) })"} {"_id":"doc-en-kubernetes-28d7ddeea80c30f9af1a2a7eac351608b036154842fe1cc23eee3ab06523f068","title":"","text":"// zone based on pod labels. }) ginkgo.Describe(\"GCE [Slow] [Feature:NEG] [Flaky]\", func() { var gceController *gce.IngressController // Platform specific setup"} {"_id":"doc-en-kubernetes-162d21370c7ec4f102f2431400d4199b14f98f0bc1d8c77b086f1c16b68f0424","title":"","text":"// Defines a time budget that can be spent on waiting for not-ready watchers // while dispatching an event before shutting them down. dispatchTimeoutBudget timeBudget // Handling graceful termination. stopLock sync.RWMutex"} {"_id":"doc-en-kubernetes-4d4f0aac6f4a37ba02e1f8ebffc14cb3e9dd4e2b9c9e5c025a7b66a6776072ea","title":"","text":"wg.Wait() } type fakeTimeBudget struct{} func (f *fakeTimeBudget) takeAvailable() time.Duration { return 2 * time.Second } func (f *fakeTimeBudget) returnUnused(_ time.Duration) {} func TestDispatchEventWillNotBeBlockedByTimedOutWatcher(t *testing.T) { backingStorage := &dummyStorage{} cacher, _, err := newTestCacher(backingStorage)"} {"_id":"doc-en-kubernetes-21b4783472b08ad3a00a2feff617f4605830080373b61c438d6c91dea9c6f9a3","title":"","text":"cacher.ready.wait() // Ensure there is some budget for slowing down processing.
cacher.dispatchTimeoutBudget.returnUnused(50 * time.Millisecond) // When using the official `timeBudgetImpl` we were observing flakiness // under the following conditions: // 1) the watch w1 is blocked, so we were consuming the whole budget once // its buffer was filled in (10 items) // 2) the budget is refreshed once per second, so it basically wasn't // happening in the test at all // 3) if the test was cpu-starved and we weren't able to consume events // from w2 ResultCh it could have happened that its buffer was also // filling in and given we no longer had timeBudget (consumed in (1)) // trying to put next item was simply breaking the watch // Using fakeTimeBudget always gives us a budget to wait and have a test // pick up something from ResultCh in the meantime. cacher.dispatchTimeoutBudget = &fakeTimeBudget{} makePod := func(i int) *examplev1.Pod { return &examplev1.Pod{"} {"_id":"doc-en-kubernetes-b51a191c9e1fdc4b358eb239d446e9ceefe4354bcf7b3169b8e50f7f422f37a0","title":"","text":"shouldContinue = false break } // Ensure there is some budget for fast watcher after slower one is blocked. cacher.dispatchTimeoutBudget.returnUnused(50 * time.Millisecond) if event.Type == watch.Added { eventsCount++ if eventsCount == totalPods {"} {"_id":"doc-en-kubernetes-8ca5641233e503805e9ea599915314d81879ffcb5738caed6eb85b841d0d3cbf","title":"","text":"// NOTE: It's not recommended to be used concurrently from multiple threads - // if first user takes the whole timeout, the second one will get 0 timeout // even though the first one may return something later.
type timeBudget struct { type timeBudget interface { takeAvailable() time.Duration returnUnused(unused time.Duration) } type timeBudgetImpl struct { sync.Mutex budget time.Duration"} {"_id":"doc-en-kubernetes-695216a7754772536fb004371a4f10a72d9a5397a4c440da043d80655c9def4e","title":"","text":"maxBudget time.Duration } func newTimeBudget(stopCh <-chan struct{}) *timeBudget { result := &timeBudget{ func newTimeBudget(stopCh <-chan struct{}) timeBudget { result := &timeBudgetImpl{ budget: time.Duration(0), refresh: refreshPerSecond, maxBudget: maxBudget,"} {"_id":"doc-en-kubernetes-90e098cf8c873f7ca88ec3a21f2216252f7eb3faabb733d0a01bf844c94573c6","title":"","text":"return result } func (t *timeBudget) periodicallyRefresh(stopCh <-chan struct{}) { func (t *timeBudgetImpl) periodicallyRefresh(stopCh <-chan struct{}) { ticker := time.NewTicker(time.Second) defer ticker.Stop() for {"} {"_id":"doc-en-kubernetes-539582514ab4719fb943613bcf043849bd24a1c0f981db44bfb5006c0fa59541","title":"","text":"} } func (t *timeBudget) takeAvailable() time.Duration { func (t *timeBudgetImpl) takeAvailable() time.Duration { t.Lock() defer t.Unlock() result := t.budget"} {"_id":"doc-en-kubernetes-2b51c17b396d7ca9cb0541a9ab21e2a4d69ea8d621bf5543778e95a5b44d5292","title":"","text":"return result } func (t *timeBudget) returnUnused(unused time.Duration) { func (t *timeBudgetImpl) returnUnused(unused time.Duration) { t.Lock() defer t.Unlock() if unused < 0 {"} {"_id":"doc-en-kubernetes-3a533d75e0c39c10c04b1c8d3a6262b78c04ae47e5aff94584096b507a5efe0d","title":"","text":") func TestTimeBudget(t *testing.T) { budget := &timeBudget{ budget := &timeBudgetImpl{ budget: time.Duration(0), maxBudget: time.Duration(200), }"} {"_id":"doc-en-kubernetes-b424421a0b165deac5950ead5abeba9a738ab80fbe2ae3acd66a4e29ce963a1f","title":"","text":"shouldContinue = false } } case <-time.After(2 * time.Second): case <-time.After(wait.ForeverTestTimeout): shouldContinue = false w2.Stop() }"} 
{"_id":"doc-en-kubernetes-a80a2dfdabb9c3874e607c66dae917e93f8c64b78513aa27e8a38f87dec3552f","title":"","text":"setup-addon-manifests \"addons\" \"0-dns/nodelocaldns\" local -r localdns_file=\"${dst_dir}/0-dns/nodelocaldns/nodelocaldns.yaml\" setup-addon-custom-yaml \"addons\" \"0-dns/nodelocaldns\" \"nodelocaldns.yaml\" \"${CUSTOM_NODELOCAL_DNS_YAML:-}\" # Replace the sed configurations with variable values. sed -i -e \"s/_.*_DNS__DOMAIN__/${DNS_DOMAIN}/g\" \"${localdns_file}\" sed -i -e \"s/_.*_DNS__SERVER__/${DNS_SERVER_IP}/g\" \"${localdns_file}\" sed -i -e \"s/_.*_LOCAL__DNS__/${LOCAL_DNS_IP}/g\" \"${localdns_file}\" # eventually all the __PILLAR__ stuff will be gone, but they're still in nodelocaldns for backward compat. sed -i -e \"s/__PILLAR__DNS__DOMAIN__/${DNS_DOMAIN}/g\" \"${localdns_file}\" sed -i -e \"s/__PILLAR__DNS__SERVER__/${DNS_SERVER_IP}/g\" \"${localdns_file}\" sed -i -e \"s/__PILLAR__LOCAL__DNS__/${LOCAL_DNS_IP}/g\" \"${localdns_file}\" } # Sets up the manifests of netd for k8s addons."} {"_id":"doc-en-kubernetes-17e709e02b02debadba15e2815bd3f6d02c7313e25a1462a512b9fc30153034e","title":"","text":"function start_nodelocaldns { cp \"${KUBE_ROOT}/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml\" nodelocaldns.yaml # .* because of the __PILLAR__ references that eventually will be removed ${SED} -i -e \"s/_.*_DNS__DOMAIN__/${DNS_DOMAIN}/g\" nodelocaldns.yaml ${SED} -i -e \"s/_.*_DNS__SERVER__/${DNS_SERVER_IP}/g\" nodelocaldns.yaml ${SED} -i -e \"s/_.*_LOCAL__DNS__/${LOCAL_DNS_IP}/g\" nodelocaldns.yaml # eventually all the __PILLAR__ stuff will be gone, but they're still in nodelocaldns for backward compat.
${SED} -i -e \"s/__PILLAR__DNS__DOMAIN__/${DNS_DOMAIN}/g\" nodelocaldns.yaml ${SED} -i -e \"s/__PILLAR__DNS__SERVER__/${DNS_SERVER_IP}/g\" nodelocaldns.yaml ${SED} -i -e \"s/__PILLAR__LOCAL__DNS__/${LOCAL_DNS_IP}/g\" nodelocaldns.yaml # use kubectl to create nodelocaldns addon ${KUBECTL} --kubeconfig=\"${CERT_DIR}/admin.kubeconfig\" --namespace=kube-system create -f nodelocaldns.yaml echo \"NodeLocalDNS addon successfully deployed.\""} {"_id":"doc-en-kubernetes-224209f2d74abe06997c383300e2871ca10cf03edfe20c70355a74d1f152d8bb","title":"","text":" #!/usr/bin/env bash # Copyright 2020 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
set -o errexit set -o nounset set -o pipefail run_kubectl_debug_pod_tests() { set -o nounset set -o errexit create_and_use_new_namespace kube::log::status \"Testing kubectl debug (pod tests)\" ### Pod Troubleshooting by Copy # Pre-Condition: pod \"target\" is created kubectl run target \"--image=${IMAGE_NGINX:?}\" \"${kube_flags[@]:?}\" kube::test::get_object_assert pod \"{{range.items}}{{${id_field:?}}}:{{end}}\" 'target:' # Command: create a copy of target with a new debug container kubectl alpha debug target -it --copy-to=target-copy --image=busybox --container=debug-container --attach=false \"${kube_flags[@]:?}\" # Post-Conditions kube::test::get_object_assert pod \"{{range.items}}{{${id_field:?}}}:{{end}}\" 'target:target-copy:' kube::test::get_object_assert pod/target-copy '{{range.spec.containers}}{{.name}}:{{end}}' 'target:debug-container:' kube::test::get_object_assert pod/target-copy '{{range.spec.containers}}{{.image}}:{{end}}' \"${IMAGE_NGINX:?}:busybox:\" # Clean up kubectl delete pod target target-copy \"${kube_flags[@]:?}\" # Pre-Condition: pod \"target\" is created kubectl run target \"--image=${IMAGE_NGINX:?}\" \"${kube_flags[@]:?}\" kube::test::get_object_assert pod \"{{range.items}}{{${id_field:?}}}:{{end}}\" 'target:' # Command: create a copy of target with a new debug container replacing the previous pod kubectl alpha debug target -it --copy-to=target-copy --image=busybox --container=debug-container --attach=false --replace \"${kube_flags[@]:?}\" # Post-Conditions kube::test::get_object_assert pod \"{{range.items}}{{${id_field:?}}}:{{end}}\" 'target-copy:' kube::test::get_object_assert pod/target-copy '{{range.spec.containers}}{{.name}}:{{end}}' 'target:debug-container:' kube::test::get_object_assert pod/target-copy '{{range.spec.containers}}{{.image}}:{{end}}' \"${IMAGE_NGINX:?}:busybox:\" # Clean up kubectl delete pod target-copy \"${kube_flags[@]:?}\" # Pre-Condition: pod \"target\" is created kubectl run target \"--image=${IMAGE_NGINX:?}\"
\"${kube_flags[@]:?}\" kube::test::get_object_assert pod \"{{range.items}}{{${id_field:?}}}:{{end}}\" 'target:' kube::test::get_object_assert pod/target '{{(index .spec.containers 0).name}}' 'target' # Command: copy the pod and replace the image of an existing container kubectl alpha debug target --image=busybox --container=target --copy-to=target-copy \"${kube_flags[@]:?}\" -- sleep 1m # Post-Conditions kube::test::get_object_assert pod \"{{range.items}}{{${id_field:?}}}:{{end}}\" 'target:target-copy:' kube::test::get_object_assert pod/target-copy \"{{(len .spec.containers)}}:{{${image_field:?}}}\" '1:busybox' # Clean up kubectl delete pod target target-copy \"${kube_flags[@]:?}\" set +o nounset set +o errexit } run_kubectl_debug_node_tests() { set -o nounset set -o errexit create_and_use_new_namespace kube::log::status \"Testing kubectl debug (node tests)\" ### Node Troubleshooting by Privileged Container # Pre-Condition: node \"127.0.0.1\" exists kube::test::get_object_assert nodes \"{{range.items}}{{${id_field:?}}}:{{end}}\" '127.0.0.1:' # Command: create a new node debugger pod output_message=$(kubectl alpha debug node/127.0.0.1 --image=busybox --attach=false \"${kube_flags[@]:?}\" -- true) # Post-Conditions kube::test::get_object_assert pod \"{{(len .items)}}\" '1' debugger=$(kubectl get pod -o go-template=\"{{(index .items 0)${id_field:?}}}\") kube::test::if_has_string \"${output_message:?}\" \"${debugger:?}\" kube::test::get_object_assert \"pod/${debugger:?}\" \"{{${image_field:?}}}\" 'busybox' kube::test::get_object_assert \"pod/${debugger:?}\" '{{.spec.nodeName}}' '127.0.0.1' kube::test::get_object_assert \"pod/${debugger:?}\" '{{.spec.hostIPC}}' 'true' kube::test::get_object_assert \"pod/${debugger:?}\" '{{.spec.hostNetwork}}' 'true' kube::test::get_object_assert \"pod/${debugger:?}\" '{{.spec.hostPID}}' 'true' kube::test::get_object_assert \"pod/${debugger:?}\" '{{(index (index .spec.containers 0).volumeMounts 0).mountPath}}' '/host'
kube::test::get_object_assert \"pod/${debugger:?}\" '{{(index .spec.volumes 0).hostPath.path}}' '/' # Clean up # pod.spec.nodeName is set by kubectl debug node which causes the delete to hang, # presumably waiting for a kubelet that's not present. Force the delete. kubectl delete --force pod \"${debugger:?}\" \"${kube_flags[@]:?}\" set +o nounset set +o errexit } "} {"_id":"doc-en-kubernetes-b120513474cb43bd43433d677c53d294e0a2cb23dc279a6382bf371ca73467f5","title":"","text":"source \"${KUBE_ROOT}/test/cmd/core.sh\" source \"${KUBE_ROOT}/test/cmd/crd.sh\" source \"${KUBE_ROOT}/test/cmd/create.sh\" source \"${KUBE_ROOT}/test/cmd/debug.sh\" source \"${KUBE_ROOT}/test/cmd/delete.sh\" source \"${KUBE_ROOT}/test/cmd/diff.sh\" source \"${KUBE_ROOT}/test/cmd/discovery.sh\""} {"_id":"doc-en-kubernetes-69b8db6f07c877a925ebd4fd91117c05f8c92280c726a5cb477a114d042b647e","title":"","text":"record_command run_wait_tests #################### # kubectl debug # #################### if kube::test::if_supports_resource \"${pods}\" ; then record_command run_kubectl_debug_pod_tests fi if kube::test::if_supports_resource \"${nodes}\" ; then record_command run_kubectl_debug_node_tests fi cleanup_tests }"} {"_id":"doc-en-kubernetes-0dd50982f29f46d0c47282ac93b0714f9a2145385cdc298bd9986568e87dbf87","title":"","text":") destChain := svcXlbChain // We have to SNAT packets to external IPs if externalTrafficPolicy is cluster. // We have to SNAT packets to external IPs if externalTrafficPolicy is cluster // and the traffic is NOT Local. Local traffic coming from Pods and Nodes will // always be forwarded to the corresponding Service, so no need to SNAT // If we can't differentiate the local traffic we always SNAT. if !svcInfo.OnlyNodeLocalEndpoints() { destChain = svcChain writeLine(proxier.natRules, append(args, \"-j\", string(KubeMarkMasqChain))...) // This masquerades off-cluster traffic to an External IP.
if proxier.localDetector.IsImplemented() { writeLine(proxier.natRules, proxier.localDetector.JumpIfNotLocal(args, string(KubeMarkMasqChain))...) } else { writeLine(proxier.natRules, append(args, \"-j\", string(KubeMarkMasqChain))...) } } // Send traffic bound for external IPs to the service chain. writeLine(proxier.natRules, append(args, \"-j\", string(destChain))...) // Allow traffic for external IPs that does not come from a bridge (i.e. not from a container) // nor from a local process to be forwarded to the service. // This rule roughly translates to \"all traffic from off-machine\". // This is imperfect in the face of network plugins that might not use a bridge, but we can revisit that later. externalTrafficOnlyArgs := append(args, \"-m\", \"physdev\", \"!\", \"--physdev-is-in\", \"-m\", \"addrtype\", \"!\", \"--src-type\", \"LOCAL\") writeLine(proxier.natRules, append(externalTrafficOnlyArgs, \"-j\", string(destChain))...) dstLocalOnlyArgs := append(args, \"-m\", \"addrtype\", \"--dst-type\", \"LOCAL\") // Allow traffic bound for external IPs that happen to be recognized as local IPs to stay local. // This covers cases like GCE load-balancers which get added to the local routing table. writeLine(proxier.natRules, append(dstLocalOnlyArgs, \"-j\", string(destChain))...) } else { // No endpoints.
writeLine(proxier.filterRules,"} {"_id":"doc-en-kubernetes-cdad361c2c50c39029aa17a1993b3dc0d1a8d980e7d2815a25433bd78e725d58","title":"","text":"return nil } func testReachabilityOverExternalIP(externalIP string, sp v1.ServicePort, execPod *v1.Pod) error { return testEndpointReachability(externalIP, sp.Port, sp.Protocol, execPod) } func testReachabilityOverNodePorts(nodes *v1.NodeList, sp v1.ServicePort, pod *v1.Pod, clusterIP string) error { internalAddrs := e2enode.CollectAddresses(nodes, v1.NodeInternalIP) externalAddrs := e2enode.CollectAddresses(nodes, v1.NodeExternalIP)"} {"_id":"doc-en-kubernetes-e7398ac1f9914e0f2eab743d3b6ecc4cedc2f9891e15bcc09d1c25cf5af228fa","title":"","text":"func (j *TestJig) checkClusterIPServiceReachability(svc *v1.Service, pod *v1.Pod) error { clusterIP := svc.Spec.ClusterIP servicePorts := svc.Spec.Ports externalIPs := svc.Spec.ExternalIPs err := j.waitForAvailableEndpoint(ServiceEndpointsTimeout) if err != nil {"} {"_id":"doc-en-kubernetes-0bc494552dd1fe929ff1e7aff4df579562b06cb775f6cc456b7ba7c9c82669e1","title":"","text":"if err != nil { return err } if len(externalIPs) > 0 { for _, externalIP := range externalIPs { err = testReachabilityOverExternalIP(externalIP, servicePort, pod) if err != nil { return err } } } } return nil }"} {"_id":"doc-en-kubernetes-a9de0de187f28bac73daaefdb22d29bbf02df5b2e2e19c101a22e287974a2581","title":"","text":"framework.ExpectNoError(err) }) /* Create a ClusterIP service with an External IP that is not assigned to an interface. The IP ranges here are reserved for documentation according to [RFC 5737](https://tools.ietf.org/html/rfc5737) Section 3 and should not be used by any host. 
*/ ginkgo.It(\"should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node\", func() { serviceName := \"externalip-test\" ns := f.Namespace.Name externalIP := \"203.0.113.250\" if framework.TestContext.ClusterIsIPv6() { externalIP = \"2001:DB8::cb00:71fa\" } jig := e2eservice.NewTestJig(cs, ns, serviceName) ginkgo.By(\"creating service \" + serviceName + \" with type=clusterIP in namespace \" + ns) clusterIPService, err := jig.CreateTCPService(func(svc *v1.Service) { svc.Spec.Type = v1.ServiceTypeClusterIP svc.Spec.ExternalIPs = []string{externalIP} svc.Spec.Ports = []v1.ServicePort{ {Port: 80, Name: \"http\", Protocol: v1.ProtocolTCP, TargetPort: intstr.FromInt(9376)}, } }) framework.ExpectNoError(err) err = jig.CreateServicePods(2) framework.ExpectNoError(err) execPod := e2epod.CreateExecPodOrFail(cs, ns, \"execpod\", nil) err = jig.CheckServiceReachability(clusterIPService, execPod) framework.ExpectNoError(err) }) // TODO: Get rid of [DisabledForLargeClusters] tag when issue #56138 is fixed. ginkgo.It(\"should be able to change the type and ports of a service [Slow] [DisabledForLargeClusters]\", func() { // requires cloud load-balancer support"} {"_id":"doc-en-kubernetes-476c4c0972998fe2451b4e3367a925c1e278d2c80df2962647126557d9701642","title":"","text":" ## Contributor guidelines 1. Please read our [contributor guidelines](https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md). 1. See our [developer guide](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md). 1. Follow the instructions for [labeling and writing a release note for this PR](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/pull-requests.md#release-notes) in the block below. 
(_this would also contain updated detail later about this new template_) ```release-note * Use the release-note-* labels to set the release note state * Clear this block to use the PR title as the release note -OR- * Enter your extended release note here ``` "} {"_id":"doc-en-kubernetes-3268db52d8a2f8041d0d1a8d7b379ee615afc63d2222832bfc3fa574573b3e94","title":"","text":" ## Pull Request Guidelines 1. Please read our [contributor guidelines](https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md). 1. See our [developer guide](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/development.md). 1. Follow the instructions for [labeling and writing a release note for this PR](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/pull-requests.md#release-notes) in the block below. ```release-note * Use the release-note-* labels to set the release note state * Clear this block to use the PR title as the release note -OR- * Enter your extended release note here ``` [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/.github/PULL_REQUEST_TEMPLATE.md?pixel)]() "} {"_id":"doc-en-kubernetes-4c1e9b59d43d2db2622babf0ee503c21b58b822ca7486e0971e2f187eedb9474","title":"","text":"hostsFileContent = managedHostsFileContent(hostIPs, hostName, hostDomainName, hostAliases) } return os.WriteFile(fileName, hostsFileContent, 0644) hostsFilePerm := os.FileMode(0644) if err := os.WriteFile(fileName, hostsFileContent, hostsFilePerm); err != nil { return err } return os.Chmod(fileName, hostsFilePerm) } // nodeHostsFileContent reads the content of node's hosts file."} {"_id":"doc-en-kubernetes-66db5386d5a73eed6d3c114455e92a28b1abdf210671e1af1af06a3470ef9d2d","title":"","text":"fmt.Sprintf(\"%s::%s::%s\", f.Namespace.Name, pod1, \"busybox-container\"): boundedSample(10*e2evolume.Kb, 80*e2evolume.Mb), }), \"pod_cpu_usage_seconds_total\": gstruct.MatchElements(containerID, gstruct.IgnoreExtras, gstruct.Elements{ 
\"pod_cpu_usage_seconds_total\": gstruct.MatchElements(podID, gstruct.IgnoreExtras, gstruct.Elements{ fmt.Sprintf(\"%s::%s\", f.Namespace.Name, pod0): boundedSample(0, 100), fmt.Sprintf(\"%s::%s\", f.Namespace.Name, pod1): boundedSample(0, 100), }), \"pod_memory_working_set_bytes\": gstruct.MatchAllElements(containerID, gstruct.Elements{ \"pod_memory_working_set_bytes\": gstruct.MatchElements(podID, gstruct.IgnoreExtras, gstruct.Elements{ fmt.Sprintf(\"%s::%s\", f.Namespace.Name, pod0): boundedSample(10*e2evolume.Kb, 80*e2evolume.Mb), fmt.Sprintf(\"%s::%s\", f.Namespace.Name, pod1): boundedSample(10*e2evolume.Kb, 80*e2evolume.Mb), }),"} {"_id":"doc-en-kubernetes-1c7888aa19ca6dcfdc2e964f6ff52b119f3e198434841b1758af69fff91af64f","title":"","text":"return \"\" } func podID(element interface{}) string { el := element.(*model.Sample) return fmt.Sprintf(\"%s::%s\", el.Metric[\"namespace\"], el.Metric[\"pod\"]) } func containerID(element interface{}) string { el := element.(*model.Sample) return fmt.Sprintf(\"%s::%s::%s\", el.Metric[\"namespace\"], el.Metric[\"pod\"], el.Metric[\"container\"])"} {"_id":"doc-en-kubernetes-89382f994ae211d0ce576d253b9060b7448bc60d5f40c46308d774b6e602294e","title":"","text":"podUID := string(pod.UID) contName := container.Name needsReAllocate := false for k := range container.Resources.Limits { for k, v := range container.Resources.Limits { resource := string(k) if !m.isDevicePluginResource(resource) { if !m.isDevicePluginResource(resource) || v.Value() == 0 { continue } err := m.callPreStartContainerIfNeeded(podUID, contName, resource)"} {"_id":"doc-en-kubernetes-98ab66a7dabaa33462983b13ab4e757a22ceab4bc64bde46533d2c1ad41545b4","title":"","text":"as.Equal(len(runContainerOpts.Devices), len(expectedResp.Devices)) as.Equal(len(runContainerOpts.Mounts), len(expectedResp.Mounts)) as.Equal(len(runContainerOpts.Envs), len(expectedResp.Envs)) pod2 := makePod(v1.ResourceList{ v1.ResourceName(res1.resourceName): *resource.NewQuantity(int64(0), 
resource.DecimalSI)}) activePods = append(activePods, pod2) podsStub.updateActivePods(activePods) err = testManager.Allocate(pod2, &pod2.Spec.Containers[0]) as.Nil(err) _, err = testManager.GetDeviceRunContainerOptions(pod2, &pod2.Spec.Containers[0]) as.Nil(err) select { case <-time.After(time.Millisecond): t.Log(\"When pod resourceQuantity is 0, PreStartContainer RPC stub will be skipped\") case <-ch: break } } func TestResetExtendedResource(t *testing.T) {"} {"_id":"doc-en-kubernetes-660938dbcb689c10815d857f44eecac7a3d9fae6269cf6811e70ee267c5c7a60","title":"","text":"restoredState, err := NewCheckpointState(testingDir, testingCheckpoint, tc.policyName, tc.initialContainers) if err != nil { if strings.TrimSpace(tc.expectedError) != \"\" { tc.expectedError = \"could not restore state from checkpoint: \" + tc.expectedError if strings.HasPrefix(err.Error(), tc.expectedError) { if strings.Contains(err.Error(), \"could not restore state from checkpoint\") && strings.Contains(err.Error(), tc.expectedError) { t.Logf(\"got expected error: %v\", err) return }"} {"_id":"doc-en-kubernetes-3ff1b8d08100a08f3d19a06321feeddf479f63a137fb9563a2d32342ce18fd1e","title":"","text":"if !apierrors.IsBadRequest(err) { t.Errorf(\"expected HTTP status: BadRequest, got: %#v\", apierrors.ReasonForError(err)) } if err.Error() != expectedError { if !strings.Contains(err.Error(), expectedError) { t.Errorf(\"expected %#v, got %#v\", expectedError, err.Error()) } }"} {"_id":"doc-en-kubernetes-c9667454a72dbef147eaf74d68005f5c9c6db06ba9b7460f165ea628a3bdcdbc","title":"","text":"t.Errorf(\"%s: expect no error when applying json patch, but got %v\", test.name, err) continue } if err.Error() != test.expectedError { if !strings.Contains(err.Error(), test.expectedError) { t.Errorf(\"%s: expected error %v, but got %v\", test.name, test.expectedError, err) } if test.expectedErrorType != apierrors.ReasonForError(err) {"} 
{"_id":"doc-en-kubernetes-497eaace1e80105629b4df313bb48300a519f943ad476b33c965fe0906a44b21","title":"","text":"import ( \"fmt\" \"strings\" \"testing\" )"} {"_id":"doc-en-kubernetes-f286a1e96d66903fca4ee0789949116482edb1c5736ca29bf6dd3033dfb39405","title":"","text":"}, { message: \"{\", err: \"error stream protocol error: unexpected end of JSON input in \"{\"\", err: \"unexpected end of JSON input in \"{\"\", }, { message: `{\"status\": \"Success\" }`,"} {"_id":"doc-en-kubernetes-e21bdc5763b59871a954222e0bdac0c992772aab12ead9263a9ec2d15935431d","title":"","text":"if want == \"\" { want = \"\" } if got := fmt.Sprintf(\"%v\", err); got != want { if got := fmt.Sprintf(\"%v\", err); !strings.Contains(got, want) { t.Errorf(\"wrong error for message %q: want=%q, got=%q\", test.message, want, got) } }"} {"_id":"doc-en-kubernetes-b2d26e8fdf24bd363852be94473683dd76a378f7355c56dc40b59bba825eee56","title":"","text":"func setupInProbeMode(t *testing.T, devs []*pluginapi.Device, callback monitorCallback, socketName string, pluginSocketName string) (Manager, <-chan interface{}, *Stub, pluginmanager.PluginManager) { m, updateChan := setupDeviceManager(t, devs, callback, socketName) pm := setupPluginManager(t, pluginSocketName, m) p := setupDevicePlugin(t, devs, pluginSocketName) pm := setupPluginManager(t, pluginSocketName, m) return m, updateChan, p, pm }"} {"_id":"doc-en-kubernetes-6e0a05c53c652a5392e049071dfba6053c56bbd1625c28e246d072127162f5c5","title":"","text":"rc.RLock() defer rc.RUnlock() return rc.handlers var copyHandlers = make(map[string]cache.PluginHandler) for pluginType, handler := range rc.handlers { copyHandlers[pluginType] = handler } return copyHandlers } func (rc *reconciler) reconcile() {"} {"_id":"doc-en-kubernetes-4d5f45013f521509fc13f927f84cb91746955b92595dc602fbd953090c60afee","title":"","text":"} return notMnt, nil } // PathExists returns true if the specified path exists. 
// TODO: clean this up to use pkg/util/file/FileExists func PathExists(path string) (bool, error) { _, err := os.Stat(path) if err == nil { return true, nil } else if os.IsNotExist(err) { return false, nil } else if IsCorruptedMnt(err) { return true, err } return false, err } "} {"_id":"doc-en-kubernetes-53e51dd4dbace44d65ae77d0723d46165784a88f9206bbdd5d4f14a2e66bf5ab","title":"","text":"package mount import ( \"errors\" \"fmt\" \"io/fs\" \"os\" \"strconv\" \"strings\" \"syscall\" \"k8s.io/klog/v2\" utilio \"k8s.io/utils/io\" )"} {"_id":"doc-en-kubernetes-314244ba10360ed7e87fb7f76e509a4166312087972367d986bd68d6a0e2b187","title":"","text":"underlyingError = pe.Err case *os.SyscallError: underlyingError = pe.Err case syscall.Errno: underlyingError = err } return underlyingError == syscall.ENOTCONN || underlyingError == syscall.ESTALE || underlyingError == syscall.EIO || underlyingError == syscall.EACCES || underlyingError == syscall.EHOSTDOWN"} {"_id":"doc-en-kubernetes-bf2450c591d634a4bc0fd48fadc504018104465a2b65253d0c17024a0df2289a","title":"","text":"deletedDir := fmt.Sprintf(\"%s040(deleted)\", dir) return ((mp.Path == dir) || (mp.Path == deletedDir)) } // PathExists returns true if the specified path exists. // TODO: clean this up to use pkg/util/file/FileExists func PathExists(path string) (bool, error) { _, err := os.Stat(path) if err == nil { return true, nil } else if errors.Is(err, fs.ErrNotExist) { err = syscall.Access(path, syscall.F_OK) if err == nil { // The access syscall says the file exists, the stat syscall says it // doesn't. This was observed on CIFS when the path was removed at // the server somehow. POSIX calls this a stale file handle, let's fake // that error and treat the path as existing but corrupted. 
klog.Warningf(\"Potential stale file handle detected: %s\", path) return true, syscall.ESTALE } return false, nil } else if IsCorruptedMnt(err) { return true, err } return false, err } "} {"_id":"doc-en-kubernetes-38301c467e8e59bddaac6857fe5b586629be238c609d05fe2e6d99338529f3d7","title":"","text":"func isMountPointMatch(mp MountPoint, dir string) bool { return mp.Path == dir } // PathExists returns true if the specified path exists. // TODO: clean this up to use pkg/util/file/FileExists func PathExists(path string) (bool, error) { _, err := os.Stat(path) if err == nil { return true, nil } else if os.IsNotExist(err) { return false, nil } else if IsCorruptedMnt(err) { return true, err } return false, err } "} {"_id":"doc-en-kubernetes-10dcc33eb27f5a19af28b0257cc33a8229a063a9f5daf525f6d8148e87ad1e61","title":"","text":"} parentFD = childFD childFD = -1 } // Everything was created. mkdirat(..., perm) above was affected by current // umask and we must apply the right permissions to the last directory // (that's the one that will be available to the container as subpath) // so user can read/write it. This is the behavior of previous code. // TODO: chmod all created directories, not just the last one. // parentFD is the last created directory. // Everything was created. mkdirat(..., perm) above was affected by current // umask and we must apply the right permissions to all created directories // (including the one that will be available to the container as subpath) // so user can read/write it. // parentFD is the last created directory.
// Translate perm (os.FileMode) to uint32 that fchmod() expects kernelPerm := uint32(perm & os.ModePerm) if perm&os.ModeSetgid > 0 { kernelPerm |= syscall.S_ISGID } if perm&os.ModeSetuid > 0 { kernelPerm |= syscall.S_ISUID } if perm&os.ModeSticky > 0 { kernelPerm |= syscall.S_ISVTX } if err = syscall.Fchmod(parentFD, kernelPerm); err != nil { return fmt.Errorf(\"chmod %q failed: %s\", currentPath, err) // Translate perm (os.FileMode) to uint32 that fchmod() expects kernelPerm := uint32(perm & os.ModePerm) if perm&os.ModeSetgid > 0 { kernelPerm |= syscall.S_ISGID } if perm&os.ModeSetuid > 0 { kernelPerm |= syscall.S_ISUID } if perm&os.ModeSticky > 0 { kernelPerm |= syscall.S_ISVTX } if err = syscall.Fchmod(parentFD, kernelPerm); err != nil { return fmt.Errorf(\"chmod %q failed: %s\", currentPath, err) } } return nil }"} {"_id":"doc-en-kubernetes-31ef9545a786e531bc7ef7fce14bf30ffe1030526c3bdfcd8bad3c0b550c223b","title":"","text":"func TestSafeMakeDir(t *testing.T) { defaultPerm := os.FileMode(0750) + os.ModeDir maxPerm := os.FileMode(0777) + os.ModeDir tests := []struct { name string // Function that prepares directory structure for the test under given"} {"_id":"doc-en-kubernetes-f6428e1588a7bfa56bc58dd0985bfabefaca4d8ad493c8548ad0d13c48d978f2","title":"","text":"false, }, { \"all-created-subpath-directory-with-permissions\", func(base string) error { return nil }, \"test/directory\", \"test\", maxPerm, false, }, { \"directory-with-sgid\", func(base string) error { return nil"} {"_id":"doc-en-kubernetes-e7934ef5eadbab9f9295fd6601acf6989e5fa45c0f6864cee5aaed19d7f5979d","title":"","text":"ValidateOrFail(k8s, model, &TestCase{FromPort: 81, ToPort: 81, Protocol: v1.ProtocolTCP, Reachability: reachabilityPort81}) }) ginkgo.It(\"should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]\", func() { // x/a --> y/a and y/b // Egress allowed to y/a only. 
Egress to y/b should be blocked // Ingress on y/a and y/b allow traffic from x/a // Expectation: traffic from x/a to y/a allowed only, traffic from x/a to y/b denied by egress policy nsX, nsY, _, model, k8s := getK8SModel(f) // Building egress policy for x/a to y/a only allowedEgressNamespaces := &metav1.LabelSelector{ MatchLabels: map[string]string{ \"ns\": nsY, }, } allowedEgressPods := &metav1.LabelSelector{ MatchLabels: map[string]string{ \"pod\": \"a\", }, } egressPolicy := GetAllowEgressByNamespaceAndPod(\"allow-to-ns-y-pod-a\", map[string]string{\"pod\": \"a\"}, allowedEgressNamespaces, allowedEgressPods) CreatePolicy(k8s, egressPolicy, nsX) // Creating ingress policy to allow from x/a to y/a and y/b allowedIngressNamespaces := &metav1.LabelSelector{ MatchLabels: map[string]string{ \"ns\": nsX, }, } allowedIngressPods := &metav1.LabelSelector{ MatchLabels: map[string]string{ \"pod\": \"a\", }, } allowIngressPolicyPodA := GetAllowIngressByNamespaceAndPod(\"allow-from-xa-on-ya-match-selector\", map[string]string{\"pod\": \"a\"}, allowedIngressNamespaces, allowedIngressPods) allowIngressPolicyPodB := GetAllowIngressByNamespaceAndPod(\"allow-from-xa-on-yb-match-selector\", map[string]string{\"pod\": \"b\"}, allowedIngressNamespaces, allowedIngressPods) CreatePolicy(k8s, allowIngressPolicyPodA, nsY) CreatePolicy(k8s, allowIngressPolicyPodB, nsY) // While applying the policies, traffic needs to be allowed by both egress and ingress rules. // Egress rules only // \txa\txb\txc\tya\tyb\tyc\tza\tzb\tzc // xa\tX\tX\tX\t.\t*X*\tX\tX\tX\tX // xb\t.\t.\t.\t.\t.\t.\t.\t.\t. // xc\t.\t.\t.\t.\t.\t.\t.\t.\t. // ya\t.\t.\t.\t.\t.\t.\t.\t.\t. // yb\t.\t.\t.\t.\t.\t.\t.\t.\t. // yc\t.\t.\t.\t.\t.\t.\t.\t.\t. // za\t.\t.\t.\t.\t.\t.\t.\t.\t. // zb\t.\t.\t.\t.\t.\t.\t.\t.\t. // zc\t.\t.\t.\t.\t.\t.\t.\t.\t. // Ingress rules only // \txa\txb\txc\tya\tyb\tyc\tza\tzb\tzc // xa\t.\t.\t.\t*.*\t.\t.\t.\t.\t. // xb\t.\t.\tX\tX\t.\t.\t.\t.\t. // xc\t.\t.\tX\tX\t.\t.\t.\t.\t. 
// ya\t.\t.\tX\tX\t.\t.\t.\t.\t. // yb\t.\t.\tX\tX\t.\t.\t.\t.\t. // yc\t.\t.\tX\tX\t.\t.\t.\t.\t. // za\t.\t.\tX\tX\t.\t.\t.\t.\t. // zb\t.\t.\tX\tX\t.\t.\t.\t.\t. // zc\t.\t.\tX\tX\t.\t.\t.\t.\t. // In the resulting truth table, connections from x/a should only be allowed to y/a. x/a to y/b should be blocked by the egress on x/a. // Expected results // \txa\txb\txc\tya\tyb\tyc\tza\tzb\tzc // xa\tX\tX\tX\t.\t*X*\tX\tX\tX\tX // xb\t.\t.\t.\tX\tX\t.\t.\t.\t. // xc\t.\t.\t.\tX\tX\t.\t.\t.\t. // ya\t.\t.\t.\tX\tX\t.\t.\t.\t. // yb\t.\t.\t.\tX\tX\t.\t.\t.\t. // yc\t.\t.\t.\tX\tX\t.\t.\t.\t. // za\t.\t.\t.\tX\tX\t.\t.\t.\t. // zb\t.\t.\t.\tX\tX\t.\t.\t.\t. // zc\t.\t.\t.\tX\tX\t.\t.\t.\t. reachability := NewReachability(model.AllPods(), true) // Default all traffic flows. // Exception: x/a can only egress to y/a, others are false // Exception: y/a can only allow ingress from x/a, others are false // Exception: y/b has no allowed traffic (due to limit on x/a egress) reachability.ExpectPeer(&Peer{Namespace: nsX, Pod: \"a\"}, &Peer{}, false) reachability.ExpectPeer(&Peer{}, &Peer{Namespace: nsY, Pod: \"a\"}, false) reachability.ExpectPeer(&Peer{Namespace: nsX, Pod: \"a\"}, &Peer{Namespace: nsY, Pod: \"a\"}, true) reachability.ExpectPeer(&Peer{}, &Peer{Namespace: nsY, Pod: \"b\"}, false) ValidateOrFail(k8s, model, &TestCase{FromPort: 81, ToPort: 80, Protocol: v1.ProtocolTCP, Reachability: reachability}) }) ginkgo.It(\"should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]\", func() { nsX, nsY, _, model, k8s := getK8SModel(f) allowedNamespaces := &metav1.LabelSelector{"} {"_id":"doc-en-kubernetes-e6067591872630dbb5a04be38754512fc6afa9fe34f3b60fd9e51033a854ac59","title":"","text":"REGISTRY ?= staging-k8s.gcr.io IMAGE = $(REGISTRY)/pause IMAGE_WITH_OS_ARCH = $(IMAGE)-$(OS)-$(ARCH) TAG = 3.4 TAG = 3.4.1 REV = $(shell git describe --contains --always --match='v*') # Architectures 
supported: amd64, arm, arm64, ppc64le and s390x"} {"_id":"doc-en-kubernetes-1a27181ce8bcabc10cd8b884d1d599077cb3710cf103bcfba90de32d36958148","title":"","text":"all-push: all-container-registry push-manifest push-manifest: docker manifest create --amend $(IMAGE):$(TAG) $(shell echo $(ALL_OS_ARCH) | sed -e \"s~[^ ]*~$(IMAGE)-&:$(TAG)~g\") set -x; for arch in $(ALL_ARCH.linux); do docker manifest annotate --os linux --arch $${arch} ${IMAGE}:${TAG} ${IMAGE}-linux-$${arch}:${TAG}; done docker manifest create --amend $(IMAGE):$(TAG) $(shell echo $(ALL_OS_ARCH) | sed -e \"s~[^ ]*~$(IMAGE):$(TAG)-&~g\") set -x; for arch in $(ALL_ARCH.linux); do docker manifest annotate --os linux --arch $${arch} ${IMAGE}:${TAG} ${IMAGE}:${TAG}-linux-$${arch}; done # For Windows images, we also need to include the \"os.version\" in the manifest list, so the Windows node can pull the proper image it needs. # At the moment, docker manifest annotate doesn't allow us to set the os.version, so we'll have to do it ourselves. The manifest list can be found locally as JSONs.
# See: https://github.com/moby/moby/issues/41417"} {"_id":"doc-en-kubernetes-6466db1a79003cb7cedbb76c5a5274111db4f0008e5bf13ad23b2b804f84f2eb","title":"","text":"manifest_image_folder=`echo \"$${registry_prefix}${IMAGE}\" | sed \"s|/|_|g\" | sed \"s/:/-/\"`; for arch in $(ALL_ARCH.windows); do for osversion in ${ALL_OSVERSIONS.windows}; do docker manifest annotate --os windows --arch $${arch} ${IMAGE}:${TAG} ${IMAGE}-windows-$${arch}-$${osversion}:${TAG}; docker manifest annotate --os windows --arch $${arch} ${IMAGE}:${TAG} ${IMAGE}:${TAG}-windows-$${arch}-$${osversion}; BASEIMAGE=${BASE.windows}:$${osversion}; full_version=`docker manifest inspect ${BASE.windows}:$${osversion} | grep \"os.version\" | head -n 1 | awk '{print $$2}'` || true; sed -i -r \"s/(\"os\":\"windows\")/0,\"os.version\":$${full_version}/\" \"${HOME}/.docker/manifests/$${manifest_image_folder}-${TAG}/$${manifest_image_folder}-windows-$${arch}-$${osversion}-${TAG}\"; sed -i -r \"s/(\"os\":\"windows\")/0,\"os.version\":$${full_version}/\" \"${HOME}/.docker/manifests/$${manifest_image_folder}-${TAG}/$${manifest_image_folder}-${TAG}-windows-$${arch}-$${osversion}\"; done; done docker manifest push --purge ${IMAGE}:${TAG}"} {"_id":"doc-en-kubernetes-2f75e46c3fc4e4605b140fbf800a627ccd1f255facf14488b4dac200f7572a22","title":"","text":"container: .container-${OS}-$(ARCH) .container-linux-$(ARCH): bin/$(BIN)-$(OS)-$(ARCH) docker buildx build --pull --output=type=${OUTPUT_TYPE} --platform ${OS}/$(ARCH) -t $(IMAGE_WITH_OS_ARCH):$(TAG) --build-arg BASE=${BASE} --build-arg ARCH=$(ARCH) . -t $(IMAGE):$(TAG)-${OS}-$(ARCH) --build-arg BASE=${BASE} --build-arg ARCH=$(ARCH) . touch $@ .container-windows-$(ARCH): $(foreach binary, ${BIN}, bin/${binary}-${OS}-${ARCH}) docker buildx build --pull --output=type=${OUTPUT_TYPE} --platform ${OS}/$(ARCH) -t $(IMAGE_WITH_OS_ARCH)-${OSVERSION}:$(TAG) --build-arg BASE=${BASE}:${OSVERSION} --build-arg ARCH=$(ARCH) -f Dockerfile_windows . 
-t $(IMAGE):$(TAG)-${OS}-$(ARCH)-${OSVERSION} --build-arg BASE=${BASE}:${OSVERSION} --build-arg ARCH=$(ARCH) -f Dockerfile_windows . touch $@ # Useful for testing, not automatically included in container image"} {"_id":"doc-en-kubernetes-f860c6c560dcdb33a13ba631c255ba3daeedf941a07fc253a9871983eba50388","title":"","text":"description: Create a StatefulSet resource. Newly created StatefulSet resource MUST have a scale of one. Bring the scale of the StatefulSet resource up to two. StatefulSet scale MUST be at two replicas. release: v1.16 release: v1.16, v1.21 file: test/e2e/apps/statefulset.go - testname: StatefulSet, Rolling Update with Partition codename: '[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]"} {"_id":"doc-en-kubernetes-7fda540bd0d703609a9fdc95a7a4e4d787b0d9bc2561c6fec1e1a029ef380091","title":"","text":"import ( \"context\" \"encoding/json\" \"fmt\" \"strings\" \"sync\""} {"_id":"doc-en-kubernetes-2d9c12829852a49465d3dd8e7d26ccaad413cfc2e79d05f9fefefedd74e5a822","title":"","text":"\"github.com/onsi/gomega\" appsv1 \"k8s.io/api/apps/v1\" autoscalingv1 \"k8s.io/api/autoscaling/v1\" v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/fields\""} {"_id":"doc-en-kubernetes-900e329b7d0e0c0989d866f858b1a4a12a1ca99375c9d28f9a9979e9dfb342c8","title":"","text":"}) /* Release: v1.16 Release: v1.16, v1.21 Testname: StatefulSet resource Replica scaling Description: Create a StatefulSet resource. 
Newly created StatefulSet resource MUST have a scale of one."} {"_id":"doc-en-kubernetes-b8c6c6db3f30040287c89a4b45b532cada1c12debc7ec94f5231df5036085712","title":"","text":"framework.Failf(\"Failed to get statefulset resource: %v\", err) } framework.ExpectEqual(*(ss.Spec.Replicas), int32(2)) ginkgo.By(\"Patch a scale subresource\") scale.ResourceVersion = \"\" // indicate the scale update should be unconditional scale.Spec.Replicas = 4 // should be 2 after \"UpdateScale\" operation, now Patch to 4 ssScalePatchPayload, err := json.Marshal(autoscalingv1.Scale{ Spec: autoscalingv1.ScaleSpec{ Replicas: scale.Spec.Replicas, }, }) framework.ExpectNoError(err, \"Could not Marshal JSON for patch payload\") _, err = c.AppsV1().StatefulSets(ns).Patch(context.TODO(), ssName, types.StrategicMergePatchType, []byte(ssScalePatchPayload), metav1.PatchOptions{}, \"scale\") framework.ExpectNoError(err, \"Failed to patch stateful set: %v\", err) ginkgo.By(\"verifying the statefulset Spec.Replicas was modified\") ss, err = c.AppsV1().StatefulSets(ns).Get(context.TODO(), ssName, metav1.GetOptions{}) framework.ExpectNoError(err, \"Failed to get statefulset resource: %v\", err) framework.ExpectEqual(*(ss.Spec.Replicas), int32(4), \"statefulset should have 4 replicas\") }) })"} {"_id":"doc-en-kubernetes-f9d36f2b580349bd706c86df4afb2f7145d7314cb1ce61a1c3538d570135adc1","title":"","text":"// Initialize CPU manager if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.CPUManager) { containerMap, err := buildContainerMapFromRuntime(runtimeService) if err != nil { return fmt.Errorf(\"failed to build map of initial containers from runtime: %v\", err) } err = cm.cpuManager.Start(cpumanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) containerMap := buildContainerMapFromRuntime(runtimeService) err := cm.cpuManager.Start(cpumanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) if err != nil { return 
fmt.Errorf(\"start cpu manager error: %v\", err) }"} {"_id":"doc-en-kubernetes-39c680af728bf7303a666067244cfa3046879a011009dbc0dd954838f172d759","title":"","text":"// Initialize memory manager if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.MemoryManager) { containerMap, err := buildContainerMapFromRuntime(runtimeService) if err != nil { return fmt.Errorf(\"failed to build map of initial containers from runtime: %v\", err) } err = cm.memoryManager.Start(memorymanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) containerMap := buildContainerMapFromRuntime(runtimeService) err := cm.memoryManager.Start(memorymanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) if err != nil { return fmt.Errorf(\"start memory manager error: %v\", err) }"} {"_id":"doc-en-kubernetes-5998c5b703858219c4e15088ee5be6086da32e7d27e9b72edb290426616e122c","title":"","text":"} } func buildContainerMapFromRuntime(runtimeService internalapi.RuntimeService) (containermap.ContainerMap, error) { func buildContainerMapFromRuntime(runtimeService internalapi.RuntimeService) containermap.ContainerMap { podSandboxMap := make(map[string]string) podSandboxList, _ := runtimeService.ListPodSandbox(nil) for _, p := range podSandboxList {"} {"_id":"doc-en-kubernetes-f89229e84df458218cd833a7cd75eb300245bac17e86b5773f2e1e6088a249e0","title":"","text":"containerList, _ := runtimeService.ListContainers(nil) for _, c := range containerList { if _, exists := podSandboxMap[c.PodSandboxId]; !exists { return nil, fmt.Errorf(\"no PodsandBox found with Id '%s' for container with ID '%s' and Name '%s'\", c.PodSandboxId, c.Id, c.Metadata.Name) klog.InfoS(\"no PodSandBox found for the container\", \"podSandboxId\", c.PodSandboxId, \"containerName\", c.Metadata.Name, \"containerId\", c.Id) continue } containerMap.Add(podSandboxMap[c.PodSandboxId], c.Metadata.Name, c.Id) } return containerMap, nil return containerMap } func 
isProcessRunningInHost(pid int) (bool, error) {"} {"_id":"doc-en-kubernetes-fd22047972ce572eb059f1c19db7b9072555918440d247d8f5829932cb870413","title":"","text":"# # $1 - server architecture kube::build::get_docker_wrapped_binaries() { local debian_iptables_version=buster-v1.4.0 local debian_iptables_version=buster-v1.5.0 local go_runner_version=buster-v2.2.4 ### If you change any of these lists, please also update DOCKERIZED_BINARIES ### in build/BUILD. And kube::golang::server_image_targets"} {"_id":"doc-en-kubernetes-b801831883f01477a39d432c9e77dc4213e98ee0bdf60355e9f382018e89fd93","title":"","text":"# Base images - name: \"k8s.gcr.io/debian-base: dependents\" version: buster-v1.3.0 version: buster-v1.4.0 refPaths: - path: build/workspace.bzl match: tag ="} {"_id":"doc-en-kubernetes-5f2fb841329d3cccb57bb3ba72f0a542792802bf1f1c84cc3829c59e5488753f","title":"","text":"match: BASEIMAGE?=k8s.gcr.io/build-image/debian-base-s390x:[a-zA-Z]+-v((([0-9]+).([0-9]+).([0-9]+)(?:-([0-9a-zA-Z-]+(?:.[0-9a-zA-Z-]+)*))?)(?:+([0-9a-zA-Z-]+(?:.[0-9a-zA-Z-]+)*))?) 
- name: \"k8s.gcr.io/debian-iptables: dependents\" version: buster-v1.4.0 version: buster-v1.5.0 refPaths: - path: build/common.sh match: debian_iptables_version="} {"_id":"doc-en-kubernetes-acc4196190eea24ea44133d74f1897487191bf3099e4a941a823dfe8a63b45f2","title":"","text":"# Use skopeo to find these values: https://github.com/containers/skopeo # # Example # Manifest: skopeo inspect docker://gcr.io/k8s-staging-build-image/debian-base:buster-v1.3.0 # Arches: skopeo inspect --raw docker://gcr.io/k8s-staging-build-image/debian-base:buster-v1.3.0 # Manifest: skopeo inspect docker://gcr.io/k8s-staging-build-image/debian-base:buster-v1.4.0 # Arches: skopeo inspect --raw docker://gcr.io/k8s-staging-build-image/debian-base:buster-v1.4.0 _DEBIAN_BASE_DIGEST = { \"manifest\": \"sha256:d66137c7c362d1026dca670d1ff4c25e5b0770e8ace87ac3d008d52e4b0db338\", \"amd64\": \"sha256:a5ab028d9a730b78af9abb15b5db9b2e6f82448ab269d6f3a07d1834c571ccc6\", \"arm\": \"sha256:94e611363760607366ca1fed9375105b6c5fc922ab1249869b708690ca13733c\", \"arm64\": \"sha256:83512c52d44587271cd0f355c0a9a7e6c2412ddc66b8a8eb98f994277297a72f\", \"ppc64le\": \"sha256:9c8284b2797b114ebe8f3f1b2b5817a9c7f07f3f82513c49a30e6191a1acc1fc\", \"s390x\": \"sha256:d617637dd4df0bc1cfa524fae3b4892cfe57f7fec9402ad8dfa28e38e82ec688\", \"manifest\": \"sha256:36652ef8e4dd6715de02e9b68e5c122ed8ee06c75f83f5c574b97301e794c3fb\", \"amd64\": \"sha256:afff10fcd513483e492807f8d934bdf0be4a237997f55e0f1f8e34c04a6cb213\", \"arm\": \"sha256:27e6e66ea3c4c4ca6dbfc8c949f0c4c870f038f4500fd267c242422a244f233c\", \"arm64\": \"sha256:4333a5edc9ce6d6660c76104749c2e50e6158e57c8e5956f732991bb032a8ce1\", \"ppc64le\": \"sha256:01a0ba2645883ea8d985460c2913070a90a098056cc6d188122942678923ddb7\", \"s390x\": \"sha256:610526b047d4b528d9e14b4f15347aa4e37af0c47e1307a2f7aebf8745c8a323\", } # Use skopeo to find these values: https://github.com/containers/skopeo # # Example # Manifest: skopeo inspect 
docker://gcr.io/k8s-staging-build-image/debian-iptables:buster-v1.4.0 # Arches: skopeo inspect --raw docker://gcr.io/k8s-staging-build-image/debian-iptables:buster-v1.4.0 # Manifest: skopeo inspect docker://gcr.io/k8s-staging-build-image/debian-iptables:buster-v1.5.0 # Arches: skopeo inspect --raw docker://gcr.io/k8s-staging-build-image/debian-iptables:buster-v1.5.0 _DEBIAN_IPTABLES_DIGEST = { \"manifest\": \"sha256:87f97cf2b62eb107871ee810f204ccde41affb70b29883aa898e93df85dea0f0\", \"amd64\": \"sha256:da837f39cf3af78adb796c0caa9733449ae99e51cf624590c328e4c9951ace7a\", \"arm\": \"sha256:bb6677337a4dbc3e578a3e87642d99be740dea391dc5e8987f04211c5e23abcd\", \"arm64\": \"sha256:6ad4717d69db2cc47bc2efc91cebb96ba736be1de49e62e0deffdbaf0fa2318c\", \"ppc64le\": \"sha256:168ccfeb861239536826a26da24ab5f68bb5349d7439424b7008b01e8f6534fc\", \"s390x\": \"sha256:5a88d4f4c29bac5b5c93195059b928f7346be11d0f0f7f6da0e14c0bfdbd1362\", \"manifest\": \"sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb\", \"amd64\": \"sha256:b4b8b1e0d4617011dd03f20b804cc2e50bf48bafc36b1c8c7bd23fd44bfd641e\", \"arm\": \"sha256:09f79b3a00268705a8f8462f1528fed536e204905359f21e9965f08dd306c60a\", \"arm64\": \"sha256:b4fa11965f34a9f668c424b401c0af22e88f600d22c899699bdb0bd1e6953ad6\", \"ppc64le\": \"sha256:0ea0be4dec281b506f6ceef4cb3594cabea8d80e2dc0d93c7eb09d46259dd837\", \"s390x\": \"sha256:50ef25fba428b6002ef0a9dea7ceae5045430dc1035d50498a478eefccba17f5\", } # Use skopeo to find these values: https://github.com/containers/skopeo"} {"_id":"doc-en-kubernetes-92618f9008bfb67b8195de8208c30a4726f24c48504df0cd40f940aeab112be8","title":"","text":"registry = \"k8s.gcr.io/build-image\", repository = \"debian-base\", # Ensure the digests above are updated to match a new tag tag = \"buster-v1.3.0\", # ignored, but kept here for documentation tag = \"buster-v1.4.0\", # ignored, but kept here for documentation ) container_pull("} 
{"_id":"doc-en-kubernetes-dfa1f907939ed4f08b1d3a4a0f5678850ca45f6291ee7ef3472b32ac5adbf122","title":"","text":"registry = \"k8s.gcr.io/build-image\", repository = \"debian-iptables\", # Ensure the digests above are updated to match a new tag tag = \"buster-v1.4.0\", # ignored, but kept here for documentation tag = \"buster-v1.5.0\", # ignored, but kept here for documentation ) def etcd_tarballs():"} {"_id":"doc-en-kubernetes-5c1cc06492bafc26d6777b37484b3d518c341f6ad9a09f3a800dc2adccfbc3b8","title":"","text":" # REVISION provides a version number for this image and all its bundled # artifacts. It should start at zero for each LATEST_ETCD_VERSION and increment # for each revision of this image at that etcd version. REVISION?=2 REVISION?=3 # IMAGE_TAG Uniquely identifies k8s.gcr.io/etcd docker image with a tag of the form \"-\". IMAGE_TAG=$(LATEST_ETCD_VERSION)-$(REVISION)"} {"_id":"doc-en-kubernetes-6c4e91a988d6c2cf43075e523b28868790ea0987e2c64d256bbdaa1824540e42","title":"","text":"TEMP_DIR:=$(shell mktemp -d) ifeq ($(ARCH),amd64) BASEIMAGE?=k8s.gcr.io/build-image/debian-base:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base:buster-v1.4.0 endif ifeq ($(ARCH),arm) BASEIMAGE?=k8s.gcr.io/build-image/debian-base-arm:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base-arm:buster-v1.4.0 endif ifeq ($(ARCH),arm64) BASEIMAGE?=k8s.gcr.io/build-image/debian-base-arm64:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base-arm64:buster-v1.4.0 endif ifeq ($(ARCH),ppc64le) BASEIMAGE?=k8s.gcr.io/build-image/debian-base-ppc64le:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base-ppc64le:buster-v1.4.0 endif ifeq ($(ARCH),s390x) BASEIMAGE?=k8s.gcr.io/build-image/debian-base-s390x:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base-s390x:buster-v1.4.0 endif RUNNERIMAGE?=gcr.io/distroless/static:latest"} {"_id":"doc-en-kubernetes-4b575b1c0df097d3d6bbe8f23323aa3c062c69815f9ca6c1e1d569975ca2c8d1","title":"","text":"configs[CheckMetadataConcealment]
= Config{e2eRegistry, \"metadata-concealment\", \"1.2\"} configs[CudaVectorAdd] = Config{e2eRegistry, \"cuda-vector-add\", \"1.0\"} configs[CudaVectorAdd2] = Config{e2eRegistry, \"cuda-vector-add\", \"2.0\"} configs[DebianIptables] = Config{buildImageRegistry, \"debian-iptables\", \"buster-v1.4.0\"} configs[DebianIptables] = Config{buildImageRegistry, \"debian-iptables\", \"buster-v1.5.0\"} configs[EchoServer] = Config{e2eRegistry, \"echoserver\", \"2.2\"} configs[Etcd] = Config{gcRegistry, \"etcd\", \"3.4.13-0\"} configs[GlusterDynamicProvisioner] = Config{dockerGluster, \"glusterdynamic-provisioner\", \"v1.0\"}"} {"_id":"doc-en-kubernetes-02bdd479102388cc7f3b5299a6d1947e4f12ea431ee54b6de00ec86127673481","title":"","text":"for k, v := range labels { pv.Labels[k] = v var values []string if k == v1.LabelFailureDomainBetaZone { if k == v1.LabelTopologyZone || k == v1.LabelFailureDomainBetaZone { values, err = volumehelpers.LabelZonesToList(v) if err != nil { return nil, fmt.Errorf(\"failed to convert label string for Zone: %s to a List: %v\", v, err)"} {"_id":"doc-en-kubernetes-080cce53daed321928ef9a9b17bb84ac6fdd8960a370be2aeba98dd0ac09ad3a","title":"","text":"translator := NewvSphereCSITranslator() topologySelectorTerm := v1.TopologySelectorTerm{[]v1.TopologySelectorLabelRequirement{ { Key: v1.LabelTopologyZone, Values: []string{\"zone-a\"}, }, { Key: v1.LabelTopologyRegion, Values: []string{\"region-a\"}, }, }} topologySelectorTermWithBetaLabels := v1.TopologySelectorTerm{[]v1.TopologySelectorLabelRequirement{ { Key: v1.LabelFailureDomainBetaZone, Values: []string{\"zone-a\"}, },"} {"_id":"doc-en-kubernetes-f2463c6f416fd9bb47e7dc54afea340ab49dc3dbef1692321c72d860cebebfa5","title":"","text":"expSc: NewStorageClass(map[string]string{\"storagepolicyname\": \"test-policy-name\", paramcsiMigration: \"true\"}, []v1.TopologySelectorTerm{topologySelectorTerm}), }, { name: \"translate with storagepolicyname and allowedTopology beta labels\", sc: 
NewStorageClass(map[string]string{\"storagepolicyname\": \"test-policy-name\"}, []v1.TopologySelectorTerm{topologySelectorTermWithBetaLabels}), expSc: NewStorageClass(map[string]string{\"storagepolicyname\": \"test-policy-name\", paramcsiMigration: \"true\"}, []v1.TopologySelectorTerm{topologySelectorTermWithBetaLabels}), }, { name: \"translate with raw vSAN policy parameters, datastore and diskformat\", sc: NewStorageClass(map[string]string{\"hostfailurestotolerate\": \"2\", \"datastore\": \"vsanDatastore\", \"diskformat\": \"thin\"}, []v1.TopologySelectorTerm{topologySelectorTerm}), expSc: NewStorageClass(map[string]string{\"hostfailurestotolerate-migrationparam\": \"2\", \"datastore-migrationparam\": \"vsanDatastore\", \"diskformat-migrationparam\": \"thin\", paramcsiMigration: \"true\"}, []v1.TopologySelectorTerm{topologySelectorTerm}),"} {"_id":"doc-en-kubernetes-92489d610f7f6c659ee9a228813494e3bfe1d807aade79da58caf808f8b53874","title":"","text":"} } // Get the node zone information nodeFd := node.ObjectMeta.Labels[v1.LabelFailureDomainBetaZone] nodeRegion := node.ObjectMeta.Labels[v1.LabelFailureDomainBetaRegion] nodeFd := node.ObjectMeta.Labels[v1.LabelTopologyZone] nodeRegion := node.ObjectMeta.Labels[v1.LabelTopologyRegion] nodeZone := &cloudprovider.Zone{FailureDomain: nodeFd, Region: nodeRegion} nodeInfo := &NodeInfo{dataCenter: res.datacenter, vm: vm, vcServer: res.vc, vmUUID: nodeUUID, zone: nodeZone} nm.addNodeInfo(node.ObjectMeta.Name, nodeInfo)"} {"_id":"doc-en-kubernetes-fe295468090b77d20925042596cd13d991b8f07c0a754840f2fe9436420e3864","title":"","text":"// FIXME: For now, pick the first zone of datastore as the zone of volume labels := make(map[string]string) if len(dsZones) > 0 { labels[v1.LabelFailureDomainBetaRegion] = dsZones[0].Region labels[v1.LabelFailureDomainBetaZone] = dsZones[0].FailureDomain labels[v1.LabelTopologyRegion] = dsZones[0].Region labels[v1.LabelTopologyZone] = dsZones[0].FailureDomain } return labels, nil }"} 
{"_id":"doc-en-kubernetes-0d25c393fbd569107602a8ba2f8611d98068affb180db7970873ff7fd831dc92","title":"","text":"term := v1.TopologySelectorTerm{ MatchLabelExpressions: []v1.TopologySelectorLabelRequirement{ { Key: v1.LabelFailureDomainBetaZone, Key: v1.LabelTopologyZone, Values: zones, }, },"} {"_id":"doc-en-kubernetes-227c45bc91e95d461bf1a73c563edd2e596b4c356a6c9fd629d5fe595c44623a","title":"","text":"zones = append(zones, zoneA) nodeSelectorMap := map[string]string{ // nodeSelector set as zoneB v1.LabelFailureDomainBetaZone: zoneB, v1.LabelTopologyZone: zoneB, } verifyPodSchedulingFails(client, namespace, nodeSelectorMap, scParameters, zones, storagev1.VolumeBindingWaitForFirstConsumer) })"} {"_id":"doc-en-kubernetes-11a3b2238eaf26655289df8bca981b12791a3530983e674247a0880759afff94","title":"","text":"ginkgo.By(\"Verify zone information is present in the volume labels\") for _, pv := range persistentvolumes { // Multiple zones are separated with \"__\" pvZoneLabels := strings.Split(pv.ObjectMeta.Labels[\"failure-domain.beta.kubernetes.io/zone\"], \"__\") pvZoneLabels := strings.Split(pv.ObjectMeta.Labels[v1.LabelTopologyZone], \"__\") for _, zone := range zones { gomega.Expect(pvZoneLabels).Should(gomega.ContainElement(zone), \"Incorrect or missing zone labels in pv.\") }"} {"_id":"doc-en-kubernetes-17cd28d52a52dae457980210513e44b65fbbc4050d1f2a37fa062f99f688f2d6","title":"","text":"case err := <-injectedError: return nil, err } // Never get here. return nil, nil }"} {"_id":"doc-en-kubernetes-ba22ea670240b30b8506f7c544e52e154fb237babef124befb10e3e9ce35de5e","title":"","text":"import ( \"context\" \"strings\" \"time\" \"github.com/Microsoft/hcsshim\""} {"_id":"doc-en-kubernetes-d0fa5f3c89e6a9a267d1e6acb2f371adbd4cf5cb32208496d4ce88a71ca3ba8f","title":"","text":"// That will typically happen with init-containers in Exited state. Docker still knows about them but the HCS does not. // As we don't want to block stats retrieval for other containers, we only log errors. 
if !hcsshim.IsNotExist(err) && !hcsshim.IsAlreadyStopped(err) { klog.Errorf(\"Error opening container (stats will be missing) '%s': %v\", containerID, err) klog.V(4).Infof(\"Error opening container (stats will be missing) '%s': %v\", containerID, err) } return nil, nil }"} {"_id":"doc-en-kubernetes-bf0a1deb47cbe99659b29921e99fbdda1d40b752f9109e98ea6eb2bf823f78ac","title":"","text":"stats, err := hcsshimContainer.Statistics() if err != nil { if strings.Contains(err.Error(), \"0x5\") || strings.Contains(err.Error(), \"0xc0370105\") { // When the container is just created, querying for stats causes access errors because it hasn't started yet // This is transient; skip container for now // // These hcs errors do not have helpers exposed in public package so need to query for the known codes // https://github.com/microsoft/hcsshim/blob/master/internal/hcs/errors.go // PR to expose helpers in hcsshim: https://github.com/microsoft/hcsshim/pull/933 klog.V(4).Infof(\"Container is not in a state that stats can be accessed '%s': %v. This occurs when the container is created but not started.\", containerID, err) return nil, nil } return nil, err }"} {"_id":"doc-en-kubernetes-b6d4832c8afc1c115b22b9c5f6f77e1458217aba001f83efd05151b78f84d119","title":"","text":"func (r *Reachability) PrintSummary(printExpected bool, printObserved bool, printComparison bool) { right, wrong, ignored, comparison := r.Summary(ignoreLoopback) if ignored > 0 { framework.Logf(\"warning: the results of %d pod->pod cases have been ignored\", ignored) framework.Logf(\"warning: this test doesn't take into consideration hairpin traffic, i.e. 
traffic whose source and destination is the same pod: %d cases ignored\", ignored) } framework.Logf(\"reachability: correct:%v, incorrect:%v, result=%tnn\", right, wrong, wrong == 0) if printExpected {"} {"_id":"doc-en-kubernetes-78dfd471ecc7fc14335433351520ba31f0b2fc2e85afeb2c7ad46acddb2de8d6","title":"","text":"f := framework.NewDefaultFramework(\"metrics-grabber\") var c, ec clientset.Interface var grabber *e2emetrics.Grabber var masterRegistered bool ginkgo.BeforeEach(func() { var err error c = f.ClientSet ec = f.KubemarkExternalClusterClientSet // Check if master Node is registered nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{}) framework.ExpectNoError(err) for _, node := range nodes.Items { if strings.HasSuffix(node.Name, \"master\") { masterRegistered = true } } gomega.Eventually(func() error { grabber, err = e2emetrics.NewMetricsGrabber(c, ec, true, true, true, true, true) if err != nil { return fmt.Errorf(\"failed to create metrics grabber: %v\", err) } if !grabber.HasControlPlanePods() { if masterRegistered && !grabber.HasControlPlanePods() { return fmt.Errorf(\"unable to get find control plane pods\") } return nil"} {"_id":"doc-en-kubernetes-afa940021177fdf46d5ea181bacea040d3fb9a5f95cb2308c20fcf7471b4b8e2","title":"","text":"ginkgo.It(\"should grab all metrics from a Scheduler.\", func() { ginkgo.By(\"Proxying to Pod through the API server\") // Check if master Node is registered nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{}) framework.ExpectNoError(err) var masterRegistered = false for _, node := range nodes.Items { if strings.HasSuffix(node.Name, \"master\") { masterRegistered = true } } if !masterRegistered { framework.Logf(\"Master is node api.Registry. 
Skipping testing Scheduler metrics.\") return"} {"_id":"doc-en-kubernetes-2db68db85e8dd99d493b98cfe43a4505366f59fde87aff5279df7c6bfdf1617c","title":"","text":"ginkgo.It(\"should grab all metrics from a ControllerManager.\", func() { ginkgo.By(\"Proxying to Pod through the API server\") // Check if master Node is registered nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{}) framework.ExpectNoError(err) var masterRegistered = false for _, node := range nodes.Items { if strings.HasSuffix(node.Name, \"master\") { masterRegistered = true } } if !masterRegistered { framework.Logf(\"Master is node api.Registry. Skipping testing ControllerManager metrics.\") return"} {"_id":"doc-en-kubernetes-ade7c52498941f596eebc8f88c256efaa8a986f2c76e5d32914b7cc7da27445a","title":"","text":"import ( \"context\" \"encoding/json\" \"fmt\" \"time\" appsv1 \"k8s.io/api/apps/v1\" autoscalingv1 \"k8s.io/api/autoscaling/v1\" \"k8s.io/apimachinery/pkg/types\" v1 \"k8s.io/api/core/v1\" apierrors \"k8s.io/apimachinery/pkg/api/errors\" \"k8s.io/apimachinery/pkg/api/resource\""} {"_id":"doc-en-kubernetes-28a3ebdb84a99d2903be99953b4b80fa59648b315a13cfce32c9665181ed188e","title":"","text":"framework.ConformanceIt(\"should adopt matching pods on creation and release no longer matching pods\", func() { testRSAdoptMatchingAndReleaseNotMatching(f) }) ginkgo.It(\"Replicaset should have a working scale subresource\", func() { testRSScaleSubresources(f) }) }) // A basic test to check the deployment of an image using a ReplicaSet. The"} {"_id":"doc-en-kubernetes-6f3c50dee60966e4d0743254aa3d175df5b28f7a956cd6a92fce4e4e38fbf2ef","title":"","text":"}) framework.ExpectNoError(err) } func testRSScaleSubresources(f *framework.Framework) { ns := f.Namespace.Name c := f.ClientSet // Create webserver pods. 
rsPodLabels := map[string]string{ \"name\": \"sample-pod\", \"pod\": WebserverImageName, } rsName := \"test-rs\" replicas := int32(1) ginkgo.By(fmt.Sprintf(\"Creating replica set %q that asks for more than the allowed pod quota\", rsName)) rs := newRS(rsName, replicas, rsPodLabels, WebserverImageName, WebserverImage, nil) _, err := c.AppsV1().ReplicaSets(ns).Create(context.TODO(), rs, metav1.CreateOptions{}) framework.ExpectNoError(err) // Verify that the required pods have come up. err = e2epod.VerifyPodsRunning(c, ns, \"sample-pod\", false, replicas) framework.ExpectNoError(err, \"error in waiting for pods to come up: %s\", err) ginkgo.By(\"getting scale subresource\") scale, err := c.AppsV1().ReplicaSets(ns).GetScale(context.TODO(), rsName, metav1.GetOptions{}) if err != nil { framework.Failf(\"Failed to get scale subresource: %v\", err) } framework.ExpectEqual(scale.Spec.Replicas, int32(1)) framework.ExpectEqual(scale.Status.Replicas, int32(1)) ginkgo.By(\"updating a scale subresource\") scale.ResourceVersion = \"\" // indicate the scale update should be unconditional scale.Spec.Replicas = 2 scaleResult, err := c.AppsV1().ReplicaSets(ns).UpdateScale(context.TODO(), rsName, scale, metav1.UpdateOptions{}) if err != nil { framework.Failf(\"Failed to put scale subresource: %v\", err) } framework.ExpectEqual(scaleResult.Spec.Replicas, int32(2)) ginkgo.By(\"verifying the replicaset Spec.Replicas was modified\") rs, err = c.AppsV1().ReplicaSets(ns).Get(context.TODO(), rsName, metav1.GetOptions{}) if err != nil { framework.Failf(\"Failed to get statefulset resource: %v\", err) } framework.ExpectEqual(*(rs.Spec.Replicas), int32(2)) ginkgo.By(\"Patch a scale subresource\") scale.ResourceVersion = \"\" // indicate the scale update should be unconditional scale.Spec.Replicas = 4 // should be 2 after \"UpdateScale\" operation, now Patch to 4 rsScalePatchPayload, err := json.Marshal(autoscalingv1.Scale{ Spec: autoscalingv1.ScaleSpec{ Replicas: scale.Spec.Replicas, }, }) 
framework.ExpectNoError(err, \"Could not Marshal JSON for patch payload\") _, err = c.AppsV1().ReplicaSets(ns).Patch(context.TODO(), rsName, types.StrategicMergePatchType, []byte(rsScalePatchPayload), metav1.PatchOptions{}, \"scale\") framework.ExpectNoError(err, \"Failed to patch replicaset: %v\", err) rs, err = c.AppsV1().ReplicaSets(ns).Get(context.TODO(), rsName, metav1.GetOptions{}) framework.ExpectNoError(err, \"Failed to get replicaset resource: %v\", err) framework.ExpectEqual(*(rs.Spec.Replicas), int32(4), \"replicaset should have 4 replicas\") } "} {"_id":"doc-en-kubernetes-89ccecfb7a48fe5e3f828d66a19808b668f1365f8d96e342cbde62e5e344bb35","title":"","text":"the Job MUST execute to completion. release: v1.16 file: test/e2e/apps/job.go - testname: ReplicaSet, completes the scaling of a ReplicaSet subresource codename: '[sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]' description: Create a ReplicaSet (RS) with a single Pod. The Pod MUST be verified that it is running. The RS MUST get and verify the scale subresource count. The RS MUST update and verify the scale subresource. The RS MUST patch and verify a scale subresource. release: v1.21 file: test/e2e/apps/replica_set.go - testname: Replica Set, adopt matching pods and release non matching pods codename: '[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]'"} {"_id":"doc-en-kubernetes-2370a1166f68d542ad8a6b452ce4d5eb9adf3174aefbd435b9daf7b546abdf3f","title":"","text":"testRSAdoptMatchingAndReleaseNotMatching(f) }) ginkgo.It(\"Replicaset should have a working scale subresource\", func() { /* Release: v1.21 Testname: ReplicaSet, completes the scaling of a ReplicaSet subresource Description: Create a ReplicaSet (RS) with a single Pod. The Pod MUST be verified that it is running. The RS MUST get and verify the scale subresource count. The RS MUST update and verify the scale subresource. 
The RS MUST patch and verify a scale subresource. */ framework.ConformanceIt(\"Replicaset should have a working scale subresource\", func() { testRSScaleSubresources(f) }) })"} {"_id":"doc-en-kubernetes-a6cb3f7d0d0df29c4e666050be02013fa572e143810b691d53da0d79966f2966","title":"","text":"package oom import ( \"os\" \"github.com/opencontainers/runc/libcontainer/cgroups\" \"testing\""} {"_id":"doc-en-kubernetes-922506670d687f708855563b6b3e4580e6fb88d96639cc2dc1ddc6019a6b1507","title":"","text":"func TestPidListerFailure(t *testing.T) { _, err := getPids(\"/does/not/exist\") assert.True(t, cgroups.IsNotFound(err), \"expected getPids to return not exists error. Got %v\", err) assert.True(t, cgroups.IsNotFound(err) || os.IsNotExist(err), \"expected getPids to return not exists error. Got %v\", err) }"} {"_id":"doc-en-kubernetes-7ab4bdd52914e8f04788fa57c810d85fe8970e687349c460e20dbf7967dc60e2","title":"","text":"path string } func (v *VersionFile) nextPath() string { return fmt.Sprintf(\"%s-next\", v.path) } // Exists returns true if a version.txt file exists on the file system. func (v *VersionFile) Exists() (bool, error) { return exists(v.path)"} {"_id":"doc-en-kubernetes-38dac15bb17380f5fb84dbf1161b14ff70f62b584b5c4eada5926c1f5ecf34c9","title":"","text":"// Write creates or overwrites the contents of the version.txt file with the given EtcdVersionPair. func (v *VersionFile) Write(vp *EtcdVersionPair) error { data := []byte(fmt.Sprintf(\"%s/%s\", vp.version, vp.storageVersion)) return ioutil.WriteFile(v.path, data, 0666) // We do write + rename instead of just write to protect from version.txt // corruption under full disk condition. // See https://github.com/kubernetes/kubernetes/issues/98989. 
err := ioutil.WriteFile(v.nextPath(), []byte(vp.String()), 0666) if err != nil { return fmt.Errorf(\"failed to write new version file %s: %v\", v.nextPath(), err) } return os.Rename(v.nextPath(), v.path) } func exists(path string) (bool, error) {"} {"_id":"doc-en-kubernetes-74c0e4dbd66a8372b732844fa078e072a5e64c89ecb52ee807bb1681813084ce","title":"","text":"name = \"client-targets\", conditioned_srcs = for_platforms(for_client = [ \"//cmd/kubectl\", \"//cmd/kubectl-convert\", ]), )"} {"_id":"doc-en-kubernetes-0c021fda3a84b93f07d21361967470ba39f994cee3fb4e43c230f6ba50aaf92d","title":"","text":"load(\"//staging/src/k8s.io/component-base/version:def.bzl\", \"version_x_defs\") go_binary( name = \"kubectl\", name = \"kubectl-convert\", embed = [\":go_default_library\"], pure = \"on\", visibility = [\"//visibility:public\"],"} {"_id":"doc-en-kubernetes-11d6939a9598f7637f050895e12f33270db136807c3c29083d2963fb6d794360","title":"","text":"# If you update this list, please also update build/BUILD. readonly KUBE_CLIENT_TARGETS=( cmd/kubectl cmd/kubectl-convert ) readonly KUBE_CLIENT_BINARIES=(\"${KUBE_CLIENT_TARGETS[@]##*/}\") readonly KUBE_CLIENT_BINARIES_WIN=(\"${KUBE_CLIENT_BINARIES[@]/%/.exe}\")"} {"_id":"doc-en-kubernetes-2d46af9986a9452cf5cefdf6201eec8cbea93c2fb13a78ecad92321ec4dbebf9","title":"","text":" #!/usr/bin/env bash # Copyright 2021 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
set -o errexit set -o nounset set -o pipefail run_convert_tests() { set -o nounset set -o errexit ### Convert deployment YAML file locally without affecting the live deployment # Pre-condition: no deployments exist kube::test::get_object_assert deployment \"{{range.items}}{{${id_field:?}}}:{{end}}\" '' # Command # Create a deployment (revision 1) kubectl create -f hack/testdata/deployment-revision1.yaml \"${kube_flags[@]:?}\" kube::test::get_object_assert deployment \"{{range.items}}{{${id_field:?}}}:{{end}}\" 'nginx:' kube::test::get_object_assert deployment \"{{range.items}}{{${image_field0:?}}}:{{end}}\" \"${IMAGE_DEPLOYMENT_R1}:\" # Command output_message=$(kubectl convert --local -f hack/testdata/deployment-revision1.yaml --output-version=apps/v1beta1 -o yaml \"${kube_flags[@]:?}\") # Post-condition: apiVersion is still apps/v1 in the live deployment, but command output is the new value kube::test::get_object_assert 'deployment nginx' \"{{ .apiVersion }}\" 'apps/v1' kube::test::if_has_string \"${output_message}\" \"apps/v1beta1\" # Clean up kubectl delete deployment nginx \"${kube_flags[@]:?}\" ## Convert multiple busybox PODs recursively from directory of YAML files # Command output_message=$(! 
kubectl-convert -f hack/testdata/recursive/pod --recursive 2>&1 \"${kube_flags[@]:?}\") # Post-condition: busybox0 & busybox1 PODs are converted, and since busybox2 is malformed, it should error kube::test::if_has_string \"${output_message}\" \"Object 'Kind' is missing\" # check that convert command supports --template output output_message=$(kubectl-convert \"${kube_flags[@]:?}\" -f hack/testdata/deployment-revision1.yaml --output-version=apps/v1beta2 --template=\"{{ .metadata.name }}:\") kube::test::if_has_string \"${output_message}\" 'nginx:' set +o nounset set +o errexit } "} {"_id":"doc-en-kubernetes-d44d3770baec486001a4f04ccd6335fea8dc87a929d11a53a31b1496749a97ca","title":"","text":"source \"${KUBE_ROOT}/test/cmd/authorization.sh\" source \"${KUBE_ROOT}/test/cmd/batch.sh\" source \"${KUBE_ROOT}/test/cmd/certificate.sh\" source \"${KUBE_ROOT}/test/cmd/convert.sh\" source \"${KUBE_ROOT}/test/cmd/core.sh\" source \"${KUBE_ROOT}/test/cmd/crd.sh\" source \"${KUBE_ROOT}/test/cmd/create.sh\""} {"_id":"doc-en-kubernetes-7fc4bb672d4bd61fdfc61b264308c07665d5b5a8cfd5c8f0f900ca7dcc557c67","title":"","text":"kube::util::ensure-gnu-sed kube::log::status \"Building kubectl\" make -C \"${KUBE_ROOT}\" WHAT=\"cmd/kubectl\" make -C \"${KUBE_ROOT}\" WHAT=\"cmd/kubectl cmd/kubectl-convert\" # Check kubectl kube::log::status \"Running kubectl with no options\""} {"_id":"doc-en-kubernetes-0dd001c145f11102e969b80c643726dbf14f98f8fe507894193bd7373efd9da7","title":"","text":"fi ###################### # Convert # ###################### if kube::test::if_supports_resource \"${deployments}\"; then record_command run_convert_tests fi ###################### # Delete # ###################### if kube::test::if_supports_resource \"${configmaps}\" ; then"} {"_id":"doc-en-kubernetes-77e714dd4d9458966f59b16cde4eeda345f839b3a819c7f9ed5b80b43b2b4193","title":"","text":"return err != nil && os.IsNotExist(err) } // GetCapacity returns node capacity data for \"cpu\", \"memory\", 
\"ephemeral-storage\", and \"huge-pages*\" // At present this method is only invoked when introspecting ephemeral storage func (cm *containerManagerImpl) GetCapacity() v1.ResourceList { if utilfeature.DefaultFeatureGate.Enabled(kubefeatures.LocalStorageCapacityIsolation) { // We store allocatable ephemeral-storage in the capacity property once we Start() the container manager if _, ok := cm.capacity[v1.ResourceEphemeralStorage]; !ok { // If we haven't yet stored the capacity for ephemeral-storage, we can try to fetch it directly from cAdvisor, if cm.cadvisorInterface != nil { rootfs, err := cm.cadvisorInterface.RootFsInfo() if err != nil { klog.ErrorS(err, \"Unable to get rootfs data from cAdvisor interface\") // If the rootfsinfo retrieval from cAdvisor fails for any reason, fallback to returning the capacity property with no ephemeral storage data return cm.capacity } // We don't want to mutate cm.capacity here so we'll manually construct a v1.ResourceList from it, // and add ephemeral-storage capacityWithEphemeralStorage := v1.ResourceList{} for rName, rQuant := range cm.capacity { capacityWithEphemeralStorage[rName] = rQuant } capacityWithEphemeralStorage[v1.ResourceEphemeralStorage] = cadvisor.EphemeralStorageCapacityFromFsInfo(rootfs)[v1.ResourceEphemeralStorage] return capacityWithEphemeralStorage } } } return cm.capacity }"} {"_id":"doc-en-kubernetes-0e2ddbba1e82ebfa544b28834ca3ea3550b89335d80222b3f486015e5101578f","title":"","text":"package cm import ( \"errors\" \"io/ioutil\" \"os\" \"path\" \"testing\" gomock \"github.com/golang/mock/gomock\" cadvisorapiv2 \"github.com/google/cadvisor/info/v2\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" featuregatetesting \"k8s.io/component-base/featuregate/testing\" kubefeatures \"k8s.io/kubernetes/pkg/features\" \"github.com/opencontainers/runc/libcontainer/cgroups\" \"github.com/stretchr/testify/assert\" \"github.com/stretchr/testify/require\" v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/resource\" 
cadvisortest \"k8s.io/kubernetes/pkg/kubelet/cadvisor/testing\" \"k8s.io/mount-utils\" )"} {"_id":"doc-en-kubernetes-cd8becd23a8ca5fccf482224d545968fe613fd18bb97de29eafe8e2c1229ba0c","title":"","text":"assert.NoError(t, err) assert.True(t, f.cpuHardcapping, \"cpu hardcapping is expected to be enabled\") } func TestGetCapacity(t *testing.T) { ephemeralStorageFromCapacity := int64(2000) ephemeralStorageFromCadvisor := int64(8000) mockCtrl := gomock.NewController(t) defer mockCtrl.Finish() mockCtrlError := gomock.NewController(t) defer mockCtrlError.Finish() mockCadvisor := cadvisortest.NewMockInterface(mockCtrl) rootfs := cadvisorapiv2.FsInfo{ Capacity: 8000, } mockCadvisor.EXPECT().RootFsInfo().Return(rootfs, nil) mockCadvisorError := cadvisortest.NewMockInterface(mockCtrlError) mockCadvisorError.EXPECT().RootFsInfo().Return(cadvisorapiv2.FsInfo{}, errors.New(\"Unable to get rootfs data from cAdvisor interface\")) cases := []struct { name string cm *containerManagerImpl expectedResourceQuantity *resource.Quantity expectedNoEphemeralStorage bool enableLocalStorageCapacityIsolation bool }{ { name: \"capacity property has ephemeral-storage\", cm: &containerManagerImpl{ cadvisorInterface: mockCadvisor, capacity: v1.ResourceList{ v1.ResourceEphemeralStorage: *resource.NewQuantity(ephemeralStorageFromCapacity, resource.BinarySI), }, }, expectedResourceQuantity: resource.NewQuantity(ephemeralStorageFromCapacity, resource.BinarySI), expectedNoEphemeralStorage: false, enableLocalStorageCapacityIsolation: true, }, { name: \"capacity property does not have ephemeral-storage\", cm: &containerManagerImpl{ cadvisorInterface: mockCadvisor, capacity: v1.ResourceList{}, }, expectedResourceQuantity: resource.NewQuantity(ephemeralStorageFromCadvisor, resource.BinarySI), expectedNoEphemeralStorage: false, enableLocalStorageCapacityIsolation: true, }, { name: \"capacity property does not have ephemeral-storage, error from rootfs\", cm: &containerManagerImpl{ cadvisorInterface: 
mockCadvisorError, capacity: v1.ResourceList{}, }, expectedNoEphemeralStorage: true, enableLocalStorageCapacityIsolation: true, }, { name: \"capacity property does not have ephemeral-storage, cadvisor interface is nil\", cm: &containerManagerImpl{ cadvisorInterface: nil, capacity: v1.ResourceList{}, }, expectedNoEphemeralStorage: true, enableLocalStorageCapacityIsolation: true, }, { name: \"LocalStorageCapacityIsolation feature flag is disabled\", cm: &containerManagerImpl{ cadvisorInterface: mockCadvisor, capacity: v1.ResourceList{ v1.ResourceCPU: resource.MustParse(\"4\"), v1.ResourceMemory: resource.MustParse(\"16G\"), }, }, expectedNoEphemeralStorage: true, enableLocalStorageCapacityIsolation: false, }, } for _, c := range cases { t.Run(c.name, func(t *testing.T) { defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, kubefeatures.LocalStorageCapacityIsolation, c.enableLocalStorageCapacityIsolation)() ret := c.cm.GetCapacity() if v, exists := ret[v1.ResourceEphemeralStorage]; !exists { if !c.expectedNoEphemeralStorage { t.Errorf(\"did not get any ephemeral storage data\") } } else { if v.Value() != c.expectedResourceQuantity.Value() { t.Errorf(\"got unexpected %s value, expected %d, got %d\", v1.ResourceEphemeralStorage, c.expectedResourceQuantity.Value(), v.Value()) } } }) } } "} {"_id":"doc-en-kubernetes-a5a73cb282789ffac91ee292b78f9d6173bd8f476b668b81b784d4ef8a35c24a","title":"","text":" // +build !providerless /* Copyright 2021 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language governing permissions and limitations under the License. */ package vsphere import ( \"context\" \"fmt\" \"github.com/vmware/govmomi/property\" \"github.com/vmware/govmomi/vim25/mo\" \"github.com/vmware/govmomi/vim25/types\" \"k8s.io/klog/v2\" \"k8s.io/legacy-cloud-providers/vsphere/vclib\" ) type sharedDatastore struct { nodeManager *NodeManager candidateDatastores []*vclib.DatastoreInfo } type hostInfo struct { hostUUID string hostMOID string datacenter string } const ( summary = \"summary\" runtimeHost = \"summary.runtime.host\" hostsProperty = \"host\" nameProperty = \"name\" ) func (shared *sharedDatastore) getSharedDatastore(ctcx context.Context) (*vclib.DatastoreInfo, error) { nodes := shared.nodeManager.getNodes() // Segregate nodes according to VC-DC dcNodes := make(map[string][]NodeInfo) nodeHosts := make(map[string]hostInfo) for nodeName, node := range nodes { nodeInfo, err := shared.nodeManager.GetNodeInfoWithNodeObject(node) if err != nil { return nil, fmt.Errorf(\"unable to find node %s: %v\", nodeName, err) } vcDC := nodeInfo.vcServer + nodeInfo.dataCenter.String() dcNodes[vcDC] = append(dcNodes[vcDC], nodeInfo) } for vcDC, nodes := range dcNodes { var hostInfos []hostInfo var err error hostInfos, err = shared.getNodeHosts(ctcx, nodes, vcDC) if err != nil { if vclib.IsManagedObjectNotFoundError(err) { klog.Warningf(\"SharedHost.getSharedDatastore: batch fetching of hosts failed - switching to fetching them individually.\") hostInfos, err = shared.getEachNodeHost(ctcx, nodes, vcDC) if err != nil { klog.Errorf(\"SharedHost.getSharedDatastore: error fetching node hosts individually: %v\", err) return nil, err } } else { return nil, err } } for _, host := range hostInfos { hostDCName := fmt.Sprintf(\"%s/%s\", host.datacenter, host.hostMOID) nodeHosts[hostDCName] = host } } if len(nodeHosts) < 1 { msg := fmt.Sprintf(\"SharedHost.getSharedDatastore unable to find hosts associated with nodes\") klog.Error(msg) 
return nil, fmt.Errorf(msg) } for _, datastoreInfo := range shared.candidateDatastores { dataStoreHosts, err := shared.getAttachedHosts(ctcx, datastoreInfo.Datastore) if err != nil { msg := fmt.Sprintf(\"error finding attached hosts to datastore %s: %v\", datastoreInfo.Name(), err) klog.Error(msg) return nil, fmt.Errorf(msg) } if shared.isIncluded(dataStoreHosts, nodeHosts) { return datastoreInfo, nil } } return nil, fmt.Errorf(\"SharedHost.getSharedDatastore: unable to find any shared datastores\") } // check if all of the nodeHosts are included in the dataStoreHosts func (shared *sharedDatastore) isIncluded(dataStoreHosts []hostInfo, nodeHosts map[string]hostInfo) bool { result := true for _, host := range nodeHosts { hostFound := false for _, targetHost := range dataStoreHosts { if host.hostUUID == targetHost.hostUUID && host.hostMOID == targetHost.hostMOID { hostFound = true } } if !hostFound { result = false } } return result } func (shared *sharedDatastore) getEachNodeHost(ctx context.Context, nodes []NodeInfo, dcVC string) ([]hostInfo, error) { var hosts []hostInfo for _, node := range nodes { host, err := node.vm.GetHost(ctx) if err != nil { klog.Errorf(\"SharedHost.getEachNodeHost: unable to find host for vm %s: %v\", node.vm.InventoryPath, err) return nil, err } hosts = append(hosts, hostInfo{ hostUUID: host.Summary.Hardware.Uuid, hostMOID: host.Summary.Host.String(), datacenter: node.dataCenter.String(), }) } return hosts, nil } func (shared *sharedDatastore) getNodeHosts(ctx context.Context, nodes []NodeInfo, dcVC string) ([]hostInfo, error) { var vmRefs []types.ManagedObjectReference if len(nodes) < 1 { return nil, fmt.Errorf(\"no nodes found for dc-vc: %s\", dcVC) } var nodeInfo NodeInfo for _, n := range nodes { nodeInfo = n vmRefs = append(vmRefs, n.vm.Reference()) } pc := property.DefaultCollector(nodeInfo.dataCenter.Client()) var vmoList []mo.VirtualMachine err := pc.Retrieve(ctx, vmRefs, []string{nameProperty, runtimeHost}, &vmoList) if err != 
nil { klog.Errorf(\"SharedHost.getNodeHosts: unable to fetch vms from datacenter %s: %w\", nodeInfo.dataCenter.String(), err) return nil, err } var hostMoList []mo.HostSystem var hostRefs []types.ManagedObjectReference for _, vmo := range vmoList { if vmo.Summary.Runtime.Host == nil { msg := fmt.Sprintf(\"SharedHost.getNodeHosts: no host associated with vm %s\", vmo.Name) klog.Error(msg) return nil, fmt.Errorf(msg) } hostRefs = append(hostRefs, vmo.Summary.Runtime.Host.Reference()) } pc = property.DefaultCollector(nodeInfo.dataCenter.Client()) err = pc.Retrieve(ctx, hostRefs, []string{summary}, &hostMoList) if err != nil { klog.Errorf(\"SharedHost.getNodeHosts: unable to fetch hosts from datacenter %s: %w\", nodeInfo.dataCenter.String(), err) return nil, err } var hosts []hostInfo for _, host := range hostMoList { hosts = append(hosts, hostInfo{hostMOID: host.Summary.Host.String(), hostUUID: host.Summary.Hardware.Uuid, datacenter: nodeInfo.dataCenter.String()}) } return hosts, nil } func (shared *sharedDatastore) getAttachedHosts(ctx context.Context, datastore *vclib.Datastore) ([]hostInfo, error) { var ds mo.Datastore pc := property.DefaultCollector(datastore.Client()) err := pc.RetrieveOne(ctx, datastore.Reference(), []string{hostsProperty}, &ds) if err != nil { return nil, err } mounts := make(map[types.ManagedObjectReference]types.DatastoreHostMount) var refs []types.ManagedObjectReference for _, host := range ds.Host { refs = append(refs, host.Key) mounts[host.Key] = host } var hs []mo.HostSystem err = pc.Retrieve(ctx, refs, []string{summary}, &hs) if err != nil { return nil, err } var hosts []hostInfo for _, h := range hs { hosts = append(hosts, hostInfo{hostUUID: h.Summary.Hardware.Uuid, hostMOID: h.Summary.Host.String()}) } return hosts, nil } "} {"_id":"doc-en-kubernetes-9c432942f6e9b7776a5ccba0a8bfe7bbda38dbef54e885786f43f73f2790dffa","title":"","text":"return diskUUID, nil } // GetHost returns host of the virtual machine func (vm *VirtualMachine) 
GetHost(ctx context.Context) (mo.HostSystem, error) { host, err := vm.HostSystem(ctx) var hostSystemMo mo.HostSystem if err != nil { klog.Errorf(\"Failed to get host system for VM: %q. err: %+v\", vm.InventoryPath, err) return hostSystemMo, err } s := object.NewSearchIndex(vm.Client()) err = s.Properties(ctx, host.Reference(), []string{\"summary\"}, &hostSystemMo) if err != nil { klog.Errorf(\"Failed to retrieve datastores for host: %+v. err: %+v\", host, err) return hostSystemMo, err } return hostSystemMo, nil } // DetachDisk detaches the disk specified by vmDiskPath func (vm *VirtualMachine) DetachDisk(ctx context.Context, vmDiskPath string) error { device, err := vm.getVirtualDeviceByPath(ctx, vmDiskPath)"} {"_id":"doc-en-kubernetes-a07a58a40735b16e7d1eb38ec98a73cc3281e8082e9009305a041c2d56f1c710","title":"","text":"if len(zonesToSearch) == 0 { // If zone is not provided, get the shared datastore across all node VMs. klog.V(4).Infof(\"Validating if datastore %s is shared across all node VMs\", datastoreName) sharedDsList, err = getSharedDatastoresInK8SCluster(ctx, vs.nodeManager) sharedDSFinder := &sharedDatastore{ nodeManager: vs.nodeManager, candidateDatastores: candidateDatastoreInfos, } datastoreInfo, err = sharedDSFinder.getSharedDatastore(ctx) if err != nil { klog.Errorf(\"Failed to get shared datastore: %+v\", err) return \"\", err } // Prepare error msg to be used later, if required. err = fmt.Errorf(\"The specified datastore %s is not a shared datastore across node VMs\", datastoreName) if datastoreInfo == nil { err = fmt.Errorf(\"The specified datastore %s is not a shared datastore across node VMs\", datastoreName) klog.Error(err) return \"\", err } } else { // If zone is provided, get the shared datastores in that zone. 
klog.V(4).Infof(\"Validating if datastore %s is in zone %s \", datastoreName, zonesToSearch)"} {"_id":"doc-en-kubernetes-6300f762dea7ec46943f7f1621bb33320c02bc30a152a4d9834d2dad4f721b95","title":"","text":"klog.Errorf(\"Failed to find a shared datastore matching zone %s. err: %+v\", zonesToSearch, err) return \"\", err } // Prepare error msg to be used later, if required. err = fmt.Errorf(\"The specified datastore %s does not match the provided zones : %s\", datastoreName, zonesToSearch) } found := false // Check if the selected datastore belongs to the list of shared datastores computed. for _, sharedDs := range sharedDsList { if datastoreInfo, found = candidateDatastores[sharedDs.Info.Url]; found { klog.V(4).Infof(\"Datastore validation succeeded\") found = true break found := false for _, sharedDs := range sharedDsList { if datastoreInfo, found = candidateDatastores[sharedDs.Info.Url]; found { klog.V(4).Infof(\"Datastore validation succeeded\") found = true break } } if !found { err = fmt.Errorf(\"The specified datastore %s does not match the provided zones : %s\", datastoreName, zonesToSearch) klog.Error(err) return \"\", err } } if !found { klog.Error(err) return \"\", err } } }"} {"_id":"doc-en-kubernetes-b4609efa9b0f71950556496fa61a473ef3376255609e8d5d6d69752488d5aa3d","title":"","text":"LOG_FILE=\"${LOG_FILE:-${TMP_DIR}/update-vendor.log}\" kube::log::status \"logfile at ${LOG_FILE}\" function finish { ret=$? if [[ ${ret} != 0 ]]; then echo \"An error has occurred. 
Please see more details in ${LOG_FILE}\" fi exit ${ret} } trap finish EXIT if [ -z \"${BASH_XTRACEFD:-}\" ]; then exec 19> \"${LOG_FILE}\" export BASH_XTRACEFD=\"19\""} {"_id":"doc-en-kubernetes-3213574cc641de0acf4fd6f427885099b917a706a1c2a7527106b0de680b5ac8","title":"","text":"\"context\" \"fmt\" \"math/rand\" \"sort\" \"sync\" \"sync/atomic\" \"time\""} {"_id":"doc-en-kubernetes-71a56af80b333d369ea5436decfac8d8bc987f9747310af3a27f973e0960eff4","title":"","text":"return feasibleNodes, nil } type pluginScores struct { Plugin string Score int64 AverageNodeScore float64 } // prioritizeNodes prioritizes the nodes by running the score plugins, // which return a score for each node from the call to RunScorePlugins(). // The scores from each plugin are added together to make the score for that node, then"} {"_id":"doc-en-kubernetes-b66fa12fd85eca477b99c426659b99a57817f0d8385bd93fc6b781568669f206","title":"","text":"return nil, scoreStatus.AsError() } if klog.V(10).Enabled() { for plugin, nodeScoreList := range scoresMap { for _, nodeScore := range nodeScoreList { klog.InfoS(\"Plugin scored node for pod\", \"pod\", klog.KObj(pod), \"plugin\", plugin, \"node\", nodeScore.Name, \"score\", nodeScore.Score) } } } // Summarize all scores. 
result := make(framework.NodeScoreList, 0, len(nodes)) for i := range nodes { result = append(result, framework.NodeScore{Name: nodes[i].Name, Score: 0}) for j := range scoresMap {"} {"_id":"doc-en-kubernetes-b8b221586c9a339b4cded0642f070bdefd3a9123aa9480cebaaa29f6b7733789","title":"","text":"} } if klog.V(4).Enabled() { logPluginScores(nodes, scoresMap, pod) } if len(g.extenders) != 0 && nodes != nil { var mu sync.Mutex var wg sync.WaitGroup"} {"_id":"doc-en-kubernetes-31193288258fee76a48a170644cc613a75dea9d3538d3115465a1f5ecf8a7a34","title":"","text":"} } if klog.V(4).Enabled() { if klog.V(10).Enabled() { for i := range result { klog.InfoS(\"Calculated node's final score for pod\", \"pod\", klog.KObj(pod), \"node\", result[i].Name, \"score\", result[i].Score) }"} {"_id":"doc-en-kubernetes-c8bcc7042b38357d7109a141bb451b4a21562314f08d0d74c22a791455b5500c","title":"","text":"return result, nil } // logPluginScores adds summarized plugin score logging. The goal of this block is to show the highest scoring plugins on // each node, and the average score for those plugins across all nodes. 
func logPluginScores(nodes []*v1.Node, scoresMap framework.PluginToNodeScores, pod *v1.Pod) { totalPluginScores := make(map[string]int64) for j := range scoresMap { for i := range nodes { totalPluginScores[j] += scoresMap[j][i].Score } } // Build a map of Nodes->PluginScores on that node nodeToPluginScores := make(framework.PluginToNodeScores, len(nodes)) for _, node := range nodes { nodeToPluginScores[node.Name] = make(framework.NodeScoreList, len(scoresMap)) } // Convert the scoresMap (which contains Plugins->NodeScores) to the Nodes->PluginScores map for plugin, nodeScoreList := range scoresMap { for _, nodeScore := range nodeScoreList { klog.V(10).InfoS(\"Plugin scored node for pod\", \"pod\", klog.KObj(pod), \"plugin\", plugin, \"node\", nodeScore.Name, \"score\", nodeScore.Score) nodeToPluginScores[nodeScore.Name] = append(nodeToPluginScores[nodeScore.Name], framework.NodeScore{Name: plugin, Score: nodeScore.Score}) } } for node, scores := range nodeToPluginScores { // Get the top 3 scoring plugins for each node sort.Slice(scores, func(i, j int) bool { return scores[i].Score > scores[j].Score }) var topScores []pluginScores for _, score := range scores { pluginScore := pluginScores{ Plugin: score.Name, Score: score.Score, AverageNodeScore: float64(totalPluginScores[score.Name]) / float64(len(nodes)), } topScores = append(topScores, pluginScore) if len(topScores) == 3 { break } } klog.InfoS(\"Top 3 plugins for pod on node\", \"pod\", klog.KObj(pod), \"node\", node, \"scores\", topScores, ) } } // NewGenericScheduler creates a genericScheduler object. 
func NewGenericScheduler( cache internalcache.Cache,"} {"_id":"doc-en-kubernetes-acf12c778bb57597d83182144382281788f7f9175a351247ce61e6c5f0ab9ebc","title":"","text":"}, \"required\": [ \"currentReplicas\", \"desiredReplicas\", \"conditions\" \"desiredReplicas\" ], \"type\": \"object\" },"} {"_id":"doc-en-kubernetes-07d18eb9894ec424a8cc36e9af8f13b61faac0a9986e1a91911d8a45cbe12a82","title":"","text":"// conditions is the set of conditions required for this autoscaler to scale its target, // and indicates whether or not those conditions are met. // +optional repeated HorizontalPodAutoscalerCondition conditions = 6; }"} {"_id":"doc-en-kubernetes-deefe00dfcf16faf86ce00092fde83c780354d8e4ee3f0fbdfa6a02f4ef250ff","title":"","text":"// conditions is the set of conditions required for this autoscaler to scale its target, // and indicates whether or not those conditions are met. // +optional Conditions []HorizontalPodAutoscalerCondition `json:\"conditions\" protobuf:\"bytes,6,rep,name=conditions\"` }"} {"_id":"doc-en-kubernetes-d960957fb25b6ed083708366c18bc6011e015213511483bf1612eb244312e04f","title":"","text":"// Check that container has restarted ginkgo.By(\"Waiting for container to restart\") restarts := int32(0) err = wait.PollImmediate(10*time.Second, 2*time.Minute, func() (bool, error) { err = wait.PollImmediate(10*time.Second, framework.PodStartTimeout, func() (bool, error) { pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Get(context.TODO(), pod.Name, metav1.GetOptions{}) if err != nil { return false, err"} {"_id":"doc-en-kubernetes-8a847e2cea909eaaab7638c220b89fbc1ea74bdd65321ba50dad97a86f61471b","title":"","text":"ginkgo.By(\"Waiting for container to stop restarting\") stableCount := int(0) stableThreshold := int(time.Minute / framework.Poll) err = wait.PollImmediate(framework.Poll, 2*time.Minute, func() (bool, error) { err = wait.PollImmediate(framework.Poll, framework.PodStartTimeout, func() (bool, error) { pod, err := 
f.ClientSet.CoreV1().Pods(f.Namespace.Name).Get(context.TODO(), pod.Name, metav1.GetOptions{}) if err != nil { return false, err"} {"_id":"doc-en-kubernetes-5aad0960a5782a3e7c2e8f0bbc4abff1ad8a96bd6901dd8f5d76c267ef7f6d9b","title":"","text":"var ( // TODO: Deprecate gitMajor and gitMinor, use only gitVersion instead. gitMajor string = \"0\" // major version, always numeric gitMinor string = \"2+\" // minor version, numeric possibly followed by \"+\" gitVersion string = \"v0.2-dev\" // version from git, output of $(git describe) gitMinor string = \"3+\" // minor version, numeric possibly followed by \"+\" gitVersion string = \"v0.3-dev\" // version from git, output of $(git describe) gitCommit string = \"\" // sha1 from git, output of $(git rev-parse HEAD) gitTreeState string = \"not a git tree\" // state of git tree, either \"clean\" or \"dirty\" )"} {"_id":"doc-en-kubernetes-e39a29feb3e0df54bb296cdc8acfd730572015d5170370992655ba6f62fa5416","title":"","text":"terminationGracePeriodSeconds: 600 hostNetwork: true containers: - image: gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.11.0 - image: gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.12.0 livenessProbe: httpGet: path: /healthz"} {"_id":"doc-en-kubernetes-b94179b413da9697058467ebff6d1d5a70fca3f66354e4e195d09973fe6be687","title":"","text":"metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/runtime\" \"k8s.io/apimachinery/pkg/types\" utilnet \"k8s.io/apimachinery/pkg/util/net\" utilruntime \"k8s.io/apimachinery/pkg/util/runtime\" \"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apimachinery/pkg/util/wait\""} {"_id":"doc-en-kubernetes-9afaecfcb66be4ca73dc343902104e28a21bb77a3d2ab484b36ea85d3da3fc72","title":"","text":"} kubeClientConfigOverrides(s, clientConfig) closeAllConns, err := updateDialer(clientConfig) if err != nil { return nil, nil, err // Kubelet needs to be able to recover from stale http connections. 
// HTTP2 has a mechanism to detect broken connections by sending periodical pings. // HTTP1 only can have one persistent connection, and it will close all Idle connections // once the Kubelet heartbeat fails. However, since there are many edge cases that we can't // control, users can still opt-in to the previous behavior for closing the connections by // setting the environment variable DISABLE_HTTP2. var closeAllConns func() if s := os.Getenv(\"DISABLE_HTTP2\"); len(s) > 0 { klog.InfoS(\"HTTP2 has been explicitly disabled, updating Kubelet client Dialer to forcefully close active connections on heartbeat failures\") closeAllConns, err = updateDialer(clientConfig) if err != nil { return nil, nil, err } } else { closeAllConns = func() { utilnet.CloseIdleConnectionsFor(clientConfig.Transport) } } return clientConfig, closeAllConns, nil }"} {"_id":"doc-en-kubernetes-692f0bc67755d603b7887e23fcb0e1fde104b1e7c9757e1d4a7c6f0cdeff8aac","title":"","text":"} } // CloseIdleConnectionsFor close idles connections for the Transport. // If the Transport is wrapped it iterates over the wrapped round trippers // until it finds one that implements the CloseIdleConnections method. // If the Transport does not have a CloseIdleConnections method // then this function does nothing. 
func CloseIdleConnectionsFor(transport http.RoundTripper) { if transport == nil { return } type closeIdler interface { CloseIdleConnections() } switch transport := transport.(type) { case closeIdler: transport.CloseIdleConnections() case RoundTripperWrapper: CloseIdleConnectionsFor(transport.WrappedRoundTripper()) default: klog.Warningf(\"unknown transport type: %T\", transport) } } type TLSClientConfigHolder interface { TLSClientConfig() *tls.Config }"} {"_id":"doc-en-kubernetes-8f6b7df6b3c803197c9c2181176c3ee8700493e9aaf3de42eb49e72233ac8798","title":"","text":"\"k8s.io/apimachinery/pkg/runtime/schema\" \"k8s.io/apimachinery/pkg/runtime/serializer\" utilnet \"k8s.io/apimachinery/pkg/util/net\" \"k8s.io/apimachinery/pkg/util/wait\" ) type tcpLB struct {"} {"_id":"doc-en-kubernetes-45d90e6ba99a658772ebdcbc571701b7b12581acd3ec0363a64f0b834898c474","title":"","text":"} } // 1. connect to https server with http1.1 using a TCP proxy // 2. the connection has keepalive enabled so it will be reused // 3. break the TCP connection stopping the proxy // 4. close the idle connection to force creating a new connection // 5. 
count that there are 2 connection to the server (we didn't reuse the original connection) func TestReconnectBrokenTCP_HTTP1(t *testing.T) { ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { fmt.Fprintf(w, \"Hello, %s\", r.Proto) })) ts.EnableHTTP2 = false ts.StartTLS() defer ts.Close() u, err := url.Parse(ts.URL) if err != nil { t.Fatalf(\"failed to parse URL from %q: %v\", ts.URL, err) } lb := newLB(t, u.Host) defer lb.ln.Close() stopCh := make(chan struct{}) go lb.serve(stopCh) transport, ok := ts.Client().Transport.(*http.Transport) if !ok { t.Fatal(\"failed to assert *http.Transport\") } config := &Config{ Host: \"https://\" + lb.ln.Addr().String(), Transport: utilnet.SetTransportDefaults(transport), // large timeout, otherwise the broken connection will be cleaned by it Timeout: wait.ForeverTestTimeout, // These fields are required to create a REST client. ContentConfig: ContentConfig{ GroupVersion: &schema.GroupVersion{}, NegotiatedSerializer: &serializer.CodecFactory{}, }, } config.TLSClientConfig.NextProtos = []string{\"http/1.1\"} client, err := RESTClientFor(config) if err != nil { t.Fatalf(\"failed to create REST client: %v\", err) } data, err := client.Get().AbsPath(\"/\").DoRaw(context.TODO()) if err != nil { t.Fatalf(\"unexpected err: %s: %v\", data, err) } if string(data) != \"Hello, HTTP/1.1\" { t.Fatalf(\"unexpected response: %s\", data) } // Deliberately let the LB stop proxying traffic for the current // connection. This mimics a broken TCP connection that's not properly // closed. close(stopCh) stopCh = make(chan struct{}) go lb.serve(stopCh) // Close the idle connections utilnet.CloseIdleConnectionsFor(client.Client.Transport) // If the client didn't close the idle connections, the broken connection // would still be in the connection pool, the following request would // then reuse the broken connection instead of creating a new one, and // thus would fail. 
data, err = client.Get().AbsPath(\"/\").DoRaw(context.TODO()) if err != nil { t.Fatalf(\"unexpected err: %v\", err) } if string(data) != \"Hello, HTTP/1.1\" { t.Fatalf(\"unexpected response: %s\", data) } dials := atomic.LoadInt32(&lb.dials) if dials != 2 { t.Fatalf(\"expected %d dials, got %d\", 2, dials) } } // 1. connect to https server with http1.1 using a TCP proxy making the connection to timeout // 2. the connection has keepalive enabled so it will be reused // 3. close the in-flight connection to force creating a new connection // 4. count that there are 2 connection on the LB but only one succeeds func TestReconnectBrokenTCPInFlight_HTTP1(t *testing.T) { done := make(chan struct{}) defer close(done) received := make(chan struct{}) ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { if r.URL.Path == \"/hang\" { conn, _, _ := w.(http.Hijacker).Hijack() close(received) <-done conn.Close() } fmt.Fprintf(w, \"Hello, %s\", r.Proto) })) ts.EnableHTTP2 = false ts.StartTLS() defer ts.Close() u, err := url.Parse(ts.URL) if err != nil { t.Fatalf(\"failed to parse URL from %q: %v\", ts.URL, err) } lb := newLB(t, u.Host) defer lb.ln.Close() stopCh := make(chan struct{}) go lb.serve(stopCh) transport, ok := ts.Client().Transport.(*http.Transport) if !ok { t.Fatal(\"failed to assert *http.Transport\") } config := &Config{ Host: \"https://\" + lb.ln.Addr().String(), Transport: utilnet.SetTransportDefaults(transport), // Use something extraordinary large to not hit the timeout Timeout: wait.ForeverTestTimeout, // These fields are required to create a REST client. 
ContentConfig: ContentConfig{ GroupVersion: &schema.GroupVersion{}, NegotiatedSerializer: &serializer.CodecFactory{}, }, } config.TLSClientConfig.NextProtos = []string{\"http/1.1\"} client, err := RESTClientFor(config) if err != nil { t.Fatalf(\"failed to create REST client: %v\", err) } // The request will connect, hang and eventually time out // but we can use a context to close once the test is done // we are only interested in have an inflight connection ctx, cancel := context.WithCancel(context.Background()) reqErrCh := make(chan error, 1) defer close(reqErrCh) go func() { _, err = client.Get().AbsPath(\"/hang\").DoRaw(ctx) reqErrCh <- err }() // wait until it connect to the server select { case <-received: case <-time.After(wait.ForeverTestTimeout): t.Fatal(\"Test timed out waiting for first request to fail\") } // Deliberately let the LB stop proxying traffic for the current // connection. This mimics a broken TCP connection that's not properly // closed. close(stopCh) stopCh = make(chan struct{}) go lb.serve(stopCh) // New request will fail if tries to reuse the connection data, err := client.Get().AbsPath(\"/\").DoRaw(context.Background()) if err != nil { t.Fatalf(\"unexpected err: %v\", err) } if string(data) != \"Hello, HTTP/1.1\" { t.Fatalf(\"unexpected response: %s\", data) } dials := atomic.LoadInt32(&lb.dials) if dials != 2 { t.Fatalf(\"expected %d dials, got %d\", 2, dials) } // cancel the in-flight connection cancel() select { case <-reqErrCh: if err == nil { t.Fatal(\"Connection succeeded but was expected to timeout\") } case <-time.After(10 * time.Second): t.Fatal(\"Test timed out waiting for the request to fail\") } } func TestRestClientTimeout(t *testing.T) { ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { time.Sleep(2 * time.Second)"} {"_id":"doc-en-kubernetes-7e66759cc0960f956fd1839ac22ab7e507686fc13ef0869c9ce6d8126ed2c27e","title":"","text":"metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" 
\"k8s.io/apimachinery/pkg/runtime\" \"k8s.io/apimachinery/pkg/runtime/schema\" \"k8s.io/apimachinery/pkg/runtime/serializer\" \"k8s.io/apimachinery/pkg/runtime/serializer/streaming\" \"k8s.io/apimachinery/pkg/util/diff\" \"k8s.io/apimachinery/pkg/util/httpstream\" \"k8s.io/apimachinery/pkg/util/intstr\" utilnet \"k8s.io/apimachinery/pkg/util/net\" \"k8s.io/apimachinery/pkg/watch\" \"k8s.io/client-go/kubernetes/scheme\" restclientwatch \"k8s.io/client-go/rest/watch\""} {"_id":"doc-en-kubernetes-c85d8891e2bc9829396fe6c77c1a0fab5f5a33bb80a57aa557d6c61065f88875","title":"","text":"func testRESTClientWithConfig(t testing.TB, srv *httptest.Server, contentConfig ClientContentConfig) *RESTClient { base, _ := url.Parse(\"http://localhost\") var c *http.Client if srv != nil { var err error base, err = url.Parse(srv.URL) if err != nil { t.Fatalf(\"failed to parse test URL: %v\", err) } c = srv.Client() } versionedAPIPath := defaultResourcePathWithPrefix(\"\", \"\", \"\", \"\") client, err := NewRESTClient(base, versionedAPIPath, contentConfig, nil, c) if err != nil { t.Fatalf(\"failed to create a client: %v\", err) }"} {"_id":"doc-en-kubernetes-b689b62f1b7f411a1b64dba4eb65e0b5df9e61c783f81eb48524172d327ad921","title":"","text":"}) } } func TestReuseRequest(t *testing.T) { var tests = []struct { name string enableHTTP2 bool }{ {\"HTTP1\", false}, {\"HTTP2\", true}, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte(r.RemoteAddr)) })) ts.EnableHTTP2 = tt.enableHTTP2 ts.StartTLS() defer ts.Close() ctx, cancel := context.WithCancel(context.Background()) defer cancel() c := testRESTClient(t, ts) req1, err := c.Verb(\"GET\"). Prefix(\"foo\"). DoRaw(ctx) if err != nil { t.Fatalf(\"Unexpected error: %v\", err) } req2, err := c.Verb(\"GET\"). Prefix(\"foo\"). 
DoRaw(ctx) if err != nil { t.Fatalf(\"Unexpected error: %v\", err) } if string(req1) != string(req2) { t.Fatalf(\"Expected %v to be equal to %v\", string(req1), string(req2)) } }) } } func TestHTTP1DoNotReuseRequestAfterTimeout(t *testing.T) { var tests = []struct { name string enableHTTP2 bool }{ {\"HTTP1\", false}, {\"HTTP2\", true}, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { done := make(chan struct{}) ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { t.Logf(\"TEST Connected from %v on %vn\", r.RemoteAddr, r.URL.Path) if r.URL.Path == \"/hang\" { t.Logf(\"TEST hanging %vn\", r.RemoteAddr) <-done } w.Write([]byte(r.RemoteAddr)) })) ts.EnableHTTP2 = tt.enableHTTP2 ts.StartTLS() defer ts.Close() // close hanging connection before shutting down the http server defer close(done) ctx, cancel := context.WithCancel(context.Background()) defer cancel() transport, ok := ts.Client().Transport.(*http.Transport) if !ok { t.Fatalf(\"failed to assert *http.Transport\") } config := &Config{ Host: ts.URL, Transport: utilnet.SetTransportDefaults(transport), Timeout: 100 * time.Millisecond, // These fields are required to create a REST client. ContentConfig: ContentConfig{ GroupVersion: &schema.GroupVersion{}, NegotiatedSerializer: &serializer.CodecFactory{}, }, } if !tt.enableHTTP2 { config.TLSClientConfig.NextProtos = []string{\"http/1.1\"} } c, err := RESTClientFor(config) if err != nil { t.Fatalf(\"failed to create REST client: %v\", err) } req1, err := c.Verb(\"GET\"). Prefix(\"foo\"). DoRaw(ctx) if err != nil { t.Fatalf(\"Unexpected error: %v\", err) } _, err = c.Verb(\"GET\"). Prefix(\"/hang\"). DoRaw(ctx) if err == nil { t.Fatalf(\"Expected error\") } req2, err := c.Verb(\"GET\"). Prefix(\"foo\"). 
DoRaw(ctx) if err != nil { t.Fatalf(\"Unexpected error: %v\", err) } // http1 doesn't reuse the connection after it times if tt.enableHTTP2 != (string(req1) == string(req2)) { if tt.enableHTTP2 { t.Fatalf(\"Expected %v to be the same as %v\", string(req1), string(req2)) } else { t.Fatalf(\"Expected %v to be different to %v\", string(req1), string(req2)) } } }) } } func TestTransportConcurrency(t *testing.T) { const numReqs = 10 var tests = []struct { name string enableHTTP2 bool }{ {\"HTTP1\", false}, {\"HTTP2\", true}, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { t.Logf(\"Connected from %v %v\", r.RemoteAddr, r.URL) fmt.Fprintf(w, \"%v\", r.FormValue(\"echo\")) })) ts.EnableHTTP2 = tt.enableHTTP2 ts.StartTLS() defer ts.Close() var wg sync.WaitGroup wg.Add(numReqs) c := testRESTClient(t, ts) reqs := make(chan string) defer close(reqs) for i := 0; i < 4; i++ { go func() { for req := range reqs { res, err := c.Get().Param(\"echo\", req).DoRaw(context.Background()) if err != nil { t.Errorf(\"error on req %s: %v\", req, err) wg.Done() continue } if string(res) != req { t.Errorf(\"body of req %s = %q; want %q\", req, res, req) } wg.Done() } }() } for i := 0; i < numReqs; i++ { reqs <- fmt.Sprintf(\"request-%d\", i) } wg.Wait() }) } } "} {"_id":"doc-en-kubernetes-ed84afd51951ac0b21a9d020b7a1a6f82f5f115c12a9a018bc06452fb51012aa","title":"","text":"- [Deprecation of PodSecurityPolicy](#deprecation-of-podsecuritypolicy) - [Kubernetes API Reference Documentation](#kubernetes-api-reference-documentation) - [Kustomize Updates in Kubectl](#kustomize-updates-in-kubectl) - [Default Container Labels](#default-container-labels) - [Default Container Annotation](#default-container-annotation) - [Immutable Secrets and ConfigMaps](#immutable-secrets-and-configmaps) - [Structured Logging in Kubelet](#structured-logging-in-kubelet) - [Storage Capacity 
Tracking](#storage-capacity-tracking)"} {"_id":"doc-en-kubernetes-21f21a306230e84aee51c2b53326a067d600e748090732dc481d6dbf2844ec37","title":"","text":"The [Kustomize](https://github.com/kubernetes-sigs/kustomize) version in kubectl jumped from v2.0.3 to [v4.0.5](https://github.com/kubernetes/kubernetes/pull/98946). Kustomize is now treated as a library, and future updates will be less sporadic. ### Default Container Annotation A Pod with multiple containers can use the `kubectl.kubernetes.io/default-container` annotation to have a container preselected for kubectl commands. More can be read in [KEP-2227](https://github.com/kubernetes/enhancements/blob/master/keps/sig-cli/2227-kubectl-default-container/README.md). 
### Immutable Secrets and ConfigMaps"} {"_id":"doc-en-kubernetes-8f1345774fd4700172cfcfad6cfb27c9ac41f5a2c0def80c1483c54b268d4e77","title":"","text":"w.Write(LEVEL_2, \"NodeName:t%sn\", nodeNameText) zoneText := \"\" if endpoint.Zone != nil { zoneText = *endpoint.Zone } w.Write(LEVEL_2, \"Zone:t%sn\", zoneText)"} {"_id":"doc-en-kubernetes-dafac4dd7690088332508390c3763a72fa61000e64dc8701b425875d72c61fe2","title":"","text":"Addresses: []string{\"1.2.3.6\", \"1.2.3.7\"}, Conditions: discoveryv1.EndpointConditions{Ready: utilpointer.BoolPtr(true)}, TargetRef: &corev1.ObjectReference{Kind: \"Pod\", Name: \"test-124\"}, NodeName: utilpointer.StringPtr(\"node-2\"), }, }, Ports: []discoveryv1.EndpointPort{"} {"_id":"doc-en-kubernetes-65a3b913a3097252f7044442fa2d7476dab45f180406bbdf87563e06a1196609","title":"","text":"Ready: true Hostname: TargetRef: Pod/test-124 NodeName: node-2 Zone: Events: ` + \"n\", },"} {"_id":"doc-en-kubernetes-5cf38097eb66a71a57c8659aeee47f5bcf5e3c7e267387822f025f5820ddbc01","title":"","text":"# Windows path it'll use it instead of the default unzipper. # See: https://github.com/containerd/containerd/issues/1896 Add-MachineEnvironmentPath -Path $PIGZ_ROOT # Add process exclusion for Windows Defender to boost performance. Add-MpPreference -ExclusionProcess \"$PIGZ_ROOTunpigz.exe\" Log-Output \"Installed Pigz $PIGZ_VERSION\" } else {"} {"_id":"doc-en-kubernetes-3cd7ad0a6e76c236b214015fcfee447773fcc23d1f287f67d94a976339731b08","title":"","text":"# TODO(pjh): move the logging agent code below into a separate # module; it was put here temporarily to avoid disrupting the file layout in # the K8s release machinery. 
$LOGGINGAGENT_VERSION = '1.6.0' $LOGGINGAGENT_VERSION = '1.7.3' $LOGGINGAGENT_ROOT = 'C:fluent-bit' $LOGGINGAGENT_SERVICE = 'fluent-bit' $LOGGINGAGENT_CMDLINE = '*fluent-bit.exe*' $LOGGINGEXPORTER_VERSION = 'v0.10.3' $LOGGINGEXPORTER_VERSION = 'v0.16.2' $LOGGINGEXPORTER_ROOT = 'C:flb-exporter' $LOGGINGEXPORTER_SERVICE = 'flb-exporter' $LOGGINGEXPORTER_CMDLINE = '*flb-exporter.exe*'"} {"_id":"doc-en-kubernetes-75ec052fc512b5d0f8a6cb62e54b1bde8782d26795ec886642e2be36de13d90a","title":"","text":"$fluentbit_parser_file = \"$LOGGINGAGENT_ROOTconfparsers.conf\" $PARSERS_CONFIG | Out-File -FilePath $fluentbit_parser_file -Encoding ASCII # Create directory for all the log position files. New-Item -Type Directory -Path \"/var/run/google-fluentbit/pos-files/\" Log-Output \"Wrote logging config to $fluentbit_parser_file\" }"} {"_id":"doc-en-kubernetes-495a07bdb8c542b5542dc9675959de7279bf530cf8d0033722c8f12b93cb7012","title":"","text":"[SERVICE] Flush 5 Grace 120 Log_Level debug Log_Level info Log_File /var/log/fluentbit.log Daemon off Parsers_File parsers.conf"} {"_id":"doc-en-kubernetes-d97de0202439b80557a32ca70a402a348a64bb1eec440779f8716dea91085f7e","title":"","text":"# # storage.backlog.mem_limit 5M [INPUT] Name winlog Interval_Sec 2 # Channels Setup,Windows PowerShell Channels application,system,security Tag winevent.raw DB winlog.sqlite # DB /var/run/google-fluentbit/pos-files/winlog.db # Json Log Example: # {\"log\":\"[info:2016-02-16T16:04:05.930-08:00] Some log text heren\",\"stream\":\"stdout\",\"time\":\"2016-02-17T00:04:05.931087621Z\"}"} {"_id":"doc-en-kubernetes-a74df0e798ddf0c6397032a80ff491e9e952c884bf8062bbe934a85b7682324c","title":"","text":"Alias kube_containers Tag kube___ Tag_Regex (?[a-z0-9]([-a-z0-9]*[a-z0-9])?(.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?[^_]+)_(?.+)- Path /var/log/containers/*.log Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 5 DB flb_kube.db # Settings from fluentd missing here. 
# tag reform.* # format json # time_key time # time_format %Y-%m-%dT%H:%M:%S.%NZ Path C:varlogcontainers*.log DB /var/run/google-fluentbit/pos-files/flb_kube.db [FILTER] Name parser Match kube_* Key_Name log Reserve_Data True Parser docker Parser containerd # Example: # I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537] # I0928 03:15:50.440223 4880 main.go:51] Starting CSI-Proxy Server ... [INPUT] Name tail Alias kubelet Tag kubelet #Multiline on #Multiline_Flush 5 Alias csi-proxy Tag csi-proxy Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 5 Path /etc/kubernetes/logs/kubelet.log DB /etc/kubernetes/logs/gcp-kubelet.db # Copied from fluentbit config. How is this used ? In match stages ? Parser_Firstline /^wd{4}/ Parser_1 ^(?w)(? Path /etc/kubernetes/logs/csi-proxy.log DB /var/run/google-fluentbit/pos-files/csi-proxy.db Multiline On Parser_Firstline glog # Example: # I0928 03:15:50.440223 4880 main.go:51] Starting CSI-Proxy Server ... # I1118 21:26:53.975789 6 proxier.go:1096] Port \"nodePort for kube-system/default-http-backend:http\" (:31429/tcp) was open before and is still needed [INPUT] Name tail Alias csi-proxy Tag csi-proxy #Multiline on #Multiline_Flush 5 Alias kube-proxy Tag kube-proxy Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 5 Path /etc/kubernetes/logs/csi-proxy.log DB /etc/kubernetes/logs/gcp-csi-proxy.db # Copied from fluentbit config. How is this used ? In match stages ? Parser_Firstline /^wd{4}/ Parser_1 ^(?w)(? 
Path /etc/kubernetes/logs/kube-proxy.log DB /var/run/google-fluentbit/pos-files/kube-proxy.db Multiline On Parser_Firstline glog # Example: # time=\"2019-12-10T21:27:59.836946700Z\" level=info msg=\"loading plugin \"io.containerd.grpc.v1.cri\"...\" type=io.containerd.grpc.v1 [INPUT] Name tail Alias container-runtime Tag container-runtime #Multiline on #Multiline_Flush 5 Tag container-runtime Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 5 Path /etc/kubernetes/logs/containerd.log DB /etc/kubernetes/logs/gcp-containerd.log.pos # Copied from fluentbit config. How is this used ? In match stages ? Parser_Firstline /^wd{4}/ Parser_1 ^(?w)(? DB /var/run/google-fluentbit/pos-files/container-runtime.db # TODO: Add custom parser for containerd logs once format is settled. # missing from fluentbit # time_format %m%d %H:%M:%S.%N # I1118 21:26:53.975789 6 proxier.go:1096] Port \"nodePort for kube-system/default-http-backend:http\" (:31429/tcp) was open before and is still needed # Example: # I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537] [INPUT] Name tail Alias kube-proxy Tag kube-proxy #Multiline on #Multiline_Flush 5 Alias kubelet Tag kubelet Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 5 Path /etc/kubernetes/logs/kube-proxy.log DB /etc/kubernetes/logs/gcp-kubeproxy.db # Copied from fluentbit config. How is this used ? In match stages ? Parser_Firstline /^wd{4}/ Parser_1 ^(?w)(? 
Path /etc/kubernetes/logs/kubelet.log DB /var/run/google-fluentbit/pos-files/kubelet.db Multiline On Parser_Firstline glog [FILTER] Name modify Match * Hard_rename log message # [OUTPUT] # Name http # Match * # Host 127.0.0.1 # Port 2021 # URI /logs # header_tag FLUENT-TAG # Format msgpack # Retry_Limit 2 [FILTER] Name parser Match kube_* Key_Name message Reserve_Data True Parser glog Parser json [OUTPUT] name stackdriver match * Name http Match * Host 127.0.0.1 Port 2021 URI /logs header_tag FLUENT-TAG Format msgpack Retry_Limit 2 '@ # Fluentbit parsers config file $PARSERS_CONFIG = @' [PARSER] Name docker Format json"} {"_id":"doc-en-kubernetes-23fbc61f7a28fe3f931e7b24d2e82d40e2941562cdc2544440b738b7dbd273ce","title":"","text":"Time_Key timestamp Time_Format %Y-%m-%dT%H:%M:%S.%L%z # ---------- [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On [PARSER] Name syslog-rfc5424 Format regex"} {"_id":"doc-en-kubernetes-5a4c7307dc03a412a270ef407bfa502aee997fc72e05b80614fb1d1dfc9547af","title":"","text":"$PARSERS_CONFIG | Out-File -FilePath $fluentbit_parser_file -Encoding ASCII # Create directory for all the log position files. 
New-Item -Type Directory -Path \"/var/run/google-fluentbit/pos-files/\" -Force | Out-Null Log-Output \"Wrote logging config to $fluentbit_parser_file\" }"} {"_id":"doc-en-kubernetes-6ccd0d68a678c5a65dd9eb1ec83155726e78e337e69042fc0f4e4c10e7193959","title":"","text":"func updatePod(client clientset.Interface, pod *v1.Pod, condition *v1.PodCondition, nominatedNode string) error { klog.V(3).InfoS(\"Updating pod condition\", \"pod\", klog.KObj(pod), \"conditionType\", condition.Type, \"conditionStatus\", condition.Status, \"conditionReason\", condition.Reason) podStatusCopy := pod.Status.DeepCopy() // NominatedNodeName is updated only if we are trying to set it, and the value is // different from the existing one. if !podutil.UpdatePodCondition(podStatusCopy, condition) && (len(nominatedNode) == 0 || pod.Status.NominatedNodeName == nominatedNode) { return nil } if nominatedNode != \"\" { podStatusCopy.NominatedNodeName = nominatedNode } return util.PatchPodStatus(client, pod, podStatusCopy) } // assume signals to the cache that a pod is already in the cache, so that binding can be asynchronous."} {"_id":"doc-en-kubernetes-ad8189506a6f58ad8ffb5365c90cc4f03ddd77498d632354e0c1f180a585c8c6","title":"","text":"return GetPodStartTime(pod1).Before(GetPodStartTime(pod2)) } // PatchPodStatus calculates the delta bytes change from <old> to <newStatus>, // and then submits a request to the API server to patch the pod's status. 
func PatchPodStatus(cs kubernetes.Interface, old *v1.Pod, newStatus *v1.PodStatus) error { if newStatus == nil { return nil } oldData, err := json.Marshal(v1.Pod{Status: old.Status}) if err != nil { return err } newData, err := json.Marshal(v1.Pod{Status: *newStatus}) if err != nil { return err }"} {"_id":"doc-en-kubernetes-94e68225d207bdcde6b92b6b6316f8a30d4b84104da9923292c1adbb65f2ca16","title":"","text":"if len(p.Status.NominatedNodeName) == 0 { continue } podStatusCopy := p.Status.DeepCopy() podStatusCopy.NominatedNodeName = \"\" if err := PatchPodStatus(cs, p, podStatusCopy); err != nil { errs = append(errs, err) } }"} {"_id":"doc-en-kubernetes-f4d9fe29293608a3ad0c45e403df181ec396b4ed5c70d50abac5a0682ef0baaf","title":"","text":"package util import ( \"context\" \"fmt\" \"github.com/google/go-cmp/cmp\" \"testing\" \"time\""} {"_id":"doc-en-kubernetes-a808255da8a57e4775a86d359f489655b3d3b14f0caddca596c571ffdea74665","title":"","text":"}) } } func TestPatchPodStatus(t *testing.T) { tests := []struct { name string pod v1.Pod statusToUpdate v1.PodStatus }{ { name: \"Should update pod conditions successfully\", pod: v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Namespace: \"ns\", Name: \"pod1\", }, Spec: v1.PodSpec{ ImagePullSecrets: []v1.LocalObjectReference{{Name: \"foo\"}}, }, }, statusToUpdate: v1.PodStatus{ Conditions: []v1.PodCondition{ { Type: v1.PodScheduled, Status: v1.ConditionFalse, }, }, }, }, { // ref: #101697, #94626 - ImagePullSecrets are allowed to have empty secret names // which would fail the 2-way merge patch generation on Pod patches // due to the mergeKey being the name field name: \"Should update pod conditions successfully on a pod Spec with secrets with empty name\", pod: v1.Pod{ ObjectMeta: 
metav1.ObjectMeta{ Namespace: \"ns\", Name: \"pod2\", }, Spec: v1.PodSpec{ // this will serialize to imagePullSecrets:[{}] ImagePullSecrets: make([]v1.LocalObjectReference, 1), }, }, statusToUpdate: v1.PodStatus{ Conditions: []v1.PodCondition{ { Type: v1.PodScheduled, Status: v1.ConditionFalse, }, }, }, }, } client := clientsetfake.NewSimpleClientset() for _, tc := range tests { t.Run(tc.name, func(t *testing.T) { _, err := client.CoreV1().Pods(tc.pod.Namespace).Create(context.TODO(), &tc.pod, metav1.CreateOptions{}) if err != nil { t.Fatal(err) } err = PatchPodStatus(client, &tc.pod, &tc.statusToUpdate) if err != nil { t.Fatal(err) } retrievedPod, err := client.CoreV1().Pods(tc.pod.Namespace).Get(context.TODO(), tc.pod.Name, metav1.GetOptions{}) if err != nil { t.Fatal(err) } if diff := cmp.Diff(tc.statusToUpdate, retrievedPod.Status); diff != \"\" { t.Errorf(\"unexpected pod status (-want,+got):n%s\", diff) } }) } } "} {"_id":"doc-en-kubernetes-0ae3493bdafcd2cfa1ddf0dec581c54b2be5291b5e50878392c3370a98a37931","title":"","text":"// FieldImmutableErrorMsg is a error message for field is immutable. const FieldImmutableErrorMsg string = `field is immutable` const totalAnnotationSizeLimitB int = 256 * (1 << 10) // 256 kB const TotalAnnotationSizeLimitB int = 256 * (1 << 10) // 256 kB // BannedOwners is a black list of object that are not allowed to be owners. var BannedOwners = map[schema.GroupVersionKind]struct{}{"} {"_id":"doc-en-kubernetes-82745622af70ba711fef92e167f1299dd5469f4e2002ec3348f7042a5786d985","title":"","text":"// ValidateAnnotations validates that a set of annotations are correctly defined. 
func ValidateAnnotations(annotations map[string]string, fldPath *field.Path) field.ErrorList { allErrs := field.ErrorList{} for k := range annotations { for _, msg := range validation.IsQualifiedName(strings.ToLower(k)) { allErrs = append(allErrs, field.Invalid(fldPath, k, msg)) } } if err := ValidateAnnotationsSize(annotations); err != nil { allErrs = append(allErrs, field.TooLong(fldPath, \"\", TotalAnnotationSizeLimitB)) } return allErrs } func ValidateAnnotationsSize(annotations map[string]string) error { var totalSize int64 for k, v := range annotations { totalSize += (int64)(len(k)) + (int64)(len(v)) } if totalSize > (int64)(TotalAnnotationSizeLimitB) { return fmt.Errorf(\"annotations size %d is larger than limit %d\", totalSize, TotalAnnotationSizeLimitB) } return nil } func validateOwnerReference(ownerReference metav1.OwnerReference, fldPath *field.Path) field.ErrorList { allErrs := field.ErrorList{} gvk := schema.FromAPIVersionAndKind(ownerReference.APIVersion, ownerReference.Kind)"} {"_id":"doc-en-kubernetes-b21f4222d854a059bdae3ed9c2f834eaefd3d069df20510a6a76b201e6fff8ee","title":"","text":"{\"1234/5678\": \"bar\"}, {\"1.2.3.4/5678\": \"bar\"}, {\"UpperCase123\": \"bar\"}, {\"a\": strings.Repeat(\"b\", TotalAnnotationSizeLimitB-1)}, { \"a\": strings.Repeat(\"b\", TotalAnnotationSizeLimitB/2-1), \"c\": strings.Repeat(\"d\", TotalAnnotationSizeLimitB/2-1), }, } for i := range successCases {"} {"_id":"doc-en-kubernetes-f2eaed993bae425e6f8554913eef95f63320acc66c7d8524d4261344f49a5d37","title":"","text":"} } totalSizeErrorCases := 
[]map[string]string{ {\"a\": strings.Repeat(\"b\", TotalAnnotationSizeLimitB)}, { \"a\": strings.Repeat(\"b\", TotalAnnotationSizeLimitB/2), \"c\": strings.Repeat(\"d\", TotalAnnotationSizeLimitB/2), }, } for i := range totalSizeErrorCases {"} {"_id":"doc-en-kubernetes-8f4b90ad7814df335406a1f13f772d4aa2e68bdba9244b25ee541c4533448138","title":"","text":"corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/meta\" apimachineryvalidation \"k8s.io/apimachinery/pkg/api/validation\" \"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured\" \"k8s.io/apimachinery/pkg/runtime\" ) type lastAppliedUpdater struct { fieldManager Manager }"} {"_id":"doc-en-kubernetes-915d18095d0c439d7ce0e993f8c7b1020c818c3d6df1903570ca320b2365aa9c","title":"","text":"annotations = map[string]string{} } annotations[corev1.LastAppliedConfigAnnotation] = value if err := apimachineryvalidation.ValidateAnnotationsSize(annotations); err != nil { delete(annotations, corev1.LastAppliedConfigAnnotation) } accessor.SetAnnotations(annotations)"} {"_id":"doc-en-kubernetes-0a81e314bdddb29fec0520998957f99426d63064398d6091507c443b18a641d8","title":"","text":"} return string(lastApplied), nil } "} {"_id":"doc-en-kubernetes-73a3bb476970faf2e25ef6a84181c9aadc1bf64328fda2ba35dba2ca03f0f985","title":"","text":"\"encoding/json\" \"fmt\" \"math/rand\" \"strings\" \"time\" 
\"github.com/davecgh/go-spew/spew\""} {"_id":"doc-en-kubernetes-76acfdf3cfde97854d278be73ba3de6d1cda7ff5d72931713ee820c6f0861fb6","title":"","text":"\"k8s.io/apimachinery/pkg/runtime/schema\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/apimachinery/pkg/util/intstr\" utilrand \"k8s.io/apimachinery/pkg/util/rand\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/apimachinery/pkg/watch\" \"k8s.io/client-go/dynamic\" clientset \"k8s.io/client-go/kubernetes\" appsclient \"k8s.io/client-go/kubernetes/typed/apps/v1\" watchtools \"k8s.io/client-go/tools/watch\" \"k8s.io/client-go/util/retry\" appsinternal \"k8s.io/kubernetes/pkg/apis/apps\" deploymentutil \"k8s.io/kubernetes/pkg/controller/deployment/util\" \"k8s.io/kubernetes/test/e2e/framework\""} {"_id":"doc-en-kubernetes-e5b990f82723e95ed268a3be05fb5914eeeff125984cd2010cac7ad4c318eaa1","title":"","text":"}) framework.ExpectNoError(err, \"failed to see %v event\", watch.Deleted) }) ginkgo.It(\"should validate Deployment Status endpoints\", func() { dClient := c.AppsV1().Deployments(ns) dName := \"test-deployment-\" + utilrand.String(5) labelSelector := \"e2e=testing\" w := &cache.ListWatch{ WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) { options.LabelSelector = labelSelector return dClient.Watch(context.TODO(), options) }, } dList, err := c.AppsV1().Deployments(\"\").List(context.TODO(), metav1.ListOptions{LabelSelector: labelSelector}) framework.ExpectNoError(err, \"failed to list Deployments\") ginkgo.By(\"creating a Deployment\") podLabels := map[string]string{\"name\": WebserverImageName, \"e2e\": \"testing\"} replicas := int32(1) framework.Logf(\"Creating simple deployment %s\", dName) d := e2edeployment.NewDeployment(dName, replicas, podLabels, WebserverImageName, WebserverImage, appsv1.RollingUpdateDeploymentStrategyType) deploy, err := c.AppsV1().Deployments(ns).Create(context.TODO(), d, metav1.CreateOptions{}) framework.ExpectNoError(err) // Wait for it to be updated to revision 1 err = 
e2edeployment.WaitForDeploymentRevisionAndImage(c, ns, dName, \"1\", WebserverImage) framework.ExpectNoError(err) err = e2edeployment.WaitForDeploymentComplete(c, deploy) framework.ExpectNoError(err) testDeployment, err := dClient.Get(context.TODO(), dName, metav1.GetOptions{}) framework.ExpectNoError(err) ginkgo.By(\"Getting /status\") dResource := schema.GroupVersionResource{Group: \"apps\", Version: \"v1\", Resource: \"deployments\"} dStatusUnstructured, err := f.DynamicClient.Resource(dResource).Namespace(ns).Get(context.TODO(), dName, metav1.GetOptions{}, \"status\") framework.ExpectNoError(err, \"Failed to fetch the status of deployment %s in namespace %s\", dName, ns) dStatusBytes, err := json.Marshal(dStatusUnstructured) framework.ExpectNoError(err, \"Failed to marshal unstructured response. %v\", err) var dStatus appsv1.Deployment err = json.Unmarshal(dStatusBytes, &dStatus) framework.ExpectNoError(err, \"Failed to unmarshal JSON bytes to a deployment object type\") framework.Logf(\"Deployment %s has Conditions: %v\", dName, dStatus.Status.Conditions) ginkgo.By(\"updating Deployment Status\") var statusToUpdate, updatedStatus *appsv1.Deployment err = retry.RetryOnConflict(retry.DefaultRetry, func() error { statusToUpdate, err = dClient.Get(context.TODO(), dName, metav1.GetOptions{}) framework.ExpectNoError(err, \"Unable to retrieve deployment %s\", dName) statusToUpdate.Status.Conditions = append(statusToUpdate.Status.Conditions, appsv1.DeploymentCondition{ Type: \"StatusUpdate\", Status: \"True\", Reason: \"E2E\", Message: \"Set from e2e test\", }) updatedStatus, err = dClient.UpdateStatus(context.TODO(), statusToUpdate, metav1.UpdateOptions{}) return err }) framework.ExpectNoError(err, \"Failed to update status. 
%v\", err) framework.Logf(\"updatedStatus.Conditions: %#v\", updatedStatus.Status.Conditions) ginkgo.By(\"watching for the Deployment status to be updated\") ctx, cancel := context.WithTimeout(context.Background(), dRetryTimeout) defer cancel() _, err = watchtools.Until(ctx, dList.ResourceVersion, w, func(event watch.Event) (bool, error) { if d, ok := event.Object.(*appsv1.Deployment); ok { found := d.ObjectMeta.Name == testDeployment.ObjectMeta.Name && d.ObjectMeta.Namespace == testDeployment.ObjectMeta.Namespace && d.Labels[\"e2e\"] == \"testing\" if !found { framework.Logf(\"Observed Deployment %v in namespace %v with annotations: %v & Conditions: %vn\", d.ObjectMeta.Name, d.ObjectMeta.Namespace, d.Annotations, d.Status.Conditions) return false, nil } for _, cond := range d.Status.Conditions { if cond.Type == \"StatusUpdate\" && cond.Reason == \"E2E\" && cond.Message == \"Set from e2e test\" { framework.Logf(\"Found Deployment %v in namespace %v with labels: %v annotations: %v & Conditions: %v\", d.ObjectMeta.Name, d.ObjectMeta.Namespace, d.ObjectMeta.Labels, d.Annotations, cond) return found, nil } framework.Logf(\"Observed Deployment %v in namespace %v with annotations: %v & Conditions: %v\", d.ObjectMeta.Name, d.ObjectMeta.Namespace, d.Annotations, cond) } } object := strings.Split(fmt.Sprintf(\"%v\", event.Object), \"{\")[0] framework.Logf(\"Observed %v event: %+v\", object, event.Type) return false, nil }) framework.ExpectNoError(err, \"failed to locate Deployment %v in namespace %v\", testDeployment.ObjectMeta.Name, ns) framework.Logf(\"Deployment %s has an updated status\", dName) ginkgo.By(\"patching the Statefulset Status\") payload := []byte(`{\"status\":{\"conditions\":[{\"type\":\"StatusPatched\",\"status\":\"True\"}]}}`) framework.Logf(\"Patch payload: %v\", string(payload)) patchedDeployment, err := dClient.Patch(context.TODO(), dName, types.MergePatchType, payload, metav1.PatchOptions{}, \"status\") framework.ExpectNoError(err, \"Failed to patch 
status. %v\", err) framework.Logf(\"Patched status conditions: %#v\", patchedDeployment.Status.Conditions) ginkgo.By(\"watching for the Deployment status to be patched\") ctx, cancel = context.WithTimeout(context.Background(), dRetryTimeout) defer cancel() _, err = watchtools.Until(ctx, dList.ResourceVersion, w, func(event watch.Event) (bool, error) { if e, ok := event.Object.(*appsv1.Deployment); ok { found := e.ObjectMeta.Name == testDeployment.ObjectMeta.Name && e.ObjectMeta.Namespace == testDeployment.ObjectMeta.Namespace && e.ObjectMeta.Labels[\"e2e\"] == testDeployment.ObjectMeta.Labels[\"e2e\"] if !found { framework.Logf(\"Observed deployment %v in namespace %v with annotations: %v & Conditions: %v\", testDeployment.ObjectMeta.Name, testDeployment.ObjectMeta.Namespace, testDeployment.Annotations, testDeployment.Status.Conditions) return false, nil } for _, cond := range e.Status.Conditions { if cond.Type == \"StatusPatched\" { framework.Logf(\"Found deployment %v in namespace %v with labels: %v annotations: %v & Conditions: %v\", testDeployment.ObjectMeta.Name, testDeployment.ObjectMeta.Namespace, testDeployment.ObjectMeta.Labels, testDeployment.Annotations, cond) return found, nil } framework.Logf(\"Observed deployment %v in namespace %v with annotations: %v & Conditions: %v\", testDeployment.ObjectMeta.Name, testDeployment.ObjectMeta.Namespace, testDeployment.Annotations, cond) } } object := strings.Split(fmt.Sprintf(\"%v\", event.Object), \"{\")[0] framework.Logf(\"Observed %v event: %+v\", object, event.Type) return false, nil }) framework.ExpectNoError(err, \"failed to locate deployment %v in namespace %v\", testDeployment.ObjectMeta.Name, ns) framework.Logf(\"Deployment %s has a patched status\", dName) }) }) func failureTrap(c clientset.Interface, ns string) {"} {"_id":"doc-en-kubernetes-2b9a330e6d97f060759ca9ef35bc932e46b14df554d862047c444904f4742be9","title":"","text":"MUST succeed. It MUST succeed when deleting the Deployment. 
release: v1.20 file: test/e2e/apps/deployment.go - testname: Deployment, status sub-resource codename: '[sig-apps] Deployment should validate Deployment Status endpoints [Conformance]' description: When a Deployment is created it MUST succeed. Attempt to read, update and patch its status sub-resource; all mutating sub-resource operations MUST be visible to subsequent reads. release: v1.22 file: test/e2e/apps/deployment.go - testname: 'PodDisruptionBudget: list and delete collection' codename: '[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]'"}
{"_id":"doc-en-kubernetes-e0acbf75126c8fe7a209e94517af48d42cc612a77a438f777b330aae41bd4685","title":"","text":"framework.ExpectNoError(err, \"failed to see %v event\", watch.Deleted) }) /* Release: v1.22 Testname: Deployment, status sub-resource Description: When a Deployment is created it MUST succeed. Attempt to read, update and patch its status sub-resource; all mutating sub-resource operations MUST be visible to subsequent reads.
*/ framework.ConformanceIt(\"should validate Deployment Status endpoints\", func() { dClient := c.AppsV1().Deployments(ns) dName := \"test-deployment-\" + utilrand.String(5) labelSelector := \"e2e=testing\""} {"_id":"doc-en-kubernetes-e965739a4b55eb8369cbc467a48d71459625d07f9f91de200b96fa0cede4a923","title":"","text":"clone, err := conversion.NewCloner().DeepCopy(&volume) if err != nil { glog.Errorf(\"error cloning volume %q: %v\", volume.Name, err) continue } volumeClone := clone.(*api.PersistentVolume) ctrl.storeVolumeUpdate(volumeClone)"} {"_id":"doc-en-kubernetes-10cc9b1c024ec30ea1038527a77ae1068856a552c922528301fff079d93a9e7f","title":"","text":"clone, err := conversion.NewCloner().DeepCopy(&claim) if err != nil { glog.Errorf(\"error cloning claim %q: %v\", claimToClaimKey(&claim), err) continue } claimClone := clone.(*api.PersistentVolumeClaim) ctrl.storeClaimUpdate(claimClone)"} {"_id":"doc-en-kubernetes-8eb968bc2eb4fc2433b98a4bcd77777af220397915898b58c4d54b54f530c281","title":"","text":"return false } accessModes := c.spec.PersistentVolume.Spec.AccessModes if c.spec.PersistentVolume == nil { klog.V(4).Info(log(\"mounter.SetupAt Warning: skipping fsGroup permission change, no access mode available. 
The volume may only be accessible to root users.\")) return false } if c.spec.PersistentVolume.Spec.AccessModes == nil { klog.V(4).Info(log(\"mounter.SetupAt WARNING: skipping fsGroup, access modes not provided\")) return false } if !hasReadWriteOnce(c.spec.PersistentVolume.Spec.AccessModes) { klog.V(4).Info(log(\"mounter.SetupAt WARNING: skipping fsGroup, only support ReadWriteOnce access mode\")) return false }"}
{"_id":"doc-en-kubernetes-73590c036f7bed6f6de900e77059a75cd2748c82cbf1a419c21247fcaf3c0270","title":"","text":"authenticationv1 \"k8s.io/api/authentication/v1\" api \"k8s.io/api/core/v1\" corev1 \"k8s.io/api/core/v1\" storage \"k8s.io/api/storage/v1\" meta \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/runtime\""}
{"_id":"doc-en-kubernetes-cdbbc23993cbc75638d1152dc1130164abe20ec5b400a4d54b955aa91df45408","title":"","text":"}) } } func Test_csiMountMgr_supportsFSGroup(t *testing.T) { type fields struct { plugin *csiPlugin driverName csiDriverName volumeLifecycleMode storage.VolumeLifecycleMode fsGroupPolicy storage.FSGroupPolicy volumeID string specVolumeID string readOnly bool supportsSELinux bool spec *volume.Spec pod *api.Pod podUID types.UID publishContext map[string]string kubeVolHost volume.KubeletVolumeHost MetricsProvider volume.MetricsProvider } type args struct { fsType string fsGroup *int64 driverPolicy storage.FSGroupPolicy } tests := []struct { name string fields fields args args want bool }{ { name: \"empty all\", args: args{}, want: false, }, { name: \"driverPolicy is FileFSGroupPolicy\", args: args{ fsGroup: new(int64), driverPolicy: storage.FileFSGroupPolicy, }, want: true, }, { name: \"driverPolicy is ReadWriteOnceWithFSTypeFSGroupPolicy\", args: args{ fsGroup: new(int64), driverPolicy: storage.ReadWriteOnceWithFSTypeFSGroupPolicy, }, want: false, }, { name: \"driverPolicy is ReadWriteOnceWithFSTypeFSGroupPolicy with empty Spec\", args: args{ fsGroup: new(int64), fsType: \"ext4\",
driverPolicy: storage.ReadWriteOnceWithFSTypeFSGroupPolicy, }, fields: fields{ spec: &volume.Spec{}, }, want: false, }, { name: \"driverPolicy is ReadWriteOnceWithFSTypeFSGroupPolicy with empty PersistentVolume\", args: args{ fsGroup: new(int64), fsType: \"ext4\", driverPolicy: storage.ReadWriteOnceWithFSTypeFSGroupPolicy, }, fields: fields{ spec: volume.NewSpecFromPersistentVolume(&corev1.PersistentVolume{}, true), }, want: false, }, { name: \"driverPolicy is ReadWriteOnceWithFSTypeFSGroupPolicy with empty AccessModes\", args: args{ fsGroup: new(int64), fsType: \"ext4\", driverPolicy: storage.ReadWriteOnceWithFSTypeFSGroupPolicy, }, fields: fields{ spec: volume.NewSpecFromPersistentVolume(&api.PersistentVolume{ Spec: api.PersistentVolumeSpec{ AccessModes: []api.PersistentVolumeAccessMode{}, }, }, true), }, want: false, }, { name: \"driverPolicy is ReadWriteOnceWithFSTypeFSGroupPolicy with ReadWriteOnce AccessModes\", args: args{ fsGroup: new(int64), fsType: \"ext4\", driverPolicy: storage.ReadWriteOnceWithFSTypeFSGroupPolicy, }, fields: fields{ spec: volume.NewSpecFromPersistentVolume(&api.PersistentVolume{ Spec: api.PersistentVolumeSpec{ AccessModes: []api.PersistentVolumeAccessMode{api.ReadWriteOnce}, }, }, true), }, want: true, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { c := &csiMountMgr{ plugin: tt.fields.plugin, driverName: tt.fields.driverName, volumeLifecycleMode: tt.fields.volumeLifecycleMode, fsGroupPolicy: tt.fields.fsGroupPolicy, volumeID: tt.fields.volumeID, specVolumeID: tt.fields.specVolumeID, readOnly: tt.fields.readOnly, supportsSELinux: tt.fields.supportsSELinux, spec: tt.fields.spec, pod: tt.fields.pod, podUID: tt.fields.podUID, publishContext: tt.fields.publishContext, kubeVolHost: tt.fields.kubeVolHost, MetricsProvider: tt.fields.MetricsProvider, } if got := c.supportsFSGroup(tt.args.fsType, tt.args.fsGroup, tt.args.driverPolicy); got != tt.want { t.Errorf(\"supportsFSGroup() = %v, want %v\", got, tt.want) } }) } } "} 
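The table-driven test above exercises how the CSI mounter decides whether to apply an fsGroup ownership change. The following is a simplified, stdlib-only sketch of that decision, not the kubelet's actual `csiMountMgr.supportsFSGroup` method (which is a three-argument method reading access modes from the volume spec); the standalone function, its fourth parameter, and the string constants are illustrative assumptions.

```go
package main

import "fmt"

// FSGroupPolicy mirrors the storage.FSGroupPolicy values exercised by the
// test table; this helper is a simplified stand-in, not the kubelet code.
type FSGroupPolicy string

const (
	ReadWriteOnceWithFSTypeFSGroupPolicy FSGroupPolicy = "ReadWriteOnceWithFSType"
	FileFSGroupPolicy                    FSGroupPolicy = "File"
	NoneFSGroupPolicy                    FSGroupPolicy = "None"
)

// supportsFSGroup sketches the decision the table checks: "File" applies
// fsGroup whenever one is given, "None" never does, and the default
// "ReadWriteOnceWithFSType" additionally requires an fsType and a
// ReadWriteOnce access mode on the volume.
func supportsFSGroup(fsType string, fsGroup *int64, policy FSGroupPolicy, accessModes []string) bool {
	if fsGroup == nil {
		return false // matches the "empty all" case: nothing to apply
	}
	switch policy {
	case FileFSGroupPolicy:
		return true
	case NoneFSGroupPolicy:
		return false
	default: // ReadWriteOnceWithFSTypeFSGroupPolicy
		if fsType == "" {
			return false
		}
		for _, m := range accessModes {
			if m == "ReadWriteOnce" {
				return true
			}
		}
		return false
	}
}

func main() {
	gid := int64(1000)
	fmt.Println(supportsFSGroup("ext4", &gid, FileFSGroupPolicy, nil))
	fmt.Println(supportsFSGroup("ext4", &gid, ReadWriteOnceWithFSTypeFSGroupPolicy, []string{"ReadWriteOnce"}))
	fmt.Println(supportsFSGroup("ext4", &gid, ReadWriteOnceWithFSTypeFSGroupPolicy, nil))
}
```

This reproduces the want values of the table's cases (nil fsGroup false, File policy true, missing access modes false) without any of the plugin plumbing.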
{"_id":"doc-en-kubernetes-a130c4451efc22860f77d6fed2fb09a17f3941d7f12873811c69aa917b50885f","title":"","text":"# $3 is the value to set the --pull flag for docker build; true by default # $4 is the set of --build-args for docker. function kube::build::docker_build() { kube::util::ensure-docker-buildx local -r image=$1 local -r context_dir=$2 local -r pull=\"${3:-true}\""} {"_id":"doc-en-kubernetes-8c04645bf76954eee9faf9a0f01806b4af0aef3d6e865f770d86333fdcec9cb5","title":"","text":"\"unicode\" \"unicode/utf8\" \"github.com/google/go-cmp/cmp\" v1 \"k8s.io/api/core/v1\" apiequality \"k8s.io/apimachinery/pkg/api/equality\" \"k8s.io/apimachinery/pkg/api/resource\""} {"_id":"doc-en-kubernetes-cb83af9e4af546e7f98b6cbe863b22e1c9399f87e2082d91c5e69d9040db3354","title":"","text":"metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" unversionedvalidation \"k8s.io/apimachinery/pkg/apis/meta/v1/validation\" \"k8s.io/apimachinery/pkg/labels\" \"k8s.io/apimachinery/pkg/util/diff\" \"k8s.io/apimachinery/pkg/util/intstr\" \"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apimachinery/pkg/util/validation\""} {"_id":"doc-en-kubernetes-5f6f87669f90474adcaf27e5bea8ade0cc6f2d2a071b85cc8fdbb25ac97c55d3","title":"","text":"// PersistentVolumeSource should be immutable after creation. 
if !apiequality.Semantic.DeepEqual(newPv.Spec.PersistentVolumeSource, oldPv.Spec.PersistentVolumeSource) { pvcSourceDiff := cmp.Diff(oldPv.Spec.PersistentVolumeSource, newPv.Spec.PersistentVolumeSource) allErrs = append(allErrs, field.Forbidden(field.NewPath(\"spec\", \"persistentvolumesource\"), fmt.Sprintf(\"spec.persistentvolumesource is immutable after creation\n%v\", pvcSourceDiff))) } allErrs = append(allErrs, ValidateImmutableField(newPv.Spec.VolumeMode, oldPv.Spec.VolumeMode, field.NewPath(\"volumeMode\"))...)"}
{"_id":"doc-en-kubernetes-fb303da8d5207fe61aa66b8f7fd3e5e9609eb3b7fcfc82070bc24d50e23c86c7","title":"","text":"statusSize := oldPvc.Status.Capacity[\"storage\"] if !apiequality.Semantic.DeepEqual(newPvcClone.Spec, oldPvcClone.Spec) { specDiff := cmp.Diff(oldPvcClone.Spec, newPvcClone.Spec) allErrs = append(allErrs, field.Forbidden(field.NewPath(\"spec\"), fmt.Sprintf(\"spec is immutable after creation except resources.requests for bound claims\n%v\", specDiff))) } if newSize.Cmp(oldSize) < 0 {"}
{"_id":"doc-en-kubernetes-8cf87f99ac4fb28410a2f88fd44c8d77641ecab5c8958e5cfb0078e01d0d7bba","title":"","text":"// changes to Spec are not allowed, but updates to label/and some annotations are OK. // no-op updates pass validation.
if !apiequality.Semantic.DeepEqual(newPvcClone.Spec, oldPvcClone.Spec) { specDiff := cmp.Diff(oldPvcClone.Spec, newPvcClone.Spec) allErrs = append(allErrs, field.Forbidden(field.NewPath(\"spec\"), fmt.Sprintf(\"field is immutable after creation\n%v\", specDiff))) } }"}
{"_id":"doc-en-kubernetes-251a208186162a5097a64379248adc3a1c540909d88b02529ccc5dde068abc01","title":"","text":"if !apiequality.Semantic.DeepEqual(mungedPodSpec, oldPod.Spec) { // This diff isn't perfect, but it's a helluva lot better than \"I'm not going to tell you what the difference is\". //TODO: Pinpoint the specific field that causes the invalid error after we have strategic merge diff specDiff := cmp.Diff(oldPod.Spec, mungedPodSpec) allErrs = append(allErrs, field.Forbidden(specPath, fmt.Sprintf(\"pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds`, `spec.tolerations` (only additions to existing tolerations) or `spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)\n%v\", specDiff))) }"}
{"_id":"doc-en-kubernetes-768c950437c5064b5e3262e1c28f96185ffe9e454c00f71d948c5a7b65fe891e","title":"","text":"if new, ok := newContainerIndex[old.Name]; !ok { allErrs = append(allErrs, field.Forbidden(specPath, fmt.Sprintf(\"existing ephemeral containers %q may not be removed\n\", old.Name))) } else if !apiequality.Semantic.DeepEqual(old, *new) { specDiff := cmp.Diff(old, *new) allErrs = append(allErrs, field.Forbidden(specPath, fmt.Sprintf(\"existing ephemeral containers %q may not be changed\n%v\", old.Name, specDiff))) } }"}
{"_id":"doc-en-kubernetes-a987065e93b534e9c4ad7bd42a418a6aca2e707e211efcaf284019e03b96d0b5","title":"","text":"import ( \"context\" \"fmt\" \"os\" \"path/filepath\" \"strconv\" \"time\""}
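The validation hunks above all follow the same pattern: compare old and new specs with a semantic deep-equality check and, on mismatch, return a Forbidden error that embeds a diff. A minimal stdlib-only sketch of the pattern follows; the `pvSpec` type and `validateImmutable` helper are illustrative stand-ins (the real code compares API structs with `apiequality.Semantic.DeepEqual` and renders the mismatch with `cmp.Diff`).

```go
package main

import (
	"fmt"
	"reflect"
)

// pvSpec is an illustrative stand-in for the spec structs the validation
// code guards; real API objects have many more fields.
type pvSpec struct {
	Source     string
	VolumeMode string
}

// validateImmutable sketches the pattern: a no-op update passes, while any
// change to an immutable field yields a Forbidden-style error carrying a
// rough old/new dump in place of a structured diff.
func validateImmutable(oldSpec, newSpec pvSpec) error {
	if reflect.DeepEqual(oldSpec, newSpec) {
		return nil
	}
	return fmt.Errorf("spec is immutable after creation\n- old: %+v\n+ new: %+v", oldSpec, newSpec)
}

func main() {
	a := pvSpec{Source: "nfs", VolumeMode: "Filesystem"}
	b := a
	fmt.Println(validateImmutable(a, b) == nil) // no-op update passes
	b.Source = "iscsi"
	fmt.Println(validateImmutable(a, b) != nil) // mutation is rejected
}
```

Note the argument convention the migration settles on: `cmp.Diff(old, new)` reads as "what changed from old to new", whereas the replaced `diff.ObjectDiff` helpers took the arguments the other way around.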
{"_id":"doc-en-kubernetes-b8270999648a8dd45b2c967f68127229dab4d8ce74e53d8209302813a86165b9","title":"","text":"}) ginkgo.It(\"after restart dbus, should be able to gracefully shutdown\", func() { // allows manual restart of dbus to work in Ubuntu. err := overlayDbusConfig() framework.ExpectNoError(err) defer func() { err := restoreDbusConfig() framework.ExpectNoError(err) }() ginkgo.By(\"Restart Dbus\") err = restartDbus() framework.ExpectNoError(err) ginkgo.By(\"Emitting Shutdown signal\")"}
{"_id":"doc-en-kubernetes-76757c12c65de09f195f9aa0bd146216861820419b65b95da4c7735e2b7a2b58","title":"","text":"_, err := runCommand(\"sh\", \"-c\", cmd) return err } func systemctlDaemonReload() error { cmd := \"systemctl daemon-reload\" _, err := runCommand(\"sh\", \"-c\", cmd) return err } var ( dbusConfPath = \"/etc/systemd/system/dbus.service.d/k8s-graceful-node-shutdown-e2e.conf\" dbusConf = ` [Unit] RefuseManualStart=no RefuseManualStop=no [Service] KillMode=control-group ExecStop= ` ) func overlayDbusConfig() error { err := os.MkdirAll(filepath.Dir(dbusConfPath), 0755) if err != nil { return err } err = os.WriteFile(dbusConfPath, []byte(dbusConf), 0644) if err != nil { return err } return systemctlDaemonReload() } func restoreDbusConfig() error { err := os.Remove(dbusConfPath) if err != nil { return err } return systemctlDaemonReload() } "}
{"_id":"doc-en-kubernetes-2913ab4002d365a53991e9ff9c02a2da56464182c138c3a113572c12133bee77","title":"","text":"less, err := isLess(iField, jField) if err != nil { klog.Exitf(\"Field %s in %T is an unsortable type: %s, err: %v\", r.field, iObj, iField.Kind().String(), err) } return less }"}
{"_id":"doc-en-kubernetes-6b9eee0670d11f67c6a6106f65e8be83a603f9db1f69a22a28ea47f332e34a76","title":"","text":"less, err := isLess(iField, jField) if err != nil { klog.Exitf(\"Field %s in %T is an unsortable type: %s, err: %v\", t.field, t.parsedRows, iField.Kind().String(), err) } return less }"}
{"_id":"doc-en-kubernetes-c5cb85394c649b044f4e0d3f7e495377fb5379385d02d4228ada8f8dbc29eb0a","title":"","text":"return } ch <- metrics.NewLazyMetricWithTimestamp(s.StartTime.Time, metrics.NewLazyConstMetric(containerStartTimeDesc, metrics.GaugeValue, float64(s.StartTime.UnixNano())/float64(time.Second), s.Name, pod.PodRef.Name, pod.PodRef.Namespace)) } func (rc *resourceMetricsCollector) collectContainerCPUMetrics(ch chan<- metrics.Metric, pod summary.PodStats, s summary.ContainerStats) {"}
{"_id":"doc-en-kubernetes-914097ec45946cf672a6932094575ba2f30edd2dabdb3829783e6259c04d85e2","title":"","text":"container_memory_working_set_bytes{container=\"container_b\",namespace=\"namespace_a\",pod=\"pod_a\"} 1000 1624396278302 # HELP container_start_time_seconds [ALPHA] Start time of the container since unix epoch in seconds # TYPE container_start_time_seconds gauge container_start_time_seconds{container=\"container_a\",namespace=\"namespace_a\",pod=\"pod_a\"} 1.6243962483020916e+09 1624396248302 container_start_time_seconds{container=\"container_a\",namespace=\"namespace_b\",pod=\"pod_b\"} 1.6243956783020916e+09 1624395678302 container_start_time_seconds{container=\"container_b\",namespace=\"namespace_a\",pod=\"pod_a\"} 1.6243961583020916e+09 1624396158302 `,
}, {"} {"_id":"doc-en-kubernetes-052881a332d8230a2598e96c32df0184bb4f770bf0e60e3d04b688c603510528","title":"","text":"}), \"container_start_time_seconds\": gstruct.MatchElements(containerID, gstruct.IgnoreExtras, gstruct.Elements{ fmt.Sprintf(\"%s::%s::%s\", f.Namespace.Name, pod0, \"busybox-container\"): boundedSample(time.Now().Add(-maxStatsAge).Unix(), time.Now().Add(2*time.Minute).Unix()), fmt.Sprintf(\"%s::%s::%s\", f.Namespace.Name, pod1, \"busybox-container\"): boundedSample(time.Now().Add(-maxStatsAge).Unix(), time.Now().Add(2*time.Minute).Unix()), }), \"pod_cpu_usage_seconds_total\": gstruct.MatchElements(podID, gstruct.IgnoreExtras, gstruct.Elements{"}
{"_id":"doc-en-kubernetes-c4b4a49b78248a7493a431b229a261a00b148896f6b0eedd6c34eb94d55197a7","title":"","text":"Subsystem: KubeletSubsystem, Name: AssignedConfigKey, Help: \"The node's understanding of intended config. The count is always 1.\", DeprecatedVersion: \"1.22.0\", StabilityLevel: metrics.ALPHA, }, []string{ConfigSourceLabelKey, ConfigUIDLabelKey, ConfigResourceVersionLabelKey, KubeletConfigKeyLabelKey},"}
{"_id":"doc-en-kubernetes-2ebe5965897a39fdbe8e762d6ac827656e8a5aba55657794130146fdbec05ebd","title":"","text":"Subsystem: KubeletSubsystem, Name: ActiveConfigKey, Help: \"The config source the node is actively using.
The count is always 1.\", DeprecatedVersion: \"1.22.0\", StabilityLevel: metrics.ALPHA, }, []string{ConfigSourceLabelKey, ConfigUIDLabelKey, ConfigResourceVersionLabelKey, KubeletConfigKeyLabelKey},"}
{"_id":"doc-en-kubernetes-33cdfb0c1fa6701dca11916d58f0fc0cfb834a5eed3099123c7630a56115c6ce","title":"","text":"Subsystem: KubeletSubsystem, Name: LastKnownGoodConfigKey, Help: \"The config source the node will fall back to when it encounters certain errors. The count is always 1.\", DeprecatedVersion: \"1.22.0\", StabilityLevel: metrics.ALPHA, }, []string{ConfigSourceLabelKey, ConfigUIDLabelKey, ConfigResourceVersionLabelKey, KubeletConfigKeyLabelKey},"}
{"_id":"doc-en-kubernetes-cb3c90adeb5623e6e623f058961e85eb494a8111e04de86e58384ae8f459a7f9","title":"","text":"Subsystem: KubeletSubsystem, Name: ConfigErrorKey, Help: \"This metric is true (1) if the node is experiencing a configuration-related error, false (0) otherwise.\", DeprecatedVersion: \"1.22.0\", StabilityLevel: metrics.ALPHA, }, )"}
{"_id":"doc-en-kubernetes-30e1490f18740534caf20a50d555b9368b648df9b422646f969e0046d1cb8d57","title":"","text":"\"custom\": resource.MustParse(\"0\"), }, Limits: v1.ResourceList{ \"memory\": resource.MustParse(\"2.5Gi\"), \"custom\": resource.MustParse(\"6\"), }, }},"}
{"_id":"doc-en-kubernetes-e2a032c49b1f110e234f4358afbdb1c8d4b2b436803a716cc058208834b52e0a","title":"","text":"expected := `# HELP kube_pod_resource_limit [ALPHA] Resources limit for workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any.
# TYPE kube_pod_resource_limit gauge kube_pod_resource_limit{namespace=\"test\",node=\"node-one\",pod=\"foo\",priority=\"\",resource=\"custom\",scheduler=\"\",unit=\"\"} 6 kube_pod_resource_limit{namespace=\"test\",node=\"node-one\",pod=\"foo\",priority=\"\",resource=\"memory\",scheduler=\"\",unit=\"bytes\"} 2.68435456e+09 # HELP kube_pod_resource_request [ALPHA] Resources requested by workloads on the cluster, broken down by pod. This shows the resource usage the scheduler and kubelet expect per pod for resources along with the unit for the resource if any. # TYPE kube_pod_resource_request gauge kube_pod_resource_request{namespace=\"test\",node=\"node-one\",pod=\"foo\",priority=\"\",resource=\"cpu\",scheduler=\"\",unit=\"cores\"} 2"}
{"_id":"doc-en-kubernetes-fce64b596a833b2624fc0ddf425bf224e73616847d886f6aded0ff907572e472","title":"","text":"return base } // multiply by the appropriate exponential scale return base * math.Pow10(exponent) } // AsInt64 returns a representation of the current value as an int64 if a fast conversion"}
{"_id":"doc-en-kubernetes-9aea365cbe7f7f141758687902a91709e93c6ddd69f14ae66c201c10580f0972","title":"","text":"{decQuantity(1024, 0, BinarySI), 1024}, {decQuantity(8*1024, 0, BinarySI), 8 * 1024}, {decQuantity(7*1024*1024, 0, BinarySI), 7 * 1024 * 1024}, {decQuantity(7*1024*1024, 1, BinarySI), (7 * 1024 * 1024) * 10}, {decQuantity(7*1024*1024, 4, BinarySI), (7 * 1024 * 1024) * 10000}, {decQuantity(7*1024*1024, 8, BinarySI), (7 * 1024 * 1024) * 100000000}, {decQuantity(7*1024*1024, -1, BinarySI), (7 * 1024 * 1024) * math.Pow10(-1)}, // '* Pow10' and '/ float(10)' do not round the same way {decQuantity(7*1024*1024, -8, BinarySI), (7 * 1024 * 1024) / float64(100000000)}, {decQuantity(1024, 0, DecimalSI), 1024}, {decQuantity(8*1024, 0, DecimalSI), 8 * 1024},"}
{"_id":"doc-en-kubernetes-6be1d322ec16ddb86f9462bf2934b4abe5af020bb26269e2eeb8c8af73af3e90","title":"","text":"} } func TestStringQuantityAsApproximateFloat64(t *testing.T) { table := []struct { in string out float64 }{ {\"2Ki\", 2048}, {\"1.1Ki\", 1126.4e+0}, {\"1Mi\", 1.048576e+06}, {\"2Gi\", 2.147483648e+09}, } for _, item := range table { t.Run(item.in, func(t *testing.T) { in, err := ParseQuantity(item.in) if err != nil { t.Fatal(err) } out := in.AsApproximateFloat64() if out != item.out { t.Fatalf(\"expected %v, got %v\", item.out, out) } if in.d.Dec != nil { if i, ok := in.AsInt64(); ok { q := intQuantity(i, 0, in.Format) out := q.AsApproximateFloat64() if out != item.out { t.Fatalf(\"as int quantity: expected %v, got %v\", item.out, out) } } } }) } } func benchmarkQuantities() []Quantity { return []Quantity{ intQuantity(1024*1024*1024, 0, BinarySI),"}
{"_id":"doc-en-kubernetes-36d6edb12924e50d7aad94e39c5c8f6fd6660068ba62fdf4f37bc00888be0bd3","title":"","text":"fsckErrorsCorrected = 1 // 'fsck' found errors but exited without correcting them fsckErrorsUncorrected = 4 // Error thrown by exec cmd.Run() when process spawned by cmd.Start() completes before cmd.Wait() is called (see - k/k issue #103753) errNoChildProcesses = \"wait: no child processes\" ) // Mounter
provides the default implementation of mount.Interface"} {"_id":"doc-en-kubernetes-dbba9438f97e8971e0f3fbc906b6e71a85f54dcf02cd6a18fa5341d05bb27de4","title":"","text":"command := exec.Command(mountCmd, mountArgs...) output, err := command.CombinedOutput() if err != nil { if err.Error() == errNoChildProcesses { if command.ProcessState.Success() { // We don't consider errNoChildProcesses an error if the process itself succeeded (see - k/k issue #103753). return nil } // Rewrite err with the actual exit error of the process. err = &exec.ExitError{ProcessState: command.ProcessState} } klog.Errorf(\"Mount failed: %vnMounting command: %snMounting arguments: %snOutput: %sn\", err, mountCmd, mountArgsLogStr, string(output)) return fmt.Errorf(\"mount failed: %vnMounting command: %snMounting arguments: %snOutput: %s\", err, mountCmd, mountArgsLogStr, string(output))"} {"_id":"doc-en-kubernetes-c56823e1189bc6fc779e0e30b57438bc4d9578d206f1ab1e9a2190c9b296b5e1","title":"","text":"command := exec.Command(\"umount\", target) output, err := command.CombinedOutput() if err != nil { if err.Error() == errNoChildProcesses { if command.ProcessState.Success() { // We don't consider errNoChildProcesses an error if the process itself succeeded (see - k/k issue #103753). return nil } // Rewrite err with the actual exit error of the process. err = &exec.ExitError{ProcessState: command.ProcessState} } return fmt.Errorf(\"unmount failed: %vnUnmounting arguments: %snOutput: %s\", err, target, string(output)) } return nil"} {"_id":"doc-en-kubernetes-6c377b6cd30e8dc2ed4266dec9ffe50b3a2afcbd89c00e74880bb16d19da7d32","title":"","text":"Subsystem: KubeletSubsystem, Name: AssignedConfigKey, Help: \"The node's understanding of intended config. 
The count is always 1.\", DeprecatedVersion: \"1.22.0\", StabilityLevel: metrics.ALPHA, }, []string{ConfigSourceLabelKey, ConfigUIDLabelKey, ConfigResourceVersionLabelKey, KubeletConfigKeyLabelKey},"}
{"_id":"doc-en-kubernetes-a65bcb593900744f4ee9899c2642080e3656ecbdfa9749f2476a8178a59d35d2","title":"","text":"Subsystem: KubeletSubsystem, Name: ActiveConfigKey, Help: \"The config source the node is actively using. The count is always 1.\", DeprecatedVersion: \"1.22.0\", StabilityLevel: metrics.ALPHA, }, []string{ConfigSourceLabelKey, ConfigUIDLabelKey, ConfigResourceVersionLabelKey, KubeletConfigKeyLabelKey},"}
{"_id":"doc-en-kubernetes-9faa0a296f1f7feffe199f288f012edf10f855f80b5684733f94d5e56905ed93","title":"","text":"Subsystem: KubeletSubsystem, Name: LastKnownGoodConfigKey, Help: \"The config source the node will fall back to when it encounters certain errors. The count is always 1.\", DeprecatedVersion: \"1.22.0\", StabilityLevel: metrics.ALPHA, }, []string{ConfigSourceLabelKey, ConfigUIDLabelKey, ConfigResourceVersionLabelKey, KubeletConfigKeyLabelKey},"}
{"_id":"doc-en-kubernetes-2cab39e7b6e5e255c397699825d3de1c24403ac442d4ccaeaf9668f3d7be3f74","title":"","text":"Subsystem: KubeletSubsystem, Name: ConfigErrorKey, Help: \"This metric is true (1) if the node is experiencing a configuration-related error, false (0) otherwise.\", DeprecatedVersion: \"1.22.0\", StabilityLevel: metrics.ALPHA, }, )"}
{"_id":"doc-en-kubernetes-4f56a5830c1aa514eba019dfd6f8e816b5292686c28f28959b607a47e47fd01c","title":"","text":"framework.ExpectNoError(err) beforeKC = kc } // show hidden metrics for release 1.22 beforeKC = updateShowHiddenMetricsForVersion(beforeKC, \"1.22\") // reset the node's assigned/active/last-known-good config by setting the source to nil, // so each test starts from a clean-slate (&nodeConfigTestCase{"}
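The fragments above hinge on two different version formats: the metric registry's DeprecatedVersion is a full "MAJOR.MINOR.PATCH" semver such as "1.22.0", while the kubelet's show-hidden-metrics setting takes only "MAJOR.MINOR" such as "1.22". The helper below is a stdlib-only stand-in for those format checks (Kubernetes itself uses stricter semver parsing); the function name is an assumption for illustration.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// isVersion reports whether v consists of exactly `parts` dot-separated
// decimal numbers: parts=3 approximates the full-semver shape required for
// DeprecatedVersion ("1.22.0"), parts=2 the major.minor shape accepted by
// ShowHiddenMetricsForVersion ("1.22").
func isVersion(v string, parts int) bool {
	fields := strings.Split(v, ".")
	if len(fields) != parts {
		return false
	}
	for _, f := range fields {
		if _, err := strconv.Atoi(f); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isVersion("1.22.0", 3)) // full semver: valid as DeprecatedVersion
	fmt.Println(isVersion("1.22", 3))   // major.minor only: not a full semver
	fmt.Println(isVersion("1.22", 2))   // valid for show-hidden-metrics-for-version
}
```

This is why the hunks above settle on "1.22.0" for the struct fields while the e2e test passes plain "1.22" to updateShowHiddenMetricsForVersion.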
{"_id":"doc-en-kubernetes-ee2542e785d02ad9ce07034e92a249e05e87250643a63962e549246cbc1cd2b4","title":"","text":"framework.ExpectNoError(err) localKC = kc } // show hidden metrics for release 1.22 localKC = updateShowHiddenMetricsForVersion(localKC, \"1.22\") }) ginkgo.AfterEach(func() {"}
{"_id":"doc-en-kubernetes-29b2c6e62f60954096cd431da8484fa1404d7f6db8427399c720e0abcaddcb86","title":"","text":"return nil }, timeout, interval).Should(gomega.BeNil()) } func updateShowHiddenMetricsForVersion(cfg *kubeletconfig.KubeletConfiguration, version string) *kubeletconfig.KubeletConfiguration { if cfg == nil { return &kubeletconfig.KubeletConfiguration{ ShowHiddenMetricsForVersion: version, } } cfg.ShowHiddenMetricsForVersion = version return cfg } "}
{"_id":"doc-en-kubernetes-46a6e2b652d3fdcffc36f9b22b680ce5e68d4524d74b99945ee969a4d0008f4b","title":"","text":"return false } // GetAllocatableDevices returns information about all the healthy devices known to the manager func (m *ManagerImpl) GetAllocatableDevices() ResourceDeviceInstances { m.mutex.Lock() defer m.mutex.Unlock() resp := m.allDevices.Filter(m.healthyDevices) klog.V(4).InfoS(\"GetAllocatableDevices\", \"known\", len(m.allDevices), \"allocatable\", len(resp)) return resp }"}
{"_id":"doc-en-kubernetes-4542bb3955476f032fb447769c1820bd66b0ef2cdb73df72d3326e6dc880b5bb","title":"","text":"as.True(testManager.isDevicePluginResource(resourceName2)) } func TestGetAllocatableDevicesMultipleResources(t *testing.T) { socketDir, socketName, _, err := tmpSocketDir() topologyStore := topologymanager.NewFakeManager() require.NoError(t, err) defer os.RemoveAll(socketDir) testManager, err := newManagerImpl(socketName, nil, topologyStore) as := assert.New(t) as.NotNil(testManager) as.Nil(err) resource1Devs := []pluginapi.Device{ {ID:
\"R1Device1\", Health: pluginapi.Healthy}, {ID: \"R1Device2\", Health: pluginapi.Healthy}, {ID: \"R1Device3\", Health: pluginapi.Unhealthy}, } resourceName1 := \"domain1.com/resource1\" e1 := &endpointImpl{} testManager.endpoints[resourceName1] = endpointInfo{e: e1, opts: nil} testManager.genericDeviceUpdateCallback(resourceName1, resource1Devs) resource2Devs := []pluginapi.Device{ {ID: \"R2Device1\", Health: pluginapi.Healthy}, } resourceName2 := \"other.domain2.org/resource2\" e2 := &endpointImpl{} testManager.endpoints[resourceName2] = endpointInfo{e: e2, opts: nil} testManager.genericDeviceUpdateCallback(resourceName2, resource2Devs) allocatableDevs := testManager.GetAllocatableDevices() as.Equal(2, len(allocatableDevs)) devInstances1, ok := allocatableDevs[resourceName1] as.True(ok) checkAllocatableDevicesConsistsOf(as, devInstances1, []string{\"R1Device1\", \"R1Device2\"}) devInstances2, ok := allocatableDevs[resourceName2] as.True(ok) checkAllocatableDevicesConsistsOf(as, devInstances2, []string{\"R2Device1\"}) } func TestGetAllocatableDevicesHealthTransition(t *testing.T) { socketDir, socketName, _, err := tmpSocketDir() topologyStore := topologymanager.NewFakeManager() require.NoError(t, err) defer os.RemoveAll(socketDir) testManager, err := newManagerImpl(socketName, nil, topologyStore) as := assert.New(t) as.NotNil(testManager) as.Nil(err) resource1Devs := []pluginapi.Device{ {ID: \"R1Device1\", Health: pluginapi.Healthy}, {ID: \"R1Device2\", Health: pluginapi.Healthy}, {ID: \"R1Device3\", Health: pluginapi.Unhealthy}, } // Adds three devices for resource1, two healthy and one unhealthy. // Expects allocatable devices for resource1 to be 2. 
resourceName1 := \"domain1.com/resource1\" e1 := &endpointImpl{} testManager.endpoints[resourceName1] = endpointInfo{e: e1, opts: nil} testManager.genericDeviceUpdateCallback(resourceName1, resource1Devs) allocatableDevs := testManager.GetAllocatableDevices() as.Equal(1, len(allocatableDevs)) devInstances, ok := allocatableDevs[resourceName1] as.True(ok) checkAllocatableDevicesConsistsOf(as, devInstances, []string{\"R1Device1\", \"R1Device2\"}) // Unhealthy device becomes healthy resource1Devs = []pluginapi.Device{ {ID: \"R1Device1\", Health: pluginapi.Healthy}, {ID: \"R1Device2\", Health: pluginapi.Healthy}, {ID: \"R1Device3\", Health: pluginapi.Healthy}, } testManager.genericDeviceUpdateCallback(resourceName1, resource1Devs) allocatableDevs = testManager.GetAllocatableDevices() as.Equal(1, len(allocatableDevs)) devInstances, ok = allocatableDevs[resourceName1] as.True(ok) checkAllocatableDevicesConsistsOf(as, devInstances, []string{\"R1Device1\", \"R1Device2\", \"R1Device3\"}) } func checkAllocatableDevicesConsistsOf(as *assert.Assertions, devInstances DeviceInstances, expectedDevs []string) { as.Equal(len(expectedDevs), len(devInstances)) for _, deviceID := range expectedDevs { _, ok := devInstances[deviceID] as.True(ok) } } func constructDevices(devices []string) checkpoint.DevicesPerNUMA { ret := checkpoint.DevicesPerNUMA{} for _, dev := range devices {"} {"_id":"doc-en-kubernetes-8a350b824ac1882f0e44326a9e6afc883d1c47dc8d64ff943a18b5e3f719ab81","title":"","text":"} return clone } // Filter takes a condition set expressed as map[string]sets.String and returns a new // ResourceDeviceInstances with only the devices matching the condition set. 
func (rdev ResourceDeviceInstances) Filter(cond map[string]sets.String) ResourceDeviceInstances { filtered := NewResourceDeviceInstances() for resourceName, filterIDs := range cond { if _, exists := rdev[resourceName]; !exists { continue } filtered[resourceName] = DeviceInstances{} for instanceID, instance := range rdev[resourceName] { if filterIDs.Has(instanceID) { filtered[resourceName][instanceID] = instance } } } return filtered } "} {"_id":"doc-en-kubernetes-71ec2d331d1fdcdab02f35581cf3be22e997329536b79ea7b3539d3afc6cfd32","title":"","text":"package devicemanager import ( \"encoding/json\" \"testing\" \"github.com/stretchr/testify/require\" \"k8s.io/apimachinery/pkg/util/sets\" pluginapi \"k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1\" \"k8s.io/kubernetes/pkg/kubelet/cm/devicemanager/checkpoint\" )"} {"_id":"doc-en-kubernetes-89e4c2da4719049109d780bfbc395500f0c1248c01fdf504174cdf13cd6c5efd","title":"","text":"} } } func TestResourceDeviceInstanceFilter(t *testing.T) { var expected string var cond map[string]sets.String var resp ResourceDeviceInstances devs := ResourceDeviceInstances{ \"foo\": DeviceInstances{ \"dev-foo1\": pluginapi.Device{ ID: \"foo1\", }, \"dev-foo2\": pluginapi.Device{ ID: \"foo2\", }, \"dev-foo3\": pluginapi.Device{ ID: \"foo3\", }, }, \"bar\": DeviceInstances{ \"dev-bar1\": pluginapi.Device{ ID: \"bar1\", }, \"dev-bar2\": pluginapi.Device{ ID: \"bar2\", }, \"dev-bar3\": pluginapi.Device{ ID: \"bar3\", }, }, \"baz\": DeviceInstances{ \"dev-baz1\": pluginapi.Device{ ID: \"baz1\", }, \"dev-baz2\": pluginapi.Device{ ID: \"baz2\", }, \"dev-baz3\": pluginapi.Device{ ID: \"baz3\", }, }, } resp = devs.Filter(map[string]sets.String{}) expected = `{}` expectResourceDeviceInstances(t, resp, expected) cond = map[string]sets.String{ \"foo\": sets.NewString(\"dev-foo1\", \"dev-foo2\"), \"bar\": sets.NewString(\"dev-bar1\"), } resp = devs.Filter(cond) expected = 
`{\"bar\":{\"dev-bar1\":{\"ID\":\"bar1\"}},\"foo\":{\"dev-foo1\":{\"ID\":\"foo1\"},\"dev-foo2\":{\"ID\":\"foo2\"}}}` expectResourceDeviceInstances(t, resp, expected) cond = map[string]sets.String{ \"foo\": sets.NewString(\"dev-foo1\", \"dev-foo2\", \"dev-foo3\"), \"bar\": sets.NewString(\"dev-bar1\", \"dev-bar2\", \"dev-bar3\"), \"baz\": sets.NewString(\"dev-baz1\", \"dev-baz2\", \"dev-baz3\"), } resp = devs.Filter(cond) expected = `{\"bar\":{\"dev-bar1\":{\"ID\":\"bar1\"},\"dev-bar2\":{\"ID\":\"bar2\"},\"dev-bar3\":{\"ID\":\"bar3\"}},\"baz\":{\"dev-baz1\":{\"ID\":\"baz1\"},\"dev-baz2\":{\"ID\":\"baz2\"},\"dev-baz3\":{\"ID\":\"baz3\"}},\"foo\":{\"dev-foo1\":{\"ID\":\"foo1\"},\"dev-foo2\":{\"ID\":\"foo2\"},\"dev-foo3\":{\"ID\":\"foo3\"}}}` expectResourceDeviceInstances(t, resp, expected) cond = map[string]sets.String{ \"foo\": sets.NewString(\"dev-foo1\", \"dev-foo2\", \"dev-foo3\", \"dev-foo4\"), \"bar\": sets.NewString(\"dev-bar1\", \"dev-bar2\", \"dev-bar3\", \"dev-bar4\"), \"baz\": sets.NewString(\"dev-baz1\", \"dev-baz2\", \"dev-baz3\", \"dev-bar4\"), } resp = devs.Filter(cond) expected = `{\"bar\":{\"dev-bar1\":{\"ID\":\"bar1\"},\"dev-bar2\":{\"ID\":\"bar2\"},\"dev-bar3\":{\"ID\":\"bar3\"}},\"baz\":{\"dev-baz1\":{\"ID\":\"baz1\"},\"dev-baz2\":{\"ID\":\"baz2\"},\"dev-baz3\":{\"ID\":\"baz3\"}},\"foo\":{\"dev-foo1\":{\"ID\":\"foo1\"},\"dev-foo2\":{\"ID\":\"foo2\"},\"dev-foo3\":{\"ID\":\"foo3\"}}}` expectResourceDeviceInstances(t, resp, expected) cond = map[string]sets.String{ \"foo\": sets.NewString(\"dev-foo1\", \"dev-foo4\", \"dev-foo7\"), \"bar\": sets.NewString(\"dev-bar1\", \"dev-bar4\", \"dev-bar7\"), \"baz\": sets.NewString(\"dev-baz1\", \"dev-baz4\", \"dev-baz7\"), } resp = devs.Filter(cond) expected = `{\"bar\":{\"dev-bar1\":{\"ID\":\"bar1\"}},\"baz\":{\"dev-baz1\":{\"ID\":\"baz1\"}},\"foo\":{\"dev-foo1\":{\"ID\":\"foo1\"}}}` expectResourceDeviceInstances(t, resp, expected) } func expectResourceDeviceInstances(t *testing.T, resp ResourceDeviceInstances, 
expected string) { // per docs in https://pkg.go.dev/encoding/json#Marshal // \"Map values encode as JSON objects. The map's key type must either be a string, an integer type, or // implement encoding.TextMarshaler. The map keys are sorted [...]\" // so this check is expected to be stable and not flaky data, err := json.Marshal(resp) if err != nil { t.Fatalf(\"unexpected JSON marshalling error: %v\", err) } got := string(data) if got != expected { t.Errorf(\"expected %q got %q\", expected, got) } } "} {"_id":"doc-en-kubernetes-fb3c1833fc8d41cb273b0bc0836a6515ef52cb0d881b3b8ec01dc0108b9b6470","title":"","text":"}, { \"ImportPath\": \"github.com/google/cadvisor/api\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/container\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/events\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/fs\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/healthz\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/http\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": 
\"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/info/v1\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/info/v2\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/manager\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/metrics\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/pages\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/storage\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/summary\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/utils\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": 
\"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/validate\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/cadvisor/version\", \"Comment\": \"0.10.1-92-g41a0c30\", \"Rev\": \"41a0c30fbf4df4d5d711b752785febb6ed5330a4\" \"Comment\": \"0.10.1-103-gbfaf70b\", \"Rev\": \"bfaf70b2555fcaba212130da04a21302344e38f5\" }, { \"ImportPath\": \"github.com/google/gofuzz\","} {"_id":"doc-en-kubernetes-e1c56c6c0d9a189f37eb2fdb8a64607ed1dc7709c9b01a45aaf3601863891746","title":"","text":"// TODO(jnagal): Infer schema through reflection. (See bigquery/client/example) func (self *bigqueryStorage) GetSchema() *bigquery.TableSchema { fields := make([]*bigquery.TableFieldSchema, 18) fields := make([]*bigquery.TableFieldSchema, 19) i := 0 fields[i] = &bigquery.TableFieldSchema{ Type: typeTimestamp,"} {"_id":"doc-en-kubernetes-cb7dc19af35ecb78e127748d6f93ccd886b8188b34316748560191e41fb7fb2b","title":"","text":"\"flag\" \"fmt\" \"io/ioutil\" \"net/http\" \"strings\" \"code.google.com/p/goauth2/oauth\" \"code.google.com/p/goauth2/oauth/jwt\" bigquery \"code.google.com/p/google-api-go-client/bigquery/v2\" \"golang.org/x/oauth2\" \"golang.org/x/oauth2/jwt\" ) var ("} {"_id":"doc-en-kubernetes-f03cb0c879ad052e21da2ca3ecfe14bb2f8afd6b975470e8111b330b4edfbfbf","title":"","text":"type Client struct { service *bigquery.Service token *oauth.Token token *oauth2.Token datasetId string tableId string } // Helper method to create an authenticated connection. 
func connect() (*oauth.Token, *bigquery.Service, error) { func connect() (*oauth2.Token, *bigquery.Service, error) { if *clientId == \"\" { return nil, nil, fmt.Errorf(\"no client id specified\") }"} {"_id":"doc-en-kubernetes-27862f4af45699396def7e8c1c4ea0fa6e5091ea51fc3b37366911703f215604","title":"","text":"return nil, nil, fmt.Errorf(\"could not access credential file %v - %v\", pemFile, err) } t := jwt.NewToken(*serviceAccount, authScope, pemBytes) token, err := t.Assert(&http.Client{}) jwtConfig := &jwt.Config{ Email: *serviceAccount, Scopes: []string{authScope}, PrivateKey: pemBytes, TokenURL: \"https://accounts.google.com/o/oauth2/token\", } token, err := jwtConfig.TokenSource(oauth2.NoContext).Token() if err != nil { fmt.Printf(\"Invalid token: %vn\", err) return nil, nil, err } config := &oauth.Config{ ClientId: *clientId, ClientSecret: *clientSecret, Scope: authScope, AuthURL: \"https://accounts.google.com/o/oauth2/auth\", TokenURL: \"https://accounts.google.com/o/oauth2/token\", if !token.Valid() { return nil, nil, fmt.Errorf(\"invalid token for BigQuery oauth\") } transport := &oauth.Transport{ Token: token, Config: config, config := &oauth2.Config{ ClientID: *clientId, ClientSecret: *clientSecret, Scopes: []string{authScope}, Endpoint: oauth2.Endpoint{ AuthURL: \"https://accounts.google.com/o/oauth2/auth\", TokenURL: \"https://accounts.google.com/o/oauth2/token\", }, } client := transport.Client() client := config.Client(oauth2.NoContext, token) service, err := bigquery.New(client) if err != nil {"} {"_id":"doc-en-kubernetes-6a1f500bb6b44fccb544909744180296fac074f66aaaf4224abfbedaba596cae","title":"","text":"} // Refresh expired token. 
if c.token.Expired() { if !c.token.Valid() { token, service, err := connect() if err != nil { return nil, err"} {"_id":"doc-en-kubernetes-1db92e840964042ed863a0a90109d32c72cd1a9aab044836b761ab67e4cacfb9","title":"","text":"apiVersion: apps/v1 kind: Deployment metadata: name: metrics-server-v0.5.0 name: metrics-server-v0.5.1 namespace: kube-system labels: k8s-app: metrics-server addonmanager.kubernetes.io/mode: Reconcile version: v0.5.0 version: v0.5.1 spec: selector: matchLabels: k8s-app: metrics-server version: v0.5.0 version: v0.5.1 template: metadata: name: metrics-server labels: k8s-app: metrics-server version: v0.5.0 version: v0.5.1 spec: securityContext: seccompProfile:"} {"_id":"doc-en-kubernetes-a1941efea21025550e37c756e6400362d07d6a1df0cc57faae88c298dbb95990","title":"","text":"kubernetes.io/os: linux containers: - name: metrics-server image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0 image: k8s.gcr.io/metrics-server/metrics-server:v0.5.1 command: - /metrics-server - --metric-resolution=30s"} {"_id":"doc-en-kubernetes-bc4558b981d99291b5ef500347b2bdf4922653aabd61d59efc483cfcc7627fea","title":"","text":"- --memory={{ base_metrics_server_memory }} - --extra-memory={{ metrics_server_memory_per_node }}Mi - --threshold=5 - --deployment=metrics-server-v0.5.0 - --deployment=metrics-server-v0.5.1 - --container=metrics-server - --poll-period=30000 - --estimator=exponential"} {"_id":"doc-en-kubernetes-6299537b75e46cd595a6a56dfe8089da66fc514406585915b706fee60bca4490","title":"","text":"log.Printf(\"Error parsing response (%#v): %s\", response, err) return err } log.Printf(\"Got initial state from etcd: %+v\", manifests) log.Printf(\"Got state from etcd: %+v\", manifests) updateChannel <- manifestUpdate{etcdSource, manifests} return nil }"} {"_id":"doc-en-kubernetes-f7ec3dd4bad84d9dc9c82d273658469bc089e47124cf1d4c8597cbd3d251fab3","title":"","text":"defer util.HandleCrash() for { watchResponse := <-watchChannel log.Printf(\"Got change: %#v\", watchResponse) // 
This means the channel has been closed. if watchResponse == nil { return } log.Printf(\"Got etcd change: %#v\", watchResponse) manifests, err := kl.extractFromEtcd(watchResponse) if err != nil { log.Printf(\"Error handling response from etcd: %#v\", err)"} {"_id":"doc-en-kubernetes-60910d6ca096282709b6601a50655240952dc1205af410f307358e581d24f2e5","title":"","text":"// Sync the configured list of containers (desired state) with the host current state func (kl *Kubelet) SyncManifests(config []api.ContainerManifest) error { log.Printf(\"Desired:%#v\", config) log.Printf(\"Desired: %#v\", config) var err error desired := map[string]bool{} for _, manifest := range config {"} {"_id":"doc-en-kubernetes-9e463620af1cee186461adf80df3db4be130a743bf4b585ef52071b9458a04d9","title":"","text":"continue } if !exists { log.Printf(\"Network container doesn't exit, creating\") log.Printf(\"Network container doesn't exist, creating\") netName, err = kl.createNetworkContainer(&manifest) if err != nil { log.Printf(\"Failed to create network container: %#v\", err)"} {"_id":"doc-en-kubernetes-8b36924118ca1405be3843d60357d799228ee89c61c4501dd97c8fd3f1b4fea0","title":"","text":"} } existingContainers, _ := kl.ListContainers() log.Printf(\"Existing:n%#v Desired: %#v\", existingContainers, desired) log.Printf(\"Existing: %#v Desired: %#v\", existingContainers, desired) for _, container := range existingContainers { // Skip containers that we didn't create to allow users to manually // spin up their own containers if they want."} {"_id":"doc-en-kubernetes-ffc927f4c03eea1d4bac152a493cd2cf93f7da342a71bdf1f045ed533c7b8356","title":"","text":"https://storage.googleapis.com/kubernetes-release/release/${version}/kubernetes-src.tar.gz It is based on the Kubernetes source at: https://github.com/kubernetes/kubernetes/tree/${gitref} https://github.com/kubernetes/kubernetes/tree/${gitref/-gke.*/} ${devel} For Kubernetes copyright and licensing information, see: /home/kubernetes/LICENSES"} 
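The kubelet watch loop in the record above treats a nil value received from the etcd watch channel as the signal that the channel has been closed. A minimal sketch of that pattern, using a hypothetical simplified `watchResponse` type (not the real etcd client type):

```go
package main

import "fmt"

// watchResponse stands in for the etcd watch payload; a hypothetical
// simplified type for illustration only.
type watchResponse struct {
	Value string
}

// drainWatch mirrors the loop in the kubelet snippet above: it reads
// responses until a nil arrives, then returns the values seen so far.
// Receiving from a closed channel of pointers yields the nil zero value,
// which is exactly what the `watchResponse == nil` check relies on.
func drainWatch(ch <-chan *watchResponse) []string {
	var seen []string
	for {
		resp := <-ch
		if resp == nil {
			// A nil response means the channel has been closed.
			return seen
		}
		seen = append(seen, resp.Value)
	}
}

func main() {
	ch := make(chan *watchResponse, 2)
	ch <- &watchResponse{Value: "a"}
	close(ch)
	fmt.Println(drainWatch(ch))
}
```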
{"_id":"doc-en-kubernetes-f7ff62d616972738022bbb109de62d090adbb925e77279916310f81d743b548d","title":"","text":"} func getMemoryStat(f *framework.Framework, host string) (rss, workingSet float64) { memCmd := \"cat /sys/fs/cgroup/memory/system.slice/node-problem-detector.service/memory.usage_in_bytes && cat /sys/fs/cgroup/memory/system.slice/node-problem-detector.service/memory.stat\" var memCmd string isCgroupV2 := isHostRunningCgroupV2(f, host) if isCgroupV2 { memCmd = \"cat /sys/fs/cgroup/system.slice/node-problem-detector.service/memory.current && cat /sys/fs/cgroup/system.slice/node-problem-detector.service/memory.stat\" } else { memCmd = \"cat /sys/fs/cgroup/memory/system.slice/node-problem-detector.service/memory.usage_in_bytes && cat /sys/fs/cgroup/memory/system.slice/node-problem-detector.service/memory.stat\" } result, err := e2essh.SSH(memCmd, host, framework.TestContext.Provider) framework.ExpectNoError(err) framework.ExpectEqual(result.Code, 0)"} {"_id":"doc-en-kubernetes-3efdca3d7f3f8a011b4c9c0e64c2312418fac47ef7639a403c56b324c2c90619","title":"","text":"memoryUsage, err := strconv.ParseFloat(lines[0], 64) framework.ExpectNoError(err) var rssToken, inactiveFileToken string if isCgroupV2 { // Use Anon memory for RSS as cAdvisor on cgroupv2 // see https://github.com/google/cadvisor/blob/a9858972e75642c2b1914c8d5428e33e6392c08a/container/libcontainer/handler.go#L799 rssToken = \"anon\" inactiveFileToken = \"inactive_file\" } else { rssToken = \"total_rss\" inactiveFileToken = \"total_inactive_file\" } var totalInactiveFile float64 for _, line := range lines[1:] { tokens := strings.Split(line, \" \") if tokens[0] == \"total_rss\" { if tokens[0] == rssToken { rss, err = strconv.ParseFloat(tokens[1], 64) framework.ExpectNoError(err) } if tokens[0] == \"total_inactive_file\" { if tokens[0] == inactiveFileToken { totalInactiveFile, err = strconv.ParseFloat(tokens[1], 64) framework.ExpectNoError(err) }"} 
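The `getMemoryStat` helper above selects different `memory.stat` token names depending on the cgroup version: `total_rss`/`total_inactive_file` on v1, `anon`/`inactive_file` on v2. A self-contained sketch of that parsing step (the `parseMemoryStat` name and inputs are illustrative, not from the e2e code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMemoryStat extracts the RSS and inactive-file counters from the body
// of a cgroup memory.stat file. The token names differ between cgroup v1
// ("total_rss"/"total_inactive_file") and v2 ("anon"/"inactive_file"),
// matching the switch in the e2e snippet above.
func parseMemoryStat(stat string, cgroupV2 bool) (rss, inactiveFile float64) {
	rssToken, inactiveToken := "total_rss", "total_inactive_file"
	if cgroupV2 {
		rssToken, inactiveToken = "anon", "inactive_file"
	}
	for _, line := range strings.Split(stat, "\n") {
		tokens := strings.Split(line, " ")
		if len(tokens) != 2 {
			continue // skip malformed or empty lines
		}
		v, err := strconv.ParseFloat(tokens[1], 64)
		if err != nil {
			continue
		}
		switch tokens[0] {
		case rssToken:
			rss = v
		case inactiveToken:
			inactiveFile = v
		}
	}
	return rss, inactiveFile
}

func main() {
	rss, inact := parseMemoryStat("anon 4096\ninactive_file 1024\nslab 512", true)
	fmt.Println(rss, inact)
}
```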
{"_id":"doc-en-kubernetes-6bf6b78f6e63368a7f16114587d7c81b66cbd0ef30b977c6f48473b0894682eb","title":"","text":"} func getCPUStat(f *framework.Framework, host string) (usage, uptime float64) { cpuCmd := \"cat /sys/fs/cgroup/cpu/system.slice/node-problem-detector.service/cpuacct.usage && cat /proc/uptime | awk '{print $1}'\" var cpuCmd string if isHostRunningCgroupV2(f, host) { cpuCmd = \" cat /sys/fs/cgroup/cpu.stat | grep 'usage_usec' | sed 's/[^0-9]*//g' && cat /proc/uptime | awk '{print $1}'\" } else { cpuCmd = \"cat /sys/fs/cgroup/cpu/system.slice/node-problem-detector.service/cpuacct.usage && cat /proc/uptime | awk '{print $1}'\" } result, err := e2essh.SSH(cpuCmd, host, framework.TestContext.Provider) framework.ExpectNoError(err) framework.ExpectEqual(result.Code, 0)"} {"_id":"doc-en-kubernetes-50e13d61eba1a90973b7a8a15336d0853f022952e4eaea242723b1d75d68d6b1","title":"","text":"return } func isHostRunningCgroupV2(f *framework.Framework, host string) bool { result, err := e2essh.SSH(\"stat -fc %T /sys/fs/cgroup/\", host, framework.TestContext.Provider) framework.ExpectNoError(err) framework.ExpectEqual(result.Code, 0) // 0x63677270 == CGROUP2_SUPER_MAGIC // https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html return strings.Contains(result.Stdout, \"cgroup2\") || strings.Contains(result.Stdout, \"0x63677270\") } func getNpdPodStat(f *framework.Framework, nodeName string) (cpuUsage, rss, workingSet float64) { summary, err := e2ekubelet.GetStatsSummary(f.ClientSet, nodeName) framework.ExpectNoError(err)"} {"_id":"doc-en-kubernetes-798f477312544883e9ff4aa9ed96cb4c466a1969513b18ad159158640097aeb1","title":"","text":"\"UserDefinedMetrics\": gomega.BeEmpty(), }) } expectedPageFaultsUpperBound := 1000000 expectedMajorPageFaultsUpperBound := 10 if IsCgroup2UnifiedMode() { expectedMajorPageFaultsUpperBound = 1000 // On cgroupv2 these stats are recursive, so make sure they are at least like the value set // above for the container. 
expectedPageFaultsUpperBound = 1e9 expectedMajorPageFaultsUpperBound = 100000 } podsContExpectations := sysContExpectations().(*gstruct.FieldsMatcher)"} {"_id":"doc-en-kubernetes-3f648b3309d5602371b7dd3d74c19b5b95ed64486e80bec1a1154c7de648adc2","title":"","text":"\"UsageBytes\": bounded(10*e2evolume.Kb, memoryLimit), \"WorkingSetBytes\": bounded(10*e2evolume.Kb, memoryLimit), \"RSSBytes\": bounded(1*e2evolume.Kb, memoryLimit), \"PageFaults\": bounded(0, 1000000), \"PageFaults\": bounded(0, expectedPageFaultsUpperBound), \"MajorPageFaults\": bounded(0, expectedMajorPageFaultsUpperBound), }) runtimeContExpectations := sysContExpectations().(*gstruct.FieldsMatcher)"} {"_id":"doc-en-kubernetes-22e10fd6596ec120fe3a0fcd9b4cf1e7e66de116e53efd9a289d39e216284036","title":"","text":"func TestUpdatePodWithTerminatedPod(t *testing.T) { podWorkers, _ := createPodWorkers() terminatedPod := newPodWithPhase(\"0000-0000-0000\", \"done-pod\", v1.PodSucceeded) runningPod := &kubecontainer.Pod{ID: \"0000-0000-0001\", Name: \"done-pod\"} orphanedPod := &kubecontainer.Pod{ID: \"0000-0000-0001\", Name: \"orphaned-pod\"} pod := newPod(\"0000-0000-0002\", \"running-pod\") podWorkers.UpdatePod(UpdatePodOptions{"} {"_id":"doc-en-kubernetes-80046fa1778f3ee8ebfa9bffc37c2a1f4dd35f9065a795b8c616d008d5831d3e","title":"","text":"}) podWorkers.UpdatePod(UpdatePodOptions{ UpdateType: kubetypes.SyncPodKill, RunningPod: runningPod, RunningPod: orphanedPod, }) drainAllWorkers(podWorkers) if podWorkers.IsPodKnownTerminated(pod.UID) == true { t.Errorf(\"podWorker state should not be terminated\") } if podWorkers.IsPodKnownTerminated(terminatedPod.UID) == false { t.Errorf(\"podWorker state should be terminated\") } if podWorkers.IsPodKnownTerminated(runningPod.ID) == true { t.Errorf(\"podWorker state should not be marked terminated for a running pod\") if podWorkers.IsPodKnownTerminated(orphanedPod.ID) == false { t.Errorf(\"podWorker state should be terminated for orphaned pod\") } }"} 
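The `isHostRunningCgroupV2` helper above classifies the host from the output of `stat -fc %T /sys/fs/cgroup/`, accepting either the filesystem-type name (`cgroup2fs`) or the raw `0x63677270` value (CGROUP2_SUPER_MAGIC), since coreutils may print either form. The string check on its own can be sketched as:

```go
package main

import (
	"fmt"
	"strings"
)

// isCgroupV2Output decides whether a host runs cgroup v2 from the stdout of
// `stat -fc %T /sys/fs/cgroup/`. Depending on the coreutils version the
// filesystem type prints as a name containing "cgroup2" or as the raw magic
// number 0x63677270 (CGROUP2_SUPER_MAGIC), so both are accepted, as in the
// e2e helper above.
func isCgroupV2Output(stdout string) bool {
	return strings.Contains(stdout, "cgroup2") || strings.Contains(stdout, "0x63677270")
}

func main() {
	for _, out := range []string{"cgroup2fs\n", "tmpfs\n", "0x63677270\n"} {
		fmt.Printf("%q -> %v\n", out, isCgroupV2Output(out))
	}
}
```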
{"_id":"doc-en-kubernetes-9c9f997aeac05a8507615c3ba44e151e73c848a81fd6d9db9cb007fe6d2c8970","title":"","text":"\"uniqueItems\": true }, { \"description\": \"Redirect the standard error stream of the pod for this call. Defaults to true.\", \"description\": \"Redirect the standard error stream of the pod for this call.\", \"in\": \"query\", \"name\": \"stderr\", \"type\": \"boolean\","} {"_id":"doc-en-kubernetes-981247e0f32454e69574c79de530d8ad36bd2ca74536edcdfb469a21f25b7b29","title":"","text":"\"uniqueItems\": true }, { \"description\": \"Redirect the standard output stream of the pod for this call. Defaults to true.\", \"description\": \"Redirect the standard output stream of the pod for this call.\", \"in\": \"query\", \"name\": \"stdout\", \"type\": \"boolean\","} {"_id":"doc-en-kubernetes-adbe46fc6a82fd9ddd68dcf851089e7f033af547693b4230f492e1890b5a69f3","title":"","text":"optional bool stdin = 1; // Redirect the standard output stream of the pod for this call. // Defaults to true. // +optional optional bool stdout = 2; // Redirect the standard error stream of the pod for this call. // Defaults to true. // +optional optional bool stderr = 3;"} {"_id":"doc-en-kubernetes-007abfc00af047191756e7b0093f388b261fc0cce4e3b55fe8e357179357b9dd","title":"","text":"Stdin bool `json:\"stdin,omitempty\" protobuf:\"varint,1,opt,name=stdin\"` // Redirect the standard output stream of the pod for this call. // Defaults to true. // +optional Stdout bool `json:\"stdout,omitempty\" protobuf:\"varint,2,opt,name=stdout\"` // Redirect the standard error stream of the pod for this call. // Defaults to true. 
// +optional Stderr bool `json:\"stderr,omitempty\" protobuf:\"varint,3,opt,name=stderr\"`"} {"_id":"doc-en-kubernetes-e9057fe5739d638c786f5acee94b52a1663362d92346f244bb276b060ec7b316","title":"","text":"var map_PodExecOptions = map[string]string{ \"\": \"PodExecOptions is the query options to a Pod's remote exec call.\", \"stdin\": \"Redirect the standard input stream of the pod for this call. Defaults to false.\", \"stdout\": \"Redirect the standard output stream of the pod for this call. Defaults to true.\", \"stderr\": \"Redirect the standard error stream of the pod for this call. Defaults to true.\", \"stdout\": \"Redirect the standard output stream of the pod for this call.\", \"stderr\": \"Redirect the standard error stream of the pod for this call.\", \"tty\": \"TTY if true indicates that a tty will be allocated for the exec call. Defaults to false.\", \"container\": \"Container in which to execute the command. Defaults to only container if there is only one container in the pod.\", \"command\": \"Command is the remote command to execute. argv array. 
Not executed within a shell.\","} {"_id":"doc-en-kubernetes-cae769ae8d5340c601cae948407294c4183503fa9ce98f6a8257412452138a58","title":"","text":"RuntimeCgroupsName: s.RuntimeCgroups, SystemCgroupsName: s.SystemCgroups, KubeletCgroupsName: s.KubeletCgroups, KubeletOOMScoreAdj: s.OOMScoreAdj, CgroupsPerQOS: s.CgroupsPerQOS, CgroupRoot: s.CgroupRoot, CgroupDriver: s.CgroupDriver,"} {"_id":"doc-en-kubernetes-4f5e4d69119685fe0acefa00ad4a6d0e209f7e0004a344a7a97356e506702b1a","title":"","text":"RuntimeCgroupsName string SystemCgroupsName string KubeletCgroupsName string KubeletOOMScoreAdj int32 ContainerRuntime string CgroupsPerQOS bool CgroupRoot string"} {"_id":"doc-en-kubernetes-b0fbbb2f369450db958d014c5291eb90e2f2fbbac837ec0896f543b0a98aec8b","title":"","text":"kubecontainer \"k8s.io/kubernetes/pkg/kubelet/container\" \"k8s.io/kubernetes/pkg/kubelet/lifecycle\" \"k8s.io/kubernetes/pkg/kubelet/pluginmanager/cache\" \"k8s.io/kubernetes/pkg/kubelet/qos\" \"k8s.io/kubernetes/pkg/kubelet/stats/pidlimit\" \"k8s.io/kubernetes/pkg/kubelet/status\" schedulerframework \"k8s.io/kubernetes/pkg/scheduler/framework\""} {"_id":"doc-en-kubernetes-bc0f566ea046f9936dc3ef1eefa6ae91c8202db3ec81afe4df6991a7b03cc881","title":"","text":"} cont.ensureStateFunc = func(_ cgroups.Manager) error { return ensureProcessInContainerWithOOMScore(os.Getpid(), qos.KubeletOOMScoreAdj, cont.manager) return ensureProcessInContainerWithOOMScore(os.Getpid(), int(cm.KubeletOOMScoreAdj), cont.manager) } systemContainers = append(systemContainers, cont) } else { cm.periodicTasks = append(cm.periodicTasks, func() { if err := ensureProcessInContainerWithOOMScore(os.Getpid(), qos.KubeletOOMScoreAdj, nil); err != nil { if err := ensureProcessInContainerWithOOMScore(os.Getpid(), int(cm.KubeletOOMScoreAdj), nil); err != nil { klog.ErrorS(err, \"Failed to ensure process in container with oom score\") return }"} 
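The container-manager change above threads `KubeletOOMScoreAdj` (an `int32` config field) through an `int` conversion before applying it. The kernel only accepts `/proc/<pid>/oom_score_adj` values in [-1000, 1000]; a hypothetical clamping helper (illustrative, not part of the kubelet code) would look like:

```go
package main

import "fmt"

// clampOOMScoreAdj narrows an int32 config value (like the KubeletOOMScoreAdj
// field above) into the kernel's accepted /proc/<pid>/oom_score_adj range of
// [-1000, 1000]. Hypothetical helper for illustration only.
func clampOOMScoreAdj(v int32) int {
	const min, max = -1000, 1000
	switch {
	case v < min:
		return min
	case v > max:
		return max
	default:
		return int(v)
	}
}

func main() {
	fmt.Println(clampOOMScoreAdj(-999), clampOOMScoreAdj(5000))
}
```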
{"_id":"doc-en-kubernetes-ed6fab0c6cf8e2f24387f5dc8c90675ce9c83d0f88e7d9c814dc8fb9b3cd8417","title":"","text":"echo \"minion-$((${BASH_REMATCH[1]} - 1))\" } # Find the vagrant machien name based on the host name of the minion # Find the vagrant machine name based on the host name of the minion function find-vagrant-name-by-minion-name { local ip=\"$1\" if [[ \"$ip\" == \"${INSTANCE_PREFIX}-master\" ]]; then"} {"_id":"doc-en-kubernetes-31ab650fba7580f20b4cdaffe6262ba5a8d2d77f7afa1a7d56c1ff0177b5905f","title":"","text":"} func (eic *execInContainer) SetDir(dir string) { //unimplemented // unimplemented } func (eic *execInContainer) SetStdin(in io.Reader) { //unimplemented // unimplemented } func (eic *execInContainer) SetStdout(out io.Writer) {"} {"_id":"doc-en-kubernetes-0ef26a2263562c704a323eee07982b2769b15c74c82bb7be841fe83749a74ed0","title":"","text":"} func (eic *execInContainer) SetEnv(env []string) { //unimplemented // unimplemented } func (eic *execInContainer) Stop() { //unimplemented // unimplemented } func (eic *execInContainer) Start() error {"} {"_id":"doc-en-kubernetes-b1d0079db29a3a4a57d44b620294c9a520e160a4ce73d7f4d341f0a10d445281","title":"","text":"timeoutErr, ok := err.(*TimeoutError) if ok { if utilfeature.DefaultFeatureGate.Enabled(features.ExecProbeTimeout) { return probe.Failure, string(data), nil // When exec probe timeout, data is empty, so we should return timeoutErr.Error() as the stdout. 
return probe.Failure, timeoutErr.Error(), nil } klog.Warningf(\"Exec probe timed out after %s but ExecProbeTimeout feature gate was disabled\", timeoutErr.Timeout())"} {"_id":"doc-en-kubernetes-f16efbf3c6aa8818500568c3ac6aa84c479ee298be9aabf392b401e4ff34b8a0","title":"","text":"\"io\" \"strings\" \"testing\" \"time\" \"k8s.io/kubernetes/pkg/probe\" )"} {"_id":"doc-en-kubernetes-15024ff35c457dce4d9a500135e0a9bbc5904712e346b8b24447b654434ab44d","title":"","text":"{probe.Unknown, true, \"\", \"\", fmt.Errorf(\"test error\")}, // Unhealthy {probe.Failure, false, \"Fail\", \"\", &fakeExitError{true, 1}}, // Timeout {probe.Failure, false, \"\", \"command testcmd timed out\", NewTimeoutError(fmt.Errorf(\"command testcmd timed out\"), time.Second)}, } for i, test := range tests { fake := FakeCmd{"} {"_id":"doc-en-kubernetes-dfcb17671562997e03cc008d4836d91cf70578c7555f1266c59eb14ec03f5c8a","title":"","text":"// TODO: This k8s-generic, well-known constant should be fetchable from another source, not be in this package KubeProxyClusterRoleName = \"system:node-proxier\" // KubeProxyClusterRoleBindingName sets the name for the kube-proxy CluterRoleBinding KubeProxyClusterRoleBindingName = \"kubeadm:node-proxier\" // KubeProxyServiceAccountName describes the name of the ServiceAccount for the kube-proxy addon KubeProxyServiceAccountName = \"kube-proxy\" // KubeProxyClusterRoleBindingName sets the name for the kube-proxy CluterRoleBinding KubeProxyClusterRoleBindingName = \"kubeam:node-proxier\" // KubeProxyConfigMapRoleName sets the name of ClusterRole for ConfigMap KubeProxyConfigMapRoleName = \"kube-proxy\" )"} {"_id":"doc-en-kubernetes-64eae22749511524fd719f48560cfbabad13a515c523c6497dee8a40afe59ad7","title":"","text":"// https://github.com/kubernetes/kubeadm/issues/1582 var UnversionedKubeletConfigMap bool if _, ok := m[\"featureGates\"]; ok { if featureGates, ok := m[\"featureGates\"].(map[string]bool); ok { if featureGates, ok := 
m[\"featureGates\"].(map[interface{}]interface{}); ok { // TODO: update the default to true once this graduates to Beta. UnversionedKubeletConfigMap = false if val, ok := featureGates[\"UnversionedKubeletConfigMap\"]; ok { UnversionedKubeletConfigMap = val if valBool, ok := val.(bool); ok { UnversionedKubeletConfigMap = valBool } else { framework.Failf(\"unable to cast the value of feature gate UnversionedKubeletConfigMap to bool\") } } } else { framework.Failf(\"unable to cast the featureGates field in the %s ConfigMap\", kubeadmConfigName)"} {"_id":"doc-en-kubernetes-aa15d9772bf5d20b3e6b666524c03863b97984613406040ee3d1e94f509e63fe","title":"","text":"import ( \"context\" \"encoding/json\" \"fmt\" \"strconv\" \"time\" kubeletconfigv1beta1 \"k8s.io/kubelet/config/v1beta1\" kubeletconfig \"k8s.io/kubernetes/pkg/kubelet/apis/config\" kubeletconfigscheme \"k8s.io/kubernetes/pkg/kubelet/apis/config/scheme\" v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/labels\" \"k8s.io/apimachinery/pkg/util/uuid\" \"k8s.io/kubernetes/test/e2e/framework\" e2ekubelet \"k8s.io/kubernetes/test/e2e/framework/kubelet\" e2eskipper \"k8s.io/kubernetes/test/e2e/framework/skipper\" imageutils \"k8s.io/kubernetes/test/utils/image\""} {"_id":"doc-en-kubernetes-3dbd1eed5bcfec73efa14e05137c07ef54ba7b31f01c873d22beb2301e74e5c2","title":"","text":"framework.ExpectNotEqual(nodeList.Size(), 0) ginkgo.By(\"Getting memory details from node status and kubelet config\") status := nodeList.Items[0].Status nodeName := nodeList.Items[0].ObjectMeta.Name kubeletConfig, err := e2ekubelet.GetCurrentKubeletConfig(nodeName, f.Namespace.Name, true) framework.Logf(\"Getting configuration details for node %s\", nodeName) request := f.ClientSet.CoreV1().RESTClient().Get().Resource(\"nodes\").Name(nodeName).SubResource(\"proxy\").Suffix(\"configz\") rawbytes, err := request.DoRaw(context.Background()) 
framework.ExpectNoError(err) kubeletConfig, err := decodeConfigz(rawbytes) framework.ExpectNoError(err) systemReserve, err := resource.ParseQuantity(kubeletConfig.SystemReserved[\"memory\"])"} {"_id":"doc-en-kubernetes-f7cc5ad58fc841f2aeb2efcd23a2d258559836907568c3d72752f472265d390e","title":"","text":"return totalAllocatable } // modified from https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/kubelet/config.go#L110 // the proxy version was causing and non proxy used a value that isn't set by e2e func decodeConfigz(contentsBytes []byte) (*kubeletconfig.KubeletConfiguration, error) { // This hack because /configz reports the following structure: // {\"kubeletconfig\": {the JSON representation of kubeletconfigv1beta1.KubeletConfiguration}} type configzWrapper struct { ComponentConfig kubeletconfigv1beta1.KubeletConfiguration `json:\"kubeletconfig\"` } configz := configzWrapper{} kubeCfg := kubeletconfig.KubeletConfiguration{} err := json.Unmarshal(contentsBytes, &configz) if err != nil { return nil, err } scheme, _, err := kubeletconfigscheme.NewSchemeAndCodecs() if err != nil { return nil, err } err = scheme.Convert(&configz.ComponentConfig, &kubeCfg, nil) if err != nil { return nil, err } return &kubeCfg, nil } "} {"_id":"doc-en-kubernetes-053eff727a21b343aa7d5f278bf254d95ee89937d98c2e7ddbb6e298e3604d02","title":"","text":"} patchedJS, retErr = jsonpatch.MergePatch(versionedJS, p.patchBytes) if retErr == jsonpatch.ErrBadJSONPatch { return nil, nil, errors.NewBadRequest(retErr.Error()) } return patchedJS, strictErrors, retErr default: // only here as a safety net - go-restful filters content-type"} {"_id":"doc-en-kubernetes-33ad06757a56f653c3bf19142296c8d79c4e89b185d2e32dc6d67b53fcf5e23f","title":"","text":"import ( \"context\" \"path/filepath\" \"regexp\" \"time\" \"github.com/onsi/ginkgo\" \"github.com/onsi/gomega\" appsv1 \"k8s.io/api/apps/v1\" v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/runtime\" 
\"k8s.io/apimachinery/pkg/runtime/serializer\" e2eskipper \"k8s.io/kubernetes/test/e2e/framework/skipper\" kubeletdevicepluginv1beta1 \"k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1\" e2etestfiles \"k8s.io/kubernetes/test/e2e/framework/testfiles\" admissionapi \"k8s.io/pod-security-admission/api\" \"regexp\" \"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/util/uuid\""} {"_id":"doc-en-kubernetes-d929ca3e3e7c55ecbcfa27c54a49103c25b6e40a2779beb5856ee21b2a526cd0","title":"","text":"\"k8s.io/kubernetes/test/e2e/framework\" e2enode \"k8s.io/kubernetes/test/e2e/framework/node\" e2epod \"k8s.io/kubernetes/test/e2e/framework/pod\" \"github.com/onsi/ginkgo\" \"github.com/onsi/gomega\" ) const ("} {"_id":"doc-en-kubernetes-3eb63af23c4708fe83a810195be576dc6060831819f890b329d131e1317baa00","title":"","text":"var _ = SIGDescribe(\"Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial]\", func() { f := framework.NewDefaultFramework(\"device-plugin-errors\") f.NamespacePodSecurityEnforceLevel = admissionapi.LevelPrivileged testDevicePlugin(f, \"/var/lib/kubelet/plugins_registry\") testDevicePlugin(f, kubeletdevicepluginv1beta1.DevicePluginPath) }) // numberOfSampleResources returns the number of resources advertised by a node."} {"_id":"doc-en-kubernetes-b1263a5c3ebf3797e9bb02ac1213581af3d745f40ef628ae1e13d4fa01b8d662","title":"","text":"var devicePluginPod, dptemplate *v1.Pod ginkgo.BeforeEach(func() { e2eskipper.Skipf(\"Device Plugin tests are currently broken and being investigated\") ginkgo.By(\"Wait for node to be ready\") gomega.Eventually(func() bool { nodes, err := e2enode.TotalReady(f.ClientSet)"} {"_id":"doc-en-kubernetes-f41cb9355f735d7dfe3aecfaf8cfee1729ae8bd1f7231bf0a9647f81ab16904f","title":"","text":"dp.Spec.Containers[0].Env[i].Value = pluginSockDir } } dptemplate = dp dptemplate = dp.DeepCopy() devicePluginPod = f.PodClient().CreateSync(dp) ginkgo.By(\"Waiting for devices 
to become available on the local node\") gomega.Eventually(func() bool { return numberOfSampleResources(getLocalNode(f)) > 0 node, ready := getLocalTestNode(f) return ready && numberOfSampleResources(node) > 0 }, 5*time.Minute, framework.Poll).Should(gomega.BeTrue()) framework.Logf(\"Successfully created device plugin pod\") ginkgo.By(\"Waiting for the resource exported by the sample device plugin to become available on the local node\") gomega.Eventually(func() bool { node := getLocalNode(f) return numberOfDevicesCapacity(node, resourceName) == devsLen && node, ready := getLocalTestNode(f) return ready && numberOfDevicesCapacity(node, resourceName) == devsLen && numberOfDevicesAllocatable(node, resourceName) == devsLen }, 30*time.Second, framework.Poll).Should(gomega.BeTrue()) })"} {"_id":"doc-en-kubernetes-2a4162c6651b26c2dd02449e24446b24d67b740bccab213aeddb110e49bff0e3","title":"","text":"ginkgo.By(\"Waiting for devices to become unavailable on the local node\") gomega.Eventually(func() bool { return numberOfSampleResources(getLocalNode(f)) <= 0 node, ready := getLocalTestNode(f) return ready && numberOfSampleResources(node) <= 0 }, 5*time.Minute, framework.Poll).Should(gomega.BeTrue()) ginkgo.By(\"devices now unavailable on the local node\") }) ginkgo.It(\"Can schedule a pod that requires a device\", func() {"} {"_id":"doc-en-kubernetes-eae3e55b0a9a79df13d0c379858f7a3a65daf43fa5879fc9ccfb3222cf00b526","title":"","text":"ginkgo.By(\"Waiting for resource to become available on the local node after re-registration\") gomega.Eventually(func() bool { node := getLocalNode(f) return numberOfDevicesCapacity(node, resourceName) == devsLen && node, ready := getLocalTestNode(f) return ready && numberOfDevicesCapacity(node, resourceName) == devsLen && numberOfDevicesAllocatable(node, resourceName) == devsLen }, 30*time.Second, framework.Poll).Should(gomega.BeTrue())"} 
{"_id":"doc-en-kubernetes-5d480d5cd96dc34e1d7d2c7cc056d0dbf4d772571e1fdd0c8e810ac7cf08d1bc","title":"","text":"return &nodeList.Items[0] } // getLocalTestNode fetches the node object describing the local worker node set up by the e2e_node infra, along with its ready state. // getLocalTestNode is a variant of `getLocalNode` which reports but does not set any requirement about the node readiness state, letting // the caller decide. The check is intentionally done like `getLocalNode` does. // Note `getLocalNode` aborts (as in ginkgo.Expect) the test implicitly if the worker node is not ready. func getLocalTestNode(f *framework.Framework) (*v1.Node, bool) { node, err := f.ClientSet.CoreV1().Nodes().Get(context.TODO(), framework.TestContext.NodeName, metav1.GetOptions{}) framework.ExpectNoError(err) ready := e2enode.IsNodeReady(node) schedulable := e2enode.IsNodeSchedulable(node) framework.Logf(\"node %q ready=%v schedulable=%v\", node.Name, ready, schedulable) return node, ready && schedulable } // logKubeletLatencyMetrics logs KubeletLatencyMetrics computed from the Prometheus // metrics exposed on the current node and identified by the metricNames.
// The Kubelet subsystem prefix is automatically prepended to these metric names."} {"_id":"doc-en-kubernetes-b6c11e1973260c6dbb785cff750d65221e758c441f6056d42030c965a64ad8e7","title":"","text":"\"strconv\" \"time\" apierrors \"k8s.io/apimachinery/pkg/api/errors\" \"k8s.io/apimachinery/pkg/fields\" \"github.com/onsi/ginkgo\""} {"_id":"doc-en-kubernetes-49ac6e2a8d890b23d5b546431f225c35995e5a6452fe78a76f10a0729c7c9561","title":"","text":"ginkgo.Context(\"when gracefully shutting down with Pod priority\", func() { const ( pollInterval = 1 * time.Second podStatusUpdateTimeout = 10 * time.Second pollInterval = 1 * time.Second podStatusUpdateTimeout = 10 * time.Second priorityClassesCreateTimeout = 10 * time.Second ) var ("} {"_id":"doc-en-kubernetes-f8a8e302e4a026e47c36fdce91d3ad335d8e6dc9a8eb510585ee0081875bf364","title":"","text":"ginkgo.By(\"Wait for the node to be ready\") waitForNodeReady() for _, customClass := range []*schedulingv1.PriorityClass{customClassA, customClassB, customClassC} { customClasses := []*schedulingv1.PriorityClass{customClassA, customClassB, customClassC} for _, customClass := range customClasses { _, err := f.ClientSet.SchedulingV1().PriorityClasses().Create(context.Background(), customClass, metav1.CreateOptions{}) framework.ExpectNoError(err) if err != nil && !apierrors.IsAlreadyExists(err) { framework.ExpectNoError(err) } } gomega.Eventually(func() error { for _, customClass := range customClasses { _, err := f.ClientSet.SchedulingV1().PriorityClasses().Get(context.Background(), customClass.Name, metav1.GetOptions{}) if err != nil { return err } } return nil }, priorityClassesCreateTimeout, pollInterval).Should(gomega.BeNil()) }) ginkgo.AfterEach(func() {"} {"_id":"doc-en-kubernetes-b589a3169e02c1097257359b4f95fd09f5319a70b544edb41b1960eae6fb1150","title":"","text":"It(\"should lookup the Schema by its GroupVersionKind\", func() { schema = resources.LookupResource(gvk) Expect(schema).ToNot(BeNil()) }) var deployment *proto.Kind 
It(\"should be a Kind\", func() { deployment = schema.(*proto.Kind) Expect(deployment).ToNot(BeNil()) Expect(schema.(*proto.Kind)).ToNot(BeNil()) }) })"} {"_id":"doc-en-kubernetes-8ab764be11aaaa9d4a3094e05c9736abb48eee1f9fdef6bb0be64fcb7e5b11a7","title":"","text":"It(\"should lookup the Schema by its GroupVersionKind\", func() { schema = resources.LookupResource(gvk) Expect(schema).ToNot(BeNil()) }) var sarspec *proto.Kind It(\"should be a Kind and have a spec\", func() { sar := schema.(*proto.Kind) Expect(sar).ToNot(BeNil()) Expect(sar.Fields).To(HaveKey(\"spec\")) specRef := sar.Fields[\"spec\"].(proto.Reference) Expect(specRef).ToNot(BeNil()) Expect(specRef.Reference()).To(Equal(\"io.k8s.api.authorization.v1.SubjectAccessReviewSpec\")) sarspec = specRef.SubSchema().(*proto.Kind) Expect(sarspec).ToNot(BeNil()) Expect(specRef.SubSchema().(*proto.Kind)).ToNot(BeNil()) }) })"} {"_id":"doc-en-kubernetes-6ea79a38b83c00c5846821fa1cda075b055c7b3205f257b01c48086cc057b6aa","title":"","text":"config.DeleteNodePortService() ginkgo.By(fmt.Sprintf(\"dialing(http) %v (node) --> %v:%v (nodeIP) and getting ZERO host endpoints\", config.NodeIP, config.NodeIP, config.NodeHTTPPort)) err = config.DialFromNode(\"http\", config.NodeIP, config.NodeHTTPPort, config.MaxTries, config.MaxTries, sets.NewString()) // #106770 MaxTries can be very large on large clusters, with the risk that a new NodePort is created by another test and starts to answer traffic. // Since we only want to assert that traffic is not being forwarded anymore and the retry timeout is 2 seconds, consider the test correct // if the service doesn't answer after 10 tries.
err = config.DialFromNode(\"http\", config.NodeIP, config.NodeHTTPPort, 10, 10, sets.NewString()) if err != nil { framework.Failf(\"Error dialing http from node: %v\", err) framework.Failf(\"Failure validating that node port service STOPPED removed properly: %v\", err) } })"} {"_id":"doc-en-kubernetes-c2bbd4cb941e47164c70a3f504c70123a27df1105c78d640952b40c60a7ffa82","title":"","text":"config.DeleteNodePortService() ginkgo.By(fmt.Sprintf(\"dialing(udp) %v (node) --> %v:%v (nodeIP) and getting ZERO host endpoints\", config.NodeIP, config.NodeIP, config.NodeUDPPort)) err = config.DialFromNode(\"udp\", config.NodeIP, config.NodeUDPPort, config.MaxTries, config.MaxTries, sets.NewString()) // #106770 MaxTries can be very large on large clusters, with the risk that a new NodePort is created by another test and starts to answer traffic. // Since we only want to assert that traffic is not being forwarded anymore and the retry timeout is 2 seconds, consider the test correct // if the service doesn't answer after 10 tries.
err = config.DialFromNode(\"udp\", config.NodeIP, config.NodeUDPPort, 10, 10, sets.NewString()) if err != nil { framework.Failf(\"Failure validating that node port service STOPPED removed properly: %v\", err) }"} {"_id":"doc-en-kubernetes-3cfff5af395b0993b3400a88df8c1dd60519585e5cb62f28063d354aa48aa115","title":"","text":"if svcInfo.preserveDIP || svcInfo.localTrafficDSR { nodePortEndpoints = hnsLocalEndpoints } hnsLoadBalancer, err := hns.getLoadBalancer( nodePortEndpoints, loadBalancerFlags{isDSR: svcInfo.localTrafficDSR, localRoutedVIP: true, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, \"\", Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.NodePort()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue } svcInfo.nodePorthnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for nodePort resources\", \"clusterIP\", svcInfo.ClusterIP(), \"hnsID\", hnsLoadBalancer.hnsID) if len(nodePortEndpoints) > 0 { hnsLoadBalancer, err := hns.getLoadBalancer( nodePortEndpoints, loadBalancerFlags{isDSR: svcInfo.localTrafficDSR, localRoutedVIP: true, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, \"\", Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.NodePort()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue } svcInfo.nodePorthnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for nodePort resources\", \"clusterIP\", svcInfo.ClusterIP(), \"nodeport\", svcInfo.NodePort(), \"hnsID\", hnsLoadBalancer.hnsID) } else { klog.V(3).InfoS(\"Skipped creating Hns LoadBalancer for nodePort resources\", \"clusterIP\", svcInfo.ClusterIP(), \"nodeport\", svcInfo.NodePort(), \"hnsID\", hnsLoadBalancer.hnsID) } } // Create a Load Balancer Policy for each external IP"} {"_id":"doc-en-kubernetes-2a450238b9d6a00c9449ae99197daa01df736ee5bb8effc48c60739dcdb1dd61","title":"","text":"if 
svcInfo.localTrafficDSR { externalIPEndpoints = hnsLocalEndpoints } // Try loading existing policies, if already available hnsLoadBalancer, err = hns.getLoadBalancer( externalIPEndpoints, loadBalancerFlags{isDSR: svcInfo.localTrafficDSR, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, externalIP.ip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue if len(externalIPEndpoints) > 0 { // Try loading existing policies, if already available hnsLoadBalancer, err = hns.getLoadBalancer( externalIPEndpoints, loadBalancerFlags{isDSR: svcInfo.localTrafficDSR, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, externalIP.ip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue } externalIP.hnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for externalIP resources\", \"externalIP\", externalIP, \"hnsID\", hnsLoadBalancer.hnsID) } else { klog.V(3).InfoS(\"Skipped creating Hns LoadBalancer for externalIP resources\", \"externalIP\", externalIP, \"hnsID\", hnsLoadBalancer.hnsID) } externalIP.hnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for externalIP resources\", \"externalIP\", externalIP, \"hnsID\", hnsLoadBalancer.hnsID) } // Create a Load Balancer Policy for each loadbalancer ingress for _, lbIngressIP := range svcInfo.loadBalancerIngressIPs {"} {"_id":"doc-en-kubernetes-9a5e74672d1d68e9d613ea72b5be4b40577b94510a449e59dc265167204e6fd5","title":"","text":"if svcInfo.preserveDIP || svcInfo.localTrafficDSR { lbIngressEndpoints = hnsLocalEndpoints } hnsLoadBalancer, err := hns.getLoadBalancer( lbIngressEndpoints, loadBalancerFlags{isDSR: svcInfo.preserveDIP || svcInfo.localTrafficDSR, useMUX: svcInfo.preserveDIP, preserveDIP: svcInfo.preserveDIP, 
sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, lbIngressIP.ip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue if len(lbIngressEndpoints) > 0 { hnsLoadBalancer, err := hns.getLoadBalancer( lbIngressEndpoints, loadBalancerFlags{isDSR: svcInfo.preserveDIP || svcInfo.localTrafficDSR, useMUX: svcInfo.preserveDIP, preserveDIP: svcInfo.preserveDIP, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, lbIngressIP.ip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue } lbIngressIP.hnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for loadBalancer Ingress resources\", \"lbIngressIP\", lbIngressIP) } else { klog.V(3).InfoS(\"Skipped creating Hns LoadBalancer for loadBalancer Ingress resources\", \"lbIngressIP\", lbIngressIP) } lbIngressIP.hnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for loadBalancer Ingress resources\", \"lbIngressIP\", lbIngressIP) } svcInfo.policyApplied = true klog.V(2).InfoS(\"Policy successfully applied for service\", \"serviceInfo\", svcInfo)"} {"_id":"doc-en-kubernetes-c25610b7503458d6b99b3fe26dc1c64d544038ba8ce3768c0663c098c74b7bea","title":"","text":"\"description\": \"EndpointPort is a tuple that describes a single port.\", \"properties\": { \"appProtocol\": { \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. 
Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"type\": \"string\" }, \"name\": {"} {"_id":"doc-en-kubernetes-2cf7a7938161786550a8a4b636fc5d4c20ddad13464a0c5da2fc3af4ce8d640f","title":"","text":"\"description\": \"ServicePort contains information on service's port.\", \"properties\": { \"appProtocol\": { \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"type\": \"string\" }, \"name\": {"} {"_id":"doc-en-kubernetes-54fbd2d1f5d7308a7d9c0757a250f7dc6544bbe50e57ad3ca55c662fff5a372e","title":"","text":"\"name\": \"The name of this port. This must match the 'name' field in the corresponding ServicePort. Must be a DNS_LABEL. Optional only if one port is defined.\", \"port\": \"The port number of the endpoint.\", \"protocol\": \"The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). 
Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", } func (EndpointPort) SwaggerDoc() map[string]string {"} {"_id":"doc-en-kubernetes-1b35d6083434fc17e65853be19d37f2c51553ed9f09eb75daac77301bfae7638","title":"","text":"\"\": \"ServicePort contains information on service's port.\", \"name\": \"The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.\", \"protocol\": \"The IP protocol for this port. Supports \"TCP\", \"UDP\", and \"SCTP\". Default is TCP.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"port\": \"The port that will be exposed by this service.\", \"targetPort\": \"Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service\", \"nodePort\": \"The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport\","} {"_id":"doc-en-kubernetes-4baabd3b0850d356278c598613dfd94ed7ea2eeb31a45735bb0a9d84ac12a444","title":"","text":"// The application protocol for this port. // This field follows standard Kubernetes label syntax. // Un-prefixed names are reserved for IANA standard service names (as per // RFC-6335 and http://www.iana.org/assignments/service-names). // RFC-6335 and https://www.iana.org/assignments/service-names). // Non-standard protocols should use prefixed names such as // mycompany.com/my-custom-protocol. // +optional"} {"_id":"doc-en-kubernetes-e37adc53cafd89f74bfb02f5c90d82fadbfcdcad19c3f189c8ae1c8bb0a8ca3d","title":"","text":"\"name\": \"The name of this port. All ports in an EndpointSlice must have a unique name. If the EndpointSlice is derived from a Kubernetes service, this corresponds to the Service.ports[].name. Name must either be an empty string or pass DNS_LABEL validation: * must be no more than 63 characters long.
* must consist of lower case alphanumeric characters or '-'. * must start and end with an alphanumeric character. Default is empty string.\", \"protocol\": \"The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP.\", \"port\": \"The port number of the endpoint. If this is not specified, ports are not restricted and must be interpreted in the context of the specific consumer.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", } func (EndpointPort) SwaggerDoc() map[string]string {"} {"_id":"doc-en-kubernetes-718a96f0663797fccf6cec1365e3d1ffc94cc79229396dcfcd62a7941b8f333f","title":"","text":"\"description\": \"EndpointPort is a tuple that describes a single port.\", \"properties\": { \"appProtocol\": { \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. This is a beta field that is guarded by the ServiceAppProtocol feature gate and enabled by default.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. 
Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. This is a beta field that is guarded by the ServiceAppProtocol feature gate and enabled by default.\", \"type\": \"string\" }, \"name\": {"} {"_id":"doc-en-kubernetes-29ad0ed9eda1bf1ef6a60e1452ab272ffb32160d08160a2c8396de4e3b885024","title":"","text":"\"description\": \"ServicePort contains information on service's port.\", \"properties\": { \"appProtocol\": { \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. This is a beta field that is guarded by the ServiceAppProtocol feature gate and enabled by default.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. This is a beta field that is guarded by the ServiceAppProtocol feature gate and enabled by default.\", \"type\": \"string\" }, \"name\": {"} {"_id":"doc-en-kubernetes-05c4e6878112ce64375041da94bbd0932ff7432bb56daa057f00523a0683a935","title":"","text":"\"description\": \"EndpointPort represents a Port used by an EndpointSlice\", \"properties\": { \"appProtocol\": { \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). 
Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"type\": \"string\" }, \"name\": {"} {"_id":"doc-en-kubernetes-613ff8d7d82ee6f9e921fa3c265023d71f34c8bf3ed937125c57e43ee40d455d","title":"","text":"package pidlimit import ( \"fmt\" \"io/ioutil\" \"strconv\" \"strings\" \"syscall\" \"time\" \"k8s.io/apimachinery/pkg/apis/meta/v1\" v1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" statsapi \"k8s.io/kubelet/pkg/apis/stats/v1alpha1\" )"} {"_id":"doc-en-kubernetes-fef10ea3dbe978d2b340dd9dc02d65aba9e308241bc3cacbb54dc435fa4e7a0f","title":"","text":"} } var info syscall.Sysinfo_t syscall.Sysinfo(&info) procs := int64(info.Procs) rlimit.NumOfRunningProcesses = &procs // Prefer to read \"/proc/loadavg\" when possible because sysinfo(2) // returns truncated number when greater than 65538. 
See // https://github.com/kubernetes/kubernetes/issues/107107 if procs, err := runningTaskCount(); err == nil { rlimit.NumOfRunningProcesses = &procs } else { var info syscall.Sysinfo_t syscall.Sysinfo(&info) procs := int64(info.Procs) rlimit.NumOfRunningProcesses = &procs } rlimit.Time = v1.NewTime(time.Now()) return rlimit, nil } func runningTaskCount() (int64, error) { // Example: 1.36 3.49 4.53 2/3518 3715089 bytes, err := ioutil.ReadFile(\"/proc/loadavg\") if err != nil { return 0, err } fields := strings.Fields(string(bytes)) if len(fields) < 5 { return 0, fmt.Errorf(\"not enough fields in /proc/loadavg\") } subfields := strings.Split(fields[3], \"/\") if len(subfields) != 2 { return 0, fmt.Errorf(\"error parsing fourth field of /proc/loadavg\") } return strconv.ParseInt(subfields[1], 10, 64) } "} {"_id":"doc-en-kubernetes-40d245af514a0a9ba7664c14ea1877c3bc7c00149004318199adf1619d21799e","title":"","text":"pluginName := fmt.Sprintf(\"example-plugin\") p := pluginwatcher.NewTestExamplePlugin(pluginName, registerapi.DevicePlugin, socketPath, supportedVersions...) require.NoError(t, p.Serve(\"v1beta1\", \"v1beta2\")) defer func() { require.NoError(t, p.Stop()) }() timestampBeforeRegistration := time.Now() dsw.AddOrUpdatePlugin(socketPath) waitForRegistration(t, socketPath, timestampBeforeRegistration, asw)"} {"_id":"doc-en-kubernetes-bdb61d976ccb6b458de9ce949c6ad7f92abf9a8256fffa8a303d053903885b4a","title":"","text":"// validation with resource counting, but we did this before QoS was even defined. // let's not make that mistake again with other resources now that QoS is defined. 
requiredSet := quota.ToSet(required).Intersection(validationSet) missingSet := sets.NewString() missingSetResourceToContainerNames := make(map[string]sets.String) for i := range pod.Spec.Containers { enforcePodContainerConstraints(&pod.Spec.Containers[i], requiredSet, missingSet) enforcePodContainerConstraints(&pod.Spec.Containers[i], requiredSet, missingSetResourceToContainerNames) } for i := range pod.Spec.InitContainers { enforcePodContainerConstraints(&pod.Spec.InitContainers[i], requiredSet, missingSet) enforcePodContainerConstraints(&pod.Spec.InitContainers[i], requiredSet, missingSetResourceToContainerNames) } if len(missingSet) == 0 { if len(missingSetResourceToContainerNames) == 0 { return nil } return fmt.Errorf(\"must specify %s\", strings.Join(missingSet.List(), \",\")) var resources = sets.NewString() for resource := range missingSetResourceToContainerNames { resources.Insert(resource) } var errorMessages = make([]string, 0, len(missingSetResourceToContainerNames)) for _, resource := range resources.List() { errorMessages = append(errorMessages, fmt.Sprintf(\"%s for: %s\", resource, strings.Join(missingSetResourceToContainerNames[resource].List(), \",\"))) } return fmt.Errorf(\"must specify %s\", strings.Join(errorMessages, \"; \")) } // GroupResource that this evaluator tracks"} {"_id":"doc-en-kubernetes-c24fb137b5888fee061bd47d7a0714d1f3f0503a1e3a44650b5e230da20ddfbe","title":"","text":"// enforcePodContainerConstraints checks for required resources that are not set on this container and // adds them to missingSet. 
func enforcePodContainerConstraints(container *corev1.Container, requiredSet, missingSet sets.String) { func enforcePodContainerConstraints(container *corev1.Container, requiredSet sets.String, missingSetResourceToContainerNames map[string]sets.String) { requests := container.Resources.Requests limits := container.Resources.Limits containerUsage := podComputeUsageHelper(requests, limits) containerSet := quota.ToSet(quota.ResourceNames(containerUsage)) if !containerSet.Equal(requiredSet) { difference := requiredSet.Difference(containerSet) missingSet.Insert(difference.List()...) if difference := requiredSet.Difference(containerSet); difference.Len() != 0 { for _, diff := range difference.List() { if _, ok := missingSetResourceToContainerNames[diff]; !ok { missingSetResourceToContainerNames[diff] = sets.NewString(container.Name) } else { missingSetResourceToContainerNames[diff].Insert(container.Name) } } } } }"} {"_id":"doc-en-kubernetes-b2cf83809e6c4a29f4d63b1fba61a9aecdd7416d9a1977029b696834394df95e","title":"","text":"\"time\" \"github.com/google/go-cmp/cmp\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\""} {"_id":"doc-en-kubernetes-975d6394f4d72738db9fff7b98a2b75a8eab92863860c942312065b95528e947","title":"","text":"pod: &api.Pod{ Spec: api.PodSpec{ InitContainers: []api.Container{{ Name: \"dummy\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")}, }, }}, }, }, required: []corev1.ResourceName{corev1.ResourceMemory}, err: `must specify memory for: dummy`, }, \"multiple init container resource missing\": { pod: &api.Pod{ Spec: api.PodSpec{ InitContainers: []api.Container{{ Name: \"foo\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")}, 
}, }, { Name: \"bar\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")},"} {"_id":"doc-en-kubernetes-93a3283d3046650651b961ba8c833c97c9ac0e991a1672d723bb558c14d07f2a","title":"","text":"}, }, required: []corev1.ResourceName{corev1.ResourceMemory}, err: `must specify memory`, err: `must specify memory for: bar,foo`, }, \"container resource missing\": { pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"dummy\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")}, }, }}, }, }, required: []corev1.ResourceName{corev1.ResourceMemory}, err: `must specify memory for: dummy`, }, \"multiple container resource missing\": { pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"foo\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")}, }, }, { Name: \"bar\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")},"} {"_id":"doc-en-kubernetes-ad6d581655672fed5c468e8d5ca654d00af5c2b66f77a9ed92f1bc22cb56428c","title":"","text":"}, }, required: []corev1.ResourceName{corev1.ResourceMemory}, err: `must specify memory`, err: `must specify memory for: bar,foo`, }, \"container resource missing multiple\": { pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"foo\", Resources: api.ResourceRequirements{}, }, { Name: \"bar\", Resources: api.ResourceRequirements{}, }}, }, }, required: []corev1.ResourceName{corev1.ResourceMemory, corev1.ResourceCPU}, err: `must specify cpu for: bar,foo; memory for: bar,foo`, }, } evaluator := 
NewPodEvaluator(nil, clock.RealClock{})"} {"_id":"doc-en-kubernetes-f0a09b02eef41b4a5a15c13669d87eb48e8503a0e530502868b43a25ea802e39","title":"","text":"case err != nil && len(test.err) == 0, err == nil && len(test.err) != 0, err != nil && test.err != err.Error(): t.Errorf(\"%s unexpected error: %v\", testName, err) t.Errorf(\"%s want: %v,got: %v\", testName, test.err, err) } } }"} {"_id":"doc-en-kubernetes-2ee9284e6f18f5981b3fd46924fc7b2e1a557190e11aa7b485195e4a9719ac2e","title":"","text":"id, err := getDiskID(pdName, exec) if err != nil { klog.Errorf(\"WaitForAttach (windows) failed with error %s\", err) return \"\", err } return id, err return id, nil } partition := \"\""} {"_id":"doc-en-kubernetes-89ffd7043eefa39531dc42605371ca95193e15c04504e9489d32cf5b01527716","title":"","text":"clientset \"k8s.io/client-go/kubernetes\" \"k8s.io/client-go/rest\" v1 \"k8s.io/client-go/tools/clientcmd/api/v1\" resttransport \"k8s.io/client-go/transport\" kubeapiservertesting \"k8s.io/kubernetes/cmd/kube-apiserver/app/testing\" \"k8s.io/kubernetes/pkg/apis/autoscaling\" api \"k8s.io/kubernetes/pkg/apis/core\""} {"_id":"doc-en-kubernetes-6d3873542fc65ab2a9aed07f5009cea74e81d263ba12848fd1d3006f2ef51564","title":"","text":"controlPlaneConfig.GenericConfig.Authorization.Authorizer = authorizerfactory.NewAlwaysDenyAuthorizer() _, s, closeFn := framework.RunAnAPIServer(controlPlaneConfig) defer closeFn() ns := framework.CreateTestingNamespace(\"auth-always-deny\", s, t) defer framework.DeleteTestingNamespace(ns, s, t) transport := http.DefaultTransport transport := resttransport.NewBearerAuthRoundTripper(framework.UnprivilegedUserToken, http.DefaultTransport) for _, r := range getTestRequests(ns.Name) { bodyBytes := bytes.NewReader([]byte(r.body))"} {"_id":"doc-en-kubernetes-a35afca203a22947ef89bdfd6d8fd138abb0c6071e02d9d35fff70708244220a","title":"","text":"apierrors \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" 
\"k8s.io/apimachinery/pkg/util/wait\" authauthenticator \"k8s.io/apiserver/pkg/authentication/authenticator\" \"k8s.io/apiserver/pkg/authentication/group\" \"k8s.io/apiserver/pkg/authentication/request/bearertoken\" authenticatorunion \"k8s.io/apiserver/pkg/authentication/request/union\" \"k8s.io/apiserver/pkg/authentication/user\" \"k8s.io/apiserver/pkg/authorization/authorizer\" \"k8s.io/apiserver/pkg/authorization/authorizerfactory\""} {"_id":"doc-en-kubernetes-9c7f8ad2103236513034b8924077b810def4b5ed210be6fb11d1022f7cf02be0","title":"","text":"func initStatusForbiddenControlPlaneConfig() *controlplane.Config { controlPlaneConfig := framework.NewIntegrationTestControlPlaneConfig() controlPlaneConfig.GenericConfig.Authentication.Authenticator = authenticatorunion.New( authauthenticator.RequestFunc(func(req *http.Request) (*authauthenticator.Response, bool, error) { return &authauthenticator.Response{ User: &user.DefaultInfo{ Name: \"unprivileged\", Groups: []string{user.AllAuthenticated}, }, }, true, nil })) controlPlaneConfig.GenericConfig.Authorization.Authorizer = authorizerfactory.NewAlwaysDenyAuthorizer() return controlPlaneConfig }"} {"_id":"doc-en-kubernetes-f48df30e6df0447ec9b7d166a9bd3140616fef0cea7a468c80d6953ffcbfde62","title":"","text":"statusCode: http.StatusForbidden, reqPath: \"/apis\", reason: \"Forbidden\", message: `forbidden: User \"\" cannot get path \"/apis\": Everything is forbidden.`, message: `forbidden: User \"unprivileged\" cannot get path \"/apis\": Everything is forbidden.`, }, { name: \"401\","} {"_id":"doc-en-kubernetes-6f4352f1a7a3044323ca34738a483533b32d3a42510d1b23c697cdab1a1f4e05","title":"","text":"netutils \"k8s.io/utils/net\" ) const ( UnprivilegedUserToken = \"unprivileged-user\" ) // Config is a struct of configuration directives for NewControlPlaneComponents. 
type Config struct { // If nil, a default is used, partially filled configs will not get populated."} {"_id":"doc-en-kubernetes-7e1d347656e1ba52ee8717b75abdd6ee8500ee8eab211fdc3b5bd80af4901af4","title":"","text":"return authorizer.DecisionAllow, \"always allow\", nil } // alwaysEmpty simulates \"no authentication\" for old tests func alwaysEmpty(req *http.Request) (*authauthenticator.Response, bool, error) { // unsecuredUser simulates requests to the unsecured endpoint for old tests func unsecuredUser(req *http.Request) (*authauthenticator.Response, bool, error) { auth := req.Header.Get(\"Authorization\") if len(auth) != 0 { return nil, false, nil } return &authauthenticator.Response{ User: &user.DefaultInfo{ Name: \"\", Name: \"system:unsecured\", Groups: []string{user.SystemPrivilegedGroup, user.AllAuthenticated}, }, }, true, nil }"} {"_id":"doc-en-kubernetes-430130c236576de2a3e8710885761ec8dde0d7ff675bec5ec603577ac0de8da4","title":"","text":"tokens[privilegedLoopbackToken] = &user.DefaultInfo{ Name: user.APIServerUser, UID: uuid.New().String(), Groups: []string{user.SystemPrivilegedGroup}, Groups: []string{user.SystemPrivilegedGroup, user.AllAuthenticated}, } tokens[UnprivilegedUserToken] = &user.DefaultInfo{ Name: \"unprivileged\", UID: uuid.New().String(), Groups: []string{user.AllAuthenticated}, } tokenAuthenticator := authenticatorfactory.NewFromTokens(tokens, controlPlaneConfig.GenericConfig.Authentication.APIAudiences) if controlPlaneConfig.GenericConfig.Authentication.Authenticator == nil { controlPlaneConfig.GenericConfig.Authentication.Authenticator = authenticatorunion.New(tokenAuthenticator, authauthenticator.RequestFunc(alwaysEmpty)) controlPlaneConfig.GenericConfig.Authentication.Authenticator = authenticatorunion.New(tokenAuthenticator, authauthenticator.RequestFunc(unsecuredUser)) } else { controlPlaneConfig.GenericConfig.Authentication.Authenticator = authenticatorunion.New(tokenAuthenticator, 
controlPlaneConfig.GenericConfig.Authentication.Authenticator) }"} {"_id":"doc-en-kubernetes-86eccc1a2bdba6f53f013bcb61d57099e955d4b68a8a6fcc79268029047ad352","title":"","text":"VolumeAttributes: attributes, }, } if pattern.FsType != \"\" { r.VolSource.CSI.FSType = &pattern.FsType } } default: framework.Failf(\"VolumeResource doesn't support: %s\", pattern.VolType)"} {"_id":"doc-en-kubernetes-2a4f285e10d3a46e7a0449c9735ca417d1befa0ed590564fea85c27235de0d27","title":"","text":"\"context\" \"crypto/sha256\" \"encoding/binary\" \"encoding/json\" \"errors\" \"fmt\" \"math\""} {"_id":"doc-en-kubernetes-4c271da8450c49c70c4b2ef4b1f404acbd28a0c82d9ad09579c605e8ce567be6","title":"","text":"apierrors \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/labels\" apitypes \"k8s.io/apimachinery/pkg/types\" utilerrors \"k8s.io/apimachinery/pkg/util/errors\" utilruntime \"k8s.io/apimachinery/pkg/util/runtime\" \"k8s.io/apimachinery/pkg/util/sets\""} {"_id":"doc-en-kubernetes-ddb7795976f3a32697578d01133f450e65a4f5fa48b415484bae8729aaaa4b80","title":"","text":"\"k8s.io/utils/clock\" flowcontrol \"k8s.io/api/flowcontrol/v1beta3\" flowcontrolapplyconfiguration \"k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3\" flowcontrolclient \"k8s.io/client-go/kubernetes/typed/flowcontrol/v1beta3\" flowcontrollister \"k8s.io/client-go/listers/flowcontrol/v1beta3\" )"} {"_id":"doc-en-kubernetes-92c8793edca32ab6b7b03535d9859693100389c5faa8dadcd0b62e3e4d195ad2","title":"","text":"// if we are going to issue an update, be sure we track every name we update so we know if we update it too often. 
currResult.updatedItems.Insert(fsu.flowSchema.Name) patchBytes, err := makeFlowSchemaConditionPatch(fsu.condition) if err != nil { // should never happen because these conditions are created here and well formed panic(fmt.Sprintf(\"Failed to json.Marshall(%#+v): %s\", fsu.condition, err.Error())) } if klogV := klog.V(4); klogV.Enabled() { klogV.Infof(\"%s writing Condition %s to FlowSchema %s, which had ResourceVersion=%s, because its previous value was %s, diff: %s\", cfgCtlr.name, fsu.condition, fsu.flowSchema.Name, fsu.flowSchema.ResourceVersion, fcfmt.Fmt(fsu.oldValue), cmp.Diff(fsu.oldValue, fsu.condition)) } fsIfc := cfgCtlr.flowcontrolClient.FlowSchemas() patchOptions := metav1.PatchOptions{FieldManager: cfgCtlr.asFieldManager} _, err = fsIfc.Patch(context.TODO(), fsu.flowSchema.Name, apitypes.StrategicMergePatchType, patchBytes, patchOptions, \"status\") if err != nil { if err := apply(cfgCtlr.flowcontrolClient.FlowSchemas(), fsu, cfgCtlr.asFieldManager); err != nil { if apierrors.IsNotFound(err) { // This object has been deleted. A notification is coming // and nothing more needs to be done here."} {"_id":"doc-en-kubernetes-f386a203da17b33683bf4909146697a909204820ad922f1b45aa78104fbe1734","title":"","text":"return suggestedDelay, utilerrors.NewAggregate(errs) } // makeFlowSchemaConditionPatch takes in a condition and returns the patch status as a json. func makeFlowSchemaConditionPatch(condition flowcontrol.FlowSchemaCondition) ([]byte, error) { o := struct { Status flowcontrol.FlowSchemaStatus `json:\"status\"` }{ Status: flowcontrol.FlowSchemaStatus{ Conditions: []flowcontrol.FlowSchemaCondition{ condition, }, }, } return json.Marshal(o) func apply(client flowcontrolclient.FlowSchemaInterface, fsu fsStatusUpdate, asFieldManager string) error { applyOptions := metav1.ApplyOptions{FieldManager: asFieldManager, Force: true} // the condition field in fsStatusUpdate holds the new condition we want to update. 
// TODO: this will break when we have multiple conditions for a flowschema _, err := client.ApplyStatus(context.TODO(), toFlowSchemaApplyConfiguration(fsu), applyOptions) return err } func toFlowSchemaApplyConfiguration(fsUpdate fsStatusUpdate) *flowcontrolapplyconfiguration.FlowSchemaApplyConfiguration { condition := flowcontrolapplyconfiguration.FlowSchemaCondition(). WithType(fsUpdate.condition.Type). WithStatus(fsUpdate.condition.Status). WithReason(fsUpdate.condition.Reason). WithLastTransitionTime(fsUpdate.condition.LastTransitionTime). WithMessage(fsUpdate.condition.Message) return flowcontrolapplyconfiguration.FlowSchema(fsUpdate.flowSchema.Name). WithStatus(flowcontrolapplyconfiguration.FlowSchemaStatus(). WithConditions(condition), ) } // shouldDelayUpdate checks to see if a flowschema has been updated too often and returns true if a delay is needed."} {"_id":"doc-en-kubernetes-93225da96a516ca529003757567582f9ea126f16e21f449d6ab0df15e1c3251b","title":"","text":" /* Copyright 2021 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package flowcontrol import ( \"fmt\" \"reflect\" \"testing\" \"time\" \"github.com/google/go-cmp/cmp\" flowcontrol \"k8s.io/api/flowcontrol/v1beta3\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) func Test_configController_generatePatchBytes(t *testing.T) { now := time.Now().UTC() tests := []struct { name string condition flowcontrol.FlowSchemaCondition want []byte }{ { name: \"check if only condition is parsed\", condition: flowcontrol.FlowSchemaCondition{ Type: flowcontrol.FlowSchemaConditionDangling, Status: flowcontrol.ConditionTrue, Reason: \"test reason\", Message: \"test none\", LastTransitionTime: metav1.NewTime(now), }, want: []byte(fmt.Sprintf(`{\"status\":{\"conditions\":[{\"type\":\"Dangling\",\"status\":\"True\",\"lastTransitionTime\":\"%s\",\"reason\":\"test reason\",\"message\":\"test none\"}]}}`, now.Format(time.RFC3339))), }, { name: \"check when message has double quotes\", condition: flowcontrol.FlowSchemaCondition{ Type: flowcontrol.FlowSchemaConditionDangling, Status: flowcontrol.ConditionTrue, Reason: \"test reason\", Message: `test \"\"none`, LastTransitionTime: metav1.NewTime(now), }, want: []byte(fmt.Sprintf(`{\"status\":{\"conditions\":[{\"type\":\"Dangling\",\"status\":\"True\",\"lastTransitionTime\":\"%s\",\"reason\":\"test reason\",\"message\":\"test \"\"none\"}]}}`, now.Format(time.RFC3339))), }, { name: \"check when message has a whitespace character that can be escaped\", condition: flowcontrol.FlowSchemaCondition{ Type: flowcontrol.FlowSchemaConditionDangling, Status: flowcontrol.ConditionTrue, Reason: \"test reason\", Message: \"test u0009u0009none\", LastTransitionTime: metav1.NewTime(now), }, want: []byte(fmt.Sprintf(`{\"status\":{\"conditions\":[{\"type\":\"Dangling\",\"status\":\"True\",\"lastTransitionTime\":\"%s\",\"reason\":\"test reason\",\"message\":\"test ttnone\"}]}}`, now.Format(time.RFC3339))), }, { name: \"check when a few fields (message & lastTransitionTime) are missing\", condition: 
flowcontrol.FlowSchemaCondition{ Type: flowcontrol.FlowSchemaConditionDangling, Status: flowcontrol.ConditionTrue, Reason: \"test reason\", }, want: []byte(`{\"status\":{\"conditions\":[{\"type\":\"Dangling\",\"status\":\"True\",\"lastTransitionTime\":null,\"reason\":\"test reason\"}]}}`), }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { got, _ := makeFlowSchemaConditionPatch(tt.condition) if !reflect.DeepEqual(got, tt.want) { t.Errorf(\"makeFlowSchemaConditionPatch() got = %s, want %s; diff is %s\", got, tt.want, cmp.Diff(tt.want, got)) } }) } } "} {"_id":"doc-en-kubernetes-6a3a0d101ee9a9bd9b64004faf299e3f3517939f5d662486d2142deb8f145941","title":"","text":"}() }() postTimeoutFn() defer postTimeoutFn() tw.timeout(err) } }"} {"_id":"doc-en-kubernetes-546e0f0e94660ad0ffe2b3404674af482246a476bea24fd28278735d1db11693","title":"","text":"}) } postTimeoutCh := make(chan struct{}) ts := httptest.NewServer( withDeadline( WithTimeout("} {"_id":"doc-en-kubernetes-18d7b0e088c3001bef37ed708a585c2916f0df9bd44dda4bc2ca467cb02545b7","title":"","text":"h := w.Header() // trigger the timeout cancel() // mutate response Headers for j := 0; j < 1000; j++ { h.Set(\"Test\", \"post\") // keep mutating response Headers until the request times out for { select { case <-postTimeoutCh: return default: h.Set(\"Test\", \"post\") } } }), func(req *http.Request) (*http.Request, bool, func(), *apierrors.StatusError) { return req, false, func() {}, apierrors.NewServerTimeout(schema.GroupResource{Group: \"foo\", Resource: \"bar\"}, \"get\", 0) return req, false, func() { close(postTimeoutCh) }, apierrors.NewServerTimeout(schema.GroupResource{Group: \"foo\", Resource: \"bar\"}, \"get\", 0) }, ), ),"} {"_id":"doc-en-kubernetes-8a1b50b2b7a81c9fa0575a72ba5bacb76bb19f7079a358a77c60df2d229a8d2d","title":"","text":"FailureThreshold: 1, } pod := livenessPodSpec(f.Namespace.Name, nil, livenessProbe) RunLivenessTest(f, pod, 5, time.Minute*5) // ~2 minutes backoff timeouts + 4 minutes 
defaultObservationTimeout + 2 minutes for each pod restart RunLivenessTest(f, pod, 5, 2*time.Minute+defaultObservationTimeout+4*2*time.Minute) }) /*"} {"_id":"doc-en-kubernetes-784f8eba5c56b8ef10e9d121a12b108cd15eaea7bb9c86158d9dbf2eb227c275","title":"","text":"framework.PodNominator parallelizer parallelize.Parallelizer // Indicates that RunFilterPlugins should accumulate all failed statuses and not return // after the first failure. runAllFilters bool } // extensionPoint encapsulates desired and applied set of plugins at a specific extension"} {"_id":"doc-en-kubernetes-cb6e30dbef9523459a5931d304cdaab869194e0351389d8a6becd33d7306e60c","title":"","text":"metricsRecorder *metricsRecorder podNominator framework.PodNominator extenders []framework.Extender runAllFilters bool captureProfile CaptureProfile clusterEventMap map[framework.ClusterEvent]sets.String parallelizer parallelize.Parallelizer"} {"_id":"doc-en-kubernetes-25c7a23ee31e825031d53a13c06806bfda44da61344a194321b931d894dfde12","title":"","text":"} } // WithRunAllFilters sets the runAllFilters flag, which means RunFilterPlugins accumulates // all failure Statuses. func WithRunAllFilters(runAllFilters bool) Option { return func(o *frameworkOptions) { o.runAllFilters = runAllFilters } } // WithPodNominator sets podNominator for the scheduling frameworkImpl. 
func WithPodNominator(nominator framework.PodNominator) Option { return func(o *frameworkOptions) {"} {"_id":"doc-en-kubernetes-c894b5616cc624512592b7f3e76d8f3980f50d59a2c68c7912c5893263353e80","title":"","text":"eventRecorder: options.eventRecorder, informerFactory: options.informerFactory, metricsRecorder: options.metricsRecorder, runAllFilters: options.runAllFilters, extenders: options.extenders, PodNominator: options.podNominator, parallelizer: options.parallelizer,"} {"_id":"doc-en-kubernetes-e68972a7190657f58cc7d298b8fc7677f3e3648f3779549168dff69b572095d1","title":"","text":"} pluginStatus.SetFailedPlugin(pl.Name()) statuses[pl.Name()] = pluginStatus if !f.runAllFilters { // Exit early if we don't need to run all filters. return statuses } } }"} {"_id":"doc-en-kubernetes-6759317b68e0c9ee8df08422fe2f716e005f2c6e1ec69dd5473f9fb36707a6b6","title":"","text":"plugins []*TestPlugin wantStatus *framework.Status wantStatusMap framework.PluginToStatus runAllFilters bool }{ { name: \"SuccessFilter\","} {"_id":"doc-en-kubernetes-3dec127209ab6b8e23e0e7876197ce50114bde8e0c5802d1eb79b8d6fd5eb64b","title":"","text":"\"TestPlugin2\": framework.NewStatus(framework.Unschedulable, \"injected filter status\").WithFailedPlugin(\"TestPlugin2\"), }, }, { name: \"SuccessFilterWithRunAllFilters\", plugins: []*TestPlugin{ { name: \"TestPlugin\", inj: injectedResult{FilterStatus: int(framework.Success)}, }, }, runAllFilters: true, wantStatus: nil, wantStatusMap: framework.PluginToStatus{}, }, { name: \"ErrorAndErrorFilters\", plugins: []*TestPlugin{ { name: \"TestPlugin1\", inj: injectedResult{FilterStatus: int(framework.Error)}, }, { name: \"TestPlugin2\", inj: injectedResult{FilterStatus: int(framework.Error)}, }, }, runAllFilters: true, wantStatus: framework.AsStatus(fmt.Errorf(`running \"TestPlugin1\" filter plugin: %w`, errInjectedFilterStatus)).WithFailedPlugin(\"TestPlugin1\"), wantStatusMap: framework.PluginToStatus{ \"TestPlugin1\": framework.AsStatus(fmt.Errorf(`running 
\"TestPlugin1\" filter plugin: %w`, errInjectedFilterStatus)).WithFailedPlugin(\"TestPlugin1\"), }, }, { name: \"ErrorAndErrorFilters\", plugins: []*TestPlugin{ { name: \"TestPlugin1\", inj: injectedResult{FilterStatus: int(framework.UnschedulableAndUnresolvable)}, }, { name: \"TestPlugin2\", inj: injectedResult{FilterStatus: int(framework.Unschedulable)}, }, }, runAllFilters: true, wantStatus: framework.NewStatus(framework.UnschedulableAndUnresolvable, \"injected filter status\", \"injected filter status\").WithFailedPlugin(\"TestPlugin1\"), wantStatusMap: framework.PluginToStatus{ \"TestPlugin1\": framework.NewStatus(framework.UnschedulableAndUnresolvable, \"injected filter status\").WithFailedPlugin(\"TestPlugin1\"), \"TestPlugin2\": framework.NewStatus(framework.Unschedulable, \"injected filter status\").WithFailedPlugin(\"TestPlugin2\"), }, }, } for _, tt := range tests {"} {"_id":"doc-en-kubernetes-78b17766ff0d4dcf22d1a50e6e387561ca7e578193e8f5a66c56b7fd2610a628","title":"","text":"config.Plugin{Name: pl.name}) } profile := config.KubeSchedulerProfile{Plugins: cfgPls} f, err := newFrameworkWithQueueSortAndBind(registry, profile, WithRunAllFilters(tt.runAllFilters)) f, err := newFrameworkWithQueueSortAndBind(registry, profile) if err != nil { t.Fatalf(\"fail to create framework: %s\", err) }"} {"_id":"doc-en-kubernetes-f8aa00e1604463c8bae74a3f6e2be8b865845fb49decfedffd799cba28733da5","title":"","text":"- endpoint: replaceNetworkingV1NamespacedNetworkPolicyStatus reason: endpoints is currently feature gated and will only receive e2e & conformance test in 1.25 link: https://github.com/kubernetes/kubernetes/pull/107963 - endpoint: readCoreV1NodeStatus reason: Kubernetes distribution would reasonably not allow this action via the API link: https://github.com/kubernetes/kubernetes/issues/109379 - endpoint: replaceCoreV1NodeStatus reason: Kubernetes distribution would reasonably not allow this action via the API link:
https://github.com/kubernetes/kubernetes/issues/109379 - endpoint: deleteCoreV1CollectionNode reason: Kubernetes distribution would reasonably not allow this action via the API link: https://github.com/kubernetes/kubernetes/issues/109379 "} {"_id":"doc-en-kubernetes-3843cfdfc1daf8ef0e1b4ec695fe429a8a2ec3dcca5f71586486446614308a0d","title":"","text":"OPENSTACK_IMAGE_NAME=${OPENSTACK_IMAGE_NAME:-CentOS7} # Downloaded image name for Openstack project IMAGE_FILE=${IMAGE_FILE:-CentOS-7-x86_64-GenericCloud-1510.qcow2} IMAGE_FILE=${IMAGE_FILE:-CentOS-7-x86_64-GenericCloud-1604.qcow2} # Absolute path where image file is stored. IMAGE_PATH=${IMAGE_PATH:-~/Downloads/openstack}"} {"_id":"doc-en-kubernetes-96dd8d3801799f5deaa85e260085777c08b5e12ec26ad0f898f4879a9a0da65c","title":"","text":"# Verify prereqs on host machine function verify-prereqs() { # Check the OpenStack command-line clients for client in swift glance nova openstack; for client in swift glance nova heat openstack; do if which $client >/dev/null 2>&1; then echo \"${client} client installed\""} {"_id":"doc-en-kubernetes-787b127fd1c6a48bf1758c8a3ccdd6bee8e0a186adc55ab78078642c9d95e002","title":"","text":"# etcd - name: \"etcd\" version: 3.5.3 version: 3.5.4 refPaths: - path: cluster/gce/manifests/etcd.manifest match: etcd_docker_tag|etcd_version"} {"_id":"doc-en-kubernetes-083623d7de21a74c29510153716d5c58a9443e348aca224767162bb48378e768","title":"","text":"{ \"name\": \"etcd-container\", {{security_context}} \"image\": \"{{ pillar.get('etcd_docker_repository', 'registry.k8s.io/etcd') }}:{{ pillar.get('etcd_docker_tag', '3.5.3-0') }}\", \"image\": \"{{ pillar.get('etcd_docker_repository', 'registry.k8s.io/etcd') }}:{{ pillar.get('etcd_docker_tag', '3.5.4-0') }}\", \"resources\": { \"requests\": { \"cpu\": {{ cpulimit }}"} {"_id":"doc-en-kubernetes-c5ac68470522c0e6f08ea52f61fd9cddea704b325bac1c98cfa986d39fbebb60","title":"","text":"\"value\": \"{{ pillar.get('storage_backend', 'etcd3') }}\" }, { \"name\": 
\"TARGET_VERSION\", \"value\": \"{{ pillar.get('etcd_version', '3.5.3') }}\" \"value\": \"{{ pillar.get('etcd_version', '3.5.4') }}\" }, { \"name\": \"DO_NOT_MOVE_BINARIES\","} {"_id":"doc-en-kubernetes-1c92efc8afcfbd836f84b10386f7de51ca13ea1647578b3ac0ec45e32205f462","title":"","text":"export SECONDARY_RANGE_NAME=\"pods-default\" export STORAGE_BACKEND=\"etcd3\" export STORAGE_MEDIA_TYPE=\"application/vnd.kubernetes.protobuf\" export ETCD_IMAGE=3.5.3-0 export ETCD_VERSION=3.5.3 export ETCD_IMAGE=3.5.4-0 export ETCD_VERSION=3.5.4 # Upgrade master with updated kube envs \"${KUBE_ROOT}/cluster/gce/upgrade.sh\" -M -l"} {"_id":"doc-en-kubernetes-55a4e03abe9a52a7450f1f110781351252d8f104234db6bc08a9a49beb2cc6a7","title":"","text":"MinExternalEtcdVersion = \"3.2.18\" // DefaultEtcdVersion indicates the default etcd version that kubeadm uses DefaultEtcdVersion = \"3.5.3-0\" DefaultEtcdVersion = \"3.5.4-0\" // Etcd defines variable used internally when referring to etcd component Etcd = \"etcd\""} {"_id":"doc-en-kubernetes-bfa5c7fbc9ce04dece0a4da73b4375b6e20b0f1902609423eef6b6484cb7af26","title":"","text":"19: \"3.4.13-0\", 20: \"3.4.13-0\", 21: \"3.4.13-0\", 22: \"3.5.3-0\", 23: \"3.5.3-0\", 24: \"3.5.3-0\", 22: \"3.5.4-0\", 23: \"3.5.4-0\", 24: \"3.5.4-0\", 25: \"3.5.4-0\", } // KubeadmCertsClusterRoleName sets the name for the ClusterRole that allows"} {"_id":"doc-en-kubernetes-c04e254653ab1e291fa8be01b665ba25384658936f2205f8747b4f4deeb2ba4b","title":"","text":"# A set of helpers for starting/running etcd for tests ETCD_VERSION=${ETCD_VERSION:-3.5.3} ETCD_VERSION=${ETCD_VERSION:-3.5.4} ETCD_HOST=${ETCD_HOST:-127.0.0.1} ETCD_PORT=${ETCD_PORT:-2379} # This is intentionally not called ETCD_LOG_LEVEL:"} {"_id":"doc-en-kubernetes-7abea5b46c4141a2a83f00f9369761ff9bd87988ccbbf587c9ba45f80357d49a","title":"","text":"imagePullPolicy: Never args: [ \"--etcd-servers=http://localhost:2379\" ] - name: etcd image: gcr.io/etcd-development/etcd:v3.5.3 image: 
gcr.io/etcd-development/etcd:v3.5.4 "} {"_id":"doc-en-kubernetes-e3a4271ec8db6cdc365986c8bdd58370bcd7b86777b94dd3182c5de8552c58db","title":"","text":"e2essh \"k8s.io/kubernetes/test/e2e/framework/ssh\" ) const etcdImage = \"3.5.3-0\" const etcdImage = \"3.5.4-0\" // EtcdUpgrade upgrades etcd on GCE. func EtcdUpgrade(targetStorage, targetVersion string) error {"} {"_id":"doc-en-kubernetes-ba5a58b815d1216b8e631749f25b6bdcb60bd126558ccb5763f075531c4162a9","title":"","text":"configs[CudaVectorAdd2] = Config{list.PromoterE2eRegistry, \"cuda-vector-add\", \"2.2\"} configs[DebianIptables] = Config{list.BuildImageRegistry, \"debian-iptables\", \"bullseye-v1.3.0\"} configs[EchoServer] = Config{list.PromoterE2eRegistry, \"echoserver\", \"2.4\"} configs[Etcd] = Config{list.GcEtcdRegistry, \"etcd\", \"3.5.3-0\"} configs[Etcd] = Config{list.GcEtcdRegistry, \"etcd\", \"3.5.4-0\"} configs[GlusterDynamicProvisioner] = Config{list.PromoterE2eRegistry, \"glusterdynamic-provisioner\", \"v1.3\"} configs[Httpd] = Config{list.PromoterE2eRegistry, \"httpd\", \"2.4.38-2\"} configs[HttpdNew] = Config{list.PromoterE2eRegistry, \"httpd\", \"2.4.39-2\"}"} {"_id":"doc-en-kubernetes-1aa24fdddb9f4f52e322927595f77ee6f8395269ab86d968ac855c757c5d5097","title":"","text":"Pod: newNamedPod(\"5-static\", \"test1\", \"pod1\", true), UpdateType: kubetypes.SyncPodUpdate, }) // Wait for the previous work to be delivered to the worker drainAllWorkers(podWorkers) channels.Channel(\"5-static\").Hold() podWorkers.UpdatePod(UpdatePodOptions{ Pod: newNamedPod(\"5-static\", \"test1\", \"pod1\", true),"} {"_id":"doc-en-kubernetes-f598162e0f5a8c0b5a1b81c312dd2a75d42c6e1e0481befb84dba896627d14fc","title":"","text":"return true, binding, nil }) controllers := make(map[string]string) stopFn := broadcaster.StartEventWatcher(func(obj runtime.Object) { stopFn, err := broadcaster.StartEventWatcher(func(obj runtime.Object) { e, ok := obj.(*eventsv1.Event) if !ok || e.Reason != \"Scheduled\" { return"} 
{"_id":"doc-en-kubernetes-a0aeb9471e2477d28c6142856f5ed8be9479588e62d67fbef886c2cacda8c026","title":"","text":"controllers[e.Regarding.Name] = e.ReportingController wg.Done() }) if err != nil { t.Fatal(err) } defer stopFn() // Run scheduler."} {"_id":"doc-en-kubernetes-8727884e10f2bdaf2d8a553d67983ded94697f227ace33511bde1b3868aadb37","title":"","text":"fwk.EventRecorder().Eventf(p.Pod, nil, v1.EventTypeWarning, \"FailedScheduling\", \"Scheduling\", msg) } called := make(chan struct{}) stopFunc := eventBroadcaster.StartEventWatcher(func(obj runtime.Object) { stopFunc, err := eventBroadcaster.StartEventWatcher(func(obj runtime.Object) { e, _ := obj.(*eventsv1.Event) if e.Reason != item.eventReason { t.Errorf(\"got event %v, want %v\", e.Reason, item.eventReason) } close(called) }) if err != nil { t.Fatal(err) } sched.scheduleOne(ctx) <-called if e, a := item.expectAssumedPod, gotAssumedPod; !reflect.DeepEqual(e, a) {"} {"_id":"doc-en-kubernetes-e111ced0aec596bd5a3feb53c8649f0c4b281b4c4641729afbbc2197a5203a89","title":"","text":"fakeVolumeBinder := volumebinding.NewFakeVolumeBinder(item.volumeBinderConfig) s, bindingChan, errChan := setupTestSchedulerWithVolumeBinding(ctx, fakeVolumeBinder, eventBroadcaster) eventChan := make(chan struct{}) stopFunc := eventBroadcaster.StartEventWatcher(func(obj runtime.Object) { stopFunc, err := eventBroadcaster.StartEventWatcher(func(obj runtime.Object) { e, _ := obj.(*eventsv1.Event) if e, a := item.eventReason, e.Reason; e != a { t.Errorf(\"expected %v, got %v\", e, a) } close(eventChan) }) if err != nil { t.Fatal(err) } s.scheduleOne(ctx) // Wait for pod to succeed or fail scheduling select {"} {"_id":"doc-en-kubernetes-e142d9aed8b74cd7c7eef9b5473aded2ae944dd756d84bda301222392b541d2d","title":"","text":"// StartStructuredLogging starts sending events received from this EventBroadcaster to the structured logging function. // The return value can be ignored or used to stop recording, if desired. 
// TODO: this function should also return an error. func (e *eventBroadcasterImpl) StartStructuredLogging(verbosity klog.Level) func() { return e.StartEventWatcher( stopWatcher, err := e.StartEventWatcher( func(obj runtime.Object) { event, ok := obj.(*eventsv1.Event) if !ok {"} {"_id":"doc-en-kubernetes-b80fe5863c46ceb880a325bb86e700961737df6a10bec1d4cb852692bfbcfaaf","title":"","text":"} klog.V(verbosity).InfoS(\"Event occurred\", \"object\", klog.KRef(event.Regarding.Namespace, event.Regarding.Name), \"kind\", event.Regarding.Kind, \"apiVersion\", event.Regarding.APIVersion, \"type\", event.Type, \"reason\", event.Reason, \"action\", event.Action, \"note\", event.Note) }) if err != nil { klog.Errorf(\"failed to start event watcher: '%v'\", err) return func() {} } return stopWatcher } // StartEventWatcher starts sending events received from this EventBroadcaster to the given event handler function. // The return value is used to stop recording func (e *eventBroadcasterImpl) StartEventWatcher(eventHandler func(event runtime.Object)) func() { func (e *eventBroadcasterImpl) StartEventWatcher(eventHandler func(event runtime.Object)) (func(), error) { watcher, err := e.Watch() if err != nil { klog.Errorf(\"Unable start event watcher: '%v' (will not retry!)\", err) // TODO: Rewrite the function signature to return an error, for // now just return a no-op function return func() { klog.Error(\"The event watcher failed to start\") } return nil, err } go func() { defer utilruntime.HandleCrash()"} {"_id":"doc-en-kubernetes-1e519579175705fad15e45aa3b326503d86cc525df5fcf72cdd6a1c915502f0e","title":"","text":"eventHandler(watchEvent.Object) } }() return watcher.Stop return watcher.Stop, nil } func (e *eventBroadcasterImpl) startRecordingEvents(stopCh <-chan struct{}) { func (e *eventBroadcasterImpl) startRecordingEvents(stopCh <-chan struct{}) error { eventHandler := func(obj runtime.Object) { event, ok := obj.(*eventsv1.Event) if !ok {"} 
{"_id":"doc-en-kubernetes-0a2b74c39293b824723109a19b169c67b95d6151efc2a10ca37e8fcdf0b9a9ec","title":"","text":"} e.recordToSink(event, clock.RealClock{}) } stopWatcher := e.StartEventWatcher(eventHandler) stopWatcher, err := e.StartEventWatcher(eventHandler) if err != nil { return err } go func() { <-stopCh stopWatcher() }() return nil } // StartRecordingToSink starts sending events received from the specified eventBroadcaster to the given sink. func (e *eventBroadcasterImpl) StartRecordingToSink(stopCh <-chan struct{}) { go wait.Until(e.refreshExistingEventSeries, refreshTime, stopCh) go wait.Until(e.finishSeries, finishTime, stopCh) e.startRecordingEvents(stopCh) err := e.startRecordingEvents(stopCh) if err != nil { klog.Errorf(\"unexpected type, expected eventsv1.Event\") return } } type eventBroadcasterAdapterImpl struct {"} {"_id":"doc-en-kubernetes-9e70573c17f52ec2eccf6d7c57fec03437eb5dbbfeb1251ed572ab34f512489f","title":"","text":"// Don't call StartRecordingToSink, as we don't need neither refreshing event // series nor finishing them in this tests and additional events updated would // race with our expected ones. broadcaster.startRecordingEvents(stopCh) err = broadcaster.startRecordingEvents(stopCh) if err != nil { t.Fatal(err) } recorder.Eventf(regarding, related, isomorphicEvent.Type, isomorphicEvent.Reason, isomorphicEvent.Action, isomorphicEvent.Note, []interface{}{1}) // read from the chan as this was needed only to populate the cache <-createEvent"} {"_id":"doc-en-kubernetes-2e3e949cc0168beb41f3cefd4b7b55089553931d74f9d2403d76f89a6f70c126","title":"","text":"// of StartRecordingToSink. This lets you also process events in a custom way (e.g. in tests). // NOTE: events received on your eventHandler should be copied before being used. // TODO: figure out if this can be removed. 
StartEventWatcher(eventHandler func(event runtime.Object)) func() StartEventWatcher(eventHandler func(event runtime.Object)) (func(), error) // StartStructuredLogging starts sending events received from this EventBroadcaster to the structured // logging function. The return value can be ignored or used to stop recording, if desired."} {"_id":"doc-en-kubernetes-881c51b2c221f126166cf6cf7a98555045bac8f077b4549f94936f386f5db281","title":"","text":"\"v\": true, \"vmodule\": true, \"log-flush-frequency\": true, \"provider-id\": true, } fs.VisitAll(func(f *pflag.Flag) { if notDeprecated[f.Name] {"} {"_id":"doc-en-kubernetes-7014bcf0cdcebfde8535ee72aa0d2871478b64bb8d5da659571495211602f9bb","title":"","text":"\"api/v1.0\", }, \"/\"), } _, err := c.MachineInfo() if err != nil { return nil, err } return c, nil }"} {"_id":"doc-en-kubernetes-bfe152df2cc3397a3cf811398511debdf025b46af4480f92194baa502e16e043","title":"","text":"NumStats: 3, NumSamples: 2, CpuUsagePercentiles: []int{10, 50, 90}, MemoryUsagePercentages: []int{10, 80, 90}, MemoryUsagePercentiles: []int{10, 80, 90}, } containerName := \"/some/container\" cinfo := itest.GenerateRandomContainerInfo(containerName, 4, query, 1*time.Second)"} {"_id":"doc-en-kubernetes-233a8bcb2d10693aef1e2fddafbb0b9d4be5182b14d0f4de6621269d7c5cbd11","title":"","text":"\"time\" v1 \"k8s.io/api/core/v1\" apiequality \"k8s.io/apimachinery/pkg/api/equality\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/util/diff\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/kubernetes/test/e2e/framework\" \"k8s.io/kubernetes/test/e2e/instrumentation/common\""} {"_id":"doc-en-kubernetes-37f44d1423a470bc4305078c2efed274c565d687def6f6dc1a5c3325aff3c4b1","title":"","text":"} }) ginkgo.It(\"should manage the lifecycle of an event\", func() { eventTestName := \"event-test\" ginkgo.By(\"creating a test event\") // create a test event in test namespace _, err := f.ClientSet.CoreV1().Events(f.Namespace.Name).Create(context.TODO(), 
&v1.Event{ ObjectMeta: metav1.ObjectMeta{ Name: eventTestName, Labels: map[string]string{ \"testevent-constant\": \"true\", }, }, Message: \"This is a test event\", Reason: \"Test\", Type: \"Normal\", Count: 1, InvolvedObject: v1.ObjectReference{ Namespace: f.Namespace.Name, }, }, metav1.CreateOptions{}) framework.ExpectNoError(err, \"failed to create test event\") ginkgo.By(\"listing all events in all namespaces\") // get a list of Events in all namespaces to ensure endpoint coverage eventsList, err := f.ClientSet.CoreV1().Events(\"\").List(context.TODO(), metav1.ListOptions{ LabelSelector: \"testevent-constant=true\", }) framework.ExpectNoError(err, \"failed list all events\") foundCreatedEvent := false var eventCreatedName string for _, val := range eventsList.Items { if val.ObjectMeta.Name == eventTestName && val.ObjectMeta.Namespace == f.Namespace.Name { foundCreatedEvent = true eventCreatedName = val.ObjectMeta.Name break } } if !foundCreatedEvent { framework.Failf(\"unable to find test event %s in namespace %s, full list of events is %+v\", eventTestName, f.Namespace.Name, eventsList.Items) } ginkgo.By(\"patching the test event\") // patch the event's message eventPatchMessage := \"This is a test event - patched\" eventPatch, err := json.Marshal(map[string]interface{}{ \"message\": eventPatchMessage, }) framework.ExpectNoError(err, \"failed to marshal the patch JSON payload\") _, err = f.ClientSet.CoreV1().Events(f.Namespace.Name).Patch(context.TODO(), eventTestName, types.StrategicMergePatchType, []byte(eventPatch), metav1.PatchOptions{}) framework.ExpectNoError(err, \"failed to patch the test event\") ginkgo.By(\"fetching the test event\") // get event by name event, err := f.ClientSet.CoreV1().Events(f.Namespace.Name).Get(context.TODO(), eventCreatedName, metav1.GetOptions{}) framework.ExpectNoError(err, \"failed to fetch the test event\") framework.ExpectEqual(event.Message, eventPatchMessage, \"test event message does not match patch message\") 
ginkgo.By(\"updating the test event\") testEvent, err := f.ClientSet.CoreV1().Events(f.Namespace.Name).Get(context.TODO(), event.Name, metav1.GetOptions{}) framework.ExpectNoError(err, \"failed to get test event\") testEvent.Series = &v1.EventSeries{ Count: 100, LastObservedTime: metav1.MicroTime{Time: time.Unix(1505828956, 0)}, } // clear ResourceVersion and ManagedFields which are set by control-plane testEvent.ObjectMeta.ResourceVersion = \"\" testEvent.ObjectMeta.ManagedFields = nil _, err = f.ClientSet.CoreV1().Events(f.Namespace.Name).Update(context.TODO(), testEvent, metav1.UpdateOptions{}) framework.ExpectNoError(err, \"failed to update the test event\") ginkgo.By(\"getting the test event\") event, err = f.ClientSet.CoreV1().Events(f.Namespace.Name).Get(context.TODO(), testEvent.Name, metav1.GetOptions{}) framework.ExpectNoError(err, \"failed to get test event\") // clear ResourceVersion and ManagedFields which are set by control-plane event.ObjectMeta.ResourceVersion = \"\" event.ObjectMeta.ManagedFields = nil if !apiequality.Semantic.DeepEqual(testEvent, event) { framework.Failf(\"test event wasn't properly updated: %v\", diff.ObjectReflectDiff(testEvent, event)) } ginkgo.By(\"deleting the test event\") // delete original event err = f.ClientSet.CoreV1().Events(f.Namespace.Name).Delete(context.TODO(), eventCreatedName, metav1.DeleteOptions{}) framework.ExpectNoError(err, \"failed to delete the test event\") ginkgo.By(\"listing all events in all namespaces\") // get a list of Events list namespace eventsList, err = f.ClientSet.CoreV1().Events(\"\").List(context.TODO(), metav1.ListOptions{ LabelSelector: \"testevent-constant=true\", }) framework.ExpectNoError(err, \"fail to list all events\") foundCreatedEvent = false for _, val := range eventsList.Items { if val.ObjectMeta.Name == eventTestName && val.ObjectMeta.Namespace == f.Namespace.Name { foundCreatedEvent = true break } } if foundCreatedEvent { framework.Failf(\"Should not have found test event %s in 
namespace %s, full list of events %+v\", eventTestName, f.Namespace.Name, eventsList.Items) } }) /* Release: v1.20 Testname: Event, delete a collection"} {"_id":"doc-en-kubernetes-ed49cbf19a58678d9d8f8be8231ad1e40c3947a114d7281b0110c6ab840b02c4","title":"","text":"its label selector. release: v1.20 file: test/e2e/instrumentation/core_events.go - testname: Event resource lifecycle codename: '[sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]' description: Create an event, the event MUST exist. The event is patched with a new message, the check MUST have the update message. The event is deleted and MUST NOT show up when listing all events. release: v1.20 - testname: Event, manage lifecycle of an Event codename: '[sig-instrumentation] Events should manage the lifecycle of an event [Conformance]' description: Attempt to create an event which MUST succeed. Attempt to list all namespaces with a label selector which MUST succeed. One list MUST be found. The event is patched with a new message, the check MUST have the update message. The event is updated with a new series of events, the check MUST confirm this update. The event is deleted and MUST NOT show up when listing all events. release: v1.25 file: test/e2e/instrumentation/core_events.go - testname: DNS, cluster codename: '[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]'"} {"_id":"doc-en-kubernetes-ba7c55e8d338a5af807161432feb7b968580fae3f273799ab005d07a78be9e78","title":"","text":"f.NamespacePodSecurityEnforceLevel = admissionapi.LevelPrivileged /* Release: v1.20 Testname: Event resource lifecycle Description: Create an event, the event MUST exist. The event is patched with a new message, the check MUST have the update message. The event is deleted and MUST NOT show up when listing all events. Release: v1.25 Testname: Event, manage lifecycle of an Event Description: Attempt to create an event which MUST succeed. 
Attempt to list all namespaces with a label selector which MUST succeed. One list MUST be found. The event is patched with a new message, the check MUST have the update message. The event is updated with a new series of events, the check MUST confirm this update. The event is deleted and MUST NOT show up when listing all events. */ framework.ConformanceIt(\"should ensure that an event can be fetched, patched, deleted, and listed\", func() { eventTestName := \"event-test\" ginkgo.By(\"creating a test event\") // create a test event in test namespace _, err := f.ClientSet.CoreV1().Events(f.Namespace.Name).Create(context.TODO(), &v1.Event{ ObjectMeta: metav1.ObjectMeta{ Name: eventTestName, Labels: map[string]string{ \"testevent-constant\": \"true\", }, }, Message: \"This is a test event\", Reason: \"Test\", Type: \"Normal\", Count: 1, InvolvedObject: v1.ObjectReference{ Namespace: f.Namespace.Name, }, }, metav1.CreateOptions{}) framework.ExpectNoError(err, \"failed to create test event\") ginkgo.By(\"listing all events in all namespaces\") // get a list of Events in all namespaces to ensure endpoint coverage eventsList, err := f.ClientSet.CoreV1().Events(\"\").List(context.TODO(), metav1.ListOptions{ LabelSelector: \"testevent-constant=true\", }) framework.ExpectNoError(err, \"failed list all events\") foundCreatedEvent := false var eventCreatedName string for _, val := range eventsList.Items { if val.ObjectMeta.Name == eventTestName && val.ObjectMeta.Namespace == f.Namespace.Name { foundCreatedEvent = true eventCreatedName = val.ObjectMeta.Name break } } if !foundCreatedEvent { framework.Failf(\"unable to find test event %s in namespace %s, full list of events is %+v\", eventTestName, f.Namespace.Name, eventsList.Items) } ginkgo.By(\"patching the test event\") // patch the event's message eventPatchMessage := \"This is a test event - patched\" eventPatch, err := json.Marshal(map[string]interface{}{ \"message\": eventPatchMessage, }) framework.ExpectNoError(err, 
\"failed to marshal the patch JSON payload\") _, err = f.ClientSet.CoreV1().Events(f.Namespace.Name).Patch(context.TODO(), eventTestName, types.StrategicMergePatchType, []byte(eventPatch), metav1.PatchOptions{}) framework.ExpectNoError(err, \"failed to patch the test event\") ginkgo.By(\"fetching the test event\") // get event by name event, err := f.ClientSet.CoreV1().Events(f.Namespace.Name).Get(context.TODO(), eventCreatedName, metav1.GetOptions{}) framework.ExpectNoError(err, \"failed to fetch the test event\") framework.ExpectEqual(event.Message, eventPatchMessage, \"test event message does not match patch message\") ginkgo.By(\"deleting the test event\") // delete original event err = f.ClientSet.CoreV1().Events(f.Namespace.Name).Delete(context.TODO(), eventCreatedName, metav1.DeleteOptions{}) framework.ExpectNoError(err, \"failed to delete the test event\") ginkgo.By(\"listing all events in all namespaces\") // get a list of Events list namespace eventsList, err = f.ClientSet.CoreV1().Events(\"\").List(context.TODO(), metav1.ListOptions{ LabelSelector: \"testevent-constant=true\", }) framework.ExpectNoError(err, \"fail to list all events\") foundCreatedEvent = false for _, val := range eventsList.Items { if val.ObjectMeta.Name == eventTestName && val.ObjectMeta.Namespace == f.Namespace.Name { foundCreatedEvent = true break } } if foundCreatedEvent { framework.Failf(\"Should not have found test event %s in namespace %s, full list of events %+v\", eventTestName, f.Namespace.Name, eventsList.Items) } }) framework.ConformanceIt(\"should manage the lifecycle of an event\", func() { // As per SIG-Arch meeting 14 July 2022 this e2e test now supersede // e2e test \"Event resource lifecycle\", which has been removed. 
ginkgo.It(\"should manage the lifecycle of an event\", func() { eventTestName := \"event-test\" ginkgo.By(\"creating a test event\")"} {"_id":"doc-en-kubernetes-327ad2b920a90a0254f49336e4adabd566dd85b7630cf19b11b562c0ba7e122f","title":"","text":"v1 \"k8s.io/api/core/v1\" discoveryv1 \"k8s.io/api/discovery/v1\" apierrors \"k8s.io/apimachinery/pkg/api/errors\" \"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/labels\""} {"_id":"doc-en-kubernetes-eceab5f049a8fe61ffadffa03b4abc32c620a97a3a254234bbd7e03117bda149","title":"","text":"framework.ExpectNoError(verifyServeHostnameServiceUp(cs, ns, podNames3, svc3IP, servicePort)) }) ginkgo.It(\"should work after the service has been recreated\", func() { serviceName := \"service-deletion\" ns := f.Namespace.Name numPods, servicePort := 1, defaultServeHostnameServicePort ginkgo.By(\"creating the service \" + serviceName + \" in namespace \" + ns) defer func() { framework.ExpectNoError(StopServeHostnameService(f.ClientSet, ns, serviceName)) }() podNames, svcIP, _ := StartServeHostnameService(cs, getServeHostnameService(serviceName), ns, numPods) framework.ExpectNoError(verifyServeHostnameServiceUp(cs, ns, podNames, svcIP, servicePort)) ginkgo.By(\"deleting the service \" + serviceName + \" in namespace \" + ns) err := cs.CoreV1().Services(ns).Delete(context.TODO(), serviceName, metav1.DeleteOptions{}) framework.ExpectNoError(err) ginkgo.By(\"Waiting for the service \" + serviceName + \" in namespace \" + ns + \" to disappear\") if pollErr := wait.PollImmediate(framework.Poll, e2eservice.RespondingTimeout, func() (bool, error) { _, err := cs.CoreV1().Services(ns).Get(context.TODO(), serviceName, metav1.GetOptions{}) if err != nil { if apierrors.IsNotFound(err) { framework.Logf(\"Service %s/%s is gone.\", ns, serviceName) return true, nil } return false, err } framework.Logf(\"Service %s/%s still exists\", ns, serviceName) return false, nil }); pollErr != nil { 
framework.Failf(\"Failed to wait for service to disappear: %v\", pollErr) } ginkgo.By(\"recreating the service \" + serviceName + \" in namespace \" + ns) svc, err := cs.CoreV1().Services(ns).Create(context.TODO(), getServeHostnameService(serviceName), metav1.CreateOptions{}) framework.ExpectNoError(err) framework.ExpectNoError(verifyServeHostnameServiceUp(cs, ns, podNames, svc.Spec.ClusterIP, servicePort)) }) ginkgo.It(\"should work after restarting kube-proxy [Disruptive]\", func() { kubeProxyLabelSet := map[string]string{clusterAddonLabelKey: kubeProxyLabelName} e2eskipper.SkipUnlessComponentRunsAsPodsAndClientCanDeleteThem(kubeProxyLabelName, cs, metav1.NamespaceSystem, kubeProxyLabelSet)"} {"_id":"doc-en-kubernetes-472520db07d4e8d6c2c17d3da809752ebbdaabe758126d9e7bab3edb0423bf87","title":"","text":"// Create a pod in one node to get evicted ginkgo.By(\"creating a client pod that is going to be evicted for the service \" + serviceName) evictedPod := e2epod.NewAgnhostPod(namespace, \"evicted-pod\", nil, nil, nil) evictedPod.Spec.Containers[0].Command = []string{\"/bin/sh\", \"-c\", \"sleep 10; dd if=/dev/zero of=file bs=1M count=10; sleep 10000\"} evictedPod.Spec.Containers[0].Name = \"evicted-pod\" evictedPod.Spec.Containers[0].Resources = v1.ResourceRequirements{ Limits: v1.ResourceList{\"ephemeral-storage\": resource.MustParse(\"5Mi\")},"} {"_id":"doc-en-kubernetes-7c11ab54384d2ad4b9247f575ddad48fda2ed9ee0191d1c4b3699600cf6b1116","title":"","text":"Name: \"bar\", Image: image, Command: []string{ \"/bin/sh\", \"-c\", \"sleep 10; dd if=/dev/zero of=file bs=1M count=10; sleep 10000\", }, Resources: v1.ResourceRequirements{ Limits: v1.ResourceList{"} {"_id":"doc-en-kubernetes-6a6d087d754dfb8bc5734fde8ddcf751592578d9e2c99a66b14520eddfc74869","title":"","text":"import ( 
\"context\" \"fmt\" \"log\" \"net\" \"net/url\" \"os\" \"path\" \"strings\" \"sync\" \"time\" grpcprom \"github.com/grpc-ecosystem/go-grpc-prometheus\" \"go.etcd.io/etcd/client/pkg/v3/logutil\" \"go.etcd.io/etcd/client/pkg/v3/transport\" clientv3 \"go.etcd.io/etcd/client/v3\" \"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc\" \"go.uber.org/zap\" \"go.uber.org/zap/zapcore\" \"google.golang.org/grpc\" \"k8s.io/apimachinery/pkg/runtime\""} {"_id":"doc-en-kubernetes-42c6c2327fa61c372454cccb7c58f283839a8bbd86bcb8bd1dda8b6c290fb496","title":"","text":"dbMetricsMonitorJitter = 0.5 ) // TODO(negz): Stop using a package scoped logger. At the time of writing we're // creating an etcd client for each CRD. We need to pass each etcd client a // logger or each client will create its own, which comes with a significant // memory cost (around 20% of the API server's memory when hundreds of CRDs are // present). The correct fix here is to not create a client per CRD. See // https://github.com/kubernetes/kubernetes/issues/111476 for more. var etcd3ClientLogger *zap.Logger func init() { // grpcprom auto-registers (via an init function) their client metrics, since we are opting out of // using the global prometheus registry and using our own wrapped global registry,"} {"_id":"doc-en-kubernetes-a7469fc201ba6ee067d595028f8a58acbe3fd0da431267548118179a6e611b3e","title":"","text":"// For reference: https://github.com/kubernetes/kubernetes/pull/81387 legacyregistry.RawMustRegister(grpcprom.DefaultClientMetrics) dbMetricsMonitors = make(map[string]struct{}) l, err := logutil.CreateDefaultZapLogger(etcdClientDebugLevel()) if err != nil { l = zap.NewNop() } etcd3ClientLogger = l.Named(\"etcd-client\") } // etcdClientDebugLevel translates ETCD_CLIENT_DEBUG into zap log level. 
// NOTE(negz): This is a copy of a private etcd client function: // https://github.com/etcd-io/etcd/blob/v3.5.4/client/v3/logger.go#L47 func etcdClientDebugLevel() zapcore.Level { envLevel := os.Getenv(\"ETCD_CLIENT_DEBUG\") if envLevel == \"\" || envLevel == \"true\" { return zapcore.InfoLevel } var l zapcore.Level if err := l.Set(envLevel); err != nil { log.Printf(\"Deprecated env ETCD_CLIENT_DEBUG value. Using default level: 'info'\") return zapcore.InfoLevel } return l } func newETCD3HealthCheck(c storagebackend.Config, stopCh <-chan struct{}) (func() error, error) {"} {"_id":"doc-en-kubernetes-a192d21328c23f188c5c7324f40f90677e68cc8b29905e42cd680edaa3d742f4","title":"","text":"} dialOptions = append(dialOptions, grpc.WithContextDialer(dialer)) } cfg := clientv3.Config{ DialTimeout: dialTimeout, DialKeepAliveTime: keepaliveTime,"} {"_id":"doc-en-kubernetes-61101037dbd2ff92e2dc4719a13668affad732e2d9958c9948919a16bd8b71e0","title":"","text":"DialOptions: dialOptions, Endpoints: c.ServerList, TLS: tlsConfig, Logger: etcd3ClientLogger, } return clientv3.New(cfg)"} {"_id":"doc-en-kubernetes-4e93843c5b693c9bf5e8501ea04546ee591504eeaaa44031099da5f70989ca98","title":"","text":"package job import ( \"reflect\" \"testing\" \"github.com/google/go-cmp/cmp\" \"github.com/google/go-cmp/cmp/cmpopts\" batchv1 \"k8s.io/api/batch/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/labels\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/apimachinery/pkg/util/validation/field\" genericapirequest \"k8s.io/apiserver/pkg/endpoints/request\""} {"_id":"doc-en-kubernetes-7a4686d1c1b8ebd3569f00f9433c63734410560f778ee5aff09324315075fa78","title":"","text":"}, }, }, \"attempt status update and verify it doesn't change\": { job: batch.Job{ ObjectMeta: getValidObjectMeta(0), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, Status: batch.JobStatus{ Active: 1, }, }, updatedJob: batch.Job{ ObjectMeta: getValidObjectMeta(0), Spec: 
batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, Status: batch.JobStatus{ Active: 2, }, }, wantJob: batch.Job{ ObjectMeta: getValidObjectMeta(0), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, Status: batch.JobStatus{ Active: 1, }, }, }, \"ensure generation doesn't change over non spec updates\": { job: batch.Job{ ObjectMeta: getValidObjectMeta(0), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, Status: batch.JobStatus{ Active: 1, }, }, updatedJob: batch.Job{ ObjectMeta: getValidObjectMetaWithAnnotations(0, map[string]string{\"hello\": \"world\"}), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, Status: batch.JobStatus{ Active: 2, }, }, wantJob: batch.Job{ ObjectMeta: getValidObjectMetaWithAnnotations(0, map[string]string{\"hello\": \"world\"}), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, Status: batch.JobStatus{ Active: 1, }, }, }, \"test updating suspend false->true\": { job: batch.Job{ ObjectMeta: getValidObjectMeta(0), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Suspend: pointer.Bool(false), }, }, updatedJob: batch.Job{ ObjectMeta: getValidObjectMetaWithAnnotations(0, map[string]string{\"hello\": \"world\"}), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Suspend: pointer.Bool(true), }, }, wantJob: batch.Job{ ObjectMeta: getValidObjectMetaWithAnnotations(1, map[string]string{\"hello\": \"world\"}), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Suspend: pointer.Bool(true), }, }, }, \"test updating suspend nil -> true\": { job: batch.Job{ ObjectMeta: getValidObjectMeta(0), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, }, updatedJob: batch.Job{ ObjectMeta: getValidObjectMetaWithAnnotations(0, map[string]string{\"hello\": \"world\"}), Spec: batch.JobSpec{ Selector: 
validSelector, Template: validPodTemplateSpec, Suspend: pointer.Bool(true), }, }, wantJob: batch.Job{ ObjectMeta: getValidObjectMetaWithAnnotations(1, map[string]string{\"hello\": \"world\"}), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Suspend: pointer.Bool(true), }, }, }, } for name, tc := range cases {"} {"_id":"doc-en-kubernetes-e10afb2399d4050cf176f0d83ad21ed2bc517f329724d0687be9db79b1ff10ad","title":"","text":"Strategy.PrepareForUpdate(ctx, &tc.updatedJob, &tc.job) if diff := cmp.Diff(tc.wantJob, tc.updatedJob); diff != \"\" { t.Errorf(\"Job update differences (-want,+got):\n%s\", diff) } }) } } // TestJobStrategy_PrepareForCreate tests various scenarios for PrepareForCreate func TestJobStrategy_PrepareForCreate(t *testing.T) { validSelector := getValidLabelSelector() validPodTemplateSpec := getValidPodTemplateSpecForSelector(validSelector)"} {"_id":"doc-en-kubernetes-2e1a11132dc05f919cd26de47b8b4cbd5cb4798bb2d3969469e50c12da957324","title":"","text":"}, }, }, \"job does not allow setting status on create\": { job: batch.Job{ ObjectMeta: getValidObjectMeta(0), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, Status: batch.JobStatus{ Active: 1, }, }, wantJob: batch.Job{ ObjectMeta: getValidObjectMetaWithAnnotations(1, map[string]string{batchv1.JobTrackingFinalizer: \"\"}), Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, }, }, }, } for name, tc := range cases {"} {"_id":"doc-en-kubernetes-dcb1e559d885447b6841f3d8f93fbdcee82f6bfd49e69b1fdd09179e0b2eb2a1","title":"","text":"} } // TODO(#111514): refactor by splitting into dedicated test functions func TestJobStrategy(t *testing.T) { ctx := genericapirequest.NewDefaultContext() if !Strategy.NamespaceScoped() { t.Errorf(\"Job must be namespace scoped\") } if 
Strategy.AllowCreateOnUpdate() { t.Errorf(\"Job should not allow create on update\") } validSelector := &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"}, } validPodTemplateSpec := api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, }, } job := &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, Annotations: map[string]string{ \"foo\": \"bar\", }, ResourceVersion: \"0\", }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Completions: pointer.Int32Ptr(2), // Set gated values. Suspend: pointer.BoolPtr(true), TTLSecondsAfterFinished: pointer.Int32Ptr(0), CompletionMode: completionModePtr(batch.IndexedCompletion), }, Status: batch.JobStatus{ Active: 11, }, } Strategy.PrepareForCreate(ctx, job) if job.Status.Active != 0 { t.Errorf(\"Job does not allow setting status on create\") } if job.Generation != 1 { t.Errorf(\"expected Generation=1, got %d\", job.Generation) } errs := Strategy.Validate(ctx, job) if len(errs) != 0 { t.Errorf(\"Unexpected error validating %v\", errs) } if job.Spec.CompletionMode == nil { t.Errorf(\"Job should allow setting .spec.completionMode\") } wantAnnotations := map[string]string{ \"foo\": \"bar\", batchv1.JobTrackingFinalizer: \"\", } if diff := cmp.Diff(wantAnnotations, job.Annotations); diff != \"\" { t.Errorf(\"Job has annotations (-want,+got):n%s\", diff) } parallelism := int32(10) // ensure we do not change generation for non-spec updates updatedLabelJob := job.DeepCopy() updatedLabelJob.Labels = map[string]string{\"a\": \"true\"} Strategy.PrepareForUpdate(ctx, updatedLabelJob, job) if updatedLabelJob.Generation != 1 { 
t.Errorf(\"expected Generation=1, got %d\", updatedLabelJob.Generation) } errs = Strategy.ValidateUpdate(ctx, updatedLabelJob, job) if len(errs) != 0 { t.Errorf(\"Unexpected update validation error\") } updatedJob := &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"bar\", ResourceVersion: \"4\", // remove one annotation and try to enforce the job tracking finalizer. Annotations: map[string]string{batchv1.JobTrackingFinalizer: \"\"}, }, Spec: batch.JobSpec{ Parallelism: ¶llelism, Completions: pointer.Int32Ptr(2), // Update gated features. TTLSecondsAfterFinished: pointer.Int32Ptr(1), CompletionMode: completionModePtr(batch.IndexedCompletion), // No change because field is immutable. }, Status: batch.JobStatus{ Active: 11, }, } // Ensure we do not change status job.Status.Active = 10 Strategy.PrepareForUpdate(ctx, updatedJob, job) if updatedJob.Status.Active != 10 { t.Errorf(\"PrepareForUpdate should have preserved prior version status\") } if updatedJob.Generation != 2 { t.Errorf(\"expected Generation=2, got %d\", updatedJob.Generation) } wantAnnotations = map[string]string{ batchv1.JobTrackingFinalizer: \"\", } if diff := cmp.Diff(wantAnnotations, updatedJob.Annotations); diff != \"\" { t.Errorf(\"Job has annotations (-want,+got):n%s\", diff) } errs = Strategy.ValidateUpdate(ctx, updatedJob, job) if len(errs) == 0 { t.Errorf(\"Expected a validation error\") } // Test updating suspend false->true and nil-> true when the feature gate is // disabled. We don't care about other combinations. job.Spec.Suspend, updatedJob.Spec.Suspend = pointer.BoolPtr(false), pointer.BoolPtr(true) Strategy.PrepareForUpdate(ctx, updatedJob, job) job.Spec.Suspend, updatedJob.Spec.Suspend = nil, pointer.BoolPtr(true) Strategy.PrepareForUpdate(ctx, updatedJob, job) func TestJobStrategy_GarbageCollectionPolicy(t *testing.T) { // Make sure we correctly implement the interface. // Otherwise a typo could silently change the default. 
var gcds rest.GarbageCollectionDeleteStrategy = Strategy"} {"_id":"doc-en-kubernetes-4342ed7ed62e17c5beebaf4db525335bab53b2ca16e7840a487b89b45c1cec91","title":"","text":"} } func TestValidateToleratingBadLabels(t *testing.T) { invalidSelector := getValidLabelSelector() invalidSelector.MatchExpressions = []metav1.LabelSelectorRequirement{{Key: \"key\", Operator: metav1.LabelSelectorOpNotIn, Values: []string{\"bad value\"}}} validPodTemplateSpec := getValidPodTemplateSpecForSelector(getValidLabelSelector()) job := &batch.Job{ ObjectMeta: getValidObjectMeta(0), Spec: batch.JobSpec{ Selector: invalidSelector, ManualSelector: pointer.BoolPtr(true), Template: validPodTemplateSpec, }, } job.ResourceVersion = \"1\" oldObj := job.DeepCopy() newObj := job.DeepCopy() context := genericapirequest.NewContext() errorList := Strategy.ValidateUpdate(context, newObj, oldObj) if len(errorList) > 0 { t.Errorf(\"Unexpected error list with no-op update of bad object: %v\", errorList) } } func TestJobStrategy_ValidateUpdate(t *testing.T) { ctx := genericapirequest.NewDefaultContext() validSelector := &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"},"} {"_id":"doc-en-kubernetes-23226c334bf4f2a43dc905ec39988a986a0b62ef68179434de037bbe29347a75","title":"","text":"}, }, update: func(job *batch.Job) { // change something. 
job.Annotations[\"foo\"] = \"bar\" }, }, \"deleting user annotation\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Annotations: map[string]string{ batch.JobTrackingFinalizer: \"\", \"foo\": \"bar\", }, }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, update: func(job *batch.Job) { delete(job.Annotations, \"foo\") }, }, \"updating node selector for unsuspended job disallowed\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{"} {"_id":"doc-en-kubernetes-0fd7a2713ab31d66ead784c4b723ca4b392533b627ab9f12b392a7a6fda972d7","title":"","text":"}, mutableSchedulingDirectivesEnabled: false, }, \"invalid label selector\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Annotations: map[string]string{\"foo\": \"bar\"}, }, Spec: batch.JobSpec{ Selector: &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"}, MatchExpressions: []metav1.LabelSelectorRequirement{{Key: \"key\", Operator: metav1.LabelSelectorOpNotIn, Values: []string{\"bad value\"}}}, }, ManualSelector: pointer.BoolPtr(true), Template: validPodTemplateSpec, }, }, update: func(job *batch.Job) { job.Annotations[\"hello\"] = \"world\" }, }, } for name, tc := range cases { t.Run(name, func(t *testing.T) {"} {"_id":"doc-en-kubernetes-b6572f6bd741977739dca2f5d828f4b022526c65492a231affb3fef11ebeb68c","title":"","text":"} } func TestJobStrategyWithGeneration(t *testing.T) { ctx := genericapirequest.NewDefaultContext() theUID := types.UID(\"1a2b3c4d5e6f7g8h9i0k\") validPodTemplateSpec := api.PodTemplateSpec{ Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: 
api.TerminationMessageReadFile}}, }, } job := &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob2\", Namespace: metav1.NamespaceDefault, UID: theUID, }, Spec: batch.JobSpec{ Selector: nil, Template: validPodTemplateSpec, }, } Strategy.PrepareForCreate(ctx, job) errs := Strategy.Validate(ctx, job) if len(errs) != 0 { t.Errorf(\"Unexpected error validating %v\", errs) } // Validate the stuff that validation should have validated. if job.Spec.Selector == nil { t.Errorf(\"Selector not generated\") } expectedLabels := make(map[string]string) expectedLabels[\"controller-uid\"] = string(theUID) if !reflect.DeepEqual(job.Spec.Selector.MatchLabels, expectedLabels) { t.Errorf(\"Expected label selector not generated\") } if job.Spec.Template.ObjectMeta.Labels == nil { t.Errorf(\"Expected template labels not generated\") } if v, ok := job.Spec.Template.ObjectMeta.Labels[\"job-name\"]; !ok || v != \"myjob2\" { t.Errorf(\"Expected template labels not present\") } if v, ok := job.Spec.Template.ObjectMeta.Labels[\"controller-uid\"]; !ok || v != string(theUID) { t.Errorf(\"Expected template labels not present: ok: %v, v: %v\", ok, v) } } func TestJobStrategy_WarningsOnUpdate(t *testing.T) { ctx := genericapirequest.NewDefaultContext() if !StatusStrategy.NamespaceScoped() { t.Errorf(\"Job must be namespace scoped\") } if StatusStrategy.AllowCreateOnUpdate() { t.Errorf(\"Job should not allow create on update\") } validSelector := &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"}, }"} {"_id":"doc-en-kubernetes-d05d4c2265fbfeee93379b33e236ba8a92aa63fd7495574735c734bf2862a56e","title":"","text":"Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, }, } oldParallelism := int32(10) newParallelism := int32(11) oldJob := &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, 
ResourceVersion: \"10\", }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: &oldParallelism, }, Status: batch.JobStatus{ Active: 11, }, } newJob := &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"9\", }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: &newParallelism, }, Status: batch.JobStatus{ Active: 12, }, } cases := map[string]struct { oldJob *batch.Job job *batch.Job wantWarningsCount int32 }{ \"generation 0 for both\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Generation: 0, }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, StatusStrategy.PrepareForUpdate(ctx, newJob, oldJob) if newJob.Status.Active != 12 { t.Errorf(\"Job status updates must allow changes to job status\") } if *newJob.Spec.Parallelism != 10 { t.Errorf(\"Job status updates must now allow changes to job spec\") oldJob: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Generation: 0, }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, }, \"generation 1 for new; force WarningsOnUpdate to check PodTemplate for updates\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Generation: 1, }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, oldJob: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Generation: 0, }, Spec: batch.JobSpec{ 
Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, }, \"force validation failure in pod template\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Generation: 1, }, Spec: batch.JobSpec{ Selector: validSelector, Template: api.PodTemplateSpec{ Spec: api.PodSpec{Volumes: []api.Volume{{Name: \"volume-name\"}, {Name: \"volume-name\"}}}, }, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, oldJob: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Generation: 0, }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, wantWarningsCount: 1, }, } for val, tc := range cases { t.Run(val, func(t *testing.T) { gotWarnings := Strategy.WarningsOnUpdate(ctx, tc.job, tc.oldJob) if len(gotWarnings) != int(tc.wantWarningsCount) { t.Errorf(\"got warning length of %d but expected %d\", len(gotWarnings), tc.wantWarningsCount) } }) } } func TestJobStrategy_WarningsOnCreate(t *testing.T) { ctx := genericapirequest.NewDefaultContext() theUID := types.UID(\"1a2b3c4d5e6f7g8h9i0k\") validSelector := &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"}, } validSpec := batch.JobSpec{ Selector: nil, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, }, }, } testcases := map[string]struct { job *batch.Job wantWarningsCount int32 }{ \"happy path job\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob2\", 
Namespace: metav1.NamespaceDefault, UID: theUID, }, Spec: validSpec, }, }, \"dns invalid name\": { wantWarningsCount: 1, job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"my job2\", Namespace: metav1.NamespaceDefault, UID: theUID, }, Spec: validSpec, }, }, } for name, tc := range testcases { t.Run(name, func(t *testing.T) { gotWarnings := Strategy.WarningsOnCreate(ctx, tc.job) if len(gotWarnings) != int(tc.wantWarningsCount) { t.Errorf(\"got warning length of %d but expected %d\", len(gotWarnings), tc.wantWarningsCount) } }) } } func TestJobStrategy_Validate(t *testing.T) { ctx := genericapirequest.NewDefaultContext() theUID := types.UID(\"1a2b3c4d5e6f7g8h9i0k\") validSelector := &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"}, } validPodSpec := api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, } validObjectMeta := metav1.ObjectMeta{ Name: \"myjob2\", Namespace: metav1.NamespaceDefault, UID: theUID, } testcases := map[string]struct { job *batch.Job wantJob *batch.Job wantWarningCount int32 }{ \"valid job with labels in pod template\": { job: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: nil, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: validPodSpec, }}, }, wantJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: &metav1.LabelSelector{MatchLabels: map[string]string{\"controller-uid\": string(theUID)}}, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: validPodSpec, }}, }, }, \"no labels in job\": { job: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: nil, Template: api.PodTemplateSpec{ Spec: validPodSpec, }}, }, wantJob: &batch.Job{ ObjectMeta: validObjectMeta, 
Spec: batch.JobSpec{ Selector: &metav1.LabelSelector{MatchLabels: map[string]string{\"controller-uid\": string(theUID)}}, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: map[string]string{\"job-name\": \"myjob2\", \"controller-uid\": string(theUID)}, }, Spec: validPodSpec, }}, }, }, \"labels exist\": { job: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: nil, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: map[string]string{\"a\": \"b\", \"job-name\": \"myjob2\", \"controller-uid\": string(theUID)}, }, Spec: validPodSpec, }}, }, wantJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: &metav1.LabelSelector{MatchLabels: map[string]string{\"controller-uid\": string(theUID)}}, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: map[string]string{\"a\": \"b\", \"job-name\": \"myjob2\", \"controller-uid\": string(theUID)}, }, Spec: validPodSpec, }}, }, }, \"manual selector; do not generate labels\": { job: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: validSelector, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: validPodSpec, }, Completions: pointer.Int32Ptr(2), ManualSelector: pointer.BoolPtr(true), }, }, wantJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: validSelector, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: validPodSpec, }, Completions: pointer.Int32Ptr(2), ManualSelector: pointer.BoolPtr(true), }, }, }, \"valid job with extended configuration\": { job: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: nil, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: validPodSpec, }, Completions: pointer.Int32Ptr(2), Suspend: pointer.BoolPtr(true), TTLSecondsAfterFinished: pointer.Int32Ptr(0), CompletionMode: 
completionModePtr(batch.IndexedCompletion), }, }, wantJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: &metav1.LabelSelector{MatchLabels: map[string]string{\"controller-uid\": string(theUID)}}, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: map[string]string{\"a\": \"b\", \"job-name\": \"myjob2\", \"controller-uid\": string(theUID)}, }, Spec: validPodSpec, }, Completions: pointer.Int32Ptr(2), Suspend: pointer.BoolPtr(true), TTLSecondsAfterFinished: pointer.Int32Ptr(0), CompletionMode: completionModePtr(batch.IndexedCompletion), }, }, }, \"fail validation due to invalid volume spec\": { job: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: nil, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, Volumes: []api.Volume{{Name: \"volume-name\"}}, }, }, }, }, wantJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: &metav1.LabelSelector{MatchLabels: map[string]string{\"controller-uid\": string(theUID)}}, Template: api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: map[string]string{\"a\": \"b\", \"job-name\": \"myjob2\", \"controller-uid\": string(theUID)}, }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, Volumes: []api.Volume{{Name: \"volume-name\"}}, }, }, }, }, wantWarningCount: 1, }, } for name, tc := range testcases { t.Run(name, func(t *testing.T) { errs := Strategy.Validate(ctx, tc.job) if len(errs) != int(tc.wantWarningCount) { t.Errorf(\"want warnings %d but got 
%d\", tc.wantWarningCount, len(errs)) } if diff := cmp.Diff(tc.wantJob, tc.job); diff != \"\" { t.Errorf(\"Unexpected job (-want,+got):\\n%s\", diff) } }) } } func TestStrategy_ResetFields(t *testing.T) { resetFields := Strategy.GetResetFields() if len(resetFields) != 1 { t.Error(\"ResetFields should have 1 element\") } } func TestJobStatusStrategy_ResetFields(t *testing.T) { resetFields := StatusStrategy.GetResetFields() if len(resetFields) != 1 { t.Error(\"ResetFields should have 1 element\") } } func TestStatusStrategy_PrepareForUpdate(t *testing.T) { ctx := genericapirequest.NewDefaultContext() validSelector := &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"}, } validPodTemplateSpec := api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, }, } validObjectMeta := metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"10\", } cases := map[string]struct { job *batch.Job newJob *batch.Job wantJob *batch.Job }{ \"job must allow status updates\": { job: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(4), }, Status: batch.JobStatus{ Active: 11, }, }, newJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(4), }, Status: batch.JobStatus{ Active: 12, }, }, wantJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(4), }, Status: batch.JobStatus{ Active: 12, }, }, }, \"parallelism changes not allowed\": { job: &batch.Job{ ObjectMeta: validObjectMeta, 
Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(3), }, }, newJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(4), }, }, wantJob: &batch.Job{ ObjectMeta: validObjectMeta, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(3), }, }, }, } for name, tc := range cases { t.Run(name, func(t *testing.T) { StatusStrategy.PrepareForUpdate(ctx, tc.newJob, tc.job) if diff := cmp.Diff(tc.wantJob, tc.newJob); diff != \"\" { t.Errorf(\"Unexpected job (-want,+got):\\n%s\", diff) } }) } } func TestStatusStrategy_ValidateUpdate(t *testing.T) { ctx := genericapirequest.NewDefaultContext() validSelector := &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"}, } validPodTemplateSpec := api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, }, } cases := map[string]struct { job *batch.Job newJob *batch.Job wantJob *batch.Job }{ \"incoming resource version on update should not be mutated\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"10\", }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(4), }, }, newJob: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"9\", }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(4), }, }, wantJob: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, 
ResourceVersion: \"9\", }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, Parallelism: pointer.Int32(4), }, }, }, } for name, tc := range cases { t.Run(name, func(t *testing.T) { errs := StatusStrategy.ValidateUpdate(ctx, tc.newJob, tc.job) if len(errs) != 0 { t.Errorf(\"Unexpected error %v\", errs) } if diff := cmp.Diff(tc.wantJob, tc.newJob); diff != \"\" { t.Errorf(\"Unexpected job (-want,+got):\\n%s\", diff) } }) } } func TestJobStrategy_GetAttrs(t *testing.T) { validSelector := &metav1.LabelSelector{ MatchLabels: map[string]string{\"a\": \"b\"}, } validPodTemplateSpec := api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: validSelector.MatchLabels, }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyOnFailure, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{Name: \"abc\", Image: \"image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePolicy: api.TerminationMessageReadFile}}, }, } errs := StatusStrategy.ValidateUpdate(ctx, newJob, oldJob) if len(errs) != 0 { t.Errorf(\"Unexpected error %v\", errs) cases := map[string]struct { job *batch.Job wantErr string nonJobObject *api.Pod }{ \"valid job with no labels\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, }, \"valid job with a label\": { job: &batch.Job{ ObjectMeta: metav1.ObjectMeta{ Name: \"myjob\", Namespace: metav1.NamespaceDefault, ResourceVersion: \"0\", Labels: map[string]string{\"a\": \"b\"}, }, Spec: batch.JobSpec{ Selector: validSelector, Template: validPodTemplateSpec, ManualSelector: pointer.BoolPtr(true), Parallelism: pointer.Int32Ptr(1), }, }, }, \"pod instead\": { job: nil, nonJobObject: &api.Pod{}, wantErr: \"given object is not a job.\", }, } if newJob.ResourceVersion != \"9\" { t.Errorf(\"Incoming 
resource version on update should not be mutated\") for name, tc := range cases { t.Run(name, func(t *testing.T) { if tc.job == nil { _, _, err := GetAttrs(tc.nonJobObject) if diff := cmp.Diff(tc.wantErr, err.Error()); diff != \"\" { t.Errorf(\"Unexpected errors (-want,+got):\\n%s\", diff) } } else { gotLabels, _, err := GetAttrs(tc.job) if err != nil { t.Errorf(\"Error %s supposed to be nil\", err.Error()) } if diff := cmp.Diff(labels.Set(tc.job.ObjectMeta.Labels), gotLabels); diff != \"\" { t.Errorf(\"Unexpected attrs (-want,+got):\\n%s\", diff) } } }) } } func TestSelectableFieldLabelConversions(t *testing.T) { func TestJobToSelectiableFields(t *testing.T) { apitesting.TestSelectableFieldLabelConversionsOfKind(t, \"batch/v1\", \"Job\","} {"_id":"doc-en-kubernetes-41dab007d51335667f36b6b1b94d65985f715ba2368b989409e3ee36af97635e","title":"","text":"\"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp\" semconv \"go.opentelemetry.io/otel/semconv/v1.17.0\" \"go.opentelemetry.io/otel/trace\" \"k8s.io/apiserver/pkg/endpoints/request\" tracing \"k8s.io/component-base/tracing\" )"} {"_id":"doc-en-kubernetes-dd4b460e33e5009219b29e1c1e52e4ee07ef1dacfe5cab18ce386f944c4b9998","title":"","text":"otelhttp.WithPropagators(tracing.Propagators()), otelhttp.WithPublicEndpoint(), otelhttp.WithTracerProvider(tp), otelhttp.WithSpanNameFormatter(func(operation string, r *http.Request) string { ctx := r.Context() info, exist := request.RequestInfoFrom(ctx) if !exist || !info.IsResourceRequest { return r.Method } return getSpanNameFromRequestInfo(info, r) }), } wrappedHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { // Add the http.target attribute to the otelhttp span"} {"_id":"doc-en-kubernetes-ee6459558ef95e56b27e225d18c66356ce63ec33e6612158468a717ec3768bc3","title":"","text":"// See https://github.com/open-telemetry/opentelemetry-go/tree/main/example/passthrough return otelhttp.NewHandler(wrappedHandler, \"KubernetesAPI\", opts...) 
} func getSpanNameFromRequestInfo(info *request.RequestInfo, r *http.Request) string { spanName := \"/\" + info.APIPrefix if info.APIGroup != \"\" { spanName += \"/\" + info.APIGroup } spanName += \"/\" + info.APIVersion if info.Namespace != \"\" { spanName += \"/namespaces/{:namespace}\" } spanName += \"/\" + info.Resource if info.Name != \"\" { spanName += \"/\" + \"{:name}\" } if info.Subresource != \"\" { spanName += \"/\" + info.Subresource } return r.Method + \" \" + spanName } "} {"_id":"doc-en-kubernetes-f470e1849b0e5132bd7228937e42f06b3e493018b6ed067cea47fdf36fbc09b6","title":"","text":"}, expectedTrace: []*spanExpectation{ { name: \"KubernetesAPI\", name: \"POST /api/v1/nodes\", attributes: map[string]func(*commonv1.AnyValue) bool{ \"http.user_agent\": func(v *commonv1.AnyValue) bool { return strings.HasPrefix(v.GetStringValue(), \"tracing.test\")"} {"_id":"doc-en-kubernetes-69f5187a2bc41ac133a1ccdadbf7b4a570de042730b069934f329c883372e1be","title":"","text":"}, expectedTrace: []*spanExpectation{ { name: \"KubernetesAPI\", name: \"GET /api/v1/nodes/{:name}\", attributes: map[string]func(*commonv1.AnyValue) bool{ \"http.user_agent\": func(v *commonv1.AnyValue) bool { return strings.HasPrefix(v.GetStringValue(), \"tracing.test\")"} {"_id":"doc-en-kubernetes-7593cdd3c4ece785e077f3f8673bc21e66cf5babedd28f9785339dd9767de26b","title":"","text":"}, expectedTrace: []*spanExpectation{ { name: \"KubernetesAPI\", name: \"GET /api/v1/nodes\", attributes: map[string]func(*commonv1.AnyValue) bool{ \"http.user_agent\": func(v *commonv1.AnyValue) bool { return strings.HasPrefix(v.GetStringValue(), \"tracing.test\")"} {"_id":"doc-en-kubernetes-564bc1a18137cba321e421ec2f0f2d19443847d4d667a17b781d24509e2029da","title":"","text":"}, expectedTrace: []*spanExpectation{ { name: \"KubernetesAPI\", name: \"PUT /api/v1/nodes/{:name}\", attributes: map[string]func(*commonv1.AnyValue) bool{ \"http.user_agent\": func(v *commonv1.AnyValue) bool { return 
strings.HasPrefix(v.GetStringValue(), \"tracing.test\")"} {"_id":"doc-en-kubernetes-fe0ef984fd2ed13a70096fa69a8a3d9f90641643171cecca31955ba0c17752f9","title":"","text":"}, expectedTrace: []*spanExpectation{ { name: \"KubernetesAPI\", name: \"PATCH /api/v1/nodes/{:name}\", attributes: map[string]func(*commonv1.AnyValue) bool{ \"http.user_agent\": func(v *commonv1.AnyValue) bool { return strings.HasPrefix(v.GetStringValue(), \"tracing.test\")"} {"_id":"doc-en-kubernetes-bd5105c0dd910e8e6600e86f8de00d3592a9672f8d0390ca5b5d2c05ec241ea2","title":"","text":"}, expectedTrace: []*spanExpectation{ { name: \"KubernetesAPI\", name: \"DELETE /api/v1/nodes/{:name}\", attributes: map[string]func(*commonv1.AnyValue) bool{ \"http.user_agent\": func(v *commonv1.AnyValue) bool { return strings.HasPrefix(v.GetStringValue(), \"tracing.test\")"} {"_id":"doc-en-kubernetes-ae112e7d66469272b6853db4c49475ee25e39731adafdf588b785ef59c546575","title":"","text":"\"path/filepath\" \"strings\" \"k8s.io/api/core/v1\" v1 \"k8s.io/api/core/v1\" utilerrors \"k8s.io/apimachinery/pkg/util/errors\" utilvalidation \"k8s.io/apimachinery/pkg/util/validation\" utilfeature \"k8s.io/apiserver/pkg/util/feature\""} {"_id":"doc-en-kubernetes-6e250f2f1be06a9ababb78bbeacdf2d19afda965ca13191136434dea2c143988","title":"","text":"searches = []string{} for _, s := range fields[1:] { if s != \".\" { s = strings.TrimSuffix(s, \".\") searches = append(searches, strings.TrimSuffix(s, \".\")) } searches = append(searches, s) } } if fields[0] == \"options\" {"} {"_id":"doc-en-kubernetes-082206c81b1f5b03de212e0f2186b7097979a323be3e06f2497b019c7cc287c9","title":"","text":"{\"nameserver \\t 1.2.3.4\", []string{\"1.2.3.4\"}, []string{}, []string{}, false}, {\"nameserver 1.2.3.4\\nnameserver 5.6.7.8\", []string{\"1.2.3.4\", \"5.6.7.8\"}, []string{}, []string{}, false}, {\"nameserver 1.2.3.4 #comment\", []string{\"1.2.3.4\"}, []string{}, []string{}, false}, {\"search \", []string{}, []string{}, []string{}, false}, // search empty 
{\"search .\", []string{}, []string{\".\"}, []string{}, false}, {\"search \", []string{}, []string{}, []string{}, false}, // search empty {\"search .\", []string{}, []string{}, []string{}, false}, // ignore lone dot {\"search . foo\", []string{}, []string{\"foo\"}, []string{}, false}, {\"search foo .\", []string{}, []string{\"foo\"}, []string{}, false}, {\"search foo . bar\", []string{}, []string{\"foo\", \"bar\"}, []string{}, false}, {\"search foo\", []string{}, []string{\"foo\"}, []string{}, false}, {\"search foo bar\", []string{}, []string{\"foo\", \"bar\"}, []string{}, false}, {\"search foo. bar\", []string{}, []string{\"foo\", \"bar\"}, []string{}, false},"} {"_id":"doc-en-kubernetes-00b2e9293e8a57e274a2be9213827dda471a10149cdedb86992e6ada9e77ccf2","title":"","text":"utilruntime.Must(flowcontrolv1beta1.AddToScheme(scheme)) utilruntime.Must(flowcontrolv1beta2.AddToScheme(scheme)) utilruntime.Must(flowcontrolv1beta3.AddToScheme(scheme)) // TODO(#112512): This controls serialization order, for 1.26, we can // set the serialization version to v1beta2 because vN-1 understands that // level. In 1.27, we should set the serialization version to v1beta3. 
utilruntime.Must(scheme.SetVersionPriority(flowcontrolv1beta2.SchemeGroupVersion, flowcontrolv1beta3.SchemeGroupVersion, utilruntime.Must(scheme.SetVersionPriority(flowcontrolv1beta3.SchemeGroupVersion, flowcontrolv1beta2.SchemeGroupVersion, flowcontrolv1beta1.SchemeGroupVersion, flowcontrolv1alpha1.SchemeGroupVersion)) }"} {"_id":"doc-en-kubernetes-3fe5b93ab821157e0616cc0256043635c5b5289c728214eb737b047f84a86548","title":"","text":"\"admissionregistration.k8s.io/v1/mutatingwebhookconfigurations\": \"Sqi0GUgDaX0=\", \"admissionregistration.k8s.io/v1/validatingwebhookconfigurations\": \"B0wHjQmsGNk=\", \"events.k8s.io/v1/events\": \"r2yiGXH7wu8=\", \"flowcontrol.apiserver.k8s.io/v1beta3/flowschemas\": \"G+8IkrqFuJw=\", \"flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations\": \"wltM4WMeeXs=\", \"flowcontrol.apiserver.k8s.io/v1beta2/flowschemas\": \"G+8IkrqFuJw=\", \"flowcontrol.apiserver.k8s.io/v1beta2/prioritylevelconfigurations\": \"wltM4WMeeXs=\", \"flowcontrol.apiserver.k8s.io/v1beta3/flowschemas\": \"9NnFrw7ZEmA=\", \"flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations\": \"+CwSrEWTDhc=\", \"flowcontrol.apiserver.k8s.io/v1beta2/flowschemas\": \"9NnFrw7ZEmA=\", \"flowcontrol.apiserver.k8s.io/v1beta2/prioritylevelconfigurations\": \"+CwSrEWTDhc=\", }"} {"_id":"doc-en-kubernetes-0e741408572a3beda86e2d49e8f986f1ff77d4b2e07f814b9c7f3128756840ce","title":"","text":"gvr(\"flowcontrol.apiserver.k8s.io\", \"v1alpha1\", \"flowschemas\"): { Stub: `{\"metadata\": {\"name\": \"va1\"}, \"spec\": {\"priorityLevelConfiguration\": {\"name\": \"name1\"}}}`, ExpectedEtcdPath: \"/registry/flowschemas/va1\", ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta2\", \"FlowSchema\"), ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta3\", \"FlowSchema\"), }, // --"} {"_id":"doc-en-kubernetes-2e9a9dc29a72657b31d3afc6452288cc51ae11af9cfac87b4d5dc851420cf3cc","title":"","text":"gvr(\"flowcontrol.apiserver.k8s.io\", \"v1alpha1\", 
\"prioritylevelconfigurations\"): { Stub: `{\"metadata\": {\"name\": \"conf1\"}, \"spec\": {\"type\": \"Limited\", \"limited\": {\"assuredConcurrencyShares\":3, \"limitResponse\": {\"type\": \"Reject\"}}}}`, ExpectedEtcdPath: \"/registry/prioritylevelconfigurations/conf1\", ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta2\", \"PriorityLevelConfiguration\"), ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta3\", \"PriorityLevelConfiguration\"), }, // --"} {"_id":"doc-en-kubernetes-d2863f3591ef6e3c28166078afe570f66b659e51237d1e03f63fcfcc8c265a4a","title":"","text":"gvr(\"flowcontrol.apiserver.k8s.io\", \"v1beta1\", \"flowschemas\"): { Stub: `{\"metadata\": {\"name\": \"va2\"}, \"spec\": {\"priorityLevelConfiguration\": {\"name\": \"name1\"}}}`, ExpectedEtcdPath: \"/registry/flowschemas/va2\", ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta2\", \"FlowSchema\"), ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta3\", \"FlowSchema\"), }, // --"} {"_id":"doc-en-kubernetes-94066cf3a30600c69e2e38ee51e0c40d2992ecc70a1e438a75844ff31934e2f7","title":"","text":"gvr(\"flowcontrol.apiserver.k8s.io\", \"v1beta1\", \"prioritylevelconfigurations\"): { Stub: `{\"metadata\": {\"name\": \"conf2\"}, \"spec\": {\"type\": \"Limited\", \"limited\": {\"assuredConcurrencyShares\":3, \"limitResponse\": {\"type\": \"Reject\"}}}}`, ExpectedEtcdPath: \"/registry/prioritylevelconfigurations/conf2\", ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta2\", \"PriorityLevelConfiguration\"), ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta3\", \"PriorityLevelConfiguration\"), }, // --"} {"_id":"doc-en-kubernetes-34675e0525f6941e43a27e393b161a2472dcff9acc23b8b5133ebc376d5332b4","title":"","text":"gvr(\"flowcontrol.apiserver.k8s.io\", \"v1beta2\", \"flowschemas\"): { Stub: `{\"metadata\": {\"name\": \"fs-1\"}, \"spec\": {\"priorityLevelConfiguration\": {\"name\": \"name1\"}}}`, ExpectedEtcdPath: \"/registry/flowschemas/fs-1\", 
ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta3\", \"FlowSchema\"), }, // --"} {"_id":"doc-en-kubernetes-a540f2466de949b471e4d91d42572e7d4b8ea534c17584e49720de9349d373d6","title":"","text":"gvr(\"flowcontrol.apiserver.k8s.io\", \"v1beta2\", \"prioritylevelconfigurations\"): { Stub: `{\"metadata\": {\"name\": \"conf3\"}, \"spec\": {\"type\": \"Limited\", \"limited\": {\"assuredConcurrencyShares\":3, \"limitResponse\": {\"type\": \"Reject\"}}}}`, ExpectedEtcdPath: \"/registry/prioritylevelconfigurations/conf3\", ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta3\", \"PriorityLevelConfiguration\"), }, // --"} {"_id":"doc-en-kubernetes-95dbd7c60e06d53681fbe8686dd363fd7000ad3e78afd2015f7c1691a642e646","title":"","text":"gvr(\"flowcontrol.apiserver.k8s.io\", \"v1beta3\", \"flowschemas\"): { Stub: `{\"metadata\": {\"name\": \"fs-2\"}, \"spec\": {\"priorityLevelConfiguration\": {\"name\": \"name1\"}}}`, ExpectedEtcdPath: \"/registry/flowschemas/fs-2\", ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta2\", \"FlowSchema\"), }, // --"} {"_id":"doc-en-kubernetes-0951d1a2e69b06311eb321f421ade492f91733369cb517912840046b047e2565","title":"","text":"gvr(\"flowcontrol.apiserver.k8s.io\", \"v1beta3\", \"prioritylevelconfigurations\"): { Stub: `{\"metadata\": {\"name\": \"conf4\"}, \"spec\": {\"type\": \"Limited\", \"limited\": {\"nominalConcurrencyShares\":3, \"limitResponse\": {\"type\": \"Reject\"}}}}`, ExpectedEtcdPath: \"/registry/prioritylevelconfigurations/conf4\", ExpectedGVK: gvkP(\"flowcontrol.apiserver.k8s.io\", \"v1beta2\", \"PriorityLevelConfiguration\"), }, // --"} {"_id":"doc-en-kubernetes-1beecfa7407c2c19191a25dccc03e7283f658e62c9f0f863fbea672103132515","title":"","text":"oldModifyResponse := proxy.ModifyResponse proxy.ModifyResponse = func(response *http.Response) error { code := response.StatusCode if code >= 300 && code <= 399 { if code >= 300 && code <= 399 && len(response.Header.Get(\"Location\")) > 0 { // close the original 
response response.Body.Close() msg := \"the backend attempted to redirect this request, which is not permitted\""} {"_id":"doc-en-kubernetes-bf3dce4ea8f9b88906ca841aee45fc57d9a24ee565d3ec737092e166edf6299e","title":"","text":"name string rejectForwardingRedirects bool serverStatusCode int redirect string expectStatusCode int expectBody []byte }{"} {"_id":"doc-en-kubernetes-999c692a854de3c692a05cc415367d5758a756aef79818d3d862232c7822f553","title":"","text":"name: \"reject redirection enabled in proxy, backend server sending 301 response\", rejectForwardingRedirects: true, serverStatusCode: 301, redirect: \"/\", expectStatusCode: 502, expectBody: []byte(`the backend attempted to redirect this request, which is not permitted`), }, { name: \"reject redirection enabled in proxy, backend server sending 304 response with a location header\", rejectForwardingRedirects: true, serverStatusCode: 304, redirect: \"/\", expectStatusCode: 502, expectBody: []byte(`the backend attempted to redirect this request, which is not permitted`), }, { name: \"reject redirection enabled in proxy, backend server sending 304 response with no location header\", rejectForwardingRedirects: true, serverStatusCode: 304, expectStatusCode: 304, expectBody: []byte{}, // client doesn't read the body for 304 responses }, { name: \"reject redirection disabled in proxy, backend server sending 200 response\", rejectForwardingRedirects: false, serverStatusCode: 200,"} {"_id":"doc-en-kubernetes-adffc7be632442e548c7f8750d76cc38ecd4300e98f50ed403e231dd5364f7e2","title":"","text":"name: \"reject redirection disabled in proxy, backend server sending 301 response\", rejectForwardingRedirects: false, serverStatusCode: 301, redirect: \"/\", expectStatusCode: 301, expectBody: originalBody, },"} {"_id":"doc-en-kubernetes-2b818ef8062aa1e229b5e7f3e40aad6b3efc2aa959402c488c77f5aeeba5ec57","title":"","text":"t.Run(tc.name, func(t *testing.T) { // Set up a backend server backendServer := 
httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { if tc.redirect != \"\" { w.Header().Set(\"Location\", tc.redirect) } w.WriteHeader(tc.serverStatusCode) w.Write(originalBody) }))"} {"_id":"doc-en-kubernetes-b673eb1d3d8e084cea9a49aaba0f3560f26929320ad24528bd15a1b743b8997d","title":"","text":"kl.statusManager.SetPodStatus(pod, v1.PodStatus{ Phase: v1.PodFailed, Reason: reason, Message: \"Pod \" + message}) Message: \"Pod was rejected: \" + message}) } // canAdmitPod determines if a pod can be admitted, and gives a reason if it"} {"_id":"doc-en-kubernetes-f8ad730ece4cc4a91ed2c249c539decb88588fa7cae6bc80f278b3d960482b9d","title":"","text":"// perform any validation. // According to our measurements this is significantly faster // in codepaths that matter at high scale. // Note: this method copies the Set; if the Set is immutable, consider wrapping it with ValidatedSetSelector // instead, which does not copy. func (ls Set) AsSelectorPreValidated() Selector { return SelectorFromValidatedSet(ls) }"} {"_id":"doc-en-kubernetes-53e595ea5e3aa0e3c81b389b27567ad82863b6e968d4a2c96bc2d8dad3cef9dd","title":"","text":"// SelectorFromValidatedSet returns a Selector which will match exactly the given Set. // A nil and empty Sets are considered equivalent to Everything(). // It assumes that Set is already validated and doesn't do any validation. // Note: this method copies the Set; if the Set is immutable, consider wrapping it with ValidatedSetSelector // instead, which does not copy. func SelectorFromValidatedSet(ls Set) Selector { if ls == nil || len(ls) == 0 { return internalSelector{}"} {"_id":"doc-en-kubernetes-43957c1d625b90c4fa64785b3f5578395dc84fedcc58b86ec5e48488f141dbcb","title":"","text":"func ParseToRequirements(selector string, opts ...field.PathOption) ([]Requirement, error) { return parse(selector, field.ToPath(opts...)) } // ValidatedSetSelector wraps a Set, allowing it to implement the Selector interface. 
Unlike // Set.AsSelectorPreValidated (which copies the input Set), this type simply wraps the underlying // Set. As a result, it is substantially more efficient. A nil and empty Sets are considered // equivalent to Everything(). // // Callers MUST ensure the underlying Set is not mutated, and that it is already validated. If these // constraints are not met, Set.AsValidatedSelector should be preferred // // None of the Selector methods mutate the underlying Set, but Add() and Requirements() convert to // the less optimized version. type ValidatedSetSelector Set func (s ValidatedSetSelector) Matches(labels Labels) bool { for k, v := range s { if !labels.Has(k) || v != labels.Get(k) { return false } } return true } func (s ValidatedSetSelector) Empty() bool { return len(s) == 0 } func (s ValidatedSetSelector) String() string { keys := make([]string, 0, len(s)) for k := range s { keys = append(keys, k) } // Ensure deterministic output sort.Strings(keys) b := strings.Builder{} for i, key := range keys { v := s[key] b.Grow(len(key) + 2 + len(v)) if i != 0 { b.WriteString(\",\") } b.WriteString(key) b.WriteString(\"=\") b.WriteString(v) } return b.String() } func (s ValidatedSetSelector) Add(r ...Requirement) Selector { return s.toFullSelector().Add(r...) 
} func (s ValidatedSetSelector) Requirements() (requirements Requirements, selectable bool) { return s.toFullSelector().Requirements() } func (s ValidatedSetSelector) DeepCopySelector() Selector { res := make(ValidatedSetSelector, len(s)) for k, v := range s { res[k] = v } return res } func (s ValidatedSetSelector) RequiresExactMatch(label string) (value string, found bool) { v, f := s[label] return v, f } func (s ValidatedSetSelector) toFullSelector() Selector { return SelectorFromValidatedSet(Set(s)) } var _ Selector = ValidatedSetSelector{} "} {"_id":"doc-en-kubernetes-9a8d3291d84e2ef64ea0b6a45408ce2ebbe2376d53d830150abe75136498aa98","title":"","text":"\"foo\": \"foo\", \"bar\": \"bar\", } matchee := Set(map[string]string{ \"foo\": \"foo\", \"bar\": \"bar\", \"extra\": \"label\", }) for i := 0; i < b.N; i++ { if SelectorFromValidatedSet(set).Empty() { s := SelectorFromValidatedSet(set) if s.Empty() { b.Errorf(\"Unexpected selector\") } if !s.Matches(matchee) { b.Errorf(\"Unexpected match\") } } } func BenchmarkSetSelector(b *testing.B) { set := map[string]string{ \"foo\": \"foo\", \"bar\": \"bar\", } matchee := Set(map[string]string{ \"foo\": \"foo\", \"bar\": \"bar\", \"extra\": \"label\", }) for i := 0; i < b.N; i++ { s := ValidatedSetSelector(set) if s.Empty() { b.Errorf(\"Unexpected selector\") } if !s.Matches(matchee) { b.Errorf(\"Unexpected match\") } } } func TestSetSelectorString(t *testing.T) { cases := []struct { set Set out string }{ { Set{}, \"\", }, { Set{\"app\": \"foo\"}, \"app=foo\", }, { Set{\"app\": \"foo\", \"a\": \"b\"}, \"a=b,app=foo\", }, } for _, tt := range cases { t.Run(tt.out, func(t *testing.T) { if got := ValidatedSetSelector(tt.set).String(); tt.out != got { t.Fatalf(\"expected %v, got %v\", tt.out, got) } }) } }"} {"_id":"doc-en-kubernetes-a9e3475ae74f6a4e5849e2c67032de9c729aa5ba7bd290faed341d741dc80a1f","title":"","text":"\"fmt\" \"strings\" \"k8s.io/api/core/v1\" v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/resource\" 
\"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apimachinery/pkg/util/validation/field\""} {"_id":"doc-en-kubernetes-ab78ce70a3b3db9c734626e19ec127251fc233a14c84df8daeb3adc22a8704fa","title":"","text":"if exists { // For GPUs, not only requests can't exceed limits, they also can't be lower, i.e. must be equal. if quantity.Cmp(limitQuantity) != 0 && !v1helper.IsOvercommitAllowed(resourceName) { allErrs = append(allErrs, field.Invalid(reqPath, quantity.String(), fmt.Sprintf(\"must be equal to %s limit\", resourceName))) allErrs = append(allErrs, field.Invalid(reqPath, quantity.String(), fmt.Sprintf(\"must be equal to %s limit of %s\", resourceName, limitQuantity.String()))) } else if quantity.Cmp(limitQuantity) > 0 { allErrs = append(allErrs, field.Invalid(reqPath, quantity.String(), fmt.Sprintf(\"must be less than or equal to %s limit\", resourceName))) allErrs = append(allErrs, field.Invalid(reqPath, quantity.String(), fmt.Sprintf(\"must be less than or equal to %s limit of %s\", resourceName, limitQuantity.String()))) } } }"} {"_id":"doc-en-kubernetes-6f781b2460661f733fe8efa774f70ab76927ea355514e1cecb1c71a5f620913f","title":"","text":"package validation import ( \"strings\" \"testing\" \"k8s.io/api/core/v1\" v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/util/sets\""} {"_id":"doc-en-kubernetes-f48791571bd758d7cf5e24281c1f6be2bc6d3d0c7cd32b7be3b2b63d3f1ffe4d","title":"","text":"} errorCase := []struct { name string requirements v1.ResourceRequirements name string requirements v1.ResourceRequirements skipLimitValueCheck bool skipRequestValueCheck bool }{ { name: \"Resources with Requests Larger Than Limits\","} {"_id":"doc-en-kubernetes-b731dccd452ca098487d8216eb36b405f1fd9853a69d166141308e5eb8d264d9","title":"","text":"v1.ResourceName(\"my.org\"): resource.MustParse(\"10m\"), }, }, skipRequestValueCheck: true, }, { name: \"Invalid Resources with Limits\","} 
{"_id":"doc-en-kubernetes-0085d2a7933714ec618ca72d1967d9c9236117548baa44ee683a1c6eb5f253c4","title":"","text":"v1.ResourceName(\"my.org\"): resource.MustParse(\"9m\"), }, }, skipLimitValueCheck: true, }, } for _, tc := range errorCase { t.Run(tc.name, func(t *testing.T) { if errs := ValidateResourceRequirements(&tc.requirements, field.NewPath(\"resources\")); len(errs) == 0 { errs := ValidateResourceRequirements(&tc.requirements, field.NewPath(\"resources\")) if len(errs) == 0 { t.Errorf(\"expected error\") } validateNamesAndValuesInDescription(t, tc.requirements.Limits, errs, tc.skipLimitValueCheck, \"limit\") validateNamesAndValuesInDescription(t, tc.requirements.Requests, errs, tc.skipRequestValueCheck, \"request\") }) } } func validateNamesAndValuesInDescription(t *testing.T, r v1.ResourceList, errs field.ErrorList, skipValueTest bool, rl string) { for name, value := range r { containsName := false containsValue := false for _, e := range errs { if strings.Contains(e.Error(), name.String()) { containsName = true } if strings.Contains(e.Error(), value.String()) { containsValue = true } } if !containsName { t.Errorf(\"error must contain %s name\", rl) } if !containsValue && !skipValueTest { t.Errorf(\"error must contain %s value\", rl) } } } func TestValidateContainerResourceName(t *testing.T) { successCase := []struct { name string"} {"_id":"doc-en-kubernetes-2663bee782152df929b3b1d6c289b18ac0e53e4638995780e7e17df557e7149d","title":"","text":"if exists { // For non overcommitable resources, not only requests can't exceed limits, they also can't be lower, i.e. must be equal. 
if quantity.Cmp(limitQuantity) != 0 && !helper.IsOvercommitAllowed(resourceName) { allErrs = append(allErrs, field.Invalid(reqPath, quantity.String(), fmt.Sprintf(\"must be equal to %s limit\", resourceName))) allErrs = append(allErrs, field.Invalid(reqPath, quantity.String(), fmt.Sprintf(\"must be equal to %s limit of %s\", resourceName, limitQuantity.String()))) } else if quantity.Cmp(limitQuantity) > 0 { allErrs = append(allErrs, field.Invalid(reqPath, quantity.String(), fmt.Sprintf(\"must be less than or equal to %s limit\", resourceName))) allErrs = append(allErrs, field.Invalid(reqPath, quantity.String(), fmt.Sprintf(\"must be less than or equal to %s limit of %s\", resourceName, limitQuantity.String()))) } } else if !helper.IsOvercommitAllowed(resourceName) { allErrs = append(allErrs, field.Required(limPath, \"Limit must be set for non overcommitable resources\"))"} {"_id":"doc-en-kubernetes-e69f4ca60537de83169f58e05ee218518073e230cf14c1570631a78db46abeac","title":"","text":"\"k8s.io/apimachinery/pkg/util/uuid\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/apiserver/pkg/server/healthz\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" cacheddiscovery \"k8s.io/client-go/discovery/cached\" \"k8s.io/client-go/informers\" v1core \"k8s.io/client-go/kubernetes/typed/core/v1\""} {"_id":"doc-en-kubernetes-43a10479caec81feabb89c6a392a169e571f1bdf297dc423fcb66c853ade5c5d","title":"","text":"cliflag \"k8s.io/component-base/cli/flag\" \"k8s.io/component-base/cli/globalflag\" \"k8s.io/component-base/configz\" \"k8s.io/component-base/metrics/features\" controllersmetrics \"k8s.io/component-base/metrics/prometheus/controllers\" \"k8s.io/component-base/metrics/prometheus/slis\" \"k8s.io/component-base/term\" \"k8s.io/component-base/version\" \"k8s.io/component-base/version/verflag\""} {"_id":"doc-en-kubernetes-cb8c24aad2e4aae68f87805beed7f34ca104226ef40ba33f283354a22413470f","title":"","text":"\"k8s.io/klog/v2\" ) func init() { 
utilruntime.Must(features.AddFeatureGates(utilfeature.DefaultMutableFeatureGate)) } const ( // ControllerStartJitter is the jitter value used when starting controller managers. ControllerStartJitter = 1.0"} {"_id":"doc-en-kubernetes-78f6dda7950219444b7e83361c1d1d64e69b532c5b77c8509487e482dde86e11","title":"","text":"// Start the controller manager HTTP server if c.SecureServing != nil { unsecuredMux := genericcontrollermanager.NewBaseHandler(&c.ComponentConfig.Generic.Debugging, healthzHandler) if utilfeature.DefaultFeatureGate.Enabled(features.ComponentSLIs) { slis.SLIMetricsWithReset{}.Install(unsecuredMux) } handler := genericcontrollermanager.BuildHandlerChain(unsecuredMux, &c.Authorization, &c.Authentication) // TODO: handle stoppedCh and listenerStoppedCh returned by c.SecureServing.Serve if _, _, err := c.SecureServing.Serve(handler, 0, stopCh); err != nil {"} {"_id":"doc-en-kubernetes-c97cf825faf7482b327e03190e44a2e7d4095f2ab2b07285e0c70e431e97c910","title":"","text":"// with gingko.BeforeEach/AfterEach/DeferCleanup. // // When a test runs, functions will be invoked in this order: // - BeforeEaches defined by tests before f.NewDefaultFramework // in the order in which they were defined (first-in-first-out) // - f.BeforeEach // - all BeforeEaches in the order in which they were defined (first-in-first-out) // - BeforeEaches defined by tests after f.NewDefaultFramework // - It callback // - all AfterEaches in the order in which they were defined // - all DeferCleanups with the order reversed (first-in-last-out) // - f.AfterEach // // Because a test might skip test execution in a BeforeEach that runs // before f.BeforeEach, AfterEach callbacks that depend on the // framework instance must check whether it was initialized. They can // do that by checking f.ClientSet for nil. DeferCleanup callbacks // don't need to do this because they get defined when the test // runs. 
NewFrameworkExtensions []func(f *Framework) )"} {"_id":"doc-en-kubernetes-5af32f6f98e6aa7ff0605f9ac14c820e9d5da92406ea9f37bf2ca563cc1f3d15","title":"","text":"} } // Paranoia-- prevent reuse! // Unsetting this is relevant for a following test that uses // the same instance because it might not reach f.BeforeEach // when some other BeforeEach skips the test first. f.Namespace = nil f.clientConfig = nil f.ClientSet = nil"} {"_id":"doc-en-kubernetes-b220792593828af5aba52209f8f5ea01a39030ba0d14f484540fc624df19baf5","title":"","text":"framework.NewFrameworkExtensions = append(framework.NewFrameworkExtensions, func(f *framework.Framework) { ginkgo.AfterEach(func() { if f.ClientSet == nil { // Test didn't reach f.BeforeEach, most // likely because the test got // skipped. Nothing to check... return } e2enode.AllNodesReady(f.ClientSet, 3*time.Minute) }) },"} {"_id":"doc-en-kubernetes-8a24aee8beb1a8d4ac362630e22675b2fb661e88c86b5a009e165fa86c1ee62a","title":"","text":"WhenScaled: apps.DeletePersistentVolumeClaimRetentionPolicyType, WhenDeleted: apps.DeletePersistentVolumeClaimRetentionPolicyType, }, // tests the case when no policy is set. nil, } { subtestName := pvcDeletePolicyString(policy) + \"/StatefulSetAutoDeletePVCEnabled\" if testName != \"\" {"} {"_id":"doc-en-kubernetes-a688c882943d891e578f38d10a181b608761cb3841c8683b7d4e998158b9799d","title":"","text":"} func pvcDeletePolicyString(policy *apps.StatefulSetPersistentVolumeClaimRetentionPolicy) string { if policy == nil { return \"nullPolicy\" } const retain = apps.RetainPersistentVolumeClaimRetentionPolicyType const delete = apps.DeletePersistentVolumeClaimRetentionPolicyType switch {"} {"_id":"doc-en-kubernetes-0db8b9a9963fe172f8c526f5dbad63325026a59f82106da42a11e780e7146730","title":"","text":"// claimOwnerMatchesSetAndPod returns false if the ownerRefs of the claim are not set consistently with the // PVC deletion policy for the StatefulSet. 
func claimOwnerMatchesSetAndPod(claim *v1.PersistentVolumeClaim, set *apps.StatefulSet, pod *v1.Pod) bool { policy := set.Spec.PersistentVolumeClaimRetentionPolicy policy := getPersistentVolumeClaimRetentionPolicy(set) const retain = apps.RetainPersistentVolumeClaimRetentionPolicyType const delete = apps.DeletePersistentVolumeClaimRetentionPolicyType switch {"} {"_id":"doc-en-kubernetes-47465fe8816417c02c40a19dfb97d753521676e0f20d2b2d300b96d801c88151","title":"","text":"updateMeta(&podMeta, \"Pod\") setMeta := set.TypeMeta updateMeta(&setMeta, \"StatefulSet\") policy := set.Spec.PersistentVolumeClaimRetentionPolicy policy := getPersistentVolumeClaimRetentionPolicy(set) const retain = apps.RetainPersistentVolumeClaimRetentionPolicyType const delete = apps.DeletePersistentVolumeClaimRetentionPolicyType switch {"} {"_id":"doc-en-kubernetes-9ddef8812cad782f4c5e8137ed479959bc608d8f005148172df935e1d20d011d","title":"","text":"/* Release : v1.23 Testname: Pod Lifecycle, poststart https hook Description: When a post-start handler is specified in the container lifecycle using a 'HttpGet' action, then the handler MUST be invoked before the container is terminated. A server pod is created that will serve https requests, create a second pod with a container lifecycle specifying a post-start that invokes the server pod to validate that the post-start is executed. Description: When a post-start handler is specified in the container lifecycle using a 'HttpGet' action, then the handler MUST be invoked before the container is terminated. A server pod is created that will serve https requests, create a second pod on the same node with a container lifecycle specifying a post-start that invokes the server pod to validate that the post-start is executed. 
*/ ginkgo.It(\"should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]\", func() { lifecycle := &v1.Lifecycle{"} {"_id":"doc-en-kubernetes-08079a1a3b9c6995ad34ad1086db607120a55d908c66f497eaa08ce57604a009","title":"","text":"}, } podWithHook := getPodWithHook(\"pod-with-poststart-https-hook\", imageutils.GetPauseImageName(), lifecycle) // make sure we spawn the test pod on the same node as the webserver. nodeSelection := e2epod.NodeSelection{} e2epod.SetAffinity(&nodeSelection, targetNode) e2epod.SetNodeSelection(&podWithHook.Spec, nodeSelection) testPodWithHook(podWithHook) }) /*"} {"_id":"doc-en-kubernetes-7b10299e55d8707ba818ea889ba9c71cf79a004115f22f50d03ec71032faccd3","title":"","text":"/* Release : v1.23 Testname: Pod Lifecycle, prestop https hook Description: When a pre-stop handler is specified in the container lifecycle using a 'HttpGet' action, then the handler MUST be invoked before the container is terminated. A server pod is created that will serve https requests, create a second pod with a container lifecycle specifying a pre-stop that invokes the server pod to validate that the pre-stop is executed. Description: When a pre-stop handler is specified in the container lifecycle using a 'HttpGet' action, then the handler MUST be invoked before the container is terminated. A server pod is created that will serve https requests, create a second pod on the same node with a container lifecycle specifying a pre-stop that invokes the server pod to validate that the pre-stop is executed. */ ginkgo.It(\"should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]\", func() { lifecycle := &v1.Lifecycle{"} {"_id":"doc-en-kubernetes-182e647ca66ce6caa006ed411b395cb25d5be03667e62db9fd654f71dd0eb9a1","title":"","text":"}, } podWithHook := getPodWithHook(\"pod-with-prestop-https-hook\", imageutils.GetPauseImageName(), lifecycle) // make sure we spawn the test pod on the same node as the webserver. 
nodeSelection := e2epod.NodeSelection{} e2epod.SetAffinity(&nodeSelection, targetNode) e2epod.SetNodeSelection(&podWithHook.Spec, nodeSelection) testPodWithHook(podWithHook) }) })"} {"_id":"doc-en-kubernetes-750f8c2f1564e8e5e516e6b822cb4ccee06db2609f7d616c8b608c728609bc93","title":"","text":"\"fmt\" \"unsafe\" \"github.com/pkg/errors\" \"golang.org/x/sys/windows\" \"k8s.io/klog/v2\" \"k8s.io/kubernetes/pkg/windows/service\" )"} {"_id":"doc-en-kubernetes-881f487c7d5ec6a348d872becf20cd39d9bcb213c5a6400fa8d70ebe8314fed5","title":"","text":"func createWindowsJobObject(pc uint32) (windows.Handle, error) { job, err := windows.CreateJobObject(nil, nil) if err != nil { return windows.InvalidHandle, errors.Wrap(err, \"windows.CreateJobObject failed\") return windows.InvalidHandle, fmt.Errorf(\"windows.CreateJobObject failed: %w\", err) } limitInfo := windows.JOBOBJECT_BASIC_LIMIT_INFORMATION{ LimitFlags: windows.JOB_OBJECT_LIMIT_PRIORITY_CLASS,"} {"_id":"doc-en-kubernetes-b8f8a06d9e27d7471d21e5fc0be14d6be393c1a7995067d319cfc97f82859209","title":"","text":"windows.JobObjectBasicLimitInformation, uintptr(unsafe.Pointer(&limitInfo)), uint32(unsafe.Sizeof(limitInfo))); err != nil { return windows.InvalidHandle, errors.Wrap(err, \"windows.SetInformationJobObject failed\") return windows.InvalidHandle, fmt.Errorf(\"windows.SetInformationJobObject failed: %w\", err) } return job, nil }"} {"_id":"doc-en-kubernetes-044508cc8890a3b5cf143e81cafe0349b661e0b1a236d9150c16f2e00a6ac093","title":"","text":"return err } if err := windows.AssignProcessToJobObject(job, windows.CurrentProcess()); err != nil { return errors.Wrap(err, \"windows.AssignProcessToJobObject failed\") return fmt.Errorf(\"windows.AssignProcessToJobObject failed: %w\", err) } if windowsService {"} {"_id":"doc-en-kubernetes-1b81cab78b3be285a7de1fe08224f3e0e9dd2edbee19c49b110dfd21f6077d54","title":"","text":"if getPodRevision(replicas[target]) != updateRevision.Name && !isTerminating(replicas[target]) { 
klog.V(2).InfoS(\"Pod of StatefulSet is terminating for update\", \"statefulSet\", klog.KObj(set), \"pod\", klog.KObj(replicas[target])) err := ssc.podControl.DeleteStatefulPod(set, replicas[target]) if err := ssc.podControl.DeleteStatefulPod(set, replicas[target]); err != nil { if !errors.IsNotFound(err) { return &status, err } } status.CurrentReplicas-- return &status, err }"} {"_id":"doc-en-kubernetes-93d10f14d22084814069711852b9f7520c53a05eb83bf26a8149dd478cdbf201","title":"","text":"{UpdatePodFailure, simpleSetFn}, {UpdateSetStatusFailure, simpleSetFn}, {PodRecreateDeleteFailure, simpleSetFn}, {NewRevisionDeletePodFailure, simpleSetFn}, } for _, testCase := range testCases {"} {"_id":"doc-en-kubernetes-a8cdb49cea2f148f26828b00c69742add3e4f5706c5399e0c24babab8f9d491a","title":"","text":"} } func NewRevisionDeletePodFailure(t *testing.T, set *apps.StatefulSet, invariants invariantFunc) { client := fake.NewSimpleClientset(set) om, _, ssc := setupController(client) if err := scaleUpStatefulSetControl(set, ssc, om, invariants); err != nil { t.Errorf(\"Failed to turn up StatefulSet : %s\", err) } var err error set, err = om.setsLister.StatefulSets(set.Namespace).Get(set.Name) if err != nil { t.Fatalf(\"Error getting updated StatefulSet: %v\", err) } if set.Status.Replicas != 3 { t.Error(\"Failed to scale StatefulSet to 3 replicas\") } selector, err := metav1.LabelSelectorAsSelector(set.Spec.Selector) if err != nil { t.Error(err) } pods, err := om.podsLister.Pods(set.Namespace).List(selector) if err != nil { t.Error(err) } // trigger a new revision updateSet := set.DeepCopy() updateSet.Spec.Template.Spec.Containers[0].Image = \"nginx-new\" if err := om.setsIndexer.Update(updateSet); err != nil { t.Error(\"Failed to update StatefulSet\") } set, err = om.setsLister.StatefulSets(set.Namespace).Get(set.Name) if err != nil { t.Fatalf(\"Error getting updated StatefulSet: %v\", err) } // delete fails om.SetDeleteStatefulPodError(apierrors.NewInternalError(errors.New(\"API 
server failed\")), 0) _, err = ssc.UpdateStatefulSet(context.TODO(), set, pods) if err == nil { t.Error(\"Expected err in update StatefulSet when deleting a pod\") } set, err = om.setsLister.StatefulSets(set.Namespace).Get(set.Name) if err != nil { t.Fatalf(\"Error getting updated StatefulSet: %v\", err) } if err := invariants(set, om); err != nil { t.Error(err) } if set.Status.CurrentReplicas != 3 { t.Fatalf(\"Failed pod deletion should not update CurrentReplicas: want 3, got %d\", set.Status.CurrentReplicas) } if set.Status.CurrentRevision == set.Status.UpdateRevision { t.Error(\"Failed to create new revision\") } // delete works om.SetDeleteStatefulPodError(nil, 0) status, err := ssc.UpdateStatefulSet(context.TODO(), set, pods) if err != nil { t.Fatalf(\"Unexpected err in update StatefulSet: %v\", err) } if status.CurrentReplicas != 2 { t.Fatalf(\"Pod deletion should update CurrentReplicas: want 2, got %d\", status.CurrentReplicas) } if err := invariants(set, om); err != nil { t.Error(err) } } func emptyInvariants(set *apps.StatefulSet, om *fakeObjectManager) error { return nil }"} {"_id":"doc-en-kubernetes-1bb34495b1d041d5d95e073f59088059ce3d68f35de964beecd12f72511b1d16","title":"","text":"} interval := time.NewTicker(time.Millisecond * 500) defer interval.Stop() done := make(chan bool, 1) done := make(chan bool) go func() { time.Sleep(time.Minute * 2) done <- true close(done) }() for { select {"} {"_id":"doc-en-kubernetes-95c58e1230e051455e52970857e78f2c2a51ca86f779d02394eca54298f5f413","title":"","text":"timedout := make(chan bool) go func() { time.Sleep(gracefulWait) timedout <- true close(timedout) }() go func() { select {"} {"_id":"doc-en-kubernetes-86a25aed3aa0af9d5c4e2b2180ea579278ca5076c77702b4bb8b4fb1b2da2778","title":"","text":"} }() err = r.cmd.Wait() stopped <- true close(stopped) if exiterr, ok := err.(*exec.ExitError); ok { klog.Infof(\"etcd server stopped (signal: %s)\", exiterr.Error()) // stopped"} 
{"_id":"doc-en-kubernetes-c3efdefcf1b0213c11b5b0f794a0940b5fa03eb3dfdf700ed0dcedbea6da2c6c","title":"","text":"return } ref := &v1.ObjectReference{ Kind: \"Pod\", Name: nsName.Name, Namespace: nsName.Namespace, APIVersion: \"v1\", Kind: \"Pod\", Name: nsName.Name, Namespace: nsName.Namespace, } tc.recorder.Eventf(ref, v1.EventTypeNormal, \"TaintManagerEviction\", \"Marking for deletion Pod %s\", nsName.String()) }"} {"_id":"doc-en-kubernetes-559e2f0a2bcc27c25b0a5f92cdb4491a4fa3a090803238883ce4e65e4b578790","title":"","text":"return } ref := &v1.ObjectReference{ Kind: \"Pod\", Name: nsName.Name, Namespace: nsName.Namespace, APIVersion: \"v1\", Kind: \"Pod\", Name: nsName.Name, Namespace: nsName.Namespace, } tc.recorder.Eventf(ref, v1.EventTypeNormal, \"TaintManagerEviction\", \"Cancelling deletion of Pod %s\", nsName.String()) }"} {"_id":"doc-en-kubernetes-3b546bc510fb8f0e1d76e687a0830836fb800db91194f70269d0eec1bed51f1d","title":"","text":"\"testing\" \"time\" \"github.com/google/go-cmp/cmp\" v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/fields\" \"k8s.io/apimachinery/pkg/labels\" \"k8s.io/apimachinery/pkg/types\""} {"_id":"doc-en-kubernetes-cf18add881e048df34f006f85bf4a99c9553be1e96376e86fc05986a59a19d45","title":"","text":"featuregatetesting \"k8s.io/component-base/featuregate/testing\" \"k8s.io/kubernetes/pkg/controller/testutil\" \"k8s.io/kubernetes/pkg/features\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var timeForControllerToProgressForSanityCheck = 20 * time.Millisecond"} {"_id":"doc-en-kubernetes-42e49b939a114222aff19db006b56ff573886edb4ca06e5d0e6b42eb6482bb78","title":"","text":"t.Errorf(\"[%v]Unexpected test result. 
Expected delete %v, got %v\", description, expectDelete, podDeleted) } } // TestPodDeletionEvent Verify that the output events are as expected func TestPodDeletionEvent(t *testing.T) { f := func(path cmp.Path) bool { switch path.String() { // These fields change at runtime, so ignore it case \"LastTimestamp\", \"FirstTimestamp\", \"ObjectMeta.Name\": return true } return false } t.Run(\"emitPodDeletionEvent\", func(t *testing.T) { controller := &NoExecuteTaintManager{} recorder := testutil.NewFakeRecorder() controller.recorder = recorder controller.emitPodDeletionEvent(types.NamespacedName{ Name: \"test\", Namespace: \"test\", }) want := []*v1.Event{ { ObjectMeta: metav1.ObjectMeta{ Namespace: \"test\", }, InvolvedObject: v1.ObjectReference{ Kind: \"Pod\", APIVersion: \"v1\", Namespace: \"test\", Name: \"test\", }, Reason: \"TaintManagerEviction\", Type: \"Normal\", Count: 1, Message: \"Marking for deletion Pod test/test\", Source: v1.EventSource{Component: \"nodeControllerTest\"}, }, } if diff := cmp.Diff(want, recorder.Events, cmp.FilterPath(f, cmp.Ignore())); len(diff) > 0 { t.Errorf(\"emitPodDeletionEvent() returned data (-want,+got):n%s\", diff) } }) t.Run(\"emitCancelPodDeletionEvent\", func(t *testing.T) { controller := &NoExecuteTaintManager{} recorder := testutil.NewFakeRecorder() controller.recorder = recorder controller.emitCancelPodDeletionEvent(types.NamespacedName{ Name: \"test\", Namespace: \"test\", }) want := []*v1.Event{ { ObjectMeta: metav1.ObjectMeta{ Namespace: \"test\", }, InvolvedObject: v1.ObjectReference{ Kind: \"Pod\", APIVersion: \"v1\", Namespace: \"test\", Name: \"test\", }, Reason: \"TaintManagerEviction\", Type: \"Normal\", Count: 1, Message: \"Cancelling deletion of Pod test/test\", Source: v1.EventSource{Component: \"nodeControllerTest\"}, }, } if diff := cmp.Diff(want, recorder.Events, cmp.FilterPath(f, cmp.Ignore())); len(diff) > 0 { t.Errorf(\"emitPodDeletionEvent() returned data (-want,+got):n%s\", diff) } }) } "} 
{"_id":"doc-en-kubernetes-7f6359f8009a905146c68591c2df21e80c4a5ddd068c74724afa7eaeff417d2b","title":"","text":"QOSReserved featuregate.Feature = \"QOSReserved\" // owner: @chrishenzie // kep: https://kep.k8s.io/2485 // alpha: v1.22 // beta: v1.27 // // Enables usage of the ReadWriteOncePod PersistentVolume access mode. ReadWriteOncePod featuregate.Feature = \"ReadWriteOncePod\""} {"_id":"doc-en-kubernetes-0a16b5ecc9be6e2428d0e6b98c200dac304cc003f5d2a224805d5f94ce93e9bf","title":"","text":"QOSReserved: {Default: false, PreRelease: featuregate.Alpha}, ReadWriteOncePod: {Default: false, PreRelease: featuregate.Alpha}, ReadWriteOncePod: {Default: true, PreRelease: featuregate.Beta}, RecoverVolumeExpansionFailure: {Default: false, PreRelease: featuregate.Alpha},"} {"_id":"doc-en-kubernetes-5953a87766769761732ebcdcea96a76a7df9e6aab24a70a0165adc4e01b3ab61","title":"","text":"\"github.com/onsi/ginkgo/v2\" v1 \"k8s.io/api/core/v1\" schedulingv1 \"k8s.io/api/scheduling/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/fields\" errors \"k8s.io/apimachinery/pkg/util/errors\" \"k8s.io/apimachinery/pkg/util/uuid\" clientset \"k8s.io/client-go/kubernetes\" \"k8s.io/kubernetes/pkg/kubelet/events\" \"k8s.io/kubernetes/test/e2e/framework\""} {"_id":"doc-en-kubernetes-e3b0a1d1a8ac2daeb2621b38a091dc56c063acc5ab92497aa6ea140c63e648cd","title":"","text":"type readWriteOncePodTest struct { config *storageframework.PerTestConfig cs clientset.Interface volume *storageframework.VolumeResource pods []*v1.Pod cs clientset.Interface volume *storageframework.VolumeResource pods []*v1.Pod priorityClass *schedulingv1.PriorityClass migrationCheck *migrationOpCheck }"} {"_id":"doc-en-kubernetes-db8d3802dc354affbf2415d06d78696115f36193a02c4dca075d1433cda40746","title":"","text":"tsInfo: storageframework.TestSuiteInfo{ Name: \"read-write-once-pod\", TestPatterns: patterns, FeatureTag: \"[Feature:ReadWriteOncePod]\", }, } }"} 
{"_id":"doc-en-kubernetes-a8504e0b194bcc5f8a0ff2c0c76073ef27676f43528abffe1df9bc8b481f238b","title":"","text":"err := l.volume.CleanupResource(ctx) errs = append(errs, err) if l.priorityClass != nil { framework.Logf(\"Deleting PriorityClass %v\", l.priorityClass.Name) err := l.cs.SchedulingV1().PriorityClasses().Delete(ctx, l.priorityClass.Name, metav1.DeleteOptions{}) errs = append(errs, err) } framework.ExpectNoError(errors.NewAggregate(errs), \"while cleaning up resource\") l.migrationCheck.validateMigrationVolumeOpCounts(ctx) }"} {"_id":"doc-en-kubernetes-94e60e759438403e7f5a112f4ae6517c54fc76454be6af44e1b83319b10b2859","title":"","text":"ginkgo.DeferCleanup(cleanup) }) ginkgo.It(\"should block a second pod from using an in-use ReadWriteOncePod volume\", func(ctx context.Context) { ginkgo.It(\"should preempt lower priority pods using ReadWriteOncePod volumes\", func(ctx context.Context) { // Create the ReadWriteOncePod PVC. accessModes := []v1.PersistentVolumeAccessMode{v1.ReadWriteOncePod} l.volume = storageframework.CreateVolumeResourceWithAccessModes(ctx, driver, l.config, pattern, t.GetTestSuiteInfo().SupportedSizeRange, accessModes) l.priorityClass = &schedulingv1.PriorityClass{ ObjectMeta: metav1.ObjectMeta{Name: \"e2e-test-read-write-once-pod-\" + string(uuid.NewUUID())}, Value: int32(1000), } _, err := l.cs.SchedulingV1().PriorityClasses().Create(ctx, l.priorityClass, metav1.CreateOptions{}) framework.ExpectNoError(err, \"failed to create priority class\") podConfig := e2epod.Config{ NS: f.Namespace.Name, PVCs: []*v1.PersistentVolumeClaim{l.volume.Pvc},"} {"_id":"doc-en-kubernetes-5c24de8b325db41b337232a7756baddc260a20f5d6462c2b1bf7a113cb5aa0a0","title":"","text":"framework.ExpectNoError(err, \"failed to wait for pod1 running status\") l.pods = append(l.pods, pod1) // Create the second pod, which will fail scheduling because the ReadWriteOncePod PVC is already in use. 
// Create the second pod, which will preempt the first pod because it's using the // ReadWriteOncePod PVC and has higher priority. pod2, err := e2epod.MakeSecPod(&podConfig) framework.ExpectNoError(err, \"failed to create spec for pod2\") pod2.Spec.PriorityClassName = l.priorityClass.Name _, err = l.cs.CoreV1().Pods(pod2.Namespace).Create(ctx, pod2, metav1.CreateOptions{}) framework.ExpectNoError(err, \"failed to create pod2\") err = e2epod.WaitForPodNameUnschedulableInNamespace(ctx, l.cs, pod2.Name, pod2.Namespace) framework.ExpectNoError(err, \"failed to wait for pod2 unschedulable status\") l.pods = append(l.pods, pod2) // Delete the first pod and observe the second pod can now start. err = e2epod.DeletePodWithWait(ctx, l.cs, pod1) framework.ExpectNoError(err, \"failed to delete pod1\") // Wait for the first pod to be preempted and the second pod to start. err = e2epod.WaitForPodNotFoundInNamespace(ctx, l.cs, pod1.Name, pod1.Namespace, f.Timeouts.PodStart) framework.ExpectNoError(err, \"failed to wait for pod1 to be preempted\") err = e2epod.WaitTimeoutForPodRunningInNamespace(ctx, l.cs, pod2.Name, pod2.Namespace, f.Timeouts.PodStart) framework.ExpectNoError(err, \"failed to wait for pod2 running status\") // Recreate the first pod, which will fail to schedule because the second pod // is using the ReadWriteOncePod PVC and has higher priority. _, err = l.cs.CoreV1().Pods(pod1.Namespace).Create(ctx, pod1, metav1.CreateOptions{}) framework.ExpectNoError(err, \"failed to create pod1\") err = e2epod.WaitForPodNameUnschedulableInNamespace(ctx, l.cs, pod1.Name, pod1.Namespace) framework.ExpectNoError(err, \"failed to wait for pod1 unschedulable status\") // Delete the second pod with higher priority and observe the first pod can now start. 
err = e2epod.DeletePodWithWait(ctx, l.cs, pod2) framework.ExpectNoError(err, \"failed to delete pod2\") err = e2epod.WaitTimeoutForPodRunningInNamespace(ctx, l.cs, pod1.Name, pod1.Namespace, f.Timeouts.PodStart) framework.ExpectNoError(err, \"failed to wait for pod1 running status\") }) ginkgo.It(\"should block a second pod from using an in-use ReadWriteOncePod volume on the same node\", func(ctx context.Context) {"} {"_id":"doc-en-kubernetes-e5cd31dfe5b443de321beffe6f81fd00a7fd2b8f22b9ec2099fab26e16b72697","title":"","text":"serviceAccountName: csi-hostpathplugin-sa containers: - name: hostpath image: registry.k8s.io/sig-storage/hostpathplugin:v1.9.0 image: registry.k8s.io/sig-storage/hostpathplugin:v1.11.0 args: - \"--drivername=hostpath.csi.k8s.io\" - \"--v=5\""} {"_id":"doc-en-kubernetes-61647fea4f7cc5c6dab498ac59084869bf5ba9359cc60d961aee8801ff28bc5d","title":"","text":"command := exec.Command(\"umount\", target) output, err := command.CombinedOutput() if err != nil { if err.Error() == errNoChildProcesses { if command.ProcessState.Success() { // We don't consider errNoChildProcesses an error if the process itself succeeded (see - k/k issue #103753). return nil } // Rewrite err with the actual exit error of the process. err = &exec.ExitError{ProcessState: command.ProcessState} } if mounter.withSafeNotMountedBehavior && strings.Contains(string(output), errNotMounted) { klog.V(4).Infof(\"ignoring 'not mounted' error for %s\", target) return nil } return fmt.Errorf(\"unmount failed: %vnUnmounting arguments: %snOutput: %s\", err, target, string(output)) return checkUmountError(target, command, output, err, mounter.withSafeNotMountedBehavior) } return nil }"} {"_id":"doc-en-kubernetes-377ccfbf775d1e9ffdcf76e091db7351528f419c681b17c08328f7a5fa04525b","title":"","text":"// UnmountWithForce unmounts given target but will retry unmounting with force option // after given timeout. 
func (mounter *Mounter) UnmountWithForce(target string, umountTimeout time.Duration) error { err := tryUnmount(target, mounter.withSafeNotMountedBehavior, umountTimeout) if err != nil { if err == context.DeadlineExceeded { klog.V(2).Infof(\"Timed out waiting for unmount of %s, trying with -f\", target) err = forceUmount(target, mounter.withSafeNotMountedBehavior) } return err }"} {"_id":"doc-en-kubernetes-153ec76bdf93a2c9767070adb6f6ee8e66918f0dc6e85af80a8bd8a8fcdcd956","title":"","text":"} // tryUnmount calls plain \"umount\" and waits for unmountTimeout for it to finish. func tryUnmount(target string, withSafeNotMountedBehavior bool, unmountTimeout time.Duration) error { klog.V(4).Infof(\"Unmounting %s\", target) ctx, cancel := context.WithTimeout(context.Background(), unmountTimeout) defer cancel() command := exec.CommandContext(ctx, \"umount\", target) output, err := command.CombinedOutput() // CombinedOutput() does not return DeadlineExceeded, make sure it's // propagated on timeout."} {"_id":"doc-en-kubernetes-2b7bec511880531f4cdd70c8b4a9856c0e97fbffbd6a09bd51e5f6eccf1e90e6","title":"","text":"return ctx.Err() } if err != nil { return checkUmountError(target, command, output, err, withSafeNotMountedBehavior) } return nil } func forceUmount(target string, withSafeNotMountedBehavior bool) error { command := exec.Command(\"umount\", \"-f\", target) output, err := command.CombinedOutput() if err != nil { return checkUmountError(target, command, output, err, withSafeNotMountedBehavior) } return nil } // checkUmountError checks the result of the umount command and determines the return value. func checkUmountError(target string, command *exec.Cmd, output []byte, err error, withSafeNotMountedBehavior bool) error { if err.Error() == errNoChildProcesses { if command.ProcessState.Success() { // We don't consider errNoChildProcesses an error if the process itself succeeded (see - k/k issue #103753). return nil } // Rewrite err with the actual exit error of the process. err = &exec.ExitError{ProcessState: command.ProcessState} } if withSafeNotMountedBehavior && strings.Contains(string(output), errNotMounted) { klog.V(4).Infof(\"ignoring 'not mounted' error for %s\", target) return nil } return fmt.Errorf(\"unmount failed: %v\nUnmounting arguments: %s\nOutput: %s\", err, target, string(output)) } "} {"_id":"doc-en-kubernetes-14f82cd59f6b8474588f1f2f9bee62af0d328529cd6e114e05d34174b718b843","title":"","text":"\"reflect\" \"strings\" \"testing\" \"time\" utilexec \"k8s.io/utils/exec\" testexec \"k8s.io/utils/exec/testing\""} {"_id":"doc-en-kubernetes-c36fcfdc5bb58bd809d02b9f813244194b5cc37d091d1863d8fccc24a6ed0973","title":"","text":"return testexec.InitFakeCmd(&c, cmd, args...)
} } func TestNotMountedBehaviorOfUnmount(t *testing.T) { target, err := ioutil.TempDir(\"\", \"kubelet-umount\") if err != nil { t.Errorf(\"Cannot create temp dir: %v\", err) } defer os.RemoveAll(target) m := Mounter{withSafeNotMountedBehavior: true} if err = m.Unmount(target); err != nil { t.Errorf(`Expect complete Unmount(), but it does not: %v`, err) } if err = tryUnmount(target, m.withSafeNotMountedBehavior, time.Minute); err != nil { t.Errorf(`Expect complete tryUnmount(), but it does not: %v`, err) } // forceUmount executes \"umount -f\", so skip this case if the user is not root. if os.Getuid() == 0 { if err = forceUmount(target, m.withSafeNotMountedBehavior); err != nil { t.Errorf(`Expect complete forceUmount(), but it does not: %v`, err) } } } "} {"_id":"doc-en-kubernetes-7195293e21e841b5bc109de05d1c86b72e1fe61357bb67008f4daa4312fe0134","title":"","text":"\"fmt\" \"io/ioutil\" \"os\" \"os/exec\" \"reflect\" \"strings\" \"sync\""} {"_id":"doc-en-kubernetes-f66792a91a32e30fb7d9ceeb45979a02576e4004723e8cd747511e3e1ca3e7eb","title":"","text":"} } func TestCheckUmountError(t *testing.T) { target := \"/test/path\" withSafeNotMountedBehavior := true command := exec.Command(\"uname\", \"-r\") // dummy command returning status 0 if err := command.Run(); err != nil { t.Errorf(\"Failed to exec dummy command. err: %s\", err) } testcases := []struct { output []byte err error expected bool }{ { err: errors.New(\"wait: no child processes\"), expected: true, }, { output: []byte(\"umount: /test/path: not mounted.\"), err: errors.New(\"exit status 1\"), expected: true, }, { output: []byte(\"umount: /test/path: No such file or directory\"), err: errors.New(\"exit status 1\"), expected: false, }, } for _, v := range testcases { if err := checkUmountError(target, command, v.output, v.err, withSafeNotMountedBehavior); (err == nil) != v.expected { if v.expected { t.Errorf(\"Expected to return nil, but did not.
err: %s\", err) } else { t.Errorf(\"Expected to return error, but did not.\") } } } } func TestFormat(t *testing.T) { const ( formatCount = 5"} {"_id":"doc-en-kubernetes-5fabaa1ef5e18fdf0cac95825723bc1fde24abc351648883f957d54c8804a7fe","title":"","text":"\"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc\" \"go.opentelemetry.io/otel/trace\" \"google.golang.org/grpc\" \"google.golang.org/grpc/codes\" \"google.golang.org/grpc/credentials/insecure\" \"google.golang.org/grpc/status\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" tracing \"k8s.io/component-base/tracing\" \"k8s.io/klog/v2\""} {"_id":"doc-en-kubernetes-4c3ebe408f3c174cb7eb265a4316b6139527812068bb4088a09fa26665883bb2","title":"","text":"klog.V(4).InfoS(\"Validating the CRI v1 API image version\") r.imageClient = runtimeapi.NewImageServiceClient(conn) if _, err := r.imageClient.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err == nil { klog.V(2).InfoS(\"Validated CRI v1 image API\") } else if status.Code(err) == codes.Unimplemented { return fmt.Errorf(\"CRI v1 image API is not implemented for endpoint %q: %w\", endpoint, err) if _, err := r.imageClient.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err != nil { return fmt.Errorf(\"validate CRI v1 image API for endpoint %q: %w\", endpoint, err) } klog.V(2).InfoS(\"Validated CRI v1 image API\") return nil }"} {"_id":"doc-en-kubernetes-6836797365fcdc844f53c8c2083ff6b36fd6f577087ad386ae82cd80134fb227","title":"","text":"klog.V(4).InfoS(\"Validating the CRI v1 API runtime version\") r.runtimeClient = runtimeapi.NewRuntimeServiceClient(conn) if _, err := r.runtimeClient.Version(ctx, &runtimeapi.VersionRequest{}); err == nil { klog.V(2).InfoS(\"Validated CRI v1 runtime API\") } else if status.Code(err) == codes.Unimplemented { return fmt.Errorf(\"CRI v1 runtime API is not implemented for endpoint %q: %w\", endpoint, err) if _, err := r.runtimeClient.Version(ctx, &runtimeapi.VersionRequest{}); err != nil { return 
fmt.Errorf(\"validate CRI v1 runtime API for endpoint %q: %w\", endpoint, err) } klog.V(2).InfoS(\"Validated CRI v1 runtime API\") return nil }"} {"_id":"doc-en-kubernetes-f6ebfb124acae2bee7092d418f947188ed9235e33bae82de296d75f2046e847b","title":"","text":"return converted, nil } // Deep copy the list before we invoke the converter to ensure that if the converter does mutate the // list (which it shouldn't, but you never know), it doesn't have any impact. convertedList := list.DeepCopy() convertedList.SetAPIVersion(desiredAPIVersion) convertedObjects, err := c.converter.Convert(list, toGVK.GroupVersion()) if err != nil { return nil, fmt.Errorf(\"conversion for %v failed: %w\", in.GetObjectKind().GroupVersionKind(), err)"} {"_id":"doc-en-kubernetes-e23b5ada498d0d420e13c2db171eeb8ba4aa0432046b4dee1ebbc0dce6bd9119","title":"","text":"return nil, fmt.Errorf(\"conversion for %v returned %d objects, expected %d\", in.GetObjectKind().GroupVersionKind(), len(convertedObjects.Items), len(objectsToConvert)) } // start a deepcopy of the input and fill in the converted objects from the response at the right spots. // Fill in the converted objects from the response at the right spots. // The response list might be sparse because objects had the right version already. 
convertedList := list.DeepCopy() convertedList.SetAPIVersion(desiredAPIVersion) convertedIndex := 0 for i := range convertedList.Items { original := &convertedList.Items[i]"} {"_id":"doc-en-kubernetes-dde2b985fac44014badd51f6f748f1037273b347fd2d2055d69ea05dc9d4ccc3","title":"","text":"SourceObject: &unstructured.Unstructured{ Object: map[string]interface{}{ \"apiVersion\": \"example.com/v1\", \"metadata\": map[string]interface{}{}, \"other\": \"data\", \"kind\": \"foo\", },"} {"_id":"doc-en-kubernetes-87525da44b04e42da38b60aa4ac7fc43345bd9130cd3a76cc9b541bd58e299b6","title":"","text":"ExpectedObject: &unstructured.Unstructured{ Object: map[string]interface{}{ \"apiVersion\": \"example.com/v2\", \"metadata\": map[string]interface{}{}, \"other\": \"data\", \"kind\": \"foo\", },"} {"_id":"doc-en-kubernetes-c1ef26fabdb9ee6ff0b78660ef6f26ccfab9e8b79ce5d56101cd54763e3dc582","title":"","text":"{ Object: map[string]interface{}{ \"apiVersion\": \"example.com/v1\", \"metadata\": map[string]interface{}{}, \"kind\": \"foo\", \"other\": \"data\", },"} {"_id":"doc-en-kubernetes-42207d85f475963b3eb5f453c5cafb5d85333abf0eb5f168d6f298c0c5539227","title":"","text":"{ Object: map[string]interface{}{ \"apiVersion\": \"example.com/v1\", \"metadata\": map[string]interface{}{}, \"kind\": \"foo\", \"other\": \"data2\", },"} {"_id":"doc-en-kubernetes-0403aea06cfd3e8c136caaa2b3e63f13b2dab2e203359045f6e3fc7ec6d71f36","title":"","text":"{ Object: map[string]interface{}{ \"apiVersion\": \"example.com/v2\", \"metadata\": map[string]interface{}{}, \"kind\": \"foo\", \"other\": \"data\", },"} {"_id":"doc-en-kubernetes-3ef43cfc5d9eeeb497df17bf0f02643f4d067a7300d6adca0a0ef6708565aed4","title":"","text":"{ Object: map[string]interface{}{ \"apiVersion\": \"example.com/v2\", \"metadata\": map[string]interface{}{}, \"kind\": \"foo\", \"other\": \"data2\", },"} {"_id":"doc-en-kubernetes-a05131c5aa70c71e56cd8f7f32e793214b781d471b1d3ea80e2f64604e63dbe1","title":"","text":"}) } } func 
TestConverterMutatesInput(t *testing.T) { testCRD := apiextensionsv1.CustomResourceDefinition{ Spec: apiextensionsv1.CustomResourceDefinitionSpec{ Conversion: &apiextensionsv1.CustomResourceConversion{ Strategy: apiextensionsv1.NoneConverter, }, Group: \"test.k8s.io\", Versions: []apiextensionsv1.CustomResourceDefinitionVersion{ { Name: \"v1alpha1\", Served: true, }, { Name: \"v1alpha2\", Served: true, }, }, }, } safeConverter, _, err := NewDelegatingConverter(&testCRD, &inputMutatingConverter{}) if err != nil { t.Fatalf(\"Cannot create converter: %v\", err) } input := &unstructured.UnstructuredList{ Object: map[string]interface{}{ \"apiVersion\": \"test.k8s.io/v1alpha1\", }, Items: []unstructured.Unstructured{ { Object: map[string]interface{}{ \"apiVersion\": \"test.k8s.io/v1alpha1\", \"metadata\": map[string]interface{}{ \"name\": \"item1\", }, }, }, { Object: map[string]interface{}{ \"apiVersion\": \"test.k8s.io/v1alpha1\", \"metadata\": map[string]interface{}{ \"name\": \"item2\", }, }, }, }, } toVersion, _ := schema.ParseGroupVersion(\"test.k8s.io/v1alpha2\") toVersions := schema.GroupVersions{toVersion} converted, err := safeConverter.ConvertToVersion(input, toVersions) if err != nil { t.Fatalf(\"unexpected error: %v\", err) } convertedList := converted.(*unstructured.UnstructuredList) if e, a := 2, len(convertedList.Items); e != a { t.Fatalf(\"length: expected %d, got %d\", e, a) } } type inputMutatingConverter struct{} func (i *inputMutatingConverter) Convert(in *unstructured.UnstructuredList, targetGVK schema.GroupVersion) (*unstructured.UnstructuredList, error) { out := &unstructured.UnstructuredList{} for _, obj := range in.Items { u := obj.DeepCopy() u.SetAPIVersion(targetGVK.String()) out.Items = append(out.Items, *u) } in.Items = nil return out, nil } "} {"_id":"doc-en-kubernetes-8ba0f29a2302e8ad4ef036950ff44fde7ff0dc861bb3d2ddf4377e654be935fa","title":"","text":"ginkgo.It(\"should log default container if not specified\", func(ctx context.Context) { 
ginkgo.By(\"Waiting for log generator to start.\") if !e2epod.CheckPodsRunningReadyOrSucceeded(ctx, c, ns, []string{podName}, framework.PodStartTimeout) { framework.Failf(\"Pod %s was not ready\", podName) // we need to wait for pod completion, to check the generated number of lines if err := e2epod.WaitForPodSuccessInNamespaceTimeout(ctx, c, podName, ns, framework.PodStartTimeout); err != nil { framework.Failf(\"Pod %s did not finish: %v\", podName, err) } ginkgo.By(\"specified container log lines\")"} {"_id":"doc-en-kubernetes-24790636d336cd0b8f75346ce85d4473283bc8eab2818e608e713da56d5399c1","title":"","text":"kube::golang::build_some_binaries \"${nonstatics[@]}\" fi # Workaround for https://github.com/kubernetes/kubernetes/issues/115675 local testgogcflags=\"${gogcflags}\" local testgoexperiment=\"${GOEXPERIMENT:-}\" if [[ \"$(go version)\" == *\"go1.20\"* && \"${platform}\" == \"linux/arm\" ]]; then # work around https://github.com/golang/go/issues/58339 until fixed in go1.20.x testgogcflags=\"${testgogcflags} -d=inlstaticinit=0\" # work around https://github.com/golang/go/issues/58425 testgoexperiment=\"nounified,${testgoexperiment}\" kube::log::info \"Building test binaries with GOEXPERIMENT=${testgoexperiment} GCFLAGS=${testgogcflags} ASMFLAGS=${goasmflags} LDFLAGS=${goldflags}\" fi for test in \"${tests[@]:+${tests[@]}}\"; do local outfile testpkg outfile=$(kube::golang::outfile_for_binary \"${test}\" \"${platform}\") testpkg=$(dirname \"${test}\") mkdir -p \"$(dirname \"${outfile}\")\" go test -c GOEXPERIMENT=\"${testgoexperiment}\" go test -c ${goflags:+\"${goflags[@]}\"} -gcflags=\"${gogcflags}\" -gcflags=\"${testgogcflags}\" -asmflags=\"${goasmflags}\" -ldflags=\"${goldflags}\" -tags=\"${gotags:-}\" "} {"_id":"doc-en-kubernetes-e663e0c65a3891a0c90abbb0e97decdb9f173ed7d25db5edaf86ac6925e1c3c9","title":"","text":"msg, )) } // warn if labelSelector is empty which is no-match. 
if t.LabelSelector == nil { warnings = append(warnings, fmt.Sprintf(\"%s: a null labelSelector results in matching no pod\", fieldPath.Child(\"spec\", \"topologySpreadConstraints\").Index(i).Child(\"labelSelector\"))) } } // use of deprecated annotations"} {"_id":"doc-en-kubernetes-66a5ffea1ce2aaf5fa6fdf309125a07d35569e3d06416d7ac0d00027fabdeea5","title":"","text":"if podSpec.TerminationGracePeriodSeconds != nil && *podSpec.TerminationGracePeriodSeconds < 0 { warnings = append(warnings, fmt.Sprintf(\"%s: must be >= 0; negative values are invalid and will be treated as 1\", fieldPath.Child(\"spec\", \"terminationGracePeriodSeconds\"))) } if podSpec.Affinity != nil { if affinity := podSpec.Affinity.PodAffinity; affinity != nil { warnings = append(warnings, warningsForPodAffinityTerms(affinity.RequiredDuringSchedulingIgnoredDuringExecution, fieldPath.Child(\"spec\", \"affinity\", \"podAffinity\", \"requiredDuringSchedulingIgnoredDuringExecution\"))...) warnings = append(warnings, warningsForWeightedPodAffinityTerms(affinity.PreferredDuringSchedulingIgnoredDuringExecution, fieldPath.Child(\"spec\", \"affinity\", \"podAffinity\", \"preferredDuringSchedulingIgnoredDuringExecution\"))...) } if affinity := podSpec.Affinity.PodAntiAffinity; affinity != nil { warnings = append(warnings, warningsForPodAffinityTerms(affinity.RequiredDuringSchedulingIgnoredDuringExecution, fieldPath.Child(\"spec\", \"affinity\", \"podAntiAffinity\", \"requiredDuringSchedulingIgnoredDuringExecution\"))...) warnings = append(warnings, warningsForWeightedPodAffinityTerms(affinity.PreferredDuringSchedulingIgnoredDuringExecution, fieldPath.Child(\"spec\", \"affinity\", \"podAntiAffinity\", \"preferredDuringSchedulingIgnoredDuringExecution\"))...) 
} } return warnings } func warningsForPodAffinityTerms(terms []api.PodAffinityTerm, fieldPath *field.Path) []string { var warnings []string for i, t := range terms { if t.LabelSelector == nil { warnings = append(warnings, fmt.Sprintf(\"%s: a null labelSelector results in matching no pod\", fieldPath.Index(i).Child(\"labelSelector\"))) } } return warnings } func warningsForWeightedPodAffinityTerms(terms []api.WeightedPodAffinityTerm, fieldPath *field.Path) []string { var warnings []string for i, t := range terms { // warn if labelSelector is empty which is no-match. if t.PodAffinityTerm.LabelSelector == nil { warnings = append(warnings, fmt.Sprintf(\"%s: a null labelSelector results in matching no pod\", fieldPath.Index(i).Child(\"podAffinityTerm\", \"labelSelector\"))) } } return warnings }"} {"_id":"doc-en-kubernetes-3098c9875f9adf66f3267994dc33650fc0979076a3959d8b7d3577ec8a97201c","title":"","text":"template: &api.PodTemplateSpec{ Spec: api.PodSpec{ TopologySpreadConstraints: []api.TopologySpreadConstraint{ {TopologyKey: `foo`}, {TopologyKey: `beta.kubernetes.io/arch`}, {TopologyKey: `beta.kubernetes.io/os`}, {TopologyKey: `failure-domain.beta.kubernetes.io/region`}, {TopologyKey: `failure-domain.beta.kubernetes.io/zone`}, {TopologyKey: `beta.kubernetes.io/instance-type`}, { TopologyKey: `foo`, LabelSelector: &metav1.LabelSelector{}, }, { TopologyKey: `beta.kubernetes.io/arch`, LabelSelector: &metav1.LabelSelector{}, }, { TopologyKey: `beta.kubernetes.io/os`, LabelSelector: &metav1.LabelSelector{}, }, { TopologyKey: `failure-domain.beta.kubernetes.io/region`, LabelSelector: &metav1.LabelSelector{}, }, { TopologyKey: `failure-domain.beta.kubernetes.io/zone`, LabelSelector: &metav1.LabelSelector{}, }, { TopologyKey: `beta.kubernetes.io/instance-type`, LabelSelector: &metav1.LabelSelector{}, }, }, }, },"} {"_id":"doc-en-kubernetes-de5a4f4a8aeec010d4621a987e7b1632fdaab182209876eb68fb7324dc5722fe","title":"","text":"`spec.terminationGracePeriodSeconds: must be >= 0; 
negative values are invalid and will be treated as 1`, }, }, { name: \"null LabelSelector in topologySpreadConstraints\", template: &api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{}, Spec: api.PodSpec{ TopologySpreadConstraints: []api.TopologySpreadConstraint{ { LabelSelector: &metav1.LabelSelector{}, }, { LabelSelector: nil, }, }, }, }, expected: []string{ `spec.topologySpreadConstraints[1].labelSelector: a null labelSelector results in matching no pod`, }, }, { name: \"null LabelSelector in PodAffinity\", template: &api.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{}, Spec: api.PodSpec{ Affinity: &api.Affinity{ PodAffinity: &api.PodAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: []api.PodAffinityTerm{ { LabelSelector: &metav1.LabelSelector{}, }, { LabelSelector: nil, }, }, PreferredDuringSchedulingIgnoredDuringExecution: []api.WeightedPodAffinityTerm{ { PodAffinityTerm: api.PodAffinityTerm{ LabelSelector: &metav1.LabelSelector{}, }, }, { PodAffinityTerm: api.PodAffinityTerm{ LabelSelector: nil, }, }, }, }, PodAntiAffinity: &api.PodAntiAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: []api.PodAffinityTerm{ { LabelSelector: &metav1.LabelSelector{}, }, { LabelSelector: nil, }, }, PreferredDuringSchedulingIgnoredDuringExecution: []api.WeightedPodAffinityTerm{ { PodAffinityTerm: api.PodAffinityTerm{ LabelSelector: &metav1.LabelSelector{}, }, }, { PodAffinityTerm: api.PodAffinityTerm{ LabelSelector: nil, }, }, }, }, }, }, }, expected: []string{ `spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[1].labelSelector: a null labelSelector results in matching no pod`, `spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[1].podAffinityTerm.labelSelector: a null labelSelector results in matching no pod`, `spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[1].labelSelector: a null labelSelector results in matching no pod`, 
`spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[1].podAffinityTerm.labelSelector: a null labelSelector results in matching no pod`, }, }, } for _, tc := range testcases {"} {"_id":"doc-en-kubernetes-3d0a3c274c7013b51a541f560f2d817469406c3c81ae2e147d9a98f9f132d51d","title":"","text":"\"description\": \"EndpointConditions represents the current condition of an endpoint.\", \"properties\": { \"ready\": { \"description\": \"ready indicates that this endpoint is prepared to receive traffic, according to whatever system is managing the endpoint. A nil value indicates an unknown state. In most cases consumers should interpret this unknown state as ready. For compatibility reasons, ready should never be \"true\" for terminating endpoints, except when the normal readiness behavior is being explicitly overridden, for example when the associated Service has set the publishNotReadyAddresses flag.\", \"type\": \"boolean\" }, \"serving\": {"} {"_id":"doc-en-kubernetes-ec653db5f4b98e2ba57c79ad50752b8bac68208ea594dd3b7a8909e7a0d93631","title":"","text":"// according to whatever system is managing the endpoint. A nil value // indicates an unknown state. In most cases consumers should interpret this // unknown state as ready. For compatibility reasons, ready should never be // \"true\" for terminating endpoints, except when the normal readiness // behavior is being explicitly overridden, for example when the associated // Service has set the publishNotReadyAddresses flag. Ready *bool // serving is identical to ready except that it is set regardless of the"} {"_id":"doc-en-kubernetes-68a1dab516c4d853b8d2456f9a045d023fb304a2d04d30958fe9ab9ad34f210f","title":"","text":"Properties: map[string]spec.Schema{ \"ready\": { SchemaProps: spec.SchemaProps{ Description: \"ready indicates that this endpoint is prepared to receive traffic, according to whatever system is managing the endpoint. A nil value indicates an unknown state. In most cases consumers should interpret this unknown state as ready. For compatibility reasons, ready should never be \"true\" for terminating endpoints, except when the normal readiness behavior is being explicitly overridden, for example when the associated Service has set the publishNotReadyAddresses flag.\", Type: []string{\"boolean\"}, Format: \"\", },"} {"_id":"doc-en-kubernetes-0f352df96f8b08a686bf4b1e44c5b6c10bb8901e61d79f0a9eaa13965e697517","title":"","text":"// according to whatever system is managing the endpoint. A nil value // indicates an unknown state. In most cases consumers should interpret this // unknown state as ready. For compatibility reasons, ready should never be // \"true\" for terminating endpoints, except when the normal readiness // behavior is being explicitly overridden, for example when the associated // Service has set the publishNotReadyAddresses flag. // +optional optional bool ready = 1;"} {"_id":"doc-en-kubernetes-8ba03c88d9c899536faaa69d97aa3fd9785efe51bc1ffad0c697ff06346d42cd","title":"","text":"// according to whatever system is managing the endpoint. A nil value // indicates an unknown state. In most cases consumers should interpret this // unknown state as ready. For compatibility reasons, ready should never be // \"true\" for terminating endpoints, except when the normal readiness // behavior is being explicitly overridden, for example when the associated // Service has set the publishNotReadyAddresses flag. // +optional Ready *bool `json:\"ready,omitempty\" protobuf:\"bytes,1,name=ready\"`"} {"_id":"doc-en-kubernetes-474ef678c840409bb26d69788ddd281fadc0b62ba3417e9cdb6844f49f9da5fc","title":"","text":"var map_EndpointConditions = map[string]string{ \"\": \"EndpointConditions represents the current condition of an endpoint.\", \"ready\": \"ready indicates that this endpoint is prepared to receive traffic, according to whatever system is managing the endpoint. A nil value indicates an unknown state. In most cases consumers should interpret this unknown state as ready. For compatibility reasons, ready should never be \"true\" for terminating endpoints, except when the normal readiness behavior is being explicitly overridden, for example when the associated Service has set the publishNotReadyAddresses flag.\", \"serving\": \"serving is identical to ready except that it is set regardless of the terminating state of endpoints. This condition should be set to true for a ready endpoint that is terminating. If nil, consumers should defer to the ready condition.\", \"terminating\": \"terminating indicates that this endpoint is terminating. A nil value indicates an unknown state.
Consumers should interpret this unknown state to mean that the endpoint is not terminating.\", }"} {"_id":"doc-en-kubernetes-1aacb98fcd461517e38e2ba5f3a4002551aaf1d41e7145438f2f3715611177ea","title":"","text":"event := recorder.makeEvent(ref, annotations, eventtype, reason, message) event.Source = recorder.source event.ReportingInstance = recorder.source.Host event.ReportingController = recorder.source.Component // NOTE: events should be a non-blocking operation, but we also need to not // put this in a goroutine, otherwise we'll race to write to a closed channel // when we go to shut down this broadcaster. Just drop events if we get overloaded,"} {"_id":"doc-en-kubernetes-f9bd611d0196e5887517d4cda3f1a9c84657eaf7e320b98a559c02fcbc7b2d96","title":"","text":"APIVersion: \"v1\", FieldPath: \"spec.containers[2]\", }, Reason: \"Started\", Message: \"some verbose message: 1\", Source: v1.EventSource{Component: \"eventTest\"}, ReportingController: \"eventTest\", Count: 1, Type: v1.EventTypeNormal, }, expectLog: `Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"baz\", Name:\"foo\", UID:\"bar\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"spec.containers[2]\"}): type: 'Normal' reason: 'Started' some verbose message: 1`, expectUpdate: false,"} {"_id":"doc-en-kubernetes-b5b2a67d25871754b585e0b8d70fa2b2012a0b41d1d6133068bc07b8bbf4f719","title":"","text":"UID: \"bar\", APIVersion: \"v1\", }, Reason: \"Killed\", Message: \"some other verbose message: 1\", Source: v1.EventSource{Component: \"eventTest\"}, ReportingController: \"eventTest\", Count: 1, Type: v1.EventTypeNormal, }, expectLog: `Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"baz\", Name:\"foo\", UID:\"bar\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'Killed' some other verbose message: 1`, expectUpdate: false,"} {"_id":"doc-en-kubernetes-286445f587731ca9f2eada1ab69a3838bf480cccbfc9086589298747a05cca60","title":"","text":"APIVersion: \"v1\", FieldPath: \"spec.containers[2]\", }, Reason: \"Started\", Message: \"some verbose message: 1\", Source: v1.EventSource{Component: \"eventTest\"}, ReportingController: \"eventTest\", Count: 2, Type: v1.EventTypeNormal, }, expectLog: `Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"baz\", Name:\"foo\", UID:\"bar\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"spec.containers[2]\"}): type: 'Normal' reason: 'Started' some verbose message: 1`, expectUpdate: true,"} {"_id":"doc-en-kubernetes-d0cfe6b1d9ccbd44ecb01d3a3e7b750f694a528e324483c6e73b1fc5cbd60010","title":"","text":"APIVersion: \"v1\", FieldPath: \"spec.containers[3]\", }, Reason: \"Started\", Message: \"some verbose message: 1\", Source: v1.EventSource{Component: \"eventTest\"}, ReportingController: \"eventTest\", Count: 1, Type: v1.EventTypeNormal, }, expectLog: `Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"baz\", Name:\"foo\", UID:\"differentUid\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"spec.containers[3]\"}): type: 'Normal' reason: 'Started' some verbose message: 1`, expectUpdate: false,"} {"_id":"doc-en-kubernetes-efe7fe86f801a0f095c6b082221d93b9409a7d1d643606027f9d72ac11036866","title":"","text":"APIVersion: \"v1\", FieldPath: \"spec.containers[2]\", }, Reason: \"Started\", Message: \"some verbose message: 1\", Source: v1.EventSource{Component: \"eventTest\"}, ReportingController: \"eventTest\", Count: 3, Type: v1.EventTypeNormal, }, expectLog: `Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"baz\", Name:\"foo\", UID:\"bar\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"spec.containers[2]\"}): type: 'Normal' reason: 'Started' some verbose message: 1`, expectUpdate: true,"} {"_id":"doc-en-kubernetes-9da816cf695c4adaa97ece94c02192f5908a58d3b8b196827698f64368036224","title":"","text":"APIVersion: \"v1\", FieldPath: \"spec.containers[3]\", }, Reason: \"Stopped\", Message: \"some verbose message: 1\", Source: v1.EventSource{Component: \"eventTest\"}, ReportingController: \"eventTest\", Count: 1, Type: v1.EventTypeNormal, }, expectLog: `Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"baz\", Name:\"foo\", UID:\"differentUid\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"spec.containers[3]\"}): type: 'Normal' reason: 'Stopped' some verbose message: 1`, expectUpdate: false,"} {"_id":"doc-en-kubernetes-e5727c6e949267e3af7a6337170e707f885cf00253c3e724c1a2619e84ef2cef","title":"","text":"APIVersion: \"v1\", FieldPath: \"spec.containers[3]\", }, Reason: \"Stopped\", Message: \"some verbose message: 1\", Source: v1.EventSource{Component: \"eventTest\"}, ReportingController: \"eventTest\", Count: 2, Type: v1.EventTypeNormal, }, expectLog: `Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"baz\", Name:\"foo\", UID:\"differentUid\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"spec.containers[3]\"}): type: 'Normal' reason: 'Stopped' some verbose message: 1`, expectUpdate: true,"}
{"_id":"doc-en-kubernetes-12196467217c3f52922fc6d09a6fe74e4500bbe0f8c057c3c1f89c6f56ec3b22","title":"","text":"// Temp clear time stamps for comparison because actual values don't matter for comparison recvEvent.FirstTimestamp = expectedEvent.FirstTimestamp recvEvent.LastTimestamp = expectedEvent.LastTimestamp recvEvent.ReportingController = expectedEvent.ReportingController // Check that name has the right prefix. if n, en := recvEvent.Name, expectedEvent.Name; !strings.HasPrefix(n, en) { t.Errorf(\"%v - Name '%v' does not contain prefix '%v'\", messagePrefix, n, en)"} {"_id":"doc-en-kubernetes-3c744483a6f2e230791711870f0302d453174c5e0f21d08b0f155af860c0d3fd","title":"","text":"} for _, tc := range tests { tc := tc t.Run(tc.desc, func(t *testing.T) { fakeSystemBus := &fakeSystemDBus{} bus := DBusCon{"} {"_id":"doc-en-kubernetes-767018e3cdeebc7830a590adc9b5278cb06c6b4689f543ce6d59072588d56fc0","title":"","text":"outBuiltIn := e2ekubectl.RunKubectlOrDie(\"\", \"get\", \"nodes\", node.Name) ginkgo.By(fmt.Sprintf(\"calling kubectl get nodes %s --subresource=status\", node.Name)) outStatusSubresource := e2ekubectl.RunKubectlOrDie(\"\", \"get\", \"nodes\", node.Name, \"--subresource=status\") // Avoid comparing values of fields that might end up // changing between the two invocations of kubectl.
requiredOutput := [][]string{ {\"NAME\"}, {\"STATUS\"}, {\"ROLES\"}, {\"AGE\"}, {\"VERSION\"}, {node.Name}, // check for NAME {\"\"}, // avoid comparing STATUS {\"\"}, // avoid comparing ROLES {\"\"}, // avoid comparing AGE {node.Status.NodeInfo.KubeletVersion}, // check for VERSION } checkOutput(outBuiltIn, requiredOutput) checkOutput(outStatusSubresource, requiredOutput) }) }) })"} {"_id":"doc-en-kubernetes-63f6b4cdcc3623dff60b3a1e3db7a3a3138f421c46f0292a6be341af575da4bf","title":"","text":"}, }, { name: \"pod that could not start still has a pending update and is tracked in metrics\", wantErr: false, pods: []*v1.Pod{ staticPod(), }, prepareWorker: func(t *testing.T, w *podWorkers, records map[types.UID][]syncPodRecord) { // send a create of a static pod pod := staticPod() // block startup of the static pod due to full name collision w.startedStaticPodsByFullname[kubecontainer.GetPodFullName(pod)] = types.UID(\"2\") w.UpdatePod(UpdatePodOptions{ UpdateType: kubetypes.SyncPodCreate, StartTime: time.Unix(1, 0).UTC(), Pod: pod, }) drainAllWorkers(w) if _, ok := records[pod.UID]; ok { t.Fatalf(\"unexpected records: %#v\", records) } // pod worker is unaware of pod1 yet }, wantWorker: func(t *testing.T, w *podWorkers, records map[types.UID][]syncPodRecord) { uid := types.UID(\"1\") if len(w.podSyncStatuses) != 1 { t.Fatalf(\"unexpected sync statuses: %#v\", w.podSyncStatuses) } s, ok := w.podSyncStatuses[uid] if !ok || s.IsTerminationRequested() || s.IsTerminationStarted() || s.IsFinished() || s.IsWorking() || s.IsDeleted() || s.restartRequested || s.activeUpdate != nil || s.pendingUpdate == nil { t.Errorf(\"unexpected requested pod termination: %#v\", s) } // expect that no sync calls are made, since the pod doesn't ever start if actual, expected := records[uid], []syncPodRecord(nil); !reflect.DeepEqual(expected, actual) { t.Fatalf(\"unexpected pod sync records: %s\", cmp.Diff(expected, actual, cmp.AllowUnexported(syncPodRecord{}))) } }, expectMetrics: 
map[string]string{ metrics.DesiredPodCount.FQName(): `# HELP kubelet_desired_pods [ALPHA] The number of pods the kubelet is being instructed to run. static is true if the pod is not from the apiserver. # TYPE kubelet_desired_pods gauge kubelet_desired_pods{static=\"\"} 0 kubelet_desired_pods{static=\"true\"} 1 `, metrics.WorkingPodCount.FQName(): `# HELP kubelet_working_pods [ALPHA] Number of pods the kubelet is actually running, broken down by lifecycle phase, whether the pod is desired, orphaned, or runtime only (also orphaned), and whether the pod is static. An orphaned pod has been removed from local configuration or force deleted in the API and consumes resources that are not otherwise visible. # TYPE kubelet_working_pods gauge kubelet_working_pods{config=\"desired\",lifecycle=\"sync\",static=\"\"} 0 kubelet_working_pods{config=\"desired\",lifecycle=\"sync\",static=\"true\"} 1 kubelet_working_pods{config=\"desired\",lifecycle=\"terminated\",static=\"\"} 0 kubelet_working_pods{config=\"desired\",lifecycle=\"terminated\",static=\"true\"} 0 kubelet_working_pods{config=\"desired\",lifecycle=\"terminating\",static=\"\"} 0 kubelet_working_pods{config=\"desired\",lifecycle=\"terminating\",static=\"true\"} 0 kubelet_working_pods{config=\"orphan\",lifecycle=\"sync\",static=\"\"} 0 kubelet_working_pods{config=\"orphan\",lifecycle=\"sync\",static=\"true\"} 0 kubelet_working_pods{config=\"orphan\",lifecycle=\"terminated\",static=\"\"} 0 kubelet_working_pods{config=\"orphan\",lifecycle=\"terminated\",static=\"true\"} 0 kubelet_working_pods{config=\"orphan\",lifecycle=\"terminating\",static=\"\"} 0 kubelet_working_pods{config=\"orphan\",lifecycle=\"terminating\",static=\"true\"} 0 kubelet_working_pods{config=\"runtime_only\",lifecycle=\"sync\",static=\"unknown\"} 0 kubelet_working_pods{config=\"runtime_only\",lifecycle=\"terminated\",static=\"unknown\"} 0 kubelet_working_pods{config=\"runtime_only\",lifecycle=\"terminating\",static=\"unknown\"} 0 `, }, }, { name: \"pod that 
could not start and is not in config is force terminated without runtime during pod cleanup\", wantErr: false, terminatingErr: errors.New(\"unable to terminate\"),"} {"_id":"doc-en-kubernetes-6e40e5aec75e92b6f493a71bde00b4e67833acbb00f29c5f9953a1f389324311","title":"","text":"// or can be started, and updates the cached pod state so that downstream components can observe what the // pod worker goroutine is currently attempting to do. If ok is false, there is no available event. If any // of the boolean values is false, ensure the appropriate cleanup happens before returning. // // This method should ensure that either status.pendingUpdate is cleared and merged into status.activeUpdate, // or when a pod cannot be started status.pendingUpdate remains the same. Pods that have not been started // should never have an activeUpdate because that is exposed to downstream components on started pods. func (p *podWorkers) startPodSync(podUID types.UID) (ctx context.Context, update podWork, canStart, canEverStart, ok bool) { p.podLock.Lock() defer p.podLock.Unlock()"} {"_id":"doc-en-kubernetes-974d81673621e949b2174a966258faf76edbae1683e9d2d8165c353efffcc738","title":"","text":"klog.V(4).InfoS(\"Pod cannot start ever\", \"pod\", klog.KObj(update.Options.Pod), \"podUID\", podUID, \"updateType\", update.WorkType) return ctx, update, canStart, canEverStart, true case !canStart: // this is the only path we don't start the pod, so we need to put the change back in pendingUpdate status.pendingUpdate = &update.Options status.working = false klog.V(4).InfoS(\"Pod cannot start yet\", \"pod\", klog.KObj(update.Options.Pod), \"podUID\", podUID) return ctx, update, canStart, canEverStart, true"} {"_id":"doc-en-kubernetes-474cfd8dcd0f6c8309c869ec2a05ec4ac53fe293c4ecb917b74f40a483fed415","title":"","text":"State: status.WorkType(), Orphan: orphan, } if status.activeUpdate != nil && status.activeUpdate.Pod != nil { sync.HasConfig = true sync.Static = 
kubetypes.IsStaticPod(status.activeUpdate.Pod) switch { case status.activeUpdate != nil: if status.activeUpdate.Pod != nil { sync.HasConfig = true sync.Static = kubetypes.IsStaticPod(status.activeUpdate.Pod) } case status.pendingUpdate != nil: if status.pendingUpdate.Pod != nil { sync.HasConfig = true sync.Static = kubetypes.IsStaticPod(status.pendingUpdate.Pod) } } workers[uid] = sync }"} {"_id":"doc-en-kubernetes-1ac72de09af1145881f243319e62bb8c4d838f0e88ee72ecf86239eb4f6d2246","title":"","text":"// Iterate through all pods in desired state of world, and remove if they no // longer exist func (dswp *desiredStateOfWorldPopulator) findAndRemoveDeletedPods() { podsFromCache := make(map[volumetypes.UniquePodName]struct{}) for _, volumeToMount := range dswp.desiredStateOfWorld.GetVolumesToMount() { podsFromCache[volumetypes.UniquePodName(volumeToMount.Pod.UID)] = struct{}{} pod, podExists := dswp.podManager.GetPodByUID(volumeToMount.Pod.UID) if podExists {"} {"_id":"doc-en-kubernetes-7249c9ed7f82a2385e7647832912ab77f9e1470c9edd64e1b7c744fd2d348312","title":"","text":"dswp.deleteProcessedPod(volumeToMount.PodName) } // Clean up orphaned entries from processedPods dswp.pods.Lock() orphanedPods := make([]volumetypes.UniquePodName, 0, len(dswp.pods.processedPods)) for k := range dswp.pods.processedPods { if _, ok := podsFromCache[k]; !ok { orphanedPods = append(orphanedPods, k) } } dswp.pods.Unlock() for _, orphanedPod := range orphanedPods { uid := types.UID(orphanedPod) _, podExists := dswp.podManager.GetPodByUID(uid) if !podExists && dswp.podStateProvider.ShouldPodRuntimeBeRemoved(uid) { dswp.deleteProcessedPod(orphanedPod) } } podsWithError := dswp.desiredStateOfWorld.GetPodsWithErrors() for _, podName := range podsWithError { if _, podExists := dswp.podManager.GetPodByUID(types.UID(podName)); !podExists {"} {"_id":"doc-en-kubernetes-17b45a9a4146c1747102c7076bd161ed85de258b354895d7ffa2c65f35847a67","title":"","text":"t.Fatalf(\"Failed to remove pods from desired state 
of world since they no longer exist\") } // podWorker may call volume_manager WaitForUnmount() after we processed the pod in findAndRemoveDeletedPods() dswp.ReprocessPod(podName) dswp.findAndRemoveDeletedPods() // findAndRemoveDeletedPods() above must detect orphaned pod and delete it from the map if _, ok := dswp.pods.processedPods[podName]; ok { t.Fatalf(\"Failed to remove orphaned pods from internal map\") } volumeExists := dswp.desiredStateOfWorld.VolumeExists(expectedVolumeName, \"\" /* SELinuxContext */) if volumeExists { t.Fatalf("} {"_id":"doc-en-kubernetes-2302997c615d74634885f9fb7ee60f0f18e976544b325e927485a8aa0466467c","title":"","text":"// TODO: scope this to the kube-system namespace rbacv1helpers.NewRule(\"create\").Groups(coordinationGroup).Resources(\"leases\").RuleOrDie(), rbacv1helpers.NewRule(\"get\", \"update\").Groups(coordinationGroup).Resources(\"leases\").Names(\"kube-scheduler\").RuleOrDie(), // TODO: Remove once we fully migrate to lease in leader-election. rbacv1helpers.NewRule(\"create\").Groups(legacyGroup).Resources(\"endpoints\").RuleOrDie(), rbacv1helpers.NewRule(\"get\", \"update\").Groups(legacyGroup).Resources(\"endpoints\").Names(\"kube-scheduler\").RuleOrDie(), // Fundamental resources rbacv1helpers.NewRule(Read...).Groups(legacyGroup).Resources(\"nodes\").RuleOrDie(),"} {"_id":"doc-en-kubernetes-5c303e630474962f9bb41b1adfe059880df754a1569c92cf11eca715f9c439d7","title":"","text":"- apiGroups: - \"\" resources: - endpoints verbs: - create - apiGroups: - \"\" resourceNames: - kube-scheduler resources: - endpoints verbs: - get - update - apiGroups: - \"\" resources: - nodes verbs: - get"} {"_id":"doc-en-kubernetes-4a242efb49069a471e25bff67e50c5ce6f8cc22726f3c8c9dbcca077d79c5194","title":"","text":"Image: imageutils.GetE2EImage(imageutils.BusyBox), Command: []string{\"sh\", \"-c\"}, Args: []string{` sleep 9999999 & PID=$! 
_term() { kill $PID echo \"Caught SIGTERM signal!\" exit 0 } trap _term SIGTERM while true; do sleep 5; done wait $PID exit 0 `, }, },"} {"_id":"doc-en-kubernetes-67bef338da6642ea46ee656e5c8d9cddf87eb29b70a8199d8e8bb873174755e7","title":"","text":"} return priority } // getGracePeriodOverrideTestPod returns a new Pod object containing a container that // runs a shell script and hangs the process until a SIGTERM signal is received. // The script waits on $PID so that the process does not exit prematurely. // If priorityClassName is scheduling.SystemNodeCritical, the Pod is marked as critical and a comment is added. func getGracePeriodOverrideTestPod(name string, node string, gracePeriod int64, priorityClassName string) *v1.Pod { pod := &v1.Pod{ TypeMeta: metav1.TypeMeta{"} {"_id":"doc-en-kubernetes-727aeaed32e48daa907513b833e33297d9151ce0a8c710c43c0516a217d7b568","title":"","text":"Image: busyboxImage, Command: []string{\"sh\", \"-c\"}, Args: []string{` _term() { echo \"Caught SIGTERM signal!\" while true; do sleep 5; done } trap _term SIGTERM while true; do sleep 5; done `}, sleep 9999999 & PID=$! _term() { echo \"Caught SIGTERM signal!\" wait $PID } trap _term SIGTERM wait $PID `}, }, }, TerminationGracePeriodSeconds: &gracePeriod,"} {"_id":"doc-en-kubernetes-bf89a1c2ba2838f819785a6e44e79106e4e377e0169945cac9ad7ae05e11acf6","title":"","text":"\"fmt\" \"net\" \"net/url\" \"os\" \"path/filepath\" \"strings\" \"syscall\""} {"_id":"doc-en-kubernetes-7a7474227f63a81b5699bcc38053e2da10c8d2aa2776d42a3c57022754ca9891","title":"","text":"// does NOT work in 1809 if the socket file is created within a bind mounted directory by a container // and the FSCTL is issued in the host by the kubelet. // If the file does not exist, it cannot be a Unix domain socket. if _, err := os.Stat(filePath); os.IsNotExist(err) { return false, fmt.Errorf(\"File %s not found. 
Err: %v\", filePath, err) } klog.V(6).InfoS(\"Function IsUnixDomainSocket starts\", \"filePath\", filePath) // As detailed in https://github.com/kubernetes/kubernetes/issues/104584 we cannot rely // on the Unix Domain socket working on the very first try, hence the potential need to"} {"_id":"doc-en-kubernetes-5c6d6967756cfbc2a4991d78056d2ab9e665e8f4b6d88da81e46507e930812f0","title":"","text":"import ( \"fmt\" \"math/rand\" \"net\" \"os\" \"reflect\" \"runtime\" \"sync\""} {"_id":"doc-en-kubernetes-99e534531cdb36136cf541a104bf771ad29de6175c86df88a8f0da26c9086056","title":"","text":"// the PATCH end-point. Returns true if the query param is supported by the // spec for the passed GVK; false otherwise. func supportsQueryParamV3(doc *spec3.OpenAPI, gvk schema.GroupVersionKind, queryParam VerifiableQueryParam) bool { if doc == nil || doc.Paths == nil { return false } for _, path := range doc.Paths.Paths { // If operation is not PATCH, then continue. op := path.PathProps.Patch"} {"_id":"doc-en-kubernetes-f4796f9bc661004f167851a7e86d6bf70d421cafeaf7278cb6d6141a719b82c0","title":"","text":"\"k8s.io/client-go/openapi/cached\" \"k8s.io/client-go/openapi/openapitest\" \"k8s.io/client-go/openapi3\" \"k8s.io/kube-openapi/pkg/spec3\" ) func TestV3SupportsQueryParamBatchV1(t *testing.T) {"} {"_id":"doc-en-kubernetes-ff6569ddb5fd1e308943d55c03c3e84018fe1d3808f390f6804c94a2afd07842","title":"","text":"}) } } func TestInvalidOpenAPIV3Document(t *testing.T) { tests := map[string]struct { spec *spec3.OpenAPI }{ \"nil document correctly returns Unsupported error\": { spec: nil, }, \"empty document correctly returns Unsupported error\": { spec: &spec3.OpenAPI{}, }, \"minimal document correctly returns Unsupported error\": { spec: &spec3.OpenAPI{ Version: \"openapi 3.0.0\", Paths: nil, }, }, \"document with empty Paths correctly returns Unsupported error\": { spec: &spec3.OpenAPI{ Version: \"openapi 3.0.0\", Paths: &spec3.Paths{}, }, }, } gvk := schema.GroupVersionKind{ Group: 
\"batch\", Version: \"v1\", Kind: \"Job\", } for tn, tc := range tests { t.Run(tn, func(t *testing.T) { verifier := &queryParamVerifierV3{ finder: NewCRDFinder(func() ([]schema.GroupKind, error) { return []schema.GroupKind{}, nil }), root: &fakeRoot{tc.spec}, queryParam: QueryParamFieldValidation, } err := verifier.HasSupport(gvk) if err == nil { t.Errorf(\"Expected not supports error, but none received.\") } }) } } // fakeRoot implements Root interface; manually specifies the returned OpenAPI V3 document. type fakeRoot struct { spec *spec3.OpenAPI } func (f *fakeRoot) GroupVersions() ([]schema.GroupVersion, error) { // Unused return nil, nil } // GVSpec returns hard-coded OpenAPI V3 document. func (f *fakeRoot) GVSpec(gv schema.GroupVersion) (*spec3.OpenAPI, error) { return f.spec, nil } func (f *fakeRoot) GVSpecAsMap(gv schema.GroupVersion) (map[string]interface{}, error) { // Unused return nil, nil } "} {"_id":"doc-en-kubernetes-c46f9c38dc4b02afbfbf1d67c7ae6fb887ccabf6e0aa933110eb56cd3d34498a","title":"","text":"// back to the queue multiple times before it's successfully scheduled. // It shouldn't be updated once initialized. It's used to record the e2e scheduling // latency for a pod. InitialAttemptTimestamp time.Time InitialAttemptTimestamp *time.Time // If a Pod failed in a scheduling cycle, record the plugin names it failed by. 
UnschedulablePlugins sets.Set[string] // Whether the Pod is scheduling gated (by PreEnqueuePlugins) or not."} {"_id":"doc-en-kubernetes-c80195a7c273d4dca74f109501d960d2fdd46e076535c1a5410d649ad1f918a1","title":"","text":"p.unschedulablePods.addOrUpdate(pInfo) return false, nil } if pInfo.InitialAttemptTimestamp == nil { now := p.clock.Now() pInfo.InitialAttemptTimestamp = &now } if err := p.activeQ.Add(pInfo); err != nil { logger.Error(err, \"Error adding pod to the active queue\", \"pod\", klog.KObj(pInfo.Pod)) return false, err"} {"_id":"doc-en-kubernetes-9883ae5aee85f0faa16adb970e767a538539c1b53c0739ba4383702fe31753cb","title":"","text":"return &framework.QueuedPodInfo{ PodInfo: podInfo, Timestamp: now, InitialAttemptTimestamp: now, InitialAttemptTimestamp: nil, UnschedulablePlugins: sets.New(plugins...), } }"} {"_id":"doc-en-kubernetes-700dfa5a18592c641c98782cccc0e4631069bf540c2df43eae10d956fdc85111","title":"","text":"t.Fatalf(\"Failed to pop a pod %v\", err) } checkPerPodSchedulingMetrics(\"Attempt twice with update\", t, pInfo, 2, timestamp) // Case 4: A gated pod is created and scheduled after lifting gate. The queue operations are // Add gated pod -> check unschedulablePods -> lift gate & update pod -> Pop. c = testingclock.NewFakeClock(timestamp) // Create a queue with PreEnqueuePlugin m := map[string][]framework.PreEnqueuePlugin{\"\": {&preEnqueuePlugin{allowlists: []string{\"foo\"}}}} queue = NewTestQueue(ctx, newDefaultQueueSort(), WithClock(c), WithPreEnqueuePluginMap(m), WithPluginMetricsSamplePercent(0)) // Create a pod without PreEnqueuePlugin label. gatedPod := st.MakePod().Name(\"gated-test-pod\").Namespace(\"test-ns\").UID(\"test-uid\").Obj() err = queue.Add(logger, gatedPod) if err != nil { t.Fatalf(\"Failed to add a pod %v\", err) } // Check pod is added to the unschedulablePods queue. 
if getUnschedulablePod(queue, gatedPod) != gatedPod { t.Errorf(\"Pod %v was not found in the unschedulablePods.\", gatedPod.Name) } // Override clock to get different InitialAttemptTimestamp c.Step(1 * time.Minute) // Update pod with the required label to get it out of unschedulablePods queue. updateGatedPod := gatedPod.DeepCopy() updateGatedPod.Labels = map[string]string{\"foo\": \"\"} queue.Update(logger, gatedPod, updateGatedPod) pInfo, err = queue.Pop() if err != nil { t.Fatalf(\"Failed to pop a pod %v\", err) } checkPerPodSchedulingMetrics(\"Attempt once/gated\", t, pInfo, 1, timestamp.Add(1*time.Minute)) } func TestIncomingPodsMetrics(t *testing.T) {"} {"_id":"doc-en-kubernetes-9739c1fa7417db6258a2cadf6e63760efaf6c57f2ec886d10627e6b239fd417e","title":"","text":"if pInfo.Attempts != wantAttempts { t.Errorf(\"[%s] Pod schedule attempt unexpected, got %v, want %v\", name, pInfo.Attempts, wantAttempts) } if pInfo.InitialAttemptTimestamp != wantInitialAttemptTs { t.Errorf(\"[%s] Pod initial schedule attempt timestamp unexpected, got %v, want %v\", name, pInfo.InitialAttemptTimestamp, wantInitialAttemptTs) if *pInfo.InitialAttemptTimestamp != wantInitialAttemptTs { t.Errorf(\"[%s] Pod initial schedule attempt timestamp unexpected, got %v, want %v\", name, *pInfo.InitialAttemptTimestamp, wantInitialAttemptTs) } }"} {"_id":"doc-en-kubernetes-38101ba1d0a72c9df9947eb5c8cb278b7cb03b0ea17b5c5f882c647a5f45e85a","title":"","text":"logger.V(2).Info(\"Successfully bound pod to node\", \"pod\", klog.KObj(assumedPod), \"node\", scheduleResult.SuggestedHost, \"evaluatedNodes\", scheduleResult.EvaluatedNodes, \"feasibleNodes\", scheduleResult.FeasibleNodes) metrics.PodScheduled(fwk.ProfileName(), metrics.SinceInSeconds(start)) metrics.PodSchedulingAttempts.Observe(float64(assumedPodInfo.Attempts)) metrics.PodSchedulingDuration.WithLabelValues(getAttemptsLabel(assumedPodInfo)).Observe(metrics.SinceInSeconds(assumedPodInfo.InitialAttemptTimestamp)) if 
assumedPodInfo.InitialAttemptTimestamp != nil { metrics.PodSchedulingDuration.WithLabelValues(getAttemptsLabel(assumedPodInfo)).Observe(metrics.SinceInSeconds(*assumedPodInfo.InitialAttemptTimestamp)) } // Run \"postbind\" plugins. fwk.RunPostBindPlugins(ctx, state, assumedPod, scheduleResult.SuggestedHost)"} {"_id":"doc-en-kubernetes-45fee3bfc08af62be5d24cd2722e2016f6c08ac4378d0c9b3df4d14a51eba8b1","title":"","text":" #!/bin/bash # Copyright 2014 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Bring up a Kubernetes cluster. # Usage: # wget -q -O - https://get.k8s.io | sh # or # curl -sS https://get.k8s.io | sh # # Advanced options # Set KUBERNETES_PROVIDER to choose between different providers: # Google Compute Engine [default] # * export KUBERNETES_PROVIDER=gce; wget -q -O - https://get.k8s.io | sh # Amazon EC2 # * export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | sh # Microsoft Azure # * export KUBERNETES_PROVIDER=azure; wget -q -O - https://get.k8s.io | sh # Vagrant (local virtual machines) # * export KUBERNETES_PROVIDER=vagrant; wget -q -O - https://get.k8s.io | sh # VMWare VSphere # * export KUBERNETES_PROVIDER=vsphere; wget -q -O - https://get.k8s.io | sh # Rackspace # * export KUBERNETES_PROVIDER=rackspace; wget -q -O - https://get.k8s.io | sh # # Set KUBERNETES_SKIP_DOWNLOAD to non-empty to skip downloading a release. # Set KUBERNETES_SKIP_CONFIRM to skip the installation confirmation prompt. 
set -o errexit set -o nounset set -o pipefail function create-cluster { echo \"Creating a Kubernetes cluster on ${KUBERNETES_PROVIDER:-gce}...\" ( cd kubernetes ./cluster/kube-up.sh echo \"Kubernetes binaries at ${PWD}/cluster/\" echo \"You may want to add this directory to your PATH in $HOME/.profile\" echo \"Installation successful!\" ) } if [[ \"${KUBERNETES_SKIP_DOWNLOAD-}\" ]]; then create-cluster exit 0 fi release=v0.7.2 release_url=https://storage.googleapis.com/kubernetes-release/release/${release}/kubernetes.tar.gz uname=$(uname) if [[ \"${uname}\" == \"Darwin\" ]]; then platform=\"darwin\" elif [[ \"${uname}\" == \"Linux\" ]]; then platform=\"linux\" else echo \"Unknown, unsupported platform: (${uname}).\" echo \"Supported platforms: Linux, Darwin.\" echo \"Bailing out.\" exit 2 fi machine=$(uname -m) if [[ \"${machine}\" == \"x86_64\" ]]; then arch=\"amd64\" elif [[ \"${machine}\" == \"i686\" ]]; then arch=\"386\" elif [[ \"${machine}\" == arm* ]]; then arch=\"arm\" else echo \"Unknown, unsupported architecture (${machine}).\" echo \"Supported architectures x86_64, i686, arm*\" echo \"Bailing out.\" exit 3 fi file=kubernetes.tar.gz echo \"Downloading kubernetes release ${release} to ${PWD}/kubernetes.tar.gz\" if [[ -z \"${KUBERNETES_SKIP_CONFIRM-}\" ]]; then echo \"Is this ok? [Y]/n\" read confirm if [[ \"$confirm\" == \"n\" ]]; then echo \"Aborting.\" exit 0 fi fi if [[ $(which wget) ]]; then wget -O ${file} ${release_url} elif [[ $(which curl) ]]; then curl -L -o ${file} ${release_url} else echo \"Couldn't find curl or wget. 
Bailing out.\" exit 1 fi echo \"Unpacking kubernetes release ${release}\" tar -xzf ${file} rm ${file} create-cluster "} {"_id":"doc-en-kubernetes-e0e53f532102924fc5ada3fcfcf1b086bc16eb7d9d66146b0724830b772af52d","title":"","text":"}) err = clientretry.RetryOnConflict(UpdateNodeSpecBackoff, func() error { var curNode *v1.Node if cnc.cloud.ProviderName() == \"gce\" { // TODO(wlan0): Move this logic to the route controller using the node taint instead of condition // Since there are node taints, do we still need this? // This condition marks the node as unusable until routes are initialized in the cloud provider if err := nodeutil.SetNodeCondition(cnc.kubeClient, types.NodeName(nodeName), v1.NodeCondition{ Type: v1.NodeNetworkUnavailable, Status: v1.ConditionTrue, Reason: \"NoRouteCreated\", Message: \"Node created without a route\", LastTransitionTime: metav1.Now(), }); err != nil { return err } // fetch latest node from API server since GCE-specific condition was set and informer cache may be stale curNode, err = cnc.kubeClient.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{}) if err != nil { return err } } else { curNode, err = cnc.nodeInformer.Lister().Get(nodeName) if err != nil { return err } curNode, err = cnc.nodeInformer.Lister().Get(nodeName) if err != nil { return err } newNode := curNode.DeepCopy()"} {"_id":"doc-en-kubernetes-5ef55e7c72186ffac2352dd3d40f297d6308fc5eca7857dd420fe6e9e72a343f","title":"","text":"} } // test syncNode with instanceV2, same test case with TestGCECondition. 
func TestGCEConditionV2(t *testing.T) { existingNode := &v1.Node{ ObjectMeta: metav1.ObjectMeta{ Name: \"node0\", CreationTimestamp: metav1.Date(2012, 1, 1, 0, 0, 0, 0, time.UTC), }, Status: v1.NodeStatus{ Conditions: []v1.NodeCondition{ { Type: v1.NodeReady, Status: v1.ConditionUnknown, LastHeartbeatTime: metav1.Date(2015, 1, 1, 12, 0, 0, 0, time.UTC), LastTransitionTime: metav1.Date(2015, 1, 1, 12, 0, 0, 0, time.UTC), }, }, }, Spec: v1.NodeSpec{ Taints: []v1.Taint{ { Key: cloudproviderapi.TaintExternalCloudProvider, Value: \"true\", Effect: v1.TaintEffectNoSchedule, }, }, }, } fakeCloud := &fakecloud.Cloud{ EnableInstancesV2: true, InstanceTypes: map[types.NodeName]string{ types.NodeName(\"node0\"): \"t1.micro\", }, ProviderID: map[types.NodeName]string{ types.NodeName(\"node0\"): \"fake://12334\", }, Addresses: []v1.NodeAddress{ { Type: v1.NodeHostName, Address: \"node0.cloud.internal\", }, { Type: v1.NodeInternalIP, Address: \"10.0.0.1\", }, { Type: v1.NodeExternalIP, Address: \"132.143.154.163\", }, }, Provider: \"gce\", Err: nil, } clientset := fake.NewSimpleClientset(existingNode) factory := informers.NewSharedInformerFactory(clientset, 0) eventBroadcaster := record.NewBroadcaster() cloudNodeController := &CloudNodeController{ kubeClient: clientset, nodeInformer: factory.Core().V1().Nodes(), nodesLister: factory.Core().V1().Nodes().Lister(), cloud: fakeCloud, recorder: eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: \"cloud-node-controller\"}), nodeStatusUpdateFrequency: 1 * time.Second, } stopCh := make(chan struct{}) defer close(stopCh) factory.Start(stopCh) factory.WaitForCacheSync(stopCh) w := eventBroadcaster.StartLogging(klog.Infof) defer w.Stop() cloudNodeController.syncNode(context.TODO(), existingNode.Name) updatedNode, err := clientset.CoreV1().Nodes().Get(context.TODO(), existingNode.Name, metav1.GetOptions{}) if err != nil { t.Fatalf(\"error getting updated nodes: %v\", err) } conditionAdded := false for _, cond := range 
updatedNode.Status.Conditions { if cond.Status == \"True\" && cond.Type == \"NetworkUnavailable\" && cond.Reason == \"NoRouteCreated\" { conditionAdded = true } } assert.True(t, conditionAdded, \"Network Route Condition for GCE not added by external cloud initializer\") } // This test checks that a node with the external cloud provider taint is cloudprovider initialized and // the GCE route condition is added if cloudprovider is GCE func TestGCECondition(t *testing.T) { existingNode := &v1.Node{ ObjectMeta: metav1.ObjectMeta{ Name: \"node0\", CreationTimestamp: metav1.Date(2012, 1, 1, 0, 0, 0, 0, time.UTC), }, Status: v1.NodeStatus{ Conditions: []v1.NodeCondition{ { Type: v1.NodeReady, Status: v1.ConditionUnknown, LastHeartbeatTime: metav1.Date(2015, 1, 1, 12, 0, 0, 0, time.UTC), LastTransitionTime: metav1.Date(2015, 1, 1, 12, 0, 0, 0, time.UTC), }, }, }, Spec: v1.NodeSpec{ Taints: []v1.Taint{ { Key: cloudproviderapi.TaintExternalCloudProvider, Value: \"true\", Effect: v1.TaintEffectNoSchedule, }, }, }, } fakeCloud := &fakecloud.Cloud{ EnableInstancesV2: false, InstanceTypes: map[types.NodeName]string{ types.NodeName(\"node0\"): \"t1.micro\", }, Addresses: []v1.NodeAddress{ { Type: v1.NodeHostName, Address: \"node0.cloud.internal\", }, { Type: v1.NodeInternalIP, Address: \"10.0.0.1\", }, { Type: v1.NodeExternalIP, Address: \"132.143.154.163\", }, }, Provider: \"gce\", Err: nil, } clientset := fake.NewSimpleClientset(existingNode) factory := informers.NewSharedInformerFactory(clientset, 0) eventBroadcaster := record.NewBroadcaster() cloudNodeController := &CloudNodeController{ kubeClient: clientset, nodeInformer: factory.Core().V1().Nodes(), nodesLister: factory.Core().V1().Nodes().Lister(), cloud: fakeCloud, recorder: eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: \"cloud-node-controller\"}), nodeStatusUpdateFrequency: 1 * time.Second, } stopCh := make(chan struct{}) defer close(stopCh) factory.Start(stopCh) factory.WaitForCacheSync(stopCh) w 
:= eventBroadcaster.StartLogging(klog.Infof) defer w.Stop() cloudNodeController.syncNode(context.TODO(), existingNode.Name) updatedNode, err := clientset.CoreV1().Nodes().Get(context.TODO(), existingNode.Name, metav1.GetOptions{}) if err != nil { t.Fatalf(\"error getting updated nodes: %v\", err) } conditionAdded := false for _, cond := range updatedNode.Status.Conditions { if cond.Status == \"True\" && cond.Type == \"NetworkUnavailable\" && cond.Reason == \"NoRouteCreated\" { conditionAdded = true } } assert.True(t, conditionAdded, \"Network Route Condition for GCE not added by external cloud initializer\") } func Test_reconcileNodeLabels(t *testing.T) { testcases := []struct { name string"} {"_id":"doc-en-kubernetes-eaaea2d6ff6c8bd60953a71a37cc668f06e41282243f4b8d042b66194cb72f31","title":"","text":"name: \"net.ipv4.tcp_keepalive_time\", // refer to https://github.com/torvalds/linux/commit/13b287e8d1cad951634389f85b8c9b816bd3bb1e. kernel: \"4.5\", }, { // refer to https://github.com/torvalds/linux/commit/1e579caa18b96f9eb18f4f5416658cd15f37c062. name: \"net.ipv4.tcp_fin_timeout\", kernel: \"4.6\", }, { // refer to https://github.com/torvalds/linux/commit/b840d15d39128d08ed4486085e5507d2617b9ae1. name: \"net.ipv4.tcp_keepalive_intvl\", kernel: \"4.5\", }, { // refer to https://github.com/torvalds/linux/commit/9bd6861bd4326e3afd3f14a9ec8a723771fb20bb. 
name: \"net.ipv4.tcp_keepalive_probes\", kernel: \"4.5\", }, }"} {"_id":"doc-en-kubernetes-3616c13cf6f0820033356172eb6c21489043d12ad489306ac39c88e61ec87d72","title":"","text":"}, }, { name: \"kernelVersion is 5.15.0, return safeSysctls with no kernelVersion limit and net.ipv4.ip_local_reserved_ports and net.ipv4.tcp_keepalive_time\", name: \"kernelVersion is 5.15.0, return safeSysctls with no kernelVersion limit and kernelVersion below 5.15.0\", getVersion: func() (*version.Version, error) { kernelVersionStr := \"5.15.0-75-generic\" return version.ParseGeneric(kernelVersionStr)"} {"_id":"doc-en-kubernetes-055631af0db00abc99b90036008baaec6ec5de122865a8315a043972612c0e77","title":"","text":"\"net.ipv4.ip_unprivileged_port_start\", \"net.ipv4.ip_local_reserved_ports\", \"net.ipv4.tcp_keepalive_time\", \"net.ipv4.tcp_fin_timeout\", \"net.ipv4.tcp_keepalive_intvl\", \"net.ipv4.tcp_keepalive_probes\", }, }, }"} {"_id":"doc-en-kubernetes-d2ce1c1799cd74bb499f84923b0ee3ccbfe772d199fc9f700b749d24cfc0387e","title":"","text":"'net.ipv4.ip_unprivileged_port_start' 'net.ipv4.ip_local_reserved_ports' 'net.ipv4.tcp_keepalive_time' 'net.ipv4.tcp_fin_timeout' 'net.ipv4.tcp_keepalive_intvl' 'net.ipv4.tcp_keepalive_probes' */"} {"_id":"doc-en-kubernetes-dd73d8e9c2bc20d55eeb2795934164dbb6073e2977c7470298f3279f62029c81","title":"","text":"\"net.ipv4.ip_unprivileged_port_start\", \"net.ipv4.ip_local_reserved_ports\", \"net.ipv4.tcp_keepalive_time\", \"net.ipv4.tcp_fin_timeout\", \"net.ipv4.tcp_keepalive_intvl\", \"net.ipv4.tcp_keepalive_probes\", ) )"} {"_id":"doc-en-kubernetes-e74b8789a37068cae5ca2ea7bedea803efc2eab1e1a928ef16192e7344b95543","title":"","text":"expectReason: `forbidden sysctls`, expectDetail: `net.ipv4.tcp_keepalive_time`, }, { name: \"new supported sysctls not supported: net.ipv4.tcp_fin_timeout\", pod: &corev1.Pod{Spec: corev1.PodSpec{ SecurityContext: &corev1.PodSecurityContext{ Sysctls: []corev1.Sysctl{{Name: \"net.ipv4.tcp_fin_timeout\", Value: \"60\"}}, }, }}, 
allowed: false, expectReason: `forbidden sysctls`, expectDetail: `net.ipv4.tcp_fin_timeout`, }, { name: \"new supported sysctls not supported: net.ipv4.tcp_keepalive_intvl\", pod: &corev1.Pod{Spec: corev1.PodSpec{ SecurityContext: &corev1.PodSecurityContext{ Sysctls: []corev1.Sysctl{{Name: \"net.ipv4.tcp_keepalive_intvl\", Value: \"75\"}}, }, }}, allowed: false, expectReason: `forbidden sysctls`, expectDetail: `net.ipv4.tcp_keepalive_intvl`, }, { name: \"new supported sysctls not supported: net.ipv4.tcp_keepalive_probes\", pod: &corev1.Pod{Spec: corev1.PodSpec{ SecurityContext: &corev1.PodSecurityContext{ Sysctls: []corev1.Sysctl{{Name: \"net.ipv4.tcp_keepalive_probes\", Value: \"9\"}}, }, }}, allowed: false, expectReason: `forbidden sysctls`, expectDetail: `net.ipv4.tcp_keepalive_probes`, }, } for _, tc := range tests {"} {"_id":"doc-en-kubernetes-29c6d80b543a2e479c63ecf75c011fc72422d30d9cfdff99f30c4b9304bd9a7c","title":"","text":"expectDetail: `a, b`, }, { name: \"new supported sysctls\", name: \"new supported sysctls: net.ipv4.tcp_keepalive_time\", pod: &corev1.Pod{Spec: corev1.PodSpec{ SecurityContext: &corev1.PodSecurityContext{ Sysctls: []corev1.Sysctl{{Name: \"net.ipv4.tcp_keepalive_time\", Value: \"7200\"}},"} {"_id":"doc-en-kubernetes-410e8453b8f78f376ff7c17f6bb71bc2b4406102a86db877ef849b188d4e7e49","title":"","text":"}}, allowed: true, }, { name: \"new supported sysctls: net.ipv4.tcp_fin_timeout\", pod: &corev1.Pod{Spec: corev1.PodSpec{ SecurityContext: &corev1.PodSecurityContext{ Sysctls: []corev1.Sysctl{{Name: \"net.ipv4.tcp_fin_timeout\", Value: \"60\"}}, }, }}, allowed: true, }, { name: \"new supported sysctls: net.ipv4.tcp_keepalive_intvl\", pod: &corev1.Pod{Spec: corev1.PodSpec{ SecurityContext: &corev1.PodSecurityContext{ Sysctls: []corev1.Sysctl{{Name: \"net.ipv4.tcp_keepalive_intvl\", Value: \"75\"}}, }, }}, allowed: true, }, { name: \"new supported sysctls: net.ipv4.tcp_keepalive_probes\", pod: &corev1.Pod{Spec: corev1.PodSpec{ SecurityContext: 
&corev1.PodSecurityContext{ Sysctls: []corev1.Sysctl{{Name: \"net.ipv4.tcp_keepalive_probes\", Value: \"9\"}}, }, }}, allowed: true, }, } for _, tc := range tests {"} {"_id":"doc-en-kubernetes-e88bb8e866924064be3c717bb69ac1e0729dacf91e655d9b7c0314e24275deff","title":"","text":"{Name: \"net.ipv4.ip_unprivileged_port_start\", Value: \"1024\"}, {Name: \"net.ipv4.ip_local_reserved_ports\", Value: \"1024-4999\"}, {Name: \"net.ipv4.tcp_keepalive_time\", Value: \"7200\"}, {Name: \"net.ipv4.tcp_fin_timeout\", Value: \"60\"}, {Name: \"net.ipv4.tcp_keepalive_intvl\", Value: \"75\"}, {Name: \"net.ipv4.tcp_keepalive_probes\", Value: \"9\"}, } }), }"} {"_id":"doc-en-kubernetes-05f4cdc6a215a9c9f15c6b1e945dc34ca6fa0c9f51fb2404262ba685b2d8ceb0","title":"","text":"value: 1024-4999 - name: net.ipv4.tcp_keepalive_time value: \"7200\" - name: net.ipv4.tcp_fin_timeout value: \"60\" - name: net.ipv4.tcp_keepalive_intvl value: \"75\" - name: net.ipv4.tcp_keepalive_probes value: \"9\" "} {"_id":"doc-en-kubernetes-b764b8bdd87c0fd0cadc998bfb7011da12ae406dc8663fc57390324a38a93f84","title":"","text":"# See the License for the specific language governing permissions and # limitations under the License. 
ARG BASEIMAGE ARG RUNNERIMAGE FROM ${BASEIMAGE} as debbase FROM ${RUNNERIMAGE} # This is a dependency for `kubectl diff` tests COPY --from=debbase /usr/bin/diff /usr/local/bin/ COPY --from=debbase /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu COPY --from=debbase /lib/x86_64-linux-gnu/libpthread.so.0 /lib/x86_64-linux-gnu COPY --from=debbase /lib64/ld-linux-x86-64.so.2 /lib64 COPY cluster /kubernetes/cluster COPY ginkgo /usr/local/bin/ COPY e2e.test /usr/local/bin/"} {"_id":"doc-en-kubernetes-30bf2e3dfe34ad9826895b5c1732969b33de9f131a5b655ba62a86bac446a576","title":"","text":"# This is defined in root Makefile, but some build contexts do not refer to them KUBE_BASE_IMAGE_REGISTRY?=registry.k8s.io BASE_IMAGE_VERSION?=bookworm-v1.0.0 BASEIMAGE?=${KUBE_BASE_IMAGE_REGISTRY}/build-image/debian-base-${ARCH}:${BASE_IMAGE_VERSION} # Keep debian releases (e.g. debian 11 == bullseye) consistent # between BASE_IMAGE_VERSION and DISTROLESS_IMAGE images DISTROLESS_IMAGE?=base-debian11 RUNNERIMAGE?=gcr.io/distroless/${DISTROLESS_IMAGE}:latest RUNNERIMAGE?=${KUBE_BASE_IMAGE_REGISTRY}/build-image/debian-base-${ARCH}:${BASE_IMAGE_VERSION} TEMP_DIR:=$(shell mktemp -d -t conformance-XXXXXX)"} {"_id":"doc-en-kubernetes-88a5ea8c1f60f5376bf035069de4c9cc362ff1f96e1cb84e7e6a32b22fbffb59","title":"","text":"--load --pull -t ${REGISTRY}/conformance-${ARCH}:${VERSION} --build-arg BASEIMAGE=$(BASEIMAGE) --build-arg RUNNERIMAGE=$(RUNNERIMAGE) ${TEMP_DIR} rm -rf \"${TEMP_DIR}\""} {"_id":"doc-en-kubernetes-5b3e6d0001b3ec58a1e73cfb730932bccd76477c0b9dd11f0913464cd9780006","title":"","text":"allNodePluginScores := make([]framework.NodePluginScores, len(nodes)) numPlugins := len(f.scorePlugins) - state.SkipScorePlugins.Len() plugins := make([]framework.ScorePlugin, 0, numPlugins) pluginToNodeScores := make([]framework.NodeScoreList, numPlugins) pluginToNodeScores := make(map[string]framework.NodeScoreList, numPlugins) for _, pl := range f.scorePlugins { if 
state.SkipScorePlugins.Has(pl.Name()) { continue } plugins = append(plugins, pl) pluginToNodeScores[pl.Name()] = make(framework.NodeScoreList, len(nodes)) } ctx, cancel := context.WithCancel(ctx) defer cancel() errCh := parallelize.NewErrorChannel() logger := klog.FromContext(ctx) logger = klog.LoggerWithName(logger, \"Score\") // TODO(knelasevero): Remove duplicated keys from log entry calls // When contextualized logging hits GA // https://github.com/kubernetes/kubernetes/issues/111672 logger = klog.LoggerWithValues(logger, \"pod\", klog.KObj(pod)) // Run Score method for each node in parallel. f.Parallelizer().Until(ctx, len(plugins), func(i int) { pl := plugins[i] logger := klog.LoggerWithName(logger, pl.Name()) nodeScores := make(framework.NodeScoreList, len(nodes)) for index, node := range nodes { nodeName := node.Name if len(plugins) > 0 { logger := klog.FromContext(ctx) logger = klog.LoggerWithName(logger, \"Score\") // TODO(knelasevero): Remove duplicated keys from log entry calls // When contextualized logging hits GA // https://github.com/kubernetes/kubernetes/issues/111672 logger = klog.LoggerWithValues(logger, \"pod\", klog.KObj(pod)) // Run Score method for each node in parallel. 
f.Parallelizer().Until(ctx, len(nodes), func(index int) { nodeName := nodes[index].Name logger := klog.LoggerWithValues(logger, \"node\", klog.ObjectRef{Name: nodeName}) ctx := klog.NewContext(ctx, logger) s, status := f.runScorePlugin(ctx, pl, state, pod, nodeName) if !status.IsSuccess() { err := fmt.Errorf(\"plugin %q failed with: %w\", pl.Name(), status.AsError()) errCh.SendErrorWithCancel(err, cancel) return } nodeScores[index] = framework.NodeScore{ Name: nodeName, Score: s, for _, pl := range plugins { logger := klog.LoggerWithName(logger, pl.Name()) ctx := klog.NewContext(ctx, logger) s, status := f.runScorePlugin(ctx, pl, state, pod, nodeName) if !status.IsSuccess() { err := fmt.Errorf(\"plugin %q failed with: %w\", pl.Name(), status.AsError()) errCh.SendErrorWithCancel(err, cancel) return } pluginToNodeScores[pl.Name()][index] = framework.NodeScore{ Name: nodeName, Score: s, } } }, metrics.Score) if err := errCh.ReceiveError(); err != nil { return nil, framework.AsStatus(fmt.Errorf(\"running Score plugins: %w\", err)) } } // Run NormalizeScore method for each ScorePlugin in parallel. 
f.Parallelizer().Until(ctx, len(plugins), func(index int) { pl := plugins[index] if pl.ScoreExtensions() == nil { pluginToNodeScores[i] = nodeScores return } status := f.runScoreExtension(ctx, pl, state, pod, nodeScores) nodeScoreList := pluginToNodeScores[pl.Name()] status := f.runScoreExtension(ctx, pl, state, pod, nodeScoreList) if !status.IsSuccess() { err := fmt.Errorf(\"plugin %q failed with: %w\", pl.Name(), status.AsError()) errCh.SendErrorWithCancel(err, cancel) return } pluginToNodeScores[i] = nodeScores }, metrics.Score) if err := errCh.ReceiveError(); err != nil { return nil, framework.AsStatus(fmt.Errorf(\"running Normalize on Score plugins: %w\", err))"} {"_id":"doc-en-kubernetes-36543ad31ece4c84bf9328a28b403ee8d044d433337610518ca702ca3f9cc381","title":"","text":"for i, pl := range plugins { weight := f.scorePluginWeight[pl.Name()] nodeScoreList := pluginToNodeScores[i] nodeScoreList := pluginToNodeScores[pl.Name()] score := nodeScoreList[index].Score if score > framework.MaxNodeScore || score < framework.MinNodeScore {"} {"_id":"doc-en-kubernetes-cfc1602c663fefa00781dd2024b69becc12f6c6f39b0ea298e02bb27e7b6e6de","title":"","text":"driverName: driverName, } if shouldTimeout { timeout := plugin.PluginClientTimeout + time.Millisecond timeout := plugin.PluginClientTimeout + 10*time.Millisecond fakeDRADriverGRPCServer.timeout = &timeout }"} {"_id":"doc-en-kubernetes-40a0dfcca293e47661384ae974b9655d34c33e75ae651726e1564fd0dbce2b96","title":"","text":"klog.V(2).InfoS(\"Preemptor Pod preempted victim Pod\", \"preemptor\", klog.KObj(pod), \"victim\", klog.KObj(victim), \"node\", c.Name()) } fh.EventRecorder().Eventf(victim, pod, v1.EventTypeNormal, \"Preempted\", \"Preempting\", \"Preempted by a pod on node %v\", c.Name()) fh.EventRecorder().Eventf(victim, pod, v1.EventTypeNormal, \"Preempted\", \"Preempting\", \"Preempted by pod %v on node %v\", pod.UID, c.Name()) } fh.Parallelizer().Until(ctx, len(c.Victims().Pods), preemptPod, ev.PluginName)"} 
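The scheduler-framework diff above replaces a position-indexed slice of per-plugin node scores (`pluginToNodeScores[i]`) with a map keyed by plugin name (`pluginToNodeScores[pl.Name()]`), so that skipped score plugins no longer shift slice indices. A minimal sketch of that map-keyed aggregation shape, using simplified stand-in types (`NodeScore`, `scoreNodes`, and `lenScorer` are illustrative, not the framework's real API):

```go
package main

import "fmt"

// NodeScore pairs a node name with the score one plugin assigned to it.
type NodeScore struct {
	Name  string
	Score int64
}

// NodeScoreList holds one score per candidate node, in node order.
type NodeScoreList []NodeScore

// scoreNodes aggregates per-plugin scores keyed by plugin name rather than
// by plugin position, so the result stays stable even if some plugins are
// filtered out before scoring (the motivation for the diff above).
func scoreNodes(nodes []string, plugins map[string]func(node string) int64) map[string]NodeScoreList {
	out := make(map[string]NodeScoreList, len(plugins))
	for name, score := range plugins {
		list := make(NodeScoreList, len(nodes))
		for i, n := range nodes {
			list[i] = NodeScore{Name: n, Score: score(n)}
		}
		out[name] = list
	}
	return out
}

func main() {
	nodes := []string{"node-a", "node-b"}
	scores := scoreNodes(nodes, map[string]func(string) int64{
		// Toy scorer: score a node by the length of its name.
		"lenScorer": func(n string) int64 { return int64(len(n)) },
	})
	fmt.Println(scores["lenScorer"][0].Score) // len("node-a") == 6
}
```

Keying by name also matches how the normalize phase looks up each plugin's list (`pluginToNodeScores[pl.Name()]` in the diff), avoiding a separate index bookkeeping step.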
{"_id":"doc-en-kubernetes-d30062f0cdecee594c8ed201ab4d5e25ac193cd8d4ebe15173c042fb16728139","title":"","text":"requests := make(map[string]int64, len(pods)) for _, pod := range pods { podSum := int64(0) for _, c := range pod.Spec.Containers { // Calculate all regular containers and restartable init containers requests. containers := append([]v1.Container{}, pod.Spec.Containers...) for _, c := range pod.Spec.InitContainers { if c.RestartPolicy != nil && *c.RestartPolicy == v1.ContainerRestartPolicyAlways { containers = append(containers, c) } } for _, c := range containers { if container == \"\" || container == c.Name { if containerRequest, ok := c.Resources.Requests[resource]; ok { podSum += containerRequest.MilliValue()"} {"_id":"doc-en-kubernetes-fef58ba7d6e732164995444f2a73bb622bb42a5be64898fdabcc2cf218be63a5","title":"","text":"}) } } func TestCalculatePodRequests(t *testing.T) { containerRestartPolicyAlways := v1.ContainerRestartPolicyAlways testPod := \"test-pod\" tests := []struct { name string pods []*v1.Pod container string resource v1.ResourceName expectedRequests map[string]int64 expectedError error }{ { name: \"void\", pods: []*v1.Pod{}, container: \"\", resource: v1.ResourceCPU, expectedRequests: map[string]int64{}, expectedError: nil, }, { name: \"pod with regular containers\", pods: []*v1.Pod{{ ObjectMeta: metav1.ObjectMeta{ Name: testPod, Namespace: testNamespace, }, Spec: v1.PodSpec{ Containers: []v1.Container{ {Name: \"container1\", Resources: v1.ResourceRequirements{Requests: v1.ResourceList{v1.ResourceCPU: *resource.NewMilliQuantity(100, resource.DecimalSI)}}}, {Name: \"container2\", Resources: v1.ResourceRequirements{Requests: v1.ResourceList{v1.ResourceCPU: *resource.NewMilliQuantity(50, resource.DecimalSI)}}}, }, }, }}, container: \"\", resource: v1.ResourceCPU, expectedRequests: map[string]int64{testPod: 150}, expectedError: nil, }, { name: \"calculate requests with special container\", pods: []*v1.Pod{{ ObjectMeta: metav1.ObjectMeta{ Name: 
testPod, Namespace: testNamespace, }, Spec: v1.PodSpec{ Containers: []v1.Container{ {Name: \"container1\", Resources: v1.ResourceRequirements{Requests: v1.ResourceList{v1.ResourceCPU: *resource.NewMilliQuantity(100, resource.DecimalSI)}}}, {Name: \"container2\", Resources: v1.ResourceRequirements{Requests: v1.ResourceList{v1.ResourceCPU: *resource.NewMilliQuantity(50, resource.DecimalSI)}}}, }, }, }}, container: \"container1\", resource: v1.ResourceCPU, expectedRequests: map[string]int64{testPod: 100}, expectedError: nil, }, { name: \"container missing requests\", pods: []*v1.Pod{{ ObjectMeta: metav1.ObjectMeta{ Name: testPod, Namespace: testNamespace, }, Spec: v1.PodSpec{ Containers: []v1.Container{ {Name: \"container1\"}, }, }, }}, container: \"\", resource: v1.ResourceCPU, expectedRequests: nil, expectedError: fmt.Errorf(\"missing request for %s in container %s of Pod %s\", v1.ResourceCPU, \"container1\", testPod), }, { name: \"pod with restartable init containers\", pods: []*v1.Pod{{ ObjectMeta: metav1.ObjectMeta{ Name: testPod, Namespace: testNamespace, }, Spec: v1.PodSpec{ Containers: []v1.Container{ {Name: \"container1\", Resources: v1.ResourceRequirements{Requests: v1.ResourceList{v1.ResourceCPU: *resource.NewMilliQuantity(100, resource.DecimalSI)}}}, }, InitContainers: []v1.Container{ {Name: \"init-container1\", Resources: v1.ResourceRequirements{Requests: v1.ResourceList{v1.ResourceCPU: *resource.NewMilliQuantity(20, resource.DecimalSI)}}}, {Name: \"restartable-container1\", RestartPolicy: &containerRestartPolicyAlways, Resources: v1.ResourceRequirements{Requests: v1.ResourceList{v1.ResourceCPU: *resource.NewMilliQuantity(50, resource.DecimalSI)}}}, }, }, }}, container: \"\", resource: v1.ResourceCPU, expectedRequests: map[string]int64{testPod: 150}, expectedError: nil, }, } for _, tc := range tests { t.Run(tc.name, func(t *testing.T) { requests, err := calculatePodRequests(tc.pods, tc.container, tc.resource) assert.Equal(t, tc.expectedRequests, requests, 
\"requests should be as expected\") assert.Equal(t, tc.expectedError, err, \"error should be as expected\") }) } } "} {"_id":"doc-en-kubernetes-7882b17b329ae48d33d71ca5e6dffa2d994f8e0727b8ae8b7a828b28ccdd7d71","title":"","text":"import ( \"fmt\" \"reflect\" \"strings\" \"testing\" \"github.com/google/go-cmp/cmp\""} {"_id":"doc-en-kubernetes-779bb930e888a0c623b848607b8c6c74369af92523305542e03a4c7f5317d7f2","title":"","text":"utilfeature \"k8s.io/apiserver/pkg/util/feature\" featuregatetesting \"k8s.io/component-base/featuregate/testing\" \"k8s.io/kubernetes/pkg/features\" st \"k8s.io/kubernetes/pkg/scheduler/testing\" ) func TestNewResource(t *testing.T) {"} {"_id":"doc-en-kubernetes-fc59c25dd9d187b5cfd8188a26b7f2ea4ad302dfc9fefeb5806bbdfac0cbb15c","title":"","text":"} } type testingMode interface { Fatalf(format string, args ...interface{}) } func makeBasePod(t testingMode, nodeName, objName, cpu, mem, extended string, ports []v1.ContainerPort, volumes []v1.Volume) *v1.Pod { req := v1.ResourceList{} if cpu != \"\" { req = v1.ResourceList{ v1.ResourceCPU: resource.MustParse(cpu), v1.ResourceMemory: resource.MustParse(mem), } if extended != \"\" { parts := strings.Split(extended, \":\") if len(parts) != 2 { t.Fatalf(\"Invalid extended resource string: \\\"%s\\\"\", extended) } req[v1.ResourceName(parts[0])] = resource.MustParse(parts[1]) } } return &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ UID: types.UID(objName), Namespace: \"node_info_cache_test\", Name: objName, }, Spec: v1.PodSpec{ Containers: []v1.Container{{ Resources: v1.ResourceRequirements{ Requests: req, }, Ports: ports, }}, NodeName: nodeName, Volumes: volumes, }, } } func TestNewNodeInfo(t *testing.T) { nodeName := \"test-node\" pods := []*v1.Pod{ makeBasePod(t, nodeName, \"test-1\", \"100m\", \"500\", \"\", []v1.ContainerPort{{HostIP: \"127.0.0.1\", HostPort: 80, Protocol: \"TCP\"}}, nil), makeBasePod(t, nodeName, \"test-2\", \"200m\", \"1Ki\", \"\", []v1.ContainerPort{{HostIP: \"127.0.0.1\", HostPort: 8080, 
Protocol: \"TCP\"}}, nil), st.MakePod().UID(\"test-1\").Namespace(\"node_info_cache_test\").Name(\"test-1\").Node(nodeName). Containers([]v1.Container{st.MakeContainer().ResourceRequests(map[v1.ResourceName]string{ v1.ResourceCPU: \"100m\", v1.ResourceMemory: \"500\", }).ContainerPort([]v1.ContainerPort{{ HostIP: \"127.0.0.1\", HostPort: 80, Protocol: \"TCP\", }}).Obj()}). Obj(), st.MakePod().UID(\"test-2\").Namespace(\"node_info_cache_test\").Name(\"test-2\").Node(nodeName). Containers([]v1.Container{st.MakeContainer().ResourceRequests(map[v1.ResourceName]string{ v1.ResourceCPU: \"200m\", v1.ResourceMemory: \"1Ki\", }).ContainerPort([]v1.ContainerPort{{ HostIP: \"127.0.0.1\", HostPort: 8080, Protocol: \"TCP\", }}).Obj()}). Obj(), } expected := &NodeInfo{"} {"_id":"doc-en-kubernetes-6081a8593e6d75237c81154909861d6ed6fccb474e6137cf09aa561ef4d54f0f","title":"","text":"func TestNodeInfoRemovePod(t *testing.T) { nodeName := \"test-node\" pods := []*v1.Pod{ makeBasePod(t, nodeName, \"test-1\", \"100m\", \"500\", \"\", []v1.ContainerPort{{HostIP: \"127.0.0.1\", HostPort: 80, Protocol: \"TCP\"}}, []v1.Volume{{VolumeSource: v1.VolumeSource{PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: \"pvc-1\"}}}}), makeBasePod(t, nodeName, \"test-2\", \"200m\", \"1Ki\", \"\", []v1.ContainerPort{{HostIP: \"127.0.0.1\", HostPort: 8080, Protocol: \"TCP\"}}, nil), st.MakePod().UID(\"test-1\").Namespace(\"node_info_cache_test\").Name(\"test-1\").Node(nodeName). Containers([]v1.Container{st.MakeContainer().ResourceRequests(map[v1.ResourceName]string{ v1.ResourceCPU: \"100m\", v1.ResourceMemory: \"500\", }).ContainerPort([]v1.ContainerPort{{ HostIP: \"127.0.0.1\", HostPort: 80, Protocol: \"TCP\", }}).Obj()}). Volumes([]v1.Volume{{VolumeSource: v1.VolumeSource{PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: \"pvc-1\"}}}}). Obj(), st.MakePod().UID(\"test-2\").Namespace(\"node_info_cache_test\").Name(\"test-2\").Node(nodeName). 
Containers([]v1.Container{st.MakeContainer().ResourceRequests(map[v1.ResourceName]string{ v1.ResourceCPU: \"200m\", v1.ResourceMemory: \"1Ki\", }).ContainerPort([]v1.ContainerPort{{ HostIP: \"127.0.0.1\", HostPort: 8080, Protocol: \"TCP\", }}).Obj()}). Obj(), } // add pod Overhead"} {"_id":"doc-en-kubernetes-7bf54f0094bf4c897747699647bab479c91fe1dee3ee395824a60b8ba50f982e","title":"","text":"expectedNodeInfo *NodeInfo }{ { pod: makeBasePod(t, nodeName, \"non-exist\", \"0\", \"0\", \"\", []v1.ContainerPort{{}}, []v1.Volume{}), pod: st.MakePod().UID(\"non-exist\").Namespace(\"node_info_cache_test\").Node(nodeName).Obj(), errExpected: true, expectedNodeInfo: &NodeInfo{ node: &v1.Node{"} {"_id":"doc-en-kubernetes-0fb24445872f115407f2e3ad9a425a36d1c2eda8429fc1dc2b4c37164e0e22ed","title":"","text":"return p } // Volumes set the volumes and inject into the inner pod. func (p *PodWrapper) Volumes(volumes []v1.Volume) *PodWrapper { p.Spec.Volumes = volumes return p } // SchedulingGates sets `gates` as additional SchedulerGates of the inner pod. func (p *PodWrapper) SchedulingGates(gates []string) *PodWrapper { for _, gate := range gates {"} {"_id":"doc-en-kubernetes-9fbf74991035fe7f1b8cdf19513aee69eea1d20184ac8aa219a36029efbd74f1","title":"","text":"// This will start reconciliation of node.status.volumesInUse. rc.updateLastSyncTime() } if len(rc.volumesNeedReportedInUse) != 0 && rc.populatorHasAddedPods() { // Once DSW is populated, mark all reconstructed as reported in node.status, // so they can proceed with MountDevice / SetUp. 
rc.desiredStateOfWorld.MarkVolumesReportedInUse(rc.volumesNeedReportedInUse) rc.volumesNeedReportedInUse = nil } }"} {"_id":"doc-en-kubernetes-b3645ea7f18d526a7c7dfe0dcd1ff944d823b0d61653c18c5624046b4dba7a6d","title":"","text":"timeOfLastSync: time.Time{}, volumesFailedReconstruction: make([]podVolume, 0), volumesNeedUpdateFromNodeStatus: make([]v1.UniqueVolumeName, 0), volumesNeedReportedInUse: make([]v1.UniqueVolumeName, 0), } }"} {"_id":"doc-en-kubernetes-222e217eea6a87615da3cc6e1988e808d21cc83f113a8eee7412e0c3fc2042e8","title":"","text":"timeOfLastSync time.Time volumesFailedReconstruction []podVolume volumesNeedUpdateFromNodeStatus []v1.UniqueVolumeName volumesNeedReportedInUse []v1.UniqueVolumeName } func (rc *reconciler) unmountVolumes() {"} {"_id":"doc-en-kubernetes-a1738fd15de37baa2d26bc537bdde0070168e4700ef1a2b62c9b2df1c959d63c","title":"","text":"// Add the volumes to ASW rc.updateStates(reconstructedVolumes) // The reconstructed volumes are mounted, hence a previous kubelet must have already put it into node.status.volumesInUse. // Remember to update DSW with this information. 
rc.volumesNeedReportedInUse = reconstructedVolumeNames // Remember to update devicePath from node.status.volumesAttached rc.volumesNeedUpdateFromNodeStatus = reconstructedVolumeNames }"} {"_id":"doc-en-kubernetes-700772fb13aad9b39389317d58214a492df0751379fd74a42153befcd19d0884","title":"","text":"tests := []struct { name string volumePaths []string expectedVolumesNeedReportedInUse []string expectedVolumesNeedDevicePath []string expectedVolumesFailedReconstruction []string verifyFunc func(rcInstance *reconciler, fakePlugin *volumetesting.FakeVolumePlugin) error"} {"_id":"doc-en-kubernetes-5fc00d8f83cef0518bc6ddb4628f308c23b58935f6af4249242fed3d0b606d61","title":"","text":"filepath.Join(\"pod1\", \"volumes\", \"fake-plugin\", \"pvc-abcdef\"), filepath.Join(\"pod2\", \"volumes\", \"fake-plugin\", \"pvc-abcdef\"), }, expectedVolumesNeedReportedInUse: []string{\"fake-plugin/pvc-abcdef\", \"fake-plugin/pvc-abcdef\"}, expectedVolumesNeedDevicePath: []string{\"fake-plugin/pvc-abcdef\", \"fake-plugin/pvc-abcdef\"}, expectedVolumesFailedReconstruction: []string{}, verifyFunc: func(rcInstance *reconciler, fakePlugin *volumetesting.FakeVolumePlugin) error {"} {"_id":"doc-en-kubernetes-526eeeea77a0227ccede9dc62555a106a3071bd074d0df92937c01b4fd39f3c0","title":"","text":"volumePaths: []string{ filepath.Join(\"pod1\", \"volumes\", \"missing-plugin\", \"pvc-abcdef\"), }, expectedVolumesNeedReportedInUse: []string{}, expectedVolumesNeedDevicePath: []string{}, expectedVolumesFailedReconstruction: []string{\"pvc-abcdef\"}, },"} {"_id":"doc-en-kubernetes-f2cb6035d4fde3da5465ba636c114c569f543c8c10ed4ec537c0d4053b3d59d1","title":"","text":"t.Errorf(\"Expected expectedVolumesNeedDevicePath:\\n%v\\n got:\\n%v\", expectedVolumes, rcInstance.volumesNeedUpdateFromNodeStatus) } expectedVolumes = make([]v1.UniqueVolumeName, len(tc.expectedVolumesNeedReportedInUse)) for i := range tc.expectedVolumesNeedReportedInUse { expectedVolumes[i] = v1.UniqueVolumeName(tc.expectedVolumesNeedReportedInUse[i]) } 
if !reflect.DeepEqual(expectedVolumes, rcInstance.volumesNeedReportedInUse) { t.Errorf(\"Expected volumesNeedReportedInUse:\\n%v\\n got:\\n%v\", expectedVolumes, rcInstance.volumesNeedReportedInUse) } volumesFailedReconstruction := sets.NewString() for _, vol := range rcInstance.volumesFailedReconstruction { volumesFailedReconstruction.Insert(vol.volumeSpecName)"} {"_id":"doc-en-kubernetes-57c3f01813b26e4bfe435421f506d6bb1d9e560addca1e92dbcf872e70b9cb03","title":"","text":"resourcev1alpha2 \"k8s.io/api/resource/v1alpha2\" apierrors \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/labels\" \"k8s.io/client-go/kubernetes\" \"k8s.io/dynamic-resource-allocation/controller\" \"k8s.io/klog/v2\" \"k8s.io/kubernetes/test/e2e/dra/test-driver/app\" \"k8s.io/kubernetes/test/e2e/framework\" e2enode \"k8s.io/kubernetes/test/e2e/framework/node\" e2epod \"k8s.io/kubernetes/test/e2e/framework/pod\" admissionapi \"k8s.io/pod-security-admission/api\" utilpointer \"k8s.io/utils/pointer\""} {"_id":"doc-en-kubernetes-f6dac0d3936b40e088d69588be46ff2edd7ff9acc26ffba5d13b05925c67d187","title":"","text":"framework.ExpectNoError(err, \"start pod\") } }) // This test covers aspects of non graceful node shutdown by DRA controller // More details about this can be found in the KEP: // https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/2268-non-graceful-shutdown // NOTE: this test depends on kind. It will only work with kind cluster as it shuts down one of the // nodes by running `docker stop `, which is very kind-specific. 
ginkgo.It(\"[Serial] [Disruptive] [Slow] must deallocate on non graceful node shutdown\", func(ctx context.Context) { ginkgo.By(\"create test pod\") parameters := b.parameters() label := \"app.kubernetes.io/instance\" instance := f.UniqueName + \"-test-app\" pod := b.podExternal() pod.Labels[label] = instance claim := b.externalClaim(resourcev1alpha2.AllocationModeWaitForFirstConsumer) b.create(ctx, parameters, claim, pod) ginkgo.By(\"wait for test pod \" + pod.Name + \" to run\") labelSelector := labels.SelectorFromSet(labels.Set(pod.Labels)) pods, err := e2epod.WaitForPodsWithLabelRunningReady(ctx, f.ClientSet, pod.Namespace, labelSelector, 1, framework.PodStartTimeout) framework.ExpectNoError(err, \"start pod\") runningPod := &pods.Items[0] nodeName := runningPod.Spec.NodeName // Prevent builder tearDown to fail waiting for unprepared resources delete(b.driver.Nodes, nodeName) ginkgo.By(\"stop node \" + nodeName + \" non gracefully\") _, stderr, err := framework.RunCmd(\"docker\", \"stop\", nodeName) gomega.Expect(stderr).To(gomega.BeEmpty()) framework.ExpectNoError(err) ginkgo.DeferCleanup(framework.RunCmd, \"docker\", \"start\", nodeName) if ok := e2enode.WaitForNodeToBeNotReady(ctx, f.ClientSet, nodeName, f.Timeouts.NodeNotReady); !ok { framework.Failf(\"Node %s failed to enter NotReady state\", nodeName) } ginkgo.By(\"apply out-of-service taint on node \" + nodeName) taint := v1.Taint{ Key: v1.TaintNodeOutOfService, Effect: v1.TaintEffectNoExecute, } e2enode.AddOrUpdateTaintOnNode(ctx, f.ClientSet, nodeName, taint) e2enode.ExpectNodeHasTaint(ctx, f.ClientSet, nodeName, &taint) ginkgo.DeferCleanup(e2enode.RemoveTaintOffNode, f.ClientSet, nodeName, taint) ginkgo.By(\"waiting for claim to get deallocated\") gomega.Eventually(ctx, framework.GetObject(b.f.ClientSet.ResourceV1alpha2().ResourceClaims(b.f.Namespace.Name).Get, claim.Name, metav1.GetOptions{})).WithTimeout(f.Timeouts.PodDelete).Should(gomega.HaveField(\"Status.Allocation\", gomega.BeNil())) }) }) 
ginkgo.Context(\"with node-local resources\", func() {"} {"_id":"doc-en-kubernetes-59af271218eb753db235a0723b908ac3556f17671983ce289cd2fc3485564f03","title":"","text":"SystemPodsStartup: 10 * time.Minute, NodeSchedulable: 30 * time.Minute, SystemDaemonsetStartup: 5 * time.Minute, NodeNotReady: 3 * time.Minute, } // TimeoutContext contains timeout settings for several actions."} {"_id":"doc-en-kubernetes-5d0019655d9bc4c7d02ef4874909fceac8304d17b894fd8d155794805e7e6101","title":"","text":"// SystemDaemonsetStartup is how long to wait for all system daemonsets to be ready. SystemDaemonsetStartup time.Duration // NodeNotReady is how long to wait for a node to be not ready. NodeNotReady time.Duration } // NewTimeoutContext returns a TimeoutContext with all values set either to"} {"_id":"doc-en-kubernetes-a94389136d5400b7671ed2395a7109adb782a1b881c8b9bece92bf6fea526415","title":"","text":"// implementing this brute force approach instead of fancy channel notification to avoid test specific code in prod. // wait for config to be observed verifyIfKMSTransformersSwapped(t, wantPrefixForSecrets, test) verifyIfKMSTransformersSwapped(t, wantPrefixForSecrets, \"\", test) // run storage migration // get secrets"} {"_id":"doc-en-kubernetes-4cd0608f42d86e49d18135e2538da1b9c4c17eae8d0c59aec70d9576bb4518c8","title":"","text":"} // remove old KMS provider // verifyIfKMSTransformersSwapped sometimes passes even before the changes in the encryption config file are observed. // this causes the metrics tests to fail, which validate two config changes. // this may happen when an existing KMS provider is already running (e.g., new-kms-provider-for-secrets in this case). // to ensure that the changes are observed, we added one more provider (kms-provider-to-encrypt-all) and are validating it in verifyIfKMSTransformersSwapped. 
encryptionConfigWithoutOldProvider := ` kind: EncryptionConfiguration apiVersion: apiserver.config.k8s.io/v1"} {"_id":"doc-en-kubernetes-da98bf7b7e7114a55fc7e286046af3149c1654c26d8df15528e9fd483d06a2e8","title":"","text":"name: new-kms-provider-for-configmaps cachesize: 1000 endpoint: unix:///@new-kms-provider.sock - resources: - '*.*' providers: - kms: name: kms-provider-to-encrypt-all cachesize: 1000 endpoint: unix:///@new-encrypt-all-kms-provider.sock - identity: {} ` // start new KMS Plugin _ = mock.NewBase64Plugin(t, \"@new-encrypt-all-kms-provider.sock\") // update encryption config and wait for hot reload if err := os.WriteFile(filepath.Join(test.configDir, encryptionConfigFileName), []byte(encryptionConfigWithoutOldProvider), 0644); err != nil { t.Fatalf(\"failed to update encryption config, err: %v\", err) } wantPrefixForEncryptAll := \"k8s:enc:kms:v1:kms-provider-to-encrypt-all:\" // wait for config to be observed verifyIfKMSTransformersSwapped(t, wantPrefixForSecrets, test) verifyIfKMSTransformersSwapped(t, wantPrefixForSecrets, wantPrefixForEncryptAll, test) // confirm that reading secrets still works _, err = test.restClient.CoreV1().Secrets(testNamespace).Get("} {"_id":"doc-en-kubernetes-93a5f15b7352da3ea5124f7cdb159fb1ac2f736a1d6ac52580b8c3b0f906787f","title":"","text":"func verifyPrefixOfSecretResource(t *testing.T, wantPrefix string, test *transformTest) { // implementing this brute force approach instead of fancy channel notification to avoid test specific code in prod. 
// wait for config to be observed verifyIfKMSTransformersSwapped(t, wantPrefix, test) verifyIfKMSTransformersSwapped(t, wantPrefix, \"\", test) // run storage migration secretsList, err := test.restClient.CoreV1().Secrets(\"\").List("} {"_id":"doc-en-kubernetes-33f4a1afec0cea9cc4c6934a44d5b5164d557fa06c524615ad07f3a92559d09d","title":"","text":"} } func verifyIfKMSTransformersSwapped(t *testing.T, wantPrefix string, test *transformTest) { func verifyIfKMSTransformersSwapped(t *testing.T, wantPrefix, wantPrefixForEncryptAll string, test *transformTest) { t.Helper() var swapErr error"} {"_id":"doc-en-kubernetes-7d45391f9562796d50f7a663b519d78a142d412ab8aaa9e1d4e47df7a604ddab","title":"","text":"return false, nil } if wantPrefixForEncryptAll != \"\" { deploymentName := fmt.Sprintf(\"deployment-%d\", idx) _, err := test.createDeployment(deploymentName, \"default\") if err != nil { t.Fatalf(\"Failed to create test secret, error: %v\", err) } rawEnvelope, err := test.readRawRecordFromETCD(test.getETCDPathForResource(test.storageConfig.Prefix, \"\", \"deployments\", deploymentName, \"default\")) if err != nil { t.Fatalf(\"failed to read %s from etcd: %v\", test.getETCDPathForResource(test.storageConfig.Prefix, \"\", \"deployments\", deploymentName, \"default\"), err) } // check prefix if !bytes.HasPrefix(rawEnvelope.Kvs[0].Value, []byte(wantPrefixForEncryptAll)) { idx++ swapErr = fmt.Errorf(\"expected deployment to be prefixed with %s, but got %s\", wantPrefixForEncryptAll, rawEnvelope.Kvs[0].Value) // return nil error to continue polling till timeout return false, nil } } return true, nil }) if pollErr == wait.ErrWaitTimeout {"} {"_id":"doc-en-kubernetes-09545ca7f9a9ce659367b555d47cd66829b5a8476d36cf42a76416d02a23cdc8","title":"","text":"// start new KMS Plugin _ = mock.NewBase64Plugin(t, \"@new-kms-provider.sock\") // update encryption config if err := os.WriteFile(filepath.Join(test.configDir, encryptionConfigFileName), []byte(encryptionConfigWithNewProvider), 0644); 
err != nil { t.Fatalf(\"failed to update encryption config, err: %v\", err) } updateFile(t, test.configDir, encryptionConfigFileName, []byte(encryptionConfigWithNewProvider)) wantPrefixForSecrets := \"k8s:enc:kms:v1:new-kms-provider-for-secrets:\" // implementing this brute force approach instead of fancy channel notification to avoid test specific code in prod. // wait for config to be observed verifyIfKMSTransformersSwapped(t, wantPrefixForSecrets, \"\", test) verifyIfKMSTransformersSwapped(t, wantPrefixForSecrets, test) // run storage migration // get secrets"} {"_id":"doc-en-kubernetes-6d3ae2161f513801e472f49fc96ee32a65635eec1b9de225b5bcbc91eeeec8d9","title":"","text":"} // remove old KMS provider // verifyIfKMSTransformersSwapped sometimes passes even before the changes in the encryption config file are observed. // this causes the metrics tests to fail, which validate two config changes. // this may happen when an existing KMS provider is already running (e.g., new-kms-provider-for-secrets in this case). // to ensure that the changes are observed, we added one more provider (kms-provider-to-encrypt-all) and are validating it in verifyIfKMSTransformersSwapped. 
encryptionConfigWithoutOldProvider := ` kind: EncryptionConfiguration apiVersion: apiserver.config.k8s.io/v1"} {"_id":"doc-en-kubernetes-cdfcf4a1df5663e2219ae88e212c8084d1bf0300a3ec4826cfcd091046cacf2d","title":"","text":"name: new-kms-provider-for-configmaps cachesize: 1000 endpoint: unix:///@new-kms-provider.sock - resources: - '*.*' providers: - kms: name: kms-provider-to-encrypt-all cachesize: 1000 endpoint: unix:///@new-encrypt-all-kms-provider.sock - identity: {} ` // start new KMS Plugin _ = mock.NewBase64Plugin(t, \"@new-encrypt-all-kms-provider.sock\") // update encryption config and wait for hot reload if err := os.WriteFile(filepath.Join(test.configDir, encryptionConfigFileName), []byte(encryptionConfigWithoutOldProvider), 0644); err != nil { t.Fatalf(\"failed to update encryption config, err: %v\", err) } wantPrefixForEncryptAll := \"k8s:enc:kms:v1:kms-provider-to-encrypt-all:\" updateFile(t, test.configDir, encryptionConfigFileName, []byte(encryptionConfigWithoutOldProvider)) // wait for config to be observed verifyIfKMSTransformersSwapped(t, wantPrefixForSecrets, wantPrefixForEncryptAll, test) verifyIfKMSTransformersSwapped(t, wantPrefixForSecrets, test) // confirm that reading secrets still works _, err = test.restClient.CoreV1().Secrets(testNamespace).Get("} {"_id":"doc-en-kubernetes-3021be26d46e5d12eced84521e3a6a3b58cce7cf908fa20453a8b13759127fb4","title":"","text":"func verifyPrefixOfSecretResource(t *testing.T, wantPrefix string, test *transformTest) { // implementing this brute force approach instead of fancy channel notification to avoid test specific code in prod. 
// wait for config to be observed verifyIfKMSTransformersSwapped(t, wantPrefix, \"\", test) verifyIfKMSTransformersSwapped(t, wantPrefix, test) // run storage migration secretsList, err := test.restClient.CoreV1().Secrets(\"\").List("} {"_id":"doc-en-kubernetes-5e6a75c273349cccea3e0d3459a9bddcb9b0d60cf5adf5f80109784bc3a02499","title":"","text":"} } func verifyIfKMSTransformersSwapped(t *testing.T, wantPrefix, wantPrefixForEncryptAll string, test *transformTest) { func verifyIfKMSTransformersSwapped(t *testing.T, wantPrefix string, test *transformTest) { t.Helper() var swapErr error"} {"_id":"doc-en-kubernetes-3845e47adac5d9440b4038adeac4909d4d1212256ee08440f9021291e1d29d23","title":"","text":"return false, nil } if wantPrefixForEncryptAll != \"\" { deploymentName := fmt.Sprintf(\"deployment-%d\", idx) _, err := test.createDeployment(deploymentName, \"default\") if err != nil { t.Fatalf(\"Failed to create test secret, error: %v\", err) } rawEnvelope, err := test.readRawRecordFromETCD(test.getETCDPathForResource(test.storageConfig.Prefix, \"\", \"deployments\", deploymentName, \"default\")) if err != nil { t.Fatalf(\"failed to read %s from etcd: %v\", test.getETCDPathForResource(test.storageConfig.Prefix, \"\", \"deployments\", deploymentName, \"default\"), err) } // check prefix if !bytes.HasPrefix(rawEnvelope.Kvs[0].Value, []byte(wantPrefixForEncryptAll)) { idx++ swapErr = fmt.Errorf(\"expected deployment to be prefixed with %s, but got %s\", wantPrefixForEncryptAll, rawEnvelope.Kvs[0].Value) // return nil error to continue polling till timeout return false, nil } } return true, nil }) if pollErr == wait.ErrWaitTimeout {"} {"_id":"doc-en-kubernetes-f09b89ee07b171e14f466ce58c45a0d6b8be43aaa0da93f40f73c3a99b7feb06","title":"","text":"} } func updateFile(t *testing.T, configDir, filename string, newContent []byte) { t.Helper() // Create a temporary file tempFile, err := os.CreateTemp(configDir, \"tempfile\") if err != nil { t.Fatal(err) } defer tempFile.Close() // 
Write the new content to the temporary file _, err = tempFile.Write(newContent) if err != nil { t.Fatal(err) } // Atomically replace the original file with the temporary file err = os.Rename(tempFile.Name(), filepath.Join(configDir, filename)) if err != nil { t.Fatal(err) } } func TestKMSHealthz(t *testing.T) { encryptionConfig := ` kind: EncryptionConfiguration"} {"_id":"doc-en-kubernetes-9f29cbbb543ce7ca348458022a81aac37d0b408f46f2b5891caaa6b5913ee5d8","title":"","text":"\"k8s.io/apimachinery/pkg/util/uuid\" netutils \"k8s.io/utils/net\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" kubefeatures \"k8s.io/kubernetes/pkg/features\" kubeletconfig \"k8s.io/kubernetes/pkg/kubelet/apis/config\" \"k8s.io/kubernetes/test/e2e/framework\""} {"_id":"doc-en-kubernetes-74d64b4f81f4ba440c4b9353c89a2aec9a23ceb152282ff1f5a7421ac48d6b20","title":"","text":"e2enode \"k8s.io/kubernetes/test/e2e/framework/node\" e2epod \"k8s.io/kubernetes/test/e2e/framework/pod\" e2epodoutput \"k8s.io/kubernetes/test/e2e/framework/pod/output\" e2eskipper \"k8s.io/kubernetes/test/e2e/framework/skipper\" \"k8s.io/kubernetes/test/e2e/network/common\" imageutils \"k8s.io/kubernetes/test/utils/image\" admissionapi \"k8s.io/pod-security-admission/api\" ) var _ = common.SIGDescribe(\"DualStack Host IP [Serial] [NodeFeature:PodHostIPs] [Feature:PodHostIPs]\", func() { f := framework.NewDefaultFramework(\"dualstack\") f.NamespacePodSecurityLevel = admissionapi.LevelPrivileged ginkgo.Context(\"when creating a Pod, it has no PodHostIPs feature\", func() { tempSetCurrentKubeletConfig(f, func(ctx context.Context, initialConfig *kubeletconfig.KubeletConfiguration) {"} {"_id":"doc-en-kubernetes-276042380d8b33c655e7024d6f597990ed0b5659b744034ccc000ed158089184","title":"","text":"gomega.Expect(p.Status.HostIPs).Should(gomega.BeNil()) ginkgo.By(\"deleting the pod\") 
err := podClient.Delete(ctx, pod.Name, *metav1.NewDeleteOptions(1)) framework.ExpectNoError(err, \"failed to delete pod\") })"} {"_id":"doc-en-kubernetes-9b20111c2af74c7d9ccb38b85dbdd2074ac90c6e72f156e7b905f6f231e7369e","title":"","text":"gomega.Expect(p.Status.HostIPs).Should(gomega.BeNil()) ginkgo.By(\"deleting the pod\") err := podClient.Delete(ctx, pod.Name, *metav1.NewDeleteOptions(1)) framework.ExpectNoError(err, \"failed to delete pod\") }) })"} {"_id":"doc-en-kubernetes-8c6ef80098fc182c8cbf347856544b6f7b3e58487b25000fda2c07dcb8fc3b3b","title":"","text":"podName := \"pod-dualstack-host-ips\" pod := genPodHostIPs(podName+string(uuid.NewUUID()), false) ginkgo.By(\"submitting the pod to kubernetes\") podClient := e2epod.NewPodClient(f)"} {"_id":"doc-en-kubernetes-80528975ce0b8bd3638706cdfa0996a7aa7888c51510261d307ed57dfa72d1a5","title":"","text":"} ginkgo.By(\"deleting the pod\") err = podClient.Delete(ctx, pod.Name, *metav1.NewDeleteOptions(1)) framework.ExpectNoError(err, \"failed to delete pod\") })"} {"_id":"doc-en-kubernetes-43b61cc6e1afbbbd9c4ba05debb9032dc2733348f11ad4b197187377446b5f2b","title":"","text":"podName := \"pod-dualstack-host-ips\" pod := genPodHostIPs(podName+string(uuid.NewUUID()), true) ginkgo.By(\"submitting the pod to kubernetes\") podClient := 
e2epod.NewPodClient(f)"} {"_id":"doc-en-kubernetes-2ddde7d176b0fe38b3c5e7fe87058d59c97e9118c4199c78b50020fb73611c5f","title":"","text":"} ginkgo.By(\"deleting the pod\") err = podClient.Delete(ctx, pod.Name, *metav1.NewDeleteOptions(1)) framework.ExpectNoError(err, \"failed to delete pod\") }) ginkgo.It(\"should provide hostIPs as an env var\", func(ctx context.Context) { if !utilfeature.DefaultFeatureGate.Enabled(kubefeatures.PodHostIPs) { e2eskipper.Skipf(\"PodHostIPs feature is not enabled\") } podName := \"downward-api-\" + string(uuid.NewUUID()) env := []v1.EnvVar{ {"} {"_id":"doc-en-kubernetes-ff7ab673520fe116ad012d061b9950a80a01bbc1e2a03ae9975717d51d9606c0","title":"","text":"testDownwardAPI(ctx, f, podName, env, expectations) }) }) ginkgo.Context(\"when feature rollback\", func() { tempSetCurrentKubeletConfig(f, func(ctx context.Context, initialConfig *kubeletconfig.KubeletConfiguration) { initialConfig.FeatureGates = map[string]bool{ string(kubefeatures.PodHostIPs): true, } }) ginkgo.It(\"should able upgrade and rollback\", func(ctx context.Context) { podName := \"pod-dualstack-host-ips\" pod := genPodHostIPs(podName+string(uuid.NewUUID()), false) ginkgo.By(\"submitting the pod to kubernetes\") podClient := e2epod.NewPodClient(f) p := podClient.CreateSync(ctx, pod) gomega.Expect(p.Status.HostIPs).ShouldNot(gomega.BeNil()) ginkgo.By(\"Disable PodHostIPs feature\") cfg, err := getCurrentKubeletConfig(ctx) framework.ExpectNoError(err) newCfg := cfg.DeepCopy() newCfg.FeatureGates = map[string]bool{ string(kubefeatures.PodHostIPs): false, } updateKubeletConfig(ctx, f, newCfg, true) gomega.Expect(p.Status.HostIPs).ShouldNot(gomega.BeNil()) ginkgo.By(\"deleting the pod\") err = podClient.Delete(ctx, pod.Name, *metav1.NewDeleteOptions(1)) framework.ExpectNoError(err, \"failed to delete pod\") ginkgo.By(\"recreate pod\") pod = genPodHostIPs(podName+string(uuid.NewUUID()), false) p = 
podClient.CreateSync(ctx, pod) // Feature PodHostIPs is disabled, HostIPs should be nil gomega.Expect(p.Status.HostIPs).Should(gomega.BeNil()) newCfg.FeatureGates = map[string]bool{ string(kubefeatures.PodHostIPs): true, } updateKubeletConfig(ctx, f, newCfg, true) p, err = podClient.Get(ctx, pod.Name, metav1.GetOptions{}) framework.ExpectNoError(err) // Feature PodHostIPs is enabled, HostIPs should not be nil gomega.Expect(p.Status.HostIPs).ShouldNot(gomega.BeNil()) ginkgo.By(\"deleting the pod\") err = podClient.Delete(ctx, pod.Name, *metav1.NewDeleteOptions(1)) framework.ExpectNoError(err, \"failed to delete pod\") }) }) }) func genPodHostIPs(podName string, hostNetwork bool) *v1.Pod { return &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: podName, Labels: map[string]string{\"test\": \"dualstack-host-ips\"}, }, Spec: v1.PodSpec{ Containers: []v1.Container{ { Name: \"test-container\", Image: imageutils.GetE2EImage(imageutils.Agnhost), }, }, RestartPolicy: v1.RestartPolicyNever, HostNetwork: hostNetwork, }, } } func testDownwardAPI(ctx context.Context, f *framework.Framework, podName string, env []v1.EnvVar, expectations []string) { pod := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{"} {"_id":"doc-en-kubernetes-2c7bc61437be7446ffaf9173c34735d7135c8ccc2a86da4101f0743136414d75","title":"","text":"} kube::etcd::install() { local os local arch os=$(kube::util::host_os) arch=$(kube::util::host_arch) cd \"${KUBE_ROOT}/third_party\" || return 1 if [[ $(readlink etcd) == etcd-v${ETCD_VERSION}-${os}-* ]]; then kube::log::info \"etcd v${ETCD_VERSION} already installed. 
To use:\" kube::log::info \"export PATH=\"$(pwd)/etcd:${PATH}\"\" # export into current process PATH=\"$(pwd)/etcd:${PATH}\" export PATH return #already installed fi if [[ ${os} == \"darwin\" ]]; then download_file=\"etcd-v${ETCD_VERSION}-${os}-${arch}.zip\" url=\"https://github.com/etcd-io/etcd/releases/download/v${ETCD_VERSION}/${download_file}\" kube::util::download_file \"${url}\" \"${download_file}\" unzip -o \"${download_file}\" ln -fns \"etcd-v${ETCD_VERSION}-${os}-${arch}\" etcd rm \"${download_file}\" elif [[ ${os} == \"linux\" ]]; then url=\"https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/etcd-v${ETCD_VERSION}-${os}-${arch}.tar.gz\" download_file=\"etcd-v${ETCD_VERSION}-${os}-${arch}.tar.gz\" kube::util::download_file \"${url}\" \"${download_file}\" tar xzf \"${download_file}\" ln -fns \"etcd-v${ETCD_VERSION}-${os}-${arch}\" etcd rm \"${download_file}\" else kube::log::info \"${os} is NOT supported.\" fi kube::log::info \"etcd v${ETCD_VERSION} installed. To use:\" kube::log::info \"export PATH=\"$(pwd)/etcd:${PATH}\"\" # export into current process PATH=\"$(pwd)/etcd:${PATH}\" export PATH }"} 
{"_id":"doc-en-kubernetes-897f13579190a5c8d8a1e0e35cc310d066b23c66638a16e0d8faadca03eacf6c","title":"","text":"kube::util::require-jq kube::golang::setup_env kube::etcd::install make -C \"${KUBE_ROOT}\" WHAT=cmd/kube-apiserver"} {"_id":"doc-en-kubernetes-96afe983c4a4db63fea89916c5ab6e07112224ed7592f98e898406a3f0280563","title":"","text":"trap cleanup EXIT SIGINT kube::golang::setup_env TMP_DIR=${TMP_DIR:-$(kube::realpath \"$(mktemp -d -t \"$(basename \"$0\").XXXXXX\")\")} ETCD_HOST=${ETCD_HOST:-127.0.0.1} ETCD_PORT=${ETCD_PORT:-2379}"} {"_id":"doc-en-kubernetes-2eda084ecb25b594e923a9f1e4a72c64945109105af7662dd45b827669ac9170","title":"","text":"if err != nil { assert.NoErrorf(t, err, \"unable to create temp file\") } stdoutBuf := &bytes.Buffer{} stderrBuf := &bytes.Buffer{} containerID := \"fake-container-id\""} 
{"_id":"doc-en-kubernetes-98265b155578a94de2d8391a7d4a48d899ff560bac6662052e09cd4dbd93a80c","title":"","text":"ctx, cancel := context.WithCancel(context.Background()) defer cancel() // Start to follow the container's log. fileName := file.Name() go func(ctx context.Context) { podLogOptions := v1.PodLogOptions{ Follow: true, } opts := NewLogOptions(&podLogOptions, time.Now()) _ = ReadLogs(ctx, fileName, containerID, opts, fakeRuntimeService, stdoutBuf, stderrBuf) }(ctx) // log in stdout"} {"_id":"doc-en-kubernetes-a6df48e174beefffa0800f0afb38d17410ec77c007cc0658e26039ff501b959e","title":"","text":"// Let ReadLogs start. time.Sleep(50 * time.Millisecond) for line := 0; line < 10; line++ { // Write the first three lines to log file now := time.Now().Format(types.RFC3339NanoLenient)"} {"_id":"doc-en-kubernetes-b045e766d9cd50e0dcd15aa6d0d74a5ba9eeff580db30f9bf6825059a245d567","title":"","text":"v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/resource\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" \"k8s.io/kubernetes/pkg/features\" kubeletconfig \"k8s.io/kubernetes/pkg/kubelet/apis/config\" \"k8s.io/kubernetes/test/e2e/framework\" e2epod \"k8s.io/kubernetes/test/e2e/framework/pod\""} {"_id":"doc-en-kubernetes-04af650392faedb6456b1e7d0b7e32d9542580b3826b7ef1c0265399389480d9","title":"","text":"}) ginkgo.It(\"The containers terminated by OOM killer should have the reason set to OOMKilled\", func() { cfg, configErr := getCurrentKubeletConfig(context.TODO()) framework.ExpectNoError(configErr) if utilfeature.DefaultFeatureGate.Enabled(features.NodeSwap) { // If Swap is enabled, we should test OOM with LimitedSwap. // UnlimitedSwap allows for workloads to use unbounded swap which // makes testing OOM challenging. 
// We are not able to change the default for these conformance tests, // so we will skip these tests if swap is enabled. if cfg.MemorySwap.SwapBehavior == \"\" || cfg.MemorySwap.SwapBehavior == \"UnlimitedSwap\" { ginkgo.Skip(\"OOMKiller should not run with UnlimitedSwap\") } } ginkgo.By(\"Waiting for the pod to be failed\") err := e2epod.WaitForPodTerminatedInNamespace(context.TODO(), f.ClientSet, testCase.podSpec.Name, \"\", f.Namespace.Name) framework.ExpectNoError(err, \"Failed waiting for pod to terminate, %s/%s\", f.Namespace.Name, testCase.podSpec.Name)"} {"_id":"doc-en-kubernetes-c79d4923510889bc39a23b8743886aba8bc61aa906f6fd59830e57806b616e8b","title":"","text":"package storage import ( utilfeature \"k8s.io/apiserver/pkg/util/feature\" featuregatetesting \"k8s.io/component-base/featuregate/testing\" \"k8s.io/kubernetes/pkg/features\" \"context\" \"testing\" \"time\" \"github.com/google/go-cmp/cmp\" apiequality \"k8s.io/apimachinery/pkg/api/equality\" \"k8s.io/apimachinery/pkg/api/resource\""} {"_id":"doc-en-kubernetes-f0bf8d5c2563a6d2fd426b7e4a219f692f00a1db4ab0135e05f41cb64b890588","title":"","text":"genericregistrytest \"k8s.io/apiserver/pkg/registry/generic/testing\" \"k8s.io/apiserver/pkg/registry/rest\" etcd3testing \"k8s.io/apiserver/pkg/storage/etcd3/testing\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" featuregatetesting \"k8s.io/component-base/featuregate/testing\" api \"k8s.io/kubernetes/pkg/apis/core\" \"k8s.io/kubernetes/pkg/features\" \"k8s.io/kubernetes/pkg/registry/core/persistentvolume\" \"k8s.io/kubernetes/pkg/registry/registrytest\" )"} {"_id":"doc-en-kubernetes-6b45ccf8bd6e710cb15fc14017c7c532f3e6a3232c49691223f7d87fc55abf4c","title":"","text":"t.Errorf(\"unexpected error: %v\", err) } // We need to set custom timestamp which is not the same as the one on existing PV // - doing so will prevent timestamp update on phase change and custom one is used instead. 
pvStartTimestamp = &metav1.Time{Time: pvStartTimestamp.Time.Add(time.Second)} pvIn := &api.PersistentVolume{ ObjectMeta: metav1.ObjectMeta{ Name: \"foo\", }, Status: api.PersistentVolumeStatus{ Phase: api.VolumeBound, LastPhaseTransitionTime: pvStartTimestamp, }, }"} {"_id":"doc-en-kubernetes-cb5e35350061f2d51703de7eef3e595ffa001d99226c06afce9dfe6e9ffd454a","title":"","text":"\"k8s.io/apimachinery/pkg/util/sets\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" featuregatetesting \"k8s.io/component-base/featuregate/testing\" \"k8s.io/component-base/metrics/legacyregistry\" \"k8s.io/component-base/metrics/testutil\" \"k8s.io/klog/v2\" \"k8s.io/kubernetes/pkg/features\""} {"_id":"doc-en-kubernetes-59a30f80a0ec2ccb453bdb3bf7d4a0d7b721f072809f6e1374cb96e74cd9ab99","title":"","text":"ipt := iptablestest.NewFake() fp := NewFakeProxier(ipt) metrics.RegisterMetrics() defer legacyregistry.Reset() // Create initial state var svc2 *v1.Service"} {"_id":"doc-en-kubernetes-015cca6cb4332e5626d69456650d1be5a010f1d2cfe7b31ebe59c5c22e03ba76","title":"","text":"if fp.needFullSync { t.Fatalf(\"Proxier unexpectedly already needs a full sync?\") } partialRestoreFailures, err := testutil.GetCounterMetricValue(metrics.IptablesPartialRestoreFailuresTotal) if err != nil { t.Fatalf(\"Could not get partial restore failures metric: %v\", err) } if partialRestoreFailures != 0.0 { t.Errorf(\"Already did a partial resync? 
Something failed earlier!\") }"} {"_id":"doc-en-kubernetes-36310cf7e8a2de8cb1c5b96162764c71d8584ce9aec5908126c4d166c06cc5d6","title":"","text":"if !fp.needFullSync { t.Errorf(\"Proxier did not fail on previous partial resync?\") } updatedPartialRestoreFailures, err := testutil.GetCounterMetricValue(metrics.IptablesPartialRestoreFailuresTotal) if err != nil { t.Errorf(\"Could not get partial restore failures metric: %v\", err) } if updatedPartialRestoreFailures != partialRestoreFailures+1.0 { t.Errorf(\"Partial restore failures metric was not incremented after failed partial resync (expected %.02f, got %.02f)\", partialRestoreFailures+1.0, updatedPartialRestoreFailures) } // On retry we should do a full resync, which should succeed (and delete svc4)"} {"_id":"doc-en-kubernetes-adf2f74349521d883a8a2da7ecc2f46ab0adad61319b9c10474c373218062e8e","title":"","text":"return false, fmt.Errorf(\"mountVolume.NodeExpandVolume get PVC failed : %v\", err) } if volumeToMount.VolumeSpec.ReadOnly { simpleMsg, detailedMsg := volumeToMount.GenerateMsg(\"MountVolume.NodeExpandVolume failed\", \"requested read-only file system\") 
klog.Warningf(detailedMsg) og.recorder.Eventf(volumeToMount.Pod, v1.EventTypeWarning, kevents.FileSystemResizeFailed, simpleMsg) og.recorder.Eventf(pvc, v1.EventTypeWarning, kevents.FileSystemResizeFailed, simpleMsg) return true, nil } pvcStatusCap := pvc.Status.Capacity[v1.ResourceStorage] pvSpecCap := pv.Spec.Capacity[v1.ResourceStorage] if pvcStatusCap.Cmp(pvSpecCap) < 0 { rsOpts.NewSize = pvSpecCap rsOpts.OldSize = pvcStatusCap resizeOp := nodeResizeOperationOpts{"} {"_id":"doc-en-kubernetes-736f614f13e1fc0ab367d19a068093eb556327c9414c4498d2be5e79b613f535","title":"","text":"} switch { case o.Watch: if len(o.SortBy) > 0 { fmt.Fprintf(o.IOStreams.ErrOut, \"warning: --watch requested, --sort-by will be ignored for watch events received\\n\") } case o.WatchOnly: if len(o.SortBy) > 0 { fmt.Fprintf(o.IOStreams.ErrOut, \"warning: --watch-only requested, --sort-by will be ignored\\n\") } default: if len(args) == 0 && cmdutil.IsFilenameSliceEmpty(o.Filenames, o.Kustomize) {"} {"_id":"doc-en-kubernetes-596f9cc851c3d715a7112830195858e025a65fffec6a31f26b01c09af03f5e69","title":"","text":"scheduledJobTimeout = 5 * time.Minute ) var _ = framework.KubeDescribe(\"ScheduledJob\", func() { options := framework.FrameworkOptions{ ClientQPS: 20, ClientBurst: 50,"} {"_id":"doc-en-kubernetes-d57dda2ff8658c11477eda68d93377b8b4a64ba05aed46bd072bded9887ffc47","title":"","text":"nodeAllocatableMilliCPU2, nodeAvailableMilliCPU2 := getNodeAllocatableAndAvailableMilliCPUValues(&node) framework.Logf(\"TEST2: Node '%s': NodeAllocatable MilliCPUs = %dm. 
MilliCPUs currently available to allocate = %dm.\", node.Name, nodeAllocatableMilliCPU2, nodeAvailableMilliCPU2) testPod3CPUQuantity := resource.NewMilliQuantity(nodeAvailableMilliCPU2+testPod1CPUQuantity.MilliValue()/2, resource.DecimalSI) testPod3CPUQuantity := resource.NewMilliQuantity(nodeAvailableMilliCPU2+testPod1CPUQuantity.MilliValue()/4, resource.DecimalSI) testPod1CPUQuantityResized := resource.NewMilliQuantity(testPod1CPUQuantity.MilliValue()/3, resource.DecimalSI) framework.Logf(\"TEST2: testPod1 MilliCPUs after resize '%dm'\", testPod1CPUQuantityResized.MilliValue())"} {"_id":"doc-en-kubernetes-4573a4186b3a3846b3ba2abac232fdcb94329e5e5d073cb98fd263a091f3f41d","title":"","text":" # See the OWNERS docs at https://go.k8s.io/owners approvers: - aojea - danwinship reviewers: - aojea - danwinship "} {"_id":"doc-en-kubernetes-1b607b2373869277a791068b1924681c938506b607fbfd837cd02bcf8fdace64","title":"","text":" kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: system:network-policies namespace: kube-system labels: addonmanager.kubernetes.io/mode: Reconcile rules: - apiGroups: [\"\"] resources: - pods - nodes - namespaces verbs: - get - watch - list # Watch for changes to Kubernetes NetworkPolicies. 
- apiGroups: [\"networking.k8s.io\"] resources: - networkpolicies verbs: - watch - list --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: kube-network-policies labels: addonmanager.kubernetes.io/mode: Reconcile roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:network-policies subjects: - kind: ServiceAccount name: kube-network-policies namespace: kube-system --- apiVersion: v1 kind: ServiceAccount metadata: name: kube-network-policies namespace: kube-system labels: k8s-app: kube-network-policies kubernetes.io/cluster-service: \"true\" addonmanager.kubernetes.io/mode: Reconcile"} {"_id":"doc-en-kubernetes-b0deff0c54a1a8918ed4d4e0cededb15a64f32039b3f975f3084a371887d7da4","title":"","text":" --- apiVersion: apps/v1 kind: DaemonSet metadata: name: kube-network-policies namespace: kube-system labels: tier: node app: kube-network-policies k8s-app: kube-network-policies addonmanager.kubernetes.io/mode: Reconcile spec: selector: matchLabels: app: kube-network-policies template: metadata: labels: tier: node app: kube-network-policies k8s-app: kube-network-policies spec: hostNetwork: true tolerations: - operator: Exists effect: NoSchedule serviceAccountName: kube-network-policies containers: - name: kube-network-policies image: registry.k8s.io/networking/kube-network-policies:v0.1.0 command: - /bin/sh - -c - /bin/netpol -v 4 1>>/var/log/kube-network-policies.log 2>&1 resources: requests: cpu: \"100m\" memory: \"50Mi\" securityContext: privileged: true volumeMounts: - mountPath: /var/log name: varlog readOnly: false - mountPath: /lib/modules name: lib-modules readOnly: true volumes: - name: varlog hostPath: path: /var/log - name: lib-modules hostPath: path: /lib/modules "} {"_id":"doc-en-kubernetes-131c9b2cd11f47888fb2750d9b0349952c91e7f854f8bf629060d5cc15f8069f","title":"","text":"local -r ds_file=\"${dst_dir}/calico-policy-controller/calico-node-daemonset.yaml\" sed -i -e 
\"s@__CALICO_CNI_DIR__@/home/kubernetes/bin@g\" \"${ds_file}\" fi if [[ \"${NETWORK_POLICY_PROVIDER:-}\" == \"kube-network-policies\" ]]; then setup-addon-manifests \"addons\" \"kube-network-policies\" fi if [[ \"${ENABLE_DEFAULT_STORAGE_CLASS:-}\" == \"true\" ]]; then setup-addon-manifests \"addons\" \"storage-class/gce\" fi"} {"_id":"doc-en-kubernetes-e1344680161244adc26ec477cf6ad6bf8e0bc6d3f484e7c1aca4ae98774ed635","title":"","text":"} EOF if [[ \"${KUBERNETES_MASTER:-}\" != \"true\" ]]; then if [[ \"${NETWORK_POLICY_PROVIDER:-\"none\"}\" == \"calico\" || \"${ENABLE_NETD:-}\" == \"true\" ]]; then # Use Kubernetes cni daemonset on node if network policy provider calico is specified # or netd is enabled. cni_template_path=\"\" fi"} {"_id":"doc-en-kubernetes-29056360cb95507015afa8da4694f134491db736ca48ef5e99c9b87971fc86a4","title":"","text":"readonly gcloud_supported_providers=\"gce gke\" readonly master_logfiles=\"kube-apiserver.log kube-apiserver-audit.log kube-scheduler.log kube-controller-manager.log cloud-controller-manager.log etcd.log etcd-events.log glbc.log cluster-autoscaler.log kube-addon-manager.log konnectivity-server.log fluentd.log kubelet.cov\" readonly node_logfiles=\"kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov kube-network-policies.log\" readonly node_systemd_services=\"node-problem-detector\" readonly hollow_node_logfiles=\"kubelet-hollow-node-*.log kubeproxy-hollow-node-*.log npd-hollow-node-*.log\" readonly aws_logfiles=\"cloud-init-output.log\""}
createAuthenticationV1SelfSubjectReview reason: Cluster providers are allowed to choose to not serve this API, and the whoami command handles unavailability gracefully. link: https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/3325-self-subject-attributes-review-api/README.md#ga - endpoint: getInternalApiserverAPIGroup reason: The endpoint is part of a group that has no stable endpoints yet, reporting as a stable endpoint in apisnoop. link: https://github.com/kubernetes/issues/124248 - endpoint: getResourceAPIGroup reason: The endpoint is part of a group that has no stable endpoints yet, reporting as a stable endpoint in apisnoop. link: https://github.com/kubernetes/issues/124248 - endpoint: getStoragemigrationAPIGroup reason: The endpoint is part of a group that has no stable endpoints yet, reporting as a stable endpoint in apisnoop. link: https://github.com/kubernetes/issues/124248 "} {"_id":"doc-en-kubernetes-8d3386d3b99a97d6db14e960531d7ead855a42b1bea69743aadb32b7473933f4","title":"","text":"- deleteCoreV1Node - deleteStorageV1CollectionCSINode - deleteStorageV1CSINode - getInternalApiserverAPIGroup - getResourceAPIGroup - getStoragemigrationAPIGroup - listStorageV1CSINode - patchStorageV1CSINode - patchStorageV1VolumeAttachmentStatus"} {"_id":"doc-en-kubernetes-9f352892a7fa57bcc97f1de9b7140035c3e647ba5dd3ed85e340f48b8d8b7bb0","title":"","text":"// Introduce some small jittering to ensure that over time the requests won't start // accumulating at approximately the same time from the set of nodes due to priority and // fairness effect. 
go func() { // Call updateRuntimeUp once before syncNodeStatus to make sure kubelet had already checked runtime state // otherwise, when the kubelet restarts, syncNodeStatus will report the node as notReady in the first report period kl.updateRuntimeUp() wait.JitterUntil(kl.syncNodeStatus, kl.nodeStatusUpdateFrequency, 0.04, true, wait.NeverStop) }() go kl.fastStatusUpdateOnce() // start syncing lease"} {"_id":"doc-en-kubernetes-633c6bb622529525269f94f085be986dd89026d4d079018225a328a157b46439","title":"","text":"# Check to see if I can access the URL /logs/ kubectl auth can-i get /logs/ # Check to see if I can approve certificates.k8s.io kubectl auth can-i approve certificates.k8s.io # List all allowed actions in namespace \"foo\" kubectl auth can-i --list --namespace=foo`) resourceVerbs = sets.NewString(\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\", \"use\", \"bind\", \"impersonate\", \"*\", \"approve\") nonResourceURLVerbs = sets.NewString(\"get\", \"put\", \"post\", \"head\", \"options\", \"delete\", \"patch\", \"*\") // holds all the server-supported resources that cannot be discovered by clients. i.e. 
users and groups for the impersonate verb nonStandardResourceNames = sets.NewString(\"users\", \"groups\")"} {"_id":"doc-en-kubernetes-b078998db8f7486dd516e2abaa4ffad5683dd67a81275e2ecfda24071daaf2a9","title":"","text":"default: errString := \"you must specify two arguments: verb resource or verb resource/resourceName.\" usageString := \"See 'kubectl auth can-i -h' for help and examples.\" return fmt.Errorf(\"%s\\n%s\", errString, usageString) } }"} {"_id":"doc-en-kubernetes-7801e9ef3dc5ddd4db1c2902f0461319857aa99328108dacf3dfceba3be2e89f","title":"","text":"\"k8s.io/apimachinery/pkg/util/version\" \"k8s.io/klog/v2\" \"k8s.io/kubernetes/pkg/kubelet/userns/inuserns\" utilkernel \"k8s.io/kubernetes/pkg/util/kernel\" \"k8s.io/mount-utils\" )"} {"_id":"doc-en-kubernetes-ba502f4233cba8f0a59cdcbaa3a63c2a43a5244c816fa161f79cc71fd807351d","title":"","text":"return false } if inuserns.RunningInUserNS() { // Turning off swap in unprivileged tmpfs mounts unsupported // https://github.com/torvalds/linux/blob/v6.8/mm/shmem.c#L4004-L4011 // https://github.com/kubernetes/kubernetes/issues/125137 klog.InfoS(\"Running under a user namespace - tmpfs noswap is not supported\") return false } kernelVersion, err := utilkernel.GetVersion() if err != nil { klog.ErrorS(err, \"cannot determine kernel version, unable to determine if tmpfs noswap is supported\")
class.ParametersRef.Name), klog.KObjSlice(objs))) } }"} {"_id":"doc-en-kubernetes-555210586c074f6777944574c07886a890eb6d90f2fcc56156ac728f6587bdcc","title":"","text":"} return parameters, nil default: sort.Slice(objs, func(i, j int) bool { obj1, obj2 := objs[i].(*resourcev1alpha2.ResourceClaimParameters), objs[j].(*resourcev1alpha2.ResourceClaimParameters) if obj1 == nil || obj2 == nil { return false } return obj1.Name < obj2.Name }) return nil, statusError(logger, fmt.Errorf(\"multiple generated claim parameters for %s.%s %s found: %s\", claim.Spec.ParametersRef.Kind, claim.Spec.ParametersRef.APIGroup, klog.KRef(claim.Namespace, claim.Spec.ParametersRef.Name), klog.KObjSlice(objs))) } }"} {"_id":"doc-en-kubernetes-df95bc4231687a001d0dc9c585503aeac17795b32cab1f70d83fc47eeddb4547","title":"","text":"if sysruntime.GOOS == \"linux\" { // AppArmor is a Linux kernel security module and it does not support other operating systems. klet.appArmorValidator = apparmor.NewValidator() klet.admitHandlers.AddPodAdmitHandler(lifecycle.NewAppArmorAdmitHandler(klet.appArmorValidator)) } leaseDuration := time.Duration(kubeCfg.NodeLeaseDurationSeconds) * time.Second"} {"_id":"doc-en-kubernetes-0ff903d824d296204cbf1da5deeb96aa2410ea7120fb43fafa8550778673ca06","title":"","text":"// the list of handlers to call during pod admission. admitHandlers lifecycle.PodAdmitHandlers // softAdmitHandlers are applied to the pod after it is admitted by the Kubelet, but before it is // run. A pod rejected by a softAdmitHandler will be left in a Pending state indefinitely. If a // rejected pod should not be recreated, or the scheduler is not aware of the rejection rule, the // admission rule should be applied by a softAdmitHandler. softAdmitHandlers lifecycle.PodAdmitHandlers // the list of handlers to call during pod sync loop. 
lifecycle.PodSyncLoopHandlers"} {"_id":"doc-en-kubernetes-4460f4e927ff74a2e690790c6187023ddb2add17bdf356bb7da8ed5dfe46f841","title":"","text":"return isTerminal, nil } // If the pod should not be running, we request the pod's containers be stopped. This is not the same // as termination (we want to stop the pod, but potentially restart it later if soft admission allows // it later). Set the status and phase appropriately runnable := kl.canRunPod(pod) if !runnable.Admit { // Pod is not runnable; and update the Pod and Container statuses to why. if apiPodStatus.Phase != v1.PodFailed && apiPodStatus.Phase != v1.PodSucceeded { apiPodStatus.Phase = v1.PodPending } apiPodStatus.Reason = runnable.Reason apiPodStatus.Message = runnable.Message // Waiting containers are not creating. const waitingReason = \"Blocked\" for _, cs := range apiPodStatus.InitContainerStatuses { if cs.State.Waiting != nil { cs.State.Waiting.Reason = waitingReason } } for _, cs := range apiPodStatus.ContainerStatuses { if cs.State.Waiting != nil { cs.State.Waiting.Reason = waitingReason } } } // Record the time it takes for the pod to become running // since kubelet first saw the pod if firstSeenTime is set. 
existingStatus, ok := kl.statusManager.GetPodStatus(pod.UID)"} {"_id":"doc-en-kubernetes-a74beac77b1758ed8c36f5473065d0ae1cf31d37bd350414654b4c655427c700","title":"","text":"kl.statusManager.SetPodStatus(pod, apiPodStatus) // Pods that are not runnable must be stopped - return a typed error to the pod worker if !runnable.Admit { klog.V(2).InfoS(\"Pod is not runnable and must have running containers stopped\", \"pod\", klog.KObj(pod), \"podUID\", pod.UID, \"message\", runnable.Message) var syncErr error p := kubecontainer.ConvertPodStatusToRunningPod(kl.getRuntime().Type(), podStatus) if err := kl.killPod(ctx, pod, p, nil); err != nil { if !wait.Interrupted(err) { kl.recorder.Eventf(pod, v1.EventTypeWarning, events.FailedToKillPod, \"error killing pod: %v\", err) syncErr = fmt.Errorf(\"error killing pod: %w\", err) utilruntime.HandleError(syncErr) } } else { // There was no error killing the pod, but the pod cannot be run. // Return an error to signal that the sync loop should back off. syncErr = fmt.Errorf(\"pod cannot be run: %v\", runnable.Message) } return false, syncErr } // If the network plugin is not ready, only start the pod if it uses the host network if err := kl.runtimeState.networkErrors(); err != nil && !kubecontainer.IsHostNetworkPod(pod) { kl.recorder.Eventf(pod, v1.EventTypeWarning, events.NetworkNotReady, \"%s: %v\", NetworkNotReadyErrorMsg, err)"} {"_id":"doc-en-kubernetes-68f2b0e74378a79643adf685ff6517fdf150ded87f0d20c5de606be04df5da74","title":"","text":"return true, \"\", \"\" } func (kl *Kubelet) canRunPod(pod *v1.Pod) lifecycle.PodAdmitResult { attrs := &lifecycle.PodAdmitAttributes{Pod: pod} // Get \"OtherPods\". Rejected pods are failed, so only include admitted pods that are alive. attrs.OtherPods = kl.GetActivePods() for _, handler := range kl.softAdmitHandlers { if result := handler.Admit(attrs); !result.Admit { return result } } return lifecycle.PodAdmitResult{Admit: true} } // syncLoop is the main loop for processing changes. 
It watches for changes from // three channels (file, apiserver, and http) and creates a union of them. For // any new change seen, will run a sync against desired state and running state. If"} {"_id":"doc-en-kubernetes-17f0da01b2e4ac0bf482dae578d7e6db4e5192b139cc341a84bd291fe20d0fce","title":"","text":"if err != nil { t.Fatalf(\"Couldn't create cacher: %v\", err) } if utilfeature.DefaultFeatureGate.Enabled(features.ResilientWatchCacheInitialization) { // The tests assume that Get/GetList/Watch calls shouldn't fail. // However, 429 error can now be returned if watchcache is under initialization. // To avoid rewriting all tests, we wait for watchcache to initialize. if err := cacher.Wait(context.Background()); err != nil { t.Fatal(err) } } d := destroyFunc s = cacher destroyFunc = func() {"} {"_id":"doc-en-kubernetes-0c613eaa0b7b23ac4311758e54dd174bafaa91dddfd830a3a1e77649bbc0487d","title":"","text":"return nil } // Wait blocks until the cacher is Ready or Stopped, it returns an error if Stopped. func (c *Cacher) Wait(ctx context.Context) error { return c.ready.wait(ctx) } // errWatcher implements watch.Interface to return a single error type errWatcher struct { result chan watch.Event"} {"_id":"doc-en-kubernetes-874a37a08f94b7f533ef1311a6539973b6439f55f661cf3d294eda7711a37455","title":"","text":"if utilfeature.DefaultFeatureGate.Enabled(features.ResilientWatchCacheInitialization) { // The tests assume that Get/GetList/Watch calls shouldn't fail. // However, 429 error can now be returned if watchcache is under initialization. // To avoid rewriting all tests, we wait for watchcache to initialize. if err := cacher.Wait(ctx); err != nil { t.Fatal(err) } }"} {"_id":"doc-en-kubernetes-c6297158d2286fac93f67459d58047b5ec80f8e64453277a398ddec31e01eba5","title":"","text":"// The tests assume that Get/GetList/Watch calls shouldn't fail.
// However, 429 error can now be returned if watchcache is under initialization. // To avoid rewriting all tests, we wait for watchcache to initialize. if err := cacher.Wait(context.Background()); err != nil { return nil, storage.APIObjectVersioner{}, err } }"} {"_id":"doc-en-kubernetes-16b5d66e99ca662616cef7bcb96f1ab6f214d1f1bdabfb8f5000c6ada3737d57","title":"","text":" /* Copyright 2022 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package plugin import ( \"context\" \"errors\" \"fmt\" \"net\" \"sync\" \"time\" \"google.golang.org/grpc\" \"google.golang.org/grpc/connectivity\" \"google.golang.org/grpc/credentials/insecure\" utilversion \"k8s.io/apimachinery/pkg/util/version\" \"k8s.io/klog/v2\" drapb \"k8s.io/kubelet/pkg/apis/dra/v1alpha4\" ) // NewDRAPluginClient returns a wrapper around those gRPC methods of a DRA // driver kubelet plugin which need to be called by kubelet. The wrapper // handles gRPC connection management and logging. Connections are reused // across different NewDRAPluginClient calls.
func NewDRAPluginClient(pluginName string) (*Plugin, error) { if pluginName == \"\" { return nil, fmt.Errorf(\"plugin name is empty\") } existingPlugin := draPlugins.get(pluginName) if existingPlugin == nil { return nil, fmt.Errorf(\"plugin name %s not found in the list of registered DRA plugins\", pluginName) } return existingPlugin, nil } type Plugin struct { backgroundCtx context.Context cancel func(cause error) mutex sync.Mutex conn *grpc.ClientConn endpoint string highestSupportedVersion *utilversion.Version clientCallTimeout time.Duration } func (p *Plugin) getOrCreateGRPCConn() (*grpc.ClientConn, error) { p.mutex.Lock() defer p.mutex.Unlock() if p.conn != nil { return p.conn, nil } ctx := p.backgroundCtx logger := klog.FromContext(ctx) network := \"unix\" logger.V(4).Info(\"Creating new gRPC connection\", \"protocol\", network, \"endpoint\", p.endpoint) // grpc.Dial is deprecated. grpc.NewClient should be used instead. // For now this gets ignored because this function is meant to establish // the connection, with the one second timeout below. Perhaps that // approach should be reconsidered? 
//nolint:staticcheck conn, err := grpc.Dial( p.endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithContextDialer(func(ctx context.Context, target string) (net.Conn, error) { return (&net.Dialer{}).DialContext(ctx, network, target) }), ) if err != nil { return nil, err } ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() if ok := conn.WaitForStateChange(ctx, connectivity.Connecting); !ok { return nil, errors.New(\"timed out waiting for gRPC connection to be ready\") } p.conn = conn return p.conn, nil } func (p *Plugin) NodePrepareResources( ctx context.Context, req *drapb.NodePrepareResourcesRequest, opts ...grpc.CallOption, ) (*drapb.NodePrepareResourcesResponse, error) { logger := klog.FromContext(ctx) logger.V(4).Info(\"Calling NodePrepareResources rpc\", \"request\", req) conn, err := p.getOrCreateGRPCConn() if err != nil { return nil, err } ctx, cancel := context.WithTimeout(ctx, p.clientCallTimeout) defer cancel() nodeClient := drapb.NewNodeClient(conn) response, err := nodeClient.NodePrepareResources(ctx, req) logger.V(4).Info(\"Done calling NodePrepareResources rpc\", \"response\", response, \"err\", err) return response, err } func (p *Plugin) NodeUnprepareResources( ctx context.Context, req *drapb.NodeUnprepareResourcesRequest, opts ...grpc.CallOption, ) (*drapb.NodeUnprepareResourcesResponse, error) { logger := klog.FromContext(ctx) logger.V(4).Info(\"Calling NodeUnprepareResource rpc\", \"request\", req) conn, err := p.getOrCreateGRPCConn() if err != nil { return nil, err } ctx, cancel := context.WithTimeout(ctx, p.clientCallTimeout) defer cancel() nodeClient := drapb.NewNodeClient(conn) response, err := nodeClient.NodeUnprepareResources(ctx, req) logger.V(4).Info(\"Done calling NodeUnprepareResources rpc\", \"response\", response, \"err\", err) return response, err } "} {"_id":"doc-en-kubernetes-825f3807cfa161a4ddffe2e8ab04417734cc47ff6f0541094e56f197b9659dba","title":"","text":" /* Copyright 
2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package plugin import ( \"context\" \"fmt\" \"net\" \"os\" \"path/filepath\" \"sync\" \"testing\" \"github.com/stretchr/testify/assert\" \"google.golang.org/grpc\" drapb \"k8s.io/kubelet/pkg/apis/dra/v1alpha4\" \"k8s.io/kubernetes/test/utils/ktesting\" ) const ( v1alpha4Version = \"v1alpha4\" ) type fakeV1alpha4GRPCServer struct { drapb.UnimplementedNodeServer } var _ drapb.NodeServer = &fakeV1alpha4GRPCServer{} func (f *fakeV1alpha4GRPCServer) NodePrepareResources(ctx context.Context, in *drapb.NodePrepareResourcesRequest) (*drapb.NodePrepareResourcesResponse, error) { return &drapb.NodePrepareResourcesResponse{Claims: map[string]*drapb.NodePrepareResourceResponse{\"claim-uid\": { Devices: []*drapb.Device{ { RequestNames: []string{\"test-request\"}, CDIDeviceIDs: []string{\"test-cdi-id\"}, }, }, }}}, nil } func (f *fakeV1alpha4GRPCServer) NodeUnprepareResources(ctx context.Context, in *drapb.NodeUnprepareResourcesRequest) (*drapb.NodeUnprepareResourcesResponse, error) { return &drapb.NodeUnprepareResourcesResponse{}, nil } type tearDown func() func setupFakeGRPCServer(version string) (string, tearDown, error) { p, err := os.MkdirTemp(\"\", \"dra_plugin\") if err != nil { return \"\", nil, err } closeCh := make(chan struct{}) addr := filepath.Join(p, \"server.sock\") teardown := func() { close(closeCh) os.RemoveAll(addr) } listener, err := net.Listen(\"unix\", addr) if err != nil { teardown() return \"\", nil, err } s := 
grpc.NewServer() switch version { case v1alpha4Version: fakeGRPCServer := &fakeV1alpha4GRPCServer{} drapb.RegisterNodeServer(s, fakeGRPCServer) default: return \"\", nil, fmt.Errorf(\"unsupported version: %s\", version) } go func() { go s.Serve(listener) <-closeCh s.GracefulStop() }() return addr, teardown, nil } func TestGRPCConnIsReused(t *testing.T) { tCtx := ktesting.Init(t) addr, teardown, err := setupFakeGRPCServer(v1alpha4Version) if err != nil { t.Fatal(err) } defer teardown() reusedConns := make(map[*grpc.ClientConn]int) wg := sync.WaitGroup{} m := sync.Mutex{} p := &Plugin{ backgroundCtx: tCtx, endpoint: addr, clientCallTimeout: defaultClientCallTimeout, } conn, err := p.getOrCreateGRPCConn() defer func() { err := conn.Close() if err != nil { t.Error(err) } }() if err != nil { t.Fatal(err) } // ensure the plugin we are using is registered draPlugins.add(\"dummy-plugin\", p) defer draPlugins.delete(\"dummy-plugin\") // we call `NodePrepareResource` 2 times and check whether a new connection is created or the same is reused for i := 0; i < 2; i++ { wg.Add(1) go func() { defer wg.Done() client, err := NewDRAPluginClient(\"dummy-plugin\") if err != nil { t.Error(err) return } req := &drapb.NodePrepareResourcesRequest{ Claims: []*drapb.Claim{ { Namespace: \"dummy-namespace\", UID: \"dummy-uid\", Name: \"dummy-claim\", }, }, } _, err = client.NodePrepareResources(tCtx, req) assert.NoError(t, err) client.mutex.Lock() conn := client.conn client.mutex.Unlock() m.Lock() defer m.Unlock() reusedConns[conn]++ }() } wg.Wait() // We should have only one entry otherwise it means another gRPC connection has been created if len(reusedConns) != 1 { t.Errorf(\"expected length to be 1 but got %d\", len(reusedConns)) } if counter, ok := reusedConns[conn]; ok && counter != 2 { t.Errorf(\"expected counter to be 2 but got %d\", counter) } } func TestNewDRAPluginClient(t *testing.T) { for _, test := range []struct { description string setup func(string) tearDown pluginName string 
shouldError bool }{ { description: \"plugin name is empty\", setup: func(_ string) tearDown { return func() {} }, pluginName: \"\", shouldError: true, }, { description: \"plugin name not found in the list\", setup: func(_ string) tearDown { return func() {} }, pluginName: \"plugin-name-not-found-in-the-list\", shouldError: true, }, { description: \"plugin exists\", setup: func(name string) tearDown { draPlugins.add(name, &Plugin{}) return func() { draPlugins.delete(name) } }, pluginName: \"dummy-plugin\", }, } { t.Run(test.description, func(t *testing.T) { teardown := test.setup(test.pluginName) defer teardown() client, err := NewDRAPluginClient(test.pluginName) if test.shouldError { assert.Nil(t, client) assert.Error(t, err) } else { assert.NotNil(t, client) assert.Nil(t, err) } }) } } func TestNodeUnprepareResources(t *testing.T) { for _, test := range []struct { description string serverSetup func(string) (string, tearDown, error) serverVersion string request *drapb.NodeUnprepareResourcesRequest }{ { description: \"server supports v1alpha4\", serverSetup: setupFakeGRPCServer, serverVersion: v1alpha4Version, request: &drapb.NodeUnprepareResourcesRequest{}, }, } { t.Run(test.description, func(t *testing.T) { tCtx := ktesting.Init(t) addr, teardown, err := setupFakeGRPCServer(test.serverVersion) if err != nil { t.Fatal(err) } defer teardown() p := &Plugin{ backgroundCtx: tCtx, endpoint: addr, clientCallTimeout: defaultClientCallTimeout, } conn, err := p.getOrCreateGRPCConn() defer func() { err := conn.Close() if err != nil { t.Error(err) } }() if err != nil { t.Fatal(err) } draPlugins.add(\"dummy-plugin\", p) defer draPlugins.delete(\"dummy-plugin\") client, err := NewDRAPluginClient(\"dummy-plugin\") if err != nil { t.Fatal(err) } _, err = client.NodeUnprepareResources(tCtx, test.request) if err != nil { t.Fatal(err) } }) } } "} {"_id":"doc-en-kubernetes-eb48251297b2ad77b0249236beb9681e9436ecfca440d7d8e68888932ae9779d","title":"","text":"\"context\" \"errors\" 
\"fmt\" \"net\" \"sync\" \"time\" v1 \"k8s.io/api/core/v1\" resourceapi \"k8s.io/api/resource/v1alpha3\" apierrors \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/fields\" \"google.golang.org/grpc\" \"google.golang.org/grpc/connectivity\" \"google.golang.org/grpc/credentials/insecure\" utilversion \"k8s.io/apimachinery/pkg/util/version\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/client-go/kubernetes\" \"k8s.io/klog/v2\" \"k8s.io/kubernetes/pkg/kubelet/pluginmanager/cache\" drapb \"k8s.io/kubelet/pkg/apis/dra/v1alpha4\" ) // defaultClientCallTimeout is the default amount of time that a DRA driver has // to respond to any of the gRPC calls. kubelet uses this value by passing nil // to RegisterPlugin. Some tests use a different, usually shorter timeout to // speed up testing. // // This is half of the kubelet retry period (according to // https://github.com/kubernetes/kubernetes/commit/0449cef8fd5217d394c5cd331d852bd50983e6b3). const defaultClientCallTimeout = 45 * time.Second // RegistrationHandler is the handler which is fed to the pluginwatcher API. type RegistrationHandler struct { // backgroundCtx is used for all future activities of the handler. // This is necessary because it implements APIs which don't // provide a context. backgroundCtx context.Context kubeClient kubernetes.Interface getNode func() (*v1.Node, error) } // NewDRAPluginClient returns a wrapper around those gRPC methods of a DRA // driver kubelet plugin which need to be called by kubelet. The wrapper // handles gRPC connection management and logging. Connections are reused // across different NewDRAPluginClient calls. func NewDRAPluginClient(pluginName string) (*Plugin, error) { if pluginName == \"\" { return nil, fmt.Errorf(\"plugin name is empty\") } var _ cache.PluginHandler = &RegistrationHandler{} // NewPluginHandler returns new registration handler. 
// // Must only be called once per process because it manages global state. // If a kubeClient is provided, then it synchronizes ResourceSlices // with the resource information provided by plugins. func NewRegistrationHandler(kubeClient kubernetes.Interface, getNode func() (*v1.Node, error)) *RegistrationHandler { handler := &RegistrationHandler{ // The context and thus logger should come from the caller. backgroundCtx: klog.NewContext(context.TODO(), klog.LoggerWithName(klog.TODO(), \"DRA registration handler\")), kubeClient: kubeClient, getNode: getNode, existingPlugin := draPlugins.get(pluginName) if existingPlugin == nil { return nil, fmt.Errorf(\"plugin name %s not found in the list of registered DRA plugins\", pluginName) } // When kubelet starts up, no DRA driver has registered yet. None of // the drivers are usable until they come back, which might not happen // at all. Therefore it is better to not advertise any local resources // because pods could get stuck on the node waiting for the driver // to start up. // // This has to run in the background. go handler.wipeResourceSlices(\"\") return existingPlugin, nil } type Plugin struct { backgroundCtx context.Context cancel func(cause error) return handler mutex sync.Mutex conn *grpc.ClientConn endpoint string highestSupportedVersion *utilversion.Version clientCallTimeout time.Duration } // wipeResourceSlices deletes ResourceSlices of the node, optionally just for a specific driver. func (h *RegistrationHandler) wipeResourceSlices(driver string) { if h.kubeClient == nil { return } ctx := h.backgroundCtx logger := klog.FromContext(ctx) func (p *Plugin) getOrCreateGRPCConn() (*grpc.ClientConn, error) { p.mutex.Lock() defer p.mutex.Unlock() backoff := wait.Backoff{ Duration: time.Second, Factor: 2, Jitter: 0.2, Cap: 5 * time.Minute, Steps: 100, if p.conn != nil { return p.conn, nil } // Error logging is done inside the loop. Context cancellation doesn't get logged. 
_ = wait.ExponentialBackoffWithContext(ctx, backoff, func(ctx context.Context) (bool, error) { node, err := h.getNode() if apierrors.IsNotFound(err) { return false, nil } if err != nil { logger.Error(err, \"Unexpected error checking for node\") return false, nil } fieldSelector := fields.Set{resourceapi.ResourceSliceSelectorNodeName: node.Name} if driver != \"\" { fieldSelector[resourceapi.ResourceSliceSelectorDriver] = driver } err = h.kubeClient.ResourceV1alpha3().ResourceSlices().DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{FieldSelector: fieldSelector.String()}) switch { case err == nil: logger.V(3).Info(\"Deleted ResourceSlices\", \"fieldSelector\", fieldSelector) return true, nil case apierrors.IsUnauthorized(err): // This can happen while kubelet is still figuring out // its credentials. logger.V(5).Info(\"Deleting ResourceSlice failed, retrying\", \"fieldSelector\", fieldSelector, \"err\", err) return false, nil default: // Log and retry for other errors. logger.V(3).Info(\"Deleting ResourceSlice failed, retrying\", \"fieldSelector\", fieldSelector, \"err\", err) return false, nil } }) } // RegisterPlugin is called when a plugin can be registered. func (h *RegistrationHandler) RegisterPlugin(pluginName string, endpoint string, versions []string, pluginClientTimeout *time.Duration) error { // Prepare a context with its own logger for the plugin. // // The lifecycle of the plugin's background activities is tied to our // root context, so canceling that will also cancel the plugin. // // The logger injects the plugin name as additional value // into all log output related to the plugin. 
ctx := h.backgroundCtx ctx := p.backgroundCtx logger := klog.FromContext(ctx) logger = klog.LoggerWithValues(logger, \"pluginName\", pluginName) ctx = klog.NewContext(ctx, logger) logger.V(3).Info(\"Register new DRA plugin\", \"endpoint\", endpoint) highestSupportedVersion, err := h.validateVersions(pluginName, versions) network := \"unix\" logger.V(4).Info(\"Creating new gRPC connection\", \"protocol\", network, \"endpoint\", p.endpoint) // grpc.Dial is deprecated. grpc.NewClient should be used instead. // For now this gets ignored because this function is meant to establish // the connection, with the one second timeout below. Perhaps that // approach should be reconsidered? //nolint:staticcheck conn, err := grpc.Dial( p.endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithContextDialer(func(ctx context.Context, target string) (net.Conn, error) { return (&net.Dialer{}).DialContext(ctx, network, target) }), ) if err != nil { return fmt.Errorf(\"version check of plugin %s failed: %w\", pluginName, err) } var timeout time.Duration if pluginClientTimeout == nil { timeout = defaultClientCallTimeout } else { timeout = *pluginClientTimeout return nil, err } ctx, cancel := context.WithCancelCause(ctx) ctx, cancel := context.WithTimeout(context.Background(), time.Second) defer cancel() pluginInstance := &Plugin{ backgroundCtx: ctx, cancel: cancel, conn: nil, endpoint: endpoint, highestSupportedVersion: highestSupportedVersion, clientCallTimeout: timeout, if ok := conn.WaitForStateChange(ctx, connectivity.Connecting); !ok { return nil, errors.New(\"timed out waiting for gRPC connection to be ready\") } // Storing endpoint of newly registered DRA Plugin into the map, where plugin name will be the key // all other DRA components will be able to get the actual socket of DRA plugins by its name. 
if draPlugins.add(pluginName, pluginInstance) { logger.V(1).Info(\"Already registered, previous plugin was replaced\") } return nil p.conn = conn return p.conn, nil } func (h *RegistrationHandler) validateVersions( pluginName string, versions []string, ) (*utilversion.Version, error) { if len(versions) == 0 { return nil, errors.New(\"empty list for supported versions\") } func (p *Plugin) NodePrepareResources( ctx context.Context, req *drapb.NodePrepareResourcesRequest, opts ...grpc.CallOption, ) (*drapb.NodePrepareResourcesResponse, error) { logger := klog.FromContext(ctx) logger.V(4).Info(\"Calling NodePrepareResources rpc\", \"request\", req) // Validate version newPluginHighestVersion, err := utilversion.HighestSupportedVersion(versions) conn, err := p.getOrCreateGRPCConn() if err != nil { // HighestSupportedVersion includes the list of versions in its error // if relevant, no need to repeat it here. return nil, fmt.Errorf(\"none of the versions are supported: %w\", err) } existingPlugin := draPlugins.get(pluginName) if existingPlugin == nil { return newPluginHighestVersion, nil return nil, err } if existingPlugin.highestSupportedVersion.LessThan(newPluginHighestVersion) { return newPluginHighestVersion, nil } return nil, fmt.Errorf(\"another plugin instance is already registered with a higher supported version: %q < %q\", newPluginHighestVersion, existingPlugin.highestSupportedVersion) } // DeRegisterPlugin is called when a plugin has removed its socket, // signaling it is no longer available. func (h *RegistrationHandler) DeRegisterPlugin(pluginName string) { if p := draPlugins.delete(pluginName); p != nil { logger := klog.FromContext(p.backgroundCtx) logger.V(3).Info(\"Deregister DRA plugin\", \"endpoint\", p.endpoint) // Clean up the ResourceSlices for the deleted Plugin since it // may have died without doing so itself and might never come // back. 
go h.wipeResourceSlices(pluginName) return } ctx, cancel := context.WithTimeout(ctx, p.clientCallTimeout) defer cancel() logger := klog.FromContext(h.backgroundCtx) logger.V(3).Info(\"Deregister DRA plugin not necessary, was already removed\") nodeClient := drapb.NewNodeClient(conn) response, err := nodeClient.NodePrepareResources(ctx, req) logger.V(4).Info(\"Done calling NodePrepareResources rpc\", \"response\", response, \"err\", err) return response, err } // ValidatePlugin is called by kubelet's plugin watcher upon detection // of a new registration socket opened by DRA plugin. func (h *RegistrationHandler) ValidatePlugin(pluginName string, endpoint string, versions []string) error { _, err := h.validateVersions(pluginName, versions) func (p *Plugin) NodeUnprepareResources( ctx context.Context, req *drapb.NodeUnprepareResourcesRequest, opts ...grpc.CallOption, ) (*drapb.NodeUnprepareResourcesResponse, error) { logger := klog.FromContext(ctx) logger.V(4).Info(\"Calling NodeUnprepareResource rpc\", \"request\", req) conn, err := p.getOrCreateGRPCConn() if err != nil { return fmt.Errorf(\"invalid versions of plugin %s: %w\", pluginName, err) return nil, err } return err ctx, cancel := context.WithTimeout(ctx, p.clientCallTimeout) defer cancel() nodeClient := drapb.NewNodeClient(conn) response, err := nodeClient.NodeUnprepareResources(ctx, req) logger.V(4).Info(\"Done calling NodeUnprepareResources rpc\", \"response\", response, \"err\", err) return response, err }"} {"_id":"doc-en-kubernetes-48325ec8ae577dc508542ea0245f810885ec945d7172bd12a2cc78ebdd05489d","title":"","text":"package plugin import ( \"context\" \"fmt\" \"net\" \"os\" \"path/filepath\" \"sync\" \"testing\" \"github.com/stretchr/testify/assert\" v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"google.golang.org/grpc\" drapb \"k8s.io/kubelet/pkg/apis/dra/v1alpha4\" \"k8s.io/kubernetes/test/utils/ktesting\" ) func getFakeNode() (*v1.Node, error) { return &v1.Node{ObjectMeta: 
metav1.ObjectMeta{Name: \"worker\"}}, nil const ( v1alpha4Version = \"v1alpha4\" ) type fakeV1alpha4GRPCServer struct { drapb.UnimplementedNodeServer } var _ drapb.NodeServer = &fakeV1alpha4GRPCServer{} func (f *fakeV1alpha4GRPCServer) NodePrepareResources(ctx context.Context, in *drapb.NodePrepareResourcesRequest) (*drapb.NodePrepareResourcesResponse, error) { return &drapb.NodePrepareResourcesResponse{Claims: map[string]*drapb.NodePrepareResourceResponse{\"claim-uid\": { Devices: []*drapb.Device{ { RequestNames: []string{\"test-request\"}, CDIDeviceIDs: []string{\"test-cdi-id\"}, }, }, }}}, nil } func (f *fakeV1alpha4GRPCServer) NodeUnprepareResources(ctx context.Context, in *drapb.NodeUnprepareResourcesRequest) (*drapb.NodeUnprepareResourcesResponse, error) { return &drapb.NodeUnprepareResourcesResponse{}, nil } type tearDown func() func setupFakeGRPCServer(version string) (string, tearDown, error) { p, err := os.MkdirTemp(\"\", \"dra_plugin\") if err != nil { return \"\", nil, err } closeCh := make(chan struct{}) addr := filepath.Join(p, \"server.sock\") teardown := func() { close(closeCh) if err := os.RemoveAll(addr); err != nil { panic(err) } } listener, err := net.Listen(\"unix\", addr) if err != nil { teardown() return \"\", nil, err } s := grpc.NewServer() switch version { case v1alpha4Version: fakeGRPCServer := &fakeV1alpha4GRPCServer{} drapb.RegisterNodeServer(s, fakeGRPCServer) default: return \"\", nil, fmt.Errorf(\"unsupported version: %s\", version) } go func() { go func() { if err := s.Serve(listener); err != nil { panic(err) } }() <-closeCh s.GracefulStop() }() return addr, teardown, nil } func TestRegistrationHandler_ValidatePlugin(t *testing.T) { newRegistrationHandler := func() *RegistrationHandler { return NewRegistrationHandler(nil, getFakeNode) func TestGRPCConnIsReused(t *testing.T) { tCtx := ktesting.Init(t) addr, teardown, err := setupFakeGRPCServer(v1alpha4Version) if err != nil { t.Fatal(err) } defer teardown() reusedConns := 
make(map[*grpc.ClientConn]int) wg := sync.WaitGroup{} m := sync.Mutex{} p := &Plugin{ backgroundCtx: tCtx, endpoint: addr, clientCallTimeout: defaultClientCallTimeout, } conn, err := p.getOrCreateGRPCConn() defer func() { err := conn.Close() if err != nil { t.Error(err) } }() if err != nil { t.Fatal(err) } // ensure the plugin we are using is registered draPlugins.add(\"dummy-plugin\", p) defer draPlugins.delete(\"dummy-plugin\") // we call `NodePrepareResource` 2 times and check whether a new connection is created or the same is reused for i := 0; i < 2; i++ { wg.Add(1) go func() { defer wg.Done() client, err := NewDRAPluginClient(\"dummy-plugin\") if err != nil { t.Error(err) return } req := &drapb.NodePrepareResourcesRequest{ Claims: []*drapb.Claim{ { Namespace: \"dummy-namespace\", UID: \"dummy-uid\", Name: \"dummy-claim\", }, }, } _, err = client.NodePrepareResources(tCtx, req) assert.NoError(t, err) client.mutex.Lock() conn := client.conn client.mutex.Unlock() m.Lock() defer m.Unlock() reusedConns[conn]++ }() } wg.Wait() // We should have only one entry otherwise it means another gRPC connection has been created if len(reusedConns) != 1 { t.Errorf(\"expected length to be 1 but got %d\", len(reusedConns)) } if counter, ok := reusedConns[conn]; ok && counter != 2 { t.Errorf(\"expected counter to be 2 but got %d\", counter) } } func TestNewDRAPluginClient(t *testing.T) { for _, test := range []struct { description string handler func() *RegistrationHandler setup func(string) tearDown pluginName string endpoint string versions []string shouldError bool }{ { description: \"no versions provided\", handler: newRegistrationHandler, description: \"plugin name is empty\", setup: func(_ string) tearDown { return func() {} }, pluginName: \"\", shouldError: true, }, { description: \"unsupported version\", handler: newRegistrationHandler, versions: []string{\"v2.0.0\"}, description: \"plugin name not found in the list\", setup: func(_ string) tearDown { return func() {} }, 
pluginName: \"plugin-name-not-found-in-the-list\", shouldError: true, }, { description: \"plugin already registered with a higher supported version\", handler: func() *RegistrationHandler { handler := newRegistrationHandler() if err := handler.RegisterPlugin(\"this-plugin-already-exists-and-has-a-long-name-so-it-doesnt-collide\", \"\", []string{\"v1.1.0\"}, nil); err != nil { t.Fatal(err) description: \"plugin exists\", setup: func(name string) tearDown { draPlugins.add(name, &Plugin{}) return func() { draPlugins.delete(name) } return handler }, pluginName: \"this-plugin-already-exists-and-has-a-long-name-so-it-doesnt-collide\", versions: []string{\"v1.0.0\"}, shouldError: true, }, { description: \"should validate the plugin\", handler: newRegistrationHandler, pluginName: \"this-is-a-dummy-plugin-with-a-long-name-so-it-doesnt-collide\", versions: []string{\"v1.3.0\"}, pluginName: \"dummy-plugin\", }, } { t.Run(test.description, func(t *testing.T) { handler := test.handler() err := handler.ValidatePlugin(test.pluginName, test.endpoint, test.versions) teardown := test.setup(test.pluginName) defer teardown() client, err := NewDRAPluginClient(test.pluginName) if test.shouldError { assert.Nil(t, client) assert.Error(t, err) } else { assert.NotNil(t, client) assert.Nil(t, err) } }) } } t.Cleanup(func() { handler := newRegistrationHandler() handler.DeRegisterPlugin(\"this-plugin-already-exists-and-has-a-long-name-so-it-doesnt-collide\") handler.DeRegisterPlugin(\"this-is-a-dummy-plugin-with-a-long-name-so-it-doesnt-collide\") }) func TestNodeUnprepareResources(t *testing.T) { for _, test := range []struct { description string serverSetup func(string) (string, tearDown, error) serverVersion string request *drapb.NodeUnprepareResourcesRequest }{ { description: \"server supports v1alpha4\", serverSetup: setupFakeGRPCServer, serverVersion: v1alpha4Version, request: &drapb.NodeUnprepareResourcesRequest{}, }, } { t.Run(test.description, func(t *testing.T) { tCtx := 
ktesting.Init(t) addr, teardown, err := setupFakeGRPCServer(test.serverVersion) if err != nil { t.Fatal(err) } defer teardown() p := &Plugin{ backgroundCtx: tCtx, endpoint: addr, clientCallTimeout: defaultClientCallTimeout, } conn, err := p.getOrCreateGRPCConn() defer func() { err := conn.Close() if err != nil { t.Error(err) } }() if err != nil { t.Fatal(err) } draPlugins.add(\"dummy-plugin\", p) defer draPlugins.delete(\"dummy-plugin\") client, err := NewDRAPluginClient(\"dummy-plugin\") if err != nil { t.Fatal(err) } _, err = client.NodeUnprepareResources(tCtx, test.request) if err != nil { t.Fatal(err) } }) } }"} {"_id":"doc-en-kubernetes-e7a7689f8e666e1bfd07eca3018d01693bb66e01512f2c4539798dee01350a6f","title":"","text":" /* Copyright 2022 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package plugin import ( \"context\" \"errors\" \"fmt\" \"time\" v1 \"k8s.io/api/core/v1\" resourceapi \"k8s.io/api/resource/v1alpha3\" apierrors \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/fields\" utilversion \"k8s.io/apimachinery/pkg/util/version\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/client-go/kubernetes\" \"k8s.io/klog/v2\" \"k8s.io/kubernetes/pkg/kubelet/pluginmanager/cache\" ) // defaultClientCallTimeout is the default amount of time that a DRA driver has // to respond to any of the gRPC calls. kubelet uses this value by passing nil // to RegisterPlugin. 
Some tests use a different, usually shorter timeout to // speed up testing. // // This is half of the kubelet retry period (according to // https://github.com/kubernetes/kubernetes/commit/0449cef8fd5217d394c5cd331d852bd50983e6b3). const defaultClientCallTimeout = 45 * time.Second // RegistrationHandler is the handler which is fed to the pluginwatcher API. type RegistrationHandler struct { // backgroundCtx is used for all future activities of the handler. // This is necessary because it implements APIs which don't // provide a context. backgroundCtx context.Context kubeClient kubernetes.Interface getNode func() (*v1.Node, error) } var _ cache.PluginHandler = &RegistrationHandler{} // NewRegistrationHandler returns a new registration handler. // // Must only be called once per process because it manages global state. // If a kubeClient is provided, then it synchronizes ResourceSlices // with the resource information provided by plugins. func NewRegistrationHandler(kubeClient kubernetes.Interface, getNode func() (*v1.Node, error)) *RegistrationHandler { handler := &RegistrationHandler{ // The context and thus logger should come from the caller. backgroundCtx: klog.NewContext(context.TODO(), klog.LoggerWithName(klog.TODO(), \"DRA registration handler\")), kubeClient: kubeClient, getNode: getNode, } // When kubelet starts up, no DRA driver has registered yet. None of // the drivers are usable until they come back, which might not happen // at all. Therefore it is better to not advertise any local resources // because pods could get stuck on the node waiting for the driver // to start up. // // This has to run in the background. go handler.wipeResourceSlices(\"\") return handler } // wipeResourceSlices deletes ResourceSlices of the node, optionally just for a specific driver. 
func (h *RegistrationHandler) wipeResourceSlices(driver string) { if h.kubeClient == nil { return } ctx := h.backgroundCtx logger := klog.FromContext(ctx) backoff := wait.Backoff{ Duration: time.Second, Factor: 2, Jitter: 0.2, Cap: 5 * time.Minute, Steps: 100, } // Error logging is done inside the loop. Context cancellation doesn't get logged. _ = wait.ExponentialBackoffWithContext(ctx, backoff, func(ctx context.Context) (bool, error) { node, err := h.getNode() if apierrors.IsNotFound(err) { return false, nil } if err != nil { logger.Error(err, \"Unexpected error checking for node\") return false, nil } fieldSelector := fields.Set{resourceapi.ResourceSliceSelectorNodeName: node.Name} if driver != \"\" { fieldSelector[resourceapi.ResourceSliceSelectorDriver] = driver } err = h.kubeClient.ResourceV1alpha3().ResourceSlices().DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{FieldSelector: fieldSelector.String()}) switch { case err == nil: logger.V(3).Info(\"Deleted ResourceSlices\", \"fieldSelector\", fieldSelector) return true, nil case apierrors.IsUnauthorized(err): // This can happen while kubelet is still figuring out // its credentials. logger.V(5).Info(\"Deleting ResourceSlice failed, retrying\", \"fieldSelector\", fieldSelector, \"err\", err) return false, nil default: // Log and retry for other errors. logger.V(3).Info(\"Deleting ResourceSlice failed, retrying\", \"fieldSelector\", fieldSelector, \"err\", err) return false, nil } }) } // RegisterPlugin is called when a plugin can be registered. func (h *RegistrationHandler) RegisterPlugin(pluginName string, endpoint string, versions []string, pluginClientTimeout *time.Duration) error { // Prepare a context with its own logger for the plugin. // // The lifecycle of the plugin's background activities is tied to our // root context, so canceling that will also cancel the plugin. // // The logger injects the plugin name as additional value // into all log output related to the plugin. 
ctx := h.backgroundCtx logger := klog.FromContext(ctx) logger = klog.LoggerWithValues(logger, \"pluginName\", pluginName) ctx = klog.NewContext(ctx, logger) logger.V(3).Info(\"Register new DRA plugin\", \"endpoint\", endpoint) highestSupportedVersion, err := h.validateVersions(pluginName, versions) if err != nil { return fmt.Errorf(\"version check of plugin %s failed: %w\", pluginName, err) } var timeout time.Duration if pluginClientTimeout == nil { timeout = defaultClientCallTimeout } else { timeout = *pluginClientTimeout } ctx, cancel := context.WithCancelCause(ctx) pluginInstance := &Plugin{ backgroundCtx: ctx, cancel: cancel, conn: nil, endpoint: endpoint, highestSupportedVersion: highestSupportedVersion, clientCallTimeout: timeout, } // Storing endpoint of newly registered DRA Plugin into the map, where plugin name will be the key // all other DRA components will be able to get the actual socket of DRA plugins by its name. if draPlugins.add(pluginName, pluginInstance) { logger.V(1).Info(\"Already registered, previous plugin was replaced\") } return nil } func (h *RegistrationHandler) validateVersions( pluginName string, versions []string, ) (*utilversion.Version, error) { if len(versions) == 0 { return nil, errors.New(\"empty list for supported versions\") } // Validate version newPluginHighestVersion, err := utilversion.HighestSupportedVersion(versions) if err != nil { // HighestSupportedVersion includes the list of versions in its error // if relevant, no need to repeat it here. 
return nil, fmt.Errorf(\"none of the versions are supported: %w\", err) } existingPlugin := draPlugins.get(pluginName) if existingPlugin == nil { return newPluginHighestVersion, nil } if existingPlugin.highestSupportedVersion.LessThan(newPluginHighestVersion) { return newPluginHighestVersion, nil } return nil, fmt.Errorf(\"another plugin instance is already registered with a higher supported version: %q < %q\", newPluginHighestVersion, existingPlugin.highestSupportedVersion) } // DeRegisterPlugin is called when a plugin has removed its socket, // signaling it is no longer available. func (h *RegistrationHandler) DeRegisterPlugin(pluginName string) { if p := draPlugins.delete(pluginName); p != nil { logger := klog.FromContext(p.backgroundCtx) logger.V(3).Info(\"Deregister DRA plugin\", \"endpoint\", p.endpoint) // Clean up the ResourceSlices for the deleted Plugin since it // may have died without doing so itself and might never come // back. go h.wipeResourceSlices(pluginName) return } logger := klog.FromContext(h.backgroundCtx) logger.V(3).Info(\"Deregister DRA plugin not necessary, was already removed\") } // ValidatePlugin is called by kubelet's plugin watcher upon detection // of a new registration socket opened by DRA plugin. func (h *RegistrationHandler) ValidatePlugin(pluginName string, endpoint string, versions []string) error { _, err := h.validateVersions(pluginName, versions) if err != nil { return fmt.Errorf(\"invalid versions of plugin %s: %w\", pluginName, err) } return err } "} {"_id":"doc-en-kubernetes-45ea7bb63b96eb792aed228dc7a8fcfc088731a6d0f5d925deee37756e48c689","title":"","text":" /* Copyright 2023 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package plugin import ( \"testing\" \"github.com/stretchr/testify/assert\" v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) func getFakeNode() (*v1.Node, error) { return &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: \"worker\"}}, nil } func TestRegistrationHandler_ValidatePlugin(t *testing.T) { newRegistrationHandler := func() *RegistrationHandler { return NewRegistrationHandler(nil, getFakeNode) } for _, test := range []struct { description string handler func() *RegistrationHandler pluginName string endpoint string versions []string shouldError bool }{ { description: \"no versions provided\", handler: newRegistrationHandler, shouldError: true, }, { description: \"unsupported version\", handler: newRegistrationHandler, versions: []string{\"v2.0.0\"}, shouldError: true, }, { description: \"plugin already registered with a higher supported version\", handler: func() *RegistrationHandler { handler := newRegistrationHandler() if err := handler.RegisterPlugin(\"this-plugin-already-exists-and-has-a-long-name-so-it-doesnt-collide\", \"\", []string{\"v1.1.0\"}, nil); err != nil { t.Fatal(err) } return handler }, pluginName: \"this-plugin-already-exists-and-has-a-long-name-so-it-doesnt-collide\", versions: []string{\"v1.0.0\"}, shouldError: true, }, { description: \"should validate the plugin\", handler: newRegistrationHandler, pluginName: \"this-is-a-dummy-plugin-with-a-long-name-so-it-doesnt-collide\", versions: []string{\"v1.3.0\"}, }, } { t.Run(test.description, func(t *testing.T) { handler := test.handler() err := handler.ValidatePlugin(test.pluginName, 
test.endpoint, test.versions) if test.shouldError { assert.Error(t, err) } else { assert.NoError(t, err) } }) } t.Cleanup(func() { handler := newRegistrationHandler() handler.DeRegisterPlugin(\"this-plugin-already-exists-and-has-a-long-name-so-it-doesnt-collide\") handler.DeRegisterPlugin(\"this-is-a-dummy-plugin-with-a-long-name-so-it-doesnt-collide\") }) } "} {"_id":"doc-en-kubernetes-7dd5a72080da59c6671ec6ba3f2c5d8a97ae8c4e9887bdb5395d06e2c262c600","title":"","text":"``` cd kubernetes hack/local-up-cluster.sh ``` This will build and start a lightweight local cluster, consisting of a master"} {"_id":"doc-en-kubernetes-794399533d7d0791ae0da6d0bf969c645a7e54308ef6a130032517993de56da7","title":"","text":"GO_OUT=\"${KUBE_ROOT}/_output/local/bin/${host_os}/${host_arch}\" APISERVER_LOG=/tmp/kube-apiserver.log sudo \"${GO_OUT}/kube-apiserver\" -v=${LOG_LEVEL} --address=\"${API_HOST}\" --port=\"${API_PORT}\" "} {"_id":"doc-en-kubernetes-14a1e93cfd38a95107af68701619439f725cf9370c560d9d215b4b415238f960","title":"","text":"kube::util::wait_for_url \"http://${API_HOST}:${API_PORT}/api/v1beta1/pods\" \"apiserver: \" CTLRMGR_LOG=/tmp/kube-controller-manager.log sudo \"${GO_OUT}/kube-controller-manager\" -v=${LOG_LEVEL} --machines=\"127.0.0.1\" --master=\"${API_HOST}:${API_PORT}\" >\"${CTLRMGR_LOG}\" 2>&1 & CTLRMGR_PID=$! KUBELET_LOG=/tmp/kubelet.log sudo \"${GO_OUT}/kubelet\" -v=${LOG_LEVEL} --etcd_servers=\"http://127.0.0.1:4001\" --hostname_override=\"127.0.0.1\" "} {"_id":"doc-en-kubernetes-2a779090d8150bad3aefd97d85cf5091d28a6aec6482a1d296d0759f5fde000d","title":"","text":"KUBELET_PID=$! PROXY_LOG=/tmp/kube-proxy.log sudo \"${GO_OUT}/kube-proxy\" -v=${LOG_LEVEL} --master=\"http://${API_HOST}:${API_PORT}\" >\"${PROXY_LOG}\" 2>&1 & PROXY_PID=$! 
SCHEDULER_LOG=/tmp/kube-scheduler.log sudo \"${GO_OUT}/kube-scheduler\" -v=${LOG_LEVEL} --master=\"http://${API_HOST}:${API_PORT}\" >\"${SCHEDULER_LOG}\" 2>&1 & SCHEDULER_PID=$!"} {"_id":"doc-en-kubernetes-bdaf3886e5179a58b718af6c703f63999365245e4bb334abd5e1958fd796202a","title":"","text":"cleanup() { echo \"Cleaning up...\" sudo kill \"${APISERVER_PID}\" sudo kill \"${CTLRMGR_PID}\" sudo kill \"${KUBELET_PID}\" sudo kill \"${PROXY_PID}\" sudo kill \"${SCHEDULER_PID}\" kill \"${ETCD_PID}\" rm -rf \"${ETCD_DIR}\""} {"_id":"doc-en-kubernetes-f2cb6d9963d322aa8400359267db30879052756ba900e4a0fbbbd59b01f0a307","title":"","text":"} ginkgo.BeforeEach(func() { // Precautionary check that kubelet is healthy before running the test. waitForKubeletToStart(context.TODO(), f) ginkgo.By(\"setting up the pod to be used in the test\") e2epod.NewPodClient(f).Create(context.TODO(), testCase.podSpec) })"} {"_id":"doc-en-kubernetes-416f783ca3617065cdc2b07d53343a5fe2bb88a655c18978750d2bd4192389af","title":"","text":"Name: podName, }, Spec: v1.PodSpec{ PriorityClassName: \"system-node-critical\", RestartPolicy: v1.RestartPolicyNever, Containers: []v1.Container{ createContainer(ctnName), },"} {"_id":"doc-en-kubernetes-a9ce4fbb96cddbb793033aff653115f16a92ad6ef7acc09db6f2881a9ce46fce","title":"","text":"\"sh\", \"-c\", // use the dd tool to attempt to allocate huge block of memory which exceeds the node allocatable \"sleep 5 && dd if=/dev/zero of=/dev/null iflag=fullblock count=10 bs=1024G\", }, SecurityContext: &v1.SecurityContext{ SeccompProfile: &v1.SeccompProfile{"} {"_id":"doc-en-kubernetes-f9062ca1e26bd030a62605acf761de70a52b7dbcd19218ca29610aaf446e738a","title":"","text":"} if len(sorted) != len(graph) { fmt.Printf(\"%s\", 
(sorted)) return nil // Cycle detected }"} {"_id":"doc-en-kubernetes-13569e6844295f9deb842c4b449da540a5309899d6f978415a634ccdf68055d8","title":"","text":"} logger.V(4).Info(\"Update endpoints\", \"service\", klog.KObj(service), \"readyEndpoints\", totalReadyEps, \"notreadyEndpoints\", totalNotReadyEps) var updatedEndpoints *v1.Endpoints if createEndpoints { // No previous endpoints, create them _, err = e.client.CoreV1().Endpoints(service.Namespace).Create(ctx, newEndpoints, metav1.CreateOptions{}) } else { // Pre-existing updatedEndpoints, err = e.client.CoreV1().Endpoints(service.Namespace).Update(ctx, newEndpoints, metav1.UpdateOptions{}) } if err != nil { if createEndpoints && errors.IsForbidden(err) {"} {"_id":"doc-en-kubernetes-abeeedd4bea0f0be7862d90e5be3fb0bf82532434d6c0e71abbe54d0a17b4012","title":"","text":"// If the current endpoints is updated we track the old resource version, so // if we obtain this resource version again from the lister we know is outdated // and we need to retry later to wait for the informer cache to be up-to-date. if !createEndpoints { // there are some operations (webhooks, truncated endpoints, ...) that can potentially cause an endpoints update to become a no-op // and return the same resourceVersion. 
// Ref: https://issues.k8s.io/127370 , https://issues.k8s.io/126578 if updatedEndpoints != nil && updatedEndpoints.ResourceVersion != currentEndpoints.ResourceVersion { e.staleEndpointsTracker.Stale(currentEndpoints) } return nil"} {"_id":"doc-en-kubernetes-d3eac6868b6a507086819a53b798e4077c9ca5ab0d48abb0ca009330c1d6332b","title":"","text":"apierrors \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/util/intstr\" \"k8s.io/apimachinery/pkg/util/sets\" \"k8s.io/apimachinery/pkg/util/wait\" \"k8s.io/client-go/informers\" clientset \"k8s.io/client-go/kubernetes\""} {"_id":"doc-en-kubernetes-a28ed4d19f0618392cfe415de844a279c67718773649289e299e6c9b6cada423","title":"","text":"\"k8s.io/kubernetes/pkg/controller/endpoint\" \"k8s.io/kubernetes/test/integration/framework\" \"k8s.io/kubernetes/test/utils/ktesting\" netutils \"k8s.io/utils/net\" ) func TestEndpointUpdates(t *testing.T) {"} {"_id":"doc-en-kubernetes-a0c45b1ebc1f9bd7e1cefdec1b9f5a67cc89708a76990e05aec52218ba8c62ae","title":"","text":"svc.Spec.ExternalName = \"google.com\" return svc } func TestEndpointTruncate(t *testing.T) { // Disable ServiceAccount admission plugin as we don't have serviceaccount controller running. 
server := kubeapiservertesting.StartTestServerOrDie(t, nil, framework.DefaultTestServerFlags(), framework.SharedEtcd()) defer server.TearDownFn() client, err := clientset.NewForConfig(server.ClientConfig) if err != nil { t.Fatalf(\"Error creating clientset: %v\", err) } informers := informers.NewSharedInformerFactory(client, 0) tCtx := ktesting.Init(t) epController := endpoint.NewEndpointController( tCtx, informers.Core().V1().Pods(), informers.Core().V1().Services(), informers.Core().V1().Endpoints(), client, 0) // Start informer and controllers informers.Start(tCtx.Done()) go epController.Run(tCtx, 1) // Create namespace ns := framework.CreateNamespaceOrDie(client, \"test-endpoints-truncate\", t) defer framework.DeleteNamespaceOrDie(client, ns, t) // Create a pod with labels basePod := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: \"test-pod\", Labels: labelMap(), }, Spec: v1.PodSpec{ NodeName: \"fake-node\", Containers: []v1.Container{ { Name: \"fakename\", Image: \"fakeimage\", Ports: []v1.ContainerPort{ { Name: \"port-443\", ContainerPort: 443, }, }, }, }, }, Status: v1.PodStatus{ Phase: v1.PodRunning, Conditions: []v1.PodCondition{ { Type: v1.PodReady, Status: v1.ConditionTrue, }, }, PodIP: \"10.0.0.1\", PodIPs: []v1.PodIP{ { IP: \"10.0.0.1\", }, }, }, } // create 1001 Pods to reach endpoint max capacity that is set to 1000 allPodNames := sets.New[string]() baseIP := netutils.BigForIP(netutils.ParseIPSloppy(\"10.0.0.1\")) for i := 0; i < 1001; i++ { pod := basePod.DeepCopy() pod.Name = fmt.Sprintf(\"%s-%d\", basePod.Name, i) allPodNames.Insert(pod.Name) podIP := netutils.AddIPOffset(baseIP, i).String() pod.Status.PodIP = podIP pod.Status.PodIPs[0] = v1.PodIP{IP: podIP} createdPod, err := client.CoreV1().Pods(ns.Name).Create(tCtx, pod, metav1.CreateOptions{}) if err != nil { t.Fatalf(\"Failed to create pod %s: %v\", pod.Name, err) } createdPod.Status = pod.Status _, err = client.CoreV1().Pods(ns.Name).UpdateStatus(tCtx, createdPod, metav1.UpdateOptions{}) if 
err != nil { t.Fatalf(\"Failed to update status of pod %s: %v\", pod.Name, err) } } // Create a service associated to the pod svc := &v1.Service{ ObjectMeta: metav1.ObjectMeta{ Name: \"test-service\", Namespace: ns.Name, Labels: map[string]string{ \"foo\": \"bar\", }, }, Spec: v1.ServiceSpec{ Selector: map[string]string{ \"foo\": \"bar\", }, Ports: []v1.ServicePort{ {Name: \"port-443\", Port: 443, Protocol: \"TCP\", TargetPort: intstr.FromInt32(443)}, }, }, } _, err = client.CoreV1().Services(ns.Name).Create(tCtx, svc, metav1.CreateOptions{}) if err != nil { t.Fatalf(\"Failed to create service %s: %v\", svc.Name, err) } var truncatedPodName string // poll until associated Endpoints to the previously created Service exists if err := wait.PollUntilContextTimeout(tCtx, 1*time.Second, 10*time.Second, true, func(context.Context) (bool, error) { podNames := sets.New[string]() endpoints, err := client.CoreV1().Endpoints(ns.Name).Get(tCtx, svc.Name, metav1.GetOptions{}) if err != nil { return false, nil } for _, subset := range endpoints.Subsets { for _, address := range subset.Addresses { podNames.Insert(address.TargetRef.Name) } } if podNames.Len() != 1000 { return false, nil } truncated, ok := endpoints.Annotations[v1.EndpointsOverCapacity] if !ok || truncated != \"truncated\" { return false, nil } // There is only 1 truncated Pod. truncatedPodName, _ = allPodNames.Difference(podNames).PopAny() return true, nil }); err != nil { t.Fatalf(\"endpoints not found: %v\", err) } // Update the truncated Pod several times to make endpoints controller resync the service. 
truncatedPod, err := client.CoreV1().Pods(ns.Name).Get(tCtx, truncatedPodName, metav1.GetOptions{}) if err != nil { t.Fatalf(\"Failed to get pod %s: %v\", truncatedPodName, err) } for i := 0; i < 10; i++ { truncatedPod.Status.Conditions[0].Status = v1.ConditionFalse truncatedPod, err = client.CoreV1().Pods(ns.Name).UpdateStatus(tCtx, truncatedPod, metav1.UpdateOptions{}) if err != nil { t.Fatalf(\"Failed to update status of pod %s: %v\", truncatedPod.Name, err) } truncatedPod.Status.Conditions[0].Status = v1.ConditionTrue truncatedPod, err = client.CoreV1().Pods(ns.Name).UpdateStatus(tCtx, truncatedPod, metav1.UpdateOptions{}) if err != nil { t.Fatalf(\"Failed to update status of pod %s: %v\", truncatedPod.Name, err) } } // delete 501 Pods for i := 500; i < 1001; i++ { podName := fmt.Sprintf(\"%s-%d\", basePod.Name, i) err = client.CoreV1().Pods(ns.Name).Delete(tCtx, podName, metav1.DeleteOptions{}) if err != nil { t.Fatalf(\"error deleting test pod: %v\", err) } } // poll until endpoints for deleted Pod are no longer in Endpoints. 
if err := wait.PollUntilContextTimeout(tCtx, 1*time.Second, 10*time.Second, true, func(context.Context) (bool, error) { endpoints, err := client.CoreV1().Endpoints(ns.Name).Get(tCtx, svc.Name, metav1.GetOptions{}) if err != nil { return false, nil } numEndpoints := 0 for _, subset := range endpoints.Subsets { numEndpoints += len(subset.Addresses) } if numEndpoints != 500 { return false, nil } truncated, ok := endpoints.Annotations[v1.EndpointsOverCapacity] if ok || truncated == \"truncated\" { return false, nil } return true, nil }); err != nil { t.Fatalf(\"error checking for no endpoints with terminating pods: %v\", err) } } "} {"_id":"doc-en-kubernetes-bd53e66842fb9f0051aad196ae3200c4ebdf53ad5385cf363d168966eb761295","title":"","text":"// our target metric should be updated by now if err := wait.PollUntilContextTimeout(ctx, 10*time.Second, 2*time.Minute, true, func(_ context.Context) (bool, error) { metrics, err := metricsGrabber.GrabFromKubeProxy(ctx, nodeName) if err != nil { return false, fmt.Errorf(\"failed to fetch metrics: %w\", err) } targetMetricAfter, err := metrics.GetCounterMetricValue(metricName) if err != nil { return false, fmt.Errorf(\"failed to fetch metric: %w\", err) } return targetMetricAfter > targetMetricBefore, nil }); err != nil { if wait.Interrupted(err) { framework.Failf(\"expected %s metric to be updated after accessing endpoints via localhost nodeports\", metricName) } framework.ExpectNoError(err) } }) })"} {"_id":"doc-en-kubernetes-7a25bf088444448a044cc2da8bc6893689acd492f5b8f76232586c42da79cdf1","title":"","text":"APIServerTracing featuregate.Feature = \"APIServerTracing\" // owner: @linxiulei // alpha: v1.30 // // Enables serving watch requests in separate goroutines. 
APIServingWithRoutine featuregate.Feature = \"APIServingWithRoutine\""} {"_id":"doc-en-kubernetes-6845bf0286ab011877d7ffee83975cea62bef3842b3a1496e0f1c30c1bec2f70","title":"","text":"} if len(strings.Split(stringValue, \"/\")) == 1 { if !standardFinalizers.Has(stringValue) { if strings.Contains(stringValue, \".\") { allWarnings = append(allWarnings, fmt.Sprintf(\"%s: %q: prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers\", fldPath.String(), stringValue)) } else { allWarnings = append(allWarnings, fmt.Sprintf(\"%s: %q: prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers\", fldPath.String(), stringValue)) } } } return allWarnings"} {"_id":"doc-en-kubernetes-3772687ccf40d38b875849de61831c6752804410ca99c05c42fb4fc5c02a34aa","title":"","text":"finalizer: []string{\"kubernetes.io/valid-finalizer\"}, }, { name: \"create-crd-with-fqdn-like-finalizer-without-path\", finalizer: []string{\"finalizer.without.valid-path.io\"}, expectCreateWarnings: []string{ `metadata.finalizers: \"finalizer.without.valid-path.io\": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers`, }, }, { name: \"update-crd-with-invalid-finalizer\", finalizer: []string{\"invalid-finalizer\"}, updatedFinalizer: []string{\"another-invalid-finalizer\"},"} {"_id":"doc-en-kubernetes-826ea3afba8e03a924a32dc792c4ca777d7d0b70dd5d30e4033d8a4493c979a4","title":"","text":" /* Copyright 2024 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package storage import ( \"context\" \"fmt\" \"time\" storagev1 \"k8s.io/api/storage/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/labels\" \"k8s.io/apimachinery/pkg/types\" utilrand \"k8s.io/apimachinery/pkg/util/rand\" \"k8s.io/client-go/util/retry\" \"k8s.io/kubernetes/test/e2e/framework\" \"k8s.io/kubernetes/test/e2e/storage/utils\" \"github.com/onsi/ginkgo/v2\" \"github.com/onsi/gomega\" ) var _ = utils.SIGDescribe(\"CSINodes\", func() { f := framework.NewDefaultFramework(\"csinodes\") ginkgo.Describe(\"CSI Conformance\", func() { ginkgo.It(\"should run through the lifecycle of a csinode\", func(ctx context.Context) { csiNodeClient := f.ClientSet.StorageV1().CSINodes() initialCSINode := storagev1.CSINode{ ObjectMeta: metav1.ObjectMeta{ Name: \"e2e-csinode-\" + utilrand.String(5), }, } ginkgo.By(fmt.Sprintf(\"Creating initial csiNode %q\", initialCSINode.Name)) csiNode, err := csiNodeClient.Create(ctx, &initialCSINode, metav1.CreateOptions{}) framework.ExpectNoError(err, \"failed to create csiNode %q\", initialCSINode.Name) ginkgo.By(fmt.Sprintf(\"Getting initial csiNode %q\", initialCSINode.Name)) retrievedCSINode, err := csiNodeClient.Get(ctx, initialCSINode.Name, metav1.GetOptions{}) framework.ExpectNoError(err, \"Failed to retrieve csiNode %q\", initialCSINode.Name) gomega.Expect(retrievedCSINode.Name).To(gomega.Equal(csiNode.Name), \"Checking that the retrieved name has been found\") ginkgo.By(fmt.Sprintf(\"Patching initial csiNode: %q\", initialCSINode.Name)) payload := \"{\\\"metadata\\\":{\\\"labels\\\":{\\\"\" + csiNode.Name + 
\"\\\":\\\"patched\\\"}}}\" patchedCSINode, err := csiNodeClient.Patch(ctx, csiNode.Name, types.StrategicMergePatchType, []byte(payload), metav1.PatchOptions{}) framework.ExpectNoError(err, \"Failed to patch csiNode %q\", csiNode.Name) gomega.Expect(patchedCSINode.Labels).To(gomega.HaveKeyWithValue(csiNode.Name, \"patched\"), \"Checking that patched label has been applied\") patchedSelector := labels.Set{csiNode.Name: \"patched\"}.AsSelector().String() ginkgo.By(fmt.Sprintf(\"Listing csiNodes with LabelSelector %q\", patchedSelector)) csiNodeList, err := csiNodeClient.List(ctx, metav1.ListOptions{LabelSelector: patchedSelector}) framework.ExpectNoError(err, \"failed to list csiNodes\") gomega.Expect(csiNodeList.Items).To(gomega.HaveLen(1)) ginkgo.By(fmt.Sprintf(\"Delete initial csiNode: %q\", initialCSINode.Name)) err = csiNodeClient.Delete(ctx, csiNode.Name, metav1.DeleteOptions{}) framework.ExpectNoError(err, \"failed to delete csiNode %q\", initialCSINode.Name) ginkgo.By(fmt.Sprintf(\"Confirm deletion of csiNode %q\", initialCSINode.Name)) type state struct { CSINodes []storagev1.CSINode } err = framework.Gomega().Eventually(ctx, framework.HandleRetry(func(ctx context.Context) (*state, error) { csiNodeList, err := csiNodeClient.List(ctx, metav1.ListOptions{LabelSelector: patchedSelector}) if err != nil { return nil, fmt.Errorf(\"failed to list CSINode: %w\", err) } return &state{ CSINodes: csiNodeList.Items, }, nil })).WithTimeout(30 * time.Second).Should(framework.MakeMatcher(func(s *state) (func() string, error) { if len(s.CSINodes) == 0 { return nil, nil } return func() string { return fmt.Sprintf(\"Expected CSINode to be deleted, found %q\", s.CSINodes[0].Name) }, nil })) framework.ExpectNoError(err, \"Timeout while waiting to confirm CSINode %q deletion\", initialCSINode.Name) replacementCSINode := storagev1.CSINode{ ObjectMeta: metav1.ObjectMeta{ Name: \"e2e-csinode-\" + utilrand.String(5), }, } ginkgo.By(fmt.Sprintf(\"Creating replacement csiNode %q\", 
replacementCSINode.Name)) secondCSINode, err := csiNodeClient.Create(ctx, &replacementCSINode, metav1.CreateOptions{}) framework.ExpectNoError(err, \"failed to create csiNode %q\", replacementCSINode.Name) ginkgo.By(fmt.Sprintf(\"Getting replacement csiNode %q\", replacementCSINode.Name)) retrievedCSINode, err = csiNodeClient.Get(ctx, secondCSINode.Name, metav1.GetOptions{}) framework.ExpectNoError(err, \"Failed to retrieve CSINode %q\", replacementCSINode.Name) gomega.Expect(retrievedCSINode.Name).To(gomega.Equal(secondCSINode.Name), \"Checking that the retrieved name has been found\") ginkgo.By(fmt.Sprintf(\"Updating replacement csiNode %q\", retrievedCSINode.Name)) var updatedCSINode *storagev1.CSINode err = retry.RetryOnConflict(retry.DefaultRetry, func() error { tmpCSINode, err := csiNodeClient.Get(ctx, retrievedCSINode.Name, metav1.GetOptions{}) framework.ExpectNoError(err, \"Unable to get %q\", replacementCSINode.Name) tmpCSINode.Labels = map[string]string{replacementCSINode.Name: \"updated\"} updatedCSINode, err = csiNodeClient.Update(ctx, tmpCSINode, metav1.UpdateOptions{}) return err }) framework.ExpectNoError(err, \"failed to update %q\", replacementCSINode.Name) gomega.Expect(updatedCSINode.Labels).To(gomega.HaveKeyWithValue(secondCSINode.Name, \"updated\"), \"Checking that updated label has been applied\") updatedSelector := labels.Set{retrievedCSINode.Name: \"updated\"}.AsSelector().String() ginkgo.By(fmt.Sprintf(\"DeleteCollection of CSINodes with %q label\", updatedSelector)) err = csiNodeClient.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: updatedSelector}) framework.ExpectNoError(err, \"failed to delete csiNode Collection\") ginkgo.By(fmt.Sprintf(\"Confirm deletion of replacement csiNode with LabelSelector %q\", updatedSelector)) err = framework.Gomega().Eventually(ctx, framework.HandleRetry(func(ctx context.Context) (*state, error) { csiNodeList, err := csiNodeClient.List(ctx, metav1.ListOptions{LabelSelector: 
updatedSelector}) if err != nil { return nil, fmt.Errorf(\"failed to list CSINode: %w\", err) } return &state{ CSINodes: csiNodeList.Items, }, nil })).WithTimeout(30 * time.Second).Should(framework.MakeMatcher(func(s *state) (func() string, error) { if len(s.CSINodes) == 0 { return nil, nil } return func() string { return fmt.Sprintf(\"Expected CSINode to be deleted, found %q\", s.CSINodes[0].Name) }, nil })) framework.ExpectNoError(err, \"Timeout while waiting to confirm CSINode %q deletion\", replacementCSINode.Name) }) }) }) "}