{"query_id": "q-en-kubernetes-91fe6ceffd0a40ec3ded85dd062721e869736715f793aa77d338c920cf20d98f", "query": "CA Advisor runs inside docker containers, but I don't believe it's correct to show the Kubelet and Proxy processes as inside the docker box on the minion. Also, shouldn't the internet-firewall-really load balance across minion proxies instead of be connected to just one SPOF minion, treating that connection more like a logical connection across all minion proxies?\nThis is the desired state, which is admittedly not the current state. Yeah, this is a good point. k8s currently relies on its environment for load balancing. Maybe or can speak to the desired state here.\nWe should just review the diagram and make it correctly reflect the state of 1.0.", "positive_passages": [{"docid": "doc-en-kubernetes-94b76b7158da38f34cef746d9bee478a533e1bd68a65e8e6a6a09dbc8bf5dccf", "text": " Minion kubelet cAdvisor docker container container container Pod container container container Pod container container container Pod Proxy Minion Node kubelet kubelet cAdvisor container docker container container container container Pod cAdvisor Pod container container container container container container Pod Pod ", "commid": "kubernetes_pr_5474"}], "negative_passages": []} {"query_id": "q-en-kubernetes-91fe6ceffd0a40ec3ded85dd062721e869736715f793aa77d338c920cf20d98f", "query": "CA Advisor runs inside docker containers, but I don't believe it's correct to show the Kubelet and Proxy processes as inside the docker box on the minion. Also, shouldn't the internet-firewall-really load balance across minion proxies instead of be connected to just one SPOF minion, treating that connection more like a logical connection across all minion proxies?\nThis is the desired state, which is admittedly not the current state. Yeah, this is a good point. k8s currently relies on its environment for load balancing. 
Maybe or can speak to the desired state here.\nWe should just review the diagram and make it correctly reflect the state of 1.0.", "positive_passages": [{"docid": "doc-en-kubernetes-824f179cebec1a3ec1b6fa5fdcd9cd35f61d8444719cf437ae44c17b90532b8d", "text": " Proxy Proxy ", "commid": "kubernetes_pr_5474"}], "negative_passages": []} {"query_id": "q-en-kubernetes-91fe6ceffd0a40ec3ded85dd062721e869736715f793aa77d338c920cf20d98f", "query": "CA Advisor runs inside docker containers, but I don't believe it's correct to show the Kubelet and Proxy processes as inside the docker box on the minion. Also, shouldn't the internet-firewall-really load balance across minion proxies instead of be connected to just one SPOF minion, treating that connection more like a logical connection across all minion proxies?\nThis is the desired state, which is admittedly not the current state. Yeah, this is a good point. k8s currently relies on its environment for load balancing. Maybe or can speak to the desired state here.\nWe should just review the diagram and make it correctly reflect the state of 1.0.", "positive_passages": [{"docid": "doc-en-kubernetes-71eac7960bd00f4a4e7b80389cfc8171b7068c7cf3dbc6b24e4fd9049e7fbeb0", "text": " Firewall Firewall Internet Internet ", "commid": "kubernetes_pr_5474"}], "negative_passages": []} {"query_id": "q-en-kubernetes-91fe6ceffd0a40ec3ded85dd062721e869736715f793aa77d338c920cf20d98f", "query": "CA Advisor runs inside docker containers, but I don't believe it's correct to show the Kubelet and Proxy processes as inside the docker box on the minion. Also, shouldn't the internet-firewall-really load balance across minion proxies instead of be connected to just one SPOF minion, treating that connection more like a logical connection across all minion proxies?\nThis is the desired state, which is admittedly not the current state. Yeah, this is a good point. k8s currently relies on its environment for load balancing. 
Maybe or can speak to the desired state here.\nWe should just review the diagram and make it correctly reflect the state of 1.0.", "positive_passages": [{"docid": "doc-en-kubernetes-ab525215a4137a9e8eb194dc3a34b23bedf19ded3f83b42537b360dc97a142a7", "text": " Distributed Watchable Storage (implemented via etcd) ", "commid": "kubernetes_pr_5474"}], "negative_passages": []} {"query_id": "q-en-kubernetes-91fe6ceffd0a40ec3ded85dd062721e869736715f793aa77d338c920cf20d98f", "query": "CA Advisor runs inside docker containers, but I don't believe it's correct to show the Kubelet and Proxy processes as inside the docker box on the minion. Also, shouldn't the internet-firewall-really load balance across minion proxies instead of be connected to just one SPOF minion, treating that connection more like a logical connection across all minion proxies?\nThis is the desired state, which is admittedly not the current state. Yeah, this is a good point. k8s currently relies on its environment for load balancing. Maybe or can speak to the desired state here.\nWe should just review the diagram and make it correctly reflect the state of 1.0.", "positive_passages": [{"docid": "doc-en-kubernetes-7786a413dd147e34c6c3a1cd0bef9850fe38c55144ef993457389b474b9739f3", "text": " ", "commid": "kubernetes_pr_5474"}], "negative_passages": []} {"query_id": "q-en-kubernetes-91fe6ceffd0a40ec3ded85dd062721e869736715f793aa77d338c920cf20d98f", "query": "CA Advisor runs inside docker containers, but I don't believe it's correct to show the Kubelet and Proxy processes as inside the docker box on the minion. Also, shouldn't the internet-firewall-really load balance across minion proxies instead of be connected to just one SPOF minion, treating that connection more like a logical connection across all minion proxies?\nThis is the desired state, which is admittedly not the current state. Yeah, this is a good point. k8s currently relies on its environment for load balancing. 
Maybe or can speak to the desired state here.\nWe should just review the diagram and make it correctly reflect the state of 1.0.", "positive_passages": [{"docid": "doc-en-kubernetes-7e4f563aa6e0bcbdd1a66b94e0a23144347b3f757ae31ba7c9767c1b68bfc1da", "text": " docker .. ... Node kubelet container container cAdvisor Pod container container container Pod container container container Pod kubelet info service Proxy docker .. ... Distributed Watchable Storage (implemented via etcd) ", "commid": "kubernetes_pr_5474"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c2e5391174f153a1c9277af876ed0c0c2fa7a8f69a8e1b77e99161308c31e686", "query": "I can see that this message is flooding my minion logs so I was curious what should this config file looks like. I was trying to find some example, but with no luck. Also is this config file created during the runtime? What actually creates it? Shouldn't we just give up when the config file does not exists? Also there are better ways (maybe) to do polling if the file exists or not. I meaning you can use inotify () instead of an infinite loop + sleep. I'm happy to make a patch if you think it is a good idea. PS: I know that you put the timeout 5 seconds before retries.\nThanks for the report. I think this is just a bug-- we should check whether it exists and not pollute the log. I'm pretty sure we don't currently even use that file. Also we should NOT be reading the proxy's config from a file in /tmp where anybody can write it. Bad default location.\nProxy config sounds like an /etc thing, same for kubelet's container manifests from disk\nAre we interested in polling for the configuration files changes? Or do we want to read the settings just once, as an ordinary unix binary?\nYes, polling is a requirement. 
This is a live config file like the kubelet's machine files.", "positive_passages": [{"docid": "doc-en-kubernetes-056c40aae9b523477819c4baf88ce5b646ffaccc6e48d98e17ed490f77cdf7ce", "text": "import ( \"bytes\" \"encoding/json\" \"fmt\" \"io/ioutil\" \"reflect\" \"time\"", "commid": "kubernetes_pr_681"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c2e5391174f153a1c9277af876ed0c0c2fa7a8f69a8e1b77e99161308c31e686", "query": "I can see that this message is flooding my minion logs so I was curious what should this config file looks like. I was trying to find some example, but with no luck. Also is this config file created during the runtime? What actually creates it? Shouldn't we just give up when the config file does not exists? Also there are better ways (maybe) to do polling if the file exists or not. I meaning you can use inotify () instead of an infinite loop + sleep. I'm happy to make a patch if you think it is a good idea. PS: I know that you put the timeout 5 seconds before retries.\nThanks for the report. I think this is just a bug-- we should check whether it exists and not pollute the log. I'm pretty sure we don't currently even use that file. Also we should NOT be reading the proxy's config from a file in /tmp where anybody can write it. Bad default location.\nProxy config sounds like an /etc thing, same for kubelet's container manifests from disk\nAre we interested in polling for the configuration files changes? Or do we want to read the settings just once, as an ordinary unix binary?\nYes, polling is a requirement. This is a live config file like the kubelet's machine files.", "positive_passages": [{"docid": "doc-en-kubernetes-481c33cb8e6ad4442a325e2f3d999cdd2ea41e65795005f4f258fb56b126afc1", "text": "var lastEndpoints []api.Endpoints sleep := 5 * time.Second // Used to avoid spamming the error log file, makes error logging edge triggered. 
hadSuccess := true for { data, err := ioutil.ReadFile(s.filename) if err != nil { glog.Errorf(\"Couldn't read file: %s : %v\", s.filename, err) msg := fmt.Sprintf(\"Couldn't read file: %s : %v\", s.filename, err) if hadSuccess { glog.Error(msg) } else { glog.V(1).Info(msg) } hadSuccess = false time.Sleep(sleep) continue } hadSuccess = true if bytes.Equal(lastData, data) { time.Sleep(sleep)", "commid": "kubernetes_pr_681"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7240f013dec109f7f6f3db86b7f71f4f168b8139d5a4174b363b9b39856ae102", "query": "Had a report of this over email. Filing this issue so we don't forget to investigate.\nCan't reproduce.", "positive_passages": [{"docid": "doc-en-kubernetes-7c3a010463a431c99c68ab693b6a247385b875e2f8f3206865fc0d34d6ad4fb5", "text": "glog.Infof(\"Version test passed\") } func runSelfLinkTest(c *client.Client) { var svc api.Service err := c.Post().Path(\"services\").Body( &api.Service{ ObjectMeta: api.ObjectMeta{ Name: \"selflinktest\", Labels: map[string]string{ \"name\": \"selflinktest\", }, }, Port: 12345, // This is here because validation requires it. 
Selector: map[string]string{ \"foo\": \"bar\", }, }, ).Do().Into(&svc) if err != nil { glog.Fatalf(\"Failed creating selflinktest service: %v\", err) } err = c.Get().AbsPath(svc.SelfLink).Do().Into(&svc) if err != nil { glog.Fatalf(\"Failed listing service with supplied self link '%v': %v\", svc.SelfLink, err) } var svcList api.ServiceList err = c.Get().Path(\"services\").Do().Into(&svcList) if err != nil { glog.Fatalf(\"Failed listing services: %v\", err) } err = c.Get().AbsPath(svcList.SelfLink).Do().Into(&svcList) if err != nil { glog.Fatalf(\"Failed listing services with supplied self link '%v': %v\", svcList.SelfLink, err) } found := false for i := range svcList.Items { item := &svcList.Items[i] if item.Name != \"selflinktest\" { continue } found = true err = c.Get().AbsPath(item.SelfLink).Do().Into(&svc) if err != nil { glog.Fatalf(\"Failed listing service with supplied self link '%v': %v\", item.SelfLink, err) } break } if !found { glog.Fatalf(\"never found selflinktest service\") } glog.Infof(\"Self link test passed\") // TODO: Should test PUT at some point, too. } func runAtomicPutTest(c *client.Client) { var svc api.Service err := c.Post().Path(\"services\").Body(", "commid": "kubernetes_pr_2082"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7240f013dec109f7f6f3db86b7f71f4f168b8139d5a4174b363b9b39856ae102", "query": "Had a report of this over email. Filing this issue so we don't forget to investigate.\nCan't reproduce.", "positive_passages": [{"docid": "doc-en-kubernetes-559cf43ef81f6d6a44fbacc117be3d22b123560c680b895a0d01ee36ecc0a6e0", "text": "runServiceTest, runAPIVersionsTest, runMasterServiceTest, runSelfLinkTest, } var wg sync.WaitGroup wg.Add(len(testFuncs))", "commid": "kubernetes_pr_2082"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7240f013dec109f7f6f3db86b7f71f4f168b8139d5a4174b363b9b39856ae102", "query": "Had a report of this over email. 
Filing this issue so we don't forget to investigate.\nCan't reproduce.", "positive_passages": [{"docid": "doc-en-kubernetes-5644ac57169d4e7e288c9846590f3ac560ea0c4dcf7375dec4eba330b758d855", "text": "newURL.Path = path.Join(h.canonicalPrefix, req.URL.Path) newURL.RawQuery = \"\" newURL.Fragment = \"\" return h.selfLinker.SetSelfLink(obj, newURL.String()) err := h.selfLinker.SetSelfLink(obj, newURL.String()) if err != nil { return err } if !runtime.IsListType(obj) { return nil } // Set self-link of objects in the list. items, err := runtime.ExtractList(obj) if err != nil { return err } for i := range items { if err := h.setSelfLinkAddName(items[i], req); err != nil { return err } } return runtime.SetList(obj, items) } // Like setSelfLink, but appends the object's name.", "commid": "kubernetes_pr_2082"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7240f013dec109f7f6f3db86b7f71f4f168b8139d5a4174b363b9b39856ae102", "query": "Had a report of this over email. Filing this issue so we don't forget to investigate.\nCan't reproduce.", "positive_passages": [{"docid": "doc-en-kubernetes-3e16d4673a4e155a5204361313297ce03461058571e1495995578e343c57aa5f", "text": "\"github.com/GoogleCloudPlatform/kubernetes/pkg/conversion\" ) func IsListType(obj Object) bool { _, err := GetItemsPtr(obj) return err == nil } // GetItemsPtr returns a pointer to the list object's Items member. // If 'list' doesn't have an Items member, it's not really a list type // and an error will be returned.", "commid": "kubernetes_pr_2082"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7240f013dec109f7f6f3db86b7f71f4f168b8139d5a4174b363b9b39856ae102", "query": "Had a report of this over email. 
Filing this issue so we don't forget to investigate.\nCan't reproduce.", "positive_passages": [{"docid": "doc-en-kubernetes-f2b7227aa5809e2ad93786c56678a2f443c615e4425d813324d6b15c85393526", "text": "\"github.com/google/gofuzz\" ) func TestIsList(t *testing.T) { tests := []struct { obj runtime.Object isList bool }{ {&api.PodList{}, true}, {&api.Pod{}, false}, } for _, item := range tests { if e, a := item.isList, runtime.IsListType(item.obj); e != a { t.Errorf(\"%v: Expected %v, got %v\", reflect.TypeOf(item.obj), e, a) } } } func TestExtractList(t *testing.T) { pl := &api.PodList{ Items: []api.Pod{", "commid": "kubernetes_pr_2082"}], "negative_passages": []} {"query_id": "q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? 
On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-2f423faa5d7b893bb36e115dd653dc1122e59a11be0c2e04b387deced8cd91c7", "text": "for i, manifest := range manifests.Items { name := manifest.ID if name == \"\" { name = fmt.Sprintf(\"_%d\", i+1) name = fmt.Sprintf(\"%d\", i+1) } pods = append(pods, kubelet.Pod{Name: name, Manifest: manifest}) }", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": "q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. 
Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-c77fab6f1ada196ff2b2b972bd9baf356870057afcf2ddb04d47111912a366d5", "text": "func TestGetEtcd(t *testing.T) { fakeClient := tools.MakeFakeEtcdClient(t) ch := make(chan interface{}, 1) manifest := api.ContainerManifest{ID: \"foo\", Version: \"v1beta1\", Containers: []api.Container{{Name: \"1\", Image: \"foo\"}}} fakeClient.Data[\"/registry/hosts/machine/kubelet\"] = tools.EtcdResponseWithError{ R: &etcd.Response{ Node: &etcd.Node{ Value: api.EncodeOrDie(&api.ContainerManifestList{ Items: []api.ContainerManifest{{ID: \"foo\"}}, Items: []api.ContainerManifest{manifest}, }), ModifiedIndex: 1, },", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": "q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. 
First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. 
On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-1476b2eae164695a67ea94bac2f3228156d93c492177c7192da14ab418fd7d0e", "text": "t.Errorf(\"Expected %#v, Got %#v\", 2, lastIndex) } update := (<-ch).(kubelet.PodUpdate) expected := CreatePodUpdate(kubelet.SET, kubelet.Pod{Name: \"foo\", Manifest: api.ContainerManifest{ID: \"foo\"}}) expected := CreatePodUpdate(kubelet.SET, kubelet.Pod{Name: \"foo\", Manifest: manifest}) if !reflect.DeepEqual(expected, update) { t.Errorf(\"Expected %#v, Got %#v\", expected, update) } for i := range update.Pods { if errs := kubelet.ValidatePod(&update.Pods[i]); len(errs) != 0 { t.Errorf(\"Expected no validation errors on %#v, Got %#v\", update.Pods[i], errs) } } } func TestWatchEtcd(t *testing.T) {", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": "q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. 
I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-f3dd572040a1f39ce9afb9b135a449a10737f44240dda6f199479d055f8d67a2", "text": "import ( \"crypto/sha1\" \"encoding/base64\" \"encoding/base32\" \"fmt\" \"io/ioutil\" \"os\" \"path/filepath\" \"regexp\" \"sort\" \"strings\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet\"", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": "q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. 
a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. 
On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-0590595e283b75e93838ef59263bca7b4a1496840461423d8b1ae5bd31996b6e", "text": "return pod, nil } var simpleSubdomainSafeEncoding = base64.NewEncoding(\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ012345678900\") var simpleSubdomainSafeEncoding = base32.NewEncoding(\"0123456789abcdefghijklmnopqrstuv\") var unsafeDNSLabelReplacement = regexp.MustCompile(\"[^a-z0-9]+\") // simpleSubdomainSafeHash generates a compact hash of the input that uses characters // only in the range a-zA-Z0-9, making it suitable for DNS subdomain labels func simpleSubdomainSafeHash(s string) string { // simpleSubdomainSafeHash generates a pod name for the given path that is // suitable as a subdomain label. func simpleSubdomainSafeHash(path string) string { name := strings.ToLower(filepath.Base(path)) name = unsafeDNSLabelReplacement.ReplaceAllString(name, \"\") hasher := sha1.New() hasher.Write([]byte(s)) hasher.Write([]byte(path)) sha := simpleSubdomainSafeEncoding.EncodeToString(hasher.Sum(nil)) if len(sha) > 20 { sha = sha[:20] } return sha return fmt.Sprintf(\"%.15s%.30s\", name, sha) }", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": "q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. 
a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. 
On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-55e55b0f9146d30528f5d535c7f5fffe20b83b9f3794a649fe3b50c665532bb6", "text": "func TestExtractFromDir(t *testing.T) { manifests := []api.ContainerManifest{ {ID: \"\", Containers: []api.Container{{Image: \"foo\"}}}, {ID: \"\", Containers: []api.Container{{Image: \"bar\"}}}, {Version: \"v1beta1\", Containers: []api.Container{{Name: \"1\", Image: \"foo\"}}}, {Version: \"v1beta1\", Containers: []api.Container{{Name: \"2\", Image: \"bar\"}}}, } files := make([]*os.File, len(manifests))", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": "q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? 
On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-60b8917d0cb759332fdbc5d58e183dc7a24ed48ec20ff3c663cd617b505bb73e", "text": "if !reflect.DeepEqual(expected, update) { t.Errorf(\"Expected %#v, Got %#v\", expected, update) } for i := range update.Pods { if errs := kubelet.ValidatePod(&update.Pods[i]); len(errs) != 0 { t.Errorf(\"Expected no validation errors on %#v, Got %#v\", update.Pods[i], errs) } } } func TestSubdomainSafeName(t *testing.T) { type Case struct { Input string Expected string } testCases := []Case{ {\"/some/path/invalidUPPERCASE\", \"invaliduppercasa6hlenc0vpqbbdtt26ghneqsq3pvud\"}, {\"/some/path/_-!%$#&@^&*(){}\", \"nvhc03p016m60huaiv3avts372rl2p\"}, } for _, testCase := range testCases { value := simpleSubdomainSafeHash(testCase.Input) if value != testCase.Expected { t.Errorf(\"Expected %s, Got %s\", testCase.Expected, value) } value2 := simpleSubdomainSafeHash(testCase.Input) if value != value2 { t.Errorf(\"Value for %s was not stable across runs: %s %s\", testCase.Input, value, value2) } } } // These are used for testing extract json (below)", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": 
"q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. 
On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-77ca850f1e7cc522875f4168243c63c85961de8df906aef9f05a075d1e6702f1", "text": "func TestExtractFromHttpMultiple(t *testing.T) { manifests := []api.ContainerManifest{ {Version: \"v1beta1\", ID: \"\"}, {Version: \"v1beta1\", ID: \"bar\"}, {Version: \"v1beta1\", ID: \"\", Containers: []api.Container{{Name: \"1\", Image: \"foo\"}}}, {Version: \"v1beta1\", ID: \"bar\", Containers: []api.Container{{Name: \"1\", Image: \"foo\"}}}, } data, err := json.Marshal(manifests) if err != nil {", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": "q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c", "query": "The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision space is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validate test on all sources.\nLowercase would require base32, rather than base64, no? 
On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIts only important if we want to use it. We don't AFAIK. On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output.", "positive_passages": [{"docid": "doc-en-kubernetes-1f7a2af8ecde443631baa5df97bc14532c5ec6e5c7ce677f131ada79aecfc5d7", "text": "if !reflect.DeepEqual(expected, update) { t.Errorf(\"Expected: %#v, Got: %#v\", expected, update) } for i := range update.Pods { if errs := kubelet.ValidatePod(&update.Pods[i]); len(errs) != 0 { t.Errorf(\"Expected no validation errors on %#v, Got %#v\", update.Pods[i], errs) } } } func TestExtractFromHttpEmptyArray(t *testing.T) {", "commid": "kubernetes_pr_749"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c5eabf1d549f41dc6ff35936ea16374bcf7ff68f6c6211b1fe87c9dc27d00d31", "query": "It's causing problems and is kind-of GCE dependent anyway, we should just use the kubecfg binary, and document the environment variables it uses.\nI think its beneficial to keep something, but remove the GCE dependent pieces, i.e. \"external-ip\" piece.\nI put together some instructions here:\nWe can remove this when kubecfg is deprecated .\nReopened since it still needs to be removed. /cc\nDoh. 
Didn't mean to declare this fixed in that PR.\nClosing this (again) as it is a dupe of", "positive_passages": [{"docid": "doc-en-kubernetes-8054c8bbdd963bec7e8fd99db1df1a7991f39531fa67ebfb06dff6c1a74f4eae", "text": "* **Kubernetes Web Interface** ([ui.md](ui.md)): Accessing the Kubernetes web user interface. * **Kubecfg Command Line Interface** ([cli.md](cli.md)): The `kubecfg` command line reference. * **Kubectl Command Line Interface** ([kubectl.md](kubectl.md)): The `kubectl` command line reference. * **Roadmap** ([roadmap.md](roadmap.md)): The set of supported use cases, features, docs, and patterns that are required before Kubernetes 1.0.", "commid": "kubernetes_pr_3722"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c5eabf1d549f41dc6ff35936ea16374bcf7ff68f6c6211b1fe87c9dc27d00d31", "query": "It's causing problems and is kind-of GCE dependent anyway, we should just use the kubecfg binary, and document the environment variables it uses.\nI think its beneficial to keep something, but remove the GCE dependent pieces, i.e. \"external-ip\" piece.\nI put together some instructions here:\nWe can remove this when kubecfg is deprecated .\nReopened since it still needs to be removed. /cc\nDoh. Didn't mean to declare this fixed in that PR.\nClosing this (again) as it is a dupe of", "positive_passages": [{"docid": "doc-en-kubernetes-88cd7c30fd22e6a3e7340a791c4484c999037e4e755e9f3bc3aef5fc542c50aa", "text": " ## kubectl The ```kubectl``` command provides command line access to the kubernetes API. See [kubectl documentation](kubectl.md) for details. ## kubecfg is deprecated. Please use kubectl! ## kubecfg command line interface The `kubecfg` command line tools is used to interact with the Kubernetes HTTP API. 
* [ReplicationController Commands](#replication-controller-commands) * [RESTful Commands](#restful-commands) * [Complete Details](#details) * [Usage](#usage) * [Options](#options) ### Replication Controller Commands #### Run ``` kubecfg [options] run ``` Creates a Kubernetes ReplicaController object. * `[options]` are described in the [Options](#options) section. * `` is the Docker image to use. * `` is the number of replicas of the container to create. * `` is the name to assign to this new ReplicaController. ##### Example ``` kubecfg -p 8080:80 run dockerfile/nginx 2 myNginxController ``` #### Resize ``` kubecfg [options] resize ``` Changes the desired number of replicas, causing replicas to be created or deleted. * `[options]` are described in the [Options](#options) section. ##### Example ``` kubecfg resize myNginxController 3 ``` #### Stop ``` kubecfg [options] stop ``` Stops a controller by setting its desired size to zero. Syntactic sugar on top of resize. * `[options]` are described in the [Options](#options) section. #### Remove ``` kubecfg [options] rm ``` Delete a replication controller. The desired size of the controller must be zero, by calling either `kubecfg resize 0` or `kubecfg stop `. * `[options]` are described in the [Options](#options) section. ### RESTful Commands Kubecfg also supports raw access to the basic restful requests. There are four different resources you can acccess: * `pods` * `replicationControllers` * `services` * `minions` ###### Common Flags * -yaml : output in YAML format * -json : output in JSON format * -c : Accept a file in JSON or YAML for POST/PUT #### Commands ##### get Raw access to a RESTful GET request. ``` kubecfg [options] get pods/pod-abc-123 ``` ##### list Raw access to a RESTful LIST request. ``` kubecfg [options] list pods ``` ##### create Raw access to a RESTful POST request. ``` kubecfg <-c some/body.[json|yaml]> [options] create pods ``` ##### update Raw access to a RESTful PUT request. 
``` kubecfg <-c some/body.[json|yaml]> [options] update pods/pod-abc-123 ``` ##### delete Raw access to a RESTful DELETE request. ``` kubecfg [options] delete pods/pod-abc-123 ``` ### Details #### Usage ``` kubecfg -h [-c config/file.json] [-p :,..., :] Kubernetes REST API: kubecfg [OPTIONS] get|list|create|delete|update [/] Manage replication controllers: kubecfg [OPTIONS] stop|rm|rollingupdate kubecfg [OPTIONS] run kubecfg [OPTIONS] resize ``` #### Options * `-V=true|false`: Print the version number. * `-alsologtostderr=true|false`: log to standard error as well as files * `-auth=\"/path/to/.kubernetes_auth\"`: Path to the auth info file. Only used if doing https. * `-c=\"/path/to/config_file\"`: Path to the config file. * `-h=\"\"`: The host to connect to. * `-json=true|false`: If true, print raw JSON for responses * `-l=\"\"`: Selector (label query) to use for listing * `-log_backtrace_at=:0`: when logging hits line file:N, emit a stack trace * `-log_dir=\"\"`: If non-empty, write log files in this directory * `-log_flush_frequency=5s`: Maximum number of seconds between log flushes * `-logtostderr=true|false`: log to standard error instead of files * `-p=\"\"`: The port spec, comma-separated list of `:,...` * `-proxy=true|false`: If true, run a proxy to the API server * `-s=-1`: If positive, create and run a corresponding service on this port, only used with 'run' * `-stderrthreshold=0`: logs at or above this threshold go to stderr * `-template=\"\"`: If present, parse this string as a golang template and use it for output printing * `-template_file=\"\"`: If present, load this file as a golang template and use it for output printing * `-u=1m0s`: Update interval period * `-v=0`: log level for V logs. 
See [Logging Conventions](devel/logging.md) for details * `-verbose=true|false`: If true, print extra information * `-vmodule=\"\"`: comma-separated list of pattern=N settings for file-filtered logging * `-www=\"\"`: If -proxy is true, use this directory to serve static files * `-yaml=true|false`: If true, print raw YAML for responses ", "commid": "kubernetes_pr_3722"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c5eabf1d549f41dc6ff35936ea16374bcf7ff68f6c6211b1fe87c9dc27d00d31", "query": "It's causing problems and is kind-of GCE dependent anyway, we should just use the kubecfg binary, and document the environment variables it uses.\nI think its beneficial to keep something, but remove the GCE dependent pieces, i.e. \"external-ip\" piece.\nI put together some instructions here:\nWe can remove this when kubecfg is deprecated .\nReopened since it still needs to be removed. /cc\nDoh. Didn't mean to declare this fixed in that PR.\nClosing this (again) as it is a dupe of", "positive_passages": [{"docid": "doc-en-kubernetes-7a671e34f733ae15cb97f401d4620aa5917af4de1b3ad08c7ad6c0a37ceb817b", "text": "* [API](api-conventions.md) * [Client libraries](client-libraries.md) * [Command-line interface](cli.md) * [Command-line interface](kubectl.md) * [UI](ux.md) * [Images and registries](images.md) * [Container environment](container-environment.md)", "commid": "kubernetes_pr_3722"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c5eabf1d549f41dc6ff35936ea16374bcf7ff68f6c6211b1fe87c9dc27d00d31", "query": "It's causing problems and is kind-of GCE dependent anyway, we should just use the kubecfg binary, and document the environment variables it uses.\nI think its beneficial to keep something, but remove the GCE dependent pieces, i.e. \"external-ip\" piece.\nI put together some instructions here:\nWe can remove this when kubecfg is deprecated .\nReopened since it still needs to be removed. /cc\nDoh. 
Didn't mean to declare this fixed in that PR.\nClosing this (again) as it is a dupe of", "positive_passages": [{"docid": "doc-en-kubernetes-96fe0bfe1bf4b1cfaa75dda32d8f502734a33f1d73a3eeebc8382cd3b79b9eba", "text": "The replication controller is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in [#492](https://github.com/GoogleCloudPlatform/kubernetes/issues/492)), which would change its `replicas` field. We will not add scheduling policies (e.g., [spreading](https://github.com/GoogleCloudPlatform/kubernetes/issues/367#issuecomment-48428019)) to replication controller. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation ([#170](https://github.com/GoogleCloudPlatform/kubernetes/issues/170)). The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The \"macro\" operations currently supported by kubecfg (run, stop, resize, rollingupdate) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc. The replication controller is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future.
The \"macro\" operations currently supported by kubectl (run-container, stop, resize, rollingupdate) are proof-of-concept examples of this. For instance, we could imagine something like [Asgard](http://techblog.netflix.com/2012/06/asgard-web-based-cloud-management-and.html) managing replication controllers, auto-scalers, services, scheduling policies, canaries, etc. ## Common usage patterns", "commid": "kubernetes_pr_3722"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c5eabf1d549f41dc6ff35936ea16374bcf7ff68f6c6211b1fe87c9dc27d00d31", "query": "It's causing problems and is kind-of GCE dependent anyway, we should just use the kubecfg binary, and document the environment variables it uses.\nI think its beneficial to keep something, but remove the GCE dependent pieces, i.e. \"external-ip\" piece.\nI put together some instructions here:\nWe can remove this when kubecfg is deprecated .\nReopened since it still needs to be removed. /cc\nDoh. Didn't mean to declare this fixed in that PR.\nClosing this (again) as it is a dupe of", "positive_passages": [{"docid": "doc-en-kubernetes-ee892708bfa41a4f9b5eb02ad3043296cc9d7a00f5616e16ded5b2c6e202cc6e", "text": "Start the server: ```sh cluster/kubecfg.sh -proxy -www $PWD/www cluster/kubectl.sh proxy -www=$PWD/www ``` The UI should now be running on [localhost](http://localhost:8001/static/index.html#/groups//selector)", "commid": "kubernetes_pr_3722"}], "negative_passages": []} {"query_id": "q-en-kubernetes-9d9eb5f8f6da0df47984d5529d5299f1f0821654a05b1fd79fc26d6b03acedb8", "query": "Salt was failing to install on the master vagrant instance. The change noted by the FIXME solved it.\nIt's possible the required merge in the upstream repo occurred, will take a look.\nI just attempted to install a new cluster from upstream/master and was not able to reproduce the issue described. Was there an issue with curl-ing that specific version of salt-bootstrap? 
I guess more details on what the failure you encountered appeared as would help know how to proceed.\nI will submit a PR to revert back to using the default non-versioned bootstrap URL. Salt-stack PR: Fixed the errors in install that were previously encountered around salt-api not being found.\nSee , thanks!", "positive_passages": [{"docid": "doc-en-kubernetes-cba0204e1a4f53cc2176261e51788498d463d235c7d9f3516d9366346705f2a0", "text": "# install. See https://github.com/saltstack/salt-bootstrap/issues/270 # # -M installs the master # FIXME: The following line should be replaced with: # curl -L http://bootstrap.saltstack.com | sh -s -- -M # when the merged salt-api service is included in the fedora salt-master rpm # Merge is here: https://github.com/saltstack/salt/pull/13554 # Fedora git repository is here: http://pkgs.fedoraproject.org/cgit/salt.git/ # (a new service file needs to be added for salt-api) curl -sS -L https://raw.githubusercontent.com/saltstack/salt-bootstrap/v2014.06.30/bootstrap-salt.sh | sh -s -- -M curl -sS -L --connect-timeout 20 --retry 6 --retry-delay 10 https://bootstrap.saltstack.com | sh -s -- -M fi # Build release", "commid": "kubernetes_pr_932"}], "negative_passages": []} {"query_id": "q-en-kubernetes-9d9eb5f8f6da0df47984d5529d5299f1f0821654a05b1fd79fc26d6b03acedb8", "query": "Salt was failing to install on the master vagrant instance. The change noted by the FIXME solved it.\nIt's possible the required merge in the upstream repo occurred, will take a look.\nI just attempted to install a new cluster from upstream/master and was not able to reproduce the issue described. Was there an issue with curl-ing that specific version of salt-bootstrap? I guess more details on what the failure you encountered appeared as would help know how to proceed.\nI will submit a PR to revert back to using the default non-versioned bootstrap URL. 
Salt-stack PR: Fixed the errors in install that were previously encountered around salt-api not being found.\nSee , thanks!", "positive_passages": [{"docid": "doc-en-kubernetes-d432ad9835130ddff181e23ec081539c60e95bed6e658ebbfe89e3eac77b6391", "text": "# # We specify -X to avoid a race condition that can cause minion failure to # install. See https://github.com/saltstack/salt-bootstrap/issues/270 curl -sS -L http://bootstrap.saltstack.com | sh -s -- -X curl -sS -L --connect-timeout 20 --retry 6 --retry-delay 10 https://bootstrap.saltstack.com | sh -s -- -X ## TODO this only works on systemd distros, need to find a work-around as removing -X above fails to start the services installed systemctl enable salt-minion", "commid": "kubernetes_pr_932"}], "negative_passages": []} {"query_id": "q-en-kubernetes-009e7492cbeef180b9883cfe661e04a8dca4a46ed7fabb95f8d66a2876df42c7", "query": "The Kubelet REST end-point has turned into a complicated switch-case code block that is harder to maintain overtime. The Kubelet should move to a http.ServeMux pattern and register handler functions to a pattern.\nI am working on a PR to submit in next couple of days that will make this change, so feel free to assign to me if possible.\nTagging because I asked for this in one of his PRs\nThis one is also under my radar. If no one had time, feel free to assign it to me too.\nI have a branch in progress. Feel free to offer feedback on it when I submit. 
Sent from my iPhone", "positive_passages": [{"docid": "doc-en-kubernetes-8fbd5069dc8d0c6f50725a24a030fe68d0dfa0a75d89baaa3745d64bc3a59e50", "text": "myKubelet := kubelet.NewIntegrationTestKubelet(machineList[0], &fakeDocker1) go util.Forever(func() { myKubelet.Run(cfg1.Updates()) }, 0) go util.Forever(func() { kubelet.ListenAndServeKubeletServer(myKubelet, cfg1.Channel(\"http\"), http.DefaultServeMux, \"localhost\", 10250) kubelet.ListenAndServeKubeletServer(myKubelet, cfg1.Channel(\"http\"), \"localhost\", 10250) }, 0) // Kubelet (machine)", "commid": "kubernetes_pr_975"}], "negative_passages": []} {"query_id": "q-en-kubernetes-009e7492cbeef180b9883cfe661e04a8dca4a46ed7fabb95f8d66a2876df42c7", "query": "The Kubelet REST end-point has turned into a complicated switch-case code block that is harder to maintain overtime. The Kubelet should move to a http.ServeMux pattern and register handler functions to a pattern.\nI am working on a PR to submit in next couple of days that will make this change, so feel free to assign to me if possible.\nTagging because I asked for this in one of his PRs\nThis one is also under my radar. If no one had time, feel free to assign it to me too.\nI have a branch in progress. Feel free to offer feedback on it when I submit. 
Sent from my iPhone", "positive_passages": [{"docid": "doc-en-kubernetes-3800f1341f4cd6bdcc9cc4a746fe471dfba758db97dba862a6e4ca9d132cbf20", "text": "otherKubelet := kubelet.NewIntegrationTestKubelet(machineList[1], &fakeDocker2) go util.Forever(func() { otherKubelet.Run(cfg2.Updates()) }, 0) go util.Forever(func() { kubelet.ListenAndServeKubeletServer(otherKubelet, cfg2.Channel(\"http\"), http.DefaultServeMux, \"localhost\", 10251) kubelet.ListenAndServeKubeletServer(otherKubelet, cfg2.Channel(\"http\"), \"localhost\", 10251) }, 0) return apiServer.URL", "commid": "kubernetes_pr_975"}], "negative_passages": []} {"query_id": "q-en-kubernetes-009e7492cbeef180b9883cfe661e04a8dca4a46ed7fabb95f8d66a2876df42c7", "query": "The Kubelet REST end-point has turned into a complicated switch-case code block that is harder to maintain overtime. The Kubelet should move to a http.ServeMux pattern and register handler functions to a pattern.\nI am working on a PR to submit in next couple of days that will make this change, so feel free to assign to me if possible.\nTagging because I asked for this in one of his PRs\nThis one is also under my radar. If no one had time, feel free to assign it to me too.\nI have a branch in progress. Feel free to offer feedback on it when I submit. Sent from my iPhone", "positive_passages": [{"docid": "doc-en-kubernetes-af2c5c88a18cae048e4729875cbad873e81badb4cda114d4b924ff9bc0ff59cd", "text": "// start the kubelet server if *enableServer { go util.Forever(func() { kubelet.ListenAndServeKubeletServer(k, cfg.Channel(\"http\"), http.DefaultServeMux, *address, *port) kubelet.ListenAndServeKubeletServer(k, cfg.Channel(\"http\"), *address, *port) }, 0) }", "commid": "kubernetes_pr_975"}], "negative_passages": []} {"query_id": "q-en-kubernetes-009e7492cbeef180b9883cfe661e04a8dca4a46ed7fabb95f8d66a2876df42c7", "query": "The Kubelet REST end-point has turned into a complicated switch-case code block that is harder to maintain overtime. 
The Kubelet should move to a http.ServeMux pattern and register handler functions to a pattern.\nI am working on a PR to submit in next couple of days that will make this change, so feel free to assign to me if possible.\nTagging because I asked for this in one of his PRs\nThis one is also under my radar. If no one had time, feel free to assign it to me too.\nI have a branch in progress. Feel free to offer feedback on it when I submit. Sent from my iPhone", "positive_passages": [{"docid": "doc-en-kubernetes-5de0dd7e140d5f8b76254ed921689e322f48b6f4eaae2bb827f0d06cb615ab3b", "text": "type Server struct { host HostInterface updates chan<- interface{} handler http.Handler mux *http.ServeMux } func ListenAndServeKubeletServer(host HostInterface, updates chan<- interface{}, delegate http.Handler, address string, port uint) { // ListenAndServeKubeletServer initializes a server to respond to HTTP network requests on the Kubelet func ListenAndServeKubeletServer(host HostInterface, updates chan<- interface{}, address string, port uint) { glog.Infof(\"Starting to listen on %s:%d\", address, port) handler := Server{ host: host, updates: updates, handler: delegate, } handler := NewServer(host, updates) s := &http.Server{ Addr: net.JoinHostPort(address, strconv.FormatUint(uint64(port), 10)), Handler: &handler,", "commid": "kubernetes_pr_975"}], "negative_passages": []} {"query_id": "q-en-kubernetes-009e7492cbeef180b9883cfe661e04a8dca4a46ed7fabb95f8d66a2876df42c7", "query": "The Kubelet REST end-point has turned into a complicated switch-case code block that is harder to maintain overtime. The Kubelet should move to a http.ServeMux pattern and register handler functions to a pattern.\nI am working on a PR to submit in next couple of days that will make this change, so feel free to assign to me if possible.\nTagging because I asked for this in one of his PRs\nThis one is also under my radar. If no one had time, feel free to assign it to me too.\nI have a branch in progress. 
Feel free to offer feedback on it when I submit. Sent from my iPhone", "positive_passages": [{"docid": "doc-en-kubernetes-1442573651d91c2a6cf6b8a92832be370c4ed7922f688d3cb66f3c16dbf1ab0a", "text": "ServeLogs(w http.ResponseWriter, req *http.Request) } // NewServer initializes and configures a kubelet.Server object to handle HTTP requests func NewServer(host HostInterface, updates chan<- interface{}) Server { server := Server{ host: host, updates: updates, mux: http.NewServeMux(), } server.InstallDefaultHandlers() return server } // InstallDefaultHandlers registers the set of supported HTTP request patterns with the mux func (s *Server) InstallDefaultHandlers() { s.mux.HandleFunc(\"/healthz\", s.handleHealth) s.mux.HandleFunc(\"/container\", s.handleContainer) s.mux.HandleFunc(\"/containers\", s.handleContainers) s.mux.HandleFunc(\"/podInfo\", s.handlePodInfo) s.mux.HandleFunc(\"/stats/\", s.handleStats) s.mux.HandleFunc(\"/logs/\", s.handleLogs) s.mux.HandleFunc(\"/spec/\", s.handleSpec) } // error serializes an error object into an HTTP response func (s *Server) error(w http.ResponseWriter, err error) { http.Error(w, fmt.Sprintf(\"Internal Error: %v\", err), http.StatusInternalServerError) } func (s *Server) ServeHTTP(w http.ResponseWriter, req *http.Request) { defer httplog.MakeLogged(req, &w).StacktraceWhen( httplog.StatusIsNot( http.StatusOK, http.StatusNotFound, ), ).Log() // handleHealth handles health checking requests against the Kubelet func (s *Server) handleHealth(w http.ResponseWriter, req *http.Request) { } // handleContainer handles container requests against the Kubelet func (s *Server) handleContainer(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() data, err := ioutil.ReadAll(req.Body) if err != nil { s.error(w, err) return } // This is to provide backward compatibility. It only supports a single manifest var pod Pod err = yaml.Unmarshal(data, &pod.Manifest) if err != nil { s.error(w, err) return } //TODO: sha1 of manifest? 
pod.Name = \"1\" s.updates <- PodUpdate{[]Pod{pod}, SET} } // handleContainers handles containers requests against the Kubelet func (s *Server) handleContainers(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() data, err := ioutil.ReadAll(req.Body) if err != nil { s.error(w, err) return } var manifests []api.ContainerManifest err = yaml.Unmarshal(data, &manifests) if err != nil { s.error(w, err) return } pods := make([]Pod, len(manifests)) for i := range manifests { pods[i].Name = fmt.Sprintf(\"%d\", i+1) pods[i].Manifest = manifests[i] } s.updates <- PodUpdate{pods, SET} } // handlePodInfo handles podInfo requests against the Kubelet func (s *Server) handlePodInfo(w http.ResponseWriter, req *http.Request) { u, err := url.ParseRequestURI(req.RequestURI) if err != nil { s.error(w, err) return } // TODO: use an http.ServeMux instead of a switch. switch { case u.Path == \"/container\" || u.Path == \"/containers\": defer req.Body.Close() data, err := ioutil.ReadAll(req.Body) if err != nil { s.error(w, err) return } if u.Path == \"/container\" { // This is to provide backward compatibility. It only supports a single manifest var pod Pod err = yaml.Unmarshal(data, &pod.Manifest) if err != nil { s.error(w, err) return } //TODO: sha1 of manifest? 
pod.Name = \"1\" s.updates <- PodUpdate{[]Pod{pod}, SET} } else if u.Path == \"/containers\" { var manifests []api.ContainerManifest err = yaml.Unmarshal(data, &manifests) if err != nil { s.error(w, err) return } pods := make([]Pod, len(manifests)) for i := range manifests { pods[i].Name = fmt.Sprintf(\"%d\", i+1) pods[i].Manifest = manifests[i] } s.updates <- PodUpdate{pods, SET} } case u.Path == \"/podInfo\": podID := u.Query().Get(\"podID\") if len(podID) == 0 { w.WriteHeader(http.StatusBadRequest) http.Error(w, \"Missing 'podID=' query entry.\", http.StatusBadRequest) return } // TODO: backwards compatibility with existing API, needs API change podFullName := GetPodFullName(&Pod{Name: podID, Namespace: \"etcd\"}) info, err := s.host.GetPodInfo(podFullName) if err == ErrNoContainersInPod { http.Error(w, \"Pod does not exist\", http.StatusNotFound) return } if err != nil { s.error(w, err) return } data, err := json.Marshal(info) if err != nil { s.error(w, err) return } w.WriteHeader(http.StatusOK) w.Header().Add(\"Content-type\", \"application/json\") w.Write(data) case strings.HasPrefix(u.Path, \"/stats\"): s.serveStats(w, req) case strings.HasPrefix(u.Path, \"/spec\"): info, err := s.host.GetMachineInfo() if err != nil { s.error(w, err) return } data, err := json.Marshal(info) if err != nil { s.error(w, err) return } w.Header().Add(\"Content-type\", \"application/json\") w.Write(data) case strings.HasPrefix(u.Path, \"/logs/\"): s.host.ServeLogs(w, req) default: if s.handler != nil { s.handler.ServeHTTP(w, req) } podID := u.Query().Get(\"podID\") if len(podID) == 0 { w.WriteHeader(http.StatusBadRequest) http.Error(w, \"Missing 'podID=' query entry.\", http.StatusBadRequest) return } // TODO: backwards compatibility with existing API, needs API change podFullName := GetPodFullName(&Pod{Name: podID, Namespace: \"etcd\"}) info, err := s.host.GetPodInfo(podFullName) if err == ErrNoContainersInPod { http.Error(w, \"Pod does not exist\", http.StatusNotFound) return } 
if err != nil { s.error(w, err) return } data, err := json.Marshal(info) if err != nil { s.error(w, err) return } w.WriteHeader(http.StatusOK) w.Header().Add(\"Content-type\", \"application/json\") w.Write(data) } // handleStats handles stats requests against the Kubelet func (s *Server) handleStats(w http.ResponseWriter, req *http.Request) { s.serveStats(w, req) } // handleLogs handles logs requests against the Kubelet func (s *Server) handleLogs(w http.ResponseWriter, req *http.Request) { s.host.ServeLogs(w, req) } // handleSpec handles spec requests against the Kubelet func (s *Server) handleSpec(w http.ResponseWriter, req *http.Request) { info, err := s.host.GetMachineInfo() if err != nil { s.error(w, err) return } data, err := json.Marshal(info) if err != nil { s.error(w, err) return } w.Header().Add(\"Content-type\", \"application/json\") w.Write(data) } // ServeHTTP responds to HTTP requests on the Kubelet func (s *Server) ServeHTTP(w http.ResponseWriter, req *http.Request) { defer httplog.MakeLogged(req, &w).StacktraceWhen( httplog.StatusIsNot( http.StatusOK, http.StatusNotFound, ), ).Log() s.mux.ServeHTTP(w, req) } // serveStats implements stats logic func (s *Server) serveStats(w http.ResponseWriter, req *http.Request) { // /stats// components := strings.Split(strings.TrimPrefix(path.Clean(req.URL.Path), \"/\"), \"/\")", "commid": "kubernetes_pr_975"}], "negative_passages": []} {"query_id": "q-en-kubernetes-009e7492cbeef180b9883cfe661e04a8dca4a46ed7fabb95f8d66a2876df42c7", "query": "The Kubelet REST end-point has turned into a complicated switch-case code block that is harder to maintain overtime. The Kubelet should move to a http.ServeMux pattern and register handler functions to a pattern.\nI am working on a PR to submit in next couple of days that will make this change, so feel free to assign to me if possible.\nTagging because I asked for this in one of his PRs\nThis one is also under my radar. 
If no one had time, feel free to assign it to me too.\nI have a branch in progress. Feel free to offer feedback on it when I submit. Sent from my iPhone", "positive_passages": [{"docid": "doc-en-kubernetes-1ed72901bc3ca78d7548879bb28f66a7081d79dd71775fba4ce821cbf87174be", "text": "} fw.updateReader = startReading(fw.updateChan) fw.fakeKubelet = &fakeKubelet{} fw.serverUnderTest = &Server{ host: fw.fakeKubelet, updates: fw.updateChan, } server := NewServer(fw.fakeKubelet, fw.updateChan) fw.serverUnderTest = &server fw.testHTTPServer = httptest.NewServer(fw.serverUnderTest) return fw }", "commid": "kubernetes_pr_975"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ba201f6cc37fd09086cd86a1a7a1d3d232b0107e0e285e64b6f2a09676e0b920", "query": "In the guestbook example, the redis slaves are instructed to connect to the redis-master using the proxy provided by the minion on which they reside. The minion is designated by its hostname \"kubernetes-minion-N\". The container cannot resolve this name to an IP address and so slave connections fail. 
1) create vagrant cluster 2) create redis-master pod 3) create redis-master service 4) create redis-slave replicationController Verify redis-slave pod locations: list pods ID Image(s) Host Labels Status redis-master-2 dockerfile/redis kubernetes-minion-1/10.245.2.2 name=redis-master Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-1/10.245.2.2 name=redisslave,replicationController=redisSlaveController Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-3/10.245.2.4 name=redisslave,replicationController=redisSlaveController Running Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) Find the slave container docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES brendanburns/redis-slave:latest /bin/sh -c About an hour ago Up About an hour k8s--slave.---3839-11e4-a922- docker logs | tail [7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1: [7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory [7] 09 Sep 17:27:09.435 Connecting to MASTER kubernetes-minion-1: sudo nsenter -m -u -n -i -p -t $(docker inspect --format '{{ .State.Pid }}' ) /bin/bash [ root ]$ ping kubernetes-minion-1 ping: unknown host kubernetes-minion-1 [ root ]$ cat /etc/hosts 244.1.4 -3839-11e4-a922- 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters [ root ]$ cat nameserver 10.0.2.3 cat /proc/1/environ HOME=/rootPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=-3839-11e4-a922-0800279696e1REDISMASTERSERVICEPORT=10000REDISMASTERPORT=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCP=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCPPROTO=tcpREDISMASTERPORT6379TCPPORT=10000REDISMASTERPORT6379TCPADDR=kubernetes-minion-1SERVICE_HOST=kubernetes-\nTim - are you the right guy to handle this? 
I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". 
Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. 
so with that backlog (3) means revert vagrant cloudprovider to use IPAddress as hostname, which means when we pass in what we believe is hostname in this environment, it will just work because it is in-fact the IPAddress. (2) means do not pass in hostname but pass in IPAddress by changing more core k8s code. all that being said, there is an option (4) that I am intrigued by, - after following your PR through its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. this introduces an option where the kubelet injects the /etc/hosts file of minion into the running container so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know pros/cons more. I think longer term, we need a plan on what nameserver our containers are expecting to reference, and if they are inherently different in some way. I have run into a number of times the issue where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-) For now to unblock I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but not a huge deal, at least it will function.\nThanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run using hostnames in SERVICEHOST and ensuriing either through IP injection into container /etc/hosts or by providing DNS are the best options. 
I'm not sure if there might still be cases where resolution of SERVICEHOST would still be best done by presenting the IP address or not.", "positive_passages": [{"docid": "doc-en-kubernetes-82d6ae4c87c6f8e252c1f0b26293f239ba6c9acb5e3ce72c90d7acd21c87661c", "text": "MASTER_NAME=\"${INSTANCE_PREFIX}-master\" MASTER_TAG=\"${INSTANCE_PREFIX}-master\" MINION_TAG=\"${INSTANCE_PREFIX}-minion\" MINION_NAMES=($(eval echo ${INSTANCE_PREFIX}-minion-{1..${NUM_MINIONS}})) # Unable to use hostnames yet because DNS is not in cluster, so we revert external look-up name to use the minion IP #MINION_NAMES=($(eval echo ${INSTANCE_PREFIX}-minion-{1..${NUM_MINIONS}})) # IP LOCATIONS FOR INTERACTING WITH THE MINIONS MINION_IP_BASE=\"10.245.2.\" for (( i=0; i <${NUM_MINIONS}; i++)) do KUBE_MINION_IP_ADDRESSES[$i]=\"${MINION_IP_BASE}$[$i+2]\" MINION_IP[$i]=\"${MINION_IP_BASE}$[$i+2]\" MINION_NAMES[$i]=\"${MINION_IP[$i]}\" VAGRANT_MINION_NAMES[$i]=\"minion-$[$i+1]\" done", "commid": "kubernetes_pr_1279"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ba201f6cc37fd09086cd86a1a7a1d3d232b0107e0e285e64b6f2a09676e0b920", "query": "In the guestbook example, the redis slaves are instructed to connect to the redis-master using the proxy provided by the minion on which they reside. The minion is designated by its hostname \"kubernetes-minion-N\". The container cannot resolve this name to an IP address and so slave connections fail. 
1) create vagrant cluster 2) create redis-master pod 3) create redis-master service 4) create redis-slave replicationController Verify redis-slave pod locations: list pods ID Image(s) Host Labels Status redis-master-2 dockerfile/redis kubernetes-minion-1/10.245.2.2 name=redis-master Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-1/10.245.2.2 name=redisslave,replicationController=redisSlaveController Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-3/10.245.2.4 name=redisslave,replicationController=redisSlaveController Running Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) Find the slave container docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES brendanburns/redis-slave:latest /bin/sh -c About an hour ago Up About an hour k8s--slave.---3839-11e4-a922- docker logs | tail [7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1: [7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory [7] 09 Sep 17:27:09.435 Connecting to MASTER kubernetes-minion-1: sudo nsenter -m -u -n -i -p -t $(docker inspect --format '{{ .State.Pid }}' ) /bin/bash [ root ]$ ping kubernetes-minion-1 ping: unknown host kubernetes-minion-1 [ root ]$ cat /etc/hosts 244.1.4 -3839-11e4-a922- 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters [ root ]$ cat nameserver 10.0.2.3 cat /proc/1/environ HOME=/rootPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=-3839-11e4-a922-0800279696e1REDISMASTERSERVICEPORT=10000REDISMASTERPORT=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCP=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCPPROTO=tcpREDISMASTERPORT6379TCPPORT=10000REDISMASTERPORT6379TCPADDR=kubernetes-minion-1SERVICE_HOST=kubernetes-\nTim - are you the right guy to handle this? 
I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". 
Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. 
so with that backlog (3) means revert vagrant cloudprovider to use IPAddress as hostname, which means when we pass in what we believe is hostname in this environment, it will just work because it is in-fact the IPAddress. (2) means do not pass in hostname but pass in IPAddress by changing more core k8s code. all that being said, there is an option (4) that I am intrigued by, - after following your PR through its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. this introduces an option where the kubelet injects the /etc/hosts file of minion into the running container so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know pros/cons more. I think longer term, we need a plan on what nameserver our containers are expecting to reference, and if they are inherently different in some way. I have run into a number of times the issue where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-) For now to unblock I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but not a huge deal, at least it will function.\nThanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run using hostnames in SERVICEHOST and ensuriing either through IP injection into container /etc/hosts or by providing DNS are the best options. 
I'm not sure if there might still be cases where resolution of SERVICEHOST would still be best done by presenting the IP address or not.", "positive_passages": [{"docid": "doc-en-kubernetes-7c71f9e0113efc7076f4fba27ef91e71d7c9106fb82617e594c8a3eefe47839b", "text": "roles: - kubernetes-pool cbr-cidr: $MINION_IP_RANGE minion_ip: $MINION_IP EOF # we will run provision to update code each time we test, so we do not want to do salt install each time", "commid": "kubernetes_pr_1279"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ba201f6cc37fd09086cd86a1a7a1d3d232b0107e0e285e64b6f2a09676e0b920", "query": "In the guestbook example, the redis slaves are instructed to connect to the redis-master using the proxy provided by the minion on which they reside. The minion is designated by its hostname \"kubernetes-minion-N\". The container cannot resolve this name to an IP address and so slave connections fail. 1) create vagrant cluster 2) create redis-master pod 3) create redis-master service 4) create redis-slave replicationController Verify redis-slave pod locations: list pods ID Image(s) Host Labels Status redis-master-2 dockerfile/redis kubernetes-minion-1/10.245.2.2 name=redis-master Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-1/10.245.2.2 name=redisslave,replicationController=redisSlaveController Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-3/10.245.2.4 name=redisslave,replicationController=redisSlaveController Running Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) Find the slave container docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES brendanburns/redis-slave:latest /bin/sh -c About an hour ago Up About an hour k8s--slave.---3839-11e4-a922- docker logs | tail [7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1: [7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory [7] 09 Sep 17:27:09.435 Connecting to 
MASTER kubernetes-minion-1: sudo nsenter -m -u -n -i -p -t $(docker inspect --format '{{ .State.Pid }}' ) /bin/bash [ root ]$ ping kubernetes-minion-1 ping: unknown host kubernetes-minion-1 [ root ]$ cat /etc/hosts 244.1.4 -3839-11e4-a922- 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters [ root ]$ cat nameserver 10.0.2.3 cat /proc/1/environ HOME=/rootPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=-3839-11e4-a922-0800279696e1REDISMASTERSERVICEPORT=10000REDISMASTERPORT=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCP=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCPPROTO=tcpREDISMASTERPORT6379TCPPORT=10000REDISMASTERPORT6379TCPADDR=kubernetes-minion-1SERVICE_HOST=kubernetes-\nTim - are you the right guy to handle this? I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. 
My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. 
I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. so with that backlog (3) means revert vagrant cloudprovider to use IPAddress as hostname, which means when we pass in what we believe is hostname in this environment, it will just work because it is in-fact the IPAddress. (2) means do not pass in hostname but pass in IPAddress by changing more core k8s code. all that being said, there is an option (4) that I am intrigued by, - after following your PR through its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. this introduces an option where the kubelet injects the /etc/hosts file of minion into the running container so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know pros/cons more. I think longer term, we need a plan on what nameserver our containers are expecting to reference, and if they are inherently different in some way. 
I have run into a number of times the issue where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-) For now to unblock I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but not a huge deal, at least it will function.\nThanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run using hostnames in SERVICEHOST and ensuriing either through IP injection into container /etc/hosts or by providing DNS are the best options. I'm not sure if there might still be cases where resolution of SERVICEHOST would still be best done by presenting the IP address or not.", "positive_passages": [{"docid": "doc-en-kubernetes-de759a85143135ff9fd02b94e0e816e8a265a2d0db10ffd77d00a4e4b54071ea", "text": "} filteredMinions := v.saltMinionsByRole(minions, \"kubernetes-pool\") for _, minion := range filteredMinions { fmt.Println(\"Minion: \", minion.Host, \" , \", instance, \" IP: \", minion.IP) if minion.Host == instance { // Due to vagrant not running with a dedicated DNS setup, we return the IP address of a minion as its hostname at this time if minion.IP == instance { return net.ParseIP(minion.IP), nil } }", "commid": "kubernetes_pr_1279"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ba201f6cc37fd09086cd86a1a7a1d3d232b0107e0e285e64b6f2a09676e0b920", "query": "In the guestbook example, the redis slaves are instructed to connect to the redis-master using the proxy provided by the minion on which they reside. The minion is designated by its hostname \"kubernetes-minion-N\". The container cannot resolve this name to an IP address and so slave connections fail. 
1) create vagrant cluster 2) create redis-master pod 3) create redis-master service 4) create redis-slave replicationController Verify redis-slave pod locations: list pods ID Image(s) Host Labels Status redis-master-2 dockerfile/redis kubernetes-minion-1/10.245.2.2 name=redis-master Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-1/10.245.2.2 name=redisslave,replicationController=redisSlaveController Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-3/10.245.2.4 name=redisslave,replicationController=redisSlaveController Running Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) Find the slave container docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES brendanburns/redis-slave:latest /bin/sh -c About an hour ago Up About an hour k8s--slave.---3839-11e4-a922- docker logs | tail [7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1: [7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory [7] 09 Sep 17:27:09.435 Connecting to MASTER kubernetes-minion-1: sudo nsenter -m -u -n -i -p -t $(docker inspect --format '{{ .State.Pid }}' ) /bin/bash [ root ]$ ping kubernetes-minion-1 ping: unknown host kubernetes-minion-1 [ root ]$ cat /etc/hosts 244.1.4 -3839-11e4-a922- 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters [ root ]$ cat nameserver 10.0.2.3 cat /proc/1/environ HOME=/rootPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=-3839-11e4-a922-0800279696e1REDISMASTERSERVICEPORT=10000REDISMASTERPORT=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCP=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCPPROTO=tcpREDISMASTERPORT6379TCPPORT=10000REDISMASTERPORT6379TCPADDR=kubernetes-minion-1SERVICE_HOST=kubernetes-\nTim - are you the right guy to handle this? 
I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". 
Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. 
So with that backlog: (3) means revert the Vagrant cloudprovider to use the IP address as the hostname, which means when we pass in what we believe is a hostname in this environment, it will just work because it is in fact the IP address. (2) means do not pass in a hostname but pass in the IP address, by changing more core k8s code.

All that being said, there is an option (4) that I am intrigued by. After following your PR through to its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. This introduces an option where the kubelet injects the /etc/hosts file of the minion into the running container, so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know the pros/cons better.

I think longer term we need a plan on what nameserver our containers are expected to reference, and whether they are inherently different in some way. I have run into the issue a number of times where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-)

For now, to unblock, I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but it's not a huge deal; at least it will function.

Thanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run, using hostnames in SERVICE_HOST and ensuring resolution either through IP injection into the container's /etc/hosts or by providing DNS are the best options.
I'm not sure whether there might still be cases where resolution of SERVICE_HOST would be best done by presenting the IP address directly.
Steps to reproduce:

1) Create a vagrant cluster
2) Create the redis-master pod
3) Create the redis-master service
4) Create the redis-slave replicationController

Verify redis-slave pod locations:

list pods
ID                Image(s)                  Host                            Labels                                                      Status
redis-master-2    dockerfile/redis          kubernetes-minion-1/10.245.2.2  name=redis-master                                           Running
-3839-11e4-a922-  brendanburns/redis-slave  kubernetes-minion-1/10.245.2.2  name=redisslave,replicationController=redisSlaveController  Running
-3839-11e4-a922-  brendanburns/redis-slave  kubernetes-minion-3/10.245.2.4  name=redisslave,replicationController=redisSlaveController  Running

Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) and find the slave container:

docker ps
CONTAINER ID  IMAGE                            COMMAND     CREATED            STATUS            PORTS  NAMES
              brendanburns/redis-slave:latest  /bin/sh -c  About an hour ago  Up About an hour         k8s--slave.---3839-11e4-a922-

docker logs | tail
[7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1:
[7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory
[7] 09 Sep 17:27:09.435 Connecting to MASTER kubernetes-minion-1:

sudo nsenter -m -u -n -i -p -t $(docker inspect --format '{{ .State.Pid }}' ) /bin/bash
[ root ]$ ping kubernetes-minion-1
ping: unknown host kubernetes-minion-1
[ root ]$ cat /etc/hosts
244.1.4    -3839-11e4-a922-
127.0.0.1  localhost
::1        localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
ff00::0    ip6-mcastprefix
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
[ root ]$ cat 
nameserver 10.0.2.3

cat /proc/1/environ
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=-3839-11e4-a922-0800279696e1
REDIS_MASTER_SERVICE_PORT=10000
REDIS_MASTER_PORT=tcp://kubernetes-minion-1:10000
REDIS_MASTER_PORT_6379_TCP=tcp://kubernetes-minion-1:10000
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=10000
REDIS_MASTER_PORT_6379_TCP_ADDR=kubernetes-minion-1
SERVICE_HOST=kubernetes-

Tim - are you the right guy to handle this?
I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". 
Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. 
so with that backlog (3) means revert vagrant cloudprovider to use IPAddress as hostname, which means when we pass in what we believe is hostname in this environment, it will just work because it is in-fact the IPAddress. (2) means do not pass in hostname but pass in IPAddress by changing more core k8s code. all that being said, there is an option (4) that I am intrigued by, - after following your PR through its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. this introduces an option where the kubelet injects the /etc/hosts file of minion into the running container so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know pros/cons more. I think longer term, we need a plan on what nameserver our containers are expecting to reference, and if they are inherently different in some way. I have run into a number of times the issue where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-) For now to unblock I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but not a huge deal, at least it will function.\nThanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run using hostnames in SERVICEHOST and ensuriing either through IP injection into container /etc/hosts or by providing DNS are the best options. 
I'm not sure if there might still be cases where resolution of SERVICEHOST would still be best done by presenting the IP address or not.", "positive_passages": [{"docid": "doc-en-kubernetes-791f8b2b45dd83fe96b7916ad94854be317260c60d6932552ab3f0694976deb6", "text": "t.Fatalf(\"Incorrect number of instances returned\") } if instances[0] != \"kubernetes-minion-1\" { // no DNS in vagrant cluster, so we return IP as hostname expectedInstanceHost := \"10.245.2.2\" expectedInstanceIP := \"10.245.2.2\" if instances[0] != expectedInstanceHost { t.Fatalf(\"Invalid instance returned\") }", "commid": "kubernetes_pr_1279"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ba201f6cc37fd09086cd86a1a7a1d3d232b0107e0e285e64b6f2a09676e0b920", "query": "In the guestbook example, the redis slaves are instructed to connect to the redis-master using the proxy provided by the minion on which they reside. The minion is designated by its hostname \"kubernetes-minion-N\". The container cannot resolve this name to an IP address and so slave connections fail. 
1) create vagrant cluster 2) create redis-master pod 3) create redis-master service 4) create redis-slave replicationController Verify redis-slave pod locations: list pods ID Image(s) Host Labels Status redis-master-2 dockerfile/redis kubernetes-minion-1/10.245.2.2 name=redis-master Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-1/10.245.2.2 name=redisslave,replicationController=redisSlaveController Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-3/10.245.2.4 name=redisslave,replicationController=redisSlaveController Running Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) Find the slave container docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES brendanburns/redis-slave:latest /bin/sh -c About an hour ago Up About an hour k8s--slave.---3839-11e4-a922- docker logs | tail [7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1: [7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory [7] 09 Sep 17:27:09.435 Connecting to MASTER kubernetes-minion-1: sudo nsenter -m -u -n -i -p -t $(docker inspect --format '{{ .State.Pid }}' ) /bin/bash [ root ]$ ping kubernetes-minion-1 ping: unknown host kubernetes-minion-1 [ root ]$ cat /etc/hosts 244.1.4 -3839-11e4-a922- 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters [ root ]$ cat nameserver 10.0.2.3 cat /proc/1/environ HOME=/rootPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=-3839-11e4-a922-0800279696e1REDISMASTERSERVICEPORT=10000REDISMASTERPORT=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCP=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCPPROTO=tcpREDISMASTERPORT6379TCPPORT=10000REDISMASTERPORT6379TCPADDR=kubernetes-minion-1SERVICE_HOST=kubernetes-\nTim - are you the right guy to handle this? 
I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". 
Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. 
so with that backlog (3) means revert vagrant cloudprovider to use IPAddress as hostname, which means when we pass in what we believe is hostname in this environment, it will just work because it is in-fact the IPAddress. (2) means do not pass in hostname but pass in IPAddress by changing more core k8s code. all that being said, there is an option (4) that I am intrigued by, - after following your PR through its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. this introduces an option where the kubelet injects the /etc/hosts file of minion into the running container so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know pros/cons more. I think longer term, we need a plan on what nameserver our containers are expecting to reference, and if they are inherently different in some way. I have run into a number of times the issue where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-) For now to unblock I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but not a huge deal, at least it will function.\nThanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run using hostnames in SERVICEHOST and ensuriing either through IP injection into container /etc/hosts or by providing DNS are the best options. 
I'm not sure if there might still be cases where resolution of SERVICEHOST would still be best done by presenting the IP address or not.", "positive_passages": [{"docid": "doc-en-kubernetes-81f29a5f4247c1add94ae8d1f8917b417a051b7b63497d541945536c189c5c49", "text": "t.Fatalf(\"Unexpected error, should have returned a valid IP address: %s\", err) } if ip.String() != \"10.245.2.2\" { if ip.String() != expectedInstanceIP { t.Fatalf(\"Invalid IP address returned\") } }", "commid": "kubernetes_pr_1279"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ba201f6cc37fd09086cd86a1a7a1d3d232b0107e0e285e64b6f2a09676e0b920", "query": "In the guestbook example, the redis slaves are instructed to connect to the redis-master using the proxy provided by the minion on which they reside. The minion is designated by its hostname \"kubernetes-minion-N\". The container cannot resolve this name to an IP address and so slave connections fail. 1) create vagrant cluster 2) create redis-master pod 3) create redis-master service 4) create redis-slave replicationController Verify redis-slave pod locations: list pods ID Image(s) Host Labels Status redis-master-2 dockerfile/redis kubernetes-minion-1/10.245.2.2 name=redis-master Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-1/10.245.2.2 name=redisslave,replicationController=redisSlaveController Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-3/10.245.2.4 name=redisslave,replicationController=redisSlaveController Running Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) Find the slave container docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES brendanburns/redis-slave:latest /bin/sh -c About an hour ago Up About an hour k8s--slave.---3839-11e4-a922- docker logs | tail [7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1: [7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory [7] 09 Sep 17:27:09.435 
Connecting to MASTER kubernetes-minion-1: sudo nsenter -m -u -n -i -p -t $(docker inspect --format '{{ .State.Pid }}' ) /bin/bash [ root ]$ ping kubernetes-minion-1 ping: unknown host kubernetes-minion-1 [ root ]$ cat /etc/hosts 244.1.4 -3839-11e4-a922- 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters [ root ]$ cat nameserver 10.0.2.3 cat /proc/1/environ HOME=/rootPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=-3839-11e4-a922-0800279696e1REDISMASTERSERVICEPORT=10000REDISMASTERPORT=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCP=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCPPROTO=tcpREDISMASTERPORT6379TCPPORT=10000REDISMASTERPORT6379TCPADDR=kubernetes-minion-1SERVICE_HOST=kubernetes-\nTim - are you the right guy to handle this? I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. 
My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. 
I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. so with that backlog (3) means revert vagrant cloudprovider to use IPAddress as hostname, which means when we pass in what we believe is hostname in this environment, it will just work because it is in-fact the IPAddress. (2) means do not pass in hostname but pass in IPAddress by changing more core k8s code. all that being said, there is an option (4) that I am intrigued by, - after following your PR through its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. this introduces an option where the kubelet injects the /etc/hosts file of minion into the running container so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know pros/cons more. I think longer term, we need a plan on what nameserver our containers are expecting to reference, and if they are inherently different in some way. 
I have run into a number of times the issue where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-) For now to unblock I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but not a huge deal, at least it will function.\nThanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run using hostnames in SERVICEHOST and ensuriing either through IP injection into container /etc/hosts or by providing DNS are the best options. I'm not sure if there might still be cases where resolution of SERVICEHOST would still be best done by presenting the IP address or not.", "positive_passages": [{"docid": "doc-en-kubernetes-37f463a451931f1a183550432329e210e208b39cdfd1c14a4a6f7be08b104ec5", "text": "import ( \"fmt\" \"strings\" \"sync\" \"time\"", "commid": "kubernetes_pr_1279"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ba201f6cc37fd09086cd86a1a7a1d3d232b0107e0e285e64b6f2a09676e0b920", "query": "In the guestbook example, the redis slaves are instructed to connect to the redis-master using the proxy provided by the minion on which they reside. The minion is designated by its hostname \"kubernetes-minion-N\". The container cannot resolve this name to an IP address and so slave connections fail. 
1) create vagrant cluster 2) create redis-master pod 3) create redis-master service 4) create redis-slave replicationController Verify redis-slave pod locations: list pods ID Image(s) Host Labels Status redis-master-2 dockerfile/redis kubernetes-minion-1/10.245.2.2 name=redis-master Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-1/10.245.2.2 name=redisslave,replicationController=redisSlaveController Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-3/10.245.2.4 name=redisslave,replicationController=redisSlaveController Running Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) Find the slave container docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES brendanburns/redis-slave:latest /bin/sh -c About an hour ago Up About an hour k8s--slave.---3839-11e4-a922- docker logs | tail [7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1: [7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory [7] 09 Sep 17:27:09.435 Connecting to MASTER kubernetes-minion-1: sudo nsenter -m -u -n -i -p -t $(docker inspect --format '{{ .State.Pid }}' ) /bin/bash [ root ]$ ping kubernetes-minion-1 ping: unknown host kubernetes-minion-1 [ root ]$ cat /etc/hosts 244.1.4 -3839-11e4-a922- 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters [ root ]$ cat nameserver 10.0.2.3 cat /proc/1/environ HOME=/rootPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=-3839-11e4-a922-0800279696e1REDISMASTERSERVICEPORT=10000REDISMASTERPORT=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCP=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCPPROTO=tcpREDISMASTERPORT6379TCPPORT=10000REDISMASTERPORT6379TCPADDR=kubernetes-minion-1SERVICE_HOST=kubernetes-\nTim - are you the right guy to handle this? 
I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". 
Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. 
so with that backlog (3) means revert vagrant cloudprovider to use IPAddress as hostname, which means when we pass in what we believe is hostname in this environment, it will just work because it is in fact the IPAddress. (2) means do not pass in hostname but pass in IPAddress by changing more core k8s code. all that being said, there is an option (4) that I am intrigued by, - after following your PR through its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. this introduces an option where the kubelet injects the /etc/hosts file of minion into the running container so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know pros/cons more. I think longer term, we need a plan on what nameserver our containers are expecting to reference, and if they are inherently different in some way. I have run into a number of times the issue where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-) For now to unblock I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but not a huge deal, at least it will function.\nThanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run using hostnames in SERVICEHOST and ensuring either through IP injection into container /etc/hosts or by providing DNS are the best options. 
I'm not sure if there might still be cases where resolution of SERVICEHOST would still be best done by presenting the IP address or not.", "positive_passages": [{"docid": "doc-en-kubernetes-6dad3f50a0b02f8f04123091be0f650f366c3015ea36bcd28a8f019845b5e73c", "text": "if instances == nil || !ok { return \"\" } ix := strings.Index(host, \".\") if ix != -1 { host = host[:ix] } addr, err := instances.IPAddress(host) if err != nil { glog.Errorf(\"Error getting instance IP: %#v\", err)", "commid": "kubernetes_pr_1279"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fb3b457ee434f4c000eee30c833933190e7e6916431cae811f57cb551a433ff1", "query": "Kubernetes use kubernetes/pause image to create network container. The problem is that whatever signals the pause container received, it will exit with code 2. Instead, with a normal exit, it should exit with code 0; while it exit with port binding issue, it should exit with code -1.\nThe port-binding issues will occur before the pause image gets invoked, but the pause image itself should exit 0.\nI filed to docker to capture port-binding issue there. Port-binding is not my concern since we validate the port to avoid the conflict anyway. The concern I am having is PreStart we plan to introduce between docker container creation and start ().\nThere seems to be another issue - quick testing shows that if there is a conflict with a port allocated OUTSIDE of docker (e.g. the kube-proxy), Docker will accept the container and run it, but the host port doesn't work at all, and there's no way to tell except trawling logs. , Dawn Chen wrote:\nIs there anything we could do to help go through, to eliminate the need for the network container?\nGood catch. But following your steps above, my quick testing shows that docker container is running, but doesn't work at all. Also daemon logs doesn't have any error information. 
Will file another issue to docker: 1) # netstat -lnt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:22 0.0.0.0: LISTEN tcp6 0 0 :::4194 ::: LISTEN tcp6 0 0 ::: ::: LISTEN tcp6 0 0 :::8080 ::: LISTEN tcp6 0 0 :::22 :::* LISTEN 2) # docker run -p :80 -d dockerimages/apache 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11 3) # docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES dockerimages/apache:latest \"apache2 -DFOREGROUN 6 seconds ago Up 5 seconds 0.0.0.0:-80/tcp jollynewton 4) # docker logs AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.11. Set the 'ServerName' directive globally to suppress this message AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.11. Set the 'ServerName' directive globally to suppress this message [Tue Sep 30 00:03:16. 2014] [core:warn] [pid 1:tid ] AH00098: pid file overwritten -- Unclean shutdown of previous Apache run? [Tue Sep 30 00:03:16. 2014] [mpmevent:notice] [pid 1:tid ] AH00489: Apache/2.4.7 (Ubuntu) configured -- resuming normal operations [Tue Sep 30 00:03:16. 
2014] [core:notice] [pid 1:tid ] AH00094: Command line: 'apache2 -D FOREGROUND' 5) # cat | grep [] +job log(create, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) [] -job log(create, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) = OK (0) [info] POST /v1.14/containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/start [] +job start(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] +job allocateinterface(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job allocateinterface(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] +job allocateport(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job allocateport(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] +job log(start, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) [] -job log(start, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) = OK (0) [info] GET /containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/json [] +job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job start(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] -job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [info] GET /containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/json [] +job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [info] GET /v1.14/containers//json [] +job containerinspect() [] -job containerinspect() = OK (0) [info] GET /containers//logs?stderr=1&stdout=1&tail=all [] +job containerinspect() [] -job containerinspect() = OK (0) 
[] +job logs() [] -job logs() = OK (0)\nI agreed we should help docker/docker go through, so that network container should be gone forever which makes our system, especially kubelet much simple. But that doesn't help with docker/docker and a new issue found by following.\nhave you tried this on docker master because it should have been fixed by , just want to prevent duplicate issues\nI just rebuild docker from HEAD, and tested it with above steps I posted. The problem is fixed: 37a039245cd9faba6ae900cc463313eae77c9c9dce589616ec992a7e8eecb458 2014/09/30 16:29:20 Error response from daemon: Cannot start container 37a039245cd9faba6ae900cc463313eae77c9c9dce589616ec992a7e8eecb458: Bind for 0.0.0.0:8080 failed: port is already allocated But another issue I filed (docker/docker) is still exist. Thanks!\ngreat, just wanted to make sure! and yes I can see if want to change the existing functionality for docker/docker", "positive_passages": [{"docid": "doc-en-kubernetes-cfb7307c013043c7a3dfc83f212625ec6957dc00a7151c86cc9526ffa0a6e779", "text": "package main import \"syscall\" import ( \"os\" \"os/signal\" \"syscall\" ) func main() { // Halts execution, waiting on signal. syscall.Pause() c := make(chan os.Signal, 1) signal.Notify(c, os.Interrupt, os.Kill, syscall.SIGTERM) // Block until a signal is received. <-c }", "commid": "kubernetes_pr_1563"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fb3b457ee434f4c000eee30c833933190e7e6916431cae811f57cb551a433ff1", "query": "Kubernetes use kubernetes/pause image to create network container. The problem is that whatever signals the pause container received, it will exit with code 2. Instead, with a normal exit, it should exit with code 0; while it exit with port binding issue, it should exit with code -1.\nThe port-binding issues will occur before the pause image gets invoked, but the pause image itself should exit 0.\nI filed to docker to capture port-binding issue there. 
Port-binding is not my concern since we validate the port to avoid the conflict anyway. The concern I am having is PreStart we plan to introduce between docker container creation and start ().\nThere seems to be another issue - quick testing shows that if there is a conflict with a port allocated OUTSIDE of docker (e.g. the kube-proxy), Docker will accept the container and run it, but the host port doesn't work at all, and there's no way to tell except trawling logs. , Dawn Chen wrote:\nIs there anything we could do to help go through, to eliminate the need for the network container?\nGood catch. But following your steps above, my quick testing shows that docker container is running, but doesn't work at all. Also daemon logs doesn't have any error information. Will file another issue to docker: 1) # netstat -lnt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:22 0.0.0.0: LISTEN tcp6 0 0 :::4194 ::: LISTEN tcp6 0 0 ::: ::: LISTEN tcp6 0 0 :::8080 ::: LISTEN tcp6 0 0 :::22 :::* LISTEN 2) # docker run -p :80 -d dockerimages/apache 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11 3) # docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES dockerimages/apache:latest \"apache2 -DFOREGROUN 6 seconds ago Up 5 seconds 0.0.0.0:-80/tcp jollynewton 4) # docker logs AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.11. Set the 'ServerName' directive globally to suppress this message AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.11. Set the 'ServerName' directive globally to suppress this message [Tue Sep 30 00:03:16. 2014] [core:warn] [pid 1:tid ] AH00098: pid file overwritten -- Unclean shutdown of previous Apache run? [Tue Sep 30 00:03:16. 2014] [mpmevent:notice] [pid 1:tid ] AH00489: Apache/2.4.7 (Ubuntu) configured -- resuming normal operations [Tue Sep 30 00:03:16. 
2014] [core:notice] [pid 1:tid ] AH00094: Command line: 'apache2 -D FOREGROUND' 5) # cat | grep [] +job log(create, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) [] -job log(create, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) = OK (0) [info] POST /v1.14/containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/start [] +job start(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] +job allocateinterface(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job allocateinterface(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] +job allocateport(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job allocateport(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] +job log(start, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) [] -job log(start, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) = OK (0) [info] GET /containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/json [] +job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job start(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] -job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [info] GET /containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/json [] +job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [info] GET /v1.14/containers//json [] +job containerinspect() [] -job containerinspect() = OK (0) [info] GET /containers//logs?stderr=1&stdout=1&tail=all [] +job containerinspect() [] -job containerinspect() = OK (0) 
[] +job logs() [] -job logs() = OK (0)\nI agreed we should help docker/docker go through, so that network container should be gone forever which makes our system, especially kubelet much simple. But that doesn't help with docker/docker and a new issue found by following.\nhave you tried this on docker master because it should have been fixed by , just want to prevent duplicate issues\nI just rebuild docker from HEAD, and tested it with above steps I posted. The problem is fixed: 37a039245cd9faba6ae900cc463313eae77c9c9dce589616ec992a7e8eecb458 2014/09/30 16:29:20 Error response from daemon: Cannot start container 37a039245cd9faba6ae900cc463313eae77c9c9dce589616ec992a7e8eecb458: Bind for 0.0.0.0:8080 failed: port is already allocated But another issue I filed (docker/docker) is still exist. Thanks!\ngreat, just wanted to make sure! and yes I can see if want to change the existing functionality for docker/docker", "positive_passages": [{"docid": "doc-en-kubernetes-d0866d03c769fdfd121dca3ce3da4ce50e94aaafcc4c587d2d096f26c512e658", "text": "set -e set -x # Build the binary. go build --ldflags '-extldflags \"-static\" -s' pause.go # Run goupx to shrink binary size. go get github.com/pwaller/goupx goupx pause ", "commid": "kubernetes_pr_1563"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c13ad6ae8245ad270ca4d0b7b122e9cd5dbf6367a9eed33f85c1a7904dfa7c4f", "query": "It's not well tested nor widely used; so it should either be removed or have tests and uses .", "positive_passages": [{"docid": "doc-en-kubernetes-5d4f925e4f8f487394eccdb707b78736978977739e90987215e9c657e8f96ae1", "text": ") var ( configFile = flag.String(\"configfile\", \"/tmp/proxy_config\", \"Configuration file for the proxy\") etcdServerList util.StringList etcdConfigFile = flag.String(\"etcd_config\", \"\", \"The config file for the etcd client. 
Mutually exclusive with -etcd_servers\") bindAddress = util.IP(net.ParseIP(\"0.0.0.0\"))", "commid": "kubernetes_pr_1660"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c13ad6ae8245ad270ca4d0b7b122e9cd5dbf6367a9eed33f85c1a7904dfa7c4f", "query": "It's not well tested nor widely used; so it should either be removed or have tests and uses .", "positive_passages": [{"docid": "doc-en-kubernetes-7fadfd0900aea33319228dc4aa9a01a291a14bf4b52fe3a1a435dca5f27bf3be", "text": "} } // And create a configuration source that reads from a local file config.NewConfigSourceFile(*configFile, serviceConfig.Channel(\"file\"), endpointsConfig.Channel(\"file\")) glog.Infof(\"Using configuration file %s\", *configFile) loadBalancer := proxy.NewLoadBalancerRR() proxier := proxy.NewProxier(loadBalancer, net.IP(bindAddress)) // Wire proxier to handle changes to services", "commid": "kubernetes_pr_1660"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c13ad6ae8245ad270ca4d0b7b122e9cd5dbf6367a9eed33f85c1a7904dfa7c4f", "query": "It's not well tested nor widely used; so it should either be removed or have tests and uses .", "positive_passages": [{"docid": "doc-en-kubernetes-dbb8f8abf8e792e7f35be402092fd88ea529699560ca69eaa05dda215ea1344c", "text": "**-bindaddress**=\"0.0.0.0\" The address for the proxy server to serve on (set to 0.0.0.0 or \"\" for all interfaces) **-configfile**=\"/tmp/proxy_config\" Configuration file for the proxy **-etcd_servers**=[] List of etcd servers to watch (http://ip:port), comma separated (optional)", "commid": "kubernetes_pr_1660"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c13ad6ae8245ad270ca4d0b7b122e9cd5dbf6367a9eed33f85c1a7904dfa7c4f", "query": "It's not well tested nor widely used; so it should either be removed or have tests and uses .", "positive_passages": [{"docid": "doc-en-kubernetes-77da8fc49d75a5e74e299a59487f9a47e1d662a4cffd7ffb963ed801e246f60d", "text": "The address for the proxy server to serve on (set to 0.0.0.0 or 
\"\" for all interfaces) .PP fB-configfilefP=\"/tmp/proxy_config\" Configuration file for the proxy .PP fB-etcd_serversfP=[] List of etcd servers to watch ( [la]http://ip:port[ra]), comma separated (optional)", "commid": "kubernetes_pr_1660"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c13ad6ae8245ad270ca4d0b7b122e9cd5dbf6367a9eed33f85c1a7904dfa7c4f", "query": "It's not well tested nor widely used; so it should either be removed or have tests and uses .", "positive_passages": [{"docid": "doc-en-kubernetes-29948b79f4a07c40ab6dad57f381072996a465ad212efbe37a0821d75020af7e", "text": " /* Copyright 2014 Google Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Reads the configuration from the file. Example file for two services [nodejs & mysql] //{\"Services\": [ // { // \"Name\":\"nodejs\", // \"Port\":10000, // \"Endpoints\":[\"10.240.180.168:8000\", \"10.240.254.199:8000\", \"10.240.62.150:8000\"] // }, // { // \"Name\":\"mysql\", // \"Port\":10001, // \"Endpoints\":[\"10.240.180.168:9000\", \"10.240.254.199:9000\", \"10.240.62.150:9000\"] // } //] //} package config import ( \"bytes\" \"encoding/json\" \"fmt\" \"io/ioutil\" \"reflect\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/golang/glog\" ) // serviceConfig is a deserialized form of the config file format which ConfigSourceFile accepts. // TODO: this is apparently untested; is it used? 
type serviceConfig struct { Services []struct { Name string `json: \"name\"` Port int `json: \"port\"` Endpoints []string `json: \"endpoints\"` } `json:\"service\"` } // ConfigSourceFile periodically reads service configurations in JSON from a file, and sends the services and endpoints defined in the file to the specified channels. type ConfigSourceFile struct { serviceChannel chan ServiceUpdate endpointsChannel chan EndpointsUpdate filename string } // NewConfigSourceFile creates a new ConfigSourceFile and let it immediately runs the created ConfigSourceFile in a goroutine. func NewConfigSourceFile(filename string, serviceChannel chan ServiceUpdate, endpointsChannel chan EndpointsUpdate) ConfigSourceFile { config := ConfigSourceFile{ filename: filename, serviceChannel: serviceChannel, endpointsChannel: endpointsChannel, } go config.Run() return config } // Run begins watching the config file. func (s ConfigSourceFile) Run() { glog.V(1).Infof(\"Watching file %s\", s.filename) var lastData []byte var lastServices []api.Service var lastEndpoints []api.Endpoints sleep := 5 * time.Second // Used to avoid spamming the error log file, makes error logging edge triggered. hadSuccess := true for { data, err := ioutil.ReadFile(s.filename) if err != nil { msg := fmt.Sprintf(\"Couldn't read file: %s : %v\", s.filename, err) if hadSuccess { glog.Error(msg) } else { glog.V(1).Info(msg) } hadSuccess = false time.Sleep(sleep) continue } hadSuccess = true if bytes.Equal(lastData, data) { time.Sleep(sleep) continue } lastData = data config := &serviceConfig{} if err = json.Unmarshal(data, config); err != nil { glog.Errorf(\"Couldn't unmarshal configuration from file : %s %v\", data, err) continue } // Ok, we have a valid configuration, send to channel for // rejiggering. 
newServices := make([]api.Service, len(config.Services)) newEndpoints := make([]api.Endpoints, len(config.Services)) for i, service := range config.Services { newServices[i] = api.Service{TypeMeta: api.TypeMeta{ID: service.Name}, Port: service.Port} newEndpoints[i] = api.Endpoints{TypeMeta: api.TypeMeta{ID: service.Name}, Endpoints: service.Endpoints} } if !reflect.DeepEqual(lastServices, newServices) { serviceUpdate := ServiceUpdate{Op: SET, Services: newServices} s.serviceChannel <- serviceUpdate lastServices = newServices } if !reflect.DeepEqual(lastEndpoints, newEndpoints) { endpointsUpdate := EndpointsUpdate{Op: SET, Endpoints: newEndpoints} s.endpointsChannel <- endpointsUpdate lastEndpoints = newEndpoints } time.Sleep(sleep) } } ", "commid": "kubernetes_pr_1660"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "query": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find a just user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "positive_passages": [{"docid": "doc-en-kubernetes-082e7d35941be3e789d491496de3cc2d3041fad7b5e8ac402cd9dd1071c07661", "text": "## Protocols for Collaborative Development Please read [this doc](docs/collab.md) for information on how we're running development for the project. Please read [this doc](docs/devel/collab.md) for information on how we're running development for the project. ## Adding dependencies", "commid": "kubernetes_pr_1807"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "query": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find a just user documentation. 
Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "positive_passages": [{"docid": "doc-en-kubernetes-718671d140b86ebf5b93a6194596b0324f233352f6657a5cad794f27a1ec1d0d", "text": "* [Example of dynamic updates](examples/update-demo/README.md) * [Cluster monitoring with heapster and cAdvisor](https://github.com/GoogleCloudPlatform/heapster) * [Community projects](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Kubernetes-Community) * [Development guide](docs/development.md) * [Development guide](docs/devel/development.md) Or fork and start hacking!", "commid": "kubernetes_pr_1807"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "query": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find a just user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "positive_passages": [{"docid": "doc-en-kubernetes-56dbcbda8b317d88bda99208dd165d87b0e0d280727a2a0e75bb7a199e4b2ab7", "text": " # On Collaborative Development Kubernetes is open source, but many of the people working on it do so as their day job. In order to avoid forcing people to be \"at work\" effectively 24/7, we want to establish some semi-formal protocols around development. Hopefully these rules make things go more smoothly. If you find that this is not the case, please complain loudly. ## Patches welcome First and foremost: as a potential contributor, your changes and ideas are welcome at any hour of the day or night, weekdays, weekends, and holidays. Please do not ever hesitate to ask a question or send a PR. 
## Timezones and calendars For the time being, most of the people working on this project are in the US and on Pacific time. Any times mentioned henceforth will refer to this timezone. Any references to \"work days\" will refer to the US calendar. ## Code reviews All changes must be code reviewed. For non-maintainers this is obvious, since you can't commit anyway. But even for maintainers, we want all changes to get at least one review, preferably from someone who knows the areas the change touches. For non-trivial changes we may want two reviewers. The primary reviewer will make this decision and nominate a second reviewer, if needed. Except for trivial changes, PRs should sit for at least a 2 hours to allow for wider review. Most PRs will find reviewers organically. If a maintainer intends to be the primary reviewer of a PR they should set themselves as the assignee on GitHub and say so in a reply to the PR. Only the primary reviewer of a change should actually do the merge, except in rare cases (e.g. they are unavailable in a reasonable timeframe). If a PR has gone 2 work days without an owner emerging, please poke the PR thread and ask for a reviewer to be assigned. Except for rare cases, such as trivial changes (e.g. typos, comments) or emergencies (e.g. broken builds), maintainers should not merge their own changes. Expect reviewers to request that you avoid [common go style mistakes](https://code.google.com/p/go-wiki/wiki/CodeReviewComments) in your PRs. ## Assigned reviews Maintainers can assign reviews to other maintainers, when appropriate. The assignee becomes the shepherd for that PR and is responsible for merging the PR once they are satisfied with it or else closing it. The assignee might request reviews from non-maintainers. ## Merge hours Maintainers will do merges between the hours of 7:00 am Monday and 7:00 pm (19:00h) Friday. 
PRs that arrive over the weekend or on holidays will only be merged if there is a very good reason for it and if the code review requirements have been met. There may be discussion and even approvals granted outside of the above hours, but merges will generally be deferred. ## Holds Any maintainer or core contributor who wants to review a PR but does not have time immediately may put a hold on a PR simply by saying so on the PR discussion and offering an ETA measured in single-digit days at most. Any PR that has a hold shall not be merged until the person who requested the hold acks the review, withdraws their hold, or is overruled by a preponderance of maintainers. ", "commid": "kubernetes_pr_1807"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "query": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find a just user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "positive_passages": [{"docid": "doc-en-kubernetes-36cea03b6c0d18b42b316d857d137b1e6afbce0320660f7682c4e5fa621a10b0", "text": " # Development Guide # Releases and Official Builds Official releases are built in Docker containers. Details are [here](build/README.md). You can do simple builds and development with just a local Docker installation. If you want to build Go locally outside of docker, please continue below. ## Go development environment Kubernetes is written in [Go](http://golang.org) programming language. If you haven't set up Go development environment, please follow [this instruction](http://golang.org/doc/code.html) to install go tool and set up GOPATH. Ensure your version of Go is at least 1.3. ## Put kubernetes into GOPATH We highly recommend to put kubernetes' code into your GOPATH. 
For example, the following commands will download kubernetes' code under the current user's GOPATH (Assuming there's only one directory in GOPATH.): ``` $ echo $GOPATH /home/user/goproj $ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform/ $ cd $GOPATH/src/github.com/GoogleCloudPlatform/ $ git clone git@github.com:GoogleCloudPlatform/kubernetes.git ``` The commands above will not work if there are more than one directory in ``$GOPATH``. (Obviously, clone your own fork of Kubernetes if you plan to do development.) ## godep and dependency management Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. It is not strictly required for building Kubernetes but it is required when managing dependencies under the Godeps/ tree, and is required by a number of the build and test scripts. Please make sure that ``godep`` is installed and in your ``$PATH``. ### Installing godep There are many ways to build and host go binaries. Here is an easy way to get utilities like ```godep``` installed: 1. Ensure that [mercurial](http://mercurial.selenic.com/wiki/Download) is installed on your system. (some of godep's dependencies use the mercurial source control system). Use ```apt-get install mercurial``` or ```yum install mercurial``` on Linux, or [brew.sh](http://brew.sh) on OS X, or download directly from mercurial. 2. Create a new GOPATH for your tools and install godep: ``` export GOPATH=$HOME/go-tools mkdir -p $GOPATH go get github.com/tools/godep ``` 3. Add $GOPATH/bin to your path. Typically you'd add this to your ~/.profile: ``` export GOPATH=$HOME/go-tools export PATH=$PATH:$GOPATH/bin ``` ### Using godep Here is a quick summary of `godep`. `godep` helps manage third party dependencies by copying known versions into Godeps/_workspace. You can use `godep` in three ways: 1. Use `godep` to call your `go` commands. For example: `godep go test ./...` 2. Use `godep` to modify your `$GOPATH` so that other tools know where to find the dependencies. 
Specifically: `export GOPATH=$GOPATH:$(godep path)` 3. Use `godep` to copy the saved versions of packages into your `$GOPATH`. This is done with `godep restore`. We recommend using options #1 or #2. ## Hooks Before committing any changes, please link/copy these hooks into your .git directory. This will keep you from accidentally committing non-gofmt'd go code. ``` cd kubernetes ln -s hooks/prepare-commit-msg .git/hooks/prepare-commit-msg ln -s hooks/commit-msg .git/hooks/commit-msg ``` ## Unit tests ``` cd kubernetes hack/test-go.sh ``` Alternatively, you could also run: ``` cd kubernetes godep go test ./... ``` If you only want to run unit tests in one package, you could run ``godep go test`` under the package directory. For example, the following commands will run all unit tests in package kubelet: ``` $ cd kubernetes # step into kubernetes' directory. $ cd pkg/kubelet $ godep go test # some output from unit tests PASS ok github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet 0.317s ``` ## Coverage ``` cd kubernetes godep go tool cover -html=target/c.out ``` ## Integration tests You need an etcd somewhere in your PATH. To install etcd, run: ``` cd kubernetes hack/install-etcd.sh sudo ln -s $(pwd)/third_party/etcd/bin/etcd /usr/bin/etcd ``` ``` cd kubernetes hack/test-integration.sh ``` ## End-to-End tests You can run an end-to-end test which will bring up a master and two minions, perform some tests, and then tear everything down. Make sure you have followed the getting started steps for your chosen cloud platform (which might involve changing the `KUBERNETES_PROVIDER` environment variable to something other than \"gce\". 
``` cd kubernetes hack/e2e-test.sh ``` Pressing control-C should result in an orderly shutdown but if something goes wrong and you still have some VMs running you can force a cleanup with the magical incantation: ``` hack/e2e-test.sh 1 1 1 ``` ## Testing out flaky tests [Instructions here](docs/devel/flaky-tests.md) ## Add/Update dependencies Kubernetes uses [godep](https://github.com/tools/godep) to manage dependencies. To add or update a package, please follow the instructions on [godep's document](https://github.com/tools/godep). To add a new package ``foo/bar``: - Make sure the kubernetes' root directory is in $GOPATH/github.com/GoogleCloudPlatform/kubernetes - Run ``godep restore`` to make sure you have all dependencies pulled. - Download foo/bar into the first directory in GOPATH: ``go get foo/bar``. - Change code in kubernetes to use ``foo/bar``. - Run ``godep save ./...`` under kubernetes' root directory. To update a package ``foo/bar``: - Make sure the kubernetes' root directory is in $GOPATH/github.com/GoogleCloudPlatform/kubernetes - Run ``godep restore`` to make sure you have all dependencies pulled. - Update the package with ``go get -u foo/bar``. - Change code in kubernetes accordingly if necessary. - Run ``godep update foo/bar`` under kubernetes' root directory. ## Keeping your development fork in sync One time after cloning your forked repo: ``` git remote add upstream https://github.com/GoogleCloudPlatform/kubernetes.git ``` Then each time you want to sync to upstream: ``` git fetch upstream git rebase upstream/master ``` ## Regenerating the API documentation ``` cd kubernetes/api sudo docker build -t kubernetes/raml2html . sudo docker run --name=\"docgen\" kubernetes/raml2html sudo docker cp docgen:/data/kubernetes.html . 
``` View the API documentation using htmlpreview (works on your fork, too): ``` http://htmlpreview.github.io/?https://github.com/GoogleCloudPlatform/kubernetes/blob/master/api/kubernetes.html ``` ", "commid": "kubernetes_pr_1807"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "query": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find just the user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "positive_passages": [{"docid": "doc-en-kubernetes-ac58de09aeb1517c34a337428ccefa0e0c3ce7872bd3cbb562323271dca9baef", "text": " # Hunting flaky tests in Kubernetes Sometimes unit tests are flaky. This means that due to (usually) race conditions, they will occasionally fail, even though most of the time they pass. We have a goal of 99.9% flake free tests. This means that there is only one flake in one thousand runs of a test. Running a test 1000 times on your own machine can be tedious and time consuming. Fortunately, there is a better way to achieve this using Kubernetes. _Note: these instructions are mildly hacky for now, as we get run once semantics and logging they will get better_ There is a testing image ```brendanburns/flake``` up on the docker hub. We will use this image to test our fix.
Create a replication controller with the following config: ```yaml id: flakeController desiredState: replicas: 24 replicaSelector: name: flake podTemplate: desiredState: manifest: version: v1beta1 id: \"\" volumes: [] containers: - name: flake image: brendanburns/flake env: - name: TEST_PACKAGE value: pkg/tools - name: REPO_SPEC value: https://github.com/GoogleCloudPlatform/kubernetes restartpolicy: {} labels: name: flake labels: name: flake ``` ```./cluster/kubecfg.sh -c controller.yaml create replicaControllers``` This will spin up 24 instances of the test. They will run to completion, then exit, the kubelet will restart them, eventually you will have sufficient runs for your purposes, and you can stop the replication controller: ```sh ./cluster/kubecfg.sh stop flakeController ./cluster/kubecfg.sh rm flakeController ``` Now examine the machines with ```docker ps -a``` and look for tasks that exited with non-zero exit codes (ignore those that exited -1, since that's what happens when you stop the replica controller) Happy flake hunting! ", "commid": "kubernetes_pr_1807"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "query": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find just the user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "positive_passages": [{"docid": "doc-en-kubernetes-f73ac31aad0d03fcbc40548d97e3710f76602dd53a791335b19ad6cd6a910673", "text": " // Build it with: // $ dot -Tsvg releasing.dot >releasing.svg digraph tagged_release { size = \"5,5\" // Arrows go up. rankdir = BT subgraph left { // Group the left nodes together. ci012abc -> pr101 -> ci345cde -> pr102 style = invis } subgraph right { // Group the right nodes together.
version_commit -> dev_commit style = invis } { // Align the version commit and the info about it. rank = same // Align them with pr101 pr101 version_commit // release_info shows the change in the commit. release_info } { // Align the dev commit and the info about it. rank = same // Align them with 345cde ci345cde dev_commit dev_info } // Join the nodes from subgraph left. pr99 -> ci012abc pr102 -> pr100 // Do the version node. pr99 -> version_commit dev_commit -> pr100 tag -> version_commit pr99 [ label = \"Merge PR #99\" shape = box fillcolor = \"#ccccff\" style = \"filled\" fontname = \"Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif\" ]; ci012abc [ label = \"012abc\" shape = circle fillcolor = \"#ffffcc\" style = \"filled\" fontname = \"Consolas, Liberation Mono, Menlo, Courier, monospace\" ]; pr101 [ label = \"Merge PR #101\" shape = box fillcolor = \"#ccccff\" style = \"filled\" fontname = \"Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif\" ]; ci345cde [ label = \"345cde\" shape = circle fillcolor = \"#ffffcc\" style = \"filled\" fontname = \"Consolas, Liberation Mono, Menlo, Courier, monospace\" ]; pr102 [ label = \"Merge PR #102\" shape = box fillcolor = \"#ccccff\" style = \"filled\" fontname = \"Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif\" ]; version_commit [ label = \"678fed\" shape = circle fillcolor = \"#ccffcc\" style = \"filled\" fontname = \"Consolas, Liberation Mono, Menlo, Courier, monospace\" ]; dev_commit [ label = \"456dcb\" shape = circle fillcolor = \"#ffffcc\" style = \"filled\" fontname = \"Consolas, Liberation Mono, Menlo, Courier, monospace\" ]; pr100 [ label = \"Merge PR #100\" shape = box fillcolor = \"#ccccff\" style = \"filled\" fontname = \"Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif\" ]; release_info [ label = \"pkg/version/base.go:ngitVersion = \"v0.5\";\" shape = none fontname = \"Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif\" ]; 
dev_info [ label = \"pkg/version/base.go:ngitVersion = \"v0.5-dev\";\" shape = none fontname = \"Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif\" ]; tag [ label = \"$ git tag -a v0.5\" fillcolor = \"#ffcccc\" style = \"filled\" fontname = \"Helvetica Neue, Helvetica, Segoe UI, Arial, freesans, sans-serif\" ]; } ", "commid": "kubernetes_pr_1807"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "query": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find just the user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "positive_passages": [{"docid": "doc-en-kubernetes-143abc9f10f3a4e58d6b55563595a05b8eefd676d1b1d179c58a7380dee60f22", "text": " # Releasing Kubernetes This document explains how to create a Kubernetes release (as in version) and how the version information gets embedded into the built binaries. ## Origin of the Sources Kubernetes may be built from either a git tree (using `hack/build-go.sh`) or from a tarball (using either `hack/build-go.sh` or `go install`) or directly by the Go native build system (using `go get`). When building from git, we want to be able to insert specific information about the build tree at build time. In particular, we want to use the output of `git describe` to generate the version of Kubernetes and the status of the build tree (add a `-dirty` suffix if the tree was modified.) When building from a tarball or using the Go build system, we will not have access to the information about the git tree, but we still want to be able to tell whether this build corresponds to an exact release (e.g. v0.3) or is between releases (e.g. at some point in development between v0.3 and v0.4).
## Version Number Format In order to account for these use cases, there are some specific formats that may end up representing the Kubernetes version. Here are a few examples: - **v0.5**: This is official version 0.5 and this version will only be used when building from a clean git tree at the v0.5 git tag, or from a tree extracted from the tarball corresponding to that specific release. - **v0.5-15-g0123abcd4567**: This is the `git describe` output and it indicates that we are 15 commits past the v0.5 release and that the SHA1 of the commit where the binaries were built was `0123abcd4567`. It is only possible to have this level of detail in the version information when building from git, not when building from a tarball. - **v0.5-15-g0123abcd4567-dirty** or **v0.5-dirty**: The extra `-dirty` suffix means that the tree had local modifications or untracked files at the time of the build, so there's no guarantee that the source code matches exactly the state of the tree at the `0123abcd4567` commit or at the `v0.5` git tag (resp.) - **v0.5-dev**: This means we are building from a tarball or using `go get` or, if we have a git tree, we are using `go install` directly, so it is not possible to inject the git version into the build information. Additionally, this is not an official release, so the `-dev` suffix indicates that the version we are building is after `v0.5` but before `v0.6`. (There is actually an exception where a commit with `v0.5-dev` is not present on `v0.6`, see later for details.) ## Injecting Version into Binaries In order to cover the different build cases, we start by providing information that can be used when using only Go build tools or when we do not have the git version information available. To be able to provide a meaningful version in those cases, we set the contents of variables in a Go source file that will be used when no overrides are present.
We are using `pkg/version/base.go` as the source of versioning in absence of information from git. Here is a sample of that file's contents: ``` var ( gitVersion string = \"v0.4-dev\" // version from git, output of $(git describe) gitCommit string = \"\" // sha1 from git, output of $(git rev-parse HEAD) ) ``` This means a build with `go install` or `go get` or a build from a tarball will yield binaries that will identify themselves as `v0.4-dev` and will not be able to provide you with a SHA1. To add the extra versioning information when building from git, the `hack/build-go.sh` script will gather that information (using `git describe` and `git rev-parse`) and then create a `-ldflags` string to pass to `go install` and tell the Go linker to override the contents of those variables at build time. It can, for instance, tell it to override `gitVersion` and set it to `v0.4-13-g4567bcdef6789-dirty` and set `gitCommit` to `4567bcdef6789...` which is the complete SHA1 of the (dirty) tree used at build time. ## Handling Official Versions Handling official versions from git is easy, as long as there is an annotated git tag pointing to a specific version then `git describe` will return that tag exactly which will match the idea of an official version (e.g. `v0.5`). Handling it on tarballs is a bit harder since the exact version string must be present in `pkg/version/base.go` for it to get embedded into the binaries. But simply creating a commit with `v0.5` on its own would mean that the commits coming after it would also get the `v0.5` version when built from tarball or `go get` while in fact they do not match `v0.5` (the one that was tagged) exactly. To handle that case, creating a new release should involve creating two adjacent commits where the first of them will set the version to `v0.5` and the second will set it to `v0.5-dev`. 
In that case, even in the presence of merges, there will be a single commit where the exact `v0.5` version will be used and all others around it will either have `v0.4-dev` or `v0.5-dev`. The diagram below illustrates it. ![Diagram of git commits involved in the release](./releasing.png) After working on `v0.4-dev` and merging PR 99 we decide it is time to release `v0.5`. So we start a new branch, create one commit to update `pkg/version/base.go` to include `gitVersion = \"v0.5\"` and `git commit` it. We test it and make sure everything is working as expected. Before sending a PR for it, we create a second commit on that same branch, updating `pkg/version/base.go` to include `gitVersion = \"v0.5-dev\"`. That will ensure that further builds (from tarball or `go install`) on that tree will always include the `-dev` suffix and will not have a `v0.5` version (since they do not match the official `v0.5` exactly.) We then send PR 100 with both commits in it. Once the PR is accepted, we can use `git tag -a` to create an annotated tag *pointing to the one commit* that has `v0.5` in `pkg/version/base.go` and push it to GitHub. (Unfortunately GitHub tags/releases are not annotated tags, so this needs to be done from a git client and pushed to GitHub using SSH.) ## Parallel Commits While we are working on releasing `v0.5`, other development takes place and other PRs get merged. For instance, in the example above, PRs 101 and 102 get merged to the master branch before the versioning PR gets merged. This is not a problem, it is only slightly inaccurate that checking out the tree at commit `012abc` or commit `345cde` or at the commit of the merges of PR 101 or 102 will yield a version of `v0.4-dev` *but* those commits are not present in `v0.5`. In that sense, there is a small window in which commits will get a `v0.4-dev` or `v0.4-N-gXXX` label and, while they're indeed later than `v0.4`, they are not really before `v0.5`, in that `v0.5` does not contain those commits.
Unfortunately, there is not much we can do about it. On the other hand, other projects seem to live with that and it does not really become a large problem. As an example, Docker commit a327d9b91edf has a `v1.1.1-N-gXXX` label but it is not present in Docker `v1.2.0`: ``` $ git describe a327d9b91edf v1.1.1-822-ga327d9b91edf $ git log --oneline v1.2.0..a327d9b91edf a327d9b91edf Fix data space reporting from Kb/Mb to KB/MB (Non-empty output here means the commit is not present on v1.2.0.) ``` ", "commid": "kubernetes_pr_1807"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "query": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find a just user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "positive_passages": [{"docid": "doc-en-kubernetes-a6230681f6cd44905810676af4cf1a61b3e1fb0fc64d64f6d4a8f2a8818d0c5f", "text": " tagged_release ci012abc 012abc pr101 Merge PR #101 ci012abc->pr101 ci345cde 345cde pr101->ci345cde pr102 Merge PR #102 ci345cde->pr102 pr100 Merge PR #100 pr102->pr100 version_commit 678fed dev_commit 456dcb version_commit->dev_commit dev_commit->pr100 release_info pkg/version/base.go: gitVersion = "v0.5"; dev_info pkg/version/base.go: gitVersion = "v0.5-dev"; pr99 Merge PR #99 pr99->ci012abc pr99->version_commit tag $ git tag -a v0.5 tag->version_commit ", "commid": "kubernetes_pr_1807"}], "negative_passages": []} {"query_id": "q-en-kubernetes-12a3ede7d945c55e14c6bf835d8548e1819c9a378123160dd4714cd3f7d4519c", "query": "On Fedora20 I have the following error: [fedora ~]$ sudo kube-proxy.service - Kubernetes Kube-Proxy Server Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled) Active: active (running) since mer 2014-10-29 15:15:21 UTC; 24ms ago 
Docs: Main PID: (kube-proxy) CGroup: /usr/bin/kube-proxy --logtostderr=true --v=0 --etcdservers=http://fed-master:4001 ott 29 15:15:21 fed-minion systemd[1]: Started Kubernetes Kube-Proxy Server. ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: get registry/services/specs...:4001] ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: Connecting to etcd: attempt...=false ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: http://fed- GET ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: recv.response.fromhttp://fe...=false ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: Hint: Some lines were ellipsized, use -l to show in full. kubelet.service - Kubernetes Kubelet Server Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled) Active: activating (auto-restart) (Result: exit-code) since mer 2014-10-29 15:15:21 UTC; 29ms ago Docs: Main PID: (code=exited, status=2) ott 29 15:15:21 fed-minion systemd[1]: kubelet.service: main process exited, code=exited, status=2/INVALIDARGUMENT ott 29 15:15:21 fed-minion systemd[1]: Unit kubelet.service entered failed state. docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled) Active: active (running) since mer 2014-10-29 15:15:21 UTC; 28ms ago Docs: Main PID: (docker) CGroup: /usr/bin/docker -d -H fd:// --selinux-enabled ott 29 15:15:21 fed-minion docker[]: 2014/10/29 15:15:21 docker daemon: 1.3.0 /1.3.0; execdriver: native; graphdriver: ott 29 15:15:21 fed-minion docker[]: [] +job serveapi(fd://) ott 29 15:15:21 fed-minion docker[]: [info] Listening for HTTP on fd () ott 29 15:15:21 fed-minion docker[]: [] +job initnetworkdriver() ott 29 15:15:21 fed-minion docker[]: [] -job init_networkdriver() = OK (0) ott 29 15:15:21 fed-minion docker[]: [info] Loading containers: ott 29 15:15:21 fed-minion docker[]: [info] : done. 
ott 29 15:15:21 fed-minion docker[]: [] +job acceptconnections() ott 29 15:15:21 fed-minion docker[]: [] -job acceptconnections() = OK (0) ott 29 15:15:21 fed-minion systemd[1]: Started Docker Application Container Engine. When I run netstat -tulnp (cadvisor)\" I don't see anything in output.\nExit code of 2 implies glog.Fatalf to me. What are the contents of the logs for kubelet and kube-proxy? What was the command line of kubelet?\nPlease reopen if you have more details.", "positive_passages": [{"docid": "doc-en-kubernetes-d0b2a1694e137a25f0c9d773183746b4cbbfdc14bd5ba535205288f1c6c3991e", "text": " # Maintainers Eric Paris ", "commid": "kubernetes_pr_1647"}], "negative_passages": []} {"query_id": "q-en-kubernetes-12a3ede7d945c55e14c6bf835d8548e1819c9a378123160dd4714cd3f7d4519c", "query": "On Fedora20 I have the following error: [fedora ~]$ sudo kube-proxy.service - Kubernetes Kube-Proxy Server Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled) Active: active (running) since mer 2014-10-29 15:15:21 UTC; 24ms ago Docs: Main PID: (kube-proxy) CGroup: /usr/bin/kube-proxy --logtostderr=true --v=0 --etcdservers=http://fed-master:4001 ott 29 15:15:21 fed-minion systemd[1]: Started Kubernetes Kube-Proxy Server. ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: get registry/services/specs...:4001] ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: Connecting to etcd: attempt...=false ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: http://fed- GET ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: recv.response.fromhttp://fe...=false ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: Hint: Some lines were ellipsized, use -l to show in full. 
kubelet.service - Kubernetes Kubelet Server Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled) Active: activating (auto-restart) (Result: exit-code) since mer 2014-10-29 15:15:21 UTC; 29ms ago Docs: Main PID: (code=exited, status=2) ott 29 15:15:21 fed-minion systemd[1]: kubelet.service: main process exited, code=exited, status=2/INVALIDARGUMENT ott 29 15:15:21 fed-minion systemd[1]: Unit kubelet.service entered failed state. docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled) Active: active (running) since mer 2014-10-29 15:15:21 UTC; 28ms ago Docs: Main PID: (docker) CGroup: /usr/bin/docker -d -H fd:// --selinux-enabled ott 29 15:15:21 fed-minion docker[]: 2014/10/29 15:15:21 docker daemon: 1.3.0 /1.3.0; execdriver: native; graphdriver: ott 29 15:15:21 fed-minion docker[]: [] +job serveapi(fd://) ott 29 15:15:21 fed-minion docker[]: [info] Listening for HTTP on fd () ott 29 15:15:21 fed-minion docker[]: [] +job initnetworkdriver() ott 29 15:15:21 fed-minion docker[]: [] -job init_networkdriver() = OK (0) ott 29 15:15:21 fed-minion docker[]: [info] Loading containers: ott 29 15:15:21 fed-minion docker[]: [info] : done. ott 29 15:15:21 fed-minion docker[]: [] +job acceptconnections() ott 29 15:15:21 fed-minion docker[]: [] -job acceptconnections() = OK (0) ott 29 15:15:21 fed-minion systemd[1]: Started Docker Application Container Engine. When I run netstat -tulnp (cadvisor)\" I don't see anything in output.\nExit code of 2 implies glog.Fatalf to me. What are the contents of the logs for kubelet and kube-proxy? 
What was the command line of kubelet?\nPlease reopen if you have more details.", "positive_passages": [{"docid": "doc-en-kubernetes-a70238982f99fadc61e7ce926d51d6c8b6faa8bfc5da0332d4165249799ad153", "text": " #!bash # # bash completion file for core kubecfg commands # # This script provides completion of non replication controller options # # To enable the completions either: # - place this file in /etc/bash_completion.d # or # - copy this file and add the line below to your .bashrc after # bash completion features are loaded # . kubecfg # # Note: # Currently, the completions will not work if the apiserver daemon is not # running on localhost on the standard port 8080 __contains_word () { local w word=$1; shift for w in \"$@\"; do [[ $w = \"$word\" ]] && return done return 1 } # This should be provided by the bash-completions, but give a really simple # stoopid version just in case. It works most of the time. if ! declare -F _get_comp_words_by_ref >/dev/null 2>&1; then _get_comp_words_by_ref () { while [ $# -gt 0 ]; do case \"$1\" in cur) cur=${COMP_WORDS[COMP_CWORD]} ;; prev) prev=${COMP_WORDS[COMP_CWORD-1]} ;; words) words=(\"${COMP_WORDS[@]}\") ;; cword) cword=$COMP_CWORD ;; -n) shift # we don't handle excludes ;; esac shift done } fi __has_service() { local i for ((i=0; i < cword; i++)); do local word=${words[i]} # strip everything after a / so things like pods/[id] match word=${word%%/*} if __contains_word \"${word}\" \"${services[@]}\" && ! 
__contains_word \"${words[i-1]}\" \"${opts[@]}\"; then return 0 fi done return 1 } # call kubecfg list $1, # exclude blank lines # skip the header stuff kubecfg prints on the first 2 lines # append $1/ to the first column and use that in compgen __kubecfg_parse_list() { local kubecfg_output if kubecfg_output=$(kubecfg list \"$1\" 2>/dev/null); then out=($(echo \"${kubecfg_output}\" | awk -v prefix=\"$1\" '/^$/ {next} NR > 2 {print prefix\"/\"$1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } _kubecfg_specific_service_match() { case \"$cur\" in pods/*) __kubecfg_parse_list pods ;; minions/*) __kubecfg_parse_list minions ;; replicationControllers/*) __kubecfg_parse_list replicationControllers ;; services/*) __kubecfg_parse_list services ;; *) if __has_service; then return 0 fi compopt -o nospace COMPREPLY=( $( compgen -S / -W \"${services[*]}\" -- \"$cur\" ) ) ;; esac } _kubecfg_service_match() { if __has_service; then return 0 fi COMPREPLY=( $( compgen -W \"${services[*]}\" -- \"$cur\" ) ) } _kubecfg() { local opts=( -h -c ) local create_services=(pods replicationControllers services) local update_services=(replicationControllers) local all_services=(pods replicationControllers services minions) local services=(\"${all_services[@]}\") local json_commands=(create update) local all_commands=(create update get list delete stop rm rollingupdate resize) local commands=(\"${all_commands[@]}\") COMPREPLY=() local command local cur prev words cword _get_comp_words_by_ref -n : cur prev words cword if __contains_word \"$prev\" \"${opts[@]}\"; then case $prev in -c) _filedir '@(json|yml|yaml)' return 0 ;; -h) return 0 ;; esac fi if [[ \"$cur\" = -* ]]; then COMPREPLY=( $(compgen -W \"${opts[*]}\" -- \"$cur\") ) return 0 fi # if you passed -c, you are limited to create or update if __contains_word \"-c\" \"${words[@]}\"; then services=(\"${create_services[@]}\" \"${update_services[@]}\") commands=(\"${json_commands[@]}\") fi # figure out which command they are 
running, remembering that arguments to # options don't count as the command! So a hostname named 'create' won't # trip things up local i for ((i=0; i < cword; i++)); do if __contains_word \"${words[i]}\" \"${commands[@]}\" && ! __contains_word \"${words[i-1]}\" \"${opts[@]}\"; then command=${words[i]} break fi done # tell the list of possible commands if [[ -z ${command} ]]; then COMPREPLY=( $( compgen -W \"${commands[*]}\" -- \"$cur\" ) ) return 0 fi # remove services which you can't update given your command if [[ ${command} == \"create\" ]]; then services=(\"${create_services[@]}\") elif [[ ${command} == \"update\" ]]; then services=(\"${update_services[@]}\") fi case $command in create | list) _kubecfg_service_match ;; update | get | delete) _kubecfg_specific_service_match ;; *) ;; esac return 0 } complete -F _kubecfg kubecfg # ex: ts=4 sw=4 et filetype=sh ", "commid": "kubernetes_pr_1647"}], "negative_passages": []} {"query_id": "q-en-kubernetes-12a3ede7d945c55e14c6bf835d8548e1819c9a378123160dd4714cd3f7d4519c", "query": "On Fedora20 I have the following error: [fedora ~]$ sudo kube-proxy.service - Kubernetes Kube-Proxy Server Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled) Active: active (running) since mer 2014-10-29 15:15:21 UTC; 24ms ago Docs: Main PID: (kube-proxy) CGroup: /usr/bin/kube-proxy --logtostderr=true --v=0 --etcdservers=http://fed-master:4001 ott 29 15:15:21 fed-minion systemd[1]: Started Kubernetes Kube-Proxy Server. ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: get registry/services/specs...:4001] ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: Connecting to etcd: attempt...=false ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: http://fed- GET ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: recv.response.fromhttp://fe...=false ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. 
] etcd DEBUG: Hint: Some lines were ellipsized, use -l to show in full. kubelet.service - Kubernetes Kubelet Server Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled) Active: activating (auto-restart) (Result: exit-code) since mer 2014-10-29 15:15:21 UTC; 29ms ago Docs: Main PID: (code=exited, status=2) ott 29 15:15:21 fed-minion systemd[1]: kubelet.service: main process exited, code=exited, status=2/INVALIDARGUMENT ott 29 15:15:21 fed-minion systemd[1]: Unit kubelet.service entered failed state. docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled) Active: active (running) since mer 2014-10-29 15:15:21 UTC; 28ms ago Docs: Main PID: (docker) CGroup: /usr/bin/docker -d -H fd:// --selinux-enabled ott 29 15:15:21 fed-minion docker[]: 2014/10/29 15:15:21 docker daemon: 1.3.0 /1.3.0; execdriver: native; graphdriver: ott 29 15:15:21 fed-minion docker[]: [] +job serveapi(fd://) ott 29 15:15:21 fed-minion docker[]: [info] Listening for HTTP on fd () ott 29 15:15:21 fed-minion docker[]: [] +job initnetworkdriver() ott 29 15:15:21 fed-minion docker[]: [] -job init_networkdriver() = OK (0) ott 29 15:15:21 fed-minion docker[]: [info] Loading containers: ott 29 15:15:21 fed-minion docker[]: [info] : done. ott 29 15:15:21 fed-minion docker[]: [] +job acceptconnections() ott 29 15:15:21 fed-minion docker[]: [] -job acceptconnections() = OK (0) ott 29 15:15:21 fed-minion systemd[1]: Started Docker Application Container Engine. When I run netstat -tulnp (cadvisor)\" I don't see anything in output.\nExit code of 2 implies glog.Fatalf to me. What are the contents of the logs for kubelet and kube-proxy? 
What was the command line of kubelet?\nPlease reopen if you have more details.", "positive_passages": [{"docid": "doc-en-kubernetes-3b11a81468f6a5ac23a5c5152bea0f7aaadbb57189a6cd5f4abeb0c3a33cdcd4", "text": " #!bash # # bash completion file for core kubecfg commands # # This script provides completion of non replication controller options # # To enable the completions either: # - place this file in /etc/bash_completion.d # or # - copy this file and add the line below to your .bashrc after # bash completion features are loaded # . kubecfg # # Note: # Currently, the completions will not work if the apiserver daemon is not # running on localhost on the standard port 8080 __contains_word () { local w word=$1; shift for w in \"$@\"; do [[ $w = \"$word\" ]] && return done return 1 } # This should be provided by the bash-completions, but give a really simple # stoopid version just in case. It works most of the time. if ! declare -F _get_comp_words_by_ref >/dev/null 2>&1; then _get_comp_words_by_ref () { while [ $# -gt 0 ]; do case \"$1\" in cur) cur=${COMP_WORDS[COMP_CWORD]} ;; prev) prev=${COMP_WORDS[COMP_CWORD-1]} ;; words) words=(\"${COMP_WORDS[@]}\") ;; cword) cword=$COMP_CWORD ;; -n) shift # we don't handle excludes ;; esac shift done } fi __has_service() { local i for ((i=0; i < cword; i++)); do local word=${words[i]} # strip everything after a / so things like pods/[id] match word=${word%%/*} if __contains_word \"${word}\" \"${services[@]}\" && ! 
__contains_word \"${words[i-1]}\" \"${opts[@]}\"; then return 0 fi done return 1 } # call kubecfg list $1, # exclude blank lines # skip the header stuff kubecfg prints on the first 2 lines # append $1/ to the first column and use that in compgen __kubecfg_parse_list() { local kubecfg_output if kubecfg_output=$(kubecfg list \"$1\" 2>/dev/null); then out=($(echo \"${kubecfg_output}\" | awk -v prefix=\"$1\" '/^$/ {next} NR > 2 {print prefix\"/\"$1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } _kubecfg_specific_service_match() { case \"$cur\" in pods/*) __kubecfg_parse_list pods ;; minions/*) __kubecfg_parse_list minions ;; replicationControllers/*) __kubecfg_parse_list replicationControllers ;; services/*) __kubecfg_parse_list services ;; *) if __has_service; then return 0 fi compopt -o nospace COMPREPLY=( $( compgen -S / -W \"${services[*]}\" -- \"$cur\" ) ) ;; esac } _kubecfg_service_match() { if __has_service; then return 0 fi COMPREPLY=( $( compgen -W \"${services[*]}\" -- \"$cur\" ) ) } _kubecfg() { local opts=( -h -c ) local create_services=(pods replicationControllers services) local update_services=(replicationControllers) local all_services=(pods replicationControllers services minions) local services=(\"${all_services[@]}\") local json_commands=(create update) local all_commands=(create update get list delete stop rm rollingupdate resize) local commands=(\"${all_commands[@]}\") COMPREPLY=() local command local cur prev words cword _get_comp_words_by_ref -n : cur prev words cword if __contains_word \"$prev\" \"${opts[@]}\"; then case $prev in -c) _filedir '@(json|yml|yaml)' return 0 ;; -h) return 0 ;; esac fi if [[ \"$cur\" = -* ]]; then COMPREPLY=( $(compgen -W \"${opts[*]}\" -- \"$cur\") ) return 0 fi # if you passed -c, you are limited to create or update if __contains_word \"-c\" \"${words[@]}\"; then services=(\"${create_services[@]}\" \"${update_services[@]}\") commands=(\"${json_commands[@]}\") fi # figure out which command they are 
running, remembering that arguments to # options don't count as the command! So a hostname named 'create' won't # trip things up local i for ((i=0; i < cword; i++)); do if __contains_word \"${words[i]}\" \"${commands[@]}\" && ! __contains_word \"${words[i-1]}\" \"${opts[@]}\"; then command=${words[i]} break fi done # tell the list of possible commands if [[ -z ${command} ]]; then COMPREPLY=( $( compgen -W \"${commands[*]}\" -- \"$cur\" ) ) return 0 fi # remove services which you can't update given your command if [[ ${command} == \"create\" ]]; then services=(\"${create_services[@]}\") elif [[ ${command} == \"update\" ]]; then services=(\"${update_services[@]}\") fi case $command in create | list) _kubecfg_service_match ;; update | get | delete) _kubecfg_specific_service_match ;; *) ;; esac return 0 } complete -F _kubecfg kubecfg # ex: ts=4 sw=4 et filetype=sh ", "commid": "kubernetes_pr_1647"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ddb3c77615a6f61140a3f455ef45c381d55feb4351606a018aa9cb9944c9a41d", "query": "introduces PullPolicies (PullAlways, PullNever, PullIfNotPresent), but by default, the policy is PullAlways, which hit the docker repository again today. The related #google-containers IRC conversation can be found at: We should introduce a new policy to cap the number of attempts. Maybe getting rid of PullAlways completely? cc/\nI would like to change the default one to PullIfNotPresent for now. We can continue discussing the right policies.\nSuggestions: PullIfNotPresent should be our default? PullAlways needs to be nerfed.
Suggestion: max N pull attempts per pod config change, with {% set registry_qps = \"-registry_qps=0.1\" %} DAEMON_ARGS=\"{{daemon_args}} {{etcd_servers}} {{apiservers}} {{auth_path}} {{hostname_override}} {{address}} {{config}} --allow_privileged={{pillar['allow_privileged']}}\"", "commid": "kubernetes_pr_2869"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b0f40bfe0350c828d92f94d76b7846c99a5446ea0a208e561c5b1550f68762be", "query": "From kubelet logs: E1117 23:17:17. ] Sleeping: Unable to write event: event \"e2e-test-etune-minion-1.c.seventh-circle-487.internal.\" is invalid: involvedObject.namespace: invalid value '' The kubelet code says: I'll update when I figure out where the record.Event is coming from. The code is:\nEvent in question is: I1117 23:25:42. ] Event(api.ObjectReference{Kind:\"Minion\", Namespace:\"\", Name:\"e2e-test-etune-minion-1.c.seventh-circle-487.internal\", UID:\"e2e-test-etune-minion-1.c.seventh-circle-487.internal\", APIVersion:\"\", ResourceVersion:\"\", FieldPath:\"\"}): status: '', reason: 'starting' Starting kubelet.\nI think the namespace needs to be kubelet. Note that one bad event blocks other good events from getting sent. Probably better to change that.\nI'll file a new one for bad event blocking good.\nI have a similar issue related to event. When generating event for image, what kind of namespace I should use? Image is not APIObject kubernetes managed, and it could shared crossing several containers even belongs to different pod. Currently I plan to just use default when creating ObjectReference for image related event. If we use kubelet for event related to minion, are we going to make it reserved one?\nwhat events are you planning to add for images? 
\"default\" seems like a good enough namespace for minion events for now.", "positive_passages": [{"docid": "doc-en-kubernetes-55ad420d6563fedd0c63cab2bafb6841e198176a2e339ef3148c7d799aad4500", "text": "// TODO: get the real minion object of ourself, // and use the real minion name and UID. ref := &api.ObjectReference{ Kind: \"Minion\", Name: kl.hostname, UID: kl.hostname, Kind: \"Minion\", Name: kl.hostname, UID: kl.hostname, Namespace: api.NamespaceDefault, } record.Eventf(ref, \"\", \"starting\", \"Starting kubelet.\") }", "commid": "kubernetes_pr_2429"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6a98eaf0134f550b05a03ee7991e3bdcaaccc7953bd821682f7b5b34771c6a6b", "query": "For example, creating a service with id 'redis-master10012345678999' fails and the returned message contains the following: \"message\": \"service \"redis-master10012345678999\" is invalid: name: invalid value 'redis-master10012345678999' 'invalid value' is too general, users should understand exactly what is wrong with the input. for example - is it too long? does it contain characters that are not allowed?\nAgree - there is now a place to stick the explanations in the ValidationError - we need to link together validation logic and that string. , abonas wrote:\nI checked and it seems there are only two errors that do not have detailed explanation. I'll send a PR to fix this.\nThis is one:\nThanks for the pointer Brian! 
I'll send second PR to fix this one.\nStill second PR to be sent.", "positive_passages": [{"docid": "doc-en-kubernetes-517d53c3009a525b41232b9601ffacb3dc61fc22dcc9b53c518b178e5c14964e", "text": "if !util.IsQualifiedName(strings.ToLower(k)) { allErrs = append(allErrs, errs.NewFieldInvalid(field, k, qualifiedNameErrorMsg)) } if !util.IsValidAnnotationValue(v) { allErrs = append(allErrs, errs.NewFieldInvalid(field, k, \"\")) } totalSize += (int64)(len(k)) + (int64)(len(v)) } if totalSize > (int64)(totalAnnotationSizeLimitB) { allErrs = append(allErrs, errs.NewFieldTooLong(\"annotations\", \"\")) allErrs = append(allErrs, errs.NewFieldTooLong(field, \"\", totalAnnotationSizeLimitB)) } return allErrs }", "commid": "kubernetes_pr_5786"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6a98eaf0134f550b05a03ee7991e3bdcaaccc7953bd821682f7b5b34771c6a6b", "query": "For example, creating a service with id 'redis-master10012345678999' fails and the returned message contains the following: \"message\": \"service \"redis-master10012345678999\" is invalid: name: invalid value 'redis-master10012345678999' 'invalid value' is too general, users should understand exactly what is wrong with the input. for example - is it too long? does it contain characters that are not allowed?\nAgree - there is now a place to stick the explanations in the ValidationError - we need to link together validation logic and that string. , abonas wrote:\nI checked and it seems there are only two errors that do not have detailed explanation. I'll send a PR to fix this.\nThis is one:\nThanks for the pointer Brian! 
I'll send second PR to fix this one.\nStill second PR to be sent.", "positive_passages": [{"docid": "doc-en-kubernetes-1b2ac816d4dc23e2d4f2431df4ed7a285abf0bb44114aa941a8b8857f23237c6", "text": "func (v *ValidationError) Error() string { var s string switch v.Type { case ValidationErrorTypeRequired: case ValidationErrorTypeRequired, ValidationErrorTypeTooLong: s = spew.Sprintf(\"%s: %s\", v.Field, v.Type) default: s = spew.Sprintf(\"%s: %s '%+v'\", v.Field, v.Type, v.BadValue)", "commid": "kubernetes_pr_5786"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6a98eaf0134f550b05a03ee7991e3bdcaaccc7953bd821682f7b5b34771c6a6b", "query": "For example, creating a service with id 'redis-master10012345678999' fails and the returned message contains the following: \"message\": \"service \"redis-master10012345678999\" is invalid: name: invalid value 'redis-master10012345678999' 'invalid value' is too general, users should understand exactly what is wrong with the input. for example - is it too long? does it contain characters that are not allowed?\nAgree - there is now a place to stick the explanations in the ValidationError - we need to link together validation logic and that string. , abonas wrote:\nI checked and it seems there are only two errors that do not have detailed explanation. I'll send a PR to fix this.\nThis is one:\nThanks for the pointer Brian! 
I'll send second PR to fix this one.\nStill second PR to be sent.", "positive_passages": [{"docid": "doc-en-kubernetes-76434e67b55356557767506e655a59f3b90fe0b2e032527f55efe25497a34359", "text": "return &ValidationError{ValidationErrorTypeNotFound, field, value, \"\"} } func NewFieldTooLong(field string, value interface{}) *ValidationError { return &ValidationError{ValidationErrorTypeTooLong, field, value, \"\"} func NewFieldTooLong(field string, value interface{}, maxLength int) *ValidationError { return &ValidationError{ValidationErrorTypeTooLong, field, value, fmt.Sprintf(\"must have at most %d characters\", maxLength)} } type ValidationErrorList []error", "commid": "kubernetes_pr_5786"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6a98eaf0134f550b05a03ee7991e3bdcaaccc7953bd821682f7b5b34771c6a6b", "query": "For example, creating a service with id 'redis-master10012345678999' fails and the returned message contains the following: \"message\": \"service \"redis-master10012345678999\" is invalid: name: invalid value 'redis-master10012345678999' 'invalid value' is too general, users should understand exactly what is wrong with the input. for example - is it too long? does it contain characters that are not allowed?\nAgree - there is now a place to stick the explanations in the ValidationError - we need to link together validation logic and that string. , abonas wrote:\nI checked and it seems there are only two errors that do not have detailed explanation. I'll send a PR to fix this.\nThis is one:\nThanks for the pointer Brian! I'll send second PR to fix this one.\nStill second PR to be sent.", "positive_passages": [{"docid": "doc-en-kubernetes-06a5ff00c381f4b5ad8ee5c690b3e991963b61b3814a0ed102bf6e8855e81ef5", "text": "return (len(value) <= LabelValueMaxLength && labelValueRegexp.MatchString(value)) } // Annotation values are opaque. 
func IsValidAnnotationValue(value string) bool { return true } const QualifiedNameFmt string = \"(\" + qnameTokenFmt + \"/)?\" + qnameTokenFmt const QualifiedNameMaxLength int = 253", "commid": "kubernetes_pr_5786"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6a98eaf0134f550b05a03ee7991e3bdcaaccc7953bd821682f7b5b34771c6a6b", "query": "For example, creating a service with id 'redis-master10012345678999' fails and the returned message contains the following: \"message\": \"service \"redis-master10012345678999\" is invalid: name: invalid value 'redis-master10012345678999' 'invalid value' is too general, users should understand exactly what is wrong with the input. for example - is it too long? does it contain characters that are not allowed?\nAgree - there is now a place to stick the explanations in the ValidationError - we need to link together validation logic and that string. , abonas wrote:\nI checked and it seems there are only two errors that do not have detailed explanation. I'll send a PR to fix this.\nThis is one:\nThanks for the pointer Brian! I'll send second PR to fix this one.\nStill second PR to be sent.", "positive_passages": [{"docid": "doc-en-kubernetes-8fa55f773d3b6b9b929b83603879ded95a64212e8a79b27aee34e7708749588b", "text": "} pod.Spec.Containers = newContainers if !api.Semantic.DeepEqual(pod.Spec, oldPod.Spec) { // TODO: a better error would include all immutable fields explicitly. allErrs = append(allErrs, errs.NewFieldInvalid(\"spec.containers\", newPod.Spec.Containers, \"some fields are immutable\")) allErrs = append(allErrs, errs.NewFieldInvalid(\"spec\", newPod.Spec, \"may not update fields other than container.image\")) } newPod.Status = oldPod.Status", "commid": "kubernetes_pr_5853"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. 
PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusing to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStatus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-7e63074f5269f2743d38897873f80902a5baf7405f17ef2b0bb13f5c378d3df3", "text": "if err != nil { glog.Errorf(\"Unable to get pod with name %q and uid %q info, health checks may be invalid\", podFullName, uid) } netInfo, found := podStatus.Info[dockertools.PodInfraContainerName] if found { podStatus.PodIP = netInfo.PodIP } for _, container := range pod.Spec.Containers { expectedHash := dockertools.HashContainer(&container)", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusing to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. 
We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-54c3e835289ab4f3ac9b277f70cfc4fce3fbd68f2be310758c8360bc14d8cdf5", "text": "return nil, false } // getPhase returns the phase of a pod given its container info. func getPhase(spec *api.PodSpec, info api.PodInfo) api.PodPhase { if info == nil { return api.PodPending } running := 0 waiting := 0 stopped := 0 failed := 0 succeeded := 0 unknown := 0 for _, container := range spec.Containers { if containerStatus, ok := info[container.Name]; ok { if containerStatus.State.Running != nil { running++ } else if containerStatus.State.Termination != nil { stopped++ if containerStatus.State.Termination.ExitCode == 0 { succeeded++ } else { failed++ } } else if containerStatus.State.Waiting != nil { waiting++ } else { unknown++ } } else { unknown++ } } switch { case waiting > 0: // One or more containers has not been started return api.PodPending case running > 0 && unknown == 0: // All containers have been started, and at least // one container is running return api.PodRunning case running == 0 && stopped > 0 && unknown == 0: // All containers are terminated if spec.RestartPolicy.Always != nil { // All containers are in the process of restarting return api.PodRunning } if stopped == succeeded { // RestartPolicy is not Always, and all // containers are terminated in success return api.PodSucceeded } if spec.RestartPolicy.Never != nil { // RestartPolicy is Never, and all 
containers are // terminated with at least one in failure return api.PodFailed } // RestartPolicy is OnFailure, and at least one in failure // and in the process of restarting return api.PodRunning default: return api.PodPending } } // GetPodStatus returns information from Docker about the containers in a pod func (kl *Kubelet) GetPodStatus(podFullName string, uid types.UID) (api.PodStatus, error) { var manifest api.PodSpec var spec api.PodSpec for _, pod := range kl.pods { if GetPodFullName(&pod) == podFullName { manifest = pod.Spec spec = pod.Spec break } } info, err := dockertools.GetDockerPodInfo(kl.dockerClient, manifest, podFullName, uid) info, err := dockertools.GetDockerPodInfo(kl.dockerClient, spec, podFullName, uid) // TODO(dchen1107): Determine PodPhase here var podStatus api.PodStatus podStatus.Phase = getPhase(&spec, info) netContainerInfo, found := info[dockertools.PodInfraContainerName] if found { podStatus.PodIP = netContainerInfo.PodIP } // TODO(dchen1107): Change Info to list from map podStatus.Info = info return podStatus, err", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. 
How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-9a9a6ae74535658161a25a28602ecdef6baead18968850613577a9013de11219", "text": "} } } func TestPodPhaseWithRestartAlways(t *testing.T) { desiredState := api.PodSpec{ Containers: []api.Container{ {Name: \"containerA\"}, {Name: \"containerB\"}, }, RestartPolicy: api.RestartPolicy{Always: &api.RestartPolicyAlways{}}, } currentState := api.PodStatus{ Host: \"machine\", } runningState := api.ContainerStatus{ State: api.ContainerState{ Running: &api.ContainerStateRunning{}, }, } stoppedState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{}, }, } tests := []struct { pod *api.Pod status api.PodPhase test string }{ {&api.Pod{Spec: desiredState, Status: currentState}, api.PodPending, \"waiting\"}, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": runningState, }, Host: \"machine\", }, }, api.PodRunning, \"all running\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": stoppedState, \"containerB\": stoppedState, }, Host: \"machine\", }, }, api.PodRunning, \"all stopped with restart always\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": stoppedState, }, Host: \"machine\", }, }, api.PodRunning, \"mixed state #1 with restart always\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, }, Host: \"machine\", }, }, api.PodPending, \"mixed state #2 with restart always\", }, } for _, test := range tests { if status := getPhase(&test.pod.Spec, 
test.pod.Status.Info); status != test.status { t.Errorf(\"In test %s, expected %v, got %v\", test.test, test.status, status) } } } func TestPodPhaseWithRestartNever(t *testing.T) { desiredState := api.PodSpec{ Containers: []api.Container{ {Name: \"containerA\"}, {Name: \"containerB\"}, }, RestartPolicy: api.RestartPolicy{Never: &api.RestartPolicyNever{}}, } currentState := api.PodStatus{ Host: \"machine\", } runningState := api.ContainerStatus{ State: api.ContainerState{ Running: &api.ContainerStateRunning{}, }, } succeededState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{ ExitCode: 0, }, }, } failedState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{ ExitCode: -1, }, }, } tests := []struct { pod *api.Pod status api.PodPhase test string }{ {&api.Pod{Spec: desiredState, Status: currentState}, api.PodPending, \"waiting\"}, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": runningState, }, Host: \"machine\", }, }, api.PodRunning, \"all running with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": succeededState, \"containerB\": succeededState, }, Host: \"machine\", }, }, api.PodSucceeded, \"all succeeded with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": failedState, \"containerB\": failedState, }, Host: \"machine\", }, }, api.PodFailed, \"all failed with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": succeededState, }, Host: \"machine\", }, }, api.PodRunning, \"mixed state #1 with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": 
runningState, }, Host: \"machine\", }, }, api.PodPending, \"mixed state #2 with restart never\", }, } for _, test := range tests { if status := getPhase(&test.pod.Spec, test.pod.Status.Info); status != test.status { t.Errorf(\"In test %s, expected %v, got %v\", test.test, test.status, status) } } } func TestPodPhaseWithRestartOnFailure(t *testing.T) { desiredState := api.PodSpec{ Containers: []api.Container{ {Name: \"containerA\"}, {Name: \"containerB\"}, }, RestartPolicy: api.RestartPolicy{OnFailure: &api.RestartPolicyOnFailure{}}, } currentState := api.PodStatus{ Host: \"machine\", } runningState := api.ContainerStatus{ State: api.ContainerState{ Running: &api.ContainerStateRunning{}, }, } succeededState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{ ExitCode: 0, }, }, } failedState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{ ExitCode: -1, }, }, } tests := []struct { pod *api.Pod status api.PodPhase test string }{ {&api.Pod{Spec: desiredState, Status: currentState}, api.PodPending, \"waiting\"}, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": runningState, }, Host: \"machine\", }, }, api.PodRunning, \"all running with restart onfailure\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": succeededState, \"containerB\": succeededState, }, Host: \"machine\", }, }, api.PodSucceeded, \"all succeeded with restart onfailure\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": failedState, \"containerB\": failedState, }, Host: \"machine\", }, }, api.PodRunning, \"all failed with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": succeededState, }, Host: 
\"machine\", }, }, api.PodRunning, \"mixed state #1 with restart onfailure\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, }, Host: \"machine\", }, }, api.PodPending, \"mixed state #2 with restart onfailure\", }, } for _, test := range tests { if status := getPhase(&test.pod.Spec, test.pod.Status.Info); status != test.status { t.Errorf(\"In test %s, expected %v, got %v\", test.test, test.status, status) } } } ", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. 
:-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-36de24b69ec519b506fd9a66eec2c610dcbced6f34af13f61c64b2617e72a3ca", "text": "\"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet/leaky\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/labels\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/registry/pod\"", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusing to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStatus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. 
:-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-09da8c72669ece21951449de9510b58c12eab1ee63e0c9e875b6c0302ea4955b", "text": "newStatus.Phase = api.PodUnknown } else { newStatus.Info = result.Status.Info newStatus.Phase = getPhase(&pod.Spec, newStatus.Info) if netContainerInfo, ok := newStatus.Info[leaky.PodInfraContainerName]; ok { if netContainerInfo.PodIP != \"\" { newStatus.PodIP = netContainerInfo.PodIP } newStatus.PodIP = result.Status.PodIP if newStatus.Info == nil { // There is a small race window that kubelet couldn't // propulated the status yet. This should go away once // we removed boundPods newStatus.Phase = api.PodPending } else { newStatus.Phase = result.Status.Phase } } return newStatus, err", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. 
:-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-06618979dcf89937d773853cb2b8408fc98f1e215f14b33a7a3438acd9eb169d", "text": "} wg.Wait() } // getPhase returns the phase of a pod given its container info. // TODO(dchen1107): push this all the way down into kubelet. func getPhase(spec *api.PodSpec, info api.PodInfo) api.PodPhase { if info == nil { return api.PodPending } running := 0 waiting := 0 stopped := 0 failed := 0 succeeded := 0 unknown := 0 for _, container := range spec.Containers { if containerStatus, ok := info[container.Name]; ok { if containerStatus.State.Running != nil { running++ } else if containerStatus.State.Termination != nil { stopped++ if containerStatus.State.Termination.ExitCode == 0 { succeeded++ } else { failed++ } } else if containerStatus.State.Waiting != nil { waiting++ } else { unknown++ } } else { unknown++ } } switch { case waiting > 0: // One or more containers has not been started return api.PodPending case running > 0 && unknown == 0: // All containers have been started, and at least // one container is running return api.PodRunning case running == 0 && stopped > 0 && unknown == 0: // All containers are terminated if spec.RestartPolicy.Always != nil { // All containers are in the process of restarting return api.PodRunning } if stopped == succeeded { // RestartPolicy is not Always, and all // containers are terminated in success return api.PodSucceeded } if spec.RestartPolicy.Never != nil { // RestartPolicy is Never, and all containers are // terminated with at least one in failure return api.PodFailed } // RestartPolicy is OnFailure, and at least one in failure // and in the process of restarting return api.PodRunning default: return api.PodPending } } ", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. 
PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-39ccafab293f28388b6206049c04ee997ba4a2819cde3a0dda1fa8f078d21c8e", "text": "\"reflect\" \"sync\" \"testing\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet/leaky\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/registry/registrytest\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util\" ) type podInfoCall struct {", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. 
We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-7326e2a6c8d4e7793cd40cd90f2d0ca60c6a8fb79514ac3dc4b41f83398bfacf", "text": "if status == nil { t.Errorf(\"Unexpected non-status.\") } expected := &api.PodStatus{ Phase: \"Pending\", Host: \"machine\", HostIP: \"1.2.3.5\", Info: api.PodInfo{ \"bar\": api.ContainerStatus{}, }, } if !reflect.DeepEqual(status, expected) { t.Errorf(\"expected:n%#vngot:n%#vn\", expected, status) } } type podCacheTestConfig struct {", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. 
Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-04754bae5c499a2b5257bf4be3e211fb7d29df4a340b77fef8b499c9d85c053d", "text": "} } func TestFillPodStatus(t *testing.T) { pod := makePod(api.NamespaceDefault, \"foo\", \"machine\", \"bar\") expectedIP := \"1.2.3.4\" expectedTime, _ := time.Parse(\"2013-Feb-03\", \"2013-Feb-03\") config := podCacheTestConfig{ kubeletContainerInfo: api.PodStatus{ Phase: api.PodPending, Host: \"machine\", HostIP: \"ip of machine\", PodIP: expectedIP, Info: api.PodInfo{ leaky.PodInfraContainerName: { State: api.ContainerState{ Running: &api.ContainerStateRunning{ StartedAt: util.NewTime(expectedTime), }, }, RestartCount: 1, PodIP: expectedIP, }, }, }, nodes: []api.Node{*makeHealthyNode(\"machine\", \"ip of machine\")}, pods: []api.Pod{*pod}, } cache := config.Construct() err := cache.updatePodStatus(&config.pods[0]) if err != nil { t.Fatalf(\"Unexpected error: %+v\", err) } status, err := cache.GetPodStatus(pod.Namespace, pod.Name) if e, a := &config.kubeletContainerInfo, status; !reflect.DeepEqual(e, a) { t.Errorf(\"Expected: %+v, Got %+v\", e, a) } } func TestFillPodInfoNoData(t *testing.T) { pod := makePod(api.NamespaceDefault, \"foo\", \"machine\", \"bar\") expectedIP := \"\"", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. 
We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-84ec20a9a1c474d5496a5c8caaa421014e5132e3b88c62da6c10a39894c35c64", "text": "} } func TestPodPhaseWithRestartAlways(t *testing.T) { desiredState := api.PodSpec{ Containers: []api.Container{ {Name: \"containerA\"}, {Name: \"containerB\"}, }, RestartPolicy: api.RestartPolicy{Always: &api.RestartPolicyAlways{}}, } currentState := api.PodStatus{ Host: \"machine\", } runningState := api.ContainerStatus{ State: api.ContainerState{ Running: &api.ContainerStateRunning{}, }, } stoppedState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{}, }, } tests := []struct { pod *api.Pod status api.PodPhase test string }{ {&api.Pod{Spec: desiredState, Status: currentState}, api.PodPending, \"waiting\"}, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": runningState, }, Host: \"machine\", }, }, api.PodRunning, \"all running\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": stoppedState, \"containerB\": stoppedState, }, Host: \"machine\", }, }, api.PodRunning, \"all stopped with restart always\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": stoppedState, }, Host: \"machine\", }, }, 
api.PodRunning, \"mixed state #1 with restart always\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, }, Host: \"machine\", }, }, api.PodPending, \"mixed state #2 with restart always\", }, } for _, test := range tests { if status := getPhase(&test.pod.Spec, test.pod.Status.Info); status != test.status { t.Errorf(\"In test %s, expected %v, got %v\", test.test, test.status, status) } } } func TestPodPhaseWithRestartNever(t *testing.T) { desiredState := api.PodSpec{ Containers: []api.Container{ {Name: \"containerA\"}, {Name: \"containerB\"}, }, RestartPolicy: api.RestartPolicy{Never: &api.RestartPolicyNever{}}, } currentState := api.PodStatus{ Host: \"machine\", } runningState := api.ContainerStatus{ State: api.ContainerState{ Running: &api.ContainerStateRunning{}, }, } succeededState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{ ExitCode: 0, }, }, } failedState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{ ExitCode: -1, }, }, } tests := []struct { pod *api.Pod status api.PodPhase test string }{ {&api.Pod{Spec: desiredState, Status: currentState}, api.PodPending, \"waiting\"}, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": runningState, }, Host: \"machine\", }, }, api.PodRunning, \"all running with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": succeededState, \"containerB\": succeededState, }, Host: \"machine\", }, }, api.PodSucceeded, \"all succeeded with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": failedState, \"containerB\": failedState, }, Host: \"machine\", }, }, api.PodFailed, \"all failed with restart never\", }, { &api.Pod{ Spec: 
desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": succeededState, }, Host: \"machine\", }, }, api.PodRunning, \"mixed state #1 with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, }, Host: \"machine\", }, }, api.PodPending, \"mixed state #2 with restart never\", }, } for _, test := range tests { if status := getPhase(&test.pod.Spec, test.pod.Status.Info); status != test.status { t.Errorf(\"In test %s, expected %v, got %v\", test.test, test.status, status) } } } func TestPodPhaseWithRestartOnFailure(t *testing.T) { desiredState := api.PodSpec{ Containers: []api.Container{ {Name: \"containerA\"}, {Name: \"containerB\"}, }, RestartPolicy: api.RestartPolicy{OnFailure: &api.RestartPolicyOnFailure{}}, } currentState := api.PodStatus{ Host: \"machine\", } runningState := api.ContainerStatus{ State: api.ContainerState{ Running: &api.ContainerStateRunning{}, }, } succeededState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{ ExitCode: 0, }, }, } failedState := api.ContainerStatus{ State: api.ContainerState{ Termination: &api.ContainerStateTerminated{ ExitCode: -1, }, }, } tests := []struct { pod *api.Pod status api.PodPhase test string }{ {&api.Pod{Spec: desiredState, Status: currentState}, api.PodPending, \"waiting\"}, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": runningState, }, Host: \"machine\", }, }, api.PodRunning, \"all running with restart onfailure\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": succeededState, \"containerB\": succeededState, }, Host: \"machine\", }, }, api.PodSucceeded, \"all succeeded with restart onfailure\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: 
map[string]api.ContainerStatus{ \"containerA\": failedState, \"containerB\": failedState, }, Host: \"machine\", }, }, api.PodRunning, \"all failed with restart never\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, \"containerB\": succeededState, }, Host: \"machine\", }, }, api.PodRunning, \"mixed state #1 with restart onfailure\", }, { &api.Pod{ Spec: desiredState, Status: api.PodStatus{ Info: map[string]api.ContainerStatus{ \"containerA\": runningState, }, Host: \"machine\", }, }, api.PodPending, \"mixed state #2 with restart onfailure\", }, } for _, test := range tests { if status := getPhase(&test.pod.Spec, test.pod.Status.Info); status != test.status { t.Errorf(\"In test %s, expected %v, got %v\", test.test, test.status, status) } } } func TestGarbageCollection(t *testing.T) { pod1 := makePod(api.NamespaceDefault, \"foo\", \"machine\", \"bar\") pod2 := makePod(api.NamespaceDefault, \"baz\", \"machine\", \"qux\")", "commid": "kubernetes_pr_4194"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. 
I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-ec3906e167ac177574f1dfbdf16a30f3ea0f46b4b7e1e7b9c8fa1bdbf873757b", "text": "m := make(api.PodInfo) for k, v := range r.Status.Info { v.Ready = true v.PodIP = \"1.2.3.4\" m[k] = v } r.Status.Info = m", "commid": "kubernetes_pr_4970"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusing to the end user ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStatus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this.
:-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-9d5f5c6320636df815efe2d597380cf5cc5fa971206ed9ab10c1037e0d6d61ba", "text": "return func() (bool, error) { endpoints, err := c.Endpoints(serviceNamespace).Get(serviceID) if err != nil { glog.Infof(\"Error on creating endpoints: %v\", err) return false, nil } glog.Infof(\"endpoints: %v\", endpoints.Endpoints) return len(endpoints.Endpoints) == endpointCount, nil } }", "commid": "kubernetes_pr_4970"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-4c87a476515a8c76d3a6f64ad7c7ce2836016116da0aecf2defaeec912aa8a2c", "text": "return &containerStatus, nil } // GetDockerPodInfo returns docker info for all containers in the pod/manifest. 
// GetDockerPodInfo returns docker info for all containers in the pod/manifest and // infrastructure container func GetDockerPodInfo(client DockerInterface, manifest api.PodSpec, podFullName string, uid types.UID) (api.PodInfo, error) { info := api.PodInfo{} expectedContainers := make(map[string]api.Container)", "commid": "kubernetes_pr_4970"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bdb8aba44fdfc136ba5914129c5151c5b123e60df26d898975d42cdde5aeebc3", "query": "Kubelet now only reports ContainerStatus. PodStatus / PodCondition is observed at master side based on a set of ContainerStatus. There is no other way to get PodIP except net container for master side. This is very confusion to the end user, , , ... Now we are finalizing on v1beta3 API, including PodStatus & ContainerStatus. We can make net container completely invisible to the end users by finishing the following steps: 1) Expose network related errors at API level 2) Generate PodStaus directly from Kubelet\n3) once PodStatus generated by kubelet, PodIP can be removed from ContainerStatus.\nI think reporting PodStatus rather than just container status is most important. How hard is that?\nUsed to be hard since there are many dependencies when I first report Status back. Now it is totally feasible. I already started to write PRs for this. :-)\nOne more PR required to close this one.", "positive_passages": [{"docid": "doc-en-kubernetes-11b19c922f77e269b7916d0efc0723bd45d19f474996ab8d7914c76b3457bbd8", "text": "// Assume info is ready to process podStatus.Phase = getPhase(spec, info) podStatus.Info = api.PodInfo{} for _, c := range spec.Containers { containerStatus := info[c.Name] containerStatus.Ready = kl.readiness.IsReady(containerStatus) info[c.Name] = containerStatus podStatus.Info[c.Name] = containerStatus } podStatus.Conditions = append(podStatus.Conditions, getPodReadyCondition(spec, info)...) 
podStatus.Conditions = append(podStatus.Conditions, getPodReadyCondition(spec, podStatus.Info)...) netContainerInfo, found := info[dockertools.PodInfraContainerName] if found { podStatus.PodIP = netContainerInfo.PodIP } // TODO(dchen1107): Change Info to list from map podStatus.Info = info return podStatus, nil }", "commid": "kubernetes_pr_4970"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e082935199b3867216aadc7ab3099030787c94aab618524e13109de89e7d352f", "query": "In case it isn't clear why this happened, it's because gcloud ships with an older version of kubectl (currently version 0.7.0). We keep it as up to date as we can, but it will always lag a release or so behind head.\nFixed by the revert.", "positive_passages": [{"docid": "doc-en-kubernetes-ebff61b4c73740e17a06dc4107e707d8833f3d740685507858804bd48659353e", "text": "} // Takes a list of strings and compiles them into a list of regular expressions func CompileRegexps(regexpStrings StringList) ([]*regexp.Regexp, error) { func CompileRegexps(regexpStrings []string) ([]*regexp.Regexp, error) { regexps := []*regexp.Regexp{} for _, regexpStr := range regexpStrings { r, err := regexp.Compile(regexpStr)", "commid": "kubernetes_pr_1648"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e082935199b3867216aadc7ab3099030787c94aab618524e13109de89e7d352f", "query": "In case it isn't clear why this happened, it's because gcloud ships with an older version of kubectl (currently version 0.7.0). 
We keep it as up to date as we can, but it will always lag a release or so behind head.\nFixed by the revert.", "positive_passages": [{"docid": "doc-en-kubernetes-3b91ee1154b112879cefeb6f9f8a3037685f1becb9c7da0784d5b303c08e6e22", "text": "t.Errorf(\"diff returned %v\", diff) } } func TestCompileRegex(t *testing.T) { uncompiledRegexes := []string{\"endsWithMe$\", \"^startingWithMe\"} regexes, err := CompileRegexps(uncompiledRegexes) if err != nil { t.Errorf(\"Failed to compile legal regexes: '%v': %v\", uncompiledRegexes, err) } if len(regexes) != len(uncompiledRegexes) { t.Errorf(\"Wrong number of regexes returned: '%v': %v\", uncompiledRegexes, regexes) } if !regexes[0].MatchString(\"Something that endsWithMe\") { t.Errorf(\"Wrong regex returned: '%v': %v\", uncompiledRegexes[0], regexes[0]) } if regexes[0].MatchString(\"Something that doesn't endsWithMe.\") { t.Errorf(\"Wrong regex returned: '%v': %v\", uncompiledRegexes[0], regexes[0]) } if !regexes[1].MatchString(\"startingWithMe is very important\") { t.Errorf(\"Wrong regex returned: '%v': %v\", uncompiledRegexes[1], regexes[1]) } if regexes[1].MatchString(\"not startingWithMe should fail\") { t.Errorf(\"Wrong regex returned: '%v': %v\", uncompiledRegexes[1], regexes[1]) } } ", "commid": "kubernetes_pr_1648"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e082935199b3867216aadc7ab3099030787c94aab618524e13109de89e7d352f", "query": "In case it isn't clear why this happened, it's because gcloud ships with an older version of kubectl (currently version 0.7.0). We keep it as up to date as we can, but it will always lag a release or so behind head.\nFixed by the revert.", "positive_passages": [{"docid": "doc-en-kubernetes-b1ab63dd995bbe8f064cfc2886c0942e78604d77db1e3422cf43ec13159927d8", "text": "// kubectl command (begining with a space). 
func kubectlArgs() string { if *checkVersionSkew { return \" --match-server-version --v=4\" return \" --match-server-version\" } return \" --v=4\" return \"\" } func bashWrap(cmd string) string {", "commid": "kubernetes_pr_3475"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8d9e539297dc070efbb3a72a40e6c4dd30fcde81ce3b97c5e3f1b6dd525ae415", "query": "I'm trying containervm with kubelet built from head and this YAML file: The YAML file is v1beta2 and doesn't set a name to the Pod. When starting kubelet, it doesn't start the \"nc\" echo server and complains about this on the logs: Chatting with on IM she mentioned it might be due to a change from that now requires non-empty pod names (replaces random generation with validation for non-emptiness) but that means all existing YAML files (even the ones with an older version) will stop working... Is that expected? Thanks! Filipe\nYes, the latest kubelet call validation on all sources: etcd, file, http, etc. now, and PodName is a required field. One potential fix for this is also invoke conversion code before call validation, like what we have done at apiserver. Thoughts?\nJust so I get unblocked... Do you have an example YAML file that will set the pod name and will work? Right now this is what I'm using: Should I bump up version to v1beta3 and include a new field somewhere? Thanks! Filipe\nThe type of that yaml is not pod, and the kubelet code doesn't know how to read pods from a file. So, I think change needs to be reverted before you can make the next ContainerVM release. The v1beta2 vs v1beta3 should not have anything to do with it, unless I'm misunderstanding.\nOr will it work to do this:\nStill, not sure if you want to release a containerVM version that doesn't accept the old format?\nThanks I made it work with the YAML above! I still have errors in the logs: But the docker containers are being brought up... 
It's weird that is the YAML field that is missing, considering the source seems to say it's a deprecated field that will be removed at some point... I agree with your point that this needs to be fixed at head before the next containervm release. I opened to increase test coverage of standalone kubelet since we have been seeing many changes break that model recently. Cheers, Filipe\nid in v1beta2 maps to name in v1beta3 Tim's change made name required. The internal representation is like v1beta3. But v1beta3 is not officially stable so we don't want to use it in the GKE or containerVM docs.\nThe issue is caused by , and affect both v0.8 and v0.9 release for standalone kubelet to support old format ContainerManifests.\nUgh, I thought ID was really required. I spoke with Dawn a bit, and here's my thinking. 1) update all our docs and examples to include ID 2) in file and HTTP config sources, check for Name = \"\" and apply a default, maybe log a warning that this is deprecated 3) wait for the ContainerManifest -Pod conversion to really make it required", "positive_passages": [{"docid": "doc-en-kubernetes-88b66a0599dd199c29ab489ef31fc9f4ac102b355c1b3184c93f86fac4c18748", "text": "pod.UID = types.UID(hex.EncodeToString(hasher.Sum(nil)[0:])) glog.V(5).Infof(\"Generated UID %q for pod %q from file %s\", pod.UID, pod.Name, filename) } // This is required for backward compatibility, and should be removed once we // completely deprecate ContainerManifest. if len(pod.Name) == 0 { pod.Name = string(pod.UID) glog.V(5).Infof(\"Generated Name %q for UID %q from file %s\", pod.Name, pod.UID, filename) } if len(pod.Namespace) == 0 { hasher := adler32.New() fmt.Fprint(hasher, filename)", "commid": "kubernetes_pr_3725"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8d9e539297dc070efbb3a72a40e6c4dd30fcde81ce3b97c5e3f1b6dd525ae415", "query": "I'm trying containervm with kubelet built from head and this YAML file: The YAML file is v1beta2 and doesn't set a name to the Pod. 
When starting kubelet, it doesn't start the \"nc\" echo server and complains about this on the logs: Chatting with on IM she mentioned it might be due to a change from that now requires non-empty pod names (replaces random generation with validation for non-emptiness) but that means all existing YAML files (even the ones with an older version) will stop working... Is that expected? Thanks! Filipe\nYes, the latest kubelet call validation on all sources: etcd, file, http, etc. now, and PodName is a required field. One potential fix for this is also invoke conversion code before call validation, like what we have done at apiserver. Thoughts?\nJust so I get unblocked... Do you have an example YAML file that will set the pod name and will work? Right now this is what I'm using: Should I bump up version to v1beta3 and include a new field somewhere? Thanks! Filipe\nThe type of that yaml is not pod, and the kubelet code doesn't know how to read pods from a file. So, I think change needs to be reverted before you can make the next ContainerVM release. The v1beta2 vs v1beta3 should not have anything to do with it, unless I'm misunderstanding.\nOr will it work to do this:\nStill, not sure if you want to release a containerVM version that doesn't accept the old format?\nThanks I made it work with the YAML above! I still have errors in the logs: But the docker containers are being brought up... It's weird that is the YAML field that is missing, considering the source seems to say it's a deprecated field that will be removed at some point... I agree with your point that this needs to be fixed at head before the next containervm release. I opened to increase test coverage of standalone kubelet since we have been seeing many changes break that model recently. Cheers, Filipe\nid in v1beta2 maps to name in v1beta3 Tim's change made name required. The internal representation is like v1beta3. 
But v1beta3 is not officially stable so we don't want to use it in the GKE or containerVM docs.\nThe issue is caused by , and affect both v0.8 and v0.9 release for standalone kubelet to support old format ContainerManifests.\nUgh, I thought ID was really required. I spoke with Dawn a bit, and here's my thinking. 1) update all our docs and examples to include ID 2) in file and HTTP config sources, check for Name = \"\" and apply a default, maybe log a warning that this is deprecated 3) wait for the ContainerManifest -Pod conversion to really make it required", "positive_passages": [{"docid": "doc-en-kubernetes-28885ed1d23d44408ab386dcfc093d727dd43da7bc4118c2245dfc8135ee2e4b", "text": "pod.UID = types.UID(hex.EncodeToString(hasher.Sum(nil)[0:])) glog.V(5).Infof(\"Generated UID %q for pod %q from URL %s\", pod.UID, pod.Name, url) } // This is required for backward compatibility, and should be removed once we // completely deprecate ContainerManifest. if len(pod.Name) == 0 { pod.Name = string(pod.UID) glog.V(5).Infof(\"Generate Name %q from UID %q from URL %s\", pod.Name, pod.UID, url) } if len(pod.Namespace) == 0 { hasher := adler32.New() fmt.Fprint(hasher, url)", "commid": "kubernetes_pr_3725"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8d9e539297dc070efbb3a72a40e6c4dd30fcde81ce3b97c5e3f1b6dd525ae415", "query": "I'm trying containervm with kubelet built from head and this YAML file: The YAML file is v1beta2 and doesn't set a name to the Pod. When starting kubelet, it doesn't start the \"nc\" echo server and complains about this on the logs: Chatting with on IM she mentioned it might be due to a change from that now requires non-empty pod names (replaces random generation with validation for non-emptiness) but that means all existing YAML files (even the ones with an older version) will stop working... Is that expected? Thanks! Filipe\nYes, the latest kubelet call validation on all sources: etcd, file, http, etc. now, and PodName is a required field. 
One potential fix for this is also invoke conversion code before call validation, like what we have done at apiserver. Thoughts?\nJust so I get unblocked... Do you have an example YAML file that will set the pod name and will work? Right now this is what I'm using: Should I bump up version to v1beta3 and include a new field somewhere? Thanks! Filipe\nThe type of that yaml is not pod, and the kubelet code doesn't know how to read pods from a file. So, I think change needs to be reverted before you can make the next ContainerVM release. The v1beta2 vs v1beta3 should not have anything to do with it, unless I'm misunderstanding.\nOr will it work to do this:\nStill, not sure if you want to release a containerVM version that doesn't accept the old format?\nThanks I made it work with the YAML above! I still have errors in the logs: But the docker containers are being brought up... It's weird that is the YAML field that is missing, considering the source seems to say it's a deprecated field that will be removed at some point... I agree with your point that this needs to be fixed at head before the next containervm release. I opened to increase test coverage of standalone kubelet since we have been seeing many changes break that model recently. Cheers, Filipe\nid in v1beta2 maps to name in v1beta3 Tim's change made name required. The internal representation is like v1beta3. But v1beta3 is not officially stable so we don't want to use it in the GKE or containerVM docs.\nThe issue is caused by , and affect both v0.8 and v0.9 release for standalone kubelet to support old format ContainerManifests.\nUgh, I thought ID was really required. I spoke with Dawn a bit, and here's my thinking. 
1) update all our docs and examples to include ID 2) in file and HTTP config sources, check for Name = \"\" and apply a default, maybe log a warning that this is deprecated 3) wait for the ContainerManifest -> Pod conversion to really make it required", "positive_passages": [{"docid": "doc-en-kubernetes-6e8ce1efddd5f85bb7905a417c527be927c8521935072a1224919598b8519b7b", "text": "}), }, { desc: \"Single manifest without ID\", manifests: api.ContainerManifest{Version: \"v1beta1\", UUID: \"111\"}, expected: CreatePodUpdate(kubelet.SET, kubelet.HTTPSource, api.BoundPod{ ObjectMeta: api.ObjectMeta{ UID: \"111\", Name: \"111\", Namespace: \"foobar\", }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicy{Always: &api.RestartPolicyAlways{}}, DNSPolicy: api.DNSClusterFirst, }, }), }, { desc: \"Multiple manifests\", manifests: []api.ContainerManifest{ {Version: \"v1beta1\", ID: \"foo\", UUID: \"111\", Containers: []api.Container{{Name: \"1\", Image: \"foo\"}}},", "commid": "kubernetes_pr_3725"}], "negative_passages": []}
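Step 2 of that plan is what the pr_3725 passages implement: the file and HTTP config sources derive a UID from the source location and then fall back to the UID when Name is empty, so old ContainerManifests without an ID keep working. A hedged sketch of that defaulting (the hashing and namespace choice here are simplified stand-ins, not the kubelet's exact scheme):

```go
package main

import (
	"crypto/md5" // stand-in; the real code hashes differently per source
	"fmt"
)

type pod struct {
	UID, Name, Namespace string
}

// applyDefaults fills in UID, Name, and Namespace for a pod read from a
// file or URL source, mirroring the backward-compatibility path quoted
// above: a missing Name is defaulted from the UID (deprecated behavior
// kept only until ContainerManifest is fully retired).
func applyDefaults(p *pod, source string) {
	if p.UID == "" {
		p.UID = fmt.Sprintf("%x", md5.Sum([]byte(source)))
	}
	if p.Name == "" {
		p.Name = p.UID // deprecated: manifests should set an ID/name
	}
	if p.Namespace == "" {
		p.Namespace = "default"
	}
}

func main() {
	p := pod{}
	applyDefaults(&p, "/etc/kubernetes/manifests/redis.yaml")
	fmt.Println(p.Name == p.UID, p.Namespace)
}
```

This is why the "Single manifest without ID" test case above expects `Name: "111"` to equal the UID.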
The service is basic enough, it should be straightforward to do the same POST/GET from go that the js on the guestbook homepage does.\nand I talked about this and for a while and think that they should remain as tests. This test needs to be improved: it needs to be turned into an actual test (we don't do anything with the service).\nhas not passed on gce or gke-ci since around 11am this morning (around when this PR was merged). Can you look at the Jenkins logs and try to figure out what is going wrong?\nThe problem is with guestbook itself not with the test. The frontend cannot connect to redis data base so it returns error response to the user. I created issue for that\nI have been 100% absorbed for weeks with my own e2e test so I've not paid much attention to this. Does the test connect to the service via an external load balancer?\nNope. It's using proxy. For example:\nIs this a reasonable use of the proxy service? Today told me that the proxy service is fine for contacting cluster-level services (note the dash Brian) but for regular applications we should use some other mechanism (e.g. external load balancer). ?\nMy first idea was to use ELB but it requires to create also firewall-rules using. I'm not sure if we want to do it.\nPR has been reverted by The test is passing for me locally. I'll updated . Is there any chance that using proxy instead of ELB can cause the problem?\nPersonally, I think -- for now -- using an ELB is fine -- by setting the use external load balancer field in the System specification. I am working on a PR to fix an issue relating to the use of application endpoints using the proxy server (to address issue -- which can bite you if one of the fields of a JSON response contains a field called ).\nIt sounds to me like there are 2 issues: should the guestbook example make the guestbook app accessible to users? should the test test correctness? I think these are 2 totally different things. The right answer to (1) is clearly external IPs. 
That's what we want to teach users to do. (2) likely doesn't have the same requirements. Conflating the 2 cases might not be helpful. However, if you want to do that, use the external load balancer, not the proxy.\nYes, I am torn about this too. Another thing to consider is that a test that uses an external load balancer is going to take longer because it takes much longer to create a service with an external load balancer (almost instant with ELB vs. perhaps up to a minute with a load balancer) and then there is a teardown step as well when you delete the service (which deletes the forwarding rules etc.). As a point of principle we should decide if it is an approved use of the proxy mechansim for use in tests in order to make them more lightweight vs. using an ELB.\nSo there is agreement to use ELB. What about firewall rule? Port 8000 should be opened, I can do it by using gcloud, however it's not provider agnostic operation. Is there any other option for that?\nSadly not: I suggest just using gcloud / making the API calls for the firewall and keep this as a GCE specific test. 
Which is indeed a shame.\nReopening since the PR was reverted in", "positive_passages": [{"docid": "doc-en-kubernetes-c21d831a1c6df0f8bb2092e36ff1d08333b0382da52b6066f7af08d803d248a4", "text": "source \"${KUBE_ROOT}/cluster/kube-env.sh\" source \"${KUBE_VERSION_ROOT}/cluster/${KUBERNETES_PROVIDER}/util.sh\" function wait_for_running() { echo \"Waiting for pods to come up.\" frontends=$(${KUBECTL} get pods -l name=frontend -o template '--template={{range.items}}{{.id}} {{end}}') master=$(${KUBECTL} get pods -l name=redis-master -o template '--template={{range.items}}{{.id}} {{end}}') slaves=$(${KUBECTL} get pods -l name=redisslave -o template '--template={{range.items}}{{.id}} {{end}}') pods=$(echo $frontends $master $slaves) all_running=0 for i in $(seq 1 30); do sleep 10 all_running=1 for pod in $pods; do status=$(${KUBECTL} get pods $pod -o template '--template={{.currentState.status}}') || true if [[ \"$status\" == \"Pending\" ]]; then all_running=0 break fi done if [[ \"${all_running}\" == 1 ]]; then break fi done if [[ \"${all_running}\" == 0 ]]; then echo \"Pods did not come up in time\" exit 1 fi } function teardown() { ${KUBECTL} stop -f \"${GUESTBOOK}\" } prepare-e2e GUESTBOOK=\"${KUBE_ROOT}/examples/guestbook\" echo \"WARNING: this test is a no op that only attempts to launch guestbook resources.\" # Launch the guestbook example ${KUBECTL} create -f \"${GUESTBOOK}\" sleep 15 POD_LIST_1=$(${KUBECTL} get pods -o template '--template={{range.items}}{{.id}} {{end}}') echo \"Pods running: ${POD_LIST_1}\" # TODO make this an actual test. 
Open up a firewall and use curl to post and # read a message via the frontend ${KUBECTL} stop -f \"${GUESTBOOK}\" POD_LIST_2=$(${KUBECTL} get pods -o template '--template={{range.items}}{{.id}} {{end}}') echo \"Pods running after shutdown: ${POD_LIST_2}\" trap \"teardown\" EXIT # Verify that all pods are running wait_for_running get-password detect-master # Add a new entry to the guestbook and verify that it was remembered FRONTEND_ADDR=https://${KUBE_MASTER_IP}/api/v1beta1/proxy/services/frontend echo \"Waiting for frontend to serve content\" serving=0 for i in $(seq 1 12); do ENTRY=$(curl ${FRONTEND_ADDR}/index.php?cmd=get&key=messages --insecure --user ${KUBE_USER}:${KUBE_PASSWORD}) echo $ENTRY if [[ $ENTRY == '{\"data\": \"\"}' ]]; then serving=1 break fi sleep 10 done if [[ \"${serving}\" == 0 ]]; then echo \"Pods did not start serving content in time\" exit 1 fi curl ${FRONTEND_ADDR}/index.php?cmd=set&key=messages&value=TestEntry --insecure --user ${KUBE_USER}:${KUBE_PASSWORD} ENTRY=$(curl ${FRONTEND_ADDR}/index.php?cmd=get&key=messages --insecure --user ${KUBE_USER}:${KUBE_PASSWORD}) if [[ $ENTRY != '{\"data\": \"TestEntry\"}' ]]; then echo \"Wrong entry received: ${ENTRY}\" exit 1 fi exit 0", "commid": "kubernetes_pr_4588"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c037c8cde80ec7f20aa2f250e008d71bb0af3d2acb8876fcb8161235bc3e00f", "query": "Part of issue . Re-write test in Go, generate unique names for top level resources, ensure test is idempotent, clean up all resources at end of test.\nWhat should we do about this test? The current test in only brings up the guestbook but doesn't even confirm that it is up or tries to connect to any of the ports... Was the point of that test case to test /? Does it make sense to rewrite this test in Go or do we already have more test coverage from e.g. ? 
If it makes sense to rewrite it in Go, what should we accomplish with this test?\nHow about a functional test that inserts some names and tries to retrieve them i.e. enough to know that the service has basic functionality?\n+1 to an actual functional test. The service is basic enough, it should be straightforward to do the same POST/GET from go that the js on the guestbook homepage does.\nand I talked about this and for a while and think that they should remain as tests. This test needs to be improved: it needs to be turned into an actual test (we don't do anything with the service).\nhas not passed on gce or gke-ci since around 11am this morning (around when this PR was merged). Can you look at the Jenkins logs and try to figure out what is going wrong?\nThe problem is with guestbook itself not with the test. The frontend cannot connect to redis data base so it returns error response to the user. I created issue for that\nI have been 100% absorbed for weeks with my own e2e test so I've not paid much attention to this. Does the test connect to the service via an external load balancer?\nNope. It's using proxy. For example:\nIs this a reasonable use of the proxy service? Today told me that the proxy service is fine for contacting cluster-level services (note the dash Brian) but for regular applications we should use some other mechanism (e.g. external load balancer). ?\nMy first idea was to use ELB but it requires to create also firewall-rules using. I'm not sure if we want to do it.\nPR has been reverted by The test is passing for me locally. I'll updated . Is there any chance that using proxy instead of ELB can cause the problem?\nPersonally, I think -- for now -- using an ELB is fine -- by setting the use external load balancer field in the System specification. 
I am working on a PR to fix an issue relating to the use of application endpoints using the proxy server (to address issue -- which can bite you if one of the fields of a JSON response contains a field called ).\nIt sounds to me like there are 2 issues: should the guestbook example make the guestbook app accessible to users? should the test test correctness? I think these are 2 totally different things. The right answer to (1) is clearly external IPs. That's what we want to teach users to do. (2) likely doesn't have the same requirements. Conflating the 2 cases might not be helpful. However, if you want to do that, use the external load balancer, not the proxy.\nYes, I am torn about this too. Another thing to consider is that a test that uses an external load balancer is going to take longer because it takes much longer to create a service with an external load balancer (almost instant with ELB vs. perhaps up to a minute with a load balancer) and then there is a teardown step as well when you delete the service (which deletes the forwarding rules etc.). As a point of principle we should decide if it is an approved use of the proxy mechansim for use in tests in order to make them more lightweight vs. using an ELB.\nSo there is agreement to use ELB. What about firewall rule? Port 8000 should be opened, I can do it by using gcloud, however it's not provider agnostic operation. Is there any other option for that?\nSadly not: I suggest just using gcloud / making the API calls for the firewall and keep this as a GCE specific test. 
Which is indeed a shame.\nReopening since the PR was reverted in", "positive_passages": [{"docid": "doc-en-kubernetes-4aca493934e234da31e97e0c81db16b78e41f12174d3281d64dcace2fdbc1400", "text": "local REGION=${ZONE%-*} gcloud compute forwarding-rules delete -q --region ${REGION} \"${INSTANCE_PREFIX}-default-frontend\" || true gcloud compute target-pools delete -q --region ${REGION} \"${INSTANCE_PREFIX}-default-frontend\" || true gcloud compute firewall-rules delete guestbook-e2e-minion-8000 -q || true fi } function wait_for_running() { echo \"Waiting for pods to come up.\" local frontends master slaves pods all_running status i pod frontends=($(${KUBECTL} get pods -l name=frontend -o template '--template={{range.items}}{{.id}} {{end}}')) master=($(${KUBECTL} get pods -l name=redis-master -o template '--template={{range.items}}{{.id}} {{end}}')) slaves=($(${KUBECTL} get pods -l name=redis-slave -o template '--template={{range.items}}{{.id}} {{end}}')) pods=(\"${frontends[@]}\" \"${master[@]}\" \"${slaves[@]}\") all_running=0 for i in {1..30}; do all_running=1 for pod in \"${pods[@]}\"; do status=$(${KUBECTL} get pods \"${pod}\" -o template '--template={{.currentState.status}}') || true if [[ \"$status\" != \"Running\" ]]; then all_running=0 break fi done if [[ \"${all_running}\" == 1 ]]; then break fi sleep 10 done if [[ \"${all_running}\" == 0 ]]; then echo \"Pods did not come up in time\" return 1 fi }", "commid": "kubernetes_pr_5294"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7c037c8cde80ec7f20aa2f250e008d71bb0af3d2acb8876fcb8161235bc3e00f", "query": "Part of issue . Re-write test in Go, generate unique names for top level resources, ensure test is idempotent, clean up all resources at end of test.\nWhat should we do about this test? The current test in only brings up the guestbook but doesn't even confirm that it is up or tries to connect to any of the ports... Was the point of that test case to test /? 
Does it make sense to rewrite this test in Go or do we already have more test coverage from e.g. ? If it makes sense to rewrite it in Go, what should we accomplish with this test?\nHow about a functional test that inserts some names and tries to retrieve them i.e. enough to know that the service has basic functionality?\n+1 to an actual functional test. The service is basic enough, it should be straightforward to do the same POST/GET from go that the js on the guestbook homepage does.\nand I talked about this and for a while and think that they should remain as tests. This test needs to be improved: it needs to be turned into an actual test (we don't do anything with the service).\nhas not passed on gce or gke-ci since around 11am this morning (around when this PR was merged). Can you look at the Jenkins logs and try to figure out what is going wrong?\nThe problem is with guestbook itself not with the test. The frontend cannot connect to redis data base so it returns error response to the user. I created issue for that\nI have been 100% absorbed for weeks with my own e2e test so I've not paid much attention to this. Does the test connect to the service via an external load balancer?\nNope. It's using proxy. For example:\nIs this a reasonable use of the proxy service? Today told me that the proxy service is fine for contacting cluster-level services (note the dash Brian) but for regular applications we should use some other mechanism (e.g. external load balancer). ?\nMy first idea was to use ELB but it requires to create also firewall-rules using. I'm not sure if we want to do it.\nPR has been reverted by The test is passing for me locally. I'll updated . Is there any chance that using proxy instead of ELB can cause the problem?\nPersonally, I think -- for now -- using an ELB is fine -- by setting the use external load balancer field in the System specification. 
I am working on a PR to fix an issue relating to the use of application endpoints using the proxy server (to address issue -- which can bite you if one of the fields of a JSON response contains a field called ).\nIt sounds to me like there are 2 issues: should the guestbook example make the guestbook app accessible to users? should the test test correctness? I think these are 2 totally different things. The right answer to (1) is clearly external IPs. That's what we want to teach users to do. (2) likely doesn't have the same requirements. Conflating the 2 cases might not be helpful. However, if you want to do that, use the external load balancer, not the proxy.\nYes, I am torn about this too. Another thing to consider is that a test that uses an external load balancer is going to take longer because it takes much longer to create a service with an external load balancer (almost instant with ELB vs. perhaps up to a minute with a load balancer) and then there is a teardown step as well when you delete the service (which deletes the forwarding rules etc.). As a point of principle we should decide if it is an approved use of the proxy mechanism for use in tests in order to make them more lightweight vs. using an ELB.\nSo there is agreement to use ELB. What about firewall rule? Port 8000 should be opened, I can do it by using gcloud, however it's not provider agnostic operation. Is there any other option for that?\nSadly not: I suggest just using gcloud / making the API calls for the firewall and keep this as a GCE specific test. 
Which is indeed a shame.\nReopening since the PR was reverted in", "positive_passages": [{"docid": "doc-en-kubernetes-2521f1e5ee1a86d2eaa960a404a5a3b0b2ca547ec69fd54e193159f6c05ff897", "text": "trap \"teardown\" EXIT echo \"WARNING: this test is a no op that only attempts to launch guestbook resources.\" # Launch the guestbook example ${KUBECTL} create -f \"${GUESTBOOK}\" sleep 15 POD_LIST_1=$(${KUBECTL} get pods -o template '--template={{range.items}}{{.id}} {{end}}') echo \"Pods running: ${POD_LIST_1}\" # TODO make this an actual test. Open up a firewall and use curl to post and # read a message via the frontend ${KUBECTL} stop -f \"${GUESTBOOK}\" POD_LIST_2=$(${KUBECTL} get pods -o template '--template={{range.items}}{{.id}} {{end}}') echo \"Pods running after shutdown: ${POD_LIST_2}\" # Verify that all pods are running wait_for_running if [[ \"${KUBERNETES_PROVIDER}\" == \"gce\" ]]; then gcloud compute firewall-rules create --allow=tcp:8000 --network=\"${NETWORK}\" --target-tags=\"${MINION_TAG}\" guestbook-e2e-minion-8000 fi # Add a new entry to the guestbook and verify that it was remembered frontend_addr=$(${KUBECTL} get service frontend -o template '--template={{range .publicIPs}}{{.}}{{end}}:{{.port}}') echo \"Waiting for frontend to serve content\" serving=0 for i in {1..12}; do entry=$(curl \"http://${frontend_addr}/index.php?cmd=get&key=messages\") || true echo ${entry} if [[ \"${entry}\" == '{\"data\": \"\"}' ]]; then serving=1 break fi sleep 10 done if [[ \"${serving}\" == 0 ]]; then echo \"Pods did not start serving content in time\" exit 1 fi curl \"http://${frontend_addr}/index.php?cmd=set&key=messages&value=TestEntry\" entry=$(curl \"http://${frontend_addr}/index.php?cmd=get&key=messages\") if [[ \"${entry}\" != '{\"data\": \"TestEntry\"}' ]]; then echo \"Wrong entry received: ${entry}\" exit 1 fi exit 0", "commid": "kubernetes_pr_5294"}], "negative_passages": []} {"query_id": 
"q-en-kubernetes-8857d4913f590dc5c0dae632efd4f6f9d7848af1d63f93740d749815422076cf", "query": "I stumbled across this while experimenting with kubeconfig load order/merging. There are various helpers constructed from a Factory that use , however this always uses the same default ClientConfig because doesn't use the passed in version param: Am I missing something or is this a bug? /cc\nLooks like a bug, clientforversion should accept \"\" (which can provide a default) and SetKubernetesDefaults has the final fallback protection (by using latest.Version). If version != defaultConfig.Version, we should copy defaultConfig, set version, then apply kubedefaults. The reason we take version here is if a resource is specified via file which only exists in a newer version, ie user says: --api-version=v1beta1 -f RESTMapper can handle that newresource is only valid for v1beta3 and apply the right override, then ask for client. --api-version is expected to bubble up via default config.", "positive_passages": [{"docid": "doc-en-kubernetes-3bb0c9cf2b1c619f738855e9398944c4c0e083699002655ddb0ef6fa6d5cef5d", "text": "return nil, err } c.defaultConfig = config if c.matchVersion { if err := client.MatchesServerVersion(config); err != nil { return nil, err } } } // TODO: have a better config copy method config := *c.defaultConfig if len(version) != 0 { config.Version = version } client.SetKubernetesDefaults(&config) return &config, nil }", "commid": "kubernetes_pr_3940"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8857d4913f590dc5c0dae632efd4f6f9d7848af1d63f93740d749815422076cf", "query": "I stumbled across this while experimenting with kubeconfig load order/merging. There are various helpers constructed from a Factory that use , however this always uses the same default ClientConfig because doesn't use the passed in version param: Am I missing something or is this a bug? 
/cc\nLooks like a bug, clientforversion should accept \"\" (which can provide a default) and SetKubernetesDefaults has the final fallback protection (by using latest.Version). If version != defaultConfig.Version, we should copy defaultConfig, set version, then apply kubedefaults. The reason we take version here is if a resource is specified via file which only exists in a newer version, ie user says: --api-version=v1beta1 -f RESTMapper can handle that newresource is only valid for v1beta3 and apply the right override, then ask for client. --api-version is expected to bubble up via default config.", "positive_passages": [{"docid": "doc-en-kubernetes-55f207ba73aadcafbc935a6ef60f45ffa7231bc70d967395cac2e7b8e32bb1d0", "text": "\"fmt\" \"io\" \"io/ioutil\" \"testing\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api/latest\"", "commid": "kubernetes_pr_3940"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8857d4913f590dc5c0dae632efd4f6f9d7848af1d63f93740d749815422076cf", "query": "I stumbled across this while experimenting with kubeconfig load order/merging. There are various helpers constructed from a Factory that use , however this always uses the same default ClientConfig because doesn't use the passed in version param: Am I missing something or is this a bug? /cc\nLooks like a bug, clientforversion should accept \"\" (which can provide a default) and SetKubernetesDefaults has the final fallback protection (by using latest.Version). If version != defaultConfig.Version, we should copy defaultConfig, set version, then apply kubedefaults. The reason we take version here is if a resource is specified via file which only exists in a newer version, ie user says: --api-version=v1beta1 -f RESTMapper can handle that newresource is only valid for v1beta3 and apply the right override, then ask for client. 
--api-version is expected to bubble up via default config.", "positive_passages": [{"docid": "doc-en-kubernetes-40fde2b99eef1bccc273a042dda6da4aa81f071b4b1e1e52e030651613ebe00a", "text": "func stringBody(body string) io.ReadCloser { return ioutil.NopCloser(bytes.NewReader([]byte(body))) } // Verify that resource.RESTClients constructed from a factory respect mapping.APIVersion func TestClientVersions(t *testing.T) { f := NewFactory(nil) versions := []string{ \"v1beta1\", \"v1beta2\", \"v1beta3\", } for _, version := range versions { mapping := &meta.RESTMapping{ APIVersion: version, } c, err := f.RESTClient(nil, mapping) if err != nil { t.Errorf(\"unexpected error: %v\", err) } client := c.(*client.RESTClient) if client.APIVersion() != version { t.Errorf(\"unexpected Client APIVersion: %s %v\", client.APIVersion, client) } } } ", "commid": "kubernetes_pr_3940"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bb0cafb0475cc6e5be0cdac2147f4b242ac75aa090a22ab5c84cfc4d4c244da0", "query": "We should at least print the local client version before we try to create a server client for version checking.\n+1. It took me quite a while to figure out that kubectl version -c was the magic incantation to only print the client version.\nSorry, didn't realize it was assigned until after I had the PR ready :P If you also had something ready feel free to ignore mine.", "positive_passages": [{"docid": "doc-en-kubernetes-e556777f4bcb0dae76d700286af9033698d3970b2990016236956b6725b3cdec", "text": ") func GetVersion(w io.Writer, kubeClient client.Interface) { GetClientVersion(w) serverVersion, err := kubeClient.ServerVersion() if err != nil { fmt.Printf(\"Couldn't read version from server: %vn\", err) os.Exit(1) } GetClientVersion(w) fmt.Fprintf(w, \"Server Version: %#vn\", *serverVersion) }", "commid": "kubernetes_pr_4367"}], "negative_passages": []} {"query_id": "q-en-kubernetes-deb75b38aec5bebbcea339d029a1dfeb2da737ca01d1c960040375471f28ccca", "query": "Part of umbrella issue . 
cc But this is my favorite shell test; it's so fast!", "positive_passages": [{"docid": "doc-en-kubernetes-0da7129163210d35d5e034c206a31ca7116622cfda9a6125902e6427b4726adf", "text": " #!/bin/bash # Copyright 2014 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Launches a container and verifies it can be reached. Assumes that # we're being called by hack/e2e-test.sh (we use some env vars it sets up). set -o errexit set -o nounset set -o pipefail KUBE_ROOT=$(dirname \"${BASH_SOURCE}\")/../.. : ${KUBE_VERSION_ROOT:=${KUBE_ROOT}} : ${KUBECTL:=\"${KUBE_VERSION_ROOT}/cluster/kubectl.sh\"} : ${KUBE_CONFIG_FILE:=\"config-test.sh\"} export KUBECTL KUBE_CONFIG_FILE source \"${KUBE_ROOT}/cluster/kube-env.sh\" source \"${KUBE_VERSION_ROOT}/cluster/${KUBERNETES_PROVIDER}/util.sh\" prepare-e2e if [[ \"${KUBERNETES_PROVIDER}\" != \"gce\" ]] && [[ \"${KUBERNETES_PROVIDER}\" != \"gke\" ]]; then echo \"WARNING: Skipping certs.sh for cloud provider: ${KUBERNETES_PROVIDER}.\" exit 0 fi # Set KUBE_MASTER detect-master # IMPORTANT: there are upstream things that rely on these files. # Do *not* fix this test by changing this path, unless you _really_ know # what you are doing. 
for file in kubecfg.key kubecfg.crt ca.crt; do echo \"Checking for ${file}\" \"${GCLOUD}\" compute ssh --zone=\"${ZONE}\" \"${KUBE_MASTER}\" --command \"ls /srv/kubernetes/${file}\" done ", "commid": "kubernetes_pr_4826"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8fe8d41011716a5e2c28a734b3bece1d608434eb521eddfc89e8a0849b497adc", "query": "I've reported this before.\nFrom a Makefile I get exit status 255.\nJust to be clear (sorry for the basic question), are you saying that when the service file is malformed the crash is unhelpful for debugging? Or that creating any/all services using is currently broken?\nThe YAML file is not malformed. I think there was something perhaps not quite right about firewalls/rules etc. and this caused to crash unhelpfully.", "positive_passages": [{"docid": "doc-en-kubernetes-6e46cf9dbab0e8b8adcf862ab681f4b9eb96f39f21e0fa64cf400f3dc1fb52a1", "text": "FilenameParam(flags.Filenames...). Flatten(). Do() checkErr(r.Err()) count := 0 err = r.Visit(func(info *resource.Info) error {", "commid": "kubernetes_pr_4281"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8fe8d41011716a5e2c28a734b3bece1d608434eb521eddfc89e8a0849b497adc", "query": "I've reported this before.\nFrom a Makefile I get exit status 255.\nJust to be clear (sorry for the basic question), are you saying that when the service file is malformed the crash is unhelpful for debugging? Or that creating any/all services using is currently broken?\nThe YAML file is not malformed. I think there was something perhaps not quite right about firewalls/rules etc. and this caused to crash unhelpfully.", "positive_passages": [{"docid": "doc-en-kubernetes-9694cda8568a3beb9abf77d5b9a988ca5af852d00018d4f4ba6d6e124297ea59", "text": "ResourceTypeOrNameArgs(args...). Flatten(). 
Do() checkErr(r.Err()) found := 0 err = r.IgnoreErrors(errors.IsNotFound).Visit(func(r *resource.Info) error {", "commid": "kubernetes_pr_4281"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8fe8d41011716a5e2c28a734b3bece1d608434eb521eddfc89e8a0849b497adc", "query": "I've reported this before.\nFrom a Makefile I get exit status 255.\nJust to be clear (sorry for the basic question), are you saying that when the service file is malformed the crash is unhelpful for debugging? Or that creating any/all services using is currently broken?\nThe YAML file is not malformed. I think there was something perhaps not quite right about firewalls/rules etc. and this caused to crash unhelpfully.", "positive_passages": [{"docid": "doc-en-kubernetes-b3a93958c0480da0f086458ff2d0abf0df0e6985436ec08cc22d5f694879cb4f", "text": "ResourceTypeOrNameArgs(args...). SingleResourceType(). Do() checkErr(r.Err()) mapping, err := r.ResourceMapping() checkErr(err)", "commid": "kubernetes_pr_4281"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8fe8d41011716a5e2c28a734b3bece1d608434eb521eddfc89e8a0849b497adc", "query": "I've reported this before.\nFrom a Makefile I get exit status 255.\nJust to be clear (sorry for the basic question), are you saying that when the service file is malformed the crash is unhelpful for debugging? Or that creating any/all services using is currently broken?\nThe YAML file is not malformed. I think there was something perhaps not quite right about firewalls/rules etc. and this caused to crash unhelpfully.", "positive_passages": [{"docid": "doc-en-kubernetes-72596694c15ff00fcd0ebed0dc03f0a48335d0e1ae4c4c43c7d9298ded504c37", "text": "FilenameParam(flags.Filenames...). Flatten(). 
Do() checkErr(r.Err()) patch := cmdutil.GetFlagString(cmd, \"patch\") if len(flags.Filenames) == 0 && len(patch) == 0 {", "commid": "kubernetes_pr_4281"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b0f3dc2cf2d68fb8ee1e89f48638b658c3ae4124a5a63a21f7ced93b1e767d8a", "query": "I thought I had worked this out during development, but from an IRC report from off a 0.10 GKE cluster, SkyDNS isn't starting. A bit of log trawling: Maybe I closed too early?\ngist with additional info:\n(FWIW, this came up the next time, and we test this setup all the time using \"Services should provide DNS for the cluster\" ) but even if this is a low grade flake, I take a bit of a personal affront.)\nThe cluster where SkyDNS didn't come up was created via the console at It was a cluster size of 2 on us-central1-b.", "positive_passages": [{"docid": "doc-en-kubernetes-810c83a0cc98b111cfe90415c1b1d46aca551ee86fbc8fff39752b490dd35377", "text": "# The business logic for whether a given object should be created # was already enforced by salt, and /etc/kubernetes/addons is the # managed result is of that. Start everything below that directory. echo \"== Kubernetes addon manager started at $(date -Is) ==\" KUBECTL=/usr/local/bin/kubectl function create-object() { obj=$1 for tries in {1..5}; do if ${KUBECTL} --server=\"127.0.0.1:8080\" create --validate=true -f ${obj}; then return fi echo \"++ ${obj} failed, attempt ${try} (sleeping 5) ++\" sleep 5 done } echo \"== Kubernetes addon manager started at $(date -Is) ==\" for obj in $(find /etc/kubernetes/addons -name *.yaml); do ${KUBECTL} --server=\"127.0.0.1:8080\" create -f ${obj} & create-object ${obj} & echo \"++ addon ${obj} started in pid $! ++\" done noerrors=\"true\"", "commid": "kubernetes_pr_5330"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. 
It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-ee8d8600f1d0b96e5e500b241d5815dc4a2098bfa14b6a959ec296cc7efb8520", "text": "detect-project &> /dev/null export PATH=$(get_absolute_dirname $kubectl):$PATH kubectl=\"${GCLOUD}\" fi if [[ \"$KUBERNETES_PROVIDER\" == \"vagrant\" ]]; then # When we are using vagrant it has hard coded auth. We repeat that here so that # we don't clobber auth that might be used for a publicly facing cluster. config=( \"--auth-path=$HOME/.kubernetes_vagrant_auth\" ) elif [[ \"${KUBERNETES_PROVIDER}\" == \"gke\" ]]; then # GKE runs kubectl through gcloud. config=( \"preview\"", "commid": "kubernetes_pr_4401"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? 
should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-d8a8d8bb01f98d1394bd0f60c6a1ada87e363a154898b9d833b6165f9142c697", "text": "\"--zone=${ZONE}\" \"--cluster=${CLUSTER_NAME}\" ) elif [[ \"$KUBERNETES_PROVIDER\" == \"vagrant\" ]]; then # When we are using vagrant it has hard coded auth. We repeat that here so that # we don't clobber auth that might be used for a publicly facing cluster. config=( \"--auth-path=$HOME/.kubernetes_vagrant_auth\" ) fi detect-master > /dev/null if [[ -n \"${KUBE_MASTER_IP-}\" && -z \"${KUBERNETES_MASTER-}\" ]]; then export KUBERNETES_MASTER=https://${KUBE_MASTER_IP} fi echo \"current-context: \"$(${kubectl} config view -o template --template='{{index . \"current-context\"}}')\"\" echo \"Running:\" \"${kubectl}\" \"${config[@]:+${config[@]}}\" \"${@+$@}\" >&2 \"${kubectl}\" \"${config[@]:+${config[@]}}\" \"${@+$@}\"", "commid": "kubernetes_pr_4401"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? 
should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-dd14175eeef8847b088e9c99b27a559deea44afc931513964a7df45489a610cb", "text": " #!/bin/bash # Copyright 2014 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. function detect-master () { echo \"Running locally\" } ", "commid": "kubernetes_pr_4401"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . 
Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-bc68184aa48bded97e0d7c2af2778fcecc6eca0328f3bc1543d7a72ada40acbf", "text": "This will build and start a lightweight local cluster, consisting of a master and a single minion. Type Control-C to shut it down. You can use the cluster/kubectl.sh script to interact with the local cluster. You must set the KUBERNETES_PROVIDER environment variable. You can use the cluster/kubectl.sh script to interact with the local cluster. hack/local-up-cluster.sh will print the commands to run to point kubectl at the local cluster. ``` export KUBERNETES_PROVIDER=local ``` ### Running a container", "commid": "kubernetes_pr_4401"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. 
I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-89a0d50330d5f3f0160534006ae5457f635b123c0c8b770c605c474d699edd70", "text": "[[ -n \"${ETCD_PID-}\" ]] && kill \"${ETCD_PID}\" [[ -n \"${ETCD_DIR-}\" ]] && rm -rf \"${ETCD_DIR}\" exit 0 }", "commid": "kubernetes_pr_4401"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. 
I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-f34b610ecddc24c3cd4232a4ca212d943fde9f37920adb6265bbd9a48515bafe", "text": "To start using your cluster, open up another terminal/tab and run: export KUBERNETES_PROVIDER=local cluster/kubectl.sh config set-cluster local --server=http://${API_HOST}:${API_PORT} --insecure-skip-tls-verify=true --global cluster/kubectl.sh config set-context local --cluster=local --global cluster/kubectl.sh config use-context local cluster/kubectl.sh EOF", "commid": "kubernetes_pr_4401"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-9a81dd5834be35eb6afc5480f1d6744bf802d32107eaf2722b12eb9c07da54d1", "text": "// 3. CurrentDirectoryPath // 4. HomeDirectoryPath // Empty filenames are ignored. Files with non-deserializable content produced errors. // The first file to set a particular value or map key wins and the value or map key is never changed. // This means that the first file to set CurrentContext will have its context preserved. 
It also means // that if two files specify a \"red-user\", only values from the first file's red-user are used. Even // The first file to set a particular map key wins and map key's value is never changed. // BUT, if you set a struct value that is NOT contained inside of map, the value WILL be changed. // This results in some odd looking logic to merge in one direction, merge in the other, and then merge the two. // It also means that if two files specify a \"red-user\", only values from the first file's red-user are used. Even // non-conflicting entries from the second file's \"red-user\" are discarded. // Relative paths inside of the .kubeconfig files are resolved against the .kubeconfig file's parent folder // and only absolute file paths are returned. func (rules *ClientConfigLoadingRules) Load() (*clientcmdapi.Config, error) { config := clientcmdapi.NewConfig() mergeConfigWithFile(config, rules.CommandLinePath) resolveLocalPaths(rules.CommandLinePath, config) kubeConfigFiles := []string{rules.CommandLinePath, rules.EnvVarPath, rules.CurrentDirectoryPath, rules.HomeDirectoryPath} mergeConfigWithFile(config, rules.EnvVarPath) resolveLocalPaths(rules.EnvVarPath, config) // first merge all of our maps mapConfig := clientcmdapi.NewConfig() for _, file := range kubeConfigFiles { mergeConfigWithFile(mapConfig, file) resolveLocalPaths(file, mapConfig) } mergeConfigWithFile(config, rules.CurrentDirectoryPath) resolveLocalPaths(rules.CurrentDirectoryPath, config) // merge all of the struct values in the reverse order so that priority is given correctly nonMapConfig := clientcmdapi.NewConfig() for i := len(kubeConfigFiles) - 1; i >= 0; i-- { file := kubeConfigFiles[i] mergeConfigWithFile(nonMapConfig, file) resolveLocalPaths(file, nonMapConfig) } mergeConfigWithFile(config, rules.HomeDirectoryPath) resolveLocalPaths(rules.HomeDirectoryPath, config) // since values are overwritten, but maps values are not, we can merge the non-map config on top of the map config and // 
get the values we expect. config := clientcmdapi.NewConfig() mergo.Merge(config, mapConfig) mergo.Merge(config, nonMapConfig) return config, nil }", "commid": "kubernetes_pr_4416"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-89b69b2b4a0c0fd5280b357025f1541bdeb31bd53485ff1ea5e0606ecb6929d6", "text": "Contexts: map[string]clientcmdapi.Context{ \"gothic-context\": {AuthInfo: \"blue-user\", Cluster: \"chicken-cluster\", Namespace: \"plane-ns\"}}, } testConfigConflictAlfa = clientcmdapi.Config{ AuthInfos: map[string]clientcmdapi.AuthInfo{ \"red-user\": {Token: \"a-different-red-token\"},", "commid": "kubernetes_pr_4416"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbcf83004ea3f0459daec6a4d94a4bece404fe8f97225f1a2fba429c343bf057", "query": "I've been pulling my hair out trying get kubectl talk to my local cluster. It seems that kubectl merges the local file with values pulled down from the cloud. I've done the following: The I just set is silently overridden. 
Questions: is this documented anywhere? should kubectl merge in my GCE cluster when I set my cloud-provider to something else? should kubectl override my previously set current-context?\nworkaround is to use flag to override the config file value.\nbut not a good UX long term\nSomeone else hit this as well: . Better fix is to have the local-up script set a context, which I will try to add today. As for the overwriting behavior, it's documented in , but based on what you describe, the intended load order is not working; local .kubeconfig should overwrite global. cc\nThat sounds infuriating. I'll take a look this morning.", "positive_passages": [{"docid": "doc-en-kubernetes-83a651d24fdd1abbc5c865eca76961c3186e91441365ed99b8370cd52be93e51", "text": "} ) func TestConflictingCurrentContext(t *testing.T) { commandLineFile, _ := ioutil.TempFile(\"\", \"\") defer os.Remove(commandLineFile.Name()) envVarFile, _ := ioutil.TempFile(\"\", \"\") defer os.Remove(envVarFile.Name()) mockCommandLineConfig := clientcmdapi.Config{ CurrentContext: \"any-context-value\", } mockEnvVarConfig := clientcmdapi.Config{ CurrentContext: \"a-different-context\", } WriteToFile(mockCommandLineConfig, commandLineFile.Name()) WriteToFile(mockEnvVarConfig, envVarFile.Name()) loadingRules := ClientConfigLoadingRules{ CommandLinePath: commandLineFile.Name(), EnvVarPath: envVarFile.Name(), } mergedConfig, err := loadingRules.Load() if err != nil { t.Errorf(\"Unexpected error: %v\", err) } if mergedConfig.CurrentContext != mockCommandLineConfig.CurrentContext { t.Errorf(\"expected %v, got %v\", mockCommandLineConfig.CurrentContext, mergedConfig.CurrentContext) } } func TestResolveRelativePaths(t *testing.T) { pathResolutionConfig1 := clientcmdapi.Config{ AuthInfos: map[string]clientcmdapi.AuthInfo{", "commid": "kubernetes_pr_4416"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to 
the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-5381e1a98ff3037853fe73b819618d954bd1e81045ac71f3de699dcb00ac2918", "text": "} // TODO use generalized labels once they are implemented (#341) b := resource.NewBuilder(mapper, typer, factory.ClientMapperForCommand(cmd)). b := resource.NewBuilder(mapper, typer, factory.ClientMapperForCommand()). NamespaceParam(cmdNamespace).DefaultNamespace(). SelectorParam(\"kubernetes.io/cluster-service=true\"). 
ResourceTypeOrNameArgs(false, []string{\"services\"}...).", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-d4c970efc4c373b752fd15bd08a6228e92b5b53ccae85ab4c4585796dca89c90", "text": "return printer, nil } // ClientMapperForCommand returns a ClientMapper for the given command and factory. func (f *Factory) ClientMapperForCommand(cmd *cobra.Command) resource.ClientMapper { // ClientMapperForCommand returns a ClientMapper for the factory. 
func (f *Factory) ClientMapperForCommand() resource.ClientMapper { return resource.ClientMapperFunc(func(mapping *meta.RESTMapping) (resource.RESTClient, error) { return f.RESTClient(mapping) })", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-f6a00898c2ac301a6351dea50661b356b736ef0704af2c443a5de13745c7a659", "text": "return err } mapper, typer := f.Object() r := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand(cmd)). 
r := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand()). ContinueOnError(). NamespaceParam(cmdNamespace).DefaultNamespace(). FilenameParam(filenames...).", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. 
Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-633755adaf3daa7cec7371a7f676a1b1d5ff596117edaf83c1091062382e1554", "text": "// handle watch separately since we cannot watch multiple resource types isWatch, isWatchOnly := util.GetFlagBool(cmd, \"watch\"), util.GetFlagBool(cmd, \"watch-only\") if isWatch || isWatchOnly { r := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand(cmd)). r := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand()). NamespaceParam(cmdNamespace).DefaultNamespace(). SelectorParam(selector). ResourceTypeOrNameArgs(true, args...).", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. 
The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-f18f9e121f452d6afbe48e99bc690bf839f66be0525f3ae6a2f0777f0260eb59", "text": "return nil } b := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand(cmd)). b := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand()). NamespaceParam(cmdNamespace).DefaultNamespace(). SelectorParam(selector). ResourceTypeOrNameArgs(true, args...).", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. 
The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-f54e55cf8bbb9d9f53d4d5e9359c0be50e1660a8c2c184148ccdf158acd8c8dd", "text": "} mapper, typer := f.Object() b := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand(cmd)). b := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand()). ContinueOnError(). NamespaceParam(cmdNamespace).DefaultNamespace(). SelectorParam(selector).", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. 
The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-d5fda7c6c6ae1d98781d5ca6088117386f77b76a780fe325955b7ac7b1bb7d47", "text": "mapper, typer := f.Object() // TODO: use resource.Builder instead obj, err := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand(cmd)). obj, err := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand()). NamespaceParam(cmdNamespace).RequireNamespace(). FilenameParam(filename). Do().", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. 
The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-56d8a1a03086c3671a08b07bf785651ef064c29d1e7fa59487a3723fde3acf6c", "text": "cmdNamespace, err := f.DefaultNamespace() cmdutil.CheckErr(err) mapper, typer := f.Object() r := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand(cmd)). r := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand()). ContinueOnError(). NamespaceParam(cmdNamespace).RequireNamespace(). ResourceTypeOrNameArgs(false, args...).", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7466b54034457325e304e8d94b4e7685272fe09d95978484a7adf32fd05db2b4", "query": "I think the need for this argument to the factory methods has passed - it means that callers who want to provide configurable behavior have to do it through the factory itself, which is acceptable. We can remove it whenever.\nI removed most of the unused arguments. But, There are a few methods still using the argument. Namely, , , and . Before proceeding any further I wanted to ask what should be done about these methods? Deprecated/Refactored/Removed ?\nThe factory methods were the most important. The pattern before was people using the flags by name (cmd.Flags().Get(\"foo\")), but now factory methods must use pointer flag vars, so there's nothing that should need the command there. The printer methods are optional units that can be used when you add the flags for them to your command. Unless we refactor the definition of those flags to use pointers and return a printer struct (something I had started a while back and never finished), it's not worth changing them right now. 
The other consumer of factory is OpenShift - we wrap the factory to add our additional types (implementations of things like describers, action verbs, etc): - but we will pick up and remove use of cmd when we rebase.\nThanks for the explanation. I've cleaned the as well. Print method APIs are untouched as you suggested.\ncc\nLooks like this was done by PR", "positive_passages": [{"docid": "doc-en-kubernetes-e78d48e54bd620eb54bbd31a262ca5ff7ad46c24ed3d86b2bef014e42290446e", "text": "} mapper, typer := f.Object() r := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand(cmd)). r := resource.NewBuilder(mapper, typer, f.ClientMapperForCommand()). ContinueOnError(). NamespaceParam(cmdNamespace).RequireNamespace(). FilenameParam(filenames...).", "commid": "kubernetes_pr_6144"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c692c87cef94fb1baca9eba906c310bcdc8b4d6325b69948d0a021b1972f05ef", "query": "A significant number of builds fail with the error If you look at , this is the leading cause of flakes (build passes on 1.4, fails on 1.3 or vice versa).\nI am going to look into this one.\nI kept getting emails about shippable failures... Below is the snippet of the log that includes new log messages: The host field of the pod was not assigned and the check failed after 30 seconds of polling. This means either the scheduler didn't schedule the pod or it couldn't schedule the pods due to, say, conflicts. The pod in question was from I have a wild guess. Since usually only one runtime fails (1.3 or 1.4), could it have been port conflicts (hostPort: 8080) that failed the tests?\nIs this still a problem? Or can we close this issue now?\nSome bugs were discovered and fixed, but in general, timeouts are still a problem in integration tests. We can close this one though because there are other issue () tracking problems like this. 
is currently blocked on a reflector benchmark to stress test the watch performance.", "positive_passages": [{"docid": "doc-en-kubernetes-1149a532ea4c280a1077df621fb46c2cf4f7fac633a025d0a0e7e0cdb969396d", "text": "return func() (bool, error) { for i := range pods.Items { host, id, namespace := pods.Items[i].Status.Host, pods.Items[i].Name, pods.Items[i].Namespace glog.Infof(\"Check whether pod %s.%s exists on node %q\", id, namespace, host) if len(host) == 0 { glog.Infof(\"Pod %s.%s is not bound to a host yet\", id, namespace) return false, nil } if _, err := podInfo.GetPodStatus(host, namespace, id); err != nil {", "commid": "kubernetes_pr_4547"}], "negative_passages": []} {"query_id": "q-en-kubernetes-92291ba27b38eeaf8a2eb7cf1c2fd4ca17817007f4731e08ba809be4fd605c4a", "query": "No other types have helper methods, and this does weird things with pointers.\nIt's a clean build/test without that helper method. See .", "positive_passages": [{"docid": "doc-en-kubernetes-40687d02006b5b88ee96c7be54da93476087c35e3c73ebc1303404b9de7c49b9", "text": "// ResourceList is a set of (resource name, quantity) pairs. type ResourceList map[ResourceName]resource.Quantity // Get is a convenience function, which returns a 0 quantity if the // resource list is nil, empty, or lacks a value for the requested resource. // Treat as read only! func (rl ResourceList) Get(name ResourceName) *resource.Quantity { if rl == nil { return &resource.Quantity{} } q := rl[name] return &q } // Node is a worker node in Kubernetenes // The name of the node according to etcd is in ObjectMeta.Name. type Node struct {", "commid": "kubernetes_pr_4668"}], "negative_passages": []} {"query_id": "q-en-kubernetes-714ad5690812390e7d9a20665971045e80400b49fb24258316a1a9c3a074af93", "query": "Many comments in there say \"if not specified, default is ...\" - but we previously asserted that there are no defaults for the internal API. 
We should strip those comments out, and leave them in versioned structs only.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-87cfa115d501a7639afee6c49e443698c1a7fb38b9fb404b28040d74cf25dba1", "text": "HostPort int `json:\"hostPort,omitempty\"` // Required: This must be a valid port number, 0 < x < 65536. ContainerPort int `json:\"containerPort\"` // Optional: Supports \"TCP\" and \"UDP\". Defaults to \"TCP\". // Required: Supports \"TCP\" and \"UDP\". Protocol Protocol `json:\"protocol,omitempty\"` // Optional: What host IP to bind the external port to. HostIP string `json:\"hostIP,omitempty\"`", "commid": "kubernetes_pr_4541"}], "negative_passages": []} {"query_id": "q-en-kubernetes-714ad5690812390e7d9a20665971045e80400b49fb24258316a1a9c3a074af93", "query": "Many comments in there say \"if not specified, default is ...\" - but we previously asserted that there are no defaults for the internal API. We should strip those comments out, and leave them in versioned structs only.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-90eab0c297ade246b462d8ce713b25b2141e4e0ebc8aa225957b1d2f3c0a9bb8", "text": "LivenessProbe *Probe `json:\"livenessProbe,omitempty\"` ReadinessProbe *Probe `json:\"readinessProbe,omitempty\"` Lifecycle *Lifecycle `json:\"lifecycle,omitempty\"` // Optional: Defaults to /dev/termination-log // Required. TerminationMessagePath string `json:\"terminationMessagePath,omitempty\"` // Optional: Default to false. Privileged bool `json:\"privileged,omitempty\"` // Optional: Policy for pulling images for this container // Required: Policy for pulling images for this container ImagePullPolicy PullPolicy `json:\"imagePullPolicy\"` // Optional: Capabilities for container. 
Capabilities Capabilities `json:\"capabilities,omitempty\"`", "commid": "kubernetes_pr_4541"}], "negative_passages": []} {"query_id": "q-en-kubernetes-714ad5690812390e7d9a20665971045e80400b49fb24258316a1a9c3a074af93", "query": "Many comments in there say \"if not specified, default is ...\" - but we previously asserted that there are no defaults for the internal API. We should strip those comments out, and leave them in versioned structs only.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-acae5b24b4d878f4e994f62d384d1de99e59e1e9f7408a8afa59082df6ed39bd", "text": "Volumes []Volume `json:\"volumes\"` Containers []Container `json:\"containers\"` RestartPolicy RestartPolicy `json:\"restartPolicy,omitempty\"` // Optional: Set DNS policy. Defaults to \"ClusterFirst\" // Required: Set DNS policy. DNSPolicy DNSPolicy `json:\"dnsPolicy,omitempty\"` // NodeSelector is a selector which must be true for the pod to fit on a node NodeSelector map[string]string `json:\"nodeSelector,omitempty\"`", "commid": "kubernetes_pr_4541"}], "negative_passages": []} {"query_id": "q-en-kubernetes-714ad5690812390e7d9a20665971045e80400b49fb24258316a1a9c3a074af93", "query": "Many comments in there say \"if not specified, default is ...\" - but we previously asserted that there are no defaults for the internal API. We should strip those comments out, and leave them in versioned structs only.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-8dc0d96adce126a8df241fc1b7b9e5a050204c913924fe507192985a26f9fa03", "text": "// proxied by this service. Port int `json:\"port\"` // Optional: Supports \"TCP\" and \"UDP\". Defaults to \"TCP\". // Required: Supports \"TCP\" and \"UDP\". Protocol Protocol `json:\"protocol,omitempty\"` // This service will route traffic to pods having labels matching this selector. 
If empty or not present,", "commid": "kubernetes_pr_4541"}], "negative_passages": []} {"query_id": "q-en-kubernetes-714ad5690812390e7d9a20665971045e80400b49fb24258316a1a9c3a074af93", "query": "Many comments in there say \"if not specified, default is ...\" - but we previously asserted that there are no defaults for the internal API. We should strip those comments out, and leave them in versioned structs only.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-bcdafd674d5f5ca26c8c17b45449a172acf5dfa0510fb194fbc4a402546e7284", "text": "// Optional: If unspecified, the first port on the container will be used. ContainerPort util.IntOrString `json:\"containerPort,omitempty\"` // Optional: Supports \"ClientIP\" and \"None\". Used to maintain session affinity. // Required: Supports \"ClientIP\" and \"None\". Used to maintain session affinity. SessionAffinity AffinityType `json:\"sessionAffinity,omitempty\"` }", "commid": "kubernetes_pr_4541"}], "negative_passages": []} {"query_id": "q-en-kubernetes-714ad5690812390e7d9a20665971045e80400b49fb24258316a1a9c3a074af93", "query": "Many comments in there say \"if not specified, default is ...\" - but we previously asserted that there are no defaults for the internal API. We should strip those comments out, and leave them in versioned structs only.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-4ab00eaa350062fe54db1d3a5b019cacb157aab281aa7d43453540f2bbe271f7", "text": "Volumes []Volume `json:\"volumes\"` Containers []Container `json:\"containers\"` RestartPolicy RestartPolicy `json:\"restartPolicy,omitempty\"` // Optional: Set DNS policy. Defaults to \"ClusterFirst\" // Required: Set DNS policy. DNSPolicy DNSPolicy `json:\"dnsPolicy\"` }", "commid": "kubernetes_pr_4541"}], "negative_passages": []} {"query_id": "q-en-kubernetes-399b2503d95e5517e810fb8fc23f0fca9215d555088a5ac25779615594a9bf09", "query": "This is an offshoot of . 
cc Here's a log from a single showing that it's getting it's context screwed over by the other clusters coming up at the same time, even those its local environment never changed: Here's the relevant part of the new script: ( is in a separate commit and allows the override of on the GCE so that this script can work.)\nI think this can be resolved by having kube- set the context by exporting KUBECONFIG, so a subsequent run doesn't overwrite it.\nAfter some digging with it's a bit trickier than I thought. If we want to be able to run kube-up in parallel from the same branch/working directory, needs to create a separate kubeconfig for each run. But we also want kube-up to set config such that you can immediately run against the created cluster. Probably best to defer until is resolved.\ncc And the script is sitting in", "positive_passages": [{"docid": "doc-en-kubernetes-d6c212a64e91204ce1bb9c8b28a0c33157c16262ecb03197bc563300da0abeb8", "text": "} node_count=\"${NUM_MINIONS}\" next_node=\"10.244.0.0\" next_node=\"${KUBE_GCE_CLUSTER_CLASS_B:-10.244}.0.0\" node_subnet_size=24 node_subnet_count=$((2 ** (32-$node_subnet_size))) subnets=()", "commid": "kubernetes_pr_6919"}], "negative_passages": []} {"query_id": "q-en-kubernetes-399b2503d95e5517e810fb8fc23f0fca9215d555088a5ac25779615594a9bf09", "query": "This is an offshoot of . cc Here's a log from a single showing that it's getting it's context screwed over by the other clusters coming up at the same time, even those its local environment never changed: Here's the relevant part of the new script: ( is in a separate commit and allows the override of on the GCE so that this script can work.)\nI think this can be resolved by having kube- set the context by exporting KUBECONFIG, so a subsequent run doesn't overwrite it.\nAfter some digging with it's a bit trickier than I thought. If we want to be able to run kube-up in parallel from the same branch/working directory, needs to create a separate kubeconfig for each run. 
But we also want kube-up to set config such that you can immediately run against the created cluster. Probably best to defer until is resolved.\ncc And the script is sitting in", "positive_passages": [{"docid": "doc-en-kubernetes-8352a5f995b0804afbbd0751134d308e7979372b755cf4083db1db69592f2c52", "text": "next_node=$(increment_ipv4 $next_node $node_subnet_count) done CLUSTER_IP_RANGE=\"10.244.0.0/16\" CLUSTER_IP_RANGE=\"${KUBE_GCE_CLUSTER_CLASS_B:-10.244}.0.0/16\" MINION_IP_RANGES=($(eval echo \"${subnets[@]}\")) MINION_SCOPES=(\"storage-ro\" \"compute-rw\" \"https://www.googleapis.com/auth/monitoring\")", "commid": "kubernetes_pr_6919"}], "negative_passages": []} {"query_id": "q-en-kubernetes-399b2503d95e5517e810fb8fc23f0fca9215d555088a5ac25779615594a9bf09", "query": "This is an offshoot of . cc Here's a log from a single showing that it's getting it's context screwed over by the other clusters coming up at the same time, even those its local environment never changed: Here's the relevant part of the new script: ( is in a separate commit and allows the override of on the GCE so that this script can work.)\nI think this can be resolved by having kube- set the context by exporting KUBECONFIG, so a subsequent run doesn't overwrite it.\nAfter some digging with it's a bit trickier than I thought. If we want to be able to run kube-up in parallel from the same branch/working directory, needs to create a separate kubeconfig for each run. But we also want kube-up to set config such that you can immediately run against the created cluster. 
Probably best to defer until is resolved.\ncc And the script is sitting in", "positive_passages": [{"docid": "doc-en-kubernetes-3692f2da1a98d5389fd1cd251331d8c3f16bed3780cb2e59a5f7a72d35861beb", "text": "MASTER_NAME=\"${INSTANCE_PREFIX}-master\" MASTER_TAG=\"${INSTANCE_PREFIX}-master\" MINION_TAG=\"${INSTANCE_PREFIX}-minion\" CLUSTER_IP_RANGE=\"10.245.0.0/16\" MINION_IP_RANGES=($(eval echo \"10.245.{1..${NUM_MINIONS}}.0/24\")) CLUSTER_IP_RANGE=\"${KUBE_GCE_CLUSTER_CLASS_B:-10.245}.0.0/16\" MINION_IP_RANGES=($(eval echo \"${KUBE_GCE_CLUSTER_CLASS_B:-10.245}.{1..${NUM_MINIONS}}.0/24\")) MASTER_IP_RANGE=\"${MASTER_IP_RANGE:-10.246.0.0/24}\" MINION_SCOPES=(\"storage-ro\" \"compute-rw\") # Increase the sleep interval value if concerned about API rate limits. 3, in seconds, is the default. POLL_SLEEP_INTERVAL=3", "commid": "kubernetes_pr_6919"}], "negative_passages": []} {"query_id": "q-en-kubernetes-399b2503d95e5517e810fb8fc23f0fca9215d555088a5ac25779615594a9bf09", "query": "This is an offshoot of . cc Here's a log from a single showing that it's getting it's context screwed over by the other clusters coming up at the same time, even those its local environment never changed: Here's the relevant part of the new script: ( is in a separate commit and allows the override of on the GCE so that this script can work.)\nI think this can be resolved by having kube- set the context by exporting KUBECONFIG, so a subsequent run doesn't overwrite it.\nAfter some digging with it's a bit trickier than I thought. If we want to be able to run kube-up in parallel from the same branch/working directory, needs to create a separate kubeconfig for each run. But we also want kube-up to set config such that you can immediately run against the created cluster. 
Probably best to defer until is resolved.\ncc And the script is sitting in", "positive_passages": [{"docid": "doc-en-kubernetes-d7d35c293eb9d49a4df5193583b4662358b92613807ccd6a703430b817c80057", "text": " #!/bin/bash # Copyright 2015 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -o errexit set -o nounset set -o pipefail KUBE_ROOT=$(dirname \"${BASH_SOURCE}\")/.. function down-clusters { for count in $(seq 1 ${clusters}); do export KUBE_GCE_INSTANCE_PREFIX=e2e-test-${USER}-${count} local cluster_dir=${KUBE_ROOT}/_output/e2e/${KUBE_GCE_INSTANCE_PREFIX} export KUBECONFIG=${cluster_dir}/.kubeconfig go run ${KUBE_ROOT}/hack/e2e.go -down -v & done wait } function up-clusters { for count in $(seq 1 ${clusters}); do export KUBE_GCE_INSTANCE_PREFIX=e2e-test-${USER}-${count} export KUBE_GCE_CLUSTER_CLASS_B=\"10.$((${count}*2-1))\" export MASTER_IP_RANGE=\"10.$((${count}*2)).0.0/24\" local cluster_dir=${KUBE_ROOT}/_output/e2e/${KUBE_GCE_INSTANCE_PREFIX} mkdir -p ${cluster_dir} export KUBECONFIG=${cluster_dir}/.kubeconfig go run hack/e2e.go -up -v |& tee ${cluster_dir}/up.log & done fail=0 for job in $(jobs -p); do wait \"${job}\" || fail=$((fail + 1)) done if (( fail != 0 )); then echo \"${fail} cluster creation failures. 
Not continuing with tests.\" exit 1 fi } function run-tests { for count in $(seq 1 ${clusters}); do export KUBE_GCE_INSTANCE_PREFIX=e2e-test-${USER}-${count} local cluster_dir=${KUBE_ROOT}/_output/e2e/${KUBE_GCE_INSTANCE_PREFIX} export KUBECONFIG=${cluster_dir}/.kubeconfig export E2E_REPORT_DIR=${cluster_dir} go run hack/e2e.go -test --test_args=\"--ginkgo.noColor\" \"${@:-}\" -down |& tee ${cluster_dir}/e2e.log & done wait } # Outputs something like: # _output/e2e/e2e-test-zml-5/junit.xml # FAIL: Shell tests that services.sh passes function post-process { echo $1 cat $1 | python -c ' import sys from xml.dom.minidom import parse failed = False for testcase in parse(sys.stdin).getElementsByTagName(\"testcase\"): if len(testcase.getElementsByTagName(\"failure\")) != 0: failed = True print \" FAIL: {test}\".format(test = testcase.getAttribute(\"name\")) if not failed: print \" SUCCESS!\" ' } function print-results { for count in $(seq 1 ${clusters}); do for junit in ${KUBE_ROOT}/_output/e2e/e2e-test-${USER}-${count}/junit*.xml; do post-process ${junit} done done } if [[ ${KUBERNETES_PROVIDER:-gce} != \"gce\" ]]; then echo \"$0 not supported on ${KUBERNETES_PROVIDER} yet\" >&2 exit 1 fi readonly clusters=${1:-} if ! [[ \"${clusters}\" =~ ^[0-9]+$ ]]; then echo \"Usage: ${0} [options to hack/e2e.go]\" >&2 exit 1 fi shift 1 rm -rf _output/e2e down-clusters up-clusters run-tests \"${@:-}\" print-results ", "commid": "kubernetes_pr_6919"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1ae5c7614ddb98a42a715ebf40752bbc4cc609990675ef1d30825067e3fee6aa", "query": "As per we should add a common label to all of our add-on pods: DNS, logging, monitoring, etc. 
The proposal is to add \"\": \"true\" for now.\nFWIW \"true\" has to be quoted in YAML\nCan you close this one once it's completed.", "positive_passages": [{"docid": "doc-en-kubernetes-4bf8d8d501c6c1beef684d3d446a22a8bb461c0df7fe98f0d35d9ce3987e6ad2", "text": "id: monitoring-grafana port: 80 containerPort: 80 labels: name: grafana kubernetes.io/cluster-service: \"true\" selector: name: influxGrafana", "commid": "kubernetes_pr_4978"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1ae5c7614ddb98a42a715ebf40752bbc4cc609990675ef1d30825067e3fee6aa", "query": "As per we should add a common label to all of our add-on pods: DNS, logging, monitoring, etc. The proposal is to add \"\": \"true\" for now.\nFWIW \"true\" has to be quoted in YAML\nCan you close this one once it's completed.", "positive_passages": [{"docid": "doc-en-kubernetes-aabc10c4e70db98d552097afd0892cc748baaab816b67889a2a2baf013292d0a", "text": "id: monitoring-heapster port: 80 containerPort: 8082 labels: name: heapster kubernetes.io/cluster-service: \"true\" selector: name: heapster", "commid": "kubernetes_pr_4978"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1ae5c7614ddb98a42a715ebf40752bbc4cc609990675ef1d30825067e3fee6aa", "query": "As per we should add a common label to all of our add-on pods: DNS, logging, monitoring, etc. 
The proposal is to add \"\": \"true\" for now.\nFWIW \"true\" has to be quoted in YAML\nCan you close this one once it's completed.", "positive_passages": [{"docid": "doc-en-kubernetes-8d0b6df481fed03e47dd892b0c954df26cee9d00a1ca5aa1b7871f1648434237", "text": "id: monitoring-influxdb port: 80 containerPort: 8086 labels: name: influxdb kubernetes.io/cluster-service: \"true\" selector: name: influxGrafana", "commid": "kubernetes_pr_4978"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8f0f8905f49f915362932082b98bd7c82f49f0968ad531a7f06a4941081c21b7", "query": "It appears that the provision-network script that is run on the kubernetes-master can cause salt provisioning on the minions to fail or block in a forever retry timeout error loop. I think it's because of the service network.restart that happens after openvswitch is installed in provision-network. I only say this because when I disable installing docker on kubernetes-master in vagrant, I do not have an issue. - is there a different way we can structure this on the kubernetes-master? For example, could we run the script in or something?", "positive_passages": [{"docid": "doc-en-kubernetes-d6055e9b592f05244ce546ab6aec6bdf8553d581c76bc789e9826473781a1952", "text": "- file: /etc/init.d/kubelet {% endif %} - file: /var/lib/kubelet/kubernetes_auth {% if grains.network_mode is defined and grains.network_mode == 'openvswitch' %} - sls: sdn {% endif %} ", "commid": "kubernetes_pr_4699"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8f0f8905f49f915362932082b98bd7c82f49f0968ad531a7f06a4941081c21b7", "query": "It appears that the provision-network script that is run on the kubernetes-master can cause salt provisioning on the minions to fail or block in a forever retry timeout error loop. I think it's because of the service network.restart that happens after openvswitch is installed in provision-network. 
I only say this because when I disable installing docker on kubernetes-master in vagrant, I do not have an issue. - is there a different way we can structure this on the kubernetes-master? For example, could we run the script in or something?", "positive_passages": [{"docid": "doc-en-kubernetes-5b9a37e172661ce5a202c64339f186d844f5ec8062adb5ee69cbb9418faca451", "text": "{% if grains.network_mode is defined and grains.network_mode == 'openvswitch' %} openvswitch: pkg: - installed service.running: - enable: True sdn: cmd.wait: - name: /kubernetes-vagrant/network_closure.sh - watch: - pkg: docker-io - pkg: openvswitch - sls: docker {% endif %}", "commid": "kubernetes_pr_4699"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8f0f8905f49f915362932082b98bd7c82f49f0968ad531a7f06a4941081c21b7", "query": "It appears that the provision-network script that is run on the kubernetes-master can cause salt provisioning on the minions to fail or block in a forever retry timeout error loop. I think it's because of the service network.restart that happens after openvswitch is installed in provision-network. I only say this because when I disable installing docker on kubernetes-master in vagrant, I do not have an issue. - is there a different way we can structure this on the kubernetes-master? 
For example, could we run the script in or something?", "positive_passages": [{"docid": "doc-en-kubernetes-765b5d2c57ffb5494f030cea12477ed5c7952fa97dfb05520cf9603135fbcb88", "text": "- monit - nginx - kube-client-tools {% if grains['cloud'] is defined and grains['cloud'] != 'vagrant' %} - logrotate {% endif %} - kube-addons {% if grains['cloud'] is defined and grains['cloud'] == 'azure' %} - openvpn", "commid": "kubernetes_pr_4699"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8f0f8905f49f915362932082b98bd7c82f49f0968ad531a7f06a4941081c21b7", "query": "It appears that the provision-network script that is run on the kubernetes-master can cause salt provisioning on the minions to fail or block in a forever retry timeout error loop. I think it's because of the service network.restart that happens after openvswitch is installed in provision-network. I only say this because when I disable installing docker on kubernetes-master in vagrant, I do not have an issue. - is there a different way we can structure this on the kubernetes-master? For example, could we run the script in or something?", "positive_passages": [{"docid": "doc-en-kubernetes-0a3ad4c572b6c3700789a2d604fdbbbc19c6396bd0e34db946b73745d57862be", "text": "mkdir -p /etc/salt/minion.d cat </etc/salt/minion.d/master.conf master: '$(echo \"$MASTER_NAME\" | sed -e \"s/'/''/g\")' master: '$(echo \"$MASTER_NAME\" | sed -e \"s/'/''/g\")' auth_timeout: 10 auth_tries: 2 auth_safemode: True ping_interval: 1 random_reauth_delay: 3 state_aggregrate: - pkg EOF cat </etc/salt/minion.d/grains.conf", "commid": "kubernetes_pr_4699"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8f0f8905f49f915362932082b98bd7c82f49f0968ad531a7f06a4941081c21b7", "query": "It appears that the provision-network script that is run on the kubernetes-master can cause salt provisioning on the minions to fail or block in a forever retry timeout error loop. 
I think it's because of the service network.restart that happens after openvswitch is installed in provision-network. I only say this because when I disable installing docker on kubernetes-master in vagrant, I do not have an issue. - is there a different way we can structure this on the kubernetes-master? For example, could we run the script in or something?", "positive_passages": [{"docid": "doc-en-kubernetes-a6db8c287ef797d47d8d554ef5764a9dd42e50fed009290511d4403383bb5ac3", "text": "done # Let the minion know who its master is # Recover the salt-minion if the salt-master network changes ## auth_timeout - how long we want to wait for a time out ## auth_tries - how many times we will retry before restarting salt-minion ## auth_safemode - if our cert is rejected, we will restart salt minion ## ping_interval - restart the minion if we cannot ping the master after 1 minute ## random_reauth_delay - wait 0-3 seconds when reauthenticating ## recon_default - how long to wait before reconnecting ## recon_max - how long you will wait upper bound ## state_aggregrate - try to do a single yum command to install all referenced packages where possible at once, should improve startup times ## mkdir -p /etc/salt/minion.d cat </etc/salt/minion.d/master.conf master: '$(echo \"$MASTER_NAME\" | sed -e \"s/'/''/g\")' auth_timeout: 10 auth_tries: 2 auth_safemode: True ping_interval: 1 random_reauth_delay: 3 state_aggregrate: - pkg EOF cat </etc/salt/minion.d/log-level-debug.conf", "commid": "kubernetes_pr_4699"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8f0f8905f49f915362932082b98bd7c82f49f0968ad531a7f06a4941081c21b7", "query": "It appears that the provision-network script that is run on the kubernetes-master can cause salt provisioning on the minions to fail or block in a forever retry timeout error loop. I think it's because of the service network.restart that happens after openvswitch is installed in provision-network. 
I only say this because when I disable installing docker on kubernetes-master in vagrant, I do not have an issue. - is there a different way we can structure this on the kubernetes-master? For example, could we run the script in or something?", "positive_passages": [{"docid": "doc-en-kubernetes-5359c9b829d77e0556ad8d5e9b58571477b16082defc43450114d2fb5d32c5cc", "text": "# Stop docker before making these updates systemctl stop docker # Install openvswitch yum install -y openvswitch systemctl enable openvswitch systemctl start openvswitch # create new docker bridge ip link set dev ${DOCKER_BRIDGE} down || true brctl delbr ${DOCKER_BRIDGE} || true", "commid": "kubernetes_pr_4699"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8f0f8905f49f915362932082b98bd7c82f49f0968ad531a7f06a4941081c21b7", "query": "It appears that the provision-network script that is run on the kubernetes-master can cause salt provisioning on the minions to fail or block in a forever retry timeout error loop. I think it's because of the service network.restart that happens after openvswitch is installed in provision-network. I only say this because when I disable installing docker on kubernetes-master in vagrant, I do not have an issue. - is there a different way we can structure this on the kubernetes-master? For example, could we run the script in or something?", "positive_passages": [{"docid": "doc-en-kubernetes-ac737da773039ad05b4f0998cb6fd6433620fbc8898203371aa0f5e0139d48db", "text": "echo \"OPTIONS='-b=kbr0 --selinux-enabled ${DOCKER_OPTS}'\" >/etc/sysconfig/docker systemctl daemon-reload systemctl start docker } EOF", "commid": "kubernetes_pr_4699"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7cdd53074381ab5057ebc6ac3d6b49ac153717489df01e93d2a3ba62a136f72", "query": "This works: but this does not: We should somehow emphasize the need for the trailing slash for now, and then work out what is going wrong (client side scripting?) 
to fix this (our fault in the proxy or Grafana's?)\nLikewise for Kibana.\nFor the Kibana case, I observe a sequence of correctly proxied GETs e.g. to: but then I see: which fails because I assume what was really needed is: and a curl against that gives a pile of plausible Javascript.\nAfter a little thought, I think what's going on here is that web clients can't tell that is a directory without the trailing slash. One thing we could potentially do to help is to redirect (301) from to , since I don't think the former is useful for anyone.\ncc\nHow are we generating that URL? Can we append a \"/\" at the end while generating the URL? Doing a redirect or auto-adding \"/\" in the proxy for all requests could break other requests.\nThe URL is not generated anywhere. It is assumed that '/proxy/service/ // Redirect requests of the form \"/{resource}/{name}\" to \"/{resource}/{name}/\" // This is essentially a hack for https://github.com/GoogleCloudPlatform/kubernetes/issues/4958. // Note: Keep this code after tryUpgrade to not break that flow. if len(parts) == 2 && !strings.HasSuffix(req.URL.Path, \"/\") { w.Header().Set(\"Location\", req.URL.Path+\"/\") w.WriteHeader(http.StatusMovedPermanently) return } proxy := httputil.NewSingleHostReverseProxy(&url.URL{Scheme: location.Scheme, Host: location.Host}) if transport == nil { prepend := path.Join(r.prefix, resource, id)", "commid": "kubernetes_pr_6048"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7cdd53074381ab5057ebc6ac3d6b49ac153717489df01e93d2a3ba62a136f72", "query": "This works: but this does not: We should somehow emphasize the need for the trailing slash for now, and then work out what is going wrong (client side scripting?) to fix this (our fault in the proxy or Grafana's?)\nLikewise for Kibana.\nFor the Kibana case, I observe a sequence of correctly proxied GETs e.g. 
to: but then I see: which fails because I assume what was really needed is: and a curl against that gives a pile of plausible Javascript.\nAfter a little thought, I think what's going on here is that web clients can't tell that is a directory without the trailing slash. One thing we could potentially do to help is to redirect (301) from to , since I don't think the former is useful for anyone.\ncc\nHow are we generating that URL? Can we append a \"/\" at the end while generating the URL? Doing a redirect or auto-adding \"/\" in the proxy for all requests could break other requests.\nThe URL is not generated anywhere. It is assumed that '/proxy/service/ func TestRedirectOnMissingTrailingSlash(t *testing.T) { table := []struct { // The requested path path string // The path requested on the proxy server. proxyServerPath string }{ {\"/trailing/slash/\", \"/trailing/slash/\"}, {\"/\", \"/\"}, // \"/\" should be added at the end. {\"\", \"/\"}, // \"/\" should not be added at a non-root path. {\"/some/path\", \"/some/path\"}, } for _, item := range table { proxyServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { if req.URL.Path != item.proxyServerPath { t.Errorf(\"Unexpected request on path: %s, expected path: %s, item: %v\", req.URL.Path, item.proxyServerPath, item) } })) defer proxyServer.Close() serverURL, _ := url.Parse(proxyServer.URL) simpleStorage := &SimpleRESTStorage{ errors: map[string]error{}, resourceLocation: serverURL, expectedResourceNamespace: \"ns\", } handler := handleNamespaced(map[string]rest.Storage{\"foo\": simpleStorage}) server := httptest.NewServer(handler) defer server.Close() proxyTestPattern := \"/api/version/proxy/namespaces/ns/foo/id\" + item.path req, err := http.NewRequest( \"GET\", server.URL+proxyTestPattern, strings.NewReader(\"\"), ) if err != nil { t.Errorf(\"unexpected error %v\", err) continue } // Note: We are using a default client here, that follows redirects. 
resp, err := http.DefaultClient.Do(req) if err != nil { t.Errorf(\"unexpected error %v\", err) continue } if resp.StatusCode != 200 { t.Errorf(\"Unexpected errorCode: %v, expected: 200. Response: %v, item: %v\", resp.StatusCode, resp, item) } } } ", "commid": "kubernetes_pr_6048"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7cdd53074381ab5057ebc6ac3d6b49ac153717489df01e93d2a3ba62a136f72", "query": "This works: but this does not: We should somehow emphasize the need for the trailing slash for now, and then work out what is going wrong (client side scripting?) to fix this (our fault in the proxy or Grafana's?)\nLikewise for Kibana.\nFor the Kibana case, I observe a sequence of correctly proxied GETs e.g. to: but then I see: which fails because I assume what was really needed is: and a curl against that gives a pile of plausible Javascript.\nAfter a little thought, I think what's going on here is that web clients can't tell that is a directory without the trailing slash. One thing we could potentially do to help is to redirect (301) from to , since I don't think the former is useful for anyone.\ncc\nHow are we generating that URL? Can we append a \"/\" at the end while generating the URL? Doing a redirect or auto-adding \"/\" in the proxy for all requests could break other requests.\nThe URL is not generated anywhere. It is assumed that '/proxy/service/ // Redirect requests of the form \"/{resource}/{name}\" to \"/{resource}/{name}/\" // This is essentially a hack for https://github.com/GoogleCloudPlatform/kubernetes/issues/4958. // Note: Keep this code after tryUpgrade to not break that flow. 
if len(parts) == 2 && !strings.HasSuffix(req.URL.Path, \"/\") { w.Header().Set(\"Location\", req.URL.Path+\"/\") w.WriteHeader(http.StatusMovedPermanently) return } proxy := httputil.NewSingleHostReverseProxy(&url.URL{Scheme: location.Scheme, Host: location.Host}) if transport == nil { prepend := path.Join(r.prefix, resource, id)", "commid": "kubernetes_pr_6175"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7cdd53074381ab5057ebc6ac3d6b49ac153717489df01e93d2a3ba62a136f72", "query": "This works: but this does not: We should somehow emphasize the need for the trailing slash for now, and then work out what is going wrong (client side scripting?) to fix this (our fault in the proxy or Grafana's?)\nLikewise for Kibana.\nFor the Kibana case, I observe a sequence of correctly proxied GETs e.g. to: but then I see: which fails because I assume what was really needed is: and a curl against that gives a pile of plausible Javascript.\nAfter a little thought, I think what's going on here is that web clients can't tell that is a directory without the trailing slash. One thing we could potentially do to help is to redirect (301) from to , since I don't think the former is useful for anyone.\ncc\nHow are we generating that URL? Can we append a \"/\" at the end while generating the URL? Doing a redirect or auto-adding \"/\" in the proxy for all requests could break other requests.\nThe URL is not generated anywhere. It is assumed that '/proxy/service/ func TestRedirectOnMissingTrailingSlash(t *testing.T) { table := []struct { // The requested path path string // The path requested on the proxy server. proxyServerPath string }{ {\"/trailing/slash/\", \"/trailing/slash/\"}, {\"/\", \"/\"}, // \"/\" should be added at the end. {\"\", \"/\"}, // \"/\" should not be added at a non-root path. 
{\"/some/path\", \"/some/path\"}, } for _, item := range table { proxyServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { if req.URL.Path != item.proxyServerPath { t.Errorf(\"Unexpected request on path: %s, expected path: %s, item: %v\", req.URL.Path, item.proxyServerPath, item) } })) defer proxyServer.Close() serverURL, _ := url.Parse(proxyServer.URL) simpleStorage := &SimpleRESTStorage{ errors: map[string]error{}, resourceLocation: serverURL, expectedResourceNamespace: \"ns\", } handler := handleNamespaced(map[string]rest.Storage{\"foo\": simpleStorage}) server := httptest.NewServer(handler) defer server.Close() proxyTestPattern := \"/api/version2/proxy/namespaces/ns/foo/id\" + item.path req, err := http.NewRequest( \"GET\", server.URL+proxyTestPattern, strings.NewReader(\"\"), ) if err != nil { t.Errorf(\"unexpected error %v\", err) continue } // Note: We are using a default client here, that follows redirects. resp, err := http.DefaultClient.Do(req) if err != nil { t.Errorf(\"unexpected error %v\", err) continue } if resp.StatusCode != http.StatusOK { t.Errorf(\"Unexpected errorCode: %v, expected: 200. Response: %v, item: %v\", resp.StatusCode, resp, item) } } } ", "commid": "kubernetes_pr_6175"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e83aef6ed0412aae3ae7527f665df46f7b642df7deb3335cb962e44d6695abb9", "query": "On latest pull, the Vagrant kube- scripts completes successfully, but the Salt setup in the minion keeps going forever. End result is a cluster that is not useable. 
Looking at '/var/log/salt/minion' on the minion box, see lots of entries like: 2015-03-04 20:09:11,952 [ ][INFO ] Starting a new job with PID 2015-03-04 20:09:11,954 [ ][INFO ] Returning information for job: 2015-03-04 20:09:12,027 [ ][DEBUG ] Decrypting the current master AES key 2015-03-04 20:09:12,028 [ ][DEBUG ] Loaded minion key: 2015-03-04 20:09:12,217 [ ][INFO ] User vagrant Executing command with jid 2015-03-04 20:09:12,217 [ ][DEBUG ] Command details {'tgt_type': 'glob', 'jid': '', 'tgt': '*', 'ret': '', 'user': 'vagrant', 'arg': [], 'fun': ''} No other help looking at the master's salt log. This is Vagrant 1.7.2, with VirtualBox 4.3.20 on Mac OSX (Yosemite).\nIs this repeatable for you if kube-down and kube-up?\nYes, this is reproducible on my laptop. But, I was able to get it up by doing a kube-push after the failed kube-up. , Eric Tune wrote:\nSelf assigning and will try to reproduce. Sent from my iPhone\nI was able to reproduce. It appears that the Kubelet fails to start properly on first start:\nThe push basically caused the Kubelet to restart, and then it does work. Without the push, systemd would have eventually restarted the Kubelet. We need more investigation as to what was causing the Kubelet to fail on initial startup.", "positive_passages": [{"docid": "doc-en-kubernetes-42ad1333e166aca0b35990bbc08e8fd3e40a602152c86eef15d166f6b8c076c7", "text": "- file: /etc/init.d/kubelet {% endif %} - file: /var/lib/kubelet/kubernetes_auth {% if pillar.get('enable_node_monitoring', '').lower() == 'true' %} - file: /etc/kubernetes/manifests/cadvisor.manifest {% endif %} ", "commid": "kubernetes_pr_5242"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e83aef6ed0412aae3ae7527f665df46f7b642df7deb3335cb962e44d6695abb9", "query": "On latest pull, the Vagrant kube- scripts completes successfully, but the Salt setup in the minion keeps going forever. End result is a cluster that is not useable. 
Looking at '/var/log/salt/minion' on the minion box, see lots of entries like:\n2015-03-04 20:09:11,952 [ ][INFO ] Starting a new job with PID\n2015-03-04 20:09:11,954 [ ][INFO ] Returning information for job:\n2015-03-04 20:09:12,027 [ ][DEBUG ] Decrypting the current master AES key\n2015-03-04 20:09:12,028 [ ][DEBUG ] Loaded minion key:\n2015-03-04 20:09:12,217 [ ][INFO ] User vagrant Executing command with jid\n2015-03-04 20:09:12,217 [ ][DEBUG ] Command details {'tgt_type': 'glob', 'jid': '', 'tgt': '*', 'ret': '', 'user': 'vagrant', 'arg': [], 'fun': ''}\nNo other help looking at the master's salt log. This is Vagrant 1.7.2, with VirtualBox 4.3.20 on Mac OSX (Yosemite).\nIs this repeatable for you if kube-down and kube-up?\nYes, this is reproducible on my laptop. But, I was able to get it up by doing a kube-push after the failed kube-up. , Eric Tune wrote:\nSelf assigning and will try to reproduce. Sent from my iPhone\nI was able to reproduce. It appears that the Kubelet fails to start properly on first start:\nThe push basically caused the Kubelet to restart, and then it does work. Without the push, systemd would have eventually restarted the Kubelet. We need more investigation as to what was causing the Kubelet to fail on initial startup.", "positive_passages": [{"docid": "doc-en-kubernetes-7e500488dc11b98056f29ac0a80e04e994d938eeea6d301e3ab8980a72e21832", "text": "'roles:kubernetes-pool': - match: grain - docker {% if grains['cloud'] is defined and grains['cloud'] == 'azure' %} - openvpn-client {% else %} - sdn {% endif %} - kubelet - kube-proxy {% if pillar.get('enable_node_monitoring', '').lower() == 'true' %}", "commid": "kubernetes_pr_5242"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e83aef6ed0412aae3ae7527f665df46f7b642df7deb3335cb962e44d6695abb9", "query": "On latest pull, the Vagrant kube- scripts completes successfully, but the Salt setup in the minion keeps going forever. End result is a cluster that is not useable. 
Looking at '/var/log/salt/minion' on the minion box, see lots of entries like:\n2015-03-04 20:09:11,952 [ ][INFO ] Starting a new job with PID\n2015-03-04 20:09:11,954 [ ][INFO ] Returning information for job:\n2015-03-04 20:09:12,027 [ ][DEBUG ] Decrypting the current master AES key\n2015-03-04 20:09:12,028 [ ][DEBUG ] Loaded minion key:\n2015-03-04 20:09:12,217 [ ][INFO ] User vagrant Executing command with jid\n2015-03-04 20:09:12,217 [ ][DEBUG ] Command details {'tgt_type': 'glob', 'jid': '', 'tgt': '*', 'ret': '', 'user': 'vagrant', 'arg': [], 'fun': ''}\nNo other help looking at the master's salt log. This is Vagrant 1.7.2, with VirtualBox 4.3.20 on Mac OSX (Yosemite).\nIs this repeatable for you if kube-down and kube-up?\nYes, this is reproducible on my laptop. But, I was able to get it up by doing a kube-push after the failed kube-up. , Eric Tune wrote:\nSelf assigning and will try to reproduce. Sent from my iPhone\nI was able to reproduce. It appears that the Kubelet fails to start properly on first start:\nThe push basically caused the Kubelet to restart, and then it does work. Without the push, systemd would have eventually restarted the Kubelet. We need more investigation as to what was causing the Kubelet to fail on initial startup.", "positive_passages": [{"docid": "doc-en-kubernetes-614716874bb4ed3504bd8781da4976fb907024a3a00f6954f2c54d87a613b3ad", "text": "{% endif %} {% endif %} - logrotate {% if grains['cloud'] is defined and grains['cloud'] == 'azure' %} - openvpn-client {% else %} - sdn {% endif %} - monit 'roles:kubernetes-master':", "commid": "kubernetes_pr_5242"}], "negative_passages": []} {"query_id": "q-en-kubernetes-17b7eb889bfb0d0f34c61424511db8cf312d22a6958eeab3e7ebc261ea16baf6", "query": "I tried to spin up GCE kube environment using kube-up and got the following error: The script would then hang on a subsequent command. The verify-prereqs for kube-up should require yaml support for python and fail early. 
To rectify, I had to do the following:\nThanks for the report Derek!\nYeah, this broke our Jenkins a while ago, and I thought I fixed it, but didn't. The real fix is in", "positive_passages": [{"docid": "doc-en-kubernetes-2fff0e2ca9052c2819fb5a178303f5e76a77e47a732cd98b2874247d6984507d", "text": " #!/usr/bin/python # Copyright 2015 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import sys import yaml def mutate_env(path, var, value): # Load the existing arguments if os.path.exists(path): args = yaml.load(open(path)) else: args = {} args[var] = value yaml.dump(args, stream=open(path, 'w'), default_flow_style=False) if __name__ == \"__main__\": mutate_env(sys.argv[1], sys.argv[2], sys.argv[3]) ", "commid": "kubernetes_pr_5273"}], "negative_passages": []} {"query_id": "q-en-kubernetes-17b7eb889bfb0d0f34c61424511db8cf312d22a6958eeab3e7ebc261ea16baf6", "query": "I tried to spin up GCE kube environment using kube-up and got the following error: The script would then hang on a subsequent command. The verify-prereqs for kube-up should require yaml support for python and fail early. To rectify, I had to do the following:\nThanks for the report Derek!\nYeah, this broke our Jenkins a while ago, and I thought I fixed it, but didn't. 
The real fix is in", "positive_passages": [{"docid": "doc-en-kubernetes-02e91e5fcf0ca7690ffa78e334912555f3451b6315e242da2ac90804f3e99d87", "text": "done } # Given a yaml file, add or mutate the given env variable # Quote something appropriate for a yaml string. # # TODO(zmerlynn): Yes, this is an O(n^2) build-up right now. If we end # up with so many environment variables feeding into Salt that this # matters, there's probably an issue... function add-to-env { ${KUBE_ROOT}/cluster/gce/kube-env.py \"$1\" \"$2\" \"$3\" # TODO(zmerlynn): Note that this function doesn't so much \"quote\" as # \"strip out quotes\", and we really should be using a YAML library for # this, but PyYAML isn't shipped by default, and *rant rant rant ... SIGH* function yaml-quote { echo \"'$(echo \"${@}\" | sed -e \"s/'/''/g\")'\" } # $1: if 'true', we're building a master yaml, else a node", "commid": "kubernetes_pr_5273"}], "negative_passages": []} {"query_id": "q-en-kubernetes-17b7eb889bfb0d0f34c61424511db8cf312d22a6958eeab3e7ebc261ea16baf6", "query": "I tried to spin up GCE kube environment using kube-up and got the following error: The script would then hang on a subsequent command. The verify-prereqs for kube-up should require yaml support for python and fail early. To rectify, I had to do the following:\nThanks for the report Derek!\nYeah, this broke our Jenkins a while ago, and I thought I fixed it, but didn't. 
The real fix is in", "positive_passages": [{"docid": "doc-en-kubernetes-201f154bc44c9202f0c580332c3bd1ca932f0b92603891f787bfba007bfc2c29", "text": "local file=$2 rm -f ${file} add-to-env ${file} ENV_TIMESTAMP \"$(date -uIs)\" # Just to track it add-to-env ${file} KUBERNETES_MASTER \"${master}\" add-to-env ${file} INSTANCE_PREFIX \"${INSTANCE_PREFIX}\" add-to-env ${file} NODE_INSTANCE_PREFIX \"${NODE_INSTANCE_PREFIX}\" add-to-env ${file} SERVER_BINARY_TAR_URL \"${SERVER_BINARY_TAR_URL}\" add-to-env ${file} SALT_TAR_URL \"${SALT_TAR_URL}\" add-to-env ${file} PORTAL_NET \"${PORTAL_NET}\" add-to-env ${file} ENABLE_CLUSTER_MONITORING \"${ENABLE_CLUSTER_MONITORING:-false}\" add-to-env ${file} ENABLE_NODE_MONITORING \"${ENABLE_NODE_MONITORING:-false}\" add-to-env ${file} ENABLE_CLUSTER_LOGGING \"${ENABLE_CLUSTER_LOGGING:-false}\" add-to-env ${file} ENABLE_NODE_LOGGING \"${ENABLE_NODE_LOGGING:-false}\" add-to-env ${file} LOGGING_DESTINATION \"${LOGGING_DESTINATION:-}\" add-to-env ${file} ELASTICSEARCH_LOGGING_REPLICAS \"${ELASTICSEARCH_LOGGING_REPLICAS:-}\" add-to-env ${file} ENABLE_CLUSTER_DNS \"${ENABLE_CLUSTER_DNS:-false}\" add-to-env ${file} DNS_REPLICAS \"${DNS_REPLICAS:-}\" add-to-env ${file} DNS_SERVER_IP \"${DNS_SERVER_IP:-}\" add-to-env ${file} DNS_DOMAIN \"${DNS_DOMAIN:-}\" add-to-env ${file} MASTER_HTPASSWD \"${MASTER_HTPASSWD}\" cat >$file < if [[ \"${master}\" != \"true\" ]]; then add-to-env ${file} KUBERNETES_MASTER_NAME \"${MASTER_NAME}\" add-to-env ${file} ZONE \"${ZONE}\" add-to-env ${file} EXTRA_DOCKER_OPTS \"${EXTRA_DOCKER_OPTS}\" add-to-env ${file} ENABLE_DOCKER_REGISTRY_CACHE \"${ENABLE_DOCKER_REGISTRY_CACHE:-false}\" cat >>$file < fi }", "commid": "kubernetes_pr_5273"}], "negative_passages": []} {"query_id": "q-en-kubernetes-20b7e73eec176d141ed7021484976d2204427591b40ebcc573eca325f6704635", "query": "If the cleanup of the replication controller takes too long after a successful test run, then the clean up performed in the AfterEach can actually cause 
the test to fail. Should only perform the clean up if the rc exists and its replica count is non-zero.", "positive_passages": [{"docid": "doc-en-kubernetes-b7e82026c08d08b7ae922bd0dedf613c49134c30141cb664f82610c7f9596615", "text": "}) AfterEach(func() { // Remove any remaining pods from this test DeleteRC(c, ns, RCName) // Remove any remaining pods from this test if the // replication controller still exists and the replica count // isn't 0. This means the controller wasn't cleaned up // during the test so clean it up here rc, err := c.ReplicationControllers(ns).Get(RCName) if err == nil && rc.Spec.Replicas != 0 { DeleteRC(c, ns, RCName) } }) It(\"should allow starting 100 pods per node\", func() {", "commid": "kubernetes_pr_5387"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6be393fdfa676676e2c2a919450eea6e64e7299e61f3fdbea996cf6d1af38cd8", "query": "We've seen this a few times on GCE now, and I saw it flake on now as well (cc\ncc Victor integrated cadvisor into the kubelet. I wonder if a fake version of cadvisor gets integrated into kubelet in the case of builds.\nI noticed this first on GCE. (FWIW, GKE uses the same binaries as GCE.)\nJenkins shows the following four changes for the GCE build where this test started consistently failing: Commit by mattmoor Enable usage of a \"json key\" for authenticating with Commit by dbsmith Add a system modeler to scheduler Commit by dawnchen Convert RestartPolicy to string for v1beta3. Commit by abshah Build statically linked binaryies. With the change to go 1.4, we probably were generating dynamically linked binaries accidentally.\nIf I had to guess, I would take a stab and say it was , . cc\nFeel free to revert by 5pm if no fix.\nI believe and are correct, makes the build not use cgo which places a fake cAdvisor in the binary and that fails. 
Looking into the fix.\nis out, it is a revert of the old one along with a plan for not needing static binaries.", "positive_passages": [{"docid": "doc-en-kubernetes-58309873e8f237c1d75ea5028283534cd2880873563fdca39be6e674447b1d47", "text": "if [[ ${GOOS} == \"windows\" ]]; then bin=\"${bin}.exe\" fi CGO_ENABLED=0 go build -installsuffix cgo -o \"${output_path}/${bin}\" go build -o \"${output_path}/${bin}\" \"${goflags[@]:+${goflags[@]}}\" -ldflags \"${version_ldflags}\" \"${binary}\" done else CGO_ENABLED=0 go install -installsuffix cgo \"${goflags[@]:+${goflags[@]}}\" go install \"${goflags[@]:+${goflags[@]}}\" -ldflags \"${version_ldflags}\" \"${binaries[@]}\" fi", "commid": "kubernetes_pr_5527"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c863156a4e33fc97f13e0b76d7c72461cca123b7f2e4e0958a06f940e1cd99d8", "query": "cc\nWhen deleting/stopping using file, you don't specify resource (in this case 'pod'), as it's inferred from the file.\nThanks and sorry for confusion\nOk, but should not work as well.", "positive_passages": [{"docid": "doc-en-kubernetes-bd9574cf6ad99b1e6306ce6254b5f8778956a474927153f3bbfe683dcacb3c5a", "text": "Long: create_long, Example: create_example, Run: func(cmd *cobra.Command, args []string) { err := RunCreate(f, out, cmd, filenames) cmdutil.CheckErr(err) cmdutil.CheckErr(ValidateArgs(cmd, args)) cmdutil.CheckErr(RunCreate(f, out, cmd, filenames)) }, } cmd.Flags().VarP(&filenames, \"filename\", \"f\", \"Filename, directory, or URL to file to use to create the resource\") return cmd } func ValidateArgs(cmd *cobra.Command, args []string) error { if len(args) != 0 { return cmdutil.UsageError(cmd, \"Unexpected args: %v\", args) } return nil } func RunCreate(f *Factory, out io.Writer, cmd *cobra.Command, filenames util.StringList) error { schema, err := f.Validator() if err != nil {", "commid": "kubernetes_pr_5860"}], "negative_passages": []} {"query_id": 
"q-en-kubernetes-c863156a4e33fc97f13e0b76d7c72461cca123b7f2e4e0958a06f940e1cd99d8", "query": "cc\nWhen deleting/stopping using file, you don't specify resource (in this case 'pod'), as it's inferred from the file.\nThanks and sorry for confusion\nOk, but should not work as well.", "positive_passages": [{"docid": "doc-en-kubernetes-2b20a6c61d4e9761f363731d28f691dfa8bba87b3516f3f2b3e89a4955ccc1de", "text": "\"testing\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/kubectl/cmd\" ) func TestExtraArgsFail(t *testing.T) { buf := bytes.NewBuffer([]byte{}) f, _, _ := NewAPIFactory() c := f.NewCmdCreate(buf) if cmd.ValidateArgs(c, []string{\"rc\"}) == nil { t.Errorf(\"unexpected non-error\") } } func TestCreateObject(t *testing.T) { _, _, rc := testData() rc.Items[0].Name = \"redis-master-controller\"", "commid": "kubernetes_pr_5860"}], "negative_passages": []} {"query_id": "q-en-kubernetes-48c5dfb8683a81615afc4ea08a4324bb162203e3714a3b2fd8f08b4f5a9b8f4e", "query": "A \"kubectl describe\" on a pod ID give a lot of informations about that pod but don't you think it should also print the pod's env var ?", "positive_passages": [{"docid": "doc-en-kubernetes-55adb618c454a26b81e2819d723ad6d85d5aa63eb3d11cecf8557341d8b64f5b", "text": "\"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api/resource\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/fieldpath\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/fields\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/labels\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/types\"", "commid": "kubernetes_pr_10753"}], "negative_passages": []} {"query_id": "q-en-kubernetes-48c5dfb8683a81615afc4ea08a4324bb162203e3714a3b2fd8f08b4f5a9b8f4e", "query": "A \"kubectl describe\" on a pod ID give a lot of informations about that pod but don't you think it should also print the pod's 
env var ?", "positive_passages": [{"docid": "doc-en-kubernetes-7e0140b0adb2b675e56ed5220fc631088f563c9943191ec969ed394cf184c13b", "text": "fmt.Fprintf(out, \" Ready:t%vn\", printBool(status.Ready)) fmt.Fprintf(out, \" Restart Count:t%dn\", status.RestartCount) fmt.Fprintf(out, \" Variables:n\") for _, e := range container.Env { if e.ValueFrom != nil && e.ValueFrom.FieldRef != nil { valueFrom := envValueFrom(pod, e) fmt.Fprintf(out, \" %s:t%s (%s:%s)n\", e.Name, valueFrom, e.ValueFrom.FieldRef.APIVersion, e.ValueFrom.FieldRef.FieldPath) } else { fmt.Fprintf(out, \" %s:t%sn\", e.Name, e.Value) } } } } func envValueFrom(pod *api.Pod, e api.EnvVar) string { internalFieldPath, _, err := api.Scheme.ConvertFieldLabel(e.ValueFrom.FieldRef.APIVersion, \"Pod\", e.ValueFrom.FieldRef.FieldPath, \"\") if err != nil { return \"\" // pod validation should catch this on create } valueFrom, err := fieldpath.ExtractFieldPathAsString(pod, internalFieldPath) if err != nil { return \"\" // pod validation should catch this on create } return valueFrom } func printBool(value bool) string { if value { return \"True\"", "commid": "kubernetes_pr_10753"}], "negative_passages": []} {"query_id": "q-en-kubernetes-48c5dfb8683a81615afc4ea08a4324bb162203e3714a3b2fd8f08b4f5a9b8f4e", "query": "A \"kubectl describe\" on a pod ID give a lot of informations about that pod but don't you think it should also print the pod's env var ?", "positive_passages": [{"docid": "doc-en-kubernetes-02014de29a0f0f2a414265e1b7b37e48c6e6d1b6a0b4125d8946ffbb2ee6b2a6", "text": "}, expectedElements: []string{\"test\", \"State\", \"Waiting\", \"Ready\", \"True\", \"Restart Count\", \"7\", \"Image\", \"image\"}, }, //env { container: api.Container{Name: \"test\", Image: \"image\", Env: []api.EnvVar{{Name: \"envname\", Value: \"xyz\"}}}, status: api.ContainerStatus{ Name: \"test\", Ready: true, RestartCount: 7, }, expectedElements: []string{\"test\", \"State\", \"Waiting\", \"Ready\", \"True\", \"Restart Count\", \"7\", 
\"Image\", \"image\", \"envname\", \"xyz\"}, }, // Using limits. { container: api.Container{", "commid": "kubernetes_pr_10753"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dbffed5438f566ecf1b631adacf4d68392348225a8f199c6d9953211f55b24b3", "query": "This is the script that runs to capture event output during e2e runs: It's failing after the recent pflag merge:\ncc\nOr something. I might be completely off. But something broke it recently enough, and I know like his event log.\nThat command is actually invalid. is a flag on the command, not the root. Fix in", "positive_passages": [{"docid": "doc-en-kubernetes-9e8a193df356d1bfa4ea4013a630b82e7f28bfbc8a84ff32231370d87c7ab4d3", "text": "prepare-e2e while true; do ${KUBECTL} --watch-only get events ${KUBECTL} get events --watch-only done", "commid": "kubernetes_pr_5925"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bdc2b90ff152c92e3bbe26723a057422401366398c588c04109e25e8a7ccf9b", "query": "I used the ubuntu-cluster setup scripts to build a mini testing environment. The first time I inputted a wrong config and when I reconfigured the environment by running the scripts again an error occured. I looked into the code and found the scripts hadn't deal with old configs. And of kube-proxy configs are not replaced.\nLooks like this was fixed by .", "positive_passages": [{"docid": "doc-en-kubernetes-e975dad90bfbdbb52c98e5ada4e8196e527a8fc1cb30ecd4c49eadd44e414a48", "text": "set -e #clean all init/init.d/configs function do_backup_clean() { #backup all config files init_files=`ls init_conf` for i in $init_files do if [ -f /etc/init/$i ] then mv /etc/init/${i} /etc/init/${i}.bak fi done initd_files=`ls initd_scripts` for i in $initd_files do if [ -f /etc/init.d/$i ] then mv /etc/init.d/${i} /etc/init.d/${i}.bak fi done default_files=`ls default_scripts` for i in $default_files do if [ -e /etc/default/$i ] then mv /etc/default/${i} /etc/default/${i}.bak fi done # clean work dir if [ ! 
-d ./work ] then mkdir work fi cp -rf default_scripts init_conf initd_scripts work } function cpMaster(){ # copy /etc/init files cp init_conf/etcd.conf /etc/init/ cp init_conf/kube-apiserver.conf /etc/init/ cp init_conf/kube-controller-manager.conf /etc/init/ cp init_conf/kube-scheduler.conf /etc/init/ cp work/init_conf/etcd.conf /etc/init/ cp work/init_conf/kube-apiserver.conf /etc/init/ cp work/init_conf/kube-controller-manager.conf /etc/init/ cp work/init_conf/kube-scheduler.conf /etc/init/ # copy /etc/initd/ files cp initd_scripts/etcd /etc/init.d/ cp initd_scripts/kube-apiserver /etc/init.d/ cp initd_scripts/kube-controller-manager /etc/init.d/ cp initd_scripts/kube-scheduler /etc/init.d/ cp work/initd_scripts/etcd /etc/init.d/ cp work/initd_scripts/kube-apiserver /etc/init.d/ cp work/initd_scripts/kube-controller-manager /etc/init.d/ cp work/initd_scripts/kube-scheduler /etc/init.d/ # copy default configs cp default_scripts/etcd /etc/default/ cp default_scripts/kube-apiserver /etc/default/ cp default_scripts/kube-scheduler /etc/default/ cp default_scripts/kube-controller-manager /etc/default/ cp work/default_scripts/etcd /etc/default/ cp work/default_scripts/kube-apiserver /etc/default/ cp work/default_scripts/kube-scheduler /etc/default/ cp work/default_scripts/kube-controller-manager /etc/default/ } function cpMinion(){ # copy /etc/init files cp init_conf/etcd.conf /etc/init/ cp init_conf/kubelet.conf /etc/init/ cp init_conf/flanneld.conf /etc/init/ cp init_conf/kube-proxy.conf /etc/init/ cp work/init_conf/etcd.conf /etc/init/etcd.conf cp work/init_conf/kubelet.conf /etc/init/kubelet.conf cp work/init_conf/flanneld.conf /etc/init/flanneld.conf cp work/init_conf/kube-proxy.conf /etc/init/ # copy /etc/initd/ files cp initd_scripts/etcd /etc/init.d/ cp initd_scripts/flanneld /etc/init.d/ cp initd_scripts/kubelet /etc/init.d/ cp initd_scripts/kube-proxy /etc/init.d/ cp work/initd_scripts/etcd /etc/init.d/ cp work/initd_scripts/flanneld /etc/init.d/ cp 
work/initd_scripts/kubelet /etc/init.d/ cp work/initd_scripts/kube-proxy /etc/init.d/ # copy default configs cp default_scripts/etcd /etc/default/ cp default_scripts/flanneld /etc/default/ cp default_scripts/kube-proxy /etc/default cp default_scripts/kubelet /etc/default/ cp work/default_scripts/etcd /etc/default/ cp work/default_scripts/flanneld /etc/default/ cp work/default_scripts/kube-proxy /etc/default cp work/default_scripts/kubelet /etc/default/ } # check if input IP in machine list", "commid": "kubernetes_pr_6275"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bdc2b90ff152c92e3bbe26723a057422401366398c588c04109e25e8a7ccf9b", "query": "I used the ubuntu-cluster setup scripts to build a mini testing environment. The first time I inputted a wrong config and when I reconfigured the environment by running the scripts again an error occured. I looked into the code and found the scripts hadn't deal with old configs. And of kube-proxy configs are not replaced.\nLooks like this was fixed by .", "positive_passages": [{"docid": "doc-en-kubernetes-c3f55efe3425f8ef060adea78459a8f53b1837b141c8847aee79f335f27288fc", "text": "# set values in ETCD_OPTS function configEtcd(){ echo ETCD_OPTS=\"-name $1 -initial-advertise-peer-urls http://$2:2380 -listen-peer-urls http://$2:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster $3 -initial-cluster-state new\" > default_scripts/etcd echo ETCD_OPTS=\"-name $1 -initial-advertise-peer-urls http://$2:2380 -listen-peer-urls http://$2:2380 -initial-cluster-token etcd-cluster-1 -initial-cluster $3 -initial-cluster-state new\" > work/default_scripts/etcd } # check root", "commid": "kubernetes_pr_6275"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bdc2b90ff152c92e3bbe26723a057422401366398c588c04109e25e8a7ccf9b", "query": "I used the ubuntu-cluster setup scripts to build a mini testing environment. 
The first time I inputted a wrong config and when I reconfigured the environment by running the scripts again an error occured. I looked into the code and found the scripts hadn't deal with old configs. And of kube-proxy configs are not replaced.\nLooks like this was fixed by .", "positive_passages": [{"docid": "doc-en-kubernetes-812ec79a6bd78f20a26ce3b04f209423fdb37485a8e6c9881dceb80bfa3fc595", "text": "exit 1 fi do_backup_clean # use an array to record name and ip declare -A mm", "commid": "kubernetes_pr_6275"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bdc2b90ff152c92e3bbe26723a057422401366398c588c04109e25e8a7ccf9b", "query": "I used the ubuntu-cluster setup scripts to build a mini testing environment. The first time I inputted a wrong config and when I reconfigured the environment by running the scripts again an error occured. I looked into the code and found the scripts hadn't deal with old configs. And of kube-proxy configs are not replaced.\nLooks like this was fixed by .", "positive_passages": [{"docid": "doc-en-kubernetes-d6af49864b448cae688f1a60d2c69989f017e30d96eb1231a3a4a77b126f0f19", "text": "inList $etcdName $myIP configEtcd $etcdName $myIP $cluster # For minion set MINION IP in default_scripts/kubelet sed -i \"s/MY_IP/${myIP}/g\" default_scripts/kubelet sed -i \"s/MASTER_IP/${masterIP}/g\" default_scripts/kubelet sed -i \"s/MASTER_IP/${masterIP}/g\" default_scripts/kube-proxy sed -i \"s/MY_IP/${myIP}/g\" work/default_scripts/kubelet sed -i \"s/MASTER_IP/${masterIP}/g\" work/default_scripts/kubelet sed -i \"s/MASTER_IP/${masterIP}/g\" work/default_scripts/kube-proxy # For master set MINION IPs in kube-controller-manager if [ -z \"$minionIPs\" ]; then if [ -z \"$minionIPs\" ]; then #one node act as both minion and master role minionIPs=\"$myIP\" else minionIPs=\"$minionIPs,$myIP\" fi sed -i \"s/MINION_IPS/${minionIPs}/g\" default_scripts/kube-controller-manager sed -i \"s/MINION_IPS/${minionIPs}/g\" 
work/default_scripts/kube-controller-manager cpMaster cpMinion", "commid": "kubernetes_pr_6275"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bdc2b90ff152c92e3bbe26723a057422401366398c588c04109e25e8a7ccf9b", "query": "I used the ubuntu-cluster setup scripts to build a mini testing environment. The first time I inputted a wrong config and when I reconfigured the environment by running the scripts again an error occured. I looked into the code and found the scripts hadn't deal with old configs. And of kube-proxy configs are not replaced.\nLooks like this was fixed by .", "positive_passages": [{"docid": "doc-en-kubernetes-4afcefe0ca2b044b6524699f58ae6ce87ec27228fea0b8c4f35c2293ed5c6e88", "text": "inList $etcdName $myIP configEtcd $etcdName $myIP $cluster # set MINION IPs in kube-controller-manager sed -i \"s/MINION_IPS/${minionIPs}/g\" default_scripts/kube-controller-manager sed -i \"s/MINION_IPS/${minionIPs}/g\" work/default_scripts/kube-controller-manager cpMaster break ;;", "commid": "kubernetes_pr_6275"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bdc2b90ff152c92e3bbe26723a057422401366398c588c04109e25e8a7ccf9b", "query": "I used the ubuntu-cluster setup scripts to build a mini testing environment. The first time I inputted a wrong config and when I reconfigured the environment by running the scripts again an error occured. I looked into the code and found the scripts hadn't deal with old configs. 
And of kube-proxy configs are not replaced.\nLooks like this was fixed by .", "positive_passages": [{"docid": "doc-en-kubernetes-2c601d9f2c3147613313dedca542b379b32579defaa05386c4c9a6c55cc0e99a", "text": "inList $etcdName $myIP configEtcd $etcdName $myIP $cluster # set MINION IP in default_scripts/kubelet sed -i \"s/MY_IP/${myIP}/g\" default_scripts/kubelet sed -i \"s/MASTER_IP/${masterIP}/g\" default_scripts/kubelet sed -i \"s/MASTER_IP/${masterIP}/g\" default_scripts/kube-proxy sed -i \"s/MY_IP/${myIP}/g\" work/default_scripts/kubelet sed -i \"s/MASTER_IP/${masterIP}/g\" work/default_scripts/kubelet sed -i \"s/MASTER_IP/${masterIP}/g\" work/default_scripts/kube-proxy cpMinion break ;;", "commid": "kubernetes_pr_6275"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cec696c80d7bb22acf8809e7c4a3d65bfaf721f6124d2319a7a6475d4f30f213", "query": "After 6 minutes or so of using 50% my 4-core MBP CPU and chewing up a ton of memory, make release gives up with a timeout.\nThis is on my master branch (no local changes) after rebasing to head this evening. It was working fine earlier today so seems like the likely culprit. /cc\nIt's certainly my changes. I'm on a low function device, feel free to revert. This is only an issue for low memory systems, I suspect. :/ On Apr 1, 2015 10:24 PM, \"Robert Bailey\" wrote:\n(This is functioning fine on Jenkins and my desktop, FWIW.) 
On Apr 1, 2015 10:36 PM, \"Zachary Loafman\" wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-a79510632547bcd2f868e1f4f3b2b6d160275c9da772868c7aaa29bd931b6abd", "text": "# Find all of the built client binaries local platform platforms platforms=($(cd \"${LOCAL_OUTPUT_BINPATH}\" ; echo */*)) for platform in \"${platforms[@]}\"; do for platform in \"${platforms[@]}\" ; do local platform_tag=${platform///-} # Replace a \"/\" for a \"-\" kube::log::status \"Starting tarball: client $platform_tag\" kube::log::status \"Building tarball: client $platform_tag\" ( local release_stage=\"${RELEASE_STAGE}/client/${platform_tag}/kubernetes\" rm -rf \"${release_stage}\" mkdir -p \"${release_stage}/client/bin\" local release_stage=\"${RELEASE_STAGE}/client/${platform_tag}/kubernetes\" rm -rf \"${release_stage}\" mkdir -p \"${release_stage}/client/bin\" local client_bins=(\"${KUBE_CLIENT_BINARIES[@]}\") if [[ \"${platform%/*}\" == \"windows\" ]]; then client_bins=(\"${KUBE_CLIENT_BINARIES_WIN[@]}\") fi local client_bins=(\"${KUBE_CLIENT_BINARIES[@]}\") if [[ \"${platform%/*}\" == \"windows\" ]]; then client_bins=(\"${KUBE_CLIENT_BINARIES_WIN[@]}\") fi # This fancy expression will expand to prepend a path # (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the # KUBE_CLIENT_BINARIES array. cp \"${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}\" \"${release_stage}/client/bin/\" # This fancy expression will expand to prepend a path # (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the # KUBE_CLIENT_BINARIES array. 
cp \"${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}\" \"${release_stage}/client/bin/\" kube::release::clean_cruft kube::release::clean_cruft local package_name=\"${RELEASE_DIR}/kubernetes-client-${platform_tag}.tar.gz\" kube::release::create_tarball \"${package_name}\" \"${release_stage}/..\" ) & local package_name=\"${RELEASE_DIR}/kubernetes-client-${platform_tag}.tar.gz\" kube::release::create_tarball \"${package_name}\" \"${release_stage}/..\" done kube::log::status \"Waiting on tarballs\" wait || { kube::log::error \"client tarball creation failed\"; exit 1; } } # Package up all of the server binaries", "commid": "kubernetes_pr_6360"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cec696c80d7bb22acf8809e7c4a3d65bfaf721f6124d2319a7a6475d4f30f213", "query": "After 6 minutes or so of using 50% my 4-core MBP CPU and chewing up a ton of memory, make release gives up with a timeout.\nThis is on my master branch (no local changes) after rebasing to head this evening. It was working fine earlier today so seems like the likely culprit. /cc\nIt's certainly my changes. I'm on a low function device, feel free to revert. This is only an issue for low memory systems, I suspect. :/ On Apr 1, 2015 10:24 PM, \"Robert Bailey\" wrote:\n(This is functioning fine on Jenkins and my desktop, FWIW.) 
On Apr 1, 2015 10:36 PM, \"Zachary Loafman\" wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-91c850c1e07e01a691a7fa3ee4bd48c0fb218874d392948e7f48bf8c66e8acc4", "text": "local binaries binaries=($(kube::golang::binaries_from_targets \"${targets[@]}\")) kube::log::status \"Building go targets for ${platforms[@]} in parallel (output will appear in a burst when complete):\" \"${targets[@]}\" local platform for platform in \"${platforms[@]}\"; do ( kube::golang::set_platform_envs \"${platform}\" kube::log::status \"${platform}: Parallel go build started\" for platform in \"${platforms[@]}\"; do kube::golang::set_platform_envs \"${platform}\" kube::log::status \"Building go targets for ${platform}:\" \"${targets[@]}\" local -a statics=() local -a nonstatics=() for binary in \"${binaries[@]}\"; do if kube::golang::is_statically_linked_library \"${binary}\"; then kube::golang::exit_if_stdlib_not_installed; statics+=($binary) else nonstatics+=($binary) fi done if [[ -n ${use_go_build:-} ]]; then # Try and replicate the native binary placement of go install without # calling go install. This means we have to iterate each binary. 
local output_path=\"${KUBE_GOPATH}/bin\" if [[ $platform != $host_platform ]]; then output_path=\"${output_path}/${platform////_}\" fi local -a statics=() local -a nonstatics=() for binary in \"${binaries[@]}\"; do local bin=$(basename \"${binary}\") if [[ ${GOOS} == \"windows\" ]]; then bin=\"${bin}.exe\" fi if kube::golang::is_statically_linked_library \"${binary}\"; then kube::golang::exit_if_stdlib_not_installed; statics+=($binary) CGO_ENABLED=0 go build -installsuffix cgo -o \"${output_path}/${bin}\" \"${goflags[@]:+${goflags[@]}}\" -ldflags \"${version_ldflags}\" \"${binary}\" else nonstatics+=($binary) go build -o \"${output_path}/${bin}\" \"${goflags[@]:+${goflags[@]}}\" -ldflags \"${version_ldflags}\" \"${binary}\" fi done if [[ -n ${use_go_build:-} ]]; then # Try and replicate the native binary placement of go install without # calling go install. This means we have to iterate each binary. local output_path=\"${KUBE_GOPATH}/bin\" if [[ $platform != $host_platform ]]; then output_path=\"${output_path}/${platform////_}\" fi for binary in \"${binaries[@]}\"; do local bin=$(basename \"${binary}\") if [[ ${GOOS} == \"windows\" ]]; then bin=\"${bin}.exe\" fi if kube::golang::is_statically_linked_library \"${binary}\"; then kube::golang::exit_if_stdlib_not_installed; CGO_ENABLED=0 go build -installsuffix cgo -o \"${output_path}/${bin}\" \"${goflags[@]:+${goflags[@]}}\" -ldflags \"${version_ldflags}\" \"${binary}\" else go build -o \"${output_path}/${bin}\" \"${goflags[@]:+${goflags[@]}}\" -ldflags \"${version_ldflags}\" \"${binary}\" fi done else else # Use go install. 
if [[ \"${#nonstatics[@]}\" != 0 ]]; then go install \"${goflags[@]:+${goflags[@]}}\" -ldflags \"${version_ldflags}\" \"${nonstatics[@]:+${nonstatics[@]}}\" -ldflags \"${version_ldflags}\" \"${nonstatics[@]:+${nonstatics[@]}}\" fi if [[ \"${#statics[@]}\" != 0 ]]; then CGO_ENABLED=0 go install -installsuffix cgo \"${goflags[@]:+${goflags[@]}}\" -ldflags \"${version_ldflags}\" \"${statics[@]:+${statics[@]}}\" \"${statics[@]:+${statics[@]}}\" fi fi kube::log::status \"${platform}: Parallel go build finished\" ) &> \"/tmp//${platform////_}.build\" & done local fails=0 for job in $(jobs -p); do wait ${job} || let \"fails+=1\" done for platform in \"${platforms[@]}\"; do cat \"/tmp//${platform////_}.build\" fi done exit ${fails} ) }", "commid": "kubernetes_pr_6360"}], "negative_passages": []} {"query_id": "q-en-kubernetes-04040bdb5cd938b726629688668f9baf9bede652c53f08edb44f58e6b143c345", "query": "Restarting might also be acceptable. It's a pretty painful user experience right now if this happens-- it fills the logs with this message.\nYes, it should just panic() , Daniel Smith wrote:\nYeah I guess if it's out of FDs it can't really open a network connection to file an event...\nkube-proxy should send an event when it starts up, just like kubelet does with its method. This works around the fact that you can't send an event when you are out of FDs, and it still provides a lot of debugging value.\nAgreed with 1) kube-proxy should panic when hitting out of fd 2) restarted kube-proxy should pick up the crash reason like what we do for internal agent 3) crash reason should be propogated to upstream as an event.\nMoving to the right sig.\nIssues go stale after 90d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. Prevent issues from auto-closing with an comment. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . 
/lifecycle stale\n/lifecycle frozen\n/remove-lifecycle stale\n/area kube-proxy\nwe're going to close this because, after 3 years or so , decreasing interest in the userspace proxy and nobody other then some openshift folks, are using the linux kube proxy .. can reopen if necessary ping :)\n/close\nClosing this issue. if isTooManyFDsError(err) { panic(\"Dial failed: \" + err.Error()) } glog.Errorf(\"Dial failed: %v\", err) continue }", "commid": "kubernetes_pr_6727"}], "negative_passages": []} {"query_id": "q-en-kubernetes-04040bdb5cd938b726629688668f9baf9bede652c53f08edb44f58e6b143c345", "query": "Restarting might also be acceptable. It's a pretty painful user experience right now if this happens-- it fills the logs with this message.\nYes, it should just panic() , Daniel Smith wrote:\nYeah I guess if it's out of FDs it can't really open a network connection to file an event...\nkube-proxy should send an event when it starts up, just like kubelet does with its method. This works around the fact that you can't send an event when you are out of FDs, and it still provides a lot of debugging value.\nAgreed with 1) kube-proxy should panic when hitting out of fd 2) restarted kube-proxy should pick up the crash reason like what we do for internal agent 3) crash reason should be propogated to upstream as an event.\nMoving to the right sig.\nIssues go stale after 90d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. Prevent issues from auto-closing with an comment. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle stale\n/lifecycle frozen\n/remove-lifecycle stale\n/area kube-proxy\nwe're going to close this because, after 3 years or so , decreasing interest in the userspace proxy and nobody other then some openshift folks, are using the linux kube proxy .. can reopen if necessary ping :)\n/close\nClosing this issue. 
if isTooManyFDsError(err) { panic(\"Accept failed: \" + err.Error()) } glog.Errorf(\"Accept failed: %v\", err) continue }", "commid": "kubernetes_pr_6727"}], "negative_passages": []} {"query_id": "q-en-kubernetes-04040bdb5cd938b726629688668f9baf9bede652c53f08edb44f58e6b143c345", "query": "Restarting might also be acceptable. It's a pretty painful user experience right now if this happens-- it fills the logs with this message.\nYes, it should just panic() , Daniel Smith wrote:\nYeah I guess if it's out of FDs it can't really open a network connection to file an event...\nkube-proxy should send an event when it starts up, just like kubelet does with its method. This works around the fact that you can't send an event when you are out of FDs, and it still provides a lot of debugging value.\nAgreed with 1) kube-proxy should panic when hitting out of fd 2) restarted kube-proxy should pick up the crash reason like what we do for internal agent 3) crash reason should be propogated to upstream as an event.\nMoving to the right sig.\nIssues go stale after 90d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. Prevent issues from auto-closing with an comment. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle stale\n/lifecycle frozen\n/remove-lifecycle stale\n/area kube-proxy\nwe're going to close this because, after 3 years or so , decreasing interest in the userspace proxy and nobody other then some openshift folks, are using the linux kube proxy .. can reopen if necessary ping :)\n/close\nClosing this issue. func isTooManyFDsError(err error) bool { return strings.Contains(err.Error(), \"too many open files\") } ", "commid": "kubernetes_pr_6727"}], "negative_passages": []} {"query_id": "q-en-kubernetes-683f0af5e9cbef9ac9e42be8d3345c188822e579c19423287c94afa9fbba590f", "query": "This was broken by and then (hopefully) fixed by . 
But we need a test for it. cc", "positive_passages": [{"docid": "doc-en-kubernetes-260b668b9d2b5c332fb8f3aeaac028820db6c03d956dc3c419ea356d5d6a10fe", "text": " // +build integration,!no-etcd /* Copyright 2015 Google Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package integration // This file tests the scheduler. import ( \"net/http\" \"net/http/httptest\" \"testing\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api/errors\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api/testapi\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/apiserver\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client/record\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/master\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/wait\" \"github.com/GoogleCloudPlatform/kubernetes/plugin/pkg/admission/admit\" \"github.com/GoogleCloudPlatform/kubernetes/plugin/pkg/scheduler\" _ \"github.com/GoogleCloudPlatform/kubernetes/plugin/pkg/scheduler/algorithmprovider\" \"github.com/GoogleCloudPlatform/kubernetes/plugin/pkg/scheduler/factory\" ) func init() { requireEtcd() } func TestUnschedulableNodes(t *testing.T) { helper, err := master.NewEtcdHelper(newEtcdClient(), testapi.Version()) if err != nil { t.Fatalf(\"Couldn't create etcd helper: %v\", err) } deleteAllEtcdKeys() var m *master.Master s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req 
*http.Request) { m.Handler.ServeHTTP(w, req) })) defer s.Close() m = master.New(&master.Config{ EtcdHelper: helper, KubeletClient: client.FakeKubeletClient{}, EnableLogsSupport: false, EnableUISupport: false, EnableIndex: true, APIPrefix: \"/api\", Authorizer: apiserver.NewAlwaysAllowAuthorizer(), AdmissionControl: admit.NewAlwaysAdmit(), }) client := client.NewOrDie(&client.Config{Host: s.URL, Version: testapi.Version()}) schedulerConfigFactory := factory.NewConfigFactory(client) schedulerConfig, err := schedulerConfigFactory.Create() if err != nil { t.Fatalf(\"Couldn't create scheduler config: %v\", err) } eventBroadcaster := record.NewBroadcaster() schedulerConfig.Recorder = eventBroadcaster.NewRecorder(api.EventSource{Component: \"scheduler\"}) eventBroadcaster.StartRecordingToSink(client.Events(\"\")) scheduler.New(schedulerConfig).Run() defer close(schedulerConfig.StopEverything) DoTestUnschedulableNodes(t, client) } func podScheduled(c *client.Client, podNamespace, podName string) wait.ConditionFunc { return func() (bool, error) { pod, err := c.Pods(podNamespace).Get(podName) if errors.IsNotFound(err) { return false, nil } if err != nil { // This could be a connection error so we want to retry. return false, nil } if pod.Spec.Host == \"\" { return false, nil } return true, nil } } func DoTestUnschedulableNodes(t *testing.T, client *client.Client) { node := &api.Node{ ObjectMeta: api.ObjectMeta{Name: \"node\"}, Spec: api.NodeSpec{Unschedulable: true}, } if _, err := client.Nodes().Create(node); err != nil { t.Fatalf(\"Failed to create node: %v\", err) } pod := &api.Pod{ ObjectMeta: api.ObjectMeta{Name: \"my-pod\"}, Spec: api.PodSpec{ Containers: []api.Container{{Name: \"container\", Image: \"kubernetes/pause:go\"}}, }, } myPod, err := client.Pods(api.NamespaceDefault).Create(pod) if err != nil { t.Fatalf(\"Failed to create pod: %v\", err) } // There are no schedulable nodes - the pod shouldn't be scheduled. 
err = wait.Poll(time.Second, time.Second*10, podScheduled(client, myPod.Namespace, myPod.Name)) if err == nil { t.Errorf(\"Pod scheduled successfully on unschedulable nodes\") } if err != wait.ErrWaitTimeout { t.Errorf(\"Failed while waiting for scheduled pod: %v\", err) } // Make the node schedulable and wait until the pod is scheduled. newNode, err := client.Nodes().Get(node.Name) if err != nil { t.Fatalf(\"Failed to get node: %v\", err) } newNode.Spec.Unschedulable = false if _, err = client.Nodes().Update(newNode); err != nil { t.Fatalf(\"Failed to update node: %v\", err) } err = wait.Poll(time.Second, time.Second*10, podScheduled(client, myPod.Namespace, myPod.Name)) if err != nil { t.Errorf(\"Failed to schedule a pod: %v\", err) } err = client.Pods(api.NamespaceDefault).Delete(myPod.Name) if err != nil { t.Errorf(\"Failed to delete pod: %v\", err) } } ", "commid": "kubernetes_pr_6901"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b08731726b30375f5b0183cb678680fc867aa4d0b55fd733e8704ed7e56ca62", "query": "kubernetes]$ make hack/build- Detected go version: go version devel + Wed Mar 4 20:55:55 2015 +0000 linux/amd64.Kubernetes requires go version 1.2 or greater.Please install Go version 1.2 or later. !!! Error in 'kube::golang::buildbinaries \"$ exited with status 2 Call stack: 1: kube::golang::buildbinaries(...) 2: hack/build- main(...) Exiting with status 1 make: * [all] Error 1\nHello, it would be really nice feature to have and it seems easy to fix/implement. 
Are there any plans on fixing the build?", "positive_passages": [{"docid": "doc-en-kubernetes-596f366fa566bc20994b853c1e31f6139b85be55792a1a4bd36a7dbc1640b5f7", "text": "if [[ \"${TRAVIS:-}\" != \"true\" ]]; then local go_version go_version=($(go version)) if [[ \"${go_version[2]}\" < \"go1.6\" ]]; then if [[ \"${go_version[2]}\" < \"go1.6\" && \"${go_version[2]}\" != \"devel\" ]]; then kube::log::usage_from_stdin < var ip net.IP for i = range intfs { if flagsSet(intfs[i].Flags, net.FlagUp) && flagsClear(intfs[i].Flags, net.FlagLoopback|net.FlagPointToPoint) { addrs, err := intfs[i].Addrs()", "commid": "kubernetes_pr_7721"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b7b85da4ef1fcf3b1c9916279f506d55083905ef54f59a6d8f39cac395ba8dbc", "query": "The master sets its endpoints into the 'kubernetes' service, but endpoints can only be IPv4 today. So ChooseHostInterface() should either not return an IPv6 address, or we should allow endpoints to be IPv6.\nI think that this is likely the cause of .\nI thought that function explicitly checked for v4. got any bandwidth for review? On May 4, 2015 11:24 AM, \"Robert Bailey\" wrote:\nIt doesn't - and on my mac, it fails to find this address when I add that logic. Not sure if that's FlagPointToPoint or something: ----- Original Message -----\nPEBKAC for the issue I mentioned above, is opened to fix the choice.\nYeah, I can take a look at later this afternoon, just as soon as I've unbroken ELB's :-) smarterclayton@ let me know if is more urgent than that.\nwas smaller than I thought, so committed priority inversion I reviewed it :-)", "positive_passages": [{"docid": "doc-en-kubernetes-75266255fad9538b91757d14d8c1567d65eaf6e6a3c40bb991c0c47926346f38", "text": "return nil, err } if len(addrs) > 0 { // This interface should suffice. 
break for _, addr := range addrs { if addrIP, _, err := net.ParseCIDR(addr.String()); err == nil { if addrIP.To4() != nil { ip = addrIP.To4() break } } } if ip != nil { // This interface should suffice. break } } } } if i == len(intfs) { return nil, err if ip == nil { return nil, fmt.Errorf(\"no acceptable interface from host\") } glog.V(4).Infof(\"Choosing interface %s for from-host portals\", intfs[i].Name) addrs, err := intfs[i].Addrs() if err != nil { return nil, err } glog.V(4).Infof(\"Interface %s = %s\", intfs[i].Name, addrs[0].String()) ip, _, err := net.ParseCIDR(addrs[0].String()) if err != nil { return nil, err } return ip, nil }", "commid": "kubernetes_pr_7721"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0cb771279e2d0e1af0305b6952611afffca3feab7817e435889dc80d9ca21ce4", "query": "This page: Links to this YAML: Which doesn't exist. There are several examples now in the NFS folder:\nThanks for reporting! Always happy to get docs up-to-date, especially to reduce friction. Please report / open PRs for any others. Also, if the examples / explanations aren't clear, we're happy to address issues for that as well.", "positive_passages": [{"docid": "doc-en-kubernetes-050caf3e2e99eef829227dc2c2ddb069f28b920f81d8423a261f6fd9c0c2006a", "text": "``` ### NFS Kubernetes NFS volumes allow an existing NFS share to be made available to containers within a pod. Kubernetes NFS volumes allow an existing NFS share to be made available to containers within a pod. [The NFS Pod example](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/nfs/test.yaml) demonstrates how to specify the usage of an NFS volume within a pod. In this example one can see that a volumeMount called \"myshare\" is being mounted onto /var/www/html/mount-test in the container \"testpd\". The volume \"myshare\" is defined as type nfs, with the NFS server serving from 172.17.0.2 and exporting directory /tmp as the share. 
The mount being created in this example is not read only. See the [NFS Pod examples](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/nfs/) section for more details. For example, [nfs-web-pod.yaml](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/nfs/nfs-web-pod.yaml) demonstrates how to specify the usage of an NFS volume within a pod. In this example one can see that a `volumeMount` called \"nfs\" is being mounted onto `/var/www/html` in the container \"web\". The volume \"nfs\" is defined as type `nfs`, with the NFS server serving from `nfs-server.default.kube.local` and exporting directory `/` as the share. The mount being created in this example is not read only. ", "commid": "kubernetes_pr_7816"}], "negative_passages": []} {"query_id": "q-en-kubernetes-832c9b022e2d3879328d55b738068527b96cd04964ec313be5394fb24e6f52f7", "query": "kubectl 0.17.0: curl http://localhost:8080/api/v1beta3/minions { \"kind\": \"Status\", \"apiVersion\": \"v1beta3\", \"metadata\": {}, \"status\": \"Failure\", \"message\": \"the server could not find the requested resource\", \"reason\": \"NotFound\", \"details\": {}, \"code\": 404 } master3 core # /opt/bin/kubectl get mi master3 core # kubectl 0.16.0: uses: http://localhost:8080/api/v1beta1/minions master3 core # ./kubectl get mi NAME LABELS STATUS 192.168.10.70 ###### Auto generated by spf13/cobra at 2015-05-28 22:43:52.329286408 +0000 UTC ###### Auto generated by spf13/cobra at 2015-05-29 22:39:51.164275749 +0000 UTC [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_get.md?pixel)]()", "commid": "kubernetes_pr_9027"}], "negative_passages": []} {"query_id": "q-en-kubernetes-832c9b022e2d3879328d55b738068527b96cd04964ec313be5394fb24e6f52f7", "query": "kubectl 0.17.0: curl http://localhost:8080/api/v1beta3/minions { \"kind\": \"Status\", \"apiVersion\": \"v1beta3\", \"metadata\": {}, \"status\": \"Failure\", \"message\": \"the server could not find the 
requested resource\", \"reason\": \"NotFound\", \"details\": {}, \"code\": 404 } master3 core # /opt/bin/kubectl get mi master3 core # kubectl 0.16.0: uses: http://localhost:8080/api/v1beta1/minions master3 core # ./kubectl get mi NAME LABELS STATUS 192.168.10.70 minions (mi), persistent volumes (pv), persistent volume claims (pvc) nodes (no), persistent volumes (pv), persistent volume claims (pvc) or resource quotas (quota). .PP", "commid": "kubernetes_pr_9027"}], "negative_passages": []} {"query_id": "q-en-kubernetes-832c9b022e2d3879328d55b738068527b96cd04964ec313be5394fb24e6f52f7", "query": "kubectl 0.17.0: curl http://localhost:8080/api/v1beta3/minions { \"kind\": \"Status\", \"apiVersion\": \"v1beta3\", \"metadata\": {}, \"status\": \"Failure\", \"message\": \"the server could not find the requested resource\", \"reason\": \"NotFound\", \"details\": {}, \"code\": 404 } master3 core # /opt/bin/kubectl get mi master3 core # kubectl 0.16.0: uses: http://localhost:8080/api/v1beta1/minions master3 core # ./kubectl get mi NAME LABELS STATUS 192.168.10.70 [ \"$(kubectl get minions -t '{{ .apiVersion }}' \"${kube_flags[@]}\")\" == \"v1beta3\" ] [ \"$(kubectl get nodes -t '{{ .apiVersion }}' \"${kube_flags[@]}\")\" == \"v1beta3\" ] else kube_flags=( -s \"http://127.0.0.1:${API_PORT}\" --match-server-version --api-version=\"${version}\" ) [ \"$(kubectl get minions -t '{{ .apiVersion }}' \"${kube_flags[@]}\")\" == \"${version}\" ] [ \"$(kubectl get nodes -t '{{ .apiVersion }}' \"${kube_flags[@]}\")\" == \"${version}\" ] fi id_field=\".metadata.name\" labels_field=\".metadata.labels\"", "commid": "kubernetes_pr_9027"}], "negative_passages": []} {"query_id": "q-en-kubernetes-832c9b022e2d3879328d55b738068527b96cd04964ec313be5394fb24e6f52f7", "query": "kubectl 0.17.0: curl http://localhost:8080/api/v1beta3/minions { \"kind\": \"Status\", \"apiVersion\": \"v1beta3\", \"metadata\": {}, \"status\": \"Failure\", \"message\": \"the server could not find the requested resource\", 
\"reason\": \"NotFound\", \"details\": {}, \"code\": 404 } master3 core # /opt/bin/kubectl get mi master3 core # kubectl 0.16.0: uses: http://localhost:8080/api/v1beta1/minions master3 core # ./kubectl get mi NAME LABELS STATUS 192.168.10.70 # Minions # # Nodes # ########### if [[ \"${version}\" = \"v1beta1\" ]] || [[ \"${version}\" = \"v1beta2\" ]]; then kube::log::status \"Testing kubectl(${version}:minions)\" kube::log::status \"Testing kubectl(${version}:nodes)\" kube::test::get_object_assert minions \"{{range.items}}{{$id_field}}:{{end}}\" '127.0.0.1:' kube::test::get_object_assert nodes \"{{range.items}}{{$id_field}}:{{end}}\" '127.0.0.1:' # TODO: I should be a MinionList instead of List kube::test::get_object_assert minions '{{.kind}}' 'List' kube::test::get_object_assert nodes '{{.kind}}' 'List' kube::test::describe_object_assert minions \"127.0.0.1\" \"Name:\" \"Conditions:\" \"Addresses:\" \"Capacity:\" \"Pods:\" kube::test::describe_object_assert nodes \"127.0.0.1\" \"Name:\" \"Conditions:\" \"Addresses:\" \"Capacity:\" \"Pods:\" fi", "commid": "kubernetes_pr_9027"}], "negative_passages": []} {"query_id": "q-en-kubernetes-832c9b022e2d3879328d55b738068527b96cd04964ec313be5394fb24e6f52f7", "query": "kubectl 0.17.0: curl http://localhost:8080/api/v1beta3/minions { \"kind\": \"Status\", \"apiVersion\": \"v1beta3\", \"metadata\": {}, \"status\": \"Failure\", \"message\": \"the server could not find the requested resource\", \"reason\": \"NotFound\", \"details\": {}, \"code\": 404 } master3 core # /opt/bin/kubectl get mi master3 core # kubectl 0.16.0: uses: http://localhost:8080/api/v1beta1/minions master3 core # ./kubectl get mi NAME LABELS STATUS 192.168.10.70 minions (mi), persistent volumes (pv), persistent volume claims (pvc) nodes (no), persistent volumes (pv), persistent volume claims (pvc) or resource quotas (quota). 
By specifying the output as 'template' and providing a Go template as the value", "commid": "kubernetes_pr_9027"}], "negative_passages": []} {"query_id": "q-en-kubernetes-832c9b022e2d3879328d55b738068527b96cd04964ec313be5394fb24e6f52f7", "query": "kubectl 0.17.0: curl http://localhost:8080/api/v1beta3/minions { \"kind\": \"Status\", \"apiVersion\": \"v1beta3\", \"metadata\": {}, \"status\": \"Failure\", \"message\": \"the server could not find the requested resource\", \"reason\": \"NotFound\", \"details\": {}, \"code\": 404 } master3 core # /opt/bin/kubectl get mi master3 core # kubectl 0.16.0: uses: http://localhost:8080/api/v1beta1/minions master3 core # ./kubectl get mi NAME LABELS STATUS 192.168.10.70 \"fmt\" \"strings\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\"", "commid": "kubernetes_pr_9027"}], "negative_passages": []} {"query_id": "q-en-kubernetes-832c9b022e2d3879328d55b738068527b96cd04964ec313be5394fb24e6f52f7", "query": "kubectl 0.17.0: curl http://localhost:8080/api/v1beta3/minions { \"kind\": \"Status\", \"apiVersion\": \"v1beta3\", \"metadata\": {}, \"status\": \"Failure\", \"message\": \"the server could not find the requested resource\", \"reason\": \"NotFound\", \"details\": {}, \"code\": 404 } master3 core # /opt/bin/kubectl get mi master3 core # kubectl 0.16.0: uses: http://localhost:8080/api/v1beta1/minions master3 core # ./kubectl get mi NAME LABELS STATUS 192.168.10.70 return e.RESTMapper.VersionAndKindForResource(resource) defaultVersion, kind, err = e.RESTMapper.VersionAndKindForResource(resource) // TODO: remove this once v1beta1 and v1beta2 are deprecated if err == nil && kind == \"Minion\" { err = fmt.Errorf(\"Alias minion(s) is deprecated. 
Use node(s) instead\") } return defaultVersion, kind, err } // expandResourceShortcut will return the expanded version of resource", "commid": "kubernetes_pr_9027"}], "negative_passages": []} {"query_id": "q-en-kubernetes-832c9b022e2d3879328d55b738068527b96cd04964ec313be5394fb24e6f52f7", "query": "kubectl 0.17.0: curl http://localhost:8080/api/v1beta3/minions { \"kind\": \"Status\", \"apiVersion\": \"v1beta3\", \"metadata\": {}, \"status\": \"Failure\", \"message\": \"the server could not find the requested resource\", \"reason\": \"NotFound\", \"details\": {}, \"code\": 404 } master3 core # /opt/bin/kubectl get mi master3 core # kubectl 0.16.0: uses: http://localhost:8080/api/v1beta1/minions master3 core # ./kubectl get mi NAME LABELS STATUS 192.168.10.70 \"mi\": \"minions\", \"no\": \"nodes\", \"po\": \"pods\", \"pv\": \"persistentVolumes\", \"pvc\": \"persistentVolumeClaims\",", "commid": "kubernetes_pr_9027"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cb019f3c5a5459d10f5c4e9d499a59fe3549588177be47b0dd8f7a52a6551849", "query": "If a pod is not scheduled because it cannot find a fit, you run into really ugly event output like the following: This makes the output of horrific to work with. 
A better event would just read: The pod is known from the involved object already so no need to encode it all.", "positive_passages": [{"docid": "doc-en-kubernetes-da62bf18bfaf57b7720326dad878c6948bcea64b3eac24e91547b344597e9159", "text": "// implementation of the error interface func (f *FitError) Error() string { output := fmt.Sprintf(\"failed to find fit for pod: %v\", f.Pod) output := fmt.Sprintf(\"failed to find fit for pod, \") for node, predicateList := range f.FailedPredicates { output = output + fmt.Sprintf(\"Node %s: %s\", node, strings.Join(predicateList.List(), \",\")) }", "commid": "kubernetes_pr_8532"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7801592fe1a929421527a19ff1e4db70f69311885756bd569f596cfe383cc288", "query": "The following lines are incorrect. Set your cluster as the default cluster to use: - Also the whole section is not so appropriate when it comes to issue. After all, certificate authentication works well without such token thing. Why trouble adding parm?", "positive_passages": [{"docid": "doc-en-kubernetes-d128b45c8e928fa1fd4466b7d39f18fa4558800ffcce9d4a0f1937d1380e0872", "text": "- `kubectl config set-cluster $CLUSTER_NAME --server=http://$MASTER_IP --insecure-skip-tls-verify=true` - Otherwise, do this to set the apiserver ip, client certs, and user credentials. 
- `kubectl config set-cluster $CLUSTER_NAME --certificate-authority=$CA_CERT --embed-certs=true --server=https://$MASTER_IP` - `kubectl config set-credentials $CLUSTER_NAME --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN` - `kubectl config set-credentials $USER --client-certificate=$CLI_CERT --client-key=$CLI_KEY --embed-certs=true --token=$TOKEN` - Set your cluster as the default cluster to use: - `kubectl config set-context $CLUSTER_NAME --cluster=$CLUSTER_NAME --user=admin` - `kubectl config use-context $CONTEXT --cluster=$CONTEXT` - `kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NAME --user=$USER` - `kubectl config use-context $CONTEXT_NAME` Next, make a kubeconfig file for the kubelets and kube-proxy. There are a couple of options for how many distinct files to make:", "commid": "kubernetes_pr_11680"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bf5172e25b0232eb28e1fcc7a2afb381c4a8c5c84602ffee854fe026df1028c9", "query": "Hello, I am having issues with a deployment on AWS with version 1.0.1. I had a collision in security group when I launch a service and there already was a security group from an earlier deployment of the \"same\" cluster. I deleted the security group and then kubernetes managed to create the load balancer so thats good, but since then I am having these messages in a loop: E0729 14:48:07. 8 ] Could not delete route k8s-v1.0.1-10.244.2.0/24 10.244.2.0/24: error deleting AWS route (10.244.2.0/24): RequestLimitExceeded: Request limit exceeded. status code: 503, request id: [] E0729 14:48:07. 8 ] Could not create route -35f5-11e5-8163- 10.244.2.0/24: error configuring source-dest-check on instance i-: RequestLimitExceeded: Request limit exceeded. status code: 503, request id: [] E0729 14:48:07. 8 ] Could not create route -35f5-11e5-8163- 10.244.3.0/24: error creating AWS route (10.244.3.0/24): RequestLimitExceeded: Request limit exceeded. 
status code: 503, request id: [] E0729 14:48:17. 8 ] Could not create route -35f5-11e5-8163- 10.244.1.0/24: error configuring source-dest-check on instance i-: RequestLimitExceeded: Request limit exceeded. status code: 503, request id: [] How can I stop them, at least to get under the request limit and try again? Thanks\nI'm seeing the same thing, and AWS let me know they didn't like it: It seems like maybe when we describe the route tables Kubernetes is not seeing them as being consistent with what it expects internally. Any ideas on what the root cause might be here? I haven't looked at this piece closely yet, but will try to find and fix the issue.\nSome research - I believe this has to do with the name that Kubernetes has assigned to my instances internally. tells me that my node looks like this: Whereas describe-route-tables returns the Amazon instance identifier: Based on the above, I'm guessing we should just use node.Spec.ExternalID instead of node.Name when doing the reconciliation. I'll start on a PR and test this.\nI am not doing the calls, Kubernetes is for me :). It tried to create a load balancer and failed; after that I helped when I saw the logs saying that there already was some security group, deleted it, and then it managed to create the load balancer. So I guess the route creation comes from that. My use case is not something out of the ordinary: I launched a reverse proxy and it tried to create an external load balancer on AWS. I could try to create the routes manually in the AWS console, just tell me what are the routes that Kubernetes is trying to create. And did you receive a message from AWS on my account or is it one of yours? Adam Guldemann\nYep, sorry, my flurry of details might have been confusing. What I was trying to say was: Kubernetes is doing the same thing on my AWS account and they actually sent me that message to let me know.
The load balancer problem you saw is actually another issue that I believe has been merged and should be in the next release, see . No need to create the routes manually, they have probably been created by Kubernetes. The problem, at least as I understand it, is that Kubernetes tries to reconcile the routes it sees on AWS against what it knows internally. That reconciliation fails because of a mismatch between how AWS and Kubernetes identify nodes. I'm working on a fix for that and will have a PR pending some testing on my cluster. I'll keep you posted about the progress on that here.\nthank you for taking this one on - feels pretty serious if AWS are sending warnings out, so I appreciate it. Please tag me when the PR is up (and/or put up a WIP PR and I can test too!) LMK if there's anything I can do to assist.\nI attempted a fix for this on . I'm testing now and am considering adding some logging at a higher log level for future debugging.\nWe should probably add a rate limiter into the AWS client, like we did for GCE:\nagree. I can take a shot at that once I get my fix for route reconciliation done.\nI am having issues with redis / rethinkdb cluster on my AWS cluster, I am wondering if that could from the fact that it can't create routes or maybe deleting and recreating ? Here is a ping request from one container to another for like 5 minutes: .... 
64 bytes from 10.244.1.10: icmp_seq=962 ttl=62 time=0.693 ms
64 bytes from 10.244.1.10: icmp_seq=963 ttl=62 time=0.670 ms
[... repeated replies for icmp_seq 964-985 elided, times 0.38-0.73 ms ...]
64 bytes from 10.244.1.10: icmp_seq=986 ttl=62 time=0.606 ms
^C--- 10.244.1.10 ping statistics ---
987 packets transmitted, 802 packets received, 18% packet loss
round-trip min/avg/max/stddev = 0.323/0.547/4.269/0.165 ms
I see 18% packet loss, any ideas where that could come from?\nIf I ping google from the same container, 0% packet loss to compare:
64 bytes from 8.8.8.8: icmp_seq=193 ttl=54 time=0.988 ms
[... repeated replies for icmp_seq 194-201 elided, times 0.95-1.09 ms ...]
64 bytes from 8.8.8.8: icmp_seq=202 ttl=54 time=1.014 ms
^C--- 8.8.8.8 ping statistics ---
203 packets transmitted, 203 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.887/1.027/1.166/0.055 ms\nClosed, but obviously please reopen if the problem continues. And I think we're tracking the ping problems separately in ; I have my fingers crossed that we will fix them with this patch also.\nwe're def seeing this right now on AWS. We spun up about 40 nodes using the default k8s aws installation, and are at times crippled by this... Even to the point where pods fail to start because k8s is unable to mount their EBS volumes due to rate limits. Any ideas re workarounds? Anything we can do interim? FWIW, we've spoken to AWS premium support, and they are pretty adamant about not increasing their API quota. They claim that the quotas should be sufficient, and that they won't even raise those for their biggest customers...\nI'm curious if you had any luck with this? Unfortunately 1 year later we are experiencing the same problems with EBS volumes. Makes me wonder if the Persistent Volume API makes a ton of calls for EBS volumes.
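The identifier mismatch described in the first comment can be sketched as follows. This is a hypothetical Go illustration, not the actual cloudprovider code: route targets read back from the cloud are keyed by instance ID, while Kubernetes knows its nodes by private DNS name, so reconciliation keyed on the raw target treats every existing route as orphaned and re-creates it on each loop, burning API quota. All names here (route, reconcile, the sample IDs) are made up.

```go
package main

import "fmt"

// route is an illustrative stand-in for a cloud route: a target node and a
// destination CIDR.
type route struct{ target, cidr string }

// reconcile returns the desired routes that are missing from actual.
// Translating instance IDs to node names before comparing is the fix: with
// an empty translation map, an existing route keyed by instance ID never
// matches the desired route keyed by node name, so it is re-created forever.
func reconcile(desired, actual []route, idToName map[string]string) (toCreate []route) {
	have := map[string]bool{}
	for _, r := range actual {
		name := r.target
		if n, ok := idToName[r.target]; ok {
			name = n // translate instance ID -> node name
		}
		have[name+"|"+r.cidr] = true
	}
	for _, r := range desired {
		if !have[r.target+"|"+r.cidr] {
			toCreate = append(toCreate, r)
		}
	}
	return
}

func main() {
	desired := []route{{"ip-10-0-0-1.ec2.internal", "10.244.1.0/24"}}
	actual := []route{{"i-0abc", "10.244.1.0/24"}}
	// With the translation in place, the existing route is recognized and
	// nothing needs to be created.
	fmt.Println(len(reconcile(desired, actual, map[string]string{"i-0abc": "ip-10-0-0-1.ec2.internal"}))) // 0
	// Without it, the same route looks orphaned every single loop.
	fmt.Println(len(reconcile(desired, actual, nil))) // 1
}
```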
As a side note, we are also having problems with terraform and even packer when running in the same account as our kubernetes cluster, which leads me to believe it is indeed the kubernetes cluster", "positive_passages": [{"docid": "doc-en-kubernetes-60c2f67364227f41744e421d2501cf12e8f5450e249650348658ea9b55bb1115", "text": "continue } instance, err := s.getInstanceById(instanceID) if err != nil { return nil, err } instanceName := orEmpty(instance.PrivateDNSName) routeName := clusterName + \"-\" + destinationCIDR routes = append(routes, &cloudprovider.Route{routeName, instanceID, destinationCIDR}) routes = append(routes, &cloudprovider.Route{routeName, instanceName, destinationCIDR}) } return routes, nil", "commid": "kubernetes_pr_12029"}], "negative_passages": []} {"query_id": "q-en-kubernetes-907072de4daa6911db8f6a81adc78d97e42d008af334d61c5840b8a732fcb546", "query": "When mounting several shares over NFS to container and when one of the directories is deleted from NFS server, something in the node is deleting content from other shares. I suspect that something goes wrong with \"cleanupOrphanedVolumes\"? not sure tough. Use case: php-app is running in container, uses php-sessions for remembering login. php sessions are stored in file and shared through nfs-share between containers. also app code is shared and several other directories to store different data (data dirs are mapped under code, code is in separate share so generated document files don't get to git tree), all is managed by separate integration server (in container), that uses same shares to prepare the code. This happened first time in GCE with Cluster API version 0.17.something, now it acts same on 1.0.1 and 0.18.2. Problem is repeatable: First of all you need to install package \"nfs-common\" to all cluster nodes, otherwise you can't map NFS shares at all: now you need nfs server. 
I created a small instance for that with /etc/exports stating: also I made directories under /export/test/: next you need to start it up (replace the nfs server ip with your own): now start up the pod: verify that the directories are mapped: now the funny part: remove a directory from the nfs server. the zubuntu pod shows: nothing happens on the nfs server yet, but just launch the stop command: and now the nfs server shows: you see that one file was left, but not for long. This is because root in the client does not have root access in the nfs file system, but just after giving rights, the file is gone: now you can test the "black hole" by creating new files. They disappear from the server in a few moments. The disappearing continues until the cluster node is restarted.\nso it looks to me like the problem is that containers delete files even though they are not told to. I am not quite convinced is the culprit here, since the directories are still alive. What is the first version in which you believe this problem started to show up? Would it be possible for you to take a tcpdump on the nfs server so we can look at the nfs traffic? Thanks!\nI started to play with the Google Container Engine about the middle of June and the version of the cluster was 0.17.something. I don't have that cluster anymore. So I don't know what happened or whether it worked before. About tcpdump: what ports and what traffic do you think you need for that? Also in what timeframe? Maybe you can even give me a command line for that? Jüri.\nI am setting up an environment to simulate your tests, stay tuned\nyes, I can reproduce your issue. And I was wrong about . You are more likely to be right on that suspect. Blame me if I point you to the wrong suspect.
Here in this line , doesn't return any error if a volume fails to be torn down, which is what is happening here: the first volume is not able to be torn down, yet the error message is suppressed, so later on thinks the volumes are unmounted and it is safe to remove the directory.\nSomething is fishy. Volume cleanup should be happening against the volume dirs, which is totally separate from pod dirs. Pod teardown should un-bind-mount the NFS from the pod dir, so pod cleanup can happen, even if the volume can't umount...\nI didn't touch much of the volume code, but think the directory structure is . Removing pod directories would remove the volumes as well. We can delay the pod directory cleanup until all volumes in it are properly unmounted. Another user reported a similar case in\nin this particular case, the mountpoint cannot be properly unmounted (because it was deleted), and I believe the current NFS mount is , so it won't be easy to unmount even after a delay. The unmount error might be a good heuristic. In this case, an NFS stale handle error is probably a good sign that there is nothing worth waiting for.\nAhh yes, it was only GCE PD that was doing something trickier here , Yu-Ju Hong wrote:\nI see. Thanks for the explanation. I didn't read the original post carefully. If we let the error propagate upstream, kubelet may always skip pod directory cleanup because of one failed teardown. That seems undesirable. I agree that the NFS stale handle error alone should be enough for us to skip waiting. In short, should distinguish the NFS error and report a successful teardown in this case. should check if there are any uncleaned volume directories, and avoid removing the specific pod.\ncool, thanks.
When a patch is available, I'll give it a test on other volumes too.\nThis was completed by in PR can you take a crack at this?\nok, glad to do it.\nThanks!", "positive_passages": [{"docid": "doc-en-kubernetes-42589122f09f7acb8f52967b1e8cf7978f820445514ee3daed96eeac483b1f77", "text": "for _, volumeKindDir := range volumeKindDirs { volumeKind := volumeKindDir.Name() volumeKindPath := path.Join(podVolDir, volumeKind) volumeNameDirs, err := ioutil.ReadDir(volumeKindPath) // ioutil.ReadDir exits without returning any healthy dir when encountering the first lstat error // but skipping dirs means no cleanup for healthy volumes. switching to a no-exit api solves this problem volumeNameDirs, volumeNameDirsStat, err := util.ReadDirNoExit(volumeKindPath) if err != nil { return []*volumeTuple{}, fmt.Errorf(\"could not read directory %s: %v\", volumeKindPath, err) } for _, volumeNameDir := range volumeNameDirs { volumes = append(volumes, &volumeTuple{Kind: volumeKind, Name: volumeNameDir.Name()}) for i, volumeNameDir := range volumeNameDirs { if volumeNameDir != nil { volumes = append(volumes, &volumeTuple{Kind: volumeKind, Name: volumeNameDir.Name()}) } else { lerr := volumeNameDirsStat[i] glog.Errorf(\"Could not read directory %s: %v\", podVolDir, lerr) } } } return volumes, nil", "commid": "kubernetes_pr_13825"}], "negative_passages": []} {"query_id": "q-en-kubernetes-907072de4daa6911db8f6a81adc78d97e42d008af334d61c5840b8a732fcb546", "query": "When mounting several shares over NFS to container and when one of the directories is deleted from NFS server, something in the node is deleting content from other shares. I suspect that something goes wrong with \"cleanupOrphanedVolumes\"? not sure tough. Use case: php-app is running in container, uses php-sessions for remembering login. php sessions are stored in file and shared through nfs-share between containers. 
should check if there are any uncleaned volume directories, and avoid removing the specific pod.\ncool, thanks. When a patch is available, I'll give it a test on other volumes too.\nThis was completed by in PR can you take a crack at this?\nok, glad to do it.\nThanks!", "positive_passages": [{"docid": "doc-en-kubernetes-733ebd6433db6c8caa06952a50b5dd20d52087dcdedcaa6ee0a5c614d47a56dd", "text": "} return true, nil } // borrowed from ioutil.ReadDir // ReadDir reads the directory named by dirname and returns // a list of directory entries, minus those with lstat errors func ReadDirNoExit(dirname string) ([]os.FileInfo, []error, error) { if dirname == \"\" { dirname = \".\" } f, err := os.Open(dirname) if err != nil { return nil, nil, err } defer f.Close() names, err := f.Readdirnames(-1) list := make([]os.FileInfo, 0, len(names)) errs := make([]error, 0, len(names)) for _, filename := range names { fip, lerr := os.Lstat(dirname + \"/\" + filename) if os.IsNotExist(lerr) { // File disappeared between readdir + stat. // Just treat it as if it didn't exist. continue } list = append(list, fip) errs = append(errs, lerr) } return list, errs, nil } ", "commid": "kubernetes_pr_13825"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cab9179de0369f40f8cfa54fff0d46c239e117acfc7e0e5362f896602ccc17de", "query": "I follwed the Getting Started guide from and tried deploying a vagrant based cluster on RHEL 7, first by building the source code and then when that didn't work, using the curl script. 
In both cases the cluster seems to be set up and I can do a vagrant ssh to the master and a minion, but running kubectl get pods or for that matter almost all kubectl commands return: error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: no route to host I read somewhere that this could happen if your config was faulty, so I tried config view and this is what I get:

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:8080
  name: local
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.245.1.2
  name: vagrant
contexts:
- context:
    cluster: local
    user: ""
  name: local
- context:
    cluster: vagrant
    user: vagrant
  name: vagrant
current-context: vagrant
kind: Config
preferences: {}
users:
- name: vagrant
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: vagrant
    username: vagrant

I'd gotten this to work on a different machine successfully with v1beta3. Unfortunately I don't have access to that machine any more so I don't know what the configuration had been set to. Any help would be much appreciated. Cheers --Rupa\nLooking at the config-default.sh file in the vagrant folder, I see that MASTER_IP and KUBE_MASTER_IP are set to 10.245.1.2. However, pinging the IP address doesn't work. When I do a vagrant ssh master and an ifconfig, the eth0 inet address is not 10.245.1.2 either.\nI am having the same problem. Running with ends with: are you seeing this on latest master? Edit: this just recently occurred when rebasing my working branch to upstream, I'm not sure exactly when this occurred however as I have been working on the networking, but yesterday morning this worked fine.\nThere was a bug in the v1 release that caused network manager to control the eth1 device when it should not have been doing so.
The fix for that was merged in HEAD a few weeks ago, and the cherry pick to go in the 1.0 release was made last night here: As for e2e, it had passed conformance tests, but I will see if I can reproduce the issue you are seeing now to determine the cause if I can reproduce at HEAD.\n- I was able to reproduce your error just now. There must have been some kind of regression in the last day. I see the kubelet starts on the master, and docker is running, but doing a shows no attempt to start the api server container. I will need to do some digging into what caused the regression in the last day.\n- I think I found the offending PR that broke the setup here:\nthanks for checking this out. Is there anything I can do to help sort out the regression? Edit: read , looks like you have it covered already. Thanks again!\nSee fix above.\nOnce there's a fix, downloading the latest source from MASTER or using the curl script again should both work?\nLatest source from MASTER should work fine. Please confirm. , rupsee wrote:\nConfirmed that latest master is again working, sorry it took so long to get back to you.\nI'll try it and post here tomorrow, it's the end of my day here now. Thanks a bunch for the quick turnaround.\nI still see a similar issue with a somewhat different error message, "connection refused" instead of "no route to host". Here's what I did:

git clone
cd kubernetes
make quick-release
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-

At the bottom of the master and minion updates I see this: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Wrote config for vagrant to Each machine instance has been created/updated. Now waiting for the Salt provisioning process to complete on each machine. This can take some time based on your network, disk, and cpu speed. It is possible for an error to occur during Salt provision of cluster and this could loop forever. Validating master Validating minion-1 ......
Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: connection refused xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Then I tried: [root kubernetes]# get nodes error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: connection refused\nhere's part of what I get from ifconfig from the master: .................................................................................................. cbr0: flags=4163 DAEMON_ARGS=\"{{daemon_args}} {{api_servers_with_port}} {{kubeconfig}} {{pillar['log_level']}}\" # test_args has to be kept at the end, so they'll overwrite any prior configuration DAEMON_ARGS=\"$DAEMON_ARGS + {{test_args}}\" DAEMON_ARGS=\"{{daemon_args}} {{api_servers_with_port}} {{kubeconfig}} {{pillar['log_level']}} {{test_args}}\" ", "commid": "kubernetes_pr_12301"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cab9179de0369f40f8cfa54fff0d46c239e117acfc7e0e5362f896602ccc17de", "query": "I follwed the Getting Started guide from and tried deploying a vagrant based cluster on RHEL 7, first by building the source code and then when that didn't work, using the curl script. 
Waiting for each minion to be registered with cloud provider error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: connection refused xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Then I tried: [root kubernetes]# get nodes error: couldn't read version from server: Get https://10.245.1.2/api: dial tcp 10.245.1.2:443: connection refused\nhere's part of what I get from ifconfig from the master: .................................................................................................. cbr0: flags=4163 DAEMON_ARGS=\"{{daemon_args}} {{api_servers_with_port}} {{debugging_handlers}} {{hostname_override}} {{cloud_provider}} {{config}} {{manifest_url}} --allow_privileged={{pillar['allow_privileged']}} {{pillar['log_level']}} {{cluster_dns}} {{cluster_domain}} {{docker_root}} {{kubelet_root}} {{configure_cbr0}} {{cgroup_root}} {{system_container}} {{pod_cidr}}\" # test_args has to be kept at the end, so they'll overwrite any prior configuration DAEMON_ARGS=\"$DAEMON_ARGS + {{test_args}}\" DAEMON_ARGS=\"{{daemon_args}} {{api_servers_with_port}} {{debugging_handlers}} {{hostname_override}} {{cloud_provider}} {{config}} {{manifest_url}} --allow_privileged={{pillar['allow_privileged']}} {{pillar['log_level']}} {{cluster_dns}} {{cluster_domain}} {{docker_root}} {{kubelet_root}} {{configure_cbr0}} {{cgroup_root}} {{system_container}} {{pod_cidr}} {{test_args}}\" ", "commid": "kubernetes_pr_12301"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bc8df82589815e2e610f5ccd928cca589126610fdc4779d12361c6b9ee84a34", "query": "When I'm accessing persistentVolumes and persistentVolumeClaims, I'm getting nil instead of empty list of items, as happening in other entities when there are no items. { \"kind\": \"PersistentVolumeList\", \"apiVersion\": \"v1\", \"metadata\": { \"selfLink\": \"/api/v1/persistentvolumes\", \"resourceVersion\": \"\" } }\nYes, this seems like a bug. 
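The regression in the salt template shown above is easiest to see once the string is word-split: the broken line appended test_args with a literal " + " inside the quotes, so after the shell splits DAEMON_ARGS into words the daemon receives a stray "+" token in its argument list. A small Go sketch that mimics the shell's field splitting (the flag values here are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// splitArgs mimics the shell's whitespace word-splitting of an unquoted
// variable expansion such as $DAEMON_ARGS.
func splitArgs(s string) []string {
	return strings.Fields(s)
}

func main() {
	daemonArgs := "--v=2"
	testArgs := "--max-pods=10"
	// Broken form: DAEMON_ARGS="$DAEMON_ARGS + {{test_args}}"
	broken := daemonArgs + " + " + testArgs
	// Fixed form: DAEMON_ARGS="$DAEMON_ARGS {{test_args}}"
	fixed := daemonArgs + " " + testArgs
	fmt.Println(splitArgs(broken)) // [--v=2 + --max-pods=10]  <- stray "+" argument
	fmt.Println(splitArgs(fixed))  // [--v=2 --max-pods=10]
}
```

The stray "+" is enough to break the daemon's flag parsing, which matches the "api server container never starts" symptom reported above.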
I see an empty list when I access pods: And items is nil when I access persistent volumes:\nItems shouldn't be tagged with omitempty:\ncc\nPersistentVolumeList and PersistentVolumeClaimList appear to be the only resource types with this problem.\nPlease see for fix.", "positive_passages": [{"docid": "doc-en-kubernetes-eae2ace7fdeeebb19b4e7abea2eb1203e3ce5810d89b901ea12b04d831fb92ba", "text": "\"os\" \"time\" kube_client \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/registry\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util\" \"github.com/coreos/go-etcd/etcd\"", "commid": "kubernetes_pr_118"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bc8df82589815e2e610f5ccd928cca589126610fdc4779d12361c6b9ee84a34", "query": "When I'm accessing persistentVolumes and persistentVolumeClaims, I'm getting nil instead of empty list of items, as happening in other entities when there are no items. { \"kind\": \"PersistentVolumeList\", \"apiVersion\": \"v1\", \"metadata\": { \"selfLink\": \"/api/v1/persistentvolumes\", \"resourceVersion\": \"\" } }\nYes, this seems like a bug. 
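The tag behavior behind this bug can be shown in a few lines of Go: with `omitempty`, a nil Items slice vanishes from the JSON entirely, while without the tag a nil slice serializes as null and only an initialized empty slice yields the `"items": []` clients expect. The type names below are illustrative, not the real API types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// withOmit mimics a list type whose Items field carries `omitempty`.
type withOmit struct {
	Items []string `json:"items,omitempty"`
}

// noOmit mimics the corrected tag, without omitempty.
type noOmit struct {
	Items []string `json:"items"`
}

// marshal is a small helper that panics on (impossible here) encode errors.
func marshal(v interface{}) string {
	b, err := json.Marshal(v)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	fmt.Println(marshal(withOmit{}))                // {}                <- field dropped
	fmt.Println(marshal(noOmit{}))                  // {"items":null}    <- nil slice
	fmt.Println(marshal(noOmit{Items: []string{}})) // {"items":[]}      <- empty list
}
```

So even after removing `omitempty`, the server must still initialize Items to an empty slice (not leave it nil) to return an empty list rather than null.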
I see an empty list when I access pods: And items is nil when I access persistent volumes:\nItems shouldn't be tagged with omitempty:\ncc\nPersistentVolumeList and PersistentVolumeClaimList appear to be the only resource types with this problem.\nPlease see for fix.", "positive_passages": [{"docid": "doc-en-kubernetes-6d81e9a50fa6ef480d54794529ad8406e86eb8c1559dfd6df7fea99fcf335386", "text": "etcd.SetLogger(log.New(os.Stderr, \"etcd \", log.LstdFlags)) controllerManager := registry.MakeReplicationManager(etcd.NewClient([]string{*etcd_servers}), kube_client.Client{ client.Client{ Host: \"http://\" + *master, })", "commid": "kubernetes_pr_118"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bc8df82589815e2e610f5ccd928cca589126610fdc4779d12361c6b9ee84a34", "query": "When I'm accessing persistentVolumes and persistentVolumeClaims, I'm getting nil instead of empty list of items, as happening in other entities when there are no items. { \"kind\": \"PersistentVolumeList\", \"apiVersion\": \"v1\", \"metadata\": { \"selfLink\": \"/api/v1/persistentvolumes\", \"resourceVersion\": \"\" } }\nYes, this seems like a bug. 
I see an empty list when I access pods: And items is nil when I access persistent volumes:\nItems shouldn't be tagged with omitempty:\ncc\nPersistentVolumeList and PersistentVolumeClaimList appear to be the only resource types with this problem.\nPlease see for fix.", "positive_passages": [{"docid": "doc-en-kubernetes-e41a8ac7da5c4913e3b1afba2f468684e76804d65c752d5f81360a994582f75e", "text": "var ( file = flag.String(\"config\", \"\", \"Path to the config file\") etcd_servers = flag.String(\"etcd_servers\", \"\", \"Url of etcd servers in the cluster\") etcdServers = flag.String(\"etcd_servers\", \"\", \"Url of etcd servers in the cluster\") syncFrequency = flag.Duration(\"sync_frequency\", 10*time.Second, \"Max period between synchronizing running containers and config\") fileCheckFrequency = flag.Duration(\"file_check_frequency\", 20*time.Second, \"Duration between checking file for new data\") httpCheckFrequency = flag.Duration(\"http_check_frequency\", 20*time.Second, \"Duration between checking http for new data\") manifest_url = flag.String(\"manifest_url\", \"\", \"URL for accessing the container manifest\") manifestUrl = flag.String(\"manifest_url\", \"\", \"URL for accessing the container manifest\") address = flag.String(\"address\", \"127.0.0.1\", \"The address for the info server to serve on\") port = flag.Uint(\"port\", 10250, \"The port for the info server to serve on\") hostnameOverride = flag.String(\"hostname_override\", \"\", \"If non-empty, will use this string as identification instead of the actual hostname.\") ) const dockerBinary = \"/usr/bin/docker\"", "commid": "kubernetes_pr_118"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bc8df82589815e2e610f5ccd928cca589126610fdc4779d12361c6b9ee84a34", "query": "When I'm accessing persistentVolumes and persistentVolumeClaims, I'm getting nil instead of empty list of items, as happening in other entities when there are no items. 
{ \"kind\": \"PersistentVolumeList\", \"apiVersion\": \"v1\", \"metadata\": { \"selfLink\": \"/api/v1/persistentvolumes\", \"resourceVersion\": \"\" } }\nYes, this seems like a bug. I see an empty list when I access pods: And items is nil when I access persistent volumes:\nItems shouldn't be tagged with omitempty:\ncc\nPersistentVolumeList and PersistentVolumeClaimList appear to be the only resource types with this problem.\nPlease see for fix.", "positive_passages": [{"docid": "doc-en-kubernetes-2a18d895358c72e58c2762392f6dc264ab9dc1f77a39ac5e5d1d36dae51d1f0b", "text": "log.Fatal(\"Couldn't connnect to docker.\") } hostname, err := exec.Command(\"hostname\", \"-f\").Output() if err != nil { log.Fatalf(\"Couldn't determine hostname: %v\", err) hostname := []byte(*hostnameOverride) if string(hostname) == \"\" { hostname, err = exec.Command(\"hostname\", \"-f\").Output() if err != nil { log.Fatalf(\"Couldn't determine hostname: %v\", err) } } my_kubelet := kubelet.Kubelet{", "commid": "kubernetes_pr_118"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bc8df82589815e2e610f5ccd928cca589126610fdc4779d12361c6b9ee84a34", "query": "When I'm accessing persistentVolumes and persistentVolumeClaims, I'm getting nil instead of empty list of items, as happening in other entities when there are no items. { \"kind\": \"PersistentVolumeList\", \"apiVersion\": \"v1\", \"metadata\": { \"selfLink\": \"/api/v1/persistentvolumes\", \"resourceVersion\": \"\" } }\nYes, this seems like a bug. 
I see an empty list when I access pods: And items is nil when I access persistent volumes:\nItems shouldn't be tagged with omitempty:\ncc\nPersistentVolumeList and PersistentVolumeClaimList appear to be the only resource types with this problem.\nPlease see for fix.", "positive_passages": [{"docid": "doc-en-kubernetes-db13024817c4ac5f84137d799d7e696f88964c989339aa2705d1d064f1df8c84", "text": "SyncFrequency: *syncFrequency, HTTPCheckFrequency: *httpCheckFrequency, } my_kubelet.RunKubelet(*file, *manifest_url, *etcd_servers, *address, *port) my_kubelet.RunKubelet(*file, *manifestUrl, *etcdServers, *address, *port) }", "commid": "kubernetes_pr_118"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6bc8df82589815e2e610f5ccd928cca589126610fdc4779d12361c6b9ee84a34", "query": "When I'm accessing persistentVolumes and persistentVolumeClaims, I'm getting nil instead of empty list of items, as happening in other entities when there are no items. { \"kind\": \"PersistentVolumeList\", \"apiVersion\": \"v1\", \"metadata\": { \"selfLink\": \"/api/v1/persistentvolumes\", \"resourceVersion\": \"\" } }\nYes, this seems like a bug. I see an empty list when I access pods: And items is nil when I access persistent volumes:\nItems shouldn't be tagged with omitempty:\ncc\nPersistentVolumeList and PersistentVolumeClaimList appear to be the only resource types with this problem.\nPlease see for fix.", "positive_passages": [{"docid": "doc-en-kubernetes-477ce19e1526b0fc2d957f0dab10393085d235078b6c8db61b1651688fe1d0ab", "text": " #!/bin/bash # Copyright 2014 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # This command builds and runs a local kubernetes cluster. It's just like # local-up.sh, but this one launches the three separate binaries. # You may need to run this as root to allow kubelet to open docker's socket. if [ \"$(which etcd)\" == \"\" ]; then echo \"etcd must be in your PATH\" exit 1 fi # Stop right away if the build fails set -e $(dirname $0)/build-go.sh echo \"Starting etcd\" ETCD_DIR=$(mktemp -d -t kube-integration.XXXXXX) trap \"rm -rf ${ETCD_DIR}\" EXIT etcd -name test -data-dir ${ETCD_DIR} &> /tmp/etcd.log & ETCD_PID=$! sleep 5 # Shut down anyway if there's an error. set +e API_PORT=8080 KUBELET_PORT=10250 $(dirname $0)/../output/go/apiserver --address=\"127.0.0.1\" --port=\"${API_PORT}\" --etcd_servers=\"http://127.0.0.1:4001\" --machines=\"127.0.0.1\" &> /tmp/apiserver.log & APISERVER_PID=$! $(dirname $0)/../output/go/controller-manager --etcd_servers=\"http://127.0.0.1:4001\" --master=\"127.0.0.1:${API_PORT}\" &> /tmp/controller-manager.log & CTLRMGR_PID=$! $(dirname $0)/../output/go/kubelet --etcd_servers=\"http://127.0.0.1:4001\" --hostname_override=\"127.0.0.1\" --address=\"127.0.0.1\" --port=\"$KUBELET_PORT\" &> /tmp/kubelet.log & KUBELET_PID=$! echo \"Local Kubernetes cluster is running. 
Press enter to shut it down.\" read unused kill ${APISERVER_PID} kill ${CTLRMGR_PID} kill ${KUBELET_PID} kill ${ETCD_PID} ", "commid": "kubernetes_pr_118"}], "negative_passages": []} {"query_id": "q-en-kubernetes-53453bd5ca3e0fd97097a9d5375a1c84f5c3ea86da4fd838c0b744a110f4390f", "query": "Both the GKE and Parallel tests recently had a run where they timed out. (after 90 minutes!) This is like 10x longer than the parallel run should be taking.\nAnd the build project, too:\nany ideas?\np0\nyup, investigating this is next on my queue, right after I figure out why the PR jenkins is failing so many runs.\nThis happened again: In fact I've seen at least few cases like that: \"at some point tests just stop and completely nothing is happening to the end\" The same happened here - completely nothing happened during last hour of the run,\nWhile the failure rates of the individual test jobs is not very high (e.g. kubernetes-e2e-gce-parallel : MTTF (Last 7 Days) 7 hr 27 min ... the fact that we gate auto-merging on about 7 of these jobs means that a merge fails almost every hour due to this.\nI looked at 4 of the builds in kubernetes-e2e-gce-parallel, comparing the test cases marked \"N/A\" (i.e. no test result given) instead of their usual \"PASSED\" or \"SKIPPED\" report. So far, there's been only one test case that's been consistently N/A in all four of the builds I looked at: I'm not sure what might be happening, but I'm tempted to disable this test in parallel runs now while we investigate what's actually wrong.\nOK, that test has been disabled. I'm hoping that the PR builds recover (along with some git fixes as referenced in ). Now on to figuring out why this is causing us problems...\nHad a successful failure in build 8902. Grepping out the Ginkgo node number shows this test being the problem: So it looks like it got stuck in ? 
Is there anything in here that should hang without deadline?\nI had another run get stuck in exactly the same place: What's worrying is that another parallel build (http://go/k8s-test/job/kubernetes-e2e-gce-parallel/7960/) timed out again early this morning, but this networking test was not enabled. This suggests that there's still something else triggering the problem.\nI may have a partial fix for this in .\nHave we seen this since merged? Can we close this bug?\n- I think we had a lot of other problems during that time. I would suggest leaving it open during the weekend and close it on Monday if we don't see any problems.\nI don't think we've seen this in a while. Please reopen if we're still having mysterious timeouts in Jenkins.\nWe're seeing this again in . It looks like someone put \"Networking should function for intra-pod communication\" test back in the parallel run recently. I think that we should revert whatever that PR was.\nThat test had been fine. More worrying is that my fix from seems to have vanished.\nOK, looks like that was removed in , referencing issue . I'm not sure if the same issue is happening again (where operations mysteriously hang forever) or if it's something else. any thoughts?\nI think your answer is here: The timeout field, and all references to it in clients were removed, based on the fact that clients should use request.Timeout() instead, but the clients were not updated to do that. I will send a PR shortly.\nIt seems like very few of the e2e tests are using this method, but we aren't seeing issues on most of the tests. I wonder if there's something in particular we're hitting with the networking test?\nCould this be this magic number in ? I guess this means the checks only happen for about a 75 seconds. If one or two of the nodes are spinning up very slowly, this will trip over . Just a thought. Either way I think this needs to be configurable, I'll work on that angle.\nI agree this \"15\" should probably be bigger. 
However, I'm pretty sure, that this shouldn't cause the test hang forever - it may cause test failures, but hanging forever test shouldn't happen...\nAnyway - I've sent out out for review - we will see if that helps...\nOK - it won't help, because it doesn't work :( - I asked if there is any reasonably easy way to do it...\nAnother instance: Again - the \"Networking should function for intra-pod communication\" is among tests that didn't run...\nassigning this issue to you since you're working on a PR\n- sure, although I'm not sure if this will solve the issue...\nyou can assign back to me if it doesn't :)\nUnfortunately my PR didn't help. This just happened again: - reassigning back to you\nAnother instance:\nAnd again:\nAre folks still seeing this? I'm having a really hard time getting it to occur on-demand.\nOK, I did find another timeout - The list of suspicious tests does not include the networking test this time: Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] Kubectl client Simple pod should support exec through an HTTP proxy Pods should be submitted and removed [Conformance] Secrets should be consumable from pods [Conformance] Will have to keep searching for builds timing out to see if any new pattern emerges.\nLooking more carefully, is another timeout. (I really wish it'd fail on my debug PR...) Suspicious tests: Job should scale a job down Kubectl client Simple pod should support exec through an HTTP proxy I'm gonna start suspecting that Kubectl HTTP proxy test now... are we doing anything in this test that might be blocking?\nhas a very curious failure: only 3 tests ran at all. It seems we are running the cluster-sanity checks in for every Ginkgo node, so there are probably cases like this run where it'll race and fail in mysterious ways (or possibly even deadlock)? We should probably use and (as documented ) to run the sanity checks, cleanup, and log dumping only once, rather than once per Ginkgo node. 
This may solve the mysterious timeouts.\nBumping down to P1 because this seems pretty rare nowadays (but I'll still address the issues I noted).\nIt seems like this issue is way too general: jobs timing out without a clear reason should have an action of making that reason clear. Is there any action we can take there?\nI think the fixes I mentioned in will address the mysterious timeouts.\nHappened here:\nHoping this one is gone for good, but I'll track and see.\nReopening; saw this again with\nFinal lines of that before being killed: Another run which timed out is kubernetes-pull-build-test-e2e-gce/. Final lines from that: Plus my earlier suspicions in make me think the kubectl proxy test is lacking a necessary timeout somewhere.\nMoving discussion of that test case to .\nWith open, can we close this in favor of ?\nClosing for now. If fixing doesn't fix, I'll keep looking...", "positive_passages": [{"docid": "doc-en-kubernetes-5d15153d6f31aa68a28ecc649d99667226f4b39319938d289c6cd6203a64844f", "text": "flag.StringVar(&testContext.OutputPrintType, \"output-print-type\", \"hr\", \"Comma separated list: 'hr' for human readable summaries 'json' for JSON ones.\") } func TestE2E(t *testing.T) { util.ReallyCrash = true util.InitLogs() defer util.FlushLogs() if testContext.ReportDir != \"\" { if err := os.MkdirAll(testContext.ReportDir, 0755); err != nil { glog.Errorf(\"Failed creating report directory: %v\", err) } defer CoreDump(testContext.ReportDir) } if testContext.Provider == \"\" { // setupProviderConfig validates and sets up cloudConfig based on testContext.Provider. func setupProviderConfig() error { switch testContext.Provider { case \"\": glog.Info(\"The --provider flag is not set. Treating as a conformance test. 
Some tests may not be run.\") } if testContext.Provider == \"gce\" || testContext.Provider == \"gke\" { case \"gce\", \"gke\": var err error Logf(\"Fetching cloud provider for %qrn\", testContext.Provider) var tokenSource oauth2.TokenSource", "commid": "kubernetes_pr_20049"}], "negative_passages": []} {"query_id": "q-en-kubernetes-53453bd5ca3e0fd97097a9d5375a1c84f5c3ea86da4fd838c0b744a110f4390f", "query": "Both the GKE and Parallel tests recently had a run where they timed out. (after 90 minutes!) This is like 10x longer than the parallel run should be taking.\nAnd the build project, too:\nany ideas?\np0\nyup, investigating this is next on my queue, right after I figure out why the PR jenkins is failing so many runs.\nThis happened again: In fact I've seen at least few cases like that: \"at some point tests just stop and completely nothing is happening to the end\" The same happened here - completely nothing happened during last hour of the run,\nWhile the failure rates of the individual test jobs is not very high (e.g. kubernetes-e2e-gce-parallel : MTTF (Last 7 Days) 7 hr 27 min ... the fact that we gate auto-merging on about 7 of these jobs means that a merge fails almost every hour due to this.\nI looked at 4 of the builds in kubernetes-e2e-gce-parallel, comparing the test cases marked \"N/A\" (i.e. no test result given) instead of their usual \"PASSED\" or \"SKIPPED\" report. So far, there's been only one test case that's been consistently N/A in all four of the builds I looked at: I'm not sure what might be happening, but I'm tempted to disable this test in parallel runs now while we investigate what's actually wrong.\nOK, that test has been disabled. I'm hoping that the PR builds recover (along with some git fixes as referenced in ). Now on to figuring out why this is causing us problems...\nHad a successful failure in build 8902. Grepping out the Ginkgo node number shows this test being the problem: So it looks like it got stuck in ? 
Is there anything in here that should hang without deadline?\nI had another run get stuck in exactly the same place: What's worrying is that another parallel build (http://go/k8s-test/job/kubernetes-e2e-gce-parallel/7960/) timed out again early this morning, but this networking test was not enabled. This suggests that there's still something else triggering the problem.\nI may have a partial fix for this in .\nHave we seen this since merged? Can we close this bug?\n- I think we had a lot of other problems during that time. I would suggest leaving it open during the weekend and close it on Monday if we don't see any problems.\nI don't think we've seen this in a while. Please reopen if we're still having mysterious timeouts in Jenkins.\nWe're seeing this again in . It looks like someone put \"Networking should function for intra-pod communication\" test back in the parallel run recently. I think that we should revert whatever that PR was.\nThat test had been fine. More worrying is that my fix from seems to have vanished.\nOK, looks like that was removed in , referencing issue . I'm not sure if the same issue is happening again (where operations mysteriously hang forever) or if it's something else. any thoughts?\nI think your answer is here: The timeout field, and all references to it in clients were removed, based on the fact that clients should use request.Timeout() instead, but the clients were not updated to do that. I will send a PR shortly.\nIt seems like very few of the e2e tests are using this method, but we aren't seeing issues on most of the tests. I wonder if there's something in particular we're hitting with the networking test?\nCould this be this magic number in ? I guess this means the checks only happen for about a 75 seconds. If one or two of the nodes are spinning up very slowly, this will trip over . Just a thought. Either way I think this needs to be configurable, I'll work on that angle.\nI agree this \"15\" should probably be bigger. 
However, I'm pretty sure, that this shouldn't cause the test hang forever - it may cause test failures, but hanging forever test shouldn't happen...\nAnyway - I've sent out out for review - we will see if that helps...\nOK - it won't help, because it doesn't work :( - I asked if there is any reasonably easy way to do it...\nAnother instance: Again - the \"Networking should function for intra-pod communication\" is among tests that didn't run...\nassigning this issue to you since you're working on a PR\n- sure, although I'm not sure if this will solve the issue...\nyou can assign back to me if it doesn't :)\nUnfortunately my PR didn't help. This just happened again: - reassigning back to you\nAnother instance:\nAnd again:\nAre folks still seeing this? I'm having a really hard time getting it to occur on-demand.\nOK, I did find another timeout - The list of suspicious tests does not include the networking test this time: Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] Kubectl client Simple pod should support exec through an HTTP proxy Pods should be submitted and removed [Conformance] Secrets should be consumable from pods [Conformance] Will have to keep searching for builds timing out to see if any new pattern emerges.\nLooking more carefully, is another timeout. (I really wish it'd fail on my debug PR...) Suspicious tests: Job should scale a job down Kubectl client Simple pod should support exec through an HTTP proxy I'm gonna start suspecting that Kubectl HTTP proxy test now... are we doing anything in this test that might be blocking?\nhas a very curious failure: only 3 tests ran at all. It seems we are running the cluster-sanity checks in for every Ginkgo node, so there are probably cases like this run where it'll race and fail in mysterious ways (or possibly even deadlock)? We should probably use and (as documented ) to run the sanity checks, cleanup, and log dumping only once, rather than once per Ginkgo node. 
This may solve the mysterious timeouts.\nBumping down to P1 because this seems pretty rare nowadays (but I'll still address the issues I noted).\nIt seems like this issue is way too general: jobs timing out without a clear reason should have an action of making that reason clear. Is there any action we can take there?\nI think the fixes I mentioned in will address the mysterious timeouts.\nHappened here:\nHoping this one is gone for good, but I'll track and see.\nReopening; saw this again with\nFinal lines of that before being killed: Another run which timed out is kubernetes-pull-build-test-e2e-gce/. Final lines from that: Plus my earlier suspicions in make me think the kubectl proxy test is lacking a necessary timeout somewhere.\nMoving discussion of that test case to .\nWith open, can we close this in favor of ?\nClosing for now. If fixing doesn't fix, I'll keep looking...", "positive_passages": [{"docid": "doc-en-kubernetes-e024ecbff2aa15f58d36c31753eb16616e5ca3495eff425aea687534b5f06966", "text": "zone := testContext.CloudConfig.Zone region, err := gcecloud.GetGCERegion(zone) if err != nil { glog.Fatalf(\"error parsing GCE region from zone %q: %v\", zone, err) return fmt.Errorf(\"error parsing GCE/GKE region from zone %q: %v\", zone, err) } managedZones := []string{zone} // Only single-zone for now cloudConfig.Provider, err = gcecloud.CreateGCECloud(testContext.CloudConfig.ProjectID, region, zone, managedZones, \"\" /* networkUrl */, tokenSource, false /* useMetadataServer */) if err != nil { glog.Fatal(\"Error building GCE provider: \", err) return fmt.Errorf(\"Error building GCE/GKE provider: \", err) } } if testContext.Provider == \"aws\" { case \"aws\": awsConfig := \"[Global]n\" if cloudConfig.Zone == \"\" { glog.Fatal(\"gce-zone must be specified for AWS\") return fmt.Errorf(\"gce-zone must be specified for AWS\") } awsConfig += fmt.Sprintf(\"Zone=%sn\", cloudConfig.Zone) if cloudConfig.ClusterTag == \"\" { glog.Fatal(\"--cluster-tag must be specified 
for AWS\") return fmt.Errorf(\"--cluster-tag must be specified for AWS\") } awsConfig += fmt.Sprintf(\"KubernetesClusterTag=%sn\", cloudConfig.ClusterTag) var err error cloudConfig.Provider, err = cloudprovider.GetCloudProvider(testContext.Provider, strings.NewReader(awsConfig)) if err != nil { glog.Fatal(\"Error building AWS provider: \", err) return fmt.Errorf(\"Error building AWS provider: \", err) } } // Disable skipped tests unless they are explicitly requested. if config.GinkgoConfig.FocusString == \"\" && config.GinkgoConfig.SkipString == \"\" { // TODO(ihmccreery) Remove [Skipped] once all [Skipped] labels have been reclassified. config.GinkgoConfig.SkipString = `[Flaky]|[Skipped]|[Feature:.+]` } gomega.RegisterFailHandler(ginkgo.Fail) c, err := loadClient() if err != nil { glog.Fatal(\"Error loading client: \", err) } return nil } // There are certain operations we only want to run once per overall test invocation // (such as deleting old namespaces, or verifying that all system pods are running. // Because of the way Ginkgo runs tests in parallel, we must use SynchronizedBeforeSuite // to ensure that these operations only run on the first parallel Ginkgo node. // // This function takes two parameters: one function which runs on only the first Ginkgo node, // returning an opaque byte array, and then a second function which runs on all Ginkgo nodes, // accepting the byte array. var _ = ginkgo.SynchronizedBeforeSuite(func() []byte { // Run only on Ginkgo node 1 // Delete any namespaces except default and kube-system. This ensures no // lingering resources are left over from a previous test run. 
if testContext.CleanStart { c, err := loadClient() if err != nil { glog.Fatal(\"Error loading client: \", err) } deleted, err := deleteNamespaces(c, nil /* deleteFilter */, []string{api.NamespaceSystem, api.NamespaceDefault}) if err != nil { t.Errorf(\"Error deleting orphaned namespaces: %v\", err) Failf(\"Error deleting orphaned namespaces: %v\", err) } glog.Infof(\"Waiting for deletion of the following namespaces: %v\", deleted) if err := waitForNamespacesDeleted(c, deleted, namespaceCleanupTimeout); err != nil { glog.Fatalf(\"Failed to delete orphaned namespaces %v: %v\", deleted, err) Failf(\"Failed to delete orphaned namespaces %v: %v\", deleted, err) } }", "commid": "kubernetes_pr_20049"}], "negative_passages": []} {"query_id": "q-en-kubernetes-53453bd5ca3e0fd97097a9d5375a1c84f5c3ea86da4fd838c0b744a110f4390f", "query": "Both the GKE and Parallel tests recently had a run where they timed out. (after 90 minutes!) This is like 10x longer than the parallel run should be taking.\nAnd the build project, too:\nany ideas?\np0\nyup, investigating this is next on my queue, right after I figure out why the PR jenkins is failing so many runs.\nThis happened again: In fact I've seen at least few cases like that: \"at some point tests just stop and completely nothing is happening to the end\" The same happened here - completely nothing happened during last hour of the run,\nWhile the failure rates of the individual test jobs is not very high (e.g. kubernetes-e2e-gce-parallel : MTTF (Last 7 Days) 7 hr 27 min ... the fact that we gate auto-merging on about 7 of these jobs means that a merge fails almost every hour due to this.\nI looked at 4 of the builds in kubernetes-e2e-gce-parallel, comparing the test cases marked \"N/A\" (i.e. no test result given) instead of their usual \"PASSED\" or \"SKIPPED\" report. 
So far, there's been only one test case that's been consistently N/A in all four of the builds I looked at: I'm not sure what might be happening, but I'm tempted to disable this test in parallel runs now while we investigate what's actually wrong.\nOK, that test has been disabled. I'm hoping that the PR builds recover (along with some git fixes as referenced in ). Now on to figuring out why this is causing us problems...\nHad a successful failure in build 8902. Grepping out the Ginkgo node number shows this test being the problem: So it looks like it got stuck in ? Is there anything in here that should hang without deadline?\nI had another run get stuck in exactly the same place: What's worrying is that another parallel build (http://go/k8s-test/job/kubernetes-e2e-gce-parallel/7960/) timed out again early this morning, but this networking test was not enabled. This suggests that there's still something else triggering the problem.\nI may have a partial fix for this in .\nHave we seen this since merged? Can we close this bug?\n- I think we had a lot of other problems during that time. I would suggest leaving it open during the weekend and close it on Monday if we don't see any problems.\nI don't think we've seen this in a while. Please reopen if we're still having mysterious timeouts in Jenkins.\nWe're seeing this again in . It looks like someone put \"Networking should function for intra-pod communication\" test back in the parallel run recently. I think that we should revert whatever that PR was.\nThat test had been fine. More worrying is that my fix from seems to have vanished.\nOK, looks like that was removed in , referencing issue . I'm not sure if the same issue is happening again (where operations mysteriously hang forever) or if it's something else. 
any thoughts?\nI think your answer is here: The timeout field, and all references to it in clients were removed, based on the fact that clients should use request.Timeout() instead, but the clients were not updated to do that. I will send a PR shortly.\nIt seems like very few of the e2e tests are using this method, but we aren't seeing issues on most of the tests. I wonder if there's something in particular we're hitting with the networking test?\nCould this be this magic number in ? I guess this means the checks only happen for about a 75 seconds. If one or two of the nodes are spinning up very slowly, this will trip over . Just a thought. Either way I think this needs to be configurable, I'll work on that angle.\nI agree this \"15\" should probably be bigger. However, I'm pretty sure, that this shouldn't cause the test hang forever - it may cause test failures, but hanging forever test shouldn't happen...\nAnyway - I've sent out out for review - we will see if that helps...\nOK - it won't help, because it doesn't work :( - I asked if there is any reasonably easy way to do it...\nAnother instance: Again - the \"Networking should function for intra-pod communication\" is among tests that didn't run...\nassigning this issue to you since you're working on a PR\n- sure, although I'm not sure if this will solve the issue...\nyou can assign back to me if it doesn't :)\nUnfortunately my PR didn't help. This just happened again: - reassigning back to you\nAnother instance:\nAnd again:\nAre folks still seeing this? 
I'm having a really hard time getting it to occur on-demand.\nOK, I did find another timeout - The list of suspicious tests does not include the networking test this time: Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] Kubectl client Simple pod should support exec through an HTTP proxy Pods should be submitted and removed [Conformance] Secrets should be consumable from pods [Conformance] Will have to keep searching for builds timing out to see if any new pattern emerges.\nLooking more carefully, is another timeout. (I really wish it'd fail on my debug PR...) Suspicious tests: Job should scale a job down Kubectl client Simple pod should support exec through an HTTP proxy I'm gonna start suspecting that Kubectl HTTP proxy test now... are we doing anything in this test that might be blocking?\nhas a very curious failure: only 3 tests ran at all. It seems we are running the cluster-sanity checks in for every Ginkgo node, so there are probably cases like this run where it'll race and fail in mysterious ways (or possibly even deadlock)? We should probably use and (as documented ) to run the sanity checks, cleanup, and log dumping only once, rather than once per Ginkgo node. This may solve the mysterious timeouts.\nBumping down to P1 because this seems pretty rare nowadays (but I'll still address the issues I noted).\nIt seems like this issue is way too general: jobs timing out without a clear reason should have an action of making that reason clear. Is there any action we can take there?\nI think the fixes I mentioned in will address the mysterious timeouts.\nHappened here:\nHoping this one is gone for good, but I'll track and see.\nReopening; saw this again with\nFinal lines of that before being killed: Another run which timed out is kubernetes-pull-build-test-e2e-gce/. 
Final lines from that: Plus my earlier suspicions in make me think the kubectl proxy test is lacking a necessary timeout somewhere.\nMoving discussion of that test case to .\nWith open, can we close this in favor of ?\nClosing for now. If fixing doesn't fix, I'll keep looking...", "positive_passages": [{"docid": "doc-en-kubernetes-075fda8e83d2b1ebd32359900bdd8f68063c3c3f9a2dcc6d37dd5ecdfb817919", "text": "// test pods from running, and tests that ensure all pods are running and // ready will fail). if err := waitForPodsRunningReady(api.NamespaceSystem, testContext.MinStartupPods, podStartupTimeout); err != nil { t.Errorf(\"Error waiting for all pods to be running and ready: %v\", err) return Failf(\"Error waiting for all pods to be running and ready: %v\", err) } return nil }, func(data []byte) { // Run on all Ginkgo nodes }) // Similar to SynchornizedBeforeSuite, we want to run some operations only once (such as collecting cluster logs). // Here, the order of functions is reversed; first, the function which runs everywhere, // and then the function that only runs on the first Ginkgo node. var _ = ginkgo.SynchronizedAfterSuite(func() { // Run on all Ginkgo nodes }, func() { // Run only Ginkgo on node 1 if testContext.ReportDir != \"\" { CoreDump(testContext.ReportDir) } }) // TestE2E checks configuration parameters (specified through flags) and then runs // E2E tests using the Ginkgo runner. // If a \"report directory\" is specified, one or more JUnit test reports will be // generated in this directory, and cluster logs will also be saved. // This function is called on each Ginkgo node in parallel mode. func TestE2E(t *testing.T) { util.ReallyCrash = true util.InitLogs() defer util.FlushLogs() // We must call setupProviderConfig first since SynchronizedBeforeSuite needs // cloudConfig to be set up already. 
if err := setupProviderConfig(); err != nil { glog.Fatalf(err.Error()) } gomega.RegisterFailHandler(ginkgo.Fail) // Disable skipped tests unless they are explicitly requested. if config.GinkgoConfig.FocusString == \"\" && config.GinkgoConfig.SkipString == \"\" { // TODO(ihmccreery) Remove [Skipped] once all [Skipped] labels have been reclassified. config.GinkgoConfig.SkipString = `[Flaky]|[Skipped]|[Feature:.+]` } // Run tests through the Ginkgo runner with output to console + JUnit for Jenkins var r []ginkgo.Reporter if testContext.ReportDir != \"\" { r = append(r, reporters.NewJUnitReporter(path.Join(testContext.ReportDir, fmt.Sprintf(\"junit_%02d.xml\", config.GinkgoConfig.ParallelNode)))) // TODO: we should probably only be trying to create this directory once // rather than once-per-Ginkgo-node. if err := os.MkdirAll(testContext.ReportDir, 0755); err != nil { glog.Errorf(\"Failed creating report directory: %v\", err) } else { r = append(r, reporters.NewJUnitReporter(path.Join(testContext.ReportDir, fmt.Sprintf(\"junit_%02d.xml\", config.GinkgoConfig.ParallelNode)))) } } glog.Infof(\"Starting e2e run; %q\", runId) glog.Infof(\"Starting e2e run %q on Ginkgo node %d\", runId, config.GinkgoConfig.ParallelNode) ginkgo.RunSpecsWithDefaultAndCustomReporters(t, \"Kubernetes e2e suite\", r) }", "commid": "kubernetes_pr_20049"}], "negative_passages": []} {"query_id": "q-en-kubernetes-70ca5b9393197b9c681e23f75e1621b021d66b70b0941bb143c6bb4d3bdbccb3", "query": "I0921 20:36:33. 
] Could not connect to D-Bus system bus: %sdial unix /var/run/dbus/systembussocket: connect: no such file or directory", "positive_passages": [{"docid": "doc-en-kubernetes-4df3b809265b898ae97976f38e0399d4e18a0f0972d78b011094da465d84e873", "text": "func (runner *runner) connectToFirewallD() { bus, err := runner.dbus.SystemBus() if err != nil { glog.V(1).Info(\"Could not connect to D-Bus system bus: %s\", err) glog.V(1).Infof(\"Could not connect to D-Bus system bus: %s\", err) return }", "commid": "kubernetes_pr_14311"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... ?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system). Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. 
kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. Do we know what the nature of the conflict was? 
Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. [edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) In the case of such a conflict, apply should diff again, create another patch, and try again. 
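The "diff again, create another patch, and try again — but not forever" behavior described above can be sketched as a bounded conflict-retry loop. This is a minimal illustration, not kubectl's actual code: `applyWithRetry`, `errConflict`, and the retry budget are hypothetical names chosen to mirror the discussion.

```go
// Hypothetical sketch of a bounded conflict-retry loop: retry the
// patch closure only on optimistic-lock conflicts, and give up after
// a fixed number of attempts instead of looping forever.
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for the server's "resource version mismatch" error.
var errConflict = errors.New("conflict: resource version mismatch")

const maxPatchRetry = 5 // mirrors the bounded-retry idea from the thread

// applyWithRetry retries fn on conflict errors only; any other error
// (a "meaningful" conflict or real failure) is returned immediately,
// and exhausting the budget returns the last conflict error.
func applyWithRetry(fn func() error) error {
	var err error
	for i := 0; i < maxPatchRetry; i++ {
		if err = fn(); !errors.Is(err, errConflict) {
			return err // success, or a non-conflict error worth surfacing
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxPatchRetry, err)
}

func main() {
	attempts := 0
	err := applyWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errConflict // simulate two stale-resourceVersion conflicts
		}
		return nil // the patch finally lands
	})
	fmt.Println(attempts, err) // prints: 3 <nil>
}
```

In the real flow each attempt would re-fetch the live object and recompute the patch before calling the server again; here that work is abstracted into the closure.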
I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-9e29ca0c0ba11ebd4340e34529df31c1cc404ea426f56b90efab1ea4130f2a12", "text": "// GetOriginalConfiguration retrieves the original configuration of the object // from the annotation, or nil if no annotation was found. func GetOriginalConfiguration(info *resource.Info) ([]byte, error) { annots, err := info.Mapping.MetadataAccessor.Annotations(info.Object) func GetOriginalConfiguration(mapping *meta.RESTMapping, obj runtime.Object) ([]byte, error) { annots, err := mapping.MetadataAccessor.Annotations(obj) if err != nil { return nil, err }", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... ?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system). 
Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. 
Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. Do we know what the nature of the conflict was? Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. [edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. 
IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) In the case of such a conflict, apply should diff again, create another patch, and try again. I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-eec3c0fcd5c5fe47934eb4a0350d720b88741bbb8b606c69feb94eae72c658c3", "text": "// UpdateApplyAnnotation calls CreateApplyAnnotation if the last applied // configuration annotation is already present. Otherwise, it does nothing. 
func UpdateApplyAnnotation(info *resource.Info, codec runtime.Encoder) error { if original, err := GetOriginalConfiguration(info); err != nil || len(original) <= 0 { if original, err := GetOriginalConfiguration(info.Mapping, info.Object); err != nil || len(original) <= 0 { return err } return CreateApplyAnnotation(info, codec)", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... ?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system). Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. 
You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. Do we know what the nature of the conflict was? Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. 
I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. [edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) In the case of such a conflict, apply should diff again, create another patch, and try again. I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. 
Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-6b8b9a5377af7ed3eb2d5e54950de7aa4a463d9e4a9c4b5eb879a266f179cc00", "text": "import ( \"fmt\" \"io\" \"time\" \"github.com/jonboulle/clockwork\" \"github.com/spf13/cobra\" \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/errors\" \"k8s.io/kubernetes/pkg/api/meta\" \"k8s.io/kubernetes/pkg/kubectl\" cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" \"k8s.io/kubernetes/pkg/kubectl/resource\"", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... ?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system). Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . 
It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. 
If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. Do we know what the nature of the conflict was? Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. [edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. 
Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) In the case of such a conflict, apply should diff again, create another patch, and try again. I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-0efb673e622ab9ce390bc819f7b382d3d383fcf8d4e9e558dc65c9a9e834d0ca", "text": "} const ( // maxPatchRetry is the maximum number of conflicts retry for during a patch operation before returning failure maxPatchRetry = 5 // backOffPeriod is the period to back off when apply patch resutls in error. backOffPeriod = 1 * time.Second // how many times we can retry before back off triesBeforeBackOff = 1 ) const ( apply_long = `Apply a configuration to a resource by filename or stdin. The resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. 
cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... ?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system). Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. 
If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. Do we know what the nature of the conflict was? Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. 
[edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) In the case of such a conflict, apply should diff again, create another patch, and try again. I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. 
Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-b06151c75944440b6acccade9175bfd78b9950b5e164ceb515fc3e1beb87f32c", "text": "return nil } // Serialize the current configuration of the object from the server. current, err := runtime.Encode(encoder, info.Object) if err != nil { return cmdutil.AddSourceToErr(fmt.Sprintf(\"serializing current configuration from:n%vnfor:\", info), info.Source, err) } // Retrieve the original configuration of the object from the annotation. original, err := kubectl.GetOriginalConfiguration(info) if err != nil { return cmdutil.AddSourceToErr(fmt.Sprintf(\"retrieving original configuration from:n%vnfor:\", info), info.Source, err) } // Create the versioned struct from the original from the server for // strategic patch. // TODO: Move all structs in apply to use raw data. Can be done once // builder has a RawResult method which delivers raw data instead of // internal objects. versionedObject, _, err := decoder.Decode(current, nil, nil) if err != nil { return cmdutil.AddSourceToErr(fmt.Sprintf(\"converting encoded server-side object back to versioned struct:n%vnfor:\", info), info.Source, err) } // Compute a three way strategic merge patch to send to server. 
patch, err := strategicpatch.CreateThreeWayMergePatch(original, modified, current, versionedObject, true) if err != nil { format := \"creating patch with:noriginal:n%snmodified:n%sncurrent:n%snfrom:n%vnfor:\" return cmdutil.AddSourceToErr(fmt.Sprintf(format, original, modified, current, info), info.Source, err) } helper := resource.NewHelper(info.Client, info.Mapping) _, err = helper.Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) patcher := NewPatcher(encoder, decoder, info.Mapping, helper) patchBytes, err := patcher.patch(info.Object, modified, info.Source, info.Namespace, info.Name) if err != nil { return cmdutil.AddSourceToErr(fmt.Sprintf(\"applying patch:n%snto:n%vnfor:\", patch, info), info.Source, err) return cmdutil.AddSourceToErr(fmt.Sprintf(\"applying patch:n%snto:n%vnfor:\", patchBytes, info), info.Source, err) } if cmdutil.ShouldRecord(cmd, info) { patch, err = cmdutil.ChangeResourcePatch(info, f.Command()) patch, err := cmdutil.ChangeResourcePatch(info, f.Command()) if err != nil { return err }", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... ?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system). 
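The `CreateThreeWayMergePatch(original, modified, current, ...)` call in the diff above compares the user's last-applied config, the user's new config, and the live object. A toy version of that three-way idea, for flat string maps only (the real strategic merge patch handles nested objects and lists; `threeWayPatch` here is a hypothetical illustration):

```go
// Toy three-way merge for flat string maps: change only fields the
// user owns (present in original/modified) and leave fields set by
// other actors on the live object untouched.
package main

import "fmt"

// threeWayPatch returns the keys to set and the keys to delete.
func threeWayPatch(original, modified, current map[string]string) (set map[string]string, del []string) {
	set = map[string]string{}
	for k, v := range modified {
		if current[k] != v {
			set[k] = v // user wants this value and the live object differs
		}
	}
	for k := range original {
		if _, stillWanted := modified[k]; !stillWanted {
			del = append(del, k) // user removed it from their config
		}
	}
	return set, del
}

func main() {
	original := map[string]string{"image": "nginx:1.7", "tier": "web"}
	modified := map[string]string{"image": "nginx:1.9"} // user dropped "tier"
	current := map[string]string{"image": "nginx:1.7", "tier": "web", "owner": "ops"}
	set, del := threeWayPatch(original, modified, current)
	fmt.Println(set, del) // prints: map[image:nginx:1.9] [tier]
}
```

Note that `owner`, set by someone else on the live object, is neither overwritten nor deleted — which is why apply can safely recompute and resend this patch after a resourceVersion conflict.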
Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. 
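The "generic GET/Update retry loop" suggested above can be sketched against a toy in-memory store with a resourceVersion, standing in for the real client (client-go's `retry.RetryOnConflict` implements the same shape; the `fakeStore`/`retryUpdate` names here are hypothetical):

```go
// Sketch of a GET/Update retry loop: re-read the object on every
// attempt so the mutation is always applied to the freshest copy,
// retrying a bounded number of times on version conflicts.
package main

import (
	"errors"
	"fmt"
)

type object struct {
	resourceVersion int
	replicas        int
}

// fakeStore mimics a server enforcing optimistic concurrency.
type fakeStore struct{ obj object }

func (s *fakeStore) get() object { return s.obj }

func (s *fakeStore) update(o object) error {
	if o.resourceVersion != s.obj.resourceVersion {
		return errors.New("conflict") // stale resourceVersion
	}
	o.resourceVersion++
	s.obj = o
	return nil
}

// retryUpdate is the GET/Update loop: fetch, mutate, attempt the
// write, and on conflict loop back to a fresh GET.
func retryUpdate(s *fakeStore, mutate func(*object), retries int) error {
	var err error
	for i := 0; i < retries; i++ {
		o := s.get()
		mutate(&o)
		if err = s.update(o); err == nil {
			return nil
		}
	}
	return err
}

func main() {
	s := &fakeStore{}
	_ = retryUpdate(s, func(o *object) { o.replicas = 3 }, 5)
	fmt.Println(s.obj.replicas, s.obj.resourceVersion) // prints: 3 1
}
```

Because the mutation closure runs against a freshly read copy on each attempt, a conflict caused only by a resourceVersion bump resolves transparently, while a cap on `retries` avoids the infinite-loop concern raised in the thread.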
Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. Do we know what the nature of the conflict was? Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. [edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. 
IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) In the case of such a conflict, apply should diff again, create another patch, and try again. I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. 
Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-e7d9fa8a1a021ffe36e2a0728914cde5844f5040a334715ecd736a60d9104c39", "text": "return nil } type patcher struct { encoder runtime.Encoder decoder runtime.Decoder mapping *meta.RESTMapping helper *resource.Helper backOff clockwork.Clock } func NewPatcher(encoder runtime.Encoder, decoder runtime.Decoder, mapping *meta.RESTMapping, helper *resource.Helper) *patcher { return &patcher{ encoder: encoder, decoder: decoder, mapping: mapping, helper: helper, backOff: clockwork.NewRealClock(), } } func (p *patcher) patchSimple(obj runtime.Object, modified []byte, source, namespace, name string) ([]byte, error) { // Serialize the current configuration of the object from the server. current, err := runtime.Encode(p.encoder, obj) if err != nil { return nil, cmdutil.AddSourceToErr(fmt.Sprintf(\"serializing current configuration from:\n%v\nfor:\", obj), source, err) } // Retrieve the original configuration of the object from the annotation. original, err := kubectl.GetOriginalConfiguration(p.mapping, obj) if err != nil { return nil, cmdutil.AddSourceToErr(fmt.Sprintf(\"retrieving original configuration from:\n%v\nfor:\", obj), source, err) } // Create the versioned struct from the original from the server for // strategic patch. // TODO: Move all structs in apply to use raw data. Can be done once // builder has a RawResult method which delivers raw data instead of // internal objects. versionedObject, _, err := p.decoder.Decode(current, nil, nil) if err != nil { return nil, cmdutil.AddSourceToErr(fmt.Sprintf(\"converting encoded server-side object back to versioned struct:\n%v\nfor:\", obj), source, err) } // Compute a three way strategic merge patch to send to server.
patch, err := strategicpatch.CreateThreeWayMergePatch(original, modified, current, versionedObject, true) if err != nil { format := \"creating patch with:\noriginal:\n%s\nmodified:\n%s\ncurrent:\n%s\nfor:\" return nil, cmdutil.AddSourceToErr(fmt.Sprintf(format, original, modified, current), source, err) } _, err = p.helper.Patch(namespace, name, api.StrategicMergePatchType, patch) return patch, err } func (p *patcher) patch(current runtime.Object, modified []byte, source, namespace, name string) ([]byte, error) { var getErr error patchBytes, err := p.patchSimple(current, modified, source, namespace, name) for i := 1; i <= maxPatchRetry && errors.IsConflict(err); i++ { if i > triesBeforeBackOff { p.backOff.Sleep(backOffPeriod) } current, getErr = p.helper.Get(namespace, name, false) if getErr != nil { return nil, getErr } patchBytes, err = p.patchSimple(current, modified, source, namespace, name) } return patchBytes, err } ", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... ?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system).
Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. 
Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. Do we know what the nature of the conflict was? Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. [edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. 
IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) In the case of such a conflict, apply should diff again, create another patch, and try again. I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-19cd83a43e4d8748070de8c6cb30a330aa15448550ad575586bec22e309dd0fe", "text": "import ( \"bytes\" \"encoding/json\" \"fmt\" \"io/ioutil\" \"net/http\" \"os\"", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... 
?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system). Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. 
Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. Do we know what the nature of the conflict was? Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. 
[edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) In the case of such a conflict, apply should diff again, create another patch, and try again. I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. 
Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-faa13d7415f0d7c363915459fa3506a423d16b4a2afae573abe9693339bd2547", "text": "\"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/annotations\" kubeerr \"k8s.io/kubernetes/pkg/api/errors\" \"k8s.io/kubernetes/pkg/api/meta\" \"k8s.io/kubernetes/pkg/api/unversioned\" \"k8s.io/kubernetes/pkg/client/unversioned/fake\" cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" \"k8s.io/kubernetes/pkg/runtime\"", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-95af4df06321abec34d22b614a4f94b038ee2b47038f19fa65ee5d7b6d8c32e5", "query": "From an e2e test failure: The command should retry resource version mismatches like scale and rollingupdate. cc\n/cc fyi.\nFrom perspective , Isn't the failure intentional (i.e. optimistic mutate) ? or... maybe I'm (probably) missing something. I would think its the user's job to retry... ?\nThe desired behavior that we settled on previously is to detect version mismatches at the client and error. User can force replacement using --force. See , and .\nWhat is the mismatch here? If it is just resourceVersion, kubectl should retry. It should only bail on a meaningful conflict. We're working on eliminating flaky tests (and flaky parts of the system). Also, that log message isn't very useful.\nper discussion on , we can't determine the correct behavior when the resource version changes without more information, so best to catch version mismatch at the client and bail, and let the user replace the object with --force if they want. We can also make the error message less verbose.\nI don't see any mention of resourceVersion in . It shouldn't be serialized in the annotation, because the user shouldn't specify it in their config. 
The resourceVersion-based precondition is intended to guard against concurrent read-modify-write operations. kubectl apply should be able retry the entire diff&patch in the case of this type of conflict. That said, it's not obvious to me from the log message what the actual problem was in this case.\nI'm sorry, my mistake. You're right that resourceVersion is not in the annotation, and that we should retry on resourceVersion failure. I was confusing this with the api version change issue discussed in . This is actually a much simpler issue, in the sense that it's an optimistic lock conflict. If we encounter a lock conflict, then we should fetch a fresh copy of the object, and try again to apply the new configuration supplied by the user. If we don't get a merge conflict when we resubmit, then all's well. If we get a merge conflict because the object has changed out from under the user, then we report it in those terms, and the user can decide what to do (e.g., replace using --force). Does this fit your expectation of how it should be handled?\nThis is causing test flake:\nyes, your understanding is correct. Do you have time to fix this? We're trying to get all our tests fixed. Additionally this sounds like a legit system bug, not just a flaky test problem.\nMaybe. Swamped right now, but can fit it in given enough notice. When is the fix needed?\nasap. We'll find someone else.\nThis should just be a generic GET/Update retry loop, eg:\nGo for it. Jack Greenfield | Google , Prashanth B wrote:\nChao, do you have cycles to take a look?\nI thought Patch is retried in the server. I'll investigate.\nPatch is retried at server-side five times before an error is returned: , so I expect this issue to occur very rarely. In fact, it has passed last ~400 kubernetes-gce-e2e tests. If we want to make this test less flaky, I'd recommend increase the number of the retries at the server-side. What do you guys think?\nIt'll fail if an actual conflicting change was made, though. 
Do we know what the nature of the conflict was? Maybe first we need to add some log messages to figure out what the conflict was next time it occurs. , Chao Xu wrote:\nThis happened last week:\nyou are right. I thought an actual conflicting change would result in a different error message but it's not. I'll add the logs.\nIn I a log entry if there is a meaningful conflict. That said, I don't think we have a meaningful conflict in this particular e2e test. After taking a closer look at the implementation of , I think the submitted is precise, so we shouldn't retry at the client side if server reports a meaningful conflict. [edit] And if the conflict is not meaningful, we should just rely on the server-side patch handler to retry.\ndoes the milestone on this need to be changed to v1.2?\nHow often is the test flaking?\nLast time I checked, I'll check it again.\nLast time I checked the results was two days ago, in the past 48 hours, the test has consecutively passed another 124 builds ( to ).\nI only see 3 reports of failures since October, so I'm going to say, no, it doesn't need to be fixed in 1.2.\nany progress on this now? I returned back to the kubectl apply issues, and hope I can fix all apply related issues before 1.3.0 release.\nI'll try to check the flake rate again. IMO if the flake rate is non-negligible, we should increase the retry limits of Patch, or more aggressively, just remove the limit.\nInfinite retry still scares me, mostly because we may have failure modes that could race for a very long time. , Chao Xu wrote:\nI don't think anyone has worked on this. Though the original failing test/logs weren't recorded, it appears to be a failure of \"should apply a new configuration to an existing RC\": There is a retry loop in the Patch handler in apiserver, but if resourceVersion is specified, the retry loop will always fail in the case of a conflict. (We should detect that and bail out, but I don't think we do.) 
In the case of such a conflict, apply should diff again, create another patch, and try again. I agree with that it shouldn't retry forever. We should use the same retry limits we use for other commands.\nwe have an infinite loop in GuaranteedUpdate (), maybe we should apply a retry limit there as well?\nAgree. Just note that the test flake doesn't specify resourceVersion.\nSince this is causing flakey test failures, it is something we should focus on. Is this something I should pick up?\nFYI: has a PR out for this under review\nMaking this a P3 (no release block) since I looks like we haven't seen many instances since Feb 2016", "positive_passages": [{"docid": "doc-en-kubernetes-4440ed239d3de07baf1f94a65400fc7783d7117da7a7b872642f488e34b9492e", "text": "} } func TestApplyRetry(t *testing.T) { initTestErrorHandler(t) nameRC, currentRC := readAndAnnotateReplicationController(t, filenameRC) pathRC := \"/namespaces/test/replicationcontrollers/\" + nameRC firstPatch := true retry := false getCount := 0 f, tf, codec := NewAPIFactory() tf.Printer = &testPrinter{} tf.Client = &fake.RESTClient{ Codec: codec, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case p == pathRC && m == \"GET\": getCount++ bodyRC := ioutil.NopCloser(bytes.NewReader(currentRC)) return &http.Response{StatusCode: 200, Header: defaultHeader(), Body: bodyRC}, nil case p == pathRC && m == \"PATCH\": if firstPatch { firstPatch = false statusErr := kubeerr.NewConflict(unversioned.GroupResource{Group: \"\", Resource: \"rc\"}, \"test-rc\", fmt.Errorf(\"the object has been modified. 
Please apply at first.\")) bodyBytes, _ := json.Marshal(statusErr) bodyErr := ioutil.NopCloser(bytes.NewReader(bodyBytes)) return &http.Response{StatusCode: http.StatusConflict, Header: defaultHeader(), Body: bodyErr}, nil } retry = true validatePatchApplication(t, req) bodyRC := ioutil.NopCloser(bytes.NewReader(currentRC)) return &http.Response{StatusCode: 200, Header: defaultHeader(), Body: bodyRC}, nil default: t.Fatalf(\"unexpected request: %#v\n%#v\", req.URL, req) return nil, nil } }), } tf.Namespace = \"test\" buf := bytes.NewBuffer([]byte{}) cmd := NewCmdApply(f, buf) cmd.Flags().Set(\"filename\", filenameRC) cmd.Flags().Set(\"output\", \"name\") cmd.Run(cmd, []string{}) if !retry || getCount != 2 { t.Fatalf(\"apply didn't retry when get conflict error\") } // uses the name from the file, not the response expectRC := \"replicationcontroller/\" + nameRC + \"\n\" if buf.String() != expectRC { t.Fatalf(\"unexpected output: %s\nexpected: %s\", buf.String(), expectRC) } } func TestApplyNonExistObject(t *testing.T) { nameRC, currentRC := readAndAnnotateReplicationController(t, filenameRC) pathRC := \"/namespaces/test/replicationcontrollers\"", "commid": "kubernetes_pr_26557"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f390f9655b3a3a2bdc3cb72f1f0fbadbba843c0fc19be84f91c4dc99ab97598a", "query": "We should be testing the reliability of a long-running cluster. These tests kill/restart components or nodes in the cluster, defeating the purpose of a soak cluster.\nExample tests: Nodes\sResize Services.*restarting DaemonRestart Nodes\sResize Reboot Etcd failure\nAgreed. Some of them already seem to be skipped: GCE_SOAK_CONTINUOUS_SKIP_TESTS=( \"Density.*30\spods\" \"Elasticsearch\" \"Etcd.*SIGKILL\" \"external\sload\sbalancer\" \"identically\snamed\sservices\" \"network\spartition\" \"Services.*Type\sgoes\sfrom\" ) Q\nFYI\nIt'd be hard to prevent regression as more tests are .
It might be better if we explicitly define a set of DISRUPTIVE_TESTS, and include them wherever needed, e.g., GCE_PARALLEL_SKIP_TESTS and GCE_SOAK_CONTINUOUS_SKIP_TESTS. That way, people who are adding tests will be aware of the category. WDYT?\nAgreed, that seems like a good idea. Want to send in a PR? Or at least open an issue?", "positive_passages": [{"docid": "doc-en-kubernetes-119ae8bea7b8c17fa8273d415455f11e786c30e6047ec58a45e0bb663b439ddb", "text": "\"experimental\sresource\susage\stracking\" # Expect --max-pods=100 ) # Tests which kills or restarts components and/or nodes. DISRUPTIVE_TESTS=( \"DaemonRestart\" \"Etcd\sfailure\" \"Nodes\sResize\" \"Reboot\" \"Services.*restarting\" ) # The following tests are known to be flaky, and are thus run only in their own # -flaky- build variants. GCE_FLAKY_TESTS=(", "commid": "kubernetes_pr_15722"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f390f9655b3a3a2bdc3cb72f1f0fbadbba843c0fc19be84f91c4dc99ab97598a", "query": "We should be testing the reliability of a long-running cluster. These tests kill/restart components or nodes in the cluster, defeating the purpose of a soak cluster.\nExample tests: Nodes\sResize Services.*restarting DaemonRestart Nodes\sResize Reboot Etcd failure\nAgreed. Some of them already seem to be skipped: GCE_SOAK_CONTINUOUS_SKIP_TESTS=( \"Density.*30\spods\" \"Elasticsearch\" \"Etcd.*SIGKILL\" \"external\sload\sbalancer\" \"identically\snamed\sservices\" \"network\spartition\" \"Services.*Type\sgoes\sfrom\" ) Q\nFYI\nIt'd be hard to prevent regression as more tests are . It might be better if we explicitly define a set of DISRUPTIVE_TESTS, and include them wherever needed, e.g., GCE_PARALLEL_SKIP_TESTS and GCE_SOAK_CONTINUOUS_SKIP_TESTS. That way, people who are adding tests will be aware of the category. WDYT?\nAgreed, that seems like a good idea. Want to send in a PR?
Or at least open an issue?", "positive_passages": [{"docid": "doc-en-kubernetes-48fc47b26db0e3f35d167dee7b19bc005eae04d84a1115274e64ecbc1fe12f9a", "text": "# Tests which are not able to be run in parallel. GCE_PARALLEL_SKIP_TESTS=( \"Etcd\" \"NetworkingNew\" \"Nodes\sNetwork\" \"Nodes\sResize\" \"MaxPods\" \"Resource\susage\sof\ssystem\scontainers\" \"SchedulerPredicates\" \"Services.*restarting\" \"resource\susage\stracking\" \"${DISRUPTIVE_TESTS[@]}\" ) # Tests which are known to be flaky when run in parallel.", "commid": "kubernetes_pr_15722"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f390f9655b3a3a2bdc3cb72f1f0fbadbba843c0fc19be84f91c4dc99ab97598a", "query": "We should be testing the reliability of a long-running cluster. These tests kill/restart components or nodes in the cluster, defeating the purpose of a soak cluster.\nExample tests: Nodes\sResize Services.*restarting DaemonRestart Nodes\sResize Reboot Etcd failure\nAgreed. Some of them already seem to be skipped: GCE_SOAK_CONTINUOUS_SKIP_TESTS=( \"Density.*30\spods\" \"Elasticsearch\" \"Etcd.*SIGKILL\" \"external\sload\sbalancer\" \"identically\snamed\sservices\" \"network\spartition\" \"Services.*Type\sgoes\sfrom\" ) Q\nFYI\nIt'd be hard to prevent regression as more tests are . It might be better if we explicitly define a set of DISRUPTIVE_TESTS, and include them wherever needed, e.g., GCE_PARALLEL_SKIP_TESTS and GCE_SOAK_CONTINUOUS_SKIP_TESTS. That way, people who are adding tests will be aware of the category. WDYT?\nAgreed, that seems like a good idea. Want to send in a PR?
) GCE_RELEASE_SKIP_TESTS=(", "commid": "kubernetes_pr_15722"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5aa59b92896882cb1f1faca8e724525235f30c0a0b8b5362a85db1fdeef9947c", "query": "The desciption of --all-namespaces is listing the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.\" But when I used the following command: error: namespace may not be empty when retrieving a resource by name no namespace set on resource pods \"counter\" The error messages are unclear, and the second message isn't labeled as an error even.\nAnd why couldn't use --all-namespaces flag?\nYou're asking for a specific pod name (counter) in all namespaces, which is a nonsensical query. The pod name is not a grep-like filter, it's a specific name. Yeah, the errors could be better. Describe could work, but potentially generates a LOT of information and API calls.\nBut I found the following API is supported now. Get The result is: has the more information. Do we need to support the following kubectl command(using the above API) or just return clear error messages? get pods counter --all-namespaces\nI'd be ok with this being implemented via the field selector.\nThe issue is still here :). What's our expected result? I'd like to contribute this.\n, after check the code, the URL was built by ; do you mean we should update it; or avoid to use , instead, build url () directly.\nAfter checking the code, it's better to add fieldSelector to , is created for fieldSelector. Will submit a PR based on .\ngetting an object with a particular name across all namespaces seems like something we shouldn't encourage... it breaks the assumption of namespace independence.\n, so we'll only update message for this issue? for example:\nDisagree. Need easy way to troubleshouting the problem \"why pods is stuck in Pending status\". It's tiring for every pod to find who created it and in which namespace. 
In the logs there is only the name of the pod.\nNow, with field selectors we can do To further my understanding, I'd ask:- doesn't itself break independence? One can search by label across all namespaces, it feels funny for by name to be special.", "positive_passages": [{"docid": "doc-en-kubernetes-0251dc60d3059cef9de5e23a6e1c14e3c3862a59ffb5ef86f20ed1a9ca7444c9", "text": "# Clean up kubectl delete namespace my-namespace ############## ###################### # Pods in Namespaces # ############## ###################### ### Create a new namespace # Pre-condition: the other namespace does not exist", "commid": "kubernetes_pr_33546"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5aa59b92896882cb1f1faca8e724525235f30c0a0b8b5362a85db1fdeef9947c", "query": "The description of --all-namespaces is listing the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.\" But when I used the following command: error: namespace may not be empty when retrieving a resource by name no namespace set on resource pods \"counter\" The error messages are unclear, and the second message isn't labeled as an error even.\nAnd why couldn't use --all-namespaces flag?\nYou're asking for a specific pod name (counter) in all namespaces, which is a nonsensical query. The pod name is not a grep-like filter, it's a specific name. Yeah, the errors could be better. Describe could work, but potentially generates a LOT of information and API calls.\nBut I found the following API is supported now. Get The result is: has the more information. Do we need to support the following kubectl command(using the above API) or just return clear error messages? get pods counter --all-namespaces\nI'd be ok with this being implemented via the field selector.\nThe issue is still here :). What's our expected result?
I'd like to contribute this.\n, after checking the code, the URL was built by ; do you mean we should update it; or avoid to use , instead, build url () directly.\nAfter checking the code, it's better to add fieldSelector to , is created for fieldSelector. Will submit a PR based on .\ngetting an object with a particular name across all namespaces seems like something we shouldn't encourage... it breaks the assumption of namespace independence.\n, so we'll only update message for this issue? for example:\nDisagree. Need an easy way to troubleshoot the problem \"why pods is stuck in Pending status\". It's tiring for every pod to find who created it and in which namespace. In the logs there is only the name of the pod.\nNow, with field selectors we can do To further my understanding, I'd ask:- doesn't itself break independence? One can search by label across all namespaces, it feels funny for by name to be special.", "positive_passages": [{"docid": "doc-en-kubernetes-ffa9769cdc00eccb1fa8c6189837159fc8fe65c87a7d0e00cf24cceb5d926fa7", "text": "kube::test::get_object_assert 'pods --namespace=other' \"{{range.items}}{{$id_field}}:{{end}}\" 'valid-pod:' # Post-condition: verify shorthand `-n other` has the same results as `--namespace=other` kube::test::get_object_assert 'pods -n other' \"{{range.items}}{{$id_field}}:{{end}}\" 'valid-pod:' # Post-condition: a resource cannot be retrieved by name across all namespaces output_message=$(! kubectl get \"${kube_flags[@]}\" pod valid-pod --all-namespaces 2>&1) kube::test::if_has_string \"${output_message}\" \"a resource cannot be retrieved by name across all namespaces\" ### Delete POD valid-pod in specific namespace # Pre-condition: valid-pod POD exists", "commid": "kubernetes_pr_33546"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5aa59b92896882cb1f1faca8e724525235f30c0a0b8b5362a85db1fdeef9947c", "query": "The description of --all-namespaces is listing the requested object(s) across all namespaces.
Namespace in current context is ignored even if specified with --namespace.\" But when I used the following command: error: namespace may not be empty when retrieving a resource by name no namespace set on resource pods \"counter\" The error messages are unclear, and the second message isn't labeled as an error even.\nAnd why couldn't use --all-namespaces flag?\nYou're asking for a specific pod name (counter) in all namespaces, which is a nonsensical query. The pod name is not a grep-like filter, it's a specific name. Yeah, the errors could be better. Describe could work, but potentially generates a LOT of information and API calls.\nBut I found the following API is supported now. Get The result is: has the more information. Do we need to support the following kubectl command(using the above API) or just return clear error messages? get pods counter --all-namespaces\nI'd be ok with this being implemented via the field selector.\nThe issue is still here :). What's our expected result? I'd like to contribute this.\n, after checking the code, the URL was built by ; do you mean we should update it; or avoid to use , instead, build url () directly.\nAfter checking the code, it's better to add fieldSelector to , is created for fieldSelector. Will submit a PR based on .\ngetting an object with a particular name across all namespaces seems like something we shouldn't encourage... it breaks the assumption of namespace independence.\n, so we'll only update message for this issue? for example:\nDisagree. Need an easy way to troubleshoot the problem \"why pods is stuck in Pending status\". It's tiring for every pod to find who created it and in which namespace. In the logs there is only the name of the pod.\nNow, with field selectors we can do To further my understanding, I'd ask:- doesn't itself break independence?
One can search by label across all namespaces, it feels funny for by name to be special.", "positive_passages": [{"docid": "doc-en-kubernetes-6d7bdf0dfe1312d54ba2d5f6799c85c5f0930230363985750d24355e11e9d043", "text": "# Clean up kubectl delete namespace other ############## ########### # Secrets # ############## ########### ### Create a new namespace # Pre-condition: the test-secrets namespace does not exist", "commid": "kubernetes_pr_33546"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5aa59b92896882cb1f1faca8e724525235f30c0a0b8b5362a85db1fdeef9947c", "query": "The description of --all-namespaces is listing the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.\" But when I used the following command: error: namespace may not be empty when retrieving a resource by name no namespace set on resource pods \"counter\" The error messages are unclear, and the second message isn't labeled as an error even.\nAnd why couldn't use --all-namespaces flag?\nYou're asking for a specific pod name (counter) in all namespaces, which is a nonsensical query. The pod name is not a grep-like filter, it's a specific name. Yeah, the errors could be better. Describe could work, but potentially generates a LOT of information and API calls.\nBut I found the following API is supported now. Get The result is: has the more information. Do we need to support the following kubectl command(using the above API) or just return clear error messages? get pods counter --all-namespaces\nI'd be ok with this being implemented via the field selector.\nThe issue is still here :). What's our expected result? I'd like to contribute this.\n, after checking the code, the URL was built by ; do you mean we should update it; or avoid to use , instead, build url () directly.\nAfter checking the code, it's better to add fieldSelector to , is created for fieldSelector.
Will submit a PR based on .\ngetting an object with a particular name across all namespaces seems like something we shouldn't encourage... it breaks the assumption of namespace independence.\n, so we'll only update message for this issue? for example:\nDisagree. Need an easy way to troubleshoot the problem \"why pods is stuck in Pending status\". It's tiring for every pod to find who created it and in which namespace. In the logs there is only the name of the pod.\nNow, with field selectors we can do To further my understanding, I'd ask:- doesn't itself break independence? One can search by label across all namespaces, it feels funny for by name to be special.", "positive_passages": [{"docid": "doc-en-kubernetes-e70a6303c5d32a40ecdd575a88b42861a46544b54a6d9c4e8c76a5e7065e2512", "text": "resources []string namespace string names []string namespace string allNamespace bool names []string resourceTuples []resourceTuple", "commid": "kubernetes_pr_33546"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5aa59b92896882cb1f1faca8e724525235f30c0a0b8b5362a85db1fdeef9947c", "query": "The description of --all-namespaces is listing the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.\" But when I used the following command: error: namespace may not be empty when retrieving a resource by name no namespace set on resource pods \"counter\" The error messages are unclear, and the second message isn't labeled as an error even.\nAnd why couldn't use --all-namespaces flag?\nYou're asking for a specific pod name (counter) in all namespaces, which is a nonsensical query. The pod name is not a grep-like filter, it's a specific name. Yeah, the errors could be better. Describe could work, but potentially generates a LOT of information and API calls.\nBut I found the following API is supported now. Get The result is: has the more information.
Do we need to support the following kubectl command(using the above API) or just return clear error messages? get pods counter --all-namespaces\nI'd be ok with this being implemented via the field selector.\nThe issue is still here :). What's our expected result? I'd like to contribute this.\n, after checking the code, the URL was built by ; do you mean we should update it; or avoid to use , instead, build url () directly.\nAfter checking the code, it's better to add fieldSelector to , is created for fieldSelector. Will submit a PR based on .\ngetting an object with a particular name across all namespaces seems like something we shouldn't encourage... it breaks the assumption of namespace independence.\n, so we'll only update message for this issue? for example:\nDisagree. Need an easy way to troubleshoot the problem \"why pods is stuck in Pending status\". It's tiring for every pod to find who created it and in which namespace. In the logs there is only the name of the pod.\nNow, with field selectors we can do To further my understanding, I'd ask:- doesn't itself break independence? One can search by label across all namespaces, it feels funny for by name to be special.", "positive_passages": [{"docid": "doc-en-kubernetes-8b3decc54b171c90e997ce394b47b09e7e0ded8ddf2b681dad689013fbbd284d", "text": "if allNamespace { b.namespace = api.NamespaceAll } b.allNamespace = allNamespace return b }", "commid": "kubernetes_pr_33546"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5aa59b92896882cb1f1faca8e724525235f30c0a0b8b5362a85db1fdeef9947c", "query": "The description of --all-namespaces is listing the requested object(s) across all namespaces.
Namespace in current context is ignored even if specified with --namespace.\" But when I used the following command: error: namespace may not be empty when retrieving a resource by name no namespace set on resource pods \"counter\" The error messages are unclear, and the second message isn't labeled as an error even.\nAnd why couldn't use --all-namespaces flag?\nYou're asking for a specific pod name (counter) in all namespaces, which is a nonsensical query. The pod name is not a grep-like filter, it's a specific name. Yeah, the errors could be better. Describe could work, but potentially generates a LOT of information and API calls.\nBut I found the following API is supported now. Get The result is: has the more information. Do we need to support the following kubectl command(using the above API) or just return clear error messages? get pods counter --all-namespaces\nI'd be ok with this being implemented via the field selector.\nThe issue is still here :). What's our expected result? I'd like to contribute this.\n, after checking the code, the URL was built by ; do you mean we should update it; or avoid to use , instead, build url () directly.\nAfter checking the code, it's better to add fieldSelector to , is created for fieldSelector. Will submit a PR based on .\ngetting an object with a particular name across all namespaces seems like something we shouldn't encourage... it breaks the assumption of namespace independence.\n, so we'll only update message for this issue? for example:\nDisagree. Need an easy way to troubleshoot the problem \"why pods is stuck in Pending status\". It's tiring for every pod to find who created it and in which namespace. In the logs there is only the name of the pod.\nNow, with field selectors we can do To further my understanding, I'd ask:- doesn't itself break independence?
One can search by label across all namespaces, it feels funny for by name to be special.", "positive_passages": [{"docid": "doc-en-kubernetes-3ce7671dbaed66b5a37ced67bc22838f568526e7c63db9f31a1bb728afd1f8df", "text": "selectorNamespace = \"\" } else { if len(b.namespace) == 0 { return &Result{singular: isSingular, err: fmt.Errorf(\"namespace may not be empty when retrieving a resource by name\")} errMsg := \"namespace may not be empty when retrieving a resource by name\" if b.allNamespace { errMsg = \"a resource cannot be retrieved by name across all namespaces\" } return &Result{singular: isSingular, err: fmt.Errorf(errMsg)} } }", "commid": "kubernetes_pr_33546"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6b0ebb305fa6f354e94ea080d3ec8b3f747326739d46689612ca744492c9132e", "query": "From gen-swagger-docs should read the local swagger spec rather than fetching it from That way it generates release api-reference when run-gen-swagger-docs is run in release branch and it generates api-reference for HEAD code, when the script is run on HEAD.\nAssigning to as per discussion.\nBlocks", "positive_passages": [{"docid": "doc-en-kubernetes-b69057100e37a2d57befabb5ab8e77b47cf746a4a69bbbd01478ec2f81ce0a4b", "text": "#run the script once to download the dependent java libraries into the image RUN mkdir /output RUN build/gen-swagger-docs.sh v1 https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/api/swagger-spec/v1.json https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/pkg/api/v1/register.go RUN mkdir /swagger-source RUN wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/swagger-spec/v1.json -O /swagger-source/v1.json RUN build/gen-swagger-docs.sh v1 https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/pkg/api/v1/register.go RUN rm /output/* RUN rm /swagger-source/* ENTRYPOINT [\"build/gen-swagger-docs.sh\"]", "commid": "kubernetes_pr_15909"}], "negative_passages": []} {"query_id": 
"q-en-kubernetes-6b0ebb305fa6f354e94ea080d3ec8b3f747326739d46689612ca744492c9132e", "query": "From gen-swagger-docs should read the local swagger spec rather than fetching it from That way it generates release api-reference when run-gen-swagger-docs is run in release branch and it generates api-reference for HEAD code, when the script is run on HEAD.\nAssigning to as per discussion.\nBlocks", "positive_passages": [{"docid": "doc-en-kubernetes-1bc77974a96c7ff90a4baba8b549df1a29e59e5ecf830ae76adce57210580e77", "text": "cd /build/ wget \"$2\" -O input.json wget \"$3\" -O register.go wget \"$2\" -O register.go # gendocs takes \"input.json\" as the input swagger spec. cp /swagger-source/\"$1\".json input.json ./gradle-2.5/bin/gradle gendocs --info", "commid": "kubernetes_pr_15909"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6b0ebb305fa6f354e94ea080d3ec8b3f747326739d46689612ca744492c9132e", "query": "From gen-swagger-docs should read the local swagger spec rather than fetching it from That way it generates release api-reference when run-gen-swagger-docs is run in release branch and it generates api-reference for HEAD code, when the script is run on HEAD.\nAssigning to as per discussion.\nBlocks", "positive_passages": [{"docid": "doc-en-kubernetes-1acd3b5f00b9f698c8d00d3f2a57cf4249059e02d645055c40e187af8fa172be", "text": "KUBE_ROOT=$(dirname \"${BASH_SOURCE}\")/../.. 
V1_PATH=\"$PWD/${KUBE_ROOT}/docs/api-reference/v1/\" V1BETA1_PATH=\"$PWD/${KUBE_ROOT}/docs/api-reference/extensions/v1beta1\" SWAGGER_PATH=\"$PWD/${KUBE_ROOT}/api/swagger-spec/\" mkdir -p $V1_PATH mkdir -p $V1BETA1_PATH docker run -v $V1_PATH:/output gcr.io/google_containers/gen-swagger-docs:v2 docker run -v $V1_PATH:/output -v ${SWAGGER_PATH}:/swagger-source gcr.io/google_containers/gen-swagger-docs:v3 v1 https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/swagger-spec/v1.json https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/api/v1/register.go docker run -v $V1BETA1_PATH:/output gcr.io/google_containers/gen-swagger-docs:v2 docker run -v $V1BETA1_PATH:/output -v ${SWAGGER_PATH}:/swagger-source gcr.io/google_containers/gen-swagger-docs:v3 v1beta1 https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/swagger-spec/v1beta1.json https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/extensions/v1beta1/register.go", "commid": "kubernetes_pr_15909"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4c43c7fef986a718c460babbe7d4adf5fa347b29ebfc5017571678a77dac017f", "query": "The of the persistent-volumes walkthrough contains the volume mount which seems to be outdated for the used nginx container which is by default serving from the folder. As a result, the data written in in the node will not be returned by the at the end of the walkthrough.
Proposed change: Update the path\nThanks for bringing up this issue, Would you be up for sending a pull request with the fix?", "positive_passages": [{"docid": "doc-en-kubernetes-7a96a61adb014355a78cae1f3b8329c09fe45df2fe15335d84e31f56e618a473", "text": "- containerPort: 80 name: \"http-server\" volumeMounts: - mountPath: \"/var/www/html\" - mountPath: \"/usr/share/nginx/html\" name: mypd volumes: - name: mypd", "commid": "kubernetes_pr_16609"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e74b76d296fd4bae2768426a77e20fe61b63a0ea9deffdfca568f2c34779640e", "query": "Per our email discussion while getting 1.1 out the door. It looks like this never happened, though, which means we're not running autoscaling tests against GCE on the 1.1 branch, (we were running it on GKE with the 1.1-features job, but I'm removing that job in ). I'd once again like to (strongly) suggest that we mirror test suites running against release branch to jobs running against master, so that we avoid this problem of accidentally not running tests when we think we are. I'd much rather have a perhaps-unnecessary slow/HPA/whatever suite than continually try to figure out where slow/HPA/whatever tests are running, depending on if I'm looking at release jobs versus jobs running against master. cc\n+1 We should be running the autoscaling tests in slow or continuous.", "positive_passages": [{"docid": "doc-en-kubernetes-e02806819a359aa1b43fa998eb13ea48cb3f042371a4e40545cd5e084411abe5", "text": "# Specialized tests which should be skipped by default for projects. GCE_DEFAULT_SKIP_TESTS=( \"${REBOOT_SKIP_TESTS[@]}\" \"AutoscalingsSuite\" \"Reboot\" \"ServiceLoadBalancer\" )", "commid": "kubernetes_pr_17639"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e74b76d296fd4bae2768426a77e20fe61b63a0ea9deffdfca568f2c34779640e", "query": "Per our email discussion while getting 1.1 out the door. 
It looks like this never happened, though, which means we're not running autoscaling tests against GCE on the 1.1 branch, (we were running it on GKE with the 1.1-features job, but I'm removing that job in ). I'd once again like to (strongly) suggest that we mirror test suites running against release branch to jobs running against master, so that we avoid this problem of accidentally not running tests when we think we are. I'd much rather have a perhaps-unnecessary slow/HPA/whatever suite than continually try to figure out where slow/HPA/whatever tests are running, depending on if I'm looking at release jobs versus jobs running against master. cc\n+1 We should be running the autoscaling tests in slow or continuous.", "positive_passages": [{"docid": "doc-en-kubernetes-81899539dc1a31e8ba82ced0a7cc78dc03d3e9e17956d41f044b10f012c4a3b3", "text": "# Tests which kills or restarts components and/or nodes. DISRUPTIVE_TESTS=( \"AutoscalingsSuite.*scalescluster\" \"DaemonRestart\" \"Etcdsfailure\" \"NodessResize\"", "commid": "kubernetes_pr_17639"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e74b76d296fd4bae2768426a77e20fe61b63a0ea9deffdfca568f2c34779640e", "query": "Per our email discussion while getting 1.1 out the door. It looks like this never happened, though, which means we're not running autoscaling tests against GCE on the 1.1 branch, (we were running it on GKE with the 1.1-features job, but I'm removing that job in ). I'd once again like to (strongly) suggest that we mirror test suites running against release branch to jobs running against master, so that we avoid this problem of accidentally not running tests when we think we are. I'd much rather have a perhaps-unnecessary slow/HPA/whatever suite than continually try to figure out where slow/HPA/whatever tests are running, depending on if I'm looking at release jobs versus jobs running against master. 
cc\n+1 We should be running the autoscaling tests in slow or continuous.", "positive_passages": [{"docid": "doc-en-kubernetes-453a033a6c395682e208f9b6b53bfaa67c28815b3bea03d4387fc7041cdd2b8c", "text": "# comments below, and for poorly implemented tests, please quote the # issue number tracking speed improvements. GCE_SLOW_TESTS=( # TODO: add deployment test here once it will become stable \"AutoscalingsSuite.*viasreplicationController\" # Before enabling this loadbalancer test in any other test list you must # make sure the associated project has enough quota. At the time of this # writing a GCE project is allowed 3 backend services by default. This", "commid": "kubernetes_pr_17639"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e74b76d296fd4bae2768426a77e20fe61b63a0ea9deffdfca568f2c34779640e", "query": "Per our email discussion while getting 1.1 out the door. It looks like this never happened, though, which means we're not running autoscaling tests against GCE on the 1.1 branch, (we were running it on GKE with the 1.1-features job, but I'm removing that job in ). I'd once again like to (strongly) suggest that we mirror test suites running against release branch to jobs running against master, so that we avoid this problem of accidentally not running tests when we think we are. I'd much rather have a perhaps-unnecessary slow/HPA/whatever suite than continually try to figure out where slow/HPA/whatever tests are running, depending on if I'm looking at release jobs versus jobs running against master. 
cc\n+1 We should be running the autoscaling tests in slow or continuous.", "positive_passages": [{"docid": "doc-en-kubernetes-6e74958de9ffead225313b3c054d319c8e881be0a6c47e63a9be3ba2f8d86cef", "text": "}) // CPU tests via replication controllers It(fmt.Sprintf(titleUp, \"[Autoscaling Suite]\", kindRC), func() { It(fmt.Sprintf(titleUp, \"[Skipped][Autoscaling Suite]\", kindRC), func() { scaleUp(\"rc\", kindRC, rc, f) }) It(fmt.Sprintf(titleDown, \"[Autoscaling Suite]\", kindRC), func() { It(fmt.Sprintf(titleDown, \"[Skipped][Autoscaling Suite]\", kindRC), func() { scaleDown(\"rc\", kindRC, rc, f) }) })", "commid": "kubernetes_pr_17639"}], "negative_passages": []} {"query_id": "q-en-kubernetes-9d53901b20a3ce5248db51f577ad265c56a786f908ebe1fc18ad2aee2b1cef2a", "query": "Hi, Right now in the installation script for Ubuntu there are hardcoded SSH_OPTS. In my case I am using a custom SSH port, and, unless I am mistaken, I have no way to customize it, either globally or on a per-host basis. Would it be possible to do that? I can try to submit a patch if necessary. Thanks.\nYes, please. Just send a PR to add in and make it configurable.", "positive_passages": [{"docid": "doc-en-kubernetes-62566a0d262661d0e61e82888474b36bd4b22a1b1890dc0b5662f6c1bc21138a", "text": "DEBUG=${DEBUG:-\"false\"} # Add SSH_OPTS: Add this to config ssh port SSH_OPTS=\"-oPort=22 -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oLogLevel=ERROR\" ", "commid": "kubernetes_pr_28872"}], "negative_passages": []} {"query_id": "q-en-kubernetes-292213e5846f2bd05d52e0c96b28a8696121992d21f62e361bb9fafd6583182c", "query": "cc\nReopening to remember to check if helped\nNo flakes in recent history\nThis just failed on PR Jenkins for me:\nI've been hitting this one repeatedly in https://doc-0l3o4-0hqt8-s-\nAccess Denied for that link\nJenkins has already purged the link. Anyone who sees this - please open issues WITH THE ERROR MESSAGE if this pops up again. 
A Jenkins link is limited-lifetime.\nthis just re-occurred, here's the log:\nCan you take a look, with prio, please. Of note:\nI just randomly started debugging this because it caused a PR I was reviewing to fail e2es. I think the problem is the test is unaware that kube-proxies can fall behind each other. This is very likely because the \"relist timeout\" imposed on the watch is randomized server side, also one node could be low on cpu etc. Take a single failing case: To break down what happened: Created webserver-f0m07, tested it was up Picked some node (104.197.253.149) and tested reachability Scheduled hostexec pod on some node (e2e-gce-master-3-minion-w6ut, 104.197.244.14) Ran the ss command and checked for nodeport Evident from the following console logs: 09:05:09 Jan 27 09:04:45.111: INFO: Waiting up to 5m0s for pod webserver-f0m07 status to be running 09:05:09 Jan 27 09:04:45.119: INFO: Found pod 'webserver-f0m07' on node 'e2e-gce-master-3-minion-bcr0' 09:05:09 STEP: trying to dial each unique pod 09:05:09 Jan 27 09:04:45.278: INFO: Controller webserver: Got non-empty result from replica 1 [webserver-f0m07]: 1 of 1 required successes so far 09:05:09 STEP: hitting the pod through the service's NodePort 09:05:09 Jan 27 09:04:45.354: INFO: Successfully reached http://104.197.253.149: 09:05:09 STEP: verifying the node port is locked 09:05:09 Jan 27 09:04:45.376: INFO: Waiting up to 5m0s for pod hostexec status to be running 09:05:09 Jan 27 09:04:57.437: INFO: Found pod 'hostexec' on node 'e2e-gce-master-3-minion-w6ut' 09:05:09 Jan 27 09:04:57.441: INFO: Running '/jenkins-master-data/jobs/kubernetes-pull-build-test-e2e-gce/workspace --server=https://104.197.246.4 --kubeconfig=/var/lib/jenkins/jobs/kubernetes-pull-build-test-e2e- exec --namespace=e2e-tests-services-oht70 hostexec -- /bin/sh -c for i in $(seq 1 10); do if ss -ant46 'sport = :' | grep ^LISTEN; then exit 0; fi; sleep 0.1; done; exit 1' 09:05:09 STEP: deleting service nodeportservice-test in 
namespace e2e-tests-services-oht70 09:05:09 STEP: stopping RC webserver in namespace e2e-tests-services-oht70 09:05:09 [AfterEach] Services node the webserver pod landed on: node the hostexec pod landed on: Comparing kube-proxy logs from these 2 nodes we see that the second kubeproxy (hostexec) only actually processed the OnUpdate event for the node port at 17:05:17, which is actually way later than the first kubeproxy sees it (17:04:40). To rule out clock skew I check when the kubelet on the first node saw the hostexec, and this matches up to roughly the time the kubelet on the second node saw the webserver pod (~17:45). To figure out where the skew comes from just check time between OnEndpointsUpdate log lines, you'll see the hostexec kubeproxy taking 2s at times (~17:04:29), while the webserver kube-proxy just whizzes along at 0.5s per update. So I think the problems here are: there's no way to know that all kube-proxies have executed the nodeport actions for a service (i.e. \"service readiness\"); kube-proxy may be processing OnUpdates inefficiently, but that's not the core problem. The test should either have a much larger timeout (the hostexec sh command above has seq(1 10)) and/or make sure it lands on the right host it has already verified preconditions for. The \"service readiness\" problem is a little deeper and will probably help ingress too.\nchatted with Prashanth. He is investigating this.\nAnd again today: gs://kubernetes-jenkins/logs/kubernetes-e2e-gce-parallel/ In the last 20 days this has failed ~9 times in that suite, only counting runs where only a few tests failed.
0.8% fail rate.\nthis is now assigned to you - plans for a fix?\nfor review\nPR for fix is in.", "positive_passages": [{"docid": "doc-en-kubernetes-83b4b7c42e2cf6b629361a435aa70cd983df931ce961f6b083996a4cdf87d968", "text": "ip := pickNodeIP(c) testReachable(ip, nodePort) By(\"verifying the node port is locked\") hostExec := LaunchHostExecPod(f.Client, f.Namespace.Name, \"hostexec\") cmd := fmt.Sprintf(`ss -ant46 'sport = :%d' | tail -n +2 | grep LISTEN`, nodePort) // Loop a bit because we see transient flakes. cmd := fmt.Sprintf(`for i in $(seq 1 10); do if ss -ant46 'sport = :%d' | grep ^LISTEN; then exit 0; fi; sleep 0.1; done; exit 1`, nodePort) stdout, err := RunHostCmd(hostExec.Namespace, hostExec.Name, cmd) if err != nil { Failf(\"expected node port (%d) to be in use, stdout: %v\", nodePort, stdout)", "commid": "kubernetes_pr_18243"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e921a1329e162a603540fb4dce228e0e7a7b55de85a818dddd7891c563012592", "query": "When I try to bring up my cluster (with a custom wrapper script that sources some other environment files,but calling in the end) using the AWS provider, the script fails with following reason: This happens because the correct environment variable set is . Changing the line to use this value fixes the issue.", "positive_passages": [{"docid": "doc-en-kubernetes-95eadf1c7c273cceb7a6e26e14d26401c63106b87c7340a0e951961e6b2cafd0", "text": "# This tarball is only used by Ubuntu Trusty. KUBE_MANIFESTS_TAR= if [[ \"${OS_DISTRIBUTION}\" == \"trusty\" ]]; then KUBE_MANIFESTS_TAR=\"${KUBE_ROOT}/server/kuernetes-manifests.tar.gz\" if [[ \"${KUBE_OS_DISTRIBUTION}\" == \"trusty\" ]]; then KUBE_MANIFESTS_TAR=\"${KUBE_ROOT}/server/kubernetes-manifests.tar.gz\" if [[ ! 
-f \"${KUBE_MANIFESTS_TAR}\" ]]; then KUBE_MANIFESTS_TAR=\"${KUBE_ROOT}/_output/release-tars/kubernetes-manifests.tar.gz\" fi", "commid": "kubernetes_pr_18747"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dff5a657be89122e51d4f3eb3f8988656cad8de0e3ba7b736bbec0e7c70e87f4", "query": "Have only seen this once, personally.\nThis flaked in Jenkins as well. [job/kubernetes-test-go/5351]\ncan you take a look at this? I think has a lot already :)", "positive_passages": [{"docid": "doc-en-kubernetes-aade3b7b598f081f558e10128e7b70064f4750cf8dcbf5dfc6ba5b365c0d72e1", "text": "package apiserver import ( \"fmt\" \"io/ioutil\" \"net/http\" \"net/http/httptest\"", "commid": "kubernetes_pr_19274"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dff5a657be89122e51d4f3eb3f8988656cad8de0e3ba7b736bbec0e7c70e87f4", "query": "Have only seen this once, personally.\nThis flaked in Jenkins as well. [job/kubernetes-test-go/5351]\ncan you take a look at this? I think has a lot already :)", "positive_passages": [{"docid": "doc-en-kubernetes-2f71df567e4124c557db9cac4354115d299eb985305446284b7c938e88479f4f", "text": "func (f fakeRL) TryAccept() bool { return bool(f) } func (f fakeRL) Accept() {} func expectHTTP(url string, code int, t *testing.T) { func expectHTTP(url string, code int) error { r, err := http.Get(url) if err != nil { t.Errorf(\"unexpected error: %v\", err) return return fmt.Errorf(\"unexpected error: %v\", err) } if r.StatusCode != code { t.Errorf(\"unexpected response: %v\", r.StatusCode) return fmt.Errorf(\"unexpected response: %v\", r.StatusCode) } return nil } func getPath(resource, namespace, name string) string {", "commid": "kubernetes_pr_19274"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dff5a657be89122e51d4f3eb3f8988656cad8de0e3ba7b736bbec0e7c70e87f4", "query": "Have only seen this once, personally.\nThis flaked in Jenkins as well. [job/kubernetes-test-go/5351]\ncan you take a look at this? 
I think has a lot already :)", "positive_passages": [{"docid": "doc-en-kubernetes-cd22cb52db5241d7811cefa1f9989ca42f487fbf884116c7a46631c92ff4f972", "text": "// 'short' accounted ones. calls := &sync.WaitGroup{} calls.Add(AllowedInflightRequestsNo * 2) // Responses is used to wait until all responses are // received. This prevents some async requests getting EOF // errors from prematurely closing the server responses := sync.WaitGroup{} responses.Add(AllowedInflightRequestsNo * 2) // Block is used to keep requests in flight for as long as we need to. All requests will // be unblocked at the same time. block := sync.WaitGroup{}", "commid": "kubernetes_pr_19274"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dff5a657be89122e51d4f3eb3f8988656cad8de0e3ba7b736bbec0e7c70e87f4", "query": "Have only seen this once, personally.\nThis flaked in Jenkins as well. [job/kubernetes-test-go/5351]\ncan you take a look at this? I think has a lot already :)", "positive_passages": [{"docid": "doc-en-kubernetes-bd0f0893ddae620e1a6109f97a5b918f382a4d7a7880004a7fe2019d6aa3ca6a", "text": "for i := 0; i < AllowedInflightRequestsNo; i++ { // These should hang waiting on block... go func() { expectHTTP(server.URL+\"/foo/bar/watch\", http.StatusOK, t) if err := expectHTTP(server.URL+\"/foo/bar/watch\", http.StatusOK); err != nil { t.Error(err) } responses.Done() }() } // Check that sever is not saturated by not-accounted calls expectHTTP(server.URL+\"/dontwait\", http.StatusOK, t) oneAccountedFinished := sync.WaitGroup{} oneAccountedFinished.Add(1) var once sync.Once if err := expectHTTP(server.URL+\"/dontwait\", http.StatusOK); err != nil { t.Error(err) } // These should hang and be accounted, i.e. saturate the server for i := 0; i < AllowedInflightRequestsNo; i++ { // These should hang waiting on block... 
go func() { expectHTTP(server.URL, http.StatusOK, t) once.Do(oneAccountedFinished.Done) if err := expectHTTP(server.URL, http.StatusOK); err != nil { t.Error(err) } responses.Done() }() } // We wait for all calls to be received by the server", "commid": "kubernetes_pr_19274"}], "negative_passages": []} {"query_id": "q-en-kubernetes-dff5a657be89122e51d4f3eb3f8988656cad8de0e3ba7b736bbec0e7c70e87f4", "query": "Have only seen this once, personally.\nThis flaked in Jenkins as well. [job/kubernetes-test-go/5351]\ncan you take a look at this? I think has a lot already :)", "positive_passages": [{"docid": "doc-en-kubernetes-6ed99fc05bdd712e5aaf8bac4aa81c14752153bf69293d8b9db4a90d1b2078bb", "text": "// Do this multiple times to show that it rate limit rejected requests don't block. for i := 0; i < 2; i++ { expectHTTP(server.URL, errors.StatusTooManyRequests, t) if err := expectHTTP(server.URL, errors.StatusTooManyRequests); err != nil { t.Error(err) } } // Validate that non-accounted URLs still work expectHTTP(server.URL+\"/dontwait/watch\", http.StatusOK, t) if err := expectHTTP(server.URL+\"/dontwait/watch\", http.StatusOK); err != nil { t.Error(err) } // Let all hanging requests finish block.Done() // Show that we recover from being blocked up. // Too avoid flakyness we need to wait until at least one of the requests really finishes. oneAccountedFinished.Wait() expectHTTP(server.URL, http.StatusOK, t) responses.Wait() if err := expectHTTP(server.URL, http.StatusOK); err != nil { t.Error(err) } } func TestReadOnly(t *testing.T) {", "commid": "kubernetes_pr_19274"}], "negative_passages": []} {"query_id": "q-en-kubernetes-87225d8f4bc8e526bc045df03189198434a2c9e524932dfbbdbe1aba72725fa0", "query": "there is a test that you marked as skipped that needs to be reenabled or otherwise taken care of (see ): \"Security Context [Skipped]\" Can we instead classify these as ? See for context. 
Happy to answer any questions if anything doesn't make sense.\nThis is work toward .\nI like the reclassification! I would be happy to relabel the test. I'll wait for to merge though\nhas been merged; if you send a PR, I will review it. Thank you!", "positive_passages": [{"docid": "doc-en-kubernetes-9437afb02494d13989a1f78405eb7ff0a368d3448cfe1f204df0e2c2156a4bdb", "text": "return pod } var _ = Describe(\"Security Context [Skipped]\", func() { var _ = Describe(\"Security Context [Feature:SecurityContext]\", func() { framework := NewFramework(\"security-context\") It(\"should support pod.Spec.SecurityContext.SupplementalGroups\", func() {", "commid": "kubernetes_pr_19833"}], "negative_passages": []} {"query_id": "q-en-kubernetes-afb0861e202f8fce3b6c8d79cb48e19d0d8d5c78740b4366b66f69e1018d5f1b", "query": "cc:\nIt was broken by . I'll send a fix in a sec.\nSorry & thanks for fixing.", "positive_passages": [{"docid": "doc-en-kubernetes-c3b080c48f1e44688bd5cd2e48720471d7c26f6071d381fe037ddd66f2879481", "text": "find-release-tars upload-server-tars # ensure that environmental variables specifying number of migs to create set_num_migs if [[ ${KUBE_USE_EXISTING_MASTER:-} == \"true\" ]]; then create-nodes create-autoscaler", "commid": "kubernetes_pr_20806"}], "negative_passages": []} {"query_id": "q-en-kubernetes-afb0861e202f8fce3b6c8d79cb48e19d0d8d5c78740b4366b66f69e1018d5f1b", "query": "cc:\nIt was broken by . 
I'll send a fix in a sec.\nSorry & thanks for fixing.", "positive_passages": [{"docid": "doc-en-kubernetes-d83d2985b7a92a0f874218e5507e80236e05d33b292a40c3a225be21fc37b540", "text": "create-node-instance-template $template_name } function create-nodes() { local template_name=\"${NODE_INSTANCE_PREFIX}-template\" # Assumes: # - MAX_INSTANCES_PER_MIG # - NUM_NODES # exports: # - NUM_MIGS function set_num_migs() { local defaulted_max_instances_per_mig=${MAX_INSTANCES_PER_MIG:-500} if [[ ${defaulted_max_instances_per_mig} -le \"0\" ]]; then echo \"MAX_INSTANCES_PER_MIG cannot be negative. Assuming default 500\" defaulted_max_instances_per_mig=500 fi local num_migs=$(((${NUM_NODES} + ${defaulted_max_instances_per_mig} - 1) / ${defaulted_max_instances_per_mig})) local instances_per_mig=$(((${NUM_NODES} + ${num_migs} - 1) / ${num_migs})) local last_mig_size=$((${NUM_NODES} - (${num_migs} - 1) * ${instances_per_mig})) export NUM_MIGS=$(((${NUM_NODES} + ${defaulted_max_instances_per_mig} - 1) / ${defaulted_max_instances_per_mig})) } # Assumes: # - NUM_MIGS # - NODE_INSTANCE_PREFIX # - NUM_NODES # - PROJECT # - ZONE function create-nodes() { local template_name=\"${NODE_INSTANCE_PREFIX}-template\" local instances_per_mig=$(((${NUM_NODES} + ${NUM_MIGS} - 1) / ${NUM_MIGS})) local last_mig_size=$((${NUM_NODES} - (${NUM_MIGS} - 1) * ${instances_per_mig})) #TODO: parallelize this loop to speed up the process for i in $(seq $((${num_migs} - 1))); do for i in $(seq $((${NUM_MIGS} - 1))); do gcloud compute instance-groups managed create \"${NODE_INSTANCE_PREFIX}-group-$i\" --project \"${PROJECT}\" ", "commid": "kubernetes_pr_20806"}], "negative_passages": []} {"query_id": "q-en-kubernetes-afb0861e202f8fce3b6c8d79cb48e19d0d8d5c78740b4366b66f69e1018d5f1b", "query": "cc:\nIt was broken by . 
I'll send a fix in a sec.\nSorry & thanks for fixing.", "positive_passages": [{"docid": "doc-en-kubernetes-a61aa3d91ce1a3ac58cb5de2d9e064de75651479e965f8df667db993093c1760", "text": "--project \"${PROJECT}\" || true; } # Assumes: # - NUM_MIGS # - NODE_INSTANCE_PREFIX # - PROJECT # - ZONE # - ENABLE_NODE_AUTOSCALER # - TARGET_NODE_UTILIZATION # - AUTOSCALER_MAX_NODES # - AUTOSCALER_MIN_NODES function create-autoscaler() { # Create autoscaler for nodes if requested if [[ \"${ENABLE_NODE_AUTOSCALER}\" == \"true\" ]]; then METRICS=\"\" local metrics=\"\" # Current usage METRICS+=\"--custom-metric-utilization metric=custom.cloudmonitoring.googleapis.com/kubernetes.io/cpu/node_utilization,\" METRICS+=\"utilization-target=${TARGET_NODE_UTILIZATION},utilization-target-type=GAUGE \" METRICS+=\"--custom-metric-utilization metric=custom.cloudmonitoring.googleapis.com/kubernetes.io/memory/node_utilization,\" METRICS+=\"utilization-target=${TARGET_NODE_UTILIZATION},utilization-target-type=GAUGE \" metrics+=\"--custom-metric-utilization metric=custom.cloudmonitoring.googleapis.com/kubernetes.io/cpu/node_utilization,\" metrics+=\"utilization-target=${TARGET_NODE_UTILIZATION},utilization-target-type=GAUGE \" metrics+=\"--custom-metric-utilization metric=custom.cloudmonitoring.googleapis.com/kubernetes.io/memory/node_utilization,\" metrics+=\"utilization-target=${TARGET_NODE_UTILIZATION},utilization-target-type=GAUGE \" # Reservation METRICS+=\"--custom-metric-utilization metric=custom.cloudmonitoring.googleapis.com/kubernetes.io/cpu/node_reservation,\" METRICS+=\"utilization-target=${TARGET_NODE_UTILIZATION},utilization-target-type=GAUGE \" METRICS+=\"--custom-metric-utilization metric=custom.cloudmonitoring.googleapis.com/kubernetes.io/memory/node_reservation,\" METRICS+=\"utilization-target=${TARGET_NODE_UTILIZATION},utilization-target-type=GAUGE \" metrics+=\"--custom-metric-utilization metric=custom.cloudmonitoring.googleapis.com/kubernetes.io/cpu/node_reservation,\" 
metrics+=\"utilization-target=${TARGET_NODE_UTILIZATION},utilization-target-type=GAUGE \" metrics+=\"--custom-metric-utilization metric=custom.cloudmonitoring.googleapis.com/kubernetes.io/memory/node_reservation,\" metrics+=\"utilization-target=${TARGET_NODE_UTILIZATION},utilization-target-type=GAUGE \" echo \"Creating node autoscalers.\" local max_instances_per_mig=$(((${AUTOSCALER_MAX_NODES} + ${num_migs} - 1) / ${num_migs})) local last_max_instances=$((${AUTOSCALER_MAX_NODES} - (${num_migs} - 1) * ${max_instances_per_mig})) local min_instances_per_mig=$(((${AUTOSCALER_MIN_NODES} + ${num_migs} - 1) / ${num_migs})) local last_min_instances=$((${AUTOSCALER_MIN_NODES} - (${num_migs} - 1) * ${min_instances_per_mig})) local max_instances_per_mig=$(((${AUTOSCALER_MAX_NODES} + ${NUM_MIGS} - 1) / ${NUM_MIGS})) local last_max_instances=$((${AUTOSCALER_MAX_NODES} - (${NUM_MIGS} - 1) * ${max_instances_per_mig})) local min_instances_per_mig=$(((${AUTOSCALER_MIN_NODES} + ${NUM_MIGS} - 1) / ${NUM_MIGS})) local last_min_instances=$((${AUTOSCALER_MIN_NODES} - (${NUM_MIGS} - 1) * ${min_instances_per_mig})) for i in $(seq $((${num_migs} - 1))); do for i in $(seq $((${NUM_MIGS} - 1))); do gcloud compute instance-groups managed set-autoscaling \"${NODE_INSTANCE_PREFIX}-group-$i\" --zone \"${ZONE}\" --project \"${PROJECT}\" --min-num-replicas \"${min_instances_per_mig}\" --max-num-replicas \"${max_instances_per_mig}\" ${METRICS} || true --min-num-replicas \"${min_instances_per_mig}\" --max-num-replicas \"${max_instances_per_mig}\" ${metrics} || true done gcloud compute instance-groups managed set-autoscaling \"${NODE_INSTANCE_PREFIX}-group\" --zone \"${ZONE}\" --project \"${PROJECT}\" --min-num-replicas \"${last_min_instances}\" --max-num-replicas \"${last_max_instances}\" ${METRICS} || true --min-num-replicas \"${last_min_instances}\" --max-num-replicas \"${last_max_instances}\" ${metrics} || true fi }", "commid": "kubernetes_pr_20806"}], "negative_passages": []} {"query_id": 
"q-en-kubernetes-898fb42ff3be6ddf5751636f0c5233242d4bfcd4663d2731ea511851184e64ac", "query": "My reading is that the first attempt to create e2e-gce-minion-template looks like it failed with \"Backend error\", but it seems that the template was actually created, so subsequent attempts failed because it already existed.\nFrom kubernetes-e2e-gce/ claims to have deleted: But it apparently wasn't. Looks like we need to improve handling of this: cc There were previous attempts to make instance template more robust, such as in .\nAnd, after this failed, the subsequent test run failed the leaked-resource check.\nHa, an old TODO strikes again. :)\nIt's actually pretty disturbing that claimed when it wasn't. There may be an internal bug here, since the name should be available for use if delete occurred, in theory.\nDo you think someone from your team could take a look at this one?\nIt sounds like the hypothesis is that the delete failed in , but gcloud reported 'deleted'. But the failure message from the first create operation in is one I haven't seen before: reads to me as \"something went wrong, I don't know the state of the world\". It looks like the template was created OK (ish), but we try recreating with the same name which then fails (as expected). Is there evidence that the delete actually failed in ? My inclination would be that we should have a retry loop that looks more like: (That is go-bash; neither go nor bash. Just be grateful there's no salt in there!)\n- can you take a look at this one this morning?\nI was out this morning but I'll see if I can look into it later this afternoon.\nping\nThis just happened again: I'm going to apply P0 label to the PR that is supposed to fix it.\nAh, this is the same as .
We created an internal bug for this too, ref .", "positive_passages": [{"docid": "doc-en-kubernetes-7c69d934eec530cc7129b48a9ab0a579ddb6762fa2e7b0b6e2efe3acebff6947", "text": "echo -e \"${color_yellow}Attempt ${attempt} failed to create instance template $template_name. Retrying.${color_norm}\" >&2 attempt=$(($attempt+1)) sleep $(($attempt * 5)) # In case the previous attempt failed with something like a # Backend Error and left the entry laying around, delete it # before we try again. gcloud compute instance-templates delete \"$template_name\" --project \"${PROJECT}\" &>/dev/null || true else break fi", "commid": "kubernetes_pr_21720"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c681678808babfbe7431923b7db420cefa8f168cf23d7fd835b30d0e6ca097d9", "query": "See example log below. I'm pretty sure the SG in question was from an ELB, but we should just auto-retry anyway.", "positive_passages": [{"docid": "doc-en-kubernetes-2af3f16b74f5b1be74b649544c97960e4fd8a01bacd218ffb29aec310d04257e", "text": "fi } # Deletes a security group # usage: delete_security_group function delete_security_group { local -r sg_id=${1} echo \"Deleting security group: ${sg_id}\" # We retry in case there's a dependent resource - typically an ELB n=0 until [ $n -ge 20 ]; do $AWS_CMD delete-security-group --group-id ${sg_id} > $LOG && return n=$[$n+1] sleep 3 done echo \"Unable to delete security group: ${sg_id}\" exit 1 } function ssh-key-setup { if [[ ! -f \"$AWS_SSH_KEY\" ]]; then ssh-keygen -f \"$AWS_SSH_KEY\" -N ''", "commid": "kubernetes_pr_22783"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c681678808babfbe7431923b7db420cefa8f168cf23d7fd835b30d0e6ca097d9", "query": "See example log below. 
I'm pretty sure the SG in question was from an ELB, but we should just auto-retry anyway.", "positive_passages": [{"docid": "doc-en-kubernetes-2cf6dfd65d08df1a7542e6202e7422f4c86f36ff81420c3b16059ccdbb4f8da2", "text": "continue fi echo \"Deleting security group: ${sg_id}\" $AWS_CMD delete-security-group --group-id ${sg_id} > $LOG delete_security_group ${sg_id} done subnet_ids=$($AWS_CMD describe-subnets ", "commid": "kubernetes_pr_22783"}], "negative_passages": []} {"query_id": "q-en-kubernetes-44bcfeddc8a0280d868b3901f7f513a7e1565a2dce681aab6546cd53db5f9e78", "query": "In 1.0 and 1.1, headless services that specified a mismatched port/targetPort were tolerated (the targetPort was ignored). tightened validation to reject these services. That causes problems when upgrading from currently-working clusters.\n2 candidate to avoid breaking backwards compatibility\noh hell. Yeah. I don't think this is worth a breaking change. We should either roll that back ASAP or change it to nuke any user-provided value and replace it with the value. I think the latter is potentially dangerous and also not worth the risk, so I think we should rollback and just update the docs for targetPort. objections?\nWho has time to do this? It doesn't look like an auto-rollback is doable.\nAlready opened a PR in\nYes, I agree this isn't worth a breaking change, But could we give an warning log that targetPort isn't equal to port and would be ignored?\nThere's no facility in validation for warning. I don't think spamming the server log in response to API requests is that helpful here\nYes, that sounds reasonable. I just think that targetPort was ignored is a little unfriendly to user.", "positive_passages": [{"docid": "doc-en-kubernetes-abe299ff9a34766123cf5d47dbe664dabdbc3502c45793018fc677dabdc8637d", "text": "}, \"targetPort\": { \"type\": \"string\", \"description\": \"Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. 
Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of Port is used (an identity map). Defaults to the service port. More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#defining-a-service\" \"description\": \"Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#defining-a-service\" }, \"nodePort\": { \"type\": \"integer\",", "commid": "kubernetes_pr_21680"}], "negative_passages": []} {"query_id": "q-en-kubernetes-44bcfeddc8a0280d868b3901f7f513a7e1565a2dce681aab6546cd53db5f9e78", "query": "In 1.0 and 1.1, headless services that specified a mismatched port/targetPort were tolerated (the targetPort was ignored). tightened validation to reject these services. That causes problems when upgrading from currently-working clusters.\n2 candidate to avoid breaking backwards compatibility\noh hell. Yeah. I don't think this is worth a breaking change. We should either roll that back ASAP or change it to nuke any user-provided value and replace it with the value. I think the latter is potentially dangerous and also not worth the risk, so I think we should rollback and just update the docs for targetPort. objections?\nWho has time to do this? It doesn't look like an auto-rollback is doable.\nAlready opened a PR in\nYes, I agree this isn't worth a breaking change, But could we give an warning log that targetPort isn't equal to port and would be ignored?\nThere's no facility in validation for warning. 
I don't think spamming the server log in response to API requests is that helpful here\nYes, that sounds reasonable. I just think that targetPort was ignored is a little unfriendly to user.", "positive_passages": [{"docid": "doc-en-kubernetes-1ef6c3b371f15e0fcb7e3be51a55664a8c783753a74dd6621d506d7ff24ec072", "text": "

targetPort

Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod’s container ports. If this is not specified, the value of Port is used (an identity map). Defaults to the service port. More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#defining-a-service

Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod’s container ports. If this is not specified, the value of the port field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the port field. More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#defining-a-service

false

string

", "commid": "kubernetes_pr_21680"}], "negative_passages": []} {"query_id": "q-en-kubernetes-44bcfeddc8a0280d868b3901f7f513a7e1565a2dce681aab6546cd53db5f9e78", "query": "In 1.0 and 1.1, headless services that specified a mismatched port/targetPort were tolerated (the targetPort was ignored). tightened validation to reject these services. That causes problems when upgrading from currently-working clusters.\n2 candidate to avoid breaking backwards compatibility\noh hell. Yeah. I don't think this is worth a breaking change. We should either roll that back ASAP or change it to nuke any user-provided value and replace it with the value. I think the latter is potentially dangerous and also not worth the risk, so I think we should rollback and just update the docs for targetPort. objections?\nWho has time to do this? It doesn't look like an auto-rollback is doable.\nAlready opened a PR in\nYes, I agree this isn't worth a breaking change, But could we give an warning log that targetPort isn't equal to port and would be ignored?\nThere's no facility in validation for warning. I don't think spamming the server log in response to API requests is that helpful here\nYes, that sounds reasonable. I just think that targetPort was ignored is a little unfriendly to user.", "positive_passages": [{"docid": "doc-en-kubernetes-05c95f912510158ad562f4ca219d08f8bce4a5448378fd4a5c3555676185606a", "text": "// Optional: The target port on pods selected by this service. If this // is a string, it will be looked up as a named port in the target // Pod's container ports. If this is not specified, the default value // is the sames as the Port field (an identity map). // Pod's container ports. If this is not specified, the value // of the 'port' field is used (an identity map). // This field is ignored for services with clusterIP=None, and should be // omitted or set equal to the 'port' field. 
TargetPort intstr.IntOrString `json:\"targetPort\"` // The port on each node on which this service is exposed.", "commid": "kubernetes_pr_21680"}], "negative_passages": []} {"query_id": "q-en-kubernetes-44bcfeddc8a0280d868b3901f7f513a7e1565a2dce681aab6546cd53db5f9e78", "query": "In 1.0 and 1.1, headless services that specified a mismatched port/targetPort were tolerated (the targetPort was ignored). tightened validation to reject these services. That causes problems when upgrading from currently-working clusters.\n2 candidate to avoid breaking backwards compatibility\noh hell. Yeah. I don't think this is worth a breaking change. We should either roll that back ASAP or change it to nuke any user-provided value and replace it with the value. I think the latter is potentially dangerous and also not worth the risk, so I think we should rollback and just update the docs for targetPort. objections?\nWho has time to do this? It doesn't look like an auto-rollback is doable.\nAlready opened a PR in\nYes, I agree this isn't worth a breaking change, But could we give an warning log that targetPort isn't equal to port and would be ignored?\nThere's no facility in validation for warning. I don't think spamming the server log in response to API requests is that helpful here\nYes, that sounds reasonable. I just think that targetPort was ignored is a little unfriendly to user.", "positive_passages": [{"docid": "doc-en-kubernetes-5e3694ac29732cd1486e73029b53416a902cc001f3a8e816cde18ab740de7b8d", "text": "// Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. // If this is a string, it will be looked up as a named port in the // target Pod's container ports. If this is not specified, the value // of Port is used (an identity map). // Defaults to the service port. // of the 'port' field is used (an identity map). // This field is ignored for services with clusterIP=None, and should be // omitted or set equal to the 'port' field. 
// More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#defining-a-service TargetPort intstr.IntOrString `json:\"targetPort,omitempty\"`", "commid": "kubernetes_pr_21680"}], "negative_passages": []} {"query_id": "q-en-kubernetes-44bcfeddc8a0280d868b3901f7f513a7e1565a2dce681aab6546cd53db5f9e78", "query": "In 1.0 and 1.1, headless services that specified a mismatched port/targetPort were tolerated (the targetPort was ignored). tightened validation to reject these services. That causes problems when upgrading from currently-working clusters.\n2 candidate to avoid breaking backwards compatibility\noh hell. Yeah. I don't think this is worth a breaking change. We should either roll that back ASAP or change it to nuke any user-provided value and replace it with the value. I think the latter is potentially dangerous and also not worth the risk, so I think we should rollback and just update the docs for targetPort. objections?\nWho has time to do this? It doesn't look like an auto-rollback is doable.\nAlready opened a PR in\nYes, I agree this isn't worth a breaking change, But could we give an warning log that targetPort isn't equal to port and would be ignored?\nThere's no facility in validation for warning. I don't think spamming the server log in response to API requests is that helpful here\nYes, that sounds reasonable. I just think that targetPort was ignored is a little unfriendly to user.", "positive_passages": [{"docid": "doc-en-kubernetes-80ab4a5dfc80ee8e5007fc1c6525528730c79bf7d4ec04fcecdec6fa7d3d37d7", "text": "\"name\": \"The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. This maps to the 'Name' field in EndpointPort objects. Optional if only one ServicePort is defined on this service.\", \"protocol\": \"The IP protocol for this port. Supports \"TCP\" and \"UDP\". 
Default is TCP.\", \"port\": \"The port that will be exposed by this service.\", \"targetPort\": \"Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of Port is used (an identity map). Defaults to the service port. More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#defining-a-service\", \"targetPort\": \"Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#defining-a-service\", \"nodePort\": \"The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually assigned by the system. If specified, it will be allocated to the service if unused or else creation of the service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info: http://releases.k8s.io/HEAD/docs/user-guide/services.md#type--nodeport\", }", "commid": "kubernetes_pr_21680"}], "negative_passages": []} {"query_id": "q-en-kubernetes-44bcfeddc8a0280d868b3901f7f513a7e1565a2dce681aab6546cd53db5f9e78", "query": "In 1.0 and 1.1, headless services that specified a mismatched port/targetPort were tolerated (the targetPort was ignored). tightened validation to reject these services. That causes problems when upgrading from currently-working clusters.\n2 candidate to avoid breaking backwards compatibility\noh hell. Yeah. 
I don't think this is worth a breaking change. We should either roll that back ASAP or change it to nuke any user-provided value and replace it with the value. I think the latter is potentially dangerous and also not worth the risk, so I think we should rollback and just update the docs for targetPort. objections?\nWho has time to do this? It doesn't look like an auto-rollback is doable.\nAlready opened a PR in\nYes, I agree this isn't worth a breaking change, But could we give an warning log that targetPort isn't equal to port and would be ignored?\nThere's no facility in validation for warning. I don't think spamming the server log in response to API requests is that helpful here\nYes, that sounds reasonable. I just think that targetPort was ignored is a little unfriendly to user.", "positive_passages": [{"docid": "doc-en-kubernetes-90592115971551c634ab29405903d852a663b4237b75b940627e0a106d01123d", "text": "allErrs = append(allErrs, field.Invalid(fldPath.Child(\"targetPort\"), sp.TargetPort, PortNameErrorMsg)) } if isHeadlessService { if sp.TargetPort.Type == intstr.String || (sp.TargetPort.Type == intstr.Int && sp.Port != sp.TargetPort.IntValue()) { allErrs = append(allErrs, field.Invalid(fldPath.Child(\"port\"), sp.Port, \"must be equal to targetPort when clusterIP = None\")) } } // in the v1 API, targetPorts on headless services were tolerated. // once we have version-specific validation, we can reject this on newer API versions, but until then, we have to tolerate it for compatibility. 
// // if isHeadlessService { // \tif sp.TargetPort.Type == intstr.String || (sp.TargetPort.Type == intstr.Int && sp.Port != sp.TargetPort.IntValue()) { // \t\tallErrs = append(allErrs, field.Invalid(fldPath.Child(\"targetPort\"), sp.TargetPort, \"must be equal to the value of 'port' when clusterIP = None\")) // \t} // } return allErrs }", "commid": "kubernetes_pr_21680"}], "negative_passages": []} {"query_id": "q-en-kubernetes-44bcfeddc8a0280d868b3901f7f513a7e1565a2dce681aab6546cd53db5f9e78", "query": "In 1.0 and 1.1, headless services that specified a mismatched port/targetPort were tolerated (the targetPort was ignored). tightened validation to reject these services. That causes problems when upgrading from currently-working clusters.\n2 candidate to avoid breaking backwards compatibility\noh hell. Yeah. I don't think this is worth a breaking change. We should either roll that back ASAP or change it to nuke any user-provided value and replace it with the value. I think the latter is potentially dangerous and also not worth the risk, so I think we should rollback and just update the docs for targetPort. objections?\nWho has time to do this? It doesn't look like an auto-rollback is doable.\nAlready opened a PR in\nYes, I agree this isn't worth a breaking change, But could we give an warning log that targetPort isn't equal to port and would be ignored?\nThere's no facility in validation for warning. I don't think spamming the server log in response to API requests is that helpful here\nYes, that sounds reasonable. 
I just think that targetPort was ignored is a little unfriendly to user.", "positive_passages": [{"docid": "doc-en-kubernetes-8d7e5a4d14b75b3a9cb0b8c941b1a1eb9e5a4d6e42c5a4939676a75863877e7e", "text": "numErrs: 0, }, { name: \"invalid port headless\", name: \"invalid port headless 1\", tweakSvc: func(s *api.Service) { s.Spec.Ports[0].Port = 11722 s.Spec.Ports[0].TargetPort = intstr.FromInt(11721) s.Spec.ClusterIP = api.ClusterIPNone }, numErrs: 1, // in the v1 API, targetPorts on headless services were tolerated. // once we have version-specific validation, we can reject this on newer API versions, but until then, we have to tolerate it for compatibility. // numErrs: 1, numErrs: 0, }, { name: \"invalid port headless\", name: \"invalid port headless 2\", tweakSvc: func(s *api.Service) { s.Spec.Ports[0].Port = 11722 s.Spec.Ports[0].TargetPort = intstr.FromString(\"target\") s.Spec.ClusterIP = api.ClusterIPNone }, numErrs: 1, // in the v1 API, targetPorts on headless services were tolerated. // once we have version-specific validation, we can reject this on newer API versions, but until then, we have to tolerate it for compatibility. // numErrs: 1, numErrs: 0, }, { name: \"invalid publicIPs localhost\",", "commid": "kubernetes_pr_21680"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ef950703bbc523928d3eaae485e68af79bed5a970f7cb3a77ab67617ae2b27b3", "query": "This might be fixed, haven't reproduced on latest upstream source. But in any case, I thought I'd create the issue. bug = i assume any failure in e2es shouldnt be b/c of a panic. ... I noticed a failure on Openshift today, which I think alludes to a possible issue in replica sets.\ncc\ncc\nYup, looks like a bug, and it looks like it's still there. The isn't checked, so can be nil, which will cause the panic on L102. cc\ni think the iterators should be declarative here i.e. ? thats a better fix .... 
(update : nvm, i think its fine as is, just add a nil check: Because, the declarative won't work for the entire test, since the function for checking pod name responses requires a full )\nCc\nThanks, Jay. Even with this change, the test would have failed, so we should still debug the cause.\nIt seems we don't check err here: if return an error, will be nil.\nYup.... that is fixed in the PR, right?", "positive_passages": [{"docid": "doc-en-kubernetes-26e8d63c91fb7b0db38ad8826b018c87201a07d0542814a489a967c1244dc789", "text": "label := labels.SelectorFromSet(labels.Set(map[string]string{\"name\": name})) pods, err := podsCreated(f.Client, f.Namespace.Name, name, replicas) Expect(err).NotTo(HaveOccurred()) By(\"Ensuring each pod is running\")", "commid": "kubernetes_pr_22084"}], "negative_passages": []} {"query_id": "q-en-kubernetes-886b06652982581fd7e25d590373ed434d5bcc6b0395de55126cecbb82a25826", "query": "Getting this error... looks like we are quoting : Working on a hotfix...\nLinking to\nFix is", "positive_passages": [{"docid": "doc-en-kubernetes-a4dc979ce6e7f6401ae9ec175d25467a5f47da0dd953fa7e0061f372dc856478", "text": "# use JENKINS_PUBLISHED_VERSION, default to 'ci/latest', since that's # usually what we're testing. check_dirty_workspace fetch_published_version_tars \"${JENKINS_PUBLISHED_VERSION:-'ci/latest'}\" fetch_published_version_tars \"${JENKINS_PUBLISHED_VERSION:-ci/latest}\" fi # Copy GCE keys so we don't keep cycling them.", "commid": "kubernetes_pr_22288"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6f0ac5bc0e5382a6ce45dad10850f935dd1f0d3ace15d7812b713d43f420d4c7", "query": "From Slack, we got a report that \"my service controller is trying to delete a ELB ( im running k8s in ews ) thats is not there, and for that reason its failing over and over again with the message \" It turns out the user had deleted the ELB manually. 
We should tolerate this better.\nI think this was actually caused by a failure to tag the cluster, which triggered another bug where we can't pass an empty filters list to the AWS API", "positive_passages": [{"docid": "doc-en-kubernetes-746c080ab61e2da636a01be15944f54f56478dca9ad8529a7d41c5c277328242", "text": "// Return all the security groups that are tagged as being part of our cluster func (s *AWSCloud) getTaggedSecurityGroups() (map[string]*ec2.SecurityGroup, error) { request := &ec2.DescribeSecurityGroupsInput{} filters := []*ec2.Filter{} request.Filters = s.addFilters(filters) request.Filters = s.addFilters(nil) groups, err := s.ec2.DescribeSecurityGroups(request) if err != nil { return nil, fmt.Errorf(\"error querying security groups: %v\", err)", "commid": "kubernetes_pr_22788"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6f0ac5bc0e5382a6ce45dad10850f935dd1f0d3ace15d7812b713d43f420d4c7", "query": "From Slack, we got a report that \"my service controller is trying to delete a ELB ( im running k8s in ews ) thats is not there, and for that reason its failing over and over again with the message \" It turns out the user had deleted the ELB manually. 
We should tolerate this better.\nI think this was actually caused by a failure to tag the cluster, which triggered another bug where we can't pass an empty filters list to the AWS API", "positive_passages": [{"docid": "doc-en-kubernetes-faa12f11d8ab5105c0c97e4816c6eb3cdcd65a2c2303b4b8481b6045d0c52d83", "text": "describeRequest.Filters = s.addFilters(filters) actualGroups, err := s.ec2.DescribeSecurityGroups(describeRequest) if err != nil { return fmt.Errorf(\"error querying security groups: %v\", err) return fmt.Errorf(\"error querying security groups for ELB: %v\", err) } taggedSecurityGroups, err := s.getTaggedSecurityGroups()", "commid": "kubernetes_pr_22788"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6f0ac5bc0e5382a6ce45dad10850f935dd1f0d3ace15d7812b713d43f420d4c7", "query": "From Slack, we got a report that \"my service controller is trying to delete a ELB ( im running k8s in ews ) thats is not there, and for that reason its failing over and over again with the message \" It turns out the user had deleted the ELB manually. We should tolerate this better.\nI think this was actually caused by a failure to tag the cluster, which triggered another bug where we can't pass an empty filters list to the AWS API", "positive_passages": [{"docid": "doc-en-kubernetes-a9cdfe189e34fa530142106e93f38ebbc9fbe9441978686aa6110e04d6ef686a", "text": "for k, v := range s.filterTags { filters = append(filters, newEc2Filter(\"tag:\"+k, v)) } if len(filters) == 0 { // We can't pass a zero-length Filters to AWS (it's an error) // So if we end up with no filters; just return nil return nil } return filters }", "commid": "kubernetes_pr_22788"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f85722742125f7ea9b877706460d59003186444bb7882bd19fd7f4e8c120208f", "query": "I assume isn't actually a fix for this, just a way to get faster logs?\nActually is only where the test failed. 
To be honest, I didn't have time to look into it at all.\nany progress on this?\nhttp://pr- is another failure.\nOnly this one is an actual error. And... the logging logic doesn't make much sense. It logs \"err\", but it only gets there if err was nil. I can't see any other logs besides the junit logs, which don't provide any extra info. Am I missing something?\nThere is a concurrency bug in the test. We signal the scheduler to stop by but we don't synchronize at this point to verify that the scheduler has stopped. Indeed, since blocks on in the absence of any work, we could sit around a very long time, schedule one more thing, and only then get the signal to exit. We need to synchronize here, but once we do that, we may need to prevent the deadlock that occurs while blocks forever waiting for . I don't know that this is the bug, but the bug is there and it fits the symptom.\nThis is similar to the bug I fixed with . The seems susceptible to this by design. Not only does it provide no way to verify that the goroutine has been stopped, it makes it hard to avoid deadlock because the provided may block forever. We should probably re-do this interface to avoid the problem, although in practice I bet it only pops up in tests. I see two choices: invest in fixing this now, or comment out all the test that come after the assumption \"now we've stopped this scheduler\" and fix it later. WDYT?\nWhen you say Are you talking about everything after step 7?\nYes.\nI guess it is fine to comment out everything after step 7 for now, if you don't see any easy alternative. The key is that we test that the right scheduler is scheduling the pod, and while steps 8/9 do that, earlier parts of the test also do it and I think they're adequate. 
Please file an issue related to the problem you described, so we can fix it eventually.", "positive_passages": [{"docid": "doc-en-kubernetes-5b24538121f7a0f84368916a8ec61f8edbb0ca7a6436fdd080b8088cfe6d3cf4", "text": "if err != nil { t.Errorf(\"Failed to delete pod: %v\", err) } close(schedulerConfig.StopEverything) //\t8. create 2 pods: testPodNoAnnotation2 and testPodWithAnnotationFitsDefault2 //\t\t- note: these two pods belong to default scheduler which no longer exists podWithNoAnnotation2 := createPod(\"pod-with-no-annotation2\", nil) podWithAnnotationFitsDefault2 := createPod(\"pod-with-annotation-fits-default2\", schedulerAnnotationFitsDefault) testPodNoAnnotation2, err := restClient.Pods(api.NamespaceDefault).Create(podWithNoAnnotation2) if err != nil { t.Fatalf(\"Failed to create pod: %v\", err) } testPodWithAnnotationFitsDefault2, err := restClient.Pods(api.NamespaceDefault).Create(podWithAnnotationFitsDefault2) if err != nil { t.Fatalf(\"Failed to create pod: %v\", err) } // The rest of this test assumes that closing StopEverything will cause the // scheduler thread to stop immediately. It won't, and in fact it will often // schedule 1 more pod before finally exiting. Comment out until we fix that. // // See https://github.com/kubernetes/kubernetes/issues/23715 for more details. //\t9. 
**check point-3**: //\t\t- testPodNoAnnotation2 and testPodWithAnnotationFitsDefault2 shoule NOT be scheduled err = wait.Poll(time.Second, time.Second*5, podScheduled(restClient, testPodNoAnnotation2.Namespace, testPodNoAnnotation2.Name)) if err == nil { t.Errorf(\"Test MultiScheduler: %s Pod got scheduled, %v\", testPodNoAnnotation2.Name, err) } else { t.Logf(\"Test MultiScheduler: %s Pod not scheduled\", testPodNoAnnotation2.Name) } err = wait.Poll(time.Second, time.Second*5, podScheduled(restClient, testPodWithAnnotationFitsDefault2.Namespace, testPodWithAnnotationFitsDefault2.Name)) if err == nil { t.Errorf(\"Test MultiScheduler: %s Pod got scheduled, %v\", testPodWithAnnotationFitsDefault2.Name, err) } else { t.Logf(\"Test MultiScheduler: %s Pod scheduled\", testPodWithAnnotationFitsDefault2.Name) } /* close(schedulerConfig.StopEverything) //\t8. create 2 pods: testPodNoAnnotation2 and testPodWithAnnotationFitsDefault2 //\t\t- note: these two pods belong to default scheduler which no longer exists podWithNoAnnotation2 := createPod(\"pod-with-no-annotation2\", nil) podWithAnnotationFitsDefault2 := createPod(\"pod-with-annotation-fits-default2\", schedulerAnnotationFitsDefault) testPodNoAnnotation2, err := restClient.Pods(api.NamespaceDefault).Create(podWithNoAnnotation2) if err != nil { t.Fatalf(\"Failed to create pod: %v\", err) } testPodWithAnnotationFitsDefault2, err := restClient.Pods(api.NamespaceDefault).Create(podWithAnnotationFitsDefault2) if err != nil { t.Fatalf(\"Failed to create pod: %v\", err) } //\t9. 
**check point-3**: //\t\t- testPodNoAnnotation2 and testPodWithAnnotationFitsDefault2 shoule NOT be scheduled err = wait.Poll(time.Second, time.Second*5, podScheduled(restClient, testPodNoAnnotation2.Namespace, testPodNoAnnotation2.Name)) if err == nil { t.Errorf(\"Test MultiScheduler: %s Pod got scheduled, %v\", testPodNoAnnotation2.Name, err) } else { t.Logf(\"Test MultiScheduler: %s Pod not scheduled\", testPodNoAnnotation2.Name) } err = wait.Poll(time.Second, time.Second*5, podScheduled(restClient, testPodWithAnnotationFitsDefault2.Namespace, testPodWithAnnotationFitsDefault2.Name)) if err == nil { t.Errorf(\"Test MultiScheduler: %s Pod got scheduled, %v\", testPodWithAnnotationFitsDefault2.Name, err) } else { t.Logf(\"Test MultiScheduler: %s Pod scheduled\", testPodWithAnnotationFitsDefault2.Name) } */ } func createPod(name string, annotation map[string]string) *api.Pod {", "commid": "kubernetes_pr_23717"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-e8f1c47cef6eeb6df9beb7ae43dde8dc785aac7f0b519a4f66e24c124798877a", "text": "\"k8s.io/kubernetes/pkg/volume\" ) var errUnsupportedVolumeType = fmt.Errorf(\"unsupported volume type\") // This just exports required functions from kubelet proper, for use by volume // plugins. 
type volumeHost struct {", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-9397439368797452b7c30687a9aedc51cf22104863e51d095624aef827a98486", "text": "return vh.kubelet.kubeClient } // NewWrapperMounter attempts to create a volume mounter // from a volume Spec, pod and volume options. // Returns a new volume Mounter or an error. func (vh *volumeHost) NewWrapperMounter(volName string, spec volume.Spec, pod *api.Pod, opts volume.VolumeOptions) (volume.Mounter, error) { // The name of wrapper volume is set to \"wrapped_{wrapped_volume_name}\" wrapperVolumeName := \"wrapped_\" + volName", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. 
This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-27715586bca4f2379961a8436355527140474241ee28336aaef941fdf72a3397", "text": "spec.Volume.Name = wrapperVolumeName } b, err := vh.kubelet.newVolumeMounterFromPlugins(&spec, pod, opts) if err == nil && b == nil { return nil, errUnsupportedVolumeType } return b, nil return vh.kubelet.newVolumeMounterFromPlugins(&spec, pod, opts) } // NewWrapperUnmounter attempts to create a volume unmounter // from a volume name and pod uid. // Returns a new volume Unmounter or an error. func (vh *volumeHost) NewWrapperUnmounter(volName string, spec volume.Spec, podUID types.UID) (volume.Unmounter, error) { // The name of wrapper volume is set to \"wrapped_{wrapped_volume_name}\" wrapperVolumeName := \"wrapped_\" + volName", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... 
propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-c551b82e1893314d00cd8faf8fd7a467d9b50800d777329434abf3684a764120", "text": "if err != nil { return nil, err } if plugin == nil { // Not found but not an error return nil, nil } c, err := plugin.NewUnmounter(spec.Name(), podUID) if err == nil && c == nil { return nil, errUnsupportedVolumeType } return c, nil return plugin.NewUnmounter(spec.Name(), podUID) } func (vh *volumeHost) GetCloudProvider() cloudprovider.Interface {", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-30c536daf4d831701f89dc665a44cdd2cc4a6ccd097464693772a6a56d27cbf3", "text": "glog.Errorf(\"Could not create volume mounter for pod %s: %v\", pod.UID, err) return nil, err } if mounter == nil { return nil, errUnsupportedVolumeType } // some volumes require attachment before mounter's setup. 
// The plugin can be nil, but non-nil errors are legitimate errors.", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-3919bbce9cc3f836dde3d1f5ccce6801321a078ec0d3c2df938a7e204d9ff4be", "text": "glog.Errorf(\"Could not create volume unmounter for %s: %v\", volume.Name, err) continue } if unmounter == nil { glog.Errorf(\"Could not create volume unmounter for %s: %v\", volume.Name, errUnsupportedVolumeType) continue } tuple := cleanerTuple{Unmounter: unmounter} detacher, err := kl.newVolumeDetacherFromPlugins(volume.Kind, volume.Name, podUID)", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... 
propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-c90bf14c4cb2ecd576923bd087c0bbc9f47f469488bcc81f8f507200f889c524", "text": "return currentVolumes } // newVolumeMounterFromPlugins attempts to find a plugin by volume spec, pod // and volume options and then creates a Mounter. // Returns a valid Unmounter or an error. func (kl *Kubelet) newVolumeMounterFromPlugins(spec *volume.Spec, pod *api.Pod, opts volume.VolumeOptions) (volume.Mounter, error) { plugin, err := kl.volumePluginMgr.FindPluginBySpec(spec) if err != nil { return nil, fmt.Errorf(\"can't use volume plugins for %s: %v\", spec.Name(), err) } if plugin == nil { // Not found but not an error return nil, nil } physicalMounter, err := plugin.NewMounter(spec, pod, opts) if err != nil { return nil, fmt.Errorf(\"failed to instantiate volume physicalMounter for %s: %v\", spec.Name(), err) return nil, fmt.Errorf(\"failed to instantiate mounter for volume: %s using plugin: %s with a root cause: %v\", spec.Name(), plugin.Name(), err) } glog.V(10).Infof(\"Used volume plugin %q to mount %s\", plugin.Name(), spec.Name()) return physicalMounter, nil } // newVolumeAttacherFromPlugins attempts to find a plugin from a volume spec // and then create an Attacher. 
// Returns: // - an attacher if one exists // - an error if no plugin was found for the volume // or the attacher failed to instantiate // - nil if there is no appropriate attacher for this volume func (kl *Kubelet) newVolumeAttacherFromPlugins(spec *volume.Spec, pod *api.Pod, opts volume.VolumeOptions) (volume.Attacher, error) { plugin, err := kl.volumePluginMgr.FindAttachablePluginBySpec(spec) if err != nil {", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-7ea5991b5b61aa687c9675a954bae2cfce33db4dc7ca5b081263d5c529402c0f", "text": "return attacher, nil } // newVolumeUnmounterFromPlugins attempts to find a plugin by name and then // create an Unmounter. // Returns a valid Unmounter or an error. 
func (kl *Kubelet) newVolumeUnmounterFromPlugins(kind string, name string, podUID types.UID) (volume.Unmounter, error) { plugName := strings.UnescapeQualifiedNameForDisk(kind) plugin, err := kl.volumePluginMgr.FindPluginByName(plugName)", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-97f1f658f1757c3194e6969040d82951514b54815a507f365bf15d1e02b0913d", "text": "// TODO: Maybe we should launch a cleanup of this dir? return nil, fmt.Errorf(\"can't use volume plugins for %s/%s: %v\", podUID, kind, err) } if plugin == nil { // Not found but not an error. return nil, nil } unmounter, err := plugin.NewUnmounter(name, podUID) if err != nil { return nil, fmt.Errorf(\"failed to instantiate volume plugin for %s/%s: %v\", podUID, kind, err)", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-388477b7b71a1ca060f820d6295e607b1410db3be88b2f2825a1396c929dcb98", "query": "in there is a generic error variable defined that is used when a failure occurs and no other error is surfaced, it is a catch-all when something goes wrong. Think this should be reworded to something that is more reflective of the true issue and not so confusing for users. 
This error mostly comes when a builder can not be created and it is typically not true that the volume type is unsupported, but rather some other underlying issue, such as missing secrets or endpoints or etc... propose something like: or\nare you working on this?\n- yes, I would like to submit the PR for this", "positive_passages": [{"docid": "doc-en-kubernetes-ce1e9135180c8d33eeeef94e33f36bcd2ee757ee2593b1fbf2ded85c7196681d", "text": "return unmounter, nil } // newVolumeDetacherFromPlugins attempts to find a plugin by a name and then // create a Detacher. // Returns: // - a detacher if one exists // - an error if no plugin was found for the volume // or the detacher failed to instantiate // - nil if there is no appropriate detacher for this volume func (kl *Kubelet) newVolumeDetacherFromPlugins(kind string, name string, podUID types.UID) (volume.Detacher, error) { plugName := strings.UnescapeQualifiedNameForDisk(kind) plugin, err := kl.volumePluginMgr.FindAttachablePluginByName(plugName)", "commid": "kubernetes_pr_23122"}], "negative_passages": []} {"query_id": "q-en-kubernetes-35fd153d614877189212c9effd62b4185551ad89b0a5afd2d50910e4d13004f7", "query": "GCE PD docs so unused blocks can be freed back into the cloud and performance gains realized. Rummaging through the source of the GCE PD and mounter stuff in Kube, it does not appear that that's being done (the old shell script version of safeformatand_mount does, but I don't think that's used). I haven't confirmed that it's the case at runtime for sure, though, although I will be able to shortly.\nI LGTMed the PR and then realized it needed more work. Reverting it and assigning it to ciwang. We need to make sure that we only use the \"discard\" mount option when the fs supports it. Assigning to ciwang for now. If anyone else wants to work on it feel free to grab it.\nAssigning to This fell through the cracks as ciwang was working in other areas. 
This did not make 1.4 either moving to next-candidate.\nIssues go stale after 30d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. Prevent issues from auto-closing with an comment. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle stale\nStale issues rot after 30d of inactivity. Mark the issue as fresh with . Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle rotten /remove-lifecycle stale\nRotten issues close after 30d of inactivity. Reopen the issue with . Mark the issue as fresh with . Send feedback to sig-testing, kubernetes/test-infra and/or . /close", "positive_passages": [{"docid": "doc-en-kubernetes-6563ffbd9847c53f7754ca1228a623a2b10b35e164f014021922eed7d799bca7", "text": "options := []string{} if readOnly { options = append(options, \"ro\") } else { // as per https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting options = append(options, \"discard\") } if notMnt { diskMounter := &mount.SafeFormatAndMount{Interface: mounter, Runner: exec.New()}", "commid": "kubernetes_pr_28448"}], "negative_passages": []} {"query_id": "q-en-kubernetes-58924bc188741050d42aef28bcde689e9e536597bd1a8023545ea7c340476df0", "query": "There was a naive attempt to do this in , but _output it hardcoded throughout the code base so we need a more comprehensive solution.\nDo we still need this ?\n/sig release\nIssues go stale after 90d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. Prevent issues from auto-closing with an comment. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle stale\nIssues go stale after 90d of inactivity. 
Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle stale\nStale issues rot after 30d of inactivity. Mark the issue as fresh with . Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle rotten /remove-lifecycle stale\nRotten issues close after 30d of inactivity. Reopen the issue with . Mark the issue as fresh with . Send feedback to sig-testing, kubernetes/test-infra and/or . /close", "positive_passages": [{"docid": "doc-en-kubernetes-b6e0d65922de13099a4ecc62b1ef87b15ee17aebf8857df598681a837a558c8c", "text": ".vscode # This is where the result of the go build goes /output/** /output /_output/** /_output /output*/ /_output*/ # Emacs save files *~", "commid": "kubernetes_pr_24895"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0e001d2949b98669c91261c4dff5862e4b2b89e42c08b2ec522e0f2bf5577576", "query": "The e2e test is flaky. It often fails with the error in the test suite. Sample failures: Marking it P0 since it frequently blocks the merge queue.\ncc\nIn the flaky test, the pod is deleted with grace-period=30, but there is no guarantee that the pod is going to be deleted from etcd in 30s, thus the flake. This piece of test is copied from the pod e2e test: In that test, the total timeout is actually larger than 30s, because there is some extra logic (< 30s) between the time when the deletion is requested, and when the timer is started. Given that there is no flake reported for test in , I will fix this flake by extending the timeout to 60s.\nThis flake should have the same root cause as .\nETA on fix? 
This is the most frequent cause of test flakes and submit queue blockage.\nsent out as a temporary fix.\nHmm, happened in: Which looks like we're just not handling the watch cancellation properly in the test.\ndoes the timeout really fix the watch cancellation? (didn't look too deep at the pr)\nI think so. see bprashanth's last comment, the timeout occurs again.\nAs of 3:20pm Apr.8, last failure occurs at 3:13:18pm, Apr. 7, Mike's fix is merged at 15:04:42, Apr. 7.", "positive_passages": [{"docid": "doc-en-kubernetes-9334baf3e20ed291321d7d241181b456450b48c23177a13179588f786643546a", "text": "deleted := false timeout := false var lastPod *api.Pod timer := time.After(30 * time.Second) // The 30s grace period is not an upper-bound of the time it takes to // delete the pod from etcd. timer := time.After(60 * time.Second) for !deleted && !timeout { select { case event, _ := <-w.ResultChan():", "commid": "kubernetes_pr_23960"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e8ab9ffac8b2165072d2c1fbe420239717b9c3f34b2625dbc28e6435b3ca7b75", "query": "When trying to use kubectl from hyperkube I'm met with \"unknown flag\" errors. It looks as if the flags aren't being passed through to kubectl but hyperkube is trying to handle them itself (and subsequently giving an error when you give it flags it doesn't understand). 
You can reproduce simply by running: And you'll see the error:\nI have a fix up in PR .", "positive_passages": [{"docid": "doc-en-kubernetes-8951ce6060e45ae43acc70532e6ba5b7b4393036bb5d9728910dec140fc7fd61", "text": "command := args[0] baseCommand := path.Base(command) serverName := baseCommand args = args[1:] if serverName == hk.Name { args = args[1:] baseFlags := hk.Flags() baseFlags.SetInterspersed(false) // Only parse flags up to the next real command", "commid": "kubernetes_pr_25512"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e8ab9ffac8b2165072d2c1fbe420239717b9c3f34b2625dbc28e6435b3ca7b75", "query": "When trying to use kubectl from hyperkube I'm met with \"unknown flag\" errors. It looks as if the flags aren't being passed through to kubectl but hyperkube is trying to handle them itself (and subsequently giving an error when you give it flags it doesn't understand). You can reproduce simply by running: And you'll see the error:\nI have a fix up in PR .", "positive_passages": [{"docid": "doc-en-kubernetes-342bd919e4189b9a016c392abf3da001f5604a531ee3f8cd5505b315bf0e3f3c", "text": "\"strings\" \"testing\" \"github.com/spf13/cobra\" \"github.com/stretchr/testify/assert\" )", "commid": "kubernetes_pr_25512"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e8ab9ffac8b2165072d2c1fbe420239717b9c3f34b2625dbc28e6435b3ca7b75", "query": "When trying to use kubectl from hyperkube I'm met with \"unknown flag\" errors. It looks as if the flags aren't being passed through to kubectl but hyperkube is trying to handle them itself (and subsequently giving an error when you give it flags it doesn't understand). 
You can reproduce simply by running: And you'll see the error:\nI have a fix up in PR .", "positive_passages": [{"docid": "doc-en-kubernetes-11f7fa71984e6925209581a42dc0df4e1a133096850c5be79929d4bff7789038", "text": "} } const defaultCobraMessage = \"default message from cobra command\" const defaultCobraSubMessage = \"default sub-message from cobra command\" const cobraMessageDesc = \"message to print\" const cobraSubMessageDesc = \"sub-message to print\" func testCobraCommand(n string) *Server { var cobraServer *Server var msg string cmd := &cobra.Command{ Use: n, Long: n, Short: n, Run: func(cmd *cobra.Command, args []string) { cobraServer.hk.Printf(\"msg: %sn\", msg) }, } cmd.PersistentFlags().StringVar(&msg, \"msg\", defaultCobraMessage, cobraMessageDesc) var subMsg string subCmdName := \"subcommand\" subCmd := &cobra.Command{ Use: subCmdName, Long: subCmdName, Short: subCmdName, Run: func(cmd *cobra.Command, args []string) { cobraServer.hk.Printf(\"submsg: %s\", subMsg) }, } subCmd.PersistentFlags().StringVar(&subMsg, \"submsg\", defaultCobraSubMessage, cobraSubMessageDesc) cmd.AddCommand(subCmd) localFlags := cmd.LocalFlags() localFlags.SetInterspersed(false) s := &Server{ SimpleUsage: n, Long: fmt.Sprintf(\"A server named %s which uses a cobra command\", n), Run: func(s *Server, args []string) error { cobraServer = s cmd.SetOutput(s.hk.Out()) cmd.SetArgs(args) return cmd.Execute() }, flags: localFlags, } return s } func runFull(t *testing.T, args string) *result { buf := new(bytes.Buffer) hk := HyperKube{", "commid": "kubernetes_pr_25512"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e8ab9ffac8b2165072d2c1fbe420239717b9c3f34b2625dbc28e6435b3ca7b75", "query": "When trying to use kubectl from hyperkube I'm met with \"unknown flag\" errors. It looks as if the flags aren't being passed through to kubectl but hyperkube is trying to handle them itself (and subsequently giving an error when you give it flags it doesn't understand). 
You can reproduce simply by running: And you'll see the error:\nI have a fix up in PR .", "positive_passages": [{"docid": "doc-en-kubernetes-a7b4490e84b0f32f5ba1e1709cd8aa08f76d628c2f212921dfec59011ddcae06", "text": "hk.AddServer(testServer(\"test2\")) hk.AddServer(testServer(\"test3\")) hk.AddServer(testServerError(\"test-error\")) hk.AddServer(testCobraCommand(\"test-cobra-command\")) a := strings.Split(args, \" \") t.Logf(\"Running full with args: %q\", a)", "commid": "kubernetes_pr_25512"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e8ab9ffac8b2165072d2c1fbe420239717b9c3f34b2625dbc28e6435b3ca7b75", "query": "When trying to use kubectl from hyperkube I'm met with \"unknown flag\" errors. It looks as if the flags aren't being passed through to kubectl but hyperkube is trying to handle them itself (and subsequently giving an error when you give it flags it doesn't understand). You can reproduce simply by running: And you'll see the error:\nI have a fix up in PR .", "positive_passages": [{"docid": "doc-en-kubernetes-622ce212754b02406333adb22d034248c2542fb6188a4bdc34afb413301ed7ad", "text": "assert.Contains(t, x.output, \"test-error Run\") assert.EqualError(t, x.err, \"Server returning error\") } func TestCobraCommandHelp(t *testing.T) { x := runFull(t, \"hyperkube test-cobra-command --help\") assert.NoError(t, x.err) assert.Contains(t, x.output, \"A server named test-cobra-command which uses a cobra command\") assert.Contains(t, x.output, cobraMessageDesc) } func TestCobraCommandDefaultMessage(t *testing.T) { x := runFull(t, \"hyperkube test-cobra-command\") assert.Contains(t, x.output, fmt.Sprintf(\"msg: %s\", defaultCobraMessage)) } func TestCobraCommandMessage(t *testing.T) { x := runFull(t, \"hyperkube test-cobra-command --msg foobar\") assert.Contains(t, x.output, \"msg: foobar\") } func TestCobraSubCommandHelp(t *testing.T) { x := runFull(t, \"hyperkube test-cobra-command subcommand --help\") assert.NoError(t, x.err) assert.Contains(t, x.output, 
cobraSubMessageDesc) } func TestCobraSubCommandDefaultMessage(t *testing.T) { x := runFull(t, \"hyperkube test-cobra-command subcommand\") assert.Contains(t, x.output, fmt.Sprintf(\"submsg: %s\", defaultCobraSubMessage)) } func TestCobraSubCommandMessage(t *testing.T) { x := runFull(t, \"hyperkube test-cobra-command subcommand --submsg foobar\") assert.Contains(t, x.output, \"submsg: foobar\") } ", "commid": "kubernetes_pr_25512"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e8ab9ffac8b2165072d2c1fbe420239717b9c3f34b2625dbc28e6435b3ca7b75", "query": "When trying to use kubectl from hyperkube I'm met with \"unknown flag\" errors. It looks as if the flags aren't being passed through to kubectl but hyperkube is trying to handle them itself (and subsequently giving an error when you give it flags it doesn't understand). You can reproduce simply by running: And you'll see the error:\nI have a fix up in PR .", "positive_passages": [{"docid": "doc-en-kubernetes-2b5f9f0ffb3cc85ba87433b7f111f9e58e4311fd2a021ffa5ce51382e2523acd", "text": "import ( \"os\" \"k8s.io/kubernetes/cmd/kubectl/app\" \"k8s.io/kubernetes/pkg/kubectl/cmd\" cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" ) func NewKubectlServer() *Server { cmd := cmd.NewKubectlCommand(cmdutil.NewFactory(nil), os.Stdin, os.Stdout, os.Stderr) localFlags := cmd.LocalFlags() localFlags.SetInterspersed(false) return &Server{ name: \"kubectl\", SimpleUsage: \"Kubernetes command line client\", Long: \"Kubernetes command line client\", Run: func(s *Server, args []string) error { os.Args = os.Args[1:] if err := app.Run(); err != nil { os.Exit(1) } os.Exit(0) return nil cmd.SetArgs(args) return cmd.Execute() }, flags: localFlags, } }", "commid": "kubernetes_pr_25512"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e8ab9ffac8b2165072d2c1fbe420239717b9c3f34b2625dbc28e6435b3ca7b75", "query": "When trying to use kubectl from hyperkube I'm met with \"unknown flag\" errors. 
It looks as if the flags aren't being passed through to kubectl but hyperkube is trying to handle them itself (and subsequently giving an error when you give it flags it doesn't understand). You can reproduce simply by running: And you'll see the error:\nI have a fix up in PR .", "positive_passages": [{"docid": "doc-en-kubernetes-2732fcd5dc2223412874546d8b3bedb1e727b310d439895f172a715346f9e11c", "text": "cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" ) /* WARNING: this logic is duplicated, with minor changes, in cmd/hyperkube/kubectl.go Any salient changes here will need to be manually reflected in that file. */ func Run() error { cmd := cmd.NewKubectlCommand(cmdutil.NewFactory(nil), os.Stdin, os.Stdout, os.Stderr) return cmd.Execute()", "commid": "kubernetes_pr_25512"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c9313de66f3508f8886cd07268b253d5e2cfa321359afb89838b6984a0afe883", "query": "The recently merged PR a version cache in dockertools, which makes the test flaky. The reason is that injected error into fakedockerclient and then expected to see the error from right after that (See ). However, the error was consumed by the goroutine which periodically updates the version cache (See ). The docker daemon version is rarely updated, and we only need the version once for each container creation, during which we also do container create, start, inspect, etc. 1) Do we care about the cost of the call so much that we have to introduce the complexity? 2) Kubelet periodically calls , can we rely on that to update the cache? Anyhow, at least, we should disable the cache or have a fake one in the unit test. :) /cc\nI once considered moving this cache to the runtime level, but that may require changing the runtime interface. What if we only cache apiversion?\nWe can only get via , so there will still be the data race. :) One option is to update the version cache in , now the kubelet periodically calls to check the runtime status.
Another better option is to cache the version in layer, now is under our control, we can refactor it with an additional argument indicating whether to use cached value or get directly from the daemon. If we cache the version only for performance, I prefer the second option which is cleaner and more efficient. :)\nOK, will look into this tonight\nThe unit test itself should not start a goroutine to update the cache (unless you're actually testing the goroutine). We should disable it anyway, as we often do in our unit tests. It also sounds like we don't need a goroutine dedicated for this. If we can tolerate the additional overhead from time to time, we can set an expiration time (e.g., 1 minute), and only call the runtime if the entry expires.\nTotally agree. :) Maybe a new function in wrapping the update logic, or wrapping the logic in the .\nexpiration time sounds good in this case\nThis is happening quite frequently - increasing priority to P0.", "positive_passages": [{"docid": "doc-en-kubernetes-fa09ab41b5e12bb6bc8ee74665b9e578edcbfc2894e7a160da324531e4b96bc7", "text": "\"k8s.io/kubernetes/pkg/kubelet/network\" proberesults \"k8s.io/kubernetes/pkg/kubelet/prober/results\" kubetypes \"k8s.io/kubernetes/pkg/kubelet/types\" \"k8s.io/kubernetes/pkg/kubelet/util/cache\" \"k8s.io/kubernetes/pkg/types\" \"k8s.io/kubernetes/pkg/util/flowcontrol\" \"k8s.io/kubernetes/pkg/util/oom\"", "commid": "kubernetes_pr_24384"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c9313de66f3508f8886cd07268b253d5e2cfa321359afb89838b6984a0afe883", "query": "The recently merged PR a version cache in dockertools, which makes the test flaky. The reason is that injected error into fakedockerclient and then expected to see the error from right after that (See ). However, the error was consumed by the goroutine which periodically updates the version cache (See ).
The docker daemon version is rarely updated, and we only need the version once for each container creation, during which we also do container create, start, inspect, etc. 1) Do we care about the cost of the call so much that we have to introduce the complexity? 2) Kubelet periodically calls , can we rely on that to update the cache? Anyhow, at least, we should disable the cache or have a fake one in the unit test. :) /cc\nI once considered moving this cache to the runtime level, but that may require changing the runtime interface. What if we only cache apiversion?\nWe can only get via , so there will still be the data race. :) One option is to update the version cache in , now the kubelet periodically calls to check the runtime status. Another better option is to cache the version in layer, now is under our control, we can refactor it with an additional argument indicating whether to use cached value or get directly from the daemon. If we cache the version only for performance, I prefer the second option which is cleaner and more efficient. :)\nOK, will look into this tonight\nThe unit test itself should not start a goroutine to update the cache (unless you're actually testing the goroutine). We should disable it anyway, as we often do in our unit tests. It also sounds like we don't need a goroutine dedicated for this. If we can tolerate the additional overhead from time to time, we can set an expiration time (e.g., 1 minute), and only call the runtime if the entry expires.\nTotally agree.
:) Maybe a new function in wrapping the update logic, or wrapping the logic in the .\nexpiration time sounds good in this case\nThis is happening quite frequently - increasing priority to P0.", "positive_passages": [{"docid": "doc-en-kubernetes-73ce865f9c34194f1c7450d11e4c6f130974cd16147e0f9040323cc00396a57a", "text": "burst, containerLogsDir, osInterface, networkPlugin, runtimeHelper, httpClient, &NativeExecHandler{}, fakeOOMAdjuster, fakeProcFs, false, imageBackOff, false, false, true) dm.dockerPuller = &FakeDockerPuller{} dm.versionCache = cache.NewVersionCache(func() (kubecontainer.Version, kubecontainer.Version, error) { return dm.getVersionInfo() }) return dm }", "commid": "kubernetes_pr_24384"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c9313de66f3508f8886cd07268b253d5e2cfa321359afb89838b6984a0afe883", "query": "The recently merged PR a version cache in dockertools, which makes the test flaky. The reason is that injected error into fakedockerclient and then expected to see the error from right after that (See ). However, the error was consumed by the goroutine which periodically updates the version cache (See ). The docker daemon version is rarely updated, and we only need version once for each container creating, during which we also do container create, start, inspect #etc. 1) Do we care about the cost of the call so much that we have to introduce the complexity? 2) Kubelet periodically calls , can we rely on that to update the cache? Anyhow, at least, we should disable the cache or have a fake one in the unit test. :) /cc\nI once considered move this cache to runtime level, while thus may need to change runtime interface. What if only cache apiversion?\nWe can only get via , so there will still be the data race. :) One option is to update the version cache in , now kubelet periodically call to check the runtime status. 
Another better option is to cache the version in layer, now is under our control, we can refactor it with an additional argument indicating whether to use cached value or get directly from the daemon. If we cache the version only for performance, I prefer the second option which is cleaner and more efficient. :)\nOK, will look into this tonight\nThe unit test itself should not start a goroutine to update the cache (unless you're actually testing the goroutine). We should disable it anyway, as we often do in our unit tests. It also sounds like we don't need a goroutine dedicated for this. If we can tolerate the additional overhead from time to time, we can set an expiration time (e.g., 1 minute), and only call the runtime if the entry expires.\nTotally agree. :) Maybe a new function in wrapping the update logic, or wrapping the logic in the .\nexpiration time sounds good in this case\nThis is happening quite frequently - increasing priority to P0.", "positive_passages": [{"docid": "doc-en-kubernetes-09f55eea4acb93558518e82e86888b5d51a02329e1f53f0f6f8d89efab01f261", "text": "optf(dm) } // initialize versionCache with a updater dm.versionCache = cache.NewVersionCache(func() (kubecontainer.Version, kubecontainer.Version, error) { return dm.getVersionInfo() }) // update version cache periodically. if dm.machineInfo != nil { dm.versionCache.UpdateCachePeriodly(dm.machineInfo.MachineID) } return dm }", "commid": "kubernetes_pr_24384"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c9313de66f3508f8886cd07268b253d5e2cfa321359afb89838b6984a0afe883", "query": "The recently merged PR a version cache in dockertools, which makes the test flaky. The reason is that injected error into fakedockerclient and then expected to see the error from right after that (See ). However, the error was consumed by the goroutine which periodically updates the version cache (See ).
The docker daemon version is rarely updated, and we only need the version once for each container creation, during which we also do container create, start, inspect, etc. 1) Do we care about the cost of the call so much that we have to introduce the complexity? 2) Kubelet periodically calls , can we rely on that to update the cache? Anyhow, at least, we should disable the cache or have a fake one in the unit test. :) /cc\nI once considered moving this cache to the runtime level, but that may require changing the runtime interface. What if we only cache apiversion?\nWe can only get via , so there will still be the data race. :) One option is to update the version cache in , now the kubelet periodically calls to check the runtime status. Another better option is to cache the version in layer, now is under our control, we can refactor it with an additional argument indicating whether to use cached value or get directly from the daemon. If we cache the version only for performance, I prefer the second option which is cleaner and more efficient. :)\nOK, will look into this tonight\nThe unit test itself should not start a goroutine to update the cache (unless you're actually testing the goroutine). We should disable it anyway, as we often do in our unit tests. It also sounds like we don't need a goroutine dedicated for this. If we can tolerate the additional overhead from time to time, we can set an expiration time (e.g., 1 minute), and only call the runtime if the entry expires.\nTotally agree. :) Maybe a new function in wrapping the update logic, or wrapping the logic in the .\nexpiration time sounds good in this case\nThis is happening quite frequently - increasing priority to P0.", "positive_passages": [{"docid": "doc-en-kubernetes-dafa5b1f678212301837456775a8e4b09e095095055e34f4f0a6e870bc468b0c", "text": "return oomScoreAdj } // getCachedVersionInfo gets cached version info of docker runtime.
func (dm *DockerManager) getCachedVersionInfo() (kubecontainer.Version, kubecontainer.Version, error) { apiVersion, daemonVersion, err := dm.versionCache.Get(dm.machineInfo.MachineID) if err != nil { glog.Errorf(\"Failed to get cached docker api version %v \", err) } // If we got nil versions, try to update version info. if apiVersion == nil || daemonVersion == nil { dm.versionCache.Update(dm.machineInfo.MachineID) } return apiVersion, daemonVersion, err } // checkDockerAPIVersion checks current docker API version against expected version. // Return: // 1 : newer than expected version // -1: older than expected version // 0 : same version func (dm *DockerManager) checkDockerAPIVersion(expectedVersion string) (int, error) { apiVersion, _, err := dm.getCachedVersionInfo() apiVersion, _, err := dm.getVersionInfo() if err != nil { return 0, err }", "commid": "kubernetes_pr_24384"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1bf3aa6fad4034e758c14fedc4b5cde3693f0c03678440275414d57febd5fa84", "query": "Turns out that when running a container in host network, its that is passed to dockertools.SyncPod() will always contain an empty IP; as a result, the environment becomes empty. Maybe we want to use the instead of in this case to ? Since the apiPodStatus has the valid PodIP, which is when the container runs in host network. Or just populate the podIP with hostIP in when the container runs on host network. Ref cc\nWhy not just set the same with before ? Ideally, we want to remove from SyncPod() in the future if possible, it may be better to get it from . WDYT?\nDo you mean \"set the podStatus.IP to apiPodStatus.IP before SyncPod?\" IMO, we should call only once. Why don't we populate the before calling ?\nWe also use . However, because we don't support host network bandwidth limit, it is fine to leave the IP unset here.
However, I think it would be good to have IP always correctly set in .\nI must admit, I have been thinking about this in the background since created this issue, and the discussion hasn't up for me. I finally sat down and spent some time getting my numerous fixes for podIP via downward API back into my brain, and now the problem seems very obvious to me. So, after some thought, I believe that for the docker runtime, the problem is that , which is responsible for generating the which is used to supply the pod IP, doesn't check to see whether the host network is being used and return the correct IP address if so. However, does, which is why the obvious fix seems to be to use the value from this method. I think the right thing to do instead is to change the following call stack to correctly determine the IP when the host network is being used.\nToday doesn't pass the pod spec, so it cannot know if the pod is running on host. This could be solved if we serialize the pod spec into docker labels. BTW, I think we have a similar issue here for rkt.\nAs said, GetPodStatus doesn't have full pod spec today. And I don't think docker manager has knowledge about the host ip now.\nI think we should think about adding the plumbing. I have not had a good experience storing this information in multiple places :)\nThe code was written so that it'd work in the absence of the pod specs (which we don't always have). I agree with you, but adding the dependency today without some form of checkpointing (e.g., container annotations) of the pod specs may be problematic now. We can revisit this after we store pod specs in the annotations. What do you think? :)\nI'm cool with revisiting but if we do that we should drop this to p2\nReopen the issue to capture\nIs this fixed?\nhi, also interested in knowing if this is fixed.\nI don't think anyone is tackling this one right now since people are mostly working on the new CRI implementation for different runtimes. cc\nthanks.
it also looks like there might be a specific issue with rkt here. Looking at and , it looks like rkt does not handle the case of status.PodIP when using the host network.\nThanks for the heads up. I will take a look to see if we can quickly work around this.\nIssues go stale after 30d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. Prevent issues from auto-closing with an comment. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle stale\nStale issues rot after 30d of inactivity. Mark the issue as fresh with . Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle rotten /remove-lifecycle stale\nRotten issues close after 30d of inactivity. Reopen the issue with . Mark the issue as fresh with . Send feedback to sig-testing, kubernetes/test-infra and/or . /close", "positive_passages": [{"docid": "doc-en-kubernetes-b51f398c3bc15849aa3a4777151155bf9245b507833de557d3ef844055825539", "text": "// Generate final API pod status with pod and status manager status apiPodStatus := kl.generateAPIPodStatus(pod, podStatus) // The pod IP may be changed in generateAPIPodStatus if the pod is using host network. (See #24576) // TODO(random-liu): After writing pod spec into container labels, check whether pod is using host network, and // set pod IP to hostIP directly in runtime.GetPodStatus podStatus.IP = apiPodStatus.PodIP // Record the time it takes for the pod to become running. 
existingStatus, ok := kl.statusManager.GetPodStatus(pod.UID)", "commid": "kubernetes_pr_24633"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5a056696a8b17624ee6d97139095a97ae226afc8b36ebaf5f287bd25b6fc9d03", "query": "I'm not sure where the defaults come from but they sometimes drop all packets causing container IPs to become unreachable. ARP fails. I'm not sure if this is related but I also noticed on this machine that the default pfifo qdisc drops all packets. pfifo is the default qdisc for an htb node so it's used in the unconfigured htb qdisc: This is because the default value for queue size is zero. On other machines I've seen the default value set to 1 packet and 1000 packets. When I set the limit explicitly the container becomes reachable again. The broken configuration is applied by kubelet to implement traffic shaping and there is no option to turn this off. This patch fixes the kubelet on broken machines by disabling traffic shaping: cc\ndoes this happen with --configure-cbr0 and packet shaping too? The kubenet implementation just uses the existing kubernetes shaping code that --configure-cbr0 does.\nIt seems the *fifo qdiscs default their packet limit to the txqueuelen of the interface (man tc-pfifo). It also seems most virtual interface types (bridge, etc) set a dev-txqueuelen of 0. It further seems that relies on the default packet limit.
I'll check around to figure out what the default limit should be for interfaces that don't specify a tx queue length.\nNote that to test with --configure-cbr0 you'd also need --reconcile-cidr.\nUpstream CNI fix is here: and we'll also work around it in kubenet for now.\nFWIW, the upstream CNI fix got merged for CNI 0.3.0", "positive_passages": [{"docid": "doc-en-kubernetes-a3c53f1d3c2a0f33bd9744b965463523b96d8a82534c913c98122e70cdfcec94", "text": "\"syscall\" \"github.com/vishvananda/netlink\" \"github.com/vishvananda/netlink/nl\" \"github.com/appc/cni/libcni\" \"github.com/golang/glog\"", "commid": "kubernetes_pr_25136"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5a056696a8b17624ee6d97139095a97ae226afc8b36ebaf5f287bd25b6fc9d03", "query": "I'm not sure where the defaults come from but they sometimes drop all packets causing container IPs to become unreachable. ARP fails. I'm not sure if this is related but I also noticed on this machine that the default pfifo qdisc drops all packets. pfifo is the default qdisc for an htb node so it's used in the unconfigured htb qdisc: This is because the default value for queue size is zero. On other machines I've seen the default value set to 1 packet and 1000 packets. When I set the limit explicitly the container becomes reachable again. The broken configuration is applied by kubelet to implement traffic shapping and there is no option to turn this off. This patch fixes the kubelet on broken machines by disabling traffic shapping: cc\ndoes this happen with --configure-cbr0 and packet shaping too? The kubenet implementation just uses the existing kubernetes shaping code that --configure-cbr0 does.\nIt seems the *fifo qdiscs default their packet limit to the txquelen of the interface (man tc-pfifo). It also seems most virtual interface types (bridge, etc) set a dev-txqueuelen of 0. It further seems that relies on the default packet limit. 
I could swear this worked before with kubenet, and it must have worked with --configure-cbr0 when the code was originally . I'll check around to figure out what the default limit should be for interfaces that don't specify a tx queue length.\nNote that to test with --configure-cbr0 you'd also need --reconcile-cidr.\nUpstream CNI fix is here: and we'll also work around it in kubenet for now.\nFWIW, the upstream CNI fix got merged for CNI 0.3.0", "positive_passages": [{"docid": "doc-en-kubernetes-9ecff11b15f008bfbec7f4ae222c722425b90792c6a83bff9eff2e0b3ab41c65", "text": "} } // ensureBridgeTxQueueLen() ensures that the bridge interface's TX queue // length is greater than zero. Due to a CNI <= 0.3.0 'bridge' plugin bug, // the bridge is initially created with a TX queue length of 0, which gets // used as the packet limit for FIFO traffic shapers, which drops packets. // TODO: remove when we can depend on a fixed CNI func (plugin *kubenetNetworkPlugin) ensureBridgeTxQueueLen() { bridge, err := netlink.LinkByName(BridgeName) if err != nil { return } if bridge.Attrs().TxQLen > 0 { return } req := nl.NewNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_ACK) msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) req.AddData(msg) nameData := nl.NewRtAttr(syscall.IFLA_IFNAME, nl.ZeroTerminated(BridgeName)) req.AddData(nameData) qlen := nl.NewRtAttr(syscall.IFLA_TXQLEN, nl.Uint32Attr(1000)) req.AddData(qlen) _, err = req.Execute(syscall.NETLINK_ROUTE, 0) if err != nil { glog.V(5).Infof(\"Failed to set bridge tx queue length: %v\", err) } } func (plugin *kubenetNetworkPlugin) Name() string { return KubenetPluginName }", "commid": "kubernetes_pr_25136"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5a056696a8b17624ee6d97139095a97ae226afc8b36ebaf5f287bd25b6fc9d03", "query": "I'm not sure where the defaults come from but they sometimes drop all packets causing container IPs to become unreachable. ARP fails. 
I'm not sure if this is related but I also noticed on this machine that the default pfifo qdisc drops all packets. pfifo is the default qdisc for an htb node so it's used in the unconfigured htb qdisc: This is because the default value for queue size is zero. On other machines I've seen the default value set to 1 packet and 1000 packets. When I set the limit explicitly the container becomes reachable again. The broken configuration is applied by kubelet to implement traffic shaping and there is no option to turn this off. This patch fixes the kubelet on broken machines by disabling traffic shaping: cc\ndoes this happen with --configure-cbr0 and packet shaping too? The kubenet implementation just uses the existing kubernetes shaping code that --configure-cbr0 does.\nIt seems the *fifo qdiscs default their packet limit to the txqueuelen of the interface (man tc-pfifo). It also seems most virtual interface types (bridge, etc) set a dev-txqueuelen of 0. It further seems that relies on the default packet limit. I could swear this worked before with kubenet, and it must have worked with --configure-cbr0 when the code was originally .
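The failure mode described above can be summarized as a simple rule: a pfifo qdisc created without an explicit limit inherits the device's txqueuelen (per man tc-pfifo), so a bridge whose txqueuelen is 0 gets a queue that can hold zero packets and drops everything. A small Go model of that rule, purely illustrative (the function name is invented, not kernel or Kubernetes code):

```go
package main

import "fmt"

// effectivePfifoLimit models the kernel behavior discussed above: a
// pfifo qdisc with no explicit limit inherits the device txqueuelen,
// so a bridge created with txqueuelen 0 yields a zero-length queue
// that drops every packet.
func effectivePfifoLimit(explicitLimit, devTxQueueLen int) int {
	if explicitLimit > 0 {
		return explicitLimit
	}
	return devTxQueueLen
}

func main() {
	fmt.Println(effectivePfifoLimit(0, 0))    // broken bridge: 0, all packets dropped
	fmt.Println(effectivePfifoLimit(0, 1000)) // typical NIC: inherits 1000
	fmt.Println(effectivePfifoLimit(100, 0))  // explicit limit always wins
}
```

This is why both fixes in the thread work: the CNI fix gives the bridge a nonzero txqueuelen at creation time, and the kubenet workaround (`ensureBridgeTxQueueLen`) raises it after the fact.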
I'll check around to figure out what the default limit should be for interfaces that don't specify a tx queue length.\nNote that to test with --configure-cbr0 you'd also need --reconcile-cidr.\nUpstream CNI fix is here: and we'll also work around it in kubenet for now.\nFWIW, the upstream CNI fix got merged for CNI 0.3.0", "positive_passages": [{"docid": "doc-en-kubernetes-c9af439bc4a04e9b96a7f947cdb41d824b986a2f523b29ea5b10c5470fe64e1c", "text": "if plugin.shaper == nil { return fmt.Errorf(\"Failed to create bandwidth shaper!\") } plugin.ensureBridgeTxQueueLen() plugin.shaper.ReconcileInterface() }", "commid": "kubernetes_pr_25136"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d2258d12727adc502d6af29b9a737c8c906c7ec8108d6e52316062052f3408e1", "query": "At least twice now, we've created a namespace and seen that the corresponding service account default secret token is never created. (The service account itself is, but not the default secret.) Namespace creation + apiserver logs logstash-ingester: namespace creation triggered service account creation but no secret logstash-broker: namespace creation triggered service account creation and secret creation Resulting service accounts Resulting secrets Controller-manager logs for broken case Controller-manager logs for working case Version Info:\nDupe of , fixed in\nActually, not a dupe. This may be a consequence of other refactors to controller logic - possibly we are missing / dropping events out of the queue. I would expect it to get fixed on the sync interval though.\nWe are also seeing this happening from time to time. In our case, it wasn't fixed in the next sync interval but it actually took a little bit more than 24 hours.\ncan we resolve this and remove it from the 1.6 milestone?\nfixed in in 1.4\nWe just hit this exact issue running 1.5.2. Maybe the fix didn't get the root cause or there was a regression since 1.4? This should be reopened.
I waited about an hour to see if the sync interval would resolve this but it did not. I fixed it by deleting and recreating the namespace. I can share logs, but they are identical to what the OP noted. Version info:\nWorking with this seems like it's not fixed in v1.5.2 -- what should we do here? We don't have permission to re-open the ticket, but filing the same ticket with the same logs (just fresher dates) seems passive-aggressive. :-) What's your preference?\nno problem, just getting back to this. can reopen this issue.\ncan you recreate with the controller manager running at and include controller manager log lines from ?\nFresh repro, offending namespace \"foo96\", with \"--v=5\" since we already have that logging level in place on my dev stack. This took 96 repros of a \"create namespace / monitor for token creation / delete namespace\" loop. Our script's commands, starting just after 16:36:43: Controller manager log lines from The first reference to the offending namespace (foo96) and the one other entry afterwards: All controller manager log lines citing the offending namespace: For comparison, the previous successfully-created namespace: Status as of 20 minutes later:\nAlso, here's the apiserver logs + controller manager logs, for more action/reaction context: Namespace: foo96 (problem) Namespace: foo95 (success)\nthat's helpful, thanks. it looks like while the token controller is processing the creation event for the service account, it does a live lookup against etcd and fails to find the service account. what version of etcd are you running with? are you running HA etcd? do you have quorum read enabled?\nWe run etcd version 2.3.7 (previously saw this on 2.2.x as well). We run etcd HA, with a 3 node cluster. We do not run the apiserver with the --etcd-quorum-read flag enabled.\n(Note: My specific logs above from this morning are on etcd 2.2.5, but yeah we've seen repro on 2.3.7 as well.)\nyeah, the non-quorum HA etcd config is the issue.
I can look into updating the token controller to tolerate that, but I'm positive there are similar issues in many controllers that assume live lookups are reliable\nAhh excellent, glad to have an explanation! So -- now that we know what to google for, we found -- where you and clayton addressed what appears to be this same problem for service accounts, and specifically noted that you wanted to avoid recommending quorum reads. Why is the answer different here? Is it because the downstream impact of non-quorum reads is too diverse to reliably retry every relevant action? (Totally fine if that's the answer, just trying to make sure we understand the reason for what appears to be an entirely opposite conclusion to the same type of problem.)\nI still think controllers should be built to tolerate weak reads, and I'm willing to fix the token controller bug here, but three factors make me think this position is a losing battle: quorum read got a lot cheaper with etcd3 there are strong opinions that etcd quorum read should be on () which means that very few people are writing controllers designed to tolerate weak reads all the CI tests I'm aware of run with a single etcd or with quorum reads on\nPerfect, works for us. As for fixing this particular token controller bug, I no longer have skin in that game -- your call entirely. 
:-)", "positive_passages": [{"docid": "doc-en-kubernetes-ea45481e59b7183b1c5731c475b389e629267fa25fcb65e821a0a96c378d9229", "text": "// service account no longer exists, so delete related tokens glog.V(4).Infof(\"syncServiceAccount(%s/%s), service account deleted, removing tokens\", saInfo.namespace, saInfo.name) sa = &v1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Namespace: saInfo.namespace, Name: saInfo.name, UID: saInfo.uid}} if retriable, err := e.deleteTokens(sa); err != nil { retry, err = e.deleteTokens(sa) if err != nil { glog.Errorf(\"error deleting serviceaccount tokens for %s/%s: %v\", saInfo.namespace, saInfo.name, err) retry = retriable } default: // ensure a token exists and is referenced by this service account if retriable, err := e.ensureReferencedToken(sa); err != nil { retry, err = e.ensureReferencedToken(sa) if err != nil { glog.Errorf(\"error synchronizing serviceaccount %s/%s: %v\", saInfo.namespace, saInfo.name, err) retry = retriable } } }", "commid": "kubernetes_pr_44625"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d2258d12727adc502d6af29b9a737c8c906c7ec8108d6e52316062052f3408e1", "query": "At least twice now, we've created a namespace and seen that the corresponding service account default secret token is never created. (The service account itself is, but not the default secret.) Namespace creation + apiserver logs logstash-ingester: namespace creation triggered service account creation but no secret logstash-broker: namespace creation triggered service account creation and secret creation Resulting service accounts Resulting secrets Controller-manager logs for broken case Controller-manager logs for working case Version Info:\nDupe of , fixed in\nActually, not a dupe. This may be a consequence of other refactors to controller logic - possibly we are missing / dropping events out of the queue. I would expect it to get fixed on the sync interval though.\nWe are also seeing this happening from time to time. 
In our case, it wasn't fixed in the next sync interval but it took actually a little bit more of 24 hours.\ncan we resolve this and remove it from the 1.6 milestone?\nfixed in in 1.4\nWe just hit this exact issue running 1.5.2. Maybe the fix didn't get the root cause or there was a regression since 1.4? This should be reopened. I waited about an hour to see if the sync interval would resolve this but it did not. I fixed by deleting and recreating the namespace. I can share logs, but they are identical to what the OP noted. Version info:\nWorking with this seems like it's not fixed in v1.5.2 -- what should we do here? We don't have permission to re-open the ticket, but filing the same ticket with the same logs (just fresher dates) seems passive-aggressive. :-) What's your preference?\nno problem, just getting back to this. can reopen this issue.\ncan you recreate with the controller manager running at and include controller manager log lines from ?\nFresh repo, offending namespace \"foo96\", with \"--v=5\" since we already have that logging level in place on my dev stack. This took 96 repros of a \"create namespace / monitor for token creation / delete namespace\" loop. Our script's commands, starting just after 16:36:43: Controller manager log lines from The first reference to the offending namespace (foo96) and the one other entry afterwards: All controller manager log lines citing the offending namespace: For comparison, the previous successfully-created namespace: Status as of 20 minutes later:\nAlso, here's the apiserver logs + controller manager logs, for more action/reaction context: Namespace: foo96 (problem) Namespace: foo95 (success)\nthat's helpful, thanks. it looks like while the token controller is processing the creation event for the service account, it does a live lookup against etcd and fails to find the service account. what version of etcd are you running with? are you running HA etcd? 
do you have quorum read enabled?\nWe run etcd version 2.3.7 (previously saw this on 2.2.x as well). We run etcd HA, with a 3 node cluster. We do not run the apiserver with the --etcd-quorum-read flag enabled.\n(Note: My specific logs above from this morning are on etcd 2.2.5, but yeah we've seen repro on 2.3.7 as well.)\nyeah, the non-quorum HA etcd config is the issue. I can look into updating the token controller to tolerate that, but I'm positive there are similar issues in many controllers that assume live lookups are reliable\nAhh excellent, glad to have an explanation! So -- now that we know what to google for, we found -- where you and clayton addressed what appears to be this same problem for service accounts, and specifically noted that you wanted to avoid recommending quorum reads. Why is the answer different here? Is it because the downstream impact of non-quorum reads is too diverse to reliably retry every relevant action? (Totally fine if that's the answer, just trying to make sure we understand the reason for what appears to be an entirely opposite conclusion to the same type of problem.)\nI still think controllers should be built to tolerate weak reads, and I'm willing to fix the token controller bug here, but three factors make me think this position is a losing battle: quorum read got a lot cheaper with etcd3 there are strong opinions that etcd quorum read should be on () which means that very few people are writing controllers designed to tolerate weak reads all the CI tests I'm aware of run with a single etcd or with quorum reads on\nPerfect, works for us. As for fixing this particular token controller bug, I no longer have skin in that game -- your call entirely. 
:-)", "positive_passages": [{"docid": "doc-en-kubernetes-baa4893a8c01d707a5e1112a068b11dfc4e1ce9ad2f29c35554bf42206b5d190", "text": "// ensureReferencedToken makes sure at least one ServiceAccountToken secret exists, and is included in the serviceAccount's Secrets list func (e *TokensController) ensureReferencedToken(serviceAccount *v1.ServiceAccount) ( /* retry */ bool, error) { if len(serviceAccount.Secrets) > 0 { allSecrets, err := e.listTokenSecrets(serviceAccount) if err != nil { // Don't retry cache lookup errors return false, err } referencedSecrets := getSecretReferences(serviceAccount) for _, secret := range allSecrets { if referencedSecrets.Has(secret.Name) { // A service account token already exists, and is referenced, short-circuit return false, nil } } if hasToken, err := e.hasReferencedToken(serviceAccount); err != nil { // Don't retry cache lookup errors return false, err } else if hasToken { // A service account token already exists, and is referenced, short-circuit return false, nil } // We don't want to update the cache's copy of the service account", "commid": "kubernetes_pr_44625"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d2258d12727adc502d6af29b9a737c8c906c7ec8108d6e52316062052f3408e1", "query": "At least twice now, we've created a namespace and seen that the corresponding service account default secret token is never created. (The service account itself is, but not the default secret.) Namespace creation + apiserver logs logstash-ingester: namespace creation triggered service account creation but no secret logstash-broker: namespace creation triggered service account creation and secret creation Resulting service accounts Resulting secrets Controller-manager logs for broken case Controller-manager logs for working case Version Info:\nDupe of , fixed in\nActually, not a dupe. This may be a consequence of other refactors to controller logic - possibly we are missing / dropping events out of the queue. 
I would expect it to get fixed on the sync interval though.\nWe are also seeing this happening from time to time. In our case, it wasn't fixed in the next sync interval but it took actually a little bit more of 24 hours.\ncan we resolve this and remove it from the 1.6 milestone?\nfixed in in 1.4\nWe just hit this exact issue running 1.5.2. Maybe the fix didn't get the root cause or there was a regression since 1.4? This should be reopened. I waited about an hour to see if the sync interval would resolve this but it did not. I fixed by deleting and recreating the namespace. I can share logs, but they are identical to what the OP noted. Version info:\nWorking with this seems like it's not fixed in v1.5.2 -- what should we do here? We don't have permission to re-open the ticket, but filing the same ticket with the same logs (just fresher dates) seems passive-aggressive. :-) What's your preference?\nno problem, just getting back to this. can reopen this issue.\ncan you recreate with the controller manager running at and include controller manager log lines from ?\nFresh repo, offending namespace \"foo96\", with \"--v=5\" since we already have that logging level in place on my dev stack. This took 96 repros of a \"create namespace / monitor for token creation / delete namespace\" loop. Our script's commands, starting just after 16:36:43: Controller manager log lines from The first reference to the offending namespace (foo96) and the one other entry afterwards: All controller manager log lines citing the offending namespace: For comparison, the previous successfully-created namespace: Status as of 20 minutes later:\nAlso, here's the apiserver logs + controller manager logs, for more action/reaction context: Namespace: foo96 (problem) Namespace: foo95 (success)\nthat's helpful, thanks. it looks like while the token controller is processing the creation event for the service account, it does a live lookup against etcd and fails to find the service account. 
what version of etcd are you running with? are you running HA etcd? do you have quorum read enabled?\nWe run etcd version 2.3.7 (previously saw this on 2.2.x as well). We run etcd HA, with a 3 node cluster. We do not run the apiserver with the --etcd-quorum-read flag enabled.\n(Note: My specific logs above from this morning are on etcd 2.2.5, but yeah we've seen repro on 2.3.7 as well.)\nyeah, the non-quorum HA etcd config is the issue. I can look into updating the token controller to tolerate that, but I'm positive there are similar issues in many controllers that assume live lookups are reliable\nAhh excellent, glad to have an explanation! So -- now that we know what to google for, we found -- where you and clayton addressed what appears to be this same problem for service accounts, and specifically noted that you wanted to avoid recommending quorum reads. Why is the answer different here? Is it because the downstream impact of non-quorum reads is too diverse to reliably retry every relevant action? (Totally fine if that's the answer, just trying to make sure we understand the reason for what appears to be an entirely opposite conclusion to the same type of problem.)\nI still think controllers should be built to tolerate weak reads, and I'm willing to fix the token controller bug here, but three factors make me think this position is a losing battle: quorum read got a lot cheaper with etcd3 there are strong opinions that etcd quorum read should be on () which means that very few people are writing controllers designed to tolerate weak reads all the CI tests I'm aware of run with a single etcd or with quorum reads on\nPerfect, works for us. As for fixing this particular token controller bug, I no longer have skin in that game -- your call entirely. 
:-)", "positive_passages": [{"docid": "doc-en-kubernetes-3e8c3b013ef13b1449ad4bd2c712c4adc618d1f92caecbaf5a3c12b471ddd637", "text": "serviceAccounts := e.client.Core().ServiceAccounts(serviceAccount.Namespace) liveServiceAccount, err := serviceAccounts.Get(serviceAccount.Name, metav1.GetOptions{}) if err != nil { // Retry for any error other than a NotFound return !apierrors.IsNotFound(err), err // Retry if we cannot fetch the live service account (for a NotFound error, either the live lookup or our cache are stale) return true, err } if liveServiceAccount.ResourceVersion != serviceAccount.ResourceVersion { // our view of the service account is not up to date // we'll get notified of an update event later and get to try again glog.V(2).Infof(\"serviceaccount %s/%s is not up to date, skipping token creation\", serviceAccount.Namespace, serviceAccount.Name) return false, nil // Retry if our liveServiceAccount doesn't match our cache's resourceVersion (either the live lookup or our cache are stale) glog.V(4).Infof(\"liveServiceAccount.ResourceVersion (%s) does not match cache (%s), retrying\", liveServiceAccount.ResourceVersion, serviceAccount.ResourceVersion) return true, nil } // Build the secret", "commid": "kubernetes_pr_44625"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d2258d12727adc502d6af29b9a737c8c906c7ec8108d6e52316062052f3408e1", "query": "At least twice now, we've created a namespace and seen that the corresponding service account default secret token is never created. (The service account itself is, but not the default secret.) Namespace creation + apiserver logs logstash-ingester: namespace creation triggered service account creation but no secret logstash-broker: namespace creation triggered service account creation and secret creation Resulting service accounts Resulting secrets Controller-manager logs for broken case Controller-manager logs for working case Version Info:\nDupe of , fixed in\nActually, not a dupe. 
This may be a consequence of other refactors to controller logic - possibly we are missing / dropping events out of the queue. I would expect it to get fixed on the sync interval though.\nWe are also seeing this happening from time to time. In our case, it wasn't fixed in the next sync interval but it took actually a little bit more of 24 hours.\ncan we resolve this and remove it from the 1.6 milestone?\nfixed in in 1.4\nWe just hit this exact issue running 1.5.2. Maybe the fix didn't get the root cause or there was a regression since 1.4? This should be reopened. I waited about an hour to see if the sync interval would resolve this but it did not. I fixed by deleting and recreating the namespace. I can share logs, but they are identical to what the OP noted. Version info:\nWorking with this seems like it's not fixed in v1.5.2 -- what should we do here? We don't have permission to re-open the ticket, but filing the same ticket with the same logs (just fresher dates) seems passive-aggressive. :-) What's your preference?\nno problem, just getting back to this. can reopen this issue.\ncan you recreate with the controller manager running at and include controller manager log lines from ?\nFresh repo, offending namespace \"foo96\", with \"--v=5\" since we already have that logging level in place on my dev stack. This took 96 repros of a \"create namespace / monitor for token creation / delete namespace\" loop. Our script's commands, starting just after 16:36:43: Controller manager log lines from The first reference to the offending namespace (foo96) and the one other entry afterwards: All controller manager log lines citing the offending namespace: For comparison, the previous successfully-created namespace: Status as of 20 minutes later:\nAlso, here's the apiserver logs + controller manager logs, for more action/reaction context: Namespace: foo96 (problem) Namespace: foo95 (success)\nthat's helpful, thanks. 
it looks like while the token controller is processing the creation event for the service account, it does a live lookup against etcd and fails to find the service account. what version of etcd are you running with? are you running HA etcd? do you have quorum read enabled?\nWe run etcd version 2.3.7 (previously saw this on 2.2.x as well). We run etcd HA, with a 3 node cluster. We do not run the apiserver with the --etcd-quorum-read flag enabled.\n(Note: My specific logs above from this morning are on etcd 2.2.5, but yeah we've seen repro on 2.3.7 as well.)\nyeah, the non-quorum HA etcd config is the issue. I can look into updating the token controller to tolerate that, but I'm positive there are similar issues in many controllers that assume live lookups are reliable\nAhh excellent, glad to have an explanation! So -- now that we know what to google for, we found -- where you and clayton addressed what appears to be this same problem for service accounts, and specifically noted that you wanted to avoid recommending quorum reads. Why is the answer different here? Is it because the downstream impact of non-quorum reads is too diverse to reliably retry every relevant action? (Totally fine if that's the answer, just trying to make sure we understand the reason for what appears to be an entirely opposite conclusion to the same type of problem.)\nI still think controllers should be built to tolerate weak reads, and I'm willing to fix the token controller bug here, but three factors make me think this position is a losing battle: quorum read got a lot cheaper with etcd3 there are strong opinions that etcd quorum read should be on () which means that very few people are writing controllers designed to tolerate weak reads all the CI tests I'm aware of run with a single etcd or with quorum reads on\nPerfect, works for us. As for fixing this particular token controller bug, I no longer have skin in that game -- your call entirely. 
:-)", "positive_passages": [{"docid": "doc-en-kubernetes-064722e3b12f197af86bf290d62916e3abb7e4afb7ce983a078a079a3467a239", "text": "// This prevents the service account update (below) triggering another token creation, if the referenced token couldn't be found in the store e.secrets.Add(createdToken) liveServiceAccount.Secrets = append(liveServiceAccount.Secrets, v1.ObjectReference{Name: secret.Name}) // Try to add a reference to the newly created token to the service account addedReference := false err = clientretry.RetryOnConflict(clientretry.DefaultRetry, func() error { // refresh liveServiceAccount on every retry defer func() { liveServiceAccount = nil }() // fetch the live service account if needed, and verify the UID matches and that we still need a token if liveServiceAccount == nil { liveServiceAccount, err = serviceAccounts.Get(serviceAccount.Name, metav1.GetOptions{}) if err != nil { return err } if liveServiceAccount.UID != serviceAccount.UID { // If we don't have the same service account, stop trying to add a reference to the token made for the old service account. return nil } if hasToken, err := e.hasReferencedToken(liveServiceAccount); err != nil { // Don't retry cache lookup errors return nil } else if hasToken { // A service account token already exists, and is referenced, short-circuit return nil } } // Try to add a reference to the token liveServiceAccount.Secrets = append(liveServiceAccount.Secrets, v1.ObjectReference{Name: secret.Name}) if _, err := serviceAccounts.Update(liveServiceAccount); err != nil { return err } if _, err = serviceAccounts.Update(liveServiceAccount); err != nil { addedReference = true return nil }) if !addedReference { // we weren't able to use the token, try to clean it up. 
glog.V(2).Infof(\"deleting secret %s/%s because reference couldn't be added (%v)\", secret.Namespace, secret.Name, err) deleteOpts := &metav1.DeleteOptions{Preconditions: &metav1.Preconditions{UID: &createdToken.UID}} if deleteErr := e.client.Core().Secrets(createdToken.Namespace).Delete(createdToken.Name, deleteOpts); deleteErr != nil { glog.Error(deleteErr) // if we fail, just log it } } if err != nil { if apierrors.IsConflict(err) || apierrors.IsNotFound(err) { // if we got a Conflict error, the service account was updated by someone else, and we'll get an update notification later // if we got a NotFound error, the service account no longer exists, and we don't need to create a token for it", "commid": "kubernetes_pr_44625"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d2258d12727adc502d6af29b9a737c8c906c7ec8108d6e52316062052f3408e1", "query": "At least twice now, we've created a namespace and seen that the corresponding service account default secret token is never created. (The service account itself is, but not the default secret.) Namespace creation + apiserver logs logstash-ingester: namespace creation triggered service account creation but no secret logstash-broker: namespace creation triggered service account creation and secret creation Resulting service accounts Resulting secrets Controller-manager logs for broken case Controller-manager logs for working case Version Info:\nDupe of , fixed in\nActually, not a dupe. This may be a consequence of other refactors to controller logic - possibly we are missing / dropping events out of the queue. I would expect it to get fixed on the sync interval though.\nWe are also seeing this happening from time to time. In our case, it wasn't fixed in the next sync interval but it took actually a little bit more of 24 hours.\ncan we resolve this and remove it from the 1.6 milestone?\nfixed in in 1.4\nWe just hit this exact issue running 1.5.2. 
Maybe the fix didn't get the root cause or there was a regression since 1.4? This should be reopened. I waited about an hour to see if the sync interval would resolve this but it did not. I fixed by deleting and recreating the namespace. I can share logs, but they are identical to what the OP noted. Version info:\nWorking with this seems like it's not fixed in v1.5.2 -- what should we do here? We don't have permission to re-open the ticket, but filing the same ticket with the same logs (just fresher dates) seems passive-aggressive. :-) What's your preference?\nno problem, just getting back to this. can reopen this issue.\ncan you recreate with the controller manager running at and include controller manager log lines from ?\nFresh repo, offending namespace \"foo96\", with \"--v=5\" since we already have that logging level in place on my dev stack. This took 96 repros of a \"create namespace / monitor for token creation / delete namespace\" loop. Our script's commands, starting just after 16:36:43: Controller manager log lines from The first reference to the offending namespace (foo96) and the one other entry afterwards: All controller manager log lines citing the offending namespace: For comparison, the previous successfully-created namespace: Status as of 20 minutes later:\nAlso, here's the apiserver logs + controller manager logs, for more action/reaction context: Namespace: foo96 (problem) Namespace: foo95 (success)\nthat's helpful, thanks. it looks like while the token controller is processing the creation event for the service account, it does a live lookup against etcd and fails to find the service account. what version of etcd are you running with? are you running HA etcd? do you have quorum read enabled?\nWe run etcd version 2.3.7 (previously saw this on 2.2.x as well). We run etcd HA, with a 3 node cluster. 
We do not run the apiserver with the --etcd-quorum-read flag enabled.\n(Note: My specific logs above from this morning are on etcd 2.2.5, but yeah we've seen repro on 2.3.7 as well.)\nyeah, the non-quorum HA etcd config is the issue. I can look into updating the token controller to tolerate that, but I'm positive there are similar issues in many controllers that assume live lookups are reliable\nAhh excellent, glad to have an explanation! So -- now that we know what to google for, we found -- where you and clayton addressed what appears to be this same problem for service accounts, and specifically noted that you wanted to avoid recommending quorum reads. Why is the answer different here? Is it because the downstream impact of non-quorum reads is too diverse to reliably retry every relevant action? (Totally fine if that's the answer, just trying to make sure we understand the reason for what appears to be an entirely opposite conclusion to the same type of problem.)\nI still think controllers should be built to tolerate weak reads, and I'm willing to fix the token controller bug here, but three factors make me think this position is a losing battle: quorum read got a lot cheaper with etcd3 there are strong opinions that etcd quorum read should be on () which means that very few people are writing controllers designed to tolerate weak reads all the CI tests I'm aware of run with a single etcd or with quorum reads on\nPerfect, works for us. As for fixing this particular token controller bug, I no longer have skin in that game -- your call entirely. 
:-)", "positive_passages": [{"docid": "doc-en-kubernetes-0bbd89e62d5f54a1a58962448bd84050df38dd6f1d5d74a4d56517716f679a28", "text": "return false, nil } // hasReferencedToken returns true if the serviceAccount references a service account token secret func (e *TokensController) hasReferencedToken(serviceAccount *v1.ServiceAccount) (bool, error) { if len(serviceAccount.Secrets) == 0 { return false, nil } allSecrets, err := e.listTokenSecrets(serviceAccount) if err != nil { return false, err } referencedSecrets := getSecretReferences(serviceAccount) for _, secret := range allSecrets { if referencedSecrets.Has(secret.Name) { return true, nil } } return false, nil } func (e *TokensController) secretUpdateNeeded(secret *v1.Secret) (bool, bool, bool) { caData := secret.Data[v1.ServiceAccountRootCAKey] needsCA := len(e.rootCA) > 0 && bytes.Compare(caData, e.rootCA) != 0", "commid": "kubernetes_pr_44625"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ae8bb1341a4214e1ef13fd4f4194d54f9693f2beafba7641cbf9e981c2373024", "query": "When users try to label/annotate multiple resources, we should tell them to add . Current output: Expected output:\nand too\nInstead of \"--all is false\", say \"multiple resources were provided, but no label selector or --all flag specified\"\nDescription updated based on suggestion\nI think \"resource(s) were provided, but no name, label selector, or --all flag specified\" is better. 
I will send a pull request for this as my first PR.", "positive_passages": [{"docid": "doc-en-kubernetes-56cb17bd52acb3a37faf611beea7d9e20a6db552816eb00e094c7443cf6cec85", "text": "\"k8s.io/kubernetes/pkg/api/errors\" \"k8s.io/kubernetes/pkg/api/testapi\" \"k8s.io/kubernetes/pkg/api/unversioned\" \"k8s.io/kubernetes/pkg/client/restclient\" \"k8s.io/kubernetes/pkg/client/unversioned/fake\" )", "commid": "kubernetes_pr_25612"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ae8bb1341a4214e1ef13fd4f4194d54f9693f2beafba7641cbf9e981c2373024", "query": "When users try to label/annotate multiple resources, we should tell them to add . Current output: Expected output:\nand too\nInstead of \"--all is false\", say \"multiple resources were provided, but no label selector or --all flag specified\"\nDescription updated based on suggestion\nI think \"resource(s) were provided, but no name, label selector, or --all flag specified\" is better. I will send a pull request for this as my first PR.", "positive_passages": [{"docid": "doc-en-kubernetes-2da289ba6f0cfb2972610e5a6ac56552c36d94377f9f65cd63323e56c1658600", "text": "t.Errorf(\"unexpected output: %s\", buf.String()) } } func TestResourceErrors(t *testing.T) { testCases := map[string]struct { args []string flags map[string]string errFn func(error) bool }{ \"no args\": { args: []string{}, errFn: func(err error) bool { return strings.Contains(err.Error(), \"you must provide one or more resources\") }, }, \"resources but no selectors\": { args: []string{\"pods\"}, errFn: func(err error) bool { return strings.Contains(err.Error(), \"resource(s) were provided, but no name, label selector, or --all flag specified\") }, }, \"multiple resources but no selectors\": { args: []string{\"pods,deployments\"}, errFn: func(err error) bool { return strings.Contains(err.Error(), \"resource(s) were provided, but no name, label selector, or --all flag specified\") }, }, } for k, testCase := range testCases { f, tf, _ := NewAPIFactory() 
tf.Printer = &testPrinter{} tf.Namespace = \"test\" tf.ClientConfig = &restclient.Config{ContentConfig: restclient.ContentConfig{GroupVersion: testapi.Default.GroupVersion()}} buf := bytes.NewBuffer([]byte{}) cmd := NewCmdDelete(f, buf) cmd.SetOutput(buf) for k, v := range testCase.flags { cmd.Flags().Set(k, v) } err := RunDelete(f, buf, cmd, testCase.args, &DeleteOptions{}) if !testCase.errFn(err) { t.Errorf(\"%s: unexpected error: %v\", k, err) continue } if tf.Printer.(*testPrinter).Objects != nil { t.Errorf(\"unexpected print to default printer\") } if buf.Len() > 0 { t.Errorf(\"buffer should be empty: %s\", string(buf.Bytes())) } } } ", "commid": "kubernetes_pr_25612"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ae8bb1341a4214e1ef13fd4f4194d54f9693f2beafba7641cbf9e981c2373024", "query": "When users try to label/annotate multiple resources, we should tell them to add . Current output: Expected output:\nand too\nInstead of \"--all is false\", say \"multiple resources were provided, but no label selector or --all flag specified\"\nDescription updated based on suggestion\nI think \"resource(s) were provided, but no name, label selector, or --all flag specified\" is better. 
I will send a pull request for this as my first PR.", "positive_passages": [{"docid": "doc-en-kubernetes-b7dba791c3185946230869a336da6b130d5f116b10b729980ddab13cda2dce1b", "text": "args: []string{\"pods=bar\"}, errFn: func(err error) bool { return strings.Contains(err.Error(), \"one or more resources must be specified\") }, }, \"resources but no selectors\": { args: []string{\"pods\", \"app=bar\"}, errFn: func(err error) bool { return strings.Contains(err.Error(), \"resource(s) were provided, but no name, label selector, or --all flag specified\") }, }, \"multiple resources but no selectors\": { args: []string{\"pods,deployments\", \"app=bar\"}, errFn: func(err error) bool { return strings.Contains(err.Error(), \"resource(s) were provided, but no name, label selector, or --all flag specified\") }, }, } for k, testCase := range testCases {", "commid": "kubernetes_pr_25612"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ae8bb1341a4214e1ef13fd4f4194d54f9693f2beafba7641cbf9e981c2373024", "query": "When users try to label/annotate multiple resources, we should tell them to add . Current output: Expected output:\nand too\nInstead of \"--all is false\", say \"multiple resources were provided, but no label selector or --all flag specified\"\nDescription updated based on suggestion\nI think \"resource(s) were provided, but no name, label selector, or --all flag specified\" is better. 
I will send a pull request for this as my first PR.", "positive_passages": [{"docid": "doc-en-kubernetes-5c3292f0e598bb785460032640759b3a0aa1fae75c3ed3cf2c18376f38d6496e", "text": "return &Result{singular: singular, visitor: visitors, sources: b.paths} } if len(b.resources) != 0 { return &Result{err: fmt.Errorf(\"resource(s) were provided, but no name, label selector, or --all flag specified\")} } return &Result{err: fmt.Errorf(\"you must provide one or more resources by argument or filename (%s)\", strings.Join(InputExtensions, \"|\"))} }", "commid": "kubernetes_pr_25612"}], "negative_passages": []} {"query_id": "q-en-kubernetes-921194a3ee9035489edb0f46fa5ed9bbb84000a216afd5f16934a80b7eed9035", "query": "Currently a Docker bug may cause KubeProxy to crashloop thus making the Node unable to use cluster network, which in turn makes Pods scheduled on this Node unreachable. To mitigate this issue we need to surface the information about problems with KubeProxy, that will allow scheduler to ignore given node when scheduling new Pods. ProblemAPI is going to address this kind of problems in the future, but we need a fix for the 1.3. Either Kubelet or KubeProxy needs to update a NodeStatus with the information that Node networking is down. I suggests that we reuse NodeReady Condition for this, as this would make the rest of ControlPlane work out of the box. We will replace this fix with the ProblemAPI use in the future. Another option, which requires more work, is creating a completely new NodeCondition type to handle this case. This would require making Scheduler aware of this new Condition.\nDoes kubelet monitor kube-proxy health?\nKube-proxy today runs as a static pod (daemonset), kubelet treats it the same as other static pods. Yes, Kubelet reports kube-proxy podstatus back to master, that is all. The detail information on kube-proxy is in crashloop at\ncc/ Brandon, this issue explains my long position against running kubelet, kube-proxy, etc. 
critical daemons as docker containers. I am ok with packaging them into a docker image and running them as linux containers, but they shouldn't depend on the docker daemon.\nI think we don't want to run them as docker containers as well, we want to run them as rkt containers.\nThe initial proposal I reviewed was docker container, and I did raise my concerns to and coreos developers then. Running it as a rkt container should be ok since rkt is daemon-less today, and shouldn't have this chicken-and-egg issue in the first place. But we still have the problem on the nodes without rkt as the runtime. At the meeting, I suggested that kube-proxy update NodeStatus with a newly introduced NodeCondition, like what we plan to do with the kernel issue at: The reason for not using NodeReady is that kubelet might override that. Another very hacky way is before crashing, Kube-proxy logs the read-only filesystem problem somewhere, like /var/lib/kube-proxy/... Kubelet will pick up the information, then update the NodeReady Condition with a detailed error message prefixed with \"kube-proxy:\" It is kube-proxy's responsibility to keep the information up-to-date.\nCan I have more description of the bug?
Pretty much every run of a 1000 Node cluster has this problem on at least one Node.\nIf you want to know the detail, see It is a docker issue, and pretty rare.\nCan we regroup on this issue this week? Like tomorrow? , Dawn Chen wrote:\nI will happily make kube-proxy write to or offer up pretty much any API we deem necessary to detect this issue. To me it still feels like a docker health-check suite is needed, and one of the tests is that sysfs gets mounted rw. But in the meantime, how about this: make be the API. Any component which feels that it has a node problem can write a JSON file into that directory. For this case, kube-proxy would write a file: Kubelet can turn that into Node status reports. , Tim Hockin wrote:\ncc\nThe PR introducing NodeProblemDetector was merged last week, and the pr to enable it by default for Kubernetes clusters was merged over the weekend. So far everything works fine. Even though it is an alpha feature, GKE folks agreed to enable it for GKE. To work around this particular issue before we have a proper docker fix, we can easily extend today's KernelMonitor module of NodeProblemDetector to monitor the kube-proxy log, and report a new NodeCondition called SysfsReadonly to make the issue visible. But we don't have a remedy system yet, and the pr of converting node problems (events, conditions etc.) to taints is still under debate / review: 1) Should NodeController respect this new Condition, and simply mark the node NotReady? 2) Should the repair system pick up this issue, and restart docker?\nDo we still observe the problem with docker 1.11.X?
If no, I want to close this; otherwise, we are going to reconfigure NodeProblemDetector to make the issue visible.\ncc/\nAgree, I'd love to not \"fix\" this..\nI think we've seen it the day before yesterday during our tests - can you please confirm?\nI can't recall - we certainly did see some failed nodes during the test, but I'm not sure we made sure it was a kube-proxy issue.\nBut what I can tell for sure is that it's definitely less frequent - we started a 2000-node cluster 10+ times during this week and we've seen it at most once (which makes it at least 99.995% reliable).\nNot enough data. GKE is still deploying the old ContainerVM and isn't running Docker 1.11.1 yet, unless I'm mistaken.\naah ok - I didn't know that...\nI'm going to tentatively close this until we show that it's a problem on Docker 1.11.2, now that Docker 1.11.2 is in on both GCE and GKE.\n(Obviously feel free to reopen if I'm wrong.)\nAnother occurrence:\nSo it's still happening, can we teach NPD to detect and surface this as node unschedulable?\nYes, we can. But I found that the error only occurred once with docker 1.11 till now. At least, this should be a known issue, right? XRef Do we want to fix this in 1.3 or just claim known issues in release notes? :)\nDoesn't this render the node unusable (i.e. no services will work because kube-proxy is broken)? If so, we should surface it regardless of frequency IMO. Whether we fix or paper over it is a different issue.\n+1\nBecause it's code freeze now, I can add a temporary short hack in node problem detector to detect this issue. Should we do that? :)\nRemoved the priority and moved it back to the 1.3 milestone, so that we can discuss at the burndown meeting.\nYeah I think codefreeze just means no features\nIt is very rare after we upgrade to docker 1.11.X. To extend KernelMonitor in NodeProblemDetector to detect the issue is totally overkill, since we have to allocate more compute resources to NodeProblemDetector to process tons of kube-proxy logs.
Here is what I suggested for this issue for 1.3 and 1.4+: Document this as a known issue for docker releases: docker 1.10.X (), docker 1.11.X (). Also document it as a Kubernetes 1.3 release known issue at is also updated admin guide for NodeProblemDetector, and the suggestion on how to handle this issue will be included there. For 1.4, I think we should change kube-proxy. Instead of logging the error then crashing, kube-proxy should report its problem to NodeProblemDetector, which aggregates the node issues and propagates them to upstream layers, so that the admin / remedy system & control-plane can take proper action upon the issues. I am removing this from the 1.3 milestone, and unassigning myself. cc/ to find the right owner from the cluster team for the next release here.\nOn a 2k node cluster, I just saw 4 nodes come up this way on Docker 1.11.2. I'm not saying it's going to stop the release, but it does stop bringup on large clusters pretty frequently. For some reason we went through a \"nice\" patch where it was rare, but today it was just bad.\nWe can have a quick fix in node problem detector to catch this kind of problem by parsing the kube-proxy log. The only problem is that kube-proxy generates logs relatively fast. If we let node problem detector do that, it has to waste significant cpu even on the good nodes. A better solution is to make node problem detector expose an endpoint and let other system components report problems. However, we are not there yet for 1.3.
Also NodeProblemDetector is not running on GKE nodes yet.\nI'm seeing people say they reproduced on 1.11.2, is there a 1.11 docker issue? I know that was closed. If I'm going to get my docker guys to dig, I need a docker bug report...\nCan you please ramp up on this and come up with a way to bring node and kube proxy teams together to resolve? Even if we don't land in 1.3.0, I propose this is at least a P1 for 1.3.1.\nFYI and others In I a hacky way of resolving those issues to our test framework. In the last run of gke-large-cluster run it seemed to work (there was 1 broken kube-proxy, this triggered restarting docker, which in turn fixed the problem). This is obviously not the solution, but: it seems to prove that 'docker restart' solves the problem it should stop the bleeding in our tests\n& I copied & paste what I wrote in another internal thread to give you some background: No, it is not a blocker for 1.3 release: It is not a regression from 1.2 release. It is way less frequently after we upgrade to docker 1.11.2 from docker 1.9.1 NodeProblemDetector is not designed for processing arbitrary logs. It is an overkill solution given such low frequency and relative high resource overhead. By default NodeProblemDetector is not running on gke node due to the extra overhead per node introduced to our customer. Meanwhile Since GKE already has repair system built, it should be very easy to extend GKE repair system to check if kube-proxy is in crashloop. If yes, then kubectl logs to retrieve the last terminated container log to see if the issue is caused by readonly root filesystem. If yes, the repair system can issue a docker restart.\nagreed not a blocker. This is likely only fixable in docker. We should keep it up the priority so we dig into and fix docker. can try next week to get our docker people talking to whoever on your side you want to get this fixed (in 1.11.3 or later probably)\nCan we add a node readiness or liveness probe to solve this? 
(I remember there is an issue for this, but can't find it now) One of the default behaviours of the probe is to make sure the key node components are running normally, and the behaviour could be extended over time. Probably adding a node probe controller or node health controller to perform the probing. Or just use the health probe of GCE MIG, if its functionality is enough to handle this. This could be part of the remedy system. Just random thoughts, not for 1.3 and even may not work. :)\nAll are good ideas. But let's move that discussion to a separate issue, and use this one to focus on providing the workaround for the KubeProxy panic within the 1.3 timeframe. Thanks!\nAlso, the probe controller you suggested can be easily achieved by the GKE repair system which I suggested above.\nAgree. needs quite a lot of work, definitely not for 1.3 :)\nKeeping in v1.3 until we decide on Docker 1.11 for sure.\nThe issue seems to be present on old and new docker alike, sadly.\nI opened a docker issue here Feel free to supply information on that issue.\nSo what's the right kube-proxy fix? , Lantao Liu wrote:\n- didn't we decide that we just want to discover this problem in NodeProblemDetector and then mark this node as not-healthy (whatever that means - adding a new condition or extending NodeReady or ...)? In my opinion if it would be clear to the user why this node is not healthy, this should be good enough. FYI in I worked around this in our tests :)\nIt wasn't clear to me if kube-proxy should do something different or not. , Wojciech Tyczynski < :\nyeah this is a docker bug (), unsure if there's a pod-level workaround, couldn't make out if it was specific to hostNetwork. the workaround is restarting docker. It happened more frequently in 1.9, but still happens on 1.11 (node team empirical data).
To surface it to the NPD we need to either send some http event to a NPD server (vaporware) or write to a special hostPath termination log type file (parsing kube-proxy logs from NPD is too expensive).\nThis feels like a hot potato but I am not sure if it still needs to be addressed or not. If it does, let's do so. I apologize for not being able to understand what the current status is. It looks like we have a . Reading the comment from 12 days ago, she states: While folks may want the node problem detector extended to parse logs to detect it, that is not a good fit for that system and it doesn't run on all nodes anyway. Also GKE can be extended to possibly repair this problem. Questions: Is this an issue that we still need to address for production? Or can we wait for a docker fix? If we haven't fixed the kube-proxy panic (which we should) how do we ensure we still provide a signal that this is happening?\nI am leaving this on 1.3 until we know the best thing to do.\nIf we can DIRECTLY tell the node problem system about a problem, I have no qualms about doing that. drop a file in a dir, for example. , Michael Rubin wrote:\nNode problem detector is not running on GKE clusters now. :) and should be a better fix for this. If we still want a fix in the node problem detector, letting kubeproxy drop a file could be enough for now.\nIf you can give me a spec of what dir to mount and what sort of file to write, I'll do it. , Lantao Liu wrote:\nFor a temporary fix, something like should be enough. The kernel monitor of node problem detector could be ported to parse other logs. The only reason we can't do that for kube- is that the log is too spammy. should be enough for now. Only the kubeproxy panic error is printed directly into : Even though the default of glog is , I checked the node in and cluster, normally only a few lines are at Error level.\nI thought that Dawn said we DO NOT want to parse logs. Parsing logs is, in general, a total hack and very very brittle.
we should have a real push API. , Lantao Liu wrote:\nI agree. I also understood we should not parse logs. Is that common practice for the node problem detector? It feels fragile to me and should be avoided.\nYeah, I agree parsing logs is fragile, that's why I call it a temporary or a quick fix. :) The only good thing is that it doesn't need significant changes on both sides, although I don't think it's a good solution, either. :( For a better fix, it will need some more design and work. Let me think about it a little more~ Anyway, I still think and is needed. As a key component running on each node, it's weird that we don't have a component to monitor whether it is ready or not.\nRe making kube-proxy \"not panic\", it currently fails in resizing the hash table for conntrack entries, for which it needs to write to /sys. I'm not sure if there are more locations (we moved /sys/module/br_netfilter loading into kubelet IIUC), or if conntrack itself needs write access to /sys. I think Dawn/Liu are more worried about the NPD not parsing kube-proxy logs, because its memory usage would balloon. If kube-proxy can detect the situation internally, either by parsing its own logs (which is brittle) or poking something in /sys for example (like the hash table resizing it's currently failing on), it can drop a token into a hostPath (hoping the docker bug doesn't manifest as ro hostPath). Kube-proxy will still remain dysfunctional, though we might be able to: surface the issue / mark the node unusable / add a feedback loop that restarts docker. We just need to be careful to clean up the hostPath so it doesn't end up in a restart loop.
But because that's not the case for 1.3, and we don't want to make excessive changes to 1.3, perhaps that means that we should fix it expediently. Summarizing my understanding of previous discussion: expedient thing would be some mechanism to detect ( proposal like kube-proxy detecting and writing to /tmp/kube-proxy-claims-docker-issue-) and $something (kubelet, node problem detector, a bash script in a loop) that checks for existence of that file followed by some form of remediation {kill docker, sudo reboot, .....} to bash the node into working again. Maybe an ugly hack is okay for now and we hope that docker fixes it for the next release so in 1.4 we can rm the code? Or for 1.4 we can do something that is More Elegant And Less Offensive To Engineering Sensibilities?\nBounded-lifetime hacks are fine, but they need to come with giant disclaimers and named-assignees to cleanup the mess after a specific event. , Alex Mohr wrote:\nAgree, re: bounded-lifetime hacks needing care and we shouldn't do them indiscriminately. We and our users have a problem today -- and we've had a problem for the past N months without traction. I'd like the problem solved for 1.3 because I'd like us to have a great product and I don't think the state of broken nodes is there. Aside; we've spent N hours discussing the issue and could likely have worked around the issue in less aggregate people time than we spent discussing? Apologies for the leaked frustration: this issue seems to be stuck in either some form of analysis paralysis or cracks between teams or perfect-is-the-enemy-of-the-good state. Again, I don't care about implementation details. We also need more than just tech or a piece of code. Exit criteria is that users don't get broken nodes shipped to them.\nWe're not going to solve this problem for as long as we depend on a lagging docker release cycle, and they don't backport or do intermediate releases to help us. This is just a one off docker bug that's left us in a bad state. 
Such issues come up every release, get documented in the release notes and we ship anyway. The easiest fix (disclaimers and all) is to redirect kube-proxy stderr and parse it out from NPD. Even then, the node will remain broken unless we restart docker. The NPD will only surface the error.\nI was working on a PR yesterday night, will send it out soon.\nFYI, a fix is here . In , I let kube-proxy update a node condition with a specific reason, message and hint to the administrator about the remediation. The specific reason why we want to do this is discussed in the PR description\nFWIW I agree 100% with using a node condition SGTM, but your PR should probably also modify the scheduler here to prevent the system from sending new pods to a node that has the RuntimeUnhealthy condition. I realize there will still be a time window when we might send some pods to that node (before the problem is detected) but at least the time window will be bounded this way.\nYeah, we can do that. But before that, we should make sure that the node is really unusable without setting conntrack. In fact, we only set conntrack to increase the max connection limit on the node (default 64k-256k) Without this, is the node really considered to be unusable? If so, we should prevent scheduling pods to the node. If not, maybe we should just surface the problem to the administrator and let the node keep working. I'll run an e2e test without conntrack set to see whether there is any problem. Yeah, that's why I think we should not restart docker ourselves to try to remedy the problem. There may still be workloads running on the node. We'd better leave this to the user to decide. :)\nis merged, and hopefully, it could solve this issue. I've also sent a PR to revert the workaround in the test framework to verify whether the issue is really fully solved. Feel free to reopen this if the issue or a related issue happens again.\nthanks for the help!\nI just ran into this, and I am able to repro consistently.
It might be due to the craziness of my experiment, but I still wanted to contribute the data point. I am experimenting with docker inside docker. All I am about to describe is itself running inside a privileged CentOS 7 container that has systemd and Docker installed. The CentOS 7 container is running on my dev machine (Docker for Mac). I have a kubelet running fine, and all control plane components running as static pods. When I attempt to run kube-proxy as a static pod (privileged) it crashes with the following logs: sysfs is mounted as rw: sysfs is mounted as rw on containers: Using busybox to further inspect:\nI'm seeing the same issue when running kubeadm-dind-cluster, where kube-proxy is failing to come up, because it is trying to write to /sys/module/nf_conntrack/parameters/hashsize. This is on the filesystem sysfs, which \"mount\" shows is rw (so it passes the R/W check in kube-proxy code), but the filesystem is not writeable. Within kube-proxy: I'm also running docker containers inside of docker containers. FYI, others have not seen this issue, when just running docker on bare-metal.", "positive_passages": [{"docid": "doc-en-kubernetes-65761135dc6bf6eb8bdd425ce249d937f86ec5d89c67e113ce20faf48d97ab16", "text": "package app import ( \"errors\" \"io/ioutil\" \"strconv\" \"github.com/golang/glog\" \"k8s.io/kubernetes/pkg/util/mount\" \"k8s.io/kubernetes/pkg/util/sysctl\" )", "commid": "kubernetes_pr_28697"}], "negative_passages": []} {"query_id": "q-en-kubernetes-921194a3ee9035489edb0f46fa5ed9bbb84000a216afd5f16934a80b7eed9035", "query": "Currently a Docker bug may cause KubeProxy to crashloop thus making the Node unable to use cluster network, which in turn makes Pods scheduled on this Node unreachable. To mitigate this issue we need to surface the information about problems with KubeProxy, that will allow scheduler to ignore given node when scheduling new Pods. ProblemAPI is going to address this kind of problems in the future, but we need a fix for the 1.3. 
Either Kubelet or KubeProxy needs to update a NodeStatus with the information that Node networking is down. I suggest that we reuse the NodeReady Condition for this, as this would make the rest of the ControlPlane work out of the box. We will replace this fix with the ProblemAPI use in the future. Another option, which requires more work, is creating a completely new NodeCondition type to handle this case. This would require making the Scheduler aware of this new Condition.\nDoes kubelet monitor kube-proxy health?\nKube-proxy today runs as a static pod (daemonset), kubelet treats it the same as other static pods. Yes, Kubelet reports kube-proxy podstatus back to the master, that is all. The detailed information on the kube-proxy crashloop is at\ncc/ Brandon, this issue explains my long position against running kubelet, kube-proxy, etc. critical daemons as docker containers. I am ok with packaging them into a docker image and running them as linux containers, but they shouldn't depend on the docker daemon.\nI think we don't want to run them as docker containers as well, we want to run them as rkt containers.\nThe initial proposal I reviewed was docker container, and I did raise my concerns to and coreos developers then. Running it as a rkt container should be ok since rkt is daemon-less today, and shouldn't have this chicken-and-egg issue in the first place. But we still have the problem on the nodes without rkt as the runtime. At the meeting, I suggested that kube-proxy update NodeStatus with a newly introduced NodeCondition, like what we plan to do with the kernel issue at: The reason for not using NodeReady is that kubelet might override that. Another very hacky way is before crashing, Kube-proxy logs the read-only filesystem problem somewhere, like /var/lib/kube-proxy/... Kubelet will pick up the information, then update the NodeReady Condition with a detailed error message prefixed with \"kube-proxy:\" It is kube-proxy's responsibility to keep the information up-to-date.\nCan I have more description of the bug?
Is there a related docker issue? Is there a repro? Does this only affect kube-proxy or will it affect other pods on the node? We need to support running critical applications (including system daemons) in pods in order to allow easy deployment and integration with vendors (e.g. storage and network vendors). cc since we were just talking about integrating through addons. What features are we missing that we can't support this right now? Is it just that our default container runtime is buggy and flaky and unreliable? Do we need the NodeProblem API? Do we need taints/tolerations?\n- I don't know about any way to repro it other than running huge clusters. Pretty much every run of a 1000 Node cluster has this problem on at least one Node.\nIf you want to know the detail, see It is a docker issue, and pretty rare.\nCan we regroup on this issue this week? Like tomorrow? , Dawn Chen wrote:\nI will happily make kube-proxy write to or offer up pretty much any API we deem necessary to detect this issue. To me it still feels like a docker health-check suite is needed, and one of the tests is that sysfs gets mounted rw. But in the meantime, how about this: make be the API. Any component which feels that it has a node problem can write a JSON file into that directory. For this case, kube-proxy would write a file: Kubelet can turn that into Node status reports. , Tim Hockin wrote:\ncc\nThe PR introducing NodeProblemDetector was merged last week, and the pr to enable it by default for Kubernetes clusters was merged over the weekend. So far everything works fine. Even though it is an alpha feature, GKE folks agreed to enable it for GKE. To work around this particular issue before we have a proper docker fix, we can easily extend today's KernelMonitor module of NodeProblemDetector to monitor the kube-proxy log, and report a new NodeCondition called SysfsReadonly to make the issue visible. But we don't have a remedy system yet, and the pr of converting node problems (events, conditions etc.)
to taints is still under debate / review: 1) Should NodeController respect this new Condition, and simply mark the node NotReady? 2) Should the repair system pick up this issue, and restart docker?\nDo we still observe the problem with docker 1.11.X? If no, I want to close this; otherwise, we are going to reconfigure NodeProblemDetector to make the issue visible.\ncc/\nAgree, I'd love to not \"fix\" this..\nI think we've seen it the day before yesterday during our tests - can you please confirm?\nI can't recall - we certainly did see some failed nodes during the test, but I'm not sure we made sure it was a kube-proxy issue.\nBut what I can tell for sure is that it's definitely less frequent - we started a 2000-node cluster 10+ times during this week and we've seen it at most once (which makes it at least 99.995% reliable).\nNot enough data. GKE is still deploying the old ContainerVM and isn't running Docker 1.11.1 yet, unless I'm mistaken.\naah ok - I didn't know that...\nI'm going to tentatively close this until we show that it's a problem on Docker 1.11.2, now that Docker 1.11.2 is in on both GCE and GKE.\n(Obviously feel free to reopen if I'm wrong.)\nAnother occurrence:\nSo it's still happening, can we teach NPD to detect and surface this as node unschedulable?\nYes, we can. But I found that the error only occurred once with docker 1.11 till now. At least, this should be a known issue, right? XRef Do we want to fix this in 1.3 or just claim known issues in release notes? :)\nDoesn't this render the node unusable (i.e. no services will work because kube-proxy is broken)? If so, we should surface it regardless of frequency IMO. Whether we fix or paper over it is a different issue.\n+1\nBecause it's code freeze now, I can add a temporary short hack in node problem detector to detect this issue. Should we do that?
:)\nRemoved the priority and moved it back to the 1.3 milestone, so that we can discuss at the burndown meeting.\nYeah I think codefreeze just means no features\nIt is very rare after we upgrade to docker 1.11.X. To extend KernelMonitor in NodeProblemDetector to detect the issue is totally overkill, since we have to allocate more compute resources to NodeProblemDetector to process tons of kube-proxy logs. Here is what I suggested for this issue for 1.3 and 1.4+: Document this as a known issue for docker releases: docker 1.10.X (), docker 1.11.X (). Also document it as a Kubernetes 1.3 release known issue at is also updated admin guide for NodeProblemDetector, and the suggestion on how to handle this issue will be included there. For 1.4, I think we should change kube-proxy. Instead of logging the error then crashing, kube-proxy should report its problem to NodeProblemDetector, which aggregates the node issues and propagates them to upstream layers, so that the admin / remedy system & control-plane can take proper action upon the issues. I am removing this from the 1.3 milestone, and unassigning myself. cc/ to find the right owner from the cluster team for the next release here.\nOn a 2k node cluster, I just saw 4 nodes come up this way on Docker 1.11.2. I'm not saying it's going to stop the release, but it does stop bringup on large clusters pretty frequently. For some reason we went through a \"nice\" patch where it was rare, but today it was just bad.\nWe can have a quick fix in node problem detector to catch this kind of problem by parsing the kube-proxy log. The only problem is that kube-proxy generates logs relatively fast. If we let node problem detector do that, it has to waste significant cpu even on the good nodes. A better solution is to make node problem detector expose an endpoint and let other system components report problems. However, we are not there yet for 1.3.
If we'd still want to solve this for 1.3, we could: 1) Let kube-proxy have some kind of error log, and only print errors into it. Then we can easily change node problem detector to parse the error log and report problems. By this, we can at least make these errors visible to users for 1.3. 2) Or let kube-proxy report the node condition itself. /cc\nComment at is still valid. We cannot do much for the 1.3 release here, and extending NodeProblemDetector to parse the application logs is the wrong move. Also NodeProblemDetector is not running on GKE nodes yet.\nI'm seeing people say they reproduced on 1.11.2, is there a 1.11 docker issue? I know that was closed. If I'm going to get my docker guys to dig, I need a docker bug report...\nCan you please ramp up on this and come up with a way to bring the node and kube proxy teams together to resolve? Even if we don't land in 1.3.0, I propose this is at least a P1 for 1.3.1.\nFYI and others In I a hacky way of resolving those issues in our test framework. In the last gke-large-cluster run it seemed to work (there was 1 broken kube-proxy, this triggered restarting docker, which in turn fixed the problem). This is obviously not the solution, but: it seems to prove that 'docker restart' solves the problem it should stop the bleeding in our tests\n& I copied & pasted what I wrote in another internal thread to give you some background: No, it is not a blocker for the 1.3 release: It is not a regression from the 1.2 release. It is way less frequent after we upgraded to docker 1.11.2 from docker 1.9.1 NodeProblemDetector is not designed for processing arbitrary logs. It is an overkill solution given such low frequency and relatively high resource overhead. By default NodeProblemDetector is not running on gke nodes due to the extra overhead per node introduced to our customer. Meanwhile, since GKE already has a repair system built, it should be very easy to extend the GKE repair system to check if kube-proxy is in crashloop.
If yes, then kubectl logs to retrieve the last terminated container log to see if the issue is caused by a readonly root filesystem. If yes, the repair system can issue a docker restart.\nagreed not a blocker. This is likely only fixable in docker. We should keep the priority up so we dig into and fix docker. can try next week to get our docker people talking to whoever on your side you want to get this fixed (in 1.11.3 or later probably)\nCan we add a node readiness or liveness probe to solve this? (I remember there is an issue for this, but can't find it now) One of the default behaviours of the probe is to make sure the key node components are running normally, and the behaviour could be extended over time. Probably adding a node probe controller or node health controller to perform the probing. Or just use the health probe of GCE MIG, if its functionality is enough to handle this. This could be part of the remedy system. Just random thoughts, not for 1.3 and even may not work. :)\nAll are good ideas. But let's move that discussion to a separate issue, and use this one to focus on providing the workaround for the KubeProxy panic within the 1.3 timeframe. Thanks!\nAlso, the probe controller you suggested can be easily achieved by the GKE repair system which I suggested above.\nAgree. needs quite a lot of work, definitely not for 1.3 :)\nKeeping in v1.3 until we decide on Docker 1.11 for sure.\nThe issue seems to be present on old and new docker alike, sadly.\nI opened a docker issue here Feel free to supply information on that issue.\nSo what's the right kube-proxy fix? , Lantao Liu wrote:\n- didn't we decide that we just want to discover this problem in NodeProblemDetector and then mark this node as not-healthy (whatever that means - adding a new condition or extending NodeReady or ...)? In my opinion if it would be clear to the user why this node is not healthy, this should be good enough.
FYI in I worked around this in our tests :)\nIt wasn't clear to me if kube-proxy should do something different or not. , Wojciech Tyczynski < :\nyeah this is a docker bug (), unsure if there's a pod level workaround, couldn't make out if it was specific to hostNetwork. the work around is restarting docker. It happened more frequently in 1.9, but still happens on 1.11 (node team empirical data). To surface it to the NPD we need to either send some http event to a NPD server (vaporware) or write to a special hostPath termination log type file (parsing kube-proxy logs form NPD is too expensive).\nThis feels like a hot potato but I am not sure if it still needs to be addressed or not. If it does, let's do so. I apologize for not being able to understand what the current status is. It looks like we have a . Reading comment from 12 days ago she states: While folks may want the node problem detector extended to parse logs to detect it, that is not a good fit for that system and it doesn't run on all nodes anyway. Also GKE can be extended to possibly repair this problem. Questions: Is this an issue that we need to still address for production? Or can we wait for docker fix? If we haven't fixed the kube-proxy panic (which we should) how do ensure we still provide a signal that this is happening?\nI am leaving this on 1.3 until we know the best thing to do.\nIf we can DIRECTLY tell the node problem system about a problem, I have no qualms about doing that. drop a file in a dir, for example. , Michael Rubin wrote:\nNode problem detector is not running on GKE cluster now. :) and should be a better fix for this. If we still want a fix in the node problem detector, letting kubeproxy drop a file could be enough for now.\nIf you can give me a spec of what dir to mount and what sort of file to write, I'll do it. , Lantao Liu wrote:\nFor temporary fix, something like should be enough. The kernel monitor of node problem detector could be ported to parse other logs. 
The only reason we can't do that for kube- is that the log is too spammy. should be enough for now. Only the kubeproxy panic error is printed directly into : Even though the default of glog is , I checked the node in and cluster, normally only few lines are at Error level.\nI thought that Dawn said we DO NOT want to parse logs. Parsing logs is, in general, a total hack and very very brittle. we should have a real push API. , Lantao Liu wrote:\nI agree with I also understood thought we should not parse logs. Is that common practise for the node problem detector? It feels fragile to me and to be avoided.\nYeah, I agree parsing log is fragile, that's why I call it a temporary or a quick fix. :) The only good thing is that it doesn't need significant change on both sides, although I don't think it's a good solution, either. :( For a better fix, it will need some more design and work. Let me think about it a little more~ Anyway, I still think and is needed. As a key component running on each node, it's wired that we don't have a component to monitor whether it is ready or not.\nRe making kube-proxy \"not panic\", it currently fails in resizing the hash table for conntrack entries, for which it needs to write to /sys. I'm not sure if there are more locations (we moved /sys/module/br_netfilter loading into kubelet IIUC), or if conntrack itself needs write access to /sys. I think Dawn/Liu are more worried about the NPD not parsing kube-proxy logs, because its memory usage would balloon. If kube-proxy can detect the situtation internally, either by parsing its own logs (which is brittle) or poking someting in /sys for example (like the hash table resizing it's currently failing on), it can drop a token into a hostPath (hoping the docker bug doesn't manifest as ro hostPath). 
Kube-proxy will still remain dysfunctional, though we might be able to: the issue/mark node unusable a feedback loop that restarts docker We just need to be careful to cleanup the hostPath so it doesn't end up in a restart loop.\nMy understanding of 1.3's state based on a brief chat with : something like ~1 in 1000 nodes will be affected by this issue and if nodes are affected, pods scheduled on those nodes have {broken,little,no} networking. We hope Docker will eventually fix it. But because that's not the case for 1.3, and we don't want to make excessive changes to 1.3, perhaps that means that we should fix it expediently. Summarizing my understanding of previous discussion: expedient thing would be some mechanism to detect ( proposal like kube-proxy detecting and writing to /tmp/kube-proxy-claims-docker-issue-) and $something (kubelet, node problem detector, a bash script in a loop) that checks for existence of that file followed by some form of remediation {kill docker, sudo reboot, .....} to bash the node into working again. Maybe an ugly hack is okay for now and we hope that docker fixes it for the next release so in 1.4 we can rm the code? Or for 1.4 we can do something that is More Elegant And Less Offensive To Engineering Sensibilities?\nBounded-lifetime hacks are fine, but they need to come with giant disclaimers and named-assignees to cleanup the mess after a specific event. , Alex Mohr wrote:\nAgree, re: bounded-lifetime hacks needing care and we shouldn't do them indiscriminately. We and our users have a problem today -- and we've had a problem for the past N months without traction. I'd like the problem solved for 1.3 because I'd like us to have a great product and I don't think the state of broken nodes is there. Aside; we've spent N hours discussing the issue and could likely have worked around the issue in less aggregate people time than we spent discussing? 
Apologies for the leaked frustration: this issue seems to be stuck in either some form of analysis paralysis or cracks between teams or perfect-is-the-enemy-of-the-good state. Again, I don't care about implementation details. We also need more than just tech or a piece of code. Exit criteria is that users don't get broken nodes shipped to them.\nWe're not going to solve this problem for as long as we depend on a lagging docker release cycle, and they don't backport or do intermediate releases to help us. This is just a one off docker bug that's left us in a bad state. Such issues come up every release, get documented in the release notes and we ship anyway. The easiest fix (disclaimers and all) is to redirect kube-proxy stderr and parse it out from NPD. Even then, the node will remain broken unless we restart docker. the NDP will only surface the error.\nI'm working on a PR yesterday night, will send it out soon.\nFYI, a fix is here . In , I let kube-proxy update a node condition with specific reason, message and hit to the administrator about the remediation. The specific reason why we want to do this is discussed in the PR description\nFWIW I agree 100% with using node condition SGTM, but your PR should probably also modify the scheduler here to prevent the system from sending new pods to a node that has RuntimeUnhealthy condition. I realize there will still be a time window when we might send some pods to that node (before the problem is detected) but at least the time window will be bounded this way.\nYeah, we can do that. But before that, we should make sure that the node is really unusable without setting conntrack. In fact, we only setting conntrack to increase the max connection limit on the node (default 64k-256k) Without this, is the node really considered to be unusable? If so, we should prevent scheduling pods to the node. If not, maybe we should just surface the problem to the administrator and let the node keep working. 
I'll run e2e test without conntrack set to see whether there is any problem. Yeah, that's why I think we should not restart docker ourselves to try to remedy the problem. There may still be workloads running the node. We'd better leave this to the user to decide. :)\nis merged, and hopefully, it could solve this issue. I've also sent a PR to revert the walkaround in the test framework to verify whether the issue is really fully solved. Feel free to reopen this if the issue or relative issue happens again.\nthanks for the help!\nI just ran into this, and I am able to repro consistently. It might be due to the craziness of my experiment, but I still wanted to contribute the data point. I am experimenting with docker inside docker. All I am about to describe is itself running inside a privileged CentOS 7 container that has systemd and Docker installed. The CentOS 7 container is running on my dev machine (Docker for Mac). I have a kubelet running fine, and all control plane components running as static pods. When I attempt to run kube-proxy as a static pod (privileged) it crashes with the following logs: sysfs is mounted as rw: sysfs is mounted as rw on containers: Using busybox to further inspect:\nI'm seeing the same issue when running kubeadm-dind-cluster, where kube-proxy is failing to come up, because it is trying to write to /sys/module/nf_conntrack/parameters/hashsize. This is on the filesystem sysfs, which \"mount\" shows is rw (so it passes the R/W check in kube-proxy code), but the filesystem is not writeable. Within kube-proxy: I'm also running docker containers inside of docker containers. 
FYI, others have not seen this issue, when just running docker on bare-metal.", "positive_passages": [{"docid": "doc-en-kubernetes-76bbdf5b01ccd140fbfe4437655cd672e6a70e95523ed0bc530be69d8b5accce", "text": "type realConntracker struct{} var readOnlySysFSError = errors.New(\"ReadOnlySysFS\") func (realConntracker) SetMax(max int) error { glog.Infof(\"Setting nf_conntrack_max to %d\", max) if err := sysctl.SetSysctl(\"net/netfilter/nf_conntrack_max\", max); err != nil { return err } // sysfs is expected to be mounted as 'rw'. However, it may be unexpectedly mounted as // 'ro' by docker because of a known docker issue (https://github.com/docker/docker/issues/24000). // Setting conntrack will fail when sysfs is readonly. When that happens, we don't set conntrack // hashsize and return a special error readOnlySysFSError here. The caller should deal with // readOnlySysFSError differently. writable, err := isSysFSWritable() if err != nil { return err } if !writable { return readOnlySysFSError } // TODO: generify this and sysctl to a new sysfs.WriteInt() glog.Infof(\"Setting conntrack hashsize to %d\", max/4) return ioutil.WriteFile(\"/sys/module/nf_conntrack/parameters/hashsize\", []byte(strconv.Itoa(max/4)), 0640)", "commid": "kubernetes_pr_28697"}], "negative_passages": []} {"query_id": "q-en-kubernetes-921194a3ee9035489edb0f46fa5ed9bbb84000a216afd5f16934a80b7eed9035", "query": "Currently a Docker bug may cause KubeProxy to crashloop thus making the Node unable to use cluster network, which in turn makes Pods scheduled on this Node unreachable. To mitigate this issue we need to surface the information about problems with KubeProxy, that will allow scheduler to ignore given node when scheduling new Pods. ProblemAPI is going to address this kind of problems in the future, but we need a fix for the 1.3. Either Kubelet or KubeProxy needs to update a NodeStatus with the information that Node networking is down. 
I suggest that we reuse the NodeReady Condition for this, as this would make the rest of the ControlPlane work out of the box. We will replace this fix with the ProblemAPI in the future. Another option, which requires more work, is creating a completely new NodeCondition type to handle this case. This would require making the Scheduler aware of this new Condition.\nDoes kubelet monitor kube-proxy health?\nKube-proxy today runs as a static pod (daemonset), and kubelet treats it the same as other static pods. Yes, Kubelet reports the kube-proxy podstatus back to the master, that is all. The detailed information on the kube-proxy crashloop is at\ncc/ Brandon, this issue explains my long-standing position against running kubelet, kube-proxy, etc. critical daemons as docker containers. I am ok with packaging them into a docker image and running them as linux containers, but they shouldn't depend on the docker daemon.\nI think we don't want to run them as docker containers as well, we want to run them as rkt containers.\nThe initial proposal I reviewed was a docker container, and I did raise my concerns to and the coreos developers then. Running it as a rkt container should be ok since rkt is daemon-less today, and shouldn't have this chicken-and-egg issue in the first place. But we still have the problem on nodes without rkt as the runtime. At the meeting, I suggested that kube-proxy update NodeStatus with a newly introduced NodeCondition, like what we plan to do with the kernel issue at: The reason for not using NodeReady is that kubelet might override that. Another very hacky way: before crashing, kube-proxy logs the read-only filesystem problem somewhere, like /var/lib/kube-proxy/... Kubelet will pick up the information, then update the NodeReady Condition with a detailed error message prefixed with "kube-proxy:". It is kube-proxy's responsibility to keep the information up-to-date.\nCan I have more description of the bug? Is there a related docker issue? Is there a repro?
Does this only affect kube-proxy or will it affect other pods on the node? We need to support running critical applications (including system daemons) in pods in order to allow easy deployment and integration with vendors (e.g. storage and network vendors). cc since we were just talking about integrating through addons. What features are we missing that we can't support this right now? Is it just that our default container runtime is buggy and flaky and unreliable? Do we need the NodeProblem API? Do we need taints/tolerations?\n- I don't know about any way to repro it other than running huge clusters. Pretty much every run of a 1000 Node cluster has this problem on at least one Node.\nIf you want to know the detail, see . It is a docker issue, and pretty rare.\nCan we regroup on this issue this week? Like tomorrow? , Dawn Chen wrote:\nI will happily make kube-proxy write to or offer up pretty much any API we deem necessary to detect this issue. To me it still feels like a docker health-check suite is needed, and one of the tests is that sysfs gets mounted rw. But in the meantime, how about this: make be the API. Any component which feels that it has a node problem can write a JSON file into that directory. For this case, kube-proxy would write a file: Kubelet can turn that into Node status reports. , Tim Hockin wrote:\ncc
to taints is still under debate / review: 1) Should NodeController respect this new Condition, and simply mark the node NotReady? 2) Should the repair system pick up this issue, and restart docker?\nDo we still observe the problem with docker 1.11.X? If no, I want to close this; otherwise, we are going to reconfigure NodeProblemDetector to make the issue visible.\ncc/\nAgree, I'd love to not "fix" this..\nI think we've seen it the day before yesterday during our tests - can you please confirm?\nI can't recall - we certainly did see some failed nodes during the test, but I'm not sure we made sure it was a kube-proxy issue.\nBut what I can tell for sure is that it's definitely less frequent - we started a 2000-node cluster 10+ times during this week and we've seen it at most once (which makes it at least 99.995% reliable).\nNot enough data. GKE is still deploying the old ContainerVM and isn't running Docker 1.11.1 yet, unless I'm mistaken.\naah ok - I didn't know that...\nI'm going to tentatively close this until we show that it's a problem on Docker 1.11.2, now that Docker 1.11.2 is in on both GCE and GKE.\n(Obviously feel free to reopen if I'm wrong.)\nAnother occurrence:\nSo it's still happening, can we teach NPD to detect and surface this as node unschedulable?\nYes, we can. But I found that the error has only occurred once with docker 1.11 till now. At least, this should be a known issue, right? XRef Do we want to fix this in 1.3 or just list it as a known issue in the release notes? :)\nDoesn't this render the node unusable (i.e. no services will work because kube-proxy is broken)? If so, we should surface it regardless of frequency IMO. Whether we fix or paper over it is a different issue.\n+1\nBecause it's code freeze now, I can add a temporary short hack in node problem detector to detect this issue. Should we do that?
:)\nRemoved the priority and moved it back to the 1.3 milestone, so that we can discuss at the burndown meeting.\nYeah i think codefreeze just means no features\nIt is very rare after we upgraded to docker 1.11.X. Extending the KernelMonitor in NodeProblemDetector to detect the issue is total overkill, since we would have to allocate more compute resource to NodeProblemDetector to process tons of kube-proxy logs. Here is what I suggested for this issue for 1.3 and 1.4+: Document this as a known issue for the docker releases: docker 1.10.X (), docker 1.11.X (). Also document it as a Kubernetes 1.3 release known issue at . The admin guide for NodeProblemDetector is also updated, and the suggestion on how to handle this issue will be included there. For 1.4, I think we should change kube-proxy. Instead of logging the error and then crashing, kube-proxy should report its problem to NodeProblemDetector, which aggregates the node issues and propagates them to upstream layers, so that the admin / remedy system & control-plane can take proper action upon the issues. I am removing this from the 1.3 milestone, and unassigning myself. cc/ to find the right owner from the cluster team for the next release here.\nOn a 2k node cluster, I just saw 4 nodes come up this way on Docker 1.11.2. I'm not saying it's going to stop the release, but it does stop bringup on large clusters pretty frequently. For some reason we went through a "nice" patch where it was rare, but today it was just bad.\nWe can have a quick fix in node problem detector to catch this kind of problem by parsing the kube-proxy log. The only problem is that kube-proxy generates logs relatively fast. If we let node problem detector do that, it has to waste significant cpu even on the good nodes. A better solution is to make node problem detector expose an endpoint and let other system components report problems. However, we are not there yet for 1.3.
If we'd still want to solve this for 1.3, we could: 1) Let kube-proxy have some kind of error log, and only print errors into it. Then we can easily change node problem detector to parse the error log and report problems. By this, we can at least make these errors visible to users for 1.3. 2) Or let kube-proxy report the node condition itself. /cc\nComment at is still valid. We cannot do much for the 1.3 release here, and extending NodeProblemDetector to parse the application logs is the wrong move. Also, NodeProblemDetector is not running on GKE nodes yet.\nI'm seeing people say they reproduced on 1.11.2, is there a 1.11 docker issue? I know that was closed. If I'm going to get my docker guys to dig, I need a docker bug report...\nCan you please ramp up on this and come up with a way to bring the node and kube-proxy teams together to resolve? Even if we don't land in 1.3.0, I propose this is at least a P1 for 1.3.1.\nFYI and others: In , I added a hacky way of resolving those issues to our test framework. In the last gke-large-cluster run it seemed to work (there was 1 broken kube-proxy, this triggered restarting docker, which in turn fixed the problem). This is obviously not the solution, but: it seems to prove that 'docker restart' solves the problem, and it should stop the bleeding in our tests.\n& I copied & pasted what I wrote in another internal thread to give you some background: No, it is not a blocker for the 1.3 release: It is not a regression from the 1.2 release. It is way less frequent after we upgraded to docker 1.11.2 from docker 1.9.1. NodeProblemDetector is not designed for processing arbitrary logs. It is an overkill solution given such low frequency and relatively high resource overhead. By default NodeProblemDetector is not running on GKE nodes due to the extra overhead per node introduced to our customers. Meanwhile, since GKE already has a repair system built, it should be very easy to extend the GKE repair system to check if kube-proxy is in a crashloop.
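The error-log option in 1) can be sketched as a narrow matcher: rather than scanning the full (spammy) kube-proxy log, the detector looks only for the one panic line produced when sysfs is read-only. This is a sketch of the idea only; the exact log text and the detection wiring below are assumptions, not NodeProblemDetector's real configuration.

```go
package main

import (
	"fmt"
	"regexp"
)

// roSysFS matches only the specific failure kube-proxy hits when docker
// mounts /sys read-only, so the detector never has to understand the
// rest of the log format. The exact line is illustrative.
var roSysFS = regexp.MustCompile(`write /sys/module/nf_conntrack/parameters/hashsize: read-only file system`)

// detect reports whether any log line indicates the read-only-sysfs problem.
func detect(lines []string) bool {
	for _, l := range lines {
		if roSysFS.MatchString(l) {
			return true
		}
	}
	return false
}

func main() {
	log := []string{
		"I0620 proxier.go: Setting nf_conntrack_max to 262144",
		"F0620 server.go: write /sys/module/nf_conntrack/parameters/hashsize: read-only file system",
	}
	fmt.Println(detect(log)) // true
}
```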
If yes, then use kubectl logs to retrieve the last terminated container log to see if the issue is caused by a readonly root filesystem. If yes, the repair system can issue a docker restart.\nAgreed, not a blocker. This is likely only fixable in docker. We should keep the priority up so we dig into and fix docker. I can try next week to get our docker people talking to whoever on your side you want to get this fixed (in 1.11.3 or later probably).\nCan we add a node readiness or liveness probe to solve this? (I remember there is an issue for this, but can't find it now.) One of the default behaviours of the probe would be to make sure the key node components are running normally, and the behaviour could be extended over time. Probably adding a node probe controller or node health controller to perform the probing. Or just use the health probe of the GCE MIG, if its functionality is enough to handle this. This could be part of the remedy system. Just random thoughts, not for 1.3, and it may not even work. :)\nAll are good ideas. But let's move that discussion to a separate issue, and use this one to focus on providing the workaround for the KubeProxy panic within the 1.3 timeframe. Thanks!\nAlso, the probe controller you suggested can be easily achieved by the GKE repair system which I suggested above.\nAgree. needs quite a lot of work, definitely not for 1.3 :)\nKeeping in v1.3 until we decide on Docker 1.11 for sure.\nThe issue seems to be present on old and new docker alike, sadly.\nI opened a docker issue here . Feel free to supply information on that issue.\nSo what's the right kube-proxy fix? , Lantao Liu wrote:\n- didn't we decide that we just want to discover this problem in NodeProblemDetector and then mark this node as not-healthy (whatever that means - adding a new condition or extending NodeReady or ...)? In my opinion, if it would be clear to the user why this node is not healthy, this should be good enough.
FYI, in I worked around this in our tests :)\nIt wasn't clear to me if kube-proxy should do something different or not. , Wojciech Tyczynski wrote:\nYeah, this is a docker bug (); unsure if there's a pod-level workaround, couldn't make out if it was specific to hostNetwork. The workaround is restarting docker. It happened more frequently in 1.9, but still happens on 1.11 (node team empirical data). To surface it to the NPD we need to either send some http event to an NPD server (vaporware) or write to a special hostPath termination-log-type file (parsing kube-proxy logs from NPD is too expensive).\nThis feels like a hot potato but I am not sure if it still needs to be addressed or not. If it does, let's do so. I apologize for not being able to understand what the current status is. It looks like we have a . Reading a comment from 12 days ago, she states: While folks may want the node problem detector extended to parse logs to detect it, that is not a good fit for that system and it doesn't run on all nodes anyway. Also, GKE can be extended to possibly repair this problem. Questions: Is this an issue that we still need to address for production? Or can we wait for a docker fix? If we haven't fixed the kube-proxy panic (which we should), how do we ensure we still provide a signal that this is happening?\nI am leaving this on 1.3 until we know the best thing to do.\nIf we can DIRECTLY tell the node problem system about a problem, I have no qualms about doing that. Drop a file in a dir, for example. , Michael Rubin wrote:\nNode problem detector is not running on GKE clusters now. :) and should be a better fix for this. If we still want a fix in the node problem detector, letting kube-proxy drop a file could be enough for now.\nIf you can give me a spec of what dir to mount and what sort of file to write, I'll do it. , Lantao Liu wrote:\nFor a temporary fix, something like should be enough. The kernel monitor of node problem detector could be ported to parse other logs.
The only reason we can't do that for kube- is that the log is too spammy. should be enough for now. Only the kube-proxy panic error is printed directly into : Even though the default of glog is , I checked the nodes in and cluster; normally only a few lines are at Error level.\nI thought that Dawn said we DO NOT want to parse logs. Parsing logs is, in general, a total hack and very very brittle. We should have a real push API. , Lantao Liu wrote:\nI agree with . I also understood that we should not parse logs. Is that common practice for the node problem detector? It feels fragile to me and should be avoided.\nYeah, I agree parsing logs is fragile, that's why I call it a temporary or quick fix. :) The only good thing is that it doesn't need significant change on both sides, although I don't think it's a good solution, either. :( For a better fix, it will need some more design and work. Let me think about it a little more~ Anyway, I still think and are needed. As a key component running on each node, it's weird that we don't have a component to monitor whether it is ready or not.\nRe making kube-proxy "not panic", it currently fails in resizing the hash table for conntrack entries, for which it needs to write to /sys. I'm not sure if there are more locations (we moved /sys/module/br_netfilter loading into kubelet IIUC), or if conntrack itself needs write access to /sys. I think Dawn/Liu are more worried about the NPD not parsing kube-proxy logs, because its memory usage would balloon. If kube-proxy can detect the situation internally, either by parsing its own logs (which is brittle) or poking something in /sys for example (like the hash table resizing it's currently failing on), it can drop a token into a hostPath (hoping the docker bug doesn't manifest as a ro hostPath).
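Poking /sys before attempting the write, as suggested above, could be done by checking how sysfs appears in the mount table. A minimal stdlib sketch of that check, operating on /proc/mounts-style input so it stays testable; the real kube-proxy code may structure this differently:

```go
package main

import (
	"fmt"
	"strings"
)

// sysfsWritable reports whether any sysfs entry in /proc/mounts-style
// input carries the "rw" mount option. Checking the mount table first
// avoids a blind write to /sys/module/nf_conntrack/parameters/hashsize.
func sysfsWritable(mounts string) bool {
	for _, line := range strings.Split(mounts, "\n") {
		// /proc/mounts fields: device mountpoint fstype opts dump pass
		f := strings.Fields(line)
		if len(f) < 4 || f[2] != "sysfs" {
			continue
		}
		for _, opt := range strings.Split(f[3], ",") {
			if opt == "rw" {
				return true
			}
		}
	}
	return false
}

func main() {
	good := "sysfs /sys sysfs rw,nosuid,nodev,noexec 0 0"
	bad := "sysfs /sys sysfs ro,nosuid,nodev,noexec 0 0"
	fmt.Println(sysfsWritable(good)) // true
	fmt.Println(sysfsWritable(bad))  // false
}
```

In a real node agent the input would come from reading /proc/mounts; keeping the parser pure makes the logic trivial to unit-test.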
Kube-proxy will still remain dysfunctional, though we might be able to: surface the issue / mark the node unusable, or add a feedback loop that restarts docker. We just need to be careful to clean up the hostPath so it doesn't end up in a restart loop.\nMy understanding of 1.3's state based on a brief chat with : something like ~1 in 1000 nodes will be affected by this issue, and if nodes are affected, pods scheduled on those nodes have {broken,little,no} networking. We hope Docker will eventually fix it. But because that's not the case for 1.3, and we don't want to make excessive changes to 1.3, perhaps that means that we should fix it expediently. Summarizing my understanding of the previous discussion: the expedient thing would be some mechanism to detect (a proposal like kube-proxy detecting and writing to /tmp/kube-proxy-claims-docker-issue-) and $something (kubelet, node problem detector, a bash script in a loop) that checks for the existence of that file, followed by some form of remediation {kill docker, sudo reboot, .....} to bash the node into working again. Maybe an ugly hack is okay for now and we hope that docker fixes it for the next release so in 1.4 we can rm the code? Or for 1.4 we can do something that is More Elegant And Less Offensive To Engineering Sensibilities?
Apologies for the leaked frustration: this issue seems to be stuck in either some form of analysis paralysis, or cracks between teams, or a perfect-is-the-enemy-of-the-good state. Again, I don't care about implementation details. We also need more than just tech or a piece of code. The exit criteria is that users don't get broken nodes shipped to them.\nWe're not going to solve this problem for as long as we depend on a lagging docker release cycle, and they don't backport or do intermediate releases to help us. This is just a one-off docker bug that's left us in a bad state. Such issues come up every release, get documented in the release notes, and we ship anyway. The easiest fix (disclaimers and all) is to redirect kube-proxy stderr and parse it out from NPD. Even then, the node will remain broken unless we restart docker; the NPD will only surface the error.\nI was working on a PR last night, will send it out soon.\nFYI, a fix is here . In , I let kube-proxy update a node condition with a specific reason, message, and hint to the administrator about the remediation. The specific reason why we want to do this is discussed in the PR description.\nFWIW I agree 100% with using a node condition, SGTM, but your PR should probably also modify the scheduler here to prevent the system from sending new pods to a node that has the RuntimeUnhealthy condition. I realize there will still be a time window when we might send some pods to that node (before the problem is detected) but at least the time window will be bounded this way.\nYeah, we can do that. But before that, we should make sure that the node is really unusable without setting conntrack. In fact, we only set conntrack to increase the max connection limit on the node (default 64k-256k). Without this, is the node really considered to be unusable? If so, we should prevent scheduling pods to the node. If not, maybe we should just surface the problem to the administrator and let the node keep working.
I'll run the e2e tests without conntrack set to see whether there is any problem. Yeah, that's why I think we should not restart docker ourselves to try to remedy the problem. There may still be workloads running on the node. We'd better leave this to the user to decide. :)\nis merged, and hopefully it solves this issue. I've also sent a PR to revert the workaround in the test framework to verify whether the issue is really fully solved. Feel free to reopen this if this issue or a related issue happens again.\nThanks for the help!\nI just ran into this, and I am able to repro consistently. It might be due to the craziness of my experiment, but I still wanted to contribute the data point. I am experimenting with docker inside docker. All I am about to describe is itself running inside a privileged CentOS 7 container that has systemd and Docker installed. The CentOS 7 container is running on my dev machine (Docker for Mac). I have a kubelet running fine, and all control plane components running as static pods. When I attempt to run kube-proxy as a static pod (privileged) it crashes with the following logs: sysfs is mounted as rw: sysfs is mounted as rw on containers: Using busybox to further inspect:\nI'm seeing the same issue when running kubeadm-dind-cluster, where kube-proxy is failing to come up, because it is trying to write to /sys/module/nf_conntrack/parameters/hashsize. This is on the filesystem sysfs, which "mount" shows is rw (so it passes the R/W check in the kube-proxy code), but the filesystem is not writeable. Within kube-proxy: I'm also running docker containers inside of docker containers.
FYI, others have not seen this issue when just running docker on bare-metal.
This would require making the Scheduler aware of this new Condition.\nDoes kubelet monitor kube-proxy health?\nKube-proxy today runs as a static pod (daemonset), and kubelet treats it the same as other static pods. Yes, Kubelet reports the kube-proxy pod status back to the master, that is all. The detailed information on the kube-proxy crashloop is at\ncc/ Brandon, this issue explains my long position against running kubelet, kube-proxy, and other critical daemons as docker containers. I am OK with packaging them into docker images and running them as linux containers, but they shouldn't depend on the docker daemon.\nI think we don't want to run them as docker containers as well, we want to run them as rkt containers.\nThe initial proposal I reviewed was a docker container, and I did raise my concerns to and coreos developers then. Running it as a rkt container should be OK since rkt is daemon-less today, and shouldn't have this chicken-and-egg issue in the first place. But we still have the problem on nodes without rkt as the runtime. At the meeting, I suggested that kube-proxy update NodeStatus with a newly introduced NodeCondition, like what we plan to do with the kernel issue at: The reason for not using NodeReady is that kubelet might override it. Another very hacky way: before crashing, kube-proxy logs the read-only filesystem problem somewhere, like /var/lib/kube-proxy/... Kubelet will pick up the information, then update the NodeReady Condition with a detailed error message prefixed with \"kube-proxy:\". It is kube-proxy's responsibility to make the information up-to-date.\nCan I have more description of the bug? Is there a related docker issue? Is there a repro? Does this only affect kube-proxy or will it affect other pods on the node? We need to support running critical applications (including system daemons) in pods in order to allow easy deployment and integration with vendors (e.g. storage and network vendors). cc since we were just talking about integrating through addons.
What features are we missing that we can't support this right now? Is it just that our default container runtime is buggy and flaky and unreliable? Do we need the NodeProblem API? Do we need taints/tolerations?\n- I don't know about any way to repro it other than running huge clusters. Pretty much every run of a 1000 Node cluster has this problem on at least one Node.\nIf you want to know the detail, see It is a docker issue, and pretty rare.\nCan we regroup on this issue this week? Like tomorrow? , Dawn Chen wrote:\nI will happily make kube-proxy write to or offer up pretty much any API we deem necessary to detect this issue. To me it still feels like a docker health-check suite is needed, and one of the tests is that sysfs gets mounted rw. But in the meantime, how about this: make be the API. Any component which feels that it has a node problem can write a JSON file into that directory. For this case, kube-proxy would write a file: Kubelet can turn that into Node status reports. , Tim Hockin wrote:\ncc\nThe PR introducing NodeProblemDetector was merged last week, and the PR to enable it by default for Kubernetes clusters was merged over the weekend. So far everything works fine. Even though it is an alpha feature, GKE folks agreed to enable it for GKE. To work around this particular issue before we have a proper docker fix, we can easily extend today's KernelMonitor module of NodeProblemDetector to monitor the kube-proxy log, and report a new NodeCondition called SysfsReadonly to make the issue visible. But we don't have a remedy system yet, and the PR for converting node problems (events, conditions, etc.) to taints is still under debate / review: 1) Should the NodeController respect this new Condition, and simply mark the node NotReady? 2) Should the repair system pick up this issue, and restart docker?\nDo we still observe the problem with docker 1.11.X?
If no, I want to close this; otherwise, we are going to reconfigure NodeProblemDetector to make the issue visible.\ncc/\nAgree, I'd love to not \"fix\" this...\nI think we've seen it the day before yesterday during our tests - can you please confirm?\nI can't recall - we certainly did see some failed nodes during the test, but I'm not sure we made sure it was a kube-proxy issue.\nBut what I can tell for sure is that it's definitely less frequent - we started a 2000-node cluster 10+ times during this week and we've seen it at most once (which makes it at least 99.995% reliable).\nNot enough data. GKE is still deploying the old ContainerVM and isn't running Docker 1.11.1 yet, unless I'm mistaken.\naah ok - I didn't know that...\nI'm going to tentatively close this until we show that it's a problem on Docker 1.11.2, now that Docker 1.11.2 is in on both GCE and GKE.\n(Obviously feel free to reopen if I'm wrong.)\nAnother occurrence:\nSo it's still happening, can we teach NPD to detect and surface this as node unschedulable?\nYes, we can. But I found that the error only occurred once with docker 1.11 till now. At least, this should be a known issue, right? XRef Do we want to fix this in 1.3 or just claim known issues in release notes? :)\nDoesn't this render the node unusable (i.e. no services will work because kube-proxy is broken)? If so, we should surface it regardless of frequency IMO. Whether we fix or paper over it is a different issue.\n+1\nBecause it's code freeze now, I can add a temporary short hack in node problem detector to detect this issue. Should we do that? :)\nRemoved the priority and moved it back to the 1.3 milestone, so that we can discuss at the burndown meeting.\nYeah i think codefreeze just means no features\nIt is very rare after we upgraded to docker 1.11.X. Extending KernelMonitor in NodeProblemDetector to detect the issue is totally overkill, since we have to allocate more compute resources to NodeProblemDetector to process tons of kube-proxy logs.
Here is what I suggested for this issue for 1.3 and 1.4+: Document this as a known issue for docker releases: docker 1.10.X (), docker 1.11.X (). Also document it as a Kubernetes 1.3 release known issue at . The admin guide for NodeProblemDetector is also updated, and the suggestion on how to handle this issue will be included there. For 1.4, I think we should change kube-proxy. Instead of logging the error then crashing, kube-proxy should report its problem to NodeProblemDetector, which aggregates the node issues and propagates them to upstream layers, so that the admin / remedy system & control-plane can take proper action upon the issues. I am removing this from the 1.3 milestone, and unassigning myself. cc/ to find the right owner from the cluster team for the next release here.\nOn a 2k node cluster, I just saw 4 nodes come up this way on Docker 1.11.2. I'm not saying it's going to stop the release, but it does stop bringup on large clusters pretty frequently. For some reason we went through a \"nice\" patch where it was rare, but today it was just bad.\nWe can have a quick fix in node problem detector to catch this kind of problem by parsing the kube-proxy log. The only problem is that kube-proxy generates logs relatively fast. If we let node problem detector do that, it has to waste significant cpu even on the good nodes. A better solution is to make node problem detector expose an endpoint and let other system components report problems. However, we are not there yet for 1.3. If we'd still want to solve this for 1.3, we could: 1) Let kube-proxy have some kind of error log, and only print errors into it. Then we can easily change node problem detector to parse the error log and report problems. By this, we can at least make these errors visible to users for 1.3. 2) Or let kube-proxy report the node condition itself. /cc\nComment at is still valid. We cannot do much for the 1.3 release here, and extending NodeProblemDetector to parse application logs is the wrong move.
Also NodeProblemDetector is not running on GKE nodes yet.\nI'm seeing people say they reproduced on 1.11.2, is there a 1.11 docker issue? I know that was closed. If I'm going to get my docker guys to dig, I need a docker bug report...\nCan you please ramp up on this and come up with a way to bring the node and kube-proxy teams together to resolve? Even if we don't land in 1.3.0, I propose this is at least a P1 for 1.3.1.\nFYI and others: in , I added a hacky way of resolving those issues to our test framework. In the last gke-large-cluster run it seemed to work (there was 1 broken kube-proxy, this triggered restarting docker, which in turn fixed the problem). This is obviously not the solution, but: it seems to prove that 'docker restart' solves the problem, and it should stop the bleeding in our tests\n& I copied & pasted what I wrote in another internal thread to give you some background: No, it is not a blocker for the 1.3 release: It is not a regression from the 1.2 release. It happens way less frequently after we upgraded to docker 1.11.2 from docker 1.9.1. NodeProblemDetector is not designed for processing arbitrary logs. It is an overkill solution given such low frequency and relatively high resource overhead. By default NodeProblemDetector is not running on GKE nodes due to the extra per-node overhead introduced to our customers. Meanwhile, since GKE already has a repair system built, it should be very easy to extend the GKE repair system to check if kube-proxy is in a crashloop. If yes, then use kubectl logs to retrieve the last terminated container log to see if the issue is caused by a read-only root filesystem. If yes, the repair system can issue a docker restart.\nagreed not a blocker. This is likely only fixable in docker. We should keep up the priority so we dig into and fix docker. can try next week to get our docker people talking to whoever on your side you want to get this fixed (in 1.11.3 or later probably)\nCan we add a node readiness or liveness probe to solve this?
(I remember there is an issue for this, but can't find it now) One of the default behaviours of the probe is to make sure the key node components are running normally, and the behaviour could be extended over time. Probably adding a node probe controller or node health controller to perform the probing. Or just use the health probe of the GCE MIG, if its functionality is enough to handle this. This could be part of the remedy system. Just random thoughts, not for 1.3 and it even may not work. :)\nAll are good ideas. But let's move that discussion to a separate issue, and use this one to focus on providing the workaround for the KubeProxy panic within the 1.3 timeframe. Thanks!\nAlso, the probe controller you suggested can be easily achieved by the GKE repair system which I suggested above.\nAgree. needs quite a lot of work, definitely not for 1.3 :)\nKeeping in v1.3 until we decide on Docker 1.11 for sure.\nThe issue seems to be present on old and new docker alike, sadly.\nI opened a docker issue here Feel free to supply information on that issue.\nSo what's the right kube-proxy fix? , Lantao Liu wrote:\n- didn't we decide that we just want to discover this problem in NodeProblemDetector and then mark this node as not-healthy (whatever that means - adding a new condition or extending NodeReady or ...)? In my opinion if it would be clear to the user why this node is not healthy, this should be good enough. FYI in I worked around this in our tests :)\nIt wasn't clear to me if kube-proxy should do something different or not. , Wojciech Tyczynski < :\nyeah this is a docker bug (), unsure if there's a pod-level workaround, couldn't make out if it was specific to hostNetwork. The workaround is restarting docker. It happened more frequently in 1.9, but still happens on 1.11 (node team empirical data).
To surface it to the NPD we need to either send some http event to a NPD server (vaporware) or write to a special hostPath termination-log-type file (parsing kube-proxy logs from NPD is too expensive).\nThis feels like a hot potato but I am not sure if it still needs to be addressed or not. If it does, let's do so. I apologize for not being able to understand what the current status is. It looks like we have a . Reading comment from 12 days ago she states: While folks may want the node problem detector extended to parse logs to detect it, that is not a good fit for that system and it doesn't run on all nodes anyway. Also GKE can be extended to possibly repair this problem. Questions: Is this an issue that we still need to address for production? Or can we wait for a docker fix? If we haven't fixed the kube-proxy panic (which we should), how do we ensure we still provide a signal that this is happening?\nI am leaving this on 1.3 until we know the best thing to do.\nIf we can DIRECTLY tell the node problem system about a problem, I have no qualms about doing that. Drop a file in a dir, for example. , Michael Rubin wrote:\nNode problem detector is not running on GKE clusters now. :) and should be a better fix for this. If we still want a fix in the node problem detector, letting kube-proxy drop a file could be enough for now.\nIf you can give me a spec of what dir to mount and what sort of file to write, I'll do it. , Lantao Liu wrote:\nFor a temporary fix, something like should be enough. The kernel monitor of node problem detector could be ported to parse other logs. The only reason we can't do that for kube- is that the log is too spammy. should be enough for now. Only the kube-proxy panic error is printed directly into : Even though the default of glog is , I checked the node in and cluster, normally only a few lines are at Error level.\nI thought that Dawn said we DO NOT want to parse logs. Parsing logs is, in general, a total hack and very very brittle.
we should have a real push API. , Lantao Liu wrote:\nI agree with . I also understood that we should not parse logs. Is that common practice for the node problem detector? It feels fragile to me and to be avoided.\nYeah, I agree parsing logs is fragile, that's why I call it a temporary or a quick fix. :) The only good thing is that it doesn't need significant change on both sides, although I don't think it's a good solution, either. :( For a better fix, it will need some more design and work. Let me think about it a little more~ Anyway, I still think and is needed. As a key component running on each node, it's weird that we don't have a component to monitor whether it is ready or not.\nRe making kube-proxy \"not panic\", it currently fails in resizing the hash table for conntrack entries, for which it needs to write to /sys. I'm not sure if there are more locations (we moved /sys/module/br_netfilter loading into kubelet IIUC), or if conntrack itself needs write access to /sys. I think Dawn/Liu are more worried about the NPD not parsing kube-proxy logs, because its memory usage would balloon. If kube-proxy can detect the situation internally, either by parsing its own logs (which is brittle) or poking something in /sys for example (like the hash table resizing it's currently failing on), it can drop a token into a hostPath (hoping the docker bug doesn't manifest as a ro hostPath). Kube-proxy will still remain dysfunctional, though we might be able to: surface the issue / mark the node unusable, or add a feedback loop that restarts docker. We just need to be careful to clean up the hostPath so it doesn't end up in a restart loop.\nMy understanding of 1.3's state based on a brief chat with : something like ~1 in 1000 nodes will be affected by this issue and if nodes are affected, pods scheduled on those nodes have {broken,little,no} networking. We hope Docker will eventually fix it.
But because that's not the case for 1.3, and we don't want to make excessive changes to 1.3, perhaps that means that we should fix it expediently. Summarizing my understanding of previous discussion: expedient thing would be some mechanism to detect ( proposal like kube-proxy detecting and writing to /tmp/kube-proxy-claims-docker-issue-) and $something (kubelet, node problem detector, a bash script in a loop) that checks for existence of that file followed by some form of remediation {kill docker, sudo reboot, .....} to bash the node into working again. Maybe an ugly hack is okay for now and we hope that docker fixes it for the next release so in 1.4 we can rm the code? Or for 1.4 we can do something that is More Elegant And Less Offensive To Engineering Sensibilities?\nBounded-lifetime hacks are fine, but they need to come with giant disclaimers and named-assignees to cleanup the mess after a specific event. , Alex Mohr wrote:\nAgree, re: bounded-lifetime hacks needing care and we shouldn't do them indiscriminately. We and our users have a problem today -- and we've had a problem for the past N months without traction. I'd like the problem solved for 1.3 because I'd like us to have a great product and I don't think the state of broken nodes is there. Aside; we've spent N hours discussing the issue and could likely have worked around the issue in less aggregate people time than we spent discussing? Apologies for the leaked frustration: this issue seems to be stuck in either some form of analysis paralysis or cracks between teams or perfect-is-the-enemy-of-the-good state. Again, I don't care about implementation details. We also need more than just tech or a piece of code. Exit criteria is that users don't get broken nodes shipped to them.\nWe're not going to solve this problem for as long as we depend on a lagging docker release cycle, and they don't backport or do intermediate releases to help us. This is just a one off docker bug that's left us in a bad state. 
Such issues come up every release, get documented in the release notes and we ship anyway. The easiest fix (disclaimers and all) is to redirect kube-proxy stderr and parse it out from NPD. Even then, the node will remain broken unless we restart docker. The NPD will only surface the error.\nI've been working on a PR since yesterday night, will send it out soon.\nFYI, a fix is here . In , I let kube-proxy update a node condition with a specific reason, message and hint to the administrator about the remediation. The specific reason why we want to do this is discussed in the PR description.\nFWIW I agree 100% with using a node condition SGTM, but your PR should probably also modify the scheduler here to prevent the system from sending new pods to a node that has the RuntimeUnhealthy condition. I realize there will still be a time window when we might send some pods to that node (before the problem is detected) but at least the time window will be bounded this way.\nYeah, we can do that. But before that, we should make sure that the node is really unusable without setting conntrack. In fact, we only set conntrack to increase the max connection limit on the node (default 64k-256k). Without this, is the node really considered to be unusable? If so, we should prevent scheduling pods onto the node. If not, maybe we should just surface the problem to the administrator and let the node keep working. I'll run an e2e test without conntrack set to see whether there is any problem. Yeah, that's why I think we should not restart docker ourselves to try to remedy the problem. There may still be workloads running on the node. We'd better leave this to the user to decide. :)\nis merged, and hopefully, it could solve this issue. I've also sent a PR to revert the workaround in the test framework to verify whether the issue is really fully solved. Feel free to reopen this if the issue or a related issue happens again.\nthanks for the help!\nI just ran into this, and I am able to repro consistently.
It might be due to the craziness of my experiment, but I still wanted to contribute the data point. I am experimenting with docker inside docker. All I am about to describe is itself running inside a privileged CentOS 7 container that has systemd and Docker installed. The CentOS 7 container is running on my dev machine (Docker for Mac). I have a kubelet running fine, and all control plane components running as static pods. When I attempt to run kube-proxy as a static pod (privileged) it crashes with the following logs: sysfs is mounted as rw: sysfs is mounted as rw on containers: Using busybox to further inspect:\nI'm seeing the same issue when running kubeadm-dind-cluster, where kube-proxy is failing to come up, because it is trying to write to /sys/module/nf_conntrack/parameters/hashsize. This is on the filesystem sysfs, which \"mount\" shows is rw (so it passes the R/W check in kube-proxy code), but the filesystem is not writeable. Within kube-proxy: I'm also running docker containers inside of docker containers. FYI, others have not seen this issue, when just running docker on bare-metal.", "positive_passages": [{"docid": "doc-en-kubernetes-501fb9b94e8dae0f57a8864afbad3b3c2a554a99411fa68dc33e7d26e5467768", "text": "// Tune conntrack, if requested if s.Conntracker != nil { if s.Config.ConntrackMax > 0 { if err := s.Conntracker.SetMax(int(s.Config.ConntrackMax)); err != nil { return err err := s.Conntracker.SetMax(int(s.Config.ConntrackMax)) if err != nil { if err != readOnlySysFSError { return err } // readOnlySysFSError is caused by a known docker issue (https://github.com/docker/docker/issues/24000), // the only remediation we know is to restart the docker daemon. // Here we'll send an node event with specific reason and message, the // administrator should decide whether and how to handle this issue, // whether to drain the node and restart docker. // TODO(random-liu): Remove this when the docker bug is fixed. 
const message = \"DOCKER RESTART NEEDED (docker issue #24000): /sys is read-only: can't raise conntrack limits, problems may arise later.\" s.Recorder.Eventf(s.Config.NodeRef, api.EventTypeWarning, err.Error(), message) } } if s.Config.ConntrackTCPEstablishedTimeout.Duration > 0 {", "commid": "kubernetes_pr_28697"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f9cc6996cc57c60be4f48c68936328b146c9dc2bbf1bb7013e816c2703702419", "query": "Slow conformance test has caused issues like We should make runtime/image conformance test run faster without breaking the test coverage, possibly by: Shorten consistently check interval . Remove redundant test cases such as ... /cc", "positive_passages": [{"docid": "doc-en-kubernetes-0add599a93d358f290a387b56036fbc30b2730b80f6813b5c9c22b4ca3a693dd", "text": "return ConformanceContainer{containers[0], cc.Client, pod.Spec.RestartPolicy, pod.Spec.Volumes, pod.Spec.NodeName, cc.Namespace, cc.podName}, nil } func (cc *ConformanceContainer) GetStatus() (api.ContainerStatus, api.PodPhase, error) { func (cc *ConformanceContainer) IsReady() (bool, error) { pod, err := cc.Client.Pods(cc.Namespace).Get(cc.podName) if err != nil { return api.ContainerStatus{}, api.PodUnknown, err return false, err } return api.IsPodReady(pod), nil } func (cc *ConformanceContainer) GetPhase() (api.PodPhase, error) { pod, err := cc.Client.Pods(cc.Namespace).Get(cc.podName) if err != nil { return api.PodUnknown, err } return pod.Status.Phase, nil } func (cc *ConformanceContainer) GetStatus() (api.ContainerStatus, error) { pod, err := cc.Client.Pods(cc.Namespace).Get(cc.podName) if err != nil { return api.ContainerStatus{}, err } statuses := pod.Status.ContainerStatuses if len(statuses) != 1 { return api.ContainerStatus{}, api.PodUnknown, errors.New(\"Failed to get container status\") if len(statuses) != 1 || statuses[0].Name != cc.Container.Name { return api.ContainerStatus{}, fmt.Errorf(\"unexpected container statuses %v\", statuses) } return 
statuses[0], pod.Status.Phase, nil return statuses[0], nil } func (cc *ConformanceContainer) Present() (bool, error) {", "commid": "kubernetes_pr_26856"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f9cc6996cc57c60be4f48c68936328b146c9dc2bbf1bb7013e816c2703702419", "query": "Slow conformance test has caused issues like We should make runtime/image conformance test run faster without breaking the test coverage, possibly by: Shorten consistently check interval . Remove redundant test cases such as ... /cc", "positive_passages": [{"docid": "doc-en-kubernetes-e9667b2811b19740015a8be453bcbd6266f3c63c1f6844c8f4fc51b2b4247d91", "text": "return false, err } type ContainerState uint32 type ContainerState int const ( ContainerStateWaiting ContainerState = 1 << iota ContainerStateWaiting ContainerState = iota ContainerStateRunning ContainerStateTerminated ContainerStateUnknown", "commid": "kubernetes_pr_26856"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f9cc6996cc57c60be4f48c68936328b146c9dc2bbf1bb7013e816c2703702419", "query": "Slow conformance test has caused issues like We should make runtime/image conformance test run faster without breaking the test coverage, possibly by: Shorten consistently check interval . Remove redundant test cases such as ... /cc", "positive_passages": [{"docid": "doc-en-kubernetes-7d0f20186558116c9d129aeb57603503414062cc233f706e09a46ac42458dea2", "text": "pollInterval = time.Second * 5 ) type testStatus struct { type testCase struct { Name string RestartPolicy api.RestartPolicy Phase api.PodPhase", "commid": "kubernetes_pr_26856"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f9cc6996cc57c60be4f48c68936328b146c9dc2bbf1bb7013e816c2703702419", "query": "Slow conformance test has caused issues like We should make runtime/image conformance test run faster without breaking the test coverage, possibly by: Shorten consistently check interval . Remove redundant test cases such as ... 
/cc", "positive_passages": [{"docid": "doc-en-kubernetes-4e168ee6cfc5455ee91a2f8533b45baa279be7ee82cf7959efa154230df67aca", "text": "Ready bool } var _ = Describe(\"[FLAKY] Container runtime Conformance Test\", func() { var _ = Describe(\"Container Runtime Conformance Test\", func() { var cl *client.Client BeforeEach(func() {", "commid": "kubernetes_pr_26856"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f9cc6996cc57c60be4f48c68936328b146c9dc2bbf1bb7013e816c2703702419", "query": "Slow conformance test has caused issues like We should make runtime/image conformance test run faster without breaking the test coverage, possibly by: Shorten consistently check interval . Remove redundant test cases such as ... /cc", "positive_passages": [{"docid": "doc-en-kubernetes-496cd48fe616df40d2b146e60cbcbe9c969e7343befa940d388791893a4b05f3", "text": "}) Describe(\"container runtime conformance blackbox test\", func() { var testCContainers []ConformanceContainer namespace := \"runtime-conformance\" BeforeEach(func() { testCContainers = []ConformanceContainer{} }) Context(\"when start a container that exits successfully\", func() { Context(\"when starting a container that exits\", func() { It(\"it should run with the expected status [Conformance]\", func() { restartCountVolumeName := \"restart-count\" restartCountVolumePath := \"/restart-count\" testContainer := api.Container{ Image: ImageRegistry[busyBoxImage], VolumeMounts: []api.VolumeMount{ { MountPath: \"/restart-count\", Name: \"restart-count\", MountPath: restartCountVolumePath, Name: restartCountVolumeName, }, }, ImagePullPolicy: api.PullIfNotPresent, } testVolumes := []api.Volume{ { Name: \"restart-count\", Name: restartCountVolumeName, VolumeSource: api.VolumeSource{ HostPath: &api.HostPathVolumeSource{ Path: os.TempDir(),", "commid": "kubernetes_pr_26856"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f9cc6996cc57c60be4f48c68936328b146c9dc2bbf1bb7013e816c2703702419", "query": "Slow conformance test 
has caused issues like We should make runtime/image conformance test run faster without breaking the test coverage, possibly by: Shorten consistently check interval . Remove redundant test cases such as ... /cc", "positive_passages": [{"docid": "doc-en-kubernetes-f3e748c2e76fd0090cbfd20256785c7a206152780844463dca8007770c4a0d37", "text": "}, }, } testCount := int32(3) testStatuses := []testStatus{ {\"terminate-cmd-rpa\", api.RestartPolicyAlways, api.PodRunning, ContainerStateWaiting | ContainerStateRunning | ContainerStateTerminated, \">\", testCount, false}, {\"terminate-cmd-rpof\", api.RestartPolicyOnFailure, api.PodSucceeded, ContainerStateTerminated, \"==\", testCount, false}, {\"terminate-cmd-rpn\", api.RestartPolicyNever, api.PodSucceeded, ContainerStateTerminated, \"==\", 0, false}, testCases := []testCase{ {\"terminate-cmd-rpa\", api.RestartPolicyAlways, api.PodRunning, ContainerStateRunning, \"==\", 2, true}, {\"terminate-cmd-rpof\", api.RestartPolicyOnFailure, api.PodSucceeded, ContainerStateTerminated, \"==\", 1, false}, {\"terminate-cmd-rpn\", api.RestartPolicyNever, api.PodFailed, ContainerStateTerminated, \"==\", 0, false}, } for _, testStatus := range testStatuses { for _, testCase := range testCases { tmpFile, err := ioutil.TempFile(\"\", \"restartCount\") Expect(err).NotTo(HaveOccurred()) defer os.Remove(tmpFile.Name()) // It fails in the first three runs and succeeds after that. 
tmpCmd := fmt.Sprintf(\"echo 'hello' >> /restart-count/%s ; test $(wc -l /restart-count/%s| awk {'print $1'}) -ge %d\", path.Base(tmpFile.Name()), path.Base(tmpFile.Name()), testCount+1) testContainer.Name = testStatus.Name // It failed at the 1st run, then succeeded at 2nd run, then run forever cmdScripts := ` f=%s count=$(echo 'hello' >> $f ; wc -l $f | awk {'print $1'}) if [ $count -eq 1 ]; then exit 1 fi if [ $count -eq 2 ]; then exit 0 fi while true; do sleep 1; done ` tmpCmd := fmt.Sprintf(cmdScripts, path.Join(restartCountVolumePath, path.Base(tmpFile.Name()))) testContainer.Name = testCase.Name testContainer.Command = []string{\"sh\", \"-c\", tmpCmd} terminateContainer := ConformanceContainer{ Container: testContainer, Client: cl, RestartPolicy: testStatus.RestartPolicy, RestartPolicy: testCase.RestartPolicy, Volumes: testVolumes, NodeName: *nodeName, Namespace: namespace, } err = terminateContainer.Create() Expect(err).NotTo(HaveOccurred()) testCContainers = append(testCContainers, terminateContainer) Expect(terminateContainer.Create()).To(Succeed()) defer terminateContainer.Delete() Eventually(func() api.PodPhase { _, phase, _ := terminateContainer.GetStatus() return phase }, retryTimeout, pollInterval).ShouldNot(Equal(api.PodPending)) var status api.ContainerStatus By(\"it should get the expected 'RestartCount'\") Eventually(func() int32 { status, _, _ = terminateContainer.GetStatus() return status.RestartCount }, retryTimeout, pollInterval).Should(BeNumerically(testStatus.RestartCountOper, testStatus.RestartCount)) Eventually(func() (int32, error) { status, err := terminateContainer.GetStatus() return status.RestartCount, err }, retryTimeout, pollInterval).Should(BeNumerically(testCase.RestartCountOper, testCase.RestartCount)) By(\"it should get the expected 'Ready' status\") Expect(status.Ready).To(Equal(testStatus.Ready)) By(\"it should get the expected 'Phase'\") Eventually(terminateContainer.GetPhase, retryTimeout, 
pollInterval).Should(Equal(testCase.Phase)) By(\"it should get the expected 'State'\") Expect(GetContainerState(status.State) & testStatus.State).NotTo(Equal(0)) By(\"it should get the expected 'Ready' condition\") Expect(terminateContainer.IsReady()).Should(Equal(testCase.Ready)) By(\"it should be possible to delete [Conformance]\") err = terminateContainer.Delete() Expect(err).NotTo(HaveOccurred()) Eventually(func() bool { isPresent, err := terminateContainer.Present() return err == nil && !isPresent }, retryTimeout, pollInterval).Should(BeTrue()) } }) }) Context(\"when start a container that keeps running\", func() { It(\"it should run with the expected status [Conformance]\", func() { testContainer := api.Container{ Image: ImageRegistry[busyBoxImage], Command: []string{\"sh\", \"-c\", \"while true; do echo hello; sleep 1; done\"}, ImagePullPolicy: api.PullIfNotPresent, } testStatuses := []testStatus{ {\"loop-cmd-rpa\", api.RestartPolicyAlways, api.PodRunning, ContainerStateRunning, \"==\", 0, true}, {\"loop-cmd-rpof\", api.RestartPolicyOnFailure, api.PodRunning, ContainerStateRunning, \"==\", 0, true}, {\"loop-cmd-rpn\", api.RestartPolicyNever, api.PodRunning, ContainerStateRunning, \"==\", 0, true}, } for _, testStatus := range testStatuses { testContainer.Name = testStatus.Name runningContainer := ConformanceContainer{ Container: testContainer, Client: cl, RestartPolicy: testStatus.RestartPolicy, NodeName: *nodeName, Namespace: namespace, } err := runningContainer.Create() Expect(err).NotTo(HaveOccurred()) testCContainers = append(testCContainers, runningContainer) Eventually(func() api.PodPhase { _, phase, _ := runningContainer.GetStatus() return phase }, retryTimeout, pollInterval).Should(Equal(api.PodRunning)) var status api.ContainerStatus var phase api.PodPhase Consistently(func() api.PodPhase { status, phase, err = runningContainer.GetStatus() return phase }, consistentCheckTimeout, pollInterval).Should(Equal(testStatus.Phase)) 
Expect(err).NotTo(HaveOccurred()) By(\"it should get the expected 'RestartCount'\") Expect(status.RestartCount).To(BeNumerically(testStatus.RestartCountOper, testStatus.RestartCount)) By(\"it should get the expected 'Ready' status\") Expect(status.Ready).To(Equal(testStatus.Ready)) status, err := terminateContainer.GetStatus() Expect(err).ShouldNot(HaveOccurred()) By(\"it should get the expected 'State'\") Expect(GetContainerState(status.State) & testStatus.State).NotTo(Equal(0)) Expect(GetContainerState(status.State)).To(Equal(testCase.State)) By(\"it should be possible to delete [Conformance]\") err = runningContainer.Delete() Expect(err).NotTo(HaveOccurred()) Eventually(func() bool { isPresent, err := runningContainer.Present() return err == nil && !isPresent }, retryTimeout, pollInterval).Should(BeTrue()) } }) }) Context(\"when start a container that exits failure\", func() { It(\"it should run with the expected status [Conformance]\", func() { testContainer := api.Container{ Image: ImageRegistry[busyBoxImage], Command: []string{\"false\"}, ImagePullPolicy: api.PullIfNotPresent, } testStatuses := []testStatus{ {\"fail-cmd-rpa\", api.RestartPolicyAlways, api.PodRunning, ContainerStateWaiting | ContainerStateRunning | ContainerStateTerminated, \">\", 0, false}, {\"fail-cmd-rpof\", api.RestartPolicyOnFailure, api.PodRunning, ContainerStateTerminated, \">\", 0, false}, {\"fail-cmd-rpn\", api.RestartPolicyNever, api.PodFailed, ContainerStateTerminated, \"==\", 0, false}, } for _, testStatus := range testStatuses { testContainer.Name = testStatus.Name failureContainer := ConformanceContainer{ Container: testContainer, Client: cl, RestartPolicy: testStatus.RestartPolicy, NodeName: *nodeName, Namespace: namespace, } err := failureContainer.Create() Expect(err).NotTo(HaveOccurred()) testCContainers = append(testCContainers, failureContainer) Eventually(func() api.PodPhase { _, phase, _ := failureContainer.GetStatus() return phase }, retryTimeout, 
pollInterval).ShouldNot(Equal(api.PodPending)) var status api.ContainerStatus By(\"it should get the expected 'RestartCount'\") Eventually(func() int32 { status, _, _ = failureContainer.GetStatus() return status.RestartCount }, retryTimeout, pollInterval).Should(BeNumerically(testStatus.RestartCountOper, testStatus.RestartCount)) By(\"it should get the expected 'Ready' status\") Expect(status.Ready).To(Equal(testStatus.Ready)) By(\"it should get the expected 'State'\") Expect(GetContainerState(status.State) & testStatus.State).NotTo(Equal(0)) By(\"it should be possible to delete [Conformance]\") err = failureContainer.Delete() Expect(err).NotTo(HaveOccurred()) Eventually(func() bool { isPresent, err := failureContainer.Present() return err == nil && !isPresent }, retryTimeout, pollInterval).Should(BeTrue()) Expect(terminateContainer.Delete()).To(Succeed()) Eventually(terminateContainer.Present, retryTimeout, pollInterval).Should(BeFalse()) } }) })", "commid": "kubernetes_pr_26856"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f9cc6996cc57c60be4f48c68936328b146c9dc2bbf1bb7013e816c2703702419", "query": "Slow conformance test has caused issues like We should make runtime/image conformance test run faster without breaking the test coverage, possibly by: Shorten consistently check interval . Remove redundant test cases such as ... 
/cc", "positive_passages": [{"docid": "doc-en-kubernetes-42d7e3f6a08032abb5df01459fb8cbc97fc3e40e70bfe2e1bb39f3f732e3cdc9", "text": "Context(\"when running a container with invalid image\", func() { It(\"it should run with the expected status [Conformance]\", func() { testContainer := api.Container{ Image: \"foo.com/foo/foo\", Command: []string{\"false\"}, ImagePullPolicy: api.PullIfNotPresent, Image: \"foo.com/foo/foo\", Command: []string{\"false\"}, } testStatus := testStatus{\"invalid-image-rpa\", api.RestartPolicyAlways, api.PodPending, ContainerStateWaiting, \"==\", 0, false} testContainer.Name = testStatus.Name testCase := testCase{\"invalid-image-rpa\", api.RestartPolicyAlways, api.PodPending, ContainerStateWaiting, \"==\", 0, false} testContainer.Name = testCase.Name invalidImageContainer := ConformanceContainer{ Container: testContainer, Client: cl, RestartPolicy: testStatus.RestartPolicy, RestartPolicy: testCase.RestartPolicy, NodeName: *nodeName, Namespace: namespace, } err := invalidImageContainer.Create() Expect(err).NotTo(HaveOccurred()) testCContainers = append(testCContainers, invalidImageContainer) Expect(invalidImageContainer.Create()).To(Succeed()) defer invalidImageContainer.Delete() var status api.ContainerStatus var phase api.PodPhase Eventually(invalidImageContainer.GetPhase, retryTimeout, pollInterval).Should(Equal(testCase.Phase)) Consistently(invalidImageContainer.GetPhase, consistentCheckTimeout, pollInterval).Should(Equal(testCase.Phase)) Consistently(func() api.PodPhase { if status, phase, err = invalidImageContainer.GetStatus(); err != nil { return api.PodPending } else { return phase } }, consistentCheckTimeout, pollInterval).Should(Equal(testStatus.Phase)) status, err := invalidImageContainer.GetStatus() Expect(err).NotTo(HaveOccurred()) By(\"it should get the expected 'RestartCount'\") Expect(status.RestartCount).To(BeNumerically(testStatus.RestartCountOper, testStatus.RestartCount)) 
Expect(status.RestartCount).To(BeNumerically(testCase.RestartCountOper, testCase.RestartCount)) By(\"it should get the expected 'Ready' status\") Expect(status.Ready).To(Equal(testStatus.Ready)) Expect(status.Ready).To(Equal(testCase.Ready)) By(\"it should get the expected 'State'\") Expect(GetContainerState(status.State) & testStatus.State).NotTo(Equal(0)) Expect(GetContainerState(status.State)).To(Equal(testCase.State)) By(\"it should be possible to delete [Conformance]\") err = invalidImageContainer.Delete() Expect(err).NotTo(HaveOccurred()) Eventually(func() bool { isPresent, err := invalidImageContainer.Present() return err == nil && !isPresent }, retryTimeout, pollInterval).Should(BeTrue()) Expect(invalidImageContainer.Delete()).To(Succeed()) Eventually(invalidImageContainer.Present, retryTimeout, pollInterval).Should(BeFalse()) }) }) AfterEach(func() { for _, cc := range testCContainers { cc.Delete() } }) }) })", "commid": "kubernetes_pr_26856"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cdbdab84d851624095d1ccede1b49e262098d3b6f4a3201986bfd86cae33cda9", "query": "Ref and should sort revisions correctly (10 should be after 9, not 1).\nI can help with this bug. Looks like trivial fix in PrintRolloutHistory\nThanks\nare you still taking care of this one ? 
I am searching for some easy to do PR to just get familiar with the process and put my first PR in.\nI created pr some time ago to close this issue\ngreat thanks :-)\nyou can look for help-wanted issues or check and\nthanks", "positive_passages": [{"docid": "doc-en-kubernetes-e49efa8702d2dd204b45132c7f0e661de28a43edea50722af099a2b0e3f58a6e", "text": "import ( \"fmt\" \"io\" \"sort\" \"strconv\" \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/meta\"", "commid": "kubernetes_pr_25802"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cdbdab84d851624095d1ccede1b49e262098d3b6f4a3201986bfd86cae33cda9", "query": "Ref and should sort revisions correctly (10 should be after 9, not 1).\nI can help with this bug. Looks like trivial fix in PrintRolloutHistory\nThanks\nare you still taking care of this one ? I am searching for some easy to do PR to just get familiar with the process and put my first PR in.\nI created pr some time ago to close this issue\ngreat thanks :-)\nyou can look for help-wanted issues or check and\nthanks", "positive_passages": [{"docid": "doc-en-kubernetes-9202faa481fc18e47dc861a99a0c6677932eb1d73ae82a64b3845d2a2aef4467", "text": "clientset \"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset\" \"k8s.io/kubernetes/pkg/runtime\" deploymentutil \"k8s.io/kubernetes/pkg/util/deployment\" \"k8s.io/kubernetes/pkg/util/errors\" sliceutil \"k8s.io/kubernetes/pkg/util/slice\" ) const (", "commid": "kubernetes_pr_25802"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cdbdab84d851624095d1ccede1b49e262098d3b6f4a3201986bfd86cae33cda9", "query": "Ref and should sort revisions correctly (10 should be after 9, not 1).\nI can help with this bug. Looks like trivial fix in PrintRolloutHistory\nThanks\nare you still taking care of this one ? 
I am searching for some easy to do PR to just get familiar with the process and put my first PR in.\nI created pr some time ago to close this issue\ngreat thanks :-)\nyou can look for help-wanted issues or check and\nthanks", "positive_passages": [{"docid": "doc-en-kubernetes-bdef9988174204850b71afad632e59270b0b2164fb9eb972197a6a8cfbaa0db0", "text": "return fmt.Sprintf(\"No rollout history found in %s %q\", resource, name), nil } // Sort the revisionToChangeCause map by revision var revisions []string for k := range historyInfo.RevisionToTemplate { revisions = append(revisions, strconv.FormatInt(k, 10)) var revisions []int64 for r := range historyInfo.RevisionToTemplate { revisions = append(revisions, r) } sort.Strings(revisions) sliceutil.SortInts64(revisions) return tabbedString(func(out io.Writer) error { fmt.Fprintf(out, \"%s %q:n\", resource, name) fmt.Fprintf(out, \"REVISIONtCHANGE-CAUSEn\") errs := []error{} for _, r := range revisions { // Find the change-cause of revision r r64, err := strconv.ParseInt(r, 10, 64) if err != nil { errs = append(errs, err) continue } changeCause := historyInfo.RevisionToTemplate[r64].Annotations[ChangeCauseAnnotation] changeCause := historyInfo.RevisionToTemplate[r].Annotations[ChangeCauseAnnotation] if len(changeCause) == 0 { changeCause = \"\" } fmt.Fprintf(out, \"%st%sn\", r, changeCause) fmt.Fprintf(out, \"%dt%sn\", r, changeCause) } return errors.NewAggregate(errs) return nil }) }", "commid": "kubernetes_pr_25802"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cdbdab84d851624095d1ccede1b49e262098d3b6f4a3201986bfd86cae33cda9", "query": "Ref and should sort revisions correctly (10 should be after 9, not 1).\nI can help with this bug. Looks like trivial fix in PrintRolloutHistory\nThanks\nare you still taking care of this one ? 
I am searching for some easy to do PR to just get familiar with the process and put my first PR in.\nI created pr some time ago to close this issue\ngreat thanks :-)\nyou can look for help-wanted issues or check and\nthanks", "positive_passages": [{"docid": "doc-en-kubernetes-beeb26d9d3f67a6478696b0b6a7d4bc41517b362f9b5d4c280e6f6d1e1e91990", "text": "} return shuffled } // Int64Slice attaches the methods of Interface to []int64, // sorting in increasing order. type Int64Slice []int64 func (p Int64Slice) Len() int { return len(p) } func (p Int64Slice) Less(i, j int) bool { return p[i] < p[j] } func (p Int64Slice) Swap(i, j int) { p[i], p[j] = p[j], p[i] } // Sorts []int64 in increasing order func SortInts64(a []int64) { sort.Sort(Int64Slice(a)) } ", "commid": "kubernetes_pr_25802"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cdbdab84d851624095d1ccede1b49e262098d3b6f4a3201986bfd86cae33cda9", "query": "Ref and should sort revisions correctly (10 should be after 9, not 1).\nI can help with this bug. Looks like trivial fix in PrintRolloutHistory\nThanks\nare you still taking care of this one ? I am searching for some easy to do PR to just get familiar with the process and put my first PR in.\nI created pr some time ago to close this issue\ngreat thanks :-)\nyou can look for help-wanted issues or check and\nthanks", "positive_passages": [{"docid": "doc-en-kubernetes-360921e4f4f43f476ab8d5fa4fa35d2234f8c9bb7e3cd7b93aa56e9ffd46d1e3", "text": "} } } func TestSortInts64(t *testing.T) { src := []int64{10, 1, 2, 3, 4, 5, 6} expected := []int64{1, 2, 3, 4, 5, 6, 10} SortInts64(src) if !reflect.DeepEqual(src, expected) { t.Errorf(\"func Ints64 didnt sort correctly, %v !- %v\", src, expected) } } ", "commid": "kubernetes_pr_25802"}], "negative_passages": []} {"query_id": "q-en-kubernetes-87a1a6cd7255a7d3af9b5badeea221b6e0f9a3874b0c406972669f22c6eff182", "query": "vSphere volume spec fails with the following error, as its not been in api/validation. 
\"spec.volumes[0]: Required value: must specify a volume type\" Fixed in PR", "positive_passages": [{"docid": "doc-en-kubernetes-949d74511ac1c896810bbf3584cb5ea2759a6395486370578a4d1893b94b4c6d", "text": "numVolumes++ allErrs = append(allErrs, validateAzureFile(source.AzureFile, fldPath.Child(\"azureFile\"))...) } if source.VsphereVolume != nil { if numVolumes > 0 { allErrs = append(allErrs, field.Forbidden(fldPath.Child(\"vsphereVolume\"), \"may not specify more than 1 volume type\")) } else { numVolumes++ allErrs = append(allErrs, validateVsphereVolumeSource(source.VsphereVolume, fldPath.Child(\"vsphereVolume\"))...) } } if numVolumes == 0 { allErrs = append(allErrs, field.Required(fldPath, \"must specify a volume type\")) }", "commid": "kubernetes_pr_26121"}], "negative_passages": []} {"query_id": "q-en-kubernetes-87a1a6cd7255a7d3af9b5badeea221b6e0f9a3874b0c406972669f22c6eff182", "query": "vSphere volume spec fails with the following error, as its not been in api/validation. \"spec.volumes[0]: Required value: must specify a volume type\" Fixed in PR", "positive_passages": [{"docid": "doc-en-kubernetes-8d7511a03199b15d96f41c7fec845f5d20d46cc4015030fee26a111eb337c9c9", "text": "return allErrs } func validateVsphereVolumeSource(cd *api.VsphereVirtualDiskVolumeSource, fldPath *field.Path) field.ErrorList { allErrs := field.ErrorList{} if len(cd.VolumePath) == 0 { allErrs = append(allErrs, field.Required(fldPath.Child(\"volumePath\"), \"\")) } return allErrs } // ValidatePersistentVolumeName checks that a name is appropriate for a // PersistentVolumeName object. var ValidatePersistentVolumeName = NameIsDNSSubdomain", "commid": "kubernetes_pr_26121"}], "negative_passages": []} {"query_id": "q-en-kubernetes-87a1a6cd7255a7d3af9b5badeea221b6e0f9a3874b0c406972669f22c6eff182", "query": "vSphere volume spec fails with the following error, as its not been in api/validation. 
\"spec.volumes[0]: Required value: must specify a volume type\" Fixed in PR", "positive_passages": [{"docid": "doc-en-kubernetes-38e0c4be396d9904cdc0caa0c15f7878d4a5b9e4b7958c83ebe546ec4f62269b", "text": "numVolumes++ allErrs = append(allErrs, validateAzureFile(pv.Spec.AzureFile, specPath.Child(\"azureFile\"))...) } if pv.Spec.VsphereVolume != nil { if numVolumes > 0 { allErrs = append(allErrs, field.Forbidden(specPath.Child(\"vsphereVolume\"), \"may not specify more than 1 volume type\")) } else { numVolumes++ allErrs = append(allErrs, validateVsphereVolumeSource(pv.Spec.VsphereVolume, specPath.Child(\"vsphereVolume\"))...) } } if numVolumes == 0 { allErrs = append(allErrs, field.Required(specPath, \"must specify a volume type\")) }", "commid": "kubernetes_pr_26121"}], "negative_passages": []} {"query_id": "q-en-kubernetes-67015524d03b3c3a046beb56e6abbbe143f04ab34a9cc2bb70bdff31e035c48c", "query": "PersistentVolume integration tests fail sometimes: There are some interesting reports in the log like: Thans for report.\nIt turns out that volume integration tests have no logs at all, was not checked and I don't really know what's going on there. adds error checks to help me find out what's wrong there. Still, I suspect there are some troubles with etcd, not related to the test itself.\nThis flake affects also TestPersistentVolumeMultiPVs:\nExtended logging got merged just now, please post links to logs when it breaks again!\nMarking as P1 as per\nI have no real usable logs, but I think it could be fixed by , it hasn't flaked since it was merged. 
And I have (in merge queue) to further stabilize the tests.\nNot blocking this on 1.3.\nIt hasn't flaked for almost a month, can we close this?\nClosing this issue since previous fixes made the flake go away.", "positive_passages": [{"docid": "doc-en-kubernetes-69f4610b982cfc5347d64830935c1045a02a531cfaab4a6a7a4fa1dcbd30ccee", "text": "pvc := createPVC(\"fake-pvc\", \"5G\", []api.PersistentVolumeAccessMode{api.ReadWriteOnce}) w, _ := testClient.PersistentVolumes().Watch(api.ListOptions{}) w, err := testClient.PersistentVolumes().Watch(api.ListOptions{}) if err != nil { t.Errorf(\"Failed to watch PersistentVolumes: %v\", err) } defer w.Stop() _, _ = testClient.PersistentVolumes().Create(pv) _, _ = testClient.PersistentVolumeClaims(api.NamespaceDefault).Create(pvc) _, err = testClient.PersistentVolumes().Create(pv) if err != nil { t.Errorf(\"Failed to create PersistentVolume: %v\", err) } _, err = testClient.PersistentVolumeClaims(api.NamespaceDefault).Create(pvc) if err != nil { t.Errorf(\"Failed to create PersistentVolumeClaim: %v\", err) } // wait until the controller pairs the volume and claim waitForPersistentVolumePhase(w, api.VolumeBound)", "commid": "kubernetes_pr_26262"}], "negative_passages": []} {"query_id": "q-en-kubernetes-67015524d03b3c3a046beb56e6abbbe143f04ab34a9cc2bb70bdff31e035c48c", "query": "PersistentVolume integration tests fail sometimes: There are some interesting reports in the log like: Thans for report.\nIt turns out that volume integration tests have no logs at all, was not checked and I don't really know what's going on there. adds error checks to help me find out what's wrong there. Still, I suspect there are some troubles with etcd, not related to the test itself.\nThis flake affects also TestPersistentVolumeMultiPVs:\nExtended logging got merged just now, please post links to logs when it breaks again!\nMarking as P1 as per\nI have no real usable logs, but I think it could be fixed by , it hasn't flaked since it was merged. 
And I have (in merge queue) to further stabilize the tests.\nNot blocking this on 1.3.\nIt hasn't flaked for almost a month, can we close this?\nClosing this issue since previous fixes made the flake go away.", "positive_passages": [{"docid": "doc-en-kubernetes-ab06c5975896b748494152a3af160f8072ce4ef48227c3a9c86f654d2b3d2705", "text": "// change the reclamation policy of the PV for the next test pv.Spec.PersistentVolumeReclaimPolicy = api.PersistentVolumeReclaimDelete w, _ = testClient.PersistentVolumes().Watch(api.ListOptions{}) w, err = testClient.PersistentVolumes().Watch(api.ListOptions{}) if err != nil { t.Errorf(\"Failed to watch PersistentVolumes: %v\", err) } defer w.Stop() _, _ = testClient.PersistentVolumes().Create(pv) _, _ = testClient.PersistentVolumeClaims(api.NamespaceDefault).Create(pvc) _, err = testClient.PersistentVolumes().Create(pv) if err != nil { t.Errorf(\"Failed to create PersistentVolume: %v\", err) } _, err = testClient.PersistentVolumeClaims(api.NamespaceDefault).Create(pvc) if err != nil { t.Errorf(\"Failed to create PersistentVolumeClaim: %v\", err) } waitForPersistentVolumePhase(w, api.VolumeBound)", "commid": "kubernetes_pr_26262"}], "negative_passages": []} {"query_id": "q-en-kubernetes-67015524d03b3c3a046beb56e6abbbe143f04ab34a9cc2bb70bdff31e035c48c", "query": "PersistentVolume integration tests fail sometimes: There are some interesting reports in the log like: Thans for report.\nIt turns out that volume integration tests have no logs at all, was not checked and I don't really know what's going on there. adds error checks to help me find out what's wrong there. Still, I suspect there are some troubles with etcd, not related to the test itself.\nThis flake affects also TestPersistentVolumeMultiPVs:\nExtended logging got merged just now, please post links to logs when it breaks again!\nMarking as P1 as per\nI have no real usable logs, but I think it could be fixed by , it hasn't flaked since it was merged. 
And I have (in merge queue) to further stabilize the tests.\nNot blocking this on 1.3.\nIt hasn't flaked for almost a month, can we close this?\nClosing this issue since previous fixes made the flake go away.", "positive_passages": [{"docid": "doc-en-kubernetes-d5e237f5125fe62a6eb80055d1a2d6e7468a7568f8bd76270e9219af52efbaf3", "text": "pvc := createPVC(\"pvc-2\", strconv.Itoa(maxPVs/2)+\"G\", []api.PersistentVolumeAccessMode{api.ReadWriteOnce}) w, _ := testClient.PersistentVolumes().Watch(api.ListOptions{}) w, err := testClient.PersistentVolumes().Watch(api.ListOptions{}) if err != nil { t.Errorf(\"Failed to watch PersistentVolumes: %v\", err) } defer w.Stop() for i := 0; i < maxPVs; i++ { _, _ = testClient.PersistentVolumes().Create(pvs[i]) _, err = testClient.PersistentVolumes().Create(pvs[i]) if err != nil { t.Errorf(\"Failed to create PersistentVolume %d: %v\", i, err) } } _, _ = testClient.PersistentVolumeClaims(api.NamespaceDefault).Create(pvc) _, err = testClient.PersistentVolumeClaims(api.NamespaceDefault).Create(pvc) if err != nil { t.Errorf(\"Failed to create PersistentVolumeClaim: %v\", err) } // wait until the controller pairs the volume and claim waitForPersistentVolumePhase(w, api.VolumeBound)", "commid": "kubernetes_pr_26262"}], "negative_passages": []} {"query_id": "q-en-kubernetes-67015524d03b3c3a046beb56e6abbbe143f04ab34a9cc2bb70bdff31e035c48c", "query": "PersistentVolume integration tests fail sometimes: There are some interesting reports in the log like: Thans for report.\nIt turns out that volume integration tests have no logs at all, was not checked and I don't really know what's going on there. adds error checks to help me find out what's wrong there. 
Still, I suspect there are some troubles with etcd, not related to the test itself.\nThis flake affects also TestPersistentVolumeMultiPVs:\nExtended logging got merged just now, please post links to logs when it breaks again!\nMarking as P1 as per\nI have no real usable logs, but I think it could be fixed by , it hasn't flaked since it was merged. And I have (in merge queue) to further stabilize the tests.\nNot blocking this on 1.3.\nIt hasn't flaked for almost a month, can we close this?\nClosing this issue since previous fixes made the flake go away.", "positive_passages": [{"docid": "doc-en-kubernetes-cad12ee78e821ea91d2d3d25408743284295fffe04f1ff1d142bdf71c5422120", "text": "pvc := createPVC(\"pvc-rwm\", \"5G\", []api.PersistentVolumeAccessMode{api.ReadWriteMany}) w, _ := testClient.PersistentVolumes().Watch(api.ListOptions{}) w, err := testClient.PersistentVolumes().Watch(api.ListOptions{}) if err != nil { t.Errorf(\"Failed to watch PersistentVolumes: %v\", err) } defer w.Stop() _, _ = testClient.PersistentVolumes().Create(pv_rwm) _, _ = testClient.PersistentVolumes().Create(pv_rwo) _, err = testClient.PersistentVolumes().Create(pv_rwm) if err != nil { t.Errorf(\"Failed to create PersistentVolume: %v\", err) } _, err = testClient.PersistentVolumes().Create(pv_rwo) if err != nil { t.Errorf(\"Failed to create PersistentVolume: %v\", err) } _, _ = testClient.PersistentVolumeClaims(api.NamespaceDefault).Create(pvc) _, err = testClient.PersistentVolumeClaims(api.NamespaceDefault).Create(pvc) if err != nil { t.Errorf(\"Failed to create PersistentVolumeClaim: %v\", err) } // wait until the controller pairs the volume and claim waitForPersistentVolumePhase(w, api.VolumeBound)", "commid": "kubernetes_pr_26262"}], "negative_passages": []} {"query_id": "q-en-kubernetes-74ce4fea7378dc634ccb83a22893b6255fa243ed3e0b4f0e57320154ecc66caf", "query": "removed this test. We need to continue testing this. 
This wasn't about testing upgrades, the prefix has nothing to do with upgrades. It's just there so people can share etcd. We maybe don't need to double everything but we do need to make sure that keeps working.\nTo be honest - I think that what we was testing before didn't give us completely any information. We didn't testing what information we are storing in etcd and where. That said, if we had a hardcoded prefix somewhere in our code, we wouldn't discover it at all, and those tests would still be passing. We should have a test that is directly checking etcd contents. I think it should be possible to write a simple unit test for it. Will try to put something together next week.", "positive_passages": [{"docid": "doc-en-kubernetes-bb579085541433eea7607bbcec3b281daab625fb4d62f21af3cca072a610211f", "text": "ETCD_HOST=${ETCD_HOST:-127.0.0.1} ETCD_PORT=${ETCD_PORT:-4001} ETCD_PREFIX=${ETCD_PREFIX:-randomPrefix} API_PORT=${API_PORT:-8080} API_HOST=${API_HOST:-127.0.0.1} KUBE_API_VERSIONS=\"\"", "commid": "kubernetes_pr_26553"}], "negative_passages": []} {"query_id": "q-en-kubernetes-74ce4fea7378dc634ccb83a22893b6255fa243ed3e0b4f0e57320154ecc66caf", "query": "removed this test. We need to continue testing this. This wasn't about testing upgrades, the prefix has nothing to do with upgrades. It's just there so people can share etcd. We maybe don't need to double everything but we do need to make sure that keeps working.\nTo be honest - I think that what we was testing before didn't give us completely any information. We didn't testing what information we are storing in etcd and where. That said, if we had a hardcoded prefix somewhere in our code, we wouldn't discover it at all, and those tests would still be passing. We should have a test that is directly checking etcd contents. I think it should be possible to write a simple unit test for it. 
Will try to put something together next week.", "positive_passages": [{"docid": "doc-en-kubernetes-f4949ced9af0b45f1bfa8099a45bec008466ab0e4baaa62ffdf55566faa1ed5e", "text": "--bind-address=\"${API_HOST}\" --insecure-port=\"${API_PORT}\" --etcd-servers=\"http://${ETCD_HOST}:${ETCD_PORT}\" --etcd-prefix=\"${ETCD_PREFIX}\" --runtime-config=\"${RUNTIME_CONFIG}\" --cert-dir=\"${TMPDIR:-/tmp/}\" --service-cluster-ip-range=\"10.0.0.0/24\" ", "commid": "kubernetes_pr_26553"}], "negative_passages": []} {"query_id": "q-en-kubernetes-74ce4fea7378dc634ccb83a22893b6255fa243ed3e0b4f0e57320154ecc66caf", "query": "removed this test. We need to continue testing this. This wasn't about testing upgrades, the prefix has nothing to do with upgrades. It's just there so people can share etcd. We maybe don't need to double everything but we do need to make sure that keeps working.\nTo be honest - I think that what we was testing before didn't give us completely any information. We didn't testing what information we are storing in etcd and where. That said, if we had a hardcoded prefix somewhere in our code, we wouldn't discover it at all, and those tests would still be passing. We should have a test that is directly checking etcd contents. I think it should be possible to write a simple unit test for it. Will try to put something together next week.", "positive_passages": [{"docid": "doc-en-kubernetes-d3292c1563bd973ca1bb0530f19c4f4e000c0d91ca9a584ceeba2e93c6a5cbd7", "text": "### END TEST DEFINITION CUSTOMIZATION ### ####################################################### # Step 1: Start a server which supports both the old and new api versions, # but KUBE_OLD_API_VERSION is the latest (storage) version.", "commid": "kubernetes_pr_26553"}], "negative_passages": []} {"query_id": "q-en-kubernetes-74ce4fea7378dc634ccb83a22893b6255fa243ed3e0b4f0e57320154ecc66caf", "query": "removed this test. We need to continue testing this. 
This wasn't about testing upgrades, the prefix has nothing to do with upgrades. It's just there so people can share etcd. We maybe don't need to double everything but we do need to make sure that keeps working.\nTo be honest - I think that what we was testing before didn't give us completely any information. We didn't testing what information we are storing in etcd and where. That said, if we had a hardcoded prefix somewhere in our code, we wouldn't discover it at all, and those tests would still be passing. We should have a test that is directly checking etcd contents. I think it should be possible to write a simple unit test for it. Will try to put something together next week.", "positive_passages": [{"docid": "doc-en-kubernetes-4dc3275367e11f5562004c8b1381ddb178173ded1fb822e1dd719149efc57524", "text": "old_storage_version=${test_data[4]} kube::log::status \"Verifying ${resource}/${namespace}/${name} has storage version ${old_storage_version} in etcd\" curl -s http://${ETCD_HOST}:${ETCD_PORT}/v2/keys/registry/${resource}/${namespace}/${name} | grep ${old_storage_version} curl -s http://${ETCD_HOST}:${ETCD_PORT}/v2/keys/${ETCD_PREFIX}/${resource}/${namespace}/${name} | grep ${old_storage_version} done killApiServer", "commid": "kubernetes_pr_26553"}], "negative_passages": []} {"query_id": "q-en-kubernetes-74ce4fea7378dc634ccb83a22893b6255fa243ed3e0b4f0e57320154ecc66caf", "query": "removed this test. We need to continue testing this. This wasn't about testing upgrades, the prefix has nothing to do with upgrades. It's just there so people can share etcd. We maybe don't need to double everything but we do need to make sure that keeps working.\nTo be honest - I think that what we was testing before didn't give us completely any information. We didn't testing what information we are storing in etcd and where. That said, if we had a hardcoded prefix somewhere in our code, we wouldn't discover it at all, and those tests would still be passing. 
We should have a test that is directly checking etcd contents. I think it should be possible to write a simple unit test for it. Will try to put something together next week.", "positive_passages": [{"docid": "doc-en-kubernetes-ea666026182ccfccc6a19616d22f71b8cbd9d8ea62fdfd2e828b408581b8a1a0", "text": "new_storage_version=${test_data[5]} kube::log::status \"Verifying ${resource}/${namespace}/${name} has updated storage version ${new_storage_version} in etcd\" curl -s http://${ETCD_HOST}:${ETCD_PORT}/v2/keys/registry/${resource}/${namespace}/${name} | grep ${new_storage_version} curl -s http://${ETCD_HOST}:${ETCD_PORT}/v2/keys/${ETCD_PREFIX}/${resource}/${namespace}/${name} | grep ${new_storage_version} done killApiServer", "commid": "kubernetes_pr_26553"}], "negative_passages": []} {"query_id": "q-en-kubernetes-94f9f800309f4cdedddf2efc2c38e4c633d471e79f09976ab3145ec32dee399a", "query": "If the first scheduling failed (whatever the reason is), the Error() func is called. However, the default Error function is wrong and in case of some errors may drop the pod and never requeue it for scheduling again. The problem is that if the Get() function: return an error (for me it was \"dial tcp 127.0.0.1:8080: connect: cannot assign requested address\", but it doesn't really matter), then we simply forget about a pod and will never schedule it again. 
We should retry those requests (with some backoff).", "positive_passages": [{"docid": "doc-en-kubernetes-880af6824cf1396ec77b6524be64b165abb5f80b5f51187b1e57d369b2b917b5", "text": "const ( SchedulerAnnotationKey = \"scheduler.alpha.kubernetes.io/name\" initialGetBackoff = 100 * time.Millisecond maximalGetBackoff = time.Minute ) // ConfigFactory knows how to fill out a scheduler config with its support functions.", "commid": "kubernetes_pr_29100"}], "negative_passages": []} {"query_id": "q-en-kubernetes-94f9f800309f4cdedddf2efc2c38e4c633d471e79f09976ab3145ec32dee399a", "query": "If the first scheduling failed (whatever the reason is), the Error() func is called. However, the default Error function is wrong and in case of some errors may drop the pod and never requeue it for scheduling again. The problem is that if the Get() function: return an error (for me it was \"dial tcp 127.0.0.1:8080: connect: cannot assign requested address\", but it doesn't really matter), then we simply forget about a pod and will never schedule it again. We should retry those requests (with some backoff).", "positive_passages": [{"docid": "doc-en-kubernetes-d3d903d1d477dcdd4c9fc1409e711c28e0d98b0b0f1c5de37359cc855cad9544", "text": "} // Get the pod again; it may have changed/been scheduled already. 
pod = &api.Pod{} err := factory.Client.Get().Namespace(podID.Namespace).Resource(\"pods\").Name(podID.Name).Do().Into(pod) if err != nil { if !errors.IsNotFound(err) { glog.Errorf(\"Error getting pod %v for retry: %v; abandoning\", podID, err) getBackoff := initialGetBackoff for { if err := factory.Client.Get().Namespace(podID.Namespace).Resource(\"pods\").Name(podID.Name).Do().Into(pod); err == nil { break } return if errors.IsNotFound(err) { glog.Warning(\"A pod %v no longer exists\", podID) return } glog.Errorf(\"Error getting pod %v for retry: %v; retrying...\", podID, err) if getBackoff = getBackoff * 2; getBackoff > maximalGetBackoff { getBackoff = maximalGetBackoff } time.Sleep(getBackoff) } if pod.Spec.NodeName == \"\" { podQueue.AddIfNotPresent(pod)", "commid": "kubernetes_pr_29100"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5922ad1ee5081362e41d63aecf4f82c8ff968e4c864a1f671bcdef7e00456963", "query": "For example\nThis is with recent kubectl:\nI think this is a duplicate, but just in case.\nI will double check this.\nThanks - it isn't a big deal, because I obviously didn't intend to rolling update to an empty manifest :-) (I was experimenting with templating my manifests and made an error) But let me know if you need any more info to reproduce!", "positive_passages": [{"docid": "doc-en-kubernetes-f8cfc0397cffa4621eecb3111246462eca5618cde1cd8f5611701d2d1e5a13c6", "text": "if len(list.Items) > 1 { return cmdutil.UsageError(cmd, \"%s specifies multiple items\", filename) } if len(list.Items) == 0 { return cmdutil.UsageError(cmd, \"please make sure %s exists and is not empty\", filename) } obj = list.Items[0] } newRc, ok = obj.(*api.ReplicationController)", "commid": "kubernetes_pr_29531"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. 
Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-2d9abc84f7d83687196a738c8c0517698a256dde96e31270b2f2e187abdc8391", "text": "apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v18 name: kube-dns-v19 namespace: kube-system labels: k8s-app: kube-dns version: v18 version: v19 kubernetes.io/cluster-service: \"true\" spec: replicas: __PILLAR__DNS__REPLICAS__ selector: k8s-app: kube-dns version: v18 version: v19 template: metadata: labels: k8s-app: kube-dns version: v18 version: v19 kubernetes.io/cluster-service: \"true\" spec: containers:", "commid": "kubernetes_pr_29693"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. 
Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-bb7ed09e60cf086d9428d487460bca6bed3c87139bb7c3d5957bfebce6ace503", "text": "# keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi memory: 50Mi requests: cpu: 10m memory: 20Mi # Note that this container shouldn't really need 50Mi of memory. The # limits are set higher than expected pending investigation on #29688. # The extra memory was stolen from the kubedns container to keep the # net memory requested by the pod constant. memory: 50Mi args: - -cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1:10053 >/dev/null - -port=8080", "commid": "kubernetes_pr_29693"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. 
Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-91268e90086f52f1e9aa926852589b88c07051a0f6351235dbb4cce7282453e3", "text": "apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v18 name: kube-dns-v19 namespace: kube-system labels: k8s-app: kube-dns version: v18 version: v19 kubernetes.io/cluster-service: \"true\" spec: replicas: {{ pillar['dns_replicas'] }} selector: k8s-app: kube-dns version: v18 version: v19 template: metadata: labels: k8s-app: kube-dns version: v18 version: v19 kubernetes.io/cluster-service: \"true\" spec: containers:", "commid": "kubernetes_pr_29693"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-698bc41afd4cdf6edf397de5584ee9f4c28b1939c6cbced1e8ee571d0d1248f2", "text": "# keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi memory: 50Mi requests: cpu: 10m memory: 20Mi # Note that this container shouldn't really need 50Mi of memory. The # limits are set higher than expected pending investigation on #29688. # The extra memory was stolen from the kubedns container to keep the # net memory requested by the pod constant. 
memory: 50Mi args: - -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1:10053 >/dev/null - -port=8080", "commid": "kubernetes_pr_29693"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-c910906c697e3fa1c6f06091c15c4a915709e4c37a247bd3d799c72cdb1e8427", "text": "apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v18 name: kube-dns-v19 namespace: kube-system labels: k8s-app: kube-dns version: v18 version: v19 kubernetes.io/cluster-service: \"true\" spec: replicas: $DNS_REPLICAS selector: k8s-app: kube-dns version: v18 version: v19 template: metadata: labels: k8s-app: kube-dns version: v18 version: v19 kubernetes.io/cluster-service: \"true\" spec: containers:", "commid": "kubernetes_pr_29693"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. 
As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-12db0bdff444977b5df92669d8af725a8a2446d9ea05b111904e5c5d3fdfe23c", "text": "# \"burstable\" category so the kubelet doesn't backoff from restarting it. limits: cpu: 100m memory: 200Mi memory: 170Mi requests: cpu: 100m memory: 100Mi memory: 70Mi livenessProbe: httpGet: path: /healthz", "commid": "kubernetes_pr_29693"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-5ab066989f827398a2d71668d0a5e668030f1e64f01a26e865b29a44446416dd", "text": "# keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi memory: 50Mi requests: cpu: 10m memory: 20Mi # Note that this container shouldn't really need 50Mi of memory. The # limits are set higher than expected pending investigation on #29688. # The extra memory was stolen from the kubedns container to keep the # net memory requested by the pod constant. 
memory: 50Mi args: - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1:10053 >/dev/null - -port=8080", "commid": "kubernetes_pr_29693"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-b04492437015dcacda2d4eced572a07c951a7acba827e4dffe8128d673df23b6", "text": "apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v17 name: kube-dns-v17.1 namespace: kube-system labels: k8s-app: kube-dns version: v17 version: v17.1 kubernetes.io/cluster-service: \"true\" spec: replicas: __PILLAR__DNS__REPLICAS__ selector: k8s-app: kube-dns version: v17 version: v17.1 template: metadata: labels: k8s-app: kube-dns version: v17 version: v17.1 kubernetes.io/cluster-service: \"true\" spec: containers:", "commid": "kubernetes_pr_29713"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. 
As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-dc28f2d2e3a23fdd2c0331cd23d651d85de25a03b06dfd1beb9a260f397d6998", "text": "# keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi memory: 50Mi requests: cpu: 10m memory: 20Mi # Note that this container shouldn't really need 50Mi of memory. The # limits are set higher than expected pending investigation on #29688. # The extra memory was stolen from the kubedns container to keep the # net memory requested by the pod constant. memory: 50Mi args: - -cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null - -port=8080", "commid": "kubernetes_pr_29713"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. 
Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-8967f725a8b647ed04389f31a984147076d364d59087f131ee3164f27ba4ba00", "text": "apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v17 name: kube-dns-v17.1 namespace: kube-system labels: k8s-app: kube-dns version: v17 version: v17.1 kubernetes.io/cluster-service: \"true\" spec: replicas: {{ pillar['dns_replicas'] }} selector: k8s-app: kube-dns version: v17 version: v17.1 template: metadata: labels: k8s-app: kube-dns version: v17 version: v17.1 kubernetes.io/cluster-service: \"true\" spec: containers:", "commid": "kubernetes_pr_29713"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-870ac7fc599968439250cadbc207abcbaae0c107e1251648197f6cf5790ed88e", "text": "# keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi memory: 50Mi requests: cpu: 10m memory: 20Mi # Note that this container shouldn't really need 50Mi of memory. The # limits are set higher than expected pending investigation on #29688. # The extra memory was stolen from the kubedns container to keep the # net memory requested by the pod constant. 
memory: 50Mi args: - -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null - -port=8080", "commid": "kubernetes_pr_29713"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-be4dc01cb5abbc785ba135d92af93b36399626318c514f7e98a32747b6cf9d36", "text": "apiVersion: v1 kind: ReplicationController metadata: name: kube-dns-v17 name: kube-dns-v17.1 namespace: kube-system labels: k8s-app: kube-dns version: v17 version: v17.1 kubernetes.io/cluster-service: \"true\" spec: replicas: $DNS_REPLICAS selector: k8s-app: kube-dns version: v17 version: v17.1 template: metadata: labels: k8s-app: kube-dns version: v17 version: v17.1 kubernetes.io/cluster-service: \"true\" spec: containers:", "commid": "kubernetes_pr_29713"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. 
As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-12db0bdff444977b5df92669d8af725a8a2446d9ea05b111904e5c5d3fdfe23c", "text": "# \"burstable\" category so the kubelet doesn't backoff from restarting it. limits: cpu: 100m memory: 200Mi memory: 170Mi requests: cpu: 100m memory: 100Mi memory: 70Mi livenessProbe: httpGet: path: /healthz", "commid": "kubernetes_pr_29713"}], "negative_passages": []} {"query_id": "q-en-kubernetes-14cd86c3860004d87bc9d8599ab7960ef295aad162f6de5c31fc464723da6cf3", "query": "Observed weird memory spikes for both the dns healthz and kube-dns pods in production. Kube-dns didn't oom because there's enough headroom, but healthz did. I can't explain the spikes yet. I tried a bunch of different repros: no , bad nslookup command, sharing the node with cpu hogs etc, but was unable to trigger it. The memory grows, but stays under 10Mi. As a short term mitigation, we stole some ram from kube-dns, because it was always comfortably within limits. Related b/\ncan be closed -- healthz has been removed\n-- close issue?", "positive_passages": [{"docid": "doc-en-kubernetes-64023d3a61409eda09f6f3f4ee67811336097202149915f9935a5c0e6fa64fea", "text": "# keep request = limit to keep this container in guaranteed class limits: cpu: 10m memory: 20Mi memory: 50Mi requests: cpu: 10m memory: 20Mi # Note that this container shouldn't really need 50Mi of memory. The # limits are set higher than expected pending investigation on #29688. # The extra memory was stolen from the kubedns container to keep the # net memory requested by the pod constant. 
memory: 50Mi args: - -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null - -port=8080", "commid": "kubernetes_pr_29713"}], "negative_passages": []} {"query_id": "q-en-kubernetes-89a6771191a9d4789a41e22c8438d008748f4255d70087ba009452760b4511ad", "query": "https://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nLooking into this...\ntrying to track this flake down. It looks like the pod is launched despite AntiAffinity in . 
In my local setup I saw the same, but noticed that the new admission controller was not active. I have activated it and now get this:\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nShould be fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-1cd4a2ec7bc492ef0c3f78adb0d286c6da34f61f04f6e2f303dc937ab2ea7585", "text": "// test when the pod anti affinity rule is not satisfied, the pod would stay pending. It(\"validates that InterPodAntiAffinity is respected if matching 2\", func() { // launch a pod to find a node which can launch a pod. We intentionally do // not just take the node list and choose the first of them. Depending on the // cluster and the scheduler it might be that a \"normal\" pod cannot be // scheduled onto it. By(\"Trying to launch a pod with a label to get a node which can launch it.\") // launch pods to find nodes which can launch a pod. We intentionally do // not just take the node list and choose the first and the second of them. // Depending on the cluster and the scheduler it might be that a \"normal\" pod // cannot be scheduled onto it.
By(\"Launching two pods on two distinct nodes to get two node names\") CreateHostPortPods(f, \"host-port\", 2, true) defer framework.DeleteRC(f.Client, f.Namespace.Name, \"host-port\") podList, err := c.Pods(ns).List(api.ListOptions{}) ExpectNoError(err) Expect(len(podList.Items)).To(Equal(2)) nodeNames := []string{podList.Items[0].Spec.NodeName, podList.Items[1].Spec.NodeName} Expect(nodeNames[0]).ToNot(Equal(nodeNames[1])) By(\"Applying a random label to both nodes.\") k := \"e2e.inter-pod-affinity.kubernetes.io/zone\" v := \"china-e2etest\" for _, nodeName := range nodeNames { framework.AddOrUpdateLabelOnNode(c, nodeName, k, v) framework.ExpectNodeHasLabel(c, nodeName, k, v) defer framework.RemoveLabelOffNode(c, nodeName, k) } By(\"Trying to launch another pod on the first node with the service label.\") podName := \"with-label-\" + string(uuid.NewUUID()) pod, err := c.Pods(ns).Create(&api.Pod{ TypeMeta: unversioned.TypeMeta{", "commid": "kubernetes_pr_30141"}], "negative_passages": []} {"query_id": "q-en-kubernetes-89a6771191a9d4789a41e22c8438d008748f4255d70087ba009452760b4511ad", "query": "https://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates 
[Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nLooking into this...\ntrying to track this flake down. It looks like the pod is launched despite AntiAffinity in . In my local setup I saw the same, but noticed that the new admission controller was not activate. I have activate it and now get this:\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nShould be fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-900e1c058dce909200242e0921ee8d4ecdd9b1aedec867ae9f021f25b3170515", "text": "Image: framework.GetPauseImageName(f.Client), }, }, NodeSelector: map[string]string{k: v}, // only launch on our two nodes }, }) framework.ExpectNoError(err)", "commid": "kubernetes_pr_30141"}], "negative_passages": []} {"query_id": "q-en-kubernetes-89a6771191a9d4789a41e22c8438d008748f4255d70087ba009452760b4511ad", "query": "https://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] 
validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nLooking into this...\ntrying to track this flake down. It looks like the pod is launched despite AntiAffinity in . In my local setup I saw the same, but noticed that the new admission controller was not activate. 
I have activated it and now get this:\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nShould be fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-ab88b3f964120ea9e709648838d46461d21adb705b40092af6ae7fa6520fcd73", "text": "pod, err = c.Pods(ns).Get(podName) framework.ExpectNoError(err) nodeName := pod.Spec.NodeName By(\"Trying to apply a random label on the found node.\") k := \"e2e.inter-pod-affinity.kubernetes.io/zone\" v := \"china-e2etest\" framework.AddOrUpdateLabelOnNode(c, nodeName, k, v) framework.ExpectNodeHasLabel(c, nodeName, k, v) defer framework.RemoveLabelOffNode(c, nodeName, k) By(\"Trying to launch the pod, now with podAffinity with same Labels.\") labelPodName := \"with-podaffinity-\" + string(uuid.NewUUID()) By(\"Trying to launch another pod, now with podAntiAffinity with same Labels.\") labelPodName := \"with-podantiaffinity-\" + string(uuid.NewUUID()) _, err = c.Pods(ns).Create(&api.Pod{ TypeMeta: unversioned.TypeMeta{ Kind: \"Pod\",
validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nLooking into this...\ntrying to track this flake down. It looks like the pod is launched despite AntiAffinity in . In my local setup I saw the same, but noticed that the new admission controller was not activate. 
I have activated it and now get this:\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nShould be fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-497f9c9acf19e9078f11a587a73ad7850261eb86579f6d3e4099b35e2b8585f1", "text": "Image: framework.GetPauseImageName(f.Client), }, }, NodeSelector: map[string]string{k: v}, // only launch on our two nodes, contradicting the podAntiAffinity }, }) framework.ExpectNoError(err)", "commid": "kubernetes_pr_30141"}], "negative_passages": []} {"query_id": "q-en-kubernetes-89a6771191a9d4789a41e22c8438d008748f4255d70087ba009452760b4511ad", "query": "https://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates
[Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nLooking into this...\ntrying to track this flake down. It looks like the pod is launched despite AntiAffinity in . In my local setup I saw the same, but noticed that the new admission controller was not activate. I have activate it and now get this:\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nhttps://k8s- Failed: [] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}\nShould be fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-dabac2a64e0328fb7c1db41b2b100c32e116c87e1f43822a12ec9ceb78d78f6b", "text": "framework.Logf(\"Sleeping 10 seconds and crossing our fingers that scheduler will run in that time.\") time.Sleep(10 * time.Second) verifyResult(c, labelPodName, 2, 0, ns) verifyResult(c, labelPodName, 3, 1, ns) }) // test the pod affinity successful matching scenario with multiple Label Operators.", "commid": "kubernetes_pr_30141"}], "negative_passages": []} {"query_id": "q-en-kubernetes-75c5d65065e1971e68f3d623cfc156efea5ba70b11e44a4a727242bc91b29a35", "query": "https://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite} Happened on a presubmit run in .\nSorry, it is a new e2e test I am still working on. 
It is not merged yet.\nDon't worry about it, our flake detection isn't quite perfect yet :)\n[FLAKE-PING] This flaky-test issue would love to have more attention...\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nThis new e2e test has been merged. Let's observe it for a few more days and close this issue if nothing bad happens.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nThis doesn't look like a failure in the new test; rather, it's a generic failure with serviceaccounts.\nThanks for the investigation. Yeah, I think it is the serviceaccounts issue. I wonder when we could close this flaky issue. Seems like it is stable if the serviceaccounts issue is fixed.\nYou can close it with a reference to the serviceaccounts issue - do you know the # for that one?\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nHave a fix for this; I'm now keeping tests running on my own cluster. Will send out a PR once I verify the fix works.", "positive_passages": [{"docid": "doc-en-kubernetes-f6c55bdc399e6286717ff723b851c75c5b2b5b1408560f085d1544ee508cd69d", "text": "Expect(err).NotTo(HaveOccurred()) }() // Waiting for service to expose endpoint // Waiting for service to expose endpoint.
validateEndpointsOrFail(c, ns, serviceName, PortsByPodName{serverPodName: {servicePort}}) By(\"Retrieve sourceip from a pod on the same node\")", "commid": "kubernetes_pr_34107"}], "negative_passages": []} {"query_id": "q-en-kubernetes-75c5d65065e1971e68f3d623cfc156efea5ba70b11e44a4a727242bc91b29a35", "query": "https://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite} Happened on a presubmit run in .\nSorry, it is a new e2e test I am still working on. It is not merged yet.\nDon't worry about it, our flake detection isn't quite perfect yet :)\n[FLAKE-PING] This flaky-test issue would love to have more attention...\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nThis new e2e test has been merged. Let's observe it few more days and close this issue if nothing bad happen.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nThis does't look like a failure in new test, rather its a generic failure with serviceaccounts.\nThanks for the investigation. Yeah I think it is the serviceaccounts issue. I wonder when could we close this flaky issue. 
Seems like it is stable if serviceaccounts issue is fixed.\nYou can close it with a reference to the serviceaccounts issue - do you know the # for that one ?\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nHave a fix for this, I'm now keeping running tests on my own cluster. Will send out PR once verify the fix works.", "positive_passages": [{"docid": "doc-en-kubernetes-73cfc07f077ec264d75583fcd4225e594e841957ac66097194029fbaee3f11c0", "text": "timeout := 2 * time.Minute framework.Logf(\"Waiting up to %v for sourceIp test to be executed\", timeout) cmd := fmt.Sprintf(`wget -T 30 -qO- %s:%d | grep client_address`, serviceIp, servicePort) // need timeout mechanism because it may takes more times for iptables to be populated // Need timeout mechanism because it may takes more times for iptables to be populated. for start := time.Now(); time.Since(start) < timeout; time.Sleep(2) { stdout, err = framework.RunHostCmd(execPod.Namespace, execPod.Name, cmd) if err != nil { framework.Logf(\"got err: %v, retry until timeout\", err) continue } // Need to check output because wget -q might omit the error. 
if strings.TrimSpace(stdout) == \"\" { framework.Logf(\"got empty stdout, retry until timeout\") continue } break } ExpectNoError(err) // the stdout return from RunHostCmd seems to come with \"n\", so TrimSpace is needed // desired stdout in this format: client_address=x.x.x.x // The stdout return from RunHostCmd seems to come with \"n\", so TrimSpace is needed. // Desired stdout in this format: client_address=x.x.x.x outputs := strings.Split(strings.TrimSpace(stdout), \"=\") sourceIp := \"\" if len(outputs) != 2 { // fail the test if output format is unexpected // Fail the test if output format is unexpected. framework.Failf(\"exec pod returned unexpected stdout format: [%v]n\", stdout) return execPodIp, \"\" } else { sourceIp = outputs[1] } return execPodIp, outputs[1] return execPodIp, sourceIp }", "commid": "kubernetes_pr_34107"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bd12a4d0d123e25009d08863d65df818502d45139444b90d3cc95fd834eb7a2c", "query": "On a cluster using GCI, I'm not able to create docker containers using the docker CLI. On further introspection, it looks like bridge is being deleted. I consider this a regression. K8s used to allow creating docker containers out of band and in-fact it is a feature that is taken into account as part of k8s node features. To further exacerbate this issue, exists on containervm. We need to match the behavior between GCI and container-vm and ideally stop deleting docker0 bridge. cc\nDuring GCI start up, got deleted to avoid overlapping cidr with . ref:\nthere's no need to delete docker0 afaik. It was deleted on gci master to fix the \"overlapping cidr\" problem, i.e we were supplying to kubelet on GKE master, and this resulted in 2 bridges with the same cidr. At the time I'd suggested passing the correct cidr. Don't remember where this discussion ended. Note that containers on docker0 with/without cni are broken anyway, we used to start docker with --bridge=cbr0, so would result in an ip from cbr0. 
This is no longer the case. We use kubenet to put containers on cbr0, so will result in an ip from docker0, which is not connected to anything else.\nThis shouldn't cause docker crashes? You should still be able to create containers, just no networking. And networking is broken through the docker CLI anyway because of the cni issue mentioned, or are you observing something different?\nwill return error. Have to add\nSince the bridge has been deleted, docker run by default will fail unless one overrides the network ns to be host or none.\nso what we're really losing is the ability to docker run then talk to others (external internet, other pods) without net=host on GCI. This is useful and we should fix it; as mentioned earlier, we probably don't need to delete docker0. docker run + others talking to us is broken because of the cni change, and will be broken even with docker0 (IMO this is fine).\ncc\nYeah. I'd prefer ignoring docker0 as long as it doesn't affect kubelet functionality. , Prashanth B wrote:\nVish pointed out that if docker dies, docker0 comes back. Given that, we should be consistent. Either force docker not to create a bridge at all, EVER, or just ignore docker0. We may even want to consider assigning --bip= 169.254.123.0/24 or something, so we can reliably use 172.* for clusters eventually.
, Vish Kannan wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-d0606c836343780806d9aa462e0a07f2d19efb232d2e3407f3b0aa98169c9dc3", "text": "docker_opts+=\" --log-level=warn\" fi local use_net_plugin=\"true\" if [[ \"${NETWORK_PROVIDER:-}\" != \"kubenet\" && \"${NETWORK_PROVIDER:-}\" != \"cni\" ]]; then if [[ \"${NETWORK_PROVIDER:-}\" == \"kubenet\" || \"${NETWORK_PROVIDER:-}\" == \"cni\" ]]; then # set docker0 cidr to private ip address range to avoid conflict with cbr0 cidr range docker_opts+=\" --bip=169.254.123.1/24\" else use_net_plugin=\"false\" docker_opts+=\" --bridge=cbr0\" fi", "commid": "kubernetes_pr_31637"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bd12a4d0d123e25009d08863d65df818502d45139444b90d3cc95fd834eb7a2c", "query": "On a cluster using GCI, I'm not able to create docker containers using the docker CLI. On further introspection, it looks like bridge is being deleted. I consider this a regression. K8s used to allow creating docker containers out of band and in-fact it is a feature that is taken into account as part of k8s node features. To further exacerbate this issue, exists on containervm. We need to match the behavior between GCI and container-vm and ideally stop deleting docker0 bridge. cc\nDuring GCI start up, got deleted to avoid overlapping cidr with . ref:\nthere's no need to delete docker0 afaik. It was deleted on gci master to fix the \"overlapping cidr\" problem, i.e we were supplying to kubelet on GKE master, and this resulted in 2 bridges with the same cidr. At the time I'd suggested passing the correct cidr. Don't remember where this discussion ended. Note that containers on docker0 with/without cni are broken anyway, we used to start docker with --bridge=cbr0, so would result in an ip from cbr0. This is no longer the case. We use kubenet to put containers on cbr0, so will result in an ip from docker0, which is not connected to nanything else.\nThis shouldn't cause docker crashes? 
you should still be able to create containers, just no networking. And networking is broken through docker CLI anyway because of the cni issue mentioned, or are you observing something different?\nwill return error. Have to add\nSince bridge has been deleted, docker run by default will fail unless one overrides the network ns to be host or none.\nso what we're really losing is the ability to docker run then talk to others (external internet, other pods) without net=host on GCI. This is useful and we should fix it, as mentioned earlier we probably don't need to delete docker0. docker run + others talking to us is broken because of the cni change, and will be broken even with docker0 (IMO this is fine).\ncc\nYeah. I'd prefer ignoring docker0 as long as it doesn't affect kubelet functionality. , Prashanth B wrote:\nVish pointed out that of docker dies, docker0 comes back. Given that we should be consistent. Either force docker not to create a bridge at all, EVER, or just ignore docker0. We may even want to consider assigning --bip= 169.254.123.0/24 or something, so we can reliably use 172.* for clusters eventually. , Vish Kannan wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-5ce277c5b29c24a23fa0113c6d41f8cdc72bc32aeed65bcabc94fc887bd32adb", "text": "WantedBy=multi-user.target EOF # Delete docker0 to avoid interference # Flush iptables nat table iptables -t nat -F || true ip link set docker0 down || true brctl delbr docker0 || true systemctl start kubelet.service }", "commid": "kubernetes_pr_31637"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ce9585d659f1aa3e82e33b4391083ff4b7d6946234ff3a25101fb4ae4755e247", "query": "When I tried to use the following command to build kubernetes: I encountered the following errors: I'm using Ubuntu 14.04 and the code from master branch. will submit a patch then to fix this problem.\nwhoa, I just landed in the exact same place, I am trying to build k8s on a CentOS7 box. 
Tried with removing the unused utils and works like a charm !", "positive_passages": [{"docid": "doc-en-kubernetes-a479fcef92f855d9f8c331b63e287627496db6df67541290b6a2f6ed16ce48b4", "text": "\"os\" \"k8s.io/kubernetes/pkg/healthz\" \"k8s.io/kubernetes/pkg/util\" \"k8s.io/kubernetes/pkg/util/flag\" \"k8s.io/kubernetes/pkg/util/logs\" \"k8s.io/kubernetes/pkg/version/verflag\"", "commid": "kubernetes_pr_31682"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ce9585d659f1aa3e82e33b4391083ff4b7d6946234ff3a25101fb4ae4755e247", "query": "When I tried to use the following command to build kubernetes: I encountered the following errors: I'm using Ubuntu 14.04 and the code from master branch. will submit a patch then to fix this problem.\nwhoa, I just landed in the exact same place, I am trying to build k8s on a CentOS7 box. Tried with removing the unused utils and works like a charm !", "positive_passages": [{"docid": "doc-en-kubernetes-d4e94b3a6a3959c1e6e906a3d4d4cc320186adeb1ac205c01f334c1d114f8dcc", "text": "\"github.com/spf13/pflag\" \"k8s.io/kubernetes/contrib/mesos/pkg/executor/service\" \"k8s.io/kubernetes/contrib/mesos/pkg/hyperkube\" \"k8s.io/kubernetes/pkg/util\" \"k8s.io/kubernetes/pkg/util/flag\" \"k8s.io/kubernetes/pkg/util/logs\" \"k8s.io/kubernetes/pkg/version/verflag\"", "commid": "kubernetes_pr_31682"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ce9585d659f1aa3e82e33b4391083ff4b7d6946234ff3a25101fb4ae4755e247", "query": "When I tried to use the following command to build kubernetes: I encountered the following errors: I'm using Ubuntu 14.04 and the code from master branch. will submit a patch then to fix this problem.\nwhoa, I just landed in the exact same place, I am trying to build k8s on a CentOS7 box. 
Tried with removing the unused utils and works like a charm !", "positive_passages": [{"docid": "doc-en-kubernetes-ab7319771dca689e973b1476f8f62042d648f2178db94897ba1dadf705da52b6", "text": "\"github.com/spf13/pflag\" \"k8s.io/kubernetes/contrib/mesos/pkg/hyperkube\" \"k8s.io/kubernetes/contrib/mesos/pkg/scheduler/service\" \"k8s.io/kubernetes/pkg/util\" \"k8s.io/kubernetes/pkg/util/flag\" \"k8s.io/kubernetes/pkg/util/logs\" \"k8s.io/kubernetes/pkg/version/verflag\"", "commid": "kubernetes_pr_31682"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3b5c5f4907e997aaf8daa53f4c0daf41c3d1e2651d7a7af8d3943232c2d64d40", "query": "liveness probe was introduced in Heapster seems to work correctly so we should either fix it or revert liveness probe.\ncc\nport seems to be wrong", "positive_passages": [{"docid": "doc-en-kubernetes-83302a99ab452b94f286e986bdc6b925693a83056cf52096a255d7aeab06a467", "text": "livenessProbe: httpGet: path: /healthz port: 8080 port: 8082 scheme: HTTP initialDelaySeconds: 180 timeoutSeconds: 5", "commid": "kubernetes_pr_31961"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6d7f7bc8962ffccb3a122ebb5d170bbec0359b559f947a84eb028587481ba4c8", "query": "introduced a new requirement for node names to be DNS1123Labels. In case of hack/local-up-, an IP address is used as node name and thus endpoints aren't created for services, e.g. kube-dns doesn't work anymore. There are other projects that use IP addresses as node names for running simplified k8s dev environment: Probably there are other projects where it's convenient to have just an IP address as a node name. Verified against . On affected installs, service latency e2e test hangs, see e.g. Example ( some newlines):", "positive_passages": [{"docid": "doc-en-kubernetes-e23ef35c0c92fa40e582033a1dc418822df40ea7cb4fa7191f37ac0f7a0ad07a", "text": "if len(address.Hostname) > 0 { allErrs = append(allErrs, ValidateDNS1123Label(address.Hostname, fldPath.Child(\"hostname\"))...) 
} // During endpoint update, validate NodeName is DNS1123 compliant and transition rules allow the update // During endpoint update, verify that NodeName is a DNS subdomain and transition rules allow the update if address.NodeName != nil { allErrs = append(allErrs, ValidateDNS1123Label(*address.NodeName, fldPath.Child(\"nodeName\"))...) for _, msg := range ValidateNodeName(*address.NodeName, false) { allErrs = append(allErrs, field.Invalid(fldPath.Child(\"nodeName\"), *address.NodeName, msg)) } } allErrs = append(allErrs, validateEpAddrNodeNameTransition(address, ipToNodeName, fldPath.Child(\"nodeName\"))...) if len(allErrs) > 0 {", "commid": "kubernetes_pr_32052"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6d7f7bc8962ffccb3a122ebb5d170bbec0359b559f947a84eb028587481ba4c8", "query": "introduced a new requirement for node names to be DNS1123Labels. In case of hack/local-up-, an IP address is used as node name and thus endpoints aren't created for services, e.g. kube-dns doesn't work anymore. There are other projects that use IP addresses as node names for running simplified k8s dev environment: Probably there are other projects where it's convenient to have just an IP address as a node name. Verified against . On affected installs, service latency e2e test hangs, see e.g. 
Example ( some newlines):", "positive_passages": [{"docid": "doc-en-kubernetes-d1f2d8ba982bdf760f6c4791f288aa280e32514d2650dd5b67d56c5d5948c414", "text": "} } func TestEndpointAddressNodeNameInvalidDNS1123(t *testing.T) { func TestEndpointAddressNodeNameInvalidDNSSubdomain(t *testing.T) { // Check NodeName DNS validation endpoint := newNodeNameEndpoint(\"illegal.nodename\") endpoint := newNodeNameEndpoint(\"illegal*.nodename\") errList := ValidateEndpoints(endpoint) if len(errList) == 0 { t.Error(\"Endpoint should reject invalid NodeName\") } } func TestEndpointAddressNodeNameCanBeAnIPAddress(t *testing.T) { endpoint := newNodeNameEndpoint(\"10.10.1.1\") errList := ValidateEndpoints(endpoint) if len(errList) != 0 { t.Error(\"Endpoint should accept a NodeName that is an IP address\") } } ", "commid": "kubernetes_pr_32052"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d0585853c1eef197c56eb9e796f54145c38efffe62a23c566467505aa4e95a99", "query": "During a manual upgrade test, described in , a kubelet failed to communicate via SSL to its master. This is currently blocking the 1.4 release. There is not a repro for this yet. The cluster was running 1.3, and was upgraded to 1.4 successfully, but upon downgrading to 1.3, the kubelets one one or more machines are not able to talk to the master. Current kubelet version is . (EDITED) The error when talking to the master is (lots like this): I think this means that the kubelet cannot verify the master's cert. I can't find the master's cert. Since there is not flag to kubelet to tell it where to find its config, I assume it is using the default location, which is . It seems wrong that clusters[0].cluster.certificate-authority-data is empty. That should have the cert for the master, right? 
I see this thing: but I don't understand what process generates , and if, presumably, from that template, why it isn't setting the env var CA_CERT.\nin case this is related to kubelet bootstrap changes in 1.4.\nCurrently node is :\nCorrection. The node is running 1.4.0-beta.5\nIf the kubelet hasn't set the then it will not go through the bootstrap code path. Is the kubeconfig provided through the bootstrap flag or the kubeconfig flag?\nNot set. For reference, args are:\nQuestion: Could you please tell me why your nodes are running with GCI image at the first place? The test is starting with 1.3 release, which is using old debian-based containervm image, and I thought doesn't change the image at all (separate topic).\nIt does change to gci on gce with the default upgrade scripts, i brought this up on an email thread. I'm guessing something was dropped when we translated salt grains (containervm) to gci env vars.\nWhat the default upgrade scripts you were referring to? I just double-checked with test cluster which was upgrade to 1.4 through , the node is stay with debian-based containervm, but Kubernetes version is bumped up to 1.4-beta.X\nOk, I am assigning this to to confirm. If upgrade logic from 1.3 to 1.4 changes the underneath os, that is wrong. I strongly believe this is just a simple operational error.\nMy understanding is that ignores base images altogether. , Dawn Chen wrote:\nHmm\nI've tried and the node image did get updated to GCI. Before upgrade to v1.4: After upgrade to v1.4: After downgrade to v1.3: /cc\nhelped me debug this. He found that the metadata for the instance did not include the CA_CERT in it. That seems wrong.\nRenamed from \"x509 failure for kubelet after upgrade test\"\nYeah, that's the error you'd get if the proper ca isn't loaded.\nWill we get this problem if we don't upgrade to GCI (i.e. we stay with CVM) in -N? I guess we can try it and find out.\nManual upgrades have been successful if we don't switch image types. 
, David Oppenheimer < :\nre: If you don't change the underneath image during the upgrade, the issue shouldn't occur.", "positive_passages": [{"docid": "doc-en-kubernetes-16c0ef436a98f55d88b5948bf9e69c7b590706e841dbb758076b9dc80b378cd5", "text": "function usage() { echo \"!!! EXPERIMENTAL !!!\" echo \"\" echo \"${0} [-M|-N|-P] -l | \" echo \"${0} [-M|-N|-P] -l -o | \" echo \" Upgrades master and nodes by default\" echo \" -M: Upgrade master only\" echo \" -N: Upgrade nodes only\" echo \" -P: Node upgrade prerequisites only (create a new instance template)\" echo \" -o: Use os distro sepcified in KUBE_NODE_OS_DISTRIBUTION for new nodes\" echo \" -l: Use local(dev) binaries\" echo \"\" echo ' Version number or publication is either a proper version number'", "commid": "kubernetes_pr_32840"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d0585853c1eef197c56eb9e796f54145c38efffe62a23c566467505aa4e95a99", "query": "During a manual upgrade test, described in , a kubelet failed to communicate via SSL to its master. This is currently blocking the 1.4 release. There is not a repro for this yet. The cluster was running 1.3, and was upgraded to 1.4 successfully, but upon downgrading to 1.3, the kubelets one one or more machines are not able to talk to the master. Current kubelet version is . (EDITED) The error when talking to the master is (lots like this): I think this means that the kubelet cannot verify the master's cert. I can't find the master's cert. Since there is not flag to kubelet to tell it where to find its config, I assume it is using the default location, which is . It seems wrong that clusters[0].cluster.certificate-authority-data is empty. That should have the cert for the master, right? I see this thing: but I don't understand what process generates , and if, presumably, from that template, why it isn't setting the env var CA_CERT.\nin case this is related to kubelet bootstrap changes in 1.4.\nCurrently node is :\nCorrection. 
The node is running 1.4.0-beta.5\nIf the kubelet hasn't set the then it will not go through the bootstrap code path. Is the kubeconfig provided through the bootstrap flag or the kubeconfig flag?\nNot set. For reference, args are:\nQuestion: Could you please tell me why your nodes are running with GCI image at the first place? The test is starting with 1.3 release, which is using old debian-based containervm image, and I thought doesn't change the image at all (separate topic).\nIt does change to gci on gce with the default upgrade scripts, i brought this up on an email thread. I'm guessing something was dropped when we translated salt grains (containervm) to gci env vars.\nWhat the default upgrade scripts you were referring to? I just double-checked with test cluster which was upgrade to 1.4 through , the node is stay with debian-based containervm, but Kubernetes version is bumped up to 1.4-beta.X\nOk, I am assigning this to to confirm. If upgrade logic from 1.3 to 1.4 changes the underneath os, that is wrong. I strongly believe this is just a simple operational error.\nMy understanding is that ignores base images altogether. , Dawn Chen wrote:\nHmm\nI've tried and the node image did get updated to GCI. Before upgrade to v1.4: After upgrade to v1.4: After downgrade to v1.3: /cc\nhelped me debug this. He found that the metadata for the instance did not include the CA_CERT in it. That seems wrong.\nRenamed from \"x509 failure for kubelet after upgrade test\"\nYeah, that's the error you'd get if the proper ca isn't loaded.\nWill we get this problem if we don't upgrade to GCI (i.e. we stay with CVM) in -N? I guess we can try it and find out.\nManual upgrades have been successful if we don't switch image types. 
, David Oppenheimer < :\nre: If you don't change the underneath image during the upgrade, the issue shouldn't occur.", "positive_passages": [{"docid": "doc-en-kubernetes-8c5c2d6ee689fd4f32f4adace332f17a3c0eac4a73d2acce8952be7233b0197c", "text": "'http://metadata/computeMetadata/v1/instance/attributes/kube-env'\" 2>/dev/null } # Read os distro information from /os/release on node. # $1: The name of node # # Assumed vars: # PROJECT # ZONE function get-node-os() { gcloud compute ssh \"$1\" --project \"${PROJECT}\" --zone \"${ZONE}\" --command \"cat /etc/os-release | grep \"^ID=.*\" | cut -c 4-\" } # Assumed vars: # KUBE_VERSION # NODE_SCOPES", "commid": "kubernetes_pr_32840"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d0585853c1eef197c56eb9e796f54145c38efffe62a23c566467505aa4e95a99", "query": "During a manual upgrade test, described in , a kubelet failed to communicate via SSL to its master. This is currently blocking the 1.4 release. There is not a repro for this yet. The cluster was running 1.3, and was upgraded to 1.4 successfully, but upon downgrading to 1.3, the kubelets one one or more machines are not able to talk to the master. Current kubelet version is . (EDITED) The error when talking to the master is (lots like this): I think this means that the kubelet cannot verify the master's cert. I can't find the master's cert. Since there is not flag to kubelet to tell it where to find its config, I assume it is using the default location, which is . It seems wrong that clusters[0].cluster.certificate-authority-data is empty. That should have the cert for the master, right? I see this thing: but I don't understand what process generates , and if, presumably, from that template, why it isn't setting the env var CA_CERT.\nin case this is related to kubelet bootstrap changes in 1.4.\nCurrently node is :\nCorrection. The node is running 1.4.0-beta.5\nIf the kubelet hasn't set the then it will not go through the bootstrap code path. 
Is the kubeconfig provided through the bootstrap flag or the kubeconfig flag?\nNot set. For reference, args are:\nQuestion: Could you please tell me why your nodes are running with GCI image at the first place? The test is starting with 1.3 release, which is using old debian-based containervm image, and I thought doesn't change the image at all (separate topic).\nIt does change to gci on gce with the default upgrade scripts, i brought this up on an email thread. I'm guessing something was dropped when we translated salt grains (containervm) to gci env vars.\nWhat the default upgrade scripts you were referring to? I just double-checked with test cluster which was upgrade to 1.4 through , the node is stay with debian-based containervm, but Kubernetes version is bumped up to 1.4-beta.X\nOk, I am assigning this to to confirm. If upgrade logic from 1.3 to 1.4 changes the underneath os, that is wrong. I strongly believe this is just a simple operational error.\nMy understanding is that ignores base images altogether. , Dawn Chen wrote:\nHmm\nI've tried and the node image did get updated to GCI. Before upgrade to v1.4: After upgrade to v1.4: After downgrade to v1.3: /cc\nhelped me debug this. He found that the metadata for the instance did not include the CA_CERT in it. That seems wrong.\nRenamed from \"x509 failure for kubelet after upgrade test\"\nYeah, that's the error you'd get if the proper ca isn't loaded.\nWill we get this problem if we don't upgrade to GCI (i.e. we stay with CVM) in -N? I guess we can try it and find out.\nManual upgrades have been successful if we don't switch image types. , David Oppenheimer < :\nre: If you don't change the underneath image during the upgrade, the issue shouldn't occur.", "positive_passages": [{"docid": "doc-en-kubernetes-fb4d73b5db05b095052d2b22d5f371888573b15e31bca7d1579751748d352914", "text": "# compatible way? 
write-node-env if [[ \"${env_os_distro}\" == \"false\" ]]; then NODE_OS_DISTRIBUTION=$(get-node-os \"${NODE_NAMES[0]}\") source \"${KUBE_ROOT}/cluster/gce/${NODE_OS_DISTRIBUTION}/node-helper.sh\" # Reset the node image based on current os distro set-node-image fi # TODO(zmerlynn): Get configure-vm script from ${version}. (Must plumb this # through all create-node-instance-template implementations). local template_name=$(get-template-name-from-version ${SANITIZED_VERSION})", "commid": "kubernetes_pr_32840"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d0585853c1eef197c56eb9e796f54145c38efffe62a23c566467505aa4e95a99", "query": "During a manual upgrade test, described in , a kubelet failed to communicate via SSL to its master. This is currently blocking the 1.4 release. There is not a repro for this yet. The cluster was running 1.3, and was upgraded to 1.4 successfully, but upon downgrading to 1.3, the kubelets one one or more machines are not able to talk to the master. Current kubelet version is . (EDITED) The error when talking to the master is (lots like this): I think this means that the kubelet cannot verify the master's cert. I can't find the master's cert. Since there is not flag to kubelet to tell it where to find its config, I assume it is using the default location, which is . It seems wrong that clusters[0].cluster.certificate-authority-data is empty. That should have the cert for the master, right? I see this thing: but I don't understand what process generates , and if, presumably, from that template, why it isn't setting the env var CA_CERT.\nin case this is related to kubelet bootstrap changes in 1.4.\nCurrently node is :\nCorrection. The node is running 1.4.0-beta.5\nIf the kubelet hasn't set the then it will not go through the bootstrap code path. Is the kubeconfig provided through the bootstrap flag or the kubeconfig flag?\nNot set. 
For reference, args are:\nQuestion: Could you please tell me why your nodes are running with GCI image at the first place? The test is starting with 1.3 release, which is using old debian-based containervm image, and I thought doesn't change the image at all (separate topic).\nIt does change to gci on gce with the default upgrade scripts, i brought this up on an email thread. I'm guessing something was dropped when we translated salt grains (containervm) to gci env vars.\nWhat the default upgrade scripts you were referring to? I just double-checked with test cluster which was upgrade to 1.4 through , the node is stay with debian-based containervm, but Kubernetes version is bumped up to 1.4-beta.X\nOk, I am assigning this to to confirm. If upgrade logic from 1.3 to 1.4 changes the underneath os, that is wrong. I strongly believe this is just a simple operational error.\nMy understanding is that ignores base images altogether. , Dawn Chen wrote:\nHmm\nI've tried and the node image did get updated to GCI. Before upgrade to v1.4: After upgrade to v1.4: After downgrade to v1.3: /cc\nhelped me debug this. He found that the metadata for the instance did not include the CA_CERT in it. That seems wrong.\nRenamed from \"x509 failure for kubelet after upgrade test\"\nYeah, that's the error you'd get if the proper ca isn't loaded.\nWill we get this problem if we don't upgrade to GCI (i.e. we stay with CVM) in -N? I guess we can try it and find out.\nManual upgrades have been successful if we don't switch image types. 
, David Oppenheimer < :\nre: If you don't change the underneath image during the upgrade, the issue shouldn't occur.", "positive_passages": [{"docid": "doc-en-kubernetes-0507b246aa02ba08dc6913f9be80af68317891f83f136f9e2086b06cd9df1ee3", "text": "node_upgrade=true node_prereqs=false local_binaries=false env_os_distro=false while getopts \":MNPlh\" opt; do case ${opt} in", "commid": "kubernetes_pr_32840"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d0585853c1eef197c56eb9e796f54145c38efffe62a23c566467505aa4e95a99", "query": "During a manual upgrade test, described in , a kubelet failed to communicate via SSL to its master. This is currently blocking the 1.4 release. There is not a repro for this yet. The cluster was running 1.3, and was upgraded to 1.4 successfully, but upon downgrading to 1.3, the kubelets one one or more machines are not able to talk to the master. Current kubelet version is . (EDITED) The error when talking to the master is (lots like this): I think this means that the kubelet cannot verify the master's cert. I can't find the master's cert. Since there is not flag to kubelet to tell it where to find its config, I assume it is using the default location, which is . It seems wrong that clusters[0].cluster.certificate-authority-data is empty. That should have the cert for the master, right? I see this thing: but I don't understand what process generates , and if, presumably, from that template, why it isn't setting the env var CA_CERT.\nin case this is related to kubelet bootstrap changes in 1.4.\nCurrently node is :\nCorrection. The node is running 1.4.0-beta.5\nIf the kubelet hasn't set the then it will not go through the bootstrap code path. Is the kubeconfig provided through the bootstrap flag or the kubeconfig flag?\nNot set. For reference, args are:\nQuestion: Could you please tell me why your nodes are running with GCI image at the first place? 
The test is starting with 1.3 release, which is using old debian-based containervm image, and I thought doesn't change the image at all (separate topic).\nIt does change to gci on gce with the default upgrade scripts, i brought this up on an email thread. I'm guessing something was dropped when we translated salt grains (containervm) to gci env vars.\nWhat the default upgrade scripts you were referring to? I just double-checked with test cluster which was upgrade to 1.4 through , the node is stay with debian-based containervm, but Kubernetes version is bumped up to 1.4-beta.X\nOk, I am assigning this to to confirm. If upgrade logic from 1.3 to 1.4 changes the underneath os, that is wrong. I strongly believe this is just a simple operational error.\nMy understanding is that ignores base images altogether. , Dawn Chen wrote:\nHmm\nI've tried and the node image did get updated to GCI. Before upgrade to v1.4: After upgrade to v1.4: After downgrade to v1.3: /cc\nhelped me debug this. He found that the metadata for the instance did not include the CA_CERT in it. That seems wrong.\nRenamed from \"x509 failure for kubelet after upgrade test\"\nYeah, that's the error you'd get if the proper ca isn't loaded.\nWill we get this problem if we don't upgrade to GCI (i.e. we stay with CVM) in -N? I guess we can try it and find out.\nManual upgrades have been successful if we don't switch image types. , David Oppenheimer < :\nre: If you don't change the underneath image during the upgrade, the issue shouldn't occur.", "positive_passages": [{"docid": "doc-en-kubernetes-317045b41310f9debc1325fdac6086fdb7d7724e6dcaa079ca5d70866d3a8498", "text": "l) local_binaries=true ;; o) env_os_distro=true ;; h) usage exit 0", "commid": "kubernetes_pr_32840"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d0585853c1eef197c56eb9e796f54145c38efffe62a23c566467505aa4e95a99", "query": "During a manual upgrade test, described in , a kubelet failed to communicate via SSL to its master. 
This is currently blocking the 1.4 release. There is no repro for this yet. The cluster was running 1.3, and was upgraded to 1.4 successfully, but upon downgrading to 1.3, the kubelets on one or more machines are not able to talk to the master. Current kubelet version is . (EDITED) The error when talking to the master is (lots like this): I think this means that the kubelet cannot verify the master's cert. I can't find the master's cert. Since there is no flag to the kubelet to tell it where to find its config, I assume it is using the default location, which is . It seems wrong that clusters[0].cluster.certificate-authority-data is empty. That should have the cert for the master, right? I see this thing: but I don't understand what process generates , and if, presumably, from that template, why it isn't setting the env var CA_CERT.\nin case this is related to kubelet bootstrap changes in 1.4.\nCurrently node is :\nCorrection. The node is running 1.4.0-beta.5\nIf the kubelet hasn't set the then it will not go through the bootstrap code path. Is the kubeconfig provided through the bootstrap flag or the kubeconfig flag?\nNot set. For reference, args are:\nQuestion: Could you please tell me why your nodes are running with GCI image in the first place? The test is starting with 1.3 release, which is using old debian-based containervm image, and I thought doesn't change the image at all (separate topic).\nIt does change to gci on gce with the default upgrade scripts, I brought this up on an email thread. I'm guessing something was dropped when we translated salt grains (containervm) to gci env vars.\nWhat are the default upgrade scripts you were referring to? I just double-checked with test cluster which was upgraded to 1.4 through , the node stays with debian-based containervm, but Kubernetes version is bumped up to 1.4-beta.X\nOk, I am assigning this to to confirm. If upgrade logic from 1.3 to 1.4 changes the underlying OS, that is wrong. 
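The empty certificate-authority-data symptom described in this thread can be spotted with a plain-text check. The sketch below is illustrative only — it fabricates a minimal kubeconfig with the broken shape rather than reading a real kubelet's file, and the server address and file path are made up:

```shell
# Build a sample kubeconfig with the reported broken shape:
# clusters[0].cluster.certificate-authority-data is empty.
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ""
    server: https://k8s-master
  name: local
EOF

# Extract the field; an empty value means the kubelet has no CA bundle
# to verify the apiserver's serving cert, which yields x509 errors.
ca=$(grep 'certificate-authority-data:' "$kubeconfig" | sed 's/.*: *//; s/"//g')
if [ -z "$ca" ]; then
  echo "certificate-authority-data is empty: apiserver cert cannot be verified"
else
  echo "CA bundle present (${#ca} base64 chars)"
fi
```

On a real node the same grep/sed pair could be pointed at the kubelet's actual kubeconfig path instead of the sample file.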
I strongly believe this is just a simple operational error.\nMy understanding is that ignores base images altogether. , Dawn Chen wrote:\nHmm\nI've tried and the node image did get updated to GCI. Before upgrade to v1.4: After upgrade to v1.4: After downgrade to v1.3: /cc\nhelped me debug this. He found that the metadata for the instance did not include the CA_CERT in it. That seems wrong.\nRenamed from \"x509 failure for kubelet after upgrade test\"\nYeah, that's the error you'd get if the proper ca isn't loaded.\nWill we get this problem if we don't upgrade to GCI (i.e. we stay with CVM) in -N? I guess we can try it and find out.\nManual upgrades have been successful if we don't switch image types. , David Oppenheimer < :\nre: If you don't change the underneath image during the upgrade, the issue shouldn't occur.", "positive_passages": [{"docid": "doc-en-kubernetes-0614f79dcd8e9a09815612ac925c66b4eecd797f8fc1a36b406c21d97fd77c66", "text": "MASTER_IMAGE_PROJECT=${KUBE_GCE_MASTER_PROJECT:-google-containers} fi if [[ \"${NODE_OS_DISTRIBUTION}\" == \"gci\" ]]; then # If the node image is not set, we use the latest GCI image. # Otherwise, we respect whatever is set by the user. NODE_IMAGE=${KUBE_GCE_NODE_IMAGE:-${GCI_VERSION}} NODE_IMAGE_PROJECT=${KUBE_GCE_NODE_PROJECT:-google-containers} elif [[ \"${NODE_OS_DISTRIBUTION}\" == \"debian\" ]]; then NODE_IMAGE=${KUBE_GCE_NODE_IMAGE:-${CVM_VERSION}} NODE_IMAGE_PROJECT=${KUBE_GCE_NODE_PROJECT:-google-containers} fi # Sets node image based on the specified os distro. Currently this function only # supports gci and debian. function set-node-image() { if [[ \"${NODE_OS_DISTRIBUTION}\" == \"gci\" ]]; then # If the node image is not set, we use the latest GCI image. # Otherwise, we respect whatever is set by the user. 
NODE_IMAGE=${KUBE_GCE_NODE_IMAGE:-${GCI_VERSION}} NODE_IMAGE_PROJECT=${KUBE_GCE_NODE_PROJECT:-google-containers} elif [[ \"${NODE_OS_DISTRIBUTION}\" == \"debian\" ]]; then NODE_IMAGE=${KUBE_GCE_NODE_IMAGE:-${CVM_VERSION}} NODE_IMAGE_PROJECT=${KUBE_GCE_NODE_PROJECT:-google-containers} fi } set-node-image # Verfiy cluster autoscaler configuration. if [[ \"${ENABLE_CLUSTER_AUTOSCALER}\" == \"true\" ]]; then", "commid": "kubernetes_pr_32840"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e62f6c1056f3c575902ec60b40c73397e20805db4e60e6464c072402ee00112", "query": "./hack/e2e- line 1192: local: `replica-pd=jenkins-us-central1-b-master-pd': not a valid identifier FYI - this looks related to your recent work? I don't see any obvious PR to blame, but I seem to remember you guys touching this part of the code recently? BTW, I deleted a bunch of leaked resources in the relevant test project on Friday (9/16), assuming that the test scripts would re-create whatever they needed. It's possible (but unlikely) that this has anything to do with that.\nHere are some example logs to aid debugging: https://k8s-\nI believe this to be the source of dysfunction: Which is turn caused by what you found, which causes the disk to not be cleaned up ahead of time: You can try this out in your local shell: git blames leads us to this recent commit: cc the author The root problem is unfortunatley very annoying: this is how bash tells you that variable names (in this case, $replica-pd) cannot contain dashes!\nfix is up in\nAwesome many thanks. I'll make sure it gets merged ASAP.", "positive_passages": [{"docid": "doc-en-kubernetes-dfc6ad9d9accc5787d6c0001e56a180791c451a8416ddffddce72e2555ffd34e", "text": "# Delete the master replica pd (possibly leaked by kube-up if master create failed). 
# TODO(jszczepkowski): remove also possibly leaked replicas' pds local -r replica-pd=\"${REPLICA_NAME:-${MASTER_NAME}}-pd\" if gcloud compute disks describe \"${replica-pd}\" --zone \"${ZONE}\" --project \"${PROJECT}\" &>/dev/null; then local -r replica_pd=\"${REPLICA_NAME:-${MASTER_NAME}}-pd\" if gcloud compute disks describe \"${replica_pd}\" --zone \"${ZONE}\" --project \"${PROJECT}\" &>/dev/null; then gcloud compute disks delete --project \"${PROJECT}\" --quiet --zone \"${ZONE}\" \"${replica-pd}\" \"${replica_pd}\" fi # Delete disk for cluster registry if enabled", "commid": "kubernetes_pr_33039"}], "negative_passages": []} {"query_id": "q-en-kubernetes-97f44e21cc78f43605c4bf074e1d4149cbd6d9ee084c3fb8b7a55d290addbb40", "query": "After PR , I did a bit experiments on kube-dns graceful termination and got some unintended result as below: I guess it is because the other container was terminated before . DNS query from clients failed because is the actual connection point. After a bit discussion with and two solutions for this are: another program wrap up the dnsmasq and capture the SIGTERM signal. out a way to modify the sig mask of process without touching the original codes.\nSomething I have been meaning to try: Put in Dockerfile Build and push image We should deliver SIGCONT rather than SIGTERM. I did a quick test and it seems to work. Otherwise trapping SIGTERM in the shell and ignoring it is well documented... , Zihong Zheng wrote:\nThanks, I will take a look and test these.", "positive_passages": [{"docid": "doc-en-kubernetes-4d34a948dbd42d950fe10b33fe115cead3c679add0fe34ad03522a731f4f878f", "text": " # This Dockerfile will build an image that is configured # to run Fluentd with an Elasticsearch plug-in and the # provided configuration file. # TODO(satnam6502): Use a lighter base image, e.g. some form of busybox. # The image acts as an executable for the binary /usr/sbin/td-agent. 
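The root cause called out in the replica-pd thread above — bash identifiers cannot contain dashes — reproduces in isolation. This is an illustrative sketch, not code from the PR:

```shell
# The buggy form: a dash in a variable name is not a valid bash identifier,
# so `local` rejects it, just like the e2e log showed.
err=$(bash -c 'f() { local -r replica-pd="master-pd"; }; f' 2>&1) || true
echo "$err"

# The fix renames the variable with an underscore, which is a valid identifier.
ok=$(bash -c 'f() { local -r replica_pd="master-pd"; echo "$replica_pd"; }; f')
echo "$ok"   # master-pd
```

The same constraint applies to plain assignments, not just `local`, which is why the fix renames the variable everywhere it is referenced.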
# Note that fluentd is run with root permssion to allow access to # log files with root only access under /var/lib/docker/containers/* # Please see http://docs.fluentd.org/articles/install-by-deb for more # information about installing fluentd using deb package. FROM ubuntu:14.04 MAINTAINER Satnam Singh \"satnam@google.com\" # Ensure there are enough file descriptors for running Fluentd. RUN ulimit -n 65536 # Install prerequisites. RUN apt-get update && apt-get install -y curl && apt-get install -y -q libcurl4-openssl-dev make && apt-get clean # Install Fluentd. RUN /usr/bin/curl -L http://toolbelt.treasuredata.com/sh/install-ubuntu-trusty-td-agent2.sh | sh # Change the default user and group to root. # Needed to allow access to /var/log/docker/... files. RUN sed -i -e \"s/USER=td-agent/USER=root/\" -e \"s/GROUP=td-agent/GROUP=root/\" /etc/init.d/td-agent # Install the Elasticsearch Fluentd plug-in. RUN /usr/sbin/td-agent-gem install fluent-plugin-elasticsearch # Copy the Fluentd configuration file. COPY td-agent.conf /etc/td-agent/td-agent.conf # Copy a script that determines the name of the host machine # and then patch the Fluentd configuration files and then # run Fluentd in the foreground. ADD run.sh /run.sh # Always run the this setup script. ENTRYPOINT [\"/run.sh\"] ", "commid": "kubernetes_pr_1756"}], "negative_passages": []} {"query_id": "q-en-kubernetes-97f44e21cc78f43605c4bf074e1d4149cbd6d9ee084c3fb8b7a55d290addbb40", "query": "After PR , I did a bit experiments on kube-dns graceful termination and got some unintended result as below: I guess it is because the other container was terminated before . DNS query from clients failed because is the actual connection point. After a bit discussion with and two solutions for this are: another program wrap up the dnsmasq and capture the SIGTERM signal. 
out a way to modify the sig mask of process without touching the original codes.\nSomething I have been meaning to try: Put in Dockerfile Build and push image We should deliver SIGCONT rather than SIGTERM. I did a quick test and it seems to work. Otherwise trapping SIGTERM in the shell and ignoring it is well documented... , Zihong Zheng wrote:\nThanks, I will take a look and test these.", "positive_passages": [{"docid": "doc-en-kubernetes-dab31027122866f58e8e938ce5ec4117f20ea478734d791209d9d00a6fd46d81", "text": " #!/bin/bash # Copyright 2014 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Build the fluentd-elasticsearch image and push # to google/fluentd-elasticsearch. sudo docker build -t kubernetes/fluentd-elasticsearch . sudo docker push kubernetes/fluentd-elasticsearch ", "commid": "kubernetes_pr_1756"}], "negative_passages": []} {"query_id": "q-en-kubernetes-97f44e21cc78f43605c4bf074e1d4149cbd6d9ee084c3fb8b7a55d290addbb40", "query": "After PR , I did a bit experiments on kube-dns graceful termination and got some unintended result as below: I guess it is because the other container was terminated before . DNS query from clients failed because is the actual connection point. After a bit discussion with and two solutions for this are: another program wrap up the dnsmasq and capture the SIGTERM signal. 
out a way to modify the sig mask of process without touching the original codes.\nSomething I have been meaning to try: Put in Dockerfile Build and push image We should deliver SIGCONT rather than SIGTERM. I did a quick test and it seems to work. Otherwise trapping SIGTERM in the shell and ignoring it is well documented... , Zihong Zheng wrote:\nThanks, I will take a look and test these.", "positive_passages": [{"docid": "doc-en-kubernetes-70110d5c6ffc0451f07001914705172d394a99b071c33120c0d3da9869671711", "text": " #!/bin/bash # Copyright 2014 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # WARNING! HORRIBLE HACK! We expect /outerhost to be mapped to # the enclosing /etc/host file so we can determine the name of # the host machine (super fragile). This is a temporary hack until # service IPs are done. OUTER_HOST=`tail -n 1 /outerhost | awk '{print $3}'` # Copy the Fluentd config file and patch it to refer to the # name of the host machine for ES_HOST. HACK! cp td-agent.conf /etc/td-agent sed -i -e \"s/ES_HOST/${OUTER_HOST}/\" /etc/td-agent/td-agent.conf /usr/sbin/td-agent ", "commid": "kubernetes_pr_1756"}], "negative_passages": []} {"query_id": "q-en-kubernetes-97f44e21cc78f43605c4bf074e1d4149cbd6d9ee084c3fb8b7a55d290addbb40", "query": "After PR , I did a bit experiments on kube-dns graceful termination and got some unintended result as below: I guess it is because the other container was terminated before . 
DNS query from clients failed because is the actual connection point. After a bit discussion with and two solutions for this are: another program wrap up the dnsmasq and capture the SIGTERM signal. out a way to modify the sig mask of process without touching the original codes.\nSomething I have been meaning to try: Put in Dockerfile Build and push image We should deliver SIGCONT rather than SIGTERM. I did a quick test and it seems to work. Otherwise trapping SIGTERM in the shell and ignoring it is well documented... , Zihong Zheng wrote:\nThanks, I will take a look and test these.", "positive_passages": [{"docid": "doc-en-kubernetes-dac7ce3b756390a961a3487c79ac685ec5848ede5f1417c03ecb0bf659467b79", "text": " # This configuration file for Fluentd / td-agent is used # to watch changes to Docker log files that live in the # directory /var/lib/docker/containers/ which are then submitted to # Elasticsearch (running on the machine ES_HOST:9200) which # assumes the installation of the fluentd-elasticsearch plug-in. # See https://github.com/uken/fluent-plugin-elasticsearch for # more information about the plug-in. This file needs to be # patched to replace ES_HOST with the name of the actual # machine running Elasticsearch. # Maintainer: Satnam Singh (satnam@google.com) # # Exampe # ====== # A line in the Docker log file might like like this JSON: # # {\"log\":\"2014/09/25 21:15:03 Got request with path wombatn\", # \"stream\":\"stderr\", # \"time\":\"2014-09-25T21:15:03.499185026Z\"} # # The time_format specification below makes sure we properly # parse the time format produced by Docker. This will be # submitted to Elasticsearch and should appear like: # $ curl 'http://elasticsearch:9200/_search?pretty' # ... 
# { # \"_index\" : \"logstash-2014.09.25\", # \"_type\" : \"fluentd\", # \"_id\" : \"VBrbor2QTuGpsQyTCdfzqA\", # \"_score\" : 1.0, # \"_source\":{\"log\":\"2014/09/25 22:45:50 Got request with path wombatn\", # \"stream\":\"stderr\",\"tag\":\"docker.container.all\", # \"@timestamp\":\"2014-09-25T22:45:50+00:00\"} # }, # ... type tail format json time_key time path /var/lib/docker/containers/*/*-json.log time_format %Y-%m-%dT%H:%M:%S tag docker.container.all type elasticsearch log_level info include_tag_key true host ES_HOST port 9200 logstash_format true flush_interval 5s ", "commid": "kubernetes_pr_1756"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e62d168d1db3d76f5f6327d88acb5a7be21698d985e9cda01df5be46fc45acd4", "query": "Kubernetes version (use ): 1.3.7 Environment: Cloud provider or hardware configuration: Bare-metal Ubuntu cluster OS (e.g. from /etc/os-release): Ubuntu 14.04.3 LTS Kernel (e.g. ): 3.19.0-47-generic Install tools: KUBERNETESPROVIDER=ubuntu ./kube- Others: Docker version 1.12.1 What happened: appends the same copy of each time we start and stop Kubernetes cluster (using Ubuntu's & ). For example, if we start Kubernetes cluster the first time and restart it again, the content of is doubled: This cause the following error when starting Kubernetes as seen in : What you expected to happen: content should not be concatenated every time Kubernetes is restarted. How to reproduce it (as minimally and precisely as possible): Start Kubernetes, stop it, and start it again, using and . Anything else do we need to know: One possible fix is to remove the in the following line in :\nThis is a bug and easy to fix. 
I'll open a PR to fix it\nBug has been fixed.", "positive_passages": [{"docid": "doc-en-kubernetes-acf7d4c212138fc6d10860cf50d207e6d1afa3901f85c7178da53b847b6f63e9", "text": "source /run/flannel/subnet.env source /etc/default/docker echo DOCKER_OPTS=\"${DOCKER_OPTS} -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock echo DOCKER_OPTS=\" -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\" > /etc/default/docker sudo service docker restart }", "commid": "kubernetes_pr_33163"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-4c7a504e9ce3d4483cf43ffc319706342208fcc8602e0566f5cb69ccedc29a4e", "text": "if len(tc.plugin) != 0 { c.AuthProvider = &clientcmdapi.AuthProviderConfig{Name: tc.plugin} } tConfig, err := c.transportConfig() tConfig, err := c.TransportConfig() if err != nil { // Unknown/bad plugins are expected to fail here. 
if !tc.expectErr {", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-e11cfd6bbb46a8ec4952551a4abe38067f465ccb7f37905096932989599448b2", "text": "// TLSConfigFor returns a tls.Config that will provide the transport level security defined // by the provided Config. Will return nil if no transport level security is requested. func TLSConfigFor(config *Config) (*tls.Config, error) { cfg, err := config.transportConfig() cfg, err := config.TransportConfig() if err != nil { return nil, err }", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-edb8769e03ae4e0084c56d93238832199c3f4914a20cc69e22e7411ed4c6a880", "text": "// or transport level security defined by the provided Config. 
Will return the // default http.DefaultTransport if no special case behavior is needed. func TransportFor(config *Config) (http.RoundTripper, error) { cfg, err := config.transportConfig() cfg, err := config.TransportConfig() if err != nil { return nil, err }", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-338a222b842e4f3de3e86175f33687de20c2de39c1b587a902e5ce7322cabfad", "text": "// the underlying connection (like WebSocket or HTTP2 clients). Pure HTTP clients should use // the higher level TransportFor or RESTClientFor methods. func HTTPWrappersForConfig(config *Config, rt http.RoundTripper) (http.RoundTripper, error) { cfg, err := config.transportConfig() cfg, err := config.TransportConfig() if err != nil { return nil, err } return transport.HTTPWrappersForConfig(cfg, rt) } // transportConfig converts a client config to an appropriate transport config. func (c *Config) transportConfig() (*transport.Config, error) { // TransportConfig converts a client config to an appropriate transport config. 
func (c *Config) TransportConfig() (*transport.Config, error) { wt := c.WrapTransport if c.AuthProvider != nil { provider, err := GetAuthProvider(c.Host, c.AuthProvider, c.AuthConfigPersister)", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-b27241a3fe3289867015e582beff85551da2930f4a6701d78342e3e99792f2d1", "text": "\"fmt\" \"math\" \"math/rand\" \"net\" \"net/http\" \"os\" \"strconv\" \"sync\"", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-cb9abe00153df5e51873b9f5ab7e3ffcd23271f07f40c9ecad45b97247238e41", "text": "\"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset\" unversionedcore 
\"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset/typed/core/unversioned\" \"k8s.io/kubernetes/pkg/client/restclient\" \"k8s.io/kubernetes/pkg/client/transport\" client \"k8s.io/kubernetes/pkg/client/unversioned\" \"k8s.io/kubernetes/pkg/labels\" \"k8s.io/kubernetes/pkg/util/intstr\" utilnet \"k8s.io/kubernetes/pkg/util/net\" \"k8s.io/kubernetes/test/e2e/framework\" . \"github.com/onsi/ginkgo\"", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-aba2889c8f26edd6669075af99e738a1ba959f568752a5e1786be882c0588d94", "text": "namespaces = createNamespaces(f, nodeCount, itArg.podsPerNode) totalPods := itArg.podsPerNode * nodeCount configs = generateRCConfigs(totalPods, itArg.image, itArg.command, c, namespaces) configs = generateRCConfigs(totalPods, itArg.image, itArg.command, namespaces) var services []*api.Service // Read the environment variable to see if we want to create services createServices := os.Getenv(\"CREATE_SERVICES\")", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to 
handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-f88b416c1c8cd23efda250ffd4526221cf497d0e2d0a9e7d5dc2d92bc2035fc5", "text": "return namespaces } func createClients(numberOfClients int) ([]*client.Client, error) { clients := make([]*client.Client, numberOfClients) for i := 0; i < numberOfClients; i++ { config, err := framework.LoadConfig() Expect(err).NotTo(HaveOccurred()) config.QPS = 100 config.Burst = 200 if framework.TestContext.KubeAPIContentType != \"\" { config.ContentType = framework.TestContext.KubeAPIContentType } // For the purpose of this test, we want to force that clients // do not share underlying transport (which is a default behavior // in Kubernetes). Thus, we are explicitly creating transport for // each client here. transportConfig, err := config.TransportConfig() if err != nil { return nil, err } tlsConfig, err := transport.TLSConfigFor(transportConfig) if err != nil { return nil, err } config.Transport = utilnet.SetTransportDefaults(&http.Transport{ Proxy: http.ProxyFromEnvironment, TLSHandshakeTimeout: 10 * time.Second, TLSClientConfig: tlsConfig, MaxIdleConnsPerHost: 100, Dial: (&net.Dialer{ Timeout: 30 * time.Second, KeepAlive: 30 * time.Second, }).Dial, }) // Overwrite TLS-related fields from config to avoid collision with // Transport field. config.TLSClientConfig = restclient.TLSClientConfig{} c, err := client.New(config) if err != nil { return nil, err } clients[i] = c } return clients, nil } func computeRCCounts(total int) (int, int, int) { // Small RCs owns ~0.5 of total number of pods, medium and big RCs ~0.25 each. 
// For example for 3000 pods (100 nodes, 30 pods per node) there are:", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-276209858b32894cc42d3f339c3506f4463f5a2951f4a4759747232d12c7a7b6", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nWe understand the problem here - I'm working on the fix.\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-bbdbbacd8e9d8c3e3746bdcd06c797b3c757269ec9877dc045bac82a63b2b3d9", "text": "return smallRCCount, mediumRCCount, bigRCCount } func generateRCConfigs(totalPods int, image string, command []string, c *client.Client, nss []*api.Namespace) []*framework.RCConfig { func generateRCConfigs(totalPods int, image string, command []string, nss []*api.Namespace) []*framework.RCConfig { configs := make([]*framework.RCConfig, 0) smallRCCount, mediumRCCount, bigRCCount := computeRCCounts(totalPods) configs = append(configs, generateRCConfigsForGroup(c, nss, smallRCGroupName, smallRCSize, smallRCCount, image, command)...) configs = append(configs, generateRCConfigsForGroup(c, nss, mediumRCGroupName, mediumRCSize, mediumRCCount, image, command)...) configs = append(configs, generateRCConfigsForGroup(c, nss, bigRCGroupName, bigRCSize, bigRCCount, image, command)...) configs = append(configs, generateRCConfigsForGroup(nss, smallRCGroupName, smallRCSize, smallRCCount, image, command)...) configs = append(configs, generateRCConfigsForGroup(nss, mediumRCGroupName, mediumRCSize, mediumRCCount, image, command)...) configs = append(configs, generateRCConfigsForGroup(nss, bigRCGroupName, bigRCSize, bigRCCount, image, command)...) 
// Create a number of clients to better simulate real usecase // where not everyone is using exactly the same client. rcsPerClient := 20 clients, err := createClients((len(configs) + rcsPerClient - 1) / rcsPerClient) framework.ExpectNoError(err) for i := 0; i < len(configs); i++ { configs[i].Client = clients[i%len(clients)] } return configs } func generateRCConfigsForGroup(c *client.Client, nss []*api.Namespace, groupName string, size, count int, image string, command []string) []*framework.RCConfig { func generateRCConfigsForGroup( nss []*api.Namespace, groupName string, size, count int, image string, command []string) []*framework.RCConfig { configs := make([]*framework.RCConfig, 0, count) for i := 1; i <= count; i++ { config := &framework.RCConfig{ Client: c, Client: nil, // this will be overwritten later Name: groupName + \"-\" + strconv.Itoa(i), Namespace: nss[i%len(nss)].Name, Timeout: 10 * time.Minute,", "commid": "kubernetes_pr_33733"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ff687515ab25c4bdd4af394a75c676d411e02b095a7423ff0f91396e6bca07a8", "query": "What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): vSphere, \"message\": \"no endpoints available for service \"kube-dns\"\", kube-dns, etc -- similar issues include: however it's not quite the same Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Kubernetes version (use ): Environment: Cloud provider or hardware configuration: vSphere OS (e.g. from /etc/os-release): Using provided vmdk Kernel (e.g. ): and Install tools: GOVC + the script What happened: Our environment utilizes the RFC1918 10.0.0.0/8 block, so I edited to utilize an address space we are not using. 
This is my current configuration: After the deployment is finished, I am able to login to the dashboard via the URL provided via , however, when I check the DNS server I see: When I list services, I see: When I start a container and try to ping kube-dns, I am unable to: I can ping external IP addresses: Here's the configuration on each minion and the master: What you expected to happen: I expect containers can reach ping eachother. How to reproduce it (as minimally and precisely as possible): I've reproduced this problem using both 172.16.0.0/16 addresses as well as the 192.168.0.0/16 addresses. Anything else do we need to know: I followed the documentation here: and the only change I made was to the network configuration.\nI just noticed that doesn't include any routes for the service IPs I defined (e.g. has no routing table entries on any of the minions, only ). I'm not sure how to fix it, but it looks like the DNS server can make requests to the upstream DNS, it just can't get responses back.\ncc\nSome additional info I probably should have included the first time:\nI just noticed that while from inside a container I can ping google: I cannot ping our internal DNS which is at IP . So I think I'm getting closer to where the problem actually is. I also think the 192.168.120.120 was not actually unreachable, just not pingable. For example: Returns right away, while doing the same to IP hangs. I'm not sure why when using our DNS in the of my master/minions doesn't seem to give them the ability to resolve internal assets on our network.\nCan you provide your output for iptables also\nHi imkin, Sure thing! Minion 1 - 10.248.30.38 Minion 2 - 10.248.30.40 Minion 3 - 10.248.30.37 Minion 4 - 10.248.30.39 Master - 10.248.30.36 Let me know what else I can provide, I haven't had much time to continue troubleshooting this week, but getting this to work would be a really great template for building our bare metal deployment. Thanks! 
ZC\nCan you try with DNSSERVERIP=\"192.168.128.120\"\nHi - I will edit that in the initial configuration and re-up my cluster this morning to see if that helps things work out of the box so to speak. Thanks! ZC\nHey I have edited my configuration as follows: I have re-run the script and everything came up and validated. I am still experiencing the same behavior, however. I now see this in the logs for the DNS service via the dashboard: I see that the service can't be reached by kubectl either (it doesn't even return kube-dns like it did previously): I also notice that on the minions, none of them seem to have routes for the higher IP ranges:\nThis looks to be the same as posting a PR shortly.\nAwesome!! Thanks - I'm grabbing the from your pull request and will re-open this issue if that did not resolve it (once it's closed by the merge).\nIn the second case [when your kube-dns had the ip 192.168.128.120] where was your kube-dns?\nbased on the logs you posted you were definitely impacted by the issue in but I am not sure if that is the only issue on going on your setup. Please let us know how the fix goes for your environment.\nThere is no route for 192.168.3.0/24", "positive_passages": [{"docid": "doc-en-kubernetes-6aa2ad414abe5acb41e91416b87578b3d2095bbad07cde825f966e9851549d54", "text": "MASTER_CPU=1 NODE_NAMES=($(eval echo ${INSTANCE_PREFIX}-minion-{1..${NUM_NODES}})) NODE_IP_RANGES=\"10.244.0.0/16\" NODE_IP_RANGES=\"10.244.0.0/16\" # Min Prefix supported is 16 MASTER_IP_RANGE=\"${MASTER_IP_RANGE:-10.246.0.0/24}\" NODE_MEMORY_MB=2048 NODE_CPU=1", "commid": "kubernetes_pr_35232"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ff687515ab25c4bdd4af394a75c676d411e02b095a7423ff0f91396e6bca07a8", "query": "What keywords did you search in Kubernetes issues before filing this one? 
(If you have found any duplicates, you should instead reply there.): vSphere, \"message\": \"no endpoints available for service \"kube-dns\"\", kube-dns, etc -- similar issues include: however it's not quite the same Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Kubernetes version (use ): Environment: Cloud provider or hardware configuration: vSphere OS (e.g. from /etc/os-release): Using provided vmdk Kernel (e.g. ): and Install tools: GOVC + the script What happened: Our environment utilizes the RFC1918 10.0.0.0/8 block, so I edited to utilize an address space we are not using. This is my current configuration: After the deployment is finished, I am able to login to the dashboard via the URL provided via , however, when I check the DNS server I see: When I list services, I see: When I start a container and try to ping kube-dns, I am unable to: I can ping external IP addresses: Here's the configuration on each minion and the master: What you expected to happen: I expect containers can reach ping eachother. How to reproduce it (as minimally and precisely as possible): I've reproduced this problem using both 172.16.0.0/16 addresses as well as the 192.168.0.0/16 addresses. Anything else do we need to know: I followed the documentation here: and the only change I made was to the network configuration.\nI just noticed that doesn't include any routes for the service IPs I defined (e.g. has no routing table entries on any of the minions, only ). I'm not sure how to fix it, but it looks like the DNS server can make requests to the upstream DNS, it just can't get responses back.\ncc\nSome additional info I probably should have included the first time:\nI just noticed that while from inside a container I can ping google: I cannot ping our internal DNS which is at IP . So I think I'm getting closer to where the problem actually is. I also think the 192.168.120.120 was not actually unreachable, just not pingable. 
For example: Returns right away, while doing the same to IP hangs. I'm not sure why when using our DNS in the of my master/minions doesn't seem to give them the ability to resolve internal assets on our network.\nCan you provide your output for iptables also\nHi imkin, Sure thing! Minion 1 - 10.248.30.38 Minion 2 - 10.248.30.40 Minion 3 - 10.248.30.37 Minion 4 - 10.248.30.39 Master - 10.248.30.36 Let me know what else I can provide, I haven't had much time to continue troubleshooting this week, but getting this to work would be a really great template for building our bare metal deployment. Thanks! ZC\nCan you try with DNSSERVERIP=\"192.168.128.120\"\nHi - I will edit that in the initial configuration and re-up my cluster this morning to see if that helps things work out of the box so to speak. Thanks! ZC\nHey I have edited my configuration as follows: I have re-run the script and everything came up and validated. I am still experiencing the same behavior, however. I now see this in the logs for the DNS service via the dashboard: I see that the service can't be reached by kubectl either (it doesn't even return kube-dns like it did previously): I also notice that on the minions, none of them seem to have routes for the higher IP ranges:\nThis looks to be the same as posting a PR shortly.\nAwesome!! Thanks - I'm grabbing the from your pull request and will re-open this issue if that did not resolve it (once it's closed by the merge).\nIn the second case [when your kube-dns had the ip 192.168.128.120] where was your kube-dns?\nbased on the logs you posted you were definitely impacted by the issue in but I am not sure if that is the only issue on going on your setup. Please let us know how the fix goes for your environment.\nThere is no route for 192.168.3.0/24", "positive_passages": [{"docid": "doc-en-kubernetes-b7b17c669788d497428062485299e18a629d651ffa6d9625f41abda0f3530727", "text": "# identify the subnet assigned to the node by the kubernetes controller manager. 
KUBE_NODE_BRIDGE_NETWORK=() for (( i=0; i<${#NODE_NAMES[@]}; i++)); do printf \" finding network of cbr0 bridge on node ${NODE_NAMES[$i]}n\" network=$(kube-ssh ${KUBE_NODE_IP_ADDRESSES[$i]} 'sudo ip route show | grep -E \"dev cbr0\" | cut -d \" \" -f1') KUBE_NODE_BRIDGE_NETWORK+=(\"${network}\") done printf \" finding network of cbr0 bridge on node ${NODE_NAMES[$i]}n\" network=\"\" top2_octets_final=$(echo $NODE_IP_RANGES | awk -F \".\" '{ print $1 \".\" $2 }') # Assume that a 24 bit mask per node attempt=0 max_attempt=60 while true ; do attempt=$(($attempt+1)) network=$(kube-ssh ${KUBE_NODE_IP_ADDRESSES[$i]} 'sudo ip route show | grep -E \"dev cbr0\" | cut -d \" \" -f1') top2_octets_read=$(echo $network | awk -F \".\" '{ print $1 \".\" $2 }') if [[ \"$top2_octets_read\" == \"$top2_octets_final\" ]]; then break fi if (( $attempt == $max_attempt )); then echo echo \"(Failed) Waiting for cbr0 bridge to come up @ ${NODE_NAMES[$i]}\" echo exit 1 fi printf \".\" sleep 5 done printf \"n\" KUBE_NODE_BRIDGE_NETWORK+=(\"${network}\") done # Make the pods visible to each other and to the master. # The master needs have routes to the pods for the UI to work. 
local j for (( i=0; i<${#NODE_NAMES[@]}; i++)); do printf \"setting up routes for ${NODE_NAMES[$i]}\" kube-ssh \"${KUBE_MASTER_IP}\" \"sudo route add -net ${KUBE_NODE_BRIDGE_NETWORK[${i}]} gw ${KUBE_NODE_IP_ADDRESSES[${i}]}\" for (( j=0; j<${#NODE_NAMES[@]}; j++)); do if [[ $i != $j ]]; then kube-ssh ${KUBE_NODE_IP_ADDRESSES[$i]} \"sudo route add -net ${KUBE_NODE_BRIDGE_NETWORK[$j]} gw ${KUBE_NODE_IP_ADDRESSES[$j]}\" fi done printf \"setting up routes for ${NODE_NAMES[$i]}n\" printf \" adding route to ${MASTER_NAME} for network ${KUBE_NODE_BRIDGE_NETWORK[${i}]} via ${KUBE_NODE_IP_ADDRESSES[${i}]}n\" kube-ssh \"${KUBE_MASTER_IP}\" \"sudo route add -net ${KUBE_NODE_BRIDGE_NETWORK[${i}]} gw ${KUBE_NODE_IP_ADDRESSES[${i}]}\" for (( j=0; j<${#NODE_NAMES[@]}; j++)); do if [[ $i != $j ]]; then printf \" adding route to ${NODE_NAMES[$j]} for network ${KUBE_NODE_BRIDGE_NETWORK[${i}]} via ${KUBE_NODE_IP_ADDRESSES[${i}]}n\" kube-ssh ${KUBE_NODE_IP_ADDRESSES[$i]} \"sudo route add -net ${KUBE_NODE_BRIDGE_NETWORK[$j]} gw ${KUBE_NODE_IP_ADDRESSES[$j]}\" fi done printf \"n\" done }", "commid": "kubernetes_pr_35232"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ff687515ab25c4bdd4af394a75c676d411e02b095a7423ff0f91396e6bca07a8", "query": "What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): vSphere, \"message\": \"no endpoints available for service \"kube-dns\"\", kube-dns, etc -- similar issues include: however it's not quite the same Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Kubernetes version (use ): Environment: Cloud provider or hardware configuration: vSphere OS (e.g. from /etc/os-release): Using provided vmdk Kernel (e.g. ): and Install tools: GOVC + the script What happened: Our environment utilizes the RFC1918 10.0.0.0/8 block, so I edited to utilize an address space we are not using. 
This is my current configuration: After the deployment is finished, I am able to login to the dashboard via the URL provided via , however, when I check the DNS server I see: When I list services, I see: When I start a container and try to ping kube-dns, I am unable to: I can ping external IP addresses: Here's the configuration on each minion and the master: What you expected to happen: I expect containers can reach ping eachother. How to reproduce it (as minimally and precisely as possible): I've reproduced this problem using both 172.16.0.0/16 addresses as well as the 192.168.0.0/16 addresses. Anything else do we need to know: I followed the documentation here: and the only change I made was to the network configuration.\nI just noticed that doesn't include any routes for the service IPs I defined (e.g. has no routing table entries on any of the minions, only ). I'm not sure how to fix it, but it looks like the DNS server can make requests to the upstream DNS, it just can't get responses back.\ncc\nSome additional info I probably should have included the first time:\nI just noticed that while from inside a container I can ping google: I cannot ping our internal DNS which is at IP . So I think I'm getting closer to where the problem actually is. I also think the 192.168.120.120 was not actually unreachable, just not pingable. For example: Returns right away, while doing the same to IP hangs. I'm not sure why when using our DNS in the of my master/minions doesn't seem to give them the ability to resolve internal assets on our network.\nCan you provide your output for iptables also\nHi imkin, Sure thing! Minion 1 - 10.248.30.38 Minion 2 - 10.248.30.40 Minion 3 - 10.248.30.37 Minion 4 - 10.248.30.39 Master - 10.248.30.36 Let me know what else I can provide, I haven't had much time to continue troubleshooting this week, but getting this to work would be a really great template for building our bare metal deployment. Thanks! 
ZC\nCan you try with DNSSERVERIP=\"192.168.128.120\"\nHi - I will edit that in the initial configuration and re-up my cluster this morning to see if that helps things work out of the box so to speak. Thanks! ZC\nHey I have edited my configuration as follows: I have re-run the script and everything came up and validated. I am still experiencing the same behavior, however. I now see this in the logs for the DNS service via the dashboard: I see that the service can't be reached by kubectl either (it doesn't even return kube-dns like it did previously): I also notice that on the minions, none of them seem to have routes for the higher IP ranges:\nThis looks to be the same as posting a PR shortly.\nAwesome!! Thanks - I'm grabbing the from your pull request and will re-open this issue if that did not resolve it (once it's closed by the merge).\nIn the second case [when your kube-dns had the ip 192.168.128.120] where was your kube-dns?\nbased on the logs you posted you were definitely impacted by the issue in but I am not sure if that is the only issue on going on your setup. Please let us know how the fix goes for your environment.\nThere is no route for 192.168.3.0/24", "positive_passages": [{"docid": "doc-en-kubernetes-9c8601faad6f82bbbb88a75c8b0e776039e90b4266b8e6ab5fe06e36c64b5224", "text": "printf \"Waiting for salt-master to be up on ${KUBE_MASTER} ...n\" remote-pgrep ${KUBE_MASTER_IP} \"salt-master\" printf \"Waiting for all packages to be installed on ${KUBE_MASTER} ...n\" kube-check ${KUBE_MASTER_IP} 'sudo salt \"kubernetes-master\" state.highstate -t 30 | grep -E \"Failed:[[:space:]]+0\"' local i for (( i=0; i<${#NODE_NAMES[@]}; i++)); do printf \"Waiting for salt-minion to be up on ${NODE_NAMES[$i]} ....n\" remote-pgrep ${KUBE_NODE_IP_ADDRESSES[$i]} \"salt-minion\" printf \"Waiting for all salt packages to be installed on ${NODE_NAMES[$i]} .... 
n\" kube-check ${KUBE_MASTER_IP} 'sudo salt '\"${NODE_NAMES[$i]}\"' state.highstate -t 30 | grep -E \"Failed:[[:space:]]+0\"' printf \" OKn\" done printf \"Waiting for init highstate to be done on all nodes (this can take a few minutes) ...n\" kube-check ${KUBE_MASTER_IP} 'sudo salt '''*''' state.show_highstate -t 50' printf \"Waiting for all packages to be installed on all nodes (this can take a few minutes) ...n\" kube-check ${KUBE_MASTER_IP} 'sudo salt '''*''' state.highstate -t 50 | grep -E \"Failed:[[:space:]]+0\"' echo echo \"Waiting for master and node initialization.\"", "commid": "kubernetes_pr_35232"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cf9984ddc8d3414505d43a827d639ed0a0feb51bbf3cc5f704b07594a951615a", "query": "Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT Kubernetes version (use ): Environment: Cloud provider or hardware configuration: AWS OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie) Kernel (e.g. ): 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux What happened: Deployed a scheduled job from this manifest: After some time the jobs are no longer executed: What you expected to happen: Jobs are executed as defined in the manifest. How to reproduce it (as minimally and precisely as possible): Deploy a scheduled job and wait for some time.\nSome logs from the controller manager:\nPing. More than 2 weeks and no action at all? Anything else i can do?\nI'm also encountering this using both and as concurrency policies, fwiw.\nI'm hitting this as well. Kube\nDo the jobs start to reschedule if you delete old one? Try running . cc\nFrom kube-controller-manager logs: The job is scheduled for 1:00. I think the \"Multiple unmet start times\" message is a clue of where things start going downhill. Also, notice how at 1:00 the controller wants to create job (for a whole day, it had failed on ).\nDeleting (all) old jobs / pods fixes the problem temporarily. 
After some time the scheduled jobs face the same issue again.\nThere's a fix () that I've submitted against master to fix the Replace strategy, I'll mark it for inclusion in 1.4.\nThe other problem that I see here is this: It is possible that our is creating duplicates. I'll try to investigate that a bit more.\nsorry for the delay in response time, I must have missed that while going through issues. Next time you're experiencing problems/have a question wrt Jobs/ScheduledJobs, just ping me directly in the issue.\nNo problem and I will. Thanks!\nI'm still hitting this on\nFix in flight , I'll make sure to cherry-pick it back to 1.4\ndoes this ship in to latest kubernetes release?\nyes, this should be present in 1.5, 1.4 cherry-pick hasn't been merged yet.\nso this issue is still present in v1.4.6?\nyes - it is still present there. My fix was merged to 1.4 yesterday. You should wait for the next 1.4.x release.\nthanks!\nwhich according to should happen Dec 8th.\nany news on the release?\nSee !msg/kubernetes-dev/o0At2-4KKQ8/_kPVM1XyDAAJ\nso v1.4.7 will be released on that same day?\nLook like should be available later today: !msg/kubernetes-dev/00lJpT9pxDc/YZfDzCxMDQAJ", "positive_passages": [{"docid": "doc-en-kubernetes-11a23d68a53cc23621f7c81f02f43b62b96d7b32f5adcc0e35be808425cb57e5", "text": "\"//pkg/runtime:go_default_library\", \"//pkg/types:go_default_library\", \"//pkg/util/errors:go_default_library\", \"//pkg/util/hash:go_default_library\", \"//pkg/util/metrics:go_default_library\", \"//pkg/util/runtime:go_default_library\", \"//pkg/util/wait:go_default_library\",", "commid": "kubernetes_pr_36812"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cf9984ddc8d3414505d43a827d639ed0a0feb51bbf3cc5f704b07594a951615a", "query": "Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT Kubernetes version (use ): Environment: Cloud provider or hardware configuration: AWS OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie) Kernel (e.g. 
): 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux What happened: Deployed a scheduled job from this manifest: After some time the jobs are no longer executed: What you expected to happen: Jobs are executed as defined in the manifest. How to reproduce it (as minimally and precisely as possible): Deploy a scheduled job and wait for some time.\nSome logs from the controller manager:\nPing. More than 2 weeks and no action at all? Anything else i can do?\nI'm also encountering this using both and as concurrency policies, fwiw.\nI'm hitting this as well. Kube\nDo the jobs start to reschedule if you delete old one? Try running . cc\nFrom kube-controller-manager logs: The job is scheduled for 1:00. I think the \"Multiple unmet start times\" message is a clue of where things start going downhill. Also, notice how at 1:00 the controller wants to create job (for a whole day, it had failed on ).\nDeleting (all) old jobs / pods fixes the problem temporarily. After some time the scheduled jobs face the same issue again.\nThere's a fix () that I've submitted against master to fix the Replace strategy, I'll mark it for inclusion in 1.4.\nThe other problem that I see here is this: It is possible that our is creating duplicates. I'll try to investigate that a bit more.\nsorry for the delay in response time, I must have missed that while going through issues. Next time you're experiencing problems/have a question wrt Jobs/ScheduledJobs, just ping me directly in the issue.\nNo problem and I will. Thanks!\nI'm still hitting this on\nFix in flight , I'll make sure to cherry-pick it back to 1.4\ndoes this ship in to latest kubernetes release?\nyes, this should be present in 1.5, 1.4 cherry-pick hasn't been merged yet.\nso this issue is still present in v1.4.6?\nyes - it is still present there. My fix was merged to 1.4 yesterday. 
You should wait for the next 1.4.x release.\nthanks!\nwhich according to should happen Dec 8th.\nany news on the release?\nSee !msg/kubernetes-dev/o0At2-4KKQ8/_kPVM1XyDAAJ\nso v1.4.7 will be released on that same day?\nLook like should be available later today: !msg/kubernetes-dev/00lJpT9pxDc/YZfDzCxMDQAJ", "positive_passages": [{"docid": "doc-en-kubernetes-1484fab69a17845a133d95ffc63682543297824d6e5a70e84f2a0a6a6248d32e", "text": "import ( \"encoding/json\" \"fmt\" \"hash/adler32\" \"time\" \"github.com/golang/glog\"", "commid": "kubernetes_pr_36812"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cf9984ddc8d3414505d43a827d639ed0a0feb51bbf3cc5f704b07594a951615a", "query": "Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT Kubernetes version (use ): Environment: Cloud provider or hardware configuration: AWS OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie) Kernel (e.g. ): 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux What happened: Deployed a scheduled job from this manifest: After some time the jobs are no longer executed: What you expected to happen: Jobs are executed as defined in the manifest. How to reproduce it (as minimally and precisely as possible): Deploy a scheduled job and wait for some time.\nSome logs from the controller manager:\nPing. More than 2 weeks and no action at all? Anything else i can do?\nI'm also encountering this using both and as concurrency policies, fwiw.\nI'm hitting this as well. Kube\nDo the jobs start to reschedule if you delete old one? Try running . cc\nFrom kube-controller-manager logs: The job is scheduled for 1:00. I think the \"Multiple unmet start times\" message is a clue of where things start going downhill. Also, notice how at 1:00 the controller wants to create job (for a whole day, it had failed on ).\nDeleting (all) old jobs / pods fixes the problem temporarily. 
After some time the scheduled jobs face the same issue again.\nThere's a fix () that I've submitted against master to fix the Replace strategy, I'll mark it for inclusion in 1.4.\nThe other problem that I see here is this: It is possible that our is creating duplicates. I'll try to investigate that a bit more.\nsorry for the delay in response time, I must have missed that while going through issues. Next time you're experiencing problems/have a question wrt Jobs/ScheduledJobs, just ping me directly in the issue.\nNo problem and I will. Thanks!\nI'm still hitting this on\nFix in flight , I'll make sure to cherry-pick it back to 1.4\ndoes this ship in to latest kubernetes release?\nyes, this should be present in 1.5, 1.4 cherry-pick hasn't been merged yet.\nso this issue is still present in v1.4.6?\nyes - it is still present there. My fix was merged to 1.4 yesterday. You should wait for the next 1.4.x release.\nthanks!\nwhich according to should happen Dec 8th.\nany news on the release?\nSee !msg/kubernetes-dev/o0At2-4KKQ8/_kPVM1XyDAAJ\nso v1.4.7 will be released on that same day?\nLook like should be available later today: !msg/kubernetes-dev/00lJpT9pxDc/YZfDzCxMDQAJ", "positive_passages": [{"docid": "doc-en-kubernetes-6cd1cb973578f88d0ec9e66237fdb7bb9d9e9daf2d705be775682a417a4aaac4", "text": "\"k8s.io/kubernetes/pkg/apis/batch\" \"k8s.io/kubernetes/pkg/runtime\" \"k8s.io/kubernetes/pkg/types\" hashutil \"k8s.io/kubernetes/pkg/util/hash\" ) // Utilities for dealing with Jobs and CronJobs and time.", "commid": "kubernetes_pr_36812"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cf9984ddc8d3414505d43a827d639ed0a0feb51bbf3cc5f704b07594a951615a", "query": "Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT Kubernetes version (use ): Environment: Cloud provider or hardware configuration: AWS OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie) Kernel (e.g. 
): 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux What happened: Deployed a scheduled job from this manifest: After some time the jobs are no longer executed: What you expected to happen: Jobs are executed as defined in the manifest. How to reproduce it (as minimally and precisely as possible): Deploy a scheduled job and wait for some time.\nSome logs from the controller manager:\nPing. More than 2 weeks and no action at all? Anything else i can do?\nI'm also encountering this using both and as concurrency policies, fwiw.\nI'm hitting this as well. Kube\nDo the jobs start to reschedule if you delete old one? Try running . cc\nFrom kube-controller-manager logs: The job is scheduled for 1:00. I think the \"Multiple unmet start times\" message is a clue of where things start going downhill. Also, notice how at 1:00 the controller wants to create job (for a whole day, it had failed on ).\nDeleting (all) old jobs / pods fixes the problem temporarily. After some time the scheduled jobs face the same issue again.\nThere's a fix () that I've submitted against master to fix the Replace strategy, I'll mark it for inclusion in 1.4.\nThe other problem that I see here is this: It is possible that our is creating duplicates. I'll try to investigate that a bit more.\nsorry for the delay in response time, I must have missed that while going through issues. Next time you're experiencing problems/have a question wrt Jobs/ScheduledJobs, just ping me directly in the issue.\nNo problem and I will. Thanks!\nI'm still hitting this on\nFix in flight , I'll make sure to cherry-pick it back to 1.4\ndoes this ship in to latest kubernetes release?\nyes, this should be present in 1.5, 1.4 cherry-pick hasn't been merged yet.\nso this issue is still present in v1.4.6?\nyes - it is still present there. My fix was merged to 1.4 yesterday. 
You should wait for the next 1.4.x release.\nthanks!\nwhich according to should happen Dec 8th.\nany news on the release?\nSee !msg/kubernetes-dev/o0At2-4KKQ8/_kPVM1XyDAAJ\nso v1.4.7 will be released on that same day?\nLook like should be available later today: !msg/kubernetes-dev/00lJpT9pxDc/YZfDzCxMDQAJ", "positive_passages": [{"docid": "doc-en-kubernetes-18b5bf76c1ec8e273c12695d30b44afadfebfe6564acf49e0914b896ee149c11", "text": "return job, nil } func getTimeHash(scheduledTime time.Time) uint32 { timeHasher := adler32.New() hashutil.DeepHashObject(timeHasher, scheduledTime) return timeHasher.Sum32() // Return Unix Epoch Time func getTimeHash(scheduledTime time.Time) int64 { return scheduledTime.Unix() } // makeCreatedByRefJson makes a json string with an object reference for use in \"created-by\" annotation value", "commid": "kubernetes_pr_36812"}], "negative_passages": []} {"query_id": "q-en-kubernetes-379dabb0d3852f2c76fc6671dc8637dfc359bc3af1d634613a62926d11f19208", "query": "It fails with: cc\nLooking at the generated I see that FEDERATIONIMAGETAG is \"\" which is resulting in this error. I see that before , FEDERATIONIMAGETAG was being set to but that docker tag file does not exist for hyperkube. I see the following exist in my : but does not exist there. Trying to figure out why.\ncc\nmake sure you are running the scripts with env var exported. That is what controls whether gets created.\nWith , we are not generating the federation-apiserver binary and instead using the hyperkube binary.\nsorry, i copied the wrong line. The droid you're looking for: File conditionally written during process File consumed when templating federation control plane yaml\nRight. My is also empty. I am trying to run federation e2e tests. The command I am running is: which is what I have always used. I guess something changed in\nthat should definitely work. Please and then run the step, capture the output and email to me? 
I was literally doing exactly this process for another PR today, and I verified the image tag file was getting written correctly as well.\nI figured out this is happening because after make clean or a fresh clone, kubectl will not be available when we are trying to get the semanticimagetag_version() in here.\nAwesome! Thanks", "positive_passages": [{"docid": "doc-en-kubernetes-9eaa01497f9dabf4c879a7ac9bd3da67061bf3ab5737f85a54b128e0947f55ff", "text": "kube::build::run_build_command make test-integration fi kube::build::copy_output if [[ \"${FEDERATION:-}\" == \"true\" ]];then ( source \"${KUBE_ROOT}/build/util.sh\"", "commid": "kubernetes_pr_34898"}], "negative_passages": []} {"query_id": "q-en-kubernetes-379dabb0d3852f2c76fc6671dc8637dfc359bc3af1d634613a62926d11f19208", "query": "It fails with: cc\nLooking at the generated I see that FEDERATIONIMAGETAG is \"\" which is resulting in this error. I see that before , FEDERATIONIMAGETAG was being set to but that docker tag file does not exist for hyperkube. I see the following exist in my : but does not exist there. Trying to figure out why.\ncc\nmake sure you are running the scripts with env var exported. That is what controls whether gets created.\nWith , we are not generating the federation-apiserver binary and instead using the hyperkube binary.\nsorry, i copied the wrong line. The droid you're looking for: File conditionally written during process File consumed when templating federation control plane yaml\nRight. My is also empty. I am trying to run federation e2e tests. The command I am running is: which is what I have always used. I guess something changed in\nthat should definitely work. Please and then run the step, capture the output and email to me? 
I was literally doing exactly this process for another PR today, and I verified the image tag file was getting written correctly as well.\nI figured out this is happening because after make clean or a fresh clone, kubectl will not be available when we are trying to get the semanticimagetag_version() in here.\nAwesome! Thanks", "positive_passages": [{"docid": "doc-en-kubernetes-5c5ac41516d1522f3ff522741ceecff8bb18370d5f1c7d2e0371122056fcff81", "text": ") fi kube::build::copy_output kube::release::package_tarballs kube::release::package_hyperkube", "commid": "kubernetes_pr_34898"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ea165427d7925a52e0a3175d5df532a0d4e87bf9f1606c149ffe6c4d4639a9f0", "query": "Implement unit (eventually, integration) tests for . This is an effort for Q4 2016 to be lead by and Refs\nHas been implemented very much thanks to ;)", "positive_passages": [{"docid": "doc-en-kubernetes-fab72fd71153da3379098eff45458c1094b3dde8a2cadb1f0469ff328d19f796", "text": " /* Copyright 2016 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package images import ( \"fmt\" \"runtime\" \"testing\" kubeadmapi \"k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm\" ) type getCoreImageTest struct { i string c *kubeadmapi.MasterConfiguration o string } const testversion = \"1\" func TestGetCoreImage(t *testing.T) { var tokenTest = []struct { t getCoreImageTest expected string }{ {getCoreImageTest{o: \"override\"}, \"override\"}, {getCoreImageTest{ i: KubeEtcdImage, c: &kubeadmapi.MasterConfiguration{}}, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"etcd\", runtime.GOARCH, etcdVersion), }, {getCoreImageTest{ i: KubeAPIServerImage, c: &kubeadmapi.MasterConfiguration{KubernetesVersion: testversion}}, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"kube-apiserver\", runtime.GOARCH, testversion), }, {getCoreImageTest{ i: KubeControllerManagerImage, c: &kubeadmapi.MasterConfiguration{KubernetesVersion: testversion}}, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"kube-controller-manager\", runtime.GOARCH, testversion), }, {getCoreImageTest{ i: KubeSchedulerImage, c: &kubeadmapi.MasterConfiguration{KubernetesVersion: testversion}}, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"kube-scheduler\", runtime.GOARCH, testversion), }, {getCoreImageTest{ i: KubeProxyImage, c: &kubeadmapi.MasterConfiguration{KubernetesVersion: testversion}}, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"kube-proxy\", runtime.GOARCH, testversion), }, } for _, rt := range tokenTest { actual := GetCoreImage(rt.t.i, rt.t.c, rt.t.o) if actual != rt.expected { t.Errorf( \"failed GetCoreImage:ntexpected: %snt actual: %s\", rt.expected, actual, ) } } } func TestGetAddonImage(t *testing.T) { var tokenTest = []struct { t string expected string }{ {\"matches nothing\", \"\"}, { KubeDNSImage, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"kubedns\", runtime.GOARCH, kubeDNSVersion), }, { KubeDNSmasqImage, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"kube-dnsmasq\", runtime.GOARCH, dnsmasqVersion), }, { KubeExechealthzImage, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"exechealthz\", 
runtime.GOARCH, exechealthzVersion), }, { Pause, fmt.Sprintf(\"%s/%s-%s:%s\", gcrPrefix, \"pause\", runtime.GOARCH, pauseVersion), }, } for _, rt := range tokenTest { actual := GetAddonImage(rt.t) if actual != rt.expected { t.Errorf( \"failed GetCoreImage:ntexpected: %snt actual: %s\", rt.expected, actual, ) } } } ", "commid": "kubernetes_pr_35332"}], "negative_passages": []} {"query_id": "q-en-kubernetes-530f1b156a6d906607bacfbd0f2d31ee7d1e4a6539ca9af4be5b6cbffe99dc57", "query": "Failed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!\n, it seems we forgot to add to ; I'll create a PR for that.\nFailed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!\nFailed: https://k8s- Run so broken it didn't make JUnit output!", "positive_passages": [{"docid": "doc-en-kubernetes-6d12e73095aa6b41d935ff72fde77e5bc736e633a346ab308ba6f0d162ee2628", "text": "package mount type Mounter struct{} type Mounter struct { mounterPath string } func (mounter *Mounter) Mount(source string, target string, fstype string, options []string) error { return nil", "commid": "kubernetes_pr_35271"}], "negative_passages": []} {"query_id": "q-en-kubernetes-036fbe154616710f38e97e5f728b943492772b1362cb5fd735c4695cc31f0484", "query": "Similar to\nis a dup of this. PR is just a quick try. Will spend more time on it to cover all cases.\nstatus?\nI sent PR which is waiting for reviewing from . 
Also, that PR is blocked by issue which is self-assigned by you but not fixed yet :)\nThat issue might be not hard to fix, but I didn't find how.So I need your help. IMO, we mignt need to change CI jobs configure in some files or images.\nAfter discussing with reviewer ( this might be better suited to merge but not for 1.7.0, as it touches a part of the code that is quite old and stable. In other words, we probably need to give this change some time to soak in the e2e system as opposed to put right away into the release.", "positive_passages": [{"docid": "doc-en-kubernetes-546d2780874ab3ebcf4b459f97e0f8d379dff0b3b72e8a671cc9b7c7f4777f0a", "text": "tags = [\"automanaged\"], deps = [ \"//pkg/api:go_default_library\", \"//pkg/api/helper:go_default_library\", \"//pkg/api/service:go_default_library\", \"//pkg/api/testing:go_default_library\", \"//pkg/features:go_default_library\",", "commid": "kubernetes_pr_48418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-036fbe154616710f38e97e5f728b943492772b1362cb5fd735c4695cc31f0484", "query": "Similar to\nis a dup of this. PR is just a quick try. Will spend more time on it to cover all cases.\nstatus?\nI sent PR which is waiting for reviewing from . Also, that PR is blocked by issue which is self-assigned by you but not fixed yet :)\nThat issue might be not hard to fix, but I didn't find how.So I need your help. IMO, we mignt need to change CI jobs configure in some files or images.\nAfter discussing with reviewer ( this might be better suited to merge but not for 1.7.0, as it touches a part of the code that is quite old and stable. In other words, we probably need to give this change some time to soak in the e2e system as opposed to put right away into the release.", "positive_passages": [{"docid": "doc-en-kubernetes-c8867da14fcfe960d851085c8543a0a1b7a1f7cec081fe37badd86f89f95f4d6", "text": "} }() if helper.IsServiceIPRequested(service) { // Allocate next available. 
ip, err := rs.serviceIPs.AllocateNext() if err != nil { // TODO: what error should be returned here? It's not a // field-level validation failure (the field is valid), and it's // not really an internal error. return nil, errors.NewInternalError(fmt.Errorf(\"failed to allocate a serviceIP: %v\", err)) } service.Spec.ClusterIP = ip.String() releaseServiceIP = true } else if helper.IsServiceIPSet(service) { // Try to respect the requested IP. if err := rs.serviceIPs.Allocate(net.ParseIP(service.Spec.ClusterIP)); err != nil { // TODO: when validation becomes versioned, this gets more complicated. el := field.ErrorList{field.Invalid(field.NewPath(\"spec\", \"clusterIP\"), service.Spec.ClusterIP, err.Error())} return nil, errors.NewInvalid(api.Kind(\"Service\"), service.Name, el) var err error if service.Spec.Type != api.ServiceTypeExternalName { if releaseServiceIP, err = rs.initClusterIP(service); err != nil { return nil, err } releaseServiceIP = true } nodePortOp := portallocator.StartOperation(rs.serviceNodePorts) defer nodePortOp.Finish() assignNodePorts := shouldAssignNodePorts(service) svcPortToNodePort := map[int]int{} for i := range service.Spec.Ports { servicePort := &service.Spec.Ports[i] allocatedNodePort := svcPortToNodePort[int(servicePort.Port)] if allocatedNodePort == 0 { // This will only scan forward in the service.Spec.Ports list because any matches // before the current port would have been found in svcPortToNodePort. This is really // looking for any user provided values. np := findRequestedNodePort(int(servicePort.Port), service.Spec.Ports) if np != 0 { err := nodePortOp.Allocate(np) if err != nil { // TODO: when validation becomes versioned, this gets more complicated. 
el := field.ErrorList{field.Invalid(field.NewPath(\"spec\", \"ports\").Index(i).Child(\"nodePort\"), np, err.Error())} return nil, errors.NewInvalid(api.Kind(\"Service\"), service.Name, el) } servicePort.NodePort = int32(np) svcPortToNodePort[int(servicePort.Port)] = np } else if assignNodePorts { nodePort, err := nodePortOp.AllocateNext() if err != nil { // TODO: what error should be returned here? It's not a // field-level validation failure (the field is valid), and it's // not really an internal error. return nil, errors.NewInternalError(fmt.Errorf(\"failed to allocate a nodePort: %v\", err)) } servicePort.NodePort = int32(nodePort) svcPortToNodePort[int(servicePort.Port)] = nodePort } } else if int(servicePort.NodePort) != allocatedNodePort { if servicePort.NodePort == 0 { servicePort.NodePort = int32(allocatedNodePort) } else { err := nodePortOp.Allocate(int(servicePort.NodePort)) if err != nil { // TODO: when validation becomes versioned, this gets more complicated. el := field.ErrorList{field.Invalid(field.NewPath(\"spec\", \"ports\").Index(i).Child(\"nodePort\"), servicePort.NodePort, err.Error())} return nil, errors.NewInvalid(api.Kind(\"Service\"), service.Name, el) } } if service.Spec.Type == api.ServiceTypeNodePort || service.Spec.Type == api.ServiceTypeLoadBalancer { if err := rs.initNodePorts(service, nodePortOp); err != nil { return nil, err } }", "commid": "kubernetes_pr_48418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-036fbe154616710f38e97e5f728b943492772b1362cb5fd735c4695cc31f0484", "query": "Similar to\nis a dup of this. PR is just a quick try. Will spend more time on it to cover all cases.\nstatus?\nI sent PR which is waiting for reviewing from . Also, that PR is blocked by issue which is self-assigned by you but not fixed yet :)\nThat issue might be not hard to fix, but I didn't find how.So I need your help. 
IMO, we might need to change CI jobs configure in some files or images.\nAfter discussing with reviewer ( this might be better suited to merge but not for 1.7.0, as it touches a part of the code that is quite old and stable. 
In other words, we probably need to give this change some time to soak in the e2e system as opposed to put right away into the release.", "positive_passages": [{"docid": "doc-en-kubernetes-3c3d75e6fc1bbb49d8da2315dbe47ee771c9bc77a5a82dce877f8e11925389aa", "text": "return servicePorts } func shouldAssignNodePorts(service *api.Service) bool { switch service.Spec.Type { case api.ServiceTypeLoadBalancer: return true case api.ServiceTypeNodePort: return true case api.ServiceTypeClusterIP: return false case api.ServiceTypeExternalName: return false default: glog.Errorf(\"Unknown service type: %v\", service.Spec.Type) return false } } // Loop through the service ports list, find one with the same port number and // NodePort specified, return this NodePort otherwise return 0. func findRequestedNodePort(port int, servicePorts []api.ServicePort) int {", "commid": "kubernetes_pr_48418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-036fbe154616710f38e97e5f728b943492772b1362cb5fd735c4695cc31f0484", "query": "Similar to\nis a dup of this. PR is just a quick try. Will spend more time on it to cover all cases.\nstatus?\nI sent PR which is waiting for reviewing from . Also, that PR is blocked by issue which is self-assigned by you but not fixed yet :)\nThat issue might be not hard to fix, but I didn't find how.So I need your help. IMO, we mignt need to change CI jobs configure in some files or images.\nAfter discussing with reviewer ( this might be better suited to merge but not for 1.7.0, as it touches a part of the code that is quite old and stable. 
In other words, we probably need to give this change some time to soak in the e2e system as opposed to put right away into the release.", "positive_passages": [{"docid": "doc-en-kubernetes-7ad3c24eb3e2c50938f6bc2e7bdd204ceabc9db3ff203e6f2fc90533b8f35c75", "text": "return false, nil } func (rs *REST) updateNodePort(oldService, newService *api.Service, nodePortOp *portallocator.PortAllocationOperation) error { func (rs *REST) initNodePorts(service *api.Service, nodePortOp *portallocator.PortAllocationOperation) error { svcPortToNodePort := map[int]int{} for i := range service.Spec.Ports { servicePort := &service.Spec.Ports[i] allocatedNodePort := svcPortToNodePort[int(servicePort.Port)] if allocatedNodePort == 0 { // This will only scan forward in the service.Spec.Ports list because any matches // before the current port would have been found in svcPortToNodePort. This is really // looking for any user provided values. np := findRequestedNodePort(int(servicePort.Port), service.Spec.Ports) if np != 0 { err := nodePortOp.Allocate(np) if err != nil { // TODO: when validation becomes versioned, this gets more complicated. el := field.ErrorList{field.Invalid(field.NewPath(\"spec\", \"ports\").Index(i).Child(\"nodePort\"), np, err.Error())} return errors.NewInvalid(api.Kind(\"Service\"), service.Name, el) } servicePort.NodePort = int32(np) svcPortToNodePort[int(servicePort.Port)] = np } else { nodePort, err := nodePortOp.AllocateNext() if err != nil { // TODO: what error should be returned here? It's not a // field-level validation failure (the field is valid), and it's // not really an internal error. return errors.NewInternalError(fmt.Errorf(\"failed to allocate a nodePort: %v\", err)) } servicePort.NodePort = int32(nodePort) svcPortToNodePort[int(servicePort.Port)] = nodePort } } else if int(servicePort.NodePort) != allocatedNodePort { // TODO(xiangpengzhao): do we need to allocate a new NodePort in this case? 
// Note: the current implementation is better, because it saves a NodePort. if servicePort.NodePort == 0 { servicePort.NodePort = int32(allocatedNodePort) } else { err := nodePortOp.Allocate(int(servicePort.NodePort)) if err != nil { // TODO: when validation becomes versioned, this gets more complicated. el := field.ErrorList{field.Invalid(field.NewPath(\"spec\", \"ports\").Index(i).Child(\"nodePort\"), servicePort.NodePort, err.Error())} return errors.NewInvalid(api.Kind(\"Service\"), service.Name, el) } } } } return nil } func (rs *REST) updateNodePorts(oldService, newService *api.Service, nodePortOp *portallocator.PortAllocationOperation) error { oldNodePorts := CollectServiceNodePorts(oldService) newNodePorts := []int{}", "commid": "kubernetes_pr_48418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-036fbe154616710f38e97e5f728b943492772b1362cb5fd735c4695cc31f0484", "query": "Similar to\nis a dup of this. PR is just a quick try. Will spend more time on it to cover all cases.\nstatus?\nI sent PR which is waiting for reviewing from . Also, that PR is blocked by issue which is self-assigned by you but not fixed yet :)\nThat issue might be not hard to fix, but I didn't find how.So I need your help. IMO, we mignt need to change CI jobs configure in some files or images.\nAfter discussing with reviewer ( this might be better suited to merge but not for 1.7.0, as it touches a part of the code that is quite old and stable. 
In other words, we probably need to give this change some time to soak in the e2e system as opposed to put right away into the release.", "positive_passages": [{"docid": "doc-en-kubernetes-90056a1143c7f228814c87ad6501a6197af1886e6203f4d7c15c080bcbac3b37", "text": "return nil } func (rs *REST) releaseNodePort(service *api.Service, nodePortOp *portallocator.PortAllocationOperation) { func (rs *REST) releaseNodePorts(service *api.Service, nodePortOp *portallocator.PortAllocationOperation) { nodePorts := CollectServiceNodePorts(service) for _, nodePort := range nodePorts {", "commid": "kubernetes_pr_48418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-036fbe154616710f38e97e5f728b943492772b1362cb5fd735c4695cc31f0484", "query": "Similar to\nis a dup of this. PR is just a quick try. Will spend more time on it to cover all cases.\nstatus?\nI sent PR which is waiting for reviewing from . Also, that PR is blocked by issue which is self-assigned by you but not fixed yet :)\nThat issue might be not hard to fix, but I didn't find how.So I need your help. IMO, we mignt need to change CI jobs configure in some files or images.\nAfter discussing with reviewer ( this might be better suited to merge but not for 1.7.0, as it touches a part of the code that is quite old and stable. 
In other words, we probably need to give this change some time to soak in the e2e system as opposed to put right away into the release.", "positive_passages": [{"docid": "doc-en-kubernetes-cb30776a47404d5bb2aa601323af66b62891c36869ba45c1efb1458e83d9f3cd", "text": "\"k8s.io/apiserver/pkg/registry/rest\" utilfeature \"k8s.io/apiserver/pkg/util/feature\" \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/helper\" \"k8s.io/kubernetes/pkg/api/service\" \"k8s.io/kubernetes/pkg/features\" \"k8s.io/kubernetes/pkg/registry/core/service/ipallocator\"", "commid": "kubernetes_pr_48418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-036fbe154616710f38e97e5f728b943492772b1362cb5fd735c4695cc31f0484", "query": "Similar to\nis a dup of this. PR is just a quick try. Will spend more time on it to cover all cases.\nstatus?\nI sent PR which is waiting for reviewing from . Also, that PR is blocked by issue which is self-assigned by you but not fixed yet :)\nThat issue might be not hard to fix, but I didn't find how.So I need your help. IMO, we mignt need to change CI jobs configure in some files or images.\nAfter discussing with reviewer ( this might be better suited to merge but not for 1.7.0, as it touches a part of the code that is quite old and stable. 
In other words, we probably need to give this change some time to soak in the e2e system as opposed to put right away into the release.", "positive_passages": [{"docid": "doc-en-kubernetes-22ec6fe0f7109f2204afbef68c05317a54fe99c66699a4aba7342fe601b14367", "text": "if test.name == \"Allocate specified ClusterIP\" && test.svc.Spec.ClusterIP != \"1.2.3.4\" { t.Errorf(\"%q: expected ClusterIP %q, but got %q\", test.name, \"1.2.3.4\", test.svc.Spec.ClusterIP) } if hasAllocatedIP { if helper.IsServiceIPSet(test.svc) { storage.serviceIPs.Release(net.ParseIP(test.svc.Spec.ClusterIP)) } } } } func TestInitNodePorts(t *testing.T) { storage, _ := NewTestREST(t, nil) nodePortOp := portallocator.StartOperation(storage.serviceNodePorts) defer nodePortOp.Finish() testCases := []struct { name string service *api.Service expectSpecifiedNodePorts []int }{ { name: \"Service doesn't have specified NodePort\", service: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{ { Name: \"port-tcp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolTCP, }, }, }, }, expectSpecifiedNodePorts: []int{}, }, { name: \"Service has one specified NodePort\", service: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{{ Name: \"port-tcp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolTCP, NodePort: 30053, }}, }, }, expectSpecifiedNodePorts: []int{30053}, }, { name: \"Service has two same ports with different protocols and specifies same NodePorts\", service: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{ { Name: \"port-tcp\", Port: 53, TargetPort: 
intstr.FromInt(6502), Protocol: api.ProtocolTCP, NodePort: 30054, }, { Name: \"port-udp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolUDP, NodePort: 30054, }, }, }, }, expectSpecifiedNodePorts: []int{30054, 30054}, }, { name: \"Service has two same ports with different protocols and specifies different NodePorts\", service: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{ { Name: \"port-tcp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolTCP, NodePort: 30055, }, { Name: \"port-udp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolUDP, NodePort: 30056, }, }, }, }, expectSpecifiedNodePorts: []int{30055, 30056}, }, { name: \"Service has two different ports with different protocols and specifies different NodePorts\", service: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{ { Name: \"port-tcp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolTCP, NodePort: 30057, }, { Name: \"port-udp\", Port: 54, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolUDP, NodePort: 30058, }, }, }, }, expectSpecifiedNodePorts: []int{30057, 30058}, }, { name: \"Service has two same ports with different protocols but only specifies one NodePort\", service: &api.Service{ ObjectMeta: metav1.ObjectMeta{Name: \"foo\"}, Spec: api.ServiceSpec{ Selector: map[string]string{\"bar\": \"baz\"}, Type: api.ServiceTypeNodePort, Ports: []api.ServicePort{ { Name: \"port-tcp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolTCP, NodePort: 30059, }, { Name: \"port-udp\", Port: 53, TargetPort: intstr.FromInt(6502), Protocol: api.ProtocolUDP, }, }, }, }, expectSpecifiedNodePorts: []int{30059, 30059}, }, } for _, test := range testCases 
{ err := storage.initNodePorts(test.service, nodePortOp) if err != nil { t.Errorf(\"%q: unexpected error: %v\", test.name, err) continue } serviceNodePorts := CollectServiceNodePorts(test.service) if len(test.expectSpecifiedNodePorts) == 0 { for _, nodePort := range serviceNodePorts { if !storage.serviceNodePorts.Has(nodePort) { t.Errorf(\"%q: unexpected NodePort %d, out of range\", test.name, nodePort) } } } else if !reflect.DeepEqual(serviceNodePorts, test.expectSpecifiedNodePorts) { t.Errorf(\"%q: expected NodePorts %v, but got %v\", test.name, test.expectSpecifiedNodePorts, serviceNodePorts) } } } func TestUpdateNodePort(t *testing.T) { func TestUpdateNodePorts(t *testing.T) { storage, _ := NewTestREST(t, nil) nodePortOp := portallocator.StartOperation(storage.serviceNodePorts) defer nodePortOp.Finish()", "commid": "kubernetes_pr_48418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-036fbe154616710f38e97e5f728b943492772b1362cb5fd735c4695cc31f0484", "query": "Similar to\nis a dup of this. PR is just a quick try. Will spend more time on it to cover all cases.\nstatus?\nI sent PR which is waiting for reviewing from . Also, that PR is blocked by issue which is self-assigned by you but not fixed yet :)\nThat issue might be not hard to fix, but I didn't find how.So I need your help. IMO, we mignt need to change CI jobs configure in some files or images.\nAfter discussing with reviewer ( this might be better suited to merge but not for 1.7.0, as it touches a part of the code that is quite old and stable. 
In other words, we probably need to give this change some time to soak in the e2e system as opposed to put right away into the release.", "positive_passages": [{"docid": "doc-en-kubernetes-e9b0df1ce1a76be3e78597dcda54e0cfa90d25105b96886dec63f06011f7b6a0", "text": "} for _, test := range testCases { err := storage.updateNodePort(test.oldService, test.newService, nodePortOp) err := storage.updateNodePorts(test.oldService, test.newService, nodePortOp) if err != nil { t.Errorf(\"%q: unexpected error: %v\", test.name, err) continue } _ = nodePortOp.Commit() serviceNodePorts := CollectServiceNodePorts(test.newService)", "commid": "kubernetes_pr_48418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-6f0afc6f5bb87e4e6a0796419b897e03b743cf6c06e15bb5c0128536770fe820", "query": " [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_edit_configmap.md?pixel)]() ", "commid": "kubernetes_pr_40421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99f20a6006e53b856138dd520ed013802aa15e162b7578d9adad8b350decc8ea", "query": "kubectl edit is not working on HEAD. It's working on 1.5.", "positive_passages": [{"docid": "doc-en-kubernetes-54e2e9be0fe36878c64c8ac989095887075506a28302915844fd6722a00bb138", "text": "concurrent-serviceaccount-token-syncs concurrent-service-syncs config-map config-map-data config-map-namespace config-sync-period configure-cloud-routes", "commid": "kubernetes_pr_40421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99f20a6006e53b856138dd520ed013802aa15e162b7578d9adad8b350decc8ea", "query": "kubectl edit is not working on HEAD. 
It's working on 1.5.", "positive_passages": [{"docid": "doc-en-kubernetes-fa0a5eb725ecaf710001c598561154179b69cd98dba6f4bf8c56e3f6761e5d64", "text": "\"describe.go\", \"drain.go\", \"edit.go\", \"edit_configmap.go\", \"exec.go\", \"explain.go\", \"expose.go\",", "commid": "kubernetes_pr_40421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99f20a6006e53b856138dd520ed013802aa15e162b7578d9adad8b350decc8ea", "query": "kubectl edit is not working on HEAD. It's working on 1.5.", "positive_passages": [{"docid": "doc-en-kubernetes-056d98202e728b37ece4c6567383d1140bbbbe4c90dfd1aa2ba75cbbcf10913a", "text": "Long: editLong, Example: fmt.Sprintf(editExample), Run: func(cmd *cobra.Command, args []string) { args = append([]string{\"configmap\"}, args...) err := RunEdit(f, out, errOut, cmd, args, options) cmdutil.CheckErr(err) }, ValidArgs: validArgs, ArgAliases: argAliases, } addEditFlags(cmd, options) cmd.AddCommand(NewCmdEditConfigMap(f, out, errOut)) return cmd } func addEditFlags(cmd *cobra.Command, options *resource.FilenameOptions) { usage := \"to use to edit the resource\" cmdutil.AddFilenameOptionFlags(cmd, options, usage) cmdutil.AddValidateFlags(cmd)", "commid": "kubernetes_pr_40421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99f20a6006e53b856138dd520ed013802aa15e162b7578d9adad8b350decc8ea", "query": "kubectl edit is not working on HEAD. It's working on 1.5.", "positive_passages": [{"docid": "doc-en-kubernetes-bc544f231b71f26863b8bd150b6292ea1a6a70cbf6bc961e84b879421f70f569", "text": "cmdutil.AddApplyAnnotationFlags(cmd) cmdutil.AddRecordFlag(cmd) cmdutil.AddInclude3rdPartyFlags(cmd) return cmd } func RunEdit(f cmdutil.Factory, out, errOut io.Writer, cmd *cobra.Command, args []string, options *resource.FilenameOptions) error {", "commid": "kubernetes_pr_40421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99f20a6006e53b856138dd520ed013802aa15e162b7578d9adad8b350decc8ea", "query": "kubectl edit is not working on HEAD. 
It's working on 1.5.", "positive_passages": [{"docid": "doc-en-kubernetes-b98ad6cc8053ffd0041e849411ca7b6a2ce878ff2a5ddf7417e8f2b44c138927", "text": " /* Copyright 2016 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package cmd import ( \"bytes\" \"fmt\" \"io\" \"os\" \"github.com/spf13/cobra\" \"k8s.io/apimachinery/pkg/apis/meta/v1\" cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" \"k8s.io/kubernetes/pkg/kubectl/cmd/util/editor\" \"k8s.io/kubernetes/pkg/kubectl/resource\" ) // NewCmdEditConfigMap is a macro command to edit config maps func NewCmdEditConfigMap(f cmdutil.Factory, cmdOut, errOut io.Writer) *cobra.Command { options := &resource.FilenameOptions{} cmd := &cobra.Command{ Use: \"configmap\", Aliases: []string{\"cm\"}, Short: \"Edit a config map object.\", Long: \"Edit and update a config map object\", Run: func(cmd *cobra.Command, args []string) { RunEditConfigMap(cmd, f, args, cmdOut, errOut, options) }, } addEditFlags(cmd, options) cmd.Flags().String(\"config-map-data\", \"\", \"If non-empty, specify the name of a data slot in a config map to edit.\") return cmd } // RunEditConfigMap runs the edit command for config maps. It either edits the complete map // or it edits individual files inside the config map. 
func RunEditConfigMap(cmd *cobra.Command, f cmdutil.Factory, args []string, cmdOut, errOut io.Writer, options *resource.FilenameOptions) error { dataFile := cmdutil.GetFlagString(cmd, \"config-map-data\") if len(dataFile) == 0 { // We need to add the resource type back on to the front args = append([]string{\"configmap\"}, args...) return RunEdit(f, cmdOut, errOut, cmd, args, options) } cmdNamespace, _, err := f.DefaultNamespace() if err != nil { return err } cs, err := f.ClientSet() if err != nil { return err } configMap, err := cs.Core().ConfigMaps(cmdNamespace).Get(args[0], v1.GetOptions{}) if err != nil { return err } value, found := configMap.Data[dataFile] if !found { keys := []string{} for key := range configMap.Data { keys = append(keys, key) } return fmt.Errorf(\"No such data file (%s), filenames are: %vn\", dataFile, keys) } edit := editor.NewDefaultEditor(os.Environ()) data, file, err := edit.LaunchTempFile(fmt.Sprintf(\"%s-edit-\", dataFile), \"\", bytes.NewBuffer([]byte(value))) defer func() { os.Remove(file) }() if err != nil { return err } configMap.Data[dataFile] = string(data) if _, err := cs.Core().ConfigMaps(cmdNamespace).Update(configMap); err != nil { return err } return nil } ", "commid": "kubernetes_pr_40421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-21ec941edfa0690a26ff395dea19961384462e3c6c950d124e082489371eeaf3", "query": " [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubectl/kubectl_run.md?pixel)]()", "commid": "kubernetes_pr_13917"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f731f964b0030896cdca6269d2746eb82544a41be31752c20e1617edbddc38f1", "query": "