{"_id":"q-en-kubernetes-91fe6ceffd0a40ec3ded85dd062721e869736715f793aa77d338c920cf20d98f","text":"cAdvisor runs inside docker containers, but I don't believe it's correct to show the Kubelet and Proxy processes as inside the docker box on the minion. Also, shouldn't the internet firewall really load balance across minion proxies instead of being connected to just one SPOF minion, treating that connection more like a logical connection across all minion proxies?\nThis is the desired state, which is admittedly not the current state. Yeah, this is a good point. k8s currently relies on its environment for load balancing. Maybe or can speak to the desired state here.\nWe should just review the diagram and make it correctly reflect the state of 1.0."} {"_id":"q-en-kubernetes-c2e5391174f153a1c9277af876ed0c0c2fa7a8f69a8e1b77e99161308c31e686","text":"I can see that this message is flooding my minion logs, so I was curious what this config file should look like. I was trying to find some example, but with no luck. Also, is this config file created at runtime? What actually creates it? Shouldn't we just give up when the config file does not exist? Also, there are better (maybe) ways to check whether the file exists: I mean you could use inotify () instead of an infinite loop + sleep. I'm happy to make a patch if you think it is a good idea. PS: I know that you put a 5 second timeout between retries.\nThanks for the report. I think this is just a bug -- we should check whether it exists and not pollute the log. I'm pretty sure we don't currently even use that file. Also, we should NOT be reading the proxy's config from a file in /tmp where anybody can write it. Bad default location.\nProxy config sounds like an /etc thing, same for the kubelet's container manifests from disk.\nAre we interested in polling for configuration file changes? Or do we want to read the settings just once, as an ordinary unix binary?\nYes, polling is a requirement. 
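The fix described in this thread — check whether the file exists and stop polluting the log — could be sketched as logging only on state transitions rather than on every poll cycle. This is an illustrative sketch, not the actual kube-proxy code; the path, interval, and function names are all assumptions:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// fileMissing reports whether the path is absent, as opposed to present
// or failing with some unrelated error.
func fileMissing(path string) bool {
	_, err := os.Stat(path)
	return os.IsNotExist(err)
}

// pollConfig checks for the config file on an interval but logs the
// "missing" condition only when the state changes, so an absent file
// does not flood the logs with the same message every few seconds.
func pollConfig(path string, interval time.Duration, stop <-chan struct{}) {
	wasMissing := false
	for {
		missing := fileMissing(path)
		if missing && !wasMissing {
			fmt.Printf("config file %s not found; will keep polling\n", path)
		} else if !missing && wasMissing {
			fmt.Printf("config file %s appeared; reloading\n", path)
		}
		wasMissing = missing
		select {
		case <-stop:
			return
		case <-time.After(interval):
		}
	}
}

func main() {
	stop := make(chan struct{})
	// Hypothetical path for illustration only.
	go pollConfig("/tmp/proxy_config", 5*time.Second, stop)
	time.Sleep(10 * time.Millisecond)
	close(stop)
}
```

The inotify route suggested above would avoid polling entirely, but needs a platform-specific watcher (on Linux, the inotify syscalls exposed via golang.org/x/sys/unix); transition-only logging at least stops the flooding while keeping the simple poll loop.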
This is a live config file like the kubelet's machine files."} {"_id":"q-en-kubernetes-7240f013dec109f7f6f3db86b7f71f4f168b8139d5a4174b363b9b39856ae102","text":"Had a report of this over email. Filing this issue so we don't forget to investigate.\nCan't reproduce."} {"_id":"q-en-kubernetes-da9d81cdbfc64fa83c7a8102d516c422ecb439ff0b0723a5f46e6b4cdef1149c","text":"The hash function generates names with upper-case letters, but the validation only allows lower-case. First, we clearly need a test for this. Second, we need to fix it. Should we allow upper-case characters (which the RFC does allow for DNS labels) or should we fix the hash? Allowing upper-case seems attractive, except that the RFC also says that case doesn't matter, but should be preserved \"wherever possible\". So we really have three choices, I see. a) only allow lower-case b) allow upper-case and always use caseless comparisons everywhere c) allow upper-case, but normalize to lower-case since it was your hash code\nRe: test - sounds like a config source -config integration test, which I'll add. Re: hash - would argue we should fix the hash to generate lowercase for now. I think, but will double check, that the collision probability is still improbably low for 63 lowercase chars. I can do that shortly.\nAnd a validation test on all sources.\nLowercase would require base32, rather than base64, no? On Aug 3, 2014 8:20 AM, \"Clayton Coleman\" wrote:\nYeah, although I don't believe there's a difference to the outcome between base32 and base64(lowercase) given an appropriate hash function.\nWell, base32 is reversible and a modified base64 is not :) On Aug 3, 2014 9:00 AM, \"Clayton Coleman\" wrote:\nRemind me again why reversibility is important? We discussed it in another issue but I can't find the context.\nI'm going to change the generated name to which even with 2^8 files per host and a thousand hosts should be well below 0.% chance of collision\nIt's only important if we want to use it. We don't AFAIK. 
On Aug 3, 2014 9:34 AM, \"Clayton Coleman\" wrote:\nFix makes them unique, somewhat traceable (first 15 safe chars of the name), and generates base32 output."} {"_id":"q-en-kubernetes-c5eabf1d549f41dc6ff35936ea16374bcf7ff68f6c6211b1fe87c9dc27d00d31","text":"It's causing problems and is kind of GCE-dependent anyway; we should just use the kubecfg binary and document the environment variables it uses.\nI think it's beneficial to keep something, but remove the GCE-dependent pieces, i.e. the \"external-ip\" piece.\nI put together some instructions here:\nWe can remove this when kubecfg is deprecated.\nReopened since it still needs to be removed. /cc\nDoh. Didn't mean to declare this fixed in that PR.\nClosing this (again) as it is a dupe of"} {"_id":"q-en-kubernetes-9d9eb5f8f6da0df47984d5529d5299f1f0821654a05b1fd79fc26d6b03acedb8","text":"Salt was failing to install on the master vagrant instance. The change noted by the FIXME solved it.\nIt's possible the required merge in the upstream repo occurred, will take a look.\nI just attempted to install a new cluster from upstream/master and was not able to reproduce the issue described. Was there an issue with curl-ing that specific version of salt-bootstrap? I guess more details on what the failure you encountered appeared as would help know how to proceed.\nI will submit a PR to revert back to using the default non-versioned bootstrap URL. Salt-stack PR: Fixed the errors in install that were previously encountered around salt-api not being found.\nSee , thanks!"} {"_id":"q-en-kubernetes-009e7492cbeef180b9883cfe661e04a8dca4a46ed7fabb95f8d66a2876df42c7","text":"The Kubelet REST endpoint has turned into a complicated switch-case code block that is harder to maintain over time. 
The Kubelet should move to a http.ServeMux pattern and register handler functions to a pattern.\nI am working on a PR to submit in next couple of days that will make this change, so feel free to assign to me if possible.\nTagging because I asked for this in one of his PRs\nThis one is also under my radar. If no one had time, feel free to assign it to me too.\nI have a branch in progress. Feel free to offer feedback on it when I submit. Sent from my iPhone"} {"_id":"q-en-kubernetes-ba201f6cc37fd09086cd86a1a7a1d3d232b0107e0e285e64b6f2a09676e0b920","text":"In the guestbook example, the redis slaves are instructed to connect to the redis-master using the proxy provided by the minion on which they reside. The minion is designated by its hostname \"kubernetes-minion-N\". The container cannot resolve this name to an IP address and so slave connections fail. 1) create vagrant cluster 2) create redis-master pod 3) create redis-master service 4) create redis-slave replicationController Verify redis-slave pod locations: list pods ID Image(s) Host Labels Status redis-master-2 dockerfile/redis kubernetes-minion-1/10.245.2.2 name=redis-master Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-1/10.245.2.2 name=redisslave,replicationController=redisSlaveController Running -3839-11e4-a922- brendanburns/redis-slave kubernetes-minion-3/10.245.2.4 name=redisslave,replicationController=redisSlaveController Running Log onto a minion containing a slave (note: vagrant hostnames are minion-N, not kubernetes-minion-N) Find the slave container docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES brendanburns/redis-slave:latest /bin/sh -c About an hour ago Up About an hour k8s--slave.---3839-11e4-a922- docker logs | tail [7] 09 Sep 17:27:08.428 Connecting to MASTER kubernetes-minion-1: [7] 09 Sep 17:27:08.430 # Unable to connect to MASTER: No such file or directory [7] 09 Sep 17:27:09.435 Connecting to MASTER kubernetes-minion-1: sudo nsenter -m -u -n -i -p -t 
$(docker inspect --format '{{ .State.Pid }}' ) /bin/bash [ root ]$ ping kubernetes-minion-1 ping: unknown host kubernetes-minion-1 [ root ]$ cat /etc/hosts 244.1.4 -3839-11e4-a922- 127.0.0.1 localhost ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters [ root ]$ cat nameserver 10.0.2.3 cat /proc/1/environ HOME=/rootPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=-3839-11e4-a922-0800279696e1REDISMASTERSERVICEPORT=10000REDISMASTERPORT=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCP=tcp://kubernetes-minion-1:10000REDISMASTERPORT6379TCPPROTO=tcpREDISMASTERPORT6379TCPPORT=10000REDISMASTERPORT6379TCPADDR=kubernetes-minion-1SERVICE_HOST=kubernetes-\nTim - are you the right guy to handle this? I think we need to resolve the name of the service host and only give IPs to the minion in env variables. We can't assume the containers have the same DNS set up as the host system.\nTo be clear -- this example is for the checked in Vagrant stuff? If that is the case would be the guy? I'm trying to catch up on IRC discussions from earlier today.\nI can look at this, but there are two possible solutions. The first solution is in the vagrant cluster (or any other local vm setup) we could run a dns name server on the kubernetes-master that can resolve each minion by host name. We would then pass a -dns option to the docker daemon to specify that name server to use in each container. The second solution is that we pass IP address instead of host name and make no assumption or override on the containers specified name server. Is there any obvious negative to this solution? Has much thought been done on the expected Nameserver used by containers spawned by our service? - if you can weigh in, I would be happy to resolve using either approach. My preference is solution two.\nSo, the vagrant case is a little special as there is no sane DNS set up by default. 
Another suggestion (3) would be to revert the stuff and just devolve to using ip addresses directly. I would do (3) before (2) before (1). But I'm cool with any of them. As for the nameserver of the containers that are spawned -- docker currently just takes the of the host and copies that into the container. It doesn't do that for . that is the core of the issue here. If/when we do offer some level of split horizon DNS (I'd love to) we can start exposing an extra layer of DNS on top of whatever is configured for the host. That is part of the \"ip per service portal\" stuff that is being talked about a lot right now.\nIf I do (3) there is a piece of code in pod storage that mangles my hostname where I return 10.245.x.x and it asks the cloud provider to resolve the IP address for \"10\". Since other providers appear to also be returning IP as host, I will look to change this so kubernetes makes no modifications to hostnames that it's given. Look for PR tomorrow.\nyou're right I'd missed that point. This really only applies to the Vagrant cluster and so a fix to the vagrant setup is sufficient. I also agree with your priorities. Injecting a line into /etc/hosts (if that's reasonable) is the simplest solution. Either reverting or allowing a switch to pass the minion public IP rather than hostname is next. Adding a DNS service to the vagrant master and forcing DNS resolution to the master host is most complicated and least preferable. Thanks both of you for helping me think this through better. Mark\nI don't grok what the difference between 2 and 3 is? I sent a PR to docker a while back () to insert arbitrary lines to /etc/hosts. I still think it is a reasonable thing to want to do.\n- difference between (2) and (3) is that in the past, the vagrant cloud provider was returning the IP address as the literal hostname for the minion. 
I had changed this after noticing that it had some ugly side-effects in the CLI, where a call to list pods would report pod ip addresses as IP / nil. The issue (which I now believe is a bug that I want to change independent of what we do here) is that the code at this line mangles the hostname returned by the cloudprovider: This makes using the IP address as a hostname (10.245.2.2) to have the ugly side-effect of the code then asking the cloudprovider for the IP address of hostname \"10\", and then the cloudprovider having no idea how to proceed. Independent of what is done in this issue, I want to submit a PR tomorrow to stop that code from manipulating the reported hostname that was provided by the cloudprovider before asking that same cloudprovider to return the IP address using a potentially different input value. so with that backlog (3) means revert vagrant cloudprovider to use IPAddress as hostname, which means when we pass in what we believe is hostname in this environment, it will just work because it is in-fact the IPAddress. (2) means do not pass in hostname but pass in IPAddress by changing more core k8s code. all that being said, there is an option (4) that I am intrigued by, - after following your PR through its final conclusion, it looks like as of docker 1.2.0 you can change the /etc/hosts file of a running container. this introduces an option where the kubelet injects the /etc/hosts file of minion into the running container so it has the /etc/hosts values of the minion. I would need to play with that support a little more to know pros/cons more. I think longer term, we need a plan on what nameserver our containers are expecting to reference, and if they are inherently different in some way. 
I have run into the issue a number of times where doing things like running docker in docker causes the nested container to default to the nameserver 8.8.8.8, which happens to be blocked on the Red Hat network ;-) For now, to unblock, I will probably pursue (3) and fix the hostname bug referenced here at the same time. It does make the CLI uglier when working with this environment, but not a huge deal, at least it will function.\nThanks both for the complete explanation of what's happening and for the suggested fixes. I think in the long run using hostnames in SERVICE_HOST and ensuring resolution either through IP injection into the container's /etc/hosts or by providing DNS are the best options. I'm not sure if there might still be cases where resolution of SERVICE_HOST would still be best done by presenting the IP address or not."} {"_id":"q-en-kubernetes-fb3b457ee434f4c000eee30c833933190e7e6916431cae811f57cb551a433ff1","text":"Kubernetes uses the kubernetes/pause image to create the network container. The problem is that whatever signal the pause container receives, it will exit with code 2. Instead, on a normal exit it should exit with code 0; when it exits due to a port-binding issue, it should exit with code -1.\nThe port-binding issues will occur before the pause image gets invoked, but the pause image itself should exit 0.\nI filed to docker to capture the port-binding issue there. Port-binding is not my concern since we validate the port to avoid the conflict anyway. The concern I am having is the PreStart we plan to introduce between docker container creation and start ().\nThere seems to be another issue - quick testing shows that if there is a conflict with a port allocated OUTSIDE of docker (e.g. the kube-proxy), Docker will accept the container and run it, but the host port doesn't work at all, and there's no way to tell except trawling logs. , Dawn Chen wrote:\nIs there anything we could do to help go through, to eliminate the need for the network container?\nGood catch. 
But following your steps above, my quick testing shows that docker container is running, but doesn't work at all. Also daemon logs doesn't have any error information. Will file another issue to docker: 1) # netstat -lnt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:22 0.0.0.0: LISTEN tcp6 0 0 :::4194 ::: LISTEN tcp6 0 0 ::: ::: LISTEN tcp6 0 0 :::8080 ::: LISTEN tcp6 0 0 :::22 :::* LISTEN 2) # docker run -p :80 -d dockerimages/apache 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11 3) # docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES dockerimages/apache:latest \"apache2 -DFOREGROUN 6 seconds ago Up 5 seconds 0.0.0.0:-80/tcp jollynewton 4) # docker logs AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.11. Set the 'ServerName' directive globally to suppress this message AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.11. Set the 'ServerName' directive globally to suppress this message [Tue Sep 30 00:03:16. 2014] [core:warn] [pid 1:tid ] AH00098: pid file overwritten -- Unclean shutdown of previous Apache run? [Tue Sep 30 00:03:16. 2014] [mpmevent:notice] [pid 1:tid ] AH00489: Apache/2.4.7 (Ubuntu) configured -- resuming normal operations [Tue Sep 30 00:03:16. 
2014] [core:notice] [pid 1:tid ] AH00094: Command line: 'apache2 -D FOREGROUND' 5) # cat | grep [] +job log(create, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) [] -job log(create, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) = OK (0) [info] POST /v1.14/containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/start [] +job start(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] +job allocateinterface(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job allocateinterface(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] +job allocateport(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job allocateport(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] +job log(start, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) [] -job log(start, 6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11, dockerimages/apache:latest) = OK (0) [info] GET /containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/json [] +job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job start(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [] -job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [info] GET /containers/6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11/json [] +job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) [] -job containerinspect(6f7309782be63f95dbc3f87bded69637e3299cd255668f1f171dc5eef28f5a11) = OK (0) [info] GET /v1.14/containers//json [] +job containerinspect() [] -job containerinspect() = OK (0) [info] GET /containers//logs?stderr=1&stdout=1&tail=all [] +job containerinspect() [] -job containerinspect() = OK (0) 
[] +job logs() [] -job logs() = OK (0)\nI agreed we should help docker/docker go through, so that the network container can be gone forever, which makes our system, especially the kubelet, much simpler. But that doesn't help with docker/docker and a new issue found by following.\nhave you tried this on docker master? because it should have been fixed by , just want to prevent duplicate issues\nI just rebuilt docker from HEAD, and tested it with the steps I posted above. The problem is fixed: 37a039245cd9faba6ae900cc463313eae77c9c9dce589616ec992a7e8eecb458 2014/09/30 16:29:20 Error response from daemon: Cannot start container 37a039245cd9faba6ae900cc463313eae77c9c9dce589616ec992a7e8eecb458: Bind for 0.0.0.0:8080 failed: port is already allocated But another issue I filed (docker/docker) still exists. Thanks!\ngreat, just wanted to make sure! and yes I can see if we want to change the existing functionality for docker/docker"} {"_id":"q-en-kubernetes-c13ad6ae8245ad270ca4d0b7b122e9cd5dbf6367a9eed33f85c1a7904dfa7c4f","text":"It's not well tested nor widely used; so it should either be removed or have tests and uses."} {"_id":"q-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15","text":"Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find just the user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. 
User docs would stay in the top-level docs dir.\nmove to"} {"_id":"q-en-kubernetes-12a3ede7d945c55e14c6bf835d8548e1819c9a378123160dd4714cd3f7d4519c","text":"On Fedora20 I have the following error: [fedora ~]$ sudo kube-proxy.service - Kubernetes Kube-Proxy Server Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled) Active: active (running) since mer 2014-10-29 15:15:21 UTC; 24ms ago Docs: Main PID: (kube-proxy) CGroup: /usr/bin/kube-proxy --logtostderr=true --v=0 --etcdservers=http://fed-master:4001 ott 29 15:15:21 fed-minion systemd[1]: Started Kubernetes Kube-Proxy Server. ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: get registry/services/specs...:4001] ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: Connecting to etcd: attempt...=false ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: http://fed- GET ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: recv.response.fromhttp://fe...=false ott 29 15:15:21 fed-minion kube-proxy[]: I1029 15:15:21. ] etcd DEBUG: Hint: Some lines were ellipsized, use -l to show in full. kubelet.service - Kubernetes Kubelet Server Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled) Active: activating (auto-restart) (Result: exit-code) since mer 2014-10-29 15:15:21 UTC; 29ms ago Docs: Main PID: (code=exited, status=2) ott 29 15:15:21 fed-minion systemd[1]: kubelet.service: main process exited, code=exited, status=2/INVALIDARGUMENT ott 29 15:15:21 fed-minion systemd[1]: Unit kubelet.service entered failed state. 
docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled) Active: active (running) since mer 2014-10-29 15:15:21 UTC; 28ms ago Docs: Main PID: (docker) CGroup: /usr/bin/docker -d -H fd:// --selinux-enabled ott 29 15:15:21 fed-minion docker[]: 2014/10/29 15:15:21 docker daemon: 1.3.0 /1.3.0; execdriver: native; graphdriver: ott 29 15:15:21 fed-minion docker[]: [] +job serveapi(fd://) ott 29 15:15:21 fed-minion docker[]: [info] Listening for HTTP on fd () ott 29 15:15:21 fed-minion docker[]: [] +job initnetworkdriver() ott 29 15:15:21 fed-minion docker[]: [] -job init_networkdriver() = OK (0) ott 29 15:15:21 fed-minion docker[]: [info] Loading containers: ott 29 15:15:21 fed-minion docker[]: [info] : done. ott 29 15:15:21 fed-minion docker[]: [] +job acceptconnections() ott 29 15:15:21 fed-minion docker[]: [] -job acceptconnections() = OK (0) ott 29 15:15:21 fed-minion systemd[1]: Started Docker Application Container Engine. When I run netstat -tulnp (cadvisor)\" I don't see anything in output.\nExit code of 2 implies glog.Fatalf to me. What are the contents of the logs for kubelet and kube-proxy? What was the command line of kubelet?\nPlease reopen if you have more details."} {"_id":"q-en-kubernetes-ddb3c77615a6f61140a3f455ef45c381d55feb4351606a018aa9cb9944c9a41d","text":"introduces PullPolicies (PullAlways, PullNever, PullIfNotPresent), but by default, the policy is PullAlways, which bite docker repository again today. The related #google-containers IRC conversation can be found at: We should Introduce a new policy to cap the number of attempts. Maybe getting rid of PullAlways completely? cc/\nI would like to change the default one to PullIfNoPresent for now. We can continue discussing the right policies.\nSuggestions: PullIfNotPresent should be our default? PullAlways needs to be nerfed. Suggestion: max N pull attempts per pod config change, with