| prompt | response |
|---|---|
<p>I am running some internal services and also some customer facing services in one K8s cluster. The internal ones should only be accessible from some specific ips and the customer facing services should be accessible worldwide.</p>
<p>So I created my Ingresses and an nginx Ingress Controller and some K8s LoadBalancer Services with the proper ip filters. </p>
<p>Now I see that those firewall rules in GCP are created behind the scenes. But they conflict, and the "customer facing" firewall rules overrule the "internal" ones, so everything in my K8s cluster is visible worldwide.</p>
<p>The use case doesn't sound that exotic to me. Do you have an idea how to get some parts of a K8s cluster protected by firewall rules while other parts stay accessible everywhere?</p>
| <p>As surprising as it is, the L7 (http/https) load balancer in GCP created by a Kubernetes Ingress object <strong>has no IP whitelisting capabilities</strong> by default, so what you described is working as intended. You can filter on your end using the <code>X-Forwarded-For</code> header (see Target Proxies under <a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="nofollow noreferrer">Setting Up HTTP(S) Load Balancing</a>).</p>
<p>Whitelisting will be available through <a href="https://cloud.google.com/armor/docs/security-policy-concepts" rel="nofollow noreferrer">Cloud Armor</a>, which is in private beta at the moment.</p>
<p>To make this situation slightly more complicated: the L4 (tcp/ssl) load balancer in GCP created by a Kubernetes LoadBalancer object (so, not an Ingress) does have IP filtering capability. You simply set <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service" rel="nofollow noreferrer"><code>.spec.loadBalancerSourceRanges</code></a> on the Service for that. Of course, a Service will not give you url/host based routing, but you can achieve that by deploying an ingress controller like <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx-ingress</a>. If you go this route you can still create Ingresses for your internal services; you just need to annotate them so the new ingress controller picks them up. This is a fairly standard solution, and it is actually cheaper than creating L7s for each of your internal services (you will only have to pay for one forwarding rule for all of your internal services).</p>
<p>(By "internal services" above I meant services you need to be able to access from outside of the cluster itself but only from specific IPs, say a VPN, office, etc. For services you only need to access from inside the cluster you should use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer"><code>type: ClusterIP</code></a>)</p>
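<p>As a sketch of the second approach (the service name, selector, and CIDR below are made-up example values), a restricted L4 Service could look like this:</p>

```yaml
# Hypothetical Service fronting an internal nginx-ingress controller;
# only the listed source ranges may reach the load balancer.
kind: Service
apiVersion: v1
metadata:
  name: internal-ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: internal-ingress-nginx
  ports:
  - name: https
    port: 443
  loadBalancerSourceRanges:
  - 203.0.113.0/24  # e.g. your office/VPN range (example value)
```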
|
<p>I'm encountering a situation where pods are occasionally getting evicted after running out of memory. Is there any way to set up some kind of alerting where I can be notified when this happens?</p>
<p>As it is, Kubernetes keeps doing its job and re-creating pods after the old ones are removed, and it's often hours or days before I'm made aware that a problem exists at all.</p>
| <p>GKE exports Kubernetes Events (<code>kubectl get events</code>) to Stackdriver Logging, to the "GKE Cluster Operations" table:
<a href="https://i.stack.imgur.com/SsvHR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SsvHR.png" alt=""></a></p>
<p>Next, write a query specifically targeting evictions (the query I pasted below might not be accurate):</p>
<p><a href="https://i.stack.imgur.com/AbhF7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AbhF7.png" alt="enter image description here"></a></p>
<p>Then click the "CREATE METRIC" button.</p>
<p>This will create a Log-based Metric. On the left sidebar, click "Logs-based metrics" and click the "Create alert from metric" option on the context menu of this metric:</p>
<p><a href="https://i.stack.imgur.com/BxsKn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BxsKn.png" alt="enter image description here"></a></p>
<p>Next, you'll be taken to Stackdriver Alerting portal. You can set up alerts there based on thresholds etc.</p>
|
<p>I really like the kubernetes Ingress schematics. I currently run <code>ingress-nginx</code> controllers to route traffic into my kubernetes pods.</p>
<p>I would like to use this to also route traffic to 'normal' machines: ie vm's or physical nodes that are not part of my kubernetes infrastructure. Is this possible? How?</p>
| <p>In Kubernetes you can define an <code>ExternalName</code> service, in which you specify an FQDN pointing to an external server.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
</code></pre>
<p>Then you can use <code>my-service</code> in your nginx rule.</p>
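<p>For example, an nginx-ingress rule routing to that service could look like the sketch below (the host and port are assumptions; note that nginx-ingress resolves the <code>ExternalName</code> DNS name as the upstream, which a plain cloud Ingress may not do):</p>

```yaml
# Hypothetical Ingress sending db.example.com traffic to the external server
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: db.example.com   # example hostname
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80   # example port on the external server
```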
|
<p>I am using <code>launcher.gcr.io/google/jenkins2</code> to run jenkins in gcp kubernetes engine.</p>
<p>Everything seems ok except that I get <code>Could not initialize class org.jfree.chart.JFreeChart</code> error for every chart that jenkins attempt to draw. I googled the error and almost everyone solves that with adding <code>-Djava.awt.headless=true</code>. As you can guess I already tried that and it does not work.</p>
<p>Ideas?</p>
| <p>One other possible solution/workaround is seen in <a href="https://issues.jenkins-ci.org/browse/JENKINS-39636" rel="nofollow noreferrer">JENKINS issue 39636</a>:</p>
<blockquote>
<p>I installed <code>libxext6 libxrender1 fontconfig libfontconfig</code> but it didn't help. After that I also installed <code>libjfreechart-java</code> but I still have the same problem.</p>
<p>Commenting out the <code>assistive_technologies</code> line in <code>/etc/java-8-openjdk/accessibility.properties</code> solved it.</p>
</blockquote>
<p>You can see that recommendation in <a href="https://stackoverflow.com/users/433558/tianon">tianon</a>'s <a href="https://stackoverflow.com/questions/21841269/performance-graphs-on-jenkins-causing-could-not-initialize-class-org-jfree-char#comment76900079_41428450">comment of that answer</a>:</p>
<blockquote>
<p>In my case it ended up being <a href="https://bugs.debian.org/798794" rel="nofollow noreferrer">bugs.debian.org/798794</a> (from "<a href="https://askubuntu.com/a/723503/5470">Assistive technology not found error while building <code>aprof-plot</code></a>").<br />
Adding "<code>RUN sed -i 's/^assistive_technologies=/#&/' /etc/java-8-openjdk/accessibility.properties</code>" to my <code>Dockerfile</code> fixed it. :)</p>
</blockquote>
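<p>Putting that workaround into context, a minimal <code>Dockerfile</code> sketch for the image from the question could look like this (the <code>sed</code> line is the one from the quote above; the rest is an assumption about your setup):</p>

```dockerfile
FROM launcher.gcr.io/google/jenkins2
# Work around the assistive_technologies bug (Debian bug 798794) that
# breaks JFreeChart rendering in headless Java
RUN sed -i 's/^assistive_technologies=/#&/' /etc/java-8-openjdk/accessibility.properties
```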
|
<p>I got a yaml file for specifying an ssl certificate (provided by aws certificate manager) on the load balancer for a kubernetes deployment. But we are running our kubernetes cluster in an aws china account, where the certificate manager option is not available. Now, if I have an SSL certificate provided by GoDaddy, how can I install it? Are there any other ways to install the certificate rather than on the load balancer? Can I install it in my tomcat container itself and build a new image with it?</p>
| <p>As far as I know, you cannot set up an ELB deployed with a kubernetes <code>Service</code> to use a certificate which is NOT an ACM certificate. In fact, if you take a look at the possible annotations <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws.go#L72" rel="nofollow noreferrer">here</a> you'll see that the only annotation available to select a certificate is <code>service.beta.kubernetes.io/aws-load-balancer-ssl-cert</code>, and the documentation for that annotation says the following:</p>
<pre><code>ServiceAnnotationLoadBalancerCertificate is the annotation used on the
service to request a secure listener. Value is a valid certificate ARN.
For more, see http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
CertARN is an IAM or CM certificate ARN, e.g. arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
</code></pre>
<p>As you suggest, you can certainly terminate SSL inside your Kubernetes Pod and make the ELB a simple TCP proxy.</p>
<p>In order to do so, you need to add the following annotation to your <code>Service</code> manifest:</p>
<pre><code>service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'tcp'
</code></pre>
<p>Also, you will need to forward both your http and https ports in order to handle the http to https redirect correctly inside your pod.</p>
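<p>A sketch of such a <code>Service</code> (the name, selector, and target ports are assumptions) could look like this:</p>

```yaml
# Hypothetical L4 ELB fronting a pod that terminates TLS itself
kind: Service
apiVersion: v1
metadata:
  name: tomcat
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'tcp'
spec:
  type: LoadBalancer
  selector:
    app: tomcat
  ports:
  - name: http
    port: 80
    targetPort: 8080   # example container port
  - name: https
    port: 443
    targetPort: 8443   # example container port serving TLS
```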
<p>If you need more specific help, please post your current manifest.</p>
|
<p>I've set up an application stack running on Google Kubernetes Engine + Google Cloud SQL. When developing locally, I would like my application to connect to a postgres database server running <em>outside</em> the cluster to simulate the production environment.</p>
<p>It seems that the way to do this is by defining an external endpoint as described here: <a href="https://stackoverflow.com/questions/43354167/minikube-expose-mysql-running-on-localhost-as-service">Minikube expose MySQL running on localhost as service</a></p>
<p>Unfortunately, I am not able to specify "127.0.0.1" as the Endpoint IP address:</p>
<pre><code>kubectl apply -f kubernetes/local/postgres-service.yaml
service "postgres-db" unchanged
The Endpoints "postgres-db" is invalid: subsets[0].addresses[0].ip:
Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
</code></pre>
<p>So I am forced to bind postgres to my actual machine address.</p>
<p>It seems like there MUST be a way to map a port from my localhost into the local kubernetes cluster, but so far I can't find a way to do it. </p>
<p>Anybody know the trick? Or alternatively, can someone suggest an alternative solution which doesn't involve running postgres inside the cluster?</p>
| <p>This may not be an answer for Minikube, but I ended up here, so I'll share what I did for Kubernetes in Docker for Mac.</p>
<p>I added a service like this for PostgreSQL:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: postgres
  namespace: default
spec:
  type: ExternalName
  # https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
  externalName: host.docker.internal
  ports:
  - name: port
    port: 5432
</code></pre>
<p>My application was able to connect to the locally running postgres server with this setup using the domain name <code>postgres</code>. The Postgres server can listen to <code>127.0.0.1</code> with this setup.</p>
|
<p>I have a local kubernetes cluster setup using the edge release of docker (mac). My pods use an env var that I've defined to be my DB's url. These env vars are defined in a config map as:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  DB_URL: postgres://user@localhost/my_dev_db?sslmode=disable
</code></pre>
<p>What should I be using here instead of localhost? I need this env var to point to my local dev machine.</p>
| <p>You can use the private LAN address of your computer, but please ensure that your database software is listening on all network interfaces and that no firewall is blocking incoming traffic.</p>
<p>If your LAN address is dynamic, you could use an internal DNS name pointing to your computer if your network setup provides one.</p>
<p>Another option is to run your database inside the kubernetes cluster: this way you could use its <code>service</code> name as the hostname.</p>
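<p>For example, with a hypothetical LAN address of <code>192.168.1.50</code>, the ConfigMap from the question would become:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  # example value; substitute your machine's actual LAN IP or DNS name
  DB_URL: postgres://user@192.168.1.50/my_dev_db?sslmode=disable
```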
|
<p>I am trying to setup a spark cluster on k8s. I've managed to create and setup a cluster with three nodes by following this article:
<a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a></p>
<p>After that when I tried to deploy spark on the cluster it failed at spark submit setup.
I used this command:</p>
<pre><code>~/opt/spark/spark-2.3.0-bin-hadoop2.7/bin/spark-submit \
--master k8s://https://206.189.126.172:6443 \
--deploy-mode cluster \
--name word-count \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=docker.io/garfiny/spark:v2.3.0 \
--conf spark.kubernetes.driver.pod.name=word-count \
local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre>
<p>And it gives me this error:</p>
<pre><code>Exception in thread "main" org.apache.spark.SparkException: The Kubernetes mode does not yet support referencing application dependencies in the local file system.
at org.apache.spark.deploy.k8s.submit.DriverConfigOrchestrator.getAllConfigurationSteps(DriverConfigOrchestrator.scala:122)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:229)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:227)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2585)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:227)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:192)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
</code></pre>
<pre><code>2018-06-04 10:58:24 INFO ShutdownHookManager:54 - Shutdown hook called
2018-06-04 10:58:24 INFO ShutdownHookManager:54 - Deleting directory /private/var/folders/lz/0bb8xlyd247cwc3kvh6pmrz00000gn/T/spark-3967f4ae-e8b3-428d-ba22-580fc9c840cd
</code></pre>
<p>Note: I followed this article for installing spark on k8s.
<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html</a></p>
| <p>The error message comes from <a href="https://github.com/apache/spark/commit/5d7c4ba4d73a72f26d591108db3c20b4a6c84f3f" rel="noreferrer">commit 5d7c4ba4d73a72f26d591108db3c20b4a6c84f3f</a> and includes the page you mention: "<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#dependency-management" rel="noreferrer">Running Spark on Kubernetes</a>", with the mention that you indicate:</p>
<pre class="lang-scala prettyprint-override"><code>// TODO(SPARK-23153): remove once submission client local dependencies are supported.
if (existSubmissionLocalFiles(sparkJars) || existSubmissionLocalFiles(sparkFiles)) {
throw new SparkException("The Kubernetes mode does not yet support referencing application " +
"dependencies in the local file system.")
}
</code></pre>
<p>This is described in <a href="https://issues.apache.org/jira/browse/SPARK-18278" rel="noreferrer">SPARK-18278</a>:</p>
<blockquote>
<p>it wouldn't accept running a local: jar file, e.g. <code>local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0-k8s-0.5.0.jar</code>, on my spark docker image (the <code>allowsMixedArguments</code> and <code>isAppResourceReq</code> booleans in <code>SparkSubmitCommandBuilder.java</code> get in the way).</p>
</blockquote>
<p>And this is linked to <a href="https://github.com/kubernetes/kubernetes/issues/34377" rel="noreferrer">kubernetes issue 34377</a></p>
<p>The <a href="https://issues.apache.org/jira/browse/SPARK-22962" rel="noreferrer">issue SPARK-22962 "Kubernetes app fails if local files are used"</a> mentions:</p>
<blockquote>
<p>This is the resource staging server use-case. We'll upstream this in the 2.4.0 timeframe.</p>
</blockquote>
<p>In the meantime, that error message was introduced in <a href="https://github.com/apache/spark/pull/20320" rel="noreferrer">PR 20320</a>.</p>
<p>It includes the comment:</p>
<blockquote>
<p>The manual tests I did actually use a main app jar located on gcs and http.<br>
To be specific and for record, I did the following tests:</p>
<ul>
<li>Using a gs:// main application jar and a http:// dependency jar. Succeeded.</li>
<li>Using a https:// main application jar and a http:// dependency jar. Succeeded.</li>
<li>Using a local:// main application jar. Succeeded.</li>
<li>Using a file:// main application jar. Failed.</li>
<li>Using a file:// dependency jar. Failed.</li>
</ul>
</blockquote>
<p>That issue should have been fixed by now, and the <a href="https://stackoverflow.com/users/714376/garfiny">OP garfiny</a> confirms <a href="https://stackoverflow.com/questions/50637250/spark-on-k8s-getting-error-kube-mode-not-support-referencing-app-depenpendcie/50673683?noredirect=1#comment88396515_50673683">in the comments</a>:</p>
<blockquote>
<p>I used the newest <code>spark-kubernetes jar</code> to replace the one in <code>spark-2.3.0-bin-hadoop2.7</code> package. The exception is gone.</p>
</blockquote>
|
<p>I was setting up microservices based on <a href="https://github.com/Netflix/eureka" rel="noreferrer">Netflix Eureka</a> and experimenting on top of <a href="https://cloud.spring.io/spring-cloud-netflix/" rel="noreferrer">spring-cloud</a>, and after weeks of research and development the question arose!</p>
<p>Why do I need the Eureka and spring-cloud?</p>
<p>Why not developing your independent containers and deploy them on <a href="https://kubernetes.io" rel="noreferrer">Kubernetes</a> as pods and maintain everything from there?</p>
<p>You can support load balancing, service registery, monitoring, containerization, etc. from <a href="https://kubernetes.io" rel="noreferrer">Kubernetes</a> too.</p>
<p>Here are some points that I can think of:</p>
<ul>
<li>developer friendly</li>
<li>lower server costs for the complete stack</li>
<li>less OPS dependent</li>
<li>more resources on developer communities and tutorials</li>
<li>gradual learning curve</li>
</ul>
| <p>To conclude, the top highlights to weigh when choosing between <code>Eureka</code> and <code>Kubernetes</code> can be listed as:</p>
<ul>
<li>Java only</li>
<li>developer friendly</li>
<li>lower server costs for the complete stack</li>
<li>less OPS dependent</li>
<li>more resources on developer communities and tutorials</li>
<li>gradual learning curve</li>
</ul>
<p>So,</p>
<blockquote>
<p>If you need some of your microservices in another language, or you can rely on your developers' knowledge of <code>Kubernetes</code> and are not afraid to spend a bit more time and money investing in your tech stack to have a wider and less dependent system, then <code>Kubernetes</code> is the way to go.</p>
</blockquote>
<p>On the other hand</p>
<blockquote>
<p>If you need fast development well integrated with the <a href="https://spring.io/projects/spring-boot" rel="noreferrer">spring-boot</a> stack, with easy-to-use Java annotations, without large involvement of DevOps, and with fewer resources needed to train your developers, then go for the Eureka and <a href="http://projects.spring.io/spring-cloud/" rel="noreferrer">spring-cloud</a> stack.</p>
</blockquote>
<p>For more details, comparison charts, and feature lists, please refer to <a href="http://www.ofbizian.com/2016/12/spring-cloud-compared-kubernetes.html" rel="noreferrer">this article</a>.</p>
|
<p>I have this basic Dockerfile:</p>
<pre><code>FROM nginx
RUN apt-get -y update && apt install -y curl
</code></pre>
<p>In the master node of my Kubernetes cluster I build that image:</p>
<pre><code>docker build -t cnginx:v1 .
</code></pre>
<p><code>docker images</code> shows that the image has been correctly generated:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
cgninx v1 d3b1b19d069e 39 minutes ago 141MB
</code></pre>
<p>I use this deployment referencing this custom image:</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: cnginx
        image: cnginx:v1
        imagePullPolicy: Never
        ports:
        - containerPort: 80
      nodeSelector:
        nodetype: webserver
</code></pre>
<p>However the image is not found: </p>
<pre><code>NAME READY STATUS RESTARTS AGE
nginx-deployment-7dd98bd746-lw6tp 0/1 ErrImageNeverPull 0 4s
nginx-deployment-7dd98bd746-szr9n 0/1 ErrImageNeverPull 0 4s
</code></pre>
<p>Describe pod info:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned nginx-deployment-7dd98bd746-szr9n to kubenode2
Normal SuccessfulMountVolume 1m kubelet, kubenode2 MountVolume.SetUp succeeded for volume "default-token-bpbpl"
Warning ErrImageNeverPull 9s (x9 over 1m) kubelet, kubenode2 Container image "cnginx:v1" is not present with pull policy of Never
Warning Failed 9s (x9 over 1m) kubelet, kubenode2 Error: ErrImageNeverPull
</code></pre>
<p>I have also tried using the default imagePullPolicy, and some other things such as tagging the image with latest...</p>
<p>So, how can I make Kubernetes use a locally generated docker image?</p>
| <p>Your Pods are scheduled on your worker nodes. Since you set <code>imagePullPolicy</code> to <code>Never</code>, you need to make the image available on both worker nodes. In other words, you need to build it on both nodes as you did on the master.</p>
<p>As a side note, it would probably be easier in the long term to set up a custom docker registry and push your images there.</p>
|
<p>I'm trying to use the Kubernetes integration with AWS, but kube-controller-manager doesn't start.
(BTW: Everything works perfectly without the AWS option)</p>
<p>Here is what I do:</p>
<p>-- 1 --</p>
<p>ubuntu@ip-172-31-17-233:~$ more /etc/kubernetes/aws.conf</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
cloudProvider: aws
kubernetesVersion: 1.10.3
</code></pre>
<p>-- 2 --</p>
<p>ubuntu@ip-172-31-17-233:~$ more /etc/kubernetes/cloud-config.conf </p>
<pre><code>[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
</code></pre>
<p>(I tried lots of combinations here, according to the examples which I found, including "ws_access_key_id", "aws_secret_access_key", omitting the .conf, or removing this file, but nothing worked)</p>
<p>-- 3 --</p>
<p>ubuntu@ip-172-31-17-233:~$ sudo kubeadm init --config /etc/kubernetes/aws.conf</p>
<pre><code>[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[init] WARNING: For cloudprovider integrations to work --cloud-provider must be set for all kubelets in the cluster.
(/etc/systemd/system/kubelet.service.d/10-kubeadm.conf should be edited for this purpose)
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ip-172-31-17-233 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.17.233]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ip-172-31-17-233] and IPs [172.31.17.233]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 19.001348 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ip-172-31-17-233 as master by adding a label and a taint
[markmaster] Master ip-172-31-17-233 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: x8hi0b.uxjr40j9gysc7lcp
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.31.17.233:6443 --token x8hi0b.uxjr40j9gysc7lcp --discovery-token-ca-cert-hash sha256:8ad9dfbcacaeba5bc3242c811b1e83c647e2e88f98b0d783875c2053f7a40f44
</code></pre>
<p>-- 4 --</p>
<pre><code>ubuntu@ip-172-31-17-233:~$ mkdir -p $HOME/.kube
ubuntu@ip-172-31-17-233:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/home/ubuntu/.kube/config'? y
ubuntu@ip-172-31-17-233:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>-- 5 --</p>
<p>ubuntu@ip-172-31-17-233:~$ kubectl get pods --all-namespaces</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-ip-172-31-17-233 1/1 Running 0 40s
kube-system kube-apiserver-ip-172-31-17-233 1/1 Running 0 45s
kube-system kube-controller-manager-ip-172-31-17-233 0/1 CrashLoopBackOff 3 1m
kube-system kube-scheduler-ip-172-31-17-233 1/1 Running 0 35s
</code></pre>
<p>kubectl version</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Any idea?
I'm new to Kubernetes, and I have no idea what I can do...</p>
<p>Thanks,
Michal.</p>
| <blockquote>
<p>Any idea?</p>
</blockquote>
<p>Check following points as potential issues:</p>
<ul>
<li><p><code>kubelet</code> has proper provider set, check <code>/etc/systemd/system/kubelet.service.d/20-cloud-provider.conf</code> containing:</p>
<pre><code>Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf
</code></pre>
<p>if not, add and restart <code>kubelet</code> service.</p></li>
<li><p>In <code>/etc/kubernetes/manifests/</code> check following files have proper configuration:</p>
<ul>
<li><p><code>kube-controller-manager.yaml</code> and <code>kube-apiserver.yaml</code>:</p>
<pre><code>--cloud-provider=aws
</code></pre>
<p>if not, just add, and pod will be automatically restarted.</p></li>
</ul></li>
<li>Just in case, check that AWS resources (EC2 instances, etc) are tagged with <code>kubernetes</code> tag (taken from your <code>cloud-config.conf</code>) and IAM policies are properly set.</li>
</ul>
<p>If you could supply logs as requested by Artem in comments that could shed more light on the issue.</p>
<h1>Edit</h1>
<p>As requested in comment, short overview of IAM policy handling:</p>
<ul>
<li><p>create a new IAM policy (or edit appropriately if already created), say <code>k8s-default-policy</code>. Given below is quite a liberal policy, and you can fine-tune the exact settings to match your security preferences. Pay attention to the load balancer section in your case. In the description put something along the lines of "Allows EC2 instances to call AWS services on your behalf." or similar...</p>
<pre><code>{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::kubernetes-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:AttachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DetachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["elasticloadbalancing:*"],
      "Resource": ["*"]
    }
  ]
}
</code></pre></li>
<li><p>create a new role (or edit appropriately if already created) and attach the previous policy to it, say attach <code>k8s-default-policy</code> to <code>k8s-default-role</code>.</p></li>
<li><p>Attach Role to instances that can handle AWS resources. You can create different roles for master and for workers if you need to. <code>EC2</code> -> <code>Instances</code> -> (select instance) -> <code>Actions</code> -> <code>Instance Settings</code> -> <code>Attach/Replace IAM Role</code> -> (select appropriate role)</p></li>
<li><p>Also, apart from this check that all resources in question are tagged with kubernetes tag.</p></li>
</ul>
|
<p>I am writing a Jenkins Global pipeline library where I have a stage to deploy my docker image to K8s cluster.
So after building my docker image during CI process I am promoting(deploying) the image to multiple environments(sequentially lower to higher).
So, to get the correct status of deployment after running </p>
<pre><code>kubectl apply -f Application-k8s-file.yaml
</code></pre>
<p>I used following command in a shell step.</p>
<pre><code>kubectl rollout status deployment deployment_name
</code></pre>
<p>Things go well if my deployment does not have an error, but if my deployment has some error (might be some code bug, the application does not start), then this command <code>kubectl rollout status deployment <deployment name></code> runs infinitely (as k8s retries again and again to redeploy) and my Jenkins job runs infinitely (till the job timeout).</p>
<p>So to find a hack I tried a logic to put the timeout on this command and calculations are something like this:</p>
<p>timeout = (number of pods * liveness probe time + number of pods * 10) seconds</p>
<p>Not sure if this calculation is correct or not.</p>
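<p>As a plain-shell sketch of that formula (the pod count and probe time below are example values):</p>

```shell
# Hypothetical values: 3 pods, ~45s liveness probe time per pod
pods=3
probe_time=45
timeout=$(( pods * probe_time + pods * 10 ))
echo "$timeout"   # prints 165
```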
<p>My code snippet looks like this</p>
<pre><code> sh(returnStdout: true,script:"#!/bin/sh +e\n timeout --preserve-status ${timeout_value} kubectl rollout status deployment ${deploymentName} --kubeconfig='/opt/kubernetes-secrets/${env}/kubeconfig' 2>tmpfile; echo \$? > tmpfile1")
def readThisFile = readFile "tmpfile1"
def var=readThisFile.toInteger()
if(var==0)
{
echo "deployment successful"
}
else{"do something else"}
</code></pre>
<p>This works well initially, but later I found that the <code>kubectl rollout status deployment</code> command doesn't return exit code 0 until all the new pods are scheduled and the old ones are terminated completely, which sometimes takes time.</p>
<p>What I basically want is a minimal calculated timeout value.</p>
<p>My K8s file has parameters like this:</p>
<pre><code> spec:
minReadySeconds: 30
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 45
periodSeconds: 5
timeoutSeconds: 60
name: test-dummyservice
ports:
- containerPort: 80
readinessProbe:
httpGet:
path: /health
port: 80
initialDelaySeconds: 60
periodSeconds: 120
timeoutSeconds: 60
</code></pre>
<p>I did not find anything specific about this in the K8s documentation. Is anyone facing the same challenge?</p>
| <p>You should take a look at <code>progressDeadlineSeconds</code>. Once the rollout exceeds this deadline, <code>rollout status</code> will exit with an error.</p>
<pre><code>kubectl rollout status deployment ng
Waiting for rollout to finish: 2 out of 7 new replicas have been updated...
error: deployment "ng" exceeded its progress deadline
</code></pre>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds</a></p>
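<p>For reference, a minimal sketch of where the field goes in the Deployment manifest (the 120-second value is illustrative; pick a deadline that fits your rollout):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-dummyservice
spec:
  progressDeadlineSeconds: 120   # rollout status exits non-zero once this is exceeded
  ...
</code></pre>
<p>With this set, the Jenkins step no longer needs a hand-computed timeout: <code>kubectl rollout status</code> returns a non-zero exit code as soon as the deadline passes.</p>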
|
<p>Output of <code>kubectl create -f redis.yaml</code>:</p>
<blockquote>
<p>Error from server (BadRequest): error when creating "redis.yaml":
DaemonSet in version "v1" cannot be handled as a DaemonSet: no kind
"DaemonSet" is registered for version "apps/v1"</p>
</blockquote>
<p>What does this error indicate? How could I solve it?</p>
<p><strong>redis.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: redis
labels:
app: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
terminationGracePeriodSeconds: 30
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- image: redis:4.0-alpine
name: redis
</code></pre>
| <p>The version was set too high for this cluster: <code>apps/v1</code> for DaemonSet requires Kubernetes 1.9+, so lower it:</p>
<pre><code>apiVersion: apps/v1beta2
</code></pre>
|
<p>I have noticed that when I create and mount a config map that contains some text files, the container will see those files as symlinks to <code>../data/myfile.txt</code> .</p>
<p>For example, if my config map is named tc-configs and contains 2 xml files named stripe1.xml and stripe2.xml, if I mount this config map to /configs in my container, I will have, in my container :</p>
<pre><code>bash-4.4# ls -al /configs/
total 12
drwxrwxrwx 3 root root 4096 Jun 4 14:47 .
drwxr-xr-x 1 root root 4096 Jun 4 14:47 ..
drwxr-xr-x 2 root root 4096 Jun 4 14:47 ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 31 Jun 4 14:47 ..data -> ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe1.xml -> ..data/stripe1.xml
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe2.xml -> ..data/stripe2.xml
</code></pre>
<p>I guess Kubernetes requires those symlinks and the ../data and ..timestamp/ folders, but I know some applications that can fail to start up if they see unexpected files or folders.</p>
<p>Is there a way to tell Kubernetes not to generate all those symlinks and directly mount the files ?</p>
| <p>I think this solution is satisfactory: specifying the exact file path in <code>mountPath</code> gets rid of the symlinks to <code>..data and ..2018_06_04_19_31_41.860238952</code>.</p>
<p>So if I apply a manifest like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-lamp-site
spec:
containers:
- name: php
image: php:7.0-apache
volumeMounts:
- mountPath: /var/www/html/users.xml
name: site-data
subPath: users.xml
volumes:
- name: site-data
configMap:
name: users
---
apiVersion: v1
kind: ConfigMap
metadata:
name: users
data:
users.xml: |
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<users>
</users>
</code></pre>
<p>Since I'm making explicit use of <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">subPath</a>, the mounted file is not part of the "auto update magic" from ConfigMaps, and I won't see any more symlinks:</p>
<pre><code>$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxr-xr-x 1 www-data www-data 4096 Jun 4 19:18 .
drwxr-xr-x 1 root root 4096 Jun 4 17:58 ..
-rw-r--r-- 1 root root 73 Jun 4 19:18 users.xml
</code></pre>
<p>Be careful to not forget <code>subPath</code>, otherwise users.xml will be a directory !</p>
<p>Back to my initial manifest :</p>
<pre><code>spec:
containers:
- name: php
image: php:7.0-apache
volumeMounts:
- mountPath: /var/www/html
name: site-data
volumes:
- name: site-data
configMap:
name: users
</code></pre>
<p>I'll see those symlinks coming back :</p>
<pre><code>$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxrwxrwx 3 root root 4096 Jun 4 19:31 .
drwxr-xr-x 3 root root 4096 Jun 4 17:58 ..
drwxr-xr-x 2 root root 4096 Jun 4 19:31 ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 31 Jun 4 19:31 ..data -> ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 16 Jun 4 19:31 users.xml -> ..data/users.xml
</code></pre>
<p><strong>Many thanks to <a href="https://stackoverflow.com/users/46842/psycotica0">psycotica0</a> on <a href="https://k8scanada.slack.com/" rel="noreferrer">K8s Canada slack</a> for putting me on the right track with <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="noreferrer">subpath</a> (they are quickly mentioned in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="noreferrer">configmap documentation</a>)</strong></p>
|
<p>I understand that <code>kubelet</code> is responsible for taking PodSpecs (usually from the API Server) and running the Pod.</p>
<blockquote>
<p><strong><a href="https://kubernetes.io/docs/concepts/overview/components/#node-components" rel="nofollow noreferrer">Kubernetes Components > Node Components > kubelet</a></strong></p>
<p>"The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy."</p>
</blockquote>
<p>But how does the API Server keep track of the status of each Pod (e.g. running / failed) ? Does <code>kubelet</code> send regular requests to the API Server? Or does the API Server poll <code>kubelet</code> regularly? Or some other mechanism?</p>
| <p>kubelet does everything on the node. A typical process to create a pod would be the following:</p>
<ol>
<li>By default, kubelet is hooked up to api-server through this "thing" called <em>watch</em>. It's a sort of pub/sub. So kubelet would be subscribed to "create pod" event, and api-server would notify it when a pod needs to be created.</li>
<li>kubelet would instruct the container runtime (Docker or rkt), passing along the pod specification, to create the pod's containers.</li>
</ol>
<p>Note: There are more components involved here, like scheduler and controller manager (mentioned in your post as various mechanisms), but I will skip them.</p>
<ol start="3">
<li>kubelet will run the necessary liveness and readiness probes and report the status back to the api-server. Say success!</li>
<li>api-server will update etcd (by adding the metadata of the pod) to keep the track of what is going on in the cluster.</li>
</ol>
<p>At this point kubelet is in charge of this pod. If the pod goes down, kubelet reports it to the api-server, the api-server gives the order to kill the pod and spin up a new one, and again updates the etcd server.</p>
<p>One thing to point out is that all components in k8s talk to the api-server directly. So the controller manager or scheduler do not tell kubelet what to do; rather, they tell the api-server, and the api-server tells kubelet.</p>
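<p>The same watch mechanism is exposed through the API and can be observed from the command line, assuming a working kubectl context:</p>
<pre><code># Subscribe to pod events, similar to how kubelet subscribes to its node's pods
kubectl get pods --watch

# Or hit the watch API directly through a local proxy
kubectl proxy &
curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods?watch=true"
</code></pre>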
|
<p>I have Kubernetes + Minikube installed (macOS 10.12.6). But while trying to start minikube I get constant errors:</p>
<pre><code>$: minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0601 15:24:50.571967 67567 start.go:281] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: Process exited with status 1
</code></pre>
<p>I've also tried <code>minikube delete</code> followed by <code>minikube start</code>, but that didn't help (<a href="https://stackoverflow.com/questions/50554009/minikube-never-start-error-restarting-cluster/50566528">Minikube never start - Error restarting cluster</a>). <code>kubectl config use-context minikube</code> was done as well.</p>
<p>I have minikube version: v0.26.1</p>
<p>It looks to me that kubeadm.yaml file is missing or misplaced.</p>
| <p><a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">Minikube</a> is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.</p>
<p>In your case, the following steps should let the initialization complete successfully:</p>
<pre><code>minikube stop
minikube delete
rm -fr $HOME/.minikube
minikube start
</code></pre>
<p>In case you mixed Kubernetes and minikube environments, I suggest inspecting the $HOME/.kube/config file
and deleting the minikube entries to avoid problems with reinitialization.</p>
<p>If minikube still refuses to start, please post the logs for analysis. To get a detailed log, start minikube this way:</p>
<pre><code>minikube start --v=9
</code></pre>
|
<p>I am using Kubernetes v1.10.3. I have one external NFS server which I am able to mount anywhere (on any physical machine). I want to mount this NFS share directly into a pod/container, but every time I try I get an error. I don't want to use privileged mode; kindly help me fix this.</p>
<blockquote>
<p>ERROR: MountVolume.SetUp failed for volume "nfs" : mount failed: exit
status 32 Mounting command: systemd-run Mounting arguments:
--description=Kubernetes transient mount for /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
--scope -- mount -t nfs 10.225.241.137:/stagingfs/alt/ /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-43393.scope. mount: wrong fs type,
bad option, bad superblock on 10.225.241.137:/stagingfs/alt/, missing
codepage or helper program, or other error (for several filesystems
(e.g. nfs, cifs) you might need a /sbin/mount. helper program)
In some cases useful info is found in syslog - try dmesg | tail or so.</p>
</blockquote>
<pre><code>NFS server : mount -t nfs 10.X.X.137:/stagingfs/alt /alt
</code></pre>
<p>I added the following two sections for the volume, but I get the error every time.</p>
<p>first :</p>
<pre><code>"volumeMounts": [
{
"name": "nfs",
"mountPath": "/alt"
}
],
</code></pre>
<p>Second : </p>
<pre><code> "volumes": [
{
"name": "nfs",
"nfs": {
"server": "10.X.X.137",
"path": "/stagingfs/alt/"
}
}
],
</code></pre>
<p>---------------------complete yaml --------------------------------</p>
<pre><code>{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "jboss",
"namespace": "staging",
"selfLink": "/apis/extensions/v1beta1/namespaces/staging/deployments/jboss",
"uid": "6a85e235-68b4-11e8-8181-00163eeb9788",
"resourceVersion": "609891",
"generation": 2,
"creationTimestamp": "2018-06-05T11:34:32Z",
"labels": {
"k8s-app": "jboss"
},
"annotations": {
"deployment.kubernetes.io/revision": "2"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"k8s-app": "jboss"
}
},
"template": {
"metadata": {
"name": "jboss",
"creationTimestamp": null,
"labels": {
"k8s-app": "jboss"
}
},
"spec": {
"volumes": [
{
"name": "nfs",
"nfs": {
"server": "10.X.X.137",
"path": "/stagingfs/alt/"
}
}
],
"containers": [
{
"name": "jboss",
"image": "my.abc.com/alt:7.1_1.1",
"resources": {},
"volumeMounts": [
{
"name": "nfs",
"mountPath": "/alt"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent",
"securityContext": {
"privileged": true
}
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
"revisionHistoryLimit": 10,
"progressDeadlineSeconds": 600
},
"status": {
"observedGeneration": 2,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2018-06-05T11:35:45Z",
"lastTransitionTime": "2018-06-05T11:35:45Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
},
{
"type": "Progressing",
"status": "True",
"lastUpdateTime": "2018-06-05T11:35:46Z",
"lastTransitionTime": "2018-06-05T11:34:32Z",
"reason": "NewReplicaSetAvailable",
"message": "ReplicaSet \"jboss-8674444985\" has successfully progressed."
}
]
}
}
</code></pre>
<p>Regards
Anupam Narayan</p>
| <p>As stated in the error log:</p>
<pre><code>for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount. helper program
</code></pre>
<p>According to this <a href="https://askubuntu.com/questions/525243/why-do-i-get-wrong-fs-type-bad-option-bad-superblock-error">question</a>, you might be missing the nfs-common package, which you can install using <code>sudo apt install nfs-common</code></p>
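<p>The mount helper has different package names per distribution; a sketch for the common ones (run this on every node that may schedule the pod, not inside the container):</p>
<pre><code># Debian/Ubuntu nodes
sudo apt-get install -y nfs-common

# RHEL/CentOS nodes
sudo yum install -y nfs-utils
</code></pre>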
|
<p>I am using kubernetes to deploy a simple application. The pieces are:</p>
<ul>
<li>a rabbitMQ instance</li>
<li>a stateless HTTP server</li>
<li>a worker that takes jobs from the message queue and processes them</li>
</ul>
<p>I want to be able to scale the HTTP server and the worker up and down independently of each other. Would it be more appropriate for me to create a single deployment containing one pod for the HTTP server and one for the worker, or separate deployments for the HTTP server / worker?</p>
| <p>You should definitely choose different deployment for HTTP Server and the worker. For following reasons:</p>
<ul>
<li><p>Your scaling characteristics are different for both of them. It does not make sense to put them in the same deployment</p></li>
<li><p>The parameters on which you will scale will be different too. For the HTTP server it might be RPS, and for the worker application it will be the number of items pending/to be processed. You can create an HPA for each and scale them on the parameters that suit them best</p></li>
<li><p>The metrics & logs that you want to collect and measure for each would be again different and would make sense to keep them separate.</p></li>
</ul>
<p>I think the Single Responsibility Principle fits well too; it would unnecessarily mix things up to keep them in the same pod/deployment.</p>
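<p>As an illustration of the second point, a sketch of an HPA that scales only the HTTP server on CPU (names and thresholds here are made up; the worker would get its own HPA on a queue-based metric, or a fixed replica count):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: http-server
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: http-server   # scales only this deployment, independently of the worker
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
</code></pre>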
|
<p><a href="https://i.stack.imgur.com/Ctb2f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ctb2f.png" alt="enter image description here"></a></p>
<p>Why does Kubernetes sometimes have 2 internal endpoints for a service, and sometimes 4?
Why do the internal endpoints always come in pairs?</p>
| <p>This is based on my loose understanding of things and an assumption. The assumption is that this seems to be the case when the cluster is deployed to GKE.</p>
<p>Since I don't have Kafka manager installed, I will use the example of Kubernetes service, which has a similar port configuration in the console. This service is of type <code>ClusterIP</code></p>
<pre><code>Name Cluster IP Internal Endpoints
Kubernetes 10.11.240.1 kubernetes:443 TCP
kubernetes:0 TCP
</code></pre>
<p>The port 0 is added by GKE Ingress to randomly select a port for forwarding, as <a href="https://stackoverflow.com/questions/48814326/what-is-port-0-used-for-in-kubernetes-services">explained here</a> and also related <a href="https://stackoverflow.com/questions/45738404/gce-loadbalancer-invalid-value-for-field-namedports0-port-0-must-be-gr/45974827#45974827">discussion here</a></p>
<p>In case of NodePort service, it is a different story. </p>
<pre><code>Name Cluster IP Internal Endpoints
hello-web 10.11.249.126 helloweb:8080 TCP
helloweb:30193 TCP
</code></pre>
<p>This can be also seen in the service description. Since service is already exposed on a nodeport, there is no need to additionally expose on a random port.</p>
<pre><code>$kubectl describe service helloweb-backend -n default
Name: helloweb-backend
Type: NodePort
IP: 10.11.249.126
Port: <unset> 8080/TCP
NodePort: <unset> 30193/TCP
Endpoints: 10.8.3.3:8080
</code></pre>
|
<p>I am using Google Kubernetes Engine and have the Google HTTPS Load Balancer as my ingress.</p>
<p>Right now the load balancer uses Let's Encrypt certificates. However, is there a simple way to ensure that the certificates are automatically renewed prior to their 90 day expiry?</p>
| <p>You have not specified how you configured Let's Encrypt for your load balancer. Right now Google does not offer this for you, so I assume you mean you set the Let's Encrypt certificate yourself. In this case, Google can't renew your certificate.</p>
<p>Until there's official support, you can install a third-party add-on like <code>cert-manager</code> to automate certificate configuration and renewal. There's a GKE tutorial for doing this at <a href="https://github.com/ahmetb/gke-letsencrypt" rel="nofollow noreferrer">https://github.com/ahmetb/gke-letsencrypt</a>.</p>
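<p>A minimal sketch of installing cert-manager with Helm (the chart location shown was current as of mid-2018; check the cert-manager docs for the current install method):</p>
<pre><code>helm install --name cert-manager --namespace kube-system stable/cert-manager
</code></pre>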
|
<p>We noticed that when we <code>exec -it</code> to connect into a pod, after a certain idle time the connection gets destroyed. Is there any option to leave the connection open longer?</p>
<p>I see there is an open <a href="https://github.com/kubernetes/kubernetes/pull/63793/files" rel="nofollow noreferrer">PR</a>, but wondering if there is any workaround for this issue.</p>
| <p>The short answer is no. And that's why:</p>
<p><a href="https://stromasys.atlassian.net/wiki/spaces/KBP/pages/43221121/Enabling+TCP+keepalive+for+console+connections" rel="nofollow noreferrer">Enabling TCP keepalive for console connections</a></p>
<blockquote>
<p>TCP keepalive is a TCP option that causes packets to be exchanged over
a connection even if there is no traffic to transport. It should be
enabled on both ends of the connection. TCP keepalive must be enabled
at the operating-system level <strong>and</strong> by the application/program
opening TCP connections.</p>
<p>On Linux, edit the "/etc/sysctl.conf" file and add these lines:</p>
<pre><code>net.ipv4.tcp_keepalive_time = 200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 50
</code></pre>
<p>(feel free to adapt the values as you see fit). When done editing, you
must make the new values known to the kernel:</p>
<pre><code># sysctl --load=/etc/sysctl.conf
</code></pre>
</blockquote>
<p><a href="http://coryklein.com/tcp/2015/11/25/custom-configuration-of-tcp-socket-keep-alive-timeouts.html" rel="nofollow noreferrer">Custom Configuration of TCP Socket Keep-Alive Timeouts</a></p>
<blockquote>
<p>Default values for these properties is:</p>
<pre><code>tcp_keepalive_time = 7200 seconds
tcp_keepalive_probes = 9
tcp_keepalive_intvl = 75 seconds
</code></pre>
</blockquote>
<p>Another possible way is to start some kind of proxy server on the client side and connect to the Kubernetes apiserver through it.
I haven’t tested it myself and it could be tricky, but <a href="https://ma.ttias.be/enable-keepalive-connections-in-nginx-upstream-proxy-configurations/" rel="nofollow noreferrer">here</a> is an example of how to enable keepalives to the backend for Nginx.</p>
|
<p>I am unable to get TLS termination at the nginx ingress controller working on my Kubernetes cluster.</p>
<p>My ingress rule looks like the following:</p>
<pre><code>Christophers-MacBook-Pro-2:acme-microservice cjaime$ kubectl describe ing myapp-ingress-1
Name: myapp-ingress-1
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
acme-io terminates myapp-default.acme.io
Rules:
Host Path Backends
---- ---- --------
myapp-default.acme.io
/ myapp:80 (<none>)
Annotations:
ingress.kubernetes.io/ssl-redirect: true
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UPDATE 53m (x2 over 1h) nginx-ingress-controller Ingress default/myapp-ingress-1
Normal UPDATE 53m (x2 over 1h) nginx-ingress-controller Ingress default/myapp-ingress-1
Normal UPDATE 53m (x2 over 1h) nginx-ingress-controller Ingress default/myapp-ingress-1
Normal UPDATE 53m (x2 over 1h) nginx-ingress-controller Ingress default/myapp-ingress-1
</code></pre>
<p>Whenever I try to access this from the browser, I get back the following server certificate:</p>
<pre><code>Server certificate
subject=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
issuer=/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
</code></pre>
<p>This is preventing me from creating a valid SSL connection. I know my secret is correct because when using openssl I get a valid connection as follows</p>
<pre><code>openssl s_client -servername myapp-default.acme.io -connect us1a-k8s-4.acme.io:31443 -showcerts
CONNECTED(00000003)
<content omitted>
Start Time: 1528241749
Timeout : 300 (sec)
Verify return code: 0 (ok)
---
</code></pre>
<p>However, if I run the same command with the servername omitted, I get the same fake certificate and a connection error:</p>
<pre><code>openssl s_client -connect us1a-k8s-4.acme.io:31443 -showcerts
CONNECTED(00000003)
depth=0 O = Acme Co, CN = Kubernetes Ingress Controller Fake Certificate
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 O = Acme Co, CN = Kubernetes Ingress Controller Fake Certificate
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
i:/O=Acme Co/CN=Kubernetes Ingress Controller Fake Certificate
<content omitted>
Start Time: 1528241957
Timeout : 300 (sec)
Verify return code: 21 (unable to verify the first certificate)
</code></pre>
| <p>Your tests with openssl are executed correctly and they show that nginx does offer the valid certificate for <strong>myapp-default.acme.io</strong> when that hostname is provided in the request via <a href="https://en.m.wikipedia.org/wiki/Server_Name_Indication" rel="nofollow noreferrer">SNI</a>. This is in harmony with what you configured in the Ingress.</p>
<p>For other hostnames or requests without a hostname the <strong>default certificate</strong> is sent. That certificate is to be stored in a Secret and configured via a command line parameter to the ingress controller (<code>--default-ssl-certificate=$(POD_NAMESPACE)/tls-ingress</code>).</p>
<p>Your browser warning was caused either by a mismatch in the hostname or by a cached fake certificate in your browser. I suggest you look up how to flush the certificate cache in your browser, and/or redo the test with curl:</p>
<pre><code>curl -v https://myapp-default.acme.io
</code></pre>
<p>If it still does not work correctly, you may be affected by <a href="https://github.com/kubernetes/ingress-nginx/issues/1954" rel="nofollow noreferrer">#1954</a> - update nginx-ingress-controller.</p>
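<p>For completeness, a sketch of wiring up the default certificate mentioned above (the secret name and file names are illustrative):</p>
<pre><code># Create the secret holding the default certificate in the ingress controller's namespace
kubectl create secret tls tls-ingress --key tls.key --cert tls.crt

# Then add this flag to the nginx-ingress-controller container args:
#   --default-ssl-certificate=$(POD_NAMESPACE)/tls-ingress
</code></pre>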
|
<p>Trying to generate deployments for my helm charts by using this template</p>
<pre><code>{{- range .Values.services }}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: myapp-{{ . }}
spec:
replicas: {{ .replicaCount }}
template:
metadata:
labels:
app: myapp-{{ . }}
chart: myapp-{{ $.Values.cluster }}-{{ $.Values.environment }}
spec:
containers:
- name: myapp-{{ . }}
image: {{ $.Values.containerRegistry }}/myapp-{{ . }}:latest
ports:
- containerPort: {{ .targetPort }}
env:
{{- with .environmentVariables }}
{{ indent 10 }}
{{- end }}
imagePullSecrets:
- name: myregistry
{{- end }}
</code></pre>
<p>for 2 of my services. In values.yaml I have:</p>
<pre><code>environment: dev
cluster: sandbox
ingress:
enabled: true
containerRegistry: myapp.io
services:
- backend:
port: 80
targetPort: 8080
replicaCount: 1
environmentVariables:
- name: SOME_VAR
value: "hello"
- web:
port: 80
targetPort: 8080
replicaCount: 1
environmentVariables:
- name: SOME_VAR
value: "hello"
</code></pre>
<p>... but the output is not being properly formatted</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: myapp-map[backend:map[replicaCount:1 targetPort:8080 environmentVariables:[map[name:SOME_VAR value:hello] port:80]]
</code></pre>
<p>instead of</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: myapp-web
(...)
</code></pre>
<p>and another config</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: myapp-backend
(...)
</code></pre>
<p>What functions can I use, or what different data structure? None of the references (e.g. <code>.environmentVariables</code>) work correctly.</p>
| <p>I think you should reconsider the way the data is structured; this would work better:</p>
<pre><code> services:
- name: backend
settings:
port: 80
targetPort: 8080
replicaCount: 1
environmentVariables:
- name: SOME_VAR
value: "hello"
- name: web
settings:
port: 80
targetPort: 8080
replicaCount: 1
environmentVariables:
- name: SOME_VAR
value: "hello"
</code></pre>
<p>And your Deployment to look like this:</p>
<pre><code>{{- range .Values.services }}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: myapp-{{ .name }}
spec:
replicas: {{ .settings.replicaCount }}
template:
metadata:
labels:
app: myapp-{{ .name }}
spec:
containers:
- name: myapp-{{ .name }}
image: {{ $.Values.containerRegistry }}/myapp-{{ .name }}:latest
ports:
- containerPort: {{ .settings.targetPort }}
env:
{{- with .settings.environmentVariables }}
{{ toYaml . | trim | indent 6 }}
{{- end }}
imagePullSecrets:
- name: myregistry
{{- end }}
</code></pre>
<p>This would actually create two Deployments, thanks to the added <code>---</code> separator.</p>
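<p>While iterating on templates like this, the rendered output can be checked locally without installing anything, assuming the chart lives in the current directory:</p>
<pre><code>helm template . --values values.yaml
</code></pre>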
|
<p>I want to have an instance group that scales from 0 to x pods, but I get <code>Insufficient nvidia.com/gpu</code>. Does someone see what I'm doing wrong here? This is on Kubernetes v1.9.6 with autoscaler 1.1.2.</p>
<p>I have two instance groups: one with CPUs, and a new one called gpus that I want to scale down to 0 nodes, so <code>kops edit ig gpus</code> shows:</p>
<pre><code>apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: 2018-05-31T09:27:31Z
labels:
kops.k8s.io/cluster: ci.k8s.local
name: gpus
spec:
cloudLabels:
instancegroup: gpus
k8s.io/cluster-autoscaler/enabled: ""
image: ami-4450543d
kubelet:
featureGates:
DevicePlugins: "true"
machineType: p2.xlarge
maxPrice: "0.5"
maxSize: 3
minSize: 0
nodeLabels:
kops.k8s.io/instancegroup: gpus
role: Node
rootVolumeOptimization: true
subnets:
- eu-west-1c
</code></pre>
<p>And the autoscaler deployment has:</p>
<pre><code> spec:
containers:
- command:
- ./cluster-autoscaler
- --v=4
- --stderrthreshold=info
- --cloud-provider=aws
- --skip-nodes-with-local-storage=false
- --nodes=0:3:gpus.ci.k8s.local
env:
- name: AWS_REGION
value: eu-west-1
image: k8s.gcr.io/cluster-autoscaler:v1.1.2
</code></pre>
<p>Now I try to deploy a simple GPU test:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: simple-gpu-test
spec:
replicas: 1
template:
metadata:
labels:
app: "simplegputest"
spec:
containers:
- name: "nvidia-smi-gpu"
image: "nvidia/cuda:8.0-cudnn5-runtime"
resources:
limits:
nvidia.com/gpu: 1 # requesting 1 GPU
volumeMounts:
- mountPath: /usr/local/nvidia
name: nvidia
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do nvidia-smi; sleep 5; done;" ]
volumes:
- hostPath:
path: /usr/local/nvidia
name: nvidia
</code></pre>
<p>I expect the instance group to go from 0 to 1, but the autoscaler logs show: </p>
<pre><code>I0605 11:27:29.865576 1 scale_up.go:54] Pod default/simple-gpu-test-6f48d9555d-l9822 is unschedulable
I0605 11:27:29.961051 1 scale_up.go:86] Upcoming 0 nodes
I0605 11:27:30.005163 1 scale_up.go:146] Scale-up predicate failed: PodFitsResources predicate mismatch, cannot put default/simple-gpu-test-6f48d9555d-l9822 on template-node-for-gpus.ci.k8s.local-5829202798403814789, reason: Insufficient nvidia.com/gpu
I0605 11:27:30.005262 1 scale_up.go:175] No pod can fit to gpus.ci.k8s.local
I0605 11:27:30.005324 1 scale_up.go:180] No expansion options
I0605 11:27:30.005393 1 static_autoscaler.go:299] Calculating unneeded nodes
I0605 11:27:30.008919 1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"simple-gpu-test-6f48d9555d-l9822", UID:"3416d787-68b3-11e8-8e8f-0639a6e973b0", APIVersion:"v1", ResourceVersion:"12429157", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added)
I0605 11:27:30.031707 1 leaderelection.go:199] successfully renewed lease kube-system/cluster-autoscaler
</code></pre>
<p>When I start a node by setting the minimum to 1, I see that it has the capacity:</p>
<pre><code>Capacity:
 cpu: 4
 memory: 62884036Ki
 nvidia.com/gpu: 1
 pods: 110
</code></pre>
<p>and labels:</p>
<pre><code>Labels: beta.kubernetes.io/arch=amd64
 beta.kubernetes.io/instance-type=p2.xlarge
 beta.kubernetes.io/os=linux
 failure-domain.beta.kubernetes.io/region=eu-west-1
 failure-domain.beta.kubernetes.io/zone=eu-west-1c
 kops.k8s.io/instancegroup=gpus
 kubernetes.io/role=node
 node-role.kubernetes.io/node=
 spot=true
</code></pre>
<p>The required tag is present on the AWS Scale group:</p>
<pre><code>{
"ResourceId": "gpus.ci.k8s.local",
"ResourceType": "auto-scaling-group",
"Key": "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup",
"Value": "gpus",
"PropagateAtLaunch": true
}
</code></pre>
<p>Finally, when I set the min pool size to 1, it can scale from 1 to 3 automatically. It just doesn't go from 0 to 1.</p>
<p>Is there some way I can inspect the template to see why it doesn't have the resource?</p>
| <p><a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md" rel="nofollow noreferrer">Cluster Autoscaler</a>
is a standalone program that adjusts the size of a Kubernetes cluster to meet the current needs.
Cluster Autoscaler can manage GPU resources provided by the cloud provider in the same manner.</p>
<p>Based on cluster <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-scale-a-node-group-to-0" rel="nofollow noreferrer">autoscaler documentation</a>,
for AWS, it is possible to scale a node group to 0 (and obviously from 0), assuming that all scale-down conditions are met.</p>
<p>Going back to your question, for AWS, if you are using nodeSelector, you need to tag your nodes in the ASG template using labels like "k8s.io/cluster-autoscaler/node-template/label/".
Please note that Kubernetes and AWS GPU support require different labels.</p>
<p>For example, for a node label of foo=bar, you would tag the ASG with:</p>
<pre><code>{
"ResourceType": "auto-scaling-group",
"ResourceId": "foo.example.com",
"PropagateAtLaunch": true,
"Value": "bar",
"Key": "k8s.io/cluster-autoscaler/node-template/label/foo"
}
</code></pre>
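<p>Since the autoscaler builds a template node from the ASG tags when the group is at 0 nodes, the GPU resource itself can also be advertised the same way. A sketch using the node-template resources tag (support for resource tags is version-dependent; verify against your autoscaler release):</p>
<pre><code>{
    "ResourceType": "auto-scaling-group",
    "ResourceId": "gpus.ci.k8s.local",
    "PropagateAtLaunch": true,
    "Key": "k8s.io/cluster-autoscaler/node-template/resources/nvidia.com/gpu",
    "Value": "1"
}
</code></pre>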
|
<p>We have a pod in a Google Cloud Platform Kubernetes cluster writing JSON-formatted logs to stdout. These are picked up by Stackdriver out of the box. However, we see the disk usage of the pod growing and growing, and we can't figure out how to set a max size on the Deployment for log rotation.</p>
<p>Documentation on Google Cloud and Kubernetes is unclear on this.</p>
<p>This is just the last hour:</p>
<p><a href="https://i.stack.imgur.com/xbwqg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xbwqg.png" alt="Memory consumption on a Pod"></a></p>
| <p>Are you sure that disk usage of the pod is high because of the logs?
If the application writes logs to stdout, it doesn't use any disk space inside the pod.
All logs are usually stored in a log file on the node’s filesystem and can be managed by the node logrotate process.</p>
<p>Perhaps the application uses the pod's disk space for something else, such as temp files or debug information?</p>
<p>Here is the part of <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">documentation</a> related to log rotation:</p>
<blockquote>
<h2>Logging at the node level:</h2>
<p>Everything a containerized application writes to stdout and stderr is
handled and redirected somewhere by a container engine. For example,
the Docker container engine redirects those two streams to a <a href="https://docs.docker.com/engine/admin/logging/overview" rel="nofollow noreferrer">logging
driver</a>, which is configured in Kubernetes to write to a file in
json format.</p>
<p>An important consideration in node-level logging is implementing log
rotation, so that logs don’t consume all available storage <strong>on the
node</strong>.</p>
<p>Kubernetes currently is not responsible for rotating logs, but rather
a deployment tool should set up a solution to address that. For
example, in Kubernetes clusters, deployed by the kube-up.sh script,
there is a logrotate tool configured to run each hour.</p>
<p>You can also set up a container runtime to rotate application’s logs
automatically, e.g. by using Docker’s log-opt.</p>
<p>In the kube-up.sh script, the latter approach is used for COS image on
GCP, and the former approach is used in any other environment. In
both cases, by default rotation is configured to take place when log
file exceeds 10MB.</p>
<p>As an example, you can find detailed information about how kube-up.sh
sets up logging for COS image on GCP in the corresponding <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh#L324" rel="nofollow noreferrer">script</a>.</p>
</blockquote>
<p>Here is a part of the script related to logrotate:</p>
<pre><code># Installs logrotate configuration files
function setup-logrotate() {
mkdir -p /etc/logrotate.d/
# Configure log rotation for all logs in /var/log, which is where k8s services
# are configured to write their log files. Whenever logrotate is ran, this
# config will:
# * rotate the log file if its size is > 100Mb OR if one day has elapsed
# * save rotated logs into a gzipped timestamped backup
# * log file timestamp (controlled by 'dateformat') includes seconds too. This
# ensures that logrotate can generate unique logfiles during each rotation
# (otherwise it skips rotation if 'maxsize' is reached multiple times in a
# day).
# * keep only 5 old (rotated) logs, and will discard older logs.
cat > /etc/logrotate.d/allvarlogs <<EOF
/var/log/*.log {
rotate ${LOGROTATE_FILES_MAX_COUNT:-5}
copytruncate
missingok
notifempty
compress
maxsize ${LOGROTATE_MAX_SIZE:-100M}
daily
dateext
dateformat -%Y%m%d-%s
create 0644 root root
}
EOF
}
</code></pre>
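<p>For the Docker <code>log-opt</code> approach mentioned above, a minimal sketch of <code>/etc/docker/daemon.json</code> on the node (assuming the default <code>json-file</code> logging driver; the sizes are illustrative) could look like this:</p>
<pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
</code></pre>
<p>After changing it, the Docker daemon has to be restarted, and the settings only apply to newly created containers.</p>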
|
<p>I followed the Quickstart docs (<a href="https://learn.microsoft.com/en-gb/azure/aks/kubernetes-walkthrough-portal" rel="nofollow noreferrer">here</a>) to deploy a k8s cluster in the Western Europe region. The cluster boots up fine, but I cannot connect to it using kubectl - kubectl times out while trying to perform a TLS handshake:</p>
<pre><code>Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p><a href="https://github.com/Azure/AKS/issues/112" rel="nofollow noreferrer">There is currently a github issue where others are reporting the same problem.</a></p>
<p>Following some advice on the thread, I attempted to perform an upgrade from 1.8.1 to 1.8.2, which failed:</p>
<pre><code>bash-4.3# az aks upgrade --resource-group=k8s --name=phlo -k 1.8.2
Kubernetes may be unavailable during cluster upgrades.
Are you sure you want to perform this operation? (y/n): y
/ Running ..
Deployment failed. Correlation ID: <redacted>. Operation failed with status: 200. Details: Resource state Failed
</code></pre>
<p>According to others on the github thread, it seems to be a region-specific issue.</p>
| <p>The solution to this one for me was to scale the nodes in my Cluster from the Azure Kubernetes service blade web console.</p>
<h1>Workaround / Solution</h1>
<p>An interesting workaround (it worked for me!) to try:</p>
<ol>
<li>Log into the Azure Console — Kubernetes Service blade.</li>
<li>Scale your cluster up by 1 node.</li>
<li>Wait for scale to complete and attempt to connect (you should be able to).</li>
<li>Scale your cluster back down to the normal size to avoid cost increases.</li>
</ol>
<p>Total time it took me ~2 mins. </p>
<h2>More Background Info on the Issue</h2>
<p>Added this to the full ticket description write up that I posted over here (if you want more info have a read): </p>
<p><a href="https://stackoverflow.com/questions/50726534/unable-to-connect-net-http-tls-handshake-timeout-why-cant-kubectl-connect">'Unable to connect Net/http: TLS handshake timeout' — Why can't Kubectl connect to Azure AKS server?</a></p>
|
<p>I'm using Kubernetes 1.7 and I have a GPU machine in the cluster. The <code>kubectl top nodes</code> gives only CPU and Memory usage like this </p>
<pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
machine1 183m 1% 5284Mi 8%
machine2 181m 1% 5543Mi 8%
machine3 250m 1% 7682Mi 7%
machine4 328m 5% 1578Mi 16%
</code></pre>
<p>Is there a way to get <strong>GPU usage</strong>? </p>
| <p>I'm afraid that <code>kubectl top</code> does not provide GPU metrics, regardless if drivers are installed or not.</p>
<p>To inspect GPU statistics, I recommend using the <a href="https://github.com/kubernetes/heapster/blob/master/docs/storage-schema.md" rel="nofollow noreferrer">Heapster</a> monitoring tool
or a <a href="https://github.com/chhibber/pgme" rel="nofollow noreferrer">Prometheus GPU metrics exporter</a>, which exposes the accelerator metrics.</p>
<p>Lastly, you may be interested in <a href="https://developer.nvidia.com/kubernetes-gpu" rel="nofollow noreferrer">Nvidia GPU support for Kubernetes</a> and the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/gpus" rel="nofollow noreferrer">GKE</a> concepts for using this kind of hardware.</p>
|
<p>Team,
I am running kubectl from my laptop and captured Wireshark trace of the same. how could I decrypt the traffic hitting the api server?</p>
<p>ex: we decrypt the http calls using private key of the web server. in k8s world, how would I decrypt the calls to cluster? new to this so need to understand. </p>
| <p>The traffic itself and the (api)server keys will not be enough for that. Modern TLS implementations use <a href="https://en.m.wikipedia.org/wiki/Forward_secrecy" rel="nofollow noreferrer">forward secrecy</a>, so you would need the session keys as well. Browsers can be configured to log session keys (which you can feed into Wireshark), but I don't know if kubectl can be configured with some debugging option to do that. Probably not.</p>
<p>An alternative is a <a href="https://en.m.wikipedia.org/wiki/Man-in-the-middle_attack" rel="nofollow noreferrer">man in the middle "attack"</a> which will defeat TLS with forward secrecy. I capture kubectl traffic with <a href="https://github.com/mitmproxy/mitmproxy" rel="nofollow noreferrer">mitmproxy</a>. You will need to set it up with kubernetes CA, so the generated certificates are valid as far as kubectl is concerned.</p>
<p><em>Of course, all for educational purposes at trainings ;-)</em></p>
|
<p>Installed Rancher server and 2 Rancher agents in Vagrant. Then switch to K8S environment from Rancher server.</p>
<p>On Rancher server host, installed <code>kubectl</code> and <code>helm</code>. Then installed <code>Prometheus</code> by <code>Helm</code>:</p>
<pre><code>helm install stable/prometheus
</code></pre>
<p>Now check the status from Kubernetes dashboard, there are 2 pods pending:
<a href="https://i.stack.imgur.com/bl8Hn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bl8Hn.png" alt="enter image description here"></a></p>
<p>It noticed <code>PersistentVolumeClaim is not bound</code>, so shouldn't the K8s components have been installed by default with Rancher server?</p>
<p>(another name, same issue)
<a href="https://i.stack.imgur.com/jBGUy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jBGUy.png" alt="enter image description here"></a></p>
<h1>Edit</h1>
<pre><code>> kubectl get pvc
NAME STATUS VOLUME CAPACITY
ACCESSMODES STORAGECLASS AGE
voting-prawn-prometheus-alertmanager Pending 6h
voting-prawn-prometheus-server Pending 6h
> kubectl get pv
No resources found.
</code></pre>
<h1>Edit 2</h1>
<pre><code>$ kubectl describe pvc voting-prawn-prometheus-alertmanager
Name: voting-prawn-prometheus-alertmanager
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app=prometheus
chart=prometheus-4.6.9
component=alertmanager
heritage=Tiller
release=voting-prawn
Annotations: <none>
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 12s (x10 over 2m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
$ kubectl describe pvc voting-prawn-prometheus-server
Name: voting-prawn-prometheus-server
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app=prometheus
chart=prometheus-4.6.9
component=server
heritage=Tiller
release=voting-prawn
Annotations: <none>
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 12s (x14 over 3m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
</code></pre>
| <p>I had the same issue. I found two ways to solve this:</p>
<ul>
<li><p>edit <code>values.yaml</code> and set <code>persistentVolume.enabled=false</code> for both the Prometheus server and the Alertmanager; this makes them fall back to <code>emptyDir</code> storage, which is lost when the pod is rescheduled</p></li>
<li><p>If you can't change <code>values.yaml</code> you will have to create the PV before deploying the chart so that the pod can bind to the volume otherwise it will stay in the pending state forever</p></li>
</ul>
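<p>If you take the second route, a minimal sketch of a PersistentVolume that such a claim could bind to (assuming a single-node test setup with a <code>hostPath</code> volume; the name, path and size are illustrative) might be:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server-pv
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/prometheus-server
</code></pre>
<p>Since the pending claims have no storage class set, the PV must not specify one either, otherwise the controller will not match them. You need one PV per pending claim (server and alertmanager).</p>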
|
<p>I am trying to run a Factorio game server on Kubernetes (hosted on GKE).</p>
<p>I have setup a Stateful Set with a Persistent Volume Claim and mounted it in the game server's save directory. </p>
<p>I would like to upload a save file from my local computer to this Persistent Volume Claim so I can access the save on the game server. </p>
<p>What would be the best way to upload a file to this Persistent Volume Claim?</p>
<p>I have thought of 2 ways but I'm not sure which is best or if either are a good idea:</p>
<ul>
<li>Restore a disk snapshot with the files I want to the GCP disk which backs this Persistent Volume Claim</li>
<li>Mount the Persistent Volume Claim on an FTP container, FTP the files up, and then mount it on the game container</li>
</ul>
| <p>It turns out there is a much simpler way: The <code>kubectl cp</code> command. </p>
<p>This command lets you copy data from your computer to a container running on your cluster. </p>
<p>In my case I ran:</p>
<pre><code>kubectl cp ~/.factorio/saves/k8s-test.zip factorio/factorio-0:/factorio/saves/
</code></pre>
<p>This copied the <code>k8s-test.zip</code> file on my computer to <code>/factorio/saves/k8s-test.zip</code> in a container running on my cluster.</p>
<p>See <code>kubectl cp -h</code> for more detailed usage information and examples.</p>
|
<p>I stared my kubernetes cluster on AWS EC2 with kops using a private hosted zone in route53. Now when I do something like <code>kubectl get nodes</code>, the cli says that it can't connect to <code>api.kops.test.com</code> as it is unable to resolve it. So I fixed this issue by manually adding <code>api.kops.test.com</code> and its corresponding public IP (got through record sets) mapping in <code>/etc/hosts</code> file.</p>
<p>I wanted to know if there is a cleaner way to do this (without modifying the system-wide <code>/etc/hosts</code> file), maybe programmatically or through the cli itself.</p>
| <p>Pragmatically speaking, I would add the public IP as an <code>IP</code> SAN to the master's x509 cert, and then just use the public IP in your kubeconfig. Either that, or create the DNS record in a zone that is <em>not</em> private.</p>
<p>You are in a situation where you purposefully made things private, so now they are.</p>
<hr>
<p>Another option, depending on whether it would be worth the effort, is to use a VPN server in your VPC and then connect your machine to EC2 where the VPN connection can add the EC2 DNS servers to your machine's config as a side-effect of connecting. Our corporate Cisco AnyConnect client does something very similar to that.</p>
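<p>Either way, it helps to check which SANs the apiserver certificate actually contains. The sketch below generates and inspects a throwaway self-signed certificate carrying a DNS and an IP SAN (the name and IP are illustrative; <code>-addext</code> assumes OpenSSL 1.1.1+). Against the live apiserver you would pipe <code>openssl s_client -connect &lt;public-ip&gt;:443</code> into the same <code>x509</code> inspection instead:</p>

```shell
# Generate a throwaway cert with both a DNS and an IP SAN (OpenSSL 1.1.1+)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/key.pem -out /tmp/cert.pem \
  -subj "/CN=api.kops.test.com" \
  -addext "subjectAltName=DNS:api.kops.test.com,IP:203.0.113.10"

# Print the SAN section; the public IP must appear here for TLS verification to pass
openssl x509 -in /tmp/cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```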
|
<p>I created a new cluster as per the <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="noreferrer">Azure guide</a> and created the cluster without issue but when I enter the <code>kubectl get nodes</code> to list the nodes I only get this response <code>Unable to connect to the server: net/http: TLS handshake timeout</code>.</p>
<p>I tried once in the Cloud Shell and once on my machine using the latest version of the Azure CLI (2.0.20).</p>
<p>I saw that there was a similar earlier issue regarding <a href="https://github.com/Azure/ACS/issues/4" rel="noreferrer">Service Principal credentials</a>, which I updated but that didn't seem to solve my issue either.</p>
<p>Any guidance would be greatly appreciated.</p>
| <p>The solution to this one for me was to scale the nodes up — and then back down — for my impacted Cluster from the Azure Kubernetes service blade web console.</p>
<h1>Workaround / Potential Solution</h1>
<ol>
<li>Log into the Azure Console — Kubernetes Service blade.</li>
<li>Scale your cluster up by 1 node.</li>
<li>Wait for scale to complete and attempt to connect (you should be able to).</li>
<li>Scale your cluster back down to the normal size to avoid cost increases.</li>
</ol>
<p>Total time it took me ~2 mins. </p>
<h2>More Background Info on the Issue</h2>
<p>Also added this solution to the full ticket description write up that I posted over here (if you want more info have a read): </p>
<p><a href="https://stackoverflow.com/questions/50726534/unable-to-connect-net-http-tls-handshake-timeout-why-cant-kubectl-connect">'Unable to connect Net/http: TLS handshake timeout' — Why can't Kubectl connect to Azure Kubernetes server? (AKS)</a></p>
|
<p>I'm currently migrating a legacy server to Kubernetes, and I found that <code>kubectl</code> or dashboard only shows the latest log file, not the older versions. In order to access the old files, I have to ssh to the node machine and search for it.</p>
<p>In addition to being a hassle, my team wants to restrict access to the node machines themselves, because they will be running pods from many different teams and unrestricted access could be a security issue.</p>
<p>So my question is: can I configure Kubernetes (or a Docker image) so that these old (rotated) log files are stored in some directory accessible from inside the pod itself?</p>
<ul>
<li>Of course, in a pinch, I could probably just execute something like <code>run_server.sh | tee /var/log/my-own.log</code> when the pod starts... but then, to do it correctly, I'll have to add the whole logfile rotation functionality, basically duplicating what Kubernetes is already doing.</li>
</ul>
| <p>So there are a couple of ways to and scenarios for this. If you are just interested in the log of the same pod from before last restart, you can use the <code>--previous</code> flag to look at logs:</p>
<pre><code>kubectl logs -f <pod-name-xyz> --previous
</code></pre>
<p>But since in your case, you are interested in looking at log files beyond one rotation, here is how you can do it. Add a sidecar container to your application container:</p>
<pre><code> volumeMounts:
- name: varlog
mountPath: /tmp/logs
- name: log-helper
image: busybox
args: [/bin/sh, -c, 'tail -n+1 -f /var/log/*.log']
volumeMounts:
- name: varlog
mountPath: /tmp/logs
volumes:
- name: varlog
hpostPath: /var/log
</code></pre>
<p>This mounts the host's <code>/var/log</code> directory, which holds the rotated log files, at <code>/tmp/logs</code> inside the sidecar, and the <code>tail</code> command streams the content of all those files. Now you can run:</p>
<pre><code>kubectl logs <pod-name-abc> -c log-helper
</code></pre>
<p>This solution does away with SSH access, but still requires kubectl access and an extra sidecar container. I still think this is a suboptimal approach, and you should consider one of the options from the Kubernetes cluster-level logging architecture documentation, such as <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-node-logging-agent" rel="nofollow noreferrer">1</a> or <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-sidecar-container-with-the-logging-agent" rel="nofollow noreferrer">2</a>.</p>
|
<p>I set up a deployment of storm nimbus and another deployment of storm supervisor (and a UI), but supervisor dies because it can't resolve nimbus... which always registers itself with the pod FQDN:</p>
<blockquote>
<p>2018-06-06 17:07:01.935 o.a.s.n.NimbusInfo main [INFO] Nimbus figures out its name to storm-nimbus-hl874</p>
</blockquote>
<p>As you can see that's the pod name in the replicationcontroller, and it can't be resolver by other pods...</p>
<p>Supervisor config has the proper address for a headless service I created, which is resolvable and connectable, but once it starts it queries the nimbus name from Zookeeper and that is the FQDN, ignoring the values (nimbus seeds) in the config file. So I assume that the nimbus seeds aren't actually a way to tell supervisor or the UI where to find a nimbus.</p>
<p>Not sure how to fix that or if there is a workaround.</p>
<p>I could create a statefulset with a headless service in which both the pod and the service had the same name, but that looks really ugly.</p>
| <p>Here's where Nimbus figures out its hostname <a href="https://github.com/apache/storm/blob/c9b73a5e3ffcfc458772100dc6bd62ad1cf468de/storm-client/src/jvm/org/apache/storm/nimbus/NimbusInfo.java#L56" rel="nofollow noreferrer">https://github.com/apache/storm/blob/c9b73a5e3ffcfc458772100dc6bd62ad1cf468de/storm-client/src/jvm/org/apache/storm/nimbus/NimbusInfo.java#L56</a></p>
<p>You should be able to set <a href="https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/Config.java#L1108" rel="nofollow noreferrer">https://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/Config.java#L1108</a> to override it.</p>
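<p>The config key defined at that line is <code>storm.local.hostname</code>. A sketch of overriding it in the Nimbus pod's <code>storm.yaml</code> (the service name below is an assumption standing in for the headless service you created) could be:</p>
<pre><code># storm.yaml for the Nimbus pod: advertise the resolvable service name
storm.local.hostname: "storm-nimbus.default.svc.cluster.local"
nimbus.seeds: ["storm-nimbus.default.svc.cluster.local"]
</code></pre>
<p>With that set, Nimbus registers the service name in Zookeeper instead of the pod hostname, so the supervisors and the UI can resolve it.</p>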
|
<p>The Jenkinsfile similar to the one below works fine for me without <code>properties</code> section. But when I add <code>properties</code> Jenkins job fails with</p>
<pre><code>java.lang.NoSuchMethodError: No such DSL method 'properties' found among steps [archive, bat, build, catchError, checkout, container, containerLog, deleteDir, dir, dockerFingerprintFrom, dockerFingerprintRun, echo, error, .....
</code></pre>
<p>I've tried to place it in the root section too, but with the same result. So, I'm not sure where to put it now.</p>
<p>Jenkinsfile</p>
<pre><code>def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true)
],
volumes: [
]) {
node(label) {
properties(
[
[
$class : 'jenkins.model.BuildDiscarderProperty',
strategy: [
$class: 'LogRotator',
numToKeepStr: '50'
]
],
pipelineTriggers(
[
[
$class: 'hudson.triggers.TimerTrigger',
spec : "*/5 * * * *"
]
]
)
]
)
stage('Run kubectl') {
container('kubectl') {
withEnv([
"ES_URL=elasticsearch.storage:9200"
]){
sh """
kubectl run -it --rm=true busybox-curl --image=yauritux/busybox-curl --restart=Never -- curl "$ES_URL"
"""
}
}
}
}
</code></pre>
<p>Jenkins is running on Kubernetes. To set up Jenkins I used <a href="https://github.com/kubernetes/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">stable/jenkins</a> helm chart.</p>
<p>Thank you!</p>
| <p>Ok, I've figured out the problem. Some of the plugins were not updated and apparently caused some conflicts. I updated all the plugins and the problem has gone.</p>
|
<p>I'm new to Kubernetes development, and I have a task to change some features in Kuberenetes services. </p>
<p>I have the <a href="https://github.com/kubernetes/community/tree/master/contributors/devel#readme" rel="nofollow noreferrer">Kubernetes local cluster</a>, and also I have a testbed consisting of 4 RPI's, My first changes were done over <code>Kubectl</code>, and what I have done was cross compiling the binaries on my machine and then replacing the old binaries found in the RPI's (since compiling the code in the RPI's is time-consuming). </p>
<p>Now, I have reached a point where I have to update some changes to <code>Kube-proxy</code>, in the local cluster the binary files are in <code>_output/bin</code>, however in the real cluster the Kube-proxy is running in a container (for each node), so changing the binaries would need creating new Docker images and forcing Kubernetes to choose those images instead of the default ones (as far as I understood). </p>
<p>So My questions: </p>
<ol>
<li><p>What is the best/easiest way to push the changes on <code>Kube-proxy</code> on the running cluster? </p></li>
<li><p>If I compiled the code in the RPI, the <code>make</code> will release a binary file of <code>Kube-proxy</code>, but how the cluster will take those changes?</p></li>
</ol>
<p>Cluster Spec's: </p>
<ol>
<li><p>Kubernetes v1.9.</p></li>
<li><p>OS hypriotos-v7+.</p></li>
<li><p>Architecture arm 32-bit.</p></li>
</ol>
| <p>All nodes should run kube-proxy. (Running kube-proxy on a “master” node is not strictly required, but being consistent is easier.) Obtain a binary as described for kubelet.</p>
<p>Arguments to consider:</p>
<p>If following the HTTPS security approach:</p>
<pre><code>--master=https://$MASTER_IP
--kubeconfig=/var/lib/kube-proxy/kubeconfig
</code></pre>
<p>Otherwise, if taking the firewall-based security approach</p>
<pre><code>--master=http://$MASTER_IP
</code></pre>
<p>Note that on some Linux platforms, you may need to manually install the conntrack package which is a dependency of kube-proxy, or else kube-proxy cannot be started successfully.</p>
<p>For more details on debugging kube-proxy problems, please refer to Debug Services</p>
<p>Source : <a href="https://kubernetes.io/docs/getting-started-guides/scratch/#kube-proxy" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/scratch/#kube-proxy</a></p>
|
<p>I am reviewing Azure Kubernetes Service for my current employer and trying to determine if there are any limitations to using istio on AKS. Does anyone have any experience doing so? Does it work as normal?</p>
| <p>Here is some information for you to refer.</p>
<p>In the article of <a href="https://istio.io/docs/setup/kubernetes/quick-start" rel="nofollow noreferrer">Quick start instructions to install and configure Istio in a Kubernetes cluster</a> , you will find the <strong>Prerequisites</strong> of using Istio in a Kubernetes cluster.</p>
<blockquote>
<p>The following instructions recommend you have access to a Kubernetes 1.9 or newer cluster <strong>with RBAC (Role-Based Access Control) enabled</strong>. You will also need kubectl 1.9 or newer installed.</p>
</blockquote>
<p>But note that RBAC is currently not supported in AKS; it will be available soon. Refer to this <a href="https://learn.microsoft.com/en-us/azure/aks/faq#does-aks-support-kubernetes-role-based-access-control-rbac" rel="nofollow noreferrer">link</a>.</p>
<p>In Azure, you also can use <a href="https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-walkthrough" rel="nofollow noreferrer">ACS</a> which is an older version of AKS, but with control over Kubernetes master or <a href="https://github.com/Azure/acs-engine" rel="nofollow noreferrer">acs-engine</a> where you have full control over Kubernetes cluster.</p>
<p>Or for installing the Istio incubator package with RBAC disabled, refer to this <a href="https://readon.ly/post/2017-05-25-deploy-istio-to-azure-container-service/" rel="nofollow noreferrer">article</a>.</p>
|
<p>have issue with </p>
<pre><code>kubectl run -ti
</code></pre>
<p>in gitlab ci.
For testing in CI we run docker container with "npm t" command in interactive mode and it was perfectly work on docker.
After migrate to Kubernetes have issue, as kubectl run give next error: <code>Unable to use a TTY - input is not a terminal or the right kind of file</code><br>
Job run in image: <strong>lachlanevenson/k8s-kubectl</strong>
If run kubectl run from local machine all work.
Pls help</p>
| <p>The <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core" rel="nofollow noreferrer">PodSpec <code>container:</code></a> has a <code>tty</code> attribute, which defaults to <code>false</code> but which one can set to <code>true</code> (that's what the <code>-t</code> option, which is a shortcut for <code>--tty=true</code>, does in <code>kubectl exec</code>). You can experiment with setting <code>stdin: true</code> but at your peril, since it can hang the Pod waiting for "someone" to type something.</p>
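<p>For reference, a minimal sketch of setting these fields directly in a pod manifest (the container name and image are illustrative):</p>
<pre><code>spec:
  containers:
  - name: test-runner
    image: node:alpine
    command: ["npm", "t"]
    tty: true     # what kubectl run -t sets
    stdin: true   # what kubectl run -i sets; beware of hangs if nothing attaches
</code></pre>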
|
<p>We are on Kubernetes 1.9.0 and wonder if there is way to access an "ordinal index" of a pod with in its statefulset configuration file. We like to dynamically assign a value (that's derived from the ordinal index) to the pod's label and later use it for setting pod affinity (or antiaffinity) under spec.</p>
<p>Alternatively, is the pod's instance name available with in statefulset configfile? If so, we can hopefully extract ordinal index from it and dynamically assign to a label (for later use for affinity).</p>
| <p><a href="https://github.com/kubernetes/kubernetes/issues/30427" rel="noreferrer">Right now</a> the only option is to extract index from host name</p>
<pre><code>lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "export INDEX=${HOSTNAME##*-}"]
</code></pre>
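<p>The parameter expansion there strips everything up to the last <code>-</code> in the pod's hostname; since StatefulSet pods are named <code>&lt;statefulset-name&gt;-&lt;ordinal&gt;</code>, what remains is the ordinal index. A quick local sketch:</p>

```shell
# StatefulSet pods are named <statefulset-name>-<ordinal>, e.g. web-2
HOSTNAME="web-2"
INDEX="${HOSTNAME##*-}"   # drop everything up to the last '-'
echo "$INDEX"             # prints: 2
```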
|
<p>i'm launching a glassfish pod on my Kubernetes cluster, and i'm trying to copy some .war files from a folder that's on my host, but the command cp always seems to fail.</p>
<p>my yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: glassfish
spec:
  # replicas: 2
  selector:
    matchLabels:
      app: glassfish
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: glassfish
    spec:
      containers:
      - image: glassfish:latest
        name: glassfish
        ports:
        - containerPort: 8080
          name: glassfishhttp
        - containerPort: 4848
          name: glassfishadmin
        command: ["/bin/cp"]
        args: #["/mnt/apps/*","/usr/local/glassfish4/glassfish/domains/domain1/autodeploy/"]
        - /mnt/apps/
        - /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/
        volumeMounts:
        - name: glassfish-persistent-storage
          mountPath: /mount
        - name: app
          mountPath: /mnt/apps
      volumes:
      - name: glassfish-persistent-storage
        persistentVolumeClaim:
          claimName: fish-mysql-pvc
      - name: app
        hostPath:
          path: /mnt/nfs
          type: Directory
</code></pre>
<p>I'm trying to use the following command in my container:</p>
<pre><code>cp /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy
</code></pre>
<p>What am I doing wrong?</p>
<p>So far I've tried it with <code>/*</code> and without it. When I use <code>apps/*</code> I see "item or directory not found"; when I use <code>apps/</code> I get "directory omitted". I only need what's in the directory, not the directory itself, so <code>-r</code> doesn't really help either.</p>
| <p>Two things to note here:</p>
<ol>
<li>If you want to copy a directory using <code>cp</code>, you have to provide the <code>-a</code> or <code>-R</code> flag to <code>cp</code>:</li>
</ol>
<blockquote>
<pre><code> -R If source_file designates a directory, cp copies the directory and the entire subtree connected at
that point. If the source_file ends in a /, the contents of the directory are copied rather than
the directory itself. This option also causes symbolic links to be copied, rather than indirected
through, and for cp to create special files rather than copying them as normal files. Created
directories have the same mode as the corresponding source directory, unmodified by the process'
umask.
In -R mode, cp will continue copying even if errors are detected.
</code></pre>
</blockquote>
<ol start="2">
<li><p>If you use <code>/bin/cp</code> as your entrypoint in the pod, then this command is not executed in a shell. The <code>*</code> in <code>/path/to/*</code> however is a shell feature.</p></li>
<li><p>initContainers do not have <code>args</code>, only <code>command</code>.</p></li>
</ol>
<p>To make this work, use <code>/bin/sh</code> as the command instead:</p>
<pre><code>command:
- /bin/sh
- -c
- cp /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/
</code></pre>
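<p>You can reproduce the difference outside Kubernetes: when <code>cp</code> is invoked directly as the entrypoint, the <code>*</code> is passed through literally, while a shell expands it first:</p>

```shell
# Set up a sample source directory with one file
mkdir -p /tmp/apps-demo/src /tmp/apps-demo/dst
echo hello > /tmp/apps-demo/src/app.war

# No shell involved: cp receives the literal string '*' and fails
/bin/cp '/tmp/apps-demo/src/*' /tmp/apps-demo/dst/ 2>/dev/null || echo "literal glob failed"

# With a shell, the glob is expanded before cp runs
/bin/sh -c 'cp /tmp/apps-demo/src/* /tmp/apps-demo/dst/'
ls /tmp/apps-demo/dst/   # prints: app.war
```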
|
<p>Rancher 2 provides 4 options in the "Ports" section when deploying a new workload:</p>
<ul>
<li>NodePort</li>
<li>HostPort</li>
<li>Cluster IP</li>
<li>Layer-4 Load Balancer</li>
</ul>
<p>What are the differences? Especially between NodePort, HostPort and Cluster IP?</p>
| <p><strong>HostPort (nodes running a pod):</strong> Similiar to docker, this will open a port on the node on which the pod is running (this allows you to open port 80 on the host). This is pretty easy to setup an run, however:</p>
<blockquote>
<p>Don’t specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each &lt;hostIP, hostPort, protocol&gt; combination must be unique. If you don’t specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
<a href="https://kubernetes.io/docs/concepts/configuration/overview/" rel="noreferrer">kubernetes.io</a></p>
</blockquote>
<p><strong>NodePort (On every node):</strong> Opens the same port on every node, restricted to the range 30000-32767 by default. This usually only makes sense in combination with an external load balancer (in case you want to publish a web application on port 80).</p>
<blockquote>
<p>If you explicitly need to expose a Pod’s port on the node, consider using a NodePort Service before resorting to hostPort.
<a href="https://kubernetes.io/docs/concepts/configuration/overview/" rel="noreferrer">kubernetes.io</a></p>
</blockquote>
<p><strong>Cluster IP (Internal only):</strong> As the description says, this will open a port only available for internal applications running in the same <strong>cluster</strong>. A service using this option is accessible via the internal cluster IP. </p>
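<p>The first three options map directly onto fields of a Kubernetes Service or pod spec. A sketch of a NodePort Service (names and ports are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort        # use type: ClusterIP (the default) for internal-only access
  selector:
    app: my-app
  ports:
  - port: 80            # the cluster-internal (ClusterIP) port
    targetPort: 8080    # the container port
    nodePort: 30080     # opened on every node; must fall within 30000-32767
</code></pre>
<p>A HostPort, by contrast, is set per container via <code>ports[].hostPort</code> in the pod spec rather than through a Service.</p>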
|
<p>I've got a deployment that has a pod that is stuck at :</p>
<p>The <code>describe</code> output has some sensitive details in it, but the events has this at the end:</p>
<pre><code> ...
Normal Pulled 18m (x3 over 21m) kubelet, ip-10-151-21-127.ec2.internal Successfully pulled image "example/asdf"
Warning FailedSync 7m (x53 over 19m) kubelet, ip-10-151-21-127.ec2.internal Error syncing pod
</code></pre>
<p>What is the cause of this error? How can I diagnose this further?</p>
<hr>
<p>It seems to be repulling the image, however it's odd that it's <code>x10 over 27m</code> I wonder if it's maybe reaching a timeout?</p>
<pre><code> Warning FailedSync 12m (x53 over 23m) kubelet, ip-10-151-21-127.ec2.internal Error syncing pod
Normal Pulling 2m (x10 over 27m) kubelet, ip-10-151-21-127.ec2.internal pulling image "aoeuoeauhtona.epgso"
</code></pre>
| <p>The <strong>kubelet</strong> process is responsible for pulling images from a registry.</p>
<p>This is how you can check the <code>kubelet</code> logs:</p>
<pre><code>$ journalctl -u kubelet
</code></pre>
<p>More information about images can be found in <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">documentation</a>.</p>
|
<p>I am using the Kubernetes Helm requirements.yaml file to add dependencies. Based on a values.yaml condition, it will create the dependency pods.</p>
<p>Here I want to enable the required dependencies when apache.enabled == false.</p>
<pre><code>values.yaml
external_apache:
enabled: false
</code></pre>
<h3>File requirements.yaml</h3>
<pre><code>dependencies:
- name:
version:
repository:
condition: external_apache.enabled
</code></pre>
<p>How do I add a false condition?</p>
<p>I have tried the below condition, but it's not working:</p>
<pre><code>condition: external_apache.enabled == false
</code></pre>
| <p>What version of Helm are you using?</p>
<p>There was a similar issue in the Kubernetes repository on GitHub:</p>
<p><a href="https://github.com/kubernetes/helm/issues/2111" rel="nofollow noreferrer">Unable to use condition in 'requirements.yaml' #2111</a></p>
<p>The solution was to upgrade Helm to v2.2.0+. In that version, condition support was added.</p>
<p><a href="https://helm.sh/docs/topics/v2_v3_migration/" rel="nofollow noreferrer"><strong>Helm 2 to Helm 3 upgrade note:</strong></a></p>
<blockquote>
<p>Chart apiVersion bumped to "v2" for following specification changes:</p>
<ul>
<li><strong>Dynamically linked chart dependencies moved to <a href="https://helm.sh/docs/topics/charts/#the-chartyaml-file" rel="nofollow noreferrer">Chart.yaml</a> (requirements.yaml removed and requirements --> dependencies)</strong></li>
<li>Library charts (helper/common charts) can now be added as dynamically linked chart dependencies</li>
<li>Charts have a type metadata field to define the chart to be of an application or library chart. It is application by default which means it is renderable and installable</li>
<li>Helm 2 charts (apiVersion=v1) are still installable</li>
</ul>
</blockquote>
<p>In the <a href="https://docs.helm.sh/developing_charts/#tags-and-condition-fields-in-requirements-yaml" rel="nofollow noreferrer">Helm documentation</a> or <a href="https://github.com/kubernetes/helm/blob/master/docs/charts.md#tags-and-condition-fields-in-requirementsyaml" rel="nofollow noreferrer">repository</a>, there is an explanation of how the condition works:
(I've added some comments to make reading easier)</p>
<p><strong>Condition</strong> - The condition field holds one or more YAML paths (delimited by commas). <br/>
<strong>Tags</strong> - The tags field is a YAML list of labels to associate with this chart.</p>
<pre><code># parentchart/requirements.yaml
dependencies:
- name: subchart1
repository: http://localhost:10191
version: 0.1.0
condition: subchart1.enabled, global.subchart1.enabled
tags:
      - front-end #(the chart would be disabled because tags.front-end is “false” in the values.yaml file, but ...)
      - subchart1 #(the subchart1.enabled condition path is present in values.yaml and has a "true" value...)
                  #(conditions override tags, so this chart will be enabled)
- name: subchart2
repository: http://localhost:10191
version: 0.1.0
condition: subchart2.enabled,global.subchart2.enabled
    #(if none of these paths exists in values.yaml, this condition has no effect)
tags:
- back-end #(chart should be enabled because the tags.back-end is “true” in values.yaml file)
- subchart2 #(and there is no condition path found in values.yaml to override it)
</code></pre>
<p>If this <strong>condition path</strong> exists in the top parent’s <code>values</code> and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value.
Only the first valid path found in the list is evaluated and <strong>if no paths exist then the condition has no effect</strong>.</p>
<p>In the top parent’s values, all charts with <strong>tags</strong> can be enabled or disabled by specifying the tag and a boolean value.</p>
<pre><code># parentchart/values.yaml
subchart1:
enabled: true #(this could be found from requirements as subchart1.enabled and override tags in this case)
tags:
front-end: false #(this disables charts with tag front-end)
back-end: true #(this enables charts with tag back-end)
</code></pre>
<p>The logic and sequence of conditions and tags are described in <a href="https://docs.helm.sh/developing_charts/#tags-and-condition-resolution" rel="nofollow noreferrer">Tags and Condition Resolution</a>:</p>
<ul>
<li><strong>Conditions (when set in values) always override tags</strong>. The first condition path that exists wins and subsequent ones for that chart are ignored.</li>
<li>Tags are evaluated as ‘if any of the chart’s tags are true then enable the chart’.</li>
<li>Tags and conditions values must be set in the top parent’s values.</li>
<li>The tags: key in values must be a top level key. <strong>Globals and nested tags: tables are not currently supported.</strong></li>
</ul>
<p>You can also set tags and conditions in the command line:</p>
<pre><code>helm install --set tags.front-end=true --set subchart2.enabled=false
</code></pre>
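<p>Note for the original question: <code>condition</code> only resolves a YAML path to a boolean and does not evaluate expressions, so <code>external_apache.enabled == false</code> can never match. A common workaround (a sketch; the <code>internal_apache</code> flag is hypothetical) is to maintain an inverse value and use that as the condition:</p>
<pre><code># values.yaml
external_apache:
  enabled: false
internal_apache:
  enabled: true        # kept manually as the inverse of external_apache.enabled

# requirements.yaml
dependencies:
- name: apache
  version: 1.0.0       # placeholder
  repository: http://localhost:10191   # placeholder
  condition: internal_apache.enabled
</code></pre>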
|
| <p>I'm launching a Glassfish pod on my Kubernetes cluster, and I'm trying to copy some .war files from a folder that's on my host, but the <code>cp</code> command always seems to fail.</p>
<p>My YAML file:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: glassfish
spec:
# replicas: 2
selector:
matchLabels:
app: glassfish
strategy:
type: Recreate
template:
metadata:
labels:
app: glassfish
spec:
containers:
- image: glassfish:latest
name: glassfish
ports:
- containerPort: 8080
name: glassfishhttp
- containerPort: 4848
name: glassfishadmin
command: ["/bin/cp"]
args: #["/mnt/apps/*","/usr/local/glassfish4/glassfish/domains/domain1/autodeploy/"]
- /mnt/apps/
- /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/
volumeMounts:
- name: glassfish-persistent-storage
mountPath: /mount
- name: app
mountPath: /mnt/apps
volumes:
- name: glassfish-persistent-storage
persistentVolumeClaim:
claimName: fish-mysql-pvc
- name: app
hostPath:
path: /mnt/nfs
type: Directory</code></pre>
</div>
</div>
</p>
<p>I'm trying to use the following command in my container:</p>
<pre><code>cp /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy
</code></pre>
<p>What am I doing wrong?</p>
<p>So far I've tried to do it with the <code>/*</code> and without it. When I use <code>apps/*</code> I see "item or directory not found", and when I use <code>apps/</code> I get "directory omitted". I need only what's in the directory, not the directory itself, so <code>-r</code> doesn't really help either.</p>
| <blockquote>
<p>What am I doing wrong?</p>
</blockquote>
<p>Here is the correct command to execute:</p>
<pre><code>command: ["sh", "-c", "cp -r /mnt/apps/* /usr/local/glassfish4/glassfish/domains/domain1/autodeploy/ && asadmin start-domain --verbose"]
</code></pre>
<p>With your <code>cp</code> command you effectively overwrite the command that legitimately starts everything. You can see the original one (which runs fine without your <code>cp</code> command) if you inspect the container. Initially, the container is started with:</p>
<pre><code>...
"Cmd": [
"/bin/sh",
"-c",
"asadmin start-domain --verbose"
],
...
</code></pre>
<p>Simply running it after the copy command solves your issue.</p>
|
<p>I am trying to add a gpu label to my node.
This does not work:</p>
<pre><code>kubectl label node gke-kubia-default-pool-98519add-25c1/10.164.0.3 gpu=true
error: there is no need to specify a resource type as a separate argument when passing arguments in resource/name form (e.g. 'kubectl get resource/<resource_name>' instead of 'kubectl get resource resource/<resource_name>'
</code></pre>
<p>Nor does it work without <code>node</code>:</p>
<pre><code>kubectl label gke-kubia-default-pool-98519add-25c1/10.164.0.2 gpu=true
error: the server doesn't have a resource type "gke-kubia-default-pool-98519add-25c1"
</code></pre>
<p>I think that node exists</p>
<pre><code>Name: kubia-manual-v2
Namespace: default
Node: gke-kubia-default-pool-98519add-hsrc/10.164.0.2
Start Time: Fri, 08 Jun 2018 14:58:04 +0200
</code></pre>
<p>How to set the label?</p>
<p>How to list available nodes?</p>
| <p>I have solved my problem. First, list the nodes:</p>
<pre><code>kubectl get no
NAME STATUS ROLES AGE VERSION
gke-kubia-default-pool-98519add-25c1 Ready <none> 20h v1.8.10-gke.0
gke-kubia-default-pool-98519add-5kfr Ready <none> 20h v1.8.10-gke.0
gke-kubia-default-pool-98519add-hsrc Ready <none> 20h v1.8.10-gke.0
</code></pre>
<p>Then label the node using just its name, without the <code>/10.164.0.3</code> IP suffix:</p>
<pre><code>kubectl label node gke-kubia-default-pool-98519add-25c1 gpu=true
# output:
node "gke-kubia-default-pool-98519add-25c1" labeled
</code></pre>
|
<p>I would like to resolve the kube-dns names from outside of the Kubernetes cluster by adding a stub zone to my DNS servers. This requires changing the cluster.local domain to something that fits into my DNS namespace.</p>
<p>The cluster DNS is working fine with cluster.local. To change the domain I have modified the line with KUBELET_DNS_ARGS on /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to read:</p>
<pre><code>Environment="KUBELET_DNS_ARGS=--cluster-dns=x.y.z --cluster-domain=cluster.mydomain.local --resolv-conf=/etc/resolv.conf.kubernetes"
</code></pre>
<p>After restarting kubelet external names are resolvable but kubernetes name resolution failed.</p>
<p>I can see that kube-dns is still running with:</p>
<pre><code>/kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2
</code></pre>
<p>The only place I was able to find cluster.local was within the pods yaml configuration which reads:</p>
<pre><code> containers:
- args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
</code></pre>
<p>After modifying the yaml and recreating the pod using</p>
<pre><code>kubectl replace --force -f kube-dns.yaml
</code></pre>
<p>I still see kube-dns gettings started with --domain=cluster.local.</p>
<p>What am I missing?</p>
| <p>I had a similar problem where I have been porting a microservices based application to Kubernetes. Changing the internal DNS zone to cluster.local was going to be a fairly complex task that we didn't really want to deal with.</p>
<p>In our case, we <a href="https://coredns.io/2018/01/29/deploying-kubernetes-with-coredns-using-kubeadm/" rel="noreferrer">switched from KubeDNS to CoreDNS</a>, and simply enabled the <a href="https://coredns.io/plugins/rewrite" rel="noreferrer">CoreDNS rewrite plugin</a> to translate <code>our.internal.domain</code> to <code>ourNamespace.svc.cluster.local</code>. </p>
<p>After doing this, the corefile part of our CoreDNS configmap looks something like this:</p>
<pre><code>data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
rewrite name substring our.internal.domain ourNamespace.svc.cluster.local
proxy . /etc/resolv.conf
cache 30
}
</code></pre>
<p>This enables our kubernetes services to respond on both the default DNS zone and our own zone.</p>
|
| <p>I got the task of setting up Kubernetes 2 days ago with no background in that technology, so sorry if my questions or setup are not good.</p>
<p>The topology is quite simple: a <em>public IP</em> and a dedicated <em>HA proxy</em> configured to forward requests to a <strong>Kubernetes Service</strong> containing a deployment of 2 pods. (Stickiness required!)</p>
<pre><code> Service setup
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "api-admin2",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/api-admin2",
"uid": "98121d0d-698b-11e8-8d90-262e68d4dba8",
"resourceVersion": "245163",
"creationTimestamp": "2018-06-06T13:14:50Z",
"labels": {
"app": "api-admin"
},
"annotations": {
"service.beta.kubernetes.io/azure-load-balancer-internal": "true"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 6543,
"nodePort": 31302
}
],
"selector": {
"app": "api-admin"
},
"clusterIP": "10.100.22.118",
"type": "LoadBalancer",
"sessionAffinity": "ClientIP",
"externalTrafficPolicy": "Local",
"healthCheckNodePort": 32660,
"sessionAffinityConfig": {
"clientIP": {
"timeoutSeconds": 10800
}
}
},
"status": {
"loadBalancer": {
"ingress": [
{
"ip": "10.100.21.97"
}
]
}
}
}
</code></pre>
<p>The traffic arrives on pods but not in round robin, the entire traffic goes to the same pod. To have traffic going to another pod, I have to stop the one getting it... Which is not the purpose of this...</p>
<p>Any idea how to have the traffic properly loadbalanced with stickiness ? </p>
<p>Thanks !</p>
| <p>From the Service documentation for <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="nofollow noreferrer">proxy mode: IPVS</a>:</p>
<blockquote>
<p>In any of these proxy model, any traffic bound for the Service’s
IP:Port is proxied to an appropriate backend without the clients
knowing anything about Kubernetes or Services or Pods. <code>Client-IP</code> based
session affinity can be selected by setting
service.spec.sessionAffinity to <code>“ClientIP”</code> (the default is <code>“None”</code>),
and you can set the max session sticky time by setting the field
<code>service.spec.sessionAffinityConfig.clientIP.timeoutSeconds</code> if you have
already set <code>service.spec.sessionAffinity to “ClientIP”</code> (the default is
“10800”).</p>
</blockquote>
<p>In your configuration, the session affinity (which pins a client to a pod) is set to <code>ClientIP</code> with the default sticky time of 10800 seconds, so all traffic coming from the same client will be forwarded to the same pod for 3 hours.</p>
<p>If you want to specify time, as well, this is what needs to be changed:</p>
<pre><code> sessionAffinityConfig:
clientIP:
timeoutSeconds: _TIME_
</code></pre>
<p>This allows you to change the duration of stickiness; for example, if you set <em>TIME</em> to 10, the affinity for a client expires after 10 seconds and its next request may land on a different pod.</p>
|
<p>I have the following Ingress resource:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: main-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
nginx.ingress.kubernetes.io/proxy-read-timeout: "86400"
nginx.ingress.kubernetes.io/proxy-send-timeout: "86400"
spec:
tls:
- secretName: the-secret
hosts:
- sample.domain.com
- sample2.domain.com
- rabbit.domain.com
- hub.domain.com
- grafana.domain.com
rules:
- host: sample.domain.com
http:
paths:
- path: /
backend:
serviceName: fe-srvc
servicePort: 80
- path: /api
backend:
serviceName: be-srvc
servicePort: 80
- host: sample2.domain.com
http:
paths:
- path: /
backend:
serviceName: fe2-srvc
servicePort: 80
- path: /api
backend:
serviceName: be-srvc
servicePort: 80
## The Extra Services ###
- host: rabbit.domain.com
http:
paths:
- path: /
backend:
serviceName: rabbitmq-srvc
servicePort: 80
</code></pre>
<p>and I want to patch it after it is deployed.</p>
<p>So I use this, to try and replace the <code>be-srvc</code> value with <code>some-srvc</code> :</p>
<pre><code>kubectl patch ing/main-ingress --patch '{ "spec" : { "rules": [{"http":{"paths":[ {"- path":"/"},{"backend":{"serviceName":"other-srvc"}},{"servicePort":"80"} ] }}]}}'
</code></pre>
<p>and I get this error:</p>
<pre><code>The Ingress "main-ingress" is invalid:
* spec.rules[0].http.backend.serviceName: Required value
* spec.rules[0].http.backend.servicePort: Invalid value: 0: must be between 1 and 65535, inclusive
</code></pre>
<p>Any insight would be appreciated!</p>
| <p>Your patch has a number of problems; for example <code>"- path"</code> instead of <code>"path"</code> but also incorrect referencing of object levels. However, even if you fixed the mistakes this would not work as intended. Let's see why.</p>
<p><code>kubectl patch</code> is a request for a <strong><em>strategic merge patch</em></strong>. When patching arrays, like the <code>.spec.rules</code> and <code>.spec.rules.http.paths</code> in this case, a <em>strategic merge patch</em> can use the defined <em>patch type</em> and <em>merge patch merge key</em> for the object to do The Right Thing. However, in case of the Ingress object no one bothered to define these. This means that any patch will overwrite the entire object; it will not be a nice merge that one is hoping for.</p>
<p>To accomplish the particular change referred to in the question you can do:</p>
<pre><code>kubectl get ing/main-ingress -o json \
| jq '(.spec.rules[].http.paths[].backend.serviceName | select(. == "be-srvc")) |= "some-srvc"' \
| kubectl apply -f -
</code></pre>
<p>The above will change all occurrences of the <code>be-srvc</code> Service to <code>some-srvc</code>. Keep in mind that there is a short race condition here: if the Ingress is modified after <code>kubectl get</code> ran, the change will fail with the error <code>Operation cannot be fulfilled on ingresses.extensions "xx": the object has been modified</code>; to handle that case you need to implement retry logic.</p>
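<p>A sketch of that retry logic in shell (the attempt count, delay, and the <code>patch_ingress</code> wrapper name are my own choices, not from the original commands):</p>

```shell
#!/bin/sh
# Retry a command up to 3 times, pausing briefly between attempts.
retry() {
  attempts=3
  while [ "$attempts" -gt 0 ]; do
    "$@" && return 0            # stop as soon as the command succeeds
    attempts=$((attempts - 1))
    sleep 1
  done
  return 1                      # all attempts failed
}

# Hypothetical wrapper around the get-modify-apply pipeline above;
# each invocation re-reads the current Ingress state.
patch_ingress() {
  kubectl get ing/main-ingress -o json \
    | jq '(.spec.rules[].http.paths[].backend.serviceName | select(. == "be-srvc")) |= "some-srvc"' \
    | kubectl apply -f -
}
```

<p>Calling <code>retry patch_ingress</code> then re-reads the Ingress on every attempt, so a concurrent modification only costs one extra round trip.</p>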
<p>If the indexes are known in the arrays mentioned above you can accomplish the patch directly:</p>
<pre><code>kubectl patch ing/main-ingress --type=json \
-p='[{"op": "replace", "path": "/spec/rules/0/http/paths/1/backend/serviceName", "value":"some-srvc"}]'
kubectl patch ing/main-ingress --type=json \
-p='[{"op": "replace", "path": "/spec/rules/1/http/paths/1/backend/serviceName", "value":"some-srvc"}]'
</code></pre>
<p>The two commands above will change the backends for <code>sample.domain.com/api</code> and <code>sample2.domain.com/api</code> to <code>some-srvc</code>.</p>
<p>The two commands can also be combined like this:</p>
<pre><code>kubectl patch ing/main-ingress --type=json \
-p='[{"op": "replace", "path": "/spec/rules/0/http/paths/1/backend/serviceName", "value":"some-srvc"}, {"op": "replace", "path": "/spec/rules/1/http/paths/1/backend/serviceName", "value":"some-srvc"}]'
</code></pre>
<p>This has the same effect and, as an added bonus, there is no race condition here; the patch is guaranteed to be atomic.</p>
|
<p>I'm configuring a highly available kubernetes cluster using GKE and terraform. Multiple teams will be running multiple deployments on the cluster and I anticipate most deployments will be in a custom namespace, mainly for isolation reasons.</p>
<p>One of our open questions is how to manage GCP service accounts on the cluster.</p>
<p>I can create the cluster with a custom GCP service account, and adjust the permissions so it can pull images from GCR, log to stackdriver, etc. I think this custom service account will be used by the GKE nodes, instead of the default compute engine service account. Please correct me if I'm wrong on this front!</p>
<p>Each deployment needs to access a different set of GCP resources (cloud storage, data store, cloud sql, etc) and I'd like each deployment to have its own GCP service account so we can control permissions. I'd also like running pods to have no access to the GCP service account that's available to the node running the pods.</p>
<p>Is that possible?</p>
<p>I've considered some options, but I'm not confident on the feasibility or desirability:</p>
<ol>
<li>A GCP Service account for a deployment could be added to the cluster as a kubernetes secret, deployments could mount it as a file, and set <code>GOOGLE_APPLICATION_CREDENTIALS</code> to point to it</li>
<li>Maybe access to the metadata API for the instance can be denied to pods, or can the service account returned by the metadata API be changed?</li>
<li>Maybe there's a GKE (or kubernetes) native way to control the service account presented to pods?</li>
</ol>
| <p>You are on the right track. GCP service accounts can be used in GKE for PODs to assign permissions to GCP resources.</p>
<p>Create an account:</p>
<pre><code>gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME}
</code></pre>
<p>Add IAM permissions to the service account:</p>
<pre><code>gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
--role='roles/${ROLE_ID}'
</code></pre>
<p>Generate a JSON file for the service account:</p>
<pre><code>gcloud iam service-accounts keys create \
--iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
service-account.json
</code></pre>
<p>Create a secret with that JSON:</p>
<pre><code>kubectl create secret generic echo --from-file service-account.json
</code></pre>
<p>Create a deployment for your application using that secret:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: echo
spec:
replicas: 1
selector:
matchLabels:
app: echo
template:
metadata:
name: echo
spec:
containers:
- name: echo
image: "gcr.io/hightowerlabs/echo"
env:
- name: "GOOGLE_APPLICATION_CREDENTIALS"
value: "/var/run/secret/cloud.google.com/service-account.json"
- name: "PROJECT_ID"
valueFrom:
configMapKeyRef:
name: echo
key: project-id
- name: "TOPIC"
value: "echo"
volumeMounts:
- name: "service-account"
mountPath: "/var/run/secret/cloud.google.com"
volumes:
- name: "service-account"
secret:
secretName: "echo"
</code></pre>
<p>If you want different permissions for separate deployments, create several GCP service accounts with different permissions, generate JSON keys for them, and assign them to the deployments according to your plans. Pods will have access according to the mounted service account, not the service account assigned to the node.</p>
<p>For more information, you can look through the links:</p>
<ul>
<li><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform" rel="nofollow noreferrer">Authenticating to Cloud Platform with Service Accounts</a></li>
<li><a href="https://github.com/kelseyhightower/gke-service-accounts-tutorial" rel="nofollow noreferrer">Google Cloud Service Accounts with Google Container Engine (GKE) - Tutorial</a></li>
</ul>
|
<p>Yesterday I spun up an Azure Kubernetes Service cluster running a few simple apps. Three of them have exposed public IPs that were reachable yesterday.</p>
<p>As of this morning I can't get the dashboard tunnel to work or the <code>LoadBalancer</code> IPs themselves.</p>
<p>I was asked by the Azure twitter account to solicit help here.</p>
<p><strong>I don't know how to troubleshoot this apparent network issue - only <code>az</code> seems to be able to touch my cluster.</strong></p>
<h2>dashboard error log</h2>
<p><code>❯❯❯ make dashboard ~/c/azure-k8s (master)
az aks browse --resource-group=akc-rg-cf --name=akc-237
Merged "akc-237" as current context in /var/folders/9r/wx8xx8ls43l8w8b14f6fns8w0000gn/T/tmppst_atlw
Proxy running on http://127.0.0.1:8001/
Press CTRL+C to close the tunnel...
error: error upgrading connection: error dialing backend: dial tcp 10.240.0.4:10250: getsockopt: connection timed out
</code></p>
<h2>service+pod listing</h2>
<p><code>❯❯❯ kubectl get services,pods ~/c/azure-k8s (master)
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure-vote-back ClusterIP 10.0.125.49 <none> 6379/TCP 16h
azure-vote-front LoadBalancer 10.0.185.4 40.71.248.106 80:31211/TCP 16h
hubot LoadBalancer 10.0.20.218 40.121.215.233 80:31445/TCP 26m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 19h
mti411-web LoadBalancer 10.0.162.209 52.168.123.30 80:30874/TCP 26m
</code></p>
<p><code>NAME READY STATUS RESTARTS AGE
azure-vote-back-7556ff9578-sjjn5 1/1 Running 0 2h
azure-vote-front-5b8878fdcd-9lpzx 1/1 Running 0 16h
hubot-74f659b6b8-wctdz 1/1 Running 0 9s
mti411-web-6cc87d46c-g255d 1/1 Running 0 26m
mti411-web-6cc87d46c-lhjzp 1/1 Running 0 26m
</code></p>
<h2>http failures</h2>
<p><code>❯❯❯ curl --connect-timeout 2 -I http://40.121.215.233 ~/c/azure-k8s (master)
curl: (28) Connection timed out after 2005 milliseconds
❯❯❯ curl --connect-timeout 2 -I http://52.168.123.30 ~/c/azure-k8s (master)
curl: (28) Connection timed out after 2001 milliseconds
</code></p>
| <p>If you are getting <strong>getsockopt: connection timed out</strong> while trying to access your AKS dashboard, deleting the <code>tunnelfront</code> pod should help: once you delete it, the master triggers the creation of a new tunnelfront. It's something I have tried, and it worked for me.</p>
|
<p>I have the following <strong>PersistentVolumeClaim</strong>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nginx-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 256Mi
storageClassName: fask
</code></pre>
<p>and <strong>Deployment</strong>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
volumeMounts:
- name: data
mountPath: "/var/www/html"
volumes:
- name: data
persistentVolumeClaim:
claimName: nginx-pvc
</code></pre>
<p>If I run with a <em>single</em> replica Deployment my PV gets dynamically created by the vsphere <strong>StorageClass</strong></p>
<p>However, if I have more than <em>2</em> replicas it will fail to create the second PV:</p>
<pre><code>AttachVolume.Attach failed for volume "pvc-8facf319-6b1a-11e8-935b-00505680b1b8" : Failed to add disk 'scsi0:1'.
Unable to mount volumes for pod "nginx-deployment-7886f48dcd-lzms8_default(b0e38764-6b1a-11e8-935b-00505680b1b8)": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx-deployment-7886f48dcd-lzms8". list of unmounted volumes=[data]. list of unattached volumes=[data default-token-5q7kr]
</code></pre>
| <blockquote>
<p>However, if I have more than 2 replicas it will fail to create the second PV</p>
</blockquote>
<p>You should then probably use <code>StatefulSet</code> and <code>volumeClaimTemplates</code> within it instead of <code>Deployment</code> and <code>PersistentVolumeClaim</code>.</p>
<p>In your case every replica of the Deployment shares the same <code>PersistentVolumeClaim</code> (which is <code>ReadWriteOnce</code> and can't be attached a second time), while with <code>volumeClaimTemplates</code> a separate one is provisioned for each replica.</p>
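<p>A sketch using the names from your manifests (it assumes a headless Service named <code>nginx</code>, which you would need to create):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: nginx      # headless Service, assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: "/var/www/html"
  volumeClaimTemplates:   # one PVC per replica: data-nginx-0, data-nginx-1, ...
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fask
      resources:
        requests:
          storage: 256Mi
</code></pre>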
|
<p>It seems by default Kubernetes creates a hostPath volume with <code>755</code> permission on a directory.</p>
<p>Is it possible to set this value to something else by a <code>volume</code> spec? As opposed to manually doing a <code>chmod</code> on the relevent host directory.</p>
| <p>One way is to use an <code>initContainer</code> to <code>chmod</code> the mounted path before the main containers start:</p>
<pre><code> initContainers:
- name: volume-mount-hack
image: busybox
command: ["sh", "-c", "chmod -R 777 /tmp/docker"]
volumeMounts:
- name: redis-socket
mountPath: /tmp/docker
</code></pre>
|
| <p>I've been trying to debug this for a few hours to no effect. Either I don't know how to google it or it's an unusual issue.</p>
<p>I have a Kubernetes Engine cluster on Google Cloud. To test it I deployed a simple app:
<a href="https://github.com/iniside/uSpark" rel="nofollow noreferrer">https://github.com/iniside/uSpark</a>
It is composed of three console apps:</p>
<ul>
<li>GreeterServer - Frontend with gRPC and Google Endpoints</li>
<li>Greeter.Backend - Backend with gRPC service.</li>
<li>uSpark - client console app.</li>
</ul>
<p>All of them are using .NET Core 2.1, gRPC 1.12, and C#. Kubernetes Engine is 1.10</p>
<p>I already checked that the service DNS resolves inside the cluster.
I also tried to call the service directly by its IP when creating the Channel.</p>
<p>Either of those end up with:</p>
<pre><code>Grpc.Core.RpcException: 'Status(StatusCode=Unavailable, Detail="Name resolution failure")'
</code></pre>
<p>GreeterServer is trying to talk to Greeter.Backend:</p>
<pre><code>public override Task<HelloReply> SayHelloAgain(HelloRequest request, ServerCallContext context)
{
Channel channel = new Channel("grpc-greeter-backend.default.svc.cluster.local", 9000, ChannelCredentials.Insecure);
var backendClient = new GreeterBackend.GreeterBackendClient(channel);
var reply = backendClient.SayHelloFromBackend(new BackendHelloRequest { Name = "iniside" });
channel.ShutdownAsync().Wait(); //not neeed to wait, but easier to debug now.
return Task.FromResult<HelloReply>(new HelloReply { Message = "Hello Again " + request.Name + " " + reply.Message });
}
</code></pre>
<p>And this is code of my backend:</p>
<pre><code>class GreeterBackendImpl : GreeterBackend.GreeterBackendBase
{
public override Task<BackendHelloReply> SayHelloFromBackend(BackendHelloRequest request, ServerCallContext context)
{
return Task.FromResult<BackendHelloReply>(new BackendHelloReply { Message = "Hello " + request.Name + " From Backend"});
}
}
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hello World!");
Server server = new Server
{
Services = { GreeterBackend.BindService(new GreeterBackendImpl()) },
Ports = { new ServerPort("127.0.0.1", 9000, ServerCredentials.Insecure) }
};
server.Start();
Console.WriteLine("Gretter server is linstening on port 50051");
Console.WriteLine("Press any key to stop server");
int read = Console.Read();
while (read < 0)
{
}
}
}
</code></pre>
<p>Configuration of Frontend:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: esp-grpc-greeter
spec:
ports:
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 80
targetPort: 9100
protocol: TCP
name: http2
selector:
app: esp-grpc-greeter
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: esp-grpc-greeter
spec:
replicas: 1
template:
metadata:
labels:
app: esp-grpc-greeter
spec:
containers:
- name: esp
image: gcr.io/endpoints-release/endpoints-runtime:1
args: [
"--http2_port=9100",
"--service=hellohorld3.endpoints.sa-game-206414.cloud.goog",
"--rollout_strategy=managed",
"--backend=grpc://127.0.0.1:9000"
]
ports:
- containerPort: 9100
- name: greeter
image: eu.gcr.io/sa-game-206414/greeter-service:v1
ports:
- containerPort: 8000
</code></pre>
<p>Backend Configuration:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: grpc-greeter-backend
spec:
ports:
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 8080
targetPort: 9000
protocol: TCP
name: http2
- port: 9000
targetPort: 9000
protocol: TCP
name: http2900
selector:
app: grpc-greeter-backend
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: grpc-greeter-backend
spec:
replicas: 1
template:
metadata:
labels:
app: grpc-greeter-backend
spec:
containers:
- name: greeter-backend
image: eu.gcr.io/sa-game-206414/greeter-backend:b1
ports:
- containerPort: 9000
</code></pre>
| <p>I fixed the issue, and it was twofold. First, as @spender suggested, I changed the gRPC listen addresses to 0.0.0.0 on both the backend and the frontend.</p>
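<p>Concretely, that means changing the backend's server binding in <code>Main</code> from <code>127.0.0.1</code> to <code>0.0.0.0</code>, so it accepts connections arriving from outside its own pod:</p>
<pre><code>Ports = { new ServerPort("0.0.0.0", 9000, ServerCredentials.Insecure) }
</code></pre>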
<p>But the changes were not picked up by the cluster, because all this time it was pulling the wrong images.</p>
<p>Every time I made changes, I deleted the old images with their old tags (both locally and on GCR), then created a new image with the same remote repository and tag and pushed it to GCR.</p>
<p>For some reason the images were not updated. (A likely cause: when a new image is pushed under the same tag, nodes keep using their cached copy unless <code>imagePullPolicy: Always</code> is set or a fresh tag is used for every build.)</p>
|
<p>I have a few kubefiles defining Kubernetes services and deployments. When I create a cluster of 4 nodes on GCP (never changes), all the small <code>kube-system</code> pods are spread across the nodes instead of filling one at a time. Same with the pods created when I apply my kubefiles. </p>
<p>The problem is sometimes I have plenty of available total CPU for a deployment, but its pods can't be provisioned because no single node has that much free. It's fragmented, and it would obviously fit if the kube-system pods all went into one node instead of being spread out.</p>
<p>I can avoid problems by using bigger/fewer nodes, but I feel like I shouldn't have to do that. I'd also rather not deal with pod affinity settings for such a basic testing setup. Is there a solution to this, maybe a setting to have it prefer filling nodes in order? Like using an already opened carton of milk instead of opening a fresh one each time.</p>
<p>Haven't tested this, but the order I apply files in probably matters, meaning applying the biggest CPU users first could help. But that seems like a hack.</p>
<p>I know there's some <a href="https://github.com/kubernetes/kubernetes/issues/12140" rel="nofollow noreferrer">discussion on rescheduling</a> that gets complicated because they're dealing with a dynamic node pool, and it seems like they don't have it ready, so I'm guessing there's no way to have it rearrange my pods dynamically.</p>
| <p>You can write your own scheduler. Almost all components in k8s are replaceable.</p>
<p>I know you won't. If you don't want to deal with affinity, you definitely won't write your own scheduler. But know that you have that option.</p>
<p>With the native GCP setup, make sure all your pods have resource requests and limits set up; that gives the default scheduler accurate numbers to place pods with.</p>
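<p>As a minimal sketch of that last suggestion (the container name, image and values are placeholders), a pod spec with requests and limits looks like this; with requests declared, the scheduler packs nodes based on reserved CPU/memory instead of guessing:</p>
<pre><code>containers:
- name: my-app          # hypothetical container
  image: my-app:1.0
  resources:
    requests:
      cpu: 250m         # what the scheduler reserves on a node
      memory: 256Mi
    limits:
      cpu: 500m         # hard caps enforced at runtime
      memory: 512Mi
</code></pre>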
|
<p>How does pods get unique IP addresses even if they reside in the same worker node?</p>
<p>Also, a pod is not a device, so what is the logic behind giving it an IP address?<br>
Is the IP address assigned to a pod a virtual IP? </p>
| <p>A <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">pod</a> is part of a cluster (group of nodes), and <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">cluster networking</a> tells you that:</p>
<blockquote>
<p>In reality, Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. </p>
<p>This means that containers within a Pod can all reach each other’s ports on localhost.<br>
This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM.<br>
This is called the <strong>“IP-per-pod” model</strong>.</p>
</blockquote>
<p>The constraints are:</p>
<blockquote>
<ul>
<li>all containers can communicate with all other containers without NAT</li>
<li>all nodes can communicate with all containers (and vice-versa) without NAT</li>
<li>the IP that a container sees itself as is the same IP that others see it as</li>
</ul>
</blockquote>
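<p>To see the IP-per-pod model in practice on a live cluster, list the pods with the wide output; the <code>IP</code> column shows each pod's own address, distinct from the node's IP (illustrative output, reusing the addresses from the example below):</p>
<pre><code>kubectl get pods -o wide
# NAME   READY   STATUS    RESTARTS   AGE   IP             NODE
# pod1   1/1     Running   0          1m    100.96.243.7   node1
# pod2   1/1     Running   0          1m    100.96.243.8   node1
</code></pre>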
<p>See more with "<a href="https://medium.com/practo-engineering/networking-with-kubernetes-1-3db116ad3c98" rel="nofollow noreferrer"><strong>Networking with Kubernetes</strong></a>" from <a href="https://twitter.com/alsingh87" rel="nofollow noreferrer"><strong>Alok Kumar Singh</strong></a>:</p>
<p><a href="https://i.stack.imgur.com/O62bV.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O62bV.gif" alt="https://cdn-images-1.medium.com/max/1000/1*lAfpMbHRf266utcd4xmLjQ.gif"></a></p>
<p>Here:</p>
<blockquote>
<p>We have a machine, it is called a <strong>node</strong> in kubernetes.<br>
It has an IP 172.31.102.105 belonging to a subnet having CIDR 172.31.102.0/24.</p>
</blockquote>
<p>(<a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="nofollow noreferrer">CIDR: Classless Inter-Domain Routing</a>, a method for allocating IP addresses and IP routing)</p>
<blockquote>
<p>The node has an network interface <code>eth0</code> attached. It belongs to root network namespace of the node.<br>
For pods to be isolated, they were created in their own network namespaces — these are pod1 n/w ns and pod2 n/w ns.<br>
The pods are assigned IP addresses 100.96.243.7 and 100.96.243.8 from the CIDR range 100.96.0.0/11.</p>
</blockquote>
<p>For the network implementation, see "<a href="https://cloudnativelabs.github.io/post/2017-04-18-kubernetes-networking/" rel="nofollow noreferrer"><strong>Kubernetes Networking</strong></a>" from <a href="https://twitter.com/cloudnativelabs" rel="nofollow noreferrer"><strong>CloudNativelabs</strong></a>:</p>
<blockquote>
<p>Kubernetes does not orchestrate setting up the network and offloads the job to the <strong><a href="https://github.com/containernetworking/cni" rel="nofollow noreferrer">CNI (Container Network Interface)</a></strong> plug-ins. Please refer to the <strong><a href="https://github.com/containernetworking/cni/blob/master/SPEC.md" rel="nofollow noreferrer">CNI spec</a></strong> for further details on CNI specification. </p>
<p>Below are possible network implementation options through CNI plugins which permits pod-to-pod communication honoring the Kubernetes requirements:</p>
<ul>
<li>layer 2 (switching) solution</li>
<li>layer 3 (routing) solution</li>
<li>overlay solutions</li>
</ul>
</blockquote>
<h2>layer 2 (switching)</h2>
<p><a href="https://i.stack.imgur.com/VYSiH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VYSiH.jpg" alt="https://cloudnativelabs.github.io/img/l2-network.jpg"></a></p>
<p>You can see their IPs allocated as part of a container subnet address range.</p>
<h2>layer 3 (routing)</h2>
<p><a href="https://i.stack.imgur.com/Vkt6G.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vkt6G.jpg" alt="https://cloudnativelabs.github.io/img/l3-gateway-routing.jpg"></a></p>
<p>This is about populating the default gateway router with routes for the subnet as shown in the diagram.<br>
Routes to 10.1.1.0/24 and 10.1.2.0/24 are configured to be through node1 and node2 respectively. </p>
<h2>overlay solutions</h2>
<p>Generally not used.</p>
<p>Note: See also (Oct. 2018): "<a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview" rel="nofollow noreferrer">Google Kubernetes Engine networking</a>".</p>
|
<p>I'm pretty new to Kubernetes and trying to figure it out. I haven't been able to google this answer though, so I'm stumped. Can Kubernetes mount two secrets to the same path? Say, given the following deployment:</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx-deployment
version: v1
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
version: v1
spec:
volumes:
- name: nginxlocal
hostPath:
path: /srv/docker/nginx
- name: requestcert
secret:
secretName: requests-certificate
- name: mysitecert
secret:
secretName: mysitecert
containers:
- name: nginx
image: nginx:mainline-alpine # Use 1.15.0
volumeMounts:
- name: nginxlocal
subPath: config/nginx.conf
mountPath: /etc/nginx/nginx.conf
- name: requestcert
mountPath: /etc/nginx/ssl
- name: mysitecert
mountPath: /etc/nginx/ssl
- name: nginxlocal
subPath: logs
mountPath: /etc/nginx/logs
ports:
- containerPort: 443
</code></pre>
<p>would it be possible to mount both SSL certs to the same directory (/etc/nginx/ssl/*)?</p>
<p>If not, can storing the TLS cert+key as "Opaque" instead of kubernetes.io/tls type work? I tried to combine both certs+keys into one secret of tls type, but kubernetes expected them to be called tls.crt and tls.key, so I had to split it into two secret files. If they could be done as Opaque, I think I could remove the two secret values and just use the one Opaque entry. </p>
<p>Thanks!</p>
| <blockquote>
<p>would it be possible to mount both SSL certs to the same directory (/etc/nginx/ssl/*)?</p>
</blockquote>
<p>No, because (at least when using a docker runtime) it uses volume mounts, which behave exactly the same as <code>mount -t ext4 /dev/something /path/something</code> in that <code>/path/something</code> will be last-one-wins.</p>
<p>However, you have an only mildly smelly work-around available to you: mount secret <code>requestcert</code> as <code>/etc/nginx/.reqcert</code> (or similar), mount secret <code>mysitecert</code> as <code>/etc/nginx/.sitecert</code>, then supersede the <code>entrypoint</code> of the image and copy the files into place before delegating down to the actual entrypoint:</p>
<pre><code>containers:
- name: nginx
image: etc etc
command:
- bash
- -c
- |
mkdir -p /etc/nginx/ssl
cp /etc/nginx/.*cert/* /etc/nginx/ssl/
# or whatever initialization you'd like
# then whatever the entrypoint is for your image
/usr/local/sbin/nginx -g "daemon off;"
</code></pre>
<p>Or, if that doesn't seem like a good idea, you can leverage a disposable, Pod-specific directory in combination with <code>initContainers:</code>:</p>
<pre><code>spec:
volumes:
# all the rest of them, as you had them
- name: temp-config
emptyDir: {}
initContainers:
- name: setup-config
image: busybox # or whatever
command:
- sh
- -c
- |
# "stage" all the config files, including certs
# into /nginx-config which will evaporate on Pod destruction
volumeMounts:
- name: temp-config
mountPath: /nginx-config
# and the rest
containers:
- name: nginx
# ...
volumeMounts:
- name: temp-config
mountPath: /etc/nginx
</code></pre>
<p>They differ in complexity based on whether you want to deal with keeping track of the upstream image's entrypoint command, versus leaving the upstream image untouched but expending a lot more initialization energy.</p>
|
<p>I'm trying to add my Azure AKS Kubernetes cluster to my GitLab CI/CD Kubernetes integration.</p>
<p>I can execute <code>kubectl</code> commands on the cluster from my pc, after I ran this command:</p>
<p><code>az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name></code></p>
<p>It created a <code>.kube/config</code> file with a content like this:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: <some long base64 string here>
server: https://<resource-group-name+some-hexadecimal-chars>.hcp.westeurope.azmk8s.io:443
name: <kubernetes-cluster-name>
contexts:
- context:
cluster: <kubernetes-cluster-name>
user: clusterUser_<resource-group-name>_<kubernetes-cluster-name>
name: <kubernetes-cluster-name>
current-context: <kubernetes-cluster-name>
kind: Config
preferences: {}
users:
- name: clusterUser_<resource-group-name>_<kubernetes-cluster-name>
user:
client-certificate-data: <some long base64 string here>
client-key-data: <some long base64 string here>
token: <some secret string of hexadecimal chars here>
</code></pre>
<p>In GitLab form, I have to input these fields:</p>
<ol>
<li>Kubernetes cluster name</li>
<li>API URL</li>
<li>CA Certificate - Certificate Authority bundle (PEM format)</li>
<li>Token</li>
<li>Project namespace (optional, unique)</li>
</ol>
<p>I tried these values:</p>
<ol>
<li>I put my <code><kubernetes-cluster-name></code> to match the name of the cluster on azure and the cluster name on the <code>.kube/config</code> file.</li>
<li>I put the url <code>https://<resource-group-name+some-hexadecimal-chars>.hcp.westeurope.azmk8s.io:443</code> copied from the <code>.kube/config</code> file.</li>
<li>I tried first the <code>certificate-authority-data</code> from the <code>.kube/config</code> file, but didn't work and I already tried all three base64 strings from the <code>.kube/config</code> file, none worked.</li>
<li>I put the token from the <code>.kube/config</code> file.</li>
<li>Leave this empty, as it is optional.</li>
</ol>
<p>In GitLab, When I try to hit the button <code>Install</code> to install the Helm Tiller, I got this error:</p>
<pre><code>Something went wrong while installing Helm Tiller
Can't start installation process. nested asn1 error
</code></pre>
<p>And sometimes I get this error instead:</p>
<pre><code>Kubernetes error: SSL_connect returned=1 errno=0 state=error: certificate verify failed
</code></pre>
<p>I've been trying to make this work since yesterday; I have googled a lot and couldn't find anything.</p>
<p>I think the problem is with this 3rd field, the CA Certificate, maybe there are some other way to get this content from the command line <code>az</code> or <code>kubectl</code>.</p>
<p>Is there someone here who has already got this Kubernetes integration from GitLab to Azure AKS working?</p>
| <p>I found out later that the base64 string in the <code>certificate-authority-data</code> field of the <code>.kube/config</code> file, whose content I was copying into the <code>CA Certificate</code> field of GitLab's "Add Kubernetes cluster" form, is in PEM format, but base64 encoded.</p>
<p>The PEM format is already a base64-encoded representation of the certificate bits, but it has some line breaks in the middle. This whole content is base64 encoded again before it goes into the <code>.kube/config</code>, so it is turned into one big single-line base64 string.</p>
<p>I just had to base64 decode this big single-line string (I used the javascript <code>atob("....")</code> in the Chrome's Console window), what gave me something like this:</p>
<pre><code>-----BEGIN CERTIFICATE-----
MIIEyDCCArCgAwIBAgIRAOL3N8oMIwWIxcFTZhTkfgMwDQYJKoZIhvcNAQELBQAw
...
...
...
5gP7yoL1peZ+AWjCgcUVZYiItqrBLpWYDgY9g8btYDUIiWlqkmC0+kBaPfwCtckx
cUp3vlwRITrv0mzrxiQjTLTUpEy7EcD+U6IecA==
-----END CERTIFICATE-----
</code></pre>
<p>Then I just copied this content into the GitLab "CA Certificate" field and it worked.</p>
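<p>The same decoding can also be done from the command line instead of the browser console; the content is just one <code>base64 --decode</code> away (shown here with the first line of a certificate standing in for the long single-line string from <code>.kube/config</code>):</p>

```shell
echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t" | base64 --decode
# prints: -----BEGIN CERTIFICATE-----
```

<p>If <code>kubectl</code> is already configured for the cluster, <code>kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode</code> prints the full PEM certificate in one go.</p>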
|
<p>I have a StatefulSet with pods <code>server-0</code>, <code>server-1</code>, etc. I want to expose them directly to the internet with URLs like server-0.mydomain.com or like mydomain.com/server-0.</p>
<p>I want to be able to scale the StatefulSet and automatically be able to access the new pods from the internet. For example, if I scale it up to include a <code>server-2</code>, I want mydomain.com/server-2 to route requests to the new pod when it's ready. I don't want to have to also scale some other resource or create another Service to achieve that effect.</p>
<p>I could achieve this with a custom proxy service that just checks the request path and forwards to the correct pod internally, but this seems error-prone and wasteful.</p>
<p>Is there a way to cause an Ingress to automatically route to different pods within a StatefulSet, or some other built-in technique that would avoid custom code?</p>
| <p>I don't think you can do it. Being part of the same StatefulSet, all pods up to pod-x are targeted by a service. As you can't define which pod is going to get a request, you can't force "pod-1.yourapp.com" or "yourapp.com/pod-1" to be sent to pod-1. It will be sent to the service, and the service might send it to pod-4.</p>
<p>Even if you could, you would need to dynamically update your ingress rules, which can easily cause minutes of downtime.</p>
<p>With the custom proxy, I see it as impossible too. Note that it would basically need to replace the service behind the pods. If your ingress controller knows that it needs to deliver a packet to a service, now you have to force it to deliver to your proxy instead. But how?</p>
|
<blockquote>
<p>Our previous GitLab based CI/CD utilized an Authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service, if you use something similar for your Kubernetes based deployment this Question is for you.</p>
</blockquote>
<h2>More Background</h2>
<p>We run a production site / app (Ghost blog based) on an Azure AKS Cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with Kubectl.</p>
<p>That being said we previously used Docker Cloud for our orchestration and fully integrated re-deploying our production / staging services using GitLab-Ci.</p>
<p>That GitLab-Ci integration is the goal, and the 'Why' behind this question.</p>
<h1>My Question</h1>
<p>Since we previously used Docker Cloud (doh, should have gone K8s from the start), how should we handle the fact that GitLab-Ci was able to make use of Secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our Nodes (i.e. re-deploy with new containers etc.)?</p>
<p>While I believe we can build a container (to be used by our GitLab-Ci runner) that contains Kubectl, and the Azure CLI, I know that Kubernetes also has a similar (to docker cloud) Rest API that can be found here (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster</a>) — specifically the section that talks about connecting WITHOUT Kubectl appears to be relevant (as does the piece about the HTTP REST API).</p>
<p>My Question to anyone who is connecting to an Azure (or potentially other managed Kubernetes service):</p>
<blockquote>
<p>How does your Ci/CD server authenticate with your Kubernetes service provider's Management Server, and then how do you currently trigger an update / redeployment of an updated container / service?</p>
</blockquote>
<p>If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!</p>
<h1>Kubernetes Resources I am Reviewing</h1>
<ol>
<li><a href="https://stackoverflow.com/questions/37426096/how-should-i-manage-deployments-with-kubernetes?rq=1">How should I manage deployments with kubernetes</a></li>
<li><a href="https://stackoverflow.com/questions/34773609/kubernetes-deployments">Kubernetes Deployments</a></li>
</ol>
<p>Will update as I work through the process.</p>
| <h2>Creating the integration</h2>
<p>I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this <a href="https://stackoverflow.com/questions/50775758/how-to-add-an-azure-aks-kubernetes-cluster-self-signed-ca-to-gitlab-ci-cd-kubern/50778120">question</a> because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.</p>
<p>How to integrate them:</p>
<ol>
<li>Inside GitLab, go to "Operations" > "Kubernetes" menu.</li>
<li>Click on the "Add Kubernetes cluster" button on the top of the page</li>
<li>You will have to fill some form fields, to get the content that you have to put into these fields, connect to your Azure account from the CLI (you need Azure CLI installed on your PC) using <code>az login</code> command, and then execute this other command to get the Kubernetes cluster credentials: <code>az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name></code></li>
<li>The previous command will create a <code>~/.kube/config</code> file. Open it: the content for the fields that you have to fill in the GitLab "Add Kubernetes cluster" form is all inside this <code>.kube/config</code> file</li>
</ol>
<p>These are the fields:</p>
<ol>
<li><strong>Kubernetes cluster name:</strong> It's the name of your cluster on Azure, it's in the <code>.kube/config</code> file too. </li>
<li><strong>API URL:</strong> It's the URL in the field <code>server</code> of the <code>.kube/config</code> file.</li>
<li><strong>CA Certificate:</strong> It's the field <code>certificate-authority-data</code> of the <code>.kube/config</code> file, but you will have to base64 decode it.</li>
</ol>
<p>After you decode it, it must be something like this:</p>
<pre><code>-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
</code></pre>
<ol start="4">
<li><strong>Token:</strong> It's the string of hexadecimal chars in the field <code>token</code> of the <code>.kube/config</code> file (it might also need to be base 64 decoded?). You need to use a token belonging to an account with <strong>cluster-admin</strong> privileges, so GitLab can use it for authenticating and installing stuff on the cluster. The easiest way to achieve this is by creating a new account for GitLab: create a YAML file with the service account definition (an example can be seen <a href="https://docs.gitlab.com/ee/user/project/clusters/#adding-an-existing-kubernetes-cluster" rel="noreferrer">here</a> under <em>Create a gitlab service account in the default namespace</em>) and apply it to your cluster by means of <code>kubectl apply -f serviceaccount.yml</code>.</li>
<li><strong>Project namespace (optional, unique):</strong> I leave it empty, don't know yet for what or where this namespace can be used.</li>
</ol>
<p>Click "Save" and it's done. Your GitLab project should now be connected to your Kubernetes cluster.</p>
<h2>Deploy</h2>
<p>In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the <code>kubectl</code> command, here is a list of all the variables available:</p>
<p><a href="https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables" rel="noreferrer">https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables</a></p>
<p>To have these variables injected in your deploy job, there are some conditions:</p>
<ul>
<li>You must have added correctly the Kubernetes cluster into your GitLab project, menu "Operations" > "Kubernetes" and these steps that I described above</li>
<li>Your job must be a "deployment job", in GitLab CI, to be considered a deployment job, your job definition (in your <code>.gitlab-ci.yml</code>) must have an <code>environment</code> key (take a look at the line 31 in this <a href="https://gist.github.com/lmcarreiro/75186bd1d4e387c13c63d0a7087e9999#file-gitlab-ci-yml-L31" rel="noreferrer">example</a>), and the environment name must match the name you used in menu "Operations" > "Environments".</li>
</ul>
<p>Here is an example of a <a href="https://gist.github.com/lmcarreiro/75186bd1d4e387c13c63d0a7087e9999" rel="noreferrer">.gitlab-ci.yml</a> with three stages:</p>
<ul>
<li><strong>Build:</strong> it builds a docker image and push it to gitlab private registry</li>
<li><strong>Test:</strong> it doesn't do anything yet; it's just an <code>exit 0</code> placeholder to be changed later</li>
<li><strong>Deploy:</strong> it downloads a stable version of <code>kubectl</code>, copies the <code>.kube/config</code> file to be able to run <code>kubectl</code> commands in the cluster, and executes a <code>kubectl cluster-info</code> to make sure it is working. In my project I haven't finished writing the deploy script that really executes a deploy, but this <code>kubectl cluster-info</code> command executes fine.</li>
</ul>
<p><strong>Tip:</strong> to take a look at all the environment variables and their values (Jenkins has a page with this view, GitLab CI doesn't) you can execute the command <code>env</code> in the script of your deploy stage. It helps a lot to debug a job.</p>
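<p>For reference, the deploy job described above boils down to roughly the following fragment. This is a sketch based on the description, not the exact gist; the image, kubectl version, environment name and the final deploy command are placeholder assumptions:</p>
<pre><code>deploy:
  stage: deploy
  image: alpine:3.7
  environment:
    name: production   # must match the environment name under "Operations" > "Environments"
  script:
    # download a stable kubectl
    - apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
    - chmod +x kubectl
    - mv kubectl /usr/local/bin/
    # KUBECONFIG is injected by GitLab because this is a deployment job
    - kubectl cluster-info
    # a real deploy would continue with something like:
    # - kubectl set image deployment/my-app my-app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
</code></pre>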
|
<p>I have just started a new Kubernetes 1.8.0 environment using minikube (0.27) on Windows 10.</p>
<p>I followed these steps but it didn't work:
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/</a></p>
<p>When I list pods this is the result:</p>
<pre><code>C:\WINDOWS\system32>kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-minikube 1/1 Running 0 23m
kube-system heapster-69b5d4974d-s9vrf 1/1 Running 0 5m
kube-system kube-addon-manager-minikube 1/1 Running 0 23m
kube-system kube-apiserver-minikube 1/1 Running 0 23m
kube-system kube-controller-manager-minikube 1/1 Running 0 23m
kube-system kube-dns-545bc4bfd4-xkt7l 3/3 Running 3 1h
kube-system kube-proxy-7jnk6 1/1 Running 0 23m
kube-system kube-scheduler-minikube 1/1 Running 0 23m
kube-system kubernetes-dashboard-5569448c6d-8zqnc 1/1 Running 2 52m
kube-system kubernetes-dashboard-869db7f6b4-ddlmq 0/1 CrashLoopBackOff 19 51m
kube-system monitoring-influxdb-78d4c6f5b6-b66m9 1/1 Running 0 4m
kube-system storage-provisioner 1/1 Running 2 1h
</code></pre>
<p>As you can see, I now have 2 kubernetes-dashboard pods: one of them is running and the other one is in CrashLoopBackOff.</p>
<p>When I try to run minikube dashboard this is the result:</p>
<blockquote>
<p>"Waiting, endpoint for service is not ready yet..."</p>
</blockquote>
<p>I have tried to remove kubernetes-dashboard-869db7f6b4-ddlmq pod:</p>
<pre><code>kubectl delete pod kubernetes-dashboard-869db7f6b4-ddlmq
</code></pre>
<p>This is the result:</p>
<blockquote>
<p>"Error from server (NotFound): pods "kubernetes-dashboard-869db7f6b4-ddlmq" not found"</p>
</blockquote>
| <blockquote>
<p>"Error from server (NotFound): pods "kubernetes-dashboard-869db7f6b4-ddlmq" not found"</p>
</blockquote>
<p>You failed to delete the pod because you didn't specify its namespace (add <code>-n kube-system</code>). And there should be only 1 dashboard pod if no modification is applied. If <code>minikube dashboard</code> still fails after you delete the abnormal pod, more logs should be provided. </p>
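<p>Concretely, using the pod name from the question:</p>
<pre><code>kubectl delete pod kubernetes-dashboard-869db7f6b4-ddlmq -n kube-system
# then verify that only one dashboard pod remains:
kubectl get pods -n kube-system | grep dashboard
</code></pre>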
|
<p>A few weeks ago the Windows Server 2019 was announced as Preview with native Kubernetes Support.</p>
<p>Is there any documentation on how to activate or install Kubernetes?
I have already set up a virtual server (with the Desktop feature) on my local Hyper-V, but I cannot find <em>any</em> hint about testing the preview features of Kubernetes on Windows Server 2019.</p>
<p>Or do I misunderstand the current Preview notes and Kubernetes is only announced but not available yet?</p>
| <p>Kubernetes and its Windows CNI plugins are <a href="https://kubernetes.io/blog/2018/01/kubernetes-v19-beta-windows-support/" rel="nofollow noreferrer">in beta at the time of writing</a>, and insider build 1803 doesn't have any feature/role for it (neither does <code>dockeree</code>, though).</p>
<p>There are installation instructions <a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/getting-started-kubernetes-windows" rel="nofollow noreferrer">here</a>.</p>
<p>I've not found any information on what <em>precisely</em> is new in Server 2019 for Kubernetes...</p>
<p>It would be great to have PowerShell <code>Install-Module</code>/<code>Install-Package</code> support for installation, and Windows Admin Center plugins for administering a cluster.</p>
|
<p>I am starting to explore running docker containers with Kubernetes. I did the following:</p>
<ol>
<li>Docker run etcd</li>
<li>docker run master</li>
<li>docker run service proxy</li>
<li>kubectl run web --image=nginx</li>
</ol>
<p>To clean up the state, I first stopped all the containers and cleared the downloaded images. However, I still see pods running.</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-3476088249-w66jr 1/1 Running 0 16m
</code></pre>
<p>How can I remove this?</p>
| <p>To delete the pod:</p>
<pre><code>kubectl delete pods web-3476088249-w66jr
</code></pre>
<p>If this pod was started via some ReplicaSet or Deployment, or anything else that creates replicas, then find that resource and delete it first.</p>
<pre><code>kubectl get all
</code></pre>
<p>This will list all the resources that have been created in your k8s cluster. To get information with respect to resources created in your namespace <code>kubectl get all --namespace=<your_namespace></code></p>
<p>To get info about the resource that is controlling this pod, you can do</p>
<pre><code>kubectl describe pod web-3476088249-w66jr
</code></pre>
<p>There will be a field "Controlled By", or some owner field using which you can identify which resource created it.</p>
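<p>In this case, <code>kubectl run web --image=nginx</code> (on the Kubernetes version in question) creates a Deployment named <code>web</code>, which is what keeps recreating the pod; deleting the Deployment removes the pod for good:</p>
<pre><code>kubectl delete deployment web
# or delete everything the run command created in one shot, via its "run" label:
kubectl delete all -l run=web
</code></pre>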
|
<p>I want to clear the cache in all the pods in my Kubernetes namespace. I want to send one request to the endpoint, which will then send an HTTP call to all the pods in the namespace to clear the cache. Currently, I can hit only one pod using Kubernetes, and I do not have control over which pod gets hit.</p>
<p>Even though the load-balancer is set to RR, continuously hitting the pods (n times, where n is the total number of pods) doesn't help, as some other requests can creep in.</p>
<p>The same issue was discussed here, but I couldn't find a solution for the implementation:
<a href="https://github.com/kubernetes/kubernetes/issues/18755" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/18755</a></p>
<p>I'm trying to implement the clearing cache part using Hazelcast, wherein I will store all the cache and Hazelcast automatically takes care of the cache update.</p>
<p>If there is an alternative approach for this problem, or a way to configure kubernetes to hit all end-points for some specific requests, sharing here would be a great help.</p>
| <p>Provided you have kubectl in your pod and access to the api-server, you can get all endpoint addresses and pass them to curl:</p>
<pre><code>kubectl get endpoints <servicename> \
-o jsonpath="{.subsets[*].addresses[*].ip}" | xargs curl
</code></pre>
<p><strong>Alternative without kubectl in pod:</strong></p>
<p>The recommended way to access the API server from a pod is by using <code>kubectl proxy</code>: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod</a>. This would of course add at least the same overhead. Alternatively, you could directly call the REST API; you'd then have to provide the token manually.</p>
<pre><code>APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
TOKEN=$(kubectl describe secret $(kubectl get secrets \
| grep ^default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d " ")
</code></pre>
<p>if you provide the APISERVER and TOKEN variables, you don't need kubectl in your pod, this way you only need curl to access the api server and "jq" to parse the json output:</p>
<pre><code>curl $APISERVER/api/v1/namespaces/default/endpoints --silent \
--header "Authorization: Bearer $TOKEN" --insecure \
| jq -rM ".items[].subsets[].addresses[].ip" | xargs curl
</code></pre>
<p><strong>UPDATE (final version)</strong></p>
<p>APISERVER usually can be set to kubernetes.default.svc and the token should be available at /var/run/secrets/kubernetes.io/serviceaccount/token in the pod, so no need to provide anything manually:</p>
<pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token); \
curl https://kubernetes.default.svc/api/v1/namespaces/default/endpoints --silent \
--header "Authorization: Bearer $TOKEN" --insecure \
| jq -rM ".items[].subsets[].addresses[].ip" | xargs curl
</code></pre>
<p>jq is available here: <a href="https://stedolan.github.io/jq/download/" rel="noreferrer">https://stedolan.github.io/jq/download/</a> (< 4 MiB, but worth it for easily parsing JSON)</p>
|
<p>In kubernetes I can currently limit the CPU and Memory of the containers, but what about the hard disk size of the containers.</p>
<p>For example, how could I prevent someone from running a container on my k8s worker node that stores .jpg files internally, making the container grow and grow over time?</p>
<p>I'm not talking about Persistent Volumes and Persistent Volume Claims. I mean that if someone makes an error in the container and writes inside the container filesystem, I want to be able to control it.</p>
<p>Is there a way to limit the amount of disk used by the containers?</p>
<p>Thank you.</p>
| <p>There is some support for this; the tracking issues are <a href="https://github.com/kubernetes/features/issues/361" rel="nofollow noreferrer">#361</a>, <a href="https://github.com/kubernetes/features/issues/362" rel="nofollow noreferrer">#362</a> and <a href="https://github.com/kubernetes/features/issues/363" rel="nofollow noreferrer">#363</a>. You can define requests and/or limits on the resource called <code>ephemeral-storage</code>, like so (for a Pod/PodTemplate):</p>
<pre><code>spec:
containers:
- name: foo
resources:
requests:
ephemeral-storage: 50Mi
limits:
ephemeral-storage: 50Mi
</code></pre>
<p>The page <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="nofollow noreferrer">Reserve Compute Resources for System Daemons</a> has some additional information on this feature.</p>
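<p>As a quick (hypothetical) way to see the limit enforced: a pod that writes more into its writable layer than its <code>ephemeral-storage</code> limit should be evicted by the kubelet once local storage accounting notices the overage. The pod name and sizes below are illustrative:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-test   # illustrative name
spec:
  containers:
  - name: writer
    image: busybox
    # write ~100Mi into the container filesystem, exceeding the 50Mi limit
    command: ["sh", "-c", "dd if=/dev/zero of=/tmp/fill bs=1M count=100 && sleep 3600"]
    resources:
      limits:
        ephemeral-storage: 50Mi
</code></pre>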
|
| <p>I have deployed a Kubernetes HA cluster using the following config.yaml:</p>
<pre><code>etcd:
endpoints:
- "http://172.16.8.236:2379"
- "http://172.16.8.237:2379"
- "http://172.16.8.238:2379"
networking:
podSubnet: "192.168.0.0/16"
apiServerExtraArgs:
endpoint-reconciler-type: lease
</code></pre>
<p>When I check <code>kubectl get nodes</code>:</p>
<pre><code>NAME STATUS ROLES AGE VERSION
master1 Ready master 22m v1.10.4
master2 NotReady master 17m v1.10.4
master3 NotReady master 16m v1.10.4
</code></pre>
<p>If I check the pods, I can see too much are failing:</p>
<pre><code>[ikerlan@master1 ~]$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-etcd-5jftb 0/1 NodeLost 0 16m
calico-etcd-kl7hb 1/1 Running 0 16m
calico-etcd-z7sps 0/1 NodeLost 0 16m
calico-kube-controllers-79dccdc4cc-vt5t7 1/1 Running 0 16m
calico-node-dbjl2 2/2 Running 0 16m
calico-node-gkkth 0/2 NodeLost 0 16m
calico-node-rqzzl 0/2 NodeLost 0 16m
kube-apiserver-master1 1/1 Running 0 21m
kube-controller-manager-master1 1/1 Running 0 22m
kube-dns-86f4d74b45-rwchm 1/3 CrashLoopBackOff 17 22m
kube-proxy-226xd 1/1 Running 0 22m
kube-proxy-jr2jq 0/1 ContainerCreating 0 18m
kube-proxy-zmjdm 0/1 ContainerCreating 0 17m
kube-scheduler-master1 1/1 Running 0 21m
</code></pre>
<p>If I run <code>kubectl describe node master2</code>:</p>
<pre><code>[ikerlan@master1 ~]$ kubectl describe node master2
Name: master2
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=master2
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Mon, 11 Jun 2018 12:06:03 +0200
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk Unknown Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:56 +0200 NodeStatusUnknown Kubelet stopped posting node status.
MemoryPressure Unknown Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:56 +0200 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:56 +0200 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure False Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:00 +0200 KubeletHasSufficientPID kubelet has sufficient PID available
Ready Unknown Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:56 +0200 NodeStatusUnknown Kubelet stopped posting node status.
Addresses:
InternalIP: 172.16.8.237
Hostname: master2
Capacity:
cpu: 2
ephemeral-storage: 37300436Ki
</code></pre>
<p>Then if I check the pods, <code>kubectl describe pod -n kube-system calico-etcd-5jftb</code>:</p>
<pre><code>[ikerlan@master1 ~]$ kubectl describe pod -n kube-system calico-etcd-5jftb
Name: calico-etcd-5jftb
Namespace: kube-system
Node: master2/
Labels: controller-revision-hash=4283683065
k8s-app=calico-etcd
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Terminating (lasts 20h)
Termination Grace Period: 30s
Reason: NodeLost
Message: Node master2 which was running pod calico-etcd-5jftb is unresponsive
IP:
Controlled By: DaemonSet/calico-etcd
Containers:
calico-etcd:
Image: quay.io/coreos/etcd:v3.1.10
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/etcd
Args:
--name=calico
--data-dir=/var/etcd/calico-data
--advertise-client-urls=http://$CALICO_ETCD_IP:6666
--listen-client-urls=http://0.0.0.0:6666
--listen-peer-urls=http://0.0.0.0:6667
--auto-compaction-retention=1
Environment:
CALICO_ETCD_IP: (v1:status.podIP)
Mounts:
/var/etcd from var-etcd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tj6d7 (ro)
Volumes:
var-etcd:
Type: HostPath (bare host directory volume)
Path: /var/etcd
HostPathType:
default-token-tj6d7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tj6d7
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/master=
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events: <none>
</code></pre>
<p>I have tried to update etcd cluster, to version 3.3 and now I can see the next logs (and some more timeouts):</p>
<pre><code>2018-06-12 09:17:51.305960 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.authentication.k8s.io\" " took too long (190.475363ms) to execute
2018-06-12 09:18:06.788558 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " took too long (109.543763ms) to execute
2018-06-12 09:18:34.875823 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " took too long (136.649505ms) to execute
2018-06-12 09:18:41.634057 W | etcdserver: read-only range request "key:\"/registry/minions\" range_end:\"/registry/miniont\" count_only:true " took too long (106.00073ms) to execute
2018-06-12 09:18:42.345564 W | etcdserver: request "header:<ID:4449666326481959890 > lease_revoke:<ID:4449666326481959752 > " took too long (142.771179ms) to execute
</code></pre>
<p>I have checked: <code>kubectl get events</code></p>
<pre><code>22m 22m 1 master2.15375fdf087fc69f Node Normal Starting kube-proxy, master2 Starting kube-proxy.
22m 22m 1 master3.15375fe744055758 Node Normal Starting kubelet, master3 Starting kubelet.
22m 22m 5 master3.15375fe74d47afa2 Node Normal NodeHasSufficientDisk kubelet, master3 Node master3 status is now: NodeHasSufficientDisk
22m 22m 5 master3.15375fe74d47f80f Node Normal NodeHasSufficientMemory kubelet, master3 Node master3 status is now: NodeHasSufficientMemory
22m 22m 5 master3.15375fe74d48066e Node Normal NodeHasNoDiskPressure kubelet, master3 Node master3 status is now: NodeHasNoDiskPressure
22m 22m 5 master3.15375fe74d481368 Node Normal NodeHasSufficientPID kubelet, master3 Node master3 status is now: NodeHasSufficientPID
</code></pre>
| <p>I see multiple calico-etcd pods attempting to run; if you have used a calico.yaml that deploys etcd for you, that will not work in a multi-master environment.</p>
<p>That manifest is not intended for production deployment and will not work in a multi-master environment because the etcd it deploys is not configured to attempt to form a cluster.</p>
<p>You could still use that manifest but you would need to remove the etcd pods it deploys and set the etcd_endpoints to an etcd cluster you have deployed.</p>
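<p>For reference, using the etcd endpoints from the question's config.yaml, the relevant change would look roughly like this (the ConfigMap name and key follow the standard hosted calico.yaml layout; adjust to match your manifest):</p>
<pre><code># Point Calico at the externally deployed etcd cluster instead of the
# single-node etcd the hosted manifest would otherwise run.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  etcd_endpoints: "http://172.16.8.236:2379,http://172.16.8.237:2379,http://172.16.8.238:2379"
</code></pre>
<p>You would then remove the calico-etcd DaemonSet (and its Service) from the manifest before applying it.</p>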
|
<p>Kubernetes question: I can't seem to access my service from my local machine (the pods are running on minikube). When I run the command
<code>kubectl describe pod pod-id</code> I see these two lines, but I don't know if they have anything to do with not being able to access the deployment:</p>
<pre><code>Port: 8888/TCP
Host Port: 0/TCP
</code></pre>
<p>But when I run</p>
<pre><code>kubectl get services
</code></pre>
<p>I see no External-IP</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ci-master NodePort 10.103.13.96 <none> 8888:31388/TCP 1m
</code></pre>
<p>Here's my <code>service.yaml</code> file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ci-master
labels:
app: ci
tier: fullstack
role: master
spec:
type: NodePort
ports:
- port: 8888
targetPort: 8888
selector:
app: ci
role: master
tier: fullstack
</code></pre>
<p>And the part that I think should be relevant in my <code>deployment.yaml</code>:</p>
<pre><code>spec:
template:
spec:
containers:
- name: ci
image: rand/image-one:latest
ports:
- containerPort: 8888
</code></pre>
<p>When I try to access the service with <code>minikube service ci-master</code> I get this message indefinitely:
<code>Waiting, endpoint for service is not ready yet...</code></p>
<p>The service is serving; when I run <code>kubectl describe pod pod-id</code> I see this line at the bottom:</p>
<pre><code> Type Reason Age From Message
Normal Started 24m kubelet, minikube Started container
</code></pre>
<p>What am I missing?</p>
| <p>What I was missing was the Service Type, if I want something accessible from the external world, I had to pick <code>LoadBalancer</code>.</p>
<p>So this simple change fixed it: </p>
<pre><code>spec:
type: LoadBalancer
ports:
- port: 8888
targetPort: 8888
</code></pre>
<p>Now I finally have:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ci-master LoadBalancer 10.15.243.32 35.211.12.22 80:32463/TCP 1m
</code></pre>
|
<p>I upgraded minikube on macOS and now I cannot get it to start (on two different machines). I've been reading a bunch of forums and this seems to be a common problem but there are no consistent solutions, and there is no guidance on how to go looking for the root cause.</p>
<p>There is an error on the first download of the VM, using </p>
<pre><code>./minikube start --vm-driver=vmwarefusion
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
150.53 MB / 150.53 MB [============================================] 100.00% 0s
E0609 09:18:29.104704 891 start.go:159] Error starting host: Error creating host: Error executing step: Creating VM.
: exit status 1.
</code></pre>
<p>and then when running "minikube start" a second time it just sits at "Starting cluster components..." for ages (and ages) and then times out with:</p>
<pre><code>./minikube start
Starting local Kubernetes v1.10.0 cluster... Starting VM... Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Finished Downloading kubelet v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0609 09:45:32.715278 1030 start.go:281] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
</code></pre>
<p>It is all a bit of a black box really and I'd like to work out how to troubleshoot it. I cannot find any useful logs at all.</p>
<p>I am not after someone to solve it for me - better to learn how to fish...</p>
<p>What information is available to help troubleshoot minikube?
What approach would people suggest to diagnose this?</p>
<p>
Here is an update to the question after increasing the log levels as suggested below:
Thanks @MatthewLDaniel and @d0bry. I increased the debug level and narrowed the problem down to not being able to establish an SSH session with the VM. It appears that the VM's IP address is not being returned properly from VMware. The log cycles over this until it ultimately fails:</p>
<pre><code>Waiting for VM to come online...
executing: /Applications/VMware Fusion.app/Contents/Library/vmrun list
MAC address in VMX: 00:0c:29:53:88:5d
Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet1/dhcpd.conf
Following IPs found map[00:50:56:c0:00:01:172.16.158.1]
Trying to find IP address in configuration file: /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf
Following IPs found map[00:50:56:c0:00:08:192.168.108.1]
Trying to find IP address in leases file: /var/db/vmware/vmnet-dhcpd-vmnet1.leases
IP found in DHCP lease table: 172.16.158.138
Got an ip: 172.16.158.138
SSH Daemon not responding yet: dial tcp 172.16.158.138:22: i/o timeout
</code></pre>
<p>But the VM's IP address is actually 192.168.108.147. I can ping it and ssh into it using ssh docker@192.168.108.147 and password tcuser from my Mac.</p>
| <p>It is hard to determine from this alone what caused <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="noreferrer">minikube</a> not to run properly on this VM engine. You can dig deeper into the logs by increasing the verbosity on the command line:</p>
<pre><code>minikube -v=9 start
</code></pre>
<p>I prefer to rebuild all minikube environment:</p>
<pre><code>minikube stop
minikube delete
rm -fr $HOME/.minikube
minikube start
</code></pre>
|
<p>On a Mac, Docker has a built in kubernetes mode that helps you set up a dev cluster in 1 click. Is there a way to do with with Docker-CE on my linux laptop? If not, what is the recommended approach (seems weird that this is actually easier on a mac or windows lol)</p>
| <p>To get the current OS version, try <code>kubectl get nodes -o wide</code></p>
<p>If you're looking to run a local Kubernetes cluster, you should use Minikube: <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/minikube/</a></p>
|
<p>I am trying to pass variables to a Kubernetes YAML file from Ansible, but somehow the values are not being populated.</p>
<p>Here is my playbook:</p>
<pre><code>- hosts: master
gather_facts: no
vars:
logstash_port: 5044
tasks:
- name: Creating kubernetes pod
command: kubectl create -f logstash.yml
</code></pre>
<p><code>logstash.yml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata: logstash
spec:
containers:
- name: logstash
image: logstash
ports:
- containerPort: {{ logstash_port }}
</code></pre>
<p>Is there a better way to pass arguments to Kubernetes YAML file that is being invoked using command task?</p>
| <p>What you are trying to do has no chance of working. Kubernetes (the <code>kubectl</code> command) has <strong>nothing</strong> to do with Jinja2 syntax, which you try to use in the <code>logstash.yml</code>, and it has no access to Ansible objects (for multiple reasons).</p>
<hr>
<p>Instead, use <a href="https://docs.ansible.com/ansible/latest/modules/k8s_raw_module.html" rel="nofollow noreferrer"><code>k8s_raw</code> module</a> to manage Kubernetes objects.</p>
<p>You can include Kubernetes' manifest directly in the <code>definition</code> declaration and there you can use Jinja2 templates:</p>
<pre><code>- k8s_raw:
state: present
definition:
apiVersion: v1
kind: Pod
metadata: logstash
spec:
containers:
- name: logstash
image: logstash
ports:
- containerPort: "{{ logstash_port }}"
</code></pre>
<p>Or you can leave your <code>logstash.yml</code> as is, and feed it using the template lookup plugin:</p>
<pre><code>- k8s_raw:
state: present
definition: "{{ lookup('template', 'path/to/logstash.yml') | from_yaml }}"
</code></pre>
<p><sub>Notice if you used Jinja2 template directly in Ansible code, you must quote it. It's not necessary with the template plugin.</sub></p>
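<p>If <code>k8s_raw</code> is not available to you (it requires Ansible 2.5+ and the <code>openshift</code> Python client), a common workaround is to render the manifest with the <code>template</code> module first and only then feed the rendered file to <code>kubectl</code> — a sketch, assuming <code>logstash.yml</code> is renamed to a <code>logstash.yml.j2</code> template next to the playbook:</p>
<pre><code>- hosts: master
  gather_facts: no
  vars:
    logstash_port: 5044
  tasks:
    # Jinja2 substitution happens here, on the Ansible side
    - name: Render the Kubernetes manifest
      template:
        src: logstash.yml.j2
        dest: /tmp/logstash.yml

    - name: Creating kubernetes pod
      command: kubectl create -f /tmp/logstash.yml
</code></pre>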
|
<p>I am following a simple Kubernetes Minikube tutorial on Linux Mint 18.3, trying to create a volume from the following getting started tutorial:</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a></p>
<pre><code>minikube start --vm-driver=virtualbox
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
kubectl config use-context minikube
Switched to context "minikube"
kubectl create -f https://k8s.io/docs/tasks/run-application/mysql-pv.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
</code></pre>
<p>Why do I receive that error ? Searched thoroughly through the docs and github but could not find an answer.</p>
| <p>I could successfully try out the above example in minikube v0.23.0 with Kubernetes version 1.8. My guess is that it fails in v1.10 because the default authentication and authorization for the Kubelet's API are not set properly. You should set the legacy default values in the KubeletConfiguration to preserve the command-line API.</p>
<p>This is the Kubernetes source snippet that sets up those legacy defaults.
<a href="https://github.com/kubernetes/kubernetes/blob/de8cc313554b7f7d41509ca620f71439cd8729eb/cmd/kubelet/app/options/options.go#L281-L293" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/de8cc313554b7f7d41509ca620f71439cd8729eb/cmd/kubelet/app/options/options.go#L281-L293</a></p>
<p>Running the below command should set those values: </p>
<pre><code>minikube start --extra-config=kubelet.Authentication.Anonymous.Enabled=true \
  --extra-config=kubelet.Authentication.Webhook.Enabled=false \
  --extra-config=kubelet.Authorization.Mode=AlwaysAllow
</code></pre>
|
<p>I am unable to mount a configmap using multi-folder volume mount.</p>
<p>The structure of my <code>volume-mount.yaml</code> is below:</p>
<p><strong>DOESN'T WORK</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-web
spec:
containers:
- name: test-web
image: docker.io/hello-world
volumeMounts:
- name: config-volume
mountPath: /usr/test
volumes:
- name: config-volume
configMap:
name: test-config-map
items:
- key: application.properties
path: test-web
- key: test.xml
path: test-web/configs/test_config
restartPolicy: Never
</code></pre>
<p><strong>Error</strong>:</p>
<pre><code>MountVolume.SetUp failed for volume "config-volume" : open /var/lib/kubelet/pods/93768c34-6dc6-11e8-9546-025000000001/volumes/kubernetes.io~configmap/config-volume/..2018_06_11_22_27_08.476420050/test-web: is a directory`
</code></pre>
<p><strong>WORKS</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-web
spec:
containers:
- name: test-web
image: docker.io/hello-world
volumeMounts:
- name: config-volume
mountPath: /usr/test
volumes:
- name: config-volume
configMap:
name: test-config-map
items:
- key: test.xml
path: test-web/configs/test_config
restartPolicy: Never
</code></pre>
<p>Also the configmap is getting mounted as root user where as we want the volume mount to happen as a specific user.</p>
<p>Can you please let me know what i may be missing in my yaml file to fix the above 2 issues.</p>
| <p>In your case this should fit well:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-web
spec:
containers:
- name: test-web
image: docker.io/hello-world
volumeMounts:
- name: config-volume-1
mountPath: /usr/test1
- name: config-volume-2
mountPath: /usr/test2
volumes:
- name: config-volume-1
configMap:
name: test-config-map
items:
- key: application.properties
path: test-web
- name: config-volume-2
configMap:
name: test-config-map
items:
- key: test.xml
path: test-web/configs/test_config
restartPolicy: Never
</code></pre>
<p>Reference is at: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/</a></p>
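<p>For the second issue (the files being owned by root): that is usually handled with a pod-level <code>securityContext</code> rather than in the volume definition — a sketch, assuming uid/gid 1000. <code>fsGroup</code> changes the group ownership of the mounted ConfigMap files, and <code>defaultMode</code> controls their permission bits:</p>
<pre><code>spec:
  securityContext:
    runAsUser: 1000   # process runs as this uid
    fsGroup: 1000     # mounted volume files get this gid
  containers:
  - name: test-web
    image: docker.io/hello-world
    volumeMounts:
    - name: config-volume-1
      mountPath: /usr/test1
  volumes:
  - name: config-volume-1
    configMap:
      name: test-config-map
      defaultMode: 0640
</code></pre>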
|
<p>I am using helm charts to deploy pods with a "ConfigMap" managing the configurations.</p>
<p>I edit ConfigMap directly to make changes to configuration files and then delete pods using kubectl delete, for the new configuration to take effect.</p>
<p>Is there any easy way, using <strong>helm</strong>, to replace a running pod with the new configuration without executing the "<strong>kubectl delete</strong>" command?</p>
| <p>We have found that using <code>--recreate-pods</code> will immediately terminate all running pods of that deployment, meaning some downtime for your service. In other words, there will be no rolling update of your pods.</p>
<p>The issue to address this in Helm is still open: <a href="https://github.com/kubernetes/helm/issues/1702" rel="noreferrer">https://github.com/kubernetes/helm/issues/1702</a></p>
<p>Instead helm suggests adding a checksum of your configuration files to the deployment in an annotation. That way the deployment will have a different hash and essentially look 'new' to helm, causing it to update correctly.</p>
<blockquote>
<p>The sha256sum function can be used to ensure a deployment's annotation section is updated if another file changes:</p>
</blockquote>
<pre><code>kind: Deployment
spec:
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
</code></pre>
<p>From the docs here: <a href="https://helm.sh/docs/charts_tips_and_tricks/#automatically-roll-deployments-when-configmaps-or-secrets-change" rel="noreferrer">https://helm.sh/docs/charts_tips_and_tricks/#automatically-roll-deployments-when-configmaps-or-secrets-change</a></p>
|
<p>Reading the <a href="http://kubernetes.io/docs/user-guide/labels/#selecting-sets-of-nodes" rel="noreferrer">Kubernets documentation</a> it looks to be possible to select a certain range of pods based on labels. I want to select all the pods on one node but I don't want to label each pod on their corresponding node.</p>
<p>Am I missing something from the documentation or is it just not possible to select by node?</p>
<p>If I do:</p>
<pre class="lang-shell prettyprint-override"><code>kubectl get pods \
--output=wide
--namespace=$NS \
--server=$SERVER | head
#=>
NAME READY STATUS RESTARTS AGE NODE
</code></pre>
<p>Can any of these headers be used as selector? If yes, how to do it with <code>kubectl</code>? How to do it with the API?</p>
| <p>As mentioned in the accepted answer the PR is now merged and you can get pods by node as follows:</p>
<pre><code>kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node>
</code></pre>
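<p>For the API part of the question: field selectors map directly onto the <code>fieldSelector</code> query parameter of the REST API, so the equivalent raw request is (node name is a placeholder; <code>%3D</code> is the URL-encoded <code>=</code>):</p>
<pre><code># List pods on a given node via the REST API:
GET /api/v1/pods?fieldSelector=spec.nodeName%3D&lt;node&gt;

# The same request through kubectl's raw passthrough:
kubectl get --raw "/api/v1/pods?fieldSelector=spec.nodeName%3D&lt;node&gt;"
</code></pre>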
|
<p>I am trying alluxio 1.7.1 with docker 1.13.1, kubernetes 1.9.6, 1.10.1</p>
<p>I created the alluxio docker image as per the instructions on <a href="https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Docker.html" rel="nofollow noreferrer">https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Docker.html</a></p>
<p>Then I followed the <a href="https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Kubernetes.html" rel="nofollow noreferrer">https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Kubernetes.html</a> guide to run alluxio on kubernetes. I was able to bring up the alluxio master pod properly, but when I try to bring up alluxio worker I get the error that Address in use. I have not modified anything in the yamls which I downloaded from alluxio git. Only change I did was for alluxio docker image name and api version in yamls for k8s to match properly.</p>
<p>I checked ports being used in my k8s cluster setup, and even on the nodes also. There are no ports that alluxio wants being used by any other process, but I still get address in use error. I am unable to understand what I can do to debug further or what I should change to make this work. I don't have any other application running on my k8s cluster setup. I tried with single node k8s cluster setup and multi node k8s cluster setup also. I tried k8s version 1.9 and 1.10 also.</p>
<p>There is definitely some issue from alluxio worker side which I am unable to debug.</p>
<p>This is the log that I get from worker pod:</p>
<pre><code>[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# kubectl logs po/alluxio-worker-knqt4
Formatting Alluxio Worker @ vm-sushil-scrum1-08062018-alluxio-1
2018-06-08 10:09:55,723 INFO Configuration - Configuration file /opt/alluxio/conf/alluxio-site.properties loaded.
2018-06-08 10:09:55,845 INFO Format - Formatting worker data folder: /alluxioworker/
2018-06-08 10:09:55,845 INFO Format - Formatting Data path for tier 0:/dev/shm/alluxioworker
2018-06-08 10:09:55,856 INFO Format - Formatting complete
2018-06-08 10:09:56,357 INFO Configuration - Configuration file /opt/alluxio/conf/alluxio-site.properties loaded.
2018-06-08 10:09:56,549 INFO TieredIdentityFactory - Initialized tiered identity TieredIdentity(node=10.194.11.7, rack=null)
2018-06-08 10:09:56,866 INFO BlockWorkerFactory - Creating alluxio.worker.block.BlockWorker
2018-06-08 10:09:56,866 INFO FileSystemWorkerFactory - Creating alluxio.worker.file.FileSystemWorker
2018-06-08 10:09:56,942 WARN StorageTier - Failed to verify memory capacity
2018-06-08 10:09:57,082 INFO log - Logging initialized @1160ms
2018-06-08 10:09:57,509 INFO AlluxioWorkerProcess - Domain socket data server is enabled at /opt/domain.
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address in use
at alluxio.worker.AlluxioWorkerProcess.<init>(AlluxioWorkerProcess.java:164)
at alluxio.worker.WorkerProcess$Factory.create(WorkerProcess.java:45)
at alluxio.worker.WorkerProcess$Factory.create(WorkerProcess.java:37)
at alluxio.worker.AlluxioWorker.main(AlluxioWorker.java:56)
Caused by: java.lang.RuntimeException: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address in use
at alluxio.util.CommonUtils.createNewClassInstance(CommonUtils.java:224)
at alluxio.worker.DataServer$Factory.create(DataServer.java:45)
at alluxio.worker.AlluxioWorkerProcess.<init>(AlluxioWorkerProcess.java:159)
... 3 more
Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address in use
at io.netty.channel.unix.Errors.newIOException(Errors.java:117)
at io.netty.channel.unix.Socket.bind(Socket.java:259)
at io.netty.channel.epoll.EpollServerDomainSocketChannel.doBind(EpollServerDomainSocketChannel.java:75)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:504)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1226)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:495)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:480)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:213)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:305)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at java.lang.Thread.run(Thread.java:748)
-----------------------
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# kubectl get all
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds/alluxio-worker 1 1 0 1 0 <none> 42m
ds/alluxio-worker 1 1 0 1 0 <none> 42m
NAME DESIRED CURRENT AGE
statefulsets/alluxio-master 1 1 44m
NAME READY STATUS RESTARTS AGE
po/alluxio-master-0 1/1 Running 0 44m
po/alluxio-worker-knqt4 0/1 Error 12 42m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/alluxio-master ClusterIP None <none> 19998/TCP,19999/TCP 44m
svc/kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 1h
---------------------
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# kubectl describe po/alluxio-worker-knqt4
Name: alluxio-worker-knqt4
Namespace: default
Node: vm-sushil-scrum1-08062018-alluxio-1/10.194.11.7
Start Time: Fri, 08 Jun 2018 10:09:05 +0000
Labels: app=alluxio
controller-revision-hash=3081903053
name=alluxio-worker
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 10.194.11.7
Controlled By: DaemonSet/alluxio-worker
Containers:
alluxio-worker:
Container ID: docker://40a1eff2cd4dff79d9189d7cb0c4826a6b6e4871fbac65221e7cdd341240e358
Image: alluxio:1.7.1
Image ID: docker://sha256:b080715bd53efc783ee5f54e7f1c451556f93e7608e60e05b4615d32702801af
Ports: 29998/TCP, 29999/TCP, 29996/TCP
Command:
/entrypoint.sh
Args:
worker
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 08 Jun 2018 11:01:37 +0000
Finished: Fri, 08 Jun 2018 11:02:02 +0000
Ready: False
Restart Count: 14
Limits:
cpu: 1
memory: 2G
Requests:
cpu: 500m
memory: 2G
Environment Variables from:
alluxio-config ConfigMap Optional: false
Environment:
ALLUXIO_WORKER_HOSTNAME: (v1:status.hostIP)
Mounts:
/dev/shm from alluxio-ramdisk (rw)
/opt/domain from alluxio-domain (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7xlz7 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
alluxio-ramdisk:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
alluxio-domain:
Type: HostPath (bare host directory volume)
Path: /tmp/domain
HostPathType: Directory
default-token-7xlz7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7xlz7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 56m kubelet, vm-sushil-scrum1-08062018-alluxio-1 MountVolume.SetUp succeeded for volume "alluxio-domain"
Normal SuccessfulMountVolume 56m kubelet, vm-sushil-scrum1-08062018-alluxio-1 MountVolume.SetUp succeeded for volume "alluxio-ramdisk"
Normal SuccessfulMountVolume 56m kubelet, vm-sushil-scrum1-08062018-alluxio-1 MountVolume.SetUp succeeded for volume "default-token-7xlz7"
Normal Pulled 53m (x5 over 56m) kubelet, vm-sushil-scrum1-08062018-alluxio-1 Container image "alluxio:1.7.1" already present on machine
Normal Created 53m (x5 over 56m) kubelet, vm-sushil-scrum1-08062018-alluxio-1 Created container
Normal Started 53m (x5 over 56m) kubelet, vm-sushil-scrum1-08062018-alluxio-1 Started container
Warning BackOff 1m (x222 over 55m) kubelet, vm-sushil-scrum1-08062018-alluxio-1 Back-off restarting failed container
</code></pre>
<hr>
<pre><code>[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :19999 | grep LISTEN
java 8949 root 29u IPv4 12518521 0t0 TCP *:dnp-sec (LISTEN)
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :19998 | grep LISTEN
java 8949 root 19u IPv4 12520458 0t0 TCP *:iec-104-sec (LISTEN)
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :29998 | grep LISTEN
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :29999 | grep LISTEN
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :29996 | grep LISTEN
</code></pre>
<p>The alluxio-worker container is always restarting and failing again and again for the same error.</p>
<p>Please guide me what I can do to solve this.</p>
<p>Thanks</p>
| <p>The problem was the short-circuit unix domain socket path. I was using whatever was present by default in the alluxio git repo. In the default integration/kubernetes/conf/alluxio.properties.template the address for ALLUXIO_WORKER_DATA_SERVER_DOMAIN_SOCKET_ADDRESS was not complete. This is properly explained in <a href="https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Docker.html" rel="nofollow noreferrer">https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Docker.html</a> for enabling short-circuit reads in alluxio worker containers using unix domain sockets.
Just because of the incomplete unix domain socket path, the alluxio worker was not able to come up in kubernetes when short-circuit reads were enabled.</p>
<p>When I corrected the path in integration/kubernetes/conf/alluxio.properties to ALLUXIO_WORKER_DATA_SERVER_DOMAIN_SOCKET_ADDRESS=/opt/domain<b>/d</b>,
things started working properly. Some tests are still failing, but at least the alluxio setup comes up properly. Now I will debug why those tests are failing.</p>
<p>I have submitted this fix in alluxio git for them to merge it in master branch.
<a href="https://github.com/Alluxio/alluxio/pull/7376" rel="nofollow noreferrer">https://github.com/Alluxio/alluxio/pull/7376</a></p>
|
<p>While exploring YAML definitions of Kubernetes templates, I stumbled across different definitions of sizes. First I thought it was about the apiVersions, but they are the same. So what is the difference, and which one is correct?</p>
<p><code>storage: 5G</code> and <code>storage: 5Gi</code></p>
<pre><code>volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
resources:
requests:
storage: 2Gi
</code></pre>
<p>see here in detail:
<a href="https://github.com/cvallance/mongo-k8s-sidecar/blob/master/example/StatefulSet/mongo-statefulset.yaml" rel="noreferrer">https://github.com/cvallance/mongo-k8s-sidecar/blob/master/example/StatefulSet/mongo-statefulset.yaml</a></p>
<p>and this one:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
resources:
requests:
storage: 5G
</code></pre>
<p>here in detail: <a href="https://github.com/openebs/openebs/blob/master/k8s/demo/mongodb/mongo-statefulset.yml" rel="noreferrer">https://github.com/openebs/openebs/blob/master/k8s/demo/mongodb/mongo-statefulset.yml</a></p>
| <p>From <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">Kubernetes source</a>:</p>
<blockquote>
<p>Limits and requests for memory are measured in bytes. You can express
memory as a plain integer or as a fixed-point integer using one of
these suffixes: E, P, T, G, M, K. <strong>You can also use the power-of-two
equivalents</strong>: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following
represent <strong>roughly</strong> the same value:</p>
<p>128974848, 129e6, 129M, 123Mi</p>
</blockquote>
<p>So those are the "bibyte" counterparts, like user2864740 commented.</p>
<p>A <a href="https://en.wikipedia.org/wiki/Kibibyte" rel="nofollow noreferrer">little info</a> on those orders of magnitude:</p>
<blockquote>
<p>The kibibyte was designed to replace the kilobyte in those computer
science contexts in which the term kilobyte is used to mean 1024
bytes. The interpretation of kilobyte to denote 1024 bytes,
<strong>conflicting with the SI definition of the prefix kilo (1000)</strong>, used to
be common.</p>
</blockquote>
<p>So, as you can see, 5G means 5 Gigabytes while 5Gi means 5 Gibibytes. They amount to:</p>
<ul>
<li>5 G = 5000000 KB / 5000 MB</li>
<li>5 Gi = 5368709.12 KB / 5368.71 MB</li>
</ul>
<p>Therefore, in terms of size, they are not the same.</p>
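<p>To see concretely how far apart the two notations drift, the suffix arithmetic can be reproduced in a few lines. A minimal sketch, assuming nothing beyond the suffix table quoted above (the helper is hypothetical, not Kubernetes code):</p>

```python
# Illustrative helper (not Kubernetes source code) showing how the decimal (SI)
# and binary (IEC) suffixes differ. Kubernetes itself spells kilo as a
# lowercase "k"; both spellings are accepted here for convenience.
IEC = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40, "Pi": 2**50, "Ei": 2**60}
SI = {"k": 10**3, "K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12, "P": 10**15, "E": 10**18}

def to_bytes(quantity: str) -> int:
    """Convert a quantity string such as '5G' or '5Gi' to a byte count."""
    # Two-letter IEC suffixes are checked first so '5Gi' is not misread as '5G'.
    for suffix, factor in {**IEC, **SI}.items():
        if quantity.endswith(suffix):
            return int(float(quantity[: -len(suffix)]) * factor)
    return int(float(quantity))  # plain or scientific notation, e.g. '129e6'

print(to_bytes("5G"))                    # 5000000000
print(to_bytes("5Gi"))                   # 5368709120
print(to_bytes("5Gi") - to_bytes("5G"))  # 368709120, i.e. ~369 MB more
```

<p>That final ~369 MB is the slack you silently gain (or lose) by picking one suffix over the other.</p>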
<p>But don't worry if you don't understand the differences at first read. Even Windows gets it wrong!</p>
<p><a href="https://i.stack.imgur.com/rhZvM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rhZvM.png" alt="enter image description here" /></a></p>
|
<p>I have an application with some endpoints that are quite CPU intensive. Because of that I have configured a Horizontal Pod Autoscaler like this:</p>
<pre><code>spec:
maxReplicas: 10
minReplicas: 1
scaleTargetRef:
apiVersion: extensions/v1beta1
kind: DeploymentConfig
name: some-app
targetCPUUtilizationPercentage: 30
</code></pre>
<p>The point is: suppose a request keeps a pod working at 100% CPU for 5 minutes. It takes two minutes until OpenShift/Kubernetes schedules new pods.</p>
<p>Is there a way to speed up this process? It forces us to be almost unresponsive for two minutes.</p>
<p>The same thing happens to downscale, having to wait for two minutes until it destroys the unnecessary pods.</p>
<p>Ideally there should be some config option to set this up.</p>
| <p>For OpenShift, these delays are controlled by controller arguments. Modify <code>/etc/origin/master/master-config.yaml</code> and lower the delay values, for example:</p>
<pre><code>kubernetesMasterConfig:
  controllerArguments:
    horizontal-pod-autoscaler-downscale-delay:
    - 2m0s
    horizontal-pod-autoscaler-upscale-delay:
    - 2m0s
</code></pre>
<p>and restart the OpenShift master. These flags control how long the autoscaler waits between scaling operations; lowering them makes it react faster, at the cost of more scaling churn.</p>
|
<p>I have deployed a kubernetes HA cluster using the next config.yaml:</p>
<pre><code>etcd:
endpoints:
- "http://172.16.8.236:2379"
- "http://172.16.8.237:2379"
- "http://172.16.8.238:2379"
networking:
podSubnet: "192.168.0.0/16"
apiServerExtraArgs:
endpoint-reconciler-type: lease
</code></pre>
<p>When I check <code>kubectl get nodes</code>:</p>
<pre><code>NAME STATUS ROLES AGE VERSION
master1 Ready master 22m v1.10.4
master2 NotReady master 17m v1.10.4
master3 NotReady master 16m v1.10.4
</code></pre>
<p>If I check the pods, I can see too much are failing:</p>
<pre><code>[ikerlan@master1 ~]$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-etcd-5jftb 0/1 NodeLost 0 16m
calico-etcd-kl7hb 1/1 Running 0 16m
calico-etcd-z7sps 0/1 NodeLost 0 16m
calico-kube-controllers-79dccdc4cc-vt5t7 1/1 Running 0 16m
calico-node-dbjl2 2/2 Running 0 16m
calico-node-gkkth 0/2 NodeLost 0 16m
calico-node-rqzzl 0/2 NodeLost 0 16m
kube-apiserver-master1 1/1 Running 0 21m
kube-controller-manager-master1 1/1 Running 0 22m
kube-dns-86f4d74b45-rwchm 1/3 CrashLoopBackOff 17 22m
kube-proxy-226xd 1/1 Running 0 22m
kube-proxy-jr2jq 0/1 ContainerCreating 0 18m
kube-proxy-zmjdm 0/1 ContainerCreating 0 17m
kube-scheduler-master1 1/1 Running 0 21m
</code></pre>
<p>If I run <code>kubectl describe node master2</code>:</p>
<pre><code>[ikerlan@master1 ~]$ kubectl describe node master2
Name: master2
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=master2
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Mon, 11 Jun 2018 12:06:03 +0200
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk Unknown Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:56 +0200 NodeStatusUnknown Kubelet stopped posting node status.
MemoryPressure Unknown Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:56 +0200 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:56 +0200 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure False Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:00 +0200 KubeletHasSufficientPID kubelet has sufficient PID available
Ready Unknown Mon, 11 Jun 2018 12:06:15 +0200 Mon, 11 Jun 2018 12:06:56 +0200 NodeStatusUnknown Kubelet stopped posting node status.
Addresses:
InternalIP: 172.16.8.237
Hostname: master2
Capacity:
cpu: 2
ephemeral-storage: 37300436Ki
</code></pre>
<p>Then if I check the pods, <code>kubectl describe pod -n kube-system calico-etcd-5jftb</code>:</p>
<pre><code>[ikerlan@master1 ~]$ kubectl describe pod -n kube-system calico-etcd-5jftb
Name: calico-etcd-5jftb
Namespace: kube-system
Node: master2/
Labels: controller-revision-hash=4283683065
k8s-app=calico-etcd
pod-template-generation=1
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Terminating (lasts 20h)
Termination Grace Period: 30s
Reason: NodeLost
Message: Node master2 which was running pod calico-etcd-5jftb is unresponsive
IP:
Controlled By: DaemonSet/calico-etcd
Containers:
calico-etcd:
Image: quay.io/coreos/etcd:v3.1.10
Port: <none>
Host Port: <none>
Command:
/usr/local/bin/etcd
Args:
--name=calico
--data-dir=/var/etcd/calico-data
--advertise-client-urls=http://$CALICO_ETCD_IP:6666
--listen-client-urls=http://0.0.0.0:6666
--listen-peer-urls=http://0.0.0.0:6667
--auto-compaction-retention=1
Environment:
CALICO_ETCD_IP: (v1:status.podIP)
Mounts:
/var/etcd from var-etcd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tj6d7 (ro)
Volumes:
var-etcd:
Type: HostPath (bare host directory volume)
Path: /var/etcd
HostPathType:
default-token-tj6d7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tj6d7
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/master=
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events: <none>
</code></pre>
<p>I have tried updating the etcd cluster to version 3.3, and now I can see the following logs (and some more timeouts):</p>
<pre><code>2018-06-12 09:17:51.305960 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.authentication.k8s.io\" " took too long (190.475363ms) to execute
2018-06-12 09:18:06.788558 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " took too long (109.543763ms) to execute
2018-06-12 09:18:34.875823 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " took too long (136.649505ms) to execute
2018-06-12 09:18:41.634057 W | etcdserver: read-only range request "key:\"/registry/minions\" range_end:\"/registry/miniont\" count_only:true " took too long (106.00073ms) to execute
2018-06-12 09:18:42.345564 W | etcdserver: request "header:<ID:4449666326481959890 > lease_revoke:<ID:4449666326481959752 > " took too long (142.771179ms) to execute
</code></pre>
<p>I have checked: <code>kubectl get events</code></p>
<pre><code>22m 22m 1 master2.15375fdf087fc69f Node Normal Starting kube-proxy, master2 Starting kube-proxy.
22m 22m 1 master3.15375fe744055758 Node Normal Starting kubelet, master3 Starting kubelet.
22m 22m 5 master3.15375fe74d47afa2 Node Normal NodeHasSufficientDisk kubelet, master3 Node master3 status is now: NodeHasSufficientDisk
22m 22m 5 master3.15375fe74d47f80f Node Normal NodeHasSufficientMemory kubelet, master3 Node master3 status is now: NodeHasSufficientMemory
22m 22m 5 master3.15375fe74d48066e Node Normal NodeHasNoDiskPressure kubelet, master3 Node master3 status is now: NodeHasNoDiskPressure
22m 22m 5 master3.15375fe74d481368 Node Normal NodeHasSufficientPID kubelet, master3 Node master3 status is now: NodeHasSufficientPID
</code></pre>
| <p>I have solved it:</p>
<ol>
<li><p>Adding all the masters IPs and LB IP to the apiServerCertSANs</p></li>
<li><p>Copying the kubernetes certificates from the first master to the other masters.</p></li>
</ol>
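<p>For anyone hitting the same problem: with kubeadm, the extra SANs can be declared in the kubeadm config file. A sketch, assuming the masters are the hosts from the etcd endpoints above and that <code>LOAD_BALANCER_IP</code> is a placeholder for your load balancer's address:</p>

```yaml
# Addition to the kubeadm config file (kubeadm v1.10 MasterConfiguration)
apiServerCertSANs:
- "172.16.8.236"
- "172.16.8.237"
- "172.16.8.238"
- "LOAD_BALANCER_IP"
```

<p>After regenerating the certificates on the first master, the shared certificates (typically under <code>/etc/kubernetes/pki</code>) are what gets copied to the other masters in step 2.</p>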
|
<p>I am trying to run apache ignite cluster using Google Kubernetes Engine.</p>
<p>After following the tutorial here are some <strong>yaml</strong> files.</p>
<p>First I create a service -
<strong>ignite-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
# Name of Ignite Service used by Kubernetes IP finder.
# The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName.
name: ignite
namespace: default
spec:
clusterIP: None # custom value.
ports:
- port: 9042 # custom value.
selector:
# Must be equal to one of the labels set in Ignite pods'
# deployement configuration.
app: ignite
</code></pre>
<p><strong><code>kubectl create -f ignite-service.yaml</code></strong></p>
<p>Second, I create a deployment for my ignite nodes <strong>ignite-deployment.yaml</strong></p>
<p><code># An example of a Kubernetes configuration for Ignite pods deployment.</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Custom Ignite cluster's name.
name: ignite-cluster
spec:
# A number of Ignite pods to be started by Kubernetes initially.
replicas: 2
template:
metadata:
labels:
app: ignite
spec:
containers:
# Custom Ignite pod name.
- name: ignite-node
image: apacheignite/ignite:2.4.0
env:
- name: OPTION_LIBS
value: ignite-kubernetes
- name: CONFIG_URI
value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
ports:
# Ports to open.
# Might be optional depending on your Kubernetes environment.
- containerPort: 11211 # REST port number.
- containerPort: 47100 # communication SPI port number.
- containerPort: 47500 # discovery SPI port number.
- containerPort: 49112 # JMX port number.
- containerPort: 10800 # SQL port number.
</code></pre>
<p><strong><code>kubectl create -f ignite-deployment.yaml</code></strong></p>
<p>After that I check the status of my pods, which are all running. However, when I check the logs of any of my pods, I get the following error:</p>
<pre><code>java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite
</code></pre>
<p>Things I have tried:-</p>
<ol>
<li>I followed this <a href="https://stackoverflow.com/questions/49395481/how-to-setmasterurl-in-ignite-xml-config-for-kubernetes-ipfinder/49405879#49405879">link</a> to make my cluster work. But in step 4, when I run the daemon yaml file, I get the following error</li>
</ol>
<p><code>error: error validating "daemon.yaml": error validating data: ValidationError(DaemonSet.spec.template.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false</code></p>
<p>Can anybody point me to my mistake which I might be doing here?</p>
<p>Thanks.</p>
| <p>Step 1: <code>kubectl apply -f ignite-service.yaml</code> (with the file in your question)</p>
<p>Step 2: <code>kubectl apply -f ignite-rbac.yaml</code></p>
<p>ignite-rbac.yaml is like this:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: ignite
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ignite-endpoint-access
namespace: default
labels:
app: ignite
rules:
- apiGroups: [""]
resources: ["endpoints"]
resourceNames: ["ignite"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ignite-role-binding
namespace: default
labels:
app: ignite
subjects:
- kind: ServiceAccount
name: ignite
roleRef:
kind: Role
name: ignite-endpoint-access
apiGroup: rbac.authorization.k8s.io
</code></pre>
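<p>Optionally, you can verify that the Role grants what the Ignite IP finder needs by impersonating the service account (this assumes your own user has impersonation rights):</p>

```shell
kubectl auth can-i get endpoints/ignite -n default \
  --as=system:serviceaccount:default:ignite
```

<p>If this prints <code>yes</code>, the 403 from the Kubernetes IP finder should be gone once the pods use that service account.</p>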
<p>Step 3: <code>kubectl apply -f ignite-deployment.yaml</code> (very similar to your file, I've only added one line, <code>serviceAccount: ignite</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Custom Ignite cluster's name.
name: ignite-cluster
namespace: default
spec:
# A number of Ignite pods to be started by Kubernetes initially.
replicas: 2
template:
metadata:
labels:
app: ignite
spec:
serviceAccount: ignite ## Added line
containers:
# Custom Ignite pod name.
- name: ignite-node
image: apacheignite/ignite:2.4.0
env:
- name: OPTION_LIBS
value: ignite-kubernetes
- name: CONFIG_URI
value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube.xml
ports:
# Ports to open.
# Might be optional depending on your Kubernetes environment.
- containerPort: 11211 # REST port number.
- containerPort: 47100 # communication SPI port number.
- containerPort: 47500 # discovery SPI port number.
- containerPort: 49112 # JMX port number.
- containerPort: 10800 # SQL port number.
</code></pre>
<p>This should work fine. I've got this in the logs of the pod (<code>kubectl logs -f ignite-cluster-xx-yy</code>), showing the 2 Pods successfully locating each other:</p>
<pre><code>[13:42:00] Ignite node started OK (id=f89698d6)
[13:42:00] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1, offheap=0.72GB, heap=1.0GB]
[13:42:00] Data Regions Configured:
[13:42:00] ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false]
[13:42:01] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2, offheap=1.4GB, heap=2.0GB]
[13:42:01] Data Regions Configured:
[13:42:01] ^-- default [initSize=256.0 MiB, maxSize=740.8 MiB, persistenceEnabled=false]
</code></pre>
|
<p>I have a multi-tenant cluster, where multi-tenancy is achieved via namespaces. Every tenant has their own namespace. Pods from a tenant cannot talk to pods of other tenants. However, some pods in every tenant have to expose a service to the internet, using an Ingress.</p>
<p>This I how far I got (I am using Calico):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: tenant1-isolate-namespace
namespace: tenant1
spec:
policyTypes:
- Ingress
podSelector: {} # Select all pods in this namespace
ingress:
- from:
- namespaceSelector:
matchLabels:
name: tenant1 # white list current namespace
</code></pre>
<p>Deployed for each namespace (<code>tenant1</code>, <code>tenant2</code>, ... ), this limits communication between pods within their namespace. However, this prevents pods from the <code>kube-system</code> namespace to talk to pods in this namespace.</p>
<p>However, the <code>kube-system</code> namespace does not have any labels by default so I can not specifically white list this namespace.</p>
<p>I found a (dirty) workaround for this issue by manually giving it a label:</p>
<pre><code>kubectl label namespace/kube-system permission=talk-to-all
</code></pre>
<p>And adding the whitelist rule to the networkpolicy:</p>
<pre><code>...
- from:
- namespaceSelector:
matchLabels:
permission: talk-to-all # allow namespaces that have the "talk-to-all privilege"
</code></pre>
<p>Is there a better solution, without manually giving <code>kube-system</code> a label?</p>
<p><strong>Edit:</strong> I tried to additionally add an "OR" rule to specifically allow communication from pods that have the label "app=nginx-ingress", but without luck:</p>
<pre><code> - from
...
- podSelector:
matchLabels:
app: nginx-ingress # Allow pods that have the app=nginx-ingress label
</code></pre>
| <p><strong><a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">apiVersion: networking.k8s.io/v1</a></strong> </p>
<p>The namespaceSelector is designed to match namespaces by labels only. There is no way to select a namespace by name.</p>
<p>A podSelector on its own can only select pods in the same namespace as the NetworkPolicy object. For pods located in other namespaces, only selecting the whole namespace is possible.</p>
<p>Here is an example of Kubernetes Network Policy implementation:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: default
spec:
podSelector:
matchLabels:
role: db
policyTypes:
- Ingress
- Egress
ingress:
- from:
- ipBlock:
cidr: 172.17.0.0/16
except:
- 172.17.1.0/24
- namespaceSelector:
matchLabels:
project: myproject
- podSelector:
matchLabels:
role: frontend
ports:
- protocol: TCP
port: 6379
egress:
- to:
- ipBlock:
cidr: 10.0.0.0/24
ports:
- protocol: TCP
port: 5978
</code></pre>
<p>Follow this <a href="https://ahmet.im/blog/kubernetes-network-policy/" rel="nofollow noreferrer">link</a> to read a good explanation of the whole concept of Network policy, or this <a href="https://www.youtube.com/watch?v=3gGpMmYeEO8" rel="nofollow noreferrer">link</a> to watch the lecture.</p>
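<p>Regarding the edit in the question: a bare <code>podSelector</code> in a <code>from</code> clause only matches pods in the policy's own namespace, which is why the <code>app=nginx-ingress</code> rule had no effect. From Kubernetes 1.11 on, a <code>namespaceSelector</code> and a <code>podSelector</code> can be combined in a single <code>from</code> element with AND semantics. A sketch (it still requires labeling the ingress controller's namespace yourself, e.g. with <code>name: ingress</code>):</p>

```yaml
ingress:
- from:
  # matches only pods labeled app=nginx-ingress
  # inside namespaces labeled name=ingress
  - namespaceSelector:
      matchLabels:
        name: ingress
    podSelector:
      matchLabels:
        app: nginx-ingress
```

<p>Beware the indentation: if <code>podSelector</code> starts with its own <code>-</code>, it becomes a second, independent rule instead of an AND condition.</p>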
<p><strong><a href="https://docs.projectcalico.org/v3.1/reference/calicoctl/resources/networkpolicy" rel="nofollow noreferrer">apiVersion: projectcalico.org/v3</a></strong></p>
<p>The Calico API gives you more options for writing NetworkPolicy rules, so you may be able to achieve your goal with less effort.</p>
<p>For example, using Calico implementation of Network Policy you can: </p>
<ul>
<li>set action for the rule (Allow, Deny, Log, Pass), </li>
<li>use negative matching (protocol, notProtocol, selector, notSelector), </li>
<li>apply more complex label selectors (<code>has(k)</code>, <code>k not in { 'v1', 'v2' }</code>),</li>
<li>combine selectors with operator &&,</li>
<li>use port range (ports: [8080, "1234:5678", "named-port"]),</li>
<li>match pods in other namespaces.</li>
</ul>
<p>But still, you can match namespaces only by labels.</p>
<p>Consider reading Calico <a href="https://docs.projectcalico.org/v3.1/reference/calicoctl/resources/networkpolicy" rel="nofollow noreferrer">documentation</a> for the details.</p>
<p>Here is an example of Calico Network Policy implementation:</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
name: allow-tcp-6379
namespace: production
spec:
selector: role == 'database'
types:
- Ingress
- Egress
ingress:
- action: Allow
protocol: TCP
source:
selector: role == 'frontend'
destination:
ports:
- 6379
egress:
- action: Allow
</code></pre>
|
<p>Cluster:
1 master
2 workers</p>
<p>I am deploying a StatefulSet using local volumes, with PVs (the kubernetes.io/no-provisioner storageClass) and 3 replicas.
I created 2 PVs, one for each worker node.</p>
<p>Expectation: pods will be scheduled on both workers, sharing the same volume.</p>
<p>Result: 3 stateful pods are created on a single worker node.
YAML:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: example-local-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv-1
spec:
capacity:
storage: 2Gi
# volumeMode field requires BlockVolume Alpha feature gate to be enabled.
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /mnt/vol1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker-node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv-2
spec:
capacity:
storage: 2Gi
# volumeMode field requires BlockVolume Alpha feature gate to be enabled.
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /mnt/vol2
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker-node2
---
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
name: test
labels:
app: test
spec:
ports:
- name: test-headless
port: 8000
clusterIP: None
selector:
app: test
---
apiVersion: v1
kind: Service
metadata:
name: test-service
labels:
app: test
spec:
ports:
- name: test
port: 8000
protocol: TCP
nodePort: 30063
type: NodePort
selector:
app: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: test-stateful
spec:
selector:
matchLabels:
app: test
serviceName: stateful-service
replicas: 6
template:
metadata:
labels:
app: test
spec:
containers:
- name: container-1
image: <Image-name>
imagePullPolicy: Always
ports:
- name: http
containerPort: 8000
volumeMounts:
- name: localvolume
mountPath: /tmp/
volumes:
- name: localvolume
persistentVolumeClaim:
claimName: example-local-claim
</code></pre>
| <p>This happens because Kubernetes does not spread pods across nodes on its own; the mechanism for controlling distribution is called Pod Anti-Affinity.
To distribute pods across all workers, you may use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">Pod Anti-Affinity</a>.
You can also use the soft variant (the differences are <a href="https://stackoverflow.com/a/49427383/9485673">explained here</a>), which isn't strict and allows all your pods to be scheduled even when there are more replicas than workers. For example, the StatefulSet will look like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: my-app
replicas: 3
template:
metadata:
labels:
app: my-app
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- my-app
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: app-name
image: k8s.gcr.io/super-app:0.8
ports:
- containerPort: 21
name: web
</code></pre>
<p>This StatefulSet will try to place each pod on a different worker. Note that with <code>requiredDuringSchedulingIgnoredDuringExecution</code> the rule is hard: if there are more replicas than workers, the surplus pods will stay <code>Pending</code> rather than being co-scheduled onto an already-used node.</p>
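<p>For completeness, the soft variant replaces the hard rule with a weighted preference, so the scheduler still co-locates pods when no empty worker is left instead of leaving them Pending:</p>

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
        topologyKey: kubernetes.io/hostname
```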
|
<p><strong>Kubernetes version:</strong>
v1.10.3</p>
<p><strong>Docker version:</strong>
17.03.2-ce</p>
<p><strong>Operating system and kernel:</strong>
Centos 7</p>
<p><strong>Steps to Reproduce:</strong>
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/</a></p>
<p><strong>Results:</strong></p>
<p>[root@rd07 rd]# kubectl describe services example-service</p>
<p>Name: example-service<br>
Namespace: default<br>
Labels: run=load-balancer-example<br>
Annotations: <br>
Selector: run=load-balancer-example<br>
Type: NodePort<br>
IP: 10.108.214.162<br>
Port: 9090/TCP<br>
TargetPort: 9090/TCP<br>
NodePort: 31105/TCP<br>
Endpoints: 192.168.1.23:9090,192.168.1.24:9090<br>
Session Affinity: None<br>
External Traffic Policy: Cluster<br>
Events: <br></p>
<p><strong>Expected:</strong></p>
<p>Expect to be able to curl the cluster ip defined in the kubernetes service</p>
<p>I'm not exactly sure which is the so called "public-node-ip", so I tried every related ip address, only when using the master ip as the "public-node-ip" it shows "No route to host".</p>
<p>I used "netstat" to check if the endpoint is listened.</p>
<p>I tried "<a href="https://github.com/rancher/rancher/issues/6139" rel="nofollow noreferrer">https://github.com/rancher/rancher/issues/6139</a>" to flush my iptables, and it was not working at all.</p>
<p>I tried "<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/</a>", "nslookup hostnames.default" is not working.</p>
<p>The services seems working perfectly fine, but the services still cannot be accessed.</p>
<p>I'm using "calico" and the "flannel" was also tried.</p>
<p>I tried so many tutorials of apply services, they all cannot be accessed.</p>
<p>I'm new to kubernetes, plz if anyone could help me.</p>
| <p>If you are on a public cloud, the public IP address will not appear in the output of <code>ip a</code>; nodes only see their private addresses. Even so, the NodePort is exposed on <code>0.0.0.0:31105</code>.</p>
<p>Here is the sample file you can verify for your configuration:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: app-name
name: bss
namespace: default
spec:
externalIPs:
- 172.16.2.2
- 172.16.2.3
- 172.16.2.4
externalTrafficPolicy: Cluster
ports:
- port: 9090
protocol: TCP
targetPort: 9090
selector:
k8s-app: bss
sessionAffinity: ClientIP
type: LoadBalancer
status:
loadBalancer: {}
</code></pre>
<p>Replace the addresses under <code>externalIPs:</code> with your nodes' private IPs, then curl your node's public IP on the node port.</p>
<p>If you are using a cloud provider to deploy the application, also verify in the cloud security groups/firewall that the port is open.</p>
<p>Hope this helps.</p>
<p>Thank you!</p>
|
<p>I setup a 3 nodes kubernetes (<code>v1.9.3</code>) cluster on Ubuntu 16.04. </p>
<p>Prior to setup, I cleared the iptables rules and followed the k8s documents for flannel, using the following commands to initialize the cluster:</p>
<pre><code># kubeadm init --apiserver-advertise-address 192.168.56.20 --pod-network-cidr=10.244.0.0/16 --kubernetes-version 1.9.3
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
</code></pre>
<p>The previous commands seemed successful:</p>
<pre><code># kubectl -n kube-system -n kube-system get pods
NAME READY STATUS RESTARTS AGE
etcd-master 1/1 Running 0 3m
kube-apiserver-master 1/1 Running 0 2m
kube-controller-manager-master 1/1 Running 0 2m
kube-dns-6f4fd4bdf-4c76v 3/3 Running 0 3m
kube-flannel-ds-wbx97 1/1 Running 0 1m
kube-proxy-x65lv 1/1 Running 0 3m
kube-scheduler-master 1/1 Running 0 2m
</code></pre>
<p>But the problem is that <code>kube-dns</code> seems to have been assigned a wrong service endpoint address, as can be seen with the following commands:</p>
<pre><code># kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 172.17.0.2:53,172.17.0.2:53 3m
root@master:~# kubectl describe service kube-dns -n kube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: <none>
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.96.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 172.17.0.2:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 172.17.0.2:53
Session Affinity: None
Events: <none>
</code></pre>
<p>The <code>172.17.0.2</code> is the IP address assigned by the docker bridge (<code>docker0</code>) to the <code>kube-dns</code> container. In a working k8s network setup, <code>kube-dns</code> should have endpoints with addresses from the <code>podSubnet</code> (<code>10.244.0.0/16</code>).</p>
<p>The effect of the current setup is that no pod has functioning DNS, while IP communication is OK.</p>
<p>I tried deleting the <code>kube-dns</code> pod to see whether the new <code>kube-dns</code> containers would pick up endpoints from the <code>podSubnet</code>, but they don't.</p>
<p>The startup logs of the 3 <code>kube-dns</code> containers contain no error messages at all.</p>
| <p>I think I have found the root cause. The previous <code>kubeadm reset</code> did not remove the <code>cni</code> and <code>flannel.1</code> interfaces, so the next <code>kubeadm init</code> made <code>kube-dns</code> believe a Kubernetes network plugin was already in place before I applied the flannel YAML.</p>
<p>After checking for and removing any virtual NICs created by the flannel plugin while tearing down the Kubernetes cluster, the next <code>kubeadm init</code> succeeded without this issue.</p>
<p>The same applies to <em>Weave Net</em>, which requires running <code>weave reset</code> to remove the remaining virtual weave NICs.</p>
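<p>For anyone cleaning up after a failed teardown, the leftover interfaces can be listed and deleted before the next <code>kubeadm init</code>; a sketch (interface names depend on the CNI plugin in use):</p>

```shell
ip addr show                          # look for leftover cni0 / flannel.1 / weave interfaces
ip link delete cni0
ip link delete flannel.1
rm -rf /var/lib/cni/ /etc/cni/net.d/  # stale CNI state, if present
```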
|
<p>I am trying to package a war file using a tomcat Docker image.
I'm trying to expose the tomcat port in the image for it to be accessed externally.</p>
<p>This is my docker file</p>
<pre><code> FROM tomcat8:3.0.0
COPY target/hello-world-war-1.0.0.war /usr/local/tomcat/webapps
EXPOSE 8080
</code></pre>
<p>However, when I build this image, run it, and try to access it from my host browser at localhost:8080, nothing hits the endpoint, so it does not work as expected.</p>
<p>I still have to expose the port externally with the -p switch.</p>
<p>What have I missed? or is the usage of EXPOSE something else?</p>
<p>My end need is to be able to deploy this in kubernetes.</p>
| <blockquote>
<p>What have I missed?</p>
</blockquote>
<p>You need to publish port in some way:</p>
<ul>
<li><p>either by directly publishing it in <code>docker run</code> command remapping the <code>EXPOSE</code> Dockerfile directive to different host port using (note lowercase p):</p>
<pre><code>-p=[] : Publish a container᾿s port or a range of ports to the host
</code></pre></li>
<li><p>or, as stated in <a href="https://docs.docker.com/engine/reference/run/#overriding-dockerfile-image-defaults" rel="nofollow noreferrer">docker documentation</a>, implicitly publish ports listed in the <code>EXPOSE</code> Dockerfile directive using (note capital P):</p>
<pre><code>-P : Publish all exposed ports to the host interfaces
</code></pre></li>
</ul>
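<p>Concretely, for the image built from the Dockerfile in the question (the tag <code>hello-world-war</code> is just an example), the two options look like this:</p>

```shell
docker build -t hello-world-war .
docker run --rm -p 8080:8080 hello-world-war   # host port 8080 -> container port 8080
docker run --rm -P hello-world-war             # every EXPOSEd port -> a random host port
```

<p>With <code>-P</code>, run <code>docker port CONTAINER</code> to find out which host port was assigned.</p>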
<blockquote>
<p>My end need is to be able to deploy this in kubernetes.</p>
</blockquote>
<p>To that end you will use the ports listed in your Kubernetes manifests in a similar fashion. Say you expose port 8080 in your Docker image (and publish it with -p or -P): you will then use it in the container spec (the exact manifest depends on the chosen Kubernetes object type), and very probably later in a Service manifest. In each layer you can change the port mapping (similar to how <code>docker run -p</code> maps host:container ports to different values). So as far as Kubernetes is concerned, if you <code>EXPOSE</code> a port in the Dockerfile and it works correctly in the container and on the host through any means of publishing (<code>docker run</code> with -p or -P), it will function in Kubernetes as well, as long as you list it in the proper manifest files.</p>
|
<p>I want to make a call to Kubernetes API from .NET Core app outside the cluster. </p>
<p>I have an HttpClient with an HttpClientHandler where I set this callback to ignore invalid (untrusted) certificates and it works:</p>
<pre><code>handler.ServerCertificateCustomValidationCallback +=
(message, certificate, chain, errors) => true;
</code></pre>
<p>But in my kubeconfig from kubectl I have this:</p>
<pre><code>...
clusters:
- cluster:
certificate-authority-data: SOME_AUTHORITY_DATA
server: https://myserver.io:443
...
</code></pre>
<p>How can I validate server certificate using that <strong>certificate-authority-data</strong> in my application?</p>
| <p>Base64-decode the <strong>certificate-authority-data</strong> value from your kubeconfig to get the raw certificate bytes, then pin against that exact certificate in the validation callback:</p>
<pre><code>// Requires: using System.Linq; (for SequenceEqual)
// Raw bytes of the decoded certificate-authority-data value.
private static byte[] s_issuingCABytes = { ... };

handler.ServerCertificateCustomValidationCallback +=
(message, certificate, chain, errors) =>
{
const SslPolicyErrors Mask =
#if CA_IS_TRUSTED
~SslPolicyErrors.None;
#else
~SslPolicyErrors.RemoteCertificateChainErrors;
#endif
// If a cert is not present, or it didn't match the host.
// (And if the CA should have been root trusted anyways, also checks that)
if ((errors & Mask) != SslPolicyErrors.None)
{
return false;
}
foreach (X509ChainElement element in chain.ChainElements)
{
if (element.Certificate.RawData.SequenceEqual(s_issuingCABytes))
{
// The expected certificate was found, huzzah!
return true;
}
}
// The expected cert was not in the chain.
return false;
};
</code></pre>
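<p>For completeness: the <code>certificate-authority-data</code> value in kubeconfig is simply the base64-encoded PEM of the CA certificate, so you can recover the raw bytes to pin against like this (the jsonpath below assumes the cluster you want is the first entry):</p>

```shell
# certificate-authority-data is base64-encoded PEM; decoding a sample prefix:
printf '%s' 'LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t' | base64 -d
# prints: -----BEGIN CERTIFICATE-----

# Against a real kubeconfig you would extract the full value with, e.g.:
#   kubectl config view --raw \
#     -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
#     | base64 -d > ca.crt
```

<p>Those decoded bytes are what <code>s_issuingCABytes</code> should contain.</p>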
|
<p>I have a Kubernetes cluster (v1.5.6) with a 3-node etcd cluster (etcd version 3.1.5) on VMware.
These etcd nodes are running in three Docker containers (on three hosts) on CoreOS on VMware.</p>
<p>I back up etcd with the following command:</p>
<pre><code>docker run --rm --net=host -v /tmp:/etcd_backup -e ETCDCTL_API=3 quay.io/coreos/etcd:v3.1.5 etcdctl --endpoints=[1.1.1.1:2379,2.2.2.2:2379,3.3.3.3:2379] snapshot save etcd_backup/snapshot.db
</code></pre>
<p>The backup completed successfully.</p>
<p>I want to recreate this Kubernetes cluster from scratch in another VMware environment, but for that I need to restore etcd from the snapshot.</p>
<p>So far, I have not found the right solution that works with etcd in docker containers.</p>
<p>I tried to restore with the following method, but unfortunately did not succeed.</p>
<p>First, I created a new etcd node by running the following command:</p>
<pre><code>docker run --rm --net=host -v /tmp/etcd_bak:/etcd_backup -e ETCDCTL_API=3 registry:5000/quay.io/coreos/etcd:v3.1.5 etcdctl snapshot restore etcd_backup/snapshot.db --name etcd0 --initial-cluster etcd0=http://etcd0:2380,etcd1=http://etcd1:2380,etcd2=http://etcd2:2380 --initial-cluster-token etcd-cluster-1 --initial-advertise-peer-urls http://etcd0:2380
</code></pre>
<p>Result:</p>
<pre><code>2018-06-04 09:25:52.314747 I | etcdserver/membership: added member 7ff5c9c6942f82e [http://etcd0:2380] to cluster 5d1b637f4b7740d5
2018-06-04 09:25:52.314940 I | etcdserver/membership: added member 91b417e7701c2eeb [http://etcd2:2380] to cluster 5d1b637f4b7740d5
2018-06-04 09:25:52.315096 I | etcdserver/membership: added member faeb78734ee4a93d [http://etcd1:2380] to cluster 5d1b637f4b7740d5
</code></pre>
<p>Unfortunately, nothing happens.</p>
<p>What is the good solution to restore the etcd backup?</p>
<p>How do I create an empty etcd cluster/node and how should I restore the snapshot?</p>
| <p>According to the etcd <a href="https://github.com/coreos/etcd/blob/master/Documentation/op-guide/recovery.md" rel="noreferrer">Disaster Recovery</a> document, you need to restore all three etcd nodes from the snapshot with commands like yours, then start the three nodes with commands like this:</p>
<pre><code>etcd \
--name m1 \
--listen-client-urls http://host1:2379 \
--advertise-client-urls http://host1:2379 \
--listen-peer-urls http://host1:2380 &
</code></pre>
<p>Also, you can extract etcdctl from the image, like this:</p>
<pre><code>docker run --rm -v /opt/bin:/opt/bin registry:5000/quay.io/coreos/etcd:v3.1.5 cp /usr/local/bin/etcdctl /opt/bin
</code></pre>
<p>Then use etcdctl to restore snapshot:</p>
<pre><code># ETCDCTL_API=3 ./etcdctl snapshot restore snapshot.db \
--name m1 \
--initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls http://host1:2380 \
--data-dir /var/lib/etcd
</code></pre>
<p>This restores the snapshot to the /var/lib/etcd directory. Then start etcd with Docker; don't forget to mount /var/lib/etcd into your container and to point <code>--data-dir</code> at it.</p>
|
| <p>I created a service account and a Pod associated with this service account.<br>
Inside the Pod I have the service account token: </p>
<pre><code>eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im15c2VydmljZS10b2tlbi1xY21jcSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJteXNlcnZpY2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlZmVkM2MwZi02ZTEwLTExZTgtYWEwOC0wMDUwNTY4NTdjYWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpteXNlcnZpY2UifQ.Q6evTXCaZ99eBRsOrNnu-UlCJsYu4EKNijxEYyMe8Kq6G9e5likG08DwqMyUOP9uVT7kbOR6VOqIuJ4w0xShG6H2zcXhsF7dViFdo9LaYrs2830XjkiMRAxqJmkcvNseeqwBL1aS5SiNz_xf8RgIZaU1Oik69KVRWncno3dZHEyr2PrwDt4akSorCAC9nyhWKV-oL7FWtQjRKzfr3utbvGMLU6YKVN6cDR4C-GrvVUM1eI0o_-6kovz4VKJKfiOb0c7ttAM_9h4kNOaRxtmTVPTBzBEy6qJUgva0IUlpya8AChRyGncXc6qIJaVOkgUvZm7SpI77Czxz0TrkGezVhA/
</code></pre>
<p>I decoded the JWT with <a href="https://jwt.io/#debugger-io?token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im15c2VydmljZS10b2tlbi1xY21jcSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJteXNlcnZpY2UiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlZmVkM2MwZi02ZTEwLTExZTgtYWEwOC0wMDUwNTY4NTdjYWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpteXNlcnZpY2UifQ.Q6evTXCaZ99eBRsOrNnu-UlCJsYu4EKNijxEYyMe8Kq6G9e5likG08DwqMyUOP9uVT7kbOR6VOqIuJ4w0xShG6H2zcXhsF7dViFdo9LaYrs2830XjkiMRAxqJmkcvNseeqwBL1aS5SiNz_xf8RgIZaU1Oik69KVRWncno3dZHEyr2PrwDt4akSorCAC9nyhWKV-oL7FWtQjRKzfr3utbvGMLU6YKVN6cDR4C-GrvVUM1eI0o_-6kovz4VKJKfiOb0c7ttAM_9h4kNOaRxtmTVPTBzBEy6qJUgva0IUlpya8AChRyGncXc6qIJaVOkgUvZm7SpI77Czxz0TrkGezVhA%2F" rel="noreferrer">jwt.io</a> website. </p>
<p>Down below it has a place that I can verify my token but it asks for Public Key or certificate:<br>
<a href="https://i.stack.imgur.com/ZZXj6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZZXj6.png" alt="enter image description here"></a></p>
<p>Inside the Pod I have the cluster certificate (<code>ca.crt</code>):<br>
<a href="https://i.stack.imgur.com/Kbg9k.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Kbg9k.png" alt="enter image description here"></a></p>
<p>I entered it, but it said that it was not valid.<br>
<strong>ca.crt content:</strong> </p>
<pre><code>-----BEGIN CERTIFICATE-----
MIICyDCCAbCgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTE4MDYxMTA4MzkwM1oXDTI4MDYwODA4MzkwM1owFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANLb
SLZuYZhsLD4eazkGglgcusKD02MMO7hYw02OD5M9G4biC/ZFlWjdayLY9OTKwISW
kdyNHrOvW5+UMf1Kha+aJgRfaf96YpDadADsdA/pXKlxA23TzjCmCLgZiO4h+PNO
nxRTftM/o8hoUNhY6t71m2Pn5gE2+bdFuzBLM4rGIaDlFagn1iYnAys6faz4Q3Xn
n8xpp2Fl+kxf2JpCffgdfHd3M7DAHlFqdyBPa8i9byCPknSt/j8dR62Etxl1xOxl
QwtLXmRKOe134g1yxPMrIbh54JkOMuU0dyVV9WriUj04jskH+zIrQnGOztSvES2N
iLqwIM52IC6EwKAcbXUCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAMfXDTlaoUFFloE/Ce8TwA32d6fD
tSUBX2xs5pMclacGAFtx505e+fpvkowR3qcJcjawov0bPdMyGNZyXyDWpCsvDVnH
850m+z91+mkqqom9vGmPk5MXoz7trzrKvkvl7CDaWyQlX8SV83c3ijyaeH8/crvC
Z3qeOkjekWnS/lKvQ9a+dFFou0ZsN3UGUpUtI12gwjvHYgTOwdpjX+CBp2TEnzd+
5NSkfv2QNom8cQmnEiSnI5JCxv3Vzi6z9uTGC7ok1EQAwwlYLBeyTpcz5PmCNvZ/
rw3eEneIyKxlSLafkNlEdhol2ZXZe+JS2zTzUVWlFbEtG/odGmn29BSN3nU=
-----END CERTIFICATE-----
</code></pre>
<p>Where can I find the public key or certificate associated with this service account token?<br>
Inside the worker there is a location <code>/var/lib/kubelet/pki</code> but the keys and certificates are related to the kubelet:<br>
<code>kubelet-client.crt kubelet-client.key kubelet.crt kubelet.key</code></p>
<p>Reference:<br>
<a href="https://stackoverflow.com/questions/46402683/ca-certificate-and-jwt-tokens-on-kubernetes">CA Certificate and JWT tokens on kubernetes</a></p>
| <p>It is not the kubelet but the kube-controller-manager that signs the service account tokens. You can find the keys in the master node filesystem: the signing (private) key path is the value of the <code>--service-account-private-key-file</code> flag on the kube-controller-manager, and the matching public key used for verification is the value of the <code>--service-account-key-file</code> flag on the kube-apiserver. The latter is the key to paste into the jwt.io verification box.</p>
<p>See the docs for these values here: <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/</a></p>
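<p>As an aside, the two dot-separated segments before the signature are just unpadded base64url-encoded JSON, so you can inspect the token's claims without any key at all (this is what jwt.io does client-side). Decoding the header segment of the token above, for example:</p>

```shell
# First JWT segment = header; this one happens to need no '=' padding
# (tr converts base64url's '-'/'_' to standard base64 '+'/'/')
printf '%s' 'eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9' | tr '_-' '/+' | base64 -d
# prints: {"alg":"RS256","typ":"JWT"}
```

<p>The public key is only needed to verify the signature over those two segments.</p>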
|
<p>I created an InitializerConfiguration that adds my initializer for pods.</p>
<p>The documentation says to use a Deployment (<a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-initializers-on-the-fly" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-initializers-on-the-fly</a>). However, doing so results in my initializer Pod being stuck in "pending" because it's waiting for itself to initialize it. I tried overriding the pending initializers to an empty list in the pod spec of the Deployment, but that seems to be ignored.</p>
<p>What's the correct way to deploy a Pod initializer without deadlocking?</p>
<p>I found a couple of bug reports that seem related, but no solutions that worked for me:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/issues/51485" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/51485</a> (based on this one I added the "initialize" verb for pods to the ClusterRole system:controller:replicaset-controller, but that didn't help either)</li>
</ul>
| <blockquote>
<p>However, doing so results in my initializer Pod being stuck in "pending" because it's waiting for itself to initialize it</p>
</blockquote>
<p>But the docs say:</p>
<blockquote>
<p>You should first deploy the initializer controller and make sure that it is working properly before creating the <code>initializerConfiguration</code>. Otherwise, any newly created resources will be stuck in an uninitialized state.</p>
</blockquote>
<p>So it sounds to me like you will want to <code>kubectl delete initializerConfiguration --all</code> (or, of course, the specific name of the <code>initializerConfiguration</code>), allow your initializer Pod to start successfully, <strong>then</strong> <code>kubectl create -f my-initializer-config.yaml</code> or whatever.</p>
|
| <p>When I run the exec command:</p>
<pre><code> kubectl exec kubia-zgxn9 -- curl -s http://10.47.252.17
Error from server (BadRequest): pod kubia-zgxn9 does not have a host assigned
</code></pre>
<p>Describing the pod shows:</p>
<pre><code>IP:
Controlled By: ReplicationController/kubia
Containers:
kubia:
Image: luksa/kubia
Port: 8080/TCP
Host Port: 0/TCP
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xs7qx (ro)
</code></pre>
<p>This is my service</p>
<pre><code>Name: kubia
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kubia
Type: ClusterIP
IP: 10.47.252.17
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
</code></pre>
<p>Why did I get this error from the server?</p>
| <p>The Pod is probably not yet scheduled to a Node.</p>
<p>Maybe it just took a little longer than expected or perhaps it's asking for resources that no node can satisfy at the moment.</p>
<p>Check the output of <code>kubectl get pod kubia-zgxn9</code> and see if the state is <code>Running</code>. If so, retry the <code>exec</code>; if it still fails, this might be a bug.</p>
<p>If it's not running, check the describe output for notices. (Unfortunately you cut the output short in your question so we can't see what's wrong with it).</p>
|
| <p>Is it possible to pull private images from Docker Hub into a Google Cloud Kubernetes cluster?
Is this recommended, or do I need to push my private images to Google Cloud as well?</p>
<p>I read the documentation, but found nothing that explains this clearly. It seems to be possible, but I don't know whether it is recommended.</p>
| <p>There is no restriction on using any registry you want. If you just use a bare image name (e.g., <code>image: nginx</code>) in the pod specification, the image will be pulled from the public Docker Hub registry with the tag assumed to be <code>:latest</code>.</p>
<p>As mentioned in the Kubernetes <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>The image property of a container supports the same syntax as the
docker command does, including private registries and tags. Private
registries may require keys to read images from them.</p>
<h2>Using Google Container Registry</h2>
<p>Kubernetes has native support for the <a href="https://cloud.google.com/tools/container-registry/" rel="nofollow noreferrer">Google Container Registry (GCR)</a>, when running on Google
Compute Engine (GCE). If you are running your cluster on GCE or Google
Kubernetes Engine, simply use the full image name (e.g.
gcr.io/my_project/image:tag). All pods in a cluster will have read
access to images in this registry.</p>
<h2>Using AWS EC2 Container Registry</h2>
<p>Kubernetes has native support for the <a href="https://aws.amazon.com/ecr/" rel="nofollow noreferrer">AWS EC2 Container Registry</a>, when nodes are AWS EC2 instances.
Simply use the full image name (e.g.
ACCOUNT.dkr.ecr.REGION.amazonaws.com/imagename:tag) in the Pod
definition. All users of the cluster who can create pods will be able
to run pods that use any of the images in the ECR registry.</p>
<h2>Using Azure Container Registry (ACR)</h2>
<p>When using <a href="https://azure.microsoft.com/en-us/services/container-registry/" rel="nofollow noreferrer">Azure Container Registry</a> you can authenticate using either an admin user or a
service principal. In either case, authentication is done via standard
Docker authentication. These instructions assume the azure-cli command
line tool.</p>
<p>You first need to create a registry and generate credentials, complete
documentation for this can be found in the <a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli" rel="nofollow noreferrer">Azure container registry
documentation</a>.</p>
<h2>Configuring Nodes to Authenticate to a Private Repository</h2>
<p>Here are the recommended steps to configuring your nodes to use a private
registry. In this example, run these on your desktop/laptop:</p>
<ol>
<li>Run docker login [server] for each set of credentials you want to use. This updates <code>$HOME/.docker/config.json</code>.</li>
<li>View <code>$HOME/.docker/config.json</code> in an editor to ensure it contains just the credentials you want to use.</li>
<li>Get a list of your nodes, for example:
<ul>
<li>if you want the names: <code>nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')</code></li>
<li>if you want to get the IPs: <code>nodes=$(kubectl get nodes -o jsonpath='{range
.items[*].status.addresses[?(@.type=="ExternalIP")]}{.address}
{end}')</code></li>
</ul></li>
<li>Copy your local .docker/config.json to the home directory of root on each node.
<ul>
<li>for example: <code>for n in $nodes; do scp ~/.docker/config.json root@$n:/root/.docker/config.json; done</code></li>
</ul></li>
</ol>
</blockquote>
<h2>Use cases:</h2>
<blockquote>
<p>There are a number of solutions for configuring private registries.
Here are some common use cases and suggested solutions.</p>
<ol>
<li>Cluster running only non-proprietary (e.g. open-source) images. No need to hide images.
<ul>
<li>Use public images on the Docker hub.
<ul>
<li>No configuration required.</li>
<li>On GCE/Google Kubernetes Engine, a local mirror is automatically used for improved speed and availability.</li>
</ul></li>
</ul></li>
<li>Cluster running some proprietary images which should be hidden to those outside the company, but visible to all cluster users.
<ul>
<li>Use a hosted private Docker registry.
<ul>
<li>It may be hosted on the Docker Hub, or elsewhere.</li>
<li>Manually configure <code>.docker/config.json</code> on each node as described above.</li>
</ul></li>
<li>Or, run an internal private registry behind your firewall with open read access.
<ul>
<li>No Kubernetes configuration is required.</li>
</ul></li>
<li>Or, when on GCE/Google Kubernetes Engine, use the project’s Google Container Registry.
<ul>
<li>It will work better with cluster autoscaling than manual node configuration.</li>
</ul></li>
<li>Or, on a cluster where changing the node configuration is inconvenient, use <strong>imagePullSecrets</strong>.</li>
</ul></li>
<li>Cluster with a proprietary images, a few of which require stricter access control.
<ul>
<li>Ensure AlwaysPullImages admission controller is active. Otherwise, all Pods potentially have access to all images.</li>
<li>Move sensitive data into a “Secret” resource, instead of packaging it in an image.</li>
</ul></li>
<li>A multi-tenant cluster where each tenant needs own private registry.
<ul>
<li>Ensure <code>AlwaysPullImages</code> admission controller is active. Otherwise, all Pods of all tenants potentially have access to all
images.</li>
<li>Run a private registry with authorization required.</li>
<li>Generate registry credential for each tenant, put into secret, and populate secret to each tenant namespace.</li>
<li>The tenant adds that secret to <code>imagePullSecrets</code> of each namespace.</li>
</ul></li>
</ol>
</blockquote>
<p>Consider reading the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">Pull an Image from a Private Registry</a> document if you decide to use a private registry. </p>
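<p>A minimal sketch of the <code>imagePullSecrets</code> approach mentioned above (all names are illustrative): first create a docker-registry secret with <code>kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=&lt;user&gt; --docker-password=&lt;password&gt;</code>, then reference it from the pod spec:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-test
spec:
  containers:
  - name: app
    image: myuser/private-image:1.0   # hypothetical private Docker Hub image
  imagePullSecrets:
  - name: regcred                     # the docker-registry secret created above
```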
|
| <p>I am running minikube on macOS, and want to expose the IP address and port for running this example Helm chart:
<a href="https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/</a></p>
<p>I tried to reach localhost:58064, but the connection was refused.</p>
<pre><code>helm install --dry-run --debug ./mychart --set service.internalPort=8080
[debug] Created tunnel using local port: '58064'
[debug] SERVER: "127.0.0.1:58064"
[debug] Original chart version: ""
[debug] CHART PATH: /Users/me/Desktop/HelmTest/mychart
NAME: messy-penguin
REVISION: 1
RELEASED: Tue Jun 12 17:56:41 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
service:
internalPort: 8080
COMPUTED VALUES:
affinity: {}
image:
pullPolicy: IfNotPresent
repository: nginx
tag: stable
ingress:
annotations: {}
enabled: false
hosts:
- chart-example.local
path: /
tls: []
nodeSelector: {}
replicaCount: 1
resources: {}
service:
internalPort: 8080
port: 80
type: ClusterIP
tolerations: []
HOOKS:
MANIFEST:
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: messy-penguin-mychart
labels:
app: mychart
chart: mychart-0.1.0
release: messy-penguin
heritage: Tiller
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
app: mychart
release: messy-penguin
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: messy-penguin-mychart
labels:
app: mychart
chart: mychart-0.1.0
release: messy-penguin
heritage: Tiller
spec:
replicas: 1
selector:
matchLabels:
app: mychart
release: messy-penguin
template:
metadata:
labels:
app: mychart
release: messy-penguin
spec:
containers:
- name: mychart
image: "nginx:stable"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
MacBook-Pro:~/Desktop/HelmTest quantum-fusion$ curl 127.0.0.1:58064
curl: (7) Failed to connect to 127.0.0.1 port 58064: Connection refused
</code></pre>
| <p>Because <code>minikube</code> is from the docker-machine family, running <code>minikube ip</code> will output the IP address of the virtual machine, and <strong>that</strong> is the IP upon which you should attempt to contact your cluster, not localhost.</p>
<p>Furthermore, <code>[debug] Created tunnel using local port: '58064'</code> is <strong>helm</strong> making a tunnel to the embedded <code>tiller</code> Pod inside your cluster, and is not anything that you should be using at all. That's actually why it is prefixed with <code>[debug]</code>: because it is only useful for extreme circumstances.</p>
<p>Finally, you will need to use <code>kubectl port-forward</code> to reach your deployed Pod since the <code>Service</code> is using a <code>ClusterIP</code>, which as its name implies is only valid inside the cluster. You can also create a second <code>Service</code> of <code>type: NodePort</code> and it will allocate a TCP/IP port on the virtual machine's IP that routes to the <code>port:</code> of the <code>Service</code>. You <em>may</em> be able to inform your Helm chart to do that for you, depending on whether the author exposed that kind of decision through the chart's <code>values.yaml</code>.</p>
<p>The other "asterisk" to the <code>port-forward</code> versus <code>Service</code> of <code>type: NodePort</code> part: I see in the output a mention of an <code>Ingress</code> resource for <code>chart-example.local</code>, but that is only meaningful if you have a running "ingress controller." If you do, then it <em>already</em> exposes a TCP/IP port on which you should contact your cluster; just make the request with <code>curl -H "host: chart-example.local" http://$(minikube ip):${the_ingress_port}</code> so that the ingress controller can route it to the correct <code>Ingress</code>.</p>
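<p>If you take the <code>type: NodePort</code> route, a second Service selecting the same pods could look roughly like this (port numbers are illustrative; omit <code>nodePort</code> to let Kubernetes pick one from the 30000-32767 range):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: messy-penguin-mychart-nodeport
spec:
  type: NodePort
  selector:
    app: mychart
    release: messy-penguin
  ports:
  - port: 80           # cluster-internal port
    targetPort: http   # named container port from the Deployment
    nodePort: 30080    # then: curl http://$(minikube ip):30080
```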
|
<p>I'm using GKE 1.10.2-gke.3 and would like to know how I can configure both IPv4 and IPv6 connectivity with my Google Load Balancer Ingress yaml.</p>
<p>I can configure IPv4 or IPv6 but not both. My yaml file is below. Assume I have reserved static IPv4 and IPv6 addresses called app-static-ipv4 and app-static-ipv6 respectively. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: app-static-ipv6
spec:
tls:
- secretName: tls-certificates
backend:
serviceName: app-service
servicePort: 8080
</code></pre>
| <p>You can attach an IPv6 address to a GCLB and get the same global routing as you would with IPv4.</p>
<p>One strategy would be to configure the GCLB with an IPv6 address to handle all IPv6 traffic. Just create an additional forwarding rule with the IPv6 address.</p>
<p>Then you can associate both IPv6 and IPv4 with the same load balancer and backend instances. More on IPv6 support:</p>
<p><a href="https://cloud.google.com/compute/docs/load-balancing/ipv6" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/load-balancing/ipv6</a></p>
|
<p>Kubernetes has both <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> (in front of a Service) and Service with type:
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">LoadBalancer</a>. These seem to do identical things: allow public traffic into the pods matching the service's selector. What are the benefits and drawbacks of each? In what scenarios would I choose one over the other?</p>
| <p>An Ingress can be used to expose many services depending on the path, or even multiple applications depending on the host or domain in the request.</p>
<p>A load balancer always exposes one service only.</p>
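<p>For example, a single Ingress can fan out to several Services by host and path (service names and ports here are illustrative), whereas each <code>type: LoadBalancer</code> Service would get its own load balancer:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /web          # example.com/web -> web-svc
        backend:
          serviceName: web-svc
          servicePort: 80
      - path: /api          # example.com/api -> api-svc
        backend:
          serviceName: api-svc
          servicePort: 8080
```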
|
<p>Is there any way to get statistics such as service / endpoint access for services defined in Kubernetes cluster?</p>
<p>I've read about Heapster, but it doesn't seem to provide these statistics. Plus, the whole setup is tremendously complicated and relies on a ton of third-party components. I'd really like something much, much simpler than that.</p>
<p>I've been looking into what may be available in <code>kube-system</code> namespace, and there's a bunch of containers and services, there, Heapster including, but they are effectively inaccessible because they require authentication I cannot provide, and <code>kubectl</code> doesn't seem to have any API to access them (or does it?).</p>
| <p>Heapster is the agent that collects data, but you then need a monitoring backend to interpret that data. On GCP, for example, it is fluentd that picks up these metrics and sends them to Stackdriver.</p>
<p><a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> is an excellent monitoring tool. I would recommend it if you are not on GCP.</p>
<p>If you are on GCP, then as mentioned above you have Stackdriver Monitoring, which is configured by default for Kubernetes clusters. All you have to do is create a Stackdriver account (one click from the GCP Console), and you are good to go.</p>
|
| <p>I'm using Kubernetes (K8s) v1.7 and wondering if I can share a GPU among multiple pods. I have an MNIST machine learning program in TensorFlow where GPU allocation is set to 30% and GPU growth (<code>allow_growth</code>) is false. Two manifest files are used to deploy two separate jobs under Kubernetes. When I run them together, one gets scheduled and the other waits because the GPU resource is not available. In my manifest file, I set <code>alpha.kubernetes.io/nvidia-gpu</code> to 0.5. When I check GPU processes, it always shows one process running. If I stop one job in Kubernetes, the other gets scheduled and shows up in the GPU processes. FYI, the machine I'm using has only one GPU. So my question is whether Kubernetes supports GPU sharing or not. Can I share a GPU and define it in the manifest file?</p>
<p>A portion of the Manifest (both jobs have same GPU request)</p>
<pre><code> resources:
limits:
alpha.kubernetes.io/nvidia-gpu: 0.5
</code></pre>
<p>The output of the <code>nvidia-smi</code> command shows 1 process at a time</p>
<pre><code>+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26 Driver Version: 396.26 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:0B:00.0 Off | N/A |
| 0% 33C P2 59W / 275W | 177MiB / 11178MiB | 8% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 15513 C python3 167MiB |
+-----------------------------------------------------------------------------+
</code></pre>
| <p>GPU sharing across multiple containers is not supported at the moment, and is unlikely to be supported anytime soon. You would need each virtual machine to be a separate Kubernetes node, each with its own GPU.</p>
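<p>In other words, GPU requests in that Kubernetes version must be whole devices; as far as I know the scheduler does not honor fractional values, so the only accepted shape is something like:</p>

```yaml
resources:
  limits:
    alpha.kubernetes.io/nvidia-gpu: 1   # whole GPUs only; 0.5 cannot make two pods share one card
```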
|
| <p>I have two services: one with 2 pod replicas for a web application, which depends on another backend service with 2 pod replicas running MySQL containers.</p>
<p>The web application uses environment variables set by the backend DB service. I have all the JSON files in the same directory.</p>
<p>Is there any way to express the dependencies so that kubectl always creates (and runs) the backend pods and services before it starts the web application service? I used kubeadm to create the cluster.</p>
| <p>I can suggest two solutions:</p>
<p>The first is to attach an init container to the web servers that waits until MySQL is up and running. The Deployment would look something like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web
spec:
selector:
matchLabels:
app: web
replicas: 2
template:
metadata:
labels:
app: web
spec:
initContainers:
- name: init-wait
image: alpine
command: ["sh", "-c", "for i in $(seq 1 300); do nc -zvw1 mysql 3306 && exit 0 || sleep 3; done; exit 1"]
containers:
- name: web
image: web-server
ports:
- containerPort: 80
protocol: TCP
</code></pre>
<p>It uses netcat to attempt a TCP connection to the mysql service on port 3306 every 3 seconds. Once it manages to connect, the init container exits and the web server starts normally.</p>
<p>The second option is to use <a href="https://github.com/Mirantis/k8s-AppController" rel="noreferrer">Mirantis AppController</a>. It allows you to create dependency objects between the server and database deployments as needed. Check their repo for full documentation.</p>
|
| <p>We are considering using Polly as our failover library.</p>
<p>We run our applications in a Kubernetes environment. We can't guarantee that a pod that is, for example, running retry attempts against a certain service will live long enough for the request to succeed. And if the pod dies, the retry chain presumably disappears forever.</p>
<p>How do you deal with that scenario? It is important for us that certain retry chains always continue until success.</p>
| <p>Polly is an in-process library and doesn't currently offer any out-of-process retry co-ordination. There is no current option to persist Polly retry state anywhere out-of-process. </p>
|
<p>I am setting up a Kubernetes cluster and can't get the Weave network up properly.</p>
<p>I have 3 nodes: rowlf (master), rizzo, and fozzie. The pods look fine:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/etcd-rowlf 1/1 Running 0 32m
kube-system pod/kube-apiserver-rowlf 1/1 Running 9 33m
kube-system pod/kube-controller-manager-rowlf 1/1 Running 0 32m
kube-system pod/kube-dns-686d6fb9c-kjdxt 3/3 Running 0 33m
kube-system pod/kube-proxy-6kpr9 1/1 Running 0 9m
kube-system pod/kube-proxy-f7nk5 1/1 Running 0 33m
kube-system pod/kube-proxy-nrbbl 1/1 Running 0 21m
kube-system pod/kube-scheduler-rowlf 1/1 Running 0 32m
kube-system pod/weave-net-4sj4n 2/2 Running 1 21m
kube-system pod/weave-net-kj6q7 2/2 Running 1 9m
kube-system pod/weave-net-nsp22 2/2 Running 0 30m
</code></pre>
<p>But the weave status shows failures:</p>
<pre><code>$ kubectl exec -n kube-system weave-net-nsp22 -c weave -- /home/weave/weave --local status
Version: 2.3.0 (up to date; next check at 2018/06/14 00:30:09)
Service: router
Protocol: weave 1..2
Name: 7a:8f:22:1f:0a:17(rowlf)
Encryption: disabled
PeerDiscovery: enabled
Targets: 1
Connections: 1 (1 failed)
Peers: 1
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.32.0.0/12
DefaultSubnet: 10.32.0.0/12
</code></pre>
<p>First, I do not understand why the connection is marked as failed. Second, in the logs I found these two lines:</p>
<pre><code>INFO: 2018/06/13 17:22:59.170536 ->[172.16.20.12:54077] connection accepted
INFO: 2018/06/13 17:22:59.480262 ->[172.16.20.12:54077|7a:8f:22:1f:0a:17(rowlf)]: connection shutting down due to error: local "7a:8f:22:1f:0a:17(rowlf)" and remote "7a:8f:22:1f:0a:17(rizzo)" peer names collision
INFO: 2018/06/13 17:34:12.668693 ->[172.16.20.13:52541] connection accepted
INFO: 2018/06/13 17:34:12.672113 ->[172.16.20.13:52541|7a:8f:22:1f:0a:17(rowlf)]: connection shutting down due to error: local "7a:8f:22:1f:0a:17(rowlf)" and remote "7a:8f:22:1f:0a:17(fozzie)" peer names collision
</code></pre>
<p>The second thing I don't understand is the "peer names collision" error. Is this normal?</p>
<p>This is the log from "rizzo":</p>
<pre><code>kubectl logs weave-net-4sj4n -n kube-system weave
DEBU: 2018/06/13 17:22:58.731864 [kube-peers] Checking peer "7a:8f:22:1f:0a:17" against list &{[{7a:8f:22:1f:0a:17 rowlf}]}
INFO: 2018/06/13 17:22:58.833350 Command line options: map[conn-limit:100 docker-api: host-root:/host http-addr:127.0.0.1:6784 ipalloc-range:10.32.0.0/12 no-dns:true expect-npc:true name:7a:8f:22:1f:0a:17 datapath:datapath db-prefix:/weavedb/weave-net ipalloc-init:consensus=2 metrics-addr:0.0.0.0:6782 nickname:rizzo port:6783]
INFO: 2018/06/13 17:22:58.833525 weave 2.3.0
INFO: 2018/06/13 17:22:59.119956 Bridge type is bridged_fastdp
INFO: 2018/06/13 17:22:59.120025 Communication between peers is unencrypted.
INFO: 2018/06/13 17:22:59.141576 Our name is 7a:8f:22:1f:0a:17(rizzo)
INFO: 2018/06/13 17:22:59.141787 Launch detected - using supplied peer list: [172.16.20.12 172.16.20.11]
INFO: 2018/06/13 17:22:59.141894 Checking for pre-existing addresses on weave bridge
INFO: 2018/06/13 17:22:59.157517 [allocator 7a:8f:22:1f:0a:17] Initialising with persisted data
INFO: 2018/06/13 17:22:59.157884 Sniffing traffic on datapath (via ODP)
INFO: 2018/06/13 17:22:59.158806 ->[172.16.20.11:6783] attempting connection
INFO: 2018/06/13 17:22:59.159081 ->[172.16.20.12:6783] attempting connection
INFO: 2018/06/13 17:22:59.159815 ->[172.16.20.12:42371] connection accepted
INFO: 2018/06/13 17:22:59.161572 ->[172.16.20.12:6783|7a:8f:22:1f:0a:17(rizzo)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/06/13 17:22:59.161836 ->[172.16.20.12:42371|7a:8f:22:1f:0a:17(rizzo)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/06/13 17:22:59.265736 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2018/06/13 17:22:59.266483 Listening for metrics requests on 0.0.0.0:6782
INFO: 2018/06/13 17:22:59.443937 ->[172.16.20.11:6783|7a:8f:22:1f:0a:17(rizzo)]: connection shutting down due to error: local "7a:8f:22:1f:0a:17(rizzo)" and remote "7a:8f:22:1f:0a:17(rowlf)" peer names collision
INFO: 2018/06/13 17:23:00.355761 [kube-peers] Added myself to peer list &{[{7a:8f:22:1f:0a:17 rowlf}]}
DEBU: 2018/06/13 17:23:00.367309 [kube-peers] Nodes that have disappeared: map[]
INFO: 2018/06/13 17:34:12.671287 ->[172.16.20.13:60523] connection accepted
INFO: 2018/06/13 17:34:12.674712 ->[172.16.20.13:60523|7a:8f:22:1f:0a:17(rizzo)]: connection shutting down due to error: local "7a:8f:22:1f:0a:17(rizzo)" and remote "7a:8f:22:1f:0a:17(fozzie)" peer names collision
</code></pre>
<p>I ask because I have reinstalled everything from scratch four times by now, and every time I have trouble connecting from Traefik to a pod on another host. I blame the network, because this does not look healthy. Can you please tell me whether the setup is correct so far? Are these errors normal, or do I have to care about them? And finally: how should I ask for help, and what information do I have to provide to make it easy for people like you to get me out of this frustrating position?</p>
<p>This is my version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/arm"}
</code></pre>
<p>Thank you. </p>
<p>++++ UPDATE ++++
I reset the machine-id as mentioned here: <a href="https://github.com/weaveworks/weave/issues/2767" rel="nofollow noreferrer">https://github.com/weaveworks/weave/issues/2767</a>. But this causes my machines to reboot constantly!</p>
<pre><code>kernel:[ 2257.674153] Internal error: Oops: 80000007 [#1] SMP ARM
</code></pre>
| <p>Finally I found the solution here: <a href="https://github.com/weaveworks/weave/issues/3314" rel="nofollow noreferrer">https://github.com/weaveworks/weave/issues/3314</a>
We have to disable fastDP!</p>
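<p>For reference, this is roughly how fastdp can be disabled on an existing installation. It is a sketch assuming the standard <code>weave-net</code> DaemonSet in <code>kube-system</code>; per the Weave Net docs, setting the <code>WEAVE_NO_FASTDP</code> environment variable makes Weave fall back to the slower <code>sleeve</code> mode:</p>
<pre><code># Add WEAVE_NO_FASTDP to the weave container; the DaemonSet rolls the pods
kubectl set env daemonset/weave-net -n kube-system -c weave WEAVE_NO_FASTDP=1

# Verify: "Bridge type" should no longer report bridged_fastdp
kubectl logs -n kube-system -l name=weave-net -c weave | grep "Bridge type"
</code></pre>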
|
<p>I'm reading about Secrets, but I can't see the value of using them to store a user name and password compared to ConfigMaps.</p>
<p>The content is not encrypted anyway, and the way to handle or access a Secret is exactly the same as for a ConfigMap.</p>
<p>And I think the same applies to storing certificates: I could also use ConfigMaps.</p>
<p>The only use I see is storing the credentials to access a Docker registry, because that has a predefined structure understood by k8s.</p>
<p>I found this question:
<a href="https://stackoverflow.com/questions/36912372/kubernetes-secrets-vs-configmaps?rq=1">Kubernetes Secrets vs ConfigMaps</a>
But it is more than a year old, which is many k8s versions ago.</p>
<p>What is the value of Secrets compared to ConfigMaps?</p>
| <blockquote>
<p>What is the value of Secrets compared to ConfigMaps?</p>
</blockquote>
<p>If you are the sole user of your cluster and you issue commands as root/master/admin then, as you stated, the value is probably minimal. But if your cluster is used by multiple users and you want fine-grained access rights using RBAC, as described in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">documentation</a>, then you could, for example, deny access to Secrets and allow access to ConfigMaps for a particular user using RoleBindings or similar. Such a user could handle settings for a container without actually having access to credentials. We use such a setup in our CI/CD pipeline, for example.</p>
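<p>As an illustration (the namespace, Role, and user names below are hypothetical): RBAC is additive, so "denying" access to Secrets simply means granting a Role that lists ConfigMaps but not Secrets:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: configmap-editor
rules:
- apiGroups: [""]                # "" means the core API group
  resources: ["configmaps"]      # "secrets" is deliberately absent
  verbs: ["get", "list", "watch", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: configmap-editor-binding
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: configmap-editor
  apiGroup: rbac.authorization.k8s.io
</code></pre>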
|
<p>DNS resolution looks fine, but I cannot ping my service. What could be the reason?</p>
<p>From another pod in the cluster:</p>
<pre><code>$ ping backend
PING backend.default.svc.cluster.local (10.233.14.157) 56(84) bytes of data.
^C
--- backend.default.svc.cluster.local ping statistics ---
36 packets transmitted, 0 received, 100% packet loss, time 35816ms
</code></pre>
<p>EDIT:</p>
<p>The service definition:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: backend
name: backend
spec:
ports:
- name: api
protocol: TCP
port: 10000
selector:
app: backend
</code></pre>
<p>The deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
replicas: 1
selector:
matchLabels:
run: backend
replicas: 1
template:
metadata:
labels:
run: backend
spec:
containers:
- name: backend
image: nha/backend:latest
imagePullPolicy: Always
ports:
- name: api
containerPort: 10000
</code></pre>
<p>I can <code>curl</code> my service from the same container:</p>
<pre><code>kubectl exec -it backend-7f67c8cbd8-mf894 -- /bin/bash
root@backend-7f67c8cbd8-mf894:/# curl localhost:10000/my-endpoint
{"ok": "true"}
</code></pre>
<p>It looks like the endpoint on port <code>10000</code> does not get exposed though:</p>
<pre><code> kubectl get ep
NAME ENDPOINTS AGE
backend <none> 2h
</code></pre>
<p>Ping doesn't work with a Service's cluster IP such as 10.233.14.157, because it is a virtual IP. You should be able to ping a specific pod, but not a service.</p>
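<p>To check whether the Service actually works, test the TCP port it declares instead of ICMP, since kube-proxy only programs forwarding rules for the Service's declared ports. A sketch, using the service name and port from the question:</p>
<pre><code># From inside the cluster: hit the declared port, not ping
kubectl run -it --rm nettest --image=busybox --restart=Never -- \
  wget -qO- http://backend.default.svc.cluster.local:10000/my-endpoint
</code></pre>
<p>Note that in this particular case the request will keep failing even over TCP until the Service gets endpoints: <code>kubectl get ep</code> shows <code>&lt;none&gt;</code> because the Service selector (<code>app: backend</code>) does not match the pod labels (<code>run: backend</code>).</p>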
|
<p>I created a cluster on <code>Google Kubernetes Engine</code> with the <code>Cluster Autoscaler</code> option enabled.<br>
I want to configure the scaling behavior, such as <code>--scale-down-delay-after-delete</code>, according to <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md" rel="noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md</a>.</p>
<p>But I found no Pod or Deployment in kube-system that is the cluster autoscaler.</p>
<p>Anyone has ideas?</p>
<hr>
<p>Edit:
I am <strong>not</strong> talking about the <code>Horizontal Pod Autoscaler</code>.</p>
<p>And I hoped I could configure it like this:</p>
<pre><code>$ gcloud container clusters update cluster-1 --enable-autoscaling --scan-interval=5 --scale-down-unneeded-time=3m
ERROR: (gcloud.container.clusters.update) unrecognized arguments:
--scan-interval=5
--scale-down-unneeded-time=3m
</code></pre>
| <p>It is not possible according to <a href="https://github.com/kubernetes/autoscaler/issues/966" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/issues/966</a></p>
<p>Probably because there is no way to access the autoscaler executable (which is what it appears to be) on GKE.</p>
<p>You can't even view the logs of the autoscaler on GKE: <a href="https://github.com/kubernetes/autoscaler/issues/972" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/issues/972</a></p>
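<p>On GKE you can only set the node-count bounds per node pool; the autoscaler's internal tuning flags are managed by Google and not exposed. A sketch of what is supported (the node pool name below is assumed):</p>
<pre><code>gcloud container clusters update cluster-1 \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --node-pool=default-pool
</code></pre>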
|