<p>Currently I am trying to add a version number or build number to the Docker image I deploy on a Kubernetes cluster. Previously I was working only with the <code>:latest</code> tag, but with <code>latest</code> I ran into problems pulling the image from the Docker Hub registry. So now I want to tag my Docker image with the build number, like <code>&lt;image-name&gt;:{build-number}</code>.</p> <h2>Application Structure</h2> <p>In my Kubernetes deployment, I am using a deployment and a service. I define my image repository in my deployment file like the following:</p> <pre><code>containers:
  - name: test-kube-deployment-container
    image: samplekubernetes020/testimage:latest
    ports:
      - name: http
        containerPort: 8085
        protocol: TCP
</code></pre> <p>Here, instead of the latest tag, I want to put the build number with my image in the deployment YAML.</p> <ol> <li>Can I use an environment variable to hold the random build number, and reference it like <code>&lt;image-name&gt;:${buildnumber}</code>?</li> <li>If I want to use an environment variable that provides the random number, how can I generate a random number into an environment variable?</li> </ol> <p><strong>Updates On Image Version Implementation</strong></p> <p>My modified Jenkinsfile contains the following step to assign the image version number to the image, but I am still not getting the updated result after changes to the repository.</p> <p>I created a step like the following in the Jenkinsfile:</p> <pre><code>stage ('imagebuild') {
  steps {
    sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes /var/lib/jenkins/workspace/jpipeline/pipeline'
    sh 'docker login --username=my-username --password=my-password'
    sh "docker tag spacestudymilletech010/spacestudykubernetes:latest spacestudymilletech010/spacestudykubernetes:${VERSION}"
    sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
  }
}
</code></pre> <p>And my deployment YAML file contains the following:</p> <pre><code>containers:
  - name: test-kube-deployment-container
    image: spacestudymilletech010/spacestudykubernetes:latest
    ports:
      - name: http
        containerPort: 8085
        protocol: TCP
</code></pre> <p><strong>Confusions</strong></p> <p><strong>NB:</strong> Every time I check the Docker Hub repository, it shows the latest push status.</p> <p>So my confusions are:</p> <ol> <li>Is there any problem with pulling the latest image in my deployment.yaml file?</li> <li>Is the problem in how I am tagging the image on the machine where I build and push it?</li> </ol>
<p>The standard way, or at least the way that has worked for most of us, is to create versioned or tagged images. For example:</p> <pre><code>samplekubernetes020/testimage:1
samplekubernetes020/testimage:2
samplekubernetes020/testimage:3
...
...
</code></pre> <p>Now I will try to answer your actual question, which is <em>how do I update the image in my deployment when my image tag upgrades?</em></p> <p><strong>Enter Solution</strong></p> <ul> <li><p>When you compile and build a new image with the latest version of the code, <strong>tag it</strong> with an incremental unique version. This tag can be anything unique: a build number, etc.</p> </li> <li><p>Then push this tagged image to the Docker registry.</p> </li> <li><p>Once the image is uploaded, you can use <code>kubectl</code> or the Kubernetes API to update the deployment with the latest container image:</p> <p><code>kubectl set image deployment/my-deployment test-kube-deployment-container=samplekubernetes020/testimage:1 --record</code></p> </li> <li><p>The above set of steps generally takes place in your CI pipeline, where you store the image version (or the image:version pair) in an environment variable.</p> </li> </ul> <hr /> <p><strong>Update Post comment</strong></p> <p>Since you are using Jenkins, you can get the current <em>build number</em>, <em>commit id</em>, and many other variables in the Jenkinsfile itself, as Jenkins injects these values at build runtime. This works for me; take it as a reference.</p> <pre><code>environment {
  NAME = &quot;myapp&quot;
  VERSION = &quot;${env.BUILD_ID}-${env.GIT_COMMIT}&quot;
  IMAGE = &quot;${NAME}:${VERSION}&quot;
}

stages {
  stage('Build') {
    steps {
      echo &quot;Running ${VERSION} on ${env.JENKINS_URL}&quot;
      git branch: &quot;${BRANCH}&quot;, .....
      echo &quot;for branch ${env.BRANCH_NAME}&quot;
      sh &quot;docker build -t ${NAME} .&quot;
      sh &quot;docker tag ${NAME}:latest ${IMAGE_REPO}/${NAME}:${VERSION}&quot;
    }
  }
}
</code></pre>
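The tag composition from the <code>environment</code> block above can be sketched in plain shell; the build id and commit values here are hypothetical stand-ins for what Jenkins actually injects:

```shell
# Hypothetical stand-ins for Jenkins-injected values (env.BUILD_ID, env.GIT_COMMIT)
BUILD_ID="42"
GIT_COMMIT="a1b2c3d"
NAME="myapp"

# Compose a unique tag per build, as in the environment block above
VERSION="${BUILD_ID}-${GIT_COMMIT}"
IMAGE="${NAME}:${VERSION}"
echo "$IMAGE"
```

The same <code>$IMAGE</code> value would then be used in both the <code>docker push</code> and the <code>kubectl set image</code> steps, so the deployment never references <code>:latest</code> at all.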
<p>I have 3 GKE clusters sitting in 3 different regions on Google Cloud Platform. I would like to create a Kafka cluster which has one Zookeeper and one Kafka node (broker) in every region (each GKE cluster).</p> <p>This set-up is intended to survive regional failure (I know a whole GCP region going down is rare and highly unlikely).</p> <p>I am trying this set-up using this <a href="https://github.com/helm/charts/tree/master/incubator/kafka" rel="nofollow noreferrer">Helm Chart</a> provided by Incubator.</p> <p>I tried this setup manually on 3 GCP VMs following <a href="https://codeforgeek.com/how-to-setup-zookeeper-cluster-for-kafka/" rel="nofollow noreferrer">this guide</a> and I was able to do it without any issues.</p> <p>However, setting up a Kafka cluster on Kubernetes seems complicated.</p> <p>As we know, we have to provide the IPs of all the Zookeeper servers in each Zookeeper configuration file like below:</p> <pre><code>...
# list of servers
server.1=0.0.0.0:2888:3888
server.2=&lt;ip of second server&gt;:2888:3888
server.3=&lt;ip of third server&gt;:2888:3888
...
</code></pre> <p>As I can see in the Helm chart, the <a href="https://github.com/helm/charts/blob/ca895761948d577df1cb37243b6afaf7b077bac3/incubator/zookeeper/templates/config-script.yaml#L82" rel="nofollow noreferrer">config-script.yaml</a> file has a script which creates the Zookeeper configuration file for every deployment.</p> <p>The part of the script which <em>echoes</em> the Zookeeper servers looks like below:</p> <pre><code>...
for (( i=1; i&lt;=$ZK_REPLICAS; i++ ))
do
  echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" &gt;&gt; $ZK_CONFIG_FILE
done
...
</code></pre> <p>As of now, the configuration that this Helm chart creates has the below Zookeeper server in the configuration, with one replica (<em>replica</em> here means Kubernetes Pod replicas):</p> <pre><code>...
# "release-name" is the name of the Helm release
server.1=release-name-zookeeper-0.release-name-zookeeper-headless.default.svc.cluster.local:2888:3888
...
</code></pre> <p>At this point I am clueless and do not know what to do so that all the Zookeeper servers get included in the configuration file.</p> <p>How shall I modify the script?</p>
<p>I see you are trying to create a 3-node <em>Zookeeper</em> cluster on top of 3 different GKE clusters.</p> <p>This is not an easy task and I am sure there are multiple ways to achieve it, but I will show you one way in which it can be done, and I believe it should solve your problem.</p> <p>The first thing you need to do is create a LoadBalancer service for every Zookeeper instance. After the LoadBalancers are created, note down the IP addresses that got assigned (remember that by default these IP addresses are ephemeral, so you might want to change them later to static).</p> <p>The next thing to do is to create a <a href="https://cloud.google.com/compute/docs/internal-dns" rel="nofollow noreferrer">private DNS zone</a> on GCP and create A records for every Zookeeper LoadBalancer endpoint, e.g.:</p> <pre><code>release-name-zookeeper-1.zookeeper.internal.
release-name-zookeeper-2.zookeeper.internal.
release-name-zookeeper-3.zookeeper.internal.
</code></pre> <p>and in GCP it would look like this:<br><br> <a href="https://i.stack.imgur.com/7xjRJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7xjRJ.png" alt="dns"></a></p> <p>After that's done, just modify <a href="https://github.com/helm/charts/blob/ca895761948d577df1cb37243b6afaf7b077bac3/incubator/zookeeper/templates/config-script.yaml#L48" rel="nofollow noreferrer">this line</a>:</p> <pre><code>...
DOMAIN=`hostname -d`
...
</code></pre> <p>to something like this:</p> <pre><code>...
DOMAIN={{ .Values.domain }}
...
</code></pre> <p>and remember to set the <code>domain</code> variable in the <code>Values</code> file to <code>zookeeper.internal</code>,</p> <p>so in the end it should look like this:</p> <pre><code>DOMAIN=zookeeper.internal
</code></pre> <p>and it should generate the following config:</p> <pre><code>...
server.1=release-name-zookeeper-1.zookeeper.internal:2888:3888
server.2=release-name-zookeeper-2.zookeeper.internal:2888:3888
server.3=release-name-zookeeper-3.zookeeper.internal:2888:3888
...
</code></pre> <p>Let me know if it is helpful.</p>
<p>I am trying to run a shell script at a regular interval of 1 minute using a CronJob.</p> <p>I have created the following cron job in my OpenShift template:</p> <pre><code>- kind: CronJob
  apiVersion: batch/v2alpha1
  metadata:
    name: "${APPLICATION_NAME}"
  spec:
    schedule: "*/1 * * * *"
    jobTemplate:
      spec:
        template:
          spec:
            containers:
              - name: mycron-container
                image: alpine:3
                imagePullPolicy: IfNotPresent
                command: [ "/bin/sh" ]
                args: [ "/var/httpd-init/croyscript.sh" ]
                volumeMounts:
                  - name: script
                    mountPath: "/var/httpd-init/"
            volumes:
              - name: script
                configMap:
                  name: ${APPLICATION_NAME}-croyscript
            restartPolicy: OnFailure
            terminationGracePeriodSeconds: 0
    concurrencyPolicy: Replace
</code></pre> <p>The following is the ConfigMap inserted as a volume in this job:</p> <pre><code>- kind: ConfigMap
  apiVersion: v1
  metadata:
    name: ${APPLICATION_NAME}-croyscript
    labels:
      app: "${APPLICATION_NAME}"
  data:
    croyscript.sh: |
      #!/bin/sh
      if [ "${APPLICATION_PATH}" != "" ]; then
        mkdir -p /var/httpd-resources/${APPLICATION_PATH}
      fi
      mkdir temp
      cd temp
      ###### SOME CODE ######
</code></pre> <p>This cron job is running: I can see the name of the job getting replaced every 1 minute (as scheduled in my job). But it is not executing the shell script croyscript.sh.</p> <p>Am I doing anything wrong here? (Maybe I have inserted the ConfigMap in a wrong way, so the job is not able to access the shell script.)</p>
<h2>Try the below approach</h2> <p>Update the permissions on the ConfigMap volume:</p> <pre><code>volumes:
  - name: script
    configMap:
      name: ${APPLICATION_NAME}-croyscript
      defaultMode: 0777
</code></pre> <p>If this one doesn't work, most likely the script in the mounted volume has read-only permissions. Use an initContainer to copy the script to a different location, set appropriate permissions there, and use that location in the command parameter.</p>
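The initContainer fallback described above could be sketched like this (the <code>/opt/scripts</code> path and the container names are illustrative, not from the original template):

```yaml
# Sketch only: copy the read-only ConfigMap script into a writable emptyDir,
# make it executable there, and run it from that location.
spec:
  initContainers:
    - name: copy-script
      image: alpine:3
      command: [ "/bin/sh", "-c" ]
      args:
        - cp /var/httpd-init/croyscript.sh /opt/scripts/ &&
          chmod +x /opt/scripts/croyscript.sh
      volumeMounts:
        - name: script
          mountPath: /var/httpd-init/
        - name: scripts-rw
          mountPath: /opt/scripts/
  containers:
    - name: mycron-container
      image: alpine:3
      command: [ "/bin/sh" ]
      args: [ "/opt/scripts/croyscript.sh" ]
      volumeMounts:
        - name: scripts-rw
          mountPath: /opt/scripts/
  volumes:
    - name: script
      configMap:
        name: ${APPLICATION_NAME}-croyscript
    - name: scripts-rw
      emptyDir: {}
```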
<p>I am new to Kubernetes and I have been browsing and reading about why my external IP is not resolving.</p> <p>I am running minikube on an Ubuntu 16.04 distro.</p> <p>In the services overview of the dashboard I have this:</p> <pre><code>my-nginx | run: my-nginx | 10.0.0.11 | my-nginx:80 TCP my-nginx:32431 | TCP 192.168.42.71:80
</code></pre> <p>When I do an HTTP GET at <a href="http://192.168.42.165:32431/" rel="noreferrer">http://192.168.42.165:32431/</a> I get the nginx page.</p> <p>The configuration of the service is as follows:</p> <pre><code># Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2016-09-23T12:11:13Z
  labels:
    run: my-nginx
  name: my-nginx
  namespace: default
  resourceVersion: "4220"
  selfLink: /api/v1/namespaces/default/services/my-nginx
  uid: d24b617b-8186-11e6-a25b-9ed0bca2797a
spec:
  clusterIP: 10.0.0.11
  deprecatedPublicIPs:
  - 192.168.42.71
  externalIPs:
  - 192.168.42.71
  ports:
  - nodePort: 32431
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
</code></pre> <p>These are parts of my ifconfig:</p> <pre><code>virbr0    Link encap:Ethernet  HWaddr fe:54:00:37:8f:41
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4895 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8804 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:303527 (303.5 KB)  TX bytes:12601315 (12.6 MB)

virbr1    Link encap:Ethernet  HWaddr fe:54:00:9a:39:74
          inet addr:192.168.42.1  Bcast:192.168.42.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7462 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12176 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3357881 (3.3 MB)  TX bytes:88555007 (88.5 MB)

vnet0     Link encap:Ethernet  HWaddr fe:54:00:37:8f:41
          inet6 addr: fe80::fc54:ff:fe37:8f41/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4895 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21173 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:372057 (372.0 KB)  TX bytes:13248977 (13.2 MB)

vnet1     Link encap:Ethernet  HWaddr fe:54:00:9a:39:74
          inet addr:192.168.23.1  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: fe80::fc54:ff:fe9a:3974/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7462 errors:0 dropped:0 overruns:0 frame:0
          TX packets:81072 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3462349 (3.4 MB)  TX bytes:92936270 (92.9 MB)
</code></pre> <p>Does anyone have some pointers? Because I am lost.</p>
<p>Minikube doesn't support LoadBalancer services, so the service will never get an external IP.</p> <p>But you can access the service anyway through its node port.</p> <p>You can get the IP and port by running:</p> <pre><code>minikube service &lt;service_name&gt;
</code></pre> <p>(Adding the <code>--url</code> flag makes <code>minikube service</code> print just the URL instead of opening it in a browser.)</p>
<p>I want to deploy my service as a ClusterIP but am not able to apply it, given this error message:</p> <pre><code>[xetra11@x11-work coopr-infrastructure]$ kubectl apply -f teamcity-deployment.yaml
deployment.apps/teamcity unchanged
ingress.extensions/teamcity unchanged
The Service "teamcity" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when `type` is 'ClusterIP'
</code></pre> <p>This here is my .yaml file:</p> <pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: teamcity
  template:
    metadata:
      labels:
        app: teamcity
    spec:
      containers:
        - name: teamcity-server
          image: jetbrains/teamcity-server:latest
          ports:
            - containerPort: 8111
---
apiVersion: v1
kind: Service
metadata:
  name: teamcity
  labels:
    app: teamcity
spec:
  type: ClusterIP
  ports:
    - port: 8111
      targetPort: 8111
      protocol: TCP
  selector:
    app: teamcity
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: teamcity
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  backend:
    serviceName: teamcity
    servicePort: 8111
</code></pre>
<p>The error appears because the live <code>teamcity</code> service was originally created with a <code>nodePort</code>, and <code>kubectl apply</code> merges that field into your new ClusterIP spec, which is forbidden. Two ways around it:</p> <p>1) Apply the configuration to the resource by filename, forcing it:</p> <pre><code>kubectl apply -f [.yaml file] --force
</code></pre> <p>The resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'.</p> <p>2) If the first one fails, you can force-replace, i.e. delete and then re-create the resource:</p> <pre><code>kubectl replace --force -f teamcity-deployment.yaml
</code></pre> <p>Note that forced replacement with <code>--grace-period=0</code> immediately removes the resource from the API and bypasses graceful deletion; immediate deletion of some resources may result in inconsistency or data loss. Alternatively, simply run <code>kubectl delete service teamcity</code> and re-apply your file.</p>
<p>I have a simple Express.js server Dockerized, and when I run it like:</p> <pre><code>docker run -p 3000:3000 mytag:my-build-id
</code></pre> <p><a href="http://localhost:3000/" rel="nofollow noreferrer">http://localhost:3000/</a> responds just fine, and so does the LAN IP of my workstation, e.g. <a href="http://10.44.103.60:3000/" rel="nofollow noreferrer">http://10.44.103.60:3000/</a>.</p> <p>Now if I deploy this to MicroK8s with a service declaration like:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: my-service
spec:
  type: NodePort
  ports:
    - name: "3000"
      port: 3000
      targetPort: 3000
status:
  loadBalancer: {}
</code></pre> <p>and a pod specification like so (updated 2019-11-05):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-service
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-service
    spec:
      containers:
        - image: mytag:my-build-id
          name: my-service
          ports:
            - containerPort: 3000
          resources: {}
      restartPolicy: Always
status: {}
</code></pre> <p>and obtain the exposed NodePort via <code>kubectl get services</code> to be 32750, and then try to visit it on the MicroK8s host machine like so:</p> <p>curl <a href="http://127.0.0.1:32750" rel="nofollow noreferrer">http://127.0.0.1:32750</a></p> <p>then the request just hangs, and if I try to visit the LAN IP of the MicroK8s host from my workstation at <a href="http://192.168.191.248:32750/" rel="nofollow noreferrer">http://192.168.191.248:32750/</a>, the request is immediately refused.</p> <p>But if I try to port-forward into the pod with</p> <pre><code>kubectl port-forward my-service-5db955f57f-q869q 3000:3000
</code></pre> <p>then <a href="http://localhost:3000/" rel="nofollow noreferrer">http://localhost:3000/</a> works just fine.</p> <p>So the pod deployment seems to be working fine, and example services like the microbot-service work just fine on that cluster.</p> <p>I've made sure the Express.js server listens on all IPs with:</p> <pre><code>app.listen(port, '0.0.0.0', () =&gt; ...
</code></pre> <p>So what can be the issue?</p>
<p>You need to add a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">selector</a> to your service. This tells Kubernetes how to find the pods of your deployment. Additionally, you can use <code>nodePort</code> to pick the port number of your service explicitly. After doing that you will be able to curl your MicroK8s IP.</p> <p>Your Service YAML should look like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: my-service
spec:
  type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: 3000
      nodePort: 30001
  selector:
    name: my-service
status:
  loadBalancer: {}
</code></pre>
<p>We've got a Java application that generates Word documents using a 3rd party library (Asposee, but I don't think it matters here). The app is built from a simple Dockerfile:</p> <pre><code>FROM openjdk:10-jdk-slim
COPY target/*.jar /opt/
CMD $JAVA_HOME/bin/java $JAVA_OPTS -jar /opt/*.jar
</code></pre> <p>When we build the application locally (<code>mvn package</code> then <code>docker build</code>) and run the application inside <code>k8s</code> it works well.</p> <p>However, when we build the image in our CI/CD pipeline with Jenkins, we get a runtime exception when running through a specific process which apparently requires additional fonts:</p> <pre><code>Caused by: java.lang.NullPointerException: null
    at java.desktop/sun.awt.FontConfiguration.getVersion(FontConfiguration.java:1288)
    at java.desktop/sun.awt.FontConfiguration.readFontConfigFile(FontConfiguration.java:225)
    at java.desktop/sun.awt.FontConfiguration.init(FontConfiguration.java:107)
    at java.desktop/sun.awt.X11FontManager.createFontConfiguration(X11FontManager.java:765)
    at java.desktop/sun.font.SunFontManager$2.run(SunFontManager.java:440)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.desktop/sun.font.SunFontManager.&lt;init&gt;(SunFontManager.java:385)
    at java.desktop/sun.awt.FcFontManager.&lt;init&gt;(FcFontManager.java:35)
    at java.desktop/sun.awt.X11FontManager.&lt;init&gt;(X11FontManager.java:56)
</code></pre> <p>In that case the project is built in Jenkins, compiled by the Docker image <code>maven:3.5.4-jdk-10-slim</code>.</p> <p>I've checked both jar files (locally and from Jenkins) and the class files are the same (as expected).</p> <p>In both cases it's the same base image, so I don't understand what could be the difference.
Is something different in Docker when building locally vs inside another Docker container?</p> <p><strong>EDIT</strong></p> <p>We've looked into both docker images and found the following difference.</p> <p>Since locally built image <code>ls -l /usr/lib</code> returns:</p> <pre><code>drwxr-xr-x 2 root root 4096 May 3 2017 X11 drwxr-xr-x 5 root root 4096 Apr 26 00:00 apt drwxr-xr-x 2 root root 4096 May 26 08:31 binfmt.d drwxr-xr-x 2 root root 4096 Jun 6 01:50 cgmanager drwxr-xr-x 2 root root 4096 Jun 6 01:50 dbus-1.0 drwxr-xr-x 2 root root 4096 Jun 6 01:51 dconf drwxr-xr-x 3 root root 4096 Jun 6 01:51 debug drwxr-xr-x 3 root root 4096 Apr 20 10:08 dpkg drwxr-xr-x 2 root root 4096 Jun 6 01:50 environment.d drwxr-xr-x 3 root root 4096 Apr 25 04:56 gcc drwxr-xr-x 2 root root 4096 Jun 6 01:51 glib-networking drwxr-xr-x 2 root root 4096 Apr 26 00:00 init drwxr-xr-x 1 root root 4096 Jun 6 01:51 jvm drwxr-xr-x 3 root root 4096 Jun 6 01:50 kernel lrwxrwxrwx 1 root root 20 Mar 4 09:49 libnih-dbus.so.1 -&gt; libnih-dbus.so.1.0.0 -rw-r--r-- 1 root root 34824 Mar 4 09:49 libnih-dbus.so.1.0.0 lrwxrwxrwx 1 root root 15 Mar 4 09:49 libnih.so.1 -&gt; libnih.so.1.0.0 -rw-r--r-- 1 root root 92184 Mar 4 09:49 libnih.so.1.0.0 drwxr-xr-x 3 root root 4096 Mar 29 19:47 locale drwxr-xr-x 3 root root 4096 Jun 6 01:50 lsb drwxr-xr-x 1 root root 4096 Jul 21 2017 mime drwxr-xr-x 2 root root 4096 Jun 6 01:50 modprobe.d drwxr-xr-x 2 root root 4096 May 26 08:31 modules-load.d -rw-r--r-- 1 root root 198 Jan 13 23:36 os-release drwxr-xr-x 3 root root 4096 Jun 6 01:51 ssl drwxr-xr-x 1 root root 4096 Jun 6 01:50 systemd drwxr-xr-x 2 root root 4096 Jun 6 01:50 sysusers.d drwxr-xr-x 2 root root 4096 Jul 21 2017 tar drwxr-xr-x 15 root root 4096 Feb 11 20:06 terminfo drwxr-xr-x 1 root root 4096 Jun 6 01:50 tmpfiles.d drwxr-xr-x 1 root root 4096 Apr 26 00:00 udev drwxr-xr-x 1 root root 16384 Jun 6 01:51 x86_64-linux-gnu </code></pre> <p>But inside Jenkins built image <code>ls -l /usr/lib</code> 
returns:</p> <pre><code>drwxr-xr-x 5 root root 4096 Jun 25 00:00 apt drwxr-xr-x 3 root root 4096 Jul 3 01:00 debug drwxr-xr-x 3 root root 4096 Apr 20 10:08 dpkg drwxr-xr-x 3 root root 4096 Jun 17 03:36 gcc drwxr-xr-x 2 root root 4096 Jun 25 00:00 init drwxr-xr-x 1 root root 4096 Jul 3 01:00 jvm drwxr-xr-x 1 root root 4096 Jul 12 11:00 locale drwxr-xr-x 3 root root 4096 Jul 3 01:00 lsb drwxr-xr-x 1 root root 4096 May 16 07:47 mime -rw-r--r-- 1 root root 198 Jan 13 23:36 os-release drwxr-xr-x 3 root root 4096 Jul 3 01:00 ssl drwxr-xr-x 3 root root 4096 Apr 20 10:08 systemd drwxr-xr-x 2 root root 4096 May 16 07:47 tar drwxr-xr-x 15 root root 4096 May 21 08:54 terminfo drwxr-xr-x 2 root root 4096 Jun 25 00:00 tmpfiles.d drwxr-xr-x 3 root root 4096 Jun 25 00:00 udev drwxr-xr-x 2 root root 4096 May 3 2017 X11 drwxr-xr-x 1 root root 4096 Jul 3 01:00 x86_64-linux-gnu </code></pre> <p>This is really puzzling as I thought Docker would always produce the same image from identical Dockerfiles</p>
<p>With openjdk:8u111-jdk-alpine, installing DejaVu fixes the problem.</p> <p>For example:</p> <p>Dockerfile:</p> <pre><code>FROM openjdk:8u111-jdk-alpine

# Needed to fix 'Fontconfig warning: ignoring C.UTF-8: not a valid language tag'
ENV LANG en_GB.UTF-8

# The JRE fails to load fonts if there are no standard fonts in the image; DejaVu is a good choice,
# see https://github.com/docker-library/openjdk/issues/73#issuecomment-207816707
RUN apk add --update ttf-dejavu &amp;&amp; rm -rf /var/cache/apk/*

VOLUME /tmp
COPY /target/*.jar app.jar
ENTRYPOINT [&quot;java&quot;,&quot;-Xmx100m&quot;,&quot;-Djava.security.egd=file:/dev/./urandom&quot;,&quot;-jar&quot;,&quot;/app.jar&quot;]
</code></pre>
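The question's images are Debian-based (<code>openjdk:10-jdk-slim</code>), where <code>apk</code> is unavailable. An equivalent sketch using <code>apt-get</code>; the choice of <code>fontconfig</code> + <code>fonts-dejavu-core</code> as the Debian counterpart to Alpine's <code>ttf-dejavu</code> is my assumption, untested against this exact app:

```dockerfile
FROM openjdk:10-jdk-slim
# Assumption: fontconfig plus one font package satisfies sun.awt.FontConfiguration
# on Debian slim images, mirroring the Alpine ttf-dejavu fix above
RUN apt-get update \
 && apt-get install -y --no-install-recommends fontconfig fonts-dejavu-core \
 && rm -rf /var/lib/apt/lists/*
COPY target/*.jar /opt/
CMD $JAVA_HOME/bin/java $JAVA_OPTS -jar /opt/*.jar
```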
<p>It's not really Digital Ocean specific; it would be really nice to verify whether this is expected behavior or not.</p> <p>I'm trying to set up an ElasticSearch cluster on a DO managed Kubernetes cluster with the Helm chart from ElasticSearch <a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch" rel="nofollow noreferrer">itself</a>.</p> <p>They say that I need to specify a <code>storageClassName</code> in a <code>volumeClaimTemplate</code> in order to use the volume which is provided by the managed Kubernetes service. For DO it's <code>do-block-storage</code> according to their <a href="https://www.digitalocean.com/docs/kubernetes/how-to/add-volumes/" rel="nofollow noreferrer">docs</a>. Also it seems it's not necessary to define a PVC; the Helm chart should do it itself.</p> <p>Here's the config I'm using:</p> <pre><code># Specify node pool
nodeSelector:
  doks.digitalocean.com/node-pool: elasticsearch

# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"

# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"

# Specify Digital Ocean storage
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi

extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '/usr/share/elasticsearch/data/nodes/']
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
</code></pre> <p>I'm setting up the Helm chart with Terraform, but it doesn't matter anyway, whichever way you do it:</p> <pre><code>resource "helm_release" "elasticsearch" {
  name      = "elasticsearch"
  chart     = "elastic/elasticsearch"
  namespace = "elasticsearch"

  values = [
    file("charts/elasticsearch.yaml")
  ]
}
</code></pre> <p>Here's what I've got when checking pod events:</p> <pre><code>51s    Normal  Provisioning          persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2  External provisioner is provisioning volume for claim "elasticsearch/elasticsearch-master-elasticsearch-master-2"
2m28s  Normal  ExternalProvisioning  persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2  waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator
</code></pre> <p>I'm pretty sure the problem is the volume: it should've been automagically provided by Kubernetes.
Describing persistent storage gives this:</p> <pre><code>holms@debian ~/D/c/s/b/t/s/post-infra&gt; kubectl describe pvc elasticsearch-master-elasticsearch-master-0 --namespace elasticsearch Name: elasticsearch-master-elasticsearch-master-0 Namespace: elasticsearch StorageClass: do-block-storage Status: Pending Volume: Labels: app=elasticsearch-master Annotations: volume.beta.kubernetes.io/storage-provisioner: dobs.csi.digitalocean.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: elasticsearch-master-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Provisioning 4m57s (x176 over 14h) dobs.csi.digitalocean.com_master-setupad-eu_04e43747-fafb-11e9-b7dd-e6fd8fbff586 External provisioner is provisioning volume for claim "elasticsearch/elasticsearch-master-elasticsearch-master-0" Normal ExternalProvisioning 93s (x441 over 111m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "dobs.csi.digitalocean.com" or manually created by system administrator </code></pre> <p>I've google everything already, it seems to be everything is correct, and volume should be up withing DO side with no problems, but it hangs in pending state. Is this expected behavior or should I ask DO support to check what's going on their side?</p>
<p>Yes, this is expected behavior. This chart might not be compatible with Digital Ocean Kubernetes service.</p> <p><a href="https://www.digitalocean.com/docs/kubernetes/overview/#known-issues" rel="nofollow noreferrer">Digital Ocean</a> documentation has the following information in Known Issues section:</p> <blockquote> <ul> <li><p>Support for resizing DigitalOcean Block Storage Volumes in Kubernetes has not yet been implemented.</p></li> <li><p>In the DigitalOcean Control Panel, cluster resources (worker nodes, load balancers, and block storage volumes) are listed outside of the Kubernetes page. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with <code>kubectl</code> or from the control panel’s Kubernetes page.</p></li> </ul> </blockquote> <p>In the <a href="https://github.com/helm/charts/tree/master/stable/elasticsearch#prerequisites-details" rel="nofollow noreferrer">charts/stable/elasticsearch</a> there are specific requirements mentioned:</p> <blockquote> <h3>Prerequisites Details</h3> <ul> <li>Kubernetes 1.10+</li> <li>PV dynamic provisioning support on the underlying infrastructure</li> </ul> </blockquote> <p>You can ask Digital Ocean support for help or try to deploy ElasticSearch without helm chart.</p> <p>It is even mentioned on <a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch#usage-notes-and-getting-started" rel="nofollow noreferrer">github</a> that:</p> <blockquote> <p>Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine).</p> </blockquote> <hr> <p><strong>Update:</strong></p> <p>The same issue is present on my kubeadm ha cluster.</p> <p>However I managed to get it working by manually creating <code>PersistentVolumes</code>'s for my <code>storageclass</code>.</p> <p>My storageclass definition: 
<code>storageclass.yaml</code>:</p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-ssd
</code></pre> <pre><code>$ kubectl apply -f storageclass.yaml
</code></pre> <pre><code>$ kubectl get sc
NAME   PROVISIONER   AGE
ssd    local         50m
</code></pre> <p>My PersistentVolume definition, <code>pv.yaml</code>:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: ssd
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - &lt;name of the node&gt;
</code></pre> <pre><code>kubectl apply -f pv.yaml
</code></pre> <p>After that I ran the Helm chart:</p> <pre><code>helm install stable/elasticsearch --name my-release \
  --set data.persistence.storageClass=ssd,data.storage=30Gi \
  --set master.persistence.storageClass=ssd,master.storage=30Gi
</code></pre> <p>The PVC finally got bound:</p> <pre><code>$ kubectl get pvc -A
NAMESPACE   NAME                                      STATUS    VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     data-my-release-elasticsearch-data-0      Bound     task-pv-volume2   30Gi       RWO            ssd            17m
default     data-my-release-elasticsearch-master-0    Pending                                                              17m
</code></pre> <p>Note that I only manually satisfied a single PVC, and manual volume provisioning for ElasticSearch might be very inefficient.</p> <p>I suggest contacting DO support for an automated volume provisioning solution.</p>
<p>I have a 3-node cluster running on GKE. All the nodes are pre-emptible, meaning they can be killed at any time and generally do not live longer than 24 hours. In the event a node is killed, the autoscaler spins up a new node to replace it. This usually takes a minute or so when it happens.</p> <p>In my cluster I have a deployment with its replicas set to 3. My intention is that the pods will be spread across all the nodes, such that my application will still run as long as at least one node in my cluster is alive.</p> <p>I've used the following affinity configuration so that pods prefer running on hosts different from the ones already running pods of that deployment:</p> <pre><code>spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - my-app
            topologyKey: kubernetes.io/hostname
          weight: 100
</code></pre> <p>When I scale my application from 0 this seems to work as intended. But in practice the following happens:</p> <ol> <li>Let's say pods <code>A</code>, <code>B</code> and <code>C</code> belonging to the <code>my-app</code> replicaset are running on nodes <code>1</code>, <code>2</code> and <code>3</code> respectively. So the state would be:</li> </ol> <pre><code>1 -&gt; A
2 -&gt; B
3 -&gt; C
</code></pre> <ol start="2"> <li>Node 3 is killed, taking pod C with it and leaving 2 running pods in the replicaset.</li> <li>The scheduler automatically starts to schedule a new pod to bring the replicaset back up to 3.</li> <li>It looks for a node without any pods for <code>my-app</code>. As the autoscaler is still in the process of starting a replacement node (<code>4</code>), only <code>1</code> and <code>2</code> are available.</li> <li>It schedules the new pod <code>D</code> on node <code>1</code>.</li> <li>Node <code>4</code> eventually comes online, but as <code>my-app</code> has all its pods scheduled, it doesn't have any pods running on it.
The resultant state is</li> </ol> <pre><code> 1 -&gt; A, D 2 -&gt; B 4 -&gt; - </code></pre> <p>This is not the ideal configuration. The problem arises because there's a delay in creating the new node, and the scheduler is not aware that it'll be available very soon.</p> <p>Is there a better configuration that can ensure the pods will always be distributed across the nodes? I was thinking a directive like <code>preferredDuringSchedulingpreferredDuringExecution</code> might do it, but that doesn't exist.</p>
<p><em>preferredDuringSchedulingIgnoredDuringExecution</em> means it is a preference, not a hard requirement, which could explain the 1 -> A, D result.</p> <p>I believe you are searching for <em>requiredDuringSchedulingIgnoredDuringExecution</em> in conjunction with anti-affinity so that you have distributed workloads.</p> <p>Please have a look at this <a href="https://github.com/Aahzymandius/k8s-workshops/tree/master/2-scheduling" rel="nofollow noreferrer">GitHub repo</a> for more details and examples.</p>
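<p>For reference, a sketch of the hard variant in the pod spec (same label values as in the question; this is an illustration, not a drop-in config):</p>

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # Hard rule: a PodAffinityTerm directly, no weight wrapper.
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - my-app
          topologyKey: kubernetes.io/hostname
```

<p>With the hard rule, the replacement pod stays Pending until the autoscaler brings up the new node and is then scheduled there; the trade-off is reduced availability if no spare node ever becomes available.</p>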
<p>I am running a Python app in production, but my pod is restarting frequently in the production environment, while in the staging environment it's not happening.</p> <p>So I thought it could be a CPU &amp; memory limit issue, and I have updated that also.</p> <p>On further debugging I got exit code <code>137</code>.</p> <p>To debug further, I went inside the Kubernetes node and inspected the container.</p> <p>Command used: <code>docker inspect &lt; container id &gt;</code></p> <p>Here is the output:</p> <pre><code> { "Id": "a0f18cd48fb4bba66ef128581992e919c4ddba5e13d8b6a535a9cff6e1494fa6", "Created": "2019-11-04T12:47:14.929891668Z", "Path": "/bin/sh", "Args": [ "-c", "python3 run.py" ], "State": { "Status": "exited", "Running": false, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 0, "ExitCode": 137, "Error": "", "StartedAt": "2019-11-04T12:47:21.108670992Z", "FinishedAt": "2019-11-05T00:01:30.184225387Z" }, </code></pre> <p>OOMKilled is false, so I think that is not the issue.</p> <p>Using GKE master version: <code>1.13.10-gke.0</code></p>
<p>Exit code 137 is a <a href="https://bobcares.com/blog/error-137-docker/" rel="nofollow noreferrer">docker exit code</a> that tells us that the container was killed by the OOM killer. This does not mean that the container itself reached a memory limit or that it does not have sufficient memory to run. Since the OS-level OOM killer is killing the application, the pod and docker won't register OOM for the container itself, because it did not necessarily reach a memory limit.</p> <p>The doc linked above goes into some details on how to debug error 137, though you can also check your node metrics for memory usage or check the node logs to see if OOM was ever registered at the OS level.</p> <p>If this is a regular problem, make sure your python container includes limits, and make sure your other containers in the cluster have appropriate requests and limits set.</p>
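<p>If limits are not yet set, a minimal sketch for the Python container could look like the following (names and values here are illustrative, not taken from the question):</p>

```yaml
containers:
  - name: python-app
    image: python:3.7
    resources:
      # What the scheduler reserves for the pod.
      requests:
        memory: "256Mi"
        cpu: "250m"
      # Hard caps; exceeding the memory limit triggers an OOM kill
      # that IS registered against the container.
      limits:
        memory: "512Mi"
        cpu: "500m"
```

<p>Setting requests and limits everywhere also keeps one greedy container from starving the node and triggering the OS-level OOM killer against unrelated processes.</p>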
<p>I have configured a web application pod exposed via apache on port 80. I'm unable to configure a service + ingress for accessing from the internet. The issue is that the backend services always report as UNHEALTHY.</p> <p>Pod Config:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: name: webapp name: webapp namespace: my-app spec: replicas: 1 selector: matchLabels: name: webapp template: metadata: labels: name: webapp spec: containers: - image: asia.gcr.io/my-app/my-app:latest name: webapp ports: - containerPort: 80 name: http-server </code></pre> <p>Service Config:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: webapp-service spec: type: NodePort selector: name: webapp ports: - protocol: TCP port: 50000 targetPort: 80 </code></pre> <p>Ingress Config:</p> <pre><code>kind: Ingress metadata: name: webapp-ingress spec: backend: serviceName: webapp-service servicePort: 50000 </code></pre> <p>This results in backend services reporting as UNHEALTHY.</p> <p>The health check settings:</p> <pre><code>Path: / Protocol: HTTP Port: 32463 Proxy protocol: NONE </code></pre> <p>Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.</p>
<p>With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.</p> <p>Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.</p> <p>First, make sure your pod is serving traffic properly:</p> <p><code>kubectl exec [pod_name] -- wget localhost:80</code> </p> <p>If the application has <code>curl</code> built in, you can use that instead of <code>wget</code>. If the application has neither wget nor curl, skip to the next step.</p> <ol> <li>Get the following output and keep track of it: <blockquote> <p>kubectl get po -l name=webapp -o wide<br> kubectl get svc webapp-service</p> </blockquote></li> </ol> <p>You need to keep the service and pod cluster IPs.</p> <ol start="2"> <li><p>SSH to a node in your cluster and run <code>sudo toolbox bash</code></p></li> <li><p>Install curl:</p></li> </ol> <blockquote> <p>apt-get install curl</p> </blockquote> <ol start="4"> <li>Test the pods to make sure they are serving traffic within the cluster:</li> </ol> <blockquote> <p>curl -I [pod_clusterIP]:80</p> </blockquote> <p>This needs to return a 200 response.</p> <ol start="5"> <li>Test the service:</li> </ol> <blockquote> <p>curl -I [service_clusterIP]:80</p> </blockquote> <p>If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.</p> <p>If the pod is working but the service is not, there is an issue with the routes in your iptables, which are managed by kube-proxy, and that would be an issue with the cluster.</p> <p>Finally, if both the pod and the service are working, there is an issue with the Load balancer health checks and also an issue that Google needs to investigate.</p>
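<p>If the pod and service both check out, adding an explicit <code>readinessProbe</code> lets GKE derive a matching health check for the load balancer. A sketch for the webapp container from the question (the path and timing values are assumptions, adjust them for your app):</p>

```yaml
containers:
  - image: asia.gcr.io/my-app/my-app:latest
    name: webapp
    ports:
      - containerPort: 80
        name: http-server
    readinessProbe:
      httpGet:
        path: /        # must return 200 for the LB to mark the backend healthy
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
```

<p>Recreate the ingress after adding the probe so the derived health check picks up the new path and port.</p>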
<p>I have created 2 ConfigMaps named personservice and personservice-dev.</p> <p>I am running a Spring Boot application with profile dev, but it is not loading the right ConfigMap. This is what I see in the logs of the pod that crashes.</p> <pre><code> 2019-11-05 16:29:37.336 INFO [personservice,,,] 7 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: CompositePropertySource {name='composite-configmap', propertySources=[ConfigMapPropertySource {name='configmap.personservice.default'}]} 2019-11-05 16:29:37.341 INFO [personservice,,,] 7 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: SecretsPropertySource {name='secrets.personservice.default'} 2019-11-05 16:29:37.445 INFO [personservice,,,] 7 --- [ main] c.person.PersonMicroServiceApplication : The following profiles are active: kubernetes,dev </code></pre> <p><strong>Kubectl get configmaps</strong></p> <p><a href="https://i.stack.imgur.com/BioKS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BioKS.png" alt="enter image description here"></a></p> <p><strong>Deployment file:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: personservice labels: app: personservice spec: replicas: 1 selector: matchLabels: app: personservice template: metadata: labels: app: personservice spec: containers: - name: personservice image: microservice-k8s/personmicroservice-k8s:1.0 ports: - containerPort: 8080 env: - name: PROFILE value: "dev" - name: SERVER_PORT value: "8080" - name: ZIPKIN_URI value: "http://172.19.27.145:9411" </code></pre> <p><strong>Bootstrap:</strong></p> <pre><code>spring: application: name: personservice </code></pre>
<p>You have confused things. Your configmap is named <code>personservice-dev</code>, but your application's name is <code>personservice</code>, not <code>personservice-dev</code>. By default Spring Cloud Kubernetes looks for a configmap whose name equals <code>spring.application.name</code>, not <code>spring.application.name-{profile}</code>.</p> <p>You have 2 ways to solve your problem:</p> <p>1-Remove <code>personservice-dev</code> and put the profile-specific documents in your <code>personservice</code> configmap:</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: personservice data: application.yml: |- p1: pa: blabla --- spring: profiles: dev p1: pa: blibli --- spring: profiles: prod p1: pa: blublu </code></pre> <p>2-Keep <code>personservice-dev</code> and <code>personservice</code> and define this in <code>bootstrap.yml</code>:</p> <pre><code>spring: cloud: kubernetes: config: name: ${spring.application.name} #This is optional sources: - name: ${spring.application.name}-${PROFILE} # Here you get your `personservice-dev` configmap </code></pre>
<p>I am setting up my default namespace in my kubernetes cluster to allow incoming traffic from external nodes/hosts but deny any possible inter-pod communication. I have 2 nginx pods which I want to completely isolate inside the cluster. Both pods are exposed with a service of type NodePort and they are accessible from outside.</p> <p>I first apply the following default deny network policy:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny spec: podSelector: {} policyTypes: - Ingress </code></pre> <p>Then, I try allowing external traffic with the following network policy:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-external spec: podSelector: {} ingress: - from: - ipBlock: cidr: 192.168.0.0/16 </code></pre> <p>But unfortunately I am not able to access the service either from outside or from inside my cluster.</p> <p>Running the example on:</p> <ul> <li>macOS High Sierra v10.13.6</li> <li>minikube v1.5.2 (with network plugin = cilium)</li> <li>kubectl v1.16.2</li> </ul> <p>How can I approach this problem?</p>
<p>If you want to allow any incoming traffic to any pod except traffic that originates from your cluster, you can use the "except" notation in a rule that allows traffic from all IPs. In the policy below, replace <code>172.17.1.0/24</code> with the CIDR containing your pods:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all-internal spec: podSelector: {} policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 0.0.0.0/0 except: - 172.17.1.0/24 </code></pre>
<p>I'm running Kubernetes/Docker on Google Container Optimized OS on a GCE instance. When I run <code>docker info</code> it says</p> <pre><code>$ docker info Containers: 116 Running: 97 Paused: 0 Stopped: 19 Images: 8 Server Version: 1.11.2 Storage Driver: overlay Backing Filesystem: extfs Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: null host bridge Kernel Version: 4.4.21+ Operating System: Container-Optimized OS from Google OSType: linux Architecture: x86_64 CPUs: 4 Total Memory: 14.67 GiB Name: REDACTED ID: REDACTED Docker Root Dir: /var/lib/docker Debug mode (client): false Debug mode (server): false Registry: https://index.docker.io/v1/ WARNING: No swap limit support </code></pre> <p>The last line says that there is no swap limit support. I'm having trouble figuring out how to enable swap limit support. I found instructions for Ubuntu/Debian <a href="https://docs.docker.com/install/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities" rel="nofollow noreferrer">here</a>.</p> <p>My problem is that my docker containers get OOMKilled as soon as they reach their memory limit instead of trying swapping. I want the containers to use swap as a buffer instead of dying immediately.</p>
<p>Container-Optimized OS (COS) is actually configured with swap disabled completely. You can verify this via running <code>cat /proc/meminfo | grep SwapTotal</code> in a COS VM, which will say that it is configured to 0 kB.</p> <p>I'm not sure whether it's a good idea to enable swap in your environment, as it may cause more problems (e.g. disk IO starvation/slowdown, kernel hung) if you are using swap frequently.</p> <p>But if you wanna try it out, these commands might help you (run all of them as root):</p> <pre><code>cos-swap / # sysctl vm.disk_based_swap=1 vm.disk_based_swap = 1 cos-swap / # fallocate -l 1G /var/swapfile cos-swap / # chmod 600 /var/swapfile cos-swap / # mkswap /var/swapfile Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes) no label, UUID=406d3dfc-3780-44bf-8add-d19a24fdbbbb cos-swap / # swapon /var/swapfile cos-swap / # cat /proc/meminfo | grep Swap SwapCached: 0 kB SwapTotal: 1048572 kB SwapFree: 1048572 kB </code></pre>
<p>I have a Dockerfile running Kong-api to deploy on OpenShift. It builds okay, but when I check the pods I get <code>Back-off restarting failed container</code>. Here is my Dockerfile:</p> <pre><code>FROM ubuntu:18.04 RUN apt-get update &amp;&amp; apt-get install -y apt-transport-https curl lsb-core RUN echo "deb https://kong.bintray.com/kong-deb `lsb_release -sc` main" | tee -a /etc/apt/sources.list RUN curl -o bintray.key https://bintray.com/user/downloadSubjectPublicKey?username=bintray RUN apt-key add bintray.key RUN apt-get update &amp;&amp; apt-get install -y kong COPY kong.conf /etc/kong/ RUN kong migrations bootstrap [-c /etc/kong/kong.conf] EXPOSE 8000 8443 8001 8444 ENTRYPOINT ["kong", "start", "[-c /etc/kong/kong.conf]"] </code></pre> <p>Where am I wrong? Please help me. Thanks in advance.</p>
<p>In order to make Kong start correctly, you need to execute these commands when you have an active Postgres connection:</p> <pre><code>kong migrations bootstrap &amp;&amp; kong migrations up </code></pre> <p>Also, note that the format of the current Dockerfile is not valid; if you would like to pass options within the <code>ENTRYPOINT</code> you can write it like this:</p> <pre><code>ENTRYPOINT ["kong", "start","-c", "/etc/kong/kong.conf"] </code></pre> <p>Also, you need to remove this line:</p> <pre><code>RUN kong migrations bootstrap [-c /etc/kong/kong.conf] </code></pre> <p>The format of the above line is not valid, as <code>RUN</code> expects a normal shell command, so using <code>[]</code> in this case is not correct.</p> <p>As you deploy to OpenShift, there are several ways to achieve what you need.</p> <ul> <li>You can make use of <a href="https://docs.openshift.com/container-platform/4.1/nodes/containers/nodes-containers-init.html" rel="nofollow noreferrer">initContainers</a>, which allow you to execute the required commands before the actual service is up.</li> <li>You can check <a href="https://github.com/helm/charts/tree/master/stable/kong" rel="nofollow noreferrer">the official helm chart for Kong</a> in order to know how it works, or use helm to install Kong itself.</li> </ul>
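<p>Putting those corrections together, the tail of the Dockerfile would look roughly like this (with the migration step moved out of the image build, as discussed above):</p>

```dockerfile
# Migrations are NOT run at build time; run them against a live
# Postgres instance (e.g. from an initContainer) before starting Kong.
COPY kong.conf /etc/kong/
EXPOSE 8000 8443 8001 8444
ENTRYPOINT ["kong", "start", "-c", "/etc/kong/kong.conf"]
```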
<p>I'm looking to redirect some logs from a command run with <code>kubectl exec</code> to that pod's logs, so that they can be read with <code>kubectl logs &lt;pod-name&gt;</code> (or really, <code>/var/log/containers/&lt;pod-name&gt;.log</code>). I can see the logs I need as output when running the command, and they're stored inside a separate log directory inside the running container. </p> <p>Redirecting the output (i.e. <code>&gt;&gt; logfile.log</code>) to the file which I thought was mirroring what is in <code>kubectl logs &lt;pod-name&gt;</code> does not update that container's logs, and neither does redirecting to stdout.</p> <p>When calling <code>kubectl logs &lt;pod-name&gt;</code>, my understanding is that kubelet gets them from it's internal <code>/var/log/containers/</code> directory. But what determines which logs are stored there? Is it the same process as the way logs get stored inside any other docker container?</p> <p>Is there a way to examine/trace the logging process, or determine where these logs are coming from?</p>
<p>Logs from the <code>STDOUT</code> and <code>STDERR</code> of containers in the pod are captured and stored inside files in /var/log/containers. This is what is presented when <code>kubectl logs</code> is run.</p> <p>In order to understand why output from commands run by kubectl exec is not shown when running <code>kubectl logs</code>, let's have a look at how it all works with an example:</p> <p>First launch a pod running ubuntu that sleeps forever:</p> <pre><code>$&gt; kubectl run test --image=ubuntu --restart=Never -- sleep infinity </code></pre> <p>Exec into it:</p> <pre><code>$&gt; kubectl exec -it test bash </code></pre> <p>Seen from inside the container, it is the <code>STDOUT</code> and <code>STDERR</code> of PID 1 that are being captured. When you do a <code>kubectl exec</code> into the container, a new process is created living alongside PID 1:</p> <pre><code>root@test:/# ps -auxf USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 7 0.0 0.0 18504 3400 pts/0 Ss 20:04 0:00 bash root 19 0.0 0.0 34396 2908 pts/0 R+ 20:07 0:00 \_ ps -auxf root 1 0.0 0.0 4528 836 ? Ss 20:03 0:00 sleep infinity </code></pre> <p>Redirecting to <code>STDOUT</code> is not working because <code>/dev/stdout</code> is a symlink to the process accessing it (<code>/proc/self/fd/1</code> rather than <code>/proc/1/fd/1</code>).</p> <pre><code>root@test:/# ls -lrt /dev/stdout lrwxrwxrwx 1 root root 15 Nov 5 20:03 /dev/stdout -&gt; /proc/self/fd/1 </code></pre> <p>In order to see the logs from commands run with <code>kubectl exec</code>, the logs need to be redirected to the streams that are captured by the kubelet (<code>STDOUT</code> and <code>STDERR</code> of pid 1). This can be done by redirecting output to <code>/proc/1/fd/1</code>.</p> <pre><code>root@test:/# echo "Hello" &gt; /proc/1/fd/1 </code></pre> <p>Exiting the interactive shell and checking the logs using <code>kubectl logs</code> should now show the output:</p> <pre><code>$&gt; kubectl logs test Hello </code></pre>
<p>I've 3 angular applications that are deployed on Kubernetes. I'm trying to run all 3 with just one hostname and different paths. Here is my ingress.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 name: test-ingress-deployment namespace: my-namespace spec: tls: - hosts: - dns-name.eastus.cloudapp.azure.com secretName: aks-ingress-tls rules: - host: dns-name.eastus.cloudapp.azure.com http: paths: - backend: serviceName: ui-svc servicePort: 80 path: /(.*) - backend: serviceName: ui-svc-one servicePort: 80 path: /one/?(.*) - backend: serviceName: ui-svc-two servicePort: 80 path: /two/?(.*) </code></pre> <p>All these 3 services are in different namespaces. I'm getting 503 for every endpoint I'm trying to hit, after applying ingress.</p>
<p>Documentation around this is scarce, at least I wasn't able to find something except for Github issues shedding a little light into this issue. But as far as I know cross namespace access was intentionally avoided in K8s, it would be a prime source of privilege escalation attacks.</p> <p>To my knowledge you do have two options:</p> <ol> <li>You can run your ingress and ingress controller inside the <code>kube-system</code> namespace. But this is not recommended as <code>kube-system</code> is meant for K8s components only and it most likely creates security concerns.</li> <li>You can have the ingress in the namespace your service resides in. But then you need an ingress controller which allows merging of rules which not all support afaik.</li> </ol> <p>There are probably more solutions out there, maybe someone with more in-depth K8s knowledge can shed more light on it.</p>
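<p>A workaround that is sometimes suggested (support varies by ingress controller, so treat this as a sketch) is to mirror each remote service into the ingress's namespace with an <code>ExternalName</code> service pointing at the cluster-internal DNS name of the real service, e.g. for one of the services from the question (the remote namespace name is an assumption):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ui-svc-one
  namespace: my-namespace   # same namespace as the ingress
spec:
  type: ExternalName
  # DNS alias to the real service in its own namespace
  externalName: ui-svc-one.other-namespace.svc.cluster.local
```

<p>The ingress then references <code>ui-svc-one</code> locally, and DNS resolution does the cross-namespace hop.</p>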
<p>I'm using kubernetes on my cluster with several rails / node docker images. Most of them have a <code>:3000/healthz</code> health check that simply returns status 200 with OK in the body.</p> <p>Now I'm trying to discover the best way this health check can be performed on a docker image running Sidekiq. How can I verify that the worker is running?</p>
<p>If your image is Unix-like, you can check whether the process is running with</p> <pre><code>$ ps aux | grep '[s]idekiq' </code></pre> <p>But this doesn't guarantee that everything is working inside Sidekiq and Redis.</p> <p>A better approach is described/developed in this Sidekiq plugin: <a href="https://github.com/arturictus/sidekiq_alive" rel="nofollow noreferrer">https://github.com/arturictus/sidekiq_alive</a> </p> <p>I'm facing problems with <code>livenessProbe</code> on k8s myself and am trying to solve it without using this lib, but haven't been successful yet.</p>
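<p>Wiring the process check above into a pod spec could look like this (a sketch only; as noted, a running process does not guarantee that Sidekiq and Redis are actually healthy):</p>

```yaml
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      # grep exits 0 when a sidekiq process is found, non-zero otherwise,
      # which is exactly the pass/fail signal the kubelet expects.
      - ps aux | grep '[s]idekiq'
  initialDelaySeconds: 30
  periodSeconds: 60
```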
<p>I have a question about Kafka in Kubernetes, specifically autoscaling...</p> <p>Let's say I have 3 Kafka Brokers in 3 Pods in Kubernetes, there is a TopicA with 5 partitions (P1, P2, P3, P4, P5), the replication factor is 3, all Brokers have their Persistent Volumes, and I have autoscaling in Kubernetes configured so that if it detects, let's say, 80% CPU/memory usage in the Kafka Pods it will start additional Pods for Kafka Brokers...</p> <p>If I am not completely wrong, Kafka will detect the extra instances via Zookeeper and can shift Partitions (so let's say P1, P2 were at Broker1, P3, P4 were at Broker2 and P5 was at Broker3), so when a new Pod comes into the picture I would expect something like the following: P1 at Broker1, P3, P4 at Broker2, P5 at Broker3 and P2 at Broker4.</p> <p>So my first question is, are the above assumptions correct and does Kafka behave like this or not?</p> <p>The second question is about downscaling: let's say the load peak is gone and we don't need Pod4 anymore. Can Kubernetes shut down the Pod and can Kafka return to the 3-Broker configuration? That is the part I am not sure of. While I have replication factor 3, the 2 other brokers should be able to continue to work, but can Kafka pull Partition P2 back to Broker 1, 2 or 3?</p> <p>And the final question would be: if Kubernetes spawned Pods 5, 6 and 7, can we downscale to the 3-Pod configuration again?</p> <p>Thanks for any answers.</p>
<blockquote> <p>Kafka will detect over Zookeeper extra instances and can shift Partitions</p> </blockquote> <p>Partitions will not be rebalanced when expanding a cluster.</p> <p>In the case of a downscale, partitions must be moved off the brokers before they can be removed from the cluster, otherwise you'll have permanently offline partitions that cannot replicate. And you need to be conscious of disk utilization when shrinking a cluster, as a partition is limited in size by the smallest data directory.</p> <p>Kubernetes itself won't help Kafka perform these operations, and in non-k8s environments this process is mostly manual, but it can be scripted (see kafka-kit by Datadog). I believe that k8s operators such as the Strimzi operator could make data rebalancing easier when scaling; however, at the time of this answer, it doesn't support automatic reassignment, as per what I see <a href="https://strimzi.io/docs/latest/#scaling-clusters-deployment-configuration-kafka" rel="nofollow noreferrer">in the docs</a>.</p>
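<p>For reference, moving partitions off a broker is done with the stock <code>kafka-reassign-partitions</code> tool and a JSON plan. A minimal sketch for one partition of the topic from the question (the broker ids chosen here are illustrative) looks like:</p>

```json
{
  "version": 1,
  "partitions": [
    { "topic": "TopicA", "partition": 4, "replicas": [1, 2, 3] }
  ]
}
```

<p>The plan is applied with <code>kafka-reassign-partitions.sh --reassignment-json-file plan.json --execute</code>, plus the connection flags appropriate for your Kafka version, and the reassignment must complete before the broker's Pod is removed.</p>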
<p>I want to print the internal IP of all nodes in the same line separated by a space using <code>jq</code> in k8s. How can I do this?</p> <p>Using jsonpath I can filter using <code>.addresses[?(@.type=="InternalIP")]</code>. How to achieve the same with <code>jq</code>?</p>
<p>You could use <code>select</code> and pipes to achieve the desired output.</p> <p>The command below shows the internal IPs separated by newlines:</p> <pre><code>kubectl get nodes -o json | jq '.items[].status.addresses[] | select(.type=="InternalIP") | .address' </code></pre> <p>For space-separated internal IPs:</p> <pre><code>kubectl get nodes -o json | jq '.items[].status.addresses[] | select(.type=="InternalIP") | .address' | tr -d '\"' | tr '\n' ' ' </code></pre> <p>Alternatively, passing the <code>-r</code> flag to <code>jq</code> prints raw strings without quotes, which makes the <code>tr -d '\"'</code> step unnecessary.</p>
<p>I want to have a Kustomize manifest where the value for some attribute comes from the entire contents of some file or URI.</p> <p>How can I do this?</p>
<p>Usually <strong>with kustomize</strong> what you are going to use is an <a href="https://github.com/kubernetes-sigs/kustomize#2-create-variants-using-overlays" rel="nofollow noreferrer">overlay and patches</a> (which is one or multiple files) that are kind of merged into your base file. A Patch overrides an attribute. With those two features you predefine some probable manifest-compositions and just combine them right before you apply them to your cluster.</p> <p>You can add or edit/set some specific attributes <a href="https://kubectl.docs.kubernetes.io/pages/app_management/container_images.html" rel="nofollow noreferrer">with patches</a> or with kustomize subcommands like so:</p> <pre><code>kustomize edit set image your-registry.com:$image_tag # just to identify version in metadata sections for service/deployment/pods - not just via image tag kustomize edit add annotation appVersion:$image_tag kustomize build . | kubectl -n ${your_ns} apply -f - </code></pre> <p>But if you want to have a single manifest file and manipulate the same attributes over and over again (on the fly), you should consider using <strong>helm's templating mechanism</strong>. This is also an option if kustomize does not allow you to edit that single specific attribute you want to alter.</p> <p>You just need a <em>values.yaml</em> file (containing key/value pairs) and a <em>template.yaml</em> file. You can pre-set some attributes in the <em>values.yaml</em> - on demand you can override them per CLI. 
The tool will generate a k8s manifest with those values baked in.</p> <p>Template file:</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: {{ .Values.appSettings.appName }} namespace: {{ .Values.appSettings.namespace }} labels: name: {{ .Values.appSettings.appName }} spec: replicas: 1 template: metadata: labels: name: {{ .Values.appSettings.appName }} spec: containers: - name: {{ .Values.appSettings.appName }} image: "{{ .Values.appSettings.image }}" ports: - containerPort: 8080 [...] --- apiVersion: v1 kind: Service metadata: name: {{ .Values.appSettings.appName }}-svc namespace: {{ .Values.appSettings.namespace }} labels: name: {{ .Values.appSettings.appName }} spec: ports: - port: 8080 targetPort: 8080 selector: name: {{ .Values.appSettings.appName }} </code></pre> <p>Values file:</p> <pre><code>appSettings: appName: your-fancy-app appDomain: please_override_via_cli namespace: please_override_via_cli </code></pre> <p>CLI:</p> <pre><code>helm template --set appSettings.image=your-registry.com/service:$(cat versionTag.txt) --set appSettings.namespace=your-ns --set appSettings.appDomain=your-domain.com ./ -f ./values.yaml | kubectl apply -f - </code></pre>
<p>I'm trying to understand exactly how kubernetes command probes work, but the documentation is quite dry on this.</p> <p>Every example I found of kubernetes command probes gives the same kind of code:</p> <pre><code>livenessProbe: exec: command: - cat - /tmp/healthy </code></pre> <p>It seems possible to pass any command to the exec object. So my questions are:</p> <ol> <li>What would be other good examples of probe commands?</li> <li>How will kubernetes determine if the result of the command is a success or a failure?</li> </ol>
<p>You can pass any command as an <code>exec</code> probe.</p> <p>The health of the container is <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">determined by the exit code</a>. If the command succeeds, it returns <code>0</code>, and the <code>kubelet</code> considers the container to be alive and healthy. Any exit code other than <code>0</code> is considered unhealthy.</p> <p>Some applications provide binaries/scripts that are made for health checks.</p> <p>Examples:</p> <ul> <li><a href="https://github.com/helm/charts/blob/6b3367d471b789900526a0e2ecc0bafa8aeb4d5a/stable/rabbitmq/templates/statefulset.yaml#L152" rel="nofollow noreferrer">RabbitMQ</a>: Provides the <code>rabbitmq-api-check</code></li> <li><a href="https://github.com/helm/charts/blob/6b3367d471b789900526a0e2ecc0bafa8aeb4d5a/stable/postgresql/templates/statefulset.yaml#L175" rel="nofollow noreferrer">PostgreSQL</a>: Provides the <code>pg_isready</code></li> <li><a href="https://github.com/helm/charts/blob/6b3367d471b789900526a0e2ecc0bafa8aeb4d5a/stable/mysql/templates/deployment.yaml#L120" rel="nofollow noreferrer">MySQL</a>: Provides the <code>mysqladmin ping</code></li> </ul> <p>An <code>exec</code> probe is also useful when you need to define an entire script with the logic of your expected healthiness.</p>
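<p>The exit-code mechanics can be seen outside the cluster by running a probe-style command by hand; this mirrors what the kubelet does with the <code>cat /tmp/healthy</code> probe from the question (the file path is only an illustration):</p>

```shell
# Create the file the probe looks for, then run the probe command:
touch /tmp/healthy

if cat /tmp/healthy; then
    echo "exit code 0 -> kubelet treats the container as healthy"
fi

# Remove the file and run the same probe command again:
rm /tmp/healthy

if ! cat /tmp/healthy 2>/dev/null; then
    echo "non-zero exit code -> probe fails and the container is restarted"
fi
```

<p>Swapping <code>cat /tmp/healthy</code> for any command with meaningful exit codes, such as the health-check binaries listed above, gives the same behavior.</p>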
<p>I am following the Kubernetes tutorial: <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/#creating-the-redis-master-service" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateless-application/guestbook/#creating-the-redis-master-service</a></p> <p>However, there is one line I do not understand. In the frontend deployment there is a GET_HOSTS_FROM variable whose value is "dns". Is it evaluated further or does it remain as "dns"?</p> <p>This is the whole corresponding YAML:</p> <pre><code>#application/guestbook/frontend-deployment.yaml apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: frontend labels: app: guestbook spec: selector: matchLabels: app: guestbook tier: frontend replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # Using `GET_HOSTS_FROM=dns` requires your cluster to # provide a dns service. As of Kubernetes 1.3, DNS is a built-in # service launched automatically. However, if the cluster you are using # does not have a built-in DNS service, you can instead # access an environment variable to find the master # service's host. To do so, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 80 </code></pre>
<p>The value for the <code>GET_HOSTS_FROM</code> isn't evaluated any further - it remains as "dns".</p> <p>Looking at the application's source code <a href="https://github.com/kubernetes/examples/blob/master/guestbook/php-redis/guestbook.php" rel="noreferrer">here</a>, <code>GET_HOSTS_FROM</code> is used to determine if the hosts for the Redis primary and slave will come from the environment or, by default, be the Kubernetes service names for the <a href="https://github.com/kubernetes/examples/blob/master/guestbook/all-in-one/guestbook-all-in-one.yaml#L4" rel="noreferrer">primary</a> and the <a href="https://github.com/kubernetes/examples/blob/master/guestbook/all-in-one/guestbook-all-in-one.yaml#L49" rel="noreferrer">slave</a>:</p> <pre><code> $host = 'redis-master'; if (getenv('GET_HOSTS_FROM') == 'env') { $host = getenv('REDIS_MASTER_SERVICE_HOST'); } </code></pre> <p>When the host names are Kubernetes Service names, they will be resolved using cluster's <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="noreferrer">DNS</a>.</p> <p>It is worth mentioning how pods can reference Services in the same vs. a different namespace (the excerpt is from the link to the docs given above):</p> <blockquote> <p>...if you have a Service called "my-service" in a Kubernetes Namespace "my-ns", the control plane and the DNS Service acting together create a DNS record for "my-service.my-ns". Pods in the "my-ns" Namespace should be able to find it by simply doing a name lookup for my-service ("my-service.my-ns" would also work).</p> </blockquote>
<p>I am receiving NoExecuteTaintManager events that are deleting my pod, but I can't figure out why. The node is healthy and the Pod has the appropriate tolerations.</p> <p>This is actually causing infinite scale-up because my Pod is set up so that it uses 3/4 of the Node's CPUs and has a Toleration Grace Period > 0. This forces a new node when a Pod terminates. Cluster Autoscaler tries to keep the replicas == 2.</p> <p>How do I figure out which taint is causing it specifically? And then, why does it think that node had that taint? Currently the pods are being killed at exactly 600 seconds (which is what I have changed <code>tolerationSeconds</code> to for <code>node.kubernetes.io/unreachable</code> and <code>node.kubernetes.io/not-ready</code>), however the node does not appear to be in either of those states.</p> <pre><code>NAME READY STATUS RESTARTS AGE my-api-67df7bd54c-dthbn 1/1 Running 0 8d my-api-67df7bd54c-mh564 1/1 Running 0 8d my-pod-6d7b698b5f-28rgw 1/1 Terminating 0 15m my-pod-6d7b698b5f-2wmmg 1/1 Terminating 0 13m my-pod-6d7b698b5f-4lmmg 1/1 Running 0 4m32s my-pod-6d7b698b5f-7m4gh 1/1 Terminating 0 71m my-pod-6d7b698b5f-8b47r 1/1 Terminating 0 27m my-pod-6d7b698b5f-bb58b 1/1 Running 0 2m29s my-pod-6d7b698b5f-dn26n 1/1 Terminating 0 25m my-pod-6d7b698b5f-jrnkg 1/1 Terminating 0 38m my-pod-6d7b698b5f-sswps 1/1 Terminating 0 36m my-pod-6d7b698b5f-vhqnf 1/1 Terminating 0 59m my-pod-6d7b698b5f-wkrtg 1/1 Terminating 0 50m my-pod-6d7b698b5f-z6p2c 1/1 Terminating 0 47m my-pod-6d7b698b5f-zplp6 1/1 Terminating 0 62m </code></pre> <pre><code>14:22:43.678937 8 taint_manager.go:102] NoExecuteTaintManager is deleting Pod: my-pod-6d7b698b5f-dn26n 14:22:43.679073 8 event.go:221] Event(v1.ObjectReference{Kind:"Pod", Namespace:"prod", Name:"my-pod-6d7b698b5f-dn26n", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Marking for deletion Pod prod/my-pod-6d7b698b5f-dn26n </code></pre> <pre><code># kubectl -n prod get pod 
my-pod-6d7b698b5f-8b47r -o yaml apiVersion: v1 kind: Pod metadata: annotations: checksum/config: bcdc41c616f736849a6bef9c726eec9bf704ce7d2c61736005a6fedda0ee14d0 kubernetes.io/psp: eks.privileged creationTimestamp: "2019-10-25T14:09:17Z" deletionGracePeriodSeconds: 172800 deletionTimestamp: "2019-10-27T14:20:40Z" generateName: my-pod-6d7b698b5f- labels: app.kubernetes.io/instance: my-pod app.kubernetes.io/name: my-pod pod-template-hash: 6d7b698b5f name: my-pod-6d7b698b5f-8b47r namespace: prod ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: my-pod-6d7b698b5f uid: c6360643-f6a6-11e9-9459-12ff96456b32 resourceVersion: "2408256" selfLink: /api/v1/namespaces/prod/pods/my-pod-6d7b698b5f-8b47r uid: 08197175-f731-11e9-9459-12ff96456b32 spec: containers: - args: - -c - from time import sleep; sleep(10000) command: - python envFrom: - secretRef: name: pix4d - secretRef: name: rabbitmq image: python:3.7-buster imagePullPolicy: Always name: my-pod ports: - containerPort: 5000 name: http protocol: TCP resources: requests: cpu: "3" terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-gv6q5 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: ip-10-142-54-235.ec2.internal nodeSelector: nodepool: zeroscaling-gpu-accelerated-p2-xlarge priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 172800 tolerations: - key: specialized operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 600 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 600 volumes: - name: default-token-gv6q5 secret: defaultMode: 420 secretName: default-token-gv6q5 status: conditions: - lastProbeTime: null 
lastTransitionTime: "2019-10-25T14:10:40Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-10-25T14:11:09Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-10-25T14:11:09Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-10-25T14:10:40Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://15e2e658c459a91a86573c1096931fa4ac345e06f26652da2a58dc3e3b3d5aa2 image: python:3.7-buster imageID: docker-pullable://python@sha256:f0db6711abee8d406121c9e057bc0f7605336e8148006164fea2c43809fe7977 lastState: {} name: my-pod ready: true restartCount: 0 state: running: startedAt: "2019-10-25T14:11:09Z" hostIP: 10.142.54.235 phase: Running podIP: 10.142.63.233 qosClass: Burstable startTime: "2019-10-25T14:10:40Z" </code></pre> <pre><code># kubectl -n prod describe pod my-pod-6d7b698b5f-8b47r Name: my-pod-6d7b698b5f-8b47r Namespace: prod Priority: 0 PriorityClassName: &lt;none&gt; Node: ip-10-142-54-235.ec2.internal/10.142.54.235 Start Time: Fri, 25 Oct 2019 10:10:40 -0400 Labels: app.kubernetes.io/instance=my-pod app.kubernetes.io/name=my-pod pod-template-hash=6d7b698b5f Annotations: checksum/config: bcdc41c616f736849a6bef9c726eec9bf704ce7d2c61736005a6fedda0ee14d0 kubernetes.io/psp: eks.privileged Status: Terminating (lasts 47h) Termination Grace Period: 172800s IP: 10.142.63.233 Controlled By: ReplicaSet/my-pod-6d7b698b5f Containers: my-pod: Container ID: docker://15e2e658c459a91a86573c1096931fa4ac345e06f26652da2a58dc3e3b3d5aa2 Image: python:3.7-buster Image ID: docker-pullable://python@sha256:f0db6711abee8d406121c9e057bc0f7605336e8148006164fea2c43809fe7977 Port: 5000/TCP Host Port: 0/TCP Command: python Args: -c from time import sleep; sleep(10000) State: Running Started: Fri, 25 Oct 2019 10:11:09 -0400 Ready: True Restart Count: 0 Requests: cpu: 3 Environment Variables from: pix4d Secret Optional: false rabbitmq Secret Optional: false Environment: &lt;none&gt; 
Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-gv6q5 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-gv6q5: Type: Secret (a volume populated by a Secret) SecretName: default-token-gv6q5 Optional: false QoS Class: Burstable Node-Selectors: nodepool=zeroscaling-gpu-accelerated-p2-xlarge Tolerations: node.kubernetes.io/not-ready:NoExecute for 600s node.kubernetes.io/unreachable:NoExecute for 600s specialized Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 12m (x2 over 12m) default-scheduler 0/13 nodes are available: 1 Insufficient pods, 13 Insufficient cpu, 6 node(s) didn't match node selector. Normal TriggeredScaleUp 12m cluster-autoscaler pod triggered scale-up: [{prod-worker-gpu-accelerated-p2-xlarge 7-&gt;8 (max: 13)}] Warning FailedScheduling 11m (x5 over 11m) default-scheduler 0/14 nodes are available: 1 Insufficient pods, 1 node(s) had taints that the pod didn't tolerate, 13 Insufficient cpu, 6 node(s) didn't match node selector. 
Normal Scheduled 11m default-scheduler Successfully assigned prod/my-pod-6d7b698b5f-8b47r to ip-10-142-54-235.ec2.internal Normal Pulling 11m kubelet, ip-10-142-54-235.ec2.internal pulling image "python:3.7-buster" Normal Pulled 10m kubelet, ip-10-142-54-235.ec2.internal Successfully pulled image "python:3.7-buster" Normal Created 10m kubelet, ip-10-142-54-235.ec2.internal Created container Normal Started 10m kubelet, ip-10-142-54-235.ec2.internal Started container </code></pre> <pre><code># kubectl -n prod describe node ip-10-142-54-235.ec2.internal Name: ip-10-142-54-235.ec2.internal Roles: &lt;none&gt; Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=p2.xlarge beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=us-east-1 failure-domain.beta.kubernetes.io/zone=us-east-1b kubernetes.io/hostname=ip-10-142-54-235.ec2.internal nodepool=zeroscaling-gpu-accelerated-p2-xlarge Annotations: node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 25 Oct 2019 10:10:20 -0400 Taints: specialized=true:NoExecute Unschedulable: false Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Fri, 25 Oct 2019 10:23:11 -0400 Fri, 25 Oct 2019 10:10:19 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 25 Oct 2019 10:23:11 -0400 Fri, 25 Oct 2019 10:10:19 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 25 Oct 2019 10:23:11 -0400 Fri, 25 Oct 2019 10:10:19 -0400 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 25 Oct 2019 10:23:11 -0400 Fri, 25 Oct 2019 10:10:40 -0400 KubeletReady kubelet is posting ready status Addresses: InternalIP: 10.142.54.235 ExternalIP: 3.86.112.24 Hostname: ip-10-142-54-235.ec2.internal InternalDNS: ip-10-142-54-235.ec2.internal ExternalDNS: 
ec2-3-86-112-24.compute-1.amazonaws.com Capacity: attachable-volumes-aws-ebs: 39 cpu: 4 ephemeral-storage: 209702892Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 62872868Ki pods: 58 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 4 ephemeral-storage: 200777747706 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 61209892Ki pods: 58 System Info: Machine ID: 0e76fec3e06d41a6bf2c49a18fbe1795 System UUID: EC29973A-D616-F673-6899-A96C97D5AE2D Boot ID: 4bc510b6-f615-48a7-9e1e-47261ddf26a4 Kernel Version: 4.14.146-119.123.amzn2.x86_64 OS Image: Amazon Linux 2 Operating System: linux Architecture: amd64 Container Runtime Version: docker://18.6.1 Kubelet Version: v1.13.11-eks-5876d6 Kube-Proxy Version: v1.13.11-eks-5876d6 ProviderID: aws:///us-east-1b/i-0f5b519aa6e38e04a Non-terminated Pods: (5 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- amazon-cloudwatch cloudwatch-agent-4d24j 50m (1%) 250m (6%) 50Mi (0%) 250Mi (0%) 12m amazon-cloudwatch fluentd-cloudwatch-wkslq 50m (1%) 0 (0%) 150Mi (0%) 300Mi (0%) 12m prod my-pod-6d7b698b5f-8b47r 3 (75%) 0 (0%) 0 (0%) 0 (0%) 14m kube-system aws-node-6nr6g 10m (0%) 0 (0%) 0 (0%) 0 (0%) 13m kube-system kube-proxy-wf8k4 100m (2%) 0 (0%) 0 (0%) 0 (0%) 13m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 3210m (80%) 250m (6%) memory 200Mi (0%) 550Mi (0%) ephemeral-storage 0 (0%) 0 (0%) attachable-volumes-aws-ebs 0 0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 13m kubelet, ip-10-142-54-235.ec2.internal Starting kubelet. 
Normal NodeHasSufficientMemory 13m (x2 over 13m) kubelet, ip-10-142-54-235.ec2.internal Node ip-10-142-54-235.ec2.internal status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 13m (x2 over 13m) kubelet, ip-10-142-54-235.ec2.internal Node ip-10-142-54-235.ec2.internal status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 13m (x2 over 13m) kubelet, ip-10-142-54-235.ec2.internal Node ip-10-142-54-235.ec2.internal status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 13m kubelet, ip-10-142-54-235.ec2.internal Updated Node Allocatable limit across pods Normal Starting 12m kube-proxy, ip-10-142-54-235.ec2.internal Starting kube-proxy. Normal NodeReady 12m kubelet, ip-10-142-54-235.ec2.internal Node ip-10-142-54-235.ec2.internal status is now: NodeReady </code></pre> <pre><code># kubectl get node ip-10-142-54-235.ec2.internal -o yaml apiVersion: v1 kind: Node metadata: annotations: node.alpha.kubernetes.io/ttl: "0" volumes.kubernetes.io/controller-managed-attach-detach: "true" creationTimestamp: "2019-10-25T14:10:20Z" labels: beta.kubernetes.io/arch: amd64 beta.kubernetes.io/instance-type: p2.xlarge beta.kubernetes.io/os: linux failure-domain.beta.kubernetes.io/region: us-east-1 failure-domain.beta.kubernetes.io/zone: us-east-1b kubernetes.io/hostname: ip-10-142-54-235.ec2.internal nodepool: zeroscaling-gpu-accelerated-p2-xlarge name: ip-10-142-54-235.ec2.internal resourceVersion: "2409195" selfLink: /api/v1/nodes/ip-10-142-54-235.ec2.internal uid: 2d934979-f731-11e9-89b8-0234143df588 spec: providerID: aws:///us-east-1b/i-0f5b519aa6e38e04a taints: - effect: NoExecute key: specialized value: "true" status: addresses: - address: 10.142.54.235 type: InternalIP - address: 3.86.112.24 type: ExternalIP - address: ip-10-142-54-235.ec2.internal type: Hostname - address: ip-10-142-54-235.ec2.internal type: InternalDNS - address: ec2-3-86-112-24.compute-1.amazonaws.com type: ExternalDNS allocatable: attachable-volumes-aws-ebs: "39" cpu: "4" 
ephemeral-storage: "200777747706" hugepages-1Gi: "0" hugepages-2Mi: "0" memory: 61209892Ki pods: "58" capacity: attachable-volumes-aws-ebs: "39" cpu: "4" ephemeral-storage: 209702892Ki hugepages-1Gi: "0" hugepages-2Mi: "0" memory: 62872868Ki pods: "58" conditions: - lastHeartbeatTime: "2019-10-25T14:23:51Z" lastTransitionTime: "2019-10-25T14:10:19Z" message: kubelet has sufficient memory available reason: KubeletHasSufficientMemory status: "False" type: MemoryPressure - lastHeartbeatTime: "2019-10-25T14:23:51Z" lastTransitionTime: "2019-10-25T14:10:19Z" message: kubelet has no disk pressure reason: KubeletHasNoDiskPressure status: "False" type: DiskPressure - lastHeartbeatTime: "2019-10-25T14:23:51Z" lastTransitionTime: "2019-10-25T14:10:19Z" message: kubelet has sufficient PID available reason: KubeletHasSufficientPID status: "False" type: PIDPressure - lastHeartbeatTime: "2019-10-25T14:23:51Z" lastTransitionTime: "2019-10-25T14:10:40Z" message: kubelet is posting ready status reason: KubeletReady status: "True" type: Ready daemonEndpoints: kubeletEndpoint: Port: 10250 images: - names: - python@sha256:f0db6711abee8d406121c9e057bc0f7605336e8148006164fea2c43809fe7977 - python:3.7-buster sizeBytes: 917672801 - names: - 602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni@sha256:5b7e7435f88a86bbbdb2a5ecd61e893dc14dd13c9511dc8ace362d299259700a - 602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon-k8s-cni:v1.5.4 sizeBytes: 290739356 - names: - fluent/fluentd-kubernetes-daemonset@sha256:582770d951f81e0971e852089239ced0186e0bdc3226daf16b99ca4cc22de4f7 - fluent/fluentd-kubernetes-daemonset:v1.3.3-debian-cloudwatch-1.4 sizeBytes: 261867521 - names: - amazon/cloudwatch-agent@sha256:877106acbc56e747ebe373548c88cd37274f666ca11b5c782211db4c5c7fb64b - amazon/cloudwatch-agent:latest sizeBytes: 131360039 - names: - 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/kube-proxy@sha256:4767b441ddc424b0ea63c305b79be154f65fb15ebefe8a3b2832ce55aa6de2f0 - 
602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/kube-proxy:v1.13.8 sizeBytes: 80183964 - names: - busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e - busybox:latest sizeBytes: 1219782 - names: - 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64@sha256:bea77c323c47f7b573355516acf927691182d1333333d1f41b7544012fab7adf - 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 sizeBytes: 742472 nodeInfo: architecture: amd64 bootID: 4bc510b6-f615-48a7-9e1e-47261ddf26a4 containerRuntimeVersion: docker://18.6.1 kernelVersion: 4.14.146-119.123.amzn2.x86_64 kubeProxyVersion: v1.13.11-eks-5876d6 kubeletVersion: v1.13.11-eks-5876d6 machineID: 0e76fec3e06d41a6bf2c49a18fbe1795 operatingSystem: linux osImage: Amazon Linux 2 systemUUID: EC29973A-D616-F673-6899-A96C97D5AE2D </code></pre>
<p>Unfortunately, I don't have an exact answer to your issue, but I may have a workaround.</p> <p>I think I had the same issue with an Amazon EKS cluster, version 1.13.11 - my pod was triggering a node scale-up, the pod was scheduled, ran for 300s and was then evicted:</p> <pre><code>74m Normal TaintManagerEviction pod/master-3bb760a7-b782-4138-b09f-0ca385db9ad7-workspace Marking for deletion Pod project-beta/master-3bb760a7-b782-4138-b09f-0ca385db9ad7-workspace </code></pre> <p>Interestingly, the same pod was able to run with no problem if it was scheduled on an existing node rather than a just-created one.</p> <p>From my investigation, it really looks like an issue with this specific Kubernetes version, maybe some edge case of the TaintBasedEvictions feature (I think it was enabled by default in Kubernetes 1.13).</p> <p>To "fix" this issue I updated the cluster version to 1.14. After that, the mysterious pod evictions did not happen anymore.</p> <p>So, if it's possible for you, I suggest updating your cluster to version 1.14 (together with cluster-autoscaler).</p>
<p>I'm having trouble finding the kernel options that Google's Container Optimized OS is built with. I tried looking at the usual locations like <code>boot/config-*</code> and <code>/proc/config.gz</code>, but didn't find anything. I searched the source code and didn't find anything either, but I'm probably just searching wrong.</p> <p>The specific option I'm curious about is CONFIG_CFS_BANDWIDTH and whether it is enabled or not. Thanks!</p>
<p>You can get it by running <code>zcat /proc/config.gz</code> in a Container-Optimized OS VM.</p> <p>The kernel config is generated from the source <a href="https://chromium.googlesource.com/chromiumos/overlays/board-overlays/+/refs/heads/master/overlay-lakitu/sys-kernel/lakitu-kernel-4_14/files/base.config" rel="nofollow noreferrer">here</a>. However, note that the kernel config is changed during the OS image building process, so the two are not 100% the same.</p>
<p>We have several applications in a Kubernetes cluster that use Apache Ignite. Ignite creates various thread pools that are sized as follows:</p> <pre><code>Math.max(8, Runtime.getRuntime().availableProcessors()) </code></pre> <p>So basically the thread pool will always have at least size 8, but could be more if the system believes there are more processors.</p> <p>The problem we're having is that some pods are spinning up with pool size 8, and others are using size 36, which is the number of CPUs on the node.</p> <p>We're using Helm to deploy all apps, but we're <strong>not</strong> setting any CPU limits for any pods. In theory they should all see the same number of available CPUs.</p> <p>What else could cause pods on the same node to see different views of how many processors are available?</p> <h2>Update</h2> <p>We have a health end point in all of our apps that shows the number of CPUS reported by the JVM, using the same <code>Runtime#availableProcessors()</code> method that Ignite uses.</p> <p>All of our apps, including ones where Ignite thinks there are 36 CPUs, report 2 processors once the process has started.</p> <p>I found this interesting line in the Java documentation for that method:</p> <blockquote> <p>This value may change during a particular invocation of the virtual machine. Applications that are sensitive to the number of available processors should therefore occasionally poll this property and adjust their resource usage appropriately.</p> </blockquote> <p>It seems we're in a race condition where early on in app startup, that value reports 36 but at some point drops to 2. Depending on when the Ignite beans are fired, they're seeing either 36 or 2.</p>
<p><em>tl;dr</em> The underlying issue seems to be when <code>resources.requests.cpu</code> is set exactly to <code>1000m</code>.</p> <p>I wrote a simple Java app that dumps the available number of processors:</p> <pre class="lang-java prettyprint-override"><code>public class CpuTest { public static void main(String[] args) { System.out.println(&quot;Number of CPUs = &quot; + Runtime.getRuntime().availableProcessors()); } } </code></pre> <p>I packaged into a Dockerfile and created a simple deployment:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: cputest labels: app: cputest spec: replicas: 1 selector: matchLabels: app: cputest template: metadata: labels: app: cputest spec: containers: - name: cputest image: dev/cputest:latest imagePullPolicy: Never </code></pre> <p>I ran this on my local RedHat 7 machine which has 24 cores. The output was expected:</p> <pre><code>Number of CPUs = 24 </code></pre> <p>I then applied various CPU resource requests to the deployment:</p> <pre class="lang-yaml prettyprint-override"><code> resources: requests: cpu: 1000m </code></pre> <p>and re-deployed. The results were interesting:</p> <ul> <li>CPU request set to 500m: the app reports 1 CPU</li> <li><strong>CPU request set to 1000m: the app reports 24 CPU</strong> &lt;==</li> <li>CPU request set to 1001m: the app reports 2 CPU</li> <li>CPU request set to 2000m: the app reports 2 CPU</li> <li>CPU request set to 4000m: the app reports 4 CPU</li> </ul> <p>So the issue only arises when the CPU request is set <code>1000m</code> (also tried <code>1</code> and got the same result, where it thinks it has all 24 CPUs).</p> <p>I went back and looked at all of our apps. Sure enough, the ones where we set the CPU request to exactly <code>1000m</code> are the ones that have the issue. 
Any other value works as expected.</p> <p>Of interest: when I also set the CPU limit to <code>1000m</code>, the problem goes away and the JVM reports 1 CPU.</p> <p>It very well could be that this is expected and I don't fully understand how CPU requests and limits are used by Kubernetes, or perhaps it is an issue with the version we're on (1.12.7).</p> <p>Either way, I at least have an answer as to why some of our pods are seeing different CPU counts.</p>
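In container-spec terms, the workaround described above — setting the limit alongside the request — would look like the following sketch (the 1000m values mirror the experiment above):

```yaml
# Fragment of the container spec (illustrative)
resources:
  requests:
    cpu: 1000m
  limits:
    cpu: 1000m  # with the limit present, the JVM reported 1 CPU as expected
```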
<p>According to the page <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/e2e-tests.md" rel="nofollow noreferrer">here</a>, kubetest should be installed with the following go command:</p> <pre><code>go get -u k8s.io/test-infra/kubetest </code></pre> <p>I've done that and tried running <code>kubetest</code> but it appears that it was not installed.</p> <pre><code>$ kubetest kubetest: command not found </code></pre> <p>Is this not the correct way of installing it? Or is there anything extra that needs to be done? If so, why is it not mentioned in the readme?</p> <hr> <p>edit:</p> <p>here is my go version:</p> <pre><code>$ go version go version go1.13.4 linux/amd64 </code></pre> <hr> <p>edit:</p> <p>Here is a partial output from my install command:</p> <pre><code>$ go get -v -u k8s.io/test-infra/kubetest get "k8s.io/test-infra/kubetest": found meta tag get.metaImport{Prefix:"k8s.io/test-infra", VCS:"git", RepoRoot:"https://github.com/kubernetes/test-infra"} at //k8s.io/test-infra/kubetest?go-get=1 get "k8s.io/test-infra/kubetest": verifying non-authoritative meta tag k8s.io/test-infra (download) github.com/Azure/azure-sdk-for-go (download) github.com/Azure/go-autorest (download) github.com/dgrijalva/jwt-go (download) ... k8s.io/api/settings/v1alpha1 k8s.io/api/storage/v1 k8s.io/api/storage/v1alpha1 k8s.io/api/storage/v1beta1 k8s.io/client-go/tools/reference k8s.io/client-go/kubernetes/scheme </code></pre>
<p><code>go get</code> by default places projects in the directory defined as your <code>$GOPATH</code>, which by default is <code>$HOME/go</code>. Binaries are by default placed in <code>$GOPATH/bin</code>. Ensure that <code>$GOPATH/bin</code> is added to your <code>$PATH</code> variable; otherwise you will not be able to use binaries fetched using <code>go get</code>.</p>
<p>I have a google kubernetes engine cluster with multiple namespaces. Different applications are deployed on each of these namespaces. Is it possible to give a user complete access to a single namespace only?</p>
<p>You can bind the user to the cluster-admin role using a Rolebinding in your namespace of choice. As specified in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles" rel="nofollow noreferrer">documentation</a>, cluster-admin:</p> <blockquote> <p>Allows super-user access to perform any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the rolebinding's namespace, including the namespace itself.</p> </blockquote> <p>For example:</p> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: example namespace: yournamespace subjects: - kind: User name: example-user apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io </code></pre>
<p>Yes, Kubernetes has a built-in RBAC system that integrates with Cloud IAM so that you can control access to individual clusters and namespaces for GCP users.</p> <ol> <li>Create a Kubernetes <code>Role</code></li> </ol> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: YOUR_NAMESPACE name: ROLE_NAME rules: - apiGroups: [""] # "" indicates the core API group resources: ["pods"] verbs: ["get", "watch", "list"] </code></pre> <ol start="2"> <li>Create a Kubernetes <code>RoleBinding</code> that binds the user to that Role</li> </ol> <pre><code>kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ROLE_NAME-binding namespace: YOUR_NAMESPACE subjects: # GCP user account - kind: User name: janedoe@example.com apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: ROLE_NAME apiGroup: rbac.authorization.k8s.io </code></pre> <p>Reference</p> <ul> <li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="nofollow noreferrer">Kubernetes Role-based access control</a></li> </ul>
<p>I recently upgraded my gke cluster to 1.14.x and nginx ingress to the latest version 0.26.1. At some point my ingresses stopped working.</p> <p>For instance, when trying to access Nexus with <code>curl INGRESS_IP -H "host:nexus.myorg.com"</code>, these are the ingress controller logs:</p> <pre><code>2019/11/07 08:35:49 [error] 350#350: *2664 upstream timed out (110: Connection timed out) while connecting to upstream, client: 82.81.2.76, server: nexus.myorg.com, request: "GET / HTTP/1.1", upstream: "http://10.8.25.3:8081/", host: "nexus.myorg.com" 2019/11/07 08:35:54 [error] 350#350: *2664 upstream timed out (110: Connection timed out) while connecting to upstream, client: 82.81.2.76, server: nexus.myorg.com, request: "GET / HTTP/1.1", upstream: "http://10.8.25.3:8081/", host: "nexus.myorg.com" 2019/11/07 08:35:59 [error] 350#350: *2664 upstream timed out (110: Connection timed out) while connecting to upstream, client: 82.81.2.76, server: nexus.myorg.com, request: "GET / HTTP/1.1", upstream: "http://10.8.25.3:8081/", host: "nexus.myorg.com" 82.81.2.76 - - [07/Nov/2019:08:35:59 +0000] "GET / HTTP/1.1" 504 173 "-" "curl/7.64.1" 79 15.003 [some-namespace-nexus-service-8081] [] 10.8.25.3:8081, 10.8.25.3:8081, 10.8.25.3:8081 0, 0, 0 5.001, 5.001, 5.001 504, 504, 504 a03f13a3bfc943e44f2df3d82a6ecaa4 </code></pre> <p>As you can see it tries to connect three times to 10.8.25.3:8081 which is the pod IP, timing out in all of them.</p> <p>I've sh'ed into a pod and accessed the pod using that same IP with no problem: <code>curl 10.8.25.3:8081</code>. 
So the service is set up correctly.</p> <p>This is my Ingress config:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: my-ingress namespace: some-namespace annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/add-base-url: "true" nginx.ingress.kubernetes.io/proxy-body-size: 30M spec: rules: - host: nexus.myorg.com http: paths: - backend: serviceName: nexus-service servicePort: 8081 </code></pre> <p>Any idea how to troubleshoot or fix this?</p>
<p>The problem had to do with network policies. We have some policies that forbid access to pods from other namespaces and allow it only from the ingress namespace:</p> <pre><code> apiVersion: extensions/v1beta1 kind: NetworkPolicy metadata: name: allow-from-ingress-namespace namespace: some-namespace spec: ingress: - from: - namespaceSelector: matchLabels: type: ingress podSelector: {} policyTypes: - Ingress --- apiVersion: extensions/v1beta1 kind: NetworkPolicy metadata: name: deny-from-other-namespaces namespace: some-namespace spec: ingress: - from: - podSelector: {} podSelector: {} policyTypes: - Ingress </code></pre> <p>With the upgrade we lost the label that is matched in the policy (type=ingress). Simply adding it back fixed the problem: <code>kubectl label namespaces ingress-nginx type=ingress</code></p>
<p>I can't make my ingress work in local <code>docker-desktop</code></p> <p>I made an <code>helm create my-project-helm</code> like everybody</p> <p>then I do changes in <code>./my-project-helm/values.yaml</code> I just show you the most interesting part (about ingress)</p> <pre><code>... replicaCount: 3 image: repository: localhost:5000/my-project-helm tag: latest pullPolicy: IfNotPresent ... service: type: ClusterIP port: 80 ingress: enabled: true annotations: { kubernetes.io/ingress.class: nginx } # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" hosts: - host: chart-example.local paths: [/] ... </code></pre> <p>I install this helm <code>helm upgrade --install my-project-helm ./my-project-helm</code></p> <p>I install nginx <code>helm upgrade --install nginx-ingress stable/nginx-ingress</code></p> <p>I am gonna check my ingress with <code>kubectl describe ingress my-project-helm</code></p> <pre><code>Name: my-project-helm Namespace: default Address: Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- chart-example.local / my-project-helm:80 (10.1.0.71:80,10.1.0.72:80,10.1.0.73:80) Annotations: kubernetes.io/ingress.class: nginx Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 14m nginx-ingress-controller Ingress default/my-project-helm Normal UPDATE 13m nginx-ingress-controller Ingress default/my-project-helm </code></pre> <p>Everything seems so wonderful. </p> <p>I make a <code>curl chart-example.local</code> nothing happens <code>curl: (6) Could not resolve host: chart-example.local</code></p> <p>Does anyone have a clue?</p>
<p>By default it should add the <code>Kubernetes master</code> domain to your /etc/hosts</p> <p>Check it with <code>kubectl cluster-info</code></p> <p>So just use the default domain, which in my case is <code>kubernetes.docker.internal</code>, instead of <code>chart-example.local</code>.</p> <p>And yes, it must point to the load balancer external IP of nginx, which is localhost by default ;)</p>
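Applied to the values.yaml from the question, that would mean something like the following sketch (only the host changes):

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: kubernetes.docker.internal  # docker-desktop's default domain
      paths: [/]
```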
<p>Currently I am working on an implementation demo of a CI/CD pipeline using Jenkins and Kubernetes. I created one Jenkins pipeline job and added every step to my Jenkinsfile. For Kubernetes, I created deployment and service YAML files for the Kubernetes resources.</p> <p><strong>Image Version Management In Deployment</strong></p> <p>For each commit in my SVN repository, I am creating docker images with the build number as the tag, taken from a Jenkins environment variable.</p> <pre><code>image:1 image:2 . . image:n </code></pre> <p><strong>Confusion</strong></p> <p>If I am using the <code>kubectl set image</code> command for updating my deployment in my Jenkinsfile deployment stage, will it work for the first commit? </p> <p>I am confused because the command is described as updating a deployment when the image defined in the YAML file changes. So if I am changing the image for each commit according to the build number (image-name:buildnumber), and no deployment exists yet, will it work the first time?</p>
<p>I can think of multiple ways; one is a bit hacky and the other I like better. You can decide which is best suited for you.</p> <p><strong>Solution 1</strong> - You can add a simple conditional check in your Jenkins pipeline: the first time, if the deployment does not exist, create it and set it to the first image. From the second commit onwards, update it.</p> <p>OR</p> <p><strong>Solution 2</strong> - Think of it as each repository or service having its own <code>deployment.yml</code> within it. So when code is checked in for the first time you can use that yml to check whether the deployment already exists; if it does not, just create the deployment with <code>kubectl apply -f myapp.yml</code>, and for the second commit onwards <code>kubectl set image</code> will work.</p>
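For Solution 2, a minimal deployment.yml could look like the following sketch (all names and the image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # first build; later commits update the tag via kubectl set image
          image: myrepo/myapp:1
```

Since kubectl apply is declarative, running it on the first commit creates the Deployment, and on later commits kubectl set image deployment/myapp myapp=myrepo/myapp:BUILD_NUMBER rolls out each new tag.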
<p>I am trying to deploy the statsd exporter (<a href="https://github.com/prometheus/statsd_exporter" rel="noreferrer">https://github.com/prometheus/statsd_exporter</a>) software as a docker container in a K8s cluster. However, I want some parameters to be configurable. In order to do that, I am passing some arguments to the container via the K8s deployment in YAML format. When these arguments do not contain the double quote character ("), everything works fine. However, if the desired value of the introduced variables contains double quotes, K8s interprets them in a wrong way (something similar is described in <a href="https://stackoverflow.com/questions/57389810/pass-json-string-to-environment-variable-in-a-k8s-deployment-for-envoy">Pass json string to environment variable in a k8s deployment for Envoy</a>). What I want to set is the <code>--statsd.listen-tcp=":&lt;port&gt;"</code> argument, and I am using <code>command</code> and <code>args</code> in the K8s deployment:</p> <pre><code>- name: statsd-exporter image: prom/statsd-exporter:v0.12.2 ... command: ["/bin/statsd_exporter"] args: ['--log.level="debug"', '--statsd.listen-tcp=":9999"'] </code></pre> <p>When I deploy it in K8s and check the content of the "running" deployment, everything seems to be right: </p> <pre><code>command: - /bin/statsd_exporter args: - --log.level="debug" - --statsd.listen-tcp=":9999" </code></pre> <p>However, the container never starts, giving the following error:</p> <p><code>time="..." level=fatal msg="Unable to resolve \": lookup \": no such host" source="main.go:64"</code></p> <p>I think that K8s is trying to "escape" the double quotes and is passing them to the container with backslashes added, so the latter cannot understand them. I have also tried to write the <code>args</code> as </p> <pre><code>args: ["--log.level=\"debug\"", "--statsd.listen-tcp=\":9999\""] </code></pre> <p>and the same happens. 
I have also tried to pass them as env variables, and every time the same problem occurs: the double quotes are not parsed in the right way. </p> <p>Any idea regarding a possible solution?</p> <p>Thanks!</p>
<p>According to the <a href="https://github.com/prometheus/statsd_exporter/blob/master/main.go#L30" rel="nofollow noreferrer">source code</a>, statsd-exporter uses <a href="https://github.com/alecthomas/kingpin" rel="nofollow noreferrer">kingpin</a> as its command-line and flag parser. If I am not mistaken, kingpin doesn't require values to be surrounded by double quotes.</p> <p>I would suggest trying:</p> <pre><code>- name: statsd-exporter image: prom/statsd-exporter:v0.12.2 ... command: ["/bin/statsd_exporter"] args: - --log.level=debug - --statsd.listen-tcp=:9999 </code></pre> <p>The reason is that, according to the source code <a href="https://github.com/prometheus/statsd_exporter/blob/master/main.go#L54-L61" rel="nofollow noreferrer">here</a>, the input value for <code>statsd.listen-tcp</code> is split into host and port, and per the error message the host gets the value of a double quote character <code>"</code>.</p>
<p>I've installed a single-node cluster with kubeadm, but /var/log/containers, which should contain the log symlinks, is empty. What do I need to do to configure this?</p>
<p>On machines with systemd, the kubelet and container runtime may write to <code>journald</code>. Check whether your log output goes to <code>journald</code>. By default Docker should write those logs to <code>json.log</code> files, but I don't know the specifics of your setup. Check <code>/etc/sysconfig/</code> for <code>--log-driver=journald</code> and delete it if needed. What we want here is to have the log driver set to <code>json-file</code>. Then you would see the log files in <code>/var/log/containers</code>. </p> <p>Please let me know if that helped.</p>
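<p>A quick way to confirm which log driver the Docker daemon is currently using (it should report <code>json-file</code> for the files to show up under <code>/var/log/containers</code>):</p> <pre><code>docker info --format '{{.LoggingDriver}}'
</code></pre>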
<p>In this question: <a href="https://stackoverflow.com/questions/55631068/teamcity-build-wont-run-until-build-agents-is-configured-with-docker">Teamcity Build won&#39;t run until Build Agents is configured with Docker?</a></p> <p>I had a problem with the teamcity-agent (Teamcity is a build server) deployment. These agents are the build runners and they come as their own pods. Back in the days when I was just using Docker without K8s, I used this command to run the container:</p> <pre><code>docker run -it -e SERVER_URL="&lt;url to TeamCity server&gt;" \ --privileged -e DOCKER_IN_DOCKER=start \ jetbrains/teamcity-agent </code></pre> <p>So adding those environment vars to the K8s container definition wasn't that hard. I just had to define this <code>spec</code> part:</p> <pre><code>spec: containers: - name: teamcity-agent image: jetbrains/teamcity-agent:latest ports: - containerPort: 8111 env: - name: SERVER_URL value: 10.0.2.205:8111 - name: DOCKER_IN_DOCKER value: start </code></pre> <p>So now I want to have the <code>--privileged</code> flag as well. I found an article here (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">link to guide</a>) which I did not really understand. I added</p> <pre><code>securityContext: allowPrivilegeEscalation: false # also tried 'true' </code></pre> <p>but it did not work with that.</p> <p>Can someone point out where to look?</p>
<p>I think you may use it like this</p> <pre><code>securityContext: privileged: true </code></pre> <p>see <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">this</a></p>
<p>I created a small cluster with GPU nodes on GKE like so:</p> <pre class="lang-sh prettyprint-override"><code># create cluster and CPU nodes gcloud container clusters create clic-cluster \ --zone us-west1-b \ --machine-type n1-standard-1 \ --enable-autoscaling \ --min-nodes 1 \ --max-nodes 3 \ --num-nodes 2 # add GPU nodes gcloud container node-pools create gpu-pool \ --zone us-west1-b \ --machine-type n1-standard-2 \ --accelerator type=nvidia-tesla-k80,count=1 \ --cluster clic-cluster \ --enable-autoscaling \ --min-nodes 1 \ --max-nodes 2 \ --num-nodes 1 </code></pre> <p>When I submit a GPU job it successfully ends up running on the GPU node. However, when I submit a second job I get an <code>UnexpectedAdmissionError</code> from kubernetes:</p> <blockquote> <p>Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected.</p> </blockquote> <p>I would have expected the cluster to start the second GPU node and place the job there. Any idea why this didn't happen? My job spec looks roughly like this:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: &lt;job_name&gt; spec: template: spec: initContainers: - name: decode image: "&lt;decoder_image&gt;" resources: limits: nvidia.com/gpu: 1 command: [...] [...] containers: - name: evaluate image: "&lt;evaluation_image&gt;" command: [...] </code></pre>
<p>The resource constraint needs to be added to the <code>containers</code> spec as well:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: &lt;job_name&gt; spec: template: spec: initContainers: - name: decode image: "&lt;decoder_image&gt;" resources: limits: nvidia.com/gpu: 1 command: [...] [...] containers: - name: evaluate image: "&lt;evaluation_image&gt;" resources: limits: nvidia.com/gpu: 1 command: [...] </code></pre> <p>I only required a GPU in one of the <code>initContainers</code>, but this seems to confuse the scheduler. Now autoscaling and scheduling work as expected.</p>
<p>I'm trying to set up a Redis cluster and I followed this guide here: <a href="https://rancher.com/blog/2019/deploying-redis-cluster/" rel="noreferrer">https://rancher.com/blog/2019/deploying-redis-cluster/</a></p> <p>Basically I'm creating a StatefulSet with a replica 6, so that I can have 3 master nodes and 3 slave nodes. After all the nodes are up, I create the cluster, and it all works fine... but if I look into the file "nodes.conf" (where the configuration of all the nodes should be saved) of each redis node, I can see it's empty. This is a problem, because whenever a redis node gets restarted, it searches into that file for the configuration of the node to update the IP address of itself and MEET the other nodes, but he finds nothing, so it basically starts a new cluster on his own, with a new ID.</p> <p>My storage is an NFS connected shared folder. The YAML responsible for the storage access is this one:</p> <pre><code>kind: Deployment apiVersion: extensions/v1beta1 metadata: name: nfs-provisioner-raid5 spec: replicas: 1 strategy: type: Recreate template: metadata: labels: app: nfs-provisioner-raid5 spec: serviceAccountName: nfs-provisioner-raid5 containers: - name: nfs-provisioner-raid5 image: quay.io/external_storage/nfs-client-provisioner:latest volumeMounts: - name: nfs-raid5-root mountPath: /persistentvolumes env: - name: PROVISIONER_NAME value: 'nfs.raid5' - name: NFS_SERVER value: 10.29.10.100 - name: NFS_PATH value: /raid5 volumes: - name: nfs-raid5-root nfs: server: 10.29.10.100 path: /raid5 --- apiVersion: v1 kind: ServiceAccount metadata: name: nfs-provisioner-raid5 --- apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs.raid5 provisioner: nfs.raid5 parameters: archiveOnDelete: "false" </code></pre> <p>This is the YAML of the redis cluster StatefulSet:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: redis-cluster labels: app: redis-cluster spec: serviceName: redis-cluster replicas: 6 selector: matchLabels: 
app: redis-cluster template: metadata: labels: app: redis-cluster spec: containers: - name: redis image: redis:5-alpine ports: - containerPort: 6379 name: client - containerPort: 16379 name: gossip command: ["/conf/fix-ip.sh", "redis-server", "/conf/redis.conf"] readinessProbe: exec: command: - sh - -c - "redis-cli -h $(hostname) ping" initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: exec: command: - sh - -c - "redis-cli -h $(hostname) ping" initialDelaySeconds: 20 periodSeconds: 3 env: - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP volumeMounts: - name: conf mountPath: /conf readOnly: false - name: data mountPath: /data readOnly: false volumes: - name: conf configMap: name: redis-cluster defaultMode: 0755 volumeClaimTemplates: - metadata: name: data labels: name: redis-cluster spec: accessModes: [ "ReadWriteOnce" ] storageClassName: nfs.raid5 resources: requests: storage: 1Gi </code></pre> <p>This is the configMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: redis-cluster labels: app: redis-cluster data: fix-ip.sh: | #!/bin/sh CLUSTER_CONFIG="/data/nodes.conf" echo "creating nodes" if [ -f ${CLUSTER_CONFIG} ]; then if [ -z "${POD_IP}" ]; then echo "Unable to determine Pod IP address!" exit 1 fi echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}" sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG} echo "done" fi exec "$@" redis.conf: |+ cluster-enabled yes cluster-require-full-coverage no cluster-node-timeout 15000 cluster-config-file /data/nodes.conf cluster-migration-barrier 1 appendonly yes protected-mode no </code></pre> <p>and I created the cluster using the command:</p> <pre><code>kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ') </code></pre> <p>what am I doing wrong? 
this is what I see into the /data folder: <a href="https://i.stack.imgur.com/pZAmT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pZAmT.png" alt="enter image description here"></a></p> <p>the nodes.conf file shows 0 bytes.</p> <p>Lastly, this is the log from the redis-cluster-0 pod:</p> <pre><code>creating nodes 1:C 07 Nov 2019 13:01:31.166 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo 1:C 07 Nov 2019 13:01:31.166 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started 1:C 07 Nov 2019 13:01:31.166 # Configuration loaded 1:M 07 Nov 2019 13:01:31.179 * No cluster configuration found, I'm e55801f9b5d52f4e599fe9dba5a0a1e8dde2cdcb 1:M 07 Nov 2019 13:01:31.182 * Running mode=cluster, port=6379. 1:M 07 Nov 2019 13:01:31.182 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. 1:M 07 Nov 2019 13:01:31.182 # Server initialized 1:M 07 Nov 2019 13:01:31.182 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never &gt; /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. 
1:M 07 Nov 2019 13:01:31.185 * Ready to accept connections 1:M 07 Nov 2019 13:08:04.264 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH 1:M 07 Nov 2019 13:08:04.306 # IP address for this node updated to 10.40.0.27 1:M 07 Nov 2019 13:08:09.216 # Cluster state changed: ok 1:M 07 Nov 2019 13:08:10.144 * Replica 10.44.0.14:6379 asks for synchronization 1:M 07 Nov 2019 13:08:10.144 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '27972faeb07fe922f1ab581cac0fe467c85c3efd', my replication IDs are '31944091ef93e3f7c004908e3ff3114fd733ea6a' and '0000000000000000000000000000000000000000') 1:M 07 Nov 2019 13:08:10.144 * Starting BGSAVE for SYNC with target: disk 1:M 07 Nov 2019 13:08:10.144 * Background saving started by pid 1041 1041:C 07 Nov 2019 13:08:10.161 * DB saved on disk 1041:C 07 Nov 2019 13:08:10.161 * RDB: 0 MB of memory used by copy-on-write 1:M 07 Nov 2019 13:08:10.233 * Background saving terminated with success 1:M 07 Nov 2019 13:08:10.243 * Synchronization with replica 10.44.0.14:6379 succeeded </code></pre> <p>thank you for the help.</p>
<p>This looks to be an issue with the shell script mounted from the ConfigMap. Can you update it as below?</p> <pre><code> fix-ip.sh: | #!/bin/sh CLUSTER_CONFIG="/data/nodes.conf" echo "creating nodes" if [ -f ${CLUSTER_CONFIG} ]; then echo "[ INFO ]File:${CLUSTER_CONFIG} is Found" else touch $CLUSTER_CONFIG fi if [ -z "${POD_IP}" ]; then echo "Unable to determine Pod IP address!" exit 1 fi echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}" sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG} echo "done" exec "$@" </code></pre> <p>I just deployed with the updated script and it worked. See the output below:</p> <pre><code>master $ kubectl get po NAME READY STATUS RESTARTS AGE redis-cluster-0 1/1 Running 0 83s redis-cluster-1 1/1 Running 0 54s redis-cluster-2 1/1 Running 0 45s redis-cluster-3 1/1 Running 0 38s redis-cluster-4 1/1 Running 0 31s redis-cluster-5 1/1 Running 0 25s master $ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 ') &gt;&gt;&gt; Performing hash slots allocation on 6 nodes...
Master[0] -&gt; Slots 0 - 5460 Master[1] -&gt; Slots 5461 - 10922 Master[2] -&gt; Slots 10923 - 16383 Adding replica 10.40.0.4:6379 to 10.40.0.1:6379 Adding replica 10.40.0.5:6379 to 10.40.0.2:6379 Adding replica 10.40.0.6:6379 to 10.40.0.3:6379 M: 9984141f922bed94bfa3532ea5cce43682fa524c 10.40.0.1:6379 slots:[0-5460] (5461 slots) master M: 76ebee0dd19692c2b6d95f0a492d002cef1c6c17 10.40.0.2:6379 slots:[5461-10922] (5462 slots) master M: 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a 10.40.0.3:6379 slots:[10923-16383] (5461 slots) master S: 1bc8d1b8e2d05b870b902ccdf597c3eece7705df 10.40.0.4:6379 replicates 9984141f922bed94bfa3532ea5cce43682fa524c S: 5b2b019ba8401d3a8c93a8133db0766b99aac850 10.40.0.5:6379 replicates 76ebee0dd19692c2b6d95f0a492d002cef1c6c17 S: d4b91700b2bb1a3f7327395c58b32bb4d3521887 10.40.0.6:6379 replicates 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a Can I set the above configuration? (type 'yes' to accept): yes &gt;&gt;&gt; Nodes configuration updated &gt;&gt;&gt; Assign a different config epoch to each node &gt;&gt;&gt; Sending CLUSTER MEET messages to join the cluster Waiting for the cluster to join .... 
&gt;&gt;&gt; Performing Cluster Check (using node 10.40.0.1:6379) M: 9984141f922bed94bfa3532ea5cce43682fa524c 10.40.0.1:6379 slots:[0-5460] (5461 slots) master 1 additional replica(s) M: 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a 10.40.0.3:6379 slots:[10923-16383] (5461 slots) master 1 additional replica(s) S: 1bc8d1b8e2d05b870b902ccdf597c3eece7705df 10.40.0.4:6379 slots: (0 slots) slave replicates 9984141f922bed94bfa3532ea5cce43682fa524c S: d4b91700b2bb1a3f7327395c58b32bb4d3521887 10.40.0.6:6379 slots: (0 slots) slave replicates 045b27c73069bff9ca9a4a1a3a2454e9ff640d1a M: 76ebee0dd19692c2b6d95f0a492d002cef1c6c17 10.40.0.2:6379 slots:[5461-10922] (5462 slots) master 1 additional replica(s) S: 5b2b019ba8401d3a8c93a8133db0766b99aac850 10.40.0.5:6379 slots: (0 slots) slave replicates 76ebee0dd19692c2b6d95f0a492d002cef1c6c17 [OK] All nodes agree about slots configuration. &gt;&gt;&gt; Check for open slots... &gt;&gt;&gt; Check slots coverage... [OK] All 16384 slots covered. master $ kubectl exec -it redis-cluster-0 -- redis-cli cluster info cluster_state:ok cluster_slots_assigned:16384 cluster_slots_ok:16384 cluster_slots_pfail:0 cluster_slots_fail:0 cluster_known_nodes:6 cluster_size:3 cluster_current_epoch:6 cluster_my_epoch:1 cluster_stats_messages_ping_sent:61 cluster_stats_messages_pong_sent:76 cluster_stats_messages_sent:137 cluster_stats_messages_ping_received:71 cluster_stats_messages_pong_received:61 cluster_stats_messages_meet_received:5 cluster_stats_messages_received:137 master $ for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -- redis-cli role;echo; done redis-cluster-0 master 588 10.40.0.4 6379 588 redis-cluster-1 master 602 10.40.0.5 6379 602 redis-cluster-2 master 588 10.40.0.6 6379 588 redis-cluster-3 slave 10.40.0.1 6379 connected 602 redis-cluster-4 slave 10.40.0.2 6379 connected 602 redis-cluster-5 slave 10.40.0.3 6379 connected 588 </code></pre>
<p>When creating a simple pod on Kubernetes with an RBAC enabled cluster, and where a pod security policy is enabled for the role, how can I view which PSP is successfully used to validate the request? Cluster is deployed with kubeadm.</p>
<p>You can see the Pod Security Policy used by a Pod by looking at its annotations.</p> <p>For instance:</p> <pre><code>kubectl get pod POD_NAME -o jsonpath='{.metadata.annotations}' </code></pre>
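<p>As far as I know, the admission controller records the chosen policy under the <code>kubernetes.io/psp</code> annotation, so you can also query that key directly (the dots in the key must be escaped in the jsonpath expression):</p> <pre><code>kubectl get pod POD_NAME -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'
</code></pre>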
<p>I'm completely new to Kubernetes and a bit lost as to where to search. I would like to have blue-green deployment with a web application solution. I've been told that the blue pods are destroyed when there is no user session associated with them anymore. Is this right? On some web pages I've read that there is a flip between one and the other. Is it mandatory to use sessions? In my case I've got a stateless application.</p>
<p><strong>Blue Green Deployment</strong></p> <p>Blue Green deployment is not a standard feature in Kubernetes. That means there are many different 3rd-party products and patterns for this, and they all differ in <strong>how</strong> they do it.</p> <p><strong>Example:</strong> <a href="https://kubernetes.io/blog/2018/04/30/zero-downtime-deployment-kubernetes-jenkins/" rel="nofollow noreferrer">Zero-downtime Deployment in Kubernetes with Jenkins</a> uses two <code>Deployment</code> objects with different <code>labels</code> and <em>updates</em> the <code>Service</code> to point to the other one for <em>switching</em>. It is not the easiest strategy to get right.</p> <p><strong>Stateless</strong></p> <blockquote> <p>In my case I've got a stateless application</p> </blockquote> <p>This is great! With a <strong>stateless</strong> app, it is much easier to get the deployment strategy you want.</p> <p><strong>Default Deployment Strategy</strong></p> <p>The default deployment strategy for <code>Deployment</code> (stateless workload) is <em>Rolling Deployment</em>, and if that fits your needs, it is the easiest deployment strategy to use.</p>
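<p>The <em>switch</em> in that pattern is typically just a change of the <code>Service</code> selector. A sketch with placeholder names, assuming the two Deployments carry an extra <code>color</code> label:</p> <pre><code># "green" is deployed alongside the running "blue"; once green is ready:
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","color":"green"}}}'
</code></pre>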
<p>We're sort of new to the whole Kubernetes world, but by now have a number of services running in GKE. Today we saw some strange behaviour though, where one of the processes running inside one of our pods was killed, even though the Pod itself had plenty of resources available, and wasn't anywhere near its limits.</p> <p>The limits are defined as such:</p> <pre><code>resources: requests: cpu: 100m memory: 500Mi limits: cpu: 1000m memory: 1500Mi </code></pre> <p>Inside the pod, a Celery (Python) worker is running, and this particular one is consuming some fairly long-running tasks.</p> <p>During operation of one of the tasks, the celery process was suddenly killed, seemingly caused by OOM. The GKE Cluster Operations logs show the following:</p> <pre><code>Memory cgroup out of memory: Kill process 613560 (celery) score 1959 or sacrifice child Killed process 613560 (celery) total-vm:1764532kB, anon-rss:1481176kB, file-rss:13436kB, shmem-rss:0kB </code></pre> <p>The resource graph for the time period looks like the following:</p> <p><a href="https://i.stack.imgur.com/zJEpZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zJEpZ.png" alt="CPU and Memory usage for Pod"></a></p> <p>As can be clearly seen, neither the CPU nor the memory usage was anywhere near the limits defined by the Pod, so we're baffled as to why any OOMKilling occurred? Also a bit baffled by the fact that the process itself was killed, and not the actual Pod?</p> <p>Is this particular OOM actually happening inside the OS and not in Kubernetes? And if so - is there a solution to getting around this particular problem?</p>
<p>About your statement:</p> <blockquote> <p>Also a bit baffled by the fact that the process itself was killed, and not the actual Pod?</p> </blockquote> <p>Compute Resources (CPU/Memory) are configured for Containers, not for Pods.</p> <p>If a Pod <strong>container is OOM killed</strong>, <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior" rel="noreferrer">the Pod is not evicted</a>. The underlying container is restarted by the <code>kubelet</code> based on its <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="noreferrer"><code>RestartPolicy</code></a>. The Pod will still exist on the same node, and the <code>Restart Count</code> will be incremented (unless you are using <code>RestartPolicy: Never</code>, which is not your case).</p> <p>If you do a <code>kubectl describe</code> on your pod, the newly spawned container will be in <code>Running</code> state, but you can find the last restart cause in <code>Last State</code>. Also, <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#my-container-is-terminated" rel="noreferrer">you can check</a> how many times it was restarted:</p> <pre><code>State: Running Started: Wed, 27 Feb 2019 10:29:09 +0000 Last State: Terminated Reason: OOMKilled Exit Code: 137 Started: Wed, 27 Feb 2019 06:27:39 +0000 Finished: Wed, 27 Feb 2019 10:29:08 +0000 Restart Count: 5 </code></pre> <hr> <p>The Resource Graph visualization may deviate from the actual use of Memory. As it uses a <code>1 min interval (mean)</code> sampling, if your memory suddenly increases over the top limit, the container can be restarted before your <strong>average</strong> memory usage gets plotted on the graph as a high peak. 
If your Python container makes short/intermittent high memory usage, it's prone to be restarted even though the values never show up in the graph.</p> <p>With <code>kubectl top</code> you can view the last memory usage registered for the Pod. Although it is more precise for seeing the memory usage at a specific point in time, keep in mind that it fetches the values from <code>metrics-server</code>, which has a <a href="https://github.com/kubernetes-sigs/metrics-server#flags" rel="noreferrer"><code>--metric-resolution</code></a>:</p> <blockquote> <p>The interval at which metrics will be scraped from Kubelets (defaults to 60s).</p> </blockquote> <p>If your container makes a "spiky" use of memory, you may still see it restarting without even seeing the memory usage in <code>kubectl top</code>.</p>
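<p>A toy illustration (not taken from the cluster in the question) of why a 60-sample mean hides a short spike: one second at 1600Mi in an otherwise flat 500Mi series breaches the 1500Mi limit, while the plotted mean stays far below it:</p>

```shell
# Simulate 60 one-second memory samples (in Mi) with a single spike at t=30.
peak=0; sum=0
for i in $(seq 1 60); do
  if [ "$i" -eq 30 ]; then m=1600; else m=500; fi
  sum=$((sum + m))
  if [ "$m" -gt "$peak" ]; then peak=$m; fi
done
# Integer mean over the window vs. the true peak:
echo "mean=$((sum / 60))Mi peak=${peak}Mi"   # prints: mean=518Mi peak=1600Mi
```

The graph would plot something close to the 518Mi mean, yet the kernel OOM killer reacts to the 1600Mi instant.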
<p>I'm running an OpenShift cluster and am trying to figure out what version of OLM in installed in it. I'm considering an upgrade, but would like more details.</p> <p>How can I find the version?</p>
<p><strong>From the CLI:</strong></p> <p>You can substitute oc for kubectl since you are using OpenShift.</p> <p>First find the name of an olm-operator pod. I'm assuming Operator Lifecycle Manager is installed in the olm namespace, but it might be "operator-lifecycle-manager".</p> <pre><code>kubectl get pods -n olm |grep olm-operator </code></pre> <p>Then run a command on that pod like this:</p> <pre><code>kubectl exec -n olm &lt;POD_NAME&gt; -- olm --version </code></pre> <p><strong>From the Console:</strong></p> <p>Navigate to the namespace and find an olm-operator pod. Open the "Terminal" tab and run <code>olm --version</code>.</p> <p>In either case, the output should be something like this:</p> <pre><code>OLM version: 0.12.0 git commit: a611449366805935939777d0182a86ba43b26cbd </code></pre>
<p>I followed some tutorials on how to set up an HTTP server, and test it in a local Kubernetes cluster (using <code>minikube</code>).</p> <p>I also implemented graceful shutdown from some examples I found, and expected that there would be no downtime from a Kubernetes rolling restart.</p> <p>To verify that, I started performing load tests (using <a href="https://httpd.apache.org/docs/2.4/programs/ab.html" rel="nofollow noreferrer">Apache Benchmark</a>, by running <code>ab -n 100000 -c 20 &lt;addr&gt;</code>) and running <code>kubectl rollout restart</code> during the benchmarking, but <code>ab</code> stops running as soon as the rolling restart is performed.</p> <p>Here is my current project setup:</p> <p><strong>Dockerfile</strong></p> <pre><code>FROM golang:1.13.4-alpine3.10 RUN mkdir /app ADD . /app WORKDIR /app RUN go build -o main src/main.go CMD ["/app/main"] </code></pre> <p><strong>src/main.go</strong></p> <pre class="lang-golang prettyprint-override"><code>package main import ( "context" "fmt" "log" "net/http" "os" "os/signal" "syscall" "github.com/gorilla/mux" ) func main() { srv := &amp;http.Server{ Addr: ":8080", Handler: NewHTTPServer(), } idleConnsClosed := make(chan struct{}) go func() { sigint := make(chan os.Signal, 1) signal.Notify(sigint, os.Interrupt, syscall.SIGTERM, syscall.SIGINT) &lt;-sigint // We received an interrupt signal, shut down. 
if err := srv.Shutdown(context.Background()); err != nil { // Error from closing listeners, or context timeout: log.Printf("HTTP server Shutdown: %v", err) } close(idleConnsClosed) }() log.Printf("Starting HTTP server") if err := srv.ListenAndServe(); err != http.ErrServerClosed { // Error starting or closing listener: log.Fatalf("HTTP server ListenAndServe: %v", err) } &lt;-idleConnsClosed } func NewHTTPServer() http.Handler { r := mux.NewRouter() // Ping r.HandleFunc("/", handler) return r } func handler(w http.ResponseWriter, r *http.Request) { fmt.Fprintf(w, "Hello World!") } </code></pre> <p><strong>kubernetes/deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: myapp name: myapp spec: replicas: 10 selector: matchLabels: app: myapp strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 5 type: RollingUpdate template: metadata: labels: app: myapp spec: containers: - name: myapp image: dickster/graceful-shutdown-test:latest imagePullPolicy: Never ports: - containerPort: 8080 </code></pre> <p><strong>kubernetes/service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: myapp name: myapp spec: ports: - port: 8080 protocol: TCP selector: app: myapp sessionAffinity: None type: NodePort </code></pre> <p>Is there something missing in this setup? According to the <code>rollingUpdate</code> strategy, there should be at least five running pods to serve the incoming requests, but <code>ab</code> exits with an <code>apr_socket_recv: Connection reset by peer (54)</code> error. I also tried adding readiness/liveness probes, but no luck. I suspect they're not needed here, either.</p>
<p>For this to work without downtime, you need to have the pods stop receiving new connections while the pod is allowed to gracefully finish handling current connections. This means the pod needs to be running, but not ready, so that new requests are not sent to it.</p> <p>Your service will match all pods using the label selector you configured (I assume <code>app: myapp</code>) and will use any pod in the ready state as a possible backend. The pod is marked as ready as long as it is passing the readinessProbe. Since you have no probe configured, the pod status will default to ready as long as it is running.</p> <p>Just having a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer">readinessProbe</a> configured will help immensely, but will not provide 100% uptime; that will require some tweaks in your code to cause the readinessProbe to fail (so new requests are not sent) while the container gracefully finishes with current connections.</p> <p>EDIT: As @Thomas Jungblut mentioned, a big part of eliminating errors with your webserver is how the application handles SIGTERM. While the pod is in terminating state, it will no longer receive requests through the service. During this phase, your webserver needs to be configured to gracefully complete and terminate connections rather than stop abruptly and terminate requests.</p> <p>Note that this is configured in the application itself and is not a k8s setting. As long as the webserver gracefully drains the connections and your pod spec includes a <code>terminationGracePeriodSeconds</code> long enough to allow the webserver to drain, you should see basically no errors. Although this still won't guarantee 100% uptime, especially when bombarding the service using ab.</p>
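<p>For reference, a minimal <code>readinessProbe</code> for the container in the question might look like this (the path, port and timings are illustrative assumptions):</p> <pre><code>readinessProbe:
  httpGet:
    path: /
    port: 8080
  periodSeconds: 2
  failureThreshold: 1
</code></pre>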
<p>I am getting the below error while running <code>kubeadm init</code>:</p> <pre><code>[init] Using Kubernetes version: v1.16.2 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR Swap]: running with swap on is not supported. Please disable swap [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher </code></pre> <p>Trying to disable swap also fails:</p> <pre><code>$ sudo swapoff -a
swapoff: /swapfile: swapoff failed: Cannot allocate memory
</code></pre> <p>I am using an Ubuntu VM in Parallels Desktop.</p> <p><code>free -m</code> command output below:</p> <pre><code>$ free -m
              total        used        free      shared  buff/cache   available
Mem:            979         455          87           1         436         377
Swap:          2047         695        1352
</code></pre>
<p>You do not have enough RAM. Your machine has been surviving because it uses a swap file (i.e. using your hard drive as extra pseudo-RAM), but swap is not supported by Kubernetes, so <code>kubeadm</code> asks you to turn it off. <code>swapoff</code> then fails because the pages currently swapped out do not fit into your free RAM. Give the VM more memory (or stop some processes) so that <code>swapoff -a</code> can succeed.</p>
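<p>Once the <code>swapoff</code> succeeds, you will also want swap to stay off across reboots. A common way is commenting out the swap entry in <code>/etc/fstab</code>; the <code>sed</code> below is demonstrated on a sample file so you can check its effect before touching the real one:</p>

```shell
# A sample file standing in for /etc/fstab (illustrative content).
printf '/dev/sda1 / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > fstab.sample
# Comment out any line mentioning swap (run against /etc/fstab for real).
sed -i.bak -e '/swap/ s/^/#/' fstab.sample
cat fstab.sample   # the swap line is now prefixed with '#'
```
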
<p>I'm trying to configure HTTPS for my K8s/Istio cluster. I'm following this <a href="https://istio.io/docs/tasks/traffic-management/ingress/ingress-certmgr/" rel="noreferrer">official tutorial</a> step by step multiple times from scratch and get the same error every time when try to create a Certificate resource.</p> <pre><code>no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1" </code></pre> <p>I tried to install cert-manager and its CRD manually based on <a href="https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html" rel="noreferrer">official docs</a> and no help.</p> <pre><code>cert-manager-5ff755b6d5-9ncgr 1/1 Running 0 6m55s cert-manager-cainjector-576978ffc8-4db4l 1/1 Running 0 6m55s cert-manager-webhook-c67fbc858-wvtgs 1/1 Running 0 6m55s </code></pre> <p>Can't find any piece of information regarding this error since it works foe everyone after installing out of the box or after installing cert-manager's CRD.</p>
<p>I suggest you try <a href="https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html" rel="noreferrer">this</a> installation for cert-manager, and thereafter you can follow <a href="https://stackoverflow.com/questions/58423312/how-do-i-test-a-clusterissuer-solver/58436097?noredirect=1#comment103215785_58436097">this Stack Overflow post</a>; it should get the issue sorted. You just need to make a few substitutions at the places where ingress has to be replaced with istio.</p> <p>Kindly use</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 </code></pre> <p>in the ClusterIssuer if the apiVersion for the ClusterIssuer given in that post is not accepted.</p>
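<p>For reference, a <code>ClusterIssuer</code> against the newer API group could look roughly like this (a sketch only; the issuer name, email and the <code>istio</code> solver class are assumptions to adapt to your setup):</p> <pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: istio
</code></pre>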
<p>I'm trying to set up mongodb locally in a container using minikube, following this example repository here: <a href="https://github.com/pkdone/minikube-mongodb-demo" rel="nofollow noreferrer">https://github.com/pkdone/minikube-mongodb-demo</a></p> <p>I get the error:</p> <pre><code>The StatefulSet "mongod" is invalid:
* spec.selector: Required value
* spec.template.metadata.labels: Invalid value: map[string]string{"environment":"test", "replicaset":"MainRepSet", "role":"mongo"}: `selector` does not match template `labels`
</code></pre> <p>Here is my full yaml file:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: replicaset
                  operator: In
                  values:
                  - MainRepSet
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
      - name: secrets-volume
        secret:
          secretName: shared-bootstrap-data
          defaultMode: 256
      containers:
      - name: mongod-container
        #image: pkdone/mongo-ent:3.4
        image: mongo
        command:
        - "numactl"
        - "--interleave=all"
        - "mongod"
        - "--wiredTigerCacheSizeGB"
        - "0.1"
        - "--bind_ip"
        - "0.0.0.0"
        - "--replSet"
        - "MainRepSet"
        - "--auth"
        - "--clusterAuthMode"
        - "keyFile"
        - "--keyFile"
        - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
        - "--setParameter"
        - "authenticationMechanisms=SCRAM-SHA-1"
        resources:
          requests:
            cpu: 0.2
            memory: 200Mi
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: secrets-volume
          readOnly: true
          mountPath: /etc/secrets-volume
        - name: mongodb-persistent-storage-claim
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
</code></pre>
<p>The <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#statefulset-v1-apps" rel="nofollow noreferrer"><code>spec.selector</code></a> field is missing from your StatefulSet manifest.</p> <blockquote> <p>selector is a label query over pods that should match the replica count. <strong>It must match the pod template's labels</strong>. More info: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors</a></p> </blockquote> <p>You need to add the <code>spec.selector</code> to match one or more labels. Example:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
...
</code></pre> <hr> <p>The example that you are using is probably outdated. The <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#statefulset-v1beta1-apps" rel="nofollow noreferrer">former <code>apps/v1beta1</code></a> added a default value for <code>spec.selector</code> if empty, which is not the case anymore.</p> <blockquote> <p>selector is a label query over pods that should match the replica count. <strong>If empty, defaulted to labels on the pod template</strong>. More info: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors</a></p> </blockquote> <p>In <code>apps/v1beta2</code> and <code>apps/v1</code> it must be explicitly set.</p>
<p>I am trying to pass user credentials via Kubernetes secret to a mounted, password protected directory inside a Kubernetes Pod. The NFS folder <code>/mount/protected</code> has user access restrictions, i.e. only certain users can access this folder.</p> <p>This is my Pod configuration:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
  - name: my-volume
    hostPath:
      path: /mount/protected
      type: Directory
    secret:
      secretName: my-secret
  containers:
  - name: my-container
    image: &lt;...&gt;
    command: ["/bin/sh"]
    args: ["-c", "python /my-volume/test.py"]
    volumeMounts:
    - name: my-volume
      mountPath: /my-volume
</code></pre> <p>When applying it, I get the following error:</p> <pre><code>The Pod "my-pod" is invalid:
* spec.volumes[0].secret: Forbidden: may not specify more than 1 volume type
* spec.containers[0].volumeMounts[0].name: Not found: "my-volume"
</code></pre> <p>I created my-secret according to the following guide:<br> <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret</a><br> So basically:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: bXktYXBw
  password: PHJlZGFjdGVkPg==
</code></pre> <p>But when I mount the folder <code>/mount/protected</code> with:</p> <pre><code>spec:
  volumes:
  - name: my-volume
    hostPath:
      path: /mount/protected
      type: Directory
</code></pre> <p>I get a permission denied error <code>python: can't open file '/my-volume/test.py': [Errno 13] Permission denied</code> when running a Pod that mounts this volume path.</p> <p>My question is how can I tell my Pod that it should use specific user credentials to gain access to this mounted folder?</p>
<p>You're trying to tell Kubernetes that <code>my-volume</code> should get its content from <em>both</em> a host path and a Secret, and it can only have one of those.</p> <p>You don't need to manually specify a host path. Kubernetes will figure out someplace appropriate to put the Secret content and it will still be visible on the <code>mountPath</code> you specify within the container. (Specifying <code>hostPath:</code> at all is usually wrong, unless you can guarantee that the path will exist with the content you expect on <em>every</em> node in the cluster.)</p> <p>So change:</p> <pre class="lang-yaml prettyprint-override"><code>volumes:
- name: my-volume
  secret:
    secretName: my-secret
# but no hostPath
</code></pre>
<p>I have been working with <code>K8s-ingress</code> well so far, but I have a question.</p> <p>Can <code>ingress</code> route requests based on <strong>IP</strong>?</p> <p>I already know that ingress does routing based on hosts like a.com, b.com... to each service, and URIs like path /a-service/*, /b-service/* to each service.</p> <p>However, I'm curious about the idea that <code>Ingress</code> can route by <strong>IP</strong>. I'd like requests from my office (a certain IP) to route to a specific service for tests.</p> <p>Does it make sense? Any idea for that?</p>
<p>If this is just for testing I would just whitelist the IP. You can read the docs about <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range" rel="nofollow noreferrer">nginx ingress annotations</a></p> <blockquote> <p>You can specify allowed client IP source ranges through the <code>nginx.ingress.kubernetes.io/whitelist-source-range</code> annotation. The value is a comma separated list of <a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="nofollow noreferrer">CIDRs</a>, e.g. <code>10.0.0.0/24,172.10.0.1</code>.</p> </blockquote> <p>Example yaml might look like this:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelist
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders
          servicePort: 80
</code></pre> <p>Also it looks like you can do that in <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> (I have not tried it) in <strong>kind</strong> <code>ServiceRole</code> and <code>ServiceRoleBinding</code> for specifying detailed access control requirements. For this you would use the <code>source.ip</code> property. It's explained on <a href="https://istio.io/docs/reference/config/authorization/constraints-and-properties/" rel="nofollow noreferrer">Constraints and Properties</a></p>
<p>I am working on migrating my applications to Kubernetes. I am using EKS.</p> <p>I want to distribute my pods to different nodes, to avoid having a single point of failure. I read about <code>pod-affinity</code> and <code>anti-affinity</code> and the <code>required</code> and <code>preferred</code> modes.</p> <p><a href="https://stackoverflow.com/a/49900137/3333052">This answer</a> gives a very nice way to accomplish this.</p> <p>But my doubt is: let's say I have 3 nodes, of which 2 are already full (resource-wise). If I use <code>requiredDuringSchedulingIgnoredDuringExecution</code>, k8s will spin up new nodes and will distribute the pods to each node. And if I use <code>preferredDuringSchedulingIgnoredDuringExecution</code>, it will check for preferred nodes, and not finding different nodes, will deploy all pods on the third node only. In that case, it will again become a single point of failure.</p> <p>How do I solve this condition?</p> <p>One way I can think of is to have an over-provisioned cluster, so that there are always some extra nodes.</p> <p>The second way, I am not sure how to do this, but I think there should be a way of using both <code>requiredDuringSchedulingIgnoredDuringExecution</code> and <code>preferredDuringSchedulingIgnoredDuringExecution</code>.</p> <p>Can anyone help me with this? Am I missing something? How do people work with this condition?</p> <p>I am new to Kubernetes, so feel free to correct me if I am wrong or missing something.</p> <p>Thanks in advance</p> <p><strong>Note:</strong></p> <p>I don't have a problem running a few similar pods on the same node, I just don't want all of the pods to be running on the same node, just because there was only one node available to deploy on.</p>
<p>I see you are trying to make sure that k8s will never schedule all pod replicas on the same node.</p> <p>It's not possible to create a hard requirement like this for the Kubernetes scheduler.</p> <p>The scheduler will try its best to schedule your application as evenly as possible, but in a situation where you have 2 nodes without spare resources and 1 node where all pod replicas would be scheduled, k8s can do one of the following actions (depending on configuration):</p> <ol> <li>schedule your pods on one node (best effort/default)</li> <li>run one pod and not schedule the rest of the pods at all (<code>anti-affinity</code> + <code>requiredDuringSchedulingIgnoredDuringExecution</code>)</li> <li>create new nodes for pods if needed (<code>anti-affinity</code> + <code>requiredDuringSchedulingIgnoredDuringExecution</code> + <code>cluster autoscaler</code>)</li> <li>start deleting pods from nodes to free resources for high-priority pods (<a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer"><code>priority based preemption</code></a>) and reschedule preempted pods if possible</li> </ol> <p>Also read this <a href="https://medium.com/expedia-group-tech/how-to-keep-your-kubernetes-deployments-balanced-across-multiple-zones-dfe719847b41" rel="nofollow noreferrer">article</a> to get a better understanding of how the scheduler makes its decisions.</p> <p>You can also use a <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="nofollow noreferrer">PodDisruptionBudget</a> to tell Kubernetes to make sure a specified number of replicas is always working, remembering that:</p> <blockquote> <p>A disruption budget does not truly guarantee that the specified number/percentage of pods will always be up.</p> </blockquote> <p>Kubernetes will, however, take it into consideration when making scheduling decisions.</p>
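<p>For reference, a soft (preferred) anti-affinity rule of the kind discussed above might look like the sketch below; the <code>app: my-app</code> label is a placeholder for your own pod labels:</p>

```yaml
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-app            # placeholder: your pod's label
          # spread across nodes; use a zone key to spread across zones instead
          topologyKey: kubernetes.io/hostname
```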
<p>I am having an issue while creating a k8s cluster with the kops command.</p> <p>This is the error I was getting when trying to create the cluster:</p> <pre><code>W1104 16:31:41.803150 18534 apply_cluster.go:945] **unable to pre-create DNS records - cluster startup may be slower: Error pre-creating DNS records: InvalidChangeBatch**: [RRSet with DNS name api.dev.devops.com. is not permitted in zone uswest2.dev.devops.com., RRSet with DNS name api.internal.dev.devops.com. is not permitted in zone uswest2.dev.devops.com.]
</code></pre> <p>Commands I used to create the cluster:</p> <pre><code>kops create cluster --cloud=aws --zones=us-west-2b --name=dev.devops.com --dns-zone=uswest2.dev.devops.com --dns private
kops update cluster --name dev.devops.com --yes
</code></pre> <p>Can someone please help me. Thanks in advance!!</p>
<p>You have registered your <code>dns-zone</code> as <code>uswest2.dev.devops.com</code> and you are referring in the command to the name <code>dev.devops.com</code>.</p> <p>If you check <a href="https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md" rel="nofollow noreferrer">this docs</a>, especially the <a href="https://github.com/kubernetes/kops/blob/master/docs/getting_started/aws.md#configure-dns" rel="nofollow noreferrer">Configure DNS</a> section, you will find that:</p> <blockquote> <p>In this scenario you want to contain all kubernetes records under a subdomain of a domain you host in Route53. This requires creating a second hosted zone in route53, and then setting up route delegation to the new zone.</p> <p>In this example you own <code>example.com</code> and your records for Kubernetes would look like <code>etcd-us-east-1c.internal.clustername.subdomain.example.com</code></p> </blockquote> <p>Based on this doc example, <code>etcd-us-east-1c.internal.clustername.subdomain.example.com</code>, your <code>dev.devops.com</code> is the domain and <code>uswest2.dev.devops.com</code> is your subdomain.</p> <p>In the <a href="https://github.com/kubernetes/kops/blob/master/docs/examples/kops-test-route53-subdomain.md" rel="nofollow noreferrer">Route 53</a> docs you will be able to find an example where the subdomain for <code>example.org</code> was set as <code>kopsclustertest</code>:</p> <pre><code>export ID=$(uuidgen)
echo $ID
ae852c68-78b3-41af-85ee-997fc470fd1c
aws route53 \
  create-hosted-zone \
  --output=json \
  --name kopsclustertest.example.org \
  --caller-reference $ID | \
  jq .DelegationSet.NameServers
[
  "ns-1383.awsdns-44.org",
  "ns-829.awsdns-39.net",
  "ns-346.awsdns-43.com",
  "ns-1973.awsdns-54.co.uk"
]
</code></pre> <p>At this moment: <strong>subdomain:</strong> <code>kopsclustertest</code>, <strong>domain:</strong> <code>example.org</code></p> <p>A few chapters below you will find the <strong>KOPS CLUSTER CREATION</strong>
section.</p> <pre><code>kops create cluster \
  --cloud=aws \
  --master-zones=us-east-1a,us-east-1b,us-east-1c \
  --zones=us-east-1a,us-east-1b,us-east-1c \
  --node-count=2 \
  --node-size=t2.micro \
  --master-size=t2.micro \
  ${NAME}
</code></pre> <p>with the information that</p> <blockquote> <p>The environment variable ${NAME} was previously exported with our cluster name: <code>mycluster01.kopsclustertest.example.org</code>.</p> </blockquote> <p>It means that before <code>subdomain.domain</code> you need to specify your cluster name.</p> <p>In short, in the <code>--name</code> flag you must specify: <code>&lt;your_cluster_name&gt;.subdomain.domain</code></p> <p>Please try:</p> <p><code>kops create cluster --cloud=aws --zones=us-west-2b --name=my-cluster.uswest2.dev.devops.com --dns-zone=uswest2.dev.devops.com --dns private</code></p>
<p>I installed <code>minikube</code> and <code>Virtualbox</code> on OS X and was working fine until I executed</p> <p><code>minikube delete</code></p> <p>After that I tried</p> <p><code>minikube start</code></p> <p>and got the following</p> <blockquote> <p>😄 minikube v1.5.2 on Darwin 10.15.1</p> <p>✨ Automatically selected the 'hyperkit' driver (alternates: [virtualbox])</p> <p>🔑 The 'hyperkit' driver requires elevated permissions. The following commands will be executed:</p> <p>...</p> </blockquote> <p>I do not want to use a different driver, why is this happening? I reinstalled minikube but the problem persisted. I could set which driver to use with:</p> <p><code>minikube start --vm-driver=virtualbox</code></p> <p>But I would rather have the default behavior after a fresh install. How can I set the default driver?</p>
<p>After googling a bit I found how to do it <a href="https://github.com/kubernetes/minikube/issues/637" rel="noreferrer">here</a>:</p> <p><code>minikube config set vm-driver virtualbox</code></p> <p>This command's output is</p> <blockquote> <p>⚠️ These changes will take effect upon a minikube delete and then a minikube start</p> </blockquote> <p>So make sure to run</p> <p><code>minikube delete</code></p> <p>and</p> <p><code>minikube start</code></p>
<p>I am trying to deploy the logstash helm chart on a Kubernetes 1.16 cluster, but it is giving the below error message. How do I update this helm chart for this Kubernetes API change?</p> <pre><code>helm install --name logstash stable/logstash -f values.yaml
Error: validation failed: unable to recognize &quot;&quot;: no matches for kind &quot;StatefulSet&quot; in version &quot;apps/v1beta2&quot;
</code></pre> <p><a href="https://github.com/helm/charts/tree/master/stable/logstash" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/logstash</a></p> <p>Thanks</p>
<p>Kubernetes 1.16 <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="nofollow noreferrer">deprecated</a> the <code>apps/v1beta2</code> API version. You need to use <code>apps/v1</code> instead.</p> <p>The <em>stable/logstash</em> chart already <a href="https://github.com/helm/charts/commit/0274cecf530f6191ea2ef667268debf3fa857dd2" rel="nofollow noreferrer">has a commit</a> that upgraded the API version. Make sure that you are using the <code>2.3.0</code> chart version and it should work. E.g:</p> <pre><code>helm repo update
helm install --name logstash stable/logstash --version=2.3.0 -f values.yaml
</code></pre>
<p>I did look at this, but I don't feel like the answer is covered: <a href="https://stackoverflow.com/questions/55259927/kubernetes-persistence-volume-and-persistence-volume-claim-exceeded-storage">kubernetes persistence volume and persistence volume claim exceeded storage</a></p> <p>Anyway, I have tried to look in the documentation but could not find out what is going to happen when a PVC Azure disk is full. So, we have a grafana application which monitors some data. We use the PVC to make sure that the data is saved if the pod gets killed. Right now the pod continuously fetches data and the disk gets more and more full. What happens when the disk is full? Ideally it would be nice to implement some sort of functionality such that when it gets, say, 80% full, it removes, say, 20% of the data, starting from the oldest for example. Or how do we tackle this problem?</p> <p>pvc:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: graphite-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 256Gi
</code></pre>
<p>Think of a PVC as a folder that is mounted into your container running the grafana service. It has a fixed size which you provided and, as per the question, it is not going to increase.</p> <p><strong><em>What happens when the disk is full?</em></strong></p> <p>Nothing different happens here than for a normal service running on a system that runs out of disk. If it were your local machine or a cloud VM, you would get an alert about storage, and if you didn't take action, the service would error out with an <code>out of disk space error</code>. You can use services like Prometheus with the Kubernetes plugin to get storage space alerts, but by default Kubernetes won't provide any alert.</p> <p><em>ref</em> - <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#full-metrics-pipeline" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#full-metrics-pipeline</a></p> <p><strong><em>How do we tackle this (disk space) problem?</em></strong></p> <p>Again, the same way you would on a normal system; there are a number of solutions. But if you think about it, the system or VM or Kubernetes is not the right candidate to decide which files should be removed and which should be kept, the reason being Kubernetes does not know what the data is and, in fact, it does not own the data. The service does. On the other hand, you can use the service, or create a custom archiving service, to take the data from your Grafana PVC and place it in S3 or any other storage.</p>
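<p>To illustrate the "delete the oldest data at a threshold" idea from the question, a sketch of the generic mechanism is below. The directory path and byte budget are placeholders, and for Grafana/Graphite specifically you would normally prefer the tools' own retention settings over raw file deletion:</p>

```python
# Sketch: prune oldest files under data_dir until usage fits a byte budget.
# Could run as a CronJob or sidecar mounting the same PVC (an assumption,
# not an official recipe).
import os

def total_size(data_dir):
    """Sum of all file sizes under data_dir, in bytes."""
    return sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, names in os.walk(data_dir)
        for name in names
    )

def prune_oldest(data_dir, budget_bytes):
    """Remove files, oldest first by modification time, until under budget."""
    while total_size(data_dir) > budget_bytes:
        files = [
            os.path.join(root, name)
            for root, _, names in os.walk(data_dir)
            for name in names
        ]
        if not files:
            break
        os.remove(min(files, key=os.path.getmtime))
```

<p>A threshold like "80% full" would translate to <code>budget_bytes = 0.8 * capacity</code> for the volume in question.</p>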
<p>Does Kubernetes have a way of reusing manifests without copying and pasting them? Something akin to Terraform templates.</p> <p>Is there a way of passing values between manifests?</p> <p>I am looking to deploy the same service to multiple environments and wanted a way to call the necessary manifest and pass in the environment specific values.</p> <p>I'd also like to do something like:</p> <p><strong>Generic-service.yaml</strong></p> <pre><code>Name={variablename}
</code></pre> <p><strong>Foo-service.yaml</strong></p> <pre><code>Use=General-service.yaml
variablename=foo-service-api
</code></pre> <p>Any guidance is appreciated.</p>
<p><a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a>, now part of <code>kubectl apply -k</code> is a way to <em>parameterize</em> your Kubernetes manifests files.</p> <p>With Kustomize, you have a <em>base manifest</em> file (e.g. of <code>Deployment</code>) and then multiple <em>overlay</em> directories for parameters e.g. for <em>test</em>, <em>qa</em> and <em>prod</em> environment.</p> <p>I would recommend to have a look at <a href="https://speakerdeck.com/spesnova/introduction-to-kustomize" rel="nofollow noreferrer">Introduction to kustomize</a>.</p> <p>Before Kustomize it was common to use Helm for this.</p>
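<p>As a rough sketch of that base/overlay layout (file contents shown together for brevity; the deployment name and replica patch are placeholders, and older kustomize versions use <code>bases:</code> where newer ones accept <code>resources:</code>):</p>

```yaml
# base/kustomization.yaml
resources:
- deployment.yaml

# overlays/prod/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- replica-patch.yaml

# overlays/prod/replica-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service        # must match the name in base/deployment.yaml
spec:
  replicas: 3
```

<p>Then <code>kubectl apply -k overlays/prod</code> renders the base with the prod-specific values applied.</p>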
<p>In <strong>Minikube</strong>, I created many <em>Persistent Volumes</em> and their <em>claims</em> as practice. Do they reserve disk space on the local machine?</p> <p>I checked disk usage.</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  storageClassName: shared
  hostPath:
    path: /data/config
---
$ kubectl create -f 7e1_pv.yaml
$ kubectl get pv
</code></pre> <p>Now create YAML for the Persistent Volume Claim:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: shared
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 90Gi
</code></pre> <pre><code>$ kubectl create -f 7e2_pvc.yaml
</code></pre>
<p>No, they don't reserve space — a <code>hostPath</code> volume is just a local folder on the node (the Minikube VM). The size value is not enforced; it is only used when matching claims to volumes.</p>
<p>When following tutorial: <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/</a> I run into an error. The following command fails:</p> <pre><code>kubectl patch sts web -p '{"spec":{"replicas":3}}'
Error from server (BadRequest): json: cannot unmarshal string into Go value of type map[string]interface {}
</code></pre> <p>How do I fix this?</p> <p>This is the container image on the pods: k8s.gcr.io/nginx-slim:0.8</p> <p>I am using minikube on Windows 7 Pro and the standard cmd shell.</p> <p>kubectl version</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af9d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af9d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Try surrounding it with double quotes and then escaping the double quotes inside:</p> <pre><code>kubectl patch sts web -p "{\"spec\":{\"replicas\":3}}" </code></pre>
<p>I’m finally dipping my toes in the kubernetes pool and wanted to get some advice on the best way to approach a problem I have:</p> <p><strong>Tech we are using:</strong></p> <ul> <li>GCP</li> <li>GKE</li> <li>GCP Pub/Sub</li> </ul> <p><strong>We need to do bursts of batch processing spread out across a fleet and have decided on the following approach:</strong></p> <ol> <li>New raw data flows in</li> <li>A node analyses this and breaks the data up into manageable portions which are pushed onto a queue</li> <li>We have a cluster with Autoscaling On and Min Size ‘0’</li> <li>A Kubernetes job spins up a pod for each new message on this cluster</li> <li>When pods can’t pull any more messages they terminate successfully</li> </ol> <p><strong>The question is:</strong></p> <ul> <li>What is the standard approach for triggering jobs such as this? <ul> <li>Do you create a new job each time or are jobs meant to be long-lived and re-run?</li> </ul></li> <li>I have only seen examples of using a yaml file, however we would probably want the node which did the portioning of work to create the job, as it knows how many parallel pods should be run. Would it be recommended to use the python sdk to create the job spec programmatically? Or if jobs are long-lived would you simply hit the k8s API and modify the parallel pods required, then re-run the job?</li> </ul>
<p>Jobs in Kubernetes are meant to be short-lived and are not designed to be reused. Jobs are designed for run-once, run-to-completion workloads. Typically they are assigned a specific task, i.e. to process a single queue item.</p> <p>However, if you want to process multiple items in a work queue with a single instance then it is generally advisable to instead use a Deployment to scale a pool of workers that continue to process items in the queue, scaling the number of pool workers dependent on the number of items in the queue. If there are no work items remaining then you can scale the deployment to 0 replicas, scaling back up when there is work to be done.</p> <p>To create and control your workloads in Kubernetes the best practice is to use the Kubernetes SDK. While you can generate YAML files and shell out to another tool like <code>kubectl</code>, using the SDK simplifies configuration and error handling, and also allows for simplified introspection of resources in the cluster.</p>
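<p>A sketch of that pattern with the official Python client (<code>pip install kubernetes</code>) is below. The Deployment name, namespace, and sizing policy are assumptions for illustration, not part of any standard:</p>

```python
# Sketch: scale a worker Deployment from queue depth using the official
# Kubernetes Python client. Names and namespace are placeholders.

def desired_replicas(queue_length, items_per_worker, max_workers=10):
    """Pure sizing policy: one worker per items_per_worker queued items,
    capped at max_workers; scale to zero when the queue is empty."""
    if queue_length <= 0:
        return 0
    needed = -(-queue_length // items_per_worker)  # ceiling division
    return min(needed, max_workers)

def scale_workers(queue_length):
    # Imported here so the sizing logic above stays usable without the client.
    from kubernetes import client, config
    config.load_kube_config()  # use load_incluster_config() inside the cluster
    client.AppsV1Api().patch_namespaced_deployment_scale(
        name="queue-worker",   # hypothetical worker Deployment
        namespace="default",
        body={"spec": {"replicas": desired_replicas(queue_length,
                                                    items_per_worker=5)}},
    )
```

<p>The node that portions the work could call <code>scale_workers(queue_length)</code> after publishing a batch, rather than creating a fresh Job per message.</p>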
<p>I am trying to estimate the resource (cpu) request and limit values, for which I want to know the max cpu usage of a pod in the last month using prometheus.</p> <p>I checked this question but couldn't get what I want: <a href="https://stackoverflow.com/questions/40717605/generating-range-vectors-from-return-values-in-prometheus-queries">Generating range vectors from return values in Prometheus queries</a></p> <p>I tried this but it seems max_over_time doesn't work over rate:</p> <pre><code>max (
  max_over_time(
    rate(container_cpu_usage_seconds_total[5m])[30d]
  )
) by (pod_name)
</code></pre> <pre><code>invalid parameter 'query': parse error at char 64: range specification must be preceded by a metric selector, but follows a *promql.Call instead
</code></pre>
<p>You'd need to capture the inner expression (rate of container cpu usage) as a <a href="https://prometheus.io/docs/practices/rules/" rel="noreferrer">recording rule</a>:</p> <pre><code>- record: container_cpu_usage_seconds_total:rate5m
  expr: rate(container_cpu_usage_seconds_total[5m])
</code></pre> <p>then use this new timeseries to calculate max_over_time:</p> <pre><code>max (
  max_over_time(container_cpu_usage_seconds_total:rate5m[30d])
) by (pod_name)
</code></pre> <p>This is only needed in Prometheus versions older than 2.7, as <a href="https://prometheus.io/docs/prometheus/latest/querying/examples/#subquery" rel="noreferrer">subqueries can be calculated on the fly</a>; see <a href="https://prometheus.io/blog/2019/01/28/subquery-support/" rel="noreferrer">this blog post for more details</a>.</p> <p><em>Bear in mind though</em>, if you're planning to use this composite query (max of max_over_time of data collected in the last 30 days) for alerting or visualisation (rather than a one-off query), then you'd <em>still want to use the recording rule</em> to improve the query's performance. It's the classic CS computational complexity tradeoff (memory/storage space required for storing the recording rule as a separate timeseries vs. the computational resources needed to process data for 30 days!)</p>
<p>I have deployed a Kubernetes cluster composed of a master and two workers using <code>kubeadm</code> and the Flannel network driver (so I passed the <code>--pod-network-cidr=10.244.0.0/16</code> flag to <code>kubeadm init</code>).</p> <p>Those nodes are communicating together using a VPN so that:</p> <ul> <li>Master node IP address is 10.0.0.170</li> <li>Worker 1 IP address is 10.0.0.247</li> <li>Worker 2 IP address is 10.0.0.35</li> </ul> <p>When I create a new pod and I try to ping google I have the following error:</p> <pre><code>/ # ping google.com
ping: bad address 'google.com'
</code></pre> <p>I followed the instructions from the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">Kubernetes DNS debugging resolution</a> documentation page:</p> <pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10

nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre> <h3>Check the local DNS configuration first</h3> <pre><code>$ kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
</code></pre> <h3>Check if the DNS pod is running</h3> <pre><code>$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-cqzb7   1/1     Running   0          7d18h
coredns-5c98db65d4-xc5d7   1/1     Running   0          7d18h
</code></pre> <h3>Check for Errors in the DNS pod</h3> <pre><code>$ for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
.:53
2019-10-28T13:40:41.834Z [INFO] CoreDNS-1.3.1
2019-10-28T13:40:41.834Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-10-28T13:40:41.834Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
.:53
2019-10-28T13:40:42.870Z [INFO]
CoreDNS-1.3.1
2019-10-28T13:40:42.870Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-10-28T13:40:42.870Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
</code></pre> <h3>Is DNS service up?</h3> <pre><code>$ kubectl get svc --namespace=kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   &lt;none&gt;        53/UDP,53/TCP,9153/TCP   7d18h
</code></pre> <h3>Are DNS endpoints exposed?</h3> <pre><code>$ kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                                               AGE
kube-dns   10.244.0.3:53,10.244.0.4:53,10.244.0.3:53 + 3 more...   7d18h
</code></pre> <h3>Are DNS queries being received/processed?</h3> <p>I made the update to the coredns ConfigMap, ran the <code>nslookup kubernetes.default</code> command again, and here is the result:</p> <pre><code>$ for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
.:53
2019-10-28T13:40:41.834Z [INFO] CoreDNS-1.3.1
2019-10-28T13:40:41.834Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-10-28T13:40:41.834Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
[INFO] Reloading
2019-11-05T08:12:12.511Z [INFO] plugin/reload: Running configuration MD5 = 906291470f7b1db8bef629bdd0056cad
[INFO] Reloading complete
2019-11-05T08:12:12.608Z [INFO] 127.0.0.1:55754 - 7434 "HINFO IN 4808438627636259158.5471394156194192600.
udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.095189791s
.:53
2019-10-28T13:40:42.870Z [INFO] CoreDNS-1.3.1
2019-10-28T13:40:42.870Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-10-28T13:40:42.870Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
[INFO] Reloading
2019-11-05T08:12:47.988Z [INFO] plugin/reload: Running configuration MD5 = 906291470f7b1db8bef629bdd0056cad
[INFO] Reloading complete
2019-11-05T08:12:48.004Z [INFO] 127.0.0.1:51911 - 60104 "HINFO IN 4077052818408395245.3902243105088660270. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016522153s
</code></pre> <p>So it seems that the DNS pods are receiving the requests.</p> <h2>But I had this error already!</h2> <p>That error happened to me the first time I deployed the cluster.</p> <p>At that time, I noticed that <code>kubectl get nodes -o wide</code> was showing the workers' public IP addresses as "INTERNAL-IP" instead of the private ones.</p> <p>Looking further, I found out that on the worker nodes kubelet was missing the <code>--node-ip</code> flag, so I added it and restarted kubelet and the issue was gone. I then concluded that the missing flag was the reason, but it seems not to be the case, as the <code>kubectl get nodes -o wide</code> command now shows the internal IP addresses as "INTERNAL-IP" for the workers.</p> <h2>And now</h2> <p>The DNS server IP address 10.96.0.10 looks wrong to me, and I can't ping it from the pod.
The DNS pods have the IP addresses 10.244.0.3 and 10.244.0.4, which I can't ping either.</p> <p>I just tried to delete the coredns pods so that they are scheduled again, and now their IP addresses have changed; I can ping them from the pod, and <code>kubectl exec -ti busybox -- nslookup kubernetes.default</code> works:</p> <pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
</code></pre> <p>But the <code>resolv.conf</code> file still has the "invalid" inside:</p> <pre><code>$ kubectl exec busybox cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
</code></pre> <ul> <li>Can anyone explain to me what happened, please?</li> <li>And how can I remove this "invalid" entry from the <code>resolv.conf</code> file?</li> </ul>
<p>As configured in the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns" rel="noreferrer">CoreDNS ConfigMap</a>, the default upstream nameservers are inherited from the node; that is, they are used for everything outside the cluster domain (.cluster.local).</p> <p>So "invalid" is an entry copied from the Node's <code>/etc/resolv.conf</code> file during Pod creation.</p> <p>If you manually modify <code>/etc/resolv.conf</code> on your Node, every Pod with <code>dnsPolicy: ClusterFirst</code> will inherit an <code>/etc/resolv.conf</code> with this modification.</p> <p>So, after adding the <code>--node-ip</code> flag to kubelet and restarting the CoreDNS Pods, you should re-deploy your busybox Pod so it can inherit <code>/etc/resolv.conf</code> from the Node.</p>
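<p>For reference, this policy is set per Pod; a minimal sketch (pod name and image are illustrative) of a Pod whose container inherits the node's search domains:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  dnsPolicy: ClusterFirst   # the default; search entries are inherited from the node's /etc/resolv.conf
  containers:
  - name: main
    image: busybox:1.28
    command: ["sleep", "3600"]
```

<p>Deleting and recreating such a Pod after fixing the node's <code>/etc/resolv.conf</code> gives it a clean copy.</p>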
<p>I apologize if this is an ignorant question, but I am building out a Kubernetes cluster and I really like the idea of abstracting my computing infrastructure from a single cloud provider and steering away from a VM platform. But what about statefulness? I want to be able to set up a MySQL server, for example, and keep that data for life; I want Kubernetes to load balance a MySQL container with a data drive. We speak about containers and we think of life and death within seconds, but what about when we want to keep data around and build a kick-ass data center without VMs? Is there a concept of being stateful in this paradigm? </p>
<p>Kubernetes provides StatefulSets for deploying stateful containers like databases. Follow the link below to understand how to deploy a MySQL database in highly available mode: <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/</a></p>
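<p>As a rough sketch of what such a deployment builds on (all names and sizes here are illustrative; see the linked tutorial for a complete example), the key pieces are a stable network identity and a per-replica persistent volume:</p>

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql          # headless Service giving each replica a stable DNS name
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  # Each replica gets its own PersistentVolumeClaim, which survives
  # pod rescheduling and restarts -- this is where "data for life" lives.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

<p>The volume outlives any individual pod, so the container can die and be rescheduled without losing the database files.</p>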
<p>I have a Kafka cluster that is running on K8S. I am using the Confluent Kafka image, and I have an EXTERNAL listener that is working. How can I add SSL encryption? Should I use an ingress? Where can I find good documentation? Thank you</p>
<p>You have a <a href="https://gist.github.com/aramalipoor/f62355cd440584986bfd91bd69e52c49#file-0-readme-md" rel="noreferrer">manual way in this gist</a>, which does not use the confluent image.</p> <p>But for <a href="https://github.com/confluentinc/cp-helm-charts" rel="noreferrer">Confluent and its Helm chart</a> (see "<a href="https://www.confluent.io/blog/getting-started-apache-kafka-kubernetes/" rel="noreferrer">Confluent Operator: Getting Started with Apache Kafka and Kubernetes</a>" from <a href="https://twitter.com/rohit2b" rel="noreferrer">Rohit Bakhshi</a>), you can follow:</p> <p>"<a href="https://medium.com/weareservian/encryption-authentication-and-external-access-for-confluent-kafka-on-kubernetes-69c723a612fc" rel="noreferrer">Encryption, authentication and external access for Confluent Kafka on Kubernetes</a>" from <a href="https://github.com/ryan-c-m" rel="noreferrer">Ryan Morris</a></p> <blockquote> <p>Out of the box, the helm chart doesn’t support SSL configurations for encryption and authentication, or exposing the platform for access from outside the Kubernetes cluster. </p> <p>To implement these requirements, there are a few modifications to the installation needed.<br> In summary, they are:</p> <ul> <li>Generate some private keys/certificates for brokers and clients</li> <li>Create Kubernetes Secrets to provide them within your cluster</li> <li>Update the broker StatefulSet with your Secrets and SSL configuration</li> <li>Expose each broker pod via an external service</li> </ul> </blockquote>
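<p>For instance, the "create Kubernetes Secrets" step could package the broker keystore and truststore like this (the secret name, file names and password are illustrative placeholders):</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kafka-broker-tls
type: Opaque
data:
  kafka.keystore.jks: <base64-encoded keystore>
  kafka.truststore.jks: <base64-encoded truststore>
stringData:
  keystore-password: changeit   # plain-text values under stringData are encoded for you
```

<p>The broker StatefulSet then mounts this Secret as a volume and points its SSL listener configuration at the mounted files.</p>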
<p>I need to configure nginx to send a JSON response for the 413 status code. I've tried this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: {{ .Chart.Name }}-app2 annotations: kubernetes.io/ingress.class: "nginx" ingress.kubernetes.io/server-snippet: | proxy_intercept_errors on; error_page 413 /413.json; location /413.json { return 413 '{"error": {"status_code": 413,"status": "TEST"}}'; } spec: tls: - hosts: - app2.example.com secretName: wildcard-tls rules: - host: app2.example.com http: paths: - path: / backend: serviceName: {{ .Chart.Name }}-app2 servicePort: 80 </code></pre> <p>but no success. It still returns an HTML page with the text <code>413 Request Entity Too Large</code>.</p> <p>Could you please suggest how to configure nginx in kubernetes to return JSON on 413?</p>
<p>You can try adding this snippet to the ingress:</p> <pre><code>nginx.ingress.kubernetes.io/server-snippet: | location @custom_413 { default_type application/json; return 413 '{"error": {"status_code": 413,"status": "TEST"}}'; } error_page 413 @custom_413; </code></pre>
<p>How do I get a container's full id from inside of itself in Kubernetes? I want to add the container id to the log of my application, which is running as a Kubernetes container. </p>
<p>The <code>HOSTNAME</code> environment variable is readily available in any container running on Kubernetes and gives the unique name of the pod in which the container is running. Use the means provided by the logging framework to access the environment variable and make it part of the logging pattern, or to programmatically add its value to log entries.</p> <p>That should do for application logging purposes, assuming there is only one <em>application</em> container in the pod (which is regarded as a best practice anyway).</p>
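<p>For example, in Python (the same idea applies to any language or logging framework; the format string is just an illustration):</p>

```python
import logging
import os

# Kubernetes sets HOSTNAME to the pod name in every container of the pod.
pod_name = os.environ.get("HOSTNAME", "unknown")

# Bake the pod name into the logging pattern so every entry carries it.
logging.basicConfig(format=f"%(asctime)s [{pod_name}] %(levelname)s %(message)s")
logger = logging.getLogger("myapp")
logger.warning("application started")
```

<p>Every log line then identifies which pod (and therefore which container, with one app container per pod) emitted it.</p>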
<p>I have a small <code>cloudbuild.yaml</code> file where I build a Docker image, push it to <a href="https://cloud.google.com/container-registry/" rel="nofollow noreferrer">Google container registry</a> (GCR) and then apply the changes to my Kubernetes cluster. It looks like this:</p> <pre><code>steps: - name: 'gcr.io/cloud-builders/docker' entrypoint: 'bash' args: [ '-c', 'docker pull gcr.io/$PROJECT_ID/frontend:latest || exit 0' ] - name: "gcr.io/cloud-builders/docker" args: [ "build", "-f", "./services/frontend/prod.Dockerfile", "-t", "gcr.io/$PROJECT_ID/frontend:$REVISION_ID", "-t", "gcr.io/$PROJECT_ID/frontend:latest", ".", ] - name: "gcr.io/cloud-builders/docker" args: ["push", "gcr.io/$PROJECT_ID/frontend"] - name: "gcr.io/cloud-builders/kubectl" args: ["apply", "-f", "kubernetes/gcp/frontend.yaml"] env: - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a" - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas" - name: "gcr.io/cloud-builders/kubectl" args: ["rollout", "restart", "deployment/frontend-deployment"] env: - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a" - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas" </code></pre> <p>The build runs smoothly, until the last step. <code>args: ["rollout", "restart", "deployment/frontend-deployment"]</code>. It has the following log output:</p> <pre><code>Already have image (with digest): gcr.io/cloud-builders/kubectl Running: gcloud container clusters get-credentials --project="cents-ideas" --zone="europe-west3-a" "cents-ideas" Fetching cluster endpoint and auth data. kubeconfig entry generated for cents-ideas. Running: kubectl rollout restart deployment/frontend-deployment error: unknown command "restart deployment/frontend-deployment" See 'kubectl rollout -h' for help and examples. </code></pre> <p>Allegedly, <code>restart</code> is an unknown command. But it works when I run <code>kubectl rollout restart deployment/frontend-deployment</code> manually.</p> <p>How can I fix this problem?</p>
<p>Looking at the <a href="https://v1-15.docs.kubernetes.io/docs/setup/release/notes/" rel="nofollow noreferrer">Kubernetes release notes</a>, the <code>kubectl rollout restart</code> command was introduced in the v1.15 version. In your case, it seems Cloud Build is using an older version where this command wasn't implemented yet.</p> <p>After doing some tests, it appears Cloud Build uses a kubectl client version that depends on the cluster's server version. For example, when running the following build:</p> <pre><code>steps: - name: "gcr.io/cloud-builders/kubectl" args: ["version"] env: - "CLOUDSDK_COMPUTE_ZONE=&lt;cluster_zone&gt;" - "CLOUDSDK_CONTAINER_CLUSTER=&lt;cluster_name&gt;" </code></pre> <p>if the cluster's master version is v1.14, Cloud Build uses a v1.14 kubectl client and returns the same <code>unknown command "restart"</code> error message. When the master's version is v1.15, Cloud Build uses a v1.15 kubectl client and the command runs successfully.</p> <p>So in your case, I suspect your cluster "cents-ideas" master version is &lt;1.15, which would explain the error you're getting. As for why it works when you run the command manually (I understand locally), I suspect your kubectl may be authenticated to another cluster with master version >=1.15.</p>
<p>I was trying to TLS bootstrap a worker node to join my existing cluster of 1 master and 2 worker nodes. Below is the process which I followed -</p> <ol> <li>Create a bootstrap token on the master for initial authentication.</li> <li>Create proper cluster role bindings for letting kubelet raise, approve and rotate certificates.</li> <li>Create a bootstrap-kubeconfig file on the worker node.</li> <li>Create a kubelet service and start it.</li> </ol> <p>All this works fine and my new worker node is able to join the cluster without any issues.</p> <p>But now I came across something called cluster-info configmap signing during the bootstrap process. <a href="https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/" rel="nofollow noreferrer">link</a></p> <p>What exactly is it, and how does it help me in the bootstrap process? I went through the k8s docs but they don't give a lot of detail on this. All I know is that you have to create a configmap named cluster-info, but I am not sure how &amp; why it is used.</p> <p>Thanks in advance!</p> <p>p.s. - If there is any link where this process is elaborated in detail with practical examples, then please share.</p>
<p>You can find some info in the kubernetes code: </p> <p><a href="https://github.com/kubernetes/kubernetes/blob/1493757d69a8e0032128a5dadc56b168b00a6519/cmd/kubeadm/app/phases/bootstraptoken/clusterinfo/clusterinfo.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/1493757d69a8e0032128a5dadc56b168b00a6519/cmd/kubeadm/app/phases/bootstraptoken/clusterinfo/clusterinfo.go</a></p> <p>And some info, for when you are bootstrapping using kubeadm:</p> <p><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cluster-lifecycle/bootstrap-discovery.md</a></p> <p>kubeadm will implement the following flow:</p> <p>kubeadm connects to the API server address specified over TLS. As we don't yet have a root certificate to trust, this is an insecure connection and the server certificate is not validated. kubeadm provides no authentication credentials at all.</p> <p>Implementation note: the API server doesn't have to expose a new and special insecure HTTP endpoint. (D)DoS concern: Before this flow is secure to use/enable publicly (when not bootstrapping), the API Server must support rate-limiting.</p> <p>kubeadm requests a ConfigMap containing the kubeconfig file defined above. This ConfigMap exists at a well known URL:</p> <p>https://&lt;apiserver&gt;/api/v1/namespaces/kube-public/configmaps/cluster-info</p> <p>This ConfigMap is really public. Users don't need to authenticate to read this ConfigMap. In fact, the client MUST NOT use a bearer token here as we don't trust this endpoint yet.</p> <p>The API server returns the ConfigMap with the kubeconfig contents as normal. Extra data items on that ConfigMap contain JWS signatures. kubeadm finds the correct signature based on the token-id part of the token.</p> <p>kubeadm verifies the JWS and can now trust the server.
Further communication is simpler, as the CA certificate in the kubeconfig file can now be trusted.</p> <p>As a way to secure your cluster, you can turn off public access to the cluster-info ConfigMap: </p> <p><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join#turning-off-public-access-to-the-cluster-info-configmap" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join#turning-off-public-access-to-the-cluster-info-configmap</a></p>
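<p>For reference, the published ConfigMap has roughly this shape (abbreviated; the kubeconfig payload and JWS values are placeholders):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-public
data:
  # The kubeconfig an unauthenticated client downloads during bootstrap
  kubeconfig: |
    apiVersion: v1
    kind: Config
    clusters:
    - name: ""
      cluster:
        server: https://<apiserver>:6443
        certificate-authority-data: <base64-encoded CA certificate>
  # One JWS signature per bootstrap token; the key is derived from the token-id
  jws-kubeconfig-<token-id>: <JWS signature over the kubeconfig value>
```

<p>The joining node verifies the <code>jws-kubeconfig-*</code> entry with the secret half of its bootstrap token, which is what lets it trust the CA certificate it just downloaded over an unauthenticated connection.</p>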
<p>I have deployed a mosquitto image in a pod in kubernetes with this dockerfile:</p> <pre><code>FROM eclipse-mosquitto:1.6.7 </code></pre> <p>I downloaded the image and added it to my cluster, using this yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mosquitto-demo namespace: default spec: replicas: 1 selector: matchLabels: bb: web template: metadata: labels: bb: web spec: containers: - name: bb-site image: mosquittotest:1.0 --- apiVersion: v1 kind: Service metadata: name: mosquitto-entrypoint namespace: default spec: type: NodePort selector: bb: web ports: - port: 8080 targetPort: 8080 nodePort: 30001 </code></pre> <p>It is running correctly.</p> <p>My question is: how can I know which IP is the one I should use to sub/pub, and which port?<br /> Do I just have to use the IP of the entrypoint service with the 8080 port?</p> <p>I'm at a loss here.</p>
<p>Does your Service get an IP address?</p> <h2>Using ClusterIP</h2> <p>To have a cluster-internal IP, you should set <code>type: ClusterIP</code> on your service:</p> <pre><code>spec: type: ClusterIP </code></pre> <p>Your clients then route their requests to a DNS name for the service, which depends on how your namespaces are set up. See <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a></p> <h2>Using NodePort</h2> <p>If you want to continue using type=NodePort, you can send requests to the IP of any node, but with the specific node port number.</p>
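<p>Applied to the manifests in the question, a cluster-internal Service would look like this; a client pod in the same cluster would then connect to <code>mosquitto-entrypoint.default.svc.cluster.local</code> on port 8080:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mosquitto-entrypoint
  namespace: default
spec:
  type: ClusterIP
  selector:
    bb: web           # matches the pod labels from the Deployment in the question
  ports:
  - protocol: TCP
    port: 8080        # port clients use via the service DNS name
    targetPort: 8080  # port the container listens on
```

<p>Clients outside the cluster would instead use a node's IP with the nodePort (30001 in the question's manifest).</p>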
<p>I have an RBAC-enabled Kubernetes cluster in GCP.</p> <p>There is <strong>one</strong> namespace <strong>for Tiller</strong> and <strong>multiple for Services</strong>.</p> <p>Right now, I can assign a reader role to a specific service account given its full name:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: tiller-reader namespace: tiller rules: - apiGroups: [""] resources: ["pods"] verbs: - "get" - "watch" - "list" --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: tiller-reader-role-binding namespace: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: tiller-reader subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: system:serviceaccount:my-namespace-1:my-namespace-1-service-account </code></pre> <p>The <strong>Service</strong> namespaces and accounts are <strong>created dynamically</strong>. How do I <strong>automatically give</strong> all service accounts <strong>access</strong> to the Tiller namespace, for example: to get pods?</p>
<p>To grant a role to all service accounts, you must use the group <code>system:serviceaccounts</code> (or <code>system:serviceaccounts:&lt;namespace&gt;</code> to target only the service accounts of one namespace).</p> <p>You can try the following config. Note that a ClusterRole is cluster-scoped, so it takes no namespace, while the RoleBinding lives in the tiller namespace:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: tiller-reader rules: - apiGroups: [""] resources: ["pods"] verbs: - "get" - "watch" - "list" --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: tiller-reader-role-binding namespace: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tiller-reader subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts </code></pre>
<p>I have a headless CentOS 7 box to which I have <code>ssh</code> access. I have installed OpenShift v4.2 - the <a href="https://code-ready.github.io/crc/" rel="nofollow noreferrer">Code Ready Containers</a> version successfully. I am able to use the <code>oc</code> tool, create projects, access exposed routes from services, etc.</p> <p>I want to explore the visualization that comes from Istio - the service mesh. How do I access the web console remotely so that I can explore the service mesh?</p> <p><a href="https://2886795330-8443-simba02.environments.katacoda.com" rel="nofollow noreferrer">Katacoda</a> does not seem to have a OpenShift 4.2 cluster.</p>
<p>Two options come to my mind for the more generic case of connecting to a remote GUI:</p> <ul> <li>VNC tunneled over ssh</li> <li>X11 forwarding using ssh</li> </ul> <p>Some years ago, VNC worked way better for me - it was lighter-weight and faster.</p>
<p>I can only find documentation online for attaching pods to nodes based on labels. Is there a way to attach pods to nodes based on labels and count - So only x pods with label y?</p> <p>Our scenario is that we only want to run 3 of our API pods per node. If a 4th API pod is created, it should be scheduled onto a different node with less than 3 API pods running currently.</p> <p>Thanks</p>
<p>No, you cannot schedule by <em>count</em> of a specific <em>label</em>. But you can avoid co-locating your pods on the same node.</p> <h2>Avoid co-locating your pods on the same node</h2> <p>You can use <code>podAntiAffinity</code> with a <code>topologyKey</code> (and taints) to avoid scheduling pods on the same node. See <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#never-co-located-in-the-same-node" rel="nofollow noreferrer">Never co-located in the same node </a></p>
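<p>A sketch of the relevant part of the pod template (the <code>app: api</code> label is illustrative):</p>

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["api"]
        topologyKey: "kubernetes.io/hostname"   # at most one matching pod per node
```

<p>Note that this enforces at most one such pod per node, not three; there is no built-in way to cap the per-node count at an arbitrary number.</p>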
<p>The only way I see so far to create a container using Argo is through the command line:</p> <p><code>argo submit --watch fileName.yaml</code></p> <p>I would like to invoke Argo programmatically in Kotlin or Java in order to automate this process. Is there a way I can do that?</p> <p>I've looked into documentation on Github: <a href="https://github.com/argoproj/argo-workflows" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows</a>. I did not find anything there.</p>
<p>I accomplished this task by invoking the Kubernetes API. An Argo Workflow is a Kubernetes Custom Resource, so it has an equivalent Kubernetes command:</p> <p><code>kubectl create -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/hello-world.yaml</code></p> <p>I just had to invoke the above command using the Kubernetes API.</p>
<p>I'm trying to create a Kubernetes deployment with an associated ServiceAccount, which is <a href="https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="noreferrer">linked to an AWS IAM role</a>. This yaml produces the desired result and the associated deployment (included at the bottom) spins up correctly:</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: service-account namespace: example annotations: eks.amazonaws.com/role-arn: ROLE_ARN </code></pre> <p>However, I would like to instead use the Terraform Kubernetes provider to create the ServiceAccount:</p> <pre><code>resource "kubernetes_service_account" "this" { metadata { name = "service-account2" namespace = "example" annotations = { "eks.amazonaws.com/role-arn" = "ROLE_ARN" } } } </code></pre> <p>Unfortunately, when I create the ServiceAccount this way, the ReplicaSet for my deployment fails with the error:</p> <pre><code>Error creating: Internal error occurred: Internal error occurred: jsonpatch add operation does not apply: doc is missing path: "/spec/volumes/0" </code></pre> <p>I have confirmed that it does not matter whether the Deployment is created via Terraform or <code>kubectl</code>; it will not work with the Terraform-created <code>service-account2</code>, but works fine with the <code>kubectl</code>-created <code>service-account</code>. 
Switching a deployment back and forth between <code>service-account</code> and <code>service-account2</code> correspondingly makes it work or not work as you might expect.</p> <p>I have also determined that the <code>eks.amazonaws.com/role-arn</code> is related; creating/assigning ServiceAccounts that do not try to link back to an IAM role work regardless of whether they were created via Terraform or <code>kubectl</code>.</p> <p>Using <code>kubectl</code> to describe the Deployment, ReplicaSet, ServiceAccount, and associated Secret, I don't see any obvious differences, though I will admit I'm not entirely sure what I might be looking for.</p> <p>Here is a simple deployment yaml that exhibits the problem:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: example namespace: example spec: strategy: type: Recreate template: metadata: labels: app: example spec: serviceAccountName: service-account # or "service-account2" containers: - name: nginx image: nginx:1.7.8 </code></pre>
<p>Adding <code>automountServiceAccountToken: true</code> to the pod spec in your deployment should fix this error. This is usually enabled by default on service accounts, but Terraform defaults it to off. See this issue on the mutating web hook that adds the required environment variables to your pods: <a href="https://github.com/aws/amazon-eks-pod-identity-webhook/issues/17" rel="noreferrer">https://github.com/aws/amazon-eks-pod-identity-webhook/issues/17</a></p>
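<p>Based on the deployment manifest in the question, the fix lands in the pod template spec:</p>

```yaml
spec:
  template:
    spec:
      serviceAccountName: service-account2
      automountServiceAccountToken: true   # explicitly re-enable token mounting
      containers:
      - name: nginx
        image: nginx:1.7.8
```

<p>With the token volume mounted again, the EKS pod identity webhook can patch the pod and inject the IAM role credentials.</p>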
<p>I am trying to debug a java app on a GKE cluster through Stackdriver. I have created a GKE cluster with <code>Allow full access to all Cloud APIs</code>. I am following the documentation: <a href="https://cloud.google.com/debugger/docs/setup/java" rel="nofollow noreferrer">https://cloud.google.com/debugger/docs/setup/java</a> </p> <p>Here is my Dockerfile:</p> <pre><code>FROM openjdk:8-jdk-alpine VOLUME /tmp ARG JAR_FILE COPY ${JAR_FILE} alnt-watchlist-microservice.jar ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/alnt-watchlist-microservice.jar"] </code></pre> <p>The documentation says to add the following lines to the Dockerfile:</p> <pre><code>RUN mkdir /opt/cdbg &amp;&amp; \ wget -qO- https://storage.googleapis.com/cloud-debugger/compute-java/debian-wheezy/cdbg_java_agent_gce.tar.gz | \ tar xvz -C /opt/cdbg RUN java -agentpath:/opt/cdbg/cdbg_java_agent.so -Dcom.google.cdbg.module=tpm-watchlist -Dcom.google.cdbg.version=v1 -jar /alnt-watchlist-microservice.jar </code></pre> <p>When I build the Dockerfile, it fails saying tar: invalid magic, tar: short read.</p> <p>In the Stackdriver debug console, it always shows 'No deployed application found'. Which application will it show? I already have 2 services deployed on my kubernetes cluster.</p> <p>I have already executed <code>gcloud debug source gen-repo-info-file --output-directory="WEB-INF/classes/</code> in my project's directory.</p> <p>It generated source-context.json. After its creation, I tried building the docker image and it is failing.</p>
<p>The debugger will be ready for use when you deploy your containerized app. You are getting the <code>No deployed application found</code> error because your debugger agent is failing to download or unzip in the Dockerfile.</p> <p>Please check <a href="https://unix.stackexchange.com/questions/302192/how-to-solve-tar-invalid-magic-error-on-linux-alpine">this discussion</a> to resolve the <code>tar: invalid magic, tar: short read</code> error.</p>
<p>For example, I change role verbs</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: provisioning-role rules: - apiGroups: ["apps"] resources: ["deployments"] verbs: ["get", "list", "watch"] </code></pre> <p>and run a Helm <code>upgrade</code>. Should the pods bound to those roles be restarted/replaced? (Those pods may be created manually without Helm.)</p>
<p>No need to recreate the pods. When you create or update a Role/RoleBinding or ClusterRole/ClusterRoleBinding, the bound entities get the new permissions right away.</p> <p>One proof used to be Helm itself. When you freshly install Helm, you get an error from Tiller saying it has no access to the cluster to do anything; but then you give Tiller a <code>cluster-admin</code> role binding (or any other, with more caution) and it starts working right away, without restarting.</p>
<p>Below is the deployment yaml. After deployment, I could access the pod and I can see the mountPath "/usr/share/nginx/html", but I could not find "/work-dir", which should have been created by the initContainer. Could someone explain the reason? Thanks and Rgds</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: init-demo spec: containers: - name: nginx image: nginx ports: - containerPort: 80 volumeMounts: - name: workdir mountPath: /usr/share/nginx/html # These containers are run during pod initialization initContainers: - name: install image: busybox command: - wget - "-O" - "/work-dir/index.html" - http://kubernetes.io volumeMounts: - name: workdir mountPath: "/work-dir" dnsPolicy: Default volumes: - name: workdir emptyDir: {} </code></pre>
<p>The volume at "/work-dir" is mounted by the init container and the "/work-dir" location only exists in the init container. When the init container completes, its file system is gone so the "/work-dir" directory in that init container is "gone". The application (nginx) container mounts the same <em>volume</em>, too, (albeit at a different location) providing mechanism for the two containers to share its content. </p> <p>Per the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#using-init-containers" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>Init containers can run with a different view of the filesystem than app containers in the same Pod.</p> </blockquote>
<p>I have a kubernetes HPA set up in my cluster, and it works as expected scaling up and down instances of pods as the cpu/memory increases and decreases.</p> <p>The only thing is that my pods handle web requests, so it occasionally scales down a pod that's in the process of handling a web request. The web server never gets a response back from the pod that was scaled down and thus the caller of the web api gets an error back.</p> <p>This all makes sense theoretically. My question is does anyone know of a best practice way to handle this? Is there some way I can wait until all requests are processed before scaling down? Or some other way to ensure that requests complete before HPA scales down the pod?</p> <p>I can think of a few solutions, none of which I like:</p> <ol> <li>Add retry mechanism to the caller and just leave the cluster as is.</li> <li>Don't use HPA for web request pods (seems like it defeats the purpose).</li> <li>Try to create some sort of custom metric and see if I can get that metric into Kubernetes (e.x <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics</a>) </li> </ol> <p>Any suggestions would be appreciated. Thanks in advance!</p>
<h1>Graceful shutdown of pods</h1> <p>You must design your apps to support <em>graceful shutdown</em>. First your pod will receive a <code>SIGTERM</code> signal, and after 30 seconds (configurable) it will receive a <code>SIGKILL</code> signal and be removed. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">Termination of pods</a></p> <p><strong>SIGTERM</strong>: When your app receives the termination signal, your pod will not receive <strong>new requests</strong>, but you should try to fulfill the responses of already received requests.</p> <h2>Design for idempotency</h2> <p>Your apps should also be designed for <strong>idempotency</strong> so you can safely <strong>retry</strong> failed requests.</p>
<p>Each Kubernetes deployment gets this annotation:</p> <pre><code>$ kubectl describe deployment/myapp Name: myapp Namespace: default CreationTimestamp: Sat, 24 Mar 2018 23:27:42 +0100 Labels: app=myapp Annotations: deployment.kubernetes.io/revision=5 </code></pre> <p>Is there a way to read that annotation (<code>deployment.kubernetes.io/revision</code>) from a pod that belongs to the deployment?</p> <p>I tried Downward API, but that only allows to get annotations of the pod itself (not of its deployment).</p>
<pre><code>kubectl get pod POD_NAME -o jsonpath='{.metadata.annotations}' </code></pre>
<p>I have deployed an angular frontend and a python backend in kubernetes via microk8s as separate pods, and they are running. I have given the backend url as '<a href="http://backend-service.default.svc.cluster.local:30007" rel="nofollow noreferrer">http://backend-service.default.svc.cluster.local:30007</a>' in my angular file in order to link the frontend with the backend. But this is raising ERR_NAME_NOT_RESOLVED. Can someone help me understand the issue?</p> <p>Also, I have a config file which specifies the IPs, ports and other configurations in my backend. Do I need to make any changes (value of database host? flask host? ports?) to that file before deploying it to kubernetes?</p> <p>Shown below are my deployment and service files for angular and the backend.</p> <pre><code> apiVersion: v1 kind: Service metadata: name: angular-service spec: type: NodePort selector: app: angular ports: - protocol: TCP nodePort: 30042 targetPort: 4200 port: 4200 --- apiVersion: apps/v1 kind: Deployment metadata: name: angular-deployment labels: name: angular spec: replicas: 1 selector: matchLabels: name: angular template: metadata: labels: name: angular spec: containers: - name: angular image: angular:local ports: - containerPort: 4200 apiVersion: v1 kind: Service metadata: name: backend-service spec: type: ClusterIP selector: name: backend ports: - protocol: TCP targetPort: 7000 port: 7000 --- apiVersion: apps/v1 kind: Deployment metadata: name: backend-deployment labels: name: backend spec: replicas: 1 selector: matchLabels: name: backend template: metadata: labels: name: backend spec: containers: - name: backend image: flask:local ports: - containerPort: 7000 </code></pre>
<p>Is your cluster in a healthy state? DNS names are resolved by the coredns pods in the kube-system namespace.</p> <p>More importantly, your Angular app runs in the user's browser, so the API URL it calls must be exposed and public; a cluster-internal name like <code>backend-service.default.svc.cluster.local</code> can only be resolved from inside the cluster, which is why the browser reports ERR_NAME_NOT_RESOLVED. Could you share your app architecture?</p> <p>Moreover, if you expose a service through NodePort, you must not use the nodePort for internal access, because you never know which node you will hit. For pod-to-pod access inside the cluster, your apps need to use the service's <code>port</code> attribute (not the <code>nodePort</code>).</p>
<p>I looked at <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a> library but it requires RBAC enabled in cluster. Is any other way to retrieve pods in kubernetes programatically?</p>
<p>In the Kubernetes Java client library you can find:</p> <ol> <li><p><a href="https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/InClusterClientExample.java" rel="nofollow noreferrer">InClusterClient</a> example (configure a client while running inside the Kubernetes cluster).</p></li> <li><p><a href="https://github.com/kubernetes-client/java/blob/master/examples/src/main/java/io/kubernetes/client/examples/KubeConfigFileClientExample.java" rel="nofollow noreferrer">KubeConfigFileClient</a> example (configure a client to access a Kubernetes cluster from outside).</p></li> </ol> <p>The first example, from inside the cluster, uses the serviceaccount applied to the POD.</p> <p>The second example, from outside the cluster, uses a kubeconfig file.</p> <p>In the official docs you can find a Java example of <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#java-client" rel="nofollow noreferrer">Accessing the Kubernetes API</a> using the Java client; it uses the kubeconfig file stored by default in <code>$HOME/.kube/config</code>. In addition, you can find there other examples of how to programmatically access the Kubernetes API, with the list of <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">officially-supported Kubernetes client libraries</a> and <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/#community-maintained-client-libraries" rel="nofollow noreferrer">community-maintained client libraries</a>.</p> <p>Please refer also to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules" rel="nofollow noreferrer">Authorization Modes</a></p> <blockquote> <p>Kubernetes RBAC allows admins to configure and control access to Kubernetes resources as well as the operations that can be performed on those resources.
RBAC can be enabled by starting the API server with <code>--authorization-mode=RBAC</code>.</p> <p>Kubernetes includes a built-in role-based access control (RBAC) mechanism that allows you to configure fine-grained and specific sets of permissions that define how a given GCP user, or group of users, can interact with any Kubernetes object in your cluster, or in a specific Namespace of your cluster.</p> </blockquote> <p>Additional resources:</p> <ul> <li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Using RBAC Authorization</a> </li> <li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/" rel="nofollow noreferrer">Accessing Clusters</a> </li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Configure Service Accounts for Pods</a> </li> <li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules" rel="nofollow noreferrer">Authorization Modes</a> </li> <li><a href="https://www.replex.io/blog/kubernetes-in-production" rel="nofollow noreferrer">Kubernetes in Production</a> </li> </ul> <p>Hope this helps.</p>
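<p>If RBAC itself was the blocker: for the in-cluster client, the pod's service account just needs permission to read pods. A minimal sketch (the namespace, role name, and service account name below are illustrative, not from the question):</p>

```yaml
# Hypothetical Role + RoleBinding letting a pod's service account list pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader            # illustrative name
rules:
- apiGroups: [""]             # "" = the core API group (pods live here)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: ServiceAccount
  name: default               # the service account the pod runs as
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With this in place, the InClusterClientExample's pod-listing call should be authorized without granting cluster-wide rights.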
<p>I tried the following command on my minikube setup to verify whether DNS is working or not:</p> <p><code>kubectl exec -ti busybox -- nslookup kubernetes.default</code></p> <p>but this is the output I am getting:</p> <pre><code>Server:    10.96.0.10 Address 1: 10.96.0.10 nslookup: can't resolve 'kubernetes.default' command terminated with exit code 1 </code></pre> <p>Apart from that, I checked the coredns pod logs and they show something like:</p> <pre><code>2019-11-07T12:25:23.694Z [ERROR] plugin/errors: 0 5606995447546819070.2414697521008405831. HINFO: read udp 172.17.0.5:60961-&gt;10.15.102.11:53: i/o timeout </code></pre> <p>Can someone explain what is going wrong? The busybox image tag is <code>image: busybox:1.28</code>.</p>
<p>That's because your busybox pod knows nothing about <code>kubernetes.default</code>. Fix your /etc/resolv.conf.</p> <p>It should look like:</p> <pre><code>search default.svc.cluster.local svc.cluster.local cluster.local nameserver 10.96.0.10 options ndots:5 </code></pre> <p>Additionally, you can open the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">Debugging DNS Resolution</a> documentation and check the provided example for the same.</p> <blockquote> <p>Take a look inside the resolv.conf file. (See Inheriting DNS from the node and Known issues below for more information)</p> </blockquote> <pre><code>kubectl exec busybox cat /etc/resolv.conf </code></pre> <blockquote> <p>Verify that the search path and name server are set up like the following (note that search path may vary for different cloud providers):</p> </blockquote> <pre><code>search default.svc.cluster.local svc.cluster.local cluster.local google.internal c.gce_project_id.internal nameserver 10.0.0.10 options ndots:5 </code></pre> <blockquote> <p>Errors such as the following indicate a problem with the coredns/kube-dns add-on or associated Services:</p> </blockquote> <pre><code>kubectl exec -ti busybox -- nslookup kubernetes.default Server: 10.0.0.10 Address 1: 10.0.0.10 nslookup: can't resolve 'kubernetes.default' or kubectl exec -ti busybox -- nslookup kubernetes.default Server: 10.0.0.10 Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local nslookup: can't resolve 'kubernetes.default' </code></pre>
<p>I have a DaemonSet that places a pod onto all of my cluster's nodes. That pod looks for a set of conditions. When they are found, it is supposed to execute a bash script on its node.</p> <p>Currently my pod that I apply as a DaemonSet mounts the directory with the bash script. I am able to detect the conditions that I am looking for. When the conditions are detected, I execute the bash script, but it ends up running in my alpine container inside my pod and not on the host node.</p> <p>As a simple example of what is not working for me (in spec):</p> <pre><code>command: ["/bin/sh"] args: ["-c", "source /mounted_dir/my_node_script.sh"] </code></pre> <p>I want to execute the bash script on the NODE the pod is running on, not within the container/pod. How can this be accomplished?</p>
<p>A command run inside a pod does actually execute on the host's kernel: a container (Docker) is an isolated process, not a virtual machine. However, it runs inside the container's namespaces (mount, PID, network, ...), so by default it sees the container's filesystem and processes, not the node's.</p> <p>If your actual problem is that you want to do something which a normal container isn't allowed to, you can run the pod in privileged mode or configure exactly the capabilities you need.</p>
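<p>To run the script in the node's own mount and PID namespaces (i.e. truly "on the host"), one common pattern is a privileged pod with <code>hostPID: true</code> that uses <code>nsenter</code> to enter the namespaces of PID 1. A hedged sketch of the pod spec portion of the DaemonSet template (the image and script path are illustrative, and the image must ship util-linux's <code>nsenter</code>):</p>

```yaml
# Sketch only: pod template spec fragment for a DaemonSet.
spec:
  hostPID: true                   # share the node's PID namespace
  containers:
  - name: runner
    image: alpine:latest          # illustrative; needs util-linux (apk add util-linux)
    securityContext:
      privileged: true            # required for nsenter into host namespaces
    command: ["/bin/sh", "-c"]
    # -t 1 targets the node's init process; -m -u -i -n -p enters its mount,
    # UTS, IPC, network and PID namespaces, so the script and interpreter are
    # resolved on the node's filesystem, not the container's.
    args: ["nsenter -t 1 -m -u -i -n -p -- /bin/bash /mounted_dir/my_node_script.sh"]
```

Note this effectively gives the pod root on the node, so restrict which service accounts may create such pods.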
<p>Using OpenShift 3.11, I've mounted an nfs persistent volume, but the application cannot copy into the new volume, saying:</p> <pre><code>oc logs my-project-77858bc694-6kbm6 cp: cannot create regular file '/config/dbdata/resdb.lock.db': Permission denied ... </code></pre> <p>I've tried to change the ownership of the folder by doing a chown in an InitContainers, but it tells me the operation not permitted. </p> <pre><code> initContainers: - name: chowner image: alpine:latest command: ["/bin/sh", "-c"] args: - ls -alt /config/dbdata; chown 1001:1001 /config/dbdata; volumeMounts: - name: my-volume mountPath: /config/dbdata/ </code></pre> <pre><code>oc logs my-project-77858bc694-6kbm6 -c chowner total 12 drwxr-xr-x 3 root root 4096 Nov 7 03:06 .. drwxr-xr-x 2 99 99 4096 Nov 7 02:26 . chown: /config/dbdata: Operation not permitted </code></pre> <p>I expect to be able to write to the mounted volume.</p>
<p>You can give your Pods permission to write into a volume by using <code>fsGroup: GROUP_ID</code> in a Security Context. <code>fsGroup</code> makes your volumes writable by GROUP_ID and makes all processes inside your container part of that group.</p> <p>For example:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: POD_NAME spec: securityContext: fsGroup: GROUP_ID ... </code></pre>
<h1>Motive</h1> <p>I want to fully automate the deployment of many <strong>services</strong> with the help of <a href="https://cloud.google.com/cloud-build/" rel="noreferrer">Google Cloud Build</a> and <a href="https://cloud.google.com/kubernetes-engine/" rel="noreferrer">Google Kubernetes Engine</a>. Those services are located inside a <strong>monorepo</strong>, which has a folder called <code>services</code>.</p> <p>So I created a <code>cloudbuild.yaml</code> for every service and created a build trigger. The <code>cloudbuild.yaml</code> does:</p> <ol> <li>run tests</li> <li>build new version of Docker image</li> <li>push new Docker image</li> <li>apply changes to Kubernetes cluster</li> </ol> <h1>Issue</h1> <p>As the number of services increases, the number of build triggers increases, too. There are also more and more services that are built even though they haven't changed.</p> <p>Thus I want a mechanism which has only <strong>one</strong> build trigger and automatically determines which services need to be rebuilt.</p> <h1>Example</h1> <p>Suppose I have a monorepo with this file structure:</p> <pre><code>├── packages │ ├── enums │ ├── components └── services ├── backend ├── frontend ├── admin-dashboard </code></pre> <p>Then I make some changes in the <code>frontend</code> service. Since the <code>frontend</code> and the <code>admin-dashboard</code> service depend on the <code>components</code> package, multiple services need to be rebuilt:</p> <ul> <li>frontend</li> <li>admin-dashboard</li> </ul> <p>But <strong>not</strong> backend!</p> <h1>What I've Tried</h1> <h3>(1) Multiple build triggers</h3> <p>Setting up multiple build triggers for <strong>every</strong> service. But 80% of those builds are redundant, since most changes in the code are only related to individual services. It's also increasingly complex to manage many build triggers, which look almost identical. 
A single <code>cloudbuild.yaml</code> file looks like this:</p> <pre><code>steps: - name: "gcr.io/cloud-builders/docker" args: [ "build", "-f", "./services/frontend/prod.Dockerfile", "-t", "gcr.io/$PROJECT_ID/frontend:$REVISION_ID", "-t", "gcr.io/$PROJECT_ID/frontend:latest", ".", ] - name: "gcr.io/cloud-builders/docker" args: ["push", "gcr.io/$PROJECT_ID/frontend"] - name: "gcr.io/cloud-builders/kubectl" args: ["apply", "-f", "kubernetes/gcp/frontend.yaml"] env: - "CLOUDSDK_COMPUTE_ZONE=europe-west3-a" - "CLOUDSDK_CONTAINER_CLUSTER=cents-ideas" </code></pre> <h3>(2) Looping through cloudbuild files</h3> <p><a href="https://stackoverflow.com/questions/51861870">This</a> question is about a very similar issue. So I've tried to set up one "entry-point" <code>cloudbuild.yaml</code> file in the root of the project and looped through all services:</p> <pre><code>steps: - name: 'gcr.io/cloud-builders/gcloud' entrypoint: 'bash' args: - '-c' - | for d in ./services/*/; do config="${d}cloudbuild.yaml" if [[ ! -f "${config}" ]]; then continue fi echo "Building $d ... " ( gcloud builds submit $d --config=${config} ) &amp; done wait </code></pre> <p>This would eliminate the need for having multiple build triggers. But I also ran into issues with this method:</p> <p>Every service is sent into its own build process with the file scope of this particular service. This means that I can only access files inside <code>/services/specific-service</code> during the build. Which is a total bummer for me (I need access to files in parent directories like <code>packages</code> and config files in the root).</p> <h3>(3) Build only changed services</h3> <p>Since I want a mechanism to only build changed services, I've tried to determine the services that need to be rebuilt. It seems quite easy to do this with the help of <a href="https://github.com/lerna/lerna" rel="noreferrer">lerna</a>. 
Running</p> <pre><code>lerna changed --all --parseable </code></pre> <p>will return a list of file paths to the changed packages like this:</p> <pre><code>/home/username/Desktop/project/packages/components /home/username/Desktop/project/services/frontend /home/username/Desktop/project/services/admin-dashboard </code></pre> <p>However, the list also includes <code>packages</code>, and I have no idea how I would be able to use this list in a script to loop through the affected services. Also: when I trigger a build (e.g. through tagging a commit), lerna wouldn't be able to recognize changed packages during the build process, as the changes have already been committed.</p> <hr> <p>I know this is a long one. But I think it's an important topic, so I really appreciate any help! </p> <p>P.S.: <a href="https://github.com/flolude/cents-ideas/tree/develop" rel="noreferrer">This</a> is what my actual project looks like, if you want to take a close look at the specific use-case.</p>
<p>To build from a <strong>monorepo</strong> you really want to build incrementally: what has changed, plus the parts that depend on the changed parts. To achieve this, your build tool needs to handle a dependency graph in some way.</p> <p>Lerna, which you describe, is designed for monorepos. But so is <a href="https://bazel.build/" rel="nofollow noreferrer">Bazel</a>, and it is available as an option in Google Cloud Builder, <a href="https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/bazel" rel="nofollow noreferrer">cloud-builders/bazel</a>, with documentation for using it in combination with a Docker builder.</p> <p>However, build tools designed for monorepos are usually more complex to set up.</p>
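<p>Regarding attempt (3): the output of <code>lerna changed</code> can be filtered down to services with plain shell. A hedged sketch, with the paths hard-coded for illustration (in CI you would substitute <code>changed="$(npx lerna changed --all --parseable)"</code>):</p>

```shell
# Filter a `lerna changed --all --parseable` style list down to deployable
# services, dropping entries under packages/.
changed="/home/user/project/packages/components
/home/user/project/services/frontend
/home/user/project/services/admin-dashboard"

rebuild=""
for d in $changed; do            # unquoted on purpose: split on newlines
  case "$d" in
    */services/*) rebuild="$rebuild ${d##*/}" ;;   # keep basename of each service
  esac
done
rebuild="${rebuild# }"           # trim the leading space
echo "$rebuild"                  # frontend admin-dashboard
```

Each name in <code>$rebuild</code> could then be passed to <code>gcloud builds submit</code> with the repository root as the build context, which also avoids the per-service file-scope problem from attempt (2).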
<p>In the pod creation YAML files or in the deployment YAML files in Kubernetes, why does the <code>containers</code> key have a list value <code>- name: memory-demo-ctr</code> rather than simply a map value <code>name: memory-demo-ctr</code> (why are we providing the <code>-</code> symbol)?</p> <p>I tried looking over the web but couldn't find a solution.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: memory-demo namespace: mem-example spec: containers: - name: memory-demo-ctr image: polinux/stress </code></pre>
<p>A Pod is capable of running multiple containers. That's the reason <code>containers</code> is a list instead of a map.</p> <pre><code>kind: Pod ... spec: containers: - name: busybox image: busybox:latest - name: nginx image: nginx:1.7.9 - name: redis image: redis:latest </code></pre> <p>If <code>containers</code> were a map, you could not write a configuration file that runs multiple containers inside a pod. I hope this answer resolves your doubt.</p>
<h3>Background</h3> <p>I am using TZCronJob to run cronjobs with timezones in Kubernetes. A sample <code>cronjob.yaml</code> might look like the following (as per the <a href="https://github.com/hiddeco/cronjobber" rel="nofollow noreferrer">cronjobber docs</a>). Note the timezone specified, the schedule, and <code>kind=TZCronJob</code>:</p> <pre><code>apiVersion: cronjobber.hidde.co/v1alpha1 kind: TZCronJob metadata: name: hello spec: schedule: "05 09 * * *" timezone: "Europe/Amsterdam" jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - -c - date; echo "Hello, World!" restartPolicy: OnFailure </code></pre> <p>Normally, with any old cronjob in Kubernetes, you can run <code>kubectl create job test-job --from=cronjob/name_of_my_cronjob</code>, as per the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-job-em-" rel="nofollow noreferrer">kubectl create job docs</a>. </p> <h3>Error</h3> <p>However, when I try to run it with <code>kubectl create job test-job --from=tzcronjob/name_of_my_cronjob</code> (switching the <code>--from</code> argument to <code>tzcronjob/</code>) I get:</p> <blockquote> <p><code>error: from must be an existing cronjob: no kind "TZCronJob" is registered for version "cronjobber.hidde.co/v1alpha1" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"</code></p> </blockquote> <p>When I try to take a peek at <a href="https://kubernetes.io/kubernetes/pkg/kubectl/scheme/scheme.go:28" rel="nofollow noreferrer">https://kubernetes.io/kubernetes/pkg/kubectl/scheme/scheme.go:28</a> I get 404, not found.</p> <p>This almost worked, but to no avail: </p> <pre><code>kubectl create job test-job-name-v1 --image=tzcronjob/name_of_image </code></pre> <p>How can I create a new one-off job from my chart definition?</p>
<p>In <a href="https://helm.sh/docs/" rel="nofollow noreferrer">Helm</a> there are mechanisms called <a href="https://helm.sh/docs/developing_charts/#hooks" rel="nofollow noreferrer">Hooks</a>.</p> <p>Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release’s life cycle. For example, you can use hooks to:</p> <ul> <li><p>Load a ConfigMap or Secret during install before any other charts are loaded</p></li> <li><p>Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data</p></li> <li><p>Run a Job before deleting a release to gracefully take a service out of rotation before removing it.</p></li> </ul> <p>Hooks work like regular templates, but they have special annotations that cause Helm to utilize them differently. In this section, we cover the basic usage pattern for hooks.</p> <p>Hooks are declared as an annotation in the metadata section of a manifest:</p> <pre><code>apiVersion: ... kind: .... metadata: annotations: "helm.sh/hook": "pre-install" </code></pre> <p>If the resources is a Job kind, Tiller will wait until the job successfully runs to completion. And if the job fails, the release will fail. This is a blocking operation, so the Helm client will pause while the Job is run.</p> <p><strong>HOW TO WRITE HOOKS:</strong></p> <p>Hooks are just Kubernetes manifest files with special annotations in the metadata section. 
Because they are template files, you can use all of the normal template features, including <strong>reading .Values</strong>, .<strong>Release</strong>, and <strong>.Template</strong>.</p> <p>For example, this template, stored in templates/post-install-job.yaml, declares a job to be run on post-install:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: "{{.Release.Name}}" labels: app.kubernetes.io/managed-by: {{.Release.Service | quote }} app.kubernetes.io/instance: {{.Release.Name | quote }} app.kubernetes.io/version: {{ .Chart.AppVersion }} helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}" annotations: # This is what defines this resource as a hook. Without this line, the # job is considered part of the release. "helm.sh/hook": post-install "helm.sh/hook-weight": "-5" "helm.sh/hook-delete-policy": hook-succeeded spec: template: metadata: name: "{{.Release.Name}}" labels: app.kubernetes.io/managed-by: {{.Release.Service | quote }} app.kubernetes.io/instance: {{.Release.Name | quote }} helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}" spec: restartPolicy: Never containers: - name: post-install-job image: "alpine:3.3" command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"] </code></pre> <p>What makes this template a hook is the annotation:</p> <pre><code> annotations: "helm.sh/hook": post-install </code></pre>
<p>I am trying to understand Kubernetes and how it works under the hood. As I understand it each pod gets its own IP address. What I am not sure about is what kind of IP address that is. </p> <p>Is it something that the network admins at my company need to pass out? Or is an internal kind of IP address that is not addressable on the full network?</p> <p>I have read about network overlays (like Project Calico) and I assume they play a role in this, but I can't seem to find a page that explains the connection. (I think my question is too remedial for the internet.)</p> <p><strong>Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?</strong></p>
<h2>Kubernetes clusters</h2> <blockquote> <p>Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?</p> </blockquote> <p>The thing with Kubernetes is that it is not a <em>service</em> like e.g. a Virtual Machine, but a <strong>cluster</strong> that has its own networking functionality and management, including <strong>IP address allocation</strong> and <strong>network routing</strong>.</p> <p>Your nodes may be virtual or physical machines, but they are registered in the NodeController, e.g. for health checks and, most relevant here, for IP address management.</p> <blockquote> <p>The node controller is a Kubernetes master component which manages various aspects of nodes.</p> <p>The node controller has multiple roles in a node’s life. The first is assigning a CIDR block to the node when it is registered (if CIDR assignment is turned on).</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Cluster Architecture - Nodes</a></p> <h2>IP address management</h2> <p>Kubernetes networking depends on the Container Network Interface (<a href="https://github.com/containernetworking/cni/blob/master/SPEC.md" rel="nofollow noreferrer">CNI</a>) plugin your cluster is using.</p> <blockquote> <p>A CNI plugin is responsible for ... 
It should then assign the IP to the interface and setup the routes consistent with the IP Address Management section by invoking appropriate IPAM plugin.</p> </blockquote> <p>It is common that each node is assigned a CIDR range of IP addresses, which the node then assigns to pods scheduled on it. You can inspect each node's range with <code>kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'</code>.</p> <p>The <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview" rel="nofollow noreferrer">GKE network overview</a> describes well how this works on GKE.</p> <blockquote> <p>Each node has an IP address assigned from the cluster's Virtual Private Cloud (VPC) network.</p> <p>Each node has a pool of IP addresses that GKE assigns Pods running on that node (a /24 CIDR block by default).</p> <p>Each Pod has a single IP address assigned from the Pod CIDR range of its node. This IP address is shared by all containers running within the Pod, and connects them to other Pods running in the cluster.</p> <p>Each Service has an IP address, called the ClusterIP, assigned from the cluster's VPC network.</p> </blockquote>
<p>I am currently installing Qlik Sense on Kubernetes. In order to work, Qlik needs the Kubernetes cluster to have a ReadWriteMany storage class.</p> <p>How do I configure a ReadWriteMany storage class in EKS with kubectl?</p>
<p>AWS supports Elastic File System (EFS), which is essentially a highly scalable NFS service. Installing its CSI driver in your EKS cluster gives you a ReadWriteMany storage class, which should satisfy your requirements. You can read more about installing it <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">here</a>.</p> <p>Note however that EFS performance will not be great; it depends on how the software wants to use it. For example, hosting a database on EFS is not recommended and will probably be too slow. However, hosting shared files on EFS with moderate load and no stringent latency requirements can work well enough!</p> <p>I am the developer behind <a href="https://airkube.io/" rel="nofollow noreferrer">airkube.io</a>, a managed EKS hosting that runs on AWS spot instances. We tested and had good luck with <a href="https://openebs.io/" rel="nofollow noreferrer">OpenEBS</a>. This is a good option if you need fairly high-performance RWX; you can read more about that option <a href="https://docs.openebs.io/docs/next/rwm.html" rel="nofollow noreferrer">here</a>. The OpenEBS solution might bring slight complexity though, especially if you don't have a dedicated SRE person or use a managed EKS solution. Another, possibly simpler, solution is to use the <a href="https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner" rel="nofollow noreferrer">NFS provisioner</a>. This deploys an NFS server and dynamically creates PVs as EKS users deploy PVCs.</p> <p>Hope that was helpful! Don't hesitate to ping me for more details or hands-on help! Regards</p>
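<p>For the EFS CSI route, a hedged sketch of the static-provisioning objects (the names <code>efs-sc</code>, <code>efs-pv</code>, <code>efs-claim</code> are illustrative, and <code>fs-xxxxxxxx</code> stands in for your real EFS file system ID):</p>

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com      # the EFS CSI driver's provisioner name
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi                  # required by the API; EFS itself is elastic
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx     # your EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```

Any pod that mounts <code>efs-claim</code> can then share the volume read-write with pods on other nodes.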
<p>I have a Flask application which has multiple routes, including the default route '/'. I deployed this app on Kubernetes, and I am using minikube as a standalone cluster. I exposed the deployment as a NodePort service and then used an ingress to map external requests to the application running in the cluster. My ingress resource looks like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kubernetes-test-svc annotations: nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: backend: serviceName: defualt-http-backend servicePort: 80 rules: - host: kubernetes-test.info http: paths: - path: /* backend: serviceName: kubernetes-test-svc servicePort: 80 </code></pre> <p>And I also configured my /etc/hosts file to route requests for this host. It looks something like this:</p> <pre><code>192.168.99.100 kubernetes-test.info </code></pre> <p>The problem is, no matter which endpoint I call, the ingress always redirects it to the default route '/'. My Flask app looks like this:</p> <pre><code>@app.route('/') def index(): return "Root route" @app.route('/route1') def route1(): return "Route 1" @app.route('/route2') def route2(): params = request.args return make_response(jsonify({'Param1': params['one'], 'Param2': params['two']})) </code></pre> <p>So if I make a request to kubernetes-test.info/route1 it will show me the text "Root Route" instead of "Route 1".</p> <p>But if I type 192.168.99.100/route1 it shows "Route 1". I don't know why this is happening. Why does it work with the minikube IP but not with the host I specified?</p> <p>The Service looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kubernetes-test-svc spec: type: NodePort ports: - port: 80 targetPort: 8080 protocol: TCP name: http selector: app: kubernetes-test </code></pre>
<p>The <code>rewrite-target: /$2</code> annotation rewrites every request to the second capture group of the path regex, but your <code>path: /*</code> contains no capture groups, so every request gets rewritten to <code>/</code>. Remove the annotation and use <code>path: /</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kubernetes-test-svc annotations: nginx.ingress.kubernetes.io/ssl-redirect: "false" spec: backend: serviceName: defualt-http-backend servicePort: 80 rules: - host: kubernetes-test.info http: paths: - path: / backend: serviceName: kubernetes-test-svc servicePort: 80 </code></pre>
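<p>If you do actually need path rewriting (e.g. exposing the app under a sub-path), keep the annotation but give the path a capture group that <code>$2</code> can refer to. A sketch of just the relevant portion, assuming ingress-nginx 0.22+ (the <code>/app</code> prefix is illustrative):</p>

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: kubernetes-test.info
    http:
      paths:
      # $1 captures "/" or "", $2 captures the rest of the path,
      # so /app/route1 is rewritten to /route1 before reaching Flask.
      - path: /app(/|$)(.*)
        backend:
          serviceName: kubernetes-test-svc
          servicePort: 80
```

With this, kubernetes-test.info/app/route1 would reach the Flask handler for /route1.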
<p>I deployed a RabbitMQ server on my Kubernetes cluster and I am able to access the management UI from the browser. But my Spring Boot app cannot connect to port 5672 and I get a connection refused error. The same code works if I change my application.yml properties from the Kubernetes host to localhost and run a Docker image on my machine. I am not sure what I am doing wrong.</p> <p>Has anyone tried this kind of setup? Please help. Thanks!</p>
<p>Let's say the service's DNS name is <code>rabbitmq</code>. If you want to reach it, then you have to make sure that rabbitmq's deployment has a Service attached that exposes the correct ports. From your app you would then target <code>rabbitmq:5672</code> (e.g. <code>amqp://rabbitmq:5672</code>).</p> <p>To make sure this or something alike exists, you can debug k8s services. Run <code>kubectl get services | grep rabbitmq</code> to make sure the service exists. If it does, then get the service YAML by running <code>kubectl get service rabbitmq-service-name -o yaml</code>. Finally, check <code>spec.ports[]</code> for the ports that allow you to connect to the pod. Search for '5672' in <code>spec.ports[].port</code> for amqp. In some cases, the port might have been changed. This means <code>spec.ports[].port</code> might be 3030 for instance, while <code>spec.ports[].targetPort</code> is still 5672.</p>
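<p>If such a Service is missing, a minimal sketch looks like the following (it assumes your RabbitMQ pods carry the label <code>app: rabbitmq</code>; adjust to your actual labels). The Service name becomes the DNS name your Spring Boot app should use as <code>spring.rabbitmq.host</code>:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq            # in-cluster DNS name: rabbitmq (same namespace)
spec:
  selector:
    app: rabbitmq           # must match your RabbitMQ pod labels
  ports:
  - name: amqp
    port: 5672              # what clients connect to
    targetPort: 5672        # port the broker listens on in the pod
  - name: management
    port: 15672
    targetPort: 15672
```

From another namespace you would use the fully-qualified form, e.g. <code>rabbitmq.&lt;namespace&gt;.svc.cluster.local</code>.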
<p>I configured the <a href="https://github.com/helm/charts/tree/90bcd1a5ef30b87dafe543726f8fdaaf0fe6cf84/stable/prometheus-operator" rel="nofollow noreferrer">prometheus-operator</a> chart with <a href="https://github.com/bzon/prometheus-msteams" rel="nofollow noreferrer">prometheus-msteams</a> for monitoring and alerting of a k8s cluster.</p> <p>But not all notifications are correctly directed to the MS Teams channel. If I have 6 alerts that are firing, I can see them in the Alertmanager UI, but only one or two of them are sent to the MS Teams channel.</p> <p>I can see this log in the alertmanager pod:</p> <pre><code>C:\monitoring&gt;kubectl logs alertmanager-monitor-prometheus-operato-alertmanager-0 -c alertmanager level=info ts=2019-11-04T09:16:47.358Z caller=main.go:217 msg="Starting Alertmanager" version="(version=0.19.0, branch=HEAD, revision=7aa5d19fea3f58e3d27dbdeb0f2883037168914a)" level=info ts=2019-11-04T09:16:47.358Z caller=main.go:218 build_context="(go=go1.12.8, user=root@587d0268f963, date=20190903-15:01:40)" level=warn ts=2019-11-04T09:16:47.553Z caller=cluster.go:228 component=cluster msg="failed to join cluster" err="1 error occurred:\n\t* Failed to resolve alertmanager-monitor-prometheus-operato-alertmanager-0.alertmanager-operated.monitoring.svc:9094: lookup alertmanager-monitor-prometheus-operato-alertmanager-0.alertmanager-operated.monitoring.svc on 169.254.25.10:53: no such host\n\n" level=info ts=2019-11-04T09:16:47.553Z caller=cluster.go:230 component=cluster msg="will retry joining cluster every 10s" level=warn ts=2019-11-04T09:16:47.553Z caller=main.go:308 msg="unable to join gossip mesh" err="1 error occurred:\n\t* Failed to resolve alertmanager-monitor-prometheus-operato-alertmanager-0.alertmanager-operated.monitoring.svc:9094: lookup alertmanager-monitor-prometheus-operato-alertmanager-0.alertmanager-operated.monitoring.svc on 169.254.25.10:53: no such host\n\n" level=info ts=2019-11-04T09:16:47.553Z caller=cluster.go:623 component=cluster 
msg="Waiting for gossip to settle..." interval=2s level=info ts=2019-11-04T09:16:47.597Z caller=coordinator.go:119 component=configuration msg="Loading configuration file" file=/etc/alertmanager/config/alertmanager.yaml level=info ts=2019-11-04T09:16:47.598Z caller=coordinator.go:131 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/config/alertmanager.yaml level=info ts=2019-11-04T09:16:47.601Z caller=main.go:466 msg=Listening address=:9093 level=info ts=2019-11-04T09:16:49.554Z caller=cluster.go:648 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000149822s level=info ts=2019-11-04T09:16:57.555Z caller=cluster.go:640 component=cluster msg="gossip settled; proceeding" elapsed=10.001110685s level=error ts=2019-11-04T09:38:02.472Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" context_err="context deadline exceeded" level=error ts=2019-11-04T09:38:02.472Z caller=dispatch.go:266 component=dispatcher msg="Notify for alerts failed" num_alerts=4 err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" level=error ts=2019-11-04T09:43:02.472Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" context_err="context deadline exceeded" level=error ts=2019-11-04T09:43:02.472Z caller=dispatch.go:266 component=dispatcher msg="Notify for alerts failed" num_alerts=5 err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" level=error ts=2019-11-04T09:48:02.473Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" context_err="context deadline exceeded" level=error ts=2019-11-04T09:48:02.473Z caller=dispatch.go:266 component=dispatcher msg="Notify for alerts failed" num_alerts=5 err="unexpected status 
code 500: http://prometheus-msteams:2000/alertmanager" level=error ts=2019-11-04T09:53:02.473Z caller=notify.go:372 component=dispatcher msg="Error on notify" err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" context_err="context deadline exceeded" level=error ts=2019-11-04T09:53:02.473Z caller=dispatch.go:266 component=dispatcher msg="Notify for alerts failed" num_alerts=5 err="unexpected status code 500: http://prometheus-msteams:2000/alertmanager" </code></pre> <p>How can I solve this error?</p> <p><strong>EDIT :</strong></p> <p>The setup uses prometheus-msteams as a webhook to redirect the alerts notifications from alertmanager to MSTeams channel.</p> <p>The prometheus-msteams container logs also have some errors:</p> <pre><code>C:\&gt; kubectl logs prometheus-msteams-564bc7d99c-dpzsm time="2019-11-06T06:45:14Z" level=info msg="Version: v1.1.4, Commit: d47a7ab, Branch: HEAD, Build Date: 2019-08-04T17:17:06+0000" time="2019-11-06T06:45:14Z" level=info msg="Parsing the message card template file: /etc/template/card.tmpl" time="2019-11-06T06:45:15Z" level=warning msg="If the 'config' flag is used, the 'webhook-url' and 'request-uri' flags will be ignored." 
time="2019-11-06T06:45:15Z" level=info msg="Parsing the configuration file: /etc/config/connectors.yaml" time="2019-11-06T06:45:15Z" level=info msg="Creating the server request path \"/alertmanager\" with webhook \"https://outlook.office.com/webhook/00ce0266-7013-4d53-a20f-115ece04042d@9afb1f8a-2192-45ba-b0a1-6b193c758e24/IncomingWebhook/43c3d745ff5e426282f1bc6b5e79bfea/8368b12d-8ac9-4832-b7b5-b337ac267220\"" time="2019-11-06T06:45:15Z" level=info msg="prometheus-msteams server started listening at 0.0.0.0:2000" time="2019-11-06T07:01:07Z" level=info msg="/alertmanager received a request" time="2019-11-06T07:01:07Z" level=debug msg="Prometheus Alert: {\"receiver\":\"prometheus-msteams\",\"status\":\"firing\",\"alerts\":[{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubeDeploymentReplicasMismatch\",\"deployment\":\"storagesvc\",\"endpoint\":\"http\",\"instance\":\"10.233.108.72:8080\",\"job\":\"kube-state-metrics\",\"namespace\":\"fission\",\"pod\":\"monitor-kube-state-metrics-856bc9455b-7z5qx\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"service\":\"monitor-kube-state-metrics\",\"severity\":\"critical\"},\"annotations\":{\"message\":\"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 
minutes.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"},\"startsAt\":\"2019-11-06T07:00:32.453590324Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"http://monitor-prometheus-operato-prometheus.monitoring:9090/graph?g0.expr=kube_deployment_spec_replicas%7Bjob%3D%22kube-state-metrics%22%7D+%21%3D+kube_deployment_status_replicas_available%7Bjob%3D%22kube-state-metrics%22%7D\\u0026g0.tab=1\"},{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubePodNotReady\",\"namespace\":\"fission\",\"pod\":\"storagesvc-5bff46b69b-vfdrd\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"severity\":\"critical\"},\"annotations\":{\"message\":\"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"},\"startsAt\":\"2019-11-06T07:00:32.453590324Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"http://monitor-prometheus-operato-prometheus.monitoring:9090/graph?g0.expr=sum+by%28namespace%2C+pod%29+%28kube_pod_status_phase%7Bjob%3D%22kube-state-metrics%22%2Cphase%3D~%22Failed%7CPending%7CUnknown%22%7D%29+%3E+0\\u0026g0.tab=1\"}],\"groupLabels\":{\"namespace\":\"fission\",\"severity\":\"critical\"},\"commonLabels\":{\"namespace\":\"fission\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"severity\":\"critical\"},\"commonAnnotations\":{},\"externalURL\":\"http://monitor-prometheus-operato-alertmanager.monitoring:9093\",\"version\":\"4\",\"groupKey\":\"{}:{namespace=\\\"fission\\\", severity=\\\"critical\\\"}\"}" time="2019-11-06T07:01:07Z" level=debug msg="Alert rendered in template file: \r\n{\r\n \"@type\": \"MessageCard\",\r\n \"@context\": \"http://schema.org/extensions\",\r\n \"themeColor\": \"8C1A1A\",\r\n \"summary\": \"\",\r\n \"title\": \"Prometheus 
Alert (firing)\",\r\n \"sections\": [ \r\n {\r\n \"activityTitle\": \"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\r\n \"facts\": [\r\n {\r\n \"name\": \"message\",\r\n \"value\": \"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\"\r\n },\r\n {\r\n \"name\": \"runbook\\\\_url\",\r\n \"value\": \"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"\r\n },\r\n {\r\n \"name\": \"alertname\",\r\n \"value\": \"KubeDeploymentReplicasMismatch\"\r\n },\r\n {\r\n \"name\": \"deployment\",\r\n \"value\": \"storagesvc\"\r\n },\r\n {\r\n \"name\": \"endpoint\",\r\n \"value\": \"http\"\r\n },\r\n {\r\n \"name\": \"instance\",\r\n \"value\": \"10.233.108.72:8080\"\r\n },\r\n {\r\n \"name\": \"job\",\r\n \"value\": \"kube-state-metrics\"\r\n },\r\n {\r\n \"name\": \"namespace\",\r\n \"value\": \"fission\"\r\n },\r\n {\r\n \"name\": \"pod\",\r\n \"value\": \"monitor-kube-state-metrics-856bc9455b-7z5qx\"\r\n },\r\n {\r\n \"name\": \"prometheus\",\r\n \"value\": \"monitoring/monitor-prometheus-operato-prometheus\"\r\n },\r\n {\r\n \"name\": \"service\",\r\n \"value\": \"monitor-kube-state-metrics\"\r\n },\r\n {\r\n \"name\": \"severity\",\r\n \"value\": \"critical\"\r\n }\r\n ],\r\n \"markdown\": true\r\n },\r\n {\r\n \"activityTitle\": \"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\r\n \"facts\": [\r\n {\r\n \"name\": \"message\",\r\n \"value\": \"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\"\r\n },\r\n {\r\n \"name\": \"runbook\\\\_url\",\r\n \"value\": \"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"\r\n },\r\n {\r\n \"name\": \"alertname\",\r\n \"value\": \"KubePodNotReady\"\r\n },\r\n {\r\n \"name\": \"namespace\",\r\n \"value\": \"fission\"\r\n },\r\n {\r\n \"name\": \"pod\",\r\n 
\"value\": \"storagesvc-5bff46b69b-vfdrd\"\r\n },\r\n {\r\n \"name\": \"prometheus\",\r\n \"value\": \"monitoring/monitor-prometheus-operato-prometheus\"\r\n },\r\n {\r\n \"name\": \"severity\",\r\n \"value\": \"critical\"\r\n }\r\n ],\r\n \"markdown\": true\r\n }\r\n ]\r\n}\r\n" time="2019-11-06T07:01:07Z" level=debug msg="Size of message is 1714 Bytes (~1 KB)" time="2019-11-06T07:01:07Z" level=info msg="Created a card for Microsoft Teams /alertmanager" time="2019-11-06T07:01:07Z" level=debug msg="Teams message cards: [{\"@type\":\"MessageCard\",\"@context\":\"http://schema.org/extensions\",\"themeColor\":\"8C1A1A\",\"summary\":\"\",\"title\":\"Prometheus Alert (firing)\",\"sections\":[{\"activityTitle\":\"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\"facts\":[{\"name\":\"message\",\"value\":\"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\"},{\"name\":\"runbook\\\\_url\",\"value\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"},{\"name\":\"alertname\",\"value\":\"KubeDeploymentReplicasMismatch\"},{\"name\":\"deployment\",\"value\":\"storagesvc\"},{\"name\":\"endpoint\",\"value\":\"http\"},{\"name\":\"instance\",\"value\":\"10.233.108.72:8080\"},{\"name\":\"job\",\"value\":\"kube-state-metrics\"},{\"name\":\"namespace\",\"value\":\"fission\"},{\"name\":\"pod\",\"value\":\"monitor-kube-state-metrics-856bc9455b-7z5qx\"},{\"name\":\"prometheus\",\"value\":\"monitoring/monitor-prometheus-operato-prometheus\"},{\"name\":\"service\",\"value\":\"monitor-kube-state-metrics\"},{\"name\":\"severity\",\"value\":\"critical\"}],\"markdown\":true},{\"activityTitle\":\"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\"facts\":[{\"name\":\"message\",\"value\":\"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 
minutes.\"},{\"name\":\"runbook\\\\_url\",\"value\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"},{\"name\":\"alertname\",\"value\":\"KubePodNotReady\"},{\"name\":\"namespace\",\"value\":\"fission\"},{\"name\":\"pod\",\"value\":\"storagesvc-5bff46b69b-vfdrd\"},{\"name\":\"prometheus\",\"value\":\"monitoring/monitor-prometheus-operato-prometheus\"},{\"name\":\"severity\",\"value\":\"critical\"}],\"markdown\":true}]}]" time="2019-11-06T07:01:07Z" level=info msg="Microsoft Teams response text: 1" time="2019-11-06T07:01:07Z" level=info msg="A card was successfully sent to Microsoft Teams Channel. Got http status: 200 OK" time="2019-11-06T07:01:07Z" level=info msg="Microsoft Teams response text: Summary or Text is required." time="2019-11-06T07:01:07Z" level=error msg="Failed sending to the Teams Channel. Teams http response: 400 Bad Request" time="2019-11-06T07:01:08Z" level=info msg="/alertmanager received a request" time="2019-11-06T07:01:08Z" level=debug msg="Prometheus Alert: {\"receiver\":\"prometheus-msteams\",\"status\":\"firing\",\"alerts\":[{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubeDeploymentReplicasMismatch\",\"deployment\":\"storagesvc\",\"endpoint\":\"http\",\"instance\":\"10.233.108.72:8080\",\"job\":\"kube-state-metrics\",\"namespace\":\"fission\",\"pod\":\"monitor-kube-state-metrics-856bc9455b-7z5qx\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"service\":\"monitor-kube-state-metrics\",\"severity\":\"critical\"},\"annotations\":{\"message\":\"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 
minutes.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"},\"startsAt\":\"2019-11-06T07:00:32.453590324Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"http://monitor-prometheus-operato-prometheus.monitoring:9090/graph?g0.expr=kube_deployment_spec_replicas%7Bjob%3D%22kube-state-metrics%22%7D+%21%3D+kube_deployment_status_replicas_available%7Bjob%3D%22kube-state-metrics%22%7D\\u0026g0.tab=1\"},{\"status\":\"firing\",\"labels\":{\"alertname\":\"KubePodNotReady\",\"namespace\":\"fission\",\"pod\":\"storagesvc-5bff46b69b-vfdrd\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"severity\":\"critical\"},\"annotations\":{\"message\":\"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\",\"runbook_url\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"},\"startsAt\":\"2019-11-06T07:00:32.453590324Z\",\"endsAt\":\"0001-01-01T00:00:00Z\",\"generatorURL\":\"http://monitor-prometheus-operato-prometheus.monitoring:9090/graph?g0.expr=sum+by%28namespace%2C+pod%29+%28kube_pod_status_phase%7Bjob%3D%22kube-state-metrics%22%2Cphase%3D~%22Failed%7CPending%7CUnknown%22%7D%29+%3E+0\\u0026g0.tab=1\"}],\"groupLabels\":{\"namespace\":\"fission\",\"severity\":\"critical\"},\"commonLabels\":{\"namespace\":\"fission\",\"prometheus\":\"monitoring/monitor-prometheus-operato-prometheus\",\"severity\":\"critical\"},\"commonAnnotations\":{},\"externalURL\":\"http://monitor-prometheus-operato-alertmanager.monitoring:9093\",\"version\":\"4\",\"groupKey\":\"{}:{namespace=\\\"fission\\\", severity=\\\"critical\\\"}\"}" time="2019-11-06T07:01:08Z" level=debug msg="Alert rendered in template file: \r\n{\r\n \"@type\": \"MessageCard\",\r\n \"@context\": \"http://schema.org/extensions\",\r\n \"themeColor\": \"8C1A1A\",\r\n \"summary\": \"\",\r\n \"title\": \"Prometheus 
Alert (firing)\",\r\n \"sections\": [ \r\n {\r\n \"activityTitle\": \"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\r\n \"facts\": [\r\n {\r\n \"name\": \"message\",\r\n \"value\": \"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\"\r\n },\r\n {\r\n \"name\": \"runbook\\\\_url\",\r\n \"value\": \"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"\r\n },\r\n {\r\n \"name\": \"alertname\",\r\n \"value\": \"KubeDeploymentReplicasMismatch\"\r\n },\r\n {\r\n \"name\": \"deployment\",\r\n \"value\": \"storagesvc\"\r\n },\r\n {\r\n \"name\": \"endpoint\",\r\n \"value\": \"http\"\r\n },\r\n {\r\n \"name\": \"instance\",\r\n \"value\": \"10.233.108.72:8080\"\r\n },\r\n {\r\n \"name\": \"job\",\r\n \"value\": \"kube-state-metrics\"\r\n },\r\n {\r\n \"name\": \"namespace\",\r\n \"value\": \"fission\"\r\n },\r\n {\r\n \"name\": \"pod\",\r\n \"value\": \"monitor-kube-state-metrics-856bc9455b-7z5qx\"\r\n },\r\n {\r\n \"name\": \"prometheus\",\r\n \"value\": \"monitoring/monitor-prometheus-operato-prometheus\"\r\n },\r\n {\r\n \"name\": \"service\",\r\n \"value\": \"monitor-kube-state-metrics\"\r\n },\r\n {\r\n \"name\": \"severity\",\r\n \"value\": \"critical\"\r\n }\r\n ],\r\n \"markdown\": true\r\n },\r\n {\r\n \"activityTitle\": \"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\r\n \"facts\": [\r\n {\r\n \"name\": \"message\",\r\n \"value\": \"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 minutes.\"\r\n },\r\n {\r\n \"name\": \"runbook\\\\_url\",\r\n \"value\": \"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"\r\n },\r\n {\r\n \"name\": \"alertname\",\r\n \"value\": \"KubePodNotReady\"\r\n },\r\n {\r\n \"name\": \"namespace\",\r\n \"value\": \"fission\"\r\n },\r\n {\r\n \"name\": \"pod\",\r\n 
\"value\": \"storagesvc-5bff46b69b-vfdrd\"\r\n },\r\n {\r\n \"name\": \"prometheus\",\r\n \"value\": \"monitoring/monitor-prometheus-operato-prometheus\"\r\n },\r\n {\r\n \"name\": \"severity\",\r\n \"value\": \"critical\"\r\n }\r\n ],\r\n \"markdown\": true\r\n }\r\n ]\r\n}\r\n" time="2019-11-06T07:01:08Z" level=debug msg="Size of message is 1714 Bytes (~1 KB)" time="2019-11-06T07:01:08Z" level=info msg="Created a card for Microsoft Teams /alertmanager" time="2019-11-06T07:01:08Z" level=debug msg="Teams message cards: [{\"@type\":\"MessageCard\",\"@context\":\"http://schema.org/extensions\",\"themeColor\":\"8C1A1A\",\"summary\":\"\",\"title\":\"Prometheus Alert (firing)\",\"sections\":[{\"activityTitle\":\"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\"facts\":[{\"name\":\"message\",\"value\":\"Deployment fission/storagesvc has not matched the expected number of replicas for longer than 15 minutes.\"},{\"name\":\"runbook\\\\_url\",\"value\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubedeploymentreplicasmismatch\"},{\"name\":\"alertname\",\"value\":\"KubeDeploymentReplicasMismatch\"},{\"name\":\"deployment\",\"value\":\"storagesvc\"},{\"name\":\"endpoint\",\"value\":\"http\"},{\"name\":\"instance\",\"value\":\"10.233.108.72:8080\"},{\"name\":\"job\",\"value\":\"kube-state-metrics\"},{\"name\":\"namespace\",\"value\":\"fission\"},{\"name\":\"pod\",\"value\":\"monitor-kube-state-metrics-856bc9455b-7z5qx\"},{\"name\":\"prometheus\",\"value\":\"monitoring/monitor-prometheus-operato-prometheus\"},{\"name\":\"service\",\"value\":\"monitor-kube-state-metrics\"},{\"name\":\"severity\",\"value\":\"critical\"}],\"markdown\":true},{\"activityTitle\":\"[](http://monitor-prometheus-operato-alertmanager.monitoring:9093)\",\"facts\":[{\"name\":\"message\",\"value\":\"Pod fission/storagesvc-5bff46b69b-vfdrd has been in a non-ready state for longer than 15 
minutes.\"},{\"name\":\"runbook\\\\_url\",\"value\":\"https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodnotready\"},{\"name\":\"alertname\",\"value\":\"KubePodNotReady\"},{\"name\":\"namespace\",\"value\":\"fission\"},{\"name\":\"pod\",\"value\":\"storagesvc-5bff46b69b-vfdrd\"},{\"name\":\"prometheus\",\"value\":\"monitoring/monitor-prometheus-operato-prometheus\"},{\"name\":\"severity\",\"value\":\"critical\"}],\"markdown\":true}]}]" time="2019-11-06T07:01:08Z" level=info msg="Microsoft Teams response text: Summary or Text is required." time="2019-11-06T07:01:08Z" level=error msg="Failed sending to the Teams Channel. Teams http response: 400 Bad Request" </code></pre> <p>Maybe due to this <code>400 bad request</code> error from prometheus-msteams, the alertmanager was returning <code>unexpected status code 500</code>.</p>
<p>An issue with the file <a href="https://github.com/bzon/prometheus-msteams/blob/master/chart/prometheus-msteams/card.tmpl" rel="nofollow noreferrer">https://github.com/bzon/prometheus-msteams/blob/master/chart/prometheus-msteams/card.tmpl</a> caused these errors.</p> <p>The problem was that the <code>summary</code> field was empty. A slight change to the file, as described in this <a href="https://lapee79.github.io/en/article/prometheus-alertmanager-with-msteams/" rel="nofollow noreferrer">tutorial</a>, solved the errors.</p> <p>You can use the modified card template by overriding the default one.</p>
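<p>For reference, a minimal sketch of the kind of change involved: the rendered MessageCard must contain a non-empty <code>summary</code> (or <code>text</code>) field, so one option is to derive it from the alert status instead of leaving it blank. The exact template shipped with your chart version may differ, so treat this as an illustration rather than the upstream fix:</p> <pre><code>{
  "@type": "MessageCard",
  "@context": "http://schema.org/extensions",
  "themeColor": "8C1A1A",
  "summary": "Prometheus Alert ({{ .Status }})",
  "title": "Prometheus Alert ({{ .Status }})",
  ...
}
</code></pre>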
<p>How to patch "db.password" in the following ConfigMap with kustomize?</p> <p>ConfigMap:</p> <pre><code>apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "123456",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm
</code></pre>
<p>You can create a new file with the updated values and use the <code>replace</code> command together with <code>create</code>:</p> <pre><code>kubectl create configmap NAME --from-file file.name -o yaml --dry-run | kubectl replace -f -
</code></pre>
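<p>If you specifically want to do it with kustomize, one possible approach (a sketch; the file names are assumptions, and it relies on the <code>patchesStrategicMerge</code> feature) is a strategic merge patch that replaces the whole <code>dbp.conf</code> key, since the password lives inside an opaque string value:</p> <pre><code># kustomization.yaml
resources:
  - dbcm.yaml            # the original ConfigMap (hypothetical file name)
patchesStrategicMerge:
  - dbcm-patch.yaml
</code></pre> <pre><code># dbcm-patch.yaml - replaces the whole dbp.conf value
apiVersion: v1
kind: ConfigMap
metadata:
  name: dbcm
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "new-password",
        "db.user": "root"
      }
    }
</code></pre> <p>Running <code>kustomize build .</code> then renders the ConfigMap with the new password.</p>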
<p>I would like to learn Kubernetes and would like to set it up on my laptop.</p> <p>The architecture would be as follows:</p> <p><a href="https://i.stack.imgur.com/MRVWw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MRVWw.png" alt="enter image description here"></a></p> <ul> <li>Create 4 Ubuntu 18.04 server VM instances on my laptop</li> <li>3 of the 4 VMs will form the Kubernetes cluster and 1 VM will be the base</li> <li>Access the base VM via SSH</li> </ul> <p>For virtualization, I am using VirtualBox.</p> <p>The question is, how to achieve it?</p>
<p>To set up a Kubernetes cluster on Ubuntu Server with VirtualBox and kubeadm, follow these steps: </p> <h2>Prerequisites:</h2> <ul> <li>Virtual machines with a minimum specification of: <ul> <li>2 cores and 2GB RAM for the master node </li> <li>1 core and 1GB RAM for each of the worker nodes </li> </ul></li> <li>Ubuntu Server 18.04 installed on all virtual machines </li> <li>OpenSSH Server installed on all virtual machines </li> </ul> <p>All of the virtual machines need to communicate with the Internet, the main host and each other. This can be done through various means like bridged networking, virtual host adapters etc. The networking scheme example below can be adjusted. </p> <p><a href="https://i.stack.imgur.com/iW04g.png" rel="nofollow noreferrer">Network scheme</a></p> <h2>Ansible:</h2> <p>You can do everything manually, but to speed up the configuration process you can use an automation tool like Ansible. It can be installed on the virtualization host, another virtual machine etc. </p> <h3>Installation steps to reproduce on host</h3> <ul> <li>Refresh the information about packages in the repository:<br> <code>$ sudo apt update</code> </li> <li>Install the package manager for Python 3:<br> <code>$ sudo apt install python3-pip</code> </li> <li>Install the Ansible package:<br> <code>$ sudo pip3 install ansible</code> </li> </ul> <h2>Configuring SSH key based access:</h2> <h3>Generating key pairs</h3> <p>To be able to connect to the virtual machines without a password you need to configure ssh keys.
The command below will create a pair of ssh keys (private and public) and allow you to log in to different systems without providing a password.<br> <code>$ ssh-keygen -t rsa -b 4096</code><br> These keys will be created in the default location: <strong>/home/USER/.ssh</strong></p> <h3>Authorization of keys on virtual machines</h3> <p>The next step is to upload the newly created ssh keys to all of the virtual machines.<br> <strong>For each virtual machine you need to invoke:</strong><br> <code>$ ssh-copy-id USER@IP_ADDRESS</code><br> This command will copy your public key to the authorized_keys file and will allow you to log in without a password. </p> <h3>SSH root access</h3> <p>By default, the root account can't be accessed over ssh with a password; it can only be accessed with ssh keys (which you created earlier). Assuming the default file configurations, you can copy the ssh directory from the user's home directory to root's.</p> <p><strong>This step needs to be invoked on all virtual machines:</strong><br> <code>$ sudo cp -r /home/USER/.ssh /root/</code> </p> <p>You can check it by running the command below on the main host:<br> <code>$ ssh root@IP_ADDRESS</code> </p> <p>If you can connect without a password, the keys are configured correctly. </p> <h2>Checking connection between virtual machines and Ansible:</h2> <h3>Testing the connection</h3> <p>You need to check if Ansible can connect to all of the virtual machines.
To do that you need two things: </p> <ul> <li>A <strong>hosts</strong> file with information about the hosts (virtual machines in this case) </li> <li>A <strong>playbook</strong> file with statements describing what you require Ansible to do </li> </ul> <p>Example hosts file: </p> <pre><code>[kubernetes:children]
master
nodes

[kubernetes:vars]
ansible_user=root
ansible_port=22

[master]
kubernetes-master ansible_host=10.0.0.10

[nodes]
kubernetes-node1 ansible_host=10.0.0.11
kubernetes-node2 ansible_host=10.0.0.12
kubernetes-node3 ansible_host=10.0.0.13
</code></pre> <p>The hosts file consists of 2 main groups of hosts:</p> <ul> <li>master - group created for the master node </li> <li>nodes - group created for the worker nodes </li> </ul> <p>Variables specific to a group are stored in the section <strong>[kubernetes:vars]</strong>. </p> <p>Example playbook:</p> <pre><code>- name: Playbook for checking connection between hosts
  hosts: all
  gather_facts: no
  tasks:
    - name: Task to check the connection
      ping:
</code></pre> <p>The main purpose of the above playbook is to check the connection between the host and the virtual machines.<br> You can test the connection by invoking:<br> <code>$ ansible-playbook -i hosts_file ping.yaml</code> </p> <p>The output of this command should look like this: </p> <pre class="lang-sh prettyprint-override"><code>PLAY [Playbook for checking connection between hosts] *****************************************************

TASK [Task to check the connection] ***********************************************************************
ok: [kubernetes-node1]
ok: [kubernetes-node2]
ok: [kubernetes-node3]
ok: [kubernetes-master]

PLAY RECAP ************************************************************************************************
kubernetes-master          : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
kubernetes-node1           : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
kubernetes-node2           : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
kubernetes-node3           : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
</code></pre> <p>The output above proves that the connection between Ansible and the virtual machines has been successful. </p> <h2>Configuration before cluster deployment:</h2> <h3>Configure hostnames</h3> <p>Hostnames can be configured with Ansible. Each VM should be able to reach the other VMs by hostname. Ansible can modify the hostnames as well as the /etc/hosts file. Example playbook: <a href="https://pastebin.com/4KZWVA3B" rel="nofollow noreferrer">hostname.yaml</a></p> <h3>Disable SWAP</h3> <p>Swap needs to be disabled when working with Kubernetes. Example playbook: <a href="https://pastebin.com/6W1ZcTTp" rel="nofollow noreferrer">disable_swap.yaml</a></p> <h3>Additional software installation</h3> <p>Some packages are required before provisioning. All of them can be installed with Ansible:<br> Example playbook: <a href="https://pastebin.com/NHTRiFiF" rel="nofollow noreferrer">apt_install.yaml</a></p> <h3>Container Runtime Interface</h3> <p>In this example you will install Docker as your CRI. The playbook <a href="https://pastebin.com/CP6SXw2Z" rel="nofollow noreferrer">docker_install.yaml</a> will:</p> <ul> <li>Add the apt signing key for Docker</li> <li>Add Docker's repository </li> <li>Install Docker with a specific version (latest recommended) </li> </ul> <h3>Docker configuration</h3> <blockquote> <p><strong>[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd"</strong></p> </blockquote> <p>When deploying a Kubernetes cluster, kubeadm will give the above warning about the Docker cgroup driver. The playbook <a href="https://pastebin.com/PwijtqAh" rel="nofollow noreferrer">docker_configure.yaml</a> was created to resolve this issue. </p> <h3>Kubernetes tools installation</h3> <p>There are some core components of Kubernetes that need to be installed before cluster deployment.
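<p>Since the linked pastebins may go away, here is a minimal sketch of what such a playbook can look like, using swap as the example (the linked <code>disable_swap.yaml</code> may differ in detail):</p> <pre><code>- name: Disable swap on all nodes
  hosts: all
  tasks:
    - name: Disable swap immediately
      command: swapoff -a

    - name: Comment out swap entries in /etc/fstab
      replace:
        path: /etc/fstab
        regexp: '^([^#].*\sswap\s.*)$'
        replace: '# \1'
</code></pre>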
The playbook <a href="https://pastebin.com/HgzJGzN8" rel="nofollow noreferrer">kubetools_install.yaml</a> will: </p> <ul> <li>For master and worker nodes: <ul> <li>Add the apt signing key for Kubernetes</li> <li>Add the Kubernetes repository </li> <li>Install kubelet and kubeadm</li> </ul></li> <li>Additionally for the master node: <ul> <li>Install kubectl </li> </ul></li> </ul> <h3>Reboot</h3> <p>The playbook <a href="https://pastebin.com/ppKKw81k" rel="nofollow noreferrer">reboot.yaml</a> will reboot all the virtual machines. </p> <h2>Cluster deployment:</h2> <h3>Cluster initialization</h3> <p>After successfully completing all the steps above, the cluster can be created. The command below will initialize a cluster: </p> <p><code>$ kubeadm init --apiserver-advertise-address=IP_ADDRESS_OF_MASTER_NODE --pod-network-cidr=192.168.0.0/16</code></p> <p>kubeadm can give a warning about the number of CPUs. It can be ignored by passing an additional argument to the kubeadm init command: <code>--ignore-preflight-errors=NumCPU</code></p> <p>Successful kubeadm provisioning should output something similar to this: </p> <pre><code>Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
    --discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
</code></pre> <p>Copy the kubeadm join command for the worker nodes: </p> <pre><code>kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
    --discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
</code></pre> <p>Run the commands below as a regular user: </p> <pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre> <h3>Deploying Container Network Interface (CNI)</h3> <p>The CNI is responsible for networking between pods and nodes. There are many implementations, like: </p> <ul> <li>Flannel</li> <li>Calico</li> <li>Weave</li> <li>Multus</li> </ul> <p>The command below will install Calico:</p> <p><code>$ kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml</code></p> <h3>Provisioning worker nodes</h3> <p>Run the previously stored command from the kubeadm init output <strong>on all worker nodes</strong>: </p> <pre><code>kubeadm join 10.0.0.10:6443 --token SECRET-TOKEN \
    --discovery-token-ca-cert-hash sha256:SECRET-CA-CERT-HASH
</code></pre> <p>All of the worker nodes should output: </p> <pre><code>This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
</code></pre> <h2>Testing:</h2> <p>Run the command below on the master node as a regular user to check if the nodes are properly connected: </p> <p><code>$ kubectl get nodes</code></p> <p>The output of this command: </p> <pre><code>NAME                STATUS   ROLES    AGE    VERSION
kubernetes-master   Ready    master   115m   v1.16.2
kubernetes-node1    Ready    &lt;none&gt;   106m   v1.16.2
kubernetes-node2    Ready    &lt;none&gt;   105m   v1.16.2
kubernetes-node3    Ready    &lt;none&gt;   105m   v1.16.2
</code></pre> <p>The above output confirms that all the nodes are configured correctly. </p> <p><strong>Pods can now be deployed on the cluster!</strong></p>
<p>I have created a new reserved static IP in my GCP project. See the screenshot, marker <strong>1.</strong> <a href="https://i.stack.imgur.com/DlBa8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DlBa8.png" alt="enter image description here"></a> However, my Ingress resource is still using the generated IP (screenshot marker <strong>2.</strong>)</p> <p>In the Ingress' YAML file you can see I annotated the static IP name.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-31465--b5c10175cf4f125b":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-teamcity--b5c10175cf4f125b
    ingress.kubernetes.io/target-proxy: k8s-tp-default-teamcity--b5c10175cf4f125b
    ingress.kubernetes.io/url-map: k8s-um-default-teamcity--b5c10175cf4f125b
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.global-static-ip-name":"teamcity-static-ip"},"name":"teamcity","namespace":"default"},"spec":{"backend":{"serviceName":"teamcity","servicePort":8111}}}
    kubernetes.io/ingress.global-static-ip-name: teamcity-static-ip
  creationTimestamp: "2019-11-12T13:57:41Z"
  generation: 1
  name: teamcity
  namespace: default
  resourceVersion: "3433973"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/teamcity
  uid: 6484482e-0554-11ea-af7d-42010a8400aa
spec:
  backend:
    serviceName: teamcity
    servicePort: 8111
status:
  loadBalancer:
    ingress:
    - ip: 35.190.86.15
</code></pre> <p>That's why I am confused that it is not being assigned to the Ingress resource as expected.</p>
<p>It's in the name of the annotation: <code>kubernetes.io/ingress.global-static-ip-name</code>, so it works only with a <em>global</em> IP.</p> <p>And if you look at the screenshot, your IP is a regional one, reserved in <code>europe-west1</code>.</p> <p>Create a global IP, delete and recreate the ingress, and it should work ;)</p>
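<p>For example (a sketch, assuming the <code>gcloud</code> CLI, that the address lives in <code>europe-west1</code>, and keeping the same reserved-address name; the ingress manifest file name is hypothetical):</p> <pre><code># Release the regional address and reserve a global one under the same name
gcloud compute addresses delete teamcity-static-ip --region europe-west1
gcloud compute addresses create teamcity-static-ip --global

# Recreate the ingress so the controller picks up the global address
kubectl delete ingress teamcity
kubectl apply -f teamcity-ingress.yaml
</code></pre>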
<p>I've got a basic architecture set up in Kubernetes: a Laravel container for my application-level code, and a MySQL container for my database. And I'm looking to implement a code-compiling API service (as a separate container) which accepts user-generated code; I then run a Docker container to compile the code and return the output to the user. </p> <p>There are some pretty raw implementations online, but most of them use Docker as a method of running user-generated code compilation in an isolated environment (as you should), while the application itself is not hosted using containers or a container management system.</p> <p>The question is, how can I spin up Docker containers to handle a task and then return the output to my Laravel API container before shutting the container down? </p> <p>Apparently, running a Docker container inside a Docker container is not best practice.</p> <p>The process:</p> <ol> <li>User sends a POST request to the Laravel API container</li> <li>The Laravel API container takes the request and runs a Docker container to compile the code</li> <li>The temporary Docker container returns the compiled output to the Laravel API container before shutting down.</li> <li>The Laravel API container returns the compiled response to the end user.</li> </ol> <p>I'm running my app in a Kubernetes cluster, and a Docker/Kubernetes solution is needed. I'd rather not run raw commands for starting a Docker container in my application-level code but have a more high-level solution. </p>
<p>You can use the Kubernetes Job resource to perform this kind of task. </p> <p>Job objects can be spawned to run a process inside and can be set to terminate automatically afterwards. A Job in Kubernetes is a supervisor for pods carrying out batch processes, that is, processes that run for a certain time to completion. You are able to run multiple pod instances inside one Job (in parallel or sequentially). </p> <p>Check this <a href="http://kubernetesbyexample.com/jobs/" rel="nofollow noreferrer">page</a> for more details about Jobs. </p> <p>So basically your flow should look like this: </p> <ol> <li>The user sends a request to the Laravel API container </li> <li>The Laravel API container interacts with the API server in order to create the Job.</li> <li>The pod inside the Job compiles the code and, after compilation, sends a request to the Laravel API pod to deliver the compiled binary.</li> </ol> <p>The delivery of the binary has to be implemented by you.</p> <ol start="4"> <li>The Laravel API container returns the compiled response to the user. </li> </ol> <p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster" rel="nofollow noreferrer">This documentation link</a> shows how to connect to the API, especially the section <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">Accessing the API from a Pod</a>.</p>
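<p>A minimal Job manifest for the compile step could look like the sketch below; the image name, command and TTL value are assumptions, and <code>ttlSecondsAfterFinished</code> requires the TTL-after-finished feature to be enabled on your cluster:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: compile-job-123          # unique per user request
spec:
  backoffLimit: 0                # do not retry failed compilations
  ttlSecondsAfterFinished: 60    # clean the Job up automatically after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: compiler
          image: example/compiler:latest   # hypothetical sandbox image
          command: ["/bin/compile", "--source", "/input/code"]
</code></pre>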
<p>I am new to Kubernetes and I'm trying to deploy an application to Kubernetes via microk8s. The application contains a Python Flask backend, an Angular frontend, Redis and a MySQL database. I deployed the images in multiple pods and the status is showing "running", but the pods are not communicating with each other. </p> <p>The app is completely dockerized and it's functioning at the Docker level. Before deploying to Kubernetes my Flask host was 0.0.0.0 and the MySQL host was the "service name" in the docker-compose.yaml, but currently I replaced them with the service names from the Kubernetes YAML files.</p> <p>Also, in the Angular frontend I have changed the URL to connect to the backend from <a href="http://localhost:5000" rel="nofollow noreferrer">http://localhost:5000</a> to <a href="http://backend-service" rel="nofollow noreferrer">http://backend-service</a>, where backend-service is the name (DNS) given in the backend-service.yml file. But this also didn't make any change. Can someone tell me how I can make these pods communicate?</p> <p>I am able to access only the frontend after deploying; the rest is not connected.</p> <p>Listing down the service and deployment files of Angular and the backend.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: angular-service
spec:
  type: NodePort
  selector:
    name: angular
  ports:
    - protocol: TCP
      nodePort: 30042
      targetPort: 4200
      port: 4200
</code></pre> <hr> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP
  selector:
    name: backend
  ports:
    - protocol: TCP
      targetPort: 5000
      port: 5000
</code></pre> <p>Thanks in advance!</p> <p>(<strong>Modified service files</strong>)</p>
<p><em>For internal communication between different microservices in <strong>Kubernetes</strong></em> you should use a <code>Service</code> of type <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service" rel="noreferrer">ClusterIP</a>. It is actually the <strong>default type</strong>, so even if you don't specify it in your <code>Service</code> yaml definition file, <strong>Kubernetes</strong> assumes you want to create a <code>ClusterIP</code>. It creates a virtual internal IP (accessible within your Kubernetes cluster) and exposes your cluster component (microservice) as a <em>single entry point</em> even if it is backed by many pods.</p> <p>Let's assume you have a front-end app which needs to communicate with a back-end component that runs in 3 different pods. The <code>ClusterIP</code> service provides a single entry point and handles load-balancing between the different pods, distributing requests evenly among them.</p> <p>You can access your <code>ClusterIP</code> service by providing its IP address and the port that your application component is exposed on. Note that you may define a different port (referred to as <code>port</code> in the <code>Service</code> definition) for the <code>Service</code> to listen on than the actual port used by your application (referred to as <code>targetPort</code> in your <code>Service</code> definition). Although it is possible to access the <code>Service</code> using its <code>ClusterIP</code> address, all components that communicate with the pods it exposes internally <strong>should use its DNS name</strong>. This is simply the <code>Service</code> name that you created, provided all application components are placed in the same namespace.
If some components are in different namespaces, you need to use the fully qualified domain name so they can communicate across namespaces.</p> <p>Your <code>Service</code> definition files may look like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: angular-service
spec:
  type: ClusterIP        ### may be omitted as it is the default type
  selector:
    name: angular        ### should match the labels defined for your angular pods
  ports:
    - protocol: TCP
      targetPort: 4200   ### port your angular app listens on
      port: 4200         ### port on which you want to expose it within your cluster
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP        ### may be omitted as it is the default type
  selector:
    name: backend        ### should match the labels defined for your backend pods
  ports:
    - protocol: TCP
      targetPort: 5000   ### port your backend app listens on
      port: 5000         ### port on which you want to expose it within your cluster
</code></pre> <p>You can find a detailed description of this topic in the official <strong>Kubernetes</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="noreferrer">documentation</a>.</p> <hr> <p><code>NodePort</code> has a totally different function. It may be used e.g. to expose your front-end app on a specific port on your node's IP. Note that if you have a Kubernetes cluster consisting of many nodes and your front-end pods are placed on different nodes, in order to access your application you need to use 3 different IP addresses. In such a case you need an additional load balancer. If you use some cloud platform solution and want to expose the front-end part of your application to the external world, the Service type <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">LoadBalancer</a> is the way to go (instead of using <code>NodePort</code>).</p>
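<p>For example, assuming both services live in the <code>default</code> namespace, the front-end can reach the back-end either by the bare service name or, from another namespace, by its fully qualified name:</p>

```
http://backend-service:5000                              # same namespace
http://backend-service.default.svc.cluster.local:5000    # fully qualified, works across namespaces
```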
<p>I am a beginner to Kubernetes. I am trying to install minikube as I want to run my application in Kubernetes. I am using Ubuntu 16.04.</p> <p>I have followed the installation instructions provided here: <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy</a></p> <p>Issue 1: After installing kubectl, VirtualBox and minikube I ran the command</p> <pre><code>minikube start --vm-driver=virtualbox
</code></pre> <p>It fails with the following error:</p> <pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0912 17:39:12.486830   17689 start.go:305] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
</code></pre> <p>But when I check VirtualBox I see the minikube VM running, and when I run kubectl</p> <pre><code>kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
</code></pre> <p>I see the deployment:</p> <pre><code>kubectl get deployment
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-minikube   1         1         1            1           27m
</code></pre> <p>I exposed the hello-minikube deployment as a service:</p> <pre><code>kubectl get service
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-minikube   LoadBalancer   10.102.236.236   &lt;pending&gt;     8080:31825/TCP   15m
kubernetes       ClusterIP      10.96.0.1        &lt;none&gt;        443/TCP          19h
</code></pre> <p>I got the URL for the service:</p> <pre><code>minikube service hello-minikube --url
http://192.168.99.100:31825
</code></pre> <p>When I try to curl the URL I get the following error:</p> <pre><code>curl http://192.168.99.100:31825
curl: (7) Failed to connect to 192.168.99.100 port 31825: Connection refused
</code></pre> <p>1) If the minikube cluster failed while starting, how was kubectl able to connect to minikube to do deployments and services? 2) If the cluster is fine, then why am I getting connection refused?</p> <p>I was looking at this proxy (<a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster</a>) — what is my_proxy in this?</p> <p>Is this the minikube IP and some port?</p> <p>I have tried this:</p> <p><a href="https://stackoverflow.com/questions/52300055/error-restarting-cluster-restarting-kube-proxy-waiting-for-kube-proxy-to-be-up">Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition</a></p> <p>but I do not understand how #3 (set proxy) in the solution is done. Can someone give me instructions for the proxy?</p> <p>Adding the command output which was asked for in the comments:</p> <pre><code>kubectl get po -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-minikube                           1/1       Running   0          4m
kube-addon-manager-minikube             1/1       Running   0          5m
kube-apiserver-minikube                 1/1       Running   0          4m
kube-controller-manager-minikube        1/1       Running   0          6m
kube-dns-86f4d74b45-sdj6p               3/3       Running   0          5m
kube-proxy-7ndvl                        1/1       Running   0          5m
kube-scheduler-minikube                 1/1       Running   0          5m
kubernetes-dashboard-5498ccf677-4x7sr   1/1       Running   0          5m
storage-provisioner                     1/1       Running   0          5m
</code></pre>
<blockquote> <p>I deleted minikube and removed all files under ~/.minikube and reinstalled minikube. Now it is working fine. I did not get the output before, but I have attached it to the question now that it is working. Can you tell me what the output of this command tells?</p> </blockquote> <p>It will be very difficult or even impossible to tell what exactly was wrong with your <strong>Minikube Kubernetes cluster</strong> when it has already been removed and set up again.</p> <p>Basically there were a few things that you could have done to properly troubleshoot or debug your issue.</p> <blockquote> <p>Adding the command output which was asked in the comments</p> </blockquote> <p>The output you posted is actually only part of the task that @Eduardo Baitello asked you to do. The <code>kubectl get po -n kube-system</code> command simply shows you a list of <code>Pods</code> in the <code>kube-system</code> namespace. In other words, this is the list of system pods forming your Kubernetes cluster and, as you can imagine, the proper functioning of each of these components is crucial. As you can see in your output, the <code>STATUS</code> of your <code>kube-proxy</code> pod is <code>Running</code>:</p> <pre><code>kube-proxy-7ndvl   1/1   Running   0   5m
</code></pre> <p>You were also asked in @Eduardo's question to check its logs. You can do that by issuing:</p> <pre><code>kubectl logs kube-proxy-7ndvl -n kube-system
</code></pre> <p>It could tell you what was wrong with this particular pod at the time the problem occurred.
Additionally, in such a case you may use the <code>describe</code> command to see other pod details (sometimes looking at the pod events may be very helpful to figure out what's going on with it):</p> <pre><code>kubectl describe pod kube-proxy-7ndvl -n kube-system
</code></pre> <p>The suggestion to check this particular <code>Pod</code>'s status and logs was most probably motivated by this fragment of the error messages shown during your Minikube startup process:</p> <pre><code>E0912 17:39:12.486830   17689 start.go:305] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition
</code></pre> <p>As you can see, this message clearly suggests that there is, in short, "something wrong" with <code>kube-proxy</code>, so it made a lot of sense to check it first.</p> <p>There is one more thing you may not have noticed:</p> <pre><code>kubectl get service
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-minikube   LoadBalancer   10.102.236.236   &lt;pending&gt;     8080:31825/TCP   15m
kubernetes       ClusterIP      10.96.0.1        &lt;none&gt;        443/TCP          19h
</code></pre> <p>Your <code>hello-minikube</code> service was not completely ready. In the <code>EXTERNAL-IP</code> column you can see that its state was <code>pending</code>. Just as you can use the <code>describe</code> command to describe <code>Pods</code>, you can use it to get the details of a service. A simple:</p> <pre><code>kubectl describe service hello-minikube
</code></pre> <p>could tell you quite a lot in such a case.</p> <blockquote> <p>1) If the minikube cluster failed while starting, how was kubectl able to connect to minikube to do deployments and services? 2) If the cluster is fine, then why am I getting connection refused?</p> </blockquote> <p>Remember that a <strong>Kubernetes Cluster</strong> is not a monolithic structure and consists of many parts that depend on one another.
The fact that <code>kubectl</code> worked and you could create a deployment doesn't mean that the whole cluster was working fine; as you can see in the error message, one of its components, namely <code>kube-proxy</code>, might actually not have been functioning properly.</p> <p>Going back to the beginning of your question...</p> <blockquote> <p>I have followed the installation instructions provided here <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy</a></p> <p>Issue1: After installing kubectl, virtualbox and minikube I have run the command</p> <pre><code>minikube start --vm-driver=virtualbox
</code></pre> </blockquote> <p>As far as I understood, you don't use an http proxy, so you didn't follow the instructions from this particular fragment of the docs that you posted, did you?</p> <p>I have the impression that you are mixing two concepts: <code>kube-proxy</code>, which is a <code>Kubernetes cluster</code> component deployed as a pod in the <code>kube-system</code> namespace, and the <a href="https://en.wikipedia.org/wiki/Proxy_server#Web_proxy_servers" rel="noreferrer">http proxy server</a> mentioned in <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy" rel="noreferrer">this</a> fragment of the documentation.
</p> <blockquote> <p>I was looking at this proxy (<a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster" rel="noreferrer">https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster</a>) what is my_proxy in this?</p> </blockquote> <p>If you don't know your http proxy address, most probably you simply aren't using one, and if you don't use one to connect to the Internet from your computer, <strong><em>it doesn't apply to your case in any way</em></strong>.</p> <p>Otherwise you need to set it up for your <strong>Minikube</strong> by <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/#using-minikube-with-an-http-proxy" rel="noreferrer">providing additional flags when you start it</a> as follows:</p> <pre><code>minikube start --docker-env http_proxy=http://$YOURPROXY:PORT \
               --docker-env https_proxy=https://$YOURPROXY:PORT
</code></pre> <p>If you were able to start your <strong>Minikube</strong> and it now works properly using only the command:</p> <pre><code>minikube start --vm-driver=virtualbox
</code></pre> <p>your issue was caused by something else and <em>you don't need to provide the above-mentioned flags to tell your <strong>Minikube</strong> which http proxy server you're using</em>.</p> <p>As far as I understand, currently everything is up and running and you can access the URL returned by the command <code>minikube service hello-minikube --url</code> without any problem, right? You can also run the command <code>kubectl get service hello-minikube</code> and check if its output differs from what you posted before. As you didn't attach any yaml definition files, it's difficult to tell whether there was anything wrong with your service definition.
Also note that <code>LoadBalancer</code> is a service type designed to work with external load balancers provided by cloud platforms; Minikube doesn't provision one, so such a service is effectively reachable through its <code>NodePort</code> instead (which is also why its <code>EXTERNAL-IP</code> stays <code>pending</code>).</p>
<p>I've set up a bare-metal cluster and want to provide different types of shared storage to my applications, one of which is an S3 bucket I mount via <a href="https://github.com/kahing/goofys" rel="nofollow noreferrer">goofys</a> to a pod that exports it via NFS. I then use the NFS client provisioner to mount the share to automatically provide volumes to pods.</p> <p>Leaving aside the performance comments, the issue is that the NFS client provisioner mounts the NFS share via the node's OS, so when I set the server name to the NFS pod, this is passed on to the node and it cannot mount because it has no route to the service/pod.</p> <p>The only solution so far has been to configure the service as NodePort, block external connections via ufw on the node, and configure the client provisioner to connect to 127.0.0.1:nodeport.</p> <p>I'm wondering if there is a way for the node to reach a cluster service using the service's DNS name?</p>
<p>I've managed to get around my issue by configuring the NFS client provisioner to use the service's clusterIP instead of the DNS name, because the node is unable to resolve the name to the IP but does have a route to the IP. Since the IP remains allocated unless I delete the service, this is stable, but of course it can't be automated easily, as a redeployment of the NFS server helm chart will change the service's IP.</p>
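<p>If the only remaining problem is the IP changing on redeployment, one option Kubernetes supports is pinning the clusterIP in the Service spec. A sketch (the service name, selector and IP below are illustrative; the IP must fall inside your cluster's service CIDR and be unused):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-server            # hypothetical name
spec:
  clusterIP: 10.96.100.50     # pinned address inside your service CIDR
  selector:
    app: nfs-server           # hypothetical label
  ports:
  - port: 2049                # NFS port
    protocol: TCP
```

<p>With the IP fixed in the manifest, redeploying the chart reproduces the same address, so the provisioner configuration survives.</p>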
<p>Can I use any of the tools openssl, ssh-keygen and cfssl to create the SSH certificates or the TLS certificates for Kubernetes components? Is there any difference between using these three tools? Will the keys/certificates generated by these tools be the same?</p>
<p>From the Kubernetes point of view, please take a look at the docs about using openssl and the somewhat more advanced cfssl <a href="https://kubernetes.io/docs/concepts/cluster-administration/certificates/" rel="nofollow noreferrer">here</a>.</p> <p>Note:</p> <blockquote> <p>All Kubernetes components that use these certificates - kubelet, kube-apiserver, kube-controller-manager - assume the key and certificate to be PEM-encoded.</p> </blockquote> <p>I'm not an expert in this matter, but you can take a look at community posts like:</p> <ul> <li><p><a href="https://serverfault.com/questions/9708/what-is-a-pem-file-and-how-does-it-differ-from-other-openssl-generated-key-file">Certificate standards</a> </p></li> <li><p><a href="https://security.stackexchange.com/questions/29876/what-are-the-differences-between-ssh-generated-keysssh-keygen-and-openssl-keys">differences between ssh generated keys (ssh-keygen) and OpenSSL keys (PEM)</a> </p></li> <li><p><a href="https://serverfault.com/questions/706336/how-to-get-a-pem-file-from-ssh-key-pair">pem file from ssh key pair</a> </p></li> <li><p><a href="https://blog.cloudflare.com/introducing-cfssl/" rel="nofollow noreferrer">Introducing CFSSL</a></p></li> <li><p><a href="https://knowledge.digicert.com/solution/SO26630.html" rel="nofollow noreferrer">certificate conversion</a> </p></li> <li><p><a href="https://support.ssl.com/Knowledgebase/Article/View/19/0/der-vs-crt-vs-cer-vs-pem-certificates-and-how-to-convert-them" rel="nofollow noreferrer">Certificate conversion</a> </p></li> </ul> <p>How is X.509 used in SSH? X.509 certificates are used as key storage: instead of keeping SSH keys in a proprietary format, the software keeps the keys in X.509 certificates.
When the SSH key exchange is done, the keys are taken from the certificates.</p> <p>Note - advantages of using X.509 certificates:</p> <blockquote> <p>An <a href="https://en.wikipedia.org/wiki/X.509" rel="nofollow noreferrer">X.509 certificate</a> contains a public key and an identity (a hostname, or an organization, or an individual), and is either signed by a certificate authority or self-signed. When a certificate is signed by a trusted certificate authority, or validated by other means, someone holding that certificate can rely on the public key it contains to establish secure communications with another party, or validate documents digitally signed by the corresponding private key.</p> </blockquote> <p>Hope this helps!</p>
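<p>As a minimal sketch of the openssl flow described in the linked Kubernetes certificates doc (the file names and CN values below are illustrative, not mandated by Kubernetes):</p>

```shell
# Generate a self-signed CA key and certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 365 -out ca.crt

# Generate a key and CSR for a component, then sign the CSR with the CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kube-apiserver" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out server.crt -days 365
```

<p>All the resulting files (<code>ca.crt</code>, <code>server.key</code>, <code>server.crt</code>) are PEM-encoded, which is the format the Kubernetes components expect.</p>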
<p>I'm trying to add a custom domain to my AKS cluster. All of the components I'm dealing with are within the same VNET, but the custom DNS server and the AKS service are in different subnets. I'd also like to avoid changing the DNS server at the VNET level.</p> <p>I've followed this guide to no avail:</p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/coredns-custom#use-custom-domains" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/coredns-custom#use-custom-domains</a></p> <p>I've also found previous answers that used a similar setup:</p> <p><a href="https://stackoverflow.com/questions/55612141/resolve-custom-dns-in-kubernetes-cluster-aks">Resolve custom dns in kubernetes cluster (AKS)</a></p> <p>but that did not work either. The difference between the two is the CoreDNS plugin used to forward the resolving traffic towards a custom resolver.</p> <p>I've tried both the proxy and forward plugins with the same setup, and both end in the same error.</p> <p><strong>Proxy plugin:</strong></p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  test.server: |
    mydomain.com:53 {
        log
        errors
        proxy . [MY DNS SERVER'S IP]
    }
</code></pre> <p><strong>Forward plugin:</strong></p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  test.server: |
    mydomain.com:53 {
        log
        errors
        forward . [MY DNS SERVER'S IP]
    }
</code></pre> <p><strong>Reproduce:</strong></p> <p>1 VNET</p> <p>2 subnets (1 for AKS, 1 for the DNS VM)</p> <p>Add a name to the DNS VM, and use a configmap to proxy traffic to the custom DNS instead of the node resolvers/VNET DNS.</p> <p><strong>Error:</strong></p> <p>After applying either of the configmaps above, the CoreDNS pods log this error:</p> <blockquote> <p>2019-11-11T18:41:46.224Z [INFO] 172.28.18.104:47434 - 45605 "A IN mydomain.com. udp 55 false 512" REFUSED qr,rd 55 0.001407305s</p> </blockquote>
<p>I was just missing a few more steps of due diligence. After checking the logs on the DNS VM, I found that the requests were making it to the host, but the host was refusing them. The named.conf.options whitelisted only a subset of address spaces. After updating those address spaces in named.conf to match the new cloud network we recently moved to, the requests started resolving.</p> <p>I ended up sticking with the forward plugin, as the MS docs outlined.</p>
<p>When trying to install JupyterHub on Kubernetes (EKS) I am getting the below error in the Hub pod. This is the output of describe pod. There was a similar issue reported and I tried the solution, but it didn't work.</p> <pre><code>Warning  FailedScheduling  52s (x2 over 52s)  default-scheduler  0/3 nodes are available: 1 Insufficient cpu, 2 node(s) had no available volume zone.
</code></pre> <h2>This is my pvc.yaml</h2> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-east-1a
    - us-east-1b
    - us-east-1c
</code></pre> <pre><code># Source: jupyterhub/templates/hub/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hub-db-dir
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard
</code></pre> <p>Please let me know if I am missing something here.</p>
<p>According to the AWS documentation, an EBS volume and the instance to which it attaches must be in the same Availability Zone. (<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html" rel="nofollow noreferrer">Source</a>)</p> <p>In that case, the solution is to use only one AZ.</p> <blockquote> <p>Kubernetes itself supports many other storage backends that could be used zone independently, but of course with different properties (like performance, pricing, cloud provider support, ...). For example there is <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs" rel="nofollow noreferrer">AWS EFS</a> that can be used in any AZ within an AWS region but with its own tradeoffs (e.g. <a href="https://github.com/kubernetes-incubator/external-storage/issues/1030" rel="nofollow noreferrer">kubernetes-incubator/external-storage#1030</a>).</p> </blockquote> <p>This is a known issue, reported <a href="https://github.com/kubernetes/kops/issues/6267" rel="nofollow noreferrer">here</a>.</p>
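<p>If you want to keep multiple AZs, a commonly used alternative (supported since roughly Kubernetes 1.12; this is a sketch, not the exact setup from the question) is to delay volume binding until a pod is scheduled, so the EBS volume is created in whatever zone the pod's node lives in:</p>

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
# Wait until a pod using the PVC is scheduled before provisioning the volume,
# so the EBS volume ends up in the same AZ as the chosen node.
volumeBindingMode: WaitForFirstConsumer
```

<p>With immediate binding (the default), the volume may be provisioned in a zone where no node has spare capacity, which matches the "no available volume zone" scheduling error above.</p>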
<p>I have created a VPC-native cluster on GKE, with master authorized networks disabled. I think I did everything correctly but I still can't access the app externally.</p> <p>Below is my service manifest.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
  - name: '3000'
    port: 80
    targetPort: 3000
    protocol: TCP
    nodePort: 30382
  selector:
    io.kompose.service: app
  type: NodePort
</code></pre> <p>The app's container port is <code>3000</code> and I checked from the logs that it is working. I added a firewall rule to open the <code>30382</code> port in my VPC network too. I still can't access the node on the specified nodePort. Is there anything I am missing?</p> <hr /> <p><strong>kubectl get ep</strong>:</p> <pre class="lang-sh prettyprint-override"><code>NAME         ENDPOINTS          AGE
app          10.20.0.10:3000    6h17m
kubernetes   34.69.50.167:443   29h
</code></pre> <p><strong>kubectl get svc</strong>:</p> <pre class="lang-sh prettyprint-override"><code>NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
app          NodePort    10.24.6.14   &lt;none&gt;        80:30382/TCP   6h25m
kubernetes   ClusterIP   10.24.0.1    &lt;none&gt;        443/TCP        29h
</code></pre>
<p>In Kubernetes, a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> is used to communicate with pods.</p> <p>To expose pods outside the Kubernetes cluster, you need a k8s service of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> type.</p> <p>The <code>NodePort</code> setting applies to Kubernetes services. By default Kubernetes services are accessible at the ClusterIP, which is an <strong>internal</strong> IP address reachable only from inside the Kubernetes cluster. The ClusterIP enables the applications running within the pods to access the service. To make the service accessible from outside the cluster, a user can create a service of type NodePort.</p> <p>Please note that you need an <strong>external</strong> IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.</p>
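<p>For example, on GKE the firewall rule mentioned above is a VPC rule for the node port; it could be created roughly like this (the rule name and network are assumptions about your setup, not values from the question), after which the app should answer on any node's external IP:</p>

```
gcloud compute firewall-rules create allow-nodeport-30382 \
    --network default --allow tcp:30382

curl http://NODE_EXTERNAL_IP:30382
```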
<p>When working locally with minikube, I noticed that each time I rebuild my app and roll it out with:</p> <pre><code>kubectl rollout restart deployment/api-local -n influx
</code></pre> <p>the container takes 30 seconds to terminate. I have no problem with this for production, but in local dev I end up losing a lot of time waiting, as I may rebuild my app 100 times a day (= losing 50 minutes a day).</p> <p>Is there some trick to shorten this termination time? I understand this would not be part of k8s best practices, but it would make sense to me.</p>
<p>Set the <code>terminationGracePeriodSeconds</code> in the deployment/pod configuration. Per the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#pod-v1-core" rel="nofollow noreferrer">reference docs</a>, it is:</p> <blockquote> <p>...Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.</p> </blockquote> <p>Example:</p> <pre><code>spec:
  replicas: 1
  selector:
    ...
  template:
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      ...
</code></pre> <p>Also, from <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>The kubectl delete command supports the --grace-period= option which allows a user to override the default and specify their own value. The value 0 force deletes the Pod. You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions.</p> </blockquote> <p>Example (specify the namespace, if needed):</p> <pre><code>kubectl delete pod &lt;pod-name&gt; --now
</code></pre> <p>...or</p> <pre><code>kubectl delete pod &lt;pod-name&gt; --grace-period=0 --force
</code></pre>