<p>I'm attempting to add some <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/" rel="nofollow noreferrer">recommended labels</a> to several k8s resources, and I can't see a good way to add labels for things that would change frequently, in this case &quot;app.kubernetes.io/instance&quot; and &quot;app.kubernetes.io/version&quot;. Instance seems like a label that should change every time a resource is deployed, and version seems like it should change when a new version is released, by git release or similar. I know that I could write a script to generate these values and interpolate them, but that's a lot of overhead for what seems like a common task. I'm stuck using Kustomize, so I can't just use Helm and have whatever variables I want. Is there a more straightforward way to apply labels like these?</p>
<p>Kustomize's <code>commonLabels</code> transformer is the usual way to apply these labels, sometimes packaged as a component. Since <code>commonLabels</code> values are static, frequently changing values such as the instance or version are typically written into the kustomization by the deploy pipeline (for example with <code>kustomize edit set label</code>) just before <code>kustomize build</code>, rather than interpolated at build time. The right approach really depends on your overall layout.</p>
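<p>A minimal sketch (file names and label values are illustrative): the stable recommended labels live in <code>commonLabels</code>, and the fast-moving ones are placeholders that the pipeline overwrites at deploy time:</p>

```yaml
# kustomization.yaml -- static labels declared once; instance/version are
# placeholders the CI pipeline overwrites before building
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
commonLabels:
  app.kubernetes.io/name: my-app
  app.kubernetes.io/instance: placeholder
  app.kubernetes.io/version: placeholder
```

<p>In CI this would be followed by something like <code>kustomize edit set label app.kubernetes.io/version:$(git describe --tags)</code> and then <code>kustomize build . | kubectl apply -f -</code>.</p>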
<p>I just started using Kustomize. I have the following YAML files for it:</p> <pre><code>ls -l ./kustomize/base/
816 Apr 18 21:25 deployment.yaml
110 Apr 18 21:31 kustomization.yaml
310 Apr 18 21:25 service.yaml
</code></pre> <p>where deployment.yaml and service.yaml are generated with jib, and they run fine. The content of kustomization.yaml is the following:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
</code></pre> <p>And in another directory:</p> <pre><code>ls -l ./kustomize/qa
133 Apr 18 21:33 kustomization.yaml
 95 Apr 18 21:37 update-replicas.yaml
</code></pre> <p>where kustomization.yaml is</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patchesStrategicMerge:
- update-replicas.yaml
</code></pre> <p>and update-replicas.yaml is</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
</code></pre> <p>After running &quot;kustomize build ./kustomize/base&quot; successfully, I run</p> <pre><code>~/kustomize build ./kustomize/qa
Error: no matches for OriginalId ~G_~V_Deployment|~X|my-app; no matches for CurrentId ~G_~V_Deployment|~X|my-app; failed to find unique target for patch ~G_~V_Deployment|my-app
</code></pre> <p>I have looked at the related files and don't see any typo in the application name.</p> <p>And here is the deployment.yaml file:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
    spec:
      containers:
      - image: docker.io/[my Docker ID]/my-app
        name: my-app
        resources: {}
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
        lifecycle:
          preStop:
            exec:
              command: [&quot;sh&quot;, &quot;-c&quot;, &quot;sleep 10&quot;]
status: {}
</code></pre> <p>Again, the above file was generated with jib, with some modifications, and it runs on Kubernetes directly.</p> <p>How can I resolve this problem?</p>
<p>I hit the same issue and fixed it. It is related to the installed Kustomize version; check it with <code>kustomize version</code>. From Kustomize v3.0.x onwards, the namespace must be mentioned in patches too. After adding the namespace to the patch YAML files, the issue was resolved.</p> <p>In your example, add a namespace under <code>metadata</code> in the update-replicas.yaml patch file.</p> <p>For more details on Kustomize version-related issues (like &quot;...failed to find unique target for patch...&quot;), see <a href="https://github.com/kubernetes-sigs/kustomize/issues/1351" rel="noreferrer">https://github.com/kubernetes-sigs/kustomize/issues/1351</a></p>
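<p>For illustration, assuming the base Deployment lives in a namespace called <code>my-namespace</code> (the actual name must match your resources), the patch file would become:</p>

```yaml
# update-replicas.yaml -- the namespace is now part of the patch target id
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 2
```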
<p>I encountered the following error when I added the <strong>spring-cloud-starter-kubernetes-config</strong> dependency to my pom.xml:</p> <pre><code>io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
Caused by: java.security.cert.CertificateException: Could not parse certificate: java.io.IOException: Empty input
Caused by: java.io.IOException: Empty input
</code></pre> <p>To disable k8s, I added the following param in bootstrap.yml:</p> <pre><code>spring:
  cloud:
    kubernetes:
      enabled: false
</code></pre> <p>But even after that, nothing changed and the error remained.</p> <p>Where else should I look? What parameter should I add so that, with this dependency in pom.xml, Kubernetes is disabled when running tests?</p>
<p>That problem can be caused by the installed <code>kubectl</code> configuration: the Kubernetes client tries to parse the (empty or invalid) certificate data in your local kubeconfig. The easiest way to avoid this problem is to rename <code>~/.kube</code> (the directory with the configs) to some other name, such as <code>~/.kube-hide</code>:</p> <pre><code>mv ~/.kube ~/.kube-hide
</code></pre> <p>And when you need to use <code>kubectl</code> again, rename it back.</p>
<p>How can I make <code>Skaffold</code> forward privileged/protected/special ports which have numbers below <code>1024</code>? In my <code>skaffold.yaml</code> I added:</p> <pre class="lang-yaml prettyprint-override"><code>portForward:
- resourceType: service
  resourceName: foo
  port: 80
  localPort: 80
</code></pre> <p>It works fine for all unprotected ports, but in the case of port <code>80</code>, <code>Skaffold</code> automatically picks another unprotected port instead of <code>80</code>.</p> <p>According to the documentation, <code>Skaffold</code> runs <code>kubectl port-forward</code> on each of the user-defined ports, so I granted the <code>kubectl</code> binary the capability to open privileged ports with the command <code>sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/kubectl</code>.</p> <p>Everything works fine when directly running <code>kubectl port-forward services/foo 80:80</code>, but when I run <code>skaffold dev --port-forward</code> it still picks another unprotected port.</p> <p>I have been using <code>Skaffold v1.28.1</code> with <code>Minikube v1.22.0</code> on <code>Ubuntu 20.04</code>.</p>
<p>This should work. We changed Skaffold's behaviour to prevent it from allocating system ports (≤ 1024), but user-defined port-forwards with explicit <code>localPort</code>s will still be honoured.</p> <p>You didn't say which ports you were seeing being allocated, but I suspect they were ports 4503–4533, in which case you're hitting a bug (<a href="https://github.com/GoogleContainerTools/skaffold/issues/6312" rel="nofollow noreferrer">#6312</a>). This bug is now fixed and will be in the next release. You can also use the &quot;bleeding-edge&quot; build, which is built from HEAD: the <a href="https://skaffold.dev/docs/install/" rel="nofollow noreferrer">installation instructions</a> have details on where to fetch these pre-built binaries.</p>
<p>How is it possible to use more than one domain in the same cluster?</p> <p>At the moment I'm running ONE cluster with <strong>one domain</strong> pointing to a hardware <code>loadbalancer</code>, and <code>traefik</code> as the <code>ingress-controller</code>.</p> <p>Now I want to add a second domain pointing to different workloads/services.</p> <p>Do I need</p> <ol> <li>a second <code>ingress-controller</code> with a second <code>LoadBalancer</code> (and to point the second domain to that second LB)?</li> <li>to point the second domain to the same first <code>LoadBalancer</code> to use only one <code>ingress-controller</code>?</li> </ol> <p>I am asking because I have trouble when pointing the second domain to the second <code>Loadbalancer</code> and pointing that one to the existing <code>ingress-controller</code> (nothing happens). But when I point my <strong>second domain</strong> to the first Loadbalancer, it seems to work as expected.</p> <p><em>(My guess is: solution &quot;2&quot;?)</em></p> <p>(I want to keep one ingress-controller, even though I thought I'd need two loadbalancers.)</p> <p>Does this have to do with the occupied ports 443 and 80?</p> <p>Thank you</p>
<blockquote> <p>a second ingress-controller with a second LoadBalancer (and pointing the second domain to that second LB)?</p> </blockquote> <p><strong>No</strong>, there is no requirement for a second LoadBalancer. You can use a single LB backed by the ingress controller and map multiple domains to it.</p> <p><strong>to point the second domain to the same first LoadBalancer to use only one ingress-controller?</strong></p> <p>Yes, you can use a single ingress controller. In DNS, add an <strong>A</strong> record (or <strong>CNAME</strong> record) for both domains.</p> <p>From <strong>DNS</strong>, all traffic will be forwarded to the <strong>LB</strong>, which is backed by the <strong>ingress controller</strong>.</p> <p>If you are using the <strong>Nginx ingress controller</strong>, different domains or hosts go into the config like this:</p> <pre><code>spec:
  rules:
  - host: foobar.com
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: 80
  - host: api.foobar.com
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: 80
</code></pre> <p>For <strong>Traefik</strong> it is the <strong>same</strong>; alternatively, you can create two separate Ingresses instead of one.</p> <p>ingress-1.yaml</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.test.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
</code></pre> <p>ingress-2.yaml</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-ingress-two
spec:
  rules:
  - host: dashboard.domain.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: backend
          servicePort: 80
</code></pre> <p>You can implement further path-based routing on the Ingress.</p> <p>So your overall architecture will be something like this: all traffic comes in at a single point, the <strong>Traefik</strong> controller, which is exposed as a <strong>LoadBalancer</strong> service. All your other <strong>microservices</strong> run as <strong>ClusterIP</strong> services, as we don't want direct access from the internet.</p> <p>Read more at: <a href="https://medium.com/kubernetes-tutorials/deploying-traefik-as-ingress-controller-for-your-kubernetes-cluster-b03a0672ae0c" rel="nofollow noreferrer">https://medium.com/kubernetes-tutorials/deploying-traefik-as-ingress-controller-for-your-kubernetes-cluster-b03a0672ae0c</a></p>
<p>I have a Kubernetes project managed by Kustomize. This project deploys two deployments in the same namespace.</p> <p>Basically, I have the following directory structure:</p> <pre><code>kustomize -&gt; app1 -&gt; kustomization.yaml
kustomize -&gt; app1 -&gt; namespace.yaml
kustomize -&gt; app1 -&gt; app1.yaml
kustomize -&gt; app2 -&gt; kustomization.yaml
kustomize -&gt; app2 -&gt; namespace.yaml
kustomize -&gt; app2 -&gt; app2.yaml
</code></pre> <p>The <code>namespace.yaml</code> files create in both cases the same namespace, so that the first application deployed creates the namespace and the second reuses it. Obviously, the problem appears when I try to remove only one of these applications:</p> <pre><code>kubectl delete -k kustomize/app1
</code></pre> <p>removes both applications, because the namespace is removed and app2 with it. An easy solution to this problem is to move <code>namespace.yaml</code> outside the folders and apply it standalone. However, this approach requires the user to remember to run:</p> <pre><code>kubectl apply -f namespace.yaml
</code></pre> <p>before:</p> <pre><code>kubectl apply -k kustomize/app1
kubectl apply -k kustomize/app2
</code></pre> <p>I know another possible solution is via a script. My question is whether there exists a better way to manage namespace removal with Kustomize, so that the namespace is removed only if it is empty.</p>
<p>You can use this directory structure:</p> <pre><code>kustomize -&gt; ns   -&gt; namespace.yaml
kustomize -&gt; app1 -&gt; kustomization.yaml
kustomize -&gt; app1 -&gt; app1.yaml
kustomize -&gt; app2 -&gt; kustomization.yaml
kustomize -&gt; app2 -&gt; app2.yaml
</code></pre> <p>You can also add a <code>kustomization.yaml</code> at the root, so that a single command applies everything:</p> <pre><code>kubectl apply -k kustomize/
</code></pre> <p>That will create the namespace and both apps.</p> <p>And you can still delete only one app if you want:</p> <pre><code>kubectl delete -k kustomize/app1
</code></pre> <p>And since you don't have a <code>namespace.yaml</code> in that directory, this does not delete the namespace.</p>
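<p>A sketch of the root <code>kustomize/kustomization.yaml</code> for this layout (paths assumed from the directory tree above):</p>

```yaml
# kustomize/kustomization.yaml -- ties the namespace and both apps together
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ns/namespace.yaml
- app1
- app2
```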
<p>I have these volume mounts right now defined in my deployment:</p> <pre><code>volumeMounts:
- name: website-storage
  mountPath: /app/upload
  readOnly: false
  subPath: foo/upload
- name: website-storage
  mountPath: /app/html
  readOnly: true
  subPath: foo/html/html
</code></pre> <p>Now I want to mount another path from my PVC into <code>/app/html/website-content</code>, and this is what I attempted:</p> <pre><code>volumeMounts:
- name: website-storage
  mountPath: /app/upload
  readOnly: false
  subPath: foo/upload
- name: website-storage
  mountPath: /app/html
  readOnly: true
  subPath: foo/html/html
- name: website-storage
  mountPath: /app/html/website-content
  readOnly: true
  subPath: foo/website-content
</code></pre> <p>This does not work and gives an error during mounting. Is it possible to do this? Do I have to explicitly create the <code>website-content</code> folder prior to mounting it? Thanks in advance!</p>
<p>The cause of the issue is that during pod initialization there is an attempt to create the directory <code>website-content</code> in the <code>/app/html</code> location, which fails because <code>/app/html</code> is mounted read-only.</p> <p>You cannot create a folder on a read-only filesystem, which means you can't mount a volume there while the folder doesn't exist; but if the folder was already created, you can mount the volume.</p> <p>So all you need is to create the directory <code>website-content</code> in the <code>foo/html/html</code> location on the volume before it is attached to the container. Then, as that path is mounted at <code>/app/html</code>, the directory <code>/app/html/website-content</code> will already exist.</p> <p>For example, you can use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> for that. Add this code to your deployment file:</p> <pre><code>initContainers:
- name: init-container
  image: busybox
  volumeMounts:
  - name: website-storage
    mountPath: /my-storage
    readOnly: false
  command: ['sh', '-c', 'mkdir -p /my-storage/foo/html/html/website-content']
</code></pre> <p>When the pod is running, you can check the mount points on the pod using <code>kubectl describe pod {pod-name}</code>:</p> <blockquote> <pre><code>Mounts:
  /app/html from website-storage (ro,path=&quot;foo/html/html&quot;)
  /app/html/website-content from website-storage (ro,path=&quot;foo/website-content&quot;)
  /app/upload from website-storage (rw,path=&quot;foo/upload&quot;)
</code></pre> </blockquote>
<ol> <li>node.js express server bound to port 8080</li> </ol> <pre><code>server.listen(8080, () =&gt; {
  logger.log({ level: 'info', message: 'Listening on port ' + port })
})
</code></pre> <ol start="2"> <li>Docker image with node.js code + npm modules with port 8080 exposed</li> </ol> <pre><code>FROM node:10-alpine
...
# Expose port
EXPOSE 8080
</code></pre> <ol start="3"> <li>Kubernetes deployment of Docker image with containerPort 8080 configured</li> </ol> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  selector:
    matchLabels:
      app: deployment
  replicas: 2
  template:
    metadata:
      labels:
        app: deployment
    spec:
      containers:
      - name: job-id-20
        image: redacted/redacted
        command: [&quot;node&quot;, &quot;backend/services.js&quot;]
        ports:
        - name: http-port
          containerPort: 8080
      imagePullSecrets:
      - name: docker-hub-credentials
      dnsConfig:
        options:
        - name: ndots
          value: &quot;0&quot;
</code></pre> <ol start="4"> <li>Kubernetes service with matching selector to app with targetPort of 8080 and type LoadBalancer</li> </ol> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  ports:
  - protocol: TCP
    targetPort: 8080
    port: 8080
  selector:
    app: deployment
  type: LoadBalancer
</code></pre> <ol start="5"> <li>Verify load balancer has external IP (I scrubbed it)</li> </ol> <pre><code>$ kubectl --kubeconfig=&quot;k8s-1-13-4-do-0-nyc1-1552761372568-kubeconfig.yaml&quot; get service/service
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
service   LoadBalancer   10.245.239.60   1x4.2x9.1x8.x2   8080:30626/TCP   113s
</code></pre> <ol start="6"> <li>curl fails with empty response</li> </ol> <pre><code>$ curl --verbose http://1x4.2x9.1x8.x2:8080/
*   Trying 1x4.2x9.1x8.x2...
* TCP_NODELAY set
* Connected to 1x4.2x9.1x8.x2 (1x4.2x9.1x8.x2) port 8080 (#0)
&gt; GET / HTTP/1.1
&gt; Host: 1x4.2x9.1x8.x2:8080
&gt; User-Agent: curl/7.54.0
&gt; Accept: */*
&gt;
* Empty reply from server
* Connection #0 to host 1x4.2x9.1x8.x2 left intact
curl: (52) Empty reply from server
</code></pre> <p>I'd expect the traffic to route through the service to one of the pods/replicas in the deployment. What am I doing wrong?</p>
<p>There are some potential sources of error here.</p> <p>The first potential problem is that your Docker image does not work as expected. You can try this: use <code>nginx:latest</code> as your image and see if that works. If it does, the Kubernetes parts are working correctly and you can investigate your Docker image further. As far as I can see, your code snippet does not contain any code that outputs data. You can experiment with your image by using the <code>docker run</code> command, as indicated in the comments above.</p> <p>If it still does not work with the Nginx image, then you have to investigate the Kubernetes side further. Although LoadBalancer is a standard Kubernetes service type, its implementation differs between cloud providers and on-premise installations. Consult your Kubernetes or cloud provider's documentation to find out whether the LoadBalancer is configured correctly.</p> <p>To see if the service can reach the pods, you can use the command <code>kubectl get endpoints</code>. For further debugging, you can use <code>kubectl port-forward</code> to create a tunnel to either one of the pods or to the service and try the curl command over the established tunnel. You can also use <code>kubectl logs</code> to see any log output of your pods.</p>
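<p>The debugging steps above could look like this (resource names taken from the question; the output depends on your cluster):</p>

```shell
# 1. Does the Service actually select the pods? ENDPOINTS must not be empty.
kubectl get endpoints service

# 2. Bypass the LoadBalancer entirely: tunnel to the Service, then curl it.
kubectl port-forward service/service 8080:8080 &
curl --verbose http://localhost:8080/

# 3. Check the application output for startup errors.
kubectl logs -l app=deployment
```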
<p>I have a question regarding helmfile: how do I set annotations on a deployment using helmfile? I've tried using the <code>set</code> instruction, but it's not working. I also didn't find any information in the docs.</p>
<p>Check this very similar question: <a href="https://stackoverflow.com/questions/49856305/how-do-i-assign-value-to-variable-in-helmfile">How do I assign value to variable in helmfile?</a></p> <p>You can do it <a href="https://github.com/cloudposse/geodesic/blob/0.12.0/rootfs/conf/kops/helmfile.yaml#L63-L137" rel="nofollow noreferrer">like this</a> (note that the dots in the annotation key must be escaped):</p> <pre><code>set:
- name: &quot;ingress.annotations.kubernetes\\.io/ingress\\.class&quot;
  value: &quot;nginx&quot;
...
</code></pre> <p>or <a href="https://github.com/cloudposse/helmfiles/blob/0.2.4/helmfile.d/0300.chartmuseum.yaml#L65-L83" rel="nofollow noreferrer">like that</a>:</p> <pre><code>annotations:
  kubernetes.io/ingress.class: &quot;nginx&quot;
  kubernetes.io/tls-acme: &quot;true&quot;
</code></pre>
<p>I am a beginner in Kubernetes. I am trying to run an Elasticsearch cluster in it, and for that I am following <a href="https://phoenixnap.com/kb/elasticsearch-helm-chart" rel="nofollow noreferrer">this tutorial</a>. I followed all the commands as given, and at the end, when I do</p> <pre><code>kubectl get pods --namespace=default -l app=elasticsearch-master -w
</code></pre> <p>I am getting</p> <pre><code>NAME                     READY   STATUS     RESTARTS   AGE
elasticsearch-master-0   0/1     Init:0/1   0          55s
elasticsearch-master-1   0/1     Init:0/1   0          55s
elasticsearch-master-2   0/1     Init:0/1   0          55s
</code></pre> <p>They are not turning into the ready state, and the command</p> <pre><code>helm test elasticsearch
</code></pre> <p>gets stuck and does not respond.</p> <p>Here are the logs:</p> <pre><code>kubectl logs -f elasticsearch-master-0
Error from server (BadRequest): container &quot;elasticsearch&quot; in pod &quot;elasticsearch-master-0&quot; is waiting to start: PodInitializing
kubectl logs -f elasticsearch-master-1
Error from server (BadRequest): container &quot;elasticsearch&quot; in pod &quot;elasticsearch-master-1&quot; is waiting to start: PodInitializing
kubectl logs -f elasticsearch-master-2
Error from server (BadRequest): container &quot;elasticsearch&quot; in pod &quot;elasticsearch-master-2&quot; is waiting to start: PodInitializing
</code></pre> <p>and <code>kubectl cluster-info</code> returns</p> <pre><code>{
  &quot;kind&quot;: &quot;Status&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;metadata&quot;: {},
  &quot;status&quot;: &quot;Failure&quot;,
  &quot;message&quot;: &quot;services \&quot;kube-dns:dns\&quot; is forbidden: User \&quot;system:anonymous\&quot; cannot get resource \&quot;services/proxy\&quot; in API group \&quot;\&quot; in the namespace \&quot;kube-system\&quot;&quot;,
  &quot;reason&quot;: &quot;Forbidden&quot;,
  &quot;details&quot;: {
    &quot;name&quot;: &quot;kube-dns:dns&quot;,
    &quot;kind&quot;: &quot;services&quot;
  },
  &quot;code&quot;: 403
}
</code></pre> <p>That's all I know right now, as I am a beginner. Please help: why are my Elasticsearch nodes not getting ready?</p>
<p>There is an <a href="https://github.com/elastic/helm-charts/issues/775" rel="nofollow noreferrer">open issue around this Helm chart on GitHub</a>.</p> <p>Check which storage class you have for PVC creation. However, there could still be multiple reasons for this issue, such as the <strong>PVC</strong> not getting created, among others.</p> <p>You can check whether a single-node <strong>Elasticsearch</strong> works with dynamic <strong>PVC</strong> creation using:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: elasticsearch
    component: elasticsearch
    release: elasticsearch
  name: elasticsearch
spec:
  podManagementPolicy: Parallel
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: elasticsearch
      component: elasticsearch
      release: elasticsearch
  serviceName: elasticsearch
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: elasticsearch
        component: elasticsearch
        release: elasticsearch
    spec:
      containers:
      - env:
        - name: cluster.name
          value: &lt;SET THIS&gt;
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
        - name: bootstrap.memory_lock
          value: &quot;false&quot;
        image: elasticsearch:6.5.0
        imagePullPolicy: IfNotPresent
        name: elasticsearch
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        resources:
          limits:
            cpu: 250m
            memory: 1Gi
          requests:
            cpu: 150m
            memory: 512Mi
        securityContext:
          privileged: true
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: elasticsearch-data
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - chown -R 1000:1000 /usr/share/elasticsearch/data
        - sysctl -w vm.max_map_count=262144
        - chmod 777 /usr/share/elasticsearch/data
        - chmod 777 /usr/share/elasticsearch/data/node
        - chmod g+rwx /usr/share/elasticsearch/data
        - chgrp 1000 /usr/share/elasticsearch/data
        image: busybox:1.29.2
        imagePullPolicy: IfNotPresent
        name: set-dir-owner
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/elasticsearch/data
          name: elasticsearch-data
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 10
  updateStrategy:
    type: OnDelete
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: elasticsearch-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
</code></pre>
<p>I'm using a Microk8s setup with the following configuration -</p> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  labels:
    app: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      containers:
      - name: jenkins
        image: jenkins/jenkins:2.235.1-lts-alpine
        imagePullPolicy: IfNotPresent
        env:
        - name: JAVA_OPTS
          value: -Xmx2048m -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 50000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins
      restartPolicy: Always
      securityContext:
        runAsUser: 0
      terminationGracePeriodSeconds: 30
      volumes:
      - name: jenkins
        persistentVolumeClaim:
          claimName: jenkins-claim
</code></pre> <p><strong>pv.yaml</strong></p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: &quot;/mnt/data&quot;
</code></pre> <p><strong>pvc.yaml</strong></p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
</code></pre> <p><strong>rbac.yaml</strong></p> <pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;]
  verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;,&quot;watch&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods/exec&quot;]
  verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;,&quot;watch&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods/log&quot;]
  verbs: [&quot;get&quot;,&quot;list&quot;,&quot;watch&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;secrets&quot;]
  verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;configmaps&quot;]
  verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;]
- apiGroups: [&quot;apps&quot;]
  resources: [&quot;deployments&quot;]
  verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;services&quot;]
  verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;]
- apiGroups: [&quot;&quot;]
  resources: [&quot;ingresses&quot;]
  verbs: [&quot;create&quot;,&quot;delete&quot;,&quot;get&quot;,&quot;list&quot;,&quot;patch&quot;,&quot;update&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
</code></pre> <p><strong>service.yaml</strong></p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins
  labels:
    app: jenkins
spec:
  type: NodePort
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
    protocol: TCP
  - name: slave
    port: 50000
    protocol: TCP
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: jenkins
</code></pre> <p>I can access the internet from my node (host), but not from my pods. My node is an Ubuntu 18.04.2 LTS machine running on vSphere, within a VPN.</p> <p>The official documentation (<a href="https://microk8s.io/docs/troubleshooting" rel="nofollow noreferrer">https://microk8s.io/docs/troubleshooting</a>) says to either</p> <pre><code>sudo iptables -P FORWARD ACCEPT
sudo apt-get install iptables-persistent
</code></pre> <p>or</p> <pre><code>sudo ufw default allow routed
</code></pre> <p>Neither fixes the problem for me.</p> <p>I also tried the suggestions in <a href="https://github.com/ubuntu/microk8s/issues/1484" rel="nofollow noreferrer">https://github.com/ubuntu/microk8s/issues/1484</a> without success.</p>
<p>In order to solve this problem on MicroK8s, enable the DNS addon <strong>before</strong> deploying, with the command <code>microk8s enable dns</code>.</p>
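<p>For reference, the sequence would be (commands illustrative; the CoreDNS label below is what MicroK8s uses, but verify on your install):</p>

```shell
microk8s enable dns
# redeploy your workload, then confirm CoreDNS is running:
microk8s kubectl get pods -n kube-system -l k8s-app=kube-dns
```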
<p>The <strong>metrics-server-v0.3.6</strong> deployment that is installed into GKE clusters by default uses an outdated version of addon-resizer (<strong>addon-resizer:1.8.11-gke.0</strong>) that causes CPU utilization and throttling issues. See <a href="https://www.gitmemory.com/issue/kubernetes/autoscaler/4141/878881986" rel="nofollow noreferrer">here</a>.</p> <p>The cluster is running the latest version of the regular release channel (<strong>1.20.8-gke.900</strong>). When I edit the metrics-server deployment and change the version of addon-resizer to <strong>addon-resizer-1.8.14</strong>, the deployment reverts back to the old version that is managed by Google.</p> <p>Is there a way to permanently change the image that the metrics-server-nanny container uses, or a way to update the entire metrics-server deployment so that the running containers are on an updated version?</p>
<p>You cannot update those deployments. Any updates you make will be reverted to the original deployment spec.</p> <p>Looks like the &quot;fix&quot; will be released with 1.22 and initially backported to 1.21 as well. It will likely make it back to 1.19+ eventually.</p>
<p>I am using client-go to continuously pull log streams from Kubernetes pods. Most of the time everything works as expected, until the job has been running for a couple of hours.</p> <p>The code is like below:</p> <pre><code>podLogOpts := corev1.PodLogOptions{
    Follow: true,
}
kubeJob, err := l.k8sclient.GetKubeJob(l.job.GetNamespace(), l.job.GetJobId())
...
podName := l.k8sclient.GetKubeJobPodNameByJobId(l.job.GetNamespace(), l.job.GetJobId())
req := l.k8sclient.GetKubeClient().CoreV1().Pods(l.job.GetNamespace()).GetLogs(podName, &amp;podLogOpts)
podLogStream, err := req.Stream(context.TODO())
...
for {
    copied, err := podLogStream.Read(buf)
    if err == io.EOF {
        // here is the place where the error happens:
        // usually after many hours the podLogStream returns EOF.
        // I checked the pod status: it is still running and keeps
        // printing data to stdout. Why would this happen???
        break
    }
    ...
}
</code></pre> <p>The podLogStream returns EOF about 3-4 hours later. But I checked the pod status and found the pod is still running and the service inside keeps printing data to stdout. So why would this happen? How do I fix it?</p> <p><strong>UPDATE</strong> I found that every 4 hours the pod stream API read returns EOF, so I have to make the goroutine sleep and retry a second later, recreating the podLogStream and reading logs from the new stream object. It works. But why does this happen?</p>
<p>When you contact the logs endpoint, the apiserver forwards your request to the kubelet hosting your pod. The kubelet server then starts streaming the content of the log file <a href="https://github.com/kubernetes/kubernetes/blob/985ac8ae5089d3312640b0fca795469fb8025f2a/pkg/kubelet/server/server.go#L666" rel="nofollow noreferrer">to the apiserver</a> and later to your client. Since it streams the logs from the file and <a href="https://github.com/kubernetes/kubernetes/blob/f0b7ad3ee06c5168fef5fa4f01fe445ece595f89/pkg/kubelet/kuberuntime/logs/logs.go#L280" rel="nofollow noreferrer">not from stdout directly</a>, it may happen that the log file is rotated <a href="https://github.com/kubernetes/kubernetes/blob/f0b7ad3ee06c5168fef5fa4f01fe445ece595f89/pkg/kubelet/logs/container_log_manager.go#L66-L73" rel="nofollow noreferrer">by the container log manager</a>, and as a consequence you receive EOF and need to reinitialize the stream.</p>
<p>I've been reading other questions about getting K8s environment variables to work in a Next.js app, but none with an accepted answer so far.</p> <p>My app works fine using .env.local, but the variable comes back undefined when deployed to K8s.</p> <p>This is my next.config.js:</p> <pre class="lang-js prettyprint-override"><code>module.exports = {
  env: {
    NEXT_PUBLIC_API_BASE_URL: process.env.NEXT_PUBLIC_API_BASE_URL,
  },
};
</code></pre> <p>K8s environment:</p> <p><a href="https://i.stack.imgur.com/vRpyn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vRpyn.png" alt="k8s environment" /></a></p> <p>Can anyone help me get that environment variable to work in my next.js app?</p> <p>Right now I use a simple trick: I add ARG and ENV to the Dockerfile, then inject the value when I build the docker image.</p> <p>Dockerfile:</p> <pre><code>ARG NEXT_PUBLIC_API_BASE_URL
ENV NEXT_PUBLIC_API_BASE_URL=${NEXT_PUBLIC_API_BASE_URL}
</code></pre>
<p>You should add the <code>ENV_VARS</code> in a <code>.env.local</code> file, in the form of a <code>configMap</code> (<a href="https://nextjs.org/docs/basic-features/environment-variables" rel="noreferrer">https://nextjs.org/docs/basic-features/environment-variables</a>).</p> <p>In Kubernetes you create a <code>configMap</code> like so:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: env-local
data:
  .env: |-
    NEXT_PUBLIC_API_URL=http://your.domain.com/api
    API_URL=http://another.endpoint.com/serverSide
</code></pre> <p>Then you mount that <code>configMap</code> as a FILE into your deployment; it is then available at <code>app/.env.local</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - image: your/image:latest
        imagePullPolicy: Always
        name: your-app
        volumeMounts:
        - mountPath: /app/.env.local
          name: env-local
          readOnly: true
          subPath: .env.local
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: .env
            path: .env.local
          name: env-local
        name: env-local
</code></pre> <p>What also worked - for me at least - for <strong>server side vars</strong> was simply adding them as regular env vars in my <code>deployment</code>: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container" rel="noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container</a></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: your-app
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: your-app-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: &quot;Hello from the environment&quot;
    - name: DEMO_FAREWELL
      value: &quot;Such a sweet sorrow&quot;
</code></pre> <p>For <strong>runtime</strong> configuration in <code>next.config.js</code>, you can use <code>serverRuntimeConfig</code> and <code>publicRuntimeConfig</code> instead of the build-time <code>env</code> block:</p> <pre class="lang-js prettyprint-override"><code>const withSvgr = require('next-svgr');

module.exports = {
  // Will only be available on the server side
  serverRuntimeConfig: {
    API_URL: process.env.API_URL,
  },
  // Will be available on both server and client
  publicRuntimeConfig: {
    NEXT_PUBLIC_API_URL: process.env.API_URL,
  },
};
</code></pre>
<p>I am using an EKS cluster with 4 nodes. There are multiple applications running in the kubernetes cluster (nearly 30-40 pods) with different cpu and memory requests.</p> <p>Now I wish to increase the memory of one particular pod. How do I choose the maximum memory I can assign to that pod in my cluster?</p> <p>My idea is to get the free memory available on the kubernetes nodes and, based on that, decide the maximum memory that I can assign to the pod.</p> <p>I am trying the <code>free</code> command in pods to check the memory available.</p> <p>How can I get the free memory available in my EKS cluster nodes?</p> <p>Note: There is no metrics server installed in my EKS cluster.</p>
<p>There may be namespace-specific limits (<code>LimitRange</code> / <code>ResourceQuota</code> objects), which could be lower than what's available at the node level. In that case, you will have to consider the namespace limits.</p> <p>If that's not the case, as a starting point you may run the command below, look at the &quot;Requests&quot; column, and choose a request value for your pod that is lower than the amount still available on your most utilized node.</p> <pre><code>kubectl get node --no-headers | while read node status; do echo '&gt;&gt;&gt;&gt; ['$node']'; kubectl describe node $node | grep Resource -A 3; done
&gt;&gt;&gt;&gt; [node-be-worker-1]
  Resource           Requests     Limits
  --------           --------     ------
  cpu                300m (7%)    100m (2%)
  memory             220Mi (2%)   220Mi (2%)
&gt;&gt;&gt;&gt; [node-be-worker-2]
  Resource           Requests     Limits
  --------           --------     ------
  cpu                200m (5%)    100m (2%)
  memory             150Mi (1%)   50Mi (0%)
&gt;&gt;&gt;&gt; [node-be-worker-3]
  Resource           Requests     Limits
  --------           --------     ------
  cpu                400m (10%)   2100m (52%)
  memory             420Mi (5%)   420Mi (5%)
&gt;&gt;&gt;&gt; [node-master-0]
  Resource           Requests     Limits
  --------           --------     ------
  cpu                650m (32%)   100m (5%)
  memory             50Mi (1%)    50Mi (1%)
</code></pre> <p>Explanation: the command lists all the nodes, loops over them to describe each node, and then filters the output for the line containing &quot;Resource&quot; plus the next 3 lines.</p> <p>You should be able to tweak the above command to work per namespace too.</p>
<p>I have the following local 2-node kubernetes cluster:</p> <pre><code>$ kubectl get nodes -o wide
NAME   STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
srl1   Ready    control-plane,master   2d18h   v1.21.2   xxx.xxx.12.58   &lt;none&gt;        Ubuntu 20.04.2 LTS   5.4.0-80-generic   docker://20.10.7
srl2   Ready    &lt;none&gt;                 2d18h   v1.21.3   xxx.xxx.80.72   &lt;none&gt;        Ubuntu 18.04.2 LTS   5.4.0-80-generic   docker://20.10.2
</code></pre> <p>I am trying to deploy an application using a cluster creation python script (<a href="https://github.com/hydro-project/cluster/blob/master/hydro/cluster/create_cluster.py" rel="nofollow noreferrer">https://github.com/hydro-project/cluster/blob/master/hydro/cluster/create_cluster.py</a>)</p> <p>When it tries to create a routing node using <code>apps_client.create_namespaced_daemon_set(namespace=util.NAMESPACE, body=yml)</code>, it is expected to create a single pod from the routing-ds.yaml file (given below) and assign it to the routing daemonset (<strong>kind</strong>). However, as you can see, <strong>it is creating two routing pods (one on each physical node) instead of just one</strong>.
(FYI: my master node can schedule pods.)</p> <pre><code>akazad@srl1:~/hydro-project/cluster$ kubectl get all -A -o wide
NAMESPACE        NAME                                           READY   STATUS    RESTARTS   AGE     IP               NODE   NOMINATED NODE   READINESS GATES
default          pod/management-pod                             1/1     Running   0          25m     192.168.190.77   srl2   &lt;none&gt;           &lt;none&gt;
default          pod/monitoring-pod                             1/1     Running   0          25m     192.168.120.71   srl1   &lt;none&gt;           &lt;none&gt;
default          pod/routing-nodes-9q7dr                        1/1     Running   0          24m     xxx.xxx.12.58    srl1   &lt;none&gt;           &lt;none&gt;
default          pod/routing-nodes-kfbnv                        1/1     Running   0          24m     xxx.xxx.80.72    srl2   &lt;none&gt;           &lt;none&gt;
kube-system      pod/calico-kube-controllers-7676785684-tpz7q   1/1     Running   0          2d19h   192.168.120.65   srl1   &lt;none&gt;           &lt;none&gt;
kube-system      pod/calico-node-lnxtb                          1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   &lt;none&gt;           &lt;none&gt;
kube-system      pod/calico-node-mdvpd                          1/1     Running   0          2d19h   xxx.xxx.80.72    srl2   &lt;none&gt;           &lt;none&gt;
kube-system      pod/coredns-558bd4d5db-vfghf                   1/1     Running   0          2d19h   192.168.120.66   srl1   &lt;none&gt;           &lt;none&gt;
kube-system      pod/coredns-558bd4d5db-x7jhj                   1/1     Running   0          2d19h   xxx.xxx.120.67   srl1   &lt;none&gt;           &lt;none&gt;
kube-system      pod/etcd-srl1                                  1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   &lt;none&gt;           &lt;none&gt;
kube-system      pod/kube-apiserver-srl1                        1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   &lt;none&gt;           &lt;none&gt;
kube-system      pod/kube-controller-manager-srl1               1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   &lt;none&gt;           &lt;none&gt;
kube-system      pod/kube-proxy-l8fds                           1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   &lt;none&gt;           &lt;none&gt;
kube-system      pod/kube-proxy-szrng                           1/1     Running   0          2d19h   xxx.xxx.80.72    srl2   &lt;none&gt;           &lt;none&gt;
kube-system      pod/kube-scheduler-srl1                        1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   &lt;none&gt;           &lt;none&gt;
metallb-system   pod/controller-6b78bff7d9-t7gjr                1/1     Running   0          2d19h   192.168.190.65   srl2   &lt;none&gt;           &lt;none&gt;
metallb-system   pod/speaker-qsqnc                              1/1     Running   0          2d19h   xxx.xxx.12.58    srl1   &lt;none&gt;           &lt;none&gt;
metallb-system   pod/speaker-s4pp8                              1/1     Running   0          2d19h   xxx.xxx.80.72    srl2   &lt;none&gt;           &lt;none&gt;

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    &lt;none&gt;        443/TCP                  2d19h   &lt;none&gt;
kube-system   service/kube-dns     ClusterIP   10.96.0.10   &lt;none&gt;        53/UDP,53/TCP,9153/TCP   2d19h   k8s-app=kube-dns

NAMESPACE        NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE     CONTAINERS          IMAGES                            SELECTOR
default          daemonset.apps/routing-nodes   2         2         2       2            2           &lt;none&gt;                   24m     routing-container   akazad1/srlanna:v2                role=routing
kube-system      daemonset.apps/calico-node     2         2         2       2            2           kubernetes.io/os=linux   2d19h   calico-node         calico/node:v3.14.2               k8s-app=calico-node
kube-system      daemonset.apps/kube-proxy      2         2         2       2            2           kubernetes.io/os=linux   2d19h   kube-proxy          k8s.gcr.io/kube-proxy:v1.21.3     k8s-app=kube-proxy
metallb-system   daemonset.apps/speaker         2         2         2       2            2           kubernetes.io/os=linux   2d19h   speaker             quay.io/metallb/speaker:v0.10.2   app=metallb,component=speaker
</code></pre> <p>However, when it directly creates a pod from the management-pod.yaml (given below), it creates one as expected.</p> <p><strong>Why is the daemonset creating two pods instead of one?</strong></p> <h2>Code segment where it is supposed to create a daemonset of type routing node</h2> <pre><code>for i in range(len(kinds)):
    kind = kinds[i]

    # Create should only be true when the DaemonSet is being created for the
    # first time -- i.e., when this is called from create_cluster. After that,
    # we can basically ignore this because the DaemonSet will take care of
    # adding pods to created nodes.
    if create:
        fname = 'yaml/ds/%s-ds.yml' % kind
        yml = util.load_yaml(fname, prefix)

        for container in yml['spec']['template']['spec']['containers']:
            env = container['env']
            util.replace_yaml_val(env, 'ROUTING_IPS', route_str)
            util.replace_yaml_val(env, 'ROUTE_ADDR', route_addr)
            util.replace_yaml_val(env, 'SCHED_IPS', sched_str)
            util.replace_yaml_val(env, 'FUNCTION_ADDR', function_addr)
            util.replace_yaml_val(env, 'MON_IPS', mon_str)
            util.replace_yaml_val(env, 'MGMT_IP', management_ip)
            util.replace_yaml_val(env, 'SEED_IP', seed_ip)

        apps_client.create_namespaced_daemon_set(namespace=util.NAMESPACE,
                                                 body=yml)

    # Wait until all pods of this kind are running
    res = []
    while len(res) != expected_counts[i]:
        res = util.get_pod_ips(client, 'role=' + kind, is_running=True)

    pods = client.list_namespaced_pod(namespace=util.NAMESPACE,
                                      label_selector='role=' + kind).items

    created_pods = get_current_pod_container_pairs(pods)
</code></pre> <p>I have removed the <strong>nodeSelector</strong> from all the yaml files as I am running it on a bare-metal cluster.</p> <h2>1 routing-ds.yaml</h2> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: routing-nodes
  labels:
    role: routing
spec:
  selector:
    matchLabels:
      role: routing
  template:
    metadata:
      labels:
        role: routing
    spec:
      #nodeSelector:
      #  role: routing
      hostNetwork: true
      containers:
      - name: routing-container
        image: akazad1/srlanna:v2
        env:
        - name: SERVER_TYPE
          value: r
        - name: MON_IPS
          value: MON_IPS_DUMMY
        - name: REPO_ORG
          value: hydro-project
        - name: REPO_BRANCH
          value: master
</code></pre> <h2>2 management-pod.yaml</h2> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: management-pod
  labels:
    role: management
spec:
  restartPolicy: Never
  containers:
  - name: management-container
    image: hydroproject/management
    env:
    #- name: AWS_ACCESS_KEY_ID
    #  value: ACCESS_KEY_ID_DUMMY
    #- name: AWS_SECRET_ACCESS_KEY
    #  value: SECRET_KEY_DUMMY
    #- name: KOPS_STATE_STORE
    #  value: KOPS_BUCKET_DUMMY
    - name: HYDRO_CLUSTER_NAME
      value: CLUSTER_NAME
    - name: REPO_ORG
      value: hydro-project
    - name: REPO_BRANCH
      value: master
    - name: ANNA_REPO_ORG
      value: hydro-project
    - name: ANNA_REPO_BRANCH
      value: master
  #nodeSelector:
  #  role: general
</code></pre>
<p>There may be a misunderstanding here: you have to use <code>kind: Deployment</code> if you want to manage the number of replicas (Pods: 1, 2, 3...n) in Kubernetes.</p> <p>A DaemonSet's behavior is to run one Pod on each available node in the cluster.</p> <p>Your cluster has <strong>two nodes</strong>, so the <strong>DaemonSet</strong> runs one Pod on each <strong>available node</strong>. If you add a <strong>node</strong>, the <strong>DaemonSet</strong> will automatically create a Pod on that node as well.</p> <p><strong>kind: Pod</strong></p> <p>will create a single Pod only, which is its default behavior.</p> <p>The following are some of the Kubernetes objects:</p> <pre><code>pods
ReplicationController (Manages Pods)
Deployment (Manages Pods)
StatefulSets
DaemonSets
</code></pre> <p>You can read more at: <a href="https://chkrishna.medium.com/kubernetes-objects-e0a8b93b5cdc" rel="nofollow noreferrer">https://chkrishna.medium.com/kubernetes-objects-e0a8b93b5cdc</a></p> <p>Official document: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/</a></p> <p>If you want to manage Pods using a controller, <code>kind: Deployment</code> is best: you can scale the replicas up and down, and you can set the number of replicas (1, 2, 3...) in the YAML so that exactly that many Pods run in the cluster.</p>
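<p>For illustration, here is a minimal <code>Deployment</code> variant of the routing workload (a sketch; the labels and image are taken from the question's routing-ds.yaml) that runs exactly one Pod in total, regardless of how many nodes the cluster has:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: routing-nodes
  labels:
    role: routing
spec:
  replicas: 1          # exactly one Pod in total, not one per node
  selector:
    matchLabels:
      role: routing
  template:
    metadata:
      labels:
        role: routing
    spec:
      containers:
      - name: routing-container
        image: akazad1/srlanna:v2
</code></pre> <p>Setting <code>replicas: 2</code> here would give two Pods, but the scheduler, not the node count, decides where they land.</p>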
<p>Hi there,</p> <p>I'm trying to configure Kubernetes CronJob monitoring &amp; alerts with Prometheus. I found this helpful <a href="https://medium.com/@tristan_96324/prometheus-k8s-cronjob-alerts-94bee7b90511" rel="nofollow noreferrer">guide</a>.</p> <p>But I always get a <strong>many-to-many matching not allowed: matching labels must be unique on one side</strong> error.</p> <p>For example, this is the PromQL query which triggers this error:</p> <pre><code>max(
  kube_job_status_start_time
  * ON(job_name) GROUP_RIGHT()
  kube_job_labels{label_cronjob!=""}
) BY (job_name, label_cronjob)
</code></pre> <p>On their own, the queries result in e.g. these metrics:</p> <p><strong>kube_job_status_start_time</strong>:</p> <pre><code>kube_job_status_start_time{app="kube-state-metrics",chart="kube-state-metrics-0.12.1",heritage="Tiller",instance="REDACTED",job="kubernetes-service-endpoints",job_name="test-1546295400",kubernetes_name="kube-state-metrics",kubernetes_namespace="monitoring",kubernetes_node="REDACTED",namespace="test-develop",release="kube-state-metrics"}
</code></pre> <p><strong>kube_job_labels{label_cronjob!=""}</strong>:</p> <pre><code>kube_job_labels{app="kube-state-metrics",chart="kube-state-metrics-0.12.1",heritage="Tiller",instance="REDACTED",job="kubernetes-service-endpoints",job_name="test-1546295400",kubernetes_name="kube-state-metrics",kubernetes_namespace="monitoring",kubernetes_node="REDACTED",label_cronjob="test",label_environment="test-develop",namespace="test-develop",release="kube-state-metrics"}
</code></pre> <p>Is there something I'm missing here? The same many-to-many error happens for every query I tried from the guide. Even constructing one myself from the ground up resulted in the same error. Hope you can help me out here :)</p>
<p>I ran into the same issue when I followed that article, but in my case I actually get duplicate job names in different namespaces.</p> <p>Ex. when running <code>kube_job_status_start_time</code>:</p> <pre><code>kube_job_status_start_time{instance=&quot;REDACTED&quot;,job=&quot;kube-state-metrics&quot;,job_name=&quot;job-abc-123&quot;,namespace=&quot;us&quot;}
kube_job_status_start_time{instance=&quot;REDACTED&quot;,job=&quot;kube-state-metrics&quot;,job_name=&quot;job-abc-123&quot;,namespace=&quot;ca&quot;}
</code></pre> <p>So I had to either add a filter for the namespace or add namespace into the ON/BY clauses to make the match unique.</p> <p>E.g. for one of the subqueries I had to do this:</p> <pre><code>max(
  kube_job_status_start_time
  * ON(namespace, job_name) GROUP_RIGHT()
  kube_job_labels{label_cronjob!=&quot;&quot;}
) BY (namespace, label_cronjob)
</code></pre> <p>Essentially I had to apply that principle to all the rest of the queries for it to work for me. Not sure if that applies in your case.</p>
<p>This is the output with <code>custom-columns</code>:</p> <pre><code>$ kubectl -n web get pod -ocustom-columns=&quot;Name:.metadata.name,Image:.spec.containers[0].image&quot;
Name      Image
mysql-0   myrepo.mydomain.com/mysql:5.7
mysql-1   myrepo.mydomain.com/mysql:5.7
mysql-2   myrepo.mydomain.com/mysql:5.7
</code></pre> <p>This is the output with <code>jsonpath</code> for a single pod:</p> <pre><code>$ kubectl -n web get pod -o jsonpath=&quot;{..metadata.name}:{..spec.containers[0].image}&quot; mysql-0
mysql-0:myrepo.mydomain.com/mysql:5.7
</code></pre> <p>This is the output with <code>jsonpath</code> for multiple pods:</p> <pre><code>$ kubectl -n web get pod -o jsonpath=&quot;{..metadata.name}:{..spec.containers[0].image}&quot;
mysql-0 mysql-1 mysql-2:myrepo.mydomain.com/mysql:5.7 myrepo.mydomain.com/mysql:5.7 myrepo.mydomain.com/mysql:5.7
</code></pre> <p>Now how can I combine this into a single column or word, something like this, using <code>-ocustom-columns</code> or <code>-ojsonpath</code>?</p> <pre><code>mysql-0=myrepo.mydomain.com/mysql:5.7
mysql-1=myrepo.mydomain.com/mysql:5.7
mysql-2=myrepo.mydomain.com/mysql:5.7
</code></pre>
<p>Use kubectl with plain <code>jsonpath</code>:</p> <pre><code>kubectl get pod -n &lt;namespace&gt; -o jsonpath='{range .items[*]}{.metadata.name}={.spec.containers[*].image}{&quot;\n&quot;}{end}'
</code></pre> <p>Example:</p> <pre><code>kubectl get pod -n default -o jsonpath='{range .items[*]}{.metadata.name}={.spec.containers[*].image}{&quot;\n&quot;}{end}'
nginx-0=nginx
nginx-1=nginx
nginx-2=nginx
</code></pre> <p>Here the <code>range</code> feature is used to loop over all the pods:</p> <pre><code>{range .items[*]} ...&lt;LOGIC HERE&gt;... {end}
</code></pre> <p>Inside the range block (as described above), use the following <code>jsonpath</code>; notice the <code>=</code> sign is used as per our requirement:</p> <pre><code>{.metadata.name}={.spec.containers[*].image}{&quot;\n&quot;}
</code></pre>
<p>We are trying to write a consumer that would subscribe to a particular topic and do some processing when a couple of conditions in the data are met. One operation in the processing, however, can be done only once; for the purpose of simplicity, let us consider it to be an HTTP POST request that is not idempotent.</p> <p>Following are some other considerations:</p> <ul> <li>The consumer would need to be coded in node.js and use the kafkajs library,</li> <li>The consumer would need to run on Kubernetes,</li> <li>The number of replicas would be equal to the number of producer replicas,</li> <li>We will use a consumer group for our consumers.</li> </ul> <p>We were thinking that to make this consumer idempotent, we can perhaps do something like:</p> <pre><code>For every message
    Check if message was processed
    try:
        If !processed
            Do Processing (http POST included here)
    catch for errors:
        Do error processing
    finally:
        Mark message as processed
</code></pre> <p>&quot;Mark message as processed&quot; would basically mean capturing some details in a relational DB (like Postgres): the offset, timestamp, and a few other details, to ensure we have captured the keys that allow us to identify a record uniquely.</p> <ol> <li>Does the above look good for making the consumer idempotent?</li> <li>What other alternatives can you suggest that would perform better?</li> </ol> <p>Added to the above, a few more questions around best practices with DB processing in the above scenario:</p> <ol start="3"> <li><p>Say I have 3 k8s nodes and each one of them has 3 consumer pods running, essentially giving 9 single-threaded Kafka consumers. Is that a correct understanding?</p> </li> <li><p>Now since each of these threads will do DB inserts/reads, what would be better to use between a Pool and a Client (assuming the node-postgres library)?</p> </li> <li><p>It seems that if we open a Client connection at the start of the day and let it be there till the end of the day, it should work for us.
Is that a good approach or a really poor design?</p> </li> <li><p>Do we get any benefits from using Pools if we are doing per-message processing from these 9 consumers?</p> </li> </ol> <p>Additional assumptions:</p> <ul> <li>Traffic timing: starts at 7:00 AM Eastern, picks up over the day, and tapers off in the US evening. No traffic between 2:00 AM Eastern and 6:00 AM Eastern.</li> <li>Average: 1 message per second during US daytime,</li> <li>Max: 5 messages per second in bursts of short duration during US daytime.</li> <li>Delay tolerance: on a normal day, the POST in the consumer can be delayed no more than 5 mins from the publish time of the message.</li> </ul> <p>Thanks for being patient and reading this through. Apologies for the length of this post.</p>
<ol> <li>Does the above look good for making the Consumer Idempotent?</li> </ol> <blockquote> <p>Yes, from the POV of idempotency, your code looks good. Since you're working with Kafka consumers, you don't need an explicit for loop for message processing; consumers are invoked on each message arrival. Your pseudo-code should look like this:</p> </blockquote> <pre><code>Check if message was processed
try:
    If !processed
        Do Processing (http POST included here)
catch for errors:
    Do error processing
finally:
    Mark message as processed
</code></pre> <ol start="2"> <li>What other alternatives can you suggest that would perform better?</li> </ol> <blockquote> <p>Your code misses out on an important aspect, which is concurrent duplicate messages. Say, for example, two messages are somehow produced at the same time from the producer (which is actually an error at the producer's end) and the message should be processed only once. Both consumers start processing. At the <code>If !processed</code> check, both see the same state, which is <code>not processed</code>, and both of them proceed to <code>Do Processing</code>. You can avoid such scenarios by acquiring a lock on some id by which you can tell whether a message is a duplicate or not. Since you're already using Postgres, you could look into <code>pg_advisory_locks</code>. So now your pseudo-code will look something like:</p> </blockquote> <pre><code>Check if message was processed
try:
    acquire_lock(message.uniqId)  // pg_advisory_lock
    If !processed
        Do Processing (http POST included here)
catch for errors:
    if error is lock_already_acquired_by_other_process
        return as duplicate processor
    else
        Do error processing
finally:
    Mark message as processed
    release lock
</code></pre> <blockquote> <p>We can still make some improvements. The above code doesn't handle failure scenarios where we would like to have retries. There are various ways to achieve this. Oh wait, you're using Kafka.
Why not publish the message which failed processing (obviously not the duplicates) to the same Kafka topic after some delay, with a counter in the message object recording how many times it has been processed? We certainly want to retry only a limited number of times, so each time we process the message we can check the counter we previously set in the message object to control the number of retries. So far so good, but what about messages which fail even after the fixed number of retries? For such cases you'd want a DLQ (dead-letter queue), which holds such messages along with some error message until you have looked at them manually and fixed the errors.</p> <p>This sounds like a lot of code to write, but there's more good news: there are libraries you can leverage to achieve all of this. One such library is <a href="https://optimalbits.github.io/bull/" rel="nofollow noreferrer">bull</a>.</p> </blockquote> <ol start="3"> <li>Say I have 3 k8s nodes and each one of them has 3 consumer pods running, essentially giving 9 single-threaded kafka consumers. Is that a correct understanding?</li> </ol> <blockquote> <p>Yeah, as far as I understand it.</p> </blockquote> <ol start="4"> <li>Now since each of these threads will do DB Inserts/Reads, what would be better to use between a Pool and a Client (assuming the node-postgres library)?</li> </ol> <blockquote> <p>Using a pool would be advisable, since you also aim to achieve faster processing. With connection pooling you can do things like firing multiple queries at the same time without queuing them up, utilizing any underlying library which uses parallel execution, etc.
Of course, we shouldn't fill up our memory with connections, so a pool size tuned to the pod's memory is advisable.</p> </blockquote> <ol start="5"> <li>It seems that if we open a Client connection at the Start of Day and let it be there till the end of the day, it should work for us. Is that a good approach or a really poor design?</li> </ol> <blockquote> <p>I can't quite tell what you're trying to do here, but I'd go for connection pooling.</p> </blockquote> <ol start="6"> <li>Do we get any benefits by using Pools, if we are doing per-message processing from these 9 consumers?</li> </ol> <blockquote> <p>Yes. Apart from the benefits already mentioned in point 4, you get better resource utilization of your k8s pods (again, that depends on whether 9 consumers are optimal for the incoming message rate).</p> </blockquote>
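<p>To make the locking flow above concrete, here is a minimal node.js sketch of the idempotent handler. The storage and locking are modeled with in-memory stand-ins (<code>processedIds</code>, <code>locks</code>) purely for illustration; in production these would be a Postgres table of processed ids and an advisory lock, and <code>handleMessage</code> would be wired into kafkajs's <code>eachMessage</code> callback. The <code>uniqId</code> field name comes from the pseudo-code above; <code>doPost</code> stands in for the non-idempotent HTTP POST:</p>

```javascript
// Hypothetical in-memory stand-ins for Postgres state: a table of processed
// message ids and an advisory lock. For illustration only.
const processedIds = new Set();
const locks = new Set();

function acquireLock(id) {
  if (locks.has(id)) return false; // a concurrent consumer holds the lock
  locks.add(id);
  return true;
}

function releaseLock(id) {
  locks.delete(id);
}

// Returns "processed", "duplicate", or "skipped-concurrent".
// A failing doPost throws before the id is marked processed, so a later
// redelivery of the same message can retry it.
async function handleMessage(message, doPost) {
  const id = message.uniqId;
  if (processedIds.has(id)) return "duplicate";
  if (!acquireLock(id)) return "skipped-concurrent";
  try {
    if (!processedIds.has(id)) { // re-check under the lock
      await doPost(message);     // the non-idempotent HTTP POST
      processedIds.add(id);      // mark message as processed
    }
    return "processed";
  } finally {
    releaseLock(id);
  }
}

module.exports = { handleMessage };
```

<p>Note one deliberate difference from the pseudo-code: this sketch marks a message processed only after the POST succeeds, so a Kafka redelivery can retry a failed message instead of silently dropping it.</p>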
<p>So I'm not sure if I'm using this feature incorrectly or it's a limitation of &quot;anti-affinity&quot;, but I'm at a loss. I have a batch of jobs that I want to run with each one having its own node. It should be relatively simple: add the anti-affinity rule to only run the pod where the hostname + label combination does not already exist. Despite this, I still have multiple pods on the same node.</p> <p>My best guess right now is that, because I create all the jobs at once with a <code>kubectl apply -f ./folder</code> command, the scheduler doesn't count pods on a node in the &quot;container creating&quot; state as a trigger for the anti-affinity rule, and schedules another onto the node.</p> <p>Each job needs a slightly different command line, so I can't use just one job file with the parallel spec until 1.22 comes out with job indexing.</p> <p>Below is the job yaml in case there's something I'm missing.</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: testjob-$SHARD
spec:
  backoffLimit: 1
  template:
    metadata:
      labels:
        run: testjob
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: run
                operator: In
                values:
                - testjob
            topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
      - name: testjob
        imagePullPolicy: Always
        image: image
        resources:
          requests:
            memory: &quot;3072Mi&quot;
          limits:
            memory: &quot;4Gi&quot;
        command: [&quot;./foo&quot;]
        securityContext:
          privileged: true
      restartPolicy: OnFailure
</code></pre>
<p>So as a shot in the dark, I upgraded my cluster from 1.20 to 1.21 and it magically fixed itself, so it's possible this was just a 1.20 problem. I have an issue open on the Kubernetes GitHub to look into it, but all I can say is that if you're on 1.20 and having trouble with pod-level anti-affinity, maybe try upgrading.</p>
<p>We are using an NFS volume (GCP Filestore with 1TB size) to set up RWX Many-access PVCs in GCP. The problem here is: for example, when I allot a PVC of 5Gi and mount it to an nginx pod under /etc/nginx/test-pvc, instead of allotting just 5Gi it allots the whole NFS volume size.</p> <p>I logged into the nginx pod and did a df -kh:</p> <pre><code>df -kh
Filesystem      Size  Used Avail Use% Mounted on
overlay          95G   16G   79G  17% /
tmpfs            64M     0   64M   0% /dev
tmpfs            63G     0   63G   0% /sys/fs/cgroup
shm              64M     0   64M   0% /dev/shm
/dev/sda1        95G   16G   79G  17% /etc/hosts
10.x.10.x:/vol 1007G  5.0M  956G   1% /etc/nginx/test-pvc
tmpfs            63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            63G     0   63G   0% /proc/acpi
tmpfs            63G     0   63G   0% /proc/scsi
tmpfs            63G     0   63G   0% /sys/firmware
</code></pre> <p>The size of /etc/nginx/test-pvc is 1007G, which is my whole volume size in NFS (1 TB); it should have been 5G instead. Even the 5MB of used space isn't actually used in /etc/nginx/test-pvc. Why is this the behaviour?</p> <p>PV and PVC yaml used:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-test
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /vol
    server: 10.x.10.x
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: &quot;&quot;
  resources:
    requests:
      storage: 5Gi
  volumeName: pv-nfs-test
</code></pre> <p>Nginx deployment yaml:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-pv-demo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-pv-demo
  template:
    metadata:
      name: nfs-pv-pod
      labels:
        app: nfs-pv-demo
    spec:
      containers:
      - image: nginx
        name: nfs-pv-multi
        imagePullPolicy: Always
        volumeMounts:
        - name: nfs-volume-1
          mountPath: &quot;/etc/nginx/test-pvc&quot;
      volumes:
      - name: nfs-volume-1
        persistentVolumeClaim:
          claimName: nfs-claim1
</code></pre> <p>Is there anything I'm missing? Or is this the behaviour of NFS?
If so, what is the best way to handle it in production? We will have multiple other PVCs, and this could cause some confusion and volume-denial issues.</p>
<blockquote> <p>Is there anything I'm missing? Or is this the behaviour of NFS?</p> </blockquote> <p>No, nothing at all. This is simply the way it works. And it's nothing specific to NFS either.</p> <p>The <code>5Gi</code> of storage capacity defined in your <code>PV</code> can be treated more like a <strong>declaration</strong> that you have a <code>PersistentVolume</code> object which has 5 gigabytes of underlying storage. <strong>But it's nothing more than just a declaration. You can not put any constraint on your available disk capacity this way.</strong> So if you have a disk that actually has 100 gigabytes of capacity, it is good practice to declare <code>100Gi</code> in this field of your <code>PV</code> definition for the sake of consistency.</p> <p>The storage capacity you set in your <code>PVC</code> is a bit of a different story. It can be understood as the <strong>minimum storage capacity that would satisfy your request for storage</strong>. So if you have, let's say, 3 different <code>PVs</code> with the following capacities (declared in the <code>PV</code> definition, no matter what their real capacity is): <code>3Gi</code>, <code>10Gi</code> and <code>100Gi</code>, and you claim <code>5Gi</code> in your <code>PersistentVolumeClaim</code>, only 2 of them, i.e. <code>10Gi</code> and <code>100Gi</code>, can satisfy such a request. And as I said above, it doesn't matter that the smallest one, which has <code>3Gi</code> declared, is in fact backed by quite a large disk which has <code>1000Gi</code>.
If you defined a <code>PV</code> object which represents such a disk in the kubernetes environment (and makes it available to be consumed by some <code>PVC</code> and in the end by some <code>Pod</code> which uses it) and you declared that this particular <code>PV</code> has only <code>3Gi</code> of capacity, the <code>PVC</code> in which you request <code>5Gi</code> has no means to verify the actual capacity of the disk and &quot;sees&quot; such a volume as one without enough capacity to satisfy the request made for <code>5Gi</code>.</p> <p>To illustrate that it isn't specific to NFS, you can create a new GCE persistent disk of 100 gigabytes (e.g. via the cloud console, as it seems the easiest way) and then use such a disk in a <code>PV</code> and <code>PVC</code> which in the end will be used by a simple nginx pod. This is described <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd#create_pv_pvc" rel="nofollow noreferrer">here</a>.</p> <p>So you may declare <code>10Gi</code> in your <code>PV</code> (and at most <code>10Gi</code> in the <code>PVC</code> then), although your GCE persistent disk in fact has a capacity of 100 gigs. And if you connect to such a pod, you won't see the declared capacity of <code>10Gi</code> but the real capacity of the disk. And that is completely normal; it works exactly as it was designed.</p> <p>You may have thought that it works similarly to <code>LVM</code>, where you create a volume group consisting of one or more disks and then you can create as many logical volumes as your underlying capacity allows. <code>PVs</code> in kubernetes don't allow you to do anything like this. <strong>The capacity that you &quot;set&quot; in a <code>PV</code> definition is only a declaration, not a constraint of any kind.</strong> If you need to mount separate chunks of a huge disk into different pods, you would need to divide it into partitions first and create separate <code>PV</code> objects, each one out of a single partition.</p>
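<p>For an NFS share like Filestore, the analogue of partitioning is exporting subdirectories: one <code>PV</code> per subdirectory of the export. A sketch (the <code>/vol/app1</code> and <code>/vol/app2</code> subdirectories are hypothetical and must already exist on the share); note this separates each pod's data but still does not enforce any size limit:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-app1
spec:
  capacity:
    storage: 5Gi          # still only a declaration, not a quota
  accessModes:
    - ReadWriteMany
  nfs:
    path: /vol/app1       # hypothetical subdirectory of the export
    server: 10.x.10.x
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-app2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /vol/app2
    server: 10.x.10.x
</code></pre>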
<p>We are running Spark 3.1.1 on Kubernetes in client mode.</p> <p>We have a simple Scala Spark application that loads Parquet files from S3 and aggregates them:</p> <pre><code>sparkSession.read.parquet(paths).as[MyRawEvent]
</code></pre> <p>Our application runs perfectly on the happy path: the driver pod starts running, executor pods join the party, and when the application finishes both executors and driver terminate.</p> <p>On the other hand, if something goes wrong, both driver + executor pods stay in the <code>Running</code> state. For instance, an exception occurs (in the driver) if one of the <code>paths</code> above doesn't exist:</p> <pre><code>Exception in thread &quot;main&quot; org.apache.spark.sql.AnalysisException: Path does not exist: s3a://&lt;bucket-name&gt;/client-id=8765432/date=2021-08-06
    at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4(DataSource.scala:803)
    at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4$adapted(DataSource.scala:800)
    at org.apache.spark.util.ThreadUtils$.$anonfun$parmap$2(ThreadUtils.scala:372)
    at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
    at scala.util.Success.$anonfun$map$1(Try.scala:255)
    at scala.util.Success.map(Try.scala:213)
    at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
    at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
    at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
    at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source)
</code></pre> <p>Interestingly, this exception doesn't prevent the executors from starting right after, and both driver &amp; executor pods are stuck forever, doing nothing.</p> <p>We are not catching exceptions in our application and we expected that the driver and the executors would stop, instead of wasting redundant resources.</p> <p>How can we crash the application so it won't stay in the <code>Running</code> state forever?</p>
<p>Well, that was an easy one.</p> <p>I had to catch all exceptions to ensure that the Spark context is closed no matter what:</p> <pre><code> def main(args: Array[String]): Unit = {
   // some code
   implicit val sparkSession = SparkSession.builder().getOrCreate
   try {
     // application code with potential exceptions
   } catch {
     case exception: Exception =&gt;
       sparkSession.close()
       throw exception
   }
   sparkSession.close()
 }
</code></pre> <p>That way all resources are freed and the driver pod changes its state to <code>Error</code> as expected.</p> <p><strong>EDIT</strong> - or in the Scala fashion (note the <code>scala.util</code> imports this needs):</p> <pre><code> import scala.util.{Failure, Success, Try}

 def main(args: Array[String]): Unit = {
   // some code
   implicit val sparkSession = SparkSession.builder().getOrCreate
   Try {
     // application code with potential exceptions
   } match {
     case Success(_) =&gt; None
     case Failure(exception) =&gt;
       sparkSession.close()
       throw exception
   }
   sparkSession.close()
 }
</code></pre>
<p>I frequently update a Docker image using the same tag on Azure Container Registry. I would like to automatically run a rolling update when a new tag is pushed to my ACR, so pods are automatically updated with the latest version of the image.</p> <p>ACR provides a Webhook feature to automatically call a webhook when a new tag is pushed.</p> <p>My first approach would be to create a webhook in Kubernetes, exposed through an ingress and callable from the ACR. The webhook would perform the rolling update, but it requires access to the Kubernetes API.</p> <p>Is it possible? Am I in the right direction, or is there a more convenient way to meet my requirement?</p> <p>Thanks!</p>
<p>Yes, it's possible. If ACR can call a webhook whenever a new image is pushed (even with the same tag), you can call a webhook service in your cluster. The service can then, for example, update an annotation in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#podtemplatespec-v1-core" rel="nofollow noreferrer">PodTemplateSpec</a> of your Deployment, which triggers a rolling update of your Pods.</p> <p>However, it requires that the <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="nofollow noreferrer">imagePullPolicy</a> of the containers is set to Always, as the tag of the image remains unchanged. Otherwise, Kubernetes would think that it's still the same image and would use the one that it has previously pulled.</p> <p>For access to the Kubernetes API, you can do this with a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">ServiceAccount</a> to which you assign the necessary Kubernetes API permissions with <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Roles and RoleBindings</a>. Finally, you assign this ServiceAccount to your Pods through the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#podspec-v1-core" rel="nofollow noreferrer"><code>serviceAccountName</code></a> field in the PodSpec.</p>
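<p>As a sketch of the RBAC part (all names here are illustrative, and the exact permissions depend on what your webhook service does), a ServiceAccount with a Role that may patch Deployments in its namespace could look like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: acr-webhook
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-patcher
rules:
- apiGroups: [&quot;apps&quot;]
  resources: [&quot;deployments&quot;]
  verbs: [&quot;get&quot;, &quot;patch&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: acr-webhook-can-patch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-patcher
subjects:
- kind: ServiceAccount
  name: acr-webhook
</code></pre> <p>The webhook pod would then reference <code>serviceAccountName: acr-webhook</code> in its PodSpec and patch an annotation in the Deployment's PodTemplateSpec on every notification.</p>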
<p>Hi, I am using Ambassador for my project. I have 2 services and 2 host files. When I apply <code>kubectl apply -f host1.yml</code> and <code>kubectl apply -f host2.yml</code> I get the error below. How can I produce the correct TLS cert using Ambassador Edge Stack 1.13? What is wrong here?</p> <pre><code>kubectl get hosts -A
</code></pre> <p><a href="https://i.stack.imgur.com/txQPN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/txQPN.png" alt="enter image description here" /></a></p> <p>Error: echo-host: continuing with invalid TLS secret tls2-cert</p> <p>Details:</p> <pre><code>2021-08-07 14:26:19 diagd 1.13.10-dev.24+g61366062d [P32TAEW] ERROR: Secret tls2-cert.default unknown
2021-08-07 14:26:19 diagd 1.13.10-dev.24+g61366062d [P32TAEW] ERROR: Host echo-host: continuing with invalid TLS secret tls2-cert
2021-08-07 14:26:19 diagd 1.13.10-dev.24+g61366062d [P32TAEW] ERROR: Secret tls-cert.default unknown
2021-08-07 14:26:19 diagd 1.13.10-dev.24+g61366062d [P32TAEW] ERROR: Host quote-host: continuing with invalid TLS secret tls-cert
2021-08-07 14:26:19 diagd 1.13.10-dev.24+g61366062d [P32TAEW] INFO: configuration updated (complete) from snapshot snapshot (S9 L1 G12 C5)
time=&quot;2021-08-07 14:26:19&quot; level=warning msg=&quot;license_secret_watch: empty decoded license data&quot; func=github.com/datawire/apro/cmd/amb-sidecar.runE.func3 file=&quot;github.com/datawire/apro/cmd/amb-sidecar/main.go:258&quot; CMD=amb-sidecar PID=16 THREAD=/license_secret_watch
time=&quot;2021-08-07T14:26:19Z&quot; level=info msg=&quot;Loaded file /ambassador/envoy/envoy.json&quot;
time=&quot;2021-08-07T14:26:19Z&quot; level=info msg=&quot;Saved snapshot v126&quot;
time=&quot;2021-08-07T14:26:19Z&quot; level=info msg=&quot;Pushing snapshot v126&quot;
</code></pre> <p>My host files:</p> <p><strong>host1.yml:</strong></p> <pre><code>---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: quote-host
spec:
  hostname: quote.kafkacake.xyz
  acmeProvider:
    email: yusufkaratoprak@gmail.com
  tlsSecret:
    name: tls-cert
  requestPolicy:
    insecure:
      action: Redirect
      additionalPort: 8080
</code></pre> <p><strong>host2.yml:</strong></p> <pre><code>---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: echo-host
spec:
  hostname: echo.kafkacake.xyz
  acmeProvider:
    email: yusufkaratoprak@gmail.com
  tlsSecret:
    name: tls2-cert
  requestPolicy:
    insecure:
      action: Redirect
      additionalPort: 8080
</code></pre>
<p>You should try installing <strong>cert-manager</strong> alongside <strong>Ambassador</strong>. Edge Stack supports Let's Encrypt through <code>acmeProvider</code>, but only with a single challenge type, <code>HTTP-01</code>.</p> <p>Your <strong>HTTP-01</strong> cert challenge is failing, while cert-manager also supports the <strong>DNS-01</strong> method; with it you can use wildcard certs as well.</p> <p><a href="https://www.getambassador.io/docs/edge-stack/latest/howtos/cert-manager/" rel="nofollow noreferrer">https://www.getambassador.io/docs/edge-stack/latest/howtos/cert-manager/</a></p>
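<p>As a rough sketch of the cert-manager route (assuming cert-manager is installed and you have created an issuer; the <code>ClusterIssuer</code> name below is hypothetical), you would request the secret that your Host references via a <code>Certificate</code> resource:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: echo-cert
  namespace: default
spec:
  secretName: tls2-cert        # matches the tlsSecret name in the Host
  dnsNames:
  - echo.kafkacake.xyz
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod     # hypothetical issuer name
</code></pre> <p>cert-manager then completes the challenge and writes the certificate into the <code>tls2-cert</code> Secret, which Ambassador picks up.</p>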
<p>I am writing the simplest function possible using <code>client-go</code> that just performs in-cluster authentication and returns a pointer to the <code>kubernetes.Clientset</code> object:</p> <pre class="lang-golang prettyprint-override"><code>// getInClusterAuth performs in-cluster authentication and returns the clientset
func getInClusterAuth() (*kubernetes.Clientset, error) {
    // creates the in-cluster config
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }

    // create the clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }

    return clientset, nil
}
</code></pre> <p>Since this is something that not only runs against an external system (a <code>k8s</code> API server) but is also supposed to be running from within a <code>Pod</code> once deployed, what is the appropriate way of unit-testing it?</p> <p>Is it acceptable practice to cover this case in e2e or integration tests instead?</p>
<p>You can use <code>k8s.io/client-go/kubernetes/fake.NewSimpleClientset</code> to mock a clientset in your unit test:</p> <pre><code>import (
    &quot;context&quot;
    &quot;testing&quot;

    v1 &quot;k8s.io/api/core/v1&quot;
    &quot;k8s.io/apimachinery/pkg/api/errors&quot;
    metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot;
    &quot;k8s.io/client-go/kubernetes/fake&quot;
)

func TestHelloWorld(t *testing.T) {
    clientset := fake.NewSimpleClientset(&amp;v1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        &quot;influxdb-v2&quot;,
            Namespace:   &quot;default&quot;,
            Annotations: map[string]string{},
        },
    }, &amp;v1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:        &quot;chronograf&quot;,
            Namespace:   &quot;default&quot;,
            Annotations: map[string]string{},
        },
    })

    _, err := clientset.CoreV1().Pods(v1.NamespaceDefault).Get(context.Background(), &quot;influxdb-v2&quot;, metav1.GetOptions{})
    if err != nil {
        if errors.IsNotFound(err) {
            t.Error(err)
        } else {
            t.Errorf(&quot;failed to get pod from apiserver: %s&quot;, err)
        }
    }

    p := &amp;v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: &quot;helloooooo&quot;}}
    _, err = clientset.CoreV1().Pods(v1.NamespaceDefault).Create(context.Background(), p, metav1.CreateOptions{})
    if err != nil {
        t.Fatalf(&quot;error injecting pod add: %v&quot;, err)
    }
}
</code></pre> <p>Here is a sample: <a href="https://stackoverflow.com/questions/57833499/how-to-write-simple-tests-for-client-go-using-a-fake-client">a fakeclient case</a></p>
<p>I am trying to install Kubernetes on Mac. I followed these instructions - <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a> (for macOS)</p> <p>I followed all 5 steps mentioned in that link:</p> <pre><code>1. curl -LO &quot;https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl&quot;

2. curl -LO &quot;https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl.sha256&quot;
   echo &quot;$(&lt;kubectl.sha256)  kubectl&quot; | shasum -a 256 --check
   output: kubectl: OK

3. chmod +x ./kubectl

4. sudo mv ./kubectl /usr/local/bin/kubectl &amp;&amp; \
   sudo chown root: /usr/local/bin/kubectl

5. kubectl version --client
</code></pre> <p>However, when I executed <code>kubectl version --client</code>, it displayed:</p> <p><code>zsh: bad CPU type in executable: kubectl</code></p> <p>I tried to switch the shell from zsh to sh and bash, but nothing helped.</p>
<p>For Mac M1 - install Rosetta: <code>softwareupdate --install-rosetta</code></p> <p>Working on my M1, Big Sur 11.5.1.</p> <p>For more info, have a look at this link: <a href="https://support.apple.com/en-gb/HT211861" rel="nofollow noreferrer">Rosetta</a></p> <p>Check this <a href="https://apple.stackexchange.com/questions/408375/zsh-bad-cpu-type-in-executable?newreg=943f42fc6a254d34bb2729742d135920">answer</a></p>
<p>I have been searching (for many days...) how to use Ingress Nginx with an external URL as backend.</p> <p>Yes, I use a Service object with <code>externalName</code>. I have also tested many nginx directives without success...</p> <p>The goal is simple:</p> <ul> <li>users can resolve and reach only my public URLs (url-public.com &amp; url-public-2.com)</li> <li>I want to send requests to an external backend that is out of my control and cluster (url-ext.com)</li> </ul> <p>My ingress can resolve url-ext.com; users cannot. Of course, url-ext.com has its proper certificate and is exposed as HTTPS ;). Note: my ingress exposes 2 URLs, but I don't think this is important.</p> <p>My code:</p> <p>Service:</p> <pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
  name: external-eip
  namespace: external-url
spec:
  type: ExternalName
  externalName: url-ext.com
</code></pre> <p>Ingress:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-eip
  namespace: external-url
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
    nginx.ingress.kubernetes.io/upstream-vhost: &quot;url-ext.com&quot;
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # more_clear_input_headers &quot;Host&quot; &quot;X-Forwarded-Host&quot;;
      more_set_input_headers 'Host: url-ext.com';
      proxy_set_header Host url-ext.com;
spec:
  rules:
  - host: url-public.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-eip
            port:
              number: 443
  - host: url-public-2.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-eip
            port:
              number: 443
</code></pre> <p>Result (curl or browser access):</p> <pre><code>&lt;html&gt;
&lt;head&gt;&lt;title&gt;504 Gateway Time-out&lt;/title&gt;&lt;/head&gt;
&lt;body&gt;
&lt;center&gt;&lt;h1&gt;504 Gateway Time-out&lt;/h1&gt;&lt;/center&gt;
&lt;hr&gt;&lt;center&gt;nginx&lt;/center&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>And ingress log:</p> <pre><code>2021/08/06 21:44:45 [error] 10873#10873: *2914631 upstream timed out (110: Operation timed out) while connecting to upstream, client: 10.203.65.14, server: url-public.com, request: &quot;GET / HTTP/2.0&quot;, upstream: &quot;https://10.203.64.5:443/&quot;, host: &quot;url-ext.com&quot;
</code></pre> <p>10.203.64.5 is the real IP of url-ext.com.</p> <p>And <code>curl https://10.203.64.5:443/ -k -v -H &quot;Host: url-ext.com&quot;</code> works.</p> <p>Don't ask me &quot;why use this annotation&quot;, I don't know :p many Google searches and many tests...</p> <p>So, what is the correct ingress configuration for using externalName, when this externalName is (probably) behind a reverse proxy also?</p> <p>Note: rewrites and other redirects do not work, because they rewrite the user's location and I don't want that. Users cannot access and resolve url-ext.com, only url-public.com.</p> <p>Br, Thanks</p>
<p>Replying to myself. The problem was that my ingress (and my whole K8S cluster) is behind an HTTP Squid proxy.</p> <p>I have mounted all the proxy env variables, but it seems the ingress container doesn't use them...</p> <p>So, to get it working, I allowed my cluster outbound 80/443 to the destination.</p> <p>You just need these annotations:</p> <pre class="lang-yaml prettyprint-override"><code>  nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
  nginx.ingress.kubernetes.io/upstream-vhost: &quot;url-ext.com&quot;
</code></pre> <p>But, if any of you know how to set up my ingress to use my HTTP proxy, I would still need it.</p> <p>br</p>
<p>We are deploying a Rails application on Kubernetes.</p> <p>The <code>assets:precompile</code> task runs as part of the Docker image build process.</p> <p>We want to run rake tasks like the rake db:migrate task and other tasks on each deployment.</p> <p>Our current solution uses kubectl exec.</p> <pre><code># deployment.yaml
apiVersion: extensions/v1
kind: Deployment
metadata:
  name: rails-app
spec:
  template:
    spec:
      containers:
      - image: test/rails-app:v0.0.1
        name: myapp
        imagePullPolicy: IfNotPresent
</code></pre> <p>Get the list of pods:</p> <pre><code>$ kubectl get pods
</code></pre> <p>Then exec on the rails pod:</p> <pre><code>$ kubectl exec rails-app-4007005961-1st7s \
  -- bash -c \
  'bin/rake db:migrate'
</code></pre> <p>We want to use a Kubernetes CronJob (.yaml) to start our rake tasks, but we do not know which Docker image we must use, how to connect to the Rails pod, or how to start the rake task. Is there an example implementation?</p>
<blockquote> <p>We need to use kubernentes cronjob (.yaml) to start our rake taks</p> </blockquote> <p>I don't think you need a k8s <strong>cronjob</strong> to run your db migrations. You might wanna use the <em>job</em> resource in k8s, and run it as part of your CI/CD script. Run it right before you apply your deployment.yaml file. In case the migrations job fails, you should abort the deployment.</p>
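<p>As a sketch (using the image from your Deployment; adjust env vars and secrets to whatever your app needs), such a migration Job could look like this:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: rails-db-migrate
spec:
  backoffLimit: 0            # fail fast so the pipeline can abort the deployment
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: db-migrate
        image: test/rails-app:v0.0.1
        command: [&quot;bin/rake&quot;, &quot;db:migrate&quot;]
</code></pre> <p>Your CI/CD script would <code>kubectl apply</code> this Job, wait for it with <code>kubectl wait --for=condition=complete job/rails-db-migrate</code>, and only then apply the deployment.yaml.</p>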
<p><code>kubectl explain serviceaccount.secrets</code> describes ServiceAccount Secrets as the secrets allowed to be used by Pods running using this ServiceAccount, but what effect does adding a Secret name to this list have?</p> <p>The ServiceAccount token Secret (which is automatically added to this list) gets automatically mounted as a volume into all containers in a Pod running using this ServiceAccount (as long as the ServiceAccount admission controller is enabled), but what happens for other secrets?</p>
<p>It holds the names of all Secrets containing tokens for that ServiceAccount, so when the controller goes to rotate things, it knows where to find them.</p>
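<p>For illustration, a ServiceAccount whose <code>secrets</code> list names its token Secret (the Secret name here is hypothetical; in practice it is auto-generated and auto-appended):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
secrets:
- name: my-sa-token-abcde   # hypothetical auto-generated token Secret
</code></pre> <p>Listing a non-token Secret here does not cause it to be mounted; volume mounts are controlled by the Pod spec, not by this list.</p>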
<h2>Overview</h2> <p>Kubernetes scheduling errs on the side of 'not shuffling things around once scheduled and happy', which can lead to quite a level of imbalance in terms of CPU, Memory, and container count distribution. It can also mean that <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#known-limitations" rel="nofollow noreferrer">sometimes Affinity and Topology rules may not be enforced</a> as the state of affairs changes:</p> <h6>With regards to topology spread constraints introduced in v1.19 (stable)</h6> <blockquote> <p>There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in imbalanced Pods distribution.</p> </blockquote> <h2>Context</h2> <p>We are currently making use of <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">pod topology spread constraints</a>, and they are pretty superb, aside from the fact that they only seem to handle skew during scheduling, and not execution (unlike the ability to differentiate with <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#concepts" rel="nofollow noreferrer">Taints and Tolerations</a>).</p> <p>For features such as <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">Node affinity</a>, we're currently waiting on the ability to add <code>RequiredDuringExecution</code> requirements as opposed to <code>ScheduledDuringExecution</code> requirements.</p> <h2>Question</h2> <p><em><strong>My question is</strong></em>, is there a native way to make Kubernetes re-evaluate and attempt to enforce topology spread skew when a new fault domain (topology) is added, without writing my own scheduler?</p> <p>Or do I need to wait for Kubernetes to advance a few more releases? 
;-) (I'm hoping someone may have a smart answer with regards to combining affinity / topology constraints)</p>
<p>After more research I'm fairly certain that using an outside tool like <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">Descheduler</a> is the best way currently.</p> <p>There doesn't seem to be a combination of Taints, Affinity rules, or Topology constraints that can work together to achieve the re-evaluation of topology rules during execution.</p> <p>Descheduler allows you to kill off certain workloads based on user requirements, and lets the default <code>kube-scheduler</code> reschedule the killed pods. It can be installed easily with manifests or Helm and run on a schedule. It can even be triggered manually when the topology changes, which is what I think we will implement to suit our needs.</p> <p>This will be the best means of achieving our goal while waiting for <code>RequiredDuringExecution</code> rules to mature across all feature offerings.</p> <p>Given our topology rules mark each node as a topological zone, using a <a href="https://github.com/kubernetes-sigs/descheduler/blob/master/examples/low-node-utilization.yml" rel="nofollow noreferrer">Low Node Utilization strategy</a> to spread workloads across new hosts as they appear will be what we go with.</p> <pre><code>apiVersion: &quot;descheduler/v1alpha1&quot;
kind: &quot;DeschedulerPolicy&quot;
strategies:
  &quot;LowNodeUtilization&quot;:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:
          &quot;memory&quot;: 20
        targetThresholds:
          &quot;memory&quot;: 70
</code></pre>
<p>I read a bit about <code>Deployment</code> vs <code>StatefulSet</code> in Kubernetes. We usually need <code>StatefulSet</code> when we have a stateful app, so every pod can have its own volume.</p> <p>Now, I have a task to introduce persistence for <code>RabbitMq</code>. I will have only one pod replica of <code>RabbitMq</code>. Can I do it with <code>Deployment</code>? I don't see any problem with this. That one <code>RabbitMq</code> replica will have its own <code>PersistentVolume</code>(it will not share volume with other pods since I have only one replica). Also, I would say that if for any reason my <code>RabbitMq</code> pod gets restarted, it will continue to read and write from the same storage as before restart.</p> <p>Am I missing something?</p>
<p>Even with 1 replica, a StatefulSet still gets you some things, like a stable network identity. You are right that most features stop mattering, but it's really up to your specific needs.</p>
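<p>For completeness, a minimal sketch of the single-replica Deployment approach (image tag and storage size are illustrative):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  strategy:
    type: Recreate   # avoid two pods mounting the same RWO volume during a rollout
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: rabbitmq-data
</code></pre> <p>The <code>Recreate</code> strategy matters here: with the default rolling update, the new and old pods would briefly coexist and fight over the same <code>ReadWriteOnce</code> volume.</p>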
<p>We are using <a href="https://github.com/kubeflow/kfserving/tree/v0.5.1" rel="nofollow noreferrer">KFServing</a> in our kubernetes cluster. The project provides the InferenceService struct and clientset in the folder pkg/.</p> <p>How do I import these packages under <a href="https://github.com/kubeflow/kfserving/tree/v0.5.1/pkg/client" rel="nofollow noreferrer">pkg/client</a>? My code is below:</p> <pre><code>package main

import (
    &quot;fmt&quot;

    kfs &quot;github.com/kubeflow/kfserving/pkg/client/clientset/versioned&quot;
)

func main() {
    var clientset *kfs.Cientset
    clientset = kfs.NewForConfig(nil)
    fmt.Println(clientset)
}
</code></pre> <p>Trying <em><strong>go mod tidy</strong></em> I got (I need to develop against the specified version v0.5.1):</p> <pre><code>main imports
    github.com/kubeflow/kfserving/pkg/client/clientset/versioned imports
    k8s.io/client-go/discovery imports
    github.com/googleapis/gnostic/OpenAPIv2: module github.com/googleapis/gnostic@latest found (v0.5.5), but does not contain package github.com/googleapis/gnostic/OpenAPIv2
main imports
    github.com/kubeflow/kfserving/pkg/client/clientset/versioned imports
    ...
</code></pre> <p>Trying to compile this code I got:</p> <pre><code>    a lot of errors .....
</code></pre> <p>After <em><strong>go get github.com/kubeflow/kfserving</strong></em> I got:</p> <p>Then I changed the go.mod and reconfigured the kfserving version from v0.6.0 to v0.5.1, recompiled this code, and still got the error below:</p> <pre><code>go build .
go: github.com/kubeflow/kfserving@v0.5.1: missing go.sum entry; to add it:
    go mod download github.com/kubeflow/kfserving
</code></pre> <p>Contents of go.mod:</p> <pre><code>module main

go 1.16

require github.com/kubeflow/kfserving v0.5.1 // indirect
</code></pre> <p><strong>UPDATE</strong><br /> <strong>for kfserving v0.4.0 everything works well</strong></p>
<p>Looking at the <code>kfserving</code> <code>go.mod</code> file at <code>v0.5.1</code>, I see <a href="https://github.com/kubeflow/kfserving/blob/aa1042a3b6f72a65f16e13771ce335b1844ad7fc/go.mod#L46-L71" rel="nofollow noreferrer">a big block of <code>replace</code> directives</a>. By design, <a href="https://golang.org/ref/mod#go-mod-file-replace" rel="nofollow noreferrer"><code>replace</code> directives</a> “only apply in the main module's <code>go.mod</code> file and are ignored in other modules”, so it looks like <code>kfserving</code> has some technical debt here that they're passing on to you.</p> <p>I started with an empty <code>go.mod</code> file and pasted in those <code>replace</code> directives. Then:</p> <pre><code>$ go get -d github.com/kubeflow/kfserving/pkg/client/clientset/versioned@v0.5.1
go get: added github.com/PuerkitoBio/purell v1.1.1
…
go get: added sigs.k8s.io/yaml v1.2.0
</code></pre> <p>The <code>.go</code> source file needed a few fixes for typos and mismatched types. I fudged it to:</p> <pre class="lang-golang prettyprint-override"><code>package main

import (
    &quot;fmt&quot;

    kfs &quot;github.com/kubeflow/kfserving/pkg/client/clientset/versioned&quot;
)

func main() {
    var clientset *kfs.Clientset
    clientset = kfs.NewForConfigOrDie(nil)
    fmt.Println(clientset)
}
</code></pre> <p>and then <code>go build .</code> succeeds:</p> <pre><code>$ go build -o /dev/null .
</code></pre> <p>Now I'll run <code>go mod tidy</code> to clean up the <code>go.mod</code> and <code>go.sum</code> files:</p> <pre><code>$ go mod tidy
go: downloading github.com/stretchr/testify v1.5.1
…
go: downloading github.com/jmespath/go-jmespath v0.3.0
</code></pre> <hr /> <p>But wait! I've still got all of the technical debt from <code>kfserving/go.mod</code> — without any comments explaining the choice of versions! — and I'm passing that technical debt on to any downstream users. 
Let's see if I can improve the situation while I'm here.</p> <p>Go 1.16 handles the <code>exclude</code> directive a lot better than previous Go versions did, so maybe I can use a couple of targeted <code>exclude</code> directives instead of the (rather large) <code>replace</code> hammer.</p> <p>I'll start by committing the fixes so far. (If nothing else, I want to be able to <code>git diff</code> the requirements to see what has changed.)</p> <pre><code>$ git add *.go go.mod go.sum
$ git commit -m 'fixed kfserving build'
[main fd93b1d] fixed kfserving build
 3 files changed, 1643 insertions(+), 2 deletions(-)
 create mode 100644 go.sum
</code></pre> <p>I notice that all of the <code>replace</code> directives apply to <code>k8s.io</code> paths, so I'll list the current versions of those paths to see what needs to be fixed:</p> <pre><code>$ go list -m k8s.io/...
k8s.io/api v0.19.2 =&gt; k8s.io/api v0.19.2
k8s.io/apiextensions-apiserver v0.19.2 =&gt; k8s.io/apiextensions-apiserver v0.19.2
k8s.io/apimachinery v0.19.2 =&gt; k8s.io/apimachinery v0.19.2
k8s.io/apiserver v0.19.2 =&gt; k8s.io/apiserver v0.19.2
k8s.io/client-go v11.0.1-0.20190805182717-6502b5e7b1b5+incompatible =&gt; k8s.io/client-go v0.19.2
k8s.io/cloud-provider v0.19.2 =&gt; k8s.io/cloud-provider v0.19.2
k8s.io/code-generator v0.19.2 =&gt; k8s.io/code-generator v0.19.2
k8s.io/component-base v0.19.2 =&gt; k8s.io/component-base v0.19.2
k8s.io/csi-translation-lib v0.19.2 =&gt; k8s.io/csi-translation-lib v0.19.2
k8s.io/gengo v0.0.0-20200428234225-8167cfdcfc14
k8s.io/klog v1.0.0 =&gt; k8s.io/klog v1.0.0
k8s.io/klog/v2 v2.2.0
k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6
k8s.io/legacy-cloud-providers v0.17.4 =&gt; k8s.io/legacy-cloud-providers v0.19.2
k8s.io/test-infra v0.0.0-20200803112140-d8aa4e063646 =&gt; k8s.io/test-infra v0.0.0-20200803112140-d8aa4e063646
k8s.io/utils v0.0.0-20200912215256-4140de9c8800 =&gt; k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89
</code></pre> <p>Most of those look fine, 
but there are three mismatched versions:</p> <pre><code>k8s.io/client-go v11.0.1-0.20190805182717-6502b5e7b1b5+incompatible =&gt; k8s.io/client-go v0.19.2
…
k8s.io/legacy-cloud-providers v0.17.4 =&gt; k8s.io/legacy-cloud-providers v0.19.2
…
k8s.io/utils v0.0.0-20200912215256-4140de9c8800 =&gt; k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89
</code></pre> <p><code>legacy-cloud-providers</code> just needs an upgrade, so that should be easy enough to fix with <code>go get</code> — but it's not relevant to this package anyway so I can just let it float to whatever version it ends up at. I'll cut down the <code>replace</code> directives to just the two remaining modules and see how far I get:</p> <pre><code>replace (
    k8s.io/client-go =&gt; k8s.io/client-go v0.19.2
    k8s.io/utils =&gt; k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89
)
</code></pre> <p>Now:</p> <pre><code>$ go mod tidy
$ go build -o /dev/null .
</code></pre> <p>Ok, so the build is still good — I've made incremental progress, so I'll lock that in:</p> <pre><code>$ git add go.mod go.sum
$ git commit -m 'go.mod: remove irrelevant replacements'
[main cdbc1db] go.mod: remove irrelevant replacements
 3 files changed, 456 insertions(+), 43 deletions(-)
 rewrite go.mod (85%)
 create mode 100755 m
</code></pre> <p>Now let's try notching out the bad <code>client-go</code> versions with <code>exclude</code> directives:</p> <pre><code>$ go mod edit -exclude=k8s.io/client-go@v11.0.1-0.20190805182717-6502b5e7b1b5+incompatible
$ go list -m k8s.io/client-go
k8s.io/client-go v9.0.0+incompatible =&gt; k8s.io/client-go v0.19.2
$ go mod edit -exclude=k8s.io/client-go@v9.0.0+incompatible
$ go list -m k8s.io/client-go
k8s.io/client-go v0.19.2 =&gt; k8s.io/client-go v0.19.2
</code></pre> <p>Now I'm at the <code>client-go</code> version I needed, so I can drop that <code>replace</code> directive:</p> <pre><code>$ go mod edit -dropreplace=k8s.io/client-go
$ go mod tidy
$ go build -o /dev/null .
</code></pre> <p>And again lock it in:</p> <pre><code>$ git add go.mod go.sum
$ git commit -m 'go.mod: use exclude instead of replace to notch out unwanted client-go versions'
[main de69965] go.mod: use exclude instead of replace to notch out unwanted client-go versions
 2 files changed, 21 insertions(+), 3 deletions(-)
</code></pre> <p>Now let's see what's going on with this <code>k8s.io/utils</code> replacement. I suspect that it isn't really needed — the version will be upgraded a bit, but I'm betting it won't break the build. And, indeed, it seems to be fine:</p> <pre><code>$ go list -m k8s.io/utils
k8s.io/utils v0.0.0-20200912215256-4140de9c8800 =&gt; k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89
$ go mod edit -dropreplace=k8s.io/utils
$ go mod tidy
$ go build -o /dev/null .
$ go list -m k8s.io/utils
k8s.io/utils v0.0.0-20200912215256-4140de9c8800
</code></pre> <hr /> <p>So now I've successfully reduced that big nest of <code>replace</code> directives from <code>kfserving</code> to just a couple of <code>exclude</code> directives targeting retracted versions of <code>k8s.io/client-go</code>. Here's my whole <code>go.mod</code> file at the end of this process:</p> <pre><code>module example.com/m

go 1.16

exclude (
    k8s.io/client-go v11.0.1-0.20190805182717-6502b5e7b1b5+incompatible
    k8s.io/client-go v9.0.0+incompatible
)

require github.com/kubeflow/kfserving v0.5.1
</code></pre>
<p>I am trying to run Kubernetes and trying to use <code>sudo kubeadm init</code>. Swap is off as recommended by the official doc.</p> <p>The issue is it displays the warning:</p> <pre><code>[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
    - No internet connection is available so the kubelet cannot pull or find the following control plane images:
        - k8s.gcr.io/kube-apiserver-amd64:v1.11.2
        - k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
        - k8s.gcr.io/kube-scheduler-amd64:v1.11.2
        - k8s.gcr.io/etcd-amd64:3.2.18
        - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images are downloaded locally and cached.

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
</code></pre> <p>The docker version I am using is <code>Docker version 17.03.2-ce, build f5ec1e2</code>. I'm using Ubuntu 16.04 LTS 64-bit.</p> <p>The docker images command shows the following images:</p> <pre><code>REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-apiserver-amd64            v1.11.2   821507941e9c   3 weeks ago    187 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.2   38521457c799   3 weeks ago    155 MB
k8s.gcr.io/kube-proxy-amd64                v1.11.2   46a3cd725628   3 weeks ago    97.8 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.2   37a1403e6c1a   3 weeks ago    56.8 MB
k8s.gcr.io/coredns                         1.1.3     b3b94275d97c   3 months ago   45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18    b8df3b177be2   4 months ago   219 MB
k8s.gcr.io/pause                           3.1       da86e6ba6ca1   8 months ago   742 kB
</code></pre> <p>Full logs can be found here : <a href="https://pastebin.com/T5V0taE3" rel="noreferrer">https://pastebin.com/T5V0taE3</a></p> <p>I didn't find any solution on the internet.</p> <p><strong>EDIT:</strong></p> <p><em>docker ps -a</em> output:</p> <pre><code>ubuntu@ubuntu-HP-Pavilion-15-Notebook-PC:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS
</code></pre> <p><em>journalctl -xeu kubelet</em> output:</p> <pre><code>journalctl -xeu kubelet
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Sep 01 10:40:05 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: Started kubelet: T
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-d Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-d Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: F0901 10:40:06. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: M Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: U Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: F lines 788-810/810 (END) -- Subject: Unit kubelet.service has finished shutting down -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kubelet.service has finished shutting down. Sep 01 10:40:05 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: Started kubelet: The Kubernetes Node Agent. -- Subject: Unit kubelet.service has finished start-up -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit kubelet.service has finished starting up. -- -- The start-up result is done. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-driver has been deprecated, This parameter should be set via the Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-driver has been deprecated, This parameter should be set via the Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.117131 9107 server.go:408] Version: v1.11.2 Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.117406 9107 plugins.go:97] No cloud provider specified. 
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.121192 9107 certificate_store.go:131] Loading cert/key pair Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.145720 9107 server.go:648] --cgroups-per-qos enabled, but -- Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: F0901 10:40:06.146074 9107 server.go:262] failed to run Kubelet: Running wi Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Unit entered failed state. Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Failed with result 'exit-code'. ~ PORTS NAMES </code></pre> <p>Any help/suggestion/comment would be appreciated.</p>
<p>I faced a similar issue recently. The problem was a cgroup driver mismatch: the kubelet's cgroup driver was set to systemd, while Docker's was set to cgroupfs. So I created <code>/etc/docker/daemon.json</code> and added the following:</p> <pre><code>{ &quot;exec-opts&quot;: [&quot;native.cgroupdriver=systemd&quot;] } </code></pre> <p>Then:</p> <pre><code> sudo systemctl daemon-reload sudo systemctl restart docker sudo systemctl restart kubelet </code></pre> <p>Finally, run <code>kubeadm init</code> or <code>kubeadm join</code> again.</p>
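One pitfall worth guarding against: a JSON syntax error in <code>/etc/docker/daemon.json</code> will prevent the Docker daemon from starting at all, so it pays to validate the file before restarting. A minimal sketch, demonstrated on a temporary copy (on a real node you would point the check at <code>/etc/docker/daemon.json</code>):

```shell
# Write the cgroup-driver config to a scratch file (stand-in for /etc/docker/daemon.json)
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Validate the JSON before restarting Docker; a malformed file keeps the daemon down
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
```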
<p>I have uploaded my image on ACR. When I try to deploy it using a <code>deployment.yaml</code> with <code>kubectl</code> commands, the <code>kubectl get pods</code> command shows <code>ErrImageNeverPull</code> in the pods.</p> <p>Also, I am not using minikube. Is it necessary to use minikube for this? I am a beginner in azure/kubernetes.</p> <p>I've also used <code>imagePullPolicy: Never</code> in the yaml file. It's not working even without this and shows <code>ImagePullBackOff</code>.</p>
<p>As <a href="https://stackoverflow.com/users/16077085/payal-jindal">Payal Jindal</a> mentioned in the comment:</p> <blockquote> <p>It worked fine. There was a problem with my docker installation.</p> </blockquote> <p>Problem is now resolved. The way forward is to set the image pull policy to <code>IfNotPresent</code> or <code>Always</code>.</p> <pre><code>spec: containers: - imagePullPolicy: Always </code></pre>
<p>Since a couple of days ago, and without any change in the environment, one of the clusters running Kubernetes 1.19.9 on-prem has been showing some errors regarding kubelet certificates.</p> <p>A node is in NON-READY state due to an expired certificate. Investigating a bit, I've found out that the CSRs are in pending state. I can approve them manually, but no certificate is issued at all.</p> <p>I've tried to rejoin those nodes to the cluster, but I face the same situation with the CSR approval.</p> <p>Example:</p> <pre><code>NAME AGE SIGNERNAME REQUESTOR CONDITION csr-4dc9x 3m28s kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-ctrl-prod-1 Pending csr-4xljn 18m kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-wk-stage-9 Pending csr-6jdmg 3m19s kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-wk-stage-6 Pending csr-9lr8n 18m kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-wk-stage-6 Pending csr-g2pjt 3m35s kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-ctrl-prod-2 Pending </code></pre> <p>CSR example:</p> <pre><code>apiVersion: certificates.k8s.io/v1 kind: CertificateSigningRequest metadata: creationTimestamp: &quot;2021-08-08T10:10:19Z&quot; generateName: csr- managedFields: - apiVersion: certificates.k8s.io/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:spec: f:request: {} f:signerName: {} f:usages: {} manager: kubelet operation: Update time: &quot;2021-08-08T10:10:19Z&quot; name: csr-4dc9x resourceVersion: &quot;775314577&quot; selfLink: /apis/certificates.k8s.io/v1/certificatesigningrequests/csr-4dc9x uid: 8c51be15-4ec4-4dc7-8a7a-486e27c74607 spec: groups: - system:nodes - system:authenticated request:
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlIN01JR2lBZ0VBTUVBeEZUQVRCZ05WQkFvVERITjVjM1JsYlRwdWIyUmxjekVuTUNVR0ExVUVBeE1lYzNsegpkR1Z0T201dlpHVTZkbTB0YXpoekxXTjBjbXd0Y0hKdlpDMHhNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBECkFRY0RRZ0FFazNESFh2cTloVkZxZzB3bW5VeWd6Z3VGdmFRdDZFUkFCcHcrUmhRNHFCRlRqdkxTSGo3ZUxVK1oKT3JGaThaOGpYUjZqRE5nekVpUkxRQTloS1pxR0c2QUFNQW9HQ0NxR1NNNDlCQU1DQTBnQU1FVUNJUURObFJBcAphT0hFZWRteENDajZiK2tLMWJrNjVYVDc0aC9Nd1VCenVDSnBrUUlnU2F0U0Z3Rkp5ekNQaWtFZTRKQys0QStqClVtVUVWUzhlOWZRbkdXdjROTms9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo= signerName: kubernetes.io/kube-apiserver-client-kubelet usages: - digital signature - key encipherment - client auth username: system:node:vm-k8s-ctrl-prod-1 status: {} </code></pre> <p>Did anyone face the same situation? i've checked all the certificates in the cluster and everything looks good to me.</p> <pre><code> CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED admin.conf Jun 10, 2022 22:17 UTC 306d no apiserver Jun 10, 2022 22:16 UTC 306d ca no apiserver-kubelet-client Jun 10, 2022 22:16 UTC 306d ca no controller-manager.conf Jun 10, 2022 22:17 UTC 306d no front-proxy-client Jun 10, 2022 22:16 UTC 306d front-proxy-ca no scheduler.conf Jun 10, 2022 22:17 UTC 306d no CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED ca Apr 07, 2029 17:39 UTC 7y no front-proxy-ca Apr 07, 2029 17:39 UTC 7y no </code></pre> <p>Thanks in advance</p>
<p>Just in case anyone else faces this situation: the issue was a legacy kubelet configuration on the master nodes.</p> <p><a href="https://serverfault.com/questions/1065444/how-can-i-find-which-kubernetes-certificate-has-expired">https://serverfault.com/questions/1065444/how-can-i-find-which-kubernetes-certificate-has-expired</a></p> <p>Manually reconfiguring kubelet.conf on the controllers and restarting the control plane fixed the issue.</p> <p>Thanks</p>
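To find which certificate actually expired, <code>openssl x509 -noout -enddate</code> is a quick check. The sketch below demonstrates it against a throwaway self-signed certificate; on a real node the file to inspect is typically <code>/var/lib/kubelet/pki/kubelet-client-current.pem</code> (the path can vary by distribution):

```shell
# Generate a throwaway certificate just to demonstrate the check
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 30 -subj "/CN=demo" 2>/dev/null

# The same command, pointed at the kubelet's client cert on a node, shows its expiry
openssl x509 -in /tmp/demo-cert.pem -noout -enddate
```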
<p>Currently struggling to write a Terraform module to deploy a Helm chart, I was getting:</p> <pre><code>│ Error: YAML parse error on external-dns/templates/serviceaccount.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go struct field .metadata.annotations of type string </code></pre> <p>with a resource definition like this one:</p> <pre><code>resource &quot;helm_release&quot; &quot;external_dns&quot; { name = &quot;externaldns&quot; namespace = var.external_dns_namespace repository = &quot;https://charts.bitnami.com/bitnami&quot; chart = &quot;external-dns&quot; version = &quot;5.3.0&quot; set { name = &quot;serviceAccount.annotations.eks.amazonaws.com/role-arn&quot; value = resource.aws_iam_role.external_dns_role.arn } } </code></pre> <p>Then I found a public repository with a similar module: <a href="https://github.com/lablabs/terraform-aws-eks-external-dns/blob/master/main.tf" rel="nofollow noreferrer">https://github.com/lablabs/terraform-aws-eks-external-dns/blob/master/main.tf</a>, and saw that it has the last parameter defined as</p> <pre><code> set { name = &quot;serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn&quot; value = aws_iam_role.external_dns[0].arn } </code></pre> <p>I tried adding those double backslashes (<code>\\</code>) and everything worked! Now I would like to understand... why are these double backslashes required before the last two &quot;.&quot; but not before the other two?</p> <p>I understand that, in Terraform, a double backslash means a literal backslash... but I cannot understand why it would be required there.</p> <p>This is what I am trying to put into the Terraform module. <a href="https://i.stack.imgur.com/07tQS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/07tQS.png" alt="enter image description here" /></a></p> <p>Any help with an explanation for this issue will be appreciated :)</p>
<p>In <code>name = &quot;serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn&quot;</code> you want to define three groups, separated by dots:</p> <p><code>serviceAccount</code> -&gt; <code>annotations</code> -&gt; <code>eks.amazonaws.com/role-arn</code></p> <p>Since your third group happens to contain dots, you correctly found that you must escape the dot characters in order to preserve the proper structure.</p> <p>Without escaping, the string would instead mean</p> <p><code>serviceAccount</code> -&gt; <code>annotations</code> -&gt; <code>eks</code> -&gt; <code>amazonaws</code> -&gt; <code>com/role-arn</code>, which makes no sense here.</p>
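To make the grouping concrete, the escaped <code>set</code> name renders into the chart's values as the following structure (the ARN value here is only a placeholder):

```yaml
serviceAccount:
  annotations:
    # the escaped dots survive as literal dots inside a single annotation key
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/external-dns
```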
<p>I'm seeing liveness and readiness probes failing in my Kubernetes setup.</p> <p>Below I'm attaching screenshots of the pod events, the pod's resource limits, and the probe configurations.</p> <p>Can anyone help me with this issue and explain why this can happen, and when we see status code 503 in probes?</p> <p>Thank you in advance!</p> <p><strong>Below screenshot is from the events section of the pod</strong></p> <p><a href="https://i.stack.imgur.com/5UfLI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5UfLI.png" alt="screenshot is from events of pod " /></a></p> <p><strong>Configuration of the liveness and readiness probes</strong></p> <p><a href="https://i.stack.imgur.com/xAxzF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xAxzF.png" alt="configurations for liveliness and readiness probe " /></a></p> <p><strong>Resource limits of the pod</strong></p> <p><a href="https://i.stack.imgur.com/UYIiE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UYIiE.png" alt="resource limits set to pod " /></a></p> <p><strong>FYI</strong>: I've tried changing initialDelaySeconds to 180, which didn't help, and I don't see any issue with service startup; from the pod logs I can see it is not taking much time to start.</p>
<p>Community wiki answer for better visibility. As <a href="https://stackoverflow.com/users/10753078/ni-kill12">ni_kill12</a> mentioned in the comment, the issue is solved:</p> <blockquote> <p>I got the issue: what is happening is that one of the components is going OUT_OF_STATE, and because of that the readiness and liveness probes are failing for me. I got to know about this by hitting the liveness probe request manually. <a href="https://github.com/alexandreroman/spring-k8s-probes-demo/blob/master/README.md" rel="nofollow noreferrer">This link</a> helped me to understand probes.</p> </blockquote>
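One related pattern: when a single aggregated health endpoint also checks dependencies, one failing dependency returns 503 and trips liveness and readiness at once, restarting the pod unnecessarily. If the app is Spring Boot (as in the linked demo), pointing the two probes at separate health groups avoids that. A hedged sketch: the paths assume Spring Boot 2.3+ with probe health groups enabled, and the port is illustrative:

```yaml
livenessProbe:
  httpGet:
    path: /actuator/health/liveness   # process-only check
    port: 8080
readinessProbe:
  httpGet:
    path: /actuator/health/readiness  # includes dependency checks
    port: 8080
```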
<p>I have multiple applications that run using Node.js and pg (node-postgres).</p> <p>The issue I have is that every app is getting the error &quot;Connection terminated unexpectedly&quot; every hour. Here is the error:</p> <pre><code>&gt; node ./dist/app.js App Started events.js:174 throw er; // Unhandled 'error' event ^ Error: Connection terminated unexpectedly at Connection.con.once (/app/node_modules/pg/lib/client.js:255:9) at Object.onceWrapper (events.js:286:20) at Connection.emit (events.js:198:13) at Socket.&lt;anonymous&gt; (/app/node_modules/pg/lib/connection.js:139:10) at Socket.emit (events.js:203:15) at endReadableNT (_stream_readable.js:1145:12) at process._tickCallback (internal/process/next_tick.js:63:19) Emitted 'error' event at: at connectedErrorHandler (/app/node_modules/pg/lib/client.js:202:10) at Connection.con.once (/app/node_modules/pg/lib/client.js:272:9) at Object.onceWrapper (events.js:286:20) [... lines matching original stack trace ...] at process._tickCallback (internal/process/next_tick.js:63:19) </code></pre> <p>Here is how I connect my Client to the database:</p> <p>Database.ts:</p> <pre><code>import { Client, QueryResult } from 'pg'; export default class DatabaseModule { private client: Client; constructor() { this.client = new Client({ connectionString: process.env.DATABASE_URL }); } public init(): Promise&lt;any&gt; { return this.client.connect(); } } </code></pre> <p>app.ts:</p> <pre><code>Promise.all([ express.init(), database.init() ]) .then(() =&gt; { console.log(&quot;App Started&quot;); [load routes...]; }) .catch((error) =&gt; { console.error(error); process.exit(1); }); </code></pre> <p>Everything works fine locally but not in production.</p> <p>In production we are running every app as a microservice in Google Kubernetes Engine. Is there any config in K8s that may cause this connection loss every hour?
(this error happens whether the Client is idle or not)</p> <pre><code>NAME READY STATUS RESTARTS AGE my-service-57c9f99767-wnm47 2/2 Running 96 4d </code></pre> <p>As you can see, my app has 96 restarts: 4 days * 24 hours = 96 =&gt; an error every hour that crashes the pod.</p> <p>We are using a PostgreSQL server hosted by Google Cloud SQL, and every app in K8s has access to it through a local address.</p> <p>EDIT:</p> <p>I just found this in the Google Cloud SQL documentation: <code>WebSockets are always available to your application without any additional setup. Once a WebSockets connection is established, it will time out after one hour.</code></p> <p>So the error was generated by the usage of pg.Client with a persistent connection to the SQL server. I will try to use pg.Pool(). Here is the explanation of why I should use a pool instead of a client: <a href="https://stackoverflow.com/a/48751665/12052533">https://stackoverflow.com/a/48751665/12052533</a></p>
<p>I found the problem:</p> <p>In the Google Cloud SQL documentation: <code>WebSockets are always available to your application without any additional setup. Once a WebSockets connection is established, it will time out after one hour.</code></p> <p>The error was generated by the usage of pg.Client() because I kept a persistent connection to my database, which is a bad practice. A client should connect to the database and end its connection after it finishes executing a query.</p> <p>I will use pg.Pool() instead, as it manages clients and is better suited to multiple requests. After checking out a client, I just have to release it back to the pool.</p> <p>I removed database.init() and modified the database.query() function as follows:</p> <pre class="lang-js prettyprint-override"><code> public query(command: string, args?: Array&lt;any&gt;): Promise&lt;QueryResult&gt; { if (args === undefined) args = []; return this.pool.connect() .then((client: Client) =&gt; { return this.queryClient(client, command, args) }) .then((result: QueryResult) =&gt; { return result; }) .catch((error) =&gt; { throw error; }); } private queryClient(client: Client, command: string, args?: Array&lt;any&gt;): Promise&lt;QueryResult&gt; { return client.query(command, args) .then((result: QueryResult) =&gt; { client.release(); return result; }).catch((error) =&gt; { client.release(); throw error; }) } </code></pre>
<ul> <li>with istio 1.4.6</li> </ul> <p>I configured Kubernetes using resources such as <code>service</code>, <code>deployment</code>. I also configured <code>gateway</code>, <code>virtual service</code>, and <code>destination rules</code> to implement circuit breakers.</p> <p>The composition diagram is as follows. (<strong>number of Pod's replica is two. &amp; I operate only one version of app.</strong>)</p> <p><a href="https://i.stack.imgur.com/sGidq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sGidq.png" alt="enter image description here" /></a></p> <p>I wrote <code>VirtualServices</code> and <code>DestinationRules</code> to use circuit breakers</p> <h3>VirtualService</h3> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews-virtual-service spec: gateways: - reviews-istio-gateway hosts: - reviews http: - route: - destination: host: reviews-service port: number: 80 </code></pre> <h3>DestinationRules</h3> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: reviews-destination-rule spec: host: reviews trafficPolicy: loadBalancer: simple: ROUND_ROBIN outlierDetection: baseEjectionTime: 1m consecutiveErrors: 1 interval: 1s maxEjectionPercent: 100 </code></pre> <p>Here, I expect that if more than one error occurs in <code>reviews-app</code>, all pods will be excluded from the load balancing list for a minute. Therefore, I expected the circuit breaking to work as below.</p> <p><a href="https://i.stack.imgur.com/gVkys.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gVkys.png" alt="enter image description here" /></a></p> <p>However, contrary to expectations, <strong>circuit breakers did not work</strong>, and <strong>error logs were continuously being recorded</strong> in reviews-app.</p> <p><strong>Why isn't the circuit breaker working?</strong></p>
<p>I guess the problem is not about circuit breaking, but about the usage of the Virtual Service and the Destination Rule.</p> <p>For example, when using a Virtual Service with a Gateway, its host should probably be a public host, like <code>acme.io</code>.</p> <p>The host of the Destination Rule should probably be that of the Kubernetes Service.</p>
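As a sketch of what that separation could look like (the external hostname here is illustrative, not taken from the question, and the outlier settings are copied from the question's Destination Rule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-virtual-service
spec:
  gateways:
  - reviews-istio-gateway
  hosts:
  - reviews.example.com        # external host served by the gateway
  http:
  - route:
    - destination:
        host: reviews-service  # Kubernetes Service name
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination-rule
spec:
  host: reviews-service        # must match the route destination above
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 1m
      maxEjectionPercent: 100
```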
<p>We have an Airflow (Celery executor) setup that can run tasks on our K8s cluster. The tasks that use KubernetesPodOperator can access K8s secrets <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#how-to-use-cluster-configmaps-secrets-and-volumes-with-pod" rel="nofollow noreferrer">as described in the documentation</a>. The rest of the tasks run on Celery workers outside of the K8s cluster.</p> <p>How can tasks using other operators (e.g., SqlSensor) access the same K8s secrets as the tasks using KubernetesPodOperator?</p>
<p>If you need to pass secrets between MWAA and K8s, I would suggest using an external secrets manager.</p> <p>AWS Secrets Manager can be used natively by both MWAA and K8s: <a href="https://docs.aws.amazon.com/mwaa/latest/userguide/connections-secrets-manager.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/mwaa/latest/userguide/connections-secrets-manager.html</a> <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html</a></p> <p>HashiCorp Vault is another option.</p> <p>One thing to note: do not pass secrets as variables to the KubernetesPodOperator.</p> <hr /> <p>Our solution was to actually run MWAA tasks on AWS Fargate using the ECS operator:</p> <p><a href="https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/ecs.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/ecs.html</a></p> <p>Secrets are shared per ECS task definition and used as environment variables. Developers have a simple YAML configuration file for defining ECS tasks and their associated secrets from AWS Secrets Manager, and Terraform manages the Task Definitions based on this YAML. It works really smoothly for our particular use case.</p>
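For the Secrets Manager route, the wiring is an Airflow secrets backend configuration. A sketch of the options involved (the prefixes are conventions you choose yourself, and in MWAA these are set as configuration overrides rather than by editing <code>airflow.cfg</code> directly):

```ini
[secrets]
backend = airflow.providers.amazon.aws.secrets.secrets_manager.SecretsManagerBackend
backend_kwargs = {"connections_prefix": "airflow/connections", "variables_prefix": "airflow/variables"}
```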
<p>I am trying to use Persistent Volumes in AWS EKS.</p> <p>I followed all the steps mentioned from the below link</p> <p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/" rel="noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/</a></p> <p>When I deployed sample pods to check PVCs are created fine, I am getting below error.</p> <pre><code>Warning FailedScheduling 2m6s (x265 over 44h) default-scheduler error while running &quot;VolumeBinding&quot; prebind plugin for pod &quot;app&quot;: Failed to bind volumes: timed out waiting for the condition </code></pre> <p>Checked further persistent volume claims</p> <pre><code>kubectl describe pvc Warning ProvisioningFailed 62s (x7 over 3m5s) ebs.csi.aws.com_ebs-csi-controller-7cb88dcffc-vkvdk_2ae45c3c-b4f2-4200-a5d4-cfdb35a5f0db failed to provision volume with StorageClass &quot;ebs-sc&quot;: rpc error: code = DeadlineExceeded desc = context deadline exceeded </code></pre> <p>From looking at the logs of ebs csi controller</p> <pre><code>I0721 10:44:10.049138 1 event.go:282] Event(v1.ObjectReference{Kind:&quot;PersistentVolumeClaim&quot;, Namespace:&quot;avengers&quot;, Name:&quot;ebs-claim&quot;, UID:&quot;88a9684c-1be7-412f-8616-b662eff692df&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;19320908&quot;, FieldPath:&quot;&quot;}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass &quot;ebs-sc&quot;: rpc error: code = Internal desc = WebIdentityErr: failed to retrieve credentials caused by: InvalidIdentityToken: No OpenIDConnect provider found in your account for https://oidc.eks.us-east-2.amazonaws.com/id/XXXXXXXXXXXX34FDCBXXXXX status code: 400, request id: 90b430ba-6999-41d8-ac15-13f5ec73def2 I0721 10:44:45.207103 1 controller.go:1332] provision &quot;default/ebs-claim&quot; class &quot;ebs-sc&quot;: started I0721 10:44:45.207406 1 event.go:282] 
Event(v1.ObjectReference{Kind:&quot;PersistentVolumeClaim&quot;, Namespace:&quot;default&quot;, Name:&quot;ebs-claim&quot;, UID:&quot;3922ab97-483a-486a-bbbd-57875e506021&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;19875393&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim &quot;default/ebs-claim&quot; I0721 10:44:55.207594 1 controller.go:1106] Temporary error received, adding PVC 3922ab97-483a-486a-bbbd-57875e51 to claims in progress W0721 10:44:55.207639 1 controller.go:958] Retrying syncing claim &quot;3922ab97-483a-486a-bbbd-57875e506021&quot;, failure 9 E0721 10:44:55.207664 1 controller.go:981] error syncing claim &quot;3922ab97-483a-486a-bbbd-57875e506021&quot;: failed to provision volume with StorageClass &quot;ebs-sc&quot;: rpc error: code = DeadlineExceeded desc = context deadline exceeded I0721 10:44:55.207859 1 event.go:282] Event(v1.ObjectReference{Kind:&quot;PersistentVolumeClaim&quot;, Namespace:&quot;default&quot;, Name:&quot;ebs-claim&quot;, UID:&quot;3922ab97-483a-486a-bbbd-57875e506021&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;19875393&quot;, FieldPath:&quot;&quot;}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass &quot;ebs-sc&quot;: rpc error: code = DeadlineExceeded desc = context </code></pre> <p>What did I miss here ?</p> <p>Any help will be appreciated.</p>
<p>From the error log, pick the ID to search for, i.e. <code>XXXXXXXXXXXX34FDCBXXXXX</code>:</p> <pre><code>caused by: InvalidIdentityToken: No OpenIDConnect provider found in your account for https://oidc.eks.us-east-2.amazonaws.com/id /XXXXXXXXXXXX34FDCBXXXXX status code: 400, request id: 90b430ba-6999-41d8-ac15-13f5ec73def2 </code></pre> <h2>Search the OpenID Connect providers list</h2> <p>The IAM OIDC (OpenID Connect) identity provider didn't exist for the current EKS cluster:</p> <pre><code>aws iam list-open-id-connect-providers | grep XXXXXXXXXXXX34FDCBXXXXX </code></pre> <p>returned nothing.</p> <h2>Enable OIDC IAM</h2> <p>Enable OIDC IAM in the existing EKS cluster:</p> <pre><code>$ eksctl utils associate-iam-oidc-provider --cluster=eks-lab --region us-east-2 --approve </code></pre> <p>I then deleted everything and deployed again, and now it works fine.</p> <p><a href="https://eksctl.io/usage/iamserviceaccounts/" rel="nofollow noreferrer">References</a>.</p>
<p>I need to exclude specific host from the EnvoyFilter that looks like this:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: authn-filter spec: workloadLabels: istio: ingressgateway filters: - filterConfig: httpService: serverUri: uri: http://authservice.$(namespace).svc.cluster.local cluster: outbound|8080||authservice.$(namespace).svc.cluster.local failureModeAllow: false timeout: 10s authorizationRequest: allowedHeaders: patterns: - exact: "cookie" - exact: "X-Auth-Token" authorizationResponse: allowedUpstreamHeaders: patterns: - exact: "kubeflow-userid" statusOnError: code: GatewayTimeout filterName: envoy.ext_authz filterType: HTTP insertPosition: index: FIRST listenerMatch: listenerType: GATEWAY </code></pre> <p>The problem is that the filter applies to the default istio ingress gateway which affects all traffic that is coming through that gateway, i would like to have some hosts that could be excluded / whitelisted from the filter.</p>
<p>I found my answer <a href="https://stackoverflow.com/questions/63448933/istio-exclude-service-from-ext-auth">here</a>. The question asks to exclude some paths, but I was successful with hosts as well. This is what I used:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: EnvoyFilter metadata: name: bypass-authn namespace: istio-system spec: workloadSelector: labels: istio: ingressgateway configPatches: - applyTo: HTTP_ROUTE match: routeConfiguration: vhost: name: subdomain.example.org:80 # &lt;== your host goes here patch: operation: MERGE value: name: envoy.ext_authz_disabled typed_per_filter_config: envoy.ext_authz: &quot;@type&quot;: type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthzPerRoute disabled: true </code></pre> <p>More information in <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/#EnvoyFilter-RouteConfigurationMatch-VirtualHostMatch" rel="nofollow noreferrer">Istio documentation</a>. Specifically, the documentation specifies that you should also put into the <code>name:</code> field the port, but I think it should work without it as well.</p>
<p>I am practicing <strong>Kubernetes</strong> through <strong>Minikube</strong> on my <strong>AWS EC2</strong> instance. As part of that, I created a <strong>deployment</strong> and exposed it through a <strong>NodePort</strong> service, then checked it with:</p> <pre><code>curl http://&lt;node-ip&gt;:&lt;service-port&gt; </code></pre> <p>on the EC2 machine, which worked fine. But when I hit the same URL in a browser, it gave me:</p> <pre><code>This site can't be reached </code></pre> <p>Can anyone tell me what the problem is and how I can access this?</p> <p>Thank you.</p> <p>This is my Deployment YAML file:</p> <pre> <code> apiVersion: apps/v1 kind: Deployment metadata: name: myfirstdeployment labels: app: web spec: replicas: 1 selector: matchLabels: app: web template: metadata: name: myfirstpod labels: app: web spec: containers: - name: myfirstpod-1 image: nginx command: ["sleep","3600"] ports: - containerPort: 80 </code> </pre> <p>This is my Service YAML file:</p> <pre> <code> apiVersion: v1 kind: Service metadata: name: myfirstservice spec: selector: app: web ports: - targetPort: 80 #target container's port port: 80 #service port nodePort: 30030 #node port that we access to type: NodePort </code> </pre>
<p>I strongly recommend going through the official tutorial showing the <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="noreferrer">Accessing apps</a> options in Minikube:</p> <blockquote> <p>How to access applications running within minikube There are two major categories of services in Kubernetes:</p> <ul> <li>NodePort</li> <li>LoadBalancer</li> </ul> <p>minikube supports either. Read on!</p> </blockquote> <p>There you will find how to use both, the <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#nodeport-access" rel="noreferrer">NodePort access</a>:</p> <blockquote> <p>A <code>NodePort</code> service is the most basic way to get external traffic directly to your service. <code>NodePort</code>, as the name implies, opens a specific port, and any traffic that is sent to this port is forwarded to the service.</p> </blockquote> <p>Notice that you have to use <code>minikube ip</code> here.</p> <p>And also the <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#loadbalancer-access" rel="noreferrer">LoadBalancer access</a>:</p> <blockquote> <p>A <code>LoadBalancer</code> service is the standard way to expose a service to the internet. With this method, each service gets its own IP address.</p> </blockquote> <p>This method uses the <code>minikube tunnel</code> command.</p> <p>You can also use <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="noreferrer">these docs</a> as a supplement.</p>
<p>I get the log error below for a pod, even though I updated the Kubernetes orchestrator, clusters, and nodes to v1.21.2 (before updating, they were v1.20.7). I found a reference saying that from v1.21, selfLink is completely removed. Why am I getting this error? How can I resolve it?</p> <p><strong>error log for kubectl logs (podname)</strong></p> <pre><code>...
2021-08-10T03:07:19.535Z  INFO  setup  starting manager
2021-08-10T03:07:19.536Z  INFO  controller-runtime.manager  starting metrics server  {&quot;path&quot;: &quot;/metrics&quot;}
E0810 03:07:19.550636 1 event.go:247] Could not construct reference to: '&amp;v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:&quot;&quot;, APIVersion:&quot;&quot;}, ObjectMeta:v1.ObjectMeta{Name:&quot;controller-leader-election-helper&quot;, GenerateName:&quot;&quot;, Namespace:&quot;kubestone-system&quot;, SelfLink:&quot;&quot;, UID:&quot;b01651ed-7d54-4815-a047-57b16d26cfdf&quot;, ResourceVersion:&quot;65956&quot;, Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764161639, loc:(*time.Location)(0x21639e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{&quot;control-plane.alpha.kubernetes.io/leader&quot;:&quot;{\&quot;holderIdentity\&quot;:\&quot;kubestone-controller-manager-f467b7c47-cv7ws_1305bc36-f988-11eb-81fc-a20dfb9758a2\&quot;,\&quot;leaseDurationSeconds\&quot;:15,\&quot;acquireTime\&quot;:\&quot;2021-08-10T03:07:19Z\&quot;,\&quot;renewTime\&quot;:\&quot;2021-08-10T03:07:19Z\&quot;,\&quot;leaderTransitions\&quot;:0}&quot;}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:&quot;&quot;, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:&quot;manager&quot;, Operation:&quot;Update&quot;, APIVersion:&quot;v1&quot;, Time:(*v1.Time)(0xc0000956a0), Fields:(*v1.Fields)(nil)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'kubestone-controller-manager-f467b7c47-cv7ws_1305bc36-f988-11eb-81fc-a20dfb9758a2 became leader'
2021-08-10T03:07:21.636Z  INFO  controller-runtime.controller  Starting Controller  {&quot;controller&quot;: &quot;kafkabench&quot;}
...
</code></pre> <p><strong>kubectl get nodes to show the Kubernetes version: the node that the pod is scheduled on is aks-default-41152893-vmss000000</strong></p> <pre><code>PS C:\Users\user&gt; kubectl get nodes -A
NAME                              STATUS   ROLES   AGE     VERSION
aks-default-41152893-vmss000000   Ready    agent   5h32m   v1.21.2
aks-default-41152893-vmss000001   Ready    agent   5h29m   v1.21.2
aksnpwi000000                     Ready    agent   5h32m   v1.21.2
aksnpwi000001                     Ready    agent   5h26m   v1.21.2
aksnpwi000002                     Ready    agent   5h19m   v1.21.2
</code></pre> <p><strong>kubectl describe pods (pod name: kubestone-controller-manager-f467b7c47-cv7ws)</strong></p> <pre><code>PS C:\Users\user&gt; kubectl describe pods kubestone-controller-manager-f467b7c47-cv7ws -n kubestone-system
Name:         kubestone-controller-manager-f467b7c47-cv7ws
Namespace:    kubestone-system
Priority:     0
Node:         aks-default-41152893-vmss000000/10.240.0.4
Start Time:   Mon, 09 Aug 2021 23:07:16 -0400
Labels:       control-plane=controller-manager
              pod-template-hash=f467b7c47
Annotations:  &lt;none&gt;
Status:       Running
IP:           10.240.0.21
IPs:
  IP:           10.240.0.21
Controlled By:  ReplicaSet/kubestone-controller-manager-f467b7c47
Containers:
  manager:
    Container ID:  containerd://01594df678a2c1d7163c913eff33881edf02e39633b1a4b51dcf5fb769d0bc1e
    Image:         user2/imagename
    Image ID:      docker.io/user2/imagename@sha256:aa049f135931192630ceda014d7a24306442582dbeeaa36ede48e6599b6135e1
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Command:
      /manager
    Args:
      --enable-leader-election
    State:          Running
      Started:      Mon, 09 Aug 2021 23:07:18 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  30Mi
    Requests:
      cpu:        100m
      memory:     20Mi
    Environment:  &lt;none&gt;
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jvjjh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-jvjjh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       &lt;nil&gt;
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  23m   default-scheduler  Successfully assigned kubestone-system/kubestone-controller-manager-f467b7c47-cv7ws to aks-default-41152893-vmss000000
  Normal  Pulling    23m   kubelet            Pulling image &quot;user2/imagename&quot;
  Normal  Pulled     23m   kubelet            Successfully pulled image &quot;user2/imagename&quot; in 354.899039ms
  Normal  Created    23m   kubelet            Created container manager
  Normal  Started    23m   kubelet            Started container manager
</code></pre>
<p>Kubestone has had no releases since 2019, so it needs to upgrade its copy of the Kubernetes Go client, which still expects the <code>selfLink</code> field that was removed in v1.21. That said, this appears to only affect the event recorder system, so it is probably not a huge deal.</p>
<p>There are a few processes I'm struggling to wrap my brain around when it comes to multi-stage <code>Dockerfile</code>.</p> <p>Using this as an example, I have a couple questions below it:</p> <pre><code># Dockerfile
# Uses multi-stage builds requiring Docker 17.05 or higher
# See https://docs.docker.com/develop/develop-images/multistage-build/

# Creating a python base with shared environment variables
FROM python:3.8.1-slim as python-base
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    POETRY_HOME=&quot;/opt/poetry&quot; \
    POETRY_VIRTUALENVS_IN_PROJECT=true \
    POETRY_NO_INTERACTION=1 \
    PYSETUP_PATH=&quot;/opt/pysetup&quot; \
    VENV_PATH=&quot;/opt/pysetup/.venv&quot;

ENV PATH=&quot;$POETRY_HOME/bin:$VENV_PATH/bin:$PATH&quot;

# builder-base is used to build dependencies
FROM python-base as builder-base
RUN apt-get update \
    &amp;&amp; apt-get install --no-install-recommends -y \
        curl \
        build-essential

# Install Poetry - respects $POETRY_VERSION &amp; $POETRY_HOME
ENV POETRY_VERSION=1.0.5
RUN curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python

# We copy our Python requirements here to cache them
# and install only runtime deps using poetry
WORKDIR $PYSETUP_PATH
COPY ./poetry.lock ./pyproject.toml ./
RUN poetry install --no-dev  # respects

# 'development' stage installs all dev deps and can be used to develop code.
# For example using docker-compose to mount local volume under /app
FROM python-base as development
ENV FASTAPI_ENV=development

# Copying poetry and venv into image
COPY --from=builder-base $POETRY_HOME $POETRY_HOME
COPY --from=builder-base $PYSETUP_PATH $PYSETUP_PATH

# Copying in our entrypoint
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

# venv already has runtime deps installed we get a quicker install
WORKDIR $PYSETUP_PATH
RUN poetry install

WORKDIR /app
COPY . .

EXPOSE 8000
ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD [&quot;uvicorn&quot;, &quot;--reload&quot;, &quot;--host=0.0.0.0&quot;, &quot;--port=8000&quot;, &quot;main:app&quot;]

# 'lint' stage runs black and isort
# running in check mode means build will fail if any linting errors occur
FROM development AS lint
RUN black --config ./pyproject.toml --check app tests
RUN isort --settings-path ./pyproject.toml --recursive --check-only
CMD [&quot;tail&quot;, &quot;-f&quot;, &quot;/dev/null&quot;]

# 'test' stage runs our unit tests with pytest and
# coverage. Build will fail if test coverage is under 95%
FROM development AS test
RUN coverage run --rcfile ./pyproject.toml -m pytest ./tests
RUN coverage report --fail-under 95

# 'production' stage uses the clean 'python-base' stage and copies
# in only our runtime deps that were installed in the 'builder-base'
FROM python-base as production
ENV FASTAPI_ENV=production

COPY --from=builder-base $VENV_PATH $VENV_PATH
COPY ./docker/gunicorn_conf.py /gunicorn_conf.py

COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

COPY ./app /app
WORKDIR /app

ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD [ &quot;gunicorn&quot;, &quot;--worker-class uvicorn.workers.UvicornWorker&quot;, &quot;--config /gunicorn_conf.py&quot;, &quot;main:app&quot;]
</code></pre> <p>The questions I have:</p> <ol> <li><p>Are you <code>docker build ...</code> this entire image and then just <code>docker run ... --target=&lt;stage&gt;</code> to run a specific stage (<code>development</code>, <code>test</code>, <code>lint</code>, <code>production</code>, etc.) or are you only building and running the specific stages you need (e.g. <code>docker build ... -t test --target=test &amp;&amp; docker run test ...</code>)?</p> <p>I want to say it isn't the former because you end up with a bloated image with build kits and what not... 
correct?</p> </li> <li><p>When it comes to local Kubernetes development (<code>minikube</code>, <code>skaffold</code>, <code>devspace</code>, etc.) and running unit tests, are you supposed to refer to these stages in the <code>Dockerfile</code> (<code>devspace</code> Hooks or something) or to use native test tools in the container (e.g. <code>npm test</code>, <code>./manage.py test</code>, etc.)?</p> </li> </ol> <p>Thanks for clearing these questions up.</p>
<p>To answer from a less DevSpace-y perspective and a more general Docker-y one (with no disrespect to Lukas!):</p> <h2>Question 1</h2> <h3>Breakdown</h3> <blockquote> <p>❌ Are you docker build ... this entire image and then just docker run ... --target= to run a specific stage</p> </blockquote> <p>You're close in your understanding and managed to outline the approach in the second part of your query:</p> <blockquote> <p>✅ or are you only building and running the specific stages you need (e.g. docker build ... -t test --target=test &amp;&amp; docker run test ...)?</p> </blockquote> <p>The <code>--target</code> option is not present in the <code>docker run</code> command, which can be seen when calling <code>docker run --help</code>.</p> <blockquote> <p>I want to say it isn't the former because you end up with a bloated image with build kits and what not... correct?</p> </blockquote> <p>Yes, it's impossible to do it the first way: when <code>--target</code> is not specified, only the final stage is incorporated into your image. This is a great benefit as it cuts down the final size of your container, while allowing you to use multiple directives.</p> <h3>Details and Examples</h3> <p>It is a flag that you can pass in at <em>build time</em> so that you can choose which layers to build specifically. It's a pretty helpful directive that can be used in a few different ways. There's a decent blog post <a href="https://www.docker.com/blog/advanced-dockerfiles-faster-builds-and-smaller-images-using-buildkit-and-multistage-builds/" rel="nofollow noreferrer">here</a> talking about the new features that came out with multi-stage builds (<code>--target</code> is one of them).</p> <p>For example, I've had a decent amount of success building projects in CI utilising different stages and targets. The following is pseudo-code, but hopefully the context is applied:</p> <pre><code># Dockerfile
FROM python as base

FROM base as dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

FROM dependencies as test
COPY src/ src/
COPY test/ test/

FROM dependencies as publish
COPY src/ src/
CMD ...
</code></pre> <p>A Dockerfile like this would enable you to do something like this in your CI workflow — once again, pseudo-code-esque:</p> <pre><code>docker build . -t my-app:unit-test --target test
docker run my-app:unit-test pyunit ...
docker build . -t my-app:latest
docker push ...
</code></pre> <p>In some scenarios, it can be quite advantageous to have this fine-grained control over what gets built when, and it's quite the boon to be able to run images that comprise only a few stages without having built the entire app.</p> <p>The key here is that there's no expectation that you need to use <code>--target</code>, but it <em>can</em> be used to solve particular problems.</p> <h2>Question 2</h2> <blockquote> <p>When it comes to local Kubernetes development (minikube, skaffold, devspace, etc.) and running unit tests, are you supposed referring to these stages in the Dockerfile (devspace Hooks or something) or using native test tools in the container (e.g. npm test, ./manage.py test, etc.)?</p> </blockquote> <p>Lukas covers a devspace-specific approach very well, but ultimately you can test however you like. Using devspace to make it easier to run (and remember to run) tests certainly sounds like a good idea. Whatever tool you use to enable an easier workflow will likely still use <code>npm test</code> etc. under the hood.</p> <p>If you wish to call <code>npm test</code> outside of a container that's fine, and if you wish to call it in a container, that's also fine. The solution to your problem will always change depending on your landscape. CI/CD helps to standardise on external factors and provides a uniform means to ensure testing is performed and deployments are auditable.</p> <p>Hope that helps in any way shape or form 👍</p>
<p>I am using Kustomize to manage my Kubernetes project with a StatefulSet that deploys a PostgreSQL cluster with three pods. I am working on Vagrant/VirtualBox, so no dynamic provisioning of PVs exists. For this reason, I have my own <code>pv.yaml</code> containing the manifest to deploy these 3 PVs.</p> <p>Then I have a <code>kustomization.yaml</code> file like this:</p> <pre><code>namespace: ibm-cfdb
bases:
  - ../../bases
resources:
  - pv.yaml
</code></pre> <p>the folder <code>../../bases</code> contains the file to deploy the StatefulSet. When I run: <code>kubectl apply -k kustomize/</code> everything is correctly deployed. PVs are created before the StatefulSet that contains a <code>volumeClaimTemplates</code> that declares the Claim for these PVs.</p> <p>The problem is that when I try to remove the deployment with the command: <code>kubectl delete -k kustomize/</code>, the removal of the PVs is executed (it seems I don't have control over the order). I suppose these PVs cannot be deleted because Claims use them. Then the StatefulSet removal gets stuck.</p> <p>What is the best approach to manage PV static provisioning with Kustomize?</p>
<p>You encountered an interesting problem regarding StatefulSet and PVC removal. There is a <a href="https://github.com/kubernetes/kubernetes/issues/55045" rel="nofollow noreferrer">discussion</a> on whether PVCs created by the StatefulSet should be deleted when deleting the corresponding StatefulSet. We recently received <a href="https://github.com/kubernetes/kubernetes/issues/55045#issuecomment-824937931" rel="nofollow noreferrer">information</a> that the feature to autodelete the PVCs created by a StatefulSet will probably be available in the <a href="https://github.com/kubernetes/kubernetes/issues/55045#issuecomment-884298382" rel="nofollow noreferrer">1.23 release</a>. According to the <a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-apps/1847-autoremove-statefulset-pvcs#changes-required" rel="nofollow noreferrer">feature documentation</a>, this will allow us to specify whether the VolumeClaimTemplate PVCs will be deleted after deleting their StatefulSet. I suspect that with this feature it'll be easy to delete your StatefulSet along with its PVCs and PVs.</p> <p>For now, you can consider moving the file with the PV to another directory and managing it separately. However, I will propose another solution which is kind of a workaround, but you may be interested.</p> <p>Basically, we can use the <code>-o</code> flag with the <code>kustomize build</code> command. This creates one file per resource, which gives us more control over resource creation.</p> <hr /> <p>I will give you an example to illustrate how this can work.</p> <p>Suppose I have a similar environment to you:</p> <pre><code>$ tree .
├── base
│   ├── kustomization.yaml
│   └── statefulset.yaml
└── overlays
    └── dev
        ├── kustomization.yaml
        └── pv.yaml

$ cat overlays/dev/kustomization.yaml
bases:
- ../../base
resources:
- pv.yaml
</code></pre> <p>Now let's create a directory where our manifest files generated by <code>kustomize</code> will be stored:</p> <pre><code>$ mkdir generated_manifests
</code></pre> <p>Then we can check if the command <code>kustomize build overlays/dev -o generated_manifests</code> works as expected. First we'll apply the generated manifests (it'll create the <code>web</code> StatefulSet and the <code>pv0003</code> PersistentVolume):</p> <pre><code>$ kustomize build overlays/dev -o generated_manifests &amp;&amp; kubectl apply -Rf generated_manifests/
statefulset.apps/web created
persistentvolume/pv0003 created
</code></pre> <p>As you can see, the appropriate manifest files have been created in the <code>generated_manifests</code> directory:</p> <pre><code>$ ls generated_manifests/
apps_v1_statefulset_web.yaml  v1_persistentvolume_pv0003.yaml
</code></pre> <p>Finally, we can try to delete only the <code>web</code> StatefulSet:</p> <pre><code>$ kustomize build overlays/dev -o generated_manifests &amp;&amp; kubectl delete -f generated_manifests/apps_v1_statefulset_web.yaml
statefulset.apps &quot;web&quot; deleted
</code></pre> <hr /> <p>I would also like to mention that <code>kustomize</code> has a feature like &quot;ignore pv.yaml&quot;, but it will also be used when creating resources, not just when removing. This is known as a delete patch and a good example can be found <a href="https://stackoverflow.com/a/66074466/14801225">here</a>.</p>
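<p>With per-resource files in place, an ordered teardown also becomes possible — delete the workload before the volumes, so nothing blocks on a bound claim. A sketch, assuming the generated file names above (the PVC label selector is an assumption; adjust it to whatever labels your <code>volumeClaimTemplates</code> produce):</p> <pre><code># Generate one manifest file per resource
kustomize build overlays/dev -o generated_manifests

# Tear down in dependency order: StatefulSet first, then claims, then volumes
kubectl delete -f generated_manifests/apps_v1_statefulset_web.yaml
kubectl delete pvc -l app=web   # claims created by volumeClaimTemplates (label assumed)
kubectl delete -f generated_manifests/v1_persistentvolume_pv0003.yaml
</code></pre>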
<p>I'm setting up Keyvault integration with k8s in Azure. I can mount a volume with secrets using the csi driver in Azure using Managed identities. I can verify the secret is mounted by exec-ing into the pod and cat-ing out the secrets. However, now I want to expose the secrets as environment variables, but I'm unclear how to do that. Below is the following <code>SecretProviderClass</code> and <code>Pod</code> I have deployed.</p> <p>spc-keyvault.yaml:</p> <pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: spc-keyvault
spec:
  provider: azure
  secretObjects:
  - secretName: foobar-secret
    data:
    - key: foobar
      objectName: foobar
    type: Opaque
  parameters:
    keyvaultName: &quot;$keyvault&quot;
    usePodIdentity: &quot;true&quot;
    useVMManagedIdentity: &quot;false&quot;
    userAssignedIdentityID: &quot;&quot;
    cloudName: &quot;&quot;
    objects: |
      array:
        - |
          objectName: foobar
          objectType: secret
          objectVersion: &quot;&quot;
    resourceGroup: &quot;$resourceGroup&quot;
    subscriptionId: &quot;$subId&quot;
    tenantId: &quot;$tenantId&quot;
</code></pre> <p>pod.yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: inject-secrets-from-akv
  labels:
    aadpodidbinding: azure-pod-identity-binding-selector
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: SECRET
      valueFrom:
        secretKeyRef:
          name: foobar-secret
          key: foobar
    volumeMounts:
    - name: secrets-store-inline
      mountPath: &quot;/mnt/secrets-store&quot;
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: spc-keyvault
</code></pre> <p>Here's the error when I deploy my Pod manifest:</p> <pre><code>Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  Failed   58m (x227 over 108m)    kubelet  Error: secret &quot;foobar-secret&quot; not found
  Normal   Pulling  3m51s (x470 over 108m)  kubelet  Pulling image &quot;nginx&quot;
</code></pre> <p>The error is pretty obvious, there is no secret named: 
<code>foobar-secret</code>. I thought the csi driver automatically created the required secrets. So after some research and digging through documentation and source code, I found a suggestion to add <code>nodePublishSecretRef</code> in the pod yaml, set to the AAD client id and client secret.[1, 2] However, since I'm using managed identity, I'm unsure how to do this.</p> <p>Has anyone got this working using Managed Identity who can provide any insight? Or do I need to create a startup script (via ConfigMap?) to populate the volume mount as env variables?</p> <ol> <li><a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/service-principal-mode/" rel="nofollow noreferrer">https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/service-principal-mode/</a></li> <li><a href="https://medium.com/swlh/integrate-azure-key-vault-with-azure-kubernetes-service-1a8740429bea" rel="nofollow noreferrer">https://medium.com/swlh/integrate-azure-key-vault-with-azure-kubernetes-service-1a8740429bea</a></li> </ol>
<p>After more digging around, I found this bit to tickle in the helm chart to get the csi driver to create k8s secrets:</p> <pre><code>secrets-store-csi-driver.syncSecret.enabled = true </code></pre> <p>Now I have k8s secrets. Figured I would share for anyone else wanting this functionality.</p>
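<p>For anyone installing via Helm, that value can be set at install/upgrade time. A sketch — the release name is a placeholder, and I'm assuming the Azure provider chart, where the driver is a subchart and the value is nested under the <code>secrets-store-csi-driver</code> key exactly as shown above:</p> <pre><code>helm upgrade --install csi \
  csi-secrets-store-provider-azure/csi-secrets-store-provider-azure \
  --namespace kube-system \
  --set secrets-store-csi-driver.syncSecret.enabled=true
</code></pre> <p>The synced Kubernetes Secret is only created once a pod actually mounts the CSI volume, which is why the <code>secretKeyRef</code> and the volume mount need to live in the same pod spec.</p>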
<p>I have a node in a Google Cloud Platform Kubernetes public cluster. When I make an HTTP request from my application to an external website, nginx on that website shows an IP address different from the IP address of my Kubernetes cluster. I can't figure out where that IP address comes from. I'm not using NAT in GCP.</p>
<p><strong>I will just add some official terminology to put some light on <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview" rel="nofollow noreferrer">GKE networking</a> before providing an answer;</strong></p> <p>Let's have a look at some <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#terminology_related_to_ip_addresses_in_kubernetes" rel="nofollow noreferrer">GKE networking terminology</a>:</p> <blockquote> <p>The Kubernetes networking model relies heavily on IP addresses. Services, Pods, containers, and nodes communicate using IP addresses and ports. Kubernetes provides different types of load balancing to direct traffic to the correct Pods. All of these mechanisms are described in more detail later in this topic. Keep the following terms in mind as you read:</p> <p><strong>ClusterIP:</strong> The IP address assigned to a Service. In other documents, it may be called the &quot;Cluster IP&quot;. This address is stable for the lifetime of the Service, as discussed in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#services" rel="nofollow noreferrer">Services</a> section in this topic.</p> <p><strong>Pod IP:</strong> The IP address assigned to a given Pod. 
This is ephemeral, as discussed in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#pods" rel="nofollow noreferrer">Pods</a> section in this topic.</p> <p><strong>Node IP:</strong> The IP address assigned to a given node.</p> </blockquote> <p>Additionally you may have a look at the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#introduction" rel="nofollow noreferrer">exposing your service</a> documentation which may give you even more insight.</p> <p><strong>And to support the fact that <a href="https://stackoverflow.com/questions/68727769/gcp-cluster-ip-address-is-not-the-same-as-requests-remoteaddr#comment121465828_68727769">you got your node's IP</a></strong> - <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent" rel="nofollow noreferrer">GKE uses an IP masquerading</a>:</p> <blockquote> <p>IP masquerading is a form of network address translation (NAT) used to perform many-to-one IP address translations, which allows multiple clients to access a destination using a single IP address. A GKE cluster uses IP masquerading so that destinations outside of the cluster only receive packets from node IP addresses instead of Pod IP addresses.</p> </blockquote>
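<p>A quick way to see the masquerading in action (a sketch — any external echo service that reports the caller's address will do here):</p> <pre><code># The node's external address...
kubectl get nodes -o wide          # note the EXTERNAL-IP column

# ...is what an outside server sees when a pod makes an outbound request:
kubectl run tmp --rm -it --image=curlimages/curl --restart=Never \
  -- curl -s https://ifconfig.me
</code></pre> <p>The address printed by the second command should match the external IP of the node the pod landed on, not the pod's own IP.</p>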
<p>When we run the command <code>kubectl apply -f &lt;file.yaml&gt;</code>, what happens in the background?</p>
<p>In a nutshell, Kubernetes works by having a bunch of &quot;object&quot; or &quot;resource&quot; definitions (like Deployments, ConfigMaps and such), and continuously trying to match the state of the system to these definitions.</p> <p>For example, if there is a Deployment definition, but there is no pod running for that, K8s will keep trying to spin up such a pod. This might fail due to many reasons - there might not be a node with enough free resources available, the image might not be found, and so on, but as soon as all obstacles are cleared, it will start the pod without further action from you.</p> <p><code>kubectl apply -f file.yaml</code> then simply reads the content of that file and adds it to its internal store of definitions (where &quot;add&quot; can mean a complex situation if there are overwrites, merges or deletions involved; see the <code>man kubectl-apply</code> documentation for details).</p>
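<p>The "continuously match the system to the definitions" idea can be sketched in a few lines of Python. This is purely illustrative — it is not Kubernetes code — but it shows the split between <code>apply</code> (which only records desired state) and the control loop (which later closes the gap):</p>

```python
def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired`.

    `desired` maps app name -> {"replicas": n}; `actual` maps
    app name -> number of replicas currently running.
    """
    actions = []
    for name, spec in desired.items():
        running = actual.get(name, 0)
        if running < spec["replicas"]:
            actions.append(("start", name, spec["replicas"] - running))
        elif running > spec["replicas"]:
            actions.append(("stop", name, running - spec["replicas"]))
    return actions

desired_state = {}

def apply(manifest):
    # "kubectl apply" only updates the desired-state store; the
    # reconcile loop does the actual work later, asynchronously.
    desired_state[manifest["name"]] = {"replicas": manifest["replicas"]}

apply({"name": "my-app", "replicas": 3})
actual_state = {"my-app": 1}  # only one replica is currently running
print(reconcile(desired_state, actual_state))  # [('start', 'my-app', 2)]
```

<p>In the real system, controllers run this kind of loop continuously, so a failed action (no free node, image not found) is simply retried on the next pass.</p>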
<p>I have access to a cluster with a lot of nodes. I am running my Nextflow workflow using this command:</p> <pre><code>./nextflow kuberun user/repo -c nextflow.config -profile kubernetes -v my_pvc:/mounted_path -with-report _report.html -with-trace _trace
</code></pre> <p>I would like to run my Nextflow workflow on a specific set of nodes. I have already labeled my nodes of interest:</p> <pre><code>kubectl label nodes node1 disktype=my_experiment
kubectl label nodes node2 disktype=my_experiment
kubectl label nodes node3 disktype=my_experiment
</code></pre> <p>I don't understand from the Nextflow and Kubernetes documentation how to schedule my workflow with the processes split between my nodes of interest.</p> <p>I understand how to do it with Kubernetes alone: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/</a></p> <p>But not with the <code>nextflow kuberun</code> command.</p> <p>Any help is very appreciated, thank you!</p>
<p>Use one or more <a href="https://www.nextflow.io/docs/latest/config.html#process-selectors" rel="nofollow noreferrer">process selectors</a> and the <a href="https://www.nextflow.io/docs/latest/process.html#pod" rel="nofollow noreferrer">pod directive</a> to select the nodes using a pod label. For example, the following could be added to your 'kubernetes' profile:</p> <pre><code>process {
    withName: my_process {
        pod {
            nodeSelector = 'disktype=my_experiment'
        }
    }

    ...
}
</code></pre>
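<p>Under the hood, this should translate to a standard Kubernetes <code>nodeSelector</code> on the pods that Nextflow submits for the matching process — roughly like the sketch below (pod name, image, and container name are illustrative, not what Nextflow actually generates):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nf-my-process
spec:
  nodeSelector:
    disktype: my_experiment   # matches the labels applied with kubectl
  containers:
  - name: nf-task
    image: your/container
</code></pre>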
<p>I want to run a docker container which uses a GPU (it runs a CNN to detect objects in a video), and then run that container on Kubernetes.</p> <p>I can run the container from Docker alone without problems, but when I try to run the container from Kubernetes it fails to find the GPU.</p> <p>I run it using this command:</p> <pre><code>kubectl exec -it namepod /bin/bash
</code></pre> <p>This is the problem that I get:</p> <pre><code>kubectl exec -it tym-python-5bb7fcf76b-4c9z6 /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@tym-python-5bb7fcf76b-4c9z6:/opt# cd servicio/
root@tym-python-5bb7fcf76b-4c9z6:/opt/servicio# python3 TM_Servicev2.py
 Try to load cfg: /opt/darknet/cfg/yolov4.cfg, weights: /opt/yolov4.weights, clear = 0
CUDA status Error: file: ./src/dark_cuda.c : () : line: 620 : build time: Jul 30 2021 - 14:05:34
CUDA Error: no CUDA-capable device is detected
python3: check_error: Unknown error -1979678822
root@tym-python-5bb7fcf76b-4c9z6:/opt/servicio#
</code></pre> <p><strong>EDIT.</strong> I followed all the steps in the Nvidia docker 2 guide and downloaded the Nvidia plugin for Kubernetes.</p> <p>However, when I deploy it, the pod stays as &quot;Pending&quot; and never actually starts. I don't get an error anymore, but it never starts. The pod appears like this:</p> <pre><code>gpu-pod   0/1   Pending   0   3m19s
</code></pre> <p><strong>EDIT 2.</strong></p> <p>I ended up reinstalling everything and now my pod appears as Completed but not Running, like this:</p> <pre><code>default   gpu-operator-test   0/1   Completed   0   62m
</code></pre> <p>Answering Wiktor. 
when I run this command:</p> <pre><code>kubectl describe pod gpu-operator-test
</code></pre> <p>I get:</p> <pre><code>Name:         gpu-operator-test
Namespace:    default
Priority:     0
Node:         pdi-mc/192.168.0.15
Start Time:   Mon, 09 Aug 2021 12:09:51 -0500
Labels:       &lt;none&gt;
Annotations:  cni.projectcalico.org/containerID: 968e49d27fb3d86ed7e70769953279271b675177e188d52d45d7c4926bcdfbb2
              cni.projectcalico.org/podIP:
              cni.projectcalico.org/podIPs:
Status:       Succeeded
IP:           192.168.10.81
IPs:
  IP:  192.168.10.81
Containers:
  cuda-vector-add:
    Container ID:   docker://d49545fad730b2ec3ea81a45a85a2fef323edc82e29339cd3603f122abde9cef
    Image:          nvidia/samples:vectoradd-cuda10.2
    Image ID:       docker-pullable://nvidia/samples@sha256:4593078cdb8e786d35566faa2b84da1123acea42f0d4099e84e2af0448724af1
    Port:           &lt;none&gt;
    Host Port:      &lt;none&gt;
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 09 Aug 2021 12:10:29 -0500
      Finished:     Mon, 09 Aug 2021 12:10:30 -0500
    Ready:          False
    Restart Count:  0
    Limits:
      nvidia.com/gpu:  1
    Requests:
      nvidia.com/gpu:  1
    Environment:       &lt;none&gt;
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9ktgq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-9ktgq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       &lt;nil&gt;
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              &lt;none&gt;
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      &lt;none&gt;
</code></pre> <p>I'm using this configuration file to create the pod:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gpu-operator-test
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    image: &quot;nvidia/samples:vectoradd-cuda10.2&quot;
    resources:
      limits:
        nvidia.com/gpu: 1
</code></pre>
<p>Addressing two topics here:</p> <ol> <li>The error you saw at the beginning:</li> </ol> <hr /> <pre><code>kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
</code></pre> <p>This means that you tried to use a deprecated version of the <code>kubectl exec</code> command. The proper syntax is:</p> <pre><code>$ kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...]
</code></pre> <p>See <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer">here</a> for more details.</p> <ol start="2"> <li>According to the <a href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/getting-started.html" rel="nofollow noreferrer">official docs</a>, the <code>gpu-operator-test</code> pod should run to completion: <a href="https://i.stack.imgur.com/FrQXC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FrQXC.png" alt="enter image description here" /></a></li> </ol> <p>You can see that the pod's status is <code>Succeeded</code> and also:</p> <hr /> <pre><code>    State:          Terminated
      Reason:       Completed
      Exit Code:    0
</code></pre> <p><code>Exit Code: 0</code> means that the specified container command completed successfully.</p> <p>More details can be found in the <a href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/getting-started.html" rel="nofollow noreferrer">official docs</a>.</p>
<p>Let's assume I have an existing GKE cluster that contains all my applications. They were all deployed using different methods. Now I want to deploy some resources to that cluster using Terraform. The trouble here is that terraform doesn't see it in its state file, so it can't interact with it. Another problem is that even if I get that cluster into my state file, terraform doesn't see all of the created resources in that cluster. This could lead to some conflicts, e.g. I'm trying to deploy two resources with the same name. Is there a way to solve this problem or do I just have to deal with the reality of my existence and create a new cluster for every new project that I deploy with terraform?</p>
<p>You can use the terraform <a href="https://www.terraform.io/docs/cli/commands/import.html" rel="nofollow noreferrer">import command</a> to import your existing GKE cluster into Terraform state. Before running it, you need a matching Terraform configuration for your cluster.</p> <p>Example of the import command:</p> <pre><code>terraform import google_container_cluster.&lt;TF_RESOURCE_NAME&gt; projects/&lt;PROJECT_ID&gt;/locations/&lt;YOUR-CLUSTER-ZONE&gt;/clusters/&lt;CLUSTER_NAME&gt; </code></pre> <p>for a Terraform configuration like:</p> <pre><code>resource &quot;google_container_cluster&quot; &quot;&lt;TF_RESOURCE_NAME&gt;&quot; { name = &quot;&lt;CLUSTER_NAME&gt;&quot; location = &quot;&lt;YOUR-CLUSTER-ZONE&gt;&quot; } </code></pre> <p>CLUSTER_NAME is the name displayed in your GKE clusters list in the Google Cloud Console.</p> <p>You also need to import the cluster's node pool(s) in the same way, using the Terraform <code>google_container_node_pool</code> resource.</p>
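<p>The node pool import follows the same pattern. A hypothetical sketch (the resource and pool names are placeholders, and the real configuration must match the attributes of the existing pool for <code>terraform plan</code> to come out clean):</p>

```hcl
# Matching configuration for the existing node pool:
resource "google_container_node_pool" "<TF_RESOURCE_NAME>" {
  name     = "<NODE_POOL_NAME>"
  location = "<YOUR-CLUSTER-ZONE>"
  cluster  = "<CLUSTER_NAME>"
}

# Import command -- the ID format is
# projects/.../locations/.../clusters/.../nodePools/...:
#
#   terraform import google_container_node_pool.<TF_RESOURCE_NAME> \
#     projects/<PROJECT_ID>/locations/<YOUR-CLUSTER-ZONE>/clusters/<CLUSTER_NAME>/nodePools/<NODE_POOL_NAME>
```

<p>After importing, run <code>terraform plan</code>; if it still proposes changes, adjust the configuration until it matches the live resources.</p>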
<p>I am using <code>@kubernetes/client-node</code> to access Kubernetes server API. I can get all the Pods from default using:</p> <pre><code>const k8s = require('@kubernetes/client-node'); const kc = new k8s.KubeConfig(); kc.loadFromDefault(); const k8sApi = kc.makeApiClient(k8s.CoreV1Api); k8sApi.listNamespace().then((res) =&gt; { // or using listAllNamespacedPods console.log(res.body); }); </code></pre> <p>and the body of the response from the above code looks like this:</p> <p><a href="https://i.stack.imgur.com/wV30C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wV30C.png" alt="Response From loadFromDefault" /></a></p> <p>but when I am using <code>kc.loadFromFile('pathToKubeConfigFile')</code>, it is unable to read it (the <code>config.yaml</code> which is saved inside <code>.kube</code> folder). I have checked all the paths to certificates and keys files inside this file and they are correct.</p> <pre><code>import { KubeConfig, CoreV1Api } from '@kubernetes/client-node'; const kc = new KubeConfig(); kc.loadFromFile('./config/k8sConfig.yaml'); const k8sApi = kc.makeApiClient(CoreV1Api); k8sApi.listPodForAllNamespaces().then((res) =&gt; { console.log(res.body); }); </code></pre> <p>and I need to return all the active Kubernetes Jobs (or the pods for that). Can anyone please suggest me how to achieve it?</p>
<p>As the problem has already been resolved in the comments section, I decided to provide a Community Wiki answer just for better visibility to other community members. I would also like to describe how to return all active Kubernetes Jobs using the <a href="https://github.com/kubernetes-client/javascript#javascript-kubernetes-client-information" rel="nofollow noreferrer">Javascript Kubernetes Client</a></p> <h5>Using the loadFromFile() method.</h5> <p>When using the <code>loadFromFile()</code> method, it's important to make sure that the <code>kubeconfig</code> file is correct. In case the <code>kubeconfig</code> file is invalid, we may get various error messages such as:</p> <pre><code>Error: ENOENT: no such file or directory, open '.kube/confi' </code></pre> <p>or</p> <pre><code>Error: unable to verify the first certificate </code></pre> <p>The exact error message depends on what is incorrect in the <code>kubeconfig</code> file.</p> <h5>List all/active Kubernetes Jobs.</h5> <p>To list all Kubernetes Jobs, we can use the <a href="https://github.com/kubernetes-client/javascript/blob/1d5d4660f99807e3d3b02dd0984d0b980f279ff9/src/gen/api/batchV1Api.ts#L950" rel="nofollow noreferrer">listJobForAllNamespaces()</a> method.</p> <p>I've created the <code>listAllJobs.js</code> script to demonstrate how it works:</p> <pre><code>$ cat listAllJobs.js const k8s = require('@kubernetes/client-node') const kc = new k8s.KubeConfig() kc.loadFromFile('.kube/config') const k8sApi = kc.makeApiClient(k8s.BatchV1Api); k8sApi.listJobForAllNamespaces().then((res) =&gt; { res.body.items.forEach(job =&gt; console.log(job.metadata.name)); }); $ kubectl get jobs NAME COMPLETIONS DURATION AGE job-1 0/1 3s 3s job-2 1/1 10s 48m job-3 1/1 10s 48m $ node listAllJobs.js job-1 job-2 job-3 </code></pre> <p>To list only active Jobs, we need to slightly modify the <code>res.body.items.forEach(job =&gt; console.log(job.metadata.name));</code> line to check if the Job is active:</p> <pre><code>$ cat 
listActiveJobs.js const k8s = require('@kubernetes/client-node') const kc = new k8s.KubeConfig() kc.loadFromFile('.kube/config') const k8sApi = kc.makeApiClient(k8s.BatchV1Api); k8sApi.listJobForAllNamespaces().then((res) =&gt; { res.body.items.forEach(job =&gt; job.status.active &gt;= 1 &amp;&amp; console.log(job.metadata.name)); }); $ kubectl get jobs NAME COMPLETIONS job-1 0/1 job-2 1/1 job-3 1/1 $ node listActiveJobs.js job-1 </code></pre>
<p>I have a Kubernetes cluster set up using kubeadm. I installed prometheus and node-exporter on top of it based on:</p> <ul> <li><a href="https://github.com/bibinwilson/kubernetes-prometheus" rel="nofollow noreferrer">https://github.com/bibinwilson/kubernetes-prometheus</a></li> <li><a href="https://github.com/bibinwilson/kubernetes-node-exporter" rel="nofollow noreferrer">https://github.com/bibinwilson/kubernetes-node-exporter</a></li> </ul> <p>The pods seem to be running properly:</p> <pre><code> kubectl get pods --namespace=monitoring -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES node-exporter-jk2sd 1/1 Running 0 90m 192.168.5.20 work03 &lt;none&gt; &lt;none&gt; node-exporter-jldrx 1/1 Running 0 90m 192.168.5.17 work04 &lt;none&gt; &lt;none&gt; node-exporter-mgtld 1/1 Running 0 90m 192.168.5.15 work01 &lt;none&gt; &lt;none&gt; node-exporter-tq7bx 1/1 Running 0 90m 192.168.5.41 work02 &lt;none&gt; &lt;none&gt; prometheus-deployment-5d79b5f65b-tkpd2 1/1 Running 0 91m 192.168.5.40 work02 &lt;none&gt; &lt;none&gt; </code></pre> <p>I can see the endpoints, as well:</p> <pre><code>kubectl get endpoints -n monitoring NAME ENDPOINTS AGE node-exporter 192.168.5.15:9100,192.168.5.17:9100,192.168.5.20:9100 + 1 more... 5m3s </code></pre> <p>I also did: <code>kubectl port-forward prometheus-deployment-5d79b5f65b-tkpd2 8080:9090 -n monitoring</code> and when I access the prometheus web UI &gt; Status &gt; Targets, I don't find node-exporters there. 
When I start typing a query for a metric reported by node-exporter, it doesn't automatically show up in the query editor.</p> <p>Logs coming from the prometheus pod seem to have a lot of errors:</p> <pre><code>kubectl logs prometheus-deployment-5d79b5f65b-tkpd2 -n monitoring level=info ts=2021-08-11T16:24:21.743Z caller=main.go:428 msg=&quot;Starting Prometheus&quot; version=&quot;(version=2.29.1, branch=HEAD, revision=dcb07e8eac34b5ea37cd229545000b857f1c1637)&quot; level=info ts=2021-08-11T16:24:21.743Z caller=main.go:433 build_context=&quot;(go=go1.16.7, user=root@364730518a4e, date=20210811-14:48:27)&quot; level=info ts=2021-08-11T16:24:21.743Z caller=main.go:434 host_details=&quot;(Linux 5.4.0-70-generic #78-Ubuntu SMP Fri Mar 19 13:29:52 UTC 2021 x86_64 prometheus-deployment-5d79b5f65b-tkpd2 (none))&quot; level=info ts=2021-08-11T16:24:21.743Z caller=main.go:435 fd_limits=&quot;(soft=1048576, hard=1048576)&quot; level=info ts=2021-08-11T16:24:21.743Z caller=main.go:436 vm_limits=&quot;(soft=unlimited, hard=unlimited)&quot; level=info ts=2021-08-11T16:24:21.745Z caller=web.go:541 component=web msg=&quot;Start listening for connections&quot; address=0.0.0.0:9090 level=info ts=2021-08-11T16:24:21.745Z caller=main.go:812 msg=&quot;Starting TSDB ...&quot; level=info ts=2021-08-11T16:24:21.748Z caller=tls_config.go:191 component=web msg=&quot;TLS is disabled.&quot; http2=false level=info ts=2021-08-11T16:24:21.753Z caller=head.go:815 component=tsdb msg=&quot;Replaying on-disk memory mappable chunks if any&quot; level=info ts=2021-08-11T16:24:21.753Z caller=head.go:829 component=tsdb msg=&quot;On-disk memory mappable chunks replay completed&quot; duration=4.15µs level=info ts=2021-08-11T16:24:21.753Z caller=head.go:835 component=tsdb msg=&quot;Replaying WAL, this may take a while&quot; level=info ts=2021-08-11T16:24:21.754Z caller=head.go:892 component=tsdb msg=&quot;WAL segment loaded&quot; segment=0 maxSegment=0 level=info ts=2021-08-11T16:24:21.754Z 
caller=head.go:898 component=tsdb msg=&quot;WAL replay completed&quot; checkpoint_replay_duration=75.316µs wal_replay_duration=451.769µs total_replay_duration=566.051µs level=info ts=2021-08-11T16:24:21.756Z caller=main.go:839 fs_type=EXT4_SUPER_MAGIC level=info ts=2021-08-11T16:24:21.756Z caller=main.go:842 msg=&quot;TSDB started&quot; level=info ts=2021-08-11T16:24:21.756Z caller=main.go:969 msg=&quot;Loading configuration file&quot; filename=/etc/prometheus/prometheus.yml level=info ts=2021-08-11T16:24:21.757Z caller=kubernetes.go:282 component=&quot;discovery manager scrape&quot; discovery=kubernetes msg=&quot;Using pod service account via in-cluster config&quot; level=info ts=2021-08-11T16:24:21.759Z caller=kubernetes.go:282 component=&quot;discovery manager scrape&quot; discovery=kubernetes msg=&quot;Using pod service account via in-cluster config&quot; level=info ts=2021-08-11T16:24:21.762Z caller=kubernetes.go:282 component=&quot;discovery manager scrape&quot; discovery=kubernetes msg=&quot;Using pod service account via in-cluster config&quot; level=info ts=2021-08-11T16:24:21.764Z caller=main.go:1006 msg=&quot;Completed loading of configuration file&quot; filename=/etc/prometheus/prometheus.yml totalDuration=7.940972ms db_storage=607ns remote_storage=1.251µs web_handler=283ns query_engine=694ns scrape=227.668µs scrape_sd=6.081132ms notify=27.11µs notify_sd=16.477µs rules=648.58µs level=info ts=2021-08-11T16:24:21.764Z caller=main.go:784 msg=&quot;Server is ready to receive web requests.&quot; level=error ts=2021-08-11T16:24:51.765Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:24:51.765Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth 
msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \&quot;https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:24:51.765Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get \&quot;https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:24:51.766Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:24:51.766Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: Get \&quot;https://10.96.0.1:443/api/v1/nodes?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:22.587Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get \&quot;https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:22.855Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get 
\&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:23.153Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \&quot;https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:23.261Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:23.335Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: Get \&quot;https://10.96.0.1:443/api/v1/nodes?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:54.814Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:55.282Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: Get \&quot;https://10.96.0.1:443/api/v1/nodes?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:55.516Z caller=klog.go:116 component=k8s_client_runtime 
func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get \&quot;https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:55.934Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \&quot;https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:25:56.442Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:26:30.058Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:26:30.204Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \&quot;https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:26:30.246Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\&quot;https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:26:30.879Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:26:31.479Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: Get \&quot;https://10.96.0.1:443/api/v1/nodes?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:27:09.673Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:27:09.835Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get \&quot;https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:27:10.467Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:27:11.170Z caller=klog.go:116 component=k8s_client_runtime 
func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \&quot;https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:27:12.684Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: Get \&quot;https://10.96.0.1:443/api/v1/nodes?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:27:55.324Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get \&quot;https://10.96.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:28:01.550Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:28:01.621Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \&quot;https://10.96.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:28:04.801Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get 
\&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:28:05.598Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Node: failed to list *v1.Node: Get \&quot;https://10.96.0.1:443/api/v1/nodes?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:28:57.256Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; level=error ts=2021-08-11T16:29:04.688Z caller=klog.go:116 component=k8s_client_runtime func=ErrorDepth msg=&quot;pkg/mod/k8s.io/client-go@v0.21.3/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \&quot;https://10.96.0.1:443/api/v1/pods?limit=500&amp;resourceVersion=0\&quot;: dial tcp 10.96.0.1:443: i/o timeout&quot; </code></pre> <p>Is there a way to solve this issue and make node-exporters show up in the targets?</p> <p>Version details:</p> <pre><code>kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.5&quot;, GitCommit:&quot;6b1d87acf3c8253c123756b9e61dac642678305f&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-03-18T01:10:43Z&quot;, GoVersion:&quot;go1.15.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.9&quot;, GitCommit:&quot;7a576bc3935a6b555e33346fd73ad77c925e9e4a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-07-15T20:56:38Z&quot;, GoVersion:&quot;go1.15.14&quot;, Compiler:&quot;gc&quot;, 
Platform:&quot;linux/amd64&quot;} </code></pre> <p>Edit: The cluster was set up as follows:</p> <pre><code>sudo kubeadm reset sudo rm $HOME/.kube/config sudo kubeadm init --pod-network-cidr=192.168.5.0/24 mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml </code></pre> <p>It is using flannel.</p> <p>flannel pods are running:</p> <pre><code>kube-flannel-ds-45qwf 1/1 Running 0 31h x.x.x.41 work01 &lt;none&gt; &lt;none&gt; kube-flannel-ds-4rwzj 1/1 Running 0 31h x.x.x.40 mast01 &lt;none&gt; &lt;none&gt; kube-flannel-ds-8fdtt 1/1 Running 24 31h x.x.x.43 work03 &lt;none&gt; &lt;none&gt; kube-flannel-ds-8hl5f 1/1 Running 23 31h x.x.x.44 work04 &lt;none&gt; &lt;none&gt; kube-flannel-ds-xqtrd 1/1 Running 0 31h x.x.x.42 work02 &lt;none&gt; &lt;none&gt; </code></pre>
<p>The issue is related to the SDN not working properly.</p> <p>As a general rule when troubleshooting this, we would check the SDN pods (Calico, Weave, or in this case flannel): are they healthy, are there any errors in their logs, ...</p> <p>Check the iptables (<code>iptables -nL</code>) and ipvs (<code>ipvsadm -ln</code>) configuration on the nodes.</p> <p>If you still haven't found anything, restart the SDN pods as well as kube-proxy.</p> <p>Now, in this specific case, we're not suffering from an outage: the cluster is freshly deployed, so it's likely the SDN never worked at all - though this may not be obvious with a kubeadm deployment, which ships with no pods beyond the defaults, most of which use host networking.</p> <p>The kubeadm init command mentions a pod CIDR of 192.168.5.0/24, which brings two remarks:</p> <ul> <li><p>with any SDN: the pod CIDR is a subnet that will be split into smaller subnets (usually /24 or /25), each range being statically allocated to a Node when it first joins your cluster;</p> </li> <li><p>when running the flannel SDN: the <code>--pod-network-cidr</code> argument passed to kubeadm init MUST match the subnet configured in the <code>kube-flannel-cfg</code> ConfigMap, see the <code>net-conf.json</code> key.</p> </li> </ul> <p>Though I'm unfamiliar with the process of fixing this, there seems to be an answer on ServerFault that gives some instructions, which sound right: <a href="https://serverfault.com/a/977401/293779">https://serverfault.com/a/977401/293779</a></p>
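<p>Concretely, for this cluster that means checking that the <code>net-conf.json</code> key in the <code>kube-flannel-cfg</code> ConfigMap (inspect it with <code>kubectl -n kube-system get cm kube-flannel-cfg -o yaml</code>) carries the same range that was passed to <code>--pod-network-cidr</code>. The upstream flannel manifest defaults to <code>10.244.0.0/16</code>, which does not match the <code>192.168.5.0/24</code> used here. A sketch of how the ConfigMap would have to look; note also that since flannel hands a /24 to each node by default, a /24 pod network leaves room for only a single node subnet, so a wider range such as a /16 is usually preferable:</p>

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "192.168.5.0/24",
      "Backend": {
        "Type": "vxlan"
      }
    }
```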
<p>I have a python program that runs an infinite loop, however, every once in a while the code freezes. No errors are raised or any other message that would alert me something's wrong. I was wondering if Kubernetes has any liveness probe that could possibly help catch when the code freezes so it can kill and restart that container.</p> <p>I have an idea of having the python code make a periodic log every time it completes the loop. This way I can have a liveness probe check the log file every 30 seconds or so to see if the file has been updated. If the file has not been updated after the allotted time, then its is assumed the program froze and the container is killed and restarted.</p> <p>I am currently using the following python code to test with:</p> <pre><code>#Libraries import logging import random as r from time import sleep #Global Veriables FREEZE_TIME = 60 '''Starts an infinate loop that has a 10% chance of freezing...........................................''' def main(): #Create .log file to hold logged info. logging.basicConfig(filename=&quot;freeze.log&quot;, level=logging.INFO) #Start infinate loop while True: freeze = r.randint(1, 10) #10% chance of freezing. sleep(2) logging.info('Running infinate loop...') print(&quot;Running infinate loop...&quot;) #Simulate a freeze. if freeze == 1: print(f&quot;Simulating freeze for {FREEZE_TIME} sec.&quot;) sleep(FREEZE_TIME) #Start code with main() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>If anyone could tell me how to implement this log idea or if there is a better way to do this I would be most grateful! I am currently using Kubernetes on Docker-Desktop for windows 10 if this makes a difference. Also, I am fairly new to this so if you could keep your answers to a &quot;Kubernetes for dummies&quot; level I would appreciate it.</p>
<p>A common approach to liveness probes in Kubernetes is to access an HTTP endpoint (if the application has one). Kubernetes checks whether the response status code falls into the 200-399 range (=success) or not (=failure). Running an HTTP server is not mandatory, as you can run a command or sequence of commands instead. In this case the health status is based on the exit code (0 - ok, anything else - failure).</p> <p>Given the nature of your script and the idea with the log, I would write another Python script that reads the last line of that log and parses its timestamp. Then, if the difference between the current time and the timestamp is greater than <code>[insert reasonable amount]</code>, <code>exit(1)</code>, else <code>exit(0)</code>.</p> <p>Once you have prepared the health-check script, you can enable it like this:</p> <pre class="lang-yaml prettyprint-override"><code>spec: containers: - name: my-app image: my-image livenessProbe: exec: command: # the command to run - python3 - check_health.py initialDelaySeconds: 5 # wait 5 sec after start for the log to appear periodSeconds: 5 # run every 5 seconds </code></pre> <p>The <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">documentation</a> has a detailed explanation with some great examples.</p>
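<p>A minimal sketch of such a <code>check_health.py</code>. Note one assumption: the default <code>logging.basicConfig</code> format writes no timestamps, so the application would need to configure something like <code>format='%(asctime)s %(message)s'</code> for this to work; the file path and threshold below are placeholders:</p>

```python
from datetime import datetime

LOG_FILE = "freeze.log"   # hypothetical path, shared with the application
MAX_AGE_SECONDS = 30      # must comfortably exceed one normal loop iteration


def last_entry_age(last_line, now):
    """Seconds elapsed since the timestamp at the start of a log line."""
    # An asctime prefix looks like: 2021-08-11 16:24:21,743 Running...
    stamp = " ".join(last_line.split(" ")[:2])
    logged_at = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S,%f")
    return (now - logged_at).total_seconds()


def main():
    """Return 0 (healthy) or 1 (log is stale, empty or unreadable)."""
    try:
        with open(LOG_FILE) as f:
            last_line = f.read().splitlines()[-1]
        age = last_entry_age(last_line, datetime.now())
    except (OSError, IndexError, ValueError):
        return 1  # no log yet, or an unparseable line: report unhealthy
    return 0 if age <= MAX_AGE_SECONDS else 1


# As the actual probe command, end the file with: raise SystemExit(main())
```

<p>With the probe's <code>periodSeconds: 5</code> and a 30-second threshold, a simulated 60-second freeze would be detected within roughly 35 seconds and the container restarted.</p>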
<p>I have a specific scenario where I'd like to have a deployment controlled by horizontal pod autoscaling. To handle database migrations in pods when pushing a new deployment, I followed this excellent tutorial by Andrew Lock <a href="https://andrewlock.net/deploying-asp-net-core-applications-to-kubernetes-part-8-running-database-migrations-using-jobs-and-init-containers/" rel="nofollow noreferrer">here</a>.</p> <p>In short, you must define an <code>initContainer</code> that <code>waits</code> for a Kubernetes <code>Job</code> to complete a process (like running db migrations) before the new pods can run.</p> <p>This works well, however, I'm not sure how to handle HPA after the initial deployment because if the system detects the need to add another <code>Pod</code> in my node, the <code>initContainer</code> defined in my deployment requires a <code>Job</code> to be deployed and run, but since <code>Jobs</code> are one-off processes, the pod can not initialize and run properly (a <code>ttlSecondsAfterFinished</code> attribute removes the <code>Job</code> anyways).</p> <p>How can I define an <code>initContainer</code> to run when I deploy my app so I can push my database migrations in a <code>Job</code>, but also allow HPA to control dynamically adding a <code>Pod</code> without needing an <code>initContainer</code>?</p> <p>Here's what my <code>deployment</code> looks like:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: graphql-deployment spec: replicas: 1 selector: matchLabels: app: graphql-pod template: metadata: labels: app: graphql-pod spec: initContainers: - name: wait-for-graphql-migration-job image: groundnuty/k8s-wait-for:v1.4 # This is an image that waits for a process to complete args: - job - graphql-migration-job # this job is defined next containers: - name: graphql-container image: image(graphql):tag(graphql) </code></pre> <p>The following <code>Job</code> is also deployed</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: 
name: graphql-migration-job spec: ttlSecondsAfterFinished: 30 template: spec: containers: - name: graphql-migration-container image: image(graphql):tag(graphql) command: [&quot;npm&quot;, &quot;run&quot;, &quot;migrate:reset&quot;] restartPolicy: Never </code></pre> <p>So basically what happens is:</p> <ul> <li>I deploy these two resources to my node</li> <li><code>Job</code> is initialized</li> <li><code>initContainer</code> on <code>Pod</code> waits for <code>Job</code> to complete using an image called <code>groundnuty/k8s-wait-for:v1.4</code></li> <li><code>Job</code> completes</li> <li><code>initContainer</code> completes</li> <li><code>Pod</code> initializes</li> <li>(after 30 TTL seconds) <code>Job</code> is removed from node</li> </ul> <p>(LOTS OF TRAFFIC)</p> <ul> <li>HPA realizes a need for another pod</li> <li><code>initContainer</code> for NEW pod is started, but cant run because <code>Job</code> doesn't exist</li> <li>...crashLoopBackOff</li> </ul> <p>Would love any insight on the proper way to handle this scenario!</p>
<p>There is, unfortunately, no simple Kubernetes feature to resolve your issue.</p> <p>I recommend extending your deployment tooling/scripts to separate the migration job from your deployment: during the deploy process, you first execute the migration job, wait for it to complete, and then deploy your deployment. Without the job attached, the HPA can scale your pods freely.</p> <p>There are a number of ways to achieve this:</p> <ul> <li>Have a bash (or similar) script that first executes the job, waits for it to finish, and then updates your deployment</li> <li>Leverage more complex deployment tooling like Helm, which allows you to add a <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">'pre-install hook'</a> to your job so it is executed when you deploy your application</li> </ul>
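<p>With the Helm route, for instance, the migration Job from the question could be annotated as a hook, so Helm runs it to completion before the rest of the release on every install/upgrade and it never counts as part of the release itself. A sketch (the annotations are standard Helm hook annotations; the rest mirrors the question's manifest):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: graphql-migration-job
  annotations:
    # Run this Job before the release's other resources are applied...
    "helm.sh/hook": pre-install,pre-upgrade
    # ...and delete the previous Job object before creating the new one.
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      containers:
        - name: graphql-migration-container
          image: image(graphql):tag(graphql)
          command: ["npm", "run", "migrate:reset"]
      restartPolicy: Never
```

<p>The deployment then needs no <code>initContainer</code> or <code>k8s-wait-for</code> image at all, so pods created later by the HPA start without waiting on a Job.</p>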
<p>I am currently trying to set up Airflow to work in a Kubernetes-like environment. For airflow to be useful, I need to be able to use the Git-Sync features so that the DAGs can be stored separately from the Pod, thus not being reset when the Pod downscales or restarts. I am trying to set it up with ssh.</p> <p>I have been searching for good documentation on the Airflow config or tutorials on how to set this up properly, but this has been to no avail. I would very much appreciate some help here, as I have been struggling with this for a while.</p> <p>Here is how I set the relevant config; please note I have some stand-ins for links and some information due to security reasons:</p> <pre><code>git_repo = https://&lt;git-host&gt;/scm/&lt;project-name&gt;/airflow git_branch = develop git_subpath = dags git_sync_root = /usr/local/airflow git_sync_dest = dags git_sync_depth = 1 git_sync_ssh = true git_dags_folder_mount_point = /usr/local/airflow/dags git_ssh_key_secret_name = airflow-secrets git_ssh_known_hosts_configmap_name = airflow-configmap dags_folder = /usr/local/airflow/ executor = KubernetesExecutor dags_in_image = False </code></pre> <p>Here is how I have set up my origin/config repo:</p> <pre><code>-root |-configmaps/airflow |-airflow.cfg |-airflow-configmap.yaml |-environment |-&lt;environment specific stuff&gt; |-secrets |-airflow-secrets.yaml |-ssh |-id_rsa |-id_rsa.pub |-README.md </code></pre> <p>The airflow-configmap and secrets look like this:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: airflow-secrets data: # key needs to be gitSshKey gitSshKey: &lt;base64 encoded private sshKey&gt; </code></pre> <p>and</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: airflow-configmap data: known_hosts: | https://&lt;git-host&gt;/ ssh-rsa &lt;base64 encoded public sshKey&gt; </code></pre> <p>The repo that I am trying to sync to has the Public key set as an access key and is just a folder named dags with 1 dag inside.</p> <p>My issue is that
I do not know what my issue is at this point. I have no way of knowing which parts of my config are set correctly and which are set incorrectly, and documentation on the subject is very lackluster.</p> <p>If there is more information that is required, I will be happy to provide it.</p> <p>Thank you for your time</p>
<p>What's the error you're seeing when doing this?</p> <p>A couple of things you need to consider:</p> <ul> <li><p>Create an SSH key locally using this <a href="https://docs.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent" rel="nofollow noreferrer">link</a> and:</p> <ol> <li><p>Repository Name &gt; Settings &gt; Deploy Keys &gt; Value of ssh_key.pub</p> </li> <li><p>Ensure &quot;write access&quot; is checked</p> </li> </ol> </li> <li><p>The <code>Dockerfile</code> I'm using looks like:</p> <pre><code>FROM apache/airflow:2.1.2 COPY requirements.txt . RUN python -m pip install --upgrade pip RUN pip install -r requirements.txt </code></pre> </li> <li><p>The <code>values.yaml</code> from the official Airflow Helm repository (<code>helm repo add apache-airflow https://airflow.apache.org</code>) needs the following values updated under <code>gitSync</code>:</p> <ul> <li><p><code>enabled: true</code></p> </li> <li><p><code>repo: ssh://git@github.com/username/repository-name.git</code></p> </li> <li><p><code>branch: master</code></p> </li> <li><p><code>subPaths: &quot;&quot;</code> (if DAGs are in repository root)</p> </li> <li><p><code>sshKeySecret: airflow-ssh-git-secret</code></p> </li> <li><p><code>credentialsSecret: git-credentials</code></p> </li> </ul> </li> <li><p>Export the SSH key and <code>known_hosts</code> to a Kubernetes secret for accessing the private repository</p> <pre><code>kubectl create secret generic airflow-ssh-git-secret \ --from-file=gitSshKey=/path/to/.ssh/id_ed25519 \ --from-file=known_hosts=/path/to/.ssh/known_hosts \ --from-file=id_ed25519.pub=/path/to/.ssh/id_ed25519.pub \ -n airflow </code></pre> </li> <li><p>Create and apply manifests:</p> <pre><code>apiVersion: v1 kind: Secret metadata: namespace: airflow name: airflow-ssh-git-secret data: gitSshKey: &lt;base64_encoded_private_key_id_ed25519_in_one_line&gt; </code></pre> <pre><code>apiVersion: v1 kind: Secret metadata:
name: git-credentials data: GIT_SYNC_USERNAME: base64_encoded_git_username GIT_SYNC_PASSWORD: base64_encoded_git_password </code></pre> <pre><code>apiVersion: v1 kind: ConfigMap metadata: namespace: airflow name: known-hosts data: known_hosts: | line 1 of known_host file line 2 of known_host file line 3 of known_host file ... </code></pre> </li> <li><p>Update the Airflow release</p> <p><code>helm upgrade --install airflow apache-airflow/airflow -n airflow -f values.yaml --debug</code></p> </li> <li><p>Get pods in the <em>airflow</em> namespace</p> <p><code>kubectl get pods -n airflow</code></p> </li> <li><p>The <code>airflow-scheduler-SOME-STRING</code> pod is going to have 3 containers running. View the logs of the <code>git-sync-init</code> container if you don't see the pods in a Running state</p> </li> </ul>
<p>Is it possible to aggregate 2 gauge metrics (i.e. <code>kube_pod_labels</code> and <code>kube_pod_container_resource_requests_cpu_cores</code>) so that by executing the query both resulting elements would be combined (i.e. all pod labels as the resulting element and request_cpu_cores as the value)?</p> <p>The query for the cpu request value looks like this: <code>sum by (namespace, pod) (kube_pod_container_resource_requests_cpu_cores{cluster=&quot;my-cluster&quot;})</code></p> <p>Actual result:</p> <pre><code>{namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;} 0.05 </code></pre> <p>The query for pod labels is <code>kube_pod_labels{label_foo=&quot;bar&quot;}</code></p> <p>Actual result:</p> <pre><code>kube_pod_labels{cluster=&quot;my-cluster&quot;,label_foo=&quot;bar&quot;,label_app=&quot;my-app-label&quot;,namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;,service=&quot;my-svc&quot;} 1 </code></pre> <hr /> <p>I have tried using a left join, but it seems that grouping by a given label (pod, namespace etc.) is required, as explained in <a href="https://www.robustperception.io/left-joins-in-promql" rel="nofollow noreferrer">https://www.robustperception.io/left-joins-in-promql</a>.</p> <p>With the <code>multiplication</code> operator <code>*</code> it is possible to obtain the desired result set, but the set would only contain labels specified in the <code>by</code> clause.
Example query:</p> <pre><code>group by (namespace,pod) (kube_pod_labels{label_foo=&quot;bar&quot;,cluster=&quot;my-cluster&quot;}) * sum by (namespace, pod) (kube_pod_container_resource_requests_cpu_cores{cluster=&quot;my-cluster&quot;}) </code></pre> <p>Example result:</p> <pre><code>{namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;} 0.05 </code></pre> <hr /> <p>What I am trying to obtain is a resulting set containing all labels without having to filter by an arbitrary label/value</p> <p>The <strong>desired</strong> result by joining the 2 queries should be:</p> <pre><code>{cluster=&quot;my-cluster&quot;,label_foo=&quot;bar&quot;, label_app=&quot;my-app-label&quot;,namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;,service=&quot;my-svc&quot;} 0.05 </code></pre>
<p>This can be achieved with a combination of the following:</p> <ul> <li><p><code>label_replace</code> query function: For each timeseries in v, label_replace(v instant-vector, dst_label string, replacement string, src_label string, regex string) matches the regular expression regex against the value of the label src_label. If it matches, the value of the label dst_label in the returned timeseries will be the expansion of replacement, together with the original labels in the input. Capturing groups in the regular expression can be referenced with $1, $2, etc. If the regular expression doesn't match then the timeseries is returned unchanged. <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#label_replace" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/querying/functions/#label_replace</a></p> </li> <li><p><code>multiplication *</code> operator and <code>group_left()</code> modifier: Many-to-one and one-to-many matchings refer to the case where each vector element on the &quot;one&quot;-side can match with multiple elements on the &quot;many&quot;-side. This has to be explicitly requested using the group_left or group_right modifier, where left/right determines which vector has the higher cardinality. <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/querying/operators/</a></p> </li> </ul> <p>Example query:</p> <pre><code>label_replace(kube_pod_labels{},&quot;label&quot;,&quot;$1&quot;,&quot;label_&quot;, &quot;(.+)&quot;) * on (cluster,namespace, pod) group_left() (sum by (cluster,namespace, pod) (kube_pod_container_resource_requests_cpu_cores{})) </code></pre> <p>Note that: <code>If the regular expression doesn't match then the timeseries is returned unchanged</code>. 
In this case the regular expression does not match - hence the full set of labels is returned unchanged.</p> <p>Example result:</p> <pre><code>{cluster=&quot;my-cluster&quot;,label_foo=&quot;bar&quot;, label_app=&quot;my-app-label&quot;,namespace=&quot;my-ns&quot;,pod=&quot;my-pod&quot;,service=&quot;my-svc&quot;} 0.05 </code></pre> <hr /> <p>Felipe provided a valuable hint on how to achieve this result in a comment on the original question.</p>
<p>I'm deploying several services to my local cluster (<code>minikube</code>) using the <a href="https://devspace.sh/" rel="nofollow noreferrer">DevSpace tool</a>. Once someone makes changes to one of the services and pushes the image to our private repo, I need those changes to be available locally. What I do now is completely delete the <code>minikube</code> cluster and start a new one. In this case all images with the same tags are just updated to the latest version, not a cached one.</p> <p>But I believe there is some more elegant way to overcome this. So, I need to cleanup/remove/delete outdated images from my local cluster somehow before re-deploying services there.</p> <p>Can someone point out where they are stored, and how I can review and remove them? Thanks.</p>
<p>DevSpace maintainer here. What you need is 2 things:</p> <ol> <li>Something that forces your pods to be recreated each time you run <code>devspace dev</code>. So, if you are using a Deployment or StatefulSet, you can add something like a label, e.g. containing the DevSpace built-in timestamp variable as value to your pod template.</li> <li>An <code>imagePullPolicy: Always</code> in your pod spec to ensure that Kubernetes always pulls the newest image for each newly created pod. Otherwise Kubernetes would use the already cached image.</li> </ol> <p>In combination, this could look like this within your <code>devspace.yaml</code> file (if you are using the component-chart deployment):</p> <pre><code>deployments: - name: my-component helm: componentChart: true values: labels: timestamp: $!{DEVSPACE_TIMESTAMP} # here is 1. containers: - image: &quot;YOUR_IMAGE:latest&quot; # specify any tag here that you want imagePullPolicy: Always # here is 2. </code></pre> <p><code>$!{DEVSPACE_TIMESTAMP}</code> = <code>$!{}</code> forces to place the value of this var as a string (because k8s only allows string values for labels) and <code>DEVSPACE_TIMESTAMP</code> is the name of a predefined variable in DevSpace. More details here: <a href="https://devspace.sh/cli/docs/configuration/variables/basics#predefined-variables" rel="noreferrer">https://devspace.sh/cli/docs/configuration/variables/basics#predefined-variables</a></p>
<p>I'm trying to SSH into an AKS Windows node using this <a href="https://learn.microsoft.com/en-us/azure/aks/ssh" rel="nofollow noreferrer">reference</a>, which creates a debugging Linux node and SSHes into the Windows node from it. Once I enter the Linux node and try to SSH into the Windows node, it asks me to type in the azureuser password like below:</p> <pre><code>azureuser@10.240.0.128's password: Permission denied, please try again. </code></pre> <p>What is <code>azureuser@(windows node internal IP address)'s</code> password? Is it my Azure service password, or is it the <code>WindowsProfileAdminUserPassword</code> that I pass in when I create an AKS cluster using the <code>New-AzAksCluster</code> cmdlet? Or is it my ssh keypair password? If I do not know what it is, is there a way I can reset it? Or is there a way I can create a Windows node free from credentials? Any help is appreciated. Thanks ahead!</p>
<p>It looks like you're trying to log in with your password, not your ssh key. See this <a href="https://www.thorntech.com/passwords-vs-ssh/" rel="nofollow noreferrer">explanation</a> of the difference between those methods. These are two different authentication methods. If you want to ssh to your node, you need to choose ssh with key authentication. You can do this by running the command:</p> <pre><code>ssh -i &lt;id_rsa&gt; azureuser@&lt;your.ip.address&gt; </code></pre> <p>But before this, you need to create a key pair. It is described in detail in <a href="https://learn.microsoft.com/en-us/azure/aks/ssh#before-you-begin" rel="nofollow noreferrer">this section</a>. Then you can <a href="https://learn.microsoft.com/en-us/azure/aks/ssh#create-the-ssh-connection-to-a-linux-node" rel="nofollow noreferrer">create the SSH connection to a Linux node</a>. Everything is described in detail, step by step, in the documentation you provided.</p> <p>When you configure everything correctly, you will be able to log into the node using the ssh key pair. You won't need a password. When you execute the command</p> <pre><code>ssh -i &lt;id_rsa&gt; azureuser@&lt;your.ip.address&gt; </code></pre> <p>you should see an output like this:</p> <pre><code>The authenticity of host '10.240.0.67 (10.240.0.67)' can't be established. ECDSA key fingerprint is SHA256:1234567890abcdefghijklmnopqrstuvwxyzABCDEFG. Are you sure you want to continue connecting (yes/no)? yes [...] Microsoft Windows [Version 10.0.17763.1935] (c) 2018 Microsoft Corporation. All rights reserved. </code></pre> <p>When you see <code>Are you sure you want to continue connecting (yes/no)?</code> you need to type <code>yes</code> and confirm with <code>Enter</code>.</p>
<p>I have a Spring boot application running a Kafka consumer that consumes messages from a single Topic having two partitions (partition-0 and 1). The application is deployed as a single Kubernetes pod. I have scaled the application and now it's running in two pods. But the client Id prefix of the consumer running in both pods has the same clientIdPrefix 'client-consumer-0'.</p> <p>How do I generate a clientIdPrefix unique to the pod?</p>
<p>Set the Kafka consumer <code>client.id</code> property to change the consumer client name: <a href="https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#consumerconfigs_client.id" rel="nofollow noreferrer">https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#consumerconfigs_client.id</a>.</p> <p>You can set the property in your yml file like below.</p> <pre><code>spring: kafka: consumer: properties: client.id: ${KAFKA_CLIENT_ID:unique-kafka-client-name} </code></pre> <p>To create a unique value for the client id, you can use pod info: <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables</a></p> <p>e.g. <strong>metadata.uid</strong></p> <pre><code> env: - name: KAFKA_CLIENT_ID valueFrom: fieldRef: fieldPath: metadata.uid </code></pre>
<p>When I run:</p> <pre><code>kubectl get pods --field-selector=status.phase=Running </code></pre> <p>I see:</p> <pre><code>NAME READY STATUS RESTARTS AGE k8s-fbd7b 2/2 Running 0 5m5s testm-45gfg 1/2 Error 0 22h </code></pre> <p>I don't understand why this command gives me pods that are in Error status. According to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">K8S API</a>, there is no such thing as <code>STATUS=Error</code>.</p> <p>How can I get only the pods that are in this Error status?</p> <p>When I run:</p> <pre><code>kubectl get pods --field-selector=status.phase=Failed </code></pre> <p>It tells me that there are no pods in that status.</p>
<p>Using the <code>kubectl get pods --field-selector=status.phase=Failed</code> command you can display all Pods in the <code>Failed</code> phase.</p> <p><code>Failed</code> means that all containers in the Pod have terminated, and at least one container has terminated in failure (see: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="noreferrer">Pod phase</a>):</p> <blockquote> <p>Failed - All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system.</p> </blockquote> <p>In your example, both Pods are in the <code>Running</code> phase because at least one container is still running in each of these Pods.:</p> <blockquote> <p>Running - The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.</p> </blockquote> <p>You can check the current phase of Pods using the following command:</p> <pre><code>$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.status.phase}{&quot;\n&quot;}{end}' </code></pre> <p>Let's check how this command works:</p> <pre><code>$ kubectl get pods NAME READY STATUS app-1 1/2 Error app-2 0/1 Error $ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.status.phase}{&quot;\n&quot;}{end}' app-1 Running app-2 Failed </code></pre> <p>As you can see, only the <code>app-2</code> Pod is in the <code>Failed</code> phase. 
There is still one container running in the <code>app-1</code> Pod, so this Pod is in the <code>Running</code> phase.</p> <p>To list all pods with the <code>Error</code> status, you can simply use:</p> <pre><code>$ kubectl get pods -A | grep Error default app-1 1/2 Error default app-2 0/1 Error </code></pre> <p>Additionally, it's worth mentioning that you can check the state of all containers in Pods:</p> <pre><code>$ kubectl get pod -o=jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.status.containerStatuses[*].state}{&quot;\n&quot;}{end}' app-1 {&quot;terminated&quot;:{&quot;containerID&quot;:&quot;containerd://f208e2a1ff08c5ce2acf3a33da05603c1947107e398d2f5fbf6f35d8b273ac71&quot;,&quot;exitCode&quot;:2,&quot;finishedAt&quot;:&quot;2021-08-11T14:07:21Z&quot;,&quot;reason&quot;:&quot;Error&quot;,&quot;startedAt&quot;:&quot;2021-08-11T14:07:21Z&quot;}} {&quot;running&quot;:{&quot;startedAt&quot;:&quot;2021-08-11T14:07:21Z&quot;}} app-2 {&quot;terminated&quot;:{&quot;containerID&quot;:&quot;containerd://7a66cbbf73985efaaf348ec2f7a14d8e5bf22f891bd655c4b64692005eb0439b&quot;,&quot;exitCode&quot;:2,&quot;finishedAt&quot;:&quot;2021-08-11T14:08:50Z&quot;,&quot;reason&quot;:&quot;Error&quot;,&quot;startedAt&quot;:&quot;2021-08-11T14:08:50Z&quot;}} </code></pre>
<p>I am doing a rolling upgrade of a specific container of a deployment using the fabric8 java client API</p> <pre class="lang-java prettyprint-override"><code>client.apps().deployments().inNamespace(..) .withName(..) .rolling() .updateImage(...); </code></pre> <p>When observing k8s I can see the new pod being created and afterwards the old one being deleted, as expected. How can I know the status of the upgrade programmatically?</p> <p>I need to return a response indicating whether the upgrade succeeded or failed.</p>
<p>To check what is happening with deployment you can use Watcher:</p> <pre><code>client.apps().deployments().inNamespace(&quot;default&quot;).watch(new Watcher&lt;Deployment&gt;() { @Override public void eventReceived(Action action, Deployment resource) { // Do stuff depending upon action } @Override public void onClose(KubernetesClientException cause) { } }); </code></pre> <p><a href="https://github.com/fabric8io/kubernetes-client/blob/c24152096f1a0438be1cd4978bea23a8dc185334/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Watcher.java#L46" rel="nofollow noreferrer">Action</a> is an enum, so you can check for <code>ERROR</code> and raise an exception:</p> <pre><code>enum Action { ADDED, MODIFIED, DELETED, ERROR } </code></pre> <p><a href="https://github.com/fabric8io/kubernetes-client/blob/master/doc/CHEATSHEET.md#deployment" rel="nofollow noreferrer">Here</a> you can find more examples on what you can do with deployments</p>
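<p>If you only need a blocking check after triggering the update, rather than a long-lived watch, you can also poll the deployment's status fields directly (a sketch; field names as exposed by the fabric8 model classes, so add your own null checks, retry loop and timeout):</p> <pre class="lang-java prettyprint-override"><code>import java.util.Objects;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentStatus;

Deployment d = client.apps().deployments()
    .inNamespace(&quot;default&quot;)
    .withName(&quot;my-deployment&quot;)
    .get();
DeploymentStatus status = d.getStatus();

// The rollout is complete when the controller has observed the latest
// generation and all replicas are both updated and available
boolean succeeded =
    status.getObservedGeneration() &gt;= d.getMetadata().getGeneration()
    &amp;&amp; Objects.equals(status.getUpdatedReplicas(), d.getSpec().getReplicas())
    &amp;&amp; Objects.equals(status.getAvailableReplicas(), d.getSpec().getReplicas());
</code></pre> <p>This mirrors the checks <code>kubectl rollout status</code> performs; combine it with the Watcher above if you want push-based updates instead of polling.</p>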
<p>I have read through a bunch of guides showing how to monitor cpu and memory usage of pods in Kubernetes with Prometheus and most of them look something like this:</p> <pre><code>rate(container_cpu_usage_seconds_total{pod=~&quot;compute-.*&quot;, image!=&quot;&quot;, container!=&quot;POD&quot;}[5m]) </code></pre> <p>but I can't find any documentation on why the container label is there (it seems like it causes duplicated data) and why it is being avoided in many monitoring examples. I know that this metric comes from the cadvisor component of Kubernetes, but the only docs I can find on these metrics are the short descriptions provided in the code <a href="https://github.com/google/cadvisor/blob/master/metrics/prometheus.go" rel="noreferrer">here</a>.</p> <p>Does anyone know what this label is for and where there are more in depth documentation no these metrics?</p>
<p>These containers, as @Ali Sattari already mentioned in the comment, are pause containers.</p> <hr /> <p><strong>Pause containers</strong></p> <p>The pause container starts first, before the pod's other containers are scheduled. The purpose of the pause container (<code>container_name=&quot;POD&quot;</code>) is to provide the network namespace for the pod and the additional containers which will be assigned to that pod. The image of the pause container is always present in Kubernetes. Due to this, allocation of the pod's resources is instantaneous. After the pause container has started, there is no other work for it.</p> <p>By default, pause containers are hidden, but you can see them by running the following command: <code>docker ps | grep pause</code></p> <pre><code>$ docker ps | grep pause 3bb5065dd9ba k8s.gcr.io/pause:3.2 &quot;/pause&quot; 3 minutes ago Up 3 minutes k8s_POD_kubernetes-bootcamp-fb5c67579-5rxjn_default_93ce94f8-b440-4b4f-9e4e-25f97be8196f_0 0627138518e1 k8s.gcr.io/pause:3.2 &quot;/pause&quot; 3 minutes ago Up 3 minutes k8s_POD_metrics-server-56c4f8c9d6-vf2zg_kube-system_93626697-8cd0-4fff-86d3-245c23d74a42_0 81ca597ed3ff k8s.gcr.io/pause:3.2 &quot;/pause&quot; 3 minutes ago Up 3 minutes k8s_POD_storage-provisioner_kube-system_dbdec6e5-d3ed-4967-a042-1747f8bdc39a_0 0d01130b158f k8s.gcr.io/pause:3.2 &quot;/pause&quot; 3 minutes ago Up 3 minutes k8s_POD_kubernetes-dashboard-968bcb79-pxmzb_kubernetes-dashboard_b1265ad7-2bce-46aa-8764-d06d72856633_0 d8a159b6215e k8s.gcr.io/pause:3.2 &quot;/pause&quot; 3 minutes ago Up 3 minutes k8s_POD_dashboard-metrics-scraper-f6647bd8c-hqm6k_kubernetes-dashboard_bde40acc-a8ca-451a-9868-26e86ccafecb_0 294e81edf0be k8s.gcr.io/pause:3.2 &quot;/pause&quot; 3 minutes ago Up 3 minutes k8s_POD_coredns-74ff55c5b-84vr7_kube-system_28275e83-613a-4a09-8ace-13d6e831c1bf_0 2b3bfad1201b k8s.gcr.io/pause:3.2 &quot;/pause&quot; 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-zxjgc_kube-system_34f8158a-487e-4d00-80f1-37b67b72865e_0 d5542091730b k8s.gcr.io/pause:3.2
&quot;/pause&quot; 4 minutes ago Up 4 minutes k8s_POD_kube-scheduler-minikube_kube-system_6b4a0ee8b3d15a1c2e47c15d32e6eb0d_0 b87163ed2c0a k8s.gcr.io/pause:3.2 &quot;/pause&quot; 4 minutes ago Up 4 minutes k8s_POD_kube-controller-manager-minikube_kube-system_57b8c22dbe6410e4bd36cf14b0f8bdc7_0 c97ed96ded60 k8s.gcr.io/pause:3.2 &quot;/pause&quot; 4 minutes ago Up 4 minutes k8s_POD_etcd-minikube_kube-system_62a7db7bebf35458f2365f79293db6d3_0 4ab2d11317ed k8s.gcr.io/pause:3.2 &quot;/pause&quot; 4 minutes ago Up 4 minutes k8s_POD_kube-apiserver-minikube_kube-system_dc477bf6fc026f57469b47d9be68a88c_0 </code></pre> <p>You can read more about pause containers <a href="https://www.ianlewis.org/en/almighty-pause-container" rel="noreferrer">here</a>.</p> <hr /> <p><strong>Pause containers in Prometheus</strong></p> <p>In examples provided for Prometheus you will often see the following filter: <code>container_name!=&quot;POD&quot;</code>, since it is usually only the actual workload containers whose resource usage you want, without separate series for the pause containers.</p>
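<p>For example, a typical CPU usage query that drops the pause-container series could look like the one below (note the label is <code>container</code> on newer Kubernetes versions and <code>container_name</code> on older ones, so adjust for your cluster; the empty-label match additionally drops the pod-level aggregate series that cAdvisor emits):</p> <pre><code>sum by (namespace, pod) (
  rate(container_cpu_usage_seconds_total{container!=&quot;POD&quot;, container!=&quot;&quot;, image!=&quot;&quot;}[5m])
)
</code></pre>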
<p>I am blocked by k8s cron job yaml syntax errors.</p> <p>I try to run</p> <p>kubectl apply -f cronjob.yaml</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: update-test spec: schedule: &quot;0 /5 * * *&quot; jobTemplate: spec: template: spec: containers: - name: update-test image: test:test imagePullPolicy: IfNotPresent command: ['echo test'] envFrom: - configMapRef: name: test-config - configMapRef: name: test-config-globe resources: requests: memory: &quot;512Mi&quot; cpu: &quot;0.5&quot; limits: memory: &quot;1024Mi&quot; cpu: &quot;2&quot; restartPolicy: OnFailure </code></pre> <p>But I am getting this error:</p> <pre><code>error: error validating &quot;deplyment.yaml&quot;: error validating data: [ValidationError(CronJob.spec.jobTemplate.spec.template.spec.containers[0].envFrom[0]): unknown field &quot;name&quot; in io.k8s.api.core.v1.EnvFromSource, ValidationError(CronJob.spec.jobTemplate.spec.template.spec.containers[0].envFrom[1]): unknown field &quot;name&quot; in io.k8s.api.core.v1.EnvFromSource]; </code></pre>
<p>The indentation of the <code>configMapRef</code> name is incorrect: <code>name</code> must be nested under <code>configMapRef</code>, not be its sibling. Change this:</p> <pre><code>envFrom:
- configMapRef:
  name: test-config
</code></pre> <p>to:</p> <pre><code>envFrom:
- configMapRef:
    name: test-config
</code></pre> <p>Note: Also, your <code>cron</code> schedule is invalid; you need to change <code>0 /5 * * *</code> to a valid value, for example <code>0 */5 * * *</code> (at minute 0 of every 5th hour).</p>
<p>For a Kubernetes deployment, given a Kustomize Base like:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ... images: - name: developmentregistry.com/myimage:v1.0 </code></pre> <p>Is there a way to change only the image registry and retain the o.g. tag with an Overlay <strong>without re-declaring the image</strong> using image transformers?</p> <p>For example if I use an Overlay like:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ... images: - newName: productionregistry.com/myimage </code></pre> <p>The image is deployed as just <code>productionregistry.com/myimage</code>— which is not what I want.<br /> How can I make it so my Overlay deployment yields <code>productionregistry.com/myimage:v1.0</code>? I cannot find any other posts or any issues about this.<br /> Would I have to use Patches instead of image transformers?</p>
<p>Okay, I figured it out: Kustomization yamls are finicky, so you <em>must</em> have the right yaml structure.</p> <p>This works:</p> <p><strong>Base</strong></p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ... images: - name: developmentregistry.com/myimage newTag: v1.0 </code></pre> <p><strong>Overlay</strong></p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization ... images: - name: developmentregistry.com/myimage newName: productionregistry.com/myimage </code></pre>
<p>In a Pod definition, we add the nodeSelector as a child of spec. But I'm not sure how to add this in a Deployment yaml file.</p> <p>Are we supposed to add it in the template's spec:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: first-deployment labels: app: first spec: replicas: 1 template: metadata: name: first-deployment-pod label: app: first spec: containers: - name: test image: test/test nodeSelector: testkey: testvalue </code></pre> <p>or do we need to add it in the spec of the Deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: first-deployment labels: app: first spec: replicas: 1 nodeSelector: testkey: testvalue template: metadata: name: first-deployment-pod label: app: first spec: containers: - name: test image: test/test </code></pre>
<p>It should be at the same level as your containers array, inside the template's spec:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-deployment
  labels:
    app: first
spec:
  replicas: 1
  template:
    metadata:
      name: first-deployment-pod
      labels:
        app: first
    spec:
      containers:
        - name: test
          image: test/test
      nodeSelector:
        testkey: testvalue
</code></pre>
<p>I'd like to grant a service account the ability to access the metrics exposed by the <code>metrics-server</code> service (<code>https://metrics-server.kube-system/metrics</code>). If I create a serviceaccount...</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: metrics-reader namespace: prometheus </code></pre> <p>...and then grant it <code>cluster-admin</code> privileges...</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-reader-crb roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: metrics-reader namespace: prometheus </code></pre> <p>...it works! I can use the account token to access the metrics server:</p> <pre><code>curl -k --header &quot;Authorization: Bearer $token&quot; https://metrics-server.kube-system/metrics </code></pre> <p>But I don't want to require <code>cluster-admin</code> access just to read metrics. I tried to use the <code>view</code> cluster role instead of <code>cluster-admin</code>, but that fails.</p> <p>Is there an <em>existing</em> role that would grant the appropriate access? If not, what are the specific permissions necessary to grant read-only access to the metrics-server <code>/metrics</code> endpoint?</p>
<p>Interesting question. I've found some info for you, however I'm not sure it's 100% helpful; it needs more research and reproduction.</p> <ol> <li>Check <a href="https://github.com/kubernetes-sigs/metrics-server/issues/40#issuecomment-402395198" rel="nofollow noreferrer">RBAC Deny when requesting metrics</a>. Something like below?</li> </ol> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: metrics-reader namespace: prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: view-metrics rules: - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: view-metrics roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view-metrics subjects: - kind: ServiceAccount name: metrics-reader namespace: prometheus </code></pre> <hr /> <ol start="2"> <li>It seems there is a <a href="https://github.com/kubernetes-sigs/metrics-server/issues/411#issuecomment-707525489" rel="nofollow noreferrer"><code>aggregated-metrics-reader</code> clusterrole</a> (or there was)</li> </ol> <blockquote> <p>Aggregated ClusterRoles are documented in: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles</a>.
<strong>The purpose of the system:aggregated-metrics-reader ClusterRole, is to aggregate the rules, that grant permission to get the pod and node metrics, to the view, edit and admin roles.</strong></p> </blockquote> <p>However, I wasn't able to find any reference to the <code>aggregated-metrics-reader</code> clusterrole in the current version of that doc.</p> <p>You can find a large example of using this clusterrole in <a href="https://github.com/kubernetes-sigs/metrics-server/issues/537" rel="nofollow noreferrer">Metrics server unable to scrape</a>.</p> <p>In addition, check the <a href="https://github.com/helm/charts/pull/10737" rel="nofollow noreferrer">This adds the aggregated-metrics-reader ClusterRole which was missing</a> GitHub PR:</p> <blockquote> <p>What this PR does / why we need it: This adds the aggregated-metrics-reader ClusterRole which was missing, and seems to be required for k8s 1.8+ per the metrics-server documentation and default deploy manfiests</p> </blockquote> <p>Unfortunately, the link in that PR leads nowhere. I'm starting to think this is obsolete info for 1.8 clusters. I will update the answer if I find anything more relevant.</p>
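<p>For reference, recent metrics-server releases ship a manifest along these lines, where the aggregation labels fold the metrics permissions into the built-in <code>view</code>, <code>edit</code> and <code>admin</code> roles (a sketch based on the metrics-server deploy manifests; verify against the release you run):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: &quot;true&quot;
    rbac.authorization.k8s.io/aggregate-to-edit: &quot;true&quot;
    rbac.authorization.k8s.io/aggregate-to-admin: &quot;true&quot;
rules:
  - apiGroups: [&quot;metrics.k8s.io&quot;]
    resources: [&quot;pods&quot;, &quot;nodes&quot;]
    verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]
</code></pre> <p>Note this grants access to the <code>metrics.k8s.io</code> API (what <code>kubectl top</code> uses), which is not necessarily the same as scraping the raw <code>/metrics</code> endpoint the question mentions.</p>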
<p>I have deployed pods using the kubectl apply command and I can see the pods running:</p> <pre><code>$kubectl describe pod test-pod -n sample | grep -i container Containers: Container ID: containerd://ce6cd9XXXXXX69538XXX ContainersReady True </code></pre> <p>Can I say that it's using the containerd runtime? How do I verify the runtime used by the containers?</p> <p>I am also getting some errors like below in the pod:</p> <pre><code>kubectl logs test-pod -n sample 'docker.images' is not supported: Cannot fetch data: Get http://1.28/images/json: dial unix /var/run/docker.sock: connect: no such file or directory. </code></pre> <p>Is it because I am not using the docker runtime?</p>
<p>As I already mentioned in a comment, the command is</p> <pre><code>kubectl get nodes -o wide </code></pre> <p>It returns the container runtime for each node.</p>
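If you want just the runtime string, it is also reported per node in the node status field `.status.nodeInfo.containerRuntimeVersion`. A small sketch follows — the kubectl commands need a live cluster, so they are shown as comments, and the sample JSON below only mimics the relevant slice of a real node object:

```shell
# On a live cluster (assumption: kubectl is configured) you would run:
#   kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
# The same field in `kubectl get node <name> -o json` looks like this sample:
node_json='{"status":{"nodeInfo":{"containerRuntimeVersion":"containerd://1.4.8"}}}'
# Extract the runtime string (this is what jsonpath does for you on a cluster):
runtime=$(printf '%s' "$node_json" | sed -n 's/.*"containerRuntimeVersion":"\([^"]*\)".*/\1/p')
echo "$runtime"
```

A `containerd://` prefix (as in the question's `Container ID`) confirms the node is running containerd rather than dockershim, which also explains why anything polling `/var/run/docker.sock` fails.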
<p>I'm using ArgoCD and I want to track files under different subdirectories. I've set the path to ./root_directory, but I would also like to track files in the subdirectories of root_directory. For instance /root_directory/dir1, /root_directory/dir2, but also /root_directory/dir1/dir1.1, etc. How can I do that?</p> <p>Thanks for your help</p>
<p>You can add the <code>spec.source.directory.recurse</code> attribute.</p> <p>See an example below:</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: argocd-app namespace: argocd spec: project: default source: path: some/path/ repoURL: https://github.com/your-repo.git targetRevision: HEAD directory: recurse: true # &lt;--- Here destination: namespace: '*' server: https://kubernetes.default.svc syncPolicy: automated: prune: true </code></pre>
<p>I set up a (very) simple deployment with GKE on a GKE Autopilot cluster running the latest version of Kubernetes (<code>1.18.15-gke.1501</code>) and attached an ingress (external HTTP(s) load balancer) that links to a simple <code>ClusterIP</code> service.</p> <p>Whenever I update the deployment with a new image, I experience about 5-15 minutes of downtime where the load balancer returns a 502 error. It seems like the control plane creates the new, updated pod, allows the service-level health checks to go through (not the load-balancer ones, it doesn't create the NEG yet), then kills the older pod while at the same time setting up the new NEG. It then doesn't remove the old NEG until a variable amount of time later.</p> <p><a href="https://i.stack.imgur.com/j3ZKz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/j3ZKz.png" alt="enter image description here" /></a></p> <p>Logs on the pods show that health checks are going through, but the GKE dashboard show inconsistent results for the Ingress' state. The ingress will show as fine, but the service will 502.</p> <h2>Things I've tried</h2> <ul> <li>Increasing the number of pods from 1 to 3. This helped on some deploys, but on every other deploy it <em>increased</em> the amount of time it took for the load balancer to resolve correctly.</li> <li>Attempted setting <code>maxSurge</code> to 1 and <code>maxUnavailable</code> to 0. This did not improve the downtime at all.</li> <li>Adding <code>lifecycle.preStop.exec.command: [&quot;sleep&quot;, &quot;60&quot;]</code> to the container on the deployment. 
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#traffic_does_not_reach_endpoints" rel="noreferrer">This was suggested in the GKE docs here.</a></li> <li>Recreating the ingress, service, deployments, and clusters multiple times.</li> <li>Adding a <code>BackendConfig</code> to the service that adds slower draining on it.</li> <li>Adding a readiness gate found in the docs that's supposed to fix this, but for some reason doesn't?</li> </ul> <p>None of the above have helped or made any noticeable difference in how long things were down.</p> <p>I'm really, really confused by why this isn't working. It feels like I'm missing something <em>really</em> obvious, but this is also such a simple config that you'd think it'd... just work?? Anyone have any idea on what's going on?</p> <h3>Config files</h3> <p>Deployment config:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: foundation-deployment spec: replicas: 3 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 0 maxSurge: 1 selector: matchLabels: app: foundation-web template: metadata: labels: app: foundation-web spec: readinessGates: - conditionType: &quot;cloud.google.com/load-balancer-neg-ready&quot; serviceAccountName: foundation-database-account containers: # Run Cloud SQL proxy so we can safely connect to Postgres on localhost. 
- name: cloud-sql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.17 resources: requests: cpu: &quot;250m&quot; memory: 100Mi limits: cpu: &quot;500m&quot; memory: 100Mi command: - &quot;/cloud_sql_proxy&quot; - &quot;-instances=nine-foundation:us-central1:foundation-staging=tcp:5432&quot; securityContext: runAsNonRoot: true # Main container config - name: foundation-web image: gcr.io/project-name/foundation_web:latest imagePullPolicy: Always lifecycle: preStop: exec: command: [&quot;sleep&quot;, &quot;60&quot;] env: # Env variables resources: requests: memory: &quot;500Mi&quot; cpu: &quot;500m&quot; limits: memory: &quot;1000Mi&quot; cpu: &quot;1&quot; livenessProbe: httpGet: path: /healthz port: 4000 initialDelaySeconds: 10 periodSeconds: 10 readinessProbe: httpGet: path: /healthz port: 4000 initialDelaySeconds: 10 periodSeconds: 10 ports: - containerPort: 4000 </code></pre> <p>Service config:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: foundation-web-service annotations: cloud.google.com/neg: '{&quot;ingress&quot;: true}' cloud.google.com/backend-config: '{&quot;ports&quot;: {&quot;4000&quot;: &quot;foundation-service-config&quot;}}' spec: type: ClusterIP selector: app: foundation-web ports: - port: 4000 targetPort: 4000 </code></pre> <p>BackendConfig:</p> <pre><code>apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: foundation-service-config spec: # sessionAffinity: # affinityType: &quot;GENERATED_COOKIE&quot; # affinityCookieTtlSec: 120 connectionDraining: drainingTimeoutSec: 60 </code></pre> <p>Ingress config:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: foundation-web-ingress labels: name: foundation-web-ingress spec: backend: serviceName: foundation-web-service servicePort: 4000 </code></pre>
<p>I think this might be to do with the cloud sql auth proxy sidecar not terminating properly, leading to the load balancer getting in a twist.</p> <p>Note this from the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#traffic_does_not_reach_endpoints" rel="nofollow noreferrer">GCP docs</a> (I've made the key parts <em>italics</em>)</p> <blockquote> <p><strong>Symptoms</strong></p> <p>502 errors or rejected connections. Potential causes New endpoints generally become reachable after attaching them to the load balancer, provided that they respond to health checks. You might encounter 502 errors or rejected connections if traffic cannot reach the endpoints.</p> <p><em>502 errors and rejected connections can also be caused by a container that doesn't handle SIGTERM. If a container doesn't explicitly handle SIGTERM, it immediately terminates and stops handling requests. The load balancer continues to send incoming traffic to the terminated container, leading to errors.</em></p> <p>The container native load balancer only has one backend endpoint. During a rolling update, the old endpoint gets deprogrammed before the new endpoint gets programmed.</p> <p>Backend Pod(s) are deployed into a new zone for the first time after a container native load balancer is provisioned. Load balancer infrastructure is programmed in a zone when there is at least one endpoint in the zone. When a new endpoint is added to a zone, load balancer infrastructure is programmed and causes service disruptions.</p> <p><strong>Resolution</strong></p> <p><em>Configure containers to handle SIGTERM and continue responding to requests throughout the termination grace period (30 seconds by default)</em>. Configure Pods to begin failing health checks when they receive SIGTERM. 
This signals the load balancer to stop sending traffic to the Pod while endpoint deprogramming is in progress.</p> </blockquote> <p>By default, the proxy doesn't handle SIGTERM nicely, and won't exit gracefully on a SIGTERM (see <a href="https://github.com/GoogleCloudPlatform/cloudsql-proxy/issues/128" rel="nofollow noreferrer">related issue</a>). However, it now has a nice flag for handling this, so you can use something like</p> <pre><code>- name: cloud-sql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.23.1 command: - &quot;/cloud_sql_proxy&quot; - &quot;-instances={{ .Values.postgresConnectionName }}=tcp:5432&quot; - &quot;-term_timeout=60s&quot; securityContext: runAsNonRoot: true resources: requests: memory: &quot;2Gi&quot; cpu: &quot;1&quot; </code></pre> <p>Adding the term_timeout flag mainly fixed it for me, but I was still seeing the occasional 502 during a deployment. Upping replicas to 3 (my cluster is regional, so I wanted to cover all the zones) seemed to help, once I had the term_timeout in place.</p>
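Putting the doc's advice together with the proxy flag, the relevant pieces of the pod spec look roughly like this. This is a sketch, not a drop-in manifest — the instance name is a placeholder and the 90-second grace period is an assumption (it just needs to exceed `-term_timeout` plus however long the app takes to drain):

```yaml
spec:
  template:
    spec:
      # assumption: must exceed -term_timeout (60s) plus the app's drain time
      terminationGracePeriodSeconds: 90
      containers:
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.23.1
          command:
            - "/cloud_sql_proxy"
            - "-instances=my-project:us-central1:my-db=tcp:5432"  # placeholder instance
            - "-term_timeout=60s"  # keep the proxy alive while the app drains
        - name: foundation-web
          # keep answering in-flight requests after SIGTERM; the sleep gives the
          # load balancer time to deprogram the NEG endpoint before the app exits
          lifecycle:
            preStop:
              exec:
                command: ["sleep", "30"]
```

Without the raised grace period, the kubelet sends SIGKILL after the default 30 seconds, which would cut the proxy off before its own 60-second timeout expires.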
<p>I am surprised that nobody has yet asked this, but what exactly is <code>deployment.apps</code>?</p> <p>I see it often in commands e.g</p> <pre><code>kubectl rollout pause deployment.apps/nginx-deployment </code></pre> <p>or even used interchangably for the <code>deployments</code> keyword: <code>kubectl get deployments</code>= <code>kubectl get deployment.apps</code></p> <p>I do not understand what it indicates though. Even in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">K8s official docs</a>, they just take for granted that the reader understands the term.</p> <p>Could someone please explain it to me?</p>
<p>Kubernetes API has its different resources (e.g. Pods, Deployments, Ingress) grouped in what they call &quot;<a href="https://kubernetes.io/docs/reference/using-api/#api-groups" rel="nofollow noreferrer">api groups</a>&quot; and in the notation <code>deployment.apps</code> - &quot;<a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/" rel="nofollow noreferrer">deployment</a>&quot; is the resource name and the &quot;apps&quot; is the api group name.</p> <p>Also see the motivation for <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/api-group.md" rel="nofollow noreferrer">API groups</a></p>
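As a sketch of the naming scheme (the kubectl invocations need a live cluster, so they are comments): the fully qualified form kubectl accepts is `resource.version.group`, and `deployment.apps` is simply that form with the version left out:

```shell
# All three are equivalent on a live cluster (assumption: kubectl is configured):
#   kubectl get deployments
#   kubectl get deployments.apps
#   kubectl get deployments.v1.apps
# The fully qualified name is assembled as <resource>.<version>.<group>:
resource=deployments; version=v1; group=apps
fq="${resource}.${version}.${group}"
echo "$fq"
```

The qualified form matters when two API groups expose a resource with the same name (e.g. `events` in the core group vs. `events.events.k8s.io`); for Deployments it is mostly a disambiguation habit.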
<p>My deployment yaml file is bitbucket and CI/CD pipeline to build the image and deploy the container is in Azure DevOps. I am able to build and tag the image correctly i.e. with each pipeline run the image version is incremented by 1 e.g. 1.0.0, 1.0.1,1.0.2 ... There is no &quot;latest&quot; tag in this repository. How do I pass this image tag dynamically in the deployment.yml file so that kubectl deploy stage always picks the latest tagged image ?</p> <p>my deployment file currently looks like:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ubuntu-custom labels: app: ubuntu-custom spec: replicas: 1 selector: matchLabels: app: ubuntu-custom template: metadata: labels: app: ubuntu-custom spec: containers: - name: ubuntu-custom image: &lt;acr&gt;/ubuntu-custom:latest command: [&quot;sleep&quot;] args: [&quot;365d&quot;] imagePullSecrets: - name: acrsecret </code></pre> <p>I have tried to change the image to - image: acr/ubuntu-custom but I get the ImagePullBackOff error.</p> <p>I am sure I need to pass the latest tag in the deployment file but I am not sure how I can achieve that.</p>
<ol> <li><p>Use the sed command (use double quotes so the pipeline variable expands, and replace the full image reference, tag included):</p> <p><code>sed -i &quot;s|&lt;acr&gt;/ubuntu-custom:latest|${ACR_REPO_URL}/ubuntu-custom:${IMAGE_TAG}|g&quot; deploy.yaml</code></p> </li> <li><p>Use yq (note that <code>containers</code> is a list, so index the first entry):</p> </li> </ol> <pre><code> yq -i &quot;.spec.template.spec.containers[0].image = \&quot;${ACR_REPO_URL}/ubuntu-custom:${IMAGE_TAG}\&quot;&quot; deploy.yaml </code></pre> <p>Here <code>${IMAGE_TAG}</code> is whatever variable your pipeline uses for the incremented version (e.g. the build number in Azure DevOps).</p>
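As a self-contained sketch of the sed approach (the `__TAG__` placeholder, the registry host, and the `IMAGE_TAG` variable are assumptions — wire them to whatever your pipeline exposes, e.g. the build number in Azure DevOps):

```shell
# Demo: keep a placeholder in the manifest and substitute the tag at deploy time.
IMAGE_TAG="1.0.7"   # in a real pipeline this comes from the build, not a literal
cat > /tmp/deploy-demo.yaml <<'EOF'
      containers:
        - name: ubuntu-custom
          image: myacr.azurecr.io/ubuntu-custom:__TAG__
EOF
# Substitute and capture the resulting image reference:
result=$(sed "s|__TAG__|${IMAGE_TAG}|" /tmp/deploy-demo.yaml | grep -o 'ubuntu-custom:[0-9.]*')
echo "$result"
```

In the deploy stage the substituted manifest can then be piped straight to `kubectl apply -f -` instead of written back to disk.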
<p>The command <code>kubectl describe pod</code> returns information about a specific pod. There is a section called <code>Events</code>, which contains events related to a selected pod.</p> <p>In case of readiness probe (httpGet) failure there will be written something like this:</p> <pre><code>Message ------- Readiness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <p>However, it's not clear what the reason is and I would like to see the response body of the readiness probe, because in the response I can specify a message that explains the reason.</p> <p>Is it possible to add the response of the http probe straight to the output of <code>kubectl describe pod</code> so I don't have to curl this pod?</p> <p>As I understand, I can set readiness probe as a <code>exec</code>, but I think there could be a simpler solution.</p> <p>The example of the httpGet probe:</p> <pre><code>readinessProbe: httpGet: path: /healthz port: 8080 initialDelaySeconds: 10 periodSeconds: 5 </code></pre>
<p>To debug the response of <code>HTTP GET</code> of your program at <code>healthz</code> endpoint, you can <code>port-forward</code> your pod to the local machine, and check the response body by yourself.</p> <ol> <li>First of all, you need to increase the initial delay so that you have enough time to test:</li> </ol> <pre><code>readinessProbe: httpGet: path: /healthz port: 8080 initialDelaySeconds: 1000 # something like this periodSeconds: 5 </code></pre> <ol start="2"> <li>Once your pod is running, use <code>kubectl port-forward</code>:</li> </ol> <pre><code>$ kubectl port-forward -n &lt;namespace&gt; &lt;pod-name&gt; 8080:8080 </code></pre> <ol start="3"> <li>Now use <code>curl</code> command to test response:</li> </ol> <pre><code>$ curl -XGET &quot;localhost:8080/healthz&quot; </code></pre>
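If you do want the response body to surface in the pod's events without curling manually, one alternative (a sketch, assuming `curl` exists in your image) is an `exec` probe: when the command exits non-zero, whatever it printed — including the body — appears in the <code>Readiness probe failed:</code> event message:

```yaml
readinessProbe:
  exec:
    command:
      - sh
      - -c
      # -f makes curl exit non-zero on HTTP errors; on failure we re-fetch
      # without -f so the response body is printed and captured into the event
      - 'curl -sf http://localhost:8080/healthz || { curl -s http://localhost:8080/healthz; exit 1; }'
  initialDelaySeconds: 10
  periodSeconds: 5
```

The trade-off is that you lose the built-in httpGet semantics (status-code handling, no shell dependency), which is why port-forwarding is usually the simpler first step.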
<p>I checked this <a href="https://stackoverflow.com/questions/66557539/how-to-create-keycloak-with-operator-and-external-database">How to use</a> for Keycloak operator custom resource using external database connection. I am using CloudSQL from Google platform as the external database source.</p> <p>My configurations are</p> <ul> <li>keycloak-idm</li> </ul> <pre><code>apiVersion: keycloak.org/v1alpha1 kind: Keycloak metadata: name: kiwigrid-keycloak-idm spec: instances: 3 externalAccess: enabled: false externalDatabase: enabled: true </code></pre> <ul> <li>external db storage secret</li> </ul> <pre><code>apiVersion: v1 kind: Secret metadata: name: keycloak-db-secret namespace: kiwios-application type: Opaque stringData: POSTGRES_DATABASE: keycloak-storage POSTGRES_EXTERNAL_ADDRESS: pgsqlproxy.infra POSTGRES_EXTERNAL_PORT: &quot;5432&quot; POSTGRES_HOST: keycloak-postgresql POSTGRES_USERNAME: keycloak-user POSTGRES_PASSWORD: S1ly3AValJYBNR-fsptLYdT74 POSTGRES_SUPERUSER: &quot;true&quot; </code></pre> <ul> <li>storage database</li> </ul> <pre><code>apiVersion: sql.cnrm.cloud.google.com/v1beta1 kind: SQLDatabase metadata: name: keycloak-storage namespace: kiwios-application annotations: cnrm.cloud.google.com/deletion-policy: &quot;abandon&quot; spec: charset: UTF8 collation: en_US.UTF8 instanceRef: name: keycloak-storage-instance-pg namespace: infra </code></pre> <ul> <li>storage users</li> </ul> <pre><code>apiVersion: sql.cnrm.cloud.google.com/v1beta1 kind: SQLUser metadata: name: keycloak-user namespace: kiwios-application annotations: cnrm.cloud.google.com/deletion-policy: &quot;abandon&quot; spec: instanceRef: name: keycloak-storage-instance-pg namespace: infra password: valueFrom: secretKeyRef: name: keycloak-db-secret key: POSTGRES_PASSWORD </code></pre> <p>And the error shown in Kubernetes console <a href="https://i.stack.imgur.com/sJegJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sJegJ.png" alt="error" /></a></p> <p>It is not working. 
Can anyone please help me figure out what I am doing wrong?</p> <p><strong>Update:</strong> I dug deeper with the k9s console. As per the <code>keycloak-operator</code> functionality, it creates an external name for the database connection, which here is <code>keycloak-postgresql</code>.</p> <p>Check the image below <a href="https://i.stack.imgur.com/qXSe6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qXSe6.png" alt="keycloak-postgresql works" /></a></p> <p>There is no error showing in the <code>keycloak-operator</code> console. Only the <code>keycloak-idm</code> is not able to make a connection using this <em>external name</em>. It shows the below error. <a href="https://i.stack.imgur.com/sJegJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sJegJ.png" alt="error" /></a></p>
<p>This is what I am using for my Keycloak setup; also, as the question's update section mentions, the issue is around the secret:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: keycloak labels: app: keycloak spec: ports: - name: http port: 8080 targetPort: 8080 selector: app: keycloak type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: keycloak namespace: default labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak spec: containers: - name: keycloak image: quay.io/keycloak/keycloak:10.0.0 env: - name: KEYCLOAK_USER value: &quot;admin&quot; - name: KEYCLOAK_PASSWORD value: &quot;admin&quot; - name: PROXY_ADDRESS_FORWARDING value: &quot;true&quot; - name: DB_VENDOR value: POSTGRES - name: DB_ADDR value: postgres - name: DB_DATABASE value: keycloak - name: DB_USER value: root - name: DB_PASSWORD value: password - name : KEYCLOAK_HTTP_PORT value : &quot;80&quot; - name: KEYCLOAK_HTTPS_PORT value: &quot;443&quot; - name : KEYCLOAK_HOSTNAME value : keycloak.harshmanvar.tk #replace with ingress URL ports: - name: http containerPort: 8080 - name: https containerPort: 8443 readinessProbe: httpGet: path: /auth/realms/master port: 8080 </code></pre> <p>You can try moving the ENV variables into the secret you are using.</p> <p>Example files : <a href="https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment" rel="nofollow noreferrer">https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment</a></p> <p>Environment variables that Keycloak supports : <a href="https://github.com/keycloak/keycloak-containers/blob/master/server/README.md#environment-variables" rel="nofollow noreferrer">https://github.com/keycloak/keycloak-containers/blob/master/server/README.md#environment-variables</a></p>
<p>After deploying Prometheus -operator according to the documentation, I find that <code>kubectl top Nodes</code> cannot run properly.</p> <pre><code>$ kubectl get apiService v1beta1.metrics.k8s.io v1beta1.metrics.k8s.io monitoring/prometheus-adapter False (FailedDiscoveryCheck) 44m $ kubectl top nodes Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io) $ kubectl get --raw &quot;/apis/metrics.k8s.io/v1beta1&quot; Error from server (ServiceUnavailable): the server is currently unable to handle the request </code></pre> <blockquote> <p>prometheus-adapter.yaml</p> </blockquote> <pre><code>... - args: - --cert-dir=/var/run/serving-cert - --config=/etc/adapter/config.yaml - --logtostderr=true - --metrics-relist-interval=1m - --prometheus-url=http://prometheus-k8s.monitoring.svc.cluster.local:9090/prometheus - --secure-port=6443 ... </code></pre> <p>When I was looking for a problem, I found a solution (<a href="https://github.com/banzaicloud/banzai-charts/issues/1060" rel="nofollow noreferrer">#1060</a>) by adding <code>hostNetwork: true</code> to the configuration file.</p> <p>When I thought the solution was successful, I found that <code>kubectl top nodes</code> still does not work.</p> <pre><code>$ kubectl get apiService v1beta1.metrics.k8s.io v1beta1.metrics.k8s.io monitoring/prometheus-adapter True 64m $ kubectl top nodes Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io) $ kubectl get --raw &quot;/apis/metrics.k8s.io/v1beta1&quot; 
{&quot;kind&quot;:&quot;APIResourceList&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;groupVersion&quot;:&quot;metrics.k8s.io/v1beta1&quot;,&quot;resources&quot;:[{&quot;name&quot;:&quot;nodes&quot;,&quot;singularName&quot;:&quot;&quot;,&quot;namespaced&quot;:false,&quot;kind&quot;:&quot;NodeMetrics&quot;,&quot;verbs&quot;:[&quot;get&quot;,&quot;list&quot;]},{&quot;name&quot;:&quot;pods&quot;,&quot;singularName&quot;:&quot;&quot;,&quot;namespaced&quot;:true,&quot;kind&quot;:&quot;PodMetrics&quot;,&quot;verbs&quot;:[&quot;get&quot;,&quot;list&quot;]}]} </code></pre> <p>View logs of Prometheus-adapter</p> <pre><code>E0812 10:03:02.469561 1 provider.go:265] failed querying node metrics: unable to fetch node CPU metrics: unable to execute query: Get &quot;http://prometheus-k8s.monitoring.svc.cluster.local:9090/prometheus/api/v1/query?query=sum+by+%28node%29+%28%0A++1+-+irate%28%0A++++node_cpu_seconds_total%7Bmode%3D%22idle%22%7D%5B60s%5D%0A++%29%0A++%2A+on%28namespace%2C+pod%29+group_left%28node%29+%28%0A++++node_namespace_pod%3Akube_pod_info%3A%7Bnode%3D~%22node02.whisper-tech.net%7Cnode03.whisper-tech.net%22%7D%0A++%29%0A%29%0Aor+sum+by+%28node%29+%28%0A++1+-+irate%28%0A++++windows_cpu_time_total%7Bmode%3D%22idle%22%2C+job%3D%22windows-exporter%22%2Cnode%3D~%22node02.whisper-tech.net%7Cnode03.whisper-tech.net%22%7D%5B4m%5D%0A++%29%0A%29%0A&amp;time=1628762582.467&quot;: dial tcp: lookup prometheus-k8s.monitoring.svc.cluster.local on 100.100.2.136:53: no such host </code></pre> <p>The cause of the problem was that <code>hostNetwork: true</code> was added to the <code>Prometheus-Adapter</code>, which prevented pod from accessing <code>Prometheus-K8s</code> in the cluster through <code>coreDNS</code>.</p> <p>One idea I've come up with is to have <code>Kubernetes nodes</code> access the inner part of the cluster through <code>coreDNS</code></p> <p>Is there a better way to solve the current problem? What should I do?</p>
<p>Your Pods are running with <code>hostNetwork</code>, so you should explicitly set its DNS policy &quot;ClusterFirstWithHostNet&quot; as described in the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer">Pod's DNS Policy</a> documentation:</p> <blockquote> <p>&quot;ClusterFirstWithHostNet&quot;: For Pods running with hostNetwork, you should explicitly set its DNS policy &quot;ClusterFirstWithHostNet&quot;.</p> </blockquote> <p>I've created a simple example to illustrate how it works.</p> <hr /> <p>First, I created the <code>app-1</code> Pod with <code>hostNetwork: true</code>:</p> <pre><code>$ cat app-1.yml kind: Pod apiVersion: v1 metadata: name: app-1 spec: hostNetwork: true containers: - name: dnsutils image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 command: - sleep - &quot;3600&quot; $ kubectl apply -f app-1.yml pod/app-1 created </code></pre> <p>We can test that the <code>app-1</code> cannot resolve e.g. 
<code>kubernetes.default.svc</code>:</p> <pre><code>$ kubectl exec -it app-1 -- sh / # nslookup kubernetes.default.svc Server: 169.254.169.254 Address: 169.254.169.254#53 ** server can't find kubernetes.default.svc: NXDOMAIN </code></pre> <p>Let's add the <code>dnsPolicy: ClusterFirstWithHostNet</code> to the <code>app-1</code> Pod and recreate it:</p> <pre><code>$ cat app-1.yml kind: Pod apiVersion: v1 metadata: name: app-1 spec: hostNetwork: true dnsPolicy: ClusterFirstWithHostNet containers: - name: dnsutils image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 command: - sleep - &quot;3600&quot; $ kubectl delete pod app-1 &amp;&amp; kubectl apply -f app-1.yml pod &quot;app-1&quot; deleted pod/app-1 created </code></pre> <p>Finally, we can check if the <code>app-1</code> Pod is able to resolve <code>kubernetes.default.svc</code>:</p> <pre><code>$ kubectl exec -it app-1 -- sh / # nslookup kubernetes.default.svc Server: 10.8.0.10 Address: 10.8.0.10#53 Name: kubernetes.default.svc.cluster.local Address: 10.8.0.1 </code></pre> <p>As you can see in the example above, everything works as expected with the <code>ClusterFirstWithHostNet</code> dnsPolicy.</p> <p>For more information, see the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a> documentation.</p>
<p>I have an Airflow 1.10.15 (a.k.a. Bridge Release) in my AWS Kubernetes cluster. It uses KubernetesExecutor.</p> <p>I have a Hello World KubernetesExecutor DAG which should print Hello World. When triggering the DAG, it creates a pod but it never prints the Hello World.</p> <p>Here are all the logs after the pod has been completed running: <a href="https://i.stack.imgur.com/LP51M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LP51M.png" alt="enter image description here" /></a></p> <p>Describing the pod will give me logs which has no errors or failures: <a href="https://i.stack.imgur.com/2ptQu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ptQu.png" alt="enter image description here" /></a></p>
<p>You should check Task logs, not Kubernetes logs. Kubernetes logs keep information about &quot;attempting to run&quot; the task (and it looks like it's all OK here).</p> <p>Now, when you log anything in the running tasks, it does not go to the K8s logs - it goes to the task logs. By default, when you configure Airflow, the logs for tasks are stored separately - basically every task has its own log. This is so that you can pull the logs and see them in the Airflow UI when you click on &quot;logs&quot; for this particular task execution.</p> <p>Just check it in the UI or in the &quot;${AIRFLOW_HOME}/logs&quot; folder.</p>
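For reference, with the default Airflow 1.10 `log_filename_template` each try of a task gets its own file under the log folder; a sketch of where a given attempt's log lands (the DAG, task, and home-dir names are made up for illustration):

```shell
# Default layout: {base_log_folder}/{dag_id}/{task_id}/{execution_date}/{try_number}.log
AIRFLOW_HOME=/opt/airflow        # assumption: adjust to your deployment
dag_id=hello_world               # hypothetical DAG id
task_id=say_hello                # hypothetical task id
execution_date=2021-08-06T17:00:00+00:00
try_number=1
log_path="${AIRFLOW_HOME}/logs/${dag_id}/${task_id}/${execution_date}/${try_number}.log"
echo "$log_path"
```

With KubernetesExecutor the worker pod writes there and (if remote logging is configured) ships the file before exiting, which is why the pod's own stdout stays nearly empty.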
<p>I would like to use kubectl cp to copy a file from a completed pod to my local host(local computer). I used <strong>kubectl cp /:/ </strong>, however, it gave me an <strong>error: cannot exec into a container in a completed pod; current phase is Succeeded</strong> error. Is there a way I can copy a file from a completed pod? It does not need to be kubectl cp. Any help appreciated!</p>
<p>Nope. If the pod is gone, it's gone for good. Only possibility would be if the data is stored in a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">PV</a> or some other external resource. Pods are <a href="https://devops.stackexchange.com/questions/653/what-is-the-definition-of-cattle-not-pets">cattle, not pets</a>.</p>
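For the next run, you can make the files survive by writing them to a volume backed by external storage. A minimal sketch (the PVC name, image, and paths are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: copy-me-later   # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: worker
      image: busybox
      # write results to the mounted volume instead of the container filesystem
      command: ["sh", "-c", "echo done > /output/result.txt"]
      volumeMounts:
        - name: output
          mountPath: /output
  volumes:
    - name: output
      persistentVolumeClaim:
        claimName: output-pvc   # a pre-created PVC; outlives the pod
```

After the pod completes, the data is still on the PV: any other pod (e.g. a throwaway debug pod) that mounts `output-pvc` can read `/output/result.txt`, and `kubectl cp` works against that running pod.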
<p>I am setting up Traefik in my CentOS VM. I tried to port-forward as specified here:</p> <p><a href="https://github.com/jakubhajek/traefik-workshop/tree/3cbbb3b8d3dbafcb2a56f3bb715fee41ba8ffe8b/exercise-2" rel="nofollow noreferrer">https://github.com/jakubhajek/traefik-workshop/tree/3cbbb3b8d3dbafcb2a56f3bb715fee41ba8ffe8b/exercise-2</a></p> <p>It displays the following, taking hours, and does nothing:</p> <pre><code>Forwarding from 127.0.0.1:9000 -&gt; 9000 Forwarding from [::1]:9000 -&gt; 9000 Handling connection for 9000 Handling connection for 9000 Handling connection for 9000 Handling connection for 9000 </code></pre> <p>Please advise what I should do to make the <code>kubectl port-forward</code> work.</p>
<p>kubectl port-forward makes a specific Kubernetes API request. That means the system running it needs access to the API server, and any traffic will get tunneled over a single HTTP connection. Note that the output you are seeing is expected: port-forward blocks in the foreground and prints &quot;Handling connection&quot; each time something connects, so you test it from a second terminal (e.g. <code>curl localhost:9000</code>).</p> <p>If it is saying 404 page not found, then there is probably something wrong with the deployment (application), since port 9000 is listening and connections are being opened. So check whether you have port-forwarded to the right pod.</p>
<p>TL;DR - gke 1.20 preemptible nodes cause pods to zombie into Failed/Shutdown</p> <p>We have been using GKE for a few years with clusters containing a mixture of both stable and preemptible node pools. Recently, since gke v1.20, we have started seeing preempted pods enter into a weird zombie state where they are described as:</p> <p>Status: Failed</p> <p>Reason: Shutdown</p> <p>Message: Node is shutting, evicting pods</p> <p>When this started occurring we were convinced it was related to our pods failing to properly handle the SIGTERM at preemption. We decided to eliminate our service software as a source of a problem by boiling it down to a simple service that mostly sleeps:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>/* eslint-disable no-console */ let exitNow = false process.on( 'SIGINT', () =&gt; { console.log( 'INT shutting down gracefully' ) exitNow = true } ) process.on( 'SIGTERM', () =&gt; { console.log( 'TERM shutting down gracefully' ) exitNow = true } ) const sleep = ( seconds ) =&gt; { return new Promise( ( resolve ) =&gt; { setTimeout( resolve, seconds * 1000 ) } ) } const Main = async ( cycles = 120, delaySec = 5 ) =&gt; { console.log( `Starting ${cycles}, ${delaySec} second cycles` ) for ( let i = 1; i &lt;= cycles &amp;&amp; !exitNow; i++ ) { console.log( `---&gt; ${i} of ${cycles}` ) await sleep( delaySec ) // eslint-disable-line } console.log( '*** Cycle Complete - exiting' ) process.exit( 0 ) } Main()</code></pre> </div> </div> </p> <p>This code is built into a docker image using the tini init to spawn the pod process running under nodejs (fermium-alpine image). 
No matter how we shuffle the signal handling it seems the pods never really shutdown cleanly, even though the logs suggest they are.</p> <p>Another oddity to this is that according to the Kubernetes Pod logs, we see the pod termination start and then gets cancelled:</p> <p>2021-08-06 17:00:08.000 EDT Stopping container preempt-pod</p> <p>2021-08-06 17:02:41.000 EDT Cancelling deletion of Pod preempt-pod</p> <p>We have also tried adding a preStop 15 second delay just to see if that has any effect, but nothing we try seems to matter - the pods become zombies. New replicas are started on the other nodes that are available in the pool, so it always maintains the minimum number of successfully running pods on the system.</p> <p>We are also testing the preemption cycle using a sim maintenance event:</p> <p>gcloud compute instances simulate-maintenance-event node-id</p>
<p>After poking around various posts I finally relented to running a cronjob every 9 minutes to avoid the alertManager trigger that occurs after pods have been stuck in shutdown for 10+ minutes. This still feels like a hack to me, but it works, and it forced me to dig in to k8s cronjob and RBAC.</p> <p>This post started me on the path: <a href="https://stackoverflow.com/questions/68344231/how-to-remove-kubernetes-shutdown-pods">How to remove Kubernetes &#39;shutdown&#39; pods</a></p> <p>And the resultant cronjob spec:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: pod-accessor-role namespace: default rules: - apiGroups: [&quot;&quot;] # &quot;&quot; indicates the core API group resources: [&quot;pods&quot;] verbs: [&quot;get&quot;, &quot;delete&quot;, &quot;watch&quot;, &quot;list&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: pod-access namespace: default subjects: - kind: ServiceAccount name: cronjob-sa namespace: default roleRef: kind: Role name: pod-accessor-role apiGroup: &quot;&quot; --- apiVersion: v1 kind: ServiceAccount metadata: name: cronjob-sa namespace: default --- apiVersion: batch/v1beta1 kind: CronJob metadata: name: cron-zombie-killer namespace: default spec: schedule: &quot;*/9 * * * *&quot; successfulJobsHistoryLimit: 1 jobTemplate: spec: template: metadata: name: cron-zombie-killer namespace: default spec: serviceAccountName: cronjob-sa restartPolicy: Never containers: - name: cron-zombie-killer imagePullPolicy: IfNotPresent image: bitnami/kubectl command: - &quot;/bin/sh&quot; args: - &quot;-c&quot; - &quot;kubectl get pods -n default --field-selector='status.phase==Failed' -o name | xargs kubectl delete -n default 2&gt; /dev/null&quot; status: {} </code></pre> <p>Note that the redirect of stderr to /dev/null is to simply avoid the error output from kubectl delete when the kubectl get doesn't find any pods in the failed state.</p> <p><strong>Update</strong> added missing 
&quot;delete&quot; verb from the role, and added the missing RoleBinding</p> <p><strong>Update</strong> added imagePullPolicy</p>
<p>How to define a network policy to prevent communication across pods in different namespaces within the same K8s cluster?</p>
<p>Check out the git repo <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/04-deny-traffic-from-other-namespaces.md" rel="noreferrer">here</a> for many NetworkPolicy use cases; the following is one of them.</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: namespace: default name: deny-from-other-namespaces spec: podSelector: matchLabels: ingress: - from: - podSelector: {} </code></pre> <blockquote> <p><strong>Note a few things about this manifest:</strong></p> <ul> <li><code>namespace: default</code> deploys it to the default namespace.</li> <li>It applies the policy to ALL pods in the default namespace, as <code>spec.podSelector.matchLabels</code> is empty and therefore selects all pods.</li> <li>It allows traffic only from pods in the default namespace, as <code>spec.ingress.from.podSelector</code> is empty and therefore selects all pods in that namespace.</li> </ul> </blockquote>
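<p>To check that the policy actually blocks cross-namespace traffic, you can run a throwaway client pod from another namespace (a rough sketch; the <code>web</code> service and <code>foo</code> namespace are placeholder names, not from the question):</p> <pre><code># something to connect to in the default namespace
kubectl run web --namespace default --image=nginx --labels=app=web --expose --port 80

# from a different namespace the request should now be blocked (times out)
kubectl create namespace foo
kubectl run test --namespace foo --rm -it --image=alpine -- wget -qO- --timeout=2 http://web.default

# from the default namespace it should still succeed
kubectl run test --namespace default --rm -it --image=alpine -- wget -qO- --timeout=2 http://web.default
</code></pre>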
<p>I'm preparing all the Ingress manifest files to keep the latest apiVersion (<strong>networking.k8s.io/v1</strong>) to upgrade my cluster from 1.19 to 1.22.</p> <p>I'm deleting the previous Ingress rule and then recreating:</p> <pre><code>k delete ingress/my-ingress k create -f /tmp/ingress.yaml </code></pre> <p>Unfortunately, the Ingress is created but with apiVersion <strong>extensions/v1beta1</strong> that's different for what I have on my manifest:</p> <pre class="lang-yaml prettyprint-override"><code>$ k get ingress/my-ingress -o yaml Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot; creationTimestamp: &quot;2021-08-11T19:42:08Z&quot; </code></pre> <p>Here is an example of the YAML I'm using:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot; labels: app.kubernetes.io/instance: my-app app.kubernetes.io/name: my-app name: my-ingress namespace: default spec: rules: - host: application.com http: paths: - path: / pathType: ImplementationSpecific backend: service: name: my-app port: number: 443 </code></pre> <p>Kubernetes version:</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.1&quot;, GitCommit:&quot;c4d752765b3bbac2237bf87cf0b1c2e307844666&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-12-18T12:09:25Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19+&quot;, GitVersion:&quot;v1.19.13-eks-8df270&quot;, GitCommit:&quot;8df2700a72a2598fa3a67c05126fa158fd839620&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-07-31T01:36:57Z&quot;, GoVersion:&quot;go1.15.14&quot;, 
Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>Ingress controller version (I upgraded from 0.41 to avoid any kind of issues):</p> <pre><code>Image: k8s.gcr.io/ingress-nginx/controller:v0.48.1@sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899 </code></pre>
<p><a href="https://github.com/kubernetes/kubernetes/issues/94761" rel="nofollow noreferrer">This is working as expected</a>; in particular, check <a href="https://github.com/kubernetes/kubernetes/issues/94761#issuecomment-691982480" rel="nofollow noreferrer">this GitHub answer</a>.</p> <p>When you create an Ingress object, it can be read via any API version: the server handles converting it into the requested version. In your request <code>get ingress/my-ingress -o yaml</code> you did not specify which version should be read. In that case kubectl searches the documents returned by the server and picks the first one that contains the requested resource, and that can be any version, as in your case.</p> <p>That is why, if you want to check a particular version, you can:</p> <ol> <li>Pass your manifest file with the request, since the version is specified in the file:</li> </ol> <pre><code> $ kubectl get -f ingress.yaml -o yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: ... </code></pre> <ol start="2"> <li>Alternatively, fully qualify the desired version in the get request:</li> </ol> <pre><code> $ kubectl get ingresses.v1.networking.k8s.io NAME CLASS HOSTS ADDRESS PORTS AGE my-ingress &lt;none&gt; application.com 80 12m $ kubectl get ingresses.v1beta1.networking.k8s.io Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress NAME CLASS HOSTS ADDRESS PORTS AGE my-ingress &lt;none&gt; application.com 80 13m </code></pre>
<p>I have a Kubernetes cluster that uses 1.17.17. I want to increase the CPU/RAM of a node using KOPS. When running <code>kops update cluster</code> command, I expect it would return the preview of my old instance type VS new instance type.</p> <p>However, it returns a long line of will create resources/will modify resources.</p> <p>I want to know why it shows a long log of changes it will execute instead of showing only the changes I made for instance type. Also, if this is safe to apply the changes. <a href="https://i.stack.imgur.com/Kf4Ak.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Kf4Ak.png" alt="enter image description here" /></a></p>
<p>After that cluster update you will run a rolling update on the cluster: the nodes are terminated one by one and replaced by new ones. While a node is going down, the pods running on it are shifted onto the remaining nodes. A small tip: remove any PodDisruptionBudgets first, since they can block node draining. The long log is fine, don't worry; it is normal for <code>kops update cluster</code> to preview every resource it manages, not only the instance type you changed.</p>
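<p>For context, a typical flow for changing the instance type with kops looks like this (a sketch; the cluster and instance-group names are placeholders):</p> <pre><code># edit the instance group and change machineType
kops edit ig nodes --name my-cluster.example.com

# preview the changes (the long list of managed resources is normal)
kops update cluster --name my-cluster.example.com

# apply them, then roll the nodes one by one
kops update cluster --name my-cluster.example.com --yes
kops rolling-update cluster --name my-cluster.example.com --yes
</code></pre>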
<p>I have the strange problem that a Spark job ran on Kubernetes fails with a lot of "Missing an output location for shuffle X" in jobs where there is a lot of shuffling going on. Increasing executor memory does not help. The same job run on just a single node of the Kubernetes cluster in local[*] mode runs fine however so I suspect it has to do with Kubernetes or underlying Docker. When an executor dies, the pods are deleted immediately so I cannot track down why it failed. Is there an option that keeps failed pods around so I can view their logs?</p>
<p><strong>If you're using the spark executor</strong>: There is a <code>deleteOnTermination</code> setting in the spark application yaml. See <a href="https://github.com/mesosphere/spark-on-k8s-operator/blob/master/docs/api-docs.md#sparkoperator.k8s.io/v1beta2.ExecutorSpec" rel="nofollow noreferrer">the spark-on-kubernetes README.md</a>.</p> <blockquote> <p><code>deleteOnTermination</code> - <em>(Optional)</em> <code>DeleteOnTermination</code> specifies whether executor pods should be deleted in case of failure or normal termination.<br><br> Maps to <code>spark.kubernetes.executor.deleteOnTermination</code>, which is available since Spark 3.0.</p> </blockquote> <p><strong>If you're using Kubernetes jobs</strong>: Set the job's <code>spec.ttlSecondsAfterFinished</code> parameter, or get the previous pod's logs with kubectl. There is also a setting for keeping failed jobs around if you're using cronjobs.</p>
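<p>With the Spark operator that setting would sit under the executor section of the application YAML (a fragment only; the application name and other fields are placeholders):</p> <pre><code>apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: my-spark-app
spec:
  executor:
    deleteOnTermination: false   # keep executor pods around after failure
</code></pre> <p>With plain <code>spark-submit</code> the equivalent should be <code>--conf spark.kubernetes.executor.deleteOnTermination=false</code>.</p>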
<p>I have an application deployed to kubernetes (AKS) where I have a mix of gRPC and http services. I initially added the route for a new gRPC service to the existing ingress which was previously serving only http. That didn't work and digging into it, I read that we need to add the <code>nginx.ingress.kubernetes.io/backend-protocol: GRPC</code> annotation, and that it applied to all routes, so we would need two separate ingress. I'm currently getting an exception <code>io.grpc.internal.ManagedChannelImpl$NameResolverListener</code> error trying to connect to the gRPC service with message <code>nodename nor servname provided, or not known</code>. I'm guessing that though when multiple paths within an Ingress match a request, precedence is given first to the longest matching path, that doesn't apply across the both ingress. So I would need to either use different hosts, or change the <code>/*</code> path so that it didn't also match <code>/results</code>? Or is there something else that I need to change in my configuration?</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: {{ .Release.Name }}-{{ .Chart.Name }}-ingress annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt spec: tls: - hosts: - {{ .Values.ingress.hosts.host }} secretName: {{ .Values.ingress.tls.secretName }} rules: - host: {{ .Values.ingress.hosts.host }} http: paths: - path: /graphql pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.graphqlServer.host }} port: number: 80 - path: /graphql/* pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.graphqlServer.host }} port: number: 80 - path: / pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.webUIServer.host }} port: number: 80 - path: /* pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.webUIServer.host }} port: number: 80 --- apiVersion: networking.k8s.io/v1 kind: 
Ingress metadata: name: {{ .Release.Name }}-{{ .Chart.Name }}-grpc annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/backend-protocol: GRPC nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; cert-manager.io/cluster-issuer: letsencrypt spec: tls: - hosts: - {{ .Values.ingress.hosts.host }} secretName: {{ .Values.ingress.tls.secretName }} rules: - host: {{ .Values.ingress.hosts.host }} http: paths: - path: /results pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.externalResults.host }} port: number: 9000 </code></pre>
<p>This wound up being resolved by creating a second host name that pointed to our k8s cluster. I changed the route for the grpc service to be the root path and pathType of ImplementationSpecific.</p> <pre><code> - path: / pathType: ImplementationSpecific </code></pre> <p>Both host names needed to be included in the tls section of both ingress. I was getting an SSL exception after changing the route but not updating the hosts in the tls section of each ingress.</p> <pre><code>Channel Pipeline: [SslHandler#0, ProtocolNegotiators$ClientTlsHandler#0, WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0] at io.grpc.Status.asRuntimeException(Status.java:533) at akka.grpc.internal.UnaryCallAdapter.onClose(UnaryCallAdapter.scala:40) at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:413) | =&gt; cat io.grpc.internal.ClientCallImpl.access$500(ClientCallImpl.java:66) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:742) at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:721) at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) stderr: at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: javax.net.ssl.SSLHandshakeException: General OpenSslEngine problem at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.handshakeException(ReferenceCountedOpenSslEngine.java:1771) at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.wrap(ReferenceCountedOpenSslEngine.java:776) at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:511) at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:1079) at 
io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.wrapNonAppData(SslHandler.java:970) at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1443) at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1275) at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1322) at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650) at 
io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576) at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) stderr: at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) stderr: at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching grpc.aks.dev.app.cycleautomation.com found. stderr: at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:214) at sun.security.util.HostnameChecker.match(HostnameChecker.java:96) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:462) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:428) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:261) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:144) at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSslTlsv13X509ExtendedTrustManager.checkServerTrusted(OpenSslTlsv13X509ExtendedTrustManager.java:223) at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslClientContext$ExtendedTrustManagerVerifyCallback.verify(ReferenceCountedOpenSslClientContext.java:261) at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslContext$AbstractCertificateVerifier.verify(ReferenceCountedOpenSslContext.java:700) at io.grpc.netty.shaded.io.netty.internal.tcnative.SSL.readFromSSL(Native Method) at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.readPlaintextData(ReferenceCountedOpenSslEngine.java:595) at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1202) at 
io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1324) at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1367) at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:206) at io.grpc.netty.shaded.io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1380) ... 21 more Suppressed: javax.net.ssl.SSLHandshakeException: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.sslReadErrorResult(ReferenceCountedOpenSslEngine.java:1287) at io.grpc.netty.shaded.io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1248) ... 25 more </code></pre> <p>The final yaml looked like this:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: {{ .Release.Name }}-{{ .Chart.Name }}-ingress annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt spec: tls: - hosts: - {{ .Values.ingress.hosts.host }} - {{ .Values.ingress.grpc.host }} secretName: {{ .Values.ingress.tls.secretName }} rules: - host: {{ .Values.ingress.hosts.host }} http: paths: - path: /graphql pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.graphqlServer.host }} port: number: 80 - path: /graphql/* pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.graphqlServer.host }} port: number: 80 - path: / pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.webUIServer.host }} port: number: 80 - path: /* pathType: Prefix backend: service: name: {{ .Release.Name }}-{{ .Values.services.webUIServer.host }} port: number: 80 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: {{ .Release.Name }}-{{ .Chart.Name }}-grpc annotations: kubernetes.io/ingress.class: nginx 
nginx.ingress.kubernetes.io/backend-protocol: GRPC nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; cert-manager.io/cluster-issuer: letsencrypt spec: tls: - hosts: - {{ .Values.ingress.hosts.host }} - {{ .Values.ingress.grpc.host }} secretName: {{ .Values.ingress.tls.secretName }} rules: - host: {{ .Values.ingress.hosts.host }} http: paths: - path: / pathType: ImplementationSpecific backend: service: name: {{ .Release.Name }}-{{ .Values.services.externalResults.host }} port: number: 9000 </code></pre> <p>Then I was able to connect to the grpc service over port 443 w/ tls enabled and just using the host name with no path in my connection.</p>
<p>I deployed an AWS EKS cluster via Terraform. I also deployed Consul following HashiCorp’s tutorial, and I see the nodes in Consul’s UI.</p> <p>Now I’m wondering how all the Consul agents will know about the pods I deploy? I deploy something and it’s not shown anywhere on Consul.</p> <p>I can’t find any documentation as to how to register pods (services) on Consul via the node’s Consul agent; do I need to configure that somewhere? Should I not use the node’s agent and register the service straight from the pod? HashiCorp discourages this, since it may increase resource utilization depending on how many pods one deploys on a given node. But then how does the node’s agent know about my services deployed on that node?</p> <p>Moreover, when I deploy a pod in a node, ssh into the node, and install Consul, Consul’s agent can’t find the Consul server (as opposed to the node itself, which can find it).</p> <p>EDIT:</p> <p>Bottom line is I can't find WHERE to add the configuration. If I execute ON THE POD:</p> <p><code>consul members</code></p> <p>It works properly and I get:</p> <pre><code>Node Address Status Type Build Protocol DC Segment consul-consul-server-0 10.0.103.23:8301 alive server 1.10.0 2 full &lt;all&gt; consul-consul-server-1 10.0.101.151:8301 alive server 1.10.0 2 full &lt;all&gt; consul-consul-server-2 10.0.102.112:8301 alive server 1.10.0 2 full &lt;all&gt; ip-10-0-101-129.ec2.internal 10.0.101.70:8301 alive client 1.10.0 2 full &lt;default&gt; ip-10-0-102-175.ec2.internal 10.0.102.244:8301 alive client 1.10.0 2 full &lt;default&gt; ip-10-0-103-240.ec2.internal 10.0.103.245:8301 alive client 1.10.0 2 full &lt;default&gt; ip-10-0-3-223.ec2.internal 10.0.3.249:8301 alive client 1.10.0 2 full &lt;default&gt; </code></pre> <p>But if I execute:</p> <pre><code># consul agent -datacenter=voip-full -config-dir=/etc/consul.d/ -log-file=log-file -advertise=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4) </code></pre> <p>I get the following error:</p>
<pre><code>==&gt; Starting Consul agent... Version: '1.10.1' Node ID: 'f10070e7-9910-06c7-0e12-6edb6cc4c9b9' Node name: 'ip-10-0-3-223.ec2.internal' Datacenter: 'voip-full' (Segment: '') Server: false (Bootstrap: false) Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600) Cluster Addr: 10.0.3.223 (LAN: 8301, WAN: 8302) Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false ==&gt; Log data will now stream in as it occurs: 2021-08-16T18:23:06.936Z [WARN] agent: skipping file /etc/consul.d/consul.env, extension must be .hcl or .json, or config format must be set 2021-08-16T18:23:06.936Z [WARN] agent: Node name &quot;ip-10-0-3-223.ec2.internal&quot; will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes. 2021-08-16T18:23:06.946Z [WARN] agent.auto_config: skipping file /etc/consul.d/consul.env, extension must be .hcl or .json, or config format must be set 2021-08-16T18:23:06.947Z [WARN] agent.auto_config: Node name &quot;ip-10-0-3-223.ec2.internal&quot; will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes. 2021-08-16T18:23:06.948Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: ip-10-0-3-223.ec2.internal 10.0.3.223 2021-08-16T18:23:06.948Z [INFO] agent.router: Initializing LAN area manager 2021-08-16T18:23:06.950Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=udp 2021-08-16T18:23:06.950Z [WARN] agent.client.serf.lan: serf: Failed to re-join any previously known node 2021-08-16T18:23:06.950Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=tcp 2021-08-16T18:23:06.951Z [INFO] agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http 2021-08-16T18:23:06.951Z [WARN] agent: DEPRECATED Backwards compatibility with pre-1.9 metrics enabled. These metrics will be removed in a future version of Consul. 
Set `telemetry { disable_compat_1.9 = true }` to disable them. 2021-08-16T18:23:06.953Z [INFO] agent: started state syncer 2021-08-16T18:23:06.953Z [INFO] agent: Consul agent running! 2021-08-16T18:23:06.953Z [WARN] agent.router.manager: No servers available 2021-08-16T18:23:06.954Z [ERROR] agent.anti_entropy: failed to sync remote state: error=&quot;No known Consul servers&quot; 2021-08-16T18:23:34.169Z [WARN] agent.router.manager: No servers available 2021-08-16T18:23:34.169Z [ERROR] agent.anti_entropy: failed to sync remote state: error=&quot;No known Consul servers&quot; </code></pre> <p>So where to add the config?</p> <p>I also tried adding a service in k8s pointing to the pod, but the service doesn't come up on consul's UI...</p> <p>What do you guys recommend?</p> <p>Thanks</p>
<blockquote> <p>Consul knows where these services are located because each service registers with its local Consul client. Operators can register services manually, configuration management tools can register services when they are deployed, or container orchestration platforms can register services automatically via integrations.</p> </blockquote> <p>If you are planning to use the manual option, you have to register the service with Consul yourself.</p> <p>Something like:</p> <pre><code>echo '{ &quot;service&quot;: { &quot;name&quot;: &quot;web&quot;, &quot;tags&quot;: [ &quot;rails&quot; ], &quot;port&quot;: 80 } }' &gt; ./consul.d/web.json </code></pre> <p>You can find a good example at: <a href="https://thenewstack.io/implementing-service-discovery-of-microservices-with-consul/" rel="nofollow noreferrer">https://thenewstack.io/implementing-service-discovery-of-microservices-with-consul/</a></p> <p>This document also covers detailed configuration of health checks and service discovery: <a href="https://cloud.spring.io/spring-cloud-consul/multi/multi_spring-cloud-consul-discovery.html" rel="nofollow noreferrer">https://cloud.spring.io/spring-cloud-consul/multi/multi_spring-cloud-consul-discovery.html</a></p> <p>Official documentation: <a href="https://learn.hashicorp.com/tutorials/consul/get-started-service-discovery" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/consul/get-started-service-discovery</a></p>
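<p>After writing the service definition, the local agent has to pick it up; assuming the file sits under the agent's <code>-config-dir</code>, either of these should work (command sketch):</p> <pre><code># re-read all definitions under the config directory
consul reload

# or register the file directly via the CLI
consul services register ./consul.d/web.json
</code></pre>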
<p>My goal is to convert values in Values.yaml into the following:</p> <pre><code>CUSTOM_VARIABLE: &quot;TEST_ENV_1=devil,TEST_ENV_2=god,TEST_ENV_3=angel&quot; </code></pre> <pre class="lang-yaml prettyprint-override"><code>### Values.yaml env: TEST_ENV_1: devil TEST_ENV_2: god TEST_ENV_3: angel </code></pre> <p>The below template almost does this but I'm getting comma at the end: <code>TEST_ENV_1=devil,TEST_ENV_2=god,TEST_ENV_3=angel,</code>.</p> <pre class="lang-yaml prettyprint-override"><code>### _envVars.tpl {{ define &quot;envVars&quot; }} ... - name: CUSTOM_VARIABLE value: &quot; {{- range $key, $value := .Values.env -}} {{- printf &quot;%s=%s,&quot; $key $value -}} {{- end -}} &quot; ... {{- end }} </code></pre> <p>Is there a way to apply template function (e.g. <code>trunc</code> to remove last symbol) to a range result in my case?</p>
<p>Try something like:</p> <pre><code>{{range $i, $e := $}} {{if $i}},{{end}} {{$e}}{{end}} </code></pre> <p>The trick is to emit the separator <em>before</em> each element instead of after it, guarded by the loop index: <code>{{if $i}}</code> is false for the first element (index 0), so no leading comma is printed for it, and every later element is prefixed with a comma.</p> <pre><code>{{- range $i, $e := . -}} {{if $i}}, {{end}}prefix_{{$e}} {{- end}} </code></pre> <p>The loop above gives output like: <code>prefix_one, prefix_two, prefix_three</code></p> <p><a href="https://play.golang.org/p/KuRh55BHna8" rel="nofollow noreferrer">https://play.golang.org/p/KuRh55BHna8</a></p> <p>Read more at: <a href="https://groups.google.com/g/golang-nuts/c/XBScetK-guk/m/Bh7ZFz6R3wQJ" rel="nofollow noreferrer">https://groups.google.com/g/golang-nuts/c/XBScetK-guk/m/Bh7ZFz6R3wQJ</a></p>
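<p>Since Helm templates are Go templates, the same index trick can be tried outside Helm with Go's <code>text/template</code>; this is a minimal, self-contained sketch for the <code>CUSTOM_VARIABLE</code> case from the question:</p> <pre><code>package main

import (
	"fmt"
	"strings"
	"text/template"
)

// joinPairs joins the entries without a trailing comma: {{if $i}} is
// false for index 0, so the comma is printed before every element
// except the first.
func joinPairs(pairs []string) string {
	tmpl := template.Must(template.New("env").Parse(
		`{{range $i, $e := .}}{{if $i}},{{end}}{{$e}}{{end}}`))
	b := new(strings.Builder)
	if err := tmpl.Execute(b, pairs); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	pairs := []string{"TEST_ENV_1=devil", "TEST_ENV_2=god", "TEST_ENV_3=angel"}
	fmt.Println(joinPairs(pairs)) // TEST_ENV_1=devil,TEST_ENV_2=god,TEST_ENV_3=angel
}
</code></pre> <p>In a Helm chart the same result can also be had with the Sprig helpers, e.g. appending each <code>printf "%s=%s" $key $value</code> to a <code>list</code> inside the range and then applying <code>join ","</code> to it.</p>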
<p><a href="https://i.stack.imgur.com/8M0Qx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8M0Qx.png" alt="enter image description here" /></a></p> <p>I created an EC2 instance and an EKS cluster in the same AWS account. In order to use the EKS cluster from EC2, I have to grant necessary permissions to it.</p> <p>I added an instance profile role with some EKS operation permissions. Its role arn is <code>arn:aws:iam::11111111:role/ec2-instance-profile-role</code>(A) on dashboard. But in the EC2 instance, it can be found as <code>arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000</code>(B).</p> <pre><code>$ aws sts get-caller-identity { &quot;Account&quot;: &quot;11111111&quot;, &quot;UserId&quot;: &quot;AAAAAAAAAAAAAAA:i-000000000000&quot;, &quot;Arn&quot;: &quot;arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000&quot; } </code></pre> <p>I also created an <code>aws-auth</code> config map to set into Kubernetes' system config in EKS, in order to allow the EC2 instance profile role can be registered and accessible. I tried both A and B to set into the mapRoles, all of them got the same issue. 
When I run <code>kubectl</code> command on EC2:</p> <pre><code>$ aws eks --region aws-region update-kubeconfig --name eks-cluster-name $ kubectl config view --minify apiVersion: v1 clusters: - cluster: certificate-authority-data: DATA+OMITTED server: https://xxxxxxxxxxxxxxxxxxxxxxxxxxxx.aw1.aws-region.eks.amazonaws.com name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name contexts: - context: cluster: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name user: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name current-context: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name kind: Config preferences: {} users: - name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 args: - --region - aws-region - eks - get-token - --cluster-name - eks-cluster-name - --role - arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000 command: aws env: null provideClusterInfo: false $kubectl get svc error: You must be logged in to the server (Unauthorized) </code></pre> <p>I also checked the type of the assumed role. It's <code>Service</code> but not <code>AWS</code>.</p> <p>It seems this type is necessary.</p> <pre><code>{ &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Principal&quot;: { &quot;AWS&quot;: &quot;arn:aws:iam:: 333333333333:root&quot; }, &quot;Action&quot;: &quot;sts:AssumeRole&quot; } } </code></pre> <p><a href="https://stackoverflow.com/questions/59704676/terraform-aws-assume-role/59705497#59705497">Terraform aws assume role</a></p> <p>But I tried to create a new assume role with <code>AWS</code> type and set it to Kubernetes' <code>aws-auth</code> config map, still the same issue.</p> <p>How to use it? Do I need to create a new IAM user to use?</p>
<pre><code>- name: external-staging user: exec: apiVersion: client.authentication.k8s.io/v1alpha1 args: - exec - test-dev - -- - aws - eks - get-token - --cluster-name - eksCluster-1234 - --role-arn - arn:aws:iam::3456789002:role/eks-cluster-admin-role-e65f32f command: aws-vault env: null </code></pre> <p>This config file works for me. The important parts are <code>--role-arn</code> and <code>command: aws-vault</code>.</p>
<p>Does pod running on a Kubernetes node gets rescheduled if the Kubernetes nodes gets killed due to power failure. And if gets rescheduled and if that nodes comes up what happens to the pod running on that node?</p>
<p>Warning: this answer is based on the assumption you don't create an ad-hoc Pod, but instead manage your pods via a Deployment, StatefulSet, DaemonSet or any other controller. If it's an ad-hoc pod you created manually - it won't be rescheduled.</p> <blockquote> <p>Does pod running on a Kubernetes node gets rescheduled if the Kubernetes nodes gets killed due to power failure.</p> </blockquote> <p>Whether and when it gets rescheduled depends on its <code>node.kubernetes.io/not-ready</code> and <code>node.kubernetes.io/unreachable</code> tolerations.</p> <p>As of current stable kubernetes here are the defaults:</p> <pre class="lang-yaml prettyprint-override"><code> tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 </code></pre> <p>So, with these defaults - a pod on a died node would be rescheduled after 300 seconds (5 minutes).</p> <blockquote> <p>And if gets rescheduled and if that nodes comes up what happens to the pod running on that node?</p> </blockquote> <p>If a node is restarted - then a pod won't be started on next boot, and the kubelet would cleanup everything needed.</p>
<p><strong>Bug description</strong></p> <p>Installation gets timeout errors and <code>kubectl get pods -n istio-system</code> shows <code>ImagePullBackOff</code>.</p> <p><code>kubectl describe pod istiod-xxx-xxx -n istio-system</code></p> <blockquote> <p>Failed to pull image &quot;our-registry:5000/pilot:1.10.3&quot;: rpc error: code = Unknown desc = Error response from daemon: Head https://our-registry:5000/v2/pilot/manifests/1.10.3: no basic auth credentials</p> </blockquote> <p><strong>Affected product area (please put an X in all that apply)</strong></p> <p>[x] Installation</p> <p><strong>Expected behavior</strong></p> <p>Successful installation with <code>istioctl install --set profile=demo --set hub=our-registry:5000</code></p> <p><strong>Steps to reproduce the bug</strong></p> <ul> <li><p>Create the <code>istio-system</code> namespace.</p> </li> <li><p>Set <code>docker-registry</code> user credentials for the <code>istio-system</code> namespace.</p> </li> <li><p><code>istioctl manifest generate --set profile=demo --set hub=our-registry:5000 &gt; new-generated-manifest.yaml</code></p> </li> <li><p>Verify it has the proper images with <code>our-registry:5000</code></p> </li> <li><p>Pull and push the required images to <code>our-registry:5000</code></p> </li> <li><p><code>istioctl install --set profile=demo --set hub=our-registry:5000</code></p> </li> </ul> <p><strong>Version</strong></p> <p>Kubernetes : v1.21</p> <p>Istio : 1.10.3 / 1.7.3</p> <p><strong>How was Istio installed?</strong></p> <p><code>istioctl install --set profile=demo --set hub=our-registry:5000</code></p> <p>[References]</p> <ol> <li>Tried to set up imagePullSecrets as described <a href="https://stackoverflow.com/questions/63721552/install-istio-by-using-istioctl-from-private-registry">here</a>, but it gives a &quot;JSON object&quot; error.</li> <li><a href="https://github.com/istio/istio/issues/27036" rel="nofollow noreferrer">This issue</a> describes using it in charts, but I don't know how they applied it.</li> </ol> <p>Originally posted as an <a href="https://github.com/istio/istio/issues/34636" rel="nofollow noreferrer">issue</a>.</p>
<p>There are two ways to circumvent this issue.</p> <h3>If installing with <code>istioctl install</code></h3> <p>When using <code>istioctl install</code>, provide a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secret</a> with docker-registry auth details via <code>--set values.global.imagePullSecrets</code>, like this:</p> <pre class="lang-text prettyprint-override"><code>istioctl install [other options] --set values.global.imagePullSecrets[0]=&lt;auth-secret&gt; </code></pre> <p>Where <code>&lt;auth-secret&gt;</code> is the secret created beforehand on the cluster.<br /> You can read more about using secrets with a docker repository <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">here</a>.</p> <h3>If installing using the Istio operator</h3> <p>When installing Istio with the operator from a private registry, you have to pass the proper YAML:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator ... spec: profile: demo #as an example values: global: imagePullSecrets: - &lt;auth-secret&gt; ... </code></pre> <p>Again, <code>&lt;auth-secret&gt;</code> must be created beforehand.</p>
<p>I'm a newbie to k8s. I have a homework assignment and this is my situation:<br> There is a microservice-oriented app, built from about ten containers. It had a <code>docker-compose</code> file for easy setup. Now my mission is to deploy it to Kubernetes. My idea: convert the <code>docker-compose</code> file to k8s manifests with <code>kompose</code>, and create a helm chart for each service.<br> My question is: do I have to modify each chart one by one, or is there a way to generate <code>values.yaml</code> based on an existing k8s manifest? For example, from this: <br></p> <pre><code># deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.22.0 (955b78124) creationTimestamp: null labels: io.kompose.service: bookstore-account-service name: bookstore-account-service ... </code></pre> <p>to this, automatically:</p> <pre><code># deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: annotations: kompose.cmd: {{ .Values.cmd }} kompose.version: {{ .Values.ver }} creationTimestamp: null labels: io.kompose.service: {{ .Values.name }} name: {{ .Values.name }} ... </code></pre> <pre><code># values.yaml cmd: kompose convert ver: 1.22.0 (955b78124) name: bookstore-account-service </code></pre> <p>p/s: sorry for my bad English, it's not my first language :D</p>
<p>The Helm <code>values.yaml</code> file is the main point where you can configure the chart at deployment time. On the one hand, you can't configure anything that's not referenced in <code>.Values</code>; on the other hand, you usually don't want every individual line of the YAML file to be configurable.</p> <p>If I were going to approach this, I'd start by <a href="https://docs.helm.sh/docs/helm/helm_create/" rel="nofollow noreferrer"><code>helm create</code></a> a new chart. I'd then switch to the <code>templates</code> directory, move aside most of the boilerplate there (but leave the generated <code>_helpers.tpl</code> file), and run <a href="https://kompose.io/user-guide/#kompose-convert" rel="nofollow noreferrer"><code>kompose convert</code></a>. This will generate a set of YAML files, though with no Helm templating.</p> <p>From here I'd edit the files to make them match typical Helm usage. Look at the original files from <code>helm create</code> (or <a href="https://github.com/helm/helm/blob/main/pkg/chartutil/create.go" rel="nofollow noreferrer">in the Helm source</a>) for examples. I would expect the edited <code>deployment.yaml</code> to look like:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: {{- /* delete the Kompose annotations: and empty creationTimestamp: */}} labels: {{- /* get a standard set of labels from _helpers.tpl */}} {{- include &quot;bookstore.labels&quot; . | nindent 4 }} {{- /* get a standard name from _helpers.tpl */}} name: {{ include &quot;bookstore.fullname&quot; . }} </code></pre> <p>What should go in the <code>values.yaml</code> file, then? These are things you'd need to configure <em>at deploy time</em>. If you need to override the container <code>command:</code> or <code>args:</code>, these would usually be fixed, but if you needed to supply some sort of credentials or host name, those could vary per-deployment. 
(If you <code>helm install</code>ed the chart twice, what would be <em>different</em> between the installs?) The <code>helm create</code> template makes resource limits configurable, since these can vary heavily based on the actual workload:</p> <pre class="lang-yaml prettyprint-override"><code># deployment.yaml (from helm/create.go linked above) resources: {{- toYaml .Values.resources | nindent 12 }} </code></pre> <pre class="lang-yaml prettyprint-override"><code># values.yaml (also from helm/create.go) resources: {} </code></pre> <p>You could deploy this with a specific set of values here:</p> <pre class="lang-yaml prettyprint-override"><code># values.dev.yaml resources: requests: memory: 256Mi limits: memory: 1Gi </code></pre> <pre class="lang-yaml prettyprint-override"><code># values.prod.yaml resources: requests: memory: 2Gi limits: memory: 4Gi </code></pre> <pre class="lang-sh prettyprint-override"><code>helm install bookstore . -f values.dev.yaml </code></pre> <p>If you had preserved, for example, the &quot;what version of Kompose generated this file&quot; annotation, there'd be no reason to change that between environments, and so you could just leave that as a fixed string.</p>
<p>Following the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">instructions</a>, I am trying to deploy my pyspark app on the Azure AKS free tier with <code>spark.executor.instances=5</code>:</p> <pre><code>spark-submit \ --master k8s://https://xxxxxxx-xxxxxxx.hcp.westeurope.azmk8s.io:443 \ --deploy-mode cluster \ --name sparkbasics \ --conf spark.executor.instances=5 \ --conf spark.kubernetes.container.image=aosb06.azurecr.io/sparkbasics:v300 \ local:///opt/spark/work-dir/main.py </code></pre> <p>Everything works fine (including the application itself), except that I see no executor pods at all, only the driver pod.</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE sparkbasics-f374377b3c78ac68-driver 0/1 Completed 0 52m </code></pre> <p>The Dockerfile is from the Spark distribution.</p> <p>What could be the issue? Is there a problem with resource allocation?</p> <p>The driver logs seem to show no issues.</p> <pre><code>kubectl logs &lt;driver-pod&gt; 2021-08-12 22:25:54,332 INFO spark.SparkContext: Running Spark version 3.1.2 2021-08-12 22:25:54,378 INFO resource.ResourceUtils: ============================================================== 2021-08-12 22:25:54,378 INFO resource.ResourceUtils: No custom resources configured for spark.driver. 
2021-08-12 22:25:54,379 INFO resource.ResourceUtils: ============================================================== 2021-08-12 22:25:54,379 INFO spark.SparkContext: Submitted application: SimpleApp 2021-08-12 22:25:54,403 INFO resource.ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -&gt; name: cores, amount: 1, script: , vendor: , memory -&gt; name: memory, amount: 1024, script: , vendor: , offHeap -&gt; name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -&gt; name: cpus, amount: 1.0) 2021-08-12 22:25:54,422 INFO resource.ResourceProfile: Limiting resource is cpu 2021-08-12 22:25:54,422 INFO resource.ResourceProfileManager: Added ResourceProfile id: 0 2021-08-12 22:25:54,475 INFO spark.SecurityManager: Changing view acls to: 185,aovsyannikov 2021-08-12 22:25:54,475 INFO spark.SecurityManager: Changing modify acls to: 185,aovsyannikov 2021-08-12 22:25:54,475 INFO spark.SecurityManager: Changing view acls groups to: 2021-08-12 22:25:54,475 INFO spark.SecurityManager: Changing modify acls groups to: 2021-08-12 22:25:54,475 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(185, aovsyannikov); groups with view permissions: Set(); users with modify permissions: Set(185, aovsyannikov); groups with modify permissions: Set() 2021-08-12 22:25:54,717 INFO util.Utils: Successfully started service 'sparkDriver' on port 7078. 
2021-08-12 22:25:54,781 INFO spark.SparkEnv: Registering MapOutputTracker 2021-08-12 22:25:54,818 INFO spark.SparkEnv: Registering BlockManagerMaster 2021-08-12 22:25:54,843 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 2021-08-12 22:25:54,844 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 2021-08-12 22:25:54,848 INFO spark.SparkEnv: Registering BlockManagerMasterHeartbeat 2021-08-12 22:25:54,862 INFO storage.DiskBlockManager: Created local directory at /var/data/spark-1e9aa64b-e0a1-44ae-a097-ebb3c2f32404/blockmgr-c51b9095-5426-4a00-b17a-461de2b80357 2021-08-12 22:25:54,892 INFO memory.MemoryStore: MemoryStore started with capacity 413.9 MiB 2021-08-12 22:25:54,909 INFO spark.SparkEnv: Registering OutputCommitCoordinator 2021-08-12 22:25:55,023 INFO util.log: Logging initialized @3324ms to org.sparkproject.jetty.util.log.Slf4jLog 2021-08-12 22:25:55,114 INFO server.Server: jetty-9.4.40.v20210413; built: 2021-04-13T20:42:42.668Z; git: b881a572662e1943a14ae12e7e1207989f218b74; jvm 1.8.0_275-b01 2021-08-12 22:25:55,139 INFO server.Server: Started @3442ms 2021-08-12 22:25:55,184 INFO server.AbstractConnector: Started ServerConnector@59b3b32{HTTP/1.1, (http/1.1)}{0.0.0.0:4040} 2021-08-12 22:25:55,184 INFO util.Utils: Successfully started service 'SparkUI' on port 4040. 
</code></pre> <pre><code>kubectl describe pod &lt;driver-pod&gt; Name: sparkbasics-f374377b3c78ac68-driver Namespace: default Priority: 0 Node: aks-default-31057657-vmss000000/10.240.0.4 Start Time: Fri, 13 Aug 2021 01:25:47 +0300 Labels: spark-app-selector=spark-256cc7f64af9451b89e0098397980974 spark-role=driver Annotations: &lt;none&gt; Status: Succeeded IP: 10.244.0.28 IPs: IP: 10.244.0.28 Containers: spark-kubernetes-driver: Container ID: containerd://b572a4056014cd4b0520b808d64d766254d30c44ba12fc98717aee3b4814f17d Image: aosb06.azurecr.io/sparkbasics:v300 Image ID: aosb06.azurecr.io/sparkbasics@sha256:965393784488025fffc7513edcb4a62333ba59a5ee3076346fd8d335e1715213 Ports: 7078/TCP, 7079/TCP, 4040/TCP Host Ports: 0/TCP, 0/TCP, 0/TCP Args: driver --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.deploy.PythonRunner local:///opt/spark/work-dir/main.py State: Terminated Reason: Completed Exit Code: 0 Started: Fri, 13 Aug 2021 01:25:51 +0300 Finished: Fri, 13 Aug 2021 01:56:40 +0300 Ready: False Restart Count: 0 Limits: memory: 1433Mi Requests: cpu: 1 memory: 1433Mi Environment: SPARK_USER: aovsyannikov SPARK_APPLICATION_ID: spark-256cc7f64af9451b89e0098397980974 SPARK_DRIVER_BIND_ADDRESS: (v1:status.podIP) SB_KEY_STORAGE: &lt;set to the key 'STORAGE' in secret 'sparkbasics'&gt; Optional: false SB_KEY_OPENCAGE: &lt;set to the key 'OPENCAGE' in secret 'sparkbasics'&gt; Optional: false SB_KEY_STORAGEOUT: &lt;set to the key 'STORAGEOUT' in secret 'sparkbasics'&gt; Optional: false SPARK_LOCAL_DIRS: /var/data/spark-1e9aa64b-e0a1-44ae-a097-ebb3c2f32404 SPARK_CONF_DIR: /opt/spark/conf Mounts: /opt/spark/conf from spark-conf-volume-driver (rw) /var/data/spark-1e9aa64b-e0a1-44ae-a097-ebb3c2f32404 from spark-local-dir-1 (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-wlqjt (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: spark-local-dir-1: Type: EmptyDir (a temporary 
directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; spark-conf-volume-driver: Type: ConfigMap (a volume populated by a ConfigMap) Name: spark-drv-6f83b17b3c78af1f-conf-map Optional: false default-token-wlqjt: Type: Secret (a volume populated by a Secret) SecretName: default-token-wlqjt Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: &lt;none&gt; </code></pre>
<p>I found a mistake in the pyspark app itself.</p> <pre><code> ... SparkSession.builder.master(&quot;local&quot;) ... </code></pre> <p>It should be without the hardcoded master, so that the <code>--master k8s://...</code> passed to <code>spark-submit</code> takes effect:</p> <pre><code> ... SparkSession.builder ... </code></pre> <p>As simple as that :(</p>
<p>I have a Django app that uses the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">official Kubernetes client for python</a>; it works fine, but it can only deploy images from public registries.</p> <p>Is there a way to perform a login and then let the Kubernetes client pull a private image freely? I wouldn't like to execute direct <code>cmd</code> commands for the login and the image pull. Thanks!</p>
<p>Actually it's pretty easy to do using the official Kubernetes Python client. You need to do two things:</p> <ul> <li>create a secret of type <code>dockerconfigjson</code> (this can be done from the command line or using the Python client) - this is where you put your credentials</li> <li>add this secret to your deployment / pod definition using <code>imagePullSecrets</code>, so Kubernetes can pull images from private repositories</li> </ul> <p><strong>Create a secret of type <code>dockerconfigjson</code>:</strong></p> <p>Replace <code>&lt;something&gt;</code> with your data.</p> <p>Command line:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create secret docker-registry private-registry \ --docker-server=&lt;your-registry-server&gt; --docker-username=&lt;your-name&gt; \ --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; </code></pre> <p>Equivalent in the Kubernetes Python client (remember to pass the <code>password</code> variable in a secure way; for example, check <a href="https://stackoverflow.com/questions/15209978/where-to-store-secret-keys-django">this</a>):</p> <pre class="lang-py prettyprint-override"><code>import base64 import json from kubernetes import client, config config.load_kube_config() v1 = client.CoreV1Api() # Credentials username = &lt;your-name&gt; password = &lt;your-pword&gt; mail = &lt;your-email&gt; secret_name = &quot;private-registry&quot; namespace = &quot;default&quot; # Address of Docker repository - in case of Docker Hub just use https://index.docker.io/v1/ docker_server = &lt;your-registry-server&gt; # Create auth token auth_decoded = username + &quot;:&quot; + password auth_decoded_bytes = auth_decoded.encode('ascii') base64_auth_message_bytes = base64.b64encode(auth_decoded_bytes) base64_auth_message = base64_auth_message_bytes.decode('ascii') cred_payload = { &quot;auths&quot;: { docker_server: { &quot;username&quot;: username, &quot;password&quot;: password, &quot;email&quot;: mail, &quot;auth&quot;: 
base64_auth_message } } } data = { &quot;.dockerconfigjson&quot;: base64.b64encode( json.dumps(cred_payload).encode() ).decode() } secret = client.V1Secret( api_version=&quot;v1&quot;, data=data, kind=&quot;Secret&quot;, metadata=dict(name=secret_name, namespace=namespace), type=&quot;kubernetes.io/dockerconfigjson&quot;, ) v1.create_namespaced_secret(namespace, body=secret) </code></pre> <p><strong>Add this secret to your deployment / pod definition using the <code>imagePullSecrets</code> option:</strong></p> <p>Now, let's move on to using the newly created secret. Depending on how you want to deploy the pod / deployment from Python code, there are two ways: apply a <code>yaml</code> file, or create the pod / deployment manifest directly in the code. I will show both ways. As before, replace <code>&lt;something&gt;</code> with your data.</p> <p>Example <code>yaml</code> file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: private-registry-pod spec: containers: - name: private-registry-container image: &lt;your-private-image&gt; imagePullSecrets: - name: private-registry </code></pre> <p>In the last line we are referring to the <code>private-registry</code> secret created in the previous step.</p> <p>Let's apply this <code>yaml</code> file using the Kubernetes Python client:</p> <pre><code>from os import path import yaml from kubernetes import client, config config.load_kube_config() v1 = client.CoreV1Api() config_yaml = &quot;pod.yaml&quot; with open(path.join(path.dirname(__file__), config_yaml)) as f: dep = yaml.safe_load(f) resp = v1.create_namespaced_pod(body=dep, namespace=&quot;default&quot;) print(&quot;Deployment created. 
status='%s'&quot; % str(resp.status)) </code></pre> <p>All in Python code - both pod definition and applying process:</p> <pre class="lang-py prettyprint-override"><code>from kubernetes import client, config import time config.load_kube_config() v1 = client.CoreV1Api() pod_name = &quot;private-registry-pod&quot; secret_name = &quot;private-registry&quot; namespace = &quot;default&quot; container_name = &quot;private-registry-container&quot; image = &lt;your-private-image&gt; # Create a pod print(&quot;Creating pod...&quot;) pod_manifest= { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;Pod&quot;, &quot;metadata&quot;: { &quot;name&quot;: pod_name }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;name&quot;: container_name, &quot;image&quot;: image } ], &quot;imagePullSecrets&quot;: [ { &quot;name&quot;: secret_name } ] } } resp = v1.create_namespaced_pod(body=pod_manifest, namespace=namespace) # Wait for a pod while True: resp = v1.read_namespaced_pod(name=pod_name, namespace=namespace) if resp.status.phase != 'Pending': break time.sleep(1) print(&quot;Done.&quot;) </code></pre> <p>Sources:</p> <ul> <li><a href="https://github.com/kubernetes-client/python/issues/501" rel="nofollow noreferrer">Github thread</a></li> <li><a href="https://stackoverflow.com/questions/56673919/kubernetes-python-api-client-execute-full-yaml-file">Stackoverflow topic</a></li> <li><a href="https://github.com/kubernetes-client/python/blob/master/examples/pod_exec.py" rel="nofollow noreferrer">Official Kubernetes Python client example</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#inspecting-the-secret-regcred" rel="nofollow noreferrer">Kubernetes docs</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">Another Kubernetes docs</a></li> <li><a href="https://github.com/kubeflow/kubeflow/issues/1748" rel="nofollow 
noreferrer">Github topic</a></li> </ul>
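<p>For reference, the secret-building part of the code above can be reduced to a small pure-stdlib helper. This is a sketch; the function name and the sample credentials below are made up, but the payload shape (<code>auths</code>, then server, then <code>username</code>/<code>password</code>/<code>email</code>/<code>auth</code>) is the one Kubernetes expects in the <code>.dockerconfigjson</code> key of a <code>kubernetes.io/dockerconfigjson</code> secret:</p>

```python
import base64
import json

def make_dockerconfigjson(server: str, username: str, password: str, email: str) -> str:
    """Build the base64 payload for the .dockerconfigjson key of a
    kubernetes.io/dockerconfigjson secret (hypothetical helper name)."""
    # The "auth" field is base64("username:password")
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    payload = {
        "auths": {
            server: {
                "username": username,
                "password": password,
                "email": email,
                "auth": auth,
            }
        }
    }
    # The whole JSON document is base64-encoded again for the Secret's data field
    return base64.b64encode(json.dumps(payload).encode()).decode()

# Round-trip check with made-up credentials: decoding recovers the structure
data = make_dockerconfigjson("https://index.docker.io/v1/", "alice", "s3cret", "a@example.com")
decoded = json.loads(base64.b64decode(data))
print(decoded["auths"]["https://index.docker.io/v1/"]["username"])  # alice
```

Keeping this logic in one function makes it easy to unit-test the encoding separately from any cluster access.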
<p>In the CKAD exam I was asked to SSH to another node in the cluster to do some kubectl operations like <code>kubectl get all</code>, but doing so I get the error below:</p> <blockquote> <p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p> </blockquote> <p>I tried sudo, but it did not work, and I checked <code>kubectl config view</code> (I can see an empty config file on the client node).</p> <p>How do I do this?</p>
<p>You need to list the available nodes in the cluster, but first, make sure you're using the correct context:</p> <pre><code>k get nodes </code></pre> <p>You will get the available nodes, like:<br /> <code>node-0 node-1</code>. If you were asked to ssh to a specific node, copy-paste its name:</p> <pre><code>ssh node-0 </code></pre> <p>This is typically done to create some files or a directory on that node (e.g. to persist data); once you finish, return to the master to complete your task. Note that <code>kubectl</code> generally only works from a node that has a kubeconfig, which is why it fails with the &quot;localhost:8080&quot; error on the other node.</p>
<p>My goal is to convert the values in Values.yaml into the following:</p> <pre><code>CUSTOM_VARIABLE: &quot;TEST_ENV_1=devil,TEST_ENV_2=god,TEST_ENV_3=angel&quot; </code></pre> <pre class="lang-yaml prettyprint-override"><code>### Values.yaml env: TEST_ENV_1: devil TEST_ENV_2: god TEST_ENV_3: angel </code></pre> <p>The template below almost does this, but I'm getting a comma at the end: <code>TEST_ENV_1=devil,TEST_ENV_2=god,TEST_ENV_3=angel,</code>.</p> <pre class="lang-yaml prettyprint-override"><code>### _envVars.tpl {{ define &quot;envVars&quot; }} ... - name: CUSTOM_VARIABLE value: &quot; {{- range $key, $value := .Values.env -}} {{- printf &quot;%s=%s,&quot; $key $value -}} {{- end -}} &quot; ... {{- end }} </code></pre> <p>Is there a way to apply a template function (e.g. <code>trunc</code> to remove the last character) to a range result in my case?</p>
<p>If you write the <code>range</code> call into a helper template, Helm has an <a href="https://docs.helm.sh/docs/chart_template_guide/named_templates/#the-include-function" rel="nofollow noreferrer"><code>include</code> extension function</a> that calls a template and captures its output as a string.</p> <pre class="lang-yaml prettyprint-override"><code>{{/* Render the map-type template parameter to a key=value,key=value, list, ending with a trailing comma. */}} {{- define &quot;custom.variable.format&quot; -}} {{- range $key, $value := . -}} {{ $key }}={{ $value }}, {{- end -}} {{- end -}} - name: CUSTOM_VARIABLE value: {{ include &quot;custom.variable.format&quot; .Values.env | trimSuffix &quot;,&quot; | quote }} </code></pre> <p>(It is probably cleaner to not generate the comma at all, as <a href="https://stackoverflow.com/a/68772574">@HarshManvar's answer</a> proposes.)</p>
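<p>The underlying string problem is the classic &quot;join, don't append&quot; one. As an illustration only (not Helm code), the same map rendered in plain Python with <code>str.join</code> never produces a trailing separator, which is why avoiding the comma entirely is the cleaner design:</p>

```python
# The same env map as in Values.yaml, rendered without a trailing comma.
# sorted() mirrors Helm's behavior of ranging over map keys in sorted order.
env = {"TEST_ENV_1": "devil", "TEST_ENV_2": "god", "TEST_ENV_3": "angel"}
custom_variable = ",".join(f"{key}={value}" for key, value in sorted(env.items()))
print(custom_variable)  # TEST_ENV_1=devil,TEST_ENV_2=god,TEST_ENV_3=angel
```

Generating the separator between items, rather than after each item, removes the need for any trimming step.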
<p>I have a simple Flink streaming app. It runs well in a cluster created by <code>start-cluster.sh</code> command.</p> <p>Now based on the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/#application-mode" rel="nofollow noreferrer">Flink tutorial</a>, I hope to deploy it in application mode natively in a Kubernetes cluster created by <a href="https://k3d.io/" rel="nofollow noreferrer">k3d</a> on macOS.</p> <p>First, I created a cluster by <code>k3d cluster create dev</code>.</p> <p>Here is my <strong>Dockerfile</strong>:</p> <pre><code>FROM flink RUN mkdir -p $FLINK_HOME/usrlib COPY target/streaming-0.1.jar $FLINK_HOME/usrlib/streaming-0.1.jar </code></pre> <p>I built and pushed it to Docker Hub.</p> <p>My cluster name is <code>k3d-dev</code>, so I ran</p> <pre><code>flink run-application \ --target kubernetes-application \ -Dkubernetes.cluster-id=k3d-dev \ -Dkubernetes.container.image=hongbomiao/my-flink-xxx:latest \ local:///opt/flink/usrlib/streaming-0.1.jar </code></pre> <p>However, I got error:</p> <pre><code> The program finished with the following exception: io.fabric8.kubernetes.client.KubernetesClientException: JcaPEMKeyConverter is provided by BouncyCastle, an optional dependency. To use support for EC Keys you must explicitly add this dependency to classpath. 
at io.fabric8.kubernetes.client.internal.CertUtils.handleECKey(CertUtils.java:161) at io.fabric8.kubernetes.client.internal.CertUtils.loadKey(CertUtils.java:131) at io.fabric8.kubernetes.client.internal.CertUtils.createKeyStore(CertUtils.java:111) at io.fabric8.kubernetes.client.internal.CertUtils.createKeyStore(CertUtils.java:243) at io.fabric8.kubernetes.client.internal.SSLUtils.keyManagers(SSLUtils.java:128) at io.fabric8.kubernetes.client.internal.SSLUtils.keyManagers(SSLUtils.java:122) at io.fabric8.kubernetes.client.utils.HttpClientUtils.createHttpClient(HttpClientUtils.java:82) at io.fabric8.kubernetes.client.utils.HttpClientUtils.createHttpClient(HttpClientUtils.java:62) at io.fabric8.kubernetes.client.BaseClient.&lt;init&gt;(BaseClient.java:51) at io.fabric8.kubernetes.client.DefaultKubernetesClient.&lt;init&gt;(DefaultKubernetesClient.java:105) at org.apache.flink.kubernetes.kubeclient.FlinkKubeClientFactory.fromConfiguration(FlinkKubeClientFactory.java:102) at org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:61) at org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:39) at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:63) at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:213) at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1057) at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132) at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132) </code></pre> <p>After reading</p> <ul> <li><a href="https://www.mail-archive.com/search?l=dev@spark.apache.org&amp;q=subject:%22Should+we+add+built+in+support+for+bouncy+castle+EC+w%5C%2FKube%22&amp;o=newest&amp;f=1" 
rel="nofollow noreferrer">https://www.mail-archive.com/search?l=dev@spark.apache.org&amp;q=subject:&quot;Should+we+add+built+in+support+for+bouncy+castle+EC+w\%2FKube&quot;&amp;o=newest&amp;f=1</a></li> <li><a href="https://github.com/de-jcup/ekube/issues/63#issuecomment-753508790" rel="nofollow noreferrer">https://github.com/de-jcup/ekube/issues/63#issuecomment-753508790</a></li> </ul> <p>I added</p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.bouncycastle&lt;/groupId&gt; &lt;artifactId&gt;bcpkix-jdk15on&lt;/artifactId&gt; &lt;version&gt;1.69&lt;/version&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.bouncycastle&lt;/groupId&gt; &lt;artifactId&gt;bcprov-jdk15on&lt;/artifactId&gt; &lt;version&gt;1.69&lt;/version&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.bouncycastle&lt;/groupId&gt; &lt;artifactId&gt;bcprov-ext-jdk15on&lt;/artifactId&gt; &lt;version&gt;1.69&lt;/version&gt; &lt;/dependency&gt; </code></pre> <p>to my <strong>pom.xml</strong> file. I built and pushed to Docker Hub again.</p> <p>When I ran the Flink command above, I still got same error. Any idea? 
Thanks!</p> <hr /> <p><strong>UPDATE 1:</strong></p> <p>Besides the above <strong>pom.xml</strong> change, I manually downloaded those 3 jars and changed my Dockerfile to</p> <pre><code>FROM flink COPY lib/* $FLINK_HOME/lib RUN mkdir -p $FLINK_HOME/usrlib COPY target/streaming-0.1.jar $FLINK_HOME/usrlib/streaming-0.1.jar </code></pre> <p>and tried again, but still got the same error.</p> <p>I can confirm the 3 jar files <code>bcpkix-jdk15on-1.69.jar</code>, <code>bcprov-ext-jdk15on-1.69.jar</code>, <code>bcprov-jdk15on-1.69.jar</code> are in the docker image:</p> <pre><code>➜ docker run -it 6c48af48db55c334003a307d1ef7a5fc5181f389613284b66b5cb97588b9708d sh $ cd lib &amp;&amp; ls bcpkix-jdk15on-1.69.jar flink-dist_2.12-1.13.2.jar flink-table_2.12-1.13.2.jar log4j-slf4j-impl-2.12.1.jar bcprov-ext-jdk15on-1.69.jar flink-json-1.13.2.jar log4j-1.2-api-2.12.1.jar bcprov-jdk15on-1.69.jar flink-shaded-zookeeper-3.4.14.jar log4j-api-2.12.1.jar flink-csv-1.13.2.jar flink-table-blink_2.12-1.13.2.jar log4j-core-2.12.1.jar $ cd ../usrlib &amp;&amp; ls streaming-0.1.jar </code></pre> <hr /> <p><strong>UPDATE 2:</strong></p> <p>I tried to start session mode with</p> <pre><code>/usr/local/Cellar/apache-flink/1.13.1/libexec/bin/kubernetes-session.sh </code></pre> <p>but still got the same error. So now I can confirm that the issue I hit in application mode is not related to my Docker image.</p> <p>Those jars are located at <code>~/.m2</code> on my machine:</p> <p><a href="https://i.stack.imgur.com/hQNf8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hQNf8.png" alt="enter image description here" /></a></p> <p>Did I miss any other jars?</p> <p>Also, I found the error only happens with a cluster created by k3d/k3s, not with minikube.</p>
<h1>TL;DR</h1> <p>Download the latest version of the <code>bcprov-jdk15on</code> and <code>bcpkix-jdk15on</code> jar files from</p> <ul> <li><a href="https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on" rel="nofollow noreferrer">https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk15on</a></li> <li><a href="https://mvnrepository.com/artifact/org.bouncycastle/bcpkix-jdk15on" rel="nofollow noreferrer">https://mvnrepository.com/artifact/org.bouncycastle/bcpkix-jdk15on</a></li> </ul> <p>Then move them into your apache-flink <code>lib</code> folder. In my case, it is</p> <pre><code>/usr/local/Cellar/apache-flink/{version}/libexec/lib </code></pre> <p>Then you are good to go!</p> <hr /> <h1>How I Found Out</h1> <p>I checked the code of</p> <ul> <li>/usr/local/Cellar/apache-flink/1.13.1/libexec/bin/kubernetes-session.sh</li> <li>/usr/local/Cellar/apache-flink/1.13.1/libexec/libexec/kubernetes-session.sh</li> </ul> <p>The first script points to the second, and the second script has</p> <pre><code># ... CC_CLASSPATH=`manglePathList $(constructFlinkClassPath):$INTERNAL_HADOOP_CLASSPATHS` # ... &quot;$JAVA_RUN&quot; $JVM_ARGS -classpath &quot;$CC_CLASSPATH&quot; $log_setting org.apache.flink.kubernetes.cli.KubernetesSessionCli &quot;$@&quot; </code></pre> <p>I added <code>echo $CC_CLASSPATH</code> and printed out the classpath.</p> <p>In my case, it is at <code>/usr/local/Cellar/apache-flink/1.13.1/libexec/lib</code>.</p> <p>After I put <code>bcprov-jdk15on-1.69.jar</code> and <code>bcpkix-jdk15on-1.69.jar</code> in the folder above, Flink can now be deployed to k3s (k3d) in both session and application modes.</p>
<p>I think I am misunderstanding Kubernetes CronJobs. On the CKAD exam there was a question to have a CronJob run every minute, but it should start after an arbitrary amount of time. I don't see any properties for CronJobs or Jobs to have them start after a specific time. Should that be part of the cron string or am I completely misunderstanding?</p>
<p>Maybe you misunderstood the question: it was to terminate the job if it didn't complete in x seconds. You should use <code>.spec.activeDeadlineSeconds</code> (set on the Job spec) to satisfy the requirement.</p>
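<p>For reference, <code>activeDeadlineSeconds</code> sits on the Job template inside the CronJob, not on the CronJob object itself. A minimal sketch of the field path, written as the plain dict you could pass to the Kubernetes Python client; the job name, image, and the 17-second deadline are placeholders:</p>

```python
# Sketch: a CronJob manifest as a plain dict. activeDeadlineSeconds
# lives at spec.jobTemplate.spec (i.e. on the Job) and terminates the
# job if it has not completed within that many seconds.
cronjob = {
    "apiVersion": "batch/v1",
    "kind": "CronJob",
    "metadata": {"name": "my-job"},  # placeholder name
    "spec": {
        "schedule": "*/1 * * * *",  # run every minute
        "jobTemplate": {
            "spec": {
                "activeDeadlineSeconds": 17,  # example deadline
                "template": {
                    "spec": {
                        "containers": [{"name": "main", "image": "busybox"}],
                        "restartPolicy": "Never",
                    }
                },
            }
        },
    },
}
print(cronjob["spec"]["jobTemplate"]["spec"]["activeDeadlineSeconds"])  # 17
```

The same structure applies in YAML; the dict form just makes the nesting of the field explicit.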
<p>Our organisation runs Databricks on Azure that is used by data scientists &amp; analysts primarily for Notebooks in order to do ad-hoc analysis and exploration.</p> <p>We also run Kubernetes clusters for non spark-requiring ETL workflows.</p> <p>We would like to use Delta Lakes as our storage layer where both Databricks and Kubernetes are able to read and write as first class citizens.<br /> Currently our Kubernetes jobs write parquets directly to blob store, with an additional job that spins up a databricks cluster to load the parquet data into Databrick's table format. This is slow and expensive.</p> <p>What I would like to do is write to Delta lake from Kubernetes python directly, as opposed to first dumping a parquet file to blob store and then triggering an additional Databricks job to load it into Delta lake format.<br /> Conversely, I'd like to also leverage Delta lake to query from Kubernetes.</p> <hr /> <p>In short, how do I set up my Kubernetes python environment such that it has equal access to the existing Databricks Delta Lake for writes &amp; queries?<br /> Code would be appreciated.</p>
<p>You can <em>usually</em> write into the Delta table using the <a href="https://delta.io/" rel="nofollow noreferrer">Delta connector for Spark</a>. Just start a Spark job with the <a href="https://docs.delta.io/latest/quick-start.html#set-up-apache-spark-with-delta-lake" rel="nofollow noreferrer">necessary packages and configuration options</a>:</p> <pre class="lang-sh prettyprint-override"><code>spark-submit --packages io.delta:delta-core_2.12:1.0.0 \ --conf &quot;spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension&quot; --conf &quot;spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog&quot; ... </code></pre> <p>and write the same way as on Databricks:</p> <pre class="lang-py prettyprint-override"><code>df.write.format(&quot;delta&quot;).mode(&quot;append&quot;).save(&quot;some_location&quot;) </code></pre> <p>But by using the OSS version of Delta you may lose some of the optimizations that are available only on Databricks, like <a href="https://docs.databricks.com/delta/optimizations/file-mgmt.html#data-skipping" rel="nofollow noreferrer">Data Skipping</a>, etc. - in this case performance for the data written from Kubernetes <em>could be</em> lower (it really depends on how you access the data).</p> <p>There could be a case where you can't write into a Delta table created by Databricks - when the table was written by a writer with a protocol version higher than that supported by the OSS Delta connector (see the <a href="https://github.com/delta-io/delta/blob/master/PROTOCOL.md#writer-version-requirements" rel="nofollow noreferrer">Delta Protocol documentation</a>). 
For example, this happens when you enable <a href="https://docs.databricks.com/delta/delta-change-data-feed.html" rel="nofollow noreferrer">Change Data Feed</a> on a Delta table, which performs additional actions when writing data.</p> <p>Outside of Spark, there are plans to implement a so-called <a href="https://github.com/delta-io/connectors/issues/85" rel="nofollow noreferrer">Standalone writer</a> for JVM-based languages (in addition to the existing <a href="https://github.com/delta-io/connectors" rel="nofollow noreferrer">Standalone reader</a>). And there is the <a href="https://github.com/delta-io/delta-rs" rel="nofollow noreferrer">delta-rs project</a> implemented in Rust (with bindings for Python &amp; Ruby) that should be able to write into a Delta table (but I haven't tested that myself).</p> <p>Update 14.04.2022: Data Skipping is also available in OSS Delta, starting with version 1.2.0.</p>
<p>I have a docker deployment using Kubernetes. Out of the blue on a running AWS K8s &quot;pod&quot;, I have started getting the following error:</p> <pre><code>connect() failed (111: Connection refused) while connecting to upstream, client: X.0.XX.XX, server: _, request: &quot;GET /api/endpoint&amp; HTTP/1.1&quot;, upstream: &quot;fastcgi://127.0.0.1:9000&quot;, host: &quot;domain.com&quot;
</code></pre> <p>It seems like it has something to do with the nginx configuration with php-fpm, rather than the Dockerfile or the deployment. What fixes this error immediately is restarting the Kubernetes pods (whichever pod happens to give this error).</p> <p>I suspect that if php-fpm happens to fail inside a given pod, that's when it stops listening on port 9000? I ssh'ed into the pod and ran netstat to verify this, and it shows 9000 being alive.</p> <p>For reference, here is the Dockerfile:</p> <pre><code>FROM trafex/alpine-nginx-php7:1.9.0
USER root
RUN apk add --no-cache file
RUN apk --update add imagemagick
RUN apk --no-cache add php7-redis php7-simplexml php7-iconv php7-imagick
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
ARG gh_token
ADD nginx-host.conf /etc/nginx/nginx.conf
RUN rm -Rf /var/www/html/
COPY . /var/www/html/
RUN composer config --global github-oauth.github.com ${gh_token}
RUN cd /var/www/html/ \
    &amp;&amp; composer update
RUN composer config --global github-oauth.github.com &quot;none&quot;
USER nobody
</code></pre> <p>And here is the nginx.conf:</p> <pre><code>worker_processes auto;
error_log stderr warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # Define custom log format to include response times
    log_format main_timed '$remote_addr - $remote_user [$time_local] &quot;$request&quot; '
                          '$status $body_bytes_sent &quot;$http_referer&quot; '
                          '&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot; '
                          '$request_time $upstream_response_time $pipe $upstream_cache_status';

    access_log /dev/stdout main_timed;
    error_log /dev/stderr notice;

    keepalive_timeout 65;

    # Write temporary files to /tmp so they can be created as a non-privileged user
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path /tmp/proxy_temp_path;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    underscores_in_headers on;

    map $request_uri $version {
        ~(?&lt;captured_topdir&gt;^/[a-zA-Z0-9]+[/]) $captured_topdir;
    }

    # Default server definition
    server {
        listen [::]:8080 default_server;
        listen 8080 default_server;
        server_name _;

        sendfile off;
        client_max_body_size 6M;

        root /var/www/html;
        index index.php index.html;

        # Redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /var/lib/nginx/html;
        }

        location / {
            rewrite ^/(v[0-9]+|stage|partner)?/(.*)$ /$2 last;
            set $new_uri $uri;
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Pass the PHP scripts to PHP-FPM listening on 127.0.0.1:9000
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_param API_VERSION $version;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param REQUEST_URI $new_uri;
        }

        location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
            expires 5d;
        }

        # Deny access to . files, for security
        location ~ /\. {
            log_not_found off;
            deny all;
        }

        # Allow fpm ping and status from localhost
        location ~ ^/(fpm-status|fpm-ping)$ {
            access_log off;
            allow 127.0.0.1;
            deny all;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
        }
    }

    gzip on;
    gzip_proxied any;
    gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
    gzip_vary on;
    gzip_disable &quot;msie6&quot;;

    # Include other server configs
    include /etc/nginx/conf.d/*.conf;
}
</code></pre> <p>Any idea where in the nginx config this error could be surfacing from? It has happened completely out of the blue a few times now.</p>
<p>It looks like Nginx is getting the connection refused from the upstream.</p> <p>I would suggest trying with a simple Nginx config, stored inside a Kubernetes <code>configmap</code>:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxthroughpass
  namespace: development
data:
  default.conf: |-
    server {
        listen 80 default_server;
        root /var/www/html;
        server_name _;
        index index.php;
        location / {
            try_files $uri $uri/ /index.php?$args;
        }
        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
</code></pre> <p>and injecting the Nginx configmap along with the PHP-FPM deployment:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wordpress-site
  name: wordpress-site
  namespace: development
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: wordpress-site
      tier: frontend
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: wordpress-site
        tier: frontend
    spec:
      volumes:
        - configMap:
            defaultMode: 256
            name: nginxthroughpass
            optional: false
          name: nginxconf
        - name: shared-files
          emptyDir: {}
      containers:
        - name: app
          image: &lt;REPLACE WITH DOCKER PHP-FPM IMAGE URL&gt;
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: shared-files
              mountPath: /var/www/html
          envFrom:
            - configMapRef:
                name: wordpress-configmap
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: shared-files
              mountPath: /var/www/html
            - mountPath: /etc/nginx/conf.d
              name: nginxconf
              readOnly: true
</code></pre> <p>Replace the <code>WordPress</code> image with your <code>php-fpm</code> image and test.</p> <p>You can check this setup using <strong>PHP-FPM WordPress</strong> with an <strong>Nginx</strong> pod, storing the config in the config map:</p> <p><a href="https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx" rel="nofollow noreferrer">https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx</a></p>
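<p>To distinguish "php-fpm actually stopped listening" from an nginx-side problem while the error is happening, a tiny stdlib-only TCP probe can be run inside the pod (e.g. via <code>kubectl exec</code>) instead of reading <code>netstat</code> output. The helper name and defaults below are illustrative, not from the image:</p>

```python
# Sketch: check whether something accepts TCP connections on the
# fastcgi upstream address that nginx reports as refused.
import socket


def upstream_alive(host: str = "127.0.0.1", port: int = 9000,
                   timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection raises OSError on refusal or timeout.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("php-fpm reachable:", upstream_alive())
```

<p>If this returns False while nginx logs the 111 error, php-fpm really is down and a liveness probe on port 9000 (or on the <code>/fpm-ping</code> endpoint already present in the config) would let Kubernetes restart the container automatically instead of requiring a manual pod restart.</p>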
<p>When using the Kubernetes <a href="https://pkg.go.dev/k8s.io/client-go/kubernetes/fake" rel="nofollow noreferrer">Fake Client</a> to write unit tests, I noticed that it fails to create two identical objects which have their <a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#ObjectMeta" rel="nofollow noreferrer"><code>ObjectMeta.GenerateName</code></a> field set to some string. A real cluster accepts this specification and generates a unique name for each object.</p> <p>Running the following test code:</p> <pre><code>package main

import (
	&quot;context&quot;
	&quot;testing&quot;

	&quot;github.com/stretchr/testify/assert&quot;
	corev1 &quot;k8s.io/api/core/v1&quot;
	metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot;
	&quot;k8s.io/client-go/kubernetes/fake&quot;
)

func TestFake(t *testing.T) {
	ctx := context.Background()
	client := fake.NewSimpleClientset()

	_, err := client.CoreV1().Secrets(&quot;default&quot;).Create(ctx, &amp;corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: &quot;generated&quot;,
		},
		StringData: map[string]string{&quot;foo&quot;: &quot;bar&quot;},
	}, metav1.CreateOptions{})
	assert.NoError(t, err)

	_, err = client.CoreV1().Secrets(&quot;default&quot;).Create(ctx, &amp;corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: &quot;generated&quot;,
		},
		StringData: map[string]string{&quot;foo&quot;: &quot;bar&quot;},
	}, metav1.CreateOptions{})
	assert.NoError(t, err)
}
</code></pre> <p>fails with</p> <pre><code>--- FAIL: TestFake (0.00s)
    /Users/mihaitodor/Projects/kubernetes/main_test.go:44:
        Error Trace:    main_test.go:44
        Error:          Received unexpected error:
                        secrets &quot;&quot; already exists
        Test:           TestFake
FAIL
FAIL    kubernetes      0.401s
FAIL
</code></pre>
<p>According to <a href="https://github.com/kubernetes/client-go/issues/439#issuecomment-403867107" rel="nofollow noreferrer">this</a> GitHub issue comment:</p> <blockquote> <p>the fake clientset doesn't attempt to duplicate server-side behavior like validation, name generation, uid assignment, etc. if you want to test things like that, you can add reactors to mock that behavior.</p> </blockquote> <p>To add the required reactor, we can insert the following code before creating the <code>corev1.Secret</code> objects:</p> <pre><code>client.PrependReactor(
	&quot;create&quot;, &quot;*&quot;,
	func(action k8sTesting.Action) (handled bool, ret runtime.Object, err error) {
		ret = action.(k8sTesting.CreateAction).GetObject()
		meta, ok := ret.(metav1.Object)
		if !ok {
			return
		}
		if meta.GetName() == &quot;&quot; &amp;&amp; meta.GetGenerateName() != &quot;&quot; {
			meta.SetName(names.SimpleNameGenerator.GenerateName(meta.GetGenerateName()))
		}
		return
	},
)
</code></pre> <p>There are a few gotchas in there:</p> <ul> <li>The <code>Clientset</code> contains an embedded <a href="https://pkg.go.dev/k8s.io/client-go/testing#Fake" rel="nofollow noreferrer"><code>Fake</code></a> structure which has the <a href="https://pkg.go.dev/k8s.io/client-go/testing#Fake.PrependReactor" rel="nofollow noreferrer"><code>PrependReactor</code></a> method we need to call for this use case (there are a few others). This code <a href="https://github.com/kubernetes/client-go/blob/d6c83109f030902f150f03f252311d2749cb6094/testing/fake.go#L140-L145" rel="nofollow noreferrer">here</a> is invoked when creating such objects.</li> <li>The <code>PrependReactor</code> method has 3 parameters: <code>verb</code>, <code>resource</code> and <code>reaction</code>. For <code>verb</code> and <code>resource</code>, I couldn't find any named constants, so &quot;create&quot; and &quot;secrets&quot; (strange that it's not &quot;secret&quot;) seem to be the correct values if we want to be super-specific, but setting <code>resource</code> to &quot;*&quot; should be acceptable in this case.</li> <li>The <code>reaction</code> parameter is of type <a href="https://pkg.go.dev/k8s.io/client-go/testing#ReactionFunc" rel="nofollow noreferrer">ReactionFunc</a>, which takes an <a href="https://pkg.go.dev/k8s.io/client-go/testing#Action" rel="nofollow noreferrer"><code>Action</code></a> as a parameter and returns <code>handled</code>, <code>ret</code> and <code>err</code>. After some digging, I noticed that the <code>action</code> parameter can be cast to <a href="https://pkg.go.dev/k8s.io/client-go/testing#CreateAction" rel="nofollow noreferrer"><code>CreateAction</code></a>, which has a <code>GetObject()</code> method that returns a <code>runtime.Object</code> instance, which in turn can be cast to <a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#Object" rel="nofollow noreferrer"><code>metav1.Object</code></a>. This interface allows us to get and set the various metadata fields of the underlying object. After setting the object's <code>Name</code> field as needed, we have to return <code>handled = false</code>, <code>ret = mutatedObject</code> and <code>err = nil</code> to instruct the calling code to execute the remaining reactors.</li> <li>Digging through the <code>apiserver</code> code, I noticed that the <code>ObjectMeta.Name</code> field <a href="https://github.com/kubernetes/apiserver/blob/f0b4663d4cd5caceddb64fd239053d29208104cd/pkg/registry/rest/create.go#L112-L114" rel="nofollow noreferrer">is generated</a> from the <code>ObjectMeta.GenerateName</code> field using the <a href="https://pkg.go.dev/k8s.io/apiserver/pkg/storage/names" rel="nofollow noreferrer"><code>names.SimpleNameGenerator.GenerateName</code></a> utility.</li> </ul>
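<p>For intuition about what <code>names.SimpleNameGenerator</code> actually does server-side, it boils down to: truncate the base so the final name fits Kubernetes' 63-character limit, then append 5 random characters. A rough Python sketch of that logic (the alphabet mirrors apimachinery's <code>rand.String</code> at the time of writing, but treat the exact character set as an assumption):</p>

```python
# Sketch mimicking k8s' SimpleNameGenerator, for illustration only.
import random

# apimachinery's rand.String alphabet: no vowels, no ambiguous chars.
ALPHABET = "bcdfghjklmnpqrstvwxz2456789"
MAX_NAME_LEN = 63          # DNS label limit enforced on object names
RANDOM_LEN = 5             # length of the generated suffix
MAX_GENERATED_BASE = MAX_NAME_LEN - RANDOM_LEN  # 58


def generate_name(base: str) -> str:
    """Truncate base to 58 chars, then append 5 random characters."""
    if len(base) > MAX_GENERATED_BASE:
        base = base[:MAX_GENERATED_BASE]
    return base + "".join(random.choice(ALPHABET) for _ in range(RANDOM_LEN))


print(generate_name("generated-"))  # e.g. "generated-x7k2m"
```

<p>This also explains why the fake clientset's error message says <code>secrets &quot;&quot; already exists</code>: without the reactor, <code>Name</code> stays empty and both objects collide on the empty string.</p>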