<p>This is my configuration in application.yaml:</p> <pre><code>management:
  endpoint:
    health:
      show-details: &quot;ALWAYS&quot;
      probes:
        enabled: true
  endpoints:
    enabled-by-default: true
    web:
      exposure:
        include: metrics, health, caches, restart
</code></pre> <p>According to the documentation this should be enough to enable the liveness and readiness probes for a Spring application, but the endpoints (<code>/actuator/health/liveness</code> and <code>/actuator/health/readiness</code>) still return 404. I have tried many combinations in the config, but nothing works. Can you please tell me what to do about this?</p>
<p>I dug a bit deeper into this issue because I found it an interesting feature of <code>spring-boot-actuator</code>. From my research I found that the <code>liveness</code> and <code>readiness</code> feature was introduced in <code>spring-boot:2.3.0</code>, so if you're using an older version this might be why you're not getting the expected result from <code>GET /actuator/health/readiness</code>.</p> <p>If you upgrade your <code>spring-boot</code> version to &gt;= <strong>2.3.0</strong>, you can enable the liveness and readiness probes by adding:</p> <pre><code>management:
  health:
    probes:
      enabled: true
</code></pre> <p>to your <strong>application.yaml</strong> file. After doing so, <code>GET /actuator/health</code> should return:</p> <pre><code>{
  &quot;status&quot;: &quot;UP&quot;,
  &quot;groups&quot;: [
    &quot;liveness&quot;,
    &quot;readiness&quot;
  ]
}
</code></pre> <p>However, for spring-boot versions &gt;= 2.3.2 it is advised to enable the probes with the following in your application.yaml instead:</p> <pre><code>management:
  endpoint:
    health:
      probes:
        enabled: true
</code></pre> <p>The reason is a bug which you can read more about <a href="https://github.com/spring-projects/spring-boot/issues/22107" rel="nofollow noreferrer">here</a>.</p> <p><strong>Extra tip</strong>: If your <code>spring-boot</code> version is &gt;= 2.3.0, you've configured your application.yaml accordingly and you still receive a <strong>404</strong> on <code>GET /actuator/health/liveness</code>, there is a slim chance that your application.yaml is not being picked up by the Spring context at all. You can check whether this is the case by changing the application's port:</p> <pre><code>server:
  port: 8081
</code></pre> <p>If your application doesn't start on the different port, it is safe to say that none of your configuration has taken effect. I have had this happen once or twice with my IDE.</p>
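<p>For reference, once these actuator endpoints respond, a Kubernetes deployment can point its probes at them. A minimal sketch; the container name, port, and timing values here are assumptions to adapt to your setup:</p>

```yaml
# Hypothetical pod spec fragment wiring Kubernetes probes to the
# Spring Boot actuator probe endpoints enabled above.
containers:
  - name: my-spring-app          # placeholder name
    ports:
      - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /actuator/health/liveness
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /actuator/health/readiness
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
```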
<p>I just started exploring Kubernetes concepts and am using a Helm chart to deploy a pod. I'm stuck on the problem below; any help to unblock this issue would be appreciated.</p> <p>I have three containers, say A, B and C. Container A has a file &quot;/root/dir1/sample.txt&quot; (alternatively, I can prepare this file offline, but it still needs to be mounted in all three containers). Containers A and C each run a service that updates the file, and container B runs a service that reads it, so the file needs to be shared across all three containers.</p> <p>Approach 1: I tried mounting the file with an <code>emptyDir</code> volume, but that didn't help my case: mounting an <code>emptyDir</code> over <code>/root/dir1</code> hides all the files under <code>dir1</code> that come from the container images, and I don't want the other files under <code>dir1</code> to be shared across containers. I also tried mounting an empty shared directory at a different path, say <code>/root/dir2</code>, and copying <code>sample.txt</code> there as &quot;/root/dir2/sample.txt&quot;; a write from any container is then reflected in the others, but I need the file at the original path &quot;/root/dir1/sample.txt&quot;, not under a new directory.</p> <p>Approach 2: Using a <code>configmap</code> volume, I can mount the file &quot;sample.txt&quot; under <code>dir1</code> as expected, but it is mounted as a read-only filesystem, so containers A and C are unable to write the file.</p> <p>So neither approach helped my case. It would be great if anyone could help me with how to mount a file directly into containers with write access, under the same directory structure, and shared across containers. 
Or is there any volume type available in Kubernetes that would help my case (<a href="https://kubernetes.io/docs/concepts/storage/volumes/#cinder" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#cinder</a>)?</p> <p>Thanks in advance!</p>
<p>I think your best option is to use an <code>emptyDir</code> volume and an initContainer that mounts it at another path and copies the original files into it. It's not an uncommon pattern; see it in action in the <a href="https://github.com/rabbitmq/cluster-operator/commit/60b048fde7d7314801d34d051f88a7577640efdb" rel="noreferrer">rabbitmq cluster operator</a>.</p> <p>That would look something like this in the pod:</p> <pre><code>(...)
spec:
  volumes:
    - name: shared-dir
      emptyDir: {}
  initContainers:
    - name: prepare-dir
      image: YOUR_IMAGE
      command:
        - sh
        - '-c'
        - 'cp -a /root/dir1/. /tmp/dir1/'
      volumeMounts:
        - name: shared-dir
          mountPath: /tmp/dir1/
  containers:
    - name: container-a
      image: YOUR_IMAGE
      volumeMounts:
        - name: shared-dir
          mountPath: /root/dir1/
    - name: container-b
      image: YOUR_IMAGE
      volumeMounts:
        - name: shared-dir
          mountPath: /root/dir1/
(...)
</code></pre> <p>The initContainer copies the contents of <code>/root/dir1</code> from the image into the shared volume, and the main containers then mount that same volume over <code>/root/dir1</code>, so writes from any container are visible to the others.</p>
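<p>The copy step's semantics can be checked locally without a cluster. A minimal sketch; the paths under <code>/tmp</code> are stand-ins for the image directory and the emptyDir mount:</p>

```shell
# /tmp/demo-src stands in for /root/dir1 baked into the image;
# /tmp/demo-shared stands in for the emptyDir mounted at /tmp/dir1.
rm -rf /tmp/demo-src /tmp/demo-shared
mkdir -p /tmp/demo-src /tmp/demo-shared
echo "hello" > /tmp/demo-src/sample.txt

# 'cp -a src/. dst/' copies the directory *contents* (dotfiles included)
# into dst, preserving modes; a bare 'cp src dst' would fail on a directory.
cp -a /tmp/demo-src/. /tmp/demo-shared/

cat /tmp/demo-shared/sample.txt
```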
<p>We performed our Kubernetes cluster upgrade from v1.21 to v1.22. After this operation we discovered that our nginx-ingress-controller deployment's pods are failing to start with the following error message: <code>pkg/mod/k8s.io/client-go@v0.18.5/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource</code></p> <p>We have found that this issue is tracked here: <a href="https://github.com/bitnami/charts/issues/7264" rel="noreferrer">https://github.com/bitnami/charts/issues/7264</a></p> <p>Because Azure doesn't allow downgrading the cluster back to 1.21, could you please help us fix the nginx-ingress-controller deployment? Please be specific about what should be done and from where (local machine, Azure CLI, etc.), as we are not very familiar with <code>helm</code>.</p> <p>This is our deployment's current yaml:</p> <pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
  namespace: ingress
  uid: 575c7699-1fd5-413e-a81d-b183f8822324
  resourceVersion: '166482672'
  generation: 16
  creationTimestamp: '2020-10-10T10:20:07Z'
  labels:
    app: nginx-ingress
    app.kubernetes.io/component: controller
    app.kubernetes.io/managed-by: Helm
    chart: nginx-ingress-1.41.1
    heritage: Helm
    release: nginx-ingress
  annotations:
    deployment.kubernetes.io/revision: '2'
    meta.helm.sh/release-name: nginx-ingress
    meta.helm.sh/release-namespace: ingress
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:replicas: {}
      subresource: scale
    - manager: Go-http-client
      operation: Update
      apiVersion: apps/v1
      time: '2020-10-10T10:20:07Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/managed-by: {}
            f:chart: {}
            f:heritage: {}
            f:release: {}
        f:spec:
          f:progressDeadlineSeconds: {}
          f:revisionHistoryLimit: {}
          f:selector: {}
          f:strategy:
            f:rollingUpdate:
              .: {}
              f:maxSurge: {}
              f:maxUnavailable: {}
            f:type: {}
          f:template:
            f:metadata:
              f:labels:
                .: {}
                f:app: {}
                f:app.kubernetes.io/component: {}
                f:component: {}
                f:release: {}
            f:spec:
              f:containers:
                k:{&quot;name&quot;:&quot;nginx-ingress-controller&quot;}:
                  .: {}
                  f:args: {}
                  f:env:
                    .: {}
                    k:{&quot;name&quot;:&quot;POD_NAME&quot;}:
                      .: {}
                      f:name: {}
                      f:valueFrom:
                        .: {}
                        f:fieldRef: {}
                    k:{&quot;name&quot;:&quot;POD_NAMESPACE&quot;}:
                      .: {}
                      f:name: {}
                      f:valueFrom:
                        .: {}
                        f:fieldRef: {}
                  f:image: {}
                  f:imagePullPolicy: {}
                  f:livenessProbe:
                    .: {}
                    f:failureThreshold: {}
                    f:httpGet:
                      .: {}
                      f:path: {}
                      f:port: {}
                      f:scheme: {}
                    f:initialDelaySeconds: {}
                    f:periodSeconds: {}
                    f:successThreshold: {}
                    f:timeoutSeconds: {}
                  f:name: {}
                  f:ports:
                    .: {}
                    k:{&quot;containerPort&quot;:80,&quot;protocol&quot;:&quot;TCP&quot;}:
                      .: {}
                      f:containerPort: {}
                      f:name: {}
                      f:protocol: {}
                    k:{&quot;containerPort&quot;:443,&quot;protocol&quot;:&quot;TCP&quot;}:
                      .: {}
                      f:containerPort: {}
                      f:name: {}
                      f:protocol: {}
                  f:readinessProbe:
                    .: {}
                    f:failureThreshold: {}
                    f:httpGet:
                      .: {}
                      f:path: {}
                      f:port: {}
                      f:scheme: {}
                    f:initialDelaySeconds: {}
                    f:periodSeconds: {}
                    f:successThreshold: {}
                    f:timeoutSeconds: {}
                  f:resources:
                    .: {}
                    f:limits: {}
                    f:requests: {}
                  f:securityContext:
                    .: {}
                    f:allowPrivilegeEscalation: {}
                    f:capabilities:
                      .: {}
                      f:add: {}
                      f:drop: {}
                    f:runAsUser: {}
                  f:terminationMessagePath: {}
                  f:terminationMessagePolicy: {}
              f:dnsPolicy: {}
              f:restartPolicy: {}
              f:schedulerName: {}
              f:securityContext: {}
              f:serviceAccount: {}
              f:serviceAccountName: {}
              f:terminationGracePeriodSeconds: {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2022-01-24T01:23:22Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:conditions:
            .: {}
            k:{&quot;type&quot;:&quot;Available&quot;}:
              .: {}
              f:type: {}
            k:{&quot;type&quot;:&quot;Progressing&quot;}:
              .: {}
              f:type: {}
    - manager: Mozilla
      operation: Update
      apiVersion: apps/v1
      time: '2022-01-28T23:18:41Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:template:
            f:spec:
              f:containers:
                k:{&quot;name&quot;:&quot;nginx-ingress-controller&quot;}:
                  f:resources:
                    f:limits:
                      f:cpu: {}
                      f:memory: {}
                    f:requests:
                      f:cpu: {}
                      f:memory: {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2022-01-28T23:29:49Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:deployment.kubernetes.io/revision: {}
        f:status:
          f:conditions:
            k:{&quot;type&quot;:&quot;Available&quot;}:
              f:lastTransitionTime: {}
              f:lastUpdateTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
            k:{&quot;type&quot;:&quot;Progressing&quot;}:
              f:lastTransitionTime: {}
              f:lastUpdateTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
          f:observedGeneration: {}
          f:replicas: {}
          f:unavailableReplicas: {}
          f:updatedReplicas: {}
      subresource: status
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress
      app.kubernetes.io/component: controller
      release: nginx-ingress
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        app.kubernetes.io/component: controller
        component: controller
        release: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - '--default-backend-service=ingress/nginx-ingress-default-backend'
            - '--election-id=ingress-controller-leader'
            - '--ingress-class=nginx'
            - '--configmap=ingress/nginx-ingress-controller'
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            runAsUser: 101
            allowPrivilegeEscalation: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      serviceAccountName: nginx-ingress
      serviceAccount: nginx-ingress
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 16
  replicas: 3
  updatedReplicas: 2
  unavailableReplicas: 3
  conditions:
    - type: Available
      status: 'False'
      lastUpdateTime: '2022-01-28T22:58:07Z'
      lastTransitionTime: '2022-01-28T22:58:07Z'
      reason: MinimumReplicasUnavailable
      message: Deployment does not have minimum availability.
    - type: Progressing
      status: 'False'
      lastUpdateTime: '2022-01-28T23:29:49Z'
      lastTransitionTime: '2022-01-28T23:29:49Z'
      reason: ProgressDeadlineExceeded
      message: &gt;-
        ReplicaSet &quot;nginx-ingress-controller-59d9f94677&quot; has timed out
        progressing.
</code></pre>
<p>@Philip Welz's answer is the correct one of course. It was necessary to upgrade the ingress controller because the <code>v1beta1</code> Ingress API version was removed in Kubernetes v1.22. But that's not the only problem we faced, so I've decided to make a &quot;very very short&quot; guide to how we finally ended up with a healthy running cluster (5 days later); it may save someone else the struggle.</p> <h2>1. Upgrading nginx-ingress-controller version in YAML file.</h2> <p>Here we simply changed the version in the yaml file from:</p> <pre><code>image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
</code></pre> <p>to</p> <pre><code>image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v1.1.1
</code></pre> <p>After this operation, a new pod in v1.1.1 was spawned. It started nicely and was running healthy. Unfortunately, that didn't bring our microservices back online. Now I know it was probably because of some changes that had to be made to the existing ingress yaml files to make them compatible with the new version of the ingress controller. So go directly to step 2 now (two headers below).</p> <h2>Don't do this step for now; do it only if step 2 failed for you: Reinstall nginx-ingress-controller</h2> <p>We decided that in this situation we would reinstall the controller from scratch, following Microsoft's official documentation: <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli</a>. Be aware that this will probably change the external IP address of your ingress controller. 
The easiest way in our case was to just remove the whole <code>ingress</code> namespace:</p> <pre><code>kubectl delete namespace ingress
</code></pre> <p>Unfortunately that doesn't remove the ingress class, so this additional command is required:</p> <pre><code>kubectl delete ingressclass nginx --all-namespaces
</code></pre> <p>Then install the new controller:</p> <pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace ingress
</code></pre> <h2>If you reinstalled nginx-ingress-controller or the IP address changed after the upgrade in step 1.: Update your Network security groups, Load Balancers and domain DNS</h2> <p>In your AKS resource group there should be a resource of type <code>Network security group</code>. It contains inbound and outbound security rules (as I understand it, it works as a firewall). There should be a default network security group that is automatically managed by Kubernetes, where the IP address is refreshed automatically.</p> <p>Unfortunately, we also had an additional custom one, and we had to update its rules manually.</p> <p>In the same resource group there should be a resource of type <code>Load balancer</code>. In the <code>Frontend IP configuration</code> tab, double-check that the IP address reflects your new IP address. As a bonus, you can double-check in the <code>Backend pools</code> tab that the addresses there match your internal node IPs.</p> <p>Lastly, don't forget to adjust your domain's DNS records.</p> <h2>2. Upgrade your ingress yaml configuration files to match syntax changes</h2> <p>It took us a while to determine a working template, but installing the helloworld application from the Microsoft tutorial mentioned above helped us a lot. 
We started from this:</p> <pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-world-ingress
  namespace: services
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-redirect: 'false'
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - http:
        paths:
          - path: /hello-world-one(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: aks-helloworld-one
                port:
                  number: 80
</code></pre> <p>And after introducing changes incrementally we finally made it to the below. But I'm pretty sure the issue was that we were missing the <code>nginx.ingress.kubernetes.io/use-regex: 'true'</code> entry:</p> <pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: example-api
  namespace: services
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers &quot;X-Forwarded-By: example-api&quot;;
    nginx.ingress.kubernetes.io/rewrite-target: /example-api
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  tls:
    - hosts:
        - services.example.com
      secretName: tls-secret
  rules:
    - host: services.example.com
      http:
        paths:
          - path: /example-api
            pathType: ImplementationSpecific
            backend:
              service:
                name: example-api
                port:
                  number: 80
</code></pre> <p>Just in case someone would like to install the helloworld app for testing purposes, the yamls looked as follows:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
        - name: aks-helloworld-one
          image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
          ports:
            - containerPort: 80
          env:
            - name: TITLE
              value: &quot;Welcome to Azure Kubernetes Service (AKS)&quot;
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: aks-helloworld-one
</code></pre> <h2>3. Deal with other crashing applications ...</h2> <p>Another application that was crashing in our cluster was <code>cert-manager</code>. It was in version 1.0.1, so first we upgraded it to version 1.1.1:</p> <pre><code>helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --namespace cert-manager --version 1.1 cert-manager jetstack/cert-manager
</code></pre> <p>That created a brand new healthy pod. We were happy and decided to stay with v1.1 because we were a bit scared of the additional measures that have to be taken when upgrading to higher versions (check the bottom of this page: <a href="https://cert-manager.io/docs/installation/upgrading/" rel="noreferrer">https://cert-manager.io/docs/installation/upgrading/</a>).</p> <p>The cluster was now finally fixed. Right?</p> <h2>4. ... but be sure to check the compatibility charts!</h2> <p>Well... now we know that cert-manager is compatible with Kubernetes v1.22 only starting from version 1.5. We were so unlucky that exactly that night our SSL certificate passed the 30-day threshold before its expiration date, so cert-manager decided to renew the cert! The operation failed and cert-manager crashed. Kubernetes fell back to the &quot;Kubernetes Fake Certificate&quot;, and the web page went down again because browsers kill the traffic on an invalid certificate. The fix was to upgrade to 1.5 and upgrade the CRDs as well:</p> <pre><code>kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.5.4/cert-manager.crds.yaml
helm upgrade --namespace cert-manager --version 1.5 cert-manager jetstack/cert-manager
</code></pre> <p>After this, the new instance of cert-manager refreshed our certificate successfully. 
Cluster saved again.</p> <p>In case you need to force the renewal, you can take a look at this issue: <a href="https://github.com/jetstack/cert-manager/issues/2641" rel="noreferrer">https://github.com/jetstack/cert-manager/issues/2641</a></p> <p>@ajcann suggests adding the <code>renewBefore</code> property to the certificates:</p> <pre><code>kubectl get certs --no-headers=true | awk '{print $1}' | xargs -n 1 kubectl patch certificate --patch '
- op: replace
  path: /spec/renewBefore
  value: 1440h
' --type=json
</code></pre> <p>Then wait for the certificates to renew and remove the property again:</p> <pre><code>kubectl get certs --no-headers=true | awk '{print $1}' | xargs -n 1 kubectl patch certificate --patch '
- op: remove
  path: /spec/renewBefore
' --type=json
</code></pre>
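<p>Alternatively, if your certificates are managed declaratively, <code>renewBefore</code> can be set in the <code>Certificate</code> manifest itself instead of patching live objects. A minimal sketch; the names, issuer, and hostname are placeholders, not values from the setup above:</p>

```yaml
# Hypothetical Certificate with renewBefore set declaratively.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls          # placeholder name
  namespace: services
spec:
  secretName: tls-secret
  renewBefore: 1440h         # start renewing 60 days before expiry
  dnsNames:
    - services.example.com
  issuerRef:
    name: letsencrypt        # placeholder issuer
    kind: ClusterIssuer
```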
<p>I am planning on building a K8s cluster with many microservices (each running in pods with services ensuring communication). I'm trying to understand how to ensure communication between these microservices is secure. By communication, I mean HTTP calls between microservice A and microservice B's API.</p> <p>Usually, I would implement an OAuth flow, where an auth server would receive some credentials as input and return a JWT. And then the client could use this JWT in any subsequent call.</p> <p>I expected K8s to have some built-in authentication server that could generate tokens (like a JWT) but I can't seem to find one. K8s does have authentication for its API server, but that only seems to authenticate calls that perform Kubernetes specific actions such as scaling a pod or getting secrets etc. However, there is no mention of simply authenticating HTTP calls (GET POST etc).</p> <p>Should I just create my own authentication server and make it accessible via a service or is there a simple and clean way of authenticating API calls automatically in Kubernetes?</p>
<p>This is a broad question, but I will try my best.</p> <p>There are multiple solutions you could apply, but there is nothing built into Kubernetes itself that you can use for this kind of auth. You either have to set up a third-party OAuth or IAM server, or write your own microservice for it.</p> <p>These are different concerns that shouldn't be merged:</p> <p>For service-to-service interconnection (<strong>service A</strong> to <strong>service B</strong>), it would be best to use a service mesh like <strong>Istio</strong> or <strong>Linkerd</strong>, which provide <strong>mutual TLS</strong> support for security and are also easy to set up. The connections between services will then be <strong>HTTPS</strong> and secured, but it's on you to manage and configure it.</p> <p>If you just run plain traffic inside your backend, you can follow the same method you described: passing plain <strong>HTTP</strong> with a JWT payload between backend services.</p> <p><a href="https://www.keycloak.org/" rel="nofollow noreferrer">Keycloak</a> is a good choice for an OAuth server; I would also recommend checking out <a href="https://github.com/oauth2-proxy/oauth2-proxy" rel="nofollow noreferrer">oauth2-proxy</a>.</p> <p>A few articles that might be helpful:</p> <p><a href="https://medium.com/codex/api-authentication-using-istio-ingress-gateway-oauth2-proxy-and-keycloak-a980c996c259" rel="nofollow noreferrer">https://medium.com/codex/api-authentication-using-istio-ingress-gateway-oauth2-proxy-and-keycloak-a980c996c259</a></p> <p>My own article on Keycloak with the Kong API gateway on Kubernetes:</p> <p><a href="https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56" rel="nofollow noreferrer">https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56</a></p> <p>GitHub files for the POC: <a href="https://github.com/harsh4870/POC-Securing-the--application-with-Kong-Keycloak" rel="nofollow noreferrer">https://github.com/harsh4870/POC-Securing-the--application-with-Kong-Keycloak</a></p> <p>Keycloak deployment on K8s: <a href="https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment" rel="nofollow noreferrer">https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment</a></p>
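<p>To make the JWT-style flow concrete, here is a minimal, dependency-free sketch of what a token issuer hands to service A and what service B does to verify it. The HMAC shared-secret scheme and all names here are illustrative assumptions, not a production recommendation; in a real setup you would use Keycloak or another OAuth server and a proper JWT library:</p>

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-demo-secret"  # hypothetical key shared between services


def b64url(data: bytes) -> str:
    # JWT-style base64url encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_token(subject: str, ttl_seconds: int = 300) -> str:
    """What an auth server hands to service A: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(
        {"sub": subject, "exp": int(time.time()) + ttl_seconds}).encode())
    signature = b64url(hmac.new(
        SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"


def verify_token(token: str) -> dict:
    """What service B does with the bearer token it receives from service A."""
    header, payload, signature = token.split(".")
    expected = b64url(hmac.new(
        SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(signature, expected):
        raise ValueError("invalid signature")
    claims = json.loads(
        base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims


token = issue_token("service-a")
print(verify_token(token)["sub"])  # service-a
```

<p>The point of the sketch is the division of labor: only the issuer and the verifier need the secret, and service B never calls the auth server on the hot path; it just validates the signature and expiry locally.</p>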
<p>I installed <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus-0.9.0</a> and want to deploy a sample application on which to test autoscaling on Prometheus metrics, with the following resource manifest file (hpa-prome-demo.yaml):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  annotations:
    prometheus.io/scrape: &quot;true&quot;
    prometheus.io/port: &quot;80&quot;
    prometheus.io/path: &quot;/status/format/prometheus&quot;
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: NodePort
</code></pre> <p>For testing purposes I used a NodePort Service, and luckily I could get the HTTP response after applying the deployment. Then I installed Prometheus Adapter via its Helm chart, creating a new <code>hpa-prome-adapter-values.yaml</code> file to override the default values, as follows:</p> <pre class="lang-yaml prettyprint-override"><code>rules:
  default: false
  custom:
  - seriesQuery: 'nginx_vts_server_requests_total'
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
    name:
      matches: &quot;^(.*)_total&quot;
      as: &quot;${1}_per_second&quot;
    metricsQuery: (sum(rate(&lt;&lt;.Series&gt;&gt;{&lt;&lt;.LabelMatchers&gt;&gt;}[1m])) by (&lt;&lt;.GroupBy&gt;&gt;))
prometheus:
  url: http://prometheus-k8s.monitoring.svc
  port: 9090
</code></pre> <p>This adds a custom rule and specifies the address of Prometheus. 
Install Prometheus-Adapter with the following command:</p> <pre class="lang-sh prettyprint-override"><code>$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):

  kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
</code></pre> <p>The adapter was installed successfully, and I can get an HTTP response, as follows:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl   1/1     Running   0          133m

$ kubectl get --raw=&quot;/apis/custom.metrics.k8s.io/v1beta1&quot; | jq
{
  &quot;kind&quot;: &quot;APIResourceList&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;groupVersion&quot;: &quot;custom.metrics.k8s.io/v1beta1&quot;,
  &quot;resources&quot;: [
    {
      &quot;name&quot;: &quot;namespaces/nginx_vts_server_requests_per_second&quot;,
      &quot;singularName&quot;: &quot;&quot;,
      &quot;namespaced&quot;: false,
      &quot;kind&quot;: &quot;MetricValueList&quot;,
      &quot;verbs&quot;: [
        &quot;get&quot;
      ]
    }
  ]
}
</code></pre> <p>But it was supposed to be like this:</p> <pre class="lang-json prettyprint-override"><code>$ kubectl get --raw=&quot;/apis/custom.metrics.k8s.io/v1beta1&quot; | jq
{
  &quot;kind&quot;: &quot;APIResourceList&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;groupVersion&quot;: &quot;custom.metrics.k8s.io/v1beta1&quot;,
  &quot;resources&quot;: [
    {
      &quot;name&quot;: &quot;namespaces/nginx_vts_server_requests_per_second&quot;,
      &quot;singularName&quot;: &quot;&quot;,
      &quot;namespaced&quot;: false,
      &quot;kind&quot;: &quot;MetricValueList&quot;,
      &quot;verbs&quot;: [
        &quot;get&quot;
      ]
    },
    {
      &quot;name&quot;: &quot;pods/nginx_vts_server_requests_per_second&quot;,
      &quot;singularName&quot;: &quot;&quot;,
      &quot;namespaced&quot;: true,
      &quot;kind&quot;: &quot;MetricValueList&quot;,
      &quot;verbs&quot;: [
        &quot;get&quot;
      ]
    }
  ]
}
</code></pre> <p>Why can't I get the metric <code>pods/nginx_vts_server_requests_per_second</code>? As a result, the query below also failed:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get --raw &quot;/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second&quot; | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods
</code></pre> <p>Could anybody please help? Many thanks.</p>
<p>It is worth knowing that using the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a> repository, you can also install components such as the <strong>Prometheus Adapter for Kubernetes Metrics APIs</strong>, so there is no need to install it separately with Helm.</p> <p>I will use your <code>hpa-prome-demo.yaml</code> manifest file to demonstrate how to monitor the <code>nginx_vts_server_requests_total</code> metric.</p> <hr /> <p>First of all, we need to install Prometheus and Prometheus Adapter with the appropriate configuration, step by step as described below.</p> <p>Clone the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a> repository and refer to the <a href="https://github.com/prometheus-operator/kube-prometheus#kubernetes-compatibility-matrix" rel="nofollow noreferrer">Kubernetes compatibility matrix</a> in order to choose a compatible branch:</p> <pre><code>$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus
$ git checkout release-0.9
</code></pre> <p>Install the <code>jb</code>, <code>jsonnet</code> and <code>gojsontoyaml</code> tools:</p> <pre><code>$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
$ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
$ go install github.com/brancz/gojsontoyaml@latest
</code></pre> <p>Uncomment the <code>(import 'kube-prometheus/addons/custom-metrics.libsonnet') +</code> line in the <code>example.jsonnet</code> file:</p> <pre><code>$ cat example.jsonnet
local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  (import 'kube-prometheus/addons/custom-metrics.libsonnet') +   &lt;--- This line
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
  ...
</code></pre> <p>Add the following rule to the <code>./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet</code> file in the <code>rules+</code> section:</p> <pre><code>{
  seriesQuery: &quot;nginx_vts_server_requests_total&quot;,
  resources: {
    overrides: {
      namespace: { resource: 'namespace' },
      pod: { resource: 'pod' },
    },
  },
  name: { &quot;matches&quot;: &quot;^(.*)_total&quot;, &quot;as&quot;: &quot;${1}_per_second&quot; },
  metricsQuery: &quot;(sum(rate(&lt;&lt;.Series&gt;&gt;{&lt;&lt;.LabelMatchers&gt;&gt;}[1m])) by (&lt;&lt;.GroupBy&gt;&gt;))&quot;,
},
</code></pre> <p>After this update, the <code>./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet</code> file should look like this:<br /> <strong>NOTE:</strong> This is not the entire file, just an important part of it.</p> <pre><code>$ cat custom-metrics.libsonnet
// Custom metrics API allows the HPA v2 to scale based on arbirary metrics.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
{
  values+:: {
    prometheusAdapter+: {
      namespace: $.values.common.namespace,
      // Rules for custom-metrics
      config+:: {
        rules+: [
          {
            seriesQuery: &quot;nginx_vts_server_requests_total&quot;,
            resources: {
              overrides: {
                namespace: { resource: 'namespace' },
                pod: { resource: 'pod' },
              },
            },
            name: { &quot;matches&quot;: &quot;^(.*)_total&quot;, &quot;as&quot;: &quot;${1}_per_second&quot; },
            metricsQuery: &quot;(sum(rate(&lt;&lt;.Series&gt;&gt;{&lt;&lt;.LabelMatchers&gt;&gt;}[1m])) by (&lt;&lt;.GroupBy&gt;&gt;))&quot;,
          },
          ...
</code></pre> <p>Use the jsonnet-bundler update functionality to update the <code>kube-prometheus</code> dependency:</p> <pre><code>$ jb update
</code></pre> <p>Compile the manifests:</p> <pre><code>$ ./build.sh example.jsonnet
</code></pre> <p>Now simply use <code>kubectl</code> to install Prometheus and the other components as per your configuration:</p> <pre><code>$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/
</code></pre> <p>After configuring Prometheus, we can deploy a sample <code>hpa-prom-demo</code> Deployment:<br /> <strong>NOTE:</strong> I've deleted the annotations because I'm going to use a <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md#related-resources" rel="nofollow noreferrer">ServiceMonitor</a> to describe the set of targets to be monitored by Prometheus.</p> <pre><code>$ cat hpa-prome-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: LoadBalancer
</code></pre> <p>Next, create a <code>ServiceMonitor</code> that describes how to monitor our NGINX:</p> <pre><code>$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  selector:
    matchLabels:
      app: nginx-server
  endpoints:
  - interval: 15s
    path: &quot;/status/format/prometheus&quot;
    port: http
</code></pre> <p>After waiting some time, let's check the <code>hpa-prom-demo</code> logs to make sure that it is scraped correctly:</p> <pre><code>$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hpa-prom-demo-bbb6c65bb-49jsh   1/1     Running   0          35m

$ kubectl logs -f hpa-prom-demo-bbb6c65bb-49jsh
...
10.4.0.9 - - [04/Feb/2022:09:29:17 +0000] &quot;GET /status/format/prometheus HTTP/1.1&quot; 200 3771 &quot;-&quot; &quot;Prometheus/2.29.1&quot; &quot;-&quot;
10.4.0.9 - - [04/Feb/2022:09:29:32 +0000] &quot;GET /status/format/prometheus HTTP/1.1&quot; 200 3771 &quot;-&quot; &quot;Prometheus/2.29.1&quot; &quot;-&quot;
10.4.0.9 - - [04/Feb/2022:09:29:47 +0000] &quot;GET /status/format/prometheus HTTP/1.1&quot; 200 3773 &quot;-&quot; &quot;Prometheus/2.29.1&quot; &quot;-&quot;
10.4.0.9 - - [04/Feb/2022:09:30:02 +0000] &quot;GET /status/format/prometheus HTTP/1.1&quot; 200 3773 &quot;-&quot; &quot;Prometheus/2.29.1&quot; &quot;-&quot;
10.4.0.9 - - [04/Feb/2022:09:30:17 +0000] &quot;GET /status/format/prometheus HTTP/1.1&quot; 200 3773 &quot;-&quot; &quot;Prometheus/2.29.1&quot; &quot;-&quot;
10.4.2.12 - - [04/Feb/2022:09:30:23 +0000] &quot;GET /status/format/prometheus HTTP/1.1&quot; 200 3773 &quot;-&quot; &quot;Prometheus/2.29.1&quot; &quot;-&quot;
...
</code></pre> <p>Finally, we can check if our metrics work as expected:</p> <pre><code>$ kubectl get --raw &quot;/apis/custom.metrics.k8s.io/v1beta1/&quot; | jq . | grep -A 7 &quot;nginx_vts_server_requests_per_second&quot;
    &quot;name&quot;: &quot;pods/nginx_vts_server_requests_per_second&quot;,
    &quot;singularName&quot;: &quot;&quot;,
    &quot;namespaced&quot;: true,
    &quot;kind&quot;: &quot;MetricValueList&quot;,
    &quot;verbs&quot;: [
      &quot;get&quot;
    ]
  },
--
    &quot;name&quot;: &quot;namespaces/nginx_vts_server_requests_per_second&quot;,
    &quot;singularName&quot;: &quot;&quot;,
    &quot;namespaced&quot;: false,
    &quot;kind&quot;: &quot;MetricValueList&quot;,
    &quot;verbs&quot;: [
      &quot;get&quot;
    ]
  },

$ kubectl get --raw &quot;/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second&quot; | jq . 
{ &quot;kind&quot;: &quot;MetricValueList&quot;, &quot;apiVersion&quot;: &quot;custom.metrics.k8s.io/v1beta1&quot;, &quot;metadata&quot;: { &quot;selfLink&quot;: &quot;/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/nginx_vts_server_requests_per_second&quot; }, &quot;items&quot;: [ { &quot;describedObject&quot;: { &quot;kind&quot;: &quot;Pod&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;name&quot;: &quot;hpa-prom-demo-bbb6c65bb-49jsh&quot;, &quot;apiVersion&quot;: &quot;/v1&quot; }, &quot;metricName&quot;: &quot;nginx_vts_server_requests_per_second&quot;, &quot;timestamp&quot;: &quot;2022-02-04T09:32:59Z&quot;, &quot;value&quot;: &quot;533m&quot;, &quot;selector&quot;: null } ] } </code></pre>
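<p>With the custom metric exposed through the adapter, an HPA can consume it. A minimal sketch (the target of 10 requests per second per pod and the replica bounds are illustrative assumptions, adjust them to your load):</p>

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-prom-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-prom-demo
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: nginx_vts_server_requests_per_second
      target:
        type: AverageValue
        averageValue: "10"
```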
<p>I installed <code>kube-prometheus-stack</code> from the helm chart repo <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">prometheus-community</a>:</p> <pre><code>(k8s: minikube) $ kubectl get deploy,statefulset -n monitoring NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/kube-prometheus-stack-grafana 1/1 1 1 20d deployment.apps/kube-prometheus-stack-kube-state-metrics 1/1 1 1 20d deployment.apps/kube-prometheus-stack-operator 1/1 1 1 20d NAME READY AGE statefulset.apps/alertmanager-kube-prometheus-stack-alertmanager 1/1 20d statefulset.apps/prometheus-kube-prometheus-stack-prometheus 1/1 20d </code></pre> <p>As you can see, by default grafana is installed as a <code>Deployment</code>, but I want to change the kind to <code>Statefulset</code> by changing it in its helm chart, instead of a direct <code>kubectl edit</code> on the cluster.</p> <p>The following is the directory structure inside the <code>kube-prometheus-stack</code> repo:</p> <pre><code>kube-prometheus-stack vjwilson(k8s: minikube) $ ls Chart.lock charts Chart.yaml CONTRIBUTING.md crds README.md templates values.yaml kube-prometheus-stack (k8s: minikube) $ tree -d .
├── charts │   ├── grafana │   │   ├── ci │   │   ├── dashboards │   │   └── templates │   │   └── tests │   ├── kube-state-metrics │   │   └── templates │   └── prometheus-node-exporter │   ├── ci │   └── templates ├── crds └── templates ├── alertmanager ├── exporters │   ├── core-dns │   ├── kube-api-server │   ├── kube-controller-manager │   ├── kube-dns │   ├── kube-etcd │   ├── kubelet │   ├── kube-proxy │   └── kube-scheduler ├── grafana │   └── dashboards-1.14 ├── prometheus │   └── rules-1.14 └── prometheus-operator └── admission-webhooks └── job-patch 30 directories </code></pre> <p>I am confused and stuck on where exactly in this helm chart to make the change so that grafana installs as a <code>Statefulset</code> instead of the default <code>Deployment</code>. Would be great if someone can help with it.</p>
<p>Here's how I found the answer. In a helm chart, if there is a folder named <code>charts</code>, that means the chart is declaring chart dependencies. Looking at the <code>Chart.yaml</code>, we see the grafana dependency:</p> <pre><code>dependencies: - name: grafana version: &quot;6.21.*&quot; repository: https://grafana.github.io/helm-charts condition: grafana.enabled </code></pre> <p>Following that repository, we can look at their <a href="https://github.com/grafana/helm-charts/blob/main/charts/grafana/templates/statefulset.yaml" rel="nofollow noreferrer">statefulset.yaml</a>. Looking there we find that Grafana creates a stateful set using this condition:</p> <pre><code>{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) (eq .Values.persistence.type &quot;statefulset&quot;)}} </code></pre> <p>A dependent chart can still have its chart values overridden if you have a section in your <code>values.yaml</code> with a top-level tag of the dependency name. So in this case, the dependency is named <code>grafana</code>, and we can override the <code>values.yaml</code> of the dependent chart using this configuration:</p> <pre><code>grafana: enabled: true persistence: enabled: true type: statefulset </code></pre> <p>(For other configuration options <a href="https://github.com/grafana/helm-charts/tree/main/charts/grafana" rel="nofollow noreferrer">see this repo</a>. All of the <code>values.yaml</code> from this chart can be overridden as long as they are inside of the <code>grafana:</code> block.)</p> <p>The dependent chart is the official chart from Grafana.
However, if this doesn't work for you (maybe you aren't using persistent volume claims), your second option is to disable the grafana dependency from the chart you are using and deploy a custom version of the Grafana chart.</p> <pre><code>grafana: enabled: false </code></pre> <p>Once you disable grafana, you can then install grafana on its own and either alter the generated manifests using something like <a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a> or a simple <a href="https://linuxize.com/post/how-to-use-sed-to-find-and-replace-string-in-files/" rel="nofollow noreferrer">sed replace</a>, or fork the grafana helm chart and use your own custom grafana chart that is deployed as a statefulset.</p>
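<p>Either way, the override is applied like any other Helm values change. Assuming the chart was installed as release <code>kube-prometheus-stack</code> in the <code>monitoring</code> namespace (adjust both names to your setup), something like:</p>

```shell
# values.yaml holds the grafana: persistence (or enabled: false) override shown above
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  -f values.yaml
```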
<p>I have Docker Desktop and I want to create multiple clusters so I can work on different projects. For example, cluster 1: hello and cluster 2: world.</p> <p>I currently have one cluster with the docker-desktop context, which is actually working.</p>
<p>To clarify I am posting a Community Wiki answer.</p> <p>The tool <code>kind</code> meets your expectations in this case.</p> <blockquote> <p><a href="https://sigs.k8s.io/kind" rel="nofollow noreferrer">kind</a> is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.</p> </blockquote> <p>Its User Guide describes the tool in detail. One can install it in 5 ways:</p> <ul> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager" rel="nofollow noreferrer">With A Package Manager</a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-from-release-binaries" rel="nofollow noreferrer">From Release Binaries</a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-from-source" rel="nofollow noreferrer">From Source</a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-make" rel="nofollow noreferrer">With <em><strong>make</strong></em></a></li> <li><a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-go-get--go-install" rel="nofollow noreferrer">With <em><strong>go get</strong></em> / <em><strong>go install</strong></em></a></li> </ul> <p>To create a cluster with this tool, run:</p> <pre><code>kind create cluster </code></pre> <p>To specify another image, use the <code>--image</code> flag:</p> <pre><code>kind create cluster --image=xyz </code></pre> <p>In <code>kind</code> the node-image is built off the base-image, which installs all the dependencies required for Docker and Kubernetes to run in a container.</p> <p>To assign the cluster a different name than <code>kind</code>, use the <code>--name</code> flag.</p> <p>More usage can be found with:</p> <pre><code>kind create cluster --help </code></pre>
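<p>For the two projects from the question, creating and switching between named clusters could look like this (kind registers one kubectl context per cluster, prefixed with <code>kind-</code>):</p>

```shell
kind create cluster --name hello
kind create cluster --name world

kind get clusters                       # lists: hello, world

kubectl config use-context kind-hello   # work on the "hello" cluster
kubectl config use-context kind-world   # work on the "world" cluster

kind delete cluster --name hello        # remove a cluster when done
```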
<p>Why do I have to use <code>--verb=list</code> option when I list all resources in the k8s namespace?</p> <p>I read <a href="https://stackoverflow.com/questions/47691479/listing-all-resources-in-a-namespace">this question</a> and linked GitHub issue, and they worked for me. However, I cannot understand why <code>--verb=list</code> option is used.</p> <p>Thanks to the help, I now know what this option does. When I add this option, the command shows only resources which support list verb. However, I could not figure out why it was necessary to show only the resources that support the list verb.</p> <p>Please teach me this.</p>
<p>The question you linked was about listing resources. To be able to list a resource, that resource must support the list verb. Based on the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#resource-types" rel="nofollow noreferrer">official documentation</a>:</p> <pre><code>kubectl api-resources --verbs=list,get # All resources that support the &quot;list&quot; and &quot;get&quot; request verbs </code></pre> <p>In the documentation's example there are 2 verbs (list, get); you had one (list). The idea is for the command to return only those api-resources which handle the <a href="https://www.oreilly.com/library/view/kubernetes-security/9781492039075/ch04.html" rel="nofollow noreferrer">list verb</a>.</p> <p>In conclusion, the <code>--verb=list</code> flag was used to limit the results to only those resources that support listing.</p> <blockquote> <p>I could not figure out why it was necessary to show only the resources that support the list verb.</p> </blockquote> <p>This filtering is useful if, for example, you later want to work on the api-resources using only the list operation. If you try to operate on a resource that does not support it, you will get an error similar to this:</p> <pre><code>kubectl list tokenreviews error: unknown command &quot;list&quot; for &quot;kubectl&quot; Did you mean this? get wait </code></pre> <p>To avoid this situation you can filter the results beforehand with the <code>--verb=list</code> flag.</p>
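<p>This filtering is exactly what makes the one-liner from the linked question work; a commonly used form (the namespace name here is just an example):</p>

```shell
# List every listable, namespaced resource in the "default" namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n default
```

<p>Without <code>--verbs=list</code>, resources that cannot be listed would be passed to <code>kubectl get</code> and produce errors instead of output.</p>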
<p>I use saml2aws with Okta authentication to access aws from my local machine. I have added the k8s cluster config to my machine as well. While trying to connect to k8s, e.g. to list pods, a simple <code>kubectl get pods</code> returns the error <code>[Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token' Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255</code></p> <p>But if I do <code>saml2aws exec kubectl get pods</code> I am able to fetch pods.</p> <p>I don't understand whether the problem is with how the credentials are stored, or where to even begin debugging it.</p> <p>Any kind of help will be appreciated.</p>
<p>To integrate saml2aws with Okta, you need to create a profile in saml2aws first.</p> <ul> <li>Configure Profile</li> </ul> <pre><code>saml2aws configure \ --skip-prompt \ --mfa Auto \ --region &lt;region, ex us-east-2&gt; \ --profile &lt;awscli_profile&gt; \ --idp-account &lt;saml2aws_profile_name&gt; \ --idp-provider Okta \ --username &lt;your email&gt; \ --role arn:aws:iam::&lt;account_id&gt;:role/&lt;aws_role_initial_assume&gt; \ --session-duration 28800 \ --url &quot;https://&lt;company&gt;.okta.com/home/amazon_aws/.......&quot; </code></pre> <blockquote> <p>URL, region, etc. can be obtained from the Okta integration UI.</p> </blockquote> <ul> <li>Login</li> </ul> <pre><code>saml2aws login --idp-account &lt;saml2aws_profile_name&gt; </code></pre> <p>That should prompt you for your password and MFA if it exists.</p> <ul> <li>Verification</li> </ul> <pre><code>aws --profile=&lt;awscli_profile&gt; s3 ls </code></pre> <p>Then finally, just export AWS_PROFILE with</p> <pre><code>export AWS_PROFILE=&lt;awscli_profile&gt; </code></pre> <p>and use awscli directly:</p> <pre><code>aws sts get-caller-identity </code></pre>
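<p>If you would rather not export <code>AWS_PROFILE</code> in every shell, the profile can also be pinned inside the kubeconfig user entry via the exec credential plugin's <code>env</code> block. A sketch (the user name, cluster name and profile value are placeholders for your own values):</p>

```yaml
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      env:
      - name: AWS_PROFILE
        value: my_awscli_profile
```

<p>With the <code>env</code> block in place, a plain <code>kubectl get pods</code> picks up the saml2aws-refreshed credentials without any wrapper.</p>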
<p>I'm trying to make an ingress for the minikube dashboard using the embedded dashboard internal service.</p> <p>I enabled both the <code>ingress</code> and <code>dashboard</code> minikube addons.</p> <p>I also wrote this ingress YAML file:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: dashboard-ingress namespace: kubernetes-dashboard spec: rules: - host: dashboard.com http: paths: - path: / pathType: Prefix backend: service: name: kubernetes-dashboard port: number: 80 </code></pre> <p>My Ingress is created fine, as you can see:</p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE dashboard-ingress nginx dashboard.com localhost 80 15s </code></pre> <p>I edited my <code>/etc/hosts</code> to add this line: <code>127.0.0.1 dashboard.com</code>.</p> <p>Now I'm trying to access the dashboard through <code>dashboard.com</code>, but it's not working.</p> <p><code>kubectl describe ingress dashboard-ingress -n kubernetes-dashboard</code> gives me this:</p> <pre><code>Name: dashboard-ingress Namespace: kubernetes-dashboard Address: localhost Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;) Rules: Host Path Backends ---- ---- -------- dashboard.com / kubernetes-dashboard:80 (172.17.0.4:9090) Annotations: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 14m (x2 over 14m) nginx-ingress-controller Scheduled for sync </code></pre> <p>I don't really understand what <code>&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;</code> means, but maybe my issue comes from this.</p> <p><code>kubectl get pods -n ingress-nginx</code> result:</p> <pre><code>NAME READY STATUS RESTARTS AGE ingress-nginx-admission-create--1-8krc7 0/1 Completed 0 100m ingress-nginx-admission-patch--1-qblch 0/1 Completed 1 100m ingress-nginx-controller-5f66978484-hvk9j 1/1 Running 0 100m </code></pre> <p>Logs for the nginx-controller pod:</p>
<pre><code>------------------------------------------------------------------------------- NGINX Ingress controller Release: v1.0.4 Build: 9b78b6c197b48116243922170875af4aa752ee59 Repository: https://github.com/kubernetes/ingress-nginx nginx version: nginx/1.19.9 ------------------------------------------------------------------------------- W1205 19:33:42.303136 7 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I1205 19:33:42.303261 7 main.go:221] &quot;Creating API client&quot; host=&quot;https://10.96.0.1:443&quot; I1205 19:33:42.319750 7 main.go:265] &quot;Running in Kubernetes cluster&quot; major=&quot;1&quot; minor=&quot;22&quot; git=&quot;v1.22.3&quot; state=&quot;clean&quot; commit=&quot;c92036820499fedefec0f847e2054d824aea6cd1&quot; platform=&quot;linux/amd64&quot; I1205 19:33:42.402223 7 main.go:104] &quot;SSL fake certificate created&quot; file=&quot;/etc/ingress-controller/ssl/default-fake-certificate.pem&quot; I1205 19:33:42.413477 7 ssl.go:531] &quot;loading tls certificate&quot; path=&quot;/usr/local/certificates/cert&quot; key=&quot;/usr/local/certificates/key&quot; I1205 19:33:42.420838 7 nginx.go:253] &quot;Starting NGINX Ingress controller&quot; I1205 19:33:42.424731 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-controller&quot;, UID:&quot;f2d27cc7-b103-490f-807f-18ccaa614e6b&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;664&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller I1205 19:33:42.427171 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;tcp-services&quot;, UID:&quot;e174971d-df1c-4826-85d4-194598ab1912&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;665&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'CREATE' ConfigMap 
ingress-nginx/tcp-services I1205 19:33:42.427195 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;udp-services&quot;, UID:&quot;0ffc7ee9-2435-4005-983d-ed41aac1c9aa&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;666&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services I1205 19:33:43.622661 7 nginx.go:295] &quot;Starting NGINX process&quot; I1205 19:33:43.622746 7 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader... I1205 19:33:43.623402 7 nginx.go:315] &quot;Starting validation webhook&quot; address=&quot;:8443&quot; certPath=&quot;/usr/local/certificates/cert&quot; keyPath=&quot;/usr/local/certificates/key&quot; I1205 19:33:43.623683 7 controller.go:152] &quot;Configuration changes detected, backend reload required&quot; I1205 19:33:43.643547 7 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader I1205 19:33:43.643635 7 status.go:84] &quot;New leader elected&quot; identity=&quot;ingress-nginx-controller-5f66978484-hvk9j&quot; I1205 19:33:43.691342 7 controller.go:169] &quot;Backend successfully reloaded&quot; I1205 19:33:43.691395 7 controller.go:180] &quot;Initial sync, sleeping for 1 second&quot; I1205 19:33:43.691435 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Pod&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-controller-5f66978484-hvk9j&quot;, UID:&quot;55d45c26-eda7-4b37-9b04-5491cde39fd4&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;697&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration I1205 21:06:47.402756 7 main.go:101] &quot;successfully validated configuration, accepting&quot; ingress=&quot;dashboard-ingress/kubernetes-dashboard&quot; I1205 21:06:47.408929 7 store.go:371] &quot;Found valid IngressClass&quot; 
ingress=&quot;kubernetes-dashboard/dashboard-ingress&quot; ingressclass=&quot;nginx&quot; I1205 21:06:47.409343 7 controller.go:152] &quot;Configuration changes detected, backend reload required&quot; I1205 21:06:47.409352 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;kubernetes-dashboard&quot;, Name:&quot;dashboard-ingress&quot;, UID:&quot;be1ebfe9-fdb3-4d0c-925b-0c206cd0ece3&quot;, APIVersion:&quot;networking.k8s.io/v1&quot;, ResourceVersion:&quot;5529&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync I1205 21:06:47.458273 7 controller.go:169] &quot;Backend successfully reloaded&quot; I1205 21:06:47.458445 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Pod&quot;, Namespace:&quot;ingress-nginx&quot;, Name:&quot;ingress-nginx-controller-5f66978484-hvk9j&quot;, UID:&quot;55d45c26-eda7-4b37-9b04-5491cde39fd4&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;697&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration I1205 21:07:43.654037 7 status.go:300] &quot;updating Ingress status&quot; namespace=&quot;kubernetes-dashboard&quot; ingress=&quot;dashboard-ingress&quot; currentValue=[] newValue=[{IP: Hostname:localhost Ports:[]}] I1205 21:07:43.660598 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;kubernetes-dashboard&quot;, Name:&quot;dashboard-ingress&quot;, UID:&quot;be1ebfe9-fdb3-4d0c-925b-0c206cd0ece3&quot;, APIVersion:&quot;networking.k8s.io/v1&quot;, ResourceVersion:&quot;5576&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync </code></pre> <p>Anyone has a clue on how i can solve my problem ?</p> <p>(Im using minikube v1.24.0)</p> <p>Regards,</p>
<p>I have also faced the same issue with minikube (v1.25.1) running locally.</p> <p><code>kubectl get ingress -n kubernetes-dashboard</code></p> <pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE dashboard-ingress nginx dashboard.com localhost 80 34m </code></pre> <p>After debugging I found this: &quot;If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP&quot;.</p> <p>Run this command:</p> <pre><code>minikube ip XXX.XXX.64.2 </code></pre> <p>Add this IP to the hosts file; after that I am able to access dashboard.com.</p>
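<p>In other words, the <code>/etc/hosts</code> entry should point at the minikube IP rather than 127.0.0.1; something along these lines:</p>

```shell
# Map the ingress host to the minikube VM/container IP
echo "$(minikube ip) dashboard.com" | sudo tee -a /etc/hosts

# Quick check that the ingress answers on that IP
curl -H "Host: dashboard.com" "http://$(minikube ip)/"
```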
<p>I'm running a kind cluster and from one of the pods I need to access the host machine. I know in minikube you can access it using <code>10.0.0.2</code> is there some way I can access it, the same way I could use <code>host.docker.internal</code> on Docker Desktop?</p>
<p>Docker uses the default subnet 172.17.0.0/16 and assigns the pods IP addresses of the form 172.17.X.X.</p> <p>The host can be accessed from inside a pod using the bridge gateway IP address 172.17.0.1.</p>
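<p>This can be checked from inside the cluster with a throwaway pod. Note that 172.17.0.1 is the default Docker bridge gateway and may differ on your machine (kind clusters often run on their own Docker network), so the IP and port below are assumptions to adapt:</p>

```shell
# Run a one-off curl pod and hit a service listening on the host at port 8080
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s http://172.17.0.1:8080/
```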
<p>When running 'kubectl top nodes' has error:</p> <p>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)</p> <pre><code>k8s version: kubectl version Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} [root@manager ~]# kubectl api-versions admissionregistration.k8s.io/v1beta1 apiextensions.k8s.io/v1beta1 apiregistration.k8s.io/v1 apiregistration.k8s.io/v1beta1 apps/v1 apps/v1beta1 apps/v1beta2 authentication.k8s.io/v1 authentication.k8s.io/v1beta1 authorization.k8s.io/v1 authorization.k8s.io/v1beta1 autoscaling/v1 autoscaling/v2beta1 autoscaling/v2beta2 batch/v1 batch/v1beta1 certificates.k8s.io/v1beta1 coordination.k8s.io/v1beta1 events.k8s.io/v1beta1 extensions/v1beta1 metrics.k8s.io/v1beta1 networking.k8s.io/v1 policy/v1beta1 rbac.authorization.k8s.io/v1 rbac.authorization.k8s.io/v1beta1 scheduling.k8s.io/v1beta1 storage.k8s.io/v1 storage.k8s.io/v1beta1 v1 [root@manager ~]# kubectl get po --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-86c58d9df4-km7jc 1/1 Running 0 2d21h kube-system coredns-86c58d9df4-vltm9 1/1 Running 0 2d21h kube-system etcd-manager 1/1 Running 0 2d23h kube-system kube-apiserver-manager 1/1 Running 0 5h47m kube-system kube-controller-manager-manager 1/1 Running 1 2d23h kube-system kube-flannel-ds-amd64-5g8w8 1/1 Running 0 2d23h kube-system kube-flannel-ds-amd64-69lcm 1/1 Running 0 2d23h kube-system kube-flannel-ds-amd64-9hx2f 1/1 Running 0 2d23h kube-system kube-proxy-9s6zm 1/1 Running 0 2d23h kube-system 
kube-proxy-k4qwz 1/1 Running 0 2d23h kube-system kube-proxy-wnzgd 1/1 Running 0 2d23h kube-system kube-scheduler-manager 1/1 Running 1 2d23h kube-system kubernetes-dashboard-79ff88449c-7fpw6 1/1 Running 0 2d23h kube-system metrics-server-68d85f76bb-pj8bs 1/1 Running 0 111m kube-system tiller-deploy-5478b6c547-bf82v 1/1 Running 0 4h7m [root@manager ~]# kubectl logs -f -n kube-system metrics-server-68d85f76bb-pj8bs I1217 06:42:43.451782 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key) [restful] 2018/12/17 06:42:44 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi [restful] 2018/12/17 06:42:44 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/ I1217 06:42:44.099720 1 serve.go:96] Serving securely on [::]:443 </code></pre> <p>And has no system error log. How can I resolve this problem?</p> <p>OS is :CentOS Linux release 7.5.1804 (Core)</p>
<p>I solved this problem by adding <code>hostNetwork: true</code> to metrics-server.yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: hostNetwork: true ## add </code></pre> <p>docs:</p> <pre class="lang-sh prettyprint-override"><code>[root@xx yaml]# kubectl explain deployment.spec.template.spec.hostNetwork KIND: Deployment VERSION: apps/v1 FIELD: hostNetwork &lt;boolean&gt; DESCRIPTION: Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. </code></pre> <p>background:<br> metrics-server was running successfully, but <code>kubectl top nodes</code> still returned: <code>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)</code>.</p>
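<p>Not part of the fix above, but another common cause of the same <code>ServiceUnavailable</code> error is metrics-server failing to verify the kubelet certificates. In test clusters this is often worked around with extra container args (a sketch of the same pod spec section, shown as an alternative to try):</p>

```yaml
    spec:
      hostNetwork: true
      containers:
      - name: metrics-server
        args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS
```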
<p>I need to deploy an application that has the Europe/Rome timezone.</p> <p>I applied the following deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myapp labels: app: myapp spec: replicas: 1 selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: 10.166.23.73:5000/local/myapp:latest imagePullPolicy: Always ports: - containerPort: 8080 env: - name: TZ value: Europe/Rome volumeMounts: - name: tz-rome mountPath: /etc/localtime volumes: - name: tz-rome hostPath: path: /usr/share/zoneinfo/Europe/Rome </code></pre> <p>However, when I run the &quot;date&quot; command within the pod, I don't get the &quot;Europe/Rome&quot; timezone...</p> <p>What is wrong with the above deployment yaml?</p>
<p>If you remove the env variable, that should work. For example:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myapp namespace: test-timezone labels: app: myapp spec: replicas: 1 selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: busybox imagePullPolicy: Always command: [ &quot;sleep&quot;, &quot;10000&quot; ] volumeMounts: - name: tz-rome mountPath: /etc/localtime volumes: - name: tz-rome hostPath: path: /usr/share/zoneinfo/Europe/Rome </code></pre> <p>The output:</p> <pre><code>/ # date Fri Feb 4 02:16:16 CET 2022 </code></pre> <p>If you want to set the timezone using the <strong>TZ environment</strong> variable, you need the <strong>tzdata package</strong> in the container, for example:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myapp3 namespace: test-timezone labels: app: myapp spec: replicas: 1 selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: nginx imagePullPolicy: Always command: [ &quot;sleep&quot;, &quot;10000&quot; ] env: - name: TZ value: Europe/Rome </code></pre> <p>Nginx has the tzdata package inside:</p> <pre><code>root@myapp2-6f5bbdf56-nnx66:/# apt list --installed | grep tzdata WARNING: apt does not have a stable CLI interface. Use with caution in scripts. tzdata/now 2021a-1+deb11u2 all [installed,local] root@myapp2-6f5bbdf56-nnx66:/# date Fri Feb 4 02:32:48 CET 2022 </code></pre>
<p>I've one workflow in which I'm using <code>jsonpath</code> function for a output parameter to extract a specific value from json string, but it is failing with this error <code>Error (exit code 255)</code></p> <p>Here is my workflow</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: wf-dev- spec: entrypoint: main templates: - name: main dag: tasks: - name: dev-create templateRef: name: dev-create-wft template: main arguments: parameters: - name: param1 value: &quot;val1&quot; - name: dev-outputs depends: dev-create.Succeeded templateRef: name: dev-outputs-wft template: main arguments: parameters: - name: devoutputs value: &quot;{{=jsonpath(tasks.dev-create.outputs.parameters.devoutputs, '$.alias.value')}}&quot; </code></pre> <p>In the above workflow task <code>dev-create</code> invokes another workflowTemplate <code>dev-create-wft</code> which returns the output of another workflowTemplate</p> <p>Here is my workflowTemplate</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: WorkflowTemplate metadata: name: dev-create-wft spec: entrypoint: main templates: - name: main outputs: parameters: - name: devoutputs valueFrom: expression: &quot;tasks['dev1'].outputs.parameters.devoutputs&quot; inputs: parameters: - name: param1 dag: tasks: - name: dev1 templateRef: name: fnl-dev template: main arguments: parameters: - name: param1 value: &quot;{{inputs.parameters.param1}}&quot; </code></pre> <p>The returned json output looks like this</p> <pre><code>{ &quot;alias&quot;: { &quot;value&quot;: &quot;testing:dev1infra&quot;, &quot;type&quot;: &quot;string&quot;, &quot;sensitive&quot;: false }, &quot;match&quot;: { &quot;value&quot;: &quot;dev1infra-testing&quot;, &quot;type&quot;: &quot;string&quot;, &quot;sensitive&quot;: false } } </code></pre> <p>Does <code>jsonpath</code> function supported in workflow? 
The reason I am asking is that it works when I use the same function in another workflowTemplate, <code>dev-outputs-wft</code>.</p> <p>What could be the issue?</p>
<p>When an expression fails to evaluate, Argo Workflows simply does not substitute the expression with its evaluated value. Argo Workflows passes the expression <em>as if it were the parameter</em>.</p> <p><code>{{=}}</code> &quot;expression tag templates&quot; in Argo Workflows must be written according to the <a href="https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md" rel="noreferrer">expr language spec</a>.</p> <p>In simple tag templates, Argo Workflows itself does the interpreting. So hyphens in parameter names are allowed. For example, <code>value: &quot;{{inputs.parameters.what-it-is}}&quot;</code> is evaluated by Argo Workflows to be <code>value: &quot;over 9000!&quot;</code>.</p> <p>But in expression tag templates, expr interprets hyphens as minus operators. So <code>value: &quot;{{=inputs.parameters.what-it-is}}&quot;</code> looks like a really weird mathematical expression, fails, and isn't substituted. The workaround is to use <code>['what-it-is']</code> to access the appropriate map item.</p> <p>My guess is that your expression is failing, Argo Workflows is passing the expression to <code>dev-outputs-wft</code> un-replaced, and whatever shell script is receiving that parameter is breaking.</p> <p>If I'm right, the fix is easy:</p> <pre><code> - name: dev-outputs depends: dev-create.Succeeded templateRef: name: dev-outputs-wft template: main arguments: parameters: - name: devoutputs - value: &quot;{{=jsonpath(tasks.dev-create.outputs.parameters.devoutputs, '$.alias.value')}}&quot; + value: &quot;{{=jsonpath(tasks['dev-create'].outputs.parameters.devoutputs, '$.alias.value')}}&quot; </code></pre>
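<p>To see why the hyphen trips the evaluator, here is a quick analogy in plain Python (an illustration only, not the expr library itself): a hyphenated name is not a legal identifier, so an expression parser reads it as subtraction, while bracket-style map access avoids parsing the name at all.</p>

```python
# Analogy: "what-it-is" cannot be an identifier, so an expression
# language would parse it as `what - it - is` (two subtractions).
# Bracket access, as in tasks['dev-create'] above, sidesteps the parser.
params = {"what-it-is": "over 9000!"}

value = params["what-it-is"]
print(value)  # over 9000!
```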
<p>I have an environment made of pods that address their target environment based on an environment variable called <code>CONF_ENV</code>, which can be <code>test</code>, <code>stage</code> or <code>prod</code>.</p> <p>The application running inside the Pod has the same source code across environments; the configuration file is picked according to the <code>CONF_ENV</code> environment variable.</p> <p>I've encapsulated this <code>CONF_ENV</code> in <code>*.properties</code> files just because I may have to add more environment variables later, but I make sure that each property file contains the expected <code>CONF_ENV</code>, e.g.:</p> <ul> <li><code>test.properties</code> has <code>CONF_ENV=test</code>,</li> <li><code>prod.properties</code> has <code>CONF_ENV=prod</code>, and so on...</li> </ul> <p>I struggle to make this work with Kustomize overlays, because I want to define a <code>ConfigMap</code> as a shared resource across all the pods within the same overlay, e.g. <code>test</code> (each pod in its own directory, along with other stuff when needed).</p> <p>So the idea is:</p> <ul> <li><code>base/</code> (shared) with the definition of the <code>Namespace</code> and the <code>ConfigMap</code> (and potentially other shared resources)</li> <li><code>base/pod1/</code> with the definition of pod1 picking from the shared <code>ConfigMap</code> (this defaults to <code>test</code>, but in principle it could be different)</li> </ul> <p>Then the overlays:</p> <ul> <li><code>overlay/test</code> that patches the base with <code>CONF_ENV=test</code> (e.g. for <code>overlay/test/pod1/</code> and so on)</li> <li><code>overlay/prod/</code> that patches the base with <code>CONF_ENV=prod</code> (e.g. for <code>overlay/prod/pod1/</code> and so on)</li> </ul> <p>Each directory has its own <code>kustomization.yaml</code>.</p> <p>The above doesn't work: when I go into e.g. 
<code>overlay/test/pod1/</code> and I invoke the command <code>kubectl kustomize .</code> to check the output YAML, I get all sorts of errors depending on how I define the lists for the YAML keys <code>bases:</code> or <code>resources:</code>.</p> <p>I am trying to share the <code>ConfigMap</code> across the entire <code>CONF_ENV</code> environment in an attempt to <strong>minimize the boilerplate YAML</strong> by leveraging the patching-pattern with Kustomize.</p> <p>The Kubernetes / Kustomize YAML directory structure works like this:</p> <pre class="lang-sh prettyprint-override"><code>├── base
│   ├── configuration.yaml                    # I am trying to share this!
│   ├── kustomization.yaml
│   ├── my_namespace.yaml                     # I am trying to share this!
│   ├── my-scheduleset-etl-misc
│   │   ├── kustomization.yaml
│   │   └── my_scheduleset_etl_misc.yaml
│   ├── my-scheduleset-etl-reporting
│   │   ├── kustomization.yaml
│   │   └── my_scheduleset_etl_reporting.yaml
│   └── test.properties                       # I am trying to share this! 
└── overlay
    └── test
        ├── kustomization.yaml        # here I want to tell it &quot;go and pick up the shared resources in the base dir&quot;
        ├── my-scheduleset-etl-misc
        │   ├── kustomization.yaml
        │   └── test.properties       # I've tried to share this one level above, but also to add this inside the &quot;leaf&quot; level for a given pod
        └── my-scheduleset-etl-reporting
            └── kustomization.yaml
</code></pre> <p>The command <code>kubectl</code> with Kustomize:</p> <ul> <li>sometimes complains that the shared namespace does not exist:</li> </ul> <pre><code>error: merging from generator &amp;{0xc001d99530 { map[] map[]} {{ my-schedule-set-props merge {[CONF_ENV=test] [] [] } &lt;nil&gt;}}}: id resid.ResId{Gvk:resid.Gvk{Group:&quot;&quot;, Version:&quot;v1&quot;, Kind:&quot;ConfigMap&quot;, isClusterScoped:false}, Name:&quot;my-schedule-set-props&quot;, Namespace:&quot;&quot;} does not exist; cannot merge or replace
</code></pre> <ul> <li>sometimes doesn't allow shared resources inside an overlay:</li> </ul> <pre><code>error: loading KV pairs: env source files: [../test.properties]: security; file '/my/path/to/yaml/overlay/test/test.properties' is not in or below '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
</code></pre> <ul> <li>sometimes doesn't allow cycles when I am trying to have multiple bases - the shared resources and the original pod definition:</li> </ul> <pre><code>error: accumulating resources: accumulation err='accumulating resources from '../': '/my/path/to/yaml/overlay/test' must resolve to a file': cycle detected: candidate root '/my/path/to/yaml/overlay/test' contains visited root '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
</code></pre> <p>The overlay <code>kustomization.yaml</code> files inside the pod dirs have:</p> <pre class="lang-yaml prettyprint-override"><code>bases:
  - ../                                     # tried with/without this to share the ConfigMap
  - ../../../base/my-scheduleset-etl-misc/
</code></pre> <p>The <code>kustomization.yaml</code> at 
the root of the overlay has:</p> <pre class="lang-yaml prettyprint-override"><code>bases:
  - ../../base
</code></pre> <p>The <code>kustomization.yaml</code> at the base dir contains this configuration for the ConfigMap:</p> <pre class="lang-yaml prettyprint-override"><code># https://gist.github.com/hermanbanken/3d0f232ffd86236c9f1f198c9452aad9
configMapGenerator:
  - name: my-schedule-set-props
    namespace: my-ss-schedules
    envs:
      - test.properties

vars:
  - name: CONF_ENV
    objref:
      kind: ConfigMap
      name: my-schedule-set-props
      apiVersion: v1
    fieldref:
      fieldpath: data.CONF_ENV

configurations:
  - configuration.yaml
</code></pre> <p>With <code>configuration.yaml</code> containing:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
varReference:
  - path: spec/confEnv/value
    kind: Pod
</code></pre> <p>How do I do this?</p> <p>How do I make sure that I minimise the amount of YAML by sharing all the <code>ConfigMap</code> stuff and the Pods definitions as much as I can?</p>
<p>If I understand your goal correctly, I think you may be grossly over-complicating things. I <em>think</em> you want a common properties file defined in your base, but you want to override specific properties in your overlays. Here's one way of doing that.</p> <p>In base, I have:</p> <pre><code>$ cd base
$ tree
.
├── example.properties
├── kustomization.yaml
└── pod1
    ├── kustomization.yaml
    └── pod.yaml
</code></pre> <p>Where <code>example.properties</code> contains:</p> <pre><code>SOME_OTHER_VAR=somevalue
CONF_ENV=test
</code></pre> <p>And <code>kustomization.yaml</code> contains:</p> <pre><code>resources:
  - pod1

configMapGenerator:
  - name: example-props
    envs:
      - example.properties
</code></pre> <p>I have two overlays defined, <code>test</code> and <code>prod</code>:</p> <pre><code>$ cd ../overlays
$ tree
.
├── prod
│   ├── example.properties
│   └── kustomization.yaml
└── test
    └── kustomization.yaml
</code></pre> <p><code>test/kustomization.yaml</code> looks like this:</p> <pre><code>resources:
  - ../../base
</code></pre> <p>It's just importing the <code>base</code> without any changes, since the value of <code>CONF_ENV</code> from the <code>base</code> directory is <code>test</code>.</p> <p><code>prod/kustomization.yaml</code> looks like this:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

configMapGenerator:
  - name: example-props
    behavior: merge
    envs:
      - example.properties
</code></pre> <p>And <code>prod/example.properties</code> looks like:</p> <pre><code>CONF_ENV=prod
</code></pre> <p>If I run <code>kustomize build overlays/test</code>, I get as output:</p> <pre><code>apiVersion: v1
data:
  CONF_ENV: test
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-7245222b9b
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-7245222b9b
    image: docker.io/alpine
    name: 
alpine
</code></pre> <p>If I run <code>kustomize build overlays/prod</code>, I get:</p> <pre><code>apiVersion: v1
data:
  CONF_ENV: prod
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-h4b5tc869g
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-h4b5tc869g
    image: docker.io/alpine
    name: alpine
</code></pre> <p>That is, everything looks as you would expect given the configuration in <code>base</code>, but we have provided a new value for <code>CONF_ENV</code>.</p> <p>You can find all these files <a href="https://github.com/larsks/so-example-71008589" rel="noreferrer">here</a>.</p>
<p>I would like to be able to deploy the AWS EFS CSI Driver Helm chart hosted at the <a href="https://kubernetes-sigs.github.io/aws-efs-csi-driver/" rel="nofollow noreferrer">AWS EFS SIG Repo</a> using Pulumi, with the source from the <a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver" rel="nofollow noreferrer">AWS EFS CSI Driver GitHub Source</a>. I would like to avoid having almost everything managed with Pulumi except this one part of my infrastructure.</p> <p>Below is the TypeScript class I created to manage interacting with the k8s.helm.v3.Release class:</p> <pre class="lang-js prettyprint-override"><code>import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';

export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
  constructor(cluster: eks.Cluster) {
    super(`aws-efs-csi-driver`, {
      chart: `aws-efs-csi-driver`,
      version: `1.3.6`,
      repositoryOpts: {
        repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
      },
      namespace: `kube-system`,
    }, { provider: cluster.provider });
  }
}
</code></pre> <p>I've tried several variations on the above code: chopping off the <code>-driver</code> in the name, removing <code>aws-efs-csi-driver</code> from the <code>repo</code> property, changing to <code>latest</code> for the version.</p> <p>When I do a <code>pulumi up</code> I get: <code>failed to pull chart: chart &quot;aws-efs-csi-driver&quot; version &quot;1.3.6&quot; not found in https://kubernetes-sigs.github.io/aws-efs-csi-driver/ repository</code></p> <pre class="lang-sh prettyprint-override"><code>$ helm version
version.BuildInfo{Version:&quot;v3.7.0&quot;, GitCommit:&quot;eeac83883cb4014fe60267ec6373570374ce770b&quot;, GitTreeState:&quot;clean&quot;, GoVersion:&quot;go1.16.8&quot;}
</code></pre> <pre class="lang-sh prettyprint-override"><code>$ pulumi version
v3.24.1
</code></pre>
<p>You're using the wrong version in your chart invocation.</p> <p>The version you're selecting is the application version, i.e. the release version of the underlying application. You need to set the chart version instead, see <a href="https://helm.sh/docs/topics/charts/#charts-and-versioning" rel="nofollow noreferrer">here</a>, which is defined <a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/release-1.3/charts/aws-efs-csi-driver/Chart.yaml#L3" rel="nofollow noreferrer">here</a>.</p> <p>The following works:</p> <pre class="lang-js prettyprint-override"><code>const csiDrive = new kubernetes.helm.v3.Release(&quot;csi&quot;, {
  chart: `aws-efs-csi-driver`,
  version: `2.2.3`,
  repositoryOpts: {
    repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
  },
  namespace: `kube-system`,
});
</code></pre> <p>If you want to use the existing code you have, try this:</p> <pre class="lang-js prettyprint-override"><code>import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';

export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
  constructor(cluster: eks.Cluster) {
    super(`aws-efs-csi-driver`, {
      chart: `aws-efs-csi-driver`,
      version: `2.2.3`,
      repositoryOpts: {
        repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
      },
      namespace: `kube-system`,
    }, { provider: cluster.provider });
  }
}
</code></pre>
<p>I am trying to run a Spark job on a separate master Spark server hosted on kubernetes but port forwarding reports the following error:</p> <pre class="lang-none prettyprint-override"><code>E0206 19:52:24.846137 14968 portforward.go:400] an error occurred forwarding 7077 -&gt; 7077: error forwarding port 7077 to pod 1cf922cbe9fc820ea861077c030a323f6dffd4b33bb0c354431b4df64e0db413, uid : exit status 1: 2022/02/07 00:52:26 socat[25402] E connect(16, AF=2 127.0.0.1:7077, 16): Connection refused </code></pre> <p>My setup is:</p> <ul> <li>I am using VS Code with a dev container to manage a setup where I can run Spark applications. I can run local spark jobs when I build my context like so : <code>sc = pyspark.SparkContext(appName=&quot;Pi&quot;)</code></li> <li>My host computer is running Docker Desktop where I have kubernetes running and used Helm to run the Spark release from Bitnami <a href="https://artifacthub.io/packages/helm/bitnami/spark" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/spark</a></li> <li>The VS Code dev container <strong>can</strong> access the host correctly since I can do <code>curl host.docker.internal:80</code> and I get the Spark web UI status page. The port 80 is forwarded from the host using <code>kubectl port-forward --namespace default --address 0.0.0.0 svc/my-release-spark-master-svc 80:80</code></li> <li>I am also forwarding the port <code>7077</code> using a similar command <code>kubectl port-forward --address 0.0.0.0 svc/my-release-spark-master-svc 7077:7077</code>.</li> </ul> <p>When I create a Spark context like this <code>sc = pyspark.SparkContext(appName=&quot;Pi&quot;, master=&quot;spark://host.docker.internal:7077&quot;)</code> I am expecting Spark to submit jobs to that master. 
I don't know much about Spark, but I have seen a few examples creating a context like this.</p> <p>When I run the code, I see connection attempts failing at port 7077 of the Kubernetes port forwarding, so the requests are going through but they are being refused somehow.</p> <pre class="lang-none prettyprint-override"><code>Handling connection for 7077
E0206 19:52:24.846137 14968 portforward.go:400] an error occurred forwarding 7077 -&gt; 7077: error forwarding port 7077 to pod 1cf922cbe9fc820ea861077c030a323f6dffd4b33bb0c354431b4df64e0db413, uid : exit status 1: 2022/02/07 00:52:26 socat[25402] E connect(16, AF=2 127.0.0.1:7077, 16): Connection refused
</code></pre> <p>Now, I have no idea why the connections are being refused. I know the Spark server is accepting requests because I can see the Web UI from within the docker dev container. I know that the Spark service is exposing port 7077 because I can do:</p> <pre class="lang-none prettyprint-override"><code>$ kubectl get services
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
kubernetes                    ClusterIP   10.96.0.1        &lt;none&gt;        443/TCP           28h
my-release-spark-headless     ClusterIP   None             &lt;none&gt;        &lt;none&gt;            7h40m
my-release-spark-master-svc   ClusterIP   10.108.109.228   &lt;none&gt;        7077/TCP,80/TCP   7h40m
</code></pre> <p>Can anyone tell me why the connections are refused and how I can successfully configure the Spark master to accept jobs from external callers?</p> <p>Example code I am using is:</p> <pre class="lang-py prettyprint-override"><code>import findspark
findspark.init()

import pyspark
import random

#sc = pyspark.SparkContext(appName=&quot;Pi&quot;, master=&quot;spark://host.docker.internal:7077&quot;)
sc = pyspark.SparkContext(appName=&quot;Pi&quot;)

num_samples = 100000000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y &lt; 1

count = sc.parallelize(range(0, num_samples)).filter(inside).count()
pi = 4 * count / num_samples
print(pi)

sc.stop()
</code></pre>
<p>After tinkering with it a bit more, I noticed this output when launching the Helm chart for Apache Spark: <code>** IMPORTANT: When submit an application from outside the cluster service type should be set to the NodePort or LoadBalancer. **</code></p> <p>This led me to research Kubernetes networking a bit more. To submit a job, it is not sufficient to forward port 7077; the cluster itself needs to have an IP assigned. This requires the Helm chart to be launched with the following command to set the Spark config values: <code>helm install my-release --set service.type=LoadBalancer --set service.loadBalancerIP=192.168.2.50 bitnami/spark</code>. My host IP address is above and will be reachable by the Docker container.</p> <p>With the LoadBalancer IP assigned, Spark will run using the example code provided.</p> <p>Recap: don't use port forwarding to submit jobs; a cluster IP needs to be assigned.</p>
<p>I am getting a <code>ServiceUnavailable</code> error when I try to run the <code>kubectl top nodes</code> or <code>kubectl top pods</code> commands in EKS. I am running my cluster in EKS, and I am not finding any solution for this online. If anyone has faced this issue in EKS, please let me know how to resolve it.</p> <pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
</code></pre> <p>Output of <code>kubectl get apiservices v1beta1.metrics.k8s.io -o yaml</code>:</p> <pre><code>apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;apiregistration.k8s.io/v1&quot;,&quot;kind&quot;:&quot;APIService&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;labels&quot;:{&quot;k8s-app&quot;:&quot;metrics-server&quot;},&quot;name&quot;:&quot;v1beta1.metrics.k8s.io&quot;},&quot;spec&quot;:{&quot;group&quot;:&quot;metrics.k8s.io&quot;,&quot;groupPriorityMinimum&quot;:100,&quot;insecureSkipTLSVerify&quot;:true,&quot;service&quot;:{&quot;name&quot;:&quot;metrics-server&quot;,&quot;namespace&quot;:&quot;kube-system&quot;},&quot;version&quot;:&quot;v1beta1&quot;,&quot;versionPriority&quot;:100}}
  creationTimestamp: &quot;2022-02-03T08:22:59Z&quot;
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
  resourceVersion: &quot;1373088&quot;
  uid: 2066d4cb-8105-4aea-9678-8303595dc47b
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: &quot;2022-02-03T08:22:59Z&quot;
    message: 'failing or missing response from https://10.16.55.204:4443/apis/metrics.k8s.io/v1beta1: Get &quot;https://10.16.55.204:4443/apis/metrics.k8s.io/v1beta1&quot;: dial tcp 10.16.55.204:4443: i/o timeout'
    reason: FailedDiscoveryCheck
    status: 
&quot;False&quot;
    type: Available
</code></pre> <p><code>metrics-server 1/1 1 1 3d22h</code></p> <p><code>kubectl describe deployment metrics-server -n kube-system</code></p> <pre><code>Name:                   metrics-server
Namespace:              kube-system
CreationTimestamp:      Thu, 03 Feb 2022 09:22:59 +0100
Labels:                 k8s-app=metrics-server
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               k8s-app=metrics-server
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=metrics-server
  Service Account:  metrics-server
  Containers:
   metrics-server:
    Image:      k8s.gcr.io/metrics-server/metrics-server:v0.6.0
    Port:       4443/TCP
    Host Port:  0/TCP
    Args:
      --cert-dir=/tmp
      --secure-port=4443
      --kubelet-insecure-tls=true
      --kubelet-preferred-address-types=InternalIP
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --kubelet-use-node-status-port
      --metric-resolution=15s
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:  &lt;none&gt;
    Mounts:
      /tmp from tmp-dir (rw)
  Volumes:
   tmp-dir:
    Type:               EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:          &lt;unset&gt;
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  &lt;none&gt;
NewReplicaSet:   metrics-server-5dcd6cbcb9 (1/1 replicas created)
Events:          &lt;none&gt;
</code></pre>
<p>Download the <a href="https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml" rel="nofollow noreferrer">components.yaml</a>, find and replace 4443 with 443, and do a <code>kubectl replace -f components.yaml -n kube-system --force</code>.</p>
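<p>As a sketch, the whole procedure can be scripted. This assumes <code>curl</code> is available and <code>kubectl</code> is configured for your cluster; the URL is the one linked above:</p>

```shell
# Download the manifest, switch the port from 4443 to 443 everywhere,
# then force-replace the metrics-server objects.
curl -sL -o components.yaml \
  https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
sed -i 's/4443/443/g' components.yaml
kubectl replace -f components.yaml -n kube-system --force
```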
<p>I'm trying to deploy an EKS self managed with Terraform. While I can deploy the cluster with addons, vpc, subnet and all other resources, it always fails at helm:</p> <pre><code>Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials with module.eks-ssp-kubernetes-addons.module.ingress_nginx[0].helm_release.nginx[0] on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/ingress-nginx/main.tf line 19, in resource &quot;helm_release&quot; &quot;nginx&quot;: resource &quot;helm_release&quot; &quot;nginx&quot; { </code></pre> <p>This error repeats for <code>metrics_server</code>, <code>lb_ingress</code>, <code>argocd</code>, but <code>cluster-autoscaler</code> throws:</p> <pre><code>Warning: Helm release &quot;cluster-autoscaler&quot; was created but has a failed status. with module.eks-ssp-kubernetes-addons.module.cluster_autoscaler[0].helm_release.cluster_autoscaler[0] on .terraform/modules/eks-ssp-kubernetes-addons/modules/kubernetes-addons/cluster-autoscaler/main.tf line 1, in resource &quot;helm_release&quot; &quot;cluster_autoscaler&quot;: resource &quot;helm_release&quot; &quot;cluster_autoscaler&quot; { </code></pre> <p>My <code>main.tf</code> looks like this:</p> <pre><code>terraform { backend &quot;remote&quot; {} required_providers { aws = { source = &quot;hashicorp/aws&quot; version = &quot;&gt;= 3.66.0&quot; } kubernetes = { source = &quot;hashicorp/kubernetes&quot; version = &quot;&gt;= 2.7.1&quot; } helm = { source = &quot;hashicorp/helm&quot; version = &quot;&gt;= 2.4.1&quot; } } } data &quot;aws_eks_cluster&quot; &quot;cluster&quot; { name = module.eks-ssp.eks_cluster_id } data &quot;aws_eks_cluster_auth&quot; &quot;cluster&quot; { name = module.eks-ssp.eks_cluster_id } provider &quot;aws&quot; { access_key = &quot;xxx&quot; secret_key = &quot;xxx&quot; region = &quot;xxx&quot; assume_role { role_arn = &quot;xxx&quot; } } provider &quot;kubernetes&quot; { host = 
data.aws_eks_cluster.cluster.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) token = data.aws_eks_cluster_auth.cluster.token } provider &quot;helm&quot; { kubernetes { host = data.aws_eks_cluster.cluster.endpoint token = data.aws_eks_cluster_auth.cluster.token cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data) } } </code></pre> <p>My <code>eks.tf</code> looks like this:</p> <pre><code>module &quot;eks-ssp&quot; { source = &quot;github.com/aws-samples/aws-eks-accelerator-for-terraform&quot; # EKS CLUSTER tenant = &quot;DevOpsLabs2b&quot; environment = &quot;dev-test&quot; zone = &quot;&quot; terraform_version = &quot;Terraform v1.1.4&quot; # EKS Cluster VPC and Subnet mandatory config vpc_id = &quot;xxx&quot; private_subnet_ids = [&quot;xxx&quot;,&quot;xxx&quot;, &quot;xxx&quot;, &quot;xxx&quot;] # EKS CONTROL PLANE VARIABLES create_eks = true kubernetes_version = &quot;1.19&quot; # EKS SELF MANAGED NODE GROUPS self_managed_node_groups = { self_mg = { node_group_name = &quot;DevOpsLabs2b&quot; subnet_ids = [&quot;xxx&quot;,&quot;xxx&quot;, &quot;xxx&quot;, &quot;xxx&quot;] create_launch_template = true launch_template_os = &quot;bottlerocket&quot; # amazonlinux2eks or bottlerocket or windows custom_ami_id = &quot;xxx&quot; public_ip = true # Enable only for public subnets pre_userdata = &lt;&lt;-EOT yum install -y amazon-ssm-agent \ systemctl enable amazon-ssm-agent &amp;&amp; systemctl start amazon-ssm-agent \ EOT disk_size = 10 instance_type = &quot;t2.small&quot; desired_size = 2 max_size = 10 min_size = 0 capacity_type = &quot;&quot; # Optional Use this only for SPOT capacity as capacity_type = &quot;spot&quot; k8s_labels = { Environment = &quot;dev-test&quot; Zone = &quot;&quot; WorkerType = &quot;SELF_MANAGED_ON_DEMAND&quot; } additional_tags = { ExtraTag = &quot;t2x-on-demand&quot; Name = &quot;t2x-on-demand&quot; subnet_type = &quot;public&quot; } 
create_worker_security_group = false # Creates a dedicated sec group for this Node Group }, } } enable_amazon_eks_vpc_cni = true amazon_eks_vpc_cni_config = { addon_name = &quot;vpc-cni&quot; addon_version = &quot;v1.7.5-eksbuild.2&quot; service_account = &quot;aws-node&quot; resolve_conflicts = &quot;OVERWRITE&quot; namespace = &quot;kube-system&quot; additional_iam_policies = [] service_account_role_arn = &quot;&quot; tags = {} } enable_amazon_eks_kube_proxy = true amazon_eks_kube_proxy_config = { addon_name = &quot;kube-proxy&quot; addon_version = &quot;v1.19.8-eksbuild.1&quot; service_account = &quot;kube-proxy&quot; resolve_conflicts = &quot;OVERWRITE&quot; namespace = &quot;kube-system&quot; additional_iam_policies = [] service_account_role_arn = &quot;&quot; tags = {} } #K8s Add-ons enable_aws_load_balancer_controller = true enable_metrics_server = true enable_cluster_autoscaler = true enable_aws_for_fluentbit = true enable_argocd = true enable_ingress_nginx = true depends_on = [module.eks-ssp.self_managed_node_groups] } </code></pre>
<p>OP has confirmed in the comment that the problem was resolved:</p> <blockquote> <p>Of course. I think I found the issue. Doing &quot;kubectl get svc&quot; throws: &quot;An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::xxx:user/terraform_deploy is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxx:user/terraform_deploy&quot;</p> <p>Solved it by using my actual role, that's crazy. No idea why it was calling itself.</p> </blockquote> <p>For similar problem look also <a href="https://github.com/hashicorp/terraform-provider-helm/issues/400" rel="nofollow noreferrer">this issue</a>.</p>
<p>While mounting my EBS volume to the kubernetes cluster I was getting this error :</p> <pre><code> Warning FailedMount 64s kubelet Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[ebs-volume kube-api-access-rq86p]: timed out waiting for the condition </code></pre> <p>Below are my SC, PV, PVC, and Deployment files</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: standard provisioner: kubernetes.io/aws-ebs parameters: type: gp2 reclaimPolicy: Retain mountOptions: - debug volumeBindingMode: Immediate --- kind: PersistentVolume apiVersion: v1 metadata: name: ebs-pv labels: type: ebs-pv spec: storageClassName: standard capacity: storage: 1Gi accessModes: - ReadWriteOnce awsElasticBlockStore: volumeID: vol-0221ed06914dbc8fd fsType: ext4 --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs-pvc spec: storageClassName: standard accessModes: - ReadWriteOnce resources: requests: storage: 1Gi --- kind: Deployment metadata: labels: app: gitea name: gitea spec: replicas: 1 selector: matchLabels: app: gitea template: metadata: labels: app: gitea spec: volumes: - name: ebs-volume persistentVolumeClaim: claimName: ebs-pvc containers: - image: gitea/gitea:latest name: gitea volumeMounts: - mountPath: &quot;/data&quot; name: ebs-volume </code></pre> <p>This is my PV and PVC which I believe is connected perfectly</p> <pre><code> NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/ebs-pv 1Gi RWO Retain Bound default/ebs-pvc standard 18m NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/ebs-pvc Bound ebs-pv 1Gi RWO standard 18m </code></pre> <p>This is my storage class</p> <pre><code>NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/aws-ebs Retain Immediate false 145m </code></pre> <p>This is my pod description</p> <pre><code>Name: gitea-bb86dd6b8-6264h Namespace: default Priority: 0 
Node:         worker01/172.31.91.105
Start Time:   Fri, 04 Feb 2022 12:36:15 +0000
Labels:       app=gitea
              pod-template-hash=bb86dd6b8
Annotations:  &lt;none&gt;
Status:       Pending
IP:
IPs:          &lt;none&gt;
Controlled By:  ReplicaSet/gitea-bb86dd6b8
Containers:
  gitea:
    Container ID:
    Image:          gitea/gitea:latest
    Image ID:
    Port:           &lt;none&gt;
    Host Port:      &lt;none&gt;
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    &lt;none&gt;
    Mounts:
      /data from ebs-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rq86p (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  ebs-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ebs-pvc
    ReadOnly:   false
  kube-api-access-rq86p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       &lt;nil&gt;
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    20m                  default-scheduler  Successfully assigned default/gitea-bb86dd6b8-6264h to worker01
  Warning  FailedMount  4m47s (x2 over 16m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[kube-api-access-rq86p ebs-volume]: timed out waiting for the condition
  Warning  FailedMount  19s (x7 over 18m)    kubelet            Unable to attach or mount volumes: unmounted volumes=[ebs-volume], unattached volumes=[ebs-volume kube-api-access-rq86p]: timed out waiting for the condition
</code></pre> <p>This is my ebs-volume (the last one), which I have attached to the master node on which I am performing operations right now:</p> <pre><code>NAME    FSTYPE   LABEL           UUID                                 MOUNTPOINT
loop0   squashfs                                                      /snap/core18/2253
loop1   squashfs                                                      /snap/snapd/14066
loop2   squashfs                                                      /snap/amazon-ssm-agent/4046
xvda
└─xvda1 ext4     cloudimg-rootfs c1ce24a2-4987-4450-ae15-62eb028ff1cd /
xvdf    ext4                     36609bbf-3248-41f1-84c3-777eb1d6f364
</code></pre> <p>I created the cluster manually on AWS Ubuntu 18 instances; there are 2 worker nodes and 1 master node, all running on Ubuntu 18 instances on AWS.</p> <p>Below are the commands I used to create the EBS volume:</p> <pre><code>aws ec2 create-volume --availability-zone=us-east-1c --size=10 --volume-type=gp2
aws ec2 attach-volume --device /dev/xvdf --instance-id &lt;MASTER INSTANCE ID&gt; --volume-id &lt;MY VOLUME ID&gt;
sudo mkfs -t ext4 /dev/xvdf
</code></pre> <p>After this the volume was successfully created and attached, so I don't think there is a problem in this part.</p> <p>One thing I have not done, and I don't know whether it is necessary or not, is the following:</p> <pre><code>The cluster also needs to have the flag --cloud-provider=aws enabled on the kubelet, api-server, and the controller-manager during the cluster's creation
</code></pre> <p>I found this on one of the blogs, but at that moment my cluster was already set up, so I didn't do it. If it is a problem, please notify me, and please give some guidance on how to do it.</p> <p>I used Flannel as my network plugin while creating the cluster.</p> <p>I don't think I left out any information, but if there is something additional you want to know, please ask.</p> <p>Thank you in advance!</p>
<p><code>This is my ebs-volume the last one which I have connected to the master node...</code></p> <p>A pod that wishes to mount this volume must run on the same node the volume is currently attached to. In the scenario you described, it is currently attached to your Ubuntu-based master node, so you need to run the pod on that node in order to mount it. Otherwise, you need to release the volume from the master node (detach it from the underlying EC2 instance) and re-deploy your PVC/PV/Pod so that they settle on a worker node instead of the master node.</p>
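If the volume must stay attached to the master, one way to co-locate the pod with it is a node selector on the Deployment's pod template. This is only a sketch under the assumption that the master's hostname label is <code>master01</code> (check the real value with <code>kubectl get nodes --show-labels</code>); a master node may also require a toleration for its control-plane taint:

```yaml
# Hypothetical pod template fragment -- adjust names to your cluster
spec:
  nodeSelector:
    kubernetes.io/hostname: master01    # assumption: the node holding the EBS volume
  tolerations:
  - key: node-role.kubernetes.io/master # only needed if the master is tainted
    operator: Exists
    effect: NoSchedule
```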
<p>I have a situation where my AKS setup is already in place: there are two AKS clusters, and they are internally available within their security zones only. I don't want to go via the internet to access the internal resources of one cluster from the other cluster.</p> <p>I was exploring Private Link services &amp; endpoints; any suggestions?</p> <p>Both clusters are in different VNets.</p>
<p>I achieved this by creating a Private Link service on the load balancer that fronts the internal ingress (private subnet) of the destination cluster. This requires the resource ID and the load balancer details.</p> <p>Then I created a private endpoint to that Private Link service in my source cluster's VNet, which requires the subnet ID and resource group.</p> <p>The IP created via the private endpoint can be used to communicate with the internal load balancer; I also created a DNS entry for it to be used as the host in the ingress object.</p>
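The same two steps can be sketched with the Azure CLI. Every name here (resource groups, VNets, subnets, load balancer, frontend IP configuration) is a placeholder you must replace with your own values; in AKS the internal ingress load balancer typically lives in the node resource group, so verify the names in your subscription first:

```shell
# 1) Private Link service on the destination cluster's internal LB (names are placeholders)
az network private-link-service create \
  --resource-group <DEST_NODE_RESOURCE_GROUP> \
  --name my-pls \
  --vnet-name dest-vnet --subnet pls-subnet \
  --lb-name kubernetes-internal \
  --lb-frontend-ip-configs <FRONTEND_IP_CONFIG_NAME>

# 2) Private endpoint in the source cluster's VNet pointing at that service
az network private-endpoint create \
  --resource-group source-rg \
  --name my-pe \
  --vnet-name source-vnet --subnet pe-subnet \
  --private-connection-resource-id <PRIVATE_LINK_SERVICE_ID> \
  --connection-name my-pls-connection
```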
<p>I read in a book written by Helm creators the following fact about the --force option :</p> <blockquote> <p>Sometimes, though, Helm users want to make sure that the pods are restarted. That’s where the --force flag comes in. Instead of modifying the Deployment (or similar object), it will delete and re-create it. This forces Kubernetes to delete the old pods and create new ones.</p> </blockquote> <p>What I understand from that is, if I install a chart and then I change the number of replicas (=number of pods) then I upgrade the chart, it should recreate all the pods. This is not what happens in my case and I wanted to understand what I am missing here.</p>
<p>Let's take a hypothetical minimal Deployment (many required details omitted):</p> <pre class="lang-yaml prettyprint-override"><code>spec: replicas: 3 template: spec: containers: - image: abc:123 </code></pre> <p>and you change this to <em>only</em> increase the replica count</p> <pre class="lang-yaml prettyprint-override"><code>spec: replicas: 5 # &lt;-- this is the only change template: spec: containers: - image: abc:123 </code></pre> <p>The Kubernetes Deployment controller looks at this change and says &quot;I already have 3 Pods running <code>abc:123</code>; if I leave those alone, and start 2 more, then I will have 5, and the system will look like what the Deployment spec requests&quot;. So absent any change to the embedded Pod spec, the existing Pods will be left alone and the cluster will just scale up.</p> <pre class="lang-none prettyprint-override"><code>deployment-12345-aaaaa deployment-12345-aaaaa deployment-12345-bbbbb deployment-12345-bbbbb deployment-12345-ccccc ---&gt; deployment-12345-ccccc deployment-12345-ddddd deployment-12345-eeeee (replicas: 3) (replicas: 5) </code></pre> <p>Usually this is fine, since you're running the same image version and the same code. If you do need to forcibly restart things, I'd suggest using <code>kubectl rollout restart deployment/its-name</code> rather than trying to convince Helm to do it.</p>
<p>I am not able to communicate between two services.</p> <p>post-deployment.yaml</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector
        tier: backend
    spec:
      containers:
      - name: python-web-pod
        image: sakshiarora2012/python-backend:v10
        ports:
        - containerPort: 5000
</code></pre> <p>post-deployment2.yaml</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment2
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector2
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector2
        tier: backend
    spec:
      containers:
      - name: python-web-pod2
        image: sakshiarora2012/python-backend:v8
        ports:
        - containerPort: 5000
</code></pre> <p>post-service.yml</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000
    nodePort: 30400
  type: NodePort
</code></pre> <p>post-service2.yml</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: python-data-service2
spec:
  selector:
    app: python-web-selector2
    tier: backend
  ports:
  - port: 5000
  type: ClusterIP
</code></pre> <p>When I try to ping from one container to another, it is not able to ping:</p> <pre><code>root@python-data-deployment-7bd65dc685-htxmj:/project# ping python-data-service.default.svc.cluster.local
PING python-data-service.default.svc.cluster.local (10.107.11.236) 56(84) bytes of data.
^C
--- python-data-service.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 139ms
</code></pre> <p>The DNS entries resolve fine:</p> <pre><code>sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      python-data-service.default.svc.cluster.local
Address:   10.107.11.236

sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service2
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      python-data-service2.default.svc.cluster.local
Address:   10.103.97.40

sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl get pod -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
dnsutils                                   1/1     Running   0          5m54s   172.17.0.9   minikube   &lt;none&gt;           &lt;none&gt;
python-data-deployment-7bd65dc685-htxmj    1/1     Running   0          47m     172.17.0.6   minikube   &lt;none&gt;           &lt;none&gt;
python-data-deployment2-764744b97d-mc9gm   1/1     Running   0          43m     172.17.0.8   minikube   &lt;none&gt;           &lt;none&gt;
python-db-deployment-d54f6b657-rfs2b       1/1     Running   0          44h     172.17.0.7   minikube   &lt;none&gt;           &lt;none&gt;

sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service
Name:                     python-data-service
Namespace:                default
Labels:                   &lt;none&gt;
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;Service&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;python-data-service&quot;,&quot;namespace&quot;:&quot;default&quot;},&quot;spec&quot;:{&quot;ports&quot;:[{&quot;no...
Selector:                 app=python-web-selector,tier=backend
Type:                     NodePort
IP:                       10.107.11.236
Port:                     &lt;unset&gt;  5000/TCP
TargetPort:               5000/TCP
NodePort:                 &lt;unset&gt;  30400/TCP
Endpoints:                172.17.0.6:5000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &lt;none&gt;

sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service2
Name:              python-data-service2
Namespace:         default
Labels:            &lt;none&gt;
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;Service&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;python-data-service2&quot;,&quot;namespace&quot;:&quot;default&quot;},&quot;spec&quot;:{&quot;p...
Selector:          app=python-web-selector2,tier=backend
Type:              ClusterIP
IP:                10.103.97.40
Port:              &lt;unset&gt;  5000/TCP
TargetPort:        5000/TCP
Endpoints:         172.17.0.8:5000
Session Affinity:  None
Events:            &lt;none&gt;
</code></pre> <p>sakshiarora@Sakshis-MacBook-Pro Student_Registration %</p> <p>I think that if the DNS entry showed an IP in the 172.17.0.x range then it would work, but I am not sure why it is not showing that in the DNS entry. Any pointers?</p>
<p>Are you able to connect to your pods at all? Try a port-forward first to see if you can reach one pod directly, and then check the connectivity between the two pods. Finally, check whether a default-deny network policy is set there; maybe you have some restrictions at the network level:</p> <pre><code>kubectl get networkpolicy -n &lt;namespace&gt;
</code></pre>
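As a concrete way to run those checks (pod and service names taken from the question; the URL paths are assumptions about the app, and <code>curl</code> must exist in the container image). Note also that a ClusterIP is virtual and only forwards TCP/UDP, so <code>ping</code> (ICMP) against a Service IP is expected to fail even when the Service works; test the TCP port instead:

```shell
# Reach the pod directly from your machine, bypassing the Service
kubectl port-forward pod/python-data-deployment-7bd65dc685-htxmj 5000:5000
curl http://localhost:5000/   # run in a second terminal

# From inside one pod, test the other through its Service name on the TCP port
kubectl exec -it python-data-deployment-7bd65dc685-htxmj -- \
  curl http://python-data-service2.default.svc.cluster.local:5000/
```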
<p>Hi all, we have a Flink application with blue-green deployment which we get using the Flink operator, <a href="https://github.com/lyft/flinkk8soperator" rel="nofollow noreferrer">Flinkk8soperator</a> for Apache Flink. The operator spins up the following three K8s services after a deployment:</p> <pre><code>my-flinkapp-14hdhsr (Top level service)
my-flinkapp-green
my-flinkapp-blue
</code></pre> <p>The idea is that one of the two among blue/green would be active and would have pods (either blue or green).</p> <p>And a selector to the active one would be stored in the top-level <code>myflinkapp-14hdhsr</code> service with the selector <code>flink-application-version=blue</code>. Or green. As follows:</p> <pre><code>Labels:       flink-app=my-flinkapp
              flink-app-hash=14hdhsr
              flink-application-version=blue
Annotations:  &lt;none&gt;
Selector:     flink-app=my-flinkapp,flink-application-version=blue,flink-deployment-type=jobmanager,
</code></pre> <p>I have an ingress defined as follows, which I want to use to point to the top-level service.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Accept-Encoding &quot;&quot;;
      sub_filter '&lt;head&gt;' '&lt;head&gt; &lt;base href=&quot;/happy-flink-ui/&quot;&gt;';
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
    nginx.ingress.kubernetes.io/auth-url: &quot;https://$host/oauth2/auth&quot;
    nginx.ingress.kubernetes.io/auth-signin: &quot;https://$host/oauth2/start?rd=$escaped_request_uri&quot;
  name: flink-secure-ingress-my-flink-app
  namespace: happy-flink-flink
spec:
  rules:
  - host: flinkui-myapp.foo.com
    http:
      paths:
      - path: /happy-flink-ui(/|$)(.*)
        backend:
          serviceName: my-flinkapp-14hdhsr // This works but....
          servicePort: 8081
</code></pre> <p>The issue I am facing is that the top-level service keeps changing the hash at the end, as the Flink operator changes it at every deployment, e.g. myflinkapp-89hddew, etc.</p> <p>So I cannot have a static service name in the ingress definition.</p> <p>So I am wondering if an ingress can choose a service based on a selector or a regular expression of the service name which can account for the top-level app service name plus the hash at the end.</p> <p>The <code>flink-app-hash</code> (i.e. the hash part of the service name, 14hdsr) is also part of the labels in the top-level service. Is there any way I could leverage that?</p> <p>Wondering if a default backend could be applied here?</p> <p>Have folks using the Flink operator solved this a different way?</p>
<p>Unfortunately it is not possible out of the box to use any kind of <code>regex/wildcards/jsonpath/variables/references</code> inside <code>.backend.serviceName</code> of an Ingress. You can use a regex only in <code>path</code>: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">Ingress Path Matching</a></p> <p>And it's not going to be implemented soon: <a href="https://github.com/kubernetes/ingress-nginx/issues/7739" rel="nofollow noreferrer">Allow variable references in backend spec</a> has stalled.</p> <p>It would be very interesting to hear any possible solution for this. It was previously discussed on Stack Overflow, however without any progress: <a href="https://stackoverflow.com/a/60810435/9929015">https://stackoverflow.com/a/60810435/9929015</a></p>
<p>I am currently building a CI/CD pipeline where I am trying to test a simple nginx deployment, but when I run <code>kubectl apply -f ./nginx-deployment.yaml</code> I only get the output saying that the resources got created/updated.</p> <p>In my use case, the first thing I get is:</p> <pre class="lang-sh prettyprint-override"><code>deployment.apps/nginx1.14.2 created
service/my-service created
</code></pre> <p>And this is the output of <code>kubectl get all</code>, where the pods' <code>STATUS</code> says <strong>ContainerCreating</strong>:</p> <p><a href="https://i.stack.imgur.com/uaJYU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uaJYU.png" alt="kubectl get all output" /></a></p> <p>The problem is that in my pipeline I want to run a <code>curl</code> command to check whether my nginx server is working properly once the image has been pulled and the pod STATUS is Running, but obviously if the image hasn't been pulled yet, curl says connection refused because the container is not up yet.</p> <p>How can I do that, and is there a way to get the output of pulling images at least?</p> <blockquote> <p>The task runs the commands with &amp;&amp;, so the curl gets executed right after kubectl.</p> </blockquote> <blockquote> <p>I am working on a kind cluster with 3 control-plane nodes and 2 worker nodes.</p> </blockquote>
<p>You can use <code>kubectl wait</code> to wait for the deployment to be in a certain condition. Another option (or possibly used in combination) is to retry curl until the request returns a 200. An example of kubectl wait for your nginx deployment to become ready:</p> <pre><code>kubectl wait --for=condition=available deployment/nginx </code></pre>
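Combining both suggestions, a pipeline step could first wait for the rollout and then retry curl until it gets a success status. This is a sketch: the URL and port <code>30080</code> are placeholders for wherever your nginx Service is reachable from the CI runner:

```shell
# Wait (up to 2 minutes) for the Deployment to report the Available condition
kubectl wait --for=condition=available --timeout=120s deployment/nginx

# Then retry curl until nginx answers; -f makes curl fail on HTTP error codes
until curl -sf http://localhost:30080/ > /dev/null; do
  echo "nginx not ready yet, retrying..."
  sleep 2
done
echo "nginx is up"
```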
<p>I'm using GCP Composer to run an algorithm, and at the end of the stream I want to run a task that will perform several operations, copying and deleting files and folders from a volume to a bucket. I'm trying to perform these copying and deleting operations via a <code>KubernetesPodOperator</code>. I'm having a hard time finding the right way to run several commands using &quot;cmds&quot;. I also tried using &quot;cmds&quot; with &quot;arguments&quot;. Here is my <code>KubernetesPodOperator</code> and the cmds and arguments combinations I tried:</p> <pre><code>post_algo_run = kubernetes_pod_operator.KubernetesPodOperator(
    task_id=&quot;multi-coher-post-operations&quot;,
    name=&quot;multi-coher-post-operations&quot;,
    namespace=&quot;default&quot;,
    image=&quot;google/cloud-sdk:alpine&quot;,

    ### doesn't work ###
    cmds=[&quot;gsutil&quot;, &quot;cp&quot;, &quot;/data/splitter-output\*.csv&quot;, &quot;gs://my_bucket/data&quot;, &quot;&amp;&quot;, &quot;gsutil&quot;, &quot;rm&quot;, &quot;-r&quot;, &quot;/input&quot;],
    # Error:
    # [2022-01-27 09:31:38,407] {pod_manager.py:197} INFO - CommandException: Destination URL must name a directory, bucket, or bucket
    # [2022-01-27 09:31:38,408] {pod_manager.py:197} INFO - subdirectory for the multiple source form of the cp command.
    ####################

    ### doesn't work ###
    # cmds=[&quot;gsutil&quot;, &quot;cp&quot;, &quot;/data/splitter-output\*.csv&quot;, &quot;gs://my_bucket/data ;&quot;, &quot;gsutil&quot;, &quot;rm&quot;, &quot;-r&quot;, &quot;/input&quot;],
    # [2022-01-27 09:34:06,865] {pod_manager.py:197} INFO - CommandException: Destination URL must name a directory, bucket, or bucket
    # [2022-01-27 09:34:06,866] {pod_manager.py:197} INFO - subdirectory for the multiple source form of the cp command.
    ####################

    ### only performs the first command - only copying ###
    # cmds=[&quot;bash&quot;, &quot;-cx&quot;],
    # arguments=[&quot;gsutil cp /data/splitter-output\*.csv gs://my_bucket/data&quot;, &quot;gsutil rm -r /input&quot;],
    # [2022-01-27 09:36:09,164] {pod_manager.py:197} INFO - + gsutil cp '/data/splitter-output*.csv' gs://my_bucket/data
    # [2022-01-27 09:36:11,200] {pod_manager.py:197} INFO - Copying file:///data/splitter-output\Coherence Results-26-Jan-2022-1025Part1.csv [Content-Type=text/csv]...
    # [2022-01-27 09:36:11,300] {pod_manager.py:197} INFO - / [0 files][    0.0 B/ 93.0 KiB]
    #                                                       / [1 files][ 93.0 KiB/ 93.0 KiB]
    # [2022-01-27 09:36:11,302] {pod_manager.py:197} INFO - Operation completed over 1 objects/93.0 KiB.
    # [2022-01-27 09:36:12,317] {kubernetes_pod.py:459} INFO - Deleting pod: multi-coher-post-operations.d66b4c91c9024bd289171c4d3ce35fdd
    ####################

    volumes=[
        Volume(
            name=&quot;nfs-pvc&quot;,
            configs={
                &quot;persistentVolumeClaim&quot;: {&quot;claimName&quot;: &quot;nfs-pvc&quot;}
            },
        )
    ],
    volume_mounts=[
        VolumeMount(
            name=&quot;nfs-pvc&quot;,
            mount_path=&quot;/data/&quot;,
            sub_path=None,
            read_only=False,
        )
    ],
)
</code></pre>
<p>I found a technique for running multiple commands. First I found the relation between the KubernetesPodOperator's <code>cmds</code> and <code>arguments</code> properties and Docker's ENTRYPOINT and CMD.</p> <p>The KubernetesPodOperator's <code>cmds</code> overwrites the image's original ENTRYPOINT, and the KubernetesPodOperator's <code>arguments</code> is equivalent to Docker's CMD.</p> <p>And so, in order to run multiple commands from the KubernetesPodOperator, I used the following syntax: I set the KubernetesPodOperator's <code>cmds</code> to run bash with <code>-c</code>:</p> <pre><code>cmds=[&quot;/bin/bash&quot;, &quot;-c&quot;],
</code></pre> <p>And I set the KubernetesPodOperator's <code>arguments</code> to run two echo commands separated by <code>&amp;&amp;</code>:</p> <pre><code>arguments=[&quot;echo hello &amp;&amp; echo goodbye&quot;],
</code></pre> <p>So my KubernetesPodOperator looks like so:</p> <pre><code>stajoverflow_test = KubernetesPodOperator(
    task_id=&quot;stajoverflow_test&quot;,
    name=&quot;stajoverflow_test&quot;,
    namespace=&quot;default&quot;,
    image=&quot;google/cloud-sdk:alpine&quot;,
    cmds=[&quot;/bin/bash&quot;, &quot;-c&quot;],
    arguments=[&quot;echo hello &amp;&amp; echo goodbye&quot;],
)
</code></pre>
<p>I have a GKE cluster with one node pool and two nodes in it: one with node affinity to accept only production pods, and the other for development and testing pods. For cost reasons, I want to configure something like a CronJob on the dev/test node so I can spend less money, but I don't know if that's possible.</p>
<p>Yes, you can add another node pool named <code>test</code> so that you have two node pools: one for development and one for production. You can also turn on <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#overview" rel="nofollow noreferrer">autoscaling</a> in your development pool. This GKE feature automatically resizes the node pool based on the demanded workload, which saves you money while the pool is idle, and you can set a maximum size to limit how far it scales up in case your workload increases the demand.</p> <p>Once you have configured the production pool, you can create the new test node pool with a fixed size of one node.</p> <p>Then, you can use a node <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">selector</a> in your pods to make sure they run in the intended node pool. And you could use an <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">anti-affinity</a> rule to ensure that two of your pods cannot be scheduled on the same node.</p>
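If the goal is specifically to save money outside working hours, one sketch is to resize the test pool to zero nodes with <code>gcloud</code> on a schedule (for example from Cloud Scheduler, a CI job, or a CronJob with sufficient permissions) and scale it back up in the morning. The cluster, pool, and zone names below are placeholders:

```shell
# Scale the test/dev node pool down to zero at night (all names are placeholders)
gcloud container clusters resize my-cluster \
  --node-pool test-pool --num-nodes 0 --zone us-central1-a --quiet

# ...and back to one node in the morning
gcloud container clusters resize my-cluster \
  --node-pool test-pool --num-nodes 1 --zone us-central1-a --quiet
```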
<p>I am trying to get the ingress EXTERNAL-IP in k8s. Is there any way to get the details from a Terraform data block, like using <code>data &quot;azurerm_kubernetes_cluster&quot;</code> or something?</p>
<p>You can create the public IP in advance with Terraform and assign this IP to your ingress service.</p> <p>YAML:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: &lt;YOUR_STATIC_IP&gt;
  type: LoadBalancer
</code></pre> <p>The same as Terraform code:</p> <pre><code>resource &quot;kubernetes_service&quot; &quot;ingress_nginx&quot; {
  metadata {
    name = &quot;ingress-nginx-controller&quot;
    annotations = {
      &quot;service.beta.kubernetes.io/azure-load-balancer-resource-group&quot; = azurerm_resource_group.YOUR_RG.name
    }
  }

  spec {
    selector = {
      app = &lt;PLACEHOLDER&gt;
    }
    port {
      port        = &lt;PLACEHOLDER&gt;
      target_port = &lt;PLACEHOLDER&gt;
    }
    type             = &quot;LoadBalancer&quot;
    load_balancer_ip = azurerm_public_ip.YOUR_IP.ip_address
  }
}
</code></pre>
<p>I am following <a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-control/#access-an-external-https-service" rel="nofollow noreferrer">this guide</a>.</p> <p>Ingress requests are getting logged. Egress traffic control is working as expected, except I am unable to log egress HTTP requests. What is missing?</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: myapp
spec:
  workloadSelector:
    labels:
      app: myapp
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
  egress:
  - hosts:
    - default/*.example.com
</code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example
spec:
  location: MESH_EXTERNAL
  resolution: NONE
  hosts:
  - '*.example.com'
  ports:
  - name: https
    protocol: TLS
    number: 443
</code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: envoy
</code></pre> <p>Kubernetes 1.22.2, Istio 1.11.4</p>
<p>For ingress traffic logging I am using an <code>EnvoyFilter</code> to set the log format, and it works without any additional configuration. In the egress case, I had to set <code>accessLogFile: /dev/stdout</code>.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: config
  namespace: istio-system
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
</code></pre>
<p>I have a huge patch file that I want to apply to specific overlays. I usually patch files under overlays, as it is supposed to be done. But the file is the same and I do not want to copy it to each overlay. If I could keep my patch file <code>app-new-manifest.yaml</code> under base and patch it from an overlay with a single line in <code>kustomization.yaml</code>, it would be awesome.</p> <pre><code>β”œβ”€β”€ base
β”‚   β”œβ”€β”€ app-new-manifest.yaml  # I am trying to patch this
β”‚   β”œβ”€β”€ kustomization.yaml
β”‚   └── app
β”‚       β”œβ”€β”€ app.yaml
β”‚       └── kustomization.yaml
└── overlay
    β”œβ”€β”€ environment1
    β”‚   └── kustomization.yaml  # I want to patch app-new-manifest.yaml in base
    β”œβ”€β”€ environment2
    β”‚   └── kustomization.yaml  # No patch. app.yaml will be as is
    └── environment3
        └── kustomization.yaml  # I want to patch app-new-manifest.yaml in base
</code></pre> <p>When I'm trying to do so, I get this error:</p> <pre><code>'/base/app/app-new-manifest.yaml' is not in or below '/overlays/environment1'
</code></pre> <p>Which means that when you patch, the patch file has to be located under the overlay, not base. Is there any workaround to do this? Because copying the same file to each environment does not make sense to me.</p> <p>Any ideas around this will be highly appreciated, thanks!</p> <p>Edit:</p> <p>Add /base/app/kustomization.yaml</p> <pre><code>resources:
- app.yaml
</code></pre> <p>Add /overlays/environment1/kustomization.yaml</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base/app

patchesStrategicMerge:
- ../../base/app/app-new-manifest.yaml # Patch new manifest
</code></pre> <p>kustomize version:</p> <pre><code>{Version:kustomize/v4.2.0 GitCommit:d53a2ad45d04b0264bcee9e19879437d851cb778 BuildDate:2021-07-01T00:44:28+01:00 GoOs:darwin GoArch:amd64}
</code></pre>
<p>You can't include a file that is outside of your current directory, but you <em>can</em> include another directory that has a <code>kustomization.yaml</code> file. So organize your layout like this:</p> <pre><code>.
β”œβ”€β”€ base
└── overlay
    β”œβ”€β”€ patched_base
    β”œβ”€β”€ environment1
    β”œβ”€β”€ environment2
    └── environment3
</code></pre> <p>In <code>overlay/patched_base</code>, place your patch file and a kustomization file like:</p> <pre><code>resources:
- ../../base

patchesStrategicMerge:
- app-new-manifest.yaml
</code></pre> <p>In <code>overlay/environment1</code> and <code>overlay/environment3</code>, you have:</p> <pre><code>resources:
- ../patched_base
</code></pre> <p>Whereas in <code>overlay/environment2</code>, you have:</p> <pre><code>resources:
- ../../base
</code></pre> <p>I think this solves all your requirements:</p> <ul> <li>You only need a single instance of the patch</li> <li>You can choose to use the patch or not from each individual overlay</li> </ul>
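To sanity-check the wiring, you can render each overlay and diff the outputs. This assumes the <code>kustomize</code> CLI is installed and you run it from the repository root of the layout above:

```shell
# Render the patched and unpatched variants (paths from the layout above)
kustomize build overlay/environment1 > /tmp/env1.yaml   # patched
kustomize build overlay/environment2 > /tmp/env2.yaml   # unpatched

# The diff should contain exactly the changes coming from app-new-manifest.yaml
diff /tmp/env1.yaml /tmp/env2.yaml
```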
<p>We have <a href="https://github.com/jonashackt/tekton-argocd-eks" rel="nofollow noreferrer">a full-blown setup using AWS EKS with Tekton</a> installed and want to use ArgoCD for application deployment.</p> <p><a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="nofollow noreferrer">As the docs state</a> we installed ArgoCD on EKS in GitHub Actions with:</p> <pre><code>      - name: Install ArgoCD
        run: |
          echo &quot;--- Create argo namespace and install it&quot;
          kubectl create namespace argocd --dry-run=client -o yaml | kubectl apply -f -
          kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre> <p>We also exposed the ArgoCD server (incl. dashboard) <a href="https://argo-cd.readthedocs.io/en/stable/getting_started/#3-access-the-argo-cd-api-server" rel="nofollow noreferrer">as the docs told us</a>:</p> <pre><code>      - name: Expose ArgoCD Dashboard
        run: |
          echo &quot;--- Expose ArgoCD Dashboard via K8s Service&quot;
          kubectl patch svc argocd-server -n argocd -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}'
          echo &quot;--- Wait until Loadbalancer url is present (see https://stackoverflow.com/a/70108500/4964553)&quot;
          until kubectl get service/argocd-server -n argocd --output=jsonpath='{.status.loadBalancer}' | grep &quot;ingress&quot;; do : ; done
</code></pre> <p>Finally we installed the <code>argocd</code> CLI with brew:</p> <pre><code>          echo &quot;--- Install ArgoCD CLI&quot;
          brew install argocd
</code></pre> <p>Now how can we do an <code>argocd login</code> with GitHub Actions (without human interaction)? The <code>argocd login</code> command wants a username and password...</p>
<p><a href="https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli" rel="nofollow noreferrer">The same docs tell us how to extract the password</a> for argo with:</p> <pre><code>kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d; echo </code></pre> <p>Obtaining the ArgoCD server's <code>hostname</code> is also no big deal using:</p> <pre><code>kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}' </code></pre> <p>And as the <code>argocd login</code> command has the parameters <code>--username</code> and <code>--password</code>, <strong>we can craft our login command like this</strong>:</p> <pre><code>argocd login $(kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}') --username admin --password $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d; echo) --insecure </code></pre> <p>Mind the <code>--insecure</code> to prevent the argo CLI from asking things like <code>WARNING: server certificate had error: x509: certificate is valid for localhost, argocd-server, argocd-server.argocd, argocd-server.argocd.svc, argocd-server.argocd.svc.cluster.local, not a5f715808162c48c1af54069ba37db0e-1371850981.eu-central-1.elb.amazonaws.com. 
Proceed insecurely (y/n)?</code>.</p> <p>The successful login should look somehow like this in the GitHub Actions UI (see <a href="https://github.com/jonashackt/tekton-argocd-eks/runs/5105912670?check_suite_focus=true" rel="nofollow noreferrer">a full log here</a>):</p> <pre><code>'admin:login' logged in successfully
Context 'a5f715808162c48c1af54069ba37db0e-1371850981.eu-central-1.elb.amazonaws.com' updated
</code></pre> <p>Now your GitHub Actions workflow should be able to interact with the ArgoCD server.</p> <h2>Prevent error <code>FATA[0000] dial tcp: lookup a965bfb530e8449f5a355f221b2fd107-598531793.eu-central-1.elb.amazonaws.com on 8.8.8.8:53: no such host</code></h2> <p>This error arises if the <code>argocd-server</code> Kubernetes service is freshly installed right before the <code>argocd login</code> command is run. Then the <code>argocd login</code> command fails for some time until it finally works correctly.</p> <p>Assuming some DNS propagation issues, we can prevent this error from breaking our CI pipeline by wrapping our <code>argocd login</code> command into an <code>until</code> loop, as already done <a href="https://stackoverflow.com/a/70108997/4964553">in this answer</a>.
The full command will then look like this:</p> <pre class="lang-sh prettyprint-override"><code>until argocd login $(kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}') --username admin --password $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d; echo) --insecure; do : ; done
</code></pre> <p>In GitHub Actions this will then look somehow like this:</p> <pre><code>--- Login argocd CLI - now wrapped in until to prevent FATA[0000] dial tcp: lookup 12345.eu-central-1.elb.amazonaws.com on 8.8.8.8:53: no such host
time=&quot;2022-02-21T12:57:32Z&quot; level=fatal msg=&quot;dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host&quot;
time=&quot;2022-02-21T12:57:35Z&quot; level=fatal msg=&quot;dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host&quot;
time=&quot;2022-02-21T12:57:37Z&quot; level=fatal msg=&quot;dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host&quot;
[...]
time=&quot;2022-02-21T12:58:27Z&quot; level=fatal msg=&quot;dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host&quot;
time=&quot;2022-02-21T12:58:30Z&quot; level=fatal msg=&quot;dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host&quot;
time=&quot;2022-02-21T12:58:32Z&quot; level=fatal msg=&quot;dial tcp: lookup a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host&quot;
'admin:login' logged in successfully
Context 'a071bed7e9ea14747951b04360133141-459093397.eu-central-1.elb.amazonaws.com' updated
</code></pre> <p><a href="https://github.com/jonashackt/tekton-argocd-eks/runs/5274400263?check_suite_focus=true" rel="nofollow noreferrer">Here's also a log</a>.</p>
<p>I am remote debugging my Java program in Kubernetes (v1.15.2) using kubectl port-forward like this:</p> <pre><code>kubectl port-forward soa-report-analysis 5018:5018 -n dabai-fat </code></pre> <p>I can use IntelliJ IDEA to connect to localhost port 5018 and remotely debug my pod in a Kubernetes cluster in a remote datacenter. The problem is that every time a pod is upgraded I must change the pod name and start debugging again. Is there any way to keep a stable channel for debugging?</p>
<p>I can suggest that anyone looking for a way to debug Java (and Go, NodeJS, Python, .NET Core) applications in Kubernetes take a look at <strong>skaffold</strong>. <br/> It is a simple CLI tool that uses the build and deploy configuration you already work with. There is no need for additional installation in the cluster, modification of the existing deployment configuration, etc.<br/> Install the CLI: <a href="https://skaffold.dev/docs/install/" rel="nofollow noreferrer">https://skaffold.dev/docs/install/</a><br/> Open your project, and try:</p> <pre><code>skaffold init </code></pre> <p>This will make skaffold create</p> <p><strong>skaffold.yaml</strong></p> <p>(the only config file skaffold needs)</p> <p>And then</p> <pre><code>skaffold debug </code></pre> <p>This will use your existing build and deploy config to build a container and deploy it. If needed, the necessary debug arguments will be injected into the container, and port forwarding will start automatically.</p> <p>For more info look at: <a href="https://skaffold.dev/docs/workflows/debug/" rel="nofollow noreferrer">https://skaffold.dev/docs/workflows/debug/</a></p> <p>This provides a consistent way to debug your application without having to keep track of the current pod or deployment state.</p>
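<p>For reference, <code>skaffold init</code> generates something along these lines (the image name and manifest path below are placeholders; yours will reflect your own project):</p>

```yaml
apiVersion: skaffold/v2beta26
kind: Config
build:
  artifacts:
    # placeholder image name - skaffold detects this from your Dockerfile
    - image: my-registry/soa-report-analysis
deploy:
  kubectl:
    manifests:
      # placeholder path to your existing Kubernetes manifests
      - k8s/*.yaml
```

<p>With this file present, <code>skaffold debug</code> rebuilds and redeploys on every change, so no manual <code>kubectl port-forward</code> to a specific pod name is needed.</p>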
<p>I have a deployment of my backend app which has 6 replicas. In the code of the app, I have a node.js cronjob running every minute. But the problem is that because I have 6 replicas, it runs 6 times in parallel. I want to send an env variable to exactly one Pod in the deployment to ensure that only 1 Pod performs the cronjob. Then in the app code I will check this using <code>process.env.SHOULD_PROCESS_CRONJOBS == 1</code>.</p> <p>If this is not the right way to achieve what I need, can you tell me how to handle cronjobs using nodejs in a distributed environment like K8s?</p>
<p>Let Kubernetes schedule a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> in addition to the application</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: app-cron spec: schedule: &quot;* * * * *&quot; jobTemplate: spec: template: spec: containers: - name: my-app-cron image: same-as-app imagePullPolicy: IfNotPresent command: - /bin/sh - -c - date; echo Hello from scheduled job restartPolicy: OnFailure </code></pre> <p>If you don't want Kubernetes supervising the cron scheduling, another option is to split the application into two Deployments: the existing one serving traffic, and a second one with a single replica whose only job is to run the cron. Only that single-replica Deployment gets the environment variable:</p> <pre><code> env: - name: SHOULD_PROCESS_CRONJOBS value: &quot;1&quot; </code></pre>
<p>In Kubernetes, are <code>podAffinity</code> and <code>podAntiAffinity</code> weights compared to each other? Or independently? What about <code>podAffinity</code> and <code>nodeAffinity</code>? Would <code>podAntiAffinity</code> outweight the <code>podAffinity</code> in the below example? And what if <code>nodeAffinity</code> was added to the mix as well.</p> <pre><code> affinity: podAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 50 podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - test-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: app.kubernetes.io/name operator: In values: - test-2 topologyKey: kubernetes.io/hostname </code></pre>
<p>All affinity settings for both pods and nodes are weighed the same way if you don't overwrite the default individual weight of 1.</p> <p>During scheduling, the number of pods on a node that match an affinity term is multiplied by the weight of that term.</p> <pre><code>pod a has an affinity to b pod a has an anti-affinity to c new pod of type &quot;a&quot; to be scheduled --- node1 runs 3 &quot;b&quot; pods and 1 &quot;c&quot; pod --&gt; affinity score: 2 (3 &quot;b&quot; pods - 1 &quot;c&quot; pod) node2 runs 1 &quot;b&quot; pod --&gt; affinity score: 1 (1 &quot;b&quot; pod) </code></pre> <p>Based on the affinity calculation the pod will be scheduled on node1 despite it already running pod &quot;c&quot;, as 3 &quot;b&quot; pods outweigh 1 &quot;c&quot; pod.</p> <p>In your example the anti-affinity would outweigh the affinity settings if there is a 1:1 ratio of <code>test-1</code> to <code>test-2</code> on a node. With a 2:1 ratio of <code>test-1</code> to <code>test-2</code> on a node the (anti-)affinities would cancel each other out.</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/a5489431cfc0598dad421fccd2d713f84bf520bd/pkg/scheduler/framework/plugins/interpodaffinity/scoring.go#L79" rel="nofollow noreferrer">Source</a></p>
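<p>The calculation above can be sketched in a few lines of Python. This is a simplified model of the inter-pod affinity scoring, not the actual kube-scheduler code; the helper name and data shapes are made up for illustration:</p>

```python
def affinity_score(pods_on_node, affinities, anti_affinities):
    """Simplified per-node inter-pod affinity score.

    pods_on_node: dict mapping a pod label to its count on this node
    affinities / anti_affinities: dict mapping a pod label to the term's weight
    """
    score = 0
    # each matching pod adds the affinity term's weight...
    for label, weight in affinities.items():
        score += weight * pods_on_node.get(label, 0)
    # ...and each matching pod subtracts the anti-affinity term's weight
    for label, weight in anti_affinities.items():
        score -= weight * pods_on_node.get(label, 0)
    return score

# The example from above, with the default weight of 1:
print(affinity_score({"b": 3, "c": 1}, {"b": 1}, {"c": 1}))  # node1 -> 2
print(affinity_score({"b": 1}, {"b": 1}, {"c": 1}))          # node2 -> 1

# The question's weights: affinity to test-1 (50), anti-affinity to test-2 (100).
# With a 2:1 ratio of test-1 to test-2 they cancel out:
print(affinity_score({"test-1": 2, "test-2": 1}, {"test-1": 50}, {"test-2": 100}))  # 0
```

<p>The node with the highest score wins, which is why 2 <code>test-1</code> pods (2 × 50) exactly offset 1 <code>test-2</code> pod (1 × 100).</p>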
<p>How can I make a flask session work across multiple instances of containers using Kubernetes? Or does Kubernetes always maintain the same container for a given session?</p>
<p>Default Flask sessions are stored client-side (in the browser) as a cookie and cryptographically signed to prevent tampering. Every request to your Flask application is accompanied by this cookie. Therefore, if all running containers have the same app (at least the same secret key used for signing), then they should all have access to the same session data.</p> <p>Note:</p> <ul> <li>This is cryptographically signed, but it is not encrypted, so don't store sensitive information in the session</li> <li>Flask-Session can be installed for server-side session support</li> </ul>
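<p>Conceptually, this works across replicas because the session cookie is self-contained and any replica holding the same secret key can verify it. A stdlib-only sketch of the idea follows (Flask itself uses the <code>itsdangerous</code> library with a different cookie format, but the principle is the same — the key point is that <code>SECRET_KEY</code> must be identical in every container, e.g. injected from a Kubernetes Secret):</p>

```python
import hashlib
import hmac
import json

# Must be the same in every replica, e.g. mounted from a Kubernetes Secret
SECRET_KEY = b"same-key-on-every-replica"

def sign_session(data: dict) -> str:
    """Serialize session data and append an HMAC signature."""
    payload = json.dumps(data, sort_keys=True)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_session(cookie: str) -> dict:
    """Recompute the signature and reject tampered cookies."""
    payload, sig = cookie.rsplit("|", 1)
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("session cookie was tampered with")
    return json.loads(payload)

# A cookie signed by replica 1 is readable by replica 2 with the same key
cookie = sign_session({"user_id": 42})
print(verify_session(cookie))  # {'user_id': 42}
```

<p>Note that the payload is readable by anyone (it is only signed, not encrypted), which is exactly why sensitive data should not go into a default Flask session.</p>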
<p>I love elastic search so on my new project I have been trying to make it work on Kubernetes and skaffold</p> <p>this is the yaml file I wrote:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: eks-depl spec: replicas: 1 selector: matchLabels: app: eks template: metadata: labels: app: eks spec: containers: - name: eks image: elasticsearch:7.17.0 --- apiVersion: v1 kind: Service metadata: name: eks-srv spec: selector: app: eks ports: - name: db protocol: TCP port: 9200 targetPort: 9200 - name: monitoring protocol: TCP port: 9300 targetPort: 9300 </code></pre> <p>After I run skaffold dev it shows to be working by Kubernetes but after a few seconds it crashes and goes down.</p> <p>I can't understand what I am doing wrong.</p> <p><a href="https://i.stack.imgur.com/oIioq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oIioq.png" alt="This is where the problem seems to occur" /></a></p> <p>After I have updated my config files as Mr. Harsh Manvar it worked like a charm but currently I am facing another issue. The client side says the following....</p> <p>Btw I am using ElasticSearch version 7.11.1 and Client side module &quot;@elastic/elasticsearch^7.11.1&quot;</p> <p><a href="https://i.stack.imgur.com/Ju40F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ju40F.png" alt="enter image description here" /></a></p>
<p>Here is an example YAML file you should consider using if you are planning to run a single-node Elasticsearch cluster on Kubernetes</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: elasticsearch component: elasticsearch release: elasticsearch name: elasticsearch namespace: default spec: podManagementPolicy: OrderedReady replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: elasticsearch component: elasticsearch release: elasticsearch serviceName: elasticsearch template: metadata: labels: app: elasticsearch component: elasticsearch release: elasticsearch spec: containers: - env: - name: cluster.name value: es_cluster - name: ELASTIC_PASSWORD value: xyz-xyz - name: discovery.type value: single-node - name: path.repo value: backup/es-backup - name: ES_JAVA_OPTS value: -Xms512m -Xmx512m - name: bootstrap.memory_lock value: &quot;false&quot; - name: xpack.security.enabled value: &quot;true&quot; image: elasticsearch:7.3.2 imagePullPolicy: IfNotPresent name: elasticsearch ports: - containerPort: 9200 name: http protocol: TCP - containerPort: 9300 name: transport protocol: TCP resources: limits: cpu: 451m memory: 1250Mi requests: cpu: 250m memory: 1000Mi securityContext: privileged: true runAsUser: 1000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /usr/share/elasticsearch/data name: elasticsearch-data dnsPolicy: ClusterFirst initContainers: - command: - sh - -c - chown -R 1000:1000 /usr/share/elasticsearch/data &amp;&amp; sysctl -w vm.max_map_count=262144 &amp;&amp; chmod 777 /usr/share/elasticsearch/data &amp;&amp; chmod g+rwx /usr/share/elasticsearch/data &amp;&amp; chgrp 1000 /usr/share/elasticsearch/data image: busybox:1.29.2 imagePullPolicy: IfNotPresent name: set-dir-owner resources: {} securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /usr/share/elasticsearch/data
name: elasticsearch-data restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 10 updateStrategy: type: OnDelete volumeClaimTemplates: - metadata: creationTimestamp: null name: elasticsearch-data spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi volumeMode: Filesystem </code></pre> <p>I would also recommend checking out the Elasticsearch Helm charts:</p> <pre><code>1. https://github.com/elastic/helm-charts/tree/master/elasticsearch 2. https://github.com/helm/charts/tree/master/stable/elasticsearch </code></pre> <p>You can expose the above StatefulSet using a Service and use it further with the application.</p>
<p>I'm having issues getting the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a> Helm Chart to install via Terraform with Minikube, yet I'm able to install it successfully via the command line. Here is my vanilla Terraform code -</p> <pre><code>provider &quot;kubernetes&quot; { host = &quot;https://127.0.0.1:63191&quot; client_certificate = base64decode(var.client_certificate) client_key = base64decode(var.client_key) cluster_ca_certificate = base64decode(var.cluster_ca_certificate) } provider &quot;helm&quot; { kubernetes { } } resource &quot;helm_release&quot; &quot;nginx&quot; { name = &quot;beta-nginx&quot; repository = &quot;https://kubernetes.github.io/ingress-nginx&quot; chart = &quot;ingress-nginx&quot; namespace = &quot;default&quot; } </code></pre> <p>I get the following logs when I apply the Terraform code above -</p> <pre><code>helm_release.nginx: Still creating... [4m31s elapsed] 2022-01-26T14:32:49.623-0600 [TRACE] dag/walk: vertex &quot;root&quot; is waiting for &quot;provider[\&quot;registry.terraform.io/hashicorp/helm\&quot;] (close)&quot; 2022-01-26T14:32:49.624-0600 [TRACE] dag/walk: vertex &quot;meta.count-boundary (EachMode fixup)&quot; is waiting for &quot;helm_release.nginx&quot; 2022-01-26T14:32:49.624-0600 [TRACE] dag/walk: vertex &quot;provider[\&quot;registry.terraform.io/hashicorp/helm\&quot;] (close)&quot; is waiting for &quot;helm_release.nginx&quot; 2022-01-26T14:32:51.299-0600 [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2022/01/26 14:32:51 [DEBUG] Service does not have load balancer ingress IP address: default/beta-nginx-ingress-nginx-controller: timestamp=2022-01-26T14:32:51.299-0600 2022-01-26T14:32:53.302-0600 [INFO] provider.terraform-provider-helm_v2.4.1_x5: 2022/01/26 14:32:53 [DEBUG] Service does not have load balancer ingress IP address: default/beta-nginx-ingress-nginx-controller: timestamp=2022-01-26T14:32:53.302-0600 2022-01-26T14:32:54.626-0600 [TRACE] dag/walk: 
vertex &quot;provider[\&quot;registry.terraform.io/hashicorp/helm\&quot;] (close)&quot; is waiting for &quot;helm_release.nginx&quot; Warning: Helm release &quot;beta-nginx&quot; was created but has a failed status. Use the `helm` command to investigate the error, correct it, then run Terraform again. with helm_release.nginx, on main.tf line 21, in resource &quot;helm_release&quot; &quot;nginx&quot;: 21: resource &quot;helm_release&quot; &quot;nginx&quot; { Error: timed out waiting for the condition with helm_release.nginx, on main.tf line 21, in resource &quot;helm_release&quot; &quot;nginx&quot;: 21: resource &quot;helm_release&quot; &quot;nginx&quot; { </code></pre> <hr /> <p>When I try installing the Helm Chart via the command line <code>helm install beta-nginx ingress-nginx/ingress-nginx</code> it installs the chart no problem.</p> <p>Here are a few version numbers:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td>Terraform</td> <td>1.0.5</td> </tr> <tr> <td>Minikube</td> <td>1.25.1</td> </tr> <tr> <td>Kubernetes</td> <td>1.21.7</td> </tr> <tr> <td>Helm</td> <td>3.7.2</td> </tr> </tbody> </table> </div>
<p>This is because Terraform waits for LoadBalancer to get a public IP address, but this never happens, so the <code>Error: timed out waiting for the condition</code> error occurs:</p> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE beta-nginx-ingress-nginx-controller LoadBalancer &lt;PRIVATE_IP&gt; &lt;pending&gt; 80:30579/TCP,443:30909/TCP 7m32s </code></pre> <p>You can install <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> to get a load-balancer implementation or create a NodePort instead of LoadBalancer. I'll briefly demonstrate the second option.</p> <p>All you have to do is modify the <a href="https://github.com/kubernetes/ingress-nginx/blob/c1be3499eb98756af4d2f5a5d165e6ff11cceeb5/charts/ingress-nginx/values.yaml#L501" rel="nofollow noreferrer"><code>controller.service.type</code></a> value from the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml" rel="nofollow noreferrer">values.yaml</a> file:</p> <pre><code>$ cat beta-nginx.tf provider &quot;helm&quot; { kubernetes { config_path = &quot;~/.kube/config&quot; } } resource &quot;helm_release&quot; &quot;nginx&quot; { name = &quot;beta-nginx&quot; repository = &quot;https://kubernetes.github.io/ingress-nginx&quot; chart = &quot;ingress-nginx&quot; namespace = &quot;default&quot; set { name = &quot;controller.service.type&quot; value = &quot;NodePort&quot; } } $ terraform apply ... + set { + name = &quot;controller.service.type&quot; + value = &quot;NodePort&quot; } ... Apply complete! Resources: 1 added, 0 changed, 0 destroyed. $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE beta-nginx-ingress-nginx-controller NodePort &lt;PRIVATE_IP&gt; &lt;none&gt; 80:32410/TCP,443:31630/TCP 74s </code></pre> <p>As you can see above, the NodePort service has been created instead of the LoadBalancer.</p>
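<p>Alternatively, if you want to keep the <code>LoadBalancer</code> service type (for example because you plan to install MetalLB later), the Helm provider can be told not to block on readiness via the <code>helm_release</code> resource's <code>wait</code> argument. Note the trade-off: Terraform will then report the release as created even while pods or the load balancer are still pending. A sketch:</p>

```hcl
resource "helm_release" "nginx" {
  name       = "beta-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "default"

  # Don't wait for the LoadBalancer to receive an external IP,
  # which never happens on a bare Minikube cluster.
  wait = false
}
```

<p>With <code>wait = false</code> the apply finishes immediately, and the <code>EXTERNAL-IP</code> column will simply stay <code>&lt;pending&gt;</code> until a load-balancer implementation exists.</p>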
<p>We are using an ingress (<code>kubernetes_ingress.db_admin_ingress</code>) to expose the service (<code>kubernetes_service.db_admin</code>) of a deployment (<code>kubernetes_deployment.db_admin</code>) in Google Kubernetes Engine (GKE) with Terraform.</p> <p>When Terraform creates the ingress, a Level 7 Load Balancer is automatically created with a default health check:</p> <ul> <li>port: 80</li> <li>path: <code>/</code></li> <li>protocol: HTTP(S)</li> </ul> <p>Our deployment (<code>kubernetes_deployment.db_admin</code>) does not respond to the path <code>/</code> with a <code>200</code>, so the health check fails.</p> <p>How can we change the path in the health check configuration?</p> <pre class="lang-hcl prettyprint-override"><code>resource &quot;google_compute_managed_ssl_certificate&quot; &quot;db_admin_ssl_certificate&quot; { provider = google-beta name = &quot;db-admin-ssl-certificate&quot; managed { domains = [&quot;db.${var.domain}.&quot;] } } resource &quot;kubernetes_deployment&quot; &quot;db_admin&quot; { metadata { name = &quot;db-admin&quot; labels = { App = &quot;db-admin&quot; } } spec { replicas = 1 selector { match_labels = { App = &quot;db-admin&quot; } } template { metadata { labels = { App = &quot;db-admin&quot; } } spec { container { image = &quot;dpage/pgadmin4:2022-01-10-1&quot; name = &quot;db-admin&quot; env { name = &quot;PGADMIN_DEFAULT_EMAIL&quot; value = &quot;test@test.com&quot; } env { name = &quot;PGADMIN_DEFAULT_PASSWORD&quot; value = &quot;test&quot; } port { container_port = 80 } resources {} } } } } } resource &quot;kubernetes_service&quot; &quot;db_admin&quot; { metadata { name = &quot;db-admin&quot; } spec { selector = { App = kubernetes_deployment.db_admin.spec.0.template.0.metadata[0].labels.App } port { protocol = &quot;TCP&quot; port = 80 target_port = 80 } type = &quot;NodePort&quot; } } resource &quot;kubernetes_ingress&quot; &quot;db_admin_ingress&quot; { wait_for_load_balancer = true metadata { name = 
&quot;db-admin-ingress&quot; annotations = { &quot;ingress.gcp.kubernetes.io/pre-shared-cert&quot; = google_compute_managed_ssl_certificate.db_admin_ssl_certificate.name } } spec { rule { http { path { backend { service_name = &quot;db-admin&quot; service_port = 80 } path = &quot;/*&quot; } } } } } </code></pre>
<p>According to Google Kubernetes Engine (GKE) official documentation <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="nofollow noreferrer">here</a>, you are able to customize <code>ingress</code>/Level 7 Load Balancer health checks through either:</p> <ul> <li><p>the <code>readinessProbe</code> for the <code>container</code> within the <code>pod</code> your <code>ingress</code> is serving traffic to</p> <p><strong>Warning</strong>: this method comes with warnings <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#interpreted_hc" rel="nofollow noreferrer">here</a></p> </li> <li><p>a <code>backendconfig</code> resource</p> </li> </ul> <p>I would <strong>highly</strong> recommend creating a <code>backendconfig</code> resource.</p> <p>Unfortunately, the <code>kubernetes</code> Terraform provider does <strong>not</strong> seem to support the <code>backendconfig</code> resource based on <a href="https://github.com/hashicorp/terraform-provider-kubernetes/issues/764" rel="nofollow noreferrer">this</a> GitHub issue. 
This means that you can either:</p> <ul> <li>use the <code>kubernetes-alpha</code> provider (found <a href="https://registry.terraform.io/providers/hashicorp/kubernetes-alpha/0.6.0" rel="nofollow noreferrer">here</a>) to transcribe a YAML <code>backendconfig</code> manifest to HCL with the <code>manifest</code> argument for the only <code>kubernetes-alpha</code> resource: <code>kubernetes-manifest</code> (more on that <a href="https://registry.terraform.io/providers/hashicorp/kubernetes-alpha/latest/docs/resources/kubernetes_manifest" rel="nofollow noreferrer">here</a>)</li> <li>use an unofficial provider (such as <code>banzaicloud/k8s</code> found <a href="https://github.com/banzaicloud/terraform-provider-k8s" rel="nofollow noreferrer">here</a>)</li> <li>check the <code>backendconfig</code> manifest (as either JSON or YAML) into SCM</li> </ul> <p>A sample <code>backendconfig</code> YAML manifest:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: db-admin namespace: default spec: healthCheck: checkIntervalSec: 30 timeoutSec: 5 healthyThreshold: 1 unhealthyThreshold: 2 type: HTTP requestPath: /v1/some/path port: 80 </code></pre> <p><strong>Note</strong>: a <code>service</code> is needed to associate a <code>backendconfig</code> with an <code>ingress</code>/Level 7 Load Balancer:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: db-admin-ingress-backend-config labels: app: db-admin annotations: cloud.google.com/backend-config: '{&quot;ports&quot;: {&quot;80&quot;:&quot;db-admin&quot;}}' cloud.google.com/neg: '{&quot;ingress&quot;: true}' spec: type: NodePort selector: app: db-admin ports: - port: 80 protocol: TCP targetPort: 80 </code></pre> <p>You can learn more about the <code>backendconfig</code> resource and the <code>service</code> it requires <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#exercise" 
rel="nofollow noreferrer">here</a>.</p>
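<p>If you prefer to keep everything in Terraform, the <code>backendconfig</code> YAML above could be transcribed to HCL roughly like this. This is a sketch using the <code>kubernetes_manifest</code> resource (available in newer releases of the official <code>kubernetes</code> provider as well as the <code>kubernetes-alpha</code> provider mentioned above); the field values mirror the YAML sample, including the illustrative <code>/v1/some/path</code>:</p>

```hcl
resource "kubernetes_manifest" "db_admin_backend_config" {
  manifest = {
    apiVersion = "cloud.google.com/v1"
    kind       = "BackendConfig"
    metadata = {
      name      = "db-admin"
      namespace = "default"
    }
    spec = {
      healthCheck = {
        checkIntervalSec   = 30
        timeoutSec         = 5
        healthyThreshold   = 1
        unhealthyThreshold = 2
        type               = "HTTP"
        requestPath        = "/v1/some/path"
        port               = 80
      }
    }
  }
}
```

<p>The <code>cloud.google.com/backend-config</code> annotation from the Service example would then reference <code>db-admin</code> exactly as in the YAML version.</p>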
<p>I was looking for a method to upgrade the k8s version without downtime on Azure AKS and found this amazing blog post <a href="https://omichels.github.io/zerodowntime-aks.html" rel="nofollow noreferrer">https://omichels.github.io/zerodowntime-aks.html</a>, but I got an error right at the start.</p> <p>The k8s version currently running in my region is no longer available. When I tried to create a temporary nodepool I got the error below:</p> <pre><code>(AgentPoolK8sVersionNotSupported) Version 1.19.6 is not supported in this region. Please use [az aks get-versions] command to get the supported version list in this region. For more information, please check https://aka.ms/supported-version-list </code></pre> <p>What can I do to achieve a zero-downtime upgrade?</p>
<p>Here is how I upgraded without downtime, for your reference.</p> <ol> <li><p>Upgrade control plane only. (Can finish it on azure portal)<a href="https://i.stack.imgur.com/TcPX2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TcPX2.png" alt="enter image description here" /></a></p> </li> <li><p>Add a new Node pool. Now the version of new node pool is higher(same with control plane). Then add a label to it, e.g. <strong>nodePool=newNodePool</strong>.</p> </li> <li><p>Patch all application to the new node pool. (By nodeSelector)</p> <p><code>$ kubectl get deployment -n {namespace} -o name | xargs kubectl patch -p &quot;{\&quot;spec\&quot;:{\&quot;template\&quot;:{\&quot;spec\&quot;:{\&quot;nodeSelector\&quot;:{\&quot;nodePool\&quot;:\&quot;newNodePool\&quot;}}}}}&quot; -n {namespace}</code></p> </li> <li><p>Check the pods if are scheduled to the new node pool.</p> <p><code>$ kubectl get pods -owide</code></p> </li> <li><p>Delete the old node pool.</p> </li> </ol>
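<p>Between steps 4 and 5 it is also worth cordoning and draining the old nodes, so that any pods not covered by the <code>nodeSelector</code> patch are evicted gracefully before the pool is deleted. A sketch — the <code>agentpool</code> label value, cluster name and resource group here are examples you would replace with your own:</p>

```shell
# Prevent new pods from being scheduled on the old pool
kubectl get nodes -l agentpool=oldpool -o name | xargs -I {} kubectl cordon {}

# Evict remaining pods gracefully, respecting PodDisruptionBudgets
kubectl get nodes -l agentpool=oldpool -o name | \
  xargs -I {} kubectl drain {} --ignore-daemonsets --delete-emptydir-data

# Then delete the old node pool
az aks nodepool delete --cluster-name myAKSCluster \
  --resource-group myResourceGroup --name oldpool
```

<p>Draining first means workloads are rescheduled under controlled eviction rather than killed abruptly when the pool's VMs disappear.</p>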
<p>From our Tekton pipeline we want to use ArgoCD CLI to do a <code>argocd app create</code> and <code>argocd app sync</code> dynamically based on the app that is build. We created a new user <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/#create-new-user" rel="nofollow noreferrer">as described in the docs</a> by adding a <code>accounts.tekton: apiKey</code> to the <code>argocd-cm</code> ConfigMap:</p> <pre><code>kubectl patch configmap argocd-cm -n argocd -p '{&quot;data&quot;: {&quot;accounts.tekton&quot;: &quot;apiKey&quot;}}' </code></pre> <p>Then we created a token for the <code>tekton</code> user with:</p> <pre><code>argocd account generate-token --account tekton </code></pre> <p>With this token as the <code>password</code> and the <code>username</code> <code>tekton</code> we did the <code>argocd login</code> like</p> <pre><code>argocd login $(kubectl get service argocd-server -n argocd --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}') --username=tekton --password=&quot;$TOKEN&quot;; </code></pre> <p>Now from within our Tekton pipeline (but we guess that would be the same for every other CI, given the usage of a non-admin user) we get the following error if we run <code>argocd app create</code>:</p> <pre><code>$ argocd app create microservice-api-spring-boot --repo https://gitlab.com/jonashackt/microservice-api-spring-boot-config.git --path deployment --dest-server https://kubernetes.default.svc --dest-namespace default --revision argocd --sync-policy auto error rpc error: code = PermissionDenied desc = permission denied: applications, create, default/microservice-api-spring-boot, sub: tekton, iat: 2022-02-03T16:36:48Z </code></pre>
<p>The problem is mentioned in <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/#local-usersaccounts-v15" rel="nofollow noreferrer">Argo's useraccounts docs</a>:</p> <blockquote> <p>When you create local users, each of those users will need additional RBAC rules set up, otherwise they will fall back to the default policy specified by policy.default field of the argocd-rbac-cm ConfigMap.</p> </blockquote> <p>But these additional RBAC rules could be setup the simplest <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/projects/" rel="nofollow noreferrer">using ArgoCD <code>Projects</code></a>. And with such a <code>AppProject</code> you don't even need to create a user like <code>tekton</code> in the ConfigMap <code>argocd-cm</code>. ArgoCD projects have the ability to define <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#project-roles" rel="nofollow noreferrer">Project roles</a>:</p> <blockquote> <p>Projects include a feature called roles that enable automated access to a project's applications. These can be used to give a CI pipeline a restricted set of permissions. For example, a CI system may only be able to sync a single app (but not change its source or destination).</p> </blockquote> <p>There are 2 solutions how to configure the <code>AppProject</code>, role &amp; permissions incl. role token:</p> <ol> <li>using <code>argocd</code> CLI</li> <li>using a manifest YAML file</li> </ol> <h2>1.) Use <code>argocd</code> CLI to create <code>AppProject</code>, role &amp; permissions incl. 
role token</h2> <p>So let's get our hands dirty and create a ArgoCD <code>AppProject</code> using the <code>argocd</code> CLI called <code>apps2deploy</code>:</p> <pre><code>argocd proj create apps2deploy -d https://kubernetes.default.svc,default --src &quot;*&quot; </code></pre> <p>We create it with the <code>--src &quot;*&quot;</code> as a wildcard for any git repository (<a href="https://github.com/argoproj/argo-cd/issues/5382#issue-799715045" rel="nofollow noreferrer">as described here</a>).</p> <p>Now we create a Project <code>role</code> called <code>create-sync</code> via:</p> <pre><code>argocd proj role create apps2deploy create-sync --description &quot;project role to create and sync apps from a CI/CD pipeline&quot; </code></pre> <p>You can check the new role has been created with <code>argocd proj role list apps2deploy</code>.</p> <p>Then we need to create a token for the new Project role <code>create-sync</code>, which can be created via:</p> <pre><code>argocd proj role create-token apps2deploy create-sync </code></pre> <p>This token needs to be used for the <code>argocd login</code> command inside our Tekton / CI pipeline. There's also a <code>--token-only</code> parameter for the command, so we can create an environment variable via</p> <pre><code>ARGOCD_AUTH_TOKEN=$(argocd proj role create-token apps2deploy create-sync --token-only) </code></pre> <p>The <code>ARGOCD_AUTH_TOKEN</code> will be automatically used by <code>argo login</code>.</p> <p>Now we need to give permissions to the role, so it will be able to create and sync our application in ArgoCD from within Tekton or any other CI pipeline. 
<a href="https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#project-roles" rel="nofollow noreferrer">As described in the docs we therefore add policies to our roles</a> using the <code>argocd</code> CLI:</p> <pre class="lang-sh prettyprint-override"><code>argocd proj role add-policy apps2deploy create-sync --action get --permission allow --object &quot;*&quot; argocd proj role add-policy apps2deploy create-sync --action create --permission allow --object &quot;*&quot; argocd proj role add-policy apps2deploy create-sync --action sync --permission allow --object &quot;*&quot; argocd proj role add-policy apps2deploy create-sync --action update --permission allow --object &quot;*&quot; argocd proj role add-policy apps2deploy create-sync --action delete --permission allow --object &quot;*&quot; </code></pre> <p>Have a look on the role policies with <code>argocd proj role get apps2deploy create-sync</code>, which should look somehow like this:</p> <pre><code>$ argocd proj role get apps2deploy create-sync Role Name: create-sync Description: project role to create and sync apps from a CI/CD pipeline Policies: p, proj:apps2deploy:create-sync, projects, get, apps2deploy, allow p, proj:apps2deploy:create-sync, applications, get, apps2deploy/*, allow p, proj:apps2deploy:create-sync, applications, create, apps2deploy/*, allow p, proj:apps2deploy:create-sync, applications, update, apps2deploy/*, allow p, proj:apps2deploy:create-sync, applications, delete, apps2deploy/*, allow p, proj:apps2deploy:create-sync, applications, sync, apps2deploy/*, allow JWT Tokens: ID ISSUED-AT EXPIRES-AT 1644166189 2022-02-06T17:49:49+01:00 (2 hours ago) &lt;none&gt; </code></pre> <p>Finally we should have setup everything to do a successful <code>argocd app create</code>. 
All we need to do is to add the <code>--project apps2deploy</code> parameter:</p> <pre><code>argocd app create microservice-api-spring-boot --repo https://gitlab.com/jonashackt/microservice-api-spring-boot-config.git --path deployment --project apps2deploy --dest-server https://kubernetes.default.svc --dest-namespace default --revision argocd --sync-policy auto </code></pre> <h2>2.) Use manifest YAML to create <code>AppProject</code>, role &amp; permissions incl. role token</h2> <p>As all those CLI based steps in solution 1.) are quite many, we could also <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#configuring-rbac-with-projects" rel="nofollow noreferrer">using a manifest YAML file</a>. Here's an example <code>argocd-appproject-apps2deploy.yml</code> which configures exactly the same as in solution a):</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: AppProject metadata: name: apps2deploy namespace: argocd spec: destinations: - namespace: default server: https://kubernetes.default.svc sourceRepos: - '*' roles: - description: project role to create and sync apps from a CI/CD pipeline name: create-sync policies: - p, proj:apps2deploy:create-sync, applications, get, apps2deploy/*, allow - p, proj:apps2deploy:create-sync, applications, create, apps2deploy/*, allow - p, proj:apps2deploy:create-sync, applications, update, apps2deploy/*, allow - p, proj:apps2deploy:create-sync, applications, delete, apps2deploy/*, allow - p, proj:apps2deploy:create-sync, applications, sync, apps2deploy/*, allow </code></pre> <p>There are only 2 steps left to be able to do a successful <code>argocd app create</code> from within Tekton (or other CI pipeline). 
We need to <code>apply</code> the manifest with</p> <pre><code>kubectl apply -f argocd-appproject-apps2deploy.yml </code></pre> <p>And we need to create a role token, ideally assigning it directly to the <code>ARGOCD_AUTH_TOKEN</code> used by the <code>argocd login</code> command (which also needs to be run afterwards):</p> <pre><code>ARGOCD_AUTH_TOKEN=$(argocd proj role create-token apps2deploy create-sync --token-only) </code></pre> <p>The same <code>argocd app create</code> command as mentioned in solution 1.) should work now:</p> <pre><code>argocd app create microservice-api-spring-boot --repo https://gitlab.com/jonashackt/microservice-api-spring-boot-config.git --path deployment --project apps2deploy --dest-server https://kubernetes.default.svc --dest-namespace default --revision argocd --sync-policy auto </code></pre>
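<p>Putting it together, a CI step (Tekton or otherwise) could then look roughly like this — the server lookup and app parameters are the ones used throughout this example, and the role token is assumed to have been created beforehand and stored as a CI secret in <code>$TOKEN</code>:</p>

```shell
# Resolve the argocd-server LoadBalancer hostname
ARGOCD_SERVER=$(kubectl get service argocd-server -n argocd \
  --output=jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Project-role token, picked up automatically by the argocd CLI
export ARGOCD_AUTH_TOKEN="$TOKEN"

argocd login "$ARGOCD_SERVER" --insecure

argocd app create microservice-api-spring-boot \
  --repo https://gitlab.com/jonashackt/microservice-api-spring-boot-config.git \
  --path deployment \
  --project apps2deploy \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --revision argocd \
  --sync-policy auto
```

<p>Because the role only has <code>get</code>/<code>create</code>/<code>update</code>/<code>delete</code>/<code>sync</code> permissions inside the <code>apps2deploy</code> project, a leaked token cannot touch applications outside that project.</p>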
<p>I have this microservice based on Spring Boot 2.2.7 and it works very well when locally. However when I try to deploy in a Kubernetes cluster, it fails on loading the Config Map values at the startup and, consequently, everything fails from this point:</p> <p>The container logging is this:</p> <pre><code>2022-02-08 14:54:50.696 WARN [myservice,,,] 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: I/O error on GET request for &quot;http://localhost:8761/config/myservice/dev/master&quot;: Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused </code></pre> <p>I have a bootstrap.yml file like this:</p> <pre><code>jhipster: registry: password: admin spring: application: name: myservice profiles: # The commented value for `active` can be replaced with valid Spring profiles to load. # Otherwise, it will be filled in by maven when building the JAR file # Either way, it can be overridden by `--spring.profiles.active` value passed in the commandline or `-Dspring.profiles.active` set in `JAVA_OPTS` active: #spring.profiles.active# cloud: config: fail-fast: false # if not in &quot;prod&quot; profile, do not force to use Spring Cloud Config uri: http://admin:${jhipster.registry.password}@localhost:8761/config # name of the config server's property source (file.yml) that we want to use name: myservice profile: dev # profile(s) of the property source label: master # toggle to switch to a different version of the configuration as stored in git # it can be set to any label, branch or commit of the configuration source Git repository </code></pre> <p>This is my bootstrap-prod.yml</p> <pre><code>spring: application: name: myservice cloud: kubernetes: config: enabled: true # enables fetching configmap name: myservice profile: prod sources: - name: myservice enabled: true # enables all the sub-configurations </code></pre> <p>It's like when the application starts, Spring Boot just ignored my 
bootstrap-prod.yml.</p> <p>The container's SPRING_PROFILES_ACTIVE environment variable is set to &quot;prod,swagger,no-liquibase&quot;.</p>
<p>I finally found out the cause of this problem.</p> <p>It's necessary to include this dependency in the pom.xml:</p> <pre><code>&lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-kubernetes-config&lt;/artifactId&gt; &lt;version&gt;${spring-cloud-kubernetes.version}&lt;/version&gt; &lt;/dependency&gt; </code></pre> <p>In my case it was a little trickier because I already had this dependency, but it was inside a &quot;prod&quot; profile, so I just had to run Maven specifying that profile:</p> <pre><code>mvn package -P prod </code></pre>
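<p>For reference, a profile-scoped dependency like that looks roughly as follows in the pom.xml (the profile id and version property here are assumptions based on my setup):</p> <pre><code>&lt;profiles&gt;
  &lt;profile&gt;
    &lt;id&gt;prod&lt;/id&gt;
    &lt;dependencies&gt;
      &lt;dependency&gt;
        &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt;
        &lt;artifactId&gt;spring-cloud-kubernetes-config&lt;/artifactId&gt;
        &lt;version&gt;${spring-cloud-kubernetes.version}&lt;/version&gt;
      &lt;/dependency&gt;
    &lt;/dependencies&gt;
  &lt;/profile&gt;
&lt;/profiles&gt;
</code></pre> <p>A plain <code>mvn package</code> skips everything inside that profile, which is why the ConfigMap support was silently missing from the jar.</p>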
<p>I am a bit desperate and I hope someone can help me. A few months ago I installed the <a href="https://www.eclipse.org/packages/packages/cloud2edge/installation/" rel="nofollow noreferrer">eclipse cloud2edge</a> package on a kubernetes cluster by following the installation instructions, creating a persistentVolume and running the helm install command with these options.</p> <pre><code>helm install -n $NS --wait --timeout 15m $RELEASE eclipse-iot/cloud2edge --set hono.prometheus.createInstance=false --set hono.grafana.enabled=false --dependency-update --debug </code></pre> <p>The yaml of the persistentVolume is the following and I create it in the same namespace that I install the package.</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-device-registry spec: accessModes: - ReadWriteOnce capacity: storage: 1Mi hostPath: path: /mnt/ type: Directory </code></pre> <p>Everything works perfectly, all pods were ready and running, until the other day when the cluster crashed and some pods stopped working.</p> <p>The <strong>kubectl get pods -n $NS</strong> output is as follows:</p> <pre><code>NAME READY STATUS RESTARTS AGE ditto-mongodb-7b78b468fb-8kshj 1/1 Running 0 50m dt-adapter-amqp-vertx-6699ccf495-fc8nx 0/1 Running 0 50m dt-adapter-http-vertx-545564ff9f-gx5fp 0/1 Running 0 50m dt-adapter-mqtt-vertx-58c8975678-k5n49 0/1 Running 0 50m dt-artemis-6759fb6cb8-5rq8p 1/1 Running 1 50m dt-dispatch-router-5bc7586f76-57dwb 1/1 Running 0 50m dt-ditto-concierge-f6d5f6f9c-pfmcw 1/1 Running 0 50m dt-ditto-connectivity-f556db698-q89bw 1/1 Running 0 50m dt-ditto-gateway-589d8f5596-59c5b 1/1 Running 0 50m dt-ditto-nginx-897b5bc76-cx2dr 1/1 Running 0 50m dt-ditto-policies-75cb5c6557-j5zdg 1/1 Running 0 50m dt-ditto-swaggerui-6f6f989ccd-jkhsk 1/1 Running 0 50m dt-ditto-things-79ff869bc9-l9lct 1/1 Running 0 50m dt-ditto-thingssearch-58c5578bb9-pwd9k 1/1 Running 0 50m dt-service-auth-698d4cdfff-ch5wp 1/1 Running 0 50m dt-service-command-router-59d6556b5f-4nfcj 
0/1 Running 0 50m dt-service-device-registry-7cf75d794f-pk9ct 0/1 Running 0 50m </code></pre> <p>The pods that fail all have the same error when running <strong>kubectl describe pod POD_NAME -n $NS</strong>.</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 53m default-scheduler Successfully assigned digitaltwins/dt-service-command-router-59d6556b5f-4nfcj to node1 Normal Pulled 53m kubelet Container image &quot;index.docker.io/eclipse/hono-service-command-router:1.8.0&quot; already present on machine Normal Created 53m kubelet Created container service-command-router Normal Started 53m kubelet Started container service-command-router Warning Unhealthy 52m kubelet Readiness probe failed: Get &quot;https://10.244.1.89:8088/readiness&quot;: net/http: request canceled (Client.Timeout exceeded while awaiting headers) Warning Unhealthy 2m58s (x295 over 51m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <p>According to this, the readinessProbe fails. In the yaml definition of the affected deployments, the readinessProbe is defined:</p> <pre><code>readinessProbe: failureThreshold: 3 httpGet: path: /readiness port: health scheme: HTTPS initialDelaySeconds: 45 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 </code></pre> <p>I have tried increasing these values, increasing the delay to 600 and the timeout to 10. I have also tried uninstalling the package and installing it again, but nothing changes: the installation fails because the pods are never ready and the timeout pops up. I have also exposed port 8088 (health) and called /readiness with wget and the result is still 503. On the other hand, I have tested if livenessProbe works and it works fine. I have also tried resetting the cluster.
First I manually deleted everything in it and then used the following commands:</p> <pre><code>sudo kubeadm reset sudo iptables -F &amp;&amp; sudo iptables -t nat -F &amp;&amp; sudo iptables -t mangle -F &amp;&amp; sudo iptables -X sudo systemctl stop kubelet sudo systemctl stop docker sudo rm -rf /var/lib/cni/ sudo rm -rf /var/lib/kubelet/* sudo rm -rf /etc/cni/ sudo ifconfig cni0 down sudo ifconfig flannel.1 down sudo ifconfig docker0 down sudo ip link set cni0 down sudo brctl delbr cni0 sudo systemctl start docker sudo kubeadm init --apiserver-advertise-address=192.168.44.11 --pod-network-cidr=10.244.0.0/16 mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config kubectl --kubeconfig $HOME/.kube/config apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml </code></pre> <p>The cluster seems to work fine because the Eclipse Ditto part has no problem, it's just the Eclipse Hono part. 
I add a little more information in case it may be useful.</p> <p>The <strong>kubectl logs dt-service-command-router-b654c8dcb-s2g6t -n $NS</strong> output:</p> <pre><code>12:30:06.340 [vert.x-eventloop-thread-1] ERROR io.vertx.core.net.impl.NetServerImpl - Client from origin /10.244.1.101:44142 failed to connect over ssl: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown 12:30:06.756 [vert.x-eventloop-thread-1] ERROR io.vertx.core.net.impl.NetServerImpl - Client from origin /10.244.1.100:46550 failed to connect over ssl: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown 12:30:07.876 [vert.x-eventloop-thread-1] ERROR io.vertx.core.net.impl.NetServerImpl - Client from origin /10.244.1.102:40706 failed to connect over ssl: javax.net.ssl.SSLHandshakeException: Received fatal alert: certificate_unknown 12:30:08.315 [vert.x-eventloop-thread-1] DEBUG o.e.h.client.impl.HonoConnectionImpl - starting attempt [#258] to connect to server [dt-service-device-registry:5671, role: Device Registration] 12:30:08.315 [vert.x-eventloop-thread-1] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - OpenSSL [available: false, supports KeyManagerFactory: false] 12:30:08.315 [vert.x-eventloop-thread-1] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - using JDK's default SSL engine 12:30:08.315 [vert.x-eventloop-thread-1] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - enabling secure protocol [TLSv1.3] 12:30:08.315 [vert.x-eventloop-thread-1] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - enabling secure protocol [TLSv1.2] 12:30:08.315 [vert.x-eventloop-thread-1] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - connecting to AMQP 1.0 container [amqps://dt-service-device-registry:5671, role: Device Registration] 12:30:08.339 [vert.x-eventloop-thread-1] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqps://dt-service-device-registry:5671, role: Device Registration]: Failed to create SSL connection 12:30:08.339 
[vert.x-eventloop-thread-1] WARN o.e.h.client.impl.HonoConnectionImpl - attempt [#258] to connect to server [dt-service-device-registry:5671, role: Device Registration] failed javax.net.ssl.SSLHandshakeException: Failed to create SSL connection </code></pre> <p>The <strong>kubectl logs dt-adapter-amqp-vertx-74d69cbc44-7kmdq -n $NS</strong> output:</p> <pre><code>12:19:36.686 [vert.x-eventloop-thread-0] DEBUG o.e.h.client.impl.HonoConnectionImpl - starting attempt [#19] to connect to server [dt-service-device-registry:5671, role: Credentials] 12:19:36.686 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - OpenSSL [available: false, supports KeyManagerFactory: false] 12:19:36.686 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - using JDK's default SSL engine 12:19:36.686 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - enabling secure protocol [TLSv1.3] 12:19:36.686 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - enabling secure protocol [TLSv1.2] 12:19:36.686 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - connecting to AMQP 1.0 container [amqps://dt-service-device-registry:5671, role: Credentials] 12:19:36.711 [vert.x-eventloop-thread-0] DEBUG o.e.h.c.impl.ConnectionFactoryImpl - can't connect to AMQP 1.0 container [amqps://dt-service-device-registry:5671, role: Credentials]: Failed to create SSL connection 12:19:36.712 [vert.x-eventloop-thread-0] WARN o.e.h.client.impl.HonoConnectionImpl - attempt [#19] to connect to server [dt-service-device-registry:5671, role: Credentials] failed javax.net.ssl.SSLHandshakeException: Failed to create SSL connection </code></pre> <p>The <strong>kubectl version</strong> output is as follows:</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.3&quot;, GitCommit:&quot;1e11e4a2108024935ecfcb2912226cedeafd99df&quot;, GitTreeState:&quot;clean&quot;, 
BuildDate:&quot;2020-10-14T12:50:19Z&quot;, GoVersion:&quot;go1.15.2&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.16&quot;, GitCommit:&quot;e37e4ab4cc8dcda84f1344dda47a97bb1927d074&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-10-27T16:20:18Z&quot;, GoVersion:&quot;go1.15.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>Thanks in advance!</p>
<p>Based on the iconic <em>Failed to create SSL Connection</em> output in the logs, I assume that you have run into the dreaded <em>The demo certificates included in the Hono chart have expired</em> problem.</p> <p>The Cloud2Edge package chart is currently being updated (<a href="https://github.com/eclipse/packages/pull/337" rel="nofollow noreferrer">https://github.com/eclipse/packages/pull/337</a>) with the most recent versions of the Ditto and Hono charts (which include fresh certificates that are valid for two more years to come). As soon as that PR is merged and the Eclipse Packages chart repository has been rebuilt, you should be able to do a <code>helm repo update</code> and then (hopefully) successfully install the c2e package.</p>
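<p>If you want to confirm that this is really what you are hitting, you can check the validity dates of a certificate with <code>openssl</code>. The snippet below is a self-contained sketch of the check itself (file names and the demo certificate are placeholders):</p>

```shell
# Self-contained demo of the expiry check: create a throwaway certificate
# that is valid for one day, then test it the same way you would test the
# certificate pulled out of the Hono keys secret.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Print the notAfter date of the certificate
openssl x509 -in /tmp/demo-cert.pem -noout -enddate

# -checkend 0 exits 0 while the cert is still valid and 1 once it has expired
if openssl x509 -in /tmp/demo-cert.pem -noout -checkend 0 >/dev/null; then
  echo "still valid"
else
  echo "EXPIRED"
fi
```

<p>Against the live cluster you would first extract the served certificate from the corresponding secret (the exact secret and data key names depend on your release, so list the secrets in your namespace to find them), e.g. <code>kubectl get secret &lt;keys-secret&gt; -n $NS -o jsonpath='{.data.tls\.crt}' | base64 -d &gt; cert.pem</code>, and then run the same <code>openssl x509</code> commands on that file. An expired certificate would explain the <code>certificate_unknown</code> alerts in your logs.</p>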
<p>I have deployed a MySQL database (statefulset) on a Kubernetes zonal cluster, running as a service (GKE) in Google Cloud Platform.</p> <p>The zonal cluster consists of 3 instances of type e2-medium.</p> <p>The MySQL container cannot start due to the following error.</p> <pre><code>kubectl logs mysql-statefulset-0 2022-02-07 05:55:38+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.35-1debian10 started. find: '/var/lib/mysql/': Input/output error </code></pre> <p>Last seen events:</p> <pre><code>4m57s Warning Ext4Error gke-cluster-default-pool-rnfh kernel-monitor, gke-cluster-default-pool-rnfh EXT4-fs error (device sdb): __ext4_find_entry:1532: inode #2: comm mysqld: reading directory lblock 0 40d 8062 gke-cluster-default-pool-rnfh 3m22s Warning BackOff pod/mysql-statefulset-0 spec.containers{mysql} kubelet, gke-cluster-default-pool-rnfh Back-off restarting failed container </code></pre> <p>Nodes:</p> <pre><code>kubectl get node -owide gke-cluster-default-pool-ayqo Ready &lt;none&gt; 54d v1.21.5-gke.1302 So.Me.I.P So.Me.I.P Container-Optimized OS from Google 5.4.144+ containerd://1.4.8 gke-cluster-default-pool-rnfh Ready &lt;none&gt; 54d v1.21.5-gke.1302 So.Me.I.P So.Me.I.P Container-Optimized OS from Google 5.4.144+ containerd://1.4.8 gke-cluster-default-pool-sc3p Ready &lt;none&gt; 54d v1.21.5-gke.1302 So.Me.I.P So.Me.I.P Container-Optimized OS from Google 5.4.144+ containerd://1.4.8 </code></pre> <p>I also noticed that the rnfh node is out of memory.</p> <pre><code>kubectl top node NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-cluster-default-pool-ayqo 117m 12% 992Mi 35% gke-cluster-default-pool-rnfh 180m 19% 2953Mi 104% gke-cluster-default-pool-sc3p 179m 19% 1488Mi 52% </code></pre> <p>MySQL manifest:</p> <pre><code># HEADLESS SERVICE apiVersion: v1 kind: Service metadata: name: mysql-headless-service labels: kind: mysql-headless-service spec: clusterIP: None selector: tier: mysql-db ports: - name: 'mysql-http' protocol: 'TCP' port: 3306 --- #
STATEFUL SET apiVersion: apps/v1 kind: StatefulSet metadata: name: mysql-statefulset spec: selector: matchLabels: tier: mysql-db serviceName: mysql-statefulset replicas: 1 template: metadata: labels: tier: mysql-db spec: terminationGracePeriodSeconds: 10 containers: - name: my-mysql image: my-mysql:latest imagePullPolicy: Always args: - &quot;--ignore-db-dir=lost+found&quot; ports: - name: 'http' protocol: 'TCP' containerPort: 3306 volumeMounts: - name: mysql-pvc mountPath: /var/lib/mysql env: - name: MYSQL_ROOT_USER valueFrom: secretKeyRef: name: mysql-secret key: mysql-root-username - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql-secret key: mysql-root-password - name: MYSQL_USER valueFrom: configMapKeyRef: name: mysql-config key: mysql-username - name: MYSQL_PASSWORD valueFrom: configMapKeyRef: name: mysql-config key: mysql-password - name: MYSQL_DATABASE valueFrom: configMapKeyRef: name: mysql-config key: mysql-database volumeClaimTemplates: - metadata: name: mysql-pvc spec: storageClassName: 'mysql-fast' resources: requests: storage: 120Gi accessModes: - ReadWriteOnce - ReadOnlyMany </code></pre> <p>MySQL storage class manifest:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: mysql-fast provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd reclaimPolicy: Retain allowVolumeExpansion: true volumeBindingMode: Immediate </code></pre> <p>Why is Kubernetes trying to schedule the pod on an out-of-memory node?</p> <p><strong>UPDATES</strong></p> <p>I've added requests and limits to the <code>MySQL</code> manifest to improve the <code>QoS Class</code>. Now the <code>QoS Class</code> is <code>Guaranteed</code>.</p> <p>Unfortunately, Kubernetes is still trying to schedule to the out-of-memory <code>rnfh</code> node.</p> <pre><code>kubectl describe po mysql-statefulset-0 | grep node -i Node: gke-cluster-default-pool-rnfh/So.Me.I.P kubectl describe po mysql-statefulset-0 | grep qos -i QoS Class: Guaranteed </code></pre>
<p>I ran a few more tests but I couldn't replicate this.</p> <p>To answer this one correctly, we would need many more logs. Not sure if you still have them. If I had to guess the root cause of this issue, I would say it was connected with the PersistentVolume.</p> <p>In the <a href="https://github.com/kubernetes-retired/external-storage/issues/752" rel="nofollow noreferrer">Github issue - Volume was remounted as read only after error #752</a> I found behavior very similar to OP's.</p> <p>You have created a <code>special</code> storageclass for your MySQL. You've set <code>reclaimPolicy: Retain</code>, so the PV was not removed. When the <code>Statefulset</code> pod (with the same suffix <code>-0</code>) was recreated (restarted due to a connectivity error, some issue with the DB, hard to say), it tried to re-claim this Volume. In the mentioned Github issue, the user had a very similar situation: they also got the <code>inode #262147: comm mysqld: reading directory lblock</code> issue, but below it there was also the entry <code>[ +0.003695] EXT4-fs (sda): Remounting filesystem read-only</code>. Maybe it changed permissions when re-mounted?</p> <p>Another thing: your <code>volumeClaimTemplates</code> contained</p> <pre><code> accessModes: - ReadWriteOnce - ReadOnlyMany </code></pre> <p>So one <code>PersistentVolume</code> could be used as <code>ReadWriteOnce</code> by one node or only as <code>ReadOnlyMany</code> by many nodes. There is a possibility that the pod was recreated on a different node with the <code>ReadOnly</code> accessMode.</p> <pre><code>[ +35.912075] EXT4-fs warning (device sda): htree_dirblock_to_tree:977: inode #2: lblock 0: comm mysqld: error -5 reading directory block [ +6.294232] EXT4-fs error (device sda): ext4_find_entry:1436: inode #262147: comm mysqld: reading directory lblock ...
[ +0.005226] EXT4-fs error (device sda): ext4_find_entry:1436: inode #2: comm mysqld: reading directory lblock 0 [ +1.666039] EXT4-fs error (device sda): ext4_journal_check_start:61: Detected aborted journal [ +0.003695] EXT4-fs (sda): Remounting filesystem read-only </code></pre> <p>It would fit OP's comment:</p> <blockquote> <p>Two days ago for reasons unknown to me Kubernetes restarted the container and was keep trying to run it on rnfa machine. The container was probably evicted from another node.</p> </blockquote> <p>Another thing is that the node or cluster might have been updated (depending on whether the auto-update option was turned on), which might force a restart of the pod.</p> <p>The <code>'/var/lib/mysql/': Input/output error</code> issue might point to database corruption, as mentioned <a href="https://dba.stackexchange.com/questions/155251/possibility-of-fixing-corrupt-mysql-database">here</a>.</p> <p>In general, the issue has been resolved by <code>cordoning</code> the affected node. Additional information about the difference between <code>cordon</code> and <code>drain</code> can be found <a href="https://intl.cloud.tencent.com/document/product/457/30654" rel="nofollow noreferrer">here</a>.</p> <p>Just as an addition, to assign pods to a specific node or a node with a specified label, you can use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Affinity</a></p>
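<p>If the access-mode theory fits your case, one hedged mitigation is to request only <code>ReadWriteOnce</code> in the claim template, so the volume can never be attached read-only by another node. A sketch based on the manifest from the question:</p> <pre><code>volumeClaimTemplates:
  - metadata:
      name: mysql-pvc
    spec:
      storageClassName: 'mysql-fast'
      accessModes:
        - ReadWriteOnce   # drop ReadOnlyMany
      resources:
        requests:
          storage: 120Gi
</code></pre>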
<p>I'm trying to migrate my Nextcloud instance to a kubernetes cluster. I've successfully deployed a Nextcloud instance using <strong>openEBS-cStor</strong> storage. Before I can &quot;kubectl cp&quot; my old files to the cluster, I need to put Nextcloud in maintenance mode.</p> <p>This is what I've tried so far:</p> <ul> <li>Shell access to pod</li> <li>Navigate to folder</li> <li>Run OCC command to put Nextcloud in maintenance mode</li> </ul> <p>These are the commands I used for the OCC way:</p> <pre class="lang-sh prettyprint-override"><code>kubectl exec --stdin --tty -n nextcloud nextcloud-7ff9cf449d-rtlxh -- /bin/bash su -c 'php occ maintenance:mode --on' www-data # This account is currently not available. </code></pre> <p>Any tips on how to put Nextcloud in maintenance mode would be appreciated!</p>
<p>The <code>su</code> command fails because there is no shell associated with the <code>www-data</code> user.</p> <p>What worked for me is explicitly specifying the shell in the <code>su</code> command:</p> <pre class="lang-sh prettyprint-override"><code>su -s /bin/bash www-data -c &quot;php occ maintenance:mode --on&quot; </code></pre>
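<p>For completeness, the whole thing can also be run non-interactively from outside the pod in one go (pod name taken from the question):</p> <pre class="lang-sh prettyprint-override"><code>kubectl exec -n nextcloud nextcloud-7ff9cf449d-rtlxh -- \
  su -s /bin/bash www-data -c &quot;php occ maintenance:mode --on&quot;
</code></pre>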
<p>I have a Kubernetes cluster in GCP with the Docker container runtime. I am trying to change the container runtime from Docker to containerd. The following steps show what I did.</p> <ol> <li>Added a new node pool (nodes with containerd)</li> <li>Drained the old nodes</li> </ol> <p>Once I perform the above steps I get a &quot;Pod is blocking scale down because it has local storage&quot; warning message.</p>
<p>You need to add the following annotation to the Pod so that the cluster autoscaler considers it safe to evict and can scale the node down.</p> <pre><code>&quot;cluster-autoscaler.kubernetes.io/safe-to-evict&quot;: &quot;true&quot; </code></pre> <p>You can read more at: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler-visibility#cluster-not-scalingdown" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler-visibility#cluster-not-scalingdown</a></p> <blockquote> <p>NoScaleDown example: You found a noScaleDown event that contains a per-node reason for your node. The message ID is &quot;no.scale.down.node.pod.has.local.storage&quot; and there is a single parameter: &quot;test-single-pod&quot;. After consulting the list of error messages, you discover this means that the &quot;Pod is blocking scale down because it requests local storage&quot;. You consult the Kubernetes Cluster Autoscaler FAQ and find out that the solution is to add a &quot;cluster-autoscaler.kubernetes.io/safe-to-evict&quot;: &quot;true&quot; annotation to the Pod. After applying the annotation, cluster autoscaler scales down the cluster correctly.</p> </blockquote>
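<p>For a Pod managed by a Deployment, note that the annotation belongs on the Pod template, not on the Deployment's own metadata. A minimal sketch (names are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: &quot;true&quot;
    spec:
      containers:
        - name: my-app
          image: my-app:latest
</code></pre>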
<p>I have a yml file called <code>output.yml</code> which contains a K8s Service, Deployment and Ingress resources like so (lots of fields omitted for brevity):</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app.kubernetes.io/name: app-name app.kubernetes.io/instance: instance-name spec: selector: app.kubernetes.io/name: app-name app.kubernetes.io/instance: instance-name --- apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/name: app-name app.kubernetes.io/instance: instance-name spec: selector: matchLabels: app.kubernetes.io/name: app-name app.kubernetes.io/instance: instance-name template: metadata: labels: app.kubernetes.io/name: app-name app.kubernetes.io/instance: instance-name --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: labels: app.kubernetes.io/name: app-name app.kubernetes.io/instance: instance-name </code></pre> <p>What I would like to do is, for all occurrences of where the key = <code>app.kubernetes.io/instance</code>, replace all values of <code>instance-name</code> with a different value say <code>app-instance-name2</code>. I have tried several things like using the <code>select</code> and <code>has</code> operators like so: <code>yq eval '(.. 
| select(has(&quot;app.kubernetes.io/instance&quot;))'</code> but it returns all map keys instead of just the one I want to update, and then I'm not really sure where to go from there.</p> <p>At the moment, I am just updating each individual value like so:</p> <pre><code>yq e '.metadata.labels[&quot;app.kubernetes.io/instance&quot;] = strenv(INSTANCE_NAME)' -i output.yml yq e 'select(.kind == &quot;Service&quot;).spec.selector[&quot;app.kubernetes.io/instance&quot;] = strenv(INSTANCE_NAME)' -i output.yml yq e 'select(.kind == &quot;Deployment&quot;).spec.selector.matchLabels[&quot;app.kubernetes.io/instance&quot;] = strenv(INSTANCE_NAME)' -i output.yml yq e 'select(.kind == &quot;Deployment&quot;).spec.template.metadata.labels[&quot;app.kubernetes.io/instance&quot;] = strenv(INSTANCE_NAME)' -i output.yml </code></pre> <p>which works, but is pretty verbose, so I'd like to know if there is a succinct single-line option.</p> <p>I am using yq version 4.17.2 from <a href="https://mikefarah.gitbook.io/yq" rel="nofollow noreferrer">https://mikefarah.gitbook.io/yq</a></p> <p>Any advice is much appreciated</p>
<p>It can be accomplished by doing a <a href="https://mikefarah.gitbook.io/yq/operators/recursive-descent-glob#recursively-find-nodes-with-keys" rel="nofollow noreferrer">recursive descent</a> to identify keys matching your string and updating their value part using <code>|=</code>.</p> <pre class="lang-none prettyprint-override"><code>yq e '(..|select(has(&quot;app.kubernetes.io/instance&quot;)).[&quot;app.kubernetes.io/instance&quot;]) |= &quot;app-instance-name2&quot;' output.yml </code></pre> <p>Starting with <a href="https://github.com/mikefarah/yq/releases/tag/v4.18.1" rel="nofollow noreferrer">v4.18.1</a>, the <code>eval</code> flag is the default action, so the <code>e</code> flag can be omitted.</p>
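<p>Combined with the environment-variable approach from the question, the whole update becomes a single in-place command (the variable has to be visible to the <code>yq</code> process for <code>strenv</code> to pick it up):</p> <pre class="lang-none prettyprint-override"><code>INSTANCE_NAME=app-instance-name2 \
  yq e '(.. | select(has(&quot;app.kubernetes.io/instance&quot;)).[&quot;app.kubernetes.io/instance&quot;]) |= strenv(INSTANCE_NAME)' -i output.yml
</code></pre>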
<p>I have an application that has React in the front-end and a Node service in the back-end. The app is deployed in a GKE cluster. Both apps are exposed as a NodePort Service, and the fan-out ingress path is done as follows:</p> <pre><code>- host: example.com http: paths: - backend: serviceName: frontend-service servicePort: 3000 path: /* - backend: serviceName: backend-service servicePort: 5000 path: /api/* </code></pre> <p>I have enabled authentication using IAP for both services. When enabling IAP for both Kubernetes services, a separate Client ID and Client Secret are created for each. But I need to provide authentication for the back-end API from the front-end; since they have 2 different accounts, it's not possible, i.e. when I call the back-end API service from the front-end, the authentication fails because the cookies provided by the FE do not match in the back-end service.</p> <p>What is the best way to handle this scenario? Is there a way to use the same client credentials for both these services, and if so, is that the right way to do it? Or is there a way to authenticate the REST API using IAP directly?</p>
<p>If IAP is set up using BackendConfig, then you can have two separate BackendConfig objects for the frontend and backend applications, but both of them can use the same secret (secretName) for oauthclientCredentials.</p> <p><em><strong>For frontend app</strong></em></p> <pre><code>apiVersion: cloud.google.com/v1beta1 kind: BackendConfig metadata: name: frontend-iap-config namespace: namespace-1 spec: iap: enabled: true oauthclientCredentials: secretName: common-iap-oauth-credentials </code></pre> <p><em><strong>For backend app</strong></em></p> <pre><code>apiVersion: cloud.google.com/v1beta1 kind: BackendConfig metadata: name: backend-iap-config namespace: namespace-1 spec: iap: enabled: true oauthclientCredentials: secretName: common-iap-oauth-credentials </code></pre> <p>Then reference these BackendConfigs from the respective Kubernetes Service objects.</p>
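<p>A Service points at its BackendConfig with the <code>cloud.google.com/backend-config</code> annotation. A sketch for the frontend (service name taken from the question, selector and port values assumed):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: namespace-1
  annotations:
    cloud.google.com/backend-config: '{&quot;default&quot;: &quot;frontend-iap-config&quot;}'
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 3000
</code></pre>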
<p>Prometheus supports multiple roles in its <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">Kubernetes SD config</a></p> <p>I'm confused about whether I should use a <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">Pod</a> config or a <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">endpoints</a> role for my Deployment + Service.</p> <p>The service I am monitoring is a Deployment</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment spec: replicas: ~10 strategy: rollingUpdate: maxSurge: 5 maxUnavailable: 0 type: RollingUpdate template: containers: - name: web-app ports: - containerPort: 3182 name: http - containerPort: 6060 name: metrics </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: web-app spec: ports: - name: http port: 3182 targetPort: http selector: app: web-app type: ClusterIP </code></pre> <p>The number of pods can vary in the deployment. The deployment is continuously being updated with new images.</p> <p>I can add annotations or labels as needed to either of these YAML files.</p> <p>Is there a reason to prefer either a <strong>Pod</strong> role or an <strong>Endpoints</strong> role?</p>
<p>In short, there are two major differences:</p> <ul> <li>an <code>endpoints</code> role gives you more data in labels (to which service a pod belongs, for example);</li> <li>a <code>pod</code> role targets <strong><em>any</em></strong> pod out there and not just those belonging to a service.</li> </ul> <p>What's best for you is for you to decide, but I suppose that an <code>endpoints</code> role would fit well for your production applications (all these usually have a corresponding service), and a <code>pod</code> role for everything else. Or you may do with just one <code>pod</code> role job for everything and bring that extra information with the <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> exporter.</p>
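<p>As an illustration, a minimal <code>endpoints</code>-role scrape job for the Deployment above could look like this (it assumes the <code>metrics</code> port is also added to the Service, since Endpoints objects only carry the ports declared on the Service):</p> <pre class="lang-yaml prettyprint-override"><code>scrape_configs:
  - job_name: 'web-app'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # keep only targets that belong to the web-app service...
      - source_labels: [__meta_kubernetes_service_name]
        action: keep
        regex: web-app
      # ...and only its port named "metrics"
      - source_labels: [__meta_kubernetes_endpoint_port_name]
        action: keep
        regex: metrics
</code></pre>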
<p>I am using <a href="https://kustomize.io/" rel="nofollow noreferrer">https://kustomize.io/</a> have below is my kustomization.yaml file,</p> <p>I would like to pass <code>newTag</code> image version to labels on <code>deployment.yaml</code> when i use ArgoCD to apply this file. Does anyone have any idea without using shell script to sed the <code>newtag</code> to deployment.yaml file.</p> <p>deployment.yaml</p> <pre><code>apiVersion: &quot;apps/v1&quot; kind: &quot;Deployment&quot; metadata: name: &quot;hellowolrd&quot; spec: template: metadata: labels: app: aggregate appversion: ${newtag} &lt;&lt;&lt;&lt;&lt; </code></pre> <p>kustomization.yaml</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization images: - name: hellowolrd newName: hellowolrd newTag: 12345 </code></pre>
<p>You can't do this directly with the <code>newTag</code> value. However, you can use the PatchTransformer built-in plugin to change the <code>appversion</code> value.</p> <p>Add this</p> <pre class="lang-yaml prettyprint-override"><code>resources: - deployment.yaml patches: - patch: |- - op: replace path: /spec/template/metadata/labels/appversion value: v2 target: kind: Deployment </code></pre> <p>to your <em>kustomization.yaml</em>, and run <code>kustomize build</code>.<br /> The result will look like this</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: hellowolrd spec: template: metadata: labels: app: aggregate appversion: v2 </code></pre> <p>You can read more <a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/builtins/#_patchtransformer_" rel="nofollow noreferrer">here</a>.</p>
<p>I have a <code>ConfigMap</code> where I am including a file in its data attribute and I need to replace several strings from it. But I'm not able to divide it (<strong>the &quot;replaces&quot;</strong>) into <strong>several lines</strong> so that it doesn't get a giant line. How can I do this?</p> <p>This is what I don't want:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: {{ (.Files.Glob &quot;myFolder/*.json&quot;).AsConfig | indent 2 | replace &quot;var1_enabled&quot; (toString .Values.myVar1.enabled) | replace &quot;var2_enabled&quot; (toString .Values.myVar2.enabled) }} </code></pre> <p>This is what I'm trying to do:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: {{ (.Files.Glob &quot;myFolder/*.json&quot;).AsConfig | indent 2 | replace &quot;var1_enabled&quot; (toString .Values.myVar1.enabled) | replace &quot;var2_enabled&quot; (toString .Values.myVar2.enabled) }} </code></pre> <p>What is the right syntax to do this?</p>
<blockquote> <p>What is the right syntax to do this?</p> </blockquote> <p>It is well described in <a href="https://helm.sh/docs/chart_template_guide/yaml_techniques/#controlling-spaces-in-multi-line-strings" rel="nofollow noreferrer">this documentation</a>. There are many different ways to achieve your goal; it all depends on the specific situation, and that documentation covers them. Look at the <a href="https://helm.sh/docs/chart_template_guide/yaml_techniques/#indenting-and-templates" rel="nofollow noreferrer">example</a> closest to your current situation:</p> <blockquote> <p>When writing templates, you may find yourself wanting to inject the contents of a file into the template. As we saw in previous chapters, there are two ways of doing this:</p> <ul> <li>Use <code>{{ .Files.Get &quot;FILENAME&quot; }}</code> to get the contents of a file in the chart.</li> <li>Use <code>{{ include &quot;TEMPLATE&quot; . }}</code> to render a template and then place its contents into the chart.</li> </ul> <p>When inserting files into YAML, it's good to understand the multi-line rules above. Often times, the easiest way to insert a static file is to do something like this:</p> </blockquote> <pre class="lang-yaml prettyprint-override"><code>myfile: |
{{ .Files.Get &quot;myfile.txt&quot; | indent 2 }}
</code></pre> <blockquote> <p>Note how we do the indentation above: <code>indent 2</code> tells the template engine to indent every line in &quot;myfile.txt&quot; with two spaces. Note that we do not indent that template line. That's because if we did, the file content of the first line would be indented twice.</p> </blockquote> <p>For more, look also at a <a href="https://github.com/helm/helm/issues/5451" rel="nofollow noreferrer">similar problem on GitHub</a> and this <a href="https://stackoverflow.com/questions/50951124/multiline-string-to-a-variable-in-a-helm-template">question on Stack Overflow</a>.</p> <hr /> <p><strong>EDIT:</strong></p> <blockquote> <p>But I'm not able to divide it (<strong>the &quot;replaces&quot;</strong>) into <strong>several lines</strong> so that it doesn't get a giant line. How can I do this?</p> </blockquote> <p><strong>It is impossible to achieve within a single action: Go templates do not support newlines inside an action.</strong> For more, see <a href="https://stackoverflow.com/questions/49816911/how-to-split-a-long-go-template-function-across-multiple-lines">this question</a> and <a href="https://pkg.go.dev/text/template" rel="nofollow noreferrer">this documentation</a>:</p> <blockquote> <p>The input text for a template is UTF-8-encoded text in any format. &quot;Actions&quot;--data evaluations or control structures--are delimited by &quot;{{&quot; and &quot;}}&quot;; all text outside actions is copied to the output unchanged. Except for raw strings, actions may not span newlines, although comments can.</p> </blockquote>
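<p>If the goal is simply to avoid one very long line, a common workaround (a sketch, untested here) is to build the string in several short template actions using a variable, since each individual action still fits on one line:</p>

```yaml
apiVersion: v1
kind: ConfigMap
data:
{{- /* build the string step by step; each action stays on one line */}}
{{- $files := (.Files.Glob "myFolder/*.json").AsConfig }}
{{- $files = $files | replace "var1_enabled" (toString .Values.myVar1.enabled) }}
{{- $files = $files | replace "var2_enabled" (toString .Values.myVar2.enabled) }}
{{ $files | indent 2 }}
```

<p>Note that reassigning a variable with <code>=</code> requires Helm 3 (Go 1.11+ templates); on older Helm versions only the initial <code>:=</code> declaration is available.</p>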
<p>I am trying to connect a folder in Windows to a container folder. This is for a .NET app that needs to read files in a folder. In a normal Docker container, with docker-compose, the app works without problems, but since this is only one of several different apps that we will have to monitor, we are trying to get Kubernetes involved. That is also where we are failing. As a beginner with Kubernetes, I used kompose.exe to convert the compose files to Kubernetes style. However, no matter if I use hostPath or persistentVolumeClaim as a flag, I do not get things to work &quot;out of the box&quot;. With hostPath, the path is very incorrect, and with persistentVolumeClaim I get a warning saying volume mounts on the host are not supported. I therefore tried to do that part myself, but could get it to work with neither a persistent volume nor by entering the mount data in the deployment file directly. The closest I have come is that I can enter the folder, and I can change to subfolders within, but as soon as I try to run any other command, be it 'ls' or 'cat', I get &quot;Operation not permitted&quot;. Here is my docker-compose file, which works as expected:</p> <pre><code>version: &quot;3.8&quot;

services:
  test-create-hw-file:
    container_name: &quot;testcreatehwfile&quot;
    image: test-create-hw-file:trygg
    network_mode: &quot;host&quot;
    volumes:
      - /c/temp/testfiles:/app/files
</code></pre> <p>Running kompose convert on that file:</p> <pre><code>PS C:\temp&gt; .\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v
DEBU Checking validation of provider: kubernetes
DEBU Checking validation of controller:
DEBU Docker Compose version: 3.8
WARN Service &quot;test-create-hw-file&quot; won't be created because 'ports' is not specified
DEBU Compose file dir: C:\temp
DEBU Target Dir: .
INFO Kubernetes file &quot;test-create-hw-file-deployment.yaml&quot; created
</code></pre> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: C:\temp\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v
    kompose.version: 1.26.1 (a9d05d509)
  creationTimestamp: null
  labels:
    io.kompose.service: test-create-hw-file
  name: test-create-hw-file
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: test-create-hw-file
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: C:\temp\kompose.exe convert -f .\docker-compose-tchwf.yml --volumes hostPath -v
        kompose.version: 1.26.1 (a9d05d509)
      creationTimestamp: null
      labels:
        io.kompose.service: test-create-hw-file
    spec:
      containers:
        - image: test-create-hw-file:trygg
          name: testcreatehwfile
          resources: {}
          volumeMounts:
            - mountPath: /app/files
              name: test-create-hw-file-hostpath0
      restartPolicy: Always
      volumes:
        - hostPath:
            path: C:\temp\c\temp\testfiles
          name: test-create-hw-file-hostpath0
status: {}
</code></pre> <p>Running kubectl apply on that file just gives the infamous error &quot;Error: Error response from daemon: invalid mode: /app/files&quot;, which means, as far as I can understand, not that &quot;/app/files&quot; is wrong, but that the format of the supposedly connected folder is incorrect. This is the quite weird <code>C:\temp\c\temp\testfiles</code> row. After googling and reading a lot, I have two ways of changing that: to either <code>/c/temp/testfiles</code> or <code>/host_mnt/c/temp/testfiles</code>. Both end up in the same &quot;Operation not permitted&quot;. I am checking this via the CLI on the container in Docker Desktop.</p> <p>The image from the test is just an app that does nothing right now other than wait for five minutes, so it doesn't quit before I can check the folder. I am logged on to the shell as root, and 'ls -lA' shows this row for the folder:</p> <pre><code>drwxrwxrwx 1 root root 0 Feb  7 12:04 files
</code></pre> <p>Also, the <code>docker-user</code> has full access to the <code>c:\temp\testfiles</code> folder.</p> <p><strong>Some version data:</strong></p> <pre><code>Docker version 20.10.12, build e91ed57

Kubectl version:
Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.5&quot;, GitCommit:&quot;5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-12-16T08:38:33Z&quot;, GoVersion:&quot;go1.16.12&quot;, Compiler:&quot;gc&quot;, Platform:&quot;windows/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.5&quot;, GitCommit:&quot;5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-12-16T08:32:32Z&quot;, GoVersion:&quot;go1.16.12&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}

Kompose version 1.26.1 (a9d05d509)
Host OS: Windows 10, 21H2
</code></pre> <p>//Trygg</p>
<p>Glad that my initial comment solved your issue. I would like to expand my thoughts a little in a form of an official answer.</p> <p>To mount volumes using Kubernetes on Docker Desktop for Windows the path will be:</p> <pre class="lang-yaml prettyprint-override"><code>/run/desktop/mnt/host/c/PATH/TO/FILE </code></pre> <p>Unfortunately there is no official documentation but <a href="https://github.com/docker/for-win/issues/5325#issuecomment-567594291" rel="noreferrer">here</a> is a good comment with explanation that this is related to Docker Daemon:</p> <blockquote> <p>/mnt/wsl is actually the mount point for the cross-distro mounts tmpfs<br /> Docker Daemon mounts it in its /run/desktop/mnt/host/wsl directory</p> </blockquote>
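<p>Applied to the manifest generated by kompose in the question, only the volume definition changes (a sketch; the rest of the Deployment stays as it was):</p>

```yaml
volumes:
  - name: test-create-hw-file-hostpath0
    hostPath:
      # Docker Desktop (WSL 2 backend) exposes the Windows C: drive here
      path: /run/desktop/mnt/host/c/temp/testfiles
```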
<p>Based on the instructions found here (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/</a>) I am trying to create an nginx deployment and configure it using a config map. I can successfully access nginx using curl (yea!) but the configmap does not appear to be &quot;sticking.&quot; The only thing it is supposed to do right now is forward the traffic along. I have seen the thread here (<a href="https://stackoverflow.com/questions/52773494/how-do-i-load-a-configmap-in-to-an-environment-variable">How do I load a configMap in to an environment variable?</a>); although I am using the same format, their answer was not relevant.</p> <p>Can anyone tell me how to properly configure the configmap? The yaml is:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: sandbox
spec:
  selector:
    matchLabels:
      run: nginx
      app: dsp
      tier: frontend
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
        app: dsp
        tier: frontend
    spec:
      containers:
        - name: nginx
          image: nginx
          env:
            # Define the environment variable
            - name: nginx-conf
              valueFrom:
                configMapKeyRef:
                  # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
                  name: nginx-conf
                  # Specify the key associated with the value
                  key: nginx.conf
          resources:
            limits:
              memory: &quot;128Mi&quot;
              cpu: &quot;500m&quot;
          ports:
            - containerPort: 80
</code></pre> <p>The nginx-conf is:</p> <pre><code># The identifier Backend is internal to nginx, and used to name this specific upstream
upstream Backend {
    # hello is the internal DNS name used by the backend Service inside Kubernetes
    server dsp;
}

server {
    listen 80;
    location / {
        # The following statement will proxy traffic to the upstream named Backend
        proxy_pass http://Backend;
    }
}
</code></pre> <p>I turn it into a configmap using the following line:</p> <pre><code>kubectl create configmap -n sandbox nginx-conf --from-file=apps/nginx.conf
</code></pre>
<p>You need to mount the configMap rather than use it as an environment variable, because the setting is not in a key-value format.</p> <p>Your Deployment yaml should look like this:</p> <pre class="lang-yaml prettyprint-override"><code>containers:
  - name: nginx
    image: nginx
    volumeMounts:
      - mountPath: /etc/nginx
        name: nginx-conf
volumes:
  - name: nginx-conf
    configMap:
      name: nginx-conf
      items:
        - key: nginx.conf
          path: nginx.conf
</code></pre> <p>You need to create (apply) the configMap beforehand. You can create it from a file:</p> <pre><code>kubectl create configmap nginx-conf --from-file=nginx.conf
</code></pre> <p>or you can directly describe the configMap manifest:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    # The identifier Backend is internal to nginx, and used to name this specific upstream
    upstream Backend {
        # hello is the internal DNS name used by the backend Service inside Kubernetes
        server dsp;
    }
    ...
    }
</code></pre>
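<p>One caveat worth noting: mounting the ConfigMap over the whole <code>/etc/nginx</code> directory hides every other file the image ships there (for example <code>mime.types</code>, which the default configuration includes). If those files are needed, a common alternative is to overlay only the single file using <code>subPath</code> (a sketch; the <code>volumes</code> section stays the same):</p>

```yaml
containers:
  - name: nginx
    image: nginx
    volumeMounts:
      # Mount only nginx.conf, leaving the rest of /etc/nginx intact
      - mountPath: /etc/nginx/nginx.conf
        subPath: nginx.conf
        name: nginx-conf
```

<p>Note that files mounted via <code>subPath</code> are not updated when the ConfigMap changes; the pod has to be restarted to pick up new content.</p>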
<p>We have set up a GKE cluster using Terraform with private and shared networking:</p> <p>Network configuration:</p> <pre><code>resource &quot;google_compute_subnetwork&quot; &quot;int_kube02&quot; {
  name          = &quot;int-kube02&quot;
  region        = var.region
  project       = &quot;infrastructure&quot;
  network       = &quot;projects/infrastructure/global/networks/net-10-23-0-0-16&quot;
  ip_cidr_range = &quot;10.23.5.0/24&quot;

  secondary_ip_range {
    range_name    = &quot;pods&quot;
    ip_cidr_range = &quot;10.60.0.0/14&quot; # 10.60 - 10.63
  }

  secondary_ip_range {
    range_name    = &quot;services&quot;
    ip_cidr_range = &quot;10.56.0.0/16&quot;
  }
}
</code></pre> <p>Cluster configuration:</p> <pre><code>resource &quot;google_container_cluster&quot; &quot;gke_kube02&quot; {
  name               = &quot;kube02&quot;
  location           = var.region
  initial_node_count = var.gke_kube02_num_nodes
  network            = &quot;projects/ninfrastructure/global/networks/net-10-23-0-0-16&quot;
  subnetwork         = &quot;projects/infrastructure/regions/europe-west3/subnetworks/int-kube02&quot;

  master_authorized_networks_config {
    cidr_blocks {
      display_name = &quot;admin vpn&quot;
      cidr_block   = &quot;10.42.255.0/24&quot;
    }
    cidr_blocks {
      display_name = &quot;monitoring server&quot;
      cidr_block   = &quot;10.42.4.33/32&quot;
    }
    cidr_blocks {
      display_name = &quot;cluster nodes&quot;
      cidr_block   = &quot;10.23.5.0/24&quot;
    }
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = &quot;pods&quot;
    services_secondary_range_name = &quot;services&quot;
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
    master_ipv4_cidr_block  = &quot;192.168.23.0/28&quot;
  }

  node_config {
    machine_type = &quot;e2-highcpu-2&quot;
    tags         = [&quot;kube-no-external-ip&quot;]
    metadata = {
      disable-legacy-endpoints = true
    }
    oauth_scopes = [
      &quot;https://www.googleapis.com/auth/logging.write&quot;,
      &quot;https://www.googleapis.com/auth/monitoring&quot;,
    ]
  }
}
</code></pre> <p>The cluster is online and running fine. If I connect to one of the worker nodes I can reach the API using <code>curl</code>:</p> <pre><code>curl -k https://192.168.23.2
{
  &quot;kind&quot;: &quot;Status&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;metadata&quot;: { },
  &quot;status&quot;: &quot;Failure&quot;,
  &quot;message&quot;: &quot;forbidden: User \&quot;system:anonymous\&quot; cannot get path \&quot;/\&quot;&quot;,
  &quot;reason&quot;: &quot;Forbidden&quot;,
  &quot;details&quot;: { },
  &quot;code&quot;: 403
}
</code></pre> <p>I also see a healthy cluster when using an SSH port forward:</p> <pre><code>❯ k get pods --all-namespaces --insecure-skip-tls-verify=true
NAMESPACE     NAME                                               READY   STATUS    RESTARTS   AGE
kube-system   event-exporter-gke-5479fd58c8-mv24r                2/2     Running   0          4h44m
kube-system   fluentbit-gke-ckkwh                                2/2     Running   0          4h44m
kube-system   fluentbit-gke-lblkz                                2/2     Running   0          4h44m
kube-system   fluentbit-gke-zglv2                                2/2     Running   4          4h44m
kube-system   gke-metrics-agent-j72d9                            1/1     Running   0          4h44m
kube-system   gke-metrics-agent-ttrzk                            1/1     Running   0          4h44m
kube-system   gke-metrics-agent-wbqgc                            1/1     Running   0          4h44m
kube-system   kube-dns-697dc8fc8b-rbf5b                          4/4     Running   5          4h44m
kube-system   kube-dns-697dc8fc8b-vnqb4                          4/4     Running   1          4h44m
kube-system   kube-dns-autoscaler-844c9d9448-f6sqw               1/1     Running   0          4h44m
kube-system   kube-proxy-gke-kube02-default-pool-2bf58182-xgp7   1/1     Running   0          4h43m
kube-system   kube-proxy-gke-kube02-default-pool-707f5d51-s4xw   1/1     Running   0          4h43m
kube-system   kube-proxy-gke-kube02-default-pool-bd2c130d-c67h   1/1     Running   0          4h43m
kube-system   l7-default-backend-6654b9bccb-mw6bp                1/1     Running   0          4h44m
kube-system   metrics-server-v0.4.4-857776bc9c-sq9kd             2/2     Running   0          4h43m
kube-system   pdcsi-node-5zlb7                                   2/2     Running   0          4h44m
kube-system   pdcsi-node-kn2zb                                   2/2     Running   0          4h44m
kube-system   pdcsi-node-swhp9                                   2/2     Running   0          4h44m
</code></pre> <p>So far so good. Then I set up the Cloud Router to announce the <code>192.168.23.0/28</code> network. This was successful and replicated to our local site using BGP. Running <code>show route 192.168.23.2</code> displays that the correct route is advertised and installed.</p> <p>When trying to reach the API from the monitoring server <code>10.42.4.33</code> I just run into timeouts. All three (the Cloud VPN, the Cloud Router and the Kubernetes cluster) run in <code>europe-west3</code>.</p> <p>When I try to ping one of the workers it works completely fine, so networking in general works:</p> <pre><code>[me@monitoring ~]$ ping 10.23.5.216
PING 10.23.5.216 (10.23.5.216) 56(84) bytes of data.
64 bytes from 10.23.5.216: icmp_seq=1 ttl=63 time=8.21 ms
64 bytes from 10.23.5.216: icmp_seq=2 ttl=63 time=7.70 ms
64 bytes from 10.23.5.216: icmp_seq=3 ttl=63 time=5.41 ms
64 bytes from 10.23.5.216: icmp_seq=4 ttl=63 time=7.98 ms
</code></pre> <p>Google's <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">documentation</a> gives no hint about what could be missing. From what I understand, the cluster API should be reachable by now.</p> <p>What could be missing, and why is the API not reachable via VPN?</p>
<p>I have been missing the peering configuration documented here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cp-on-prem-routing" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cp-on-prem-routing</a></p> <pre><code>resource &quot;google_compute_network_peering_routes_config&quot; &quot;peer_kube02&quot; {
  peering = google_container_cluster.gke_kube02.private_cluster_config[0].peering_name
  project = &quot;infrastructure&quot;
  network = &quot;net-10-13-0-0-16&quot;

  export_custom_routes = true
  import_custom_routes = false
}
</code></pre>
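<p>For reference, the peering name that this resource exports routes on can also be looked up outside Terraform (a sketch, assuming the cluster name and region from the question):</p>

```shell
# Prints the VPC peering created for the private cluster's control plane
gcloud container clusters describe kube02 \
  --region europe-west3 \
  --format="value(privateClusterConfig.peeringName)"
```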
<p>I am trying to use the post steps with the Jenkins Kubernetes plugin. Does anyone have an idea?</p> <pre><code>java.lang.NoSuchMethodError: No such DSL method 'post' found among steps
</code></pre> <p>My pipeline:</p> <pre><code>podTemplate(
    label: 'jenkins-pipeline',
    cloud: 'minikube',
    volumes: [
        hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    ]) {

    node('jenkins-pipeline') {
        stage('test') {
            container('maven') {
                println 'do some testing stuff'
            }
        }

        post {
            always {
                println &quot;test&quot;
            }
        }
    }
}
</code></pre>
<p>This example shows how to use the post step using the Kubernetes plugin:</p> <pre><code>pipeline {
  agent {
    kubernetes {
      label &quot;my-test-pipeline-${BUILD_NUMBER}&quot;
      containerTemplate {
        name &quot;my-container&quot;
        image &quot;alpine:3.15.0&quot;
        command &quot;sleep&quot;
        args &quot;99d&quot;
      }
    }
  }
  stages {
    stage('Stage 1') {
      steps {
        container('my-container') {
          sh '''
          set -e
          echo &quot;Hello world!&quot;
          sleep 10
          echo &quot;I waited&quot;
          echo &quot;forcing a fail&quot;
          exit 1
          '''
        }
      }
    }
  }
  post {
    unsuccessful {
      container('my-container') {
        sh '''
        set +e
        echo &quot;Cleaning up stuff here&quot;
        '''
      }
    }
  }
}
</code></pre>
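<p>As a side note, the <code>post</code> section only exists in declarative pipelines, which is why the scripted pipeline in the question fails with <code>No such DSL method 'post'</code>. If you want to keep the scripted <code>podTemplate</code> style, the usual equivalent is a try/finally block (a sketch based on the question's template):</p>

```groovy
podTemplate(label: 'jenkins-pipeline', cloud: 'minikube') {
    node('jenkins-pipeline') {
        try {
            stage('test') {
                container('maven') {
                    println 'do some testing stuff'
                }
            }
        } finally {
            // Runs whether the stages above succeeded or failed,
            // similar to post { always { ... } } in declarative syntax
            println "test"
        }
    }
}
```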
<p>Getting this error from AKS Jenkins agent pods. Any idea what the reason for it could be? These are the troubleshooting steps I tried:</p> <ul> <li>Reverted Jenkins to an older version =&gt; same error.</li> <li>Upgraded Jenkins to the newest version, including all plugins in use =&gt; same error.</li> <li>Downgraded the Jenkins Kubernetes and Kubernetes API plugins to a stable version, as per some suggestions on GitHub =&gt; same error.</li> <li>Created a brand new cluster and installed Jenkins; the job pods started giving the same error.</li> </ul> <p>How can this be fixed?</p> <pre><code>18:23:33 [Pipeline] // podTemplate
18:23:33 [Pipeline] End of Pipeline
18:23:33 io.fabric8.kubernetes.client.KubernetesClientException: not ready after 5000 MILLISECONDS
18:23:33 	at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:176)
18:23:33 	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:322)
18:23:33 	at io.fabric8.kubernetes.client.dsl.internal.core.v1.PodOperationsImpl.exec(PodOperationsImpl.java:84)
18:23:33 	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:413)
18:23:33 	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:330)
18:23:33 	at hudson.Launcher$ProcStarter.start(Launcher.java:507)
18:23:33 	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
18:23:33 	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
18:23:33 	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:324)
18:23:33 	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:319)
18:23:33 	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:193)
18:23:33 	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
18:23:33 	at jdk.internal.reflect.GeneratedMethodAccessor6588.invoke(Unknown Source)
18:23:33 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
18:23:33 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
18:23:33 	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
18:23:33 	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
18:23:33 	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
18:23:33 	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
18:23:33 	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
18:23:33 	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
18:23:33 	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:163)
18:23:33 	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
18:23:33 	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:158)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:161)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:165)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:135)
18:23:33 	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
18:23:33 	at WorkflowScript.run(WorkflowScript:114)
18:23:33 	at ___cps.transform___(Native Method)
18:23:33 	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:86)
18:23:33 	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:113)
18:23:33 	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:83)
18:23:33 	at jdk.internal.reflect.GeneratedMethodAccessor210.invoke(Unknown Source)
18:23:33 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
18:23:33 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
18:23:33 	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
18:23:33 	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
18:23:33 	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
18:23:33 	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
18:23:33 	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
18:23:33 	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
18:23:33 	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
18:23:33 	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
18:23:33 	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:18)
18:23:33 	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:51)
18:23:33 	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:185)
</code></pre>
<p><strong>EDIT:</strong> Doesn't seem to make any difference. Still getting 5000 ms timeouts, so I'm not sure this method works (with environment variables, at least). It might work if you are actually able to change the timeout, but I haven't got that figured out.</p> <hr /> <p>Started seeing the same issue after updating Jenkins (and plugins) only - not the K8S cluster.</p> <p>Except I'm getting 5000 instead of 7000 milliseconds:</p> <pre><code>io.fabric8.kubernetes.client.KubernetesClientException: not ready after 5000 MILLISECONDS
</code></pre> <p>Digging into the stack trace and the source on GitHub leads back to <a href="https://github.com/fabric8io/kubernetes-client/blob/677884ca88f3911f9b41ec919db152d441ad2cdd/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/Config.java#L135" rel="nofollow noreferrer">this</a> default timeout (which hasn't changed in 6 years), so somehow you seem to have a non-default one:</p> <pre><code>public static final Long DEFAULT_WEBSOCKET_TIMEOUT = 5 * 1000L;
</code></pre> <p>It seems it can <a href="https://github.com/fabric8io/kubernetes-client/blob/677884ca88f3911f9b41ec919db152d441ad2cdd/kubernetes-client-api/src/main/java/io/fabric8/kubernetes/client/Config.java#L421" rel="nofollow noreferrer">be overridden</a> via the <strong>kubernetes.websocket.timeout</strong> system property (environment variable <strong>KUBERNETES_WEBSOCKET_TIMEOUT</strong>). I just tried raising mine to 10 seconds to see if it makes a difference.</p> <p>Might be worth a try. If it helps, it could indicate that the cluster API server is somehow slower to respond than expected. I'm not aware of anything that should have affected cluster performance around the time of the upgrade in my case, and since the default timeout hasn't changed for years, it seems odd. Maybe the code was refactored so that timeouts are no longer ignored or retried - only guessing.</p> <p>EDIT: I'm running on a bare-metal cluster.</p>
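<p>If you want to experiment with this, note that the fabric8 client runs inside the Jenkins controller JVM (where the Kubernetes plugin lives), so the override presumably has to be set on the controller, not on the agent pod - which might explain why setting it on the pod made no difference. A sketch (the values are examples only):</p>

```yaml
# On the Jenkins controller container, not the agent pod:
env:
  - name: KUBERNETES_WEBSOCKET_TIMEOUT
    value: "10000"   # milliseconds
```

<p>Alternatively, the same setting can presumably be passed as a JVM system property to the controller, e.g. <code>-Dkubernetes.websocket.timeout=10000</code> in <code>JAVA_OPTS</code>.</p>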
<p>I would like to be able to limit the amount of jobs of a given &quot;type&quot; that run at the same time (maybe based on their label, e.g. no more than N jobs with label <code>mylabel</code> may run at the same time).</p> <p>I have a long running computation that requires a license key to run. I have N license keys and I would like to limit the amount of simultaneously running jobs to N. Here's how I imagine it working: I label the jobs with some special <code>tag</code>. Then, I schedule N + K jobs, then at most N jobs may be in state &quot;running&quot; and K jobs should be in the queue and may only transition to &quot;running&quot; state when the total number of running jobs labeled <code>mytag</code> is less or equal to N.</p> <p>[UPDATE]</p> <ul> <li>The jobs are independent of each other.</li> <li>The execution order is not important, although I would like them to be FIFO (time wise).</li> <li>The jobs are scheduled on user requests. That is, there is no fixed amount of work known in advance that needs to be processed, the requests to run a job with some set of parameters (configuration file) come sporadically in time.</li> </ul>
<p>Unfortunately there is no built-in feature in Kubernetes to do this using labels. But since your jobs are scheduled based on unpredictable user requests, you can achieve your goal like this:</p> <ul> <li>Create a new namespace: <code>kubectl create namespace quota-pod-ns</code></li> <li>Create a ResourceQuota:</li> </ul> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-max-number
  namespace: quota-pod-ns
spec:
  hard:
    pods: &quot;5&quot;
</code></pre> <p>This limits the maximum number of pods in the quota-pod-ns namespace to 5.</p> <ul> <li>Create your Kubernetes Jobs in the quota-pod-ns namespace.</li> </ul> <p>When you try to run a 6th job in that namespace, Kubernetes will try to create the 6th pod and fail to do so. But once one of the running pods completes, the job controller will create the pending pod, staying within the limit.</p>
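<p>A minimal Job submitted to that namespace could look like this (the image and name are hypothetical placeholders for your licensed computation):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: licensed-job-   # requires `kubectl create -f`, not `apply`
  namespace: quota-pod-ns
spec:
  backoffLimit: 6
  template:
    spec:
      containers:
        - name: worker
          image: my-computation:latest   # hypothetical image
      restartPolicy: Never
```

<p>Jobs whose pods are rejected by the quota stay pending and are retried by the job controller, so queued jobs start as slots free up - roughly in submission order, though strict FIFO is not guaranteed.</p>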
<p>I wrote a pipeline for an Hello World web app, nothing biggy, it's a simple hello world page. I made it so if the tests pass, it'll deploy it to a remote kubernetes cluster.</p> <p>My problem is that if I change the html page and try to redeploy into k8s the page remains the same (the pods aren't rerolled and the image is outdated).</p> <p>I have the <code>autopullpolicy</code> set to always. I thought of using specific tags within the deployment yaml but I have no idea how to integrate that with my jenkins (as in how do I make jenkins set the <code>BUILD_NUMBER</code> as the tag for the image in the deployment).</p> <p>Here is my pipeline:</p> <pre><code>pipeline {
    agent any
    environment {
        user = &quot;NAME&quot;
        repo = &quot;prework&quot;
        imagename = &quot;${user}/${repo}&quot;
        registryCreds = 'dockerhub'
        containername = &quot;${repo}-test&quot;
    }
    stages {
        stage (&quot;Build&quot;) {
            steps {
                // Building artifact
                sh '''
                docker build -t ${imagename} .
                docker run -p 80 --name ${containername} -dt ${imagename}
                '''
            }
        }
        stage (&quot;Test&quot;) {
            steps {
                sh '''
                IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${containername})
                STATUS=$(curl -sL -w &quot;%{http_code} \n&quot; $IP:80 -o /dev/null)
                if [ $STATUS -ne 200 ]; then
                    echo &quot;Site is not up, test failed&quot;
                    exit 1
                fi
                echo &quot;Site is up, test succeeded&quot;
                '''
            }
        }
        stage (&quot;Store Artifact&quot;) {
            steps {
                echo &quot;Storing artifact: ${imagename}:${BUILD_NUMBER}&quot;
                script {
                    docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
                        def customImage = docker.image(imagename)
                        customImage.push(BUILD_NUMBER)
                        customImage.push(&quot;latest&quot;)
                    }
                }
            }
        }
        stage (&quot;Deploy to Kubernetes&quot;) {
            steps {
                echo &quot;Deploy to k8s&quot;
                script {
                    kubernetesDeploy(configs: &quot;deployment.yaml&quot;, kubeconfigId: &quot;kubeconfig&quot;)
                }
            }
        }
    }
    post {
        always {
            echo &quot;Pipeline has ended, deleting image and containers&quot;
            sh '''
            docker stop ${containername}
            docker rm ${containername} -f
            '''
        }
    }
}
</code></pre> <p>EDIT: I used <code>sed</code> to replace the latest tag with the build number every time I'm running the pipeline and it works. I'm wondering if any of you have other ideas because it seems so messy right now. Thanks.</p>
<p>According to point 6 of the <a href="https://github.com/jenkinsci/kubernetes-cd-plugin#configure-the-plugin" rel="nofollow noreferrer">Kubernetes Continuous Deploy Plugin</a> documentation, you can add <code>enableConfigSubstitution: true</code> to the <code>kubernetesDeploy()</code> section and use <code>${BUILD_NUMBER}</code> instead of <code>latest</code> in deployment.yaml:</p> <blockquote> <p>By checking &quot;Enable Variable Substitution in Config&quot;, the variables (in the form of $VARIABLE or ${VARIABLE}) in the configuration files will be replaced with the values from corresponding environment variables before they are fed to the Kubernetes management API. This allows you to dynamically update the configurations according to each Jenkins task, for example, using the Jenkins build number as the image tag to be pulled.</p> </blockquote>
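<p>Concretely, based on the pipeline in the question, the deploy step would become something like this (a sketch):</p>

```groovy
stage ("Deploy to Kubernetes") {
    steps {
        script {
            // enableConfigSubstitution replaces ${...} placeholders in
            // deployment.yaml with Jenkins environment variables
            kubernetesDeploy(
                configs: "deployment.yaml",
                kubeconfigId: "kubeconfig",
                enableConfigSubstitution: true
            )
        }
    }
}
```

<p>and the image line in deployment.yaml would reference the build number, e.g. <code>image: NAME/prework:${BUILD_NUMBER}</code>, so every build rolls out a fresh tag instead of an unchanged <code>latest</code>.</p>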
<ul> <li>minikube v1.25.1 on Microsoft Windows 10 Home Single Language 10.0.19043 Build 19043 <ul> <li>MINIKUBE_HOME=C:\os\minikube\Minikube</li> </ul> </li> <li>Automatically selected the virtualbox driver</li> <li>Starting control plane node minikube in cluster minikube</li> <li>Creating virtualbox VM (CPUs=2, Memory=4000MB, Disk=20000MB) ... ! StartHost failed, but will try again: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory</li> <li>Creating virtualbox VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...</li> <li>Failed to start virtualbox VM. Running &quot;minikube delete&quot; may fix it: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory</li> </ul> <p>X Exiting due to HOST_VIRT_UNAVAILABLE: Failed to start host: creating host: create: precreate: This computer doesn't have VT-X/AMD-v enabled. Enabling it in the BIOS is mandatory</p> <ul> <li>Suggestion: Virtualization support is disabled on your computer. If you are running minikube within a VM, try '--driver=docker'. Otherwise, consult your systems BIOS manual for how to enable virtualization.</li> <li>Related issues: <ul> <li><a href="https://github.com/kubernetes/minikube/issues/3900" rel="noreferrer">https://github.com/kubernetes/minikube/issues/3900</a></li> <li><a href="https://github.com/kubernetes/minikube/issues/4730" rel="noreferrer">https://github.com/kubernetes/minikube/issues/4730</a></li> </ul> </li> </ul>
<p>Try the command below - it worked for me. I faced a similar issue on my laptop and tried multiple ways to resolve it, but nothing worked. The error message states that enabling VT-X/AMD-V in the BIOS is mandatory, but no such setting could be found in my BIOS. The following command skips the VT-X check, and minikube started normally:</p> <p><strong>minikube start --no-vtx-check</strong></p> <p>Refer to this thread: <a href="https://www.virtualbox.org/ticket/4032" rel="noreferrer">https://www.virtualbox.org/ticket/4032</a></p>
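<p>Since the failed attempt may have left a half-created VM behind (the error output in the question itself suggests running <code>minikube delete</code>), it can help to clean up first, as a sketch:</p>

```shell
# Remove the partially created VM, then retry without the VT-X check
minikube delete
minikube start --driver=virtualbox --no-vtx-check
```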
<p><code>kubectl version</code> gives the following output:</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.4&quot;, GitCommit:&quot;b695d79d4f967c403a96986f1750a35eb75e75f1&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-11-17T15:48:33Z&quot;, GoVersion:&quot;go1.16.10&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.5&quot;, GitCommit:&quot;aea7bbadd2fc0cd689de94a54e5b7b758869d691&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-09-15T21:04:16Z&quot;, GoVersion:&quot;go1.16.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
</code></pre> <p>I have used <code>kubectl</code> to edit the persistent volume from 8Gi to 30Gi, as shown here: <a href="https://i.stack.imgur.com/xYH2o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xYH2o.png" alt="enter image description here" /></a></p> <p>However, when I exec into the pod and run <code>df -h</code>, I see the following:</p> <p><a href="https://i.stack.imgur.com/n02oH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n02oH.png" alt="enter image description here" /></a></p> <p>I have deleted the pods, but it shows the same thing again. If I <code>cd</code> into <code>/dev</code>, I don't see the disk and <code>vda1</code> there either. I think I actually want the <code>bitnami/influxdb</code> volume to be 30Gi. Please guide me, and let me know if more info is needed.</p>
<p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p> <p>Based on the comments provided here, there could be several reasons for this behavior.</p> <ol> <li>According to the documentation from the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims" rel="nofollow noreferrer">Kubernetes website</a>, manually changing the PersistentVolume size will not change the volume size:</li> </ol> <blockquote> <p><strong>Warning:</strong> Directly editing the size of a PersistentVolume can prevent an automatic resize of that volume. If you edit the capacity of a PersistentVolume, and then edit the .spec of a matching PersistentVolumeClaim to make the size of the PersistentVolumeClaim match the PersistentVolume, then no storage resize happens. The Kubernetes control plane will see that the desired state of both resources matches, conclude that the backing volume size has been manually increased and that no resize is necessary.</p> </blockquote> <ol start="2"> <li>It also depends on how Kubernetes is running and whether the <code>allowVolumeExpansion</code> feature is supported. From <a href="https://github.com/digitalocean/csi-digitalocean/issues/291#issuecomment-598783816" rel="nofollow noreferrer">DigitalOcean</a>:</li> </ol> <blockquote> <p>are you running one of DigitalOcean's managed clusters, or a DIY cluster running on DigitalOcean infrastructure? In case of the latter, which version of our CSI driver do you use? (You need v1.2.0 or later for volume expansion to be supported.)</p> </blockquote>
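<p>For reference, the supported way to grow a volume is to edit <code>spec.resources.requests.storage</code> on the PersistentVolumeClaim (not the PV), and it only takes effect if the StorageClass has <code>allowVolumeExpansion: true</code>. A minimal sketch (the PVC name below is a placeholder; check yours with <code>kubectl get pvc</code>):</p> <pre><code>kubectl patch pvc data-influxdb-0 -p '{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;30Gi&quot;}}}}'
</code></pre> <p>Once the new size shows up in the PVC's <code>status.capacity</code>, restarting the pod lets the filesystem expansion finish for volume types that require it.</p>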
<p>I have been trying to run a Python Django application on Kubernetes, but without success. The application runs fine in Docker.</p> <p>This is the YAML Deployment for Kubernetes:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: &quot;1&quot; creationTimestamp: &quot;2022-02-06T14:48:45Z&quot; generation: 1 labels: app: keyvault name: keyvault namespace: default resourceVersion: &quot;520&quot; uid: ccf0e490-517f-4102-b282-2dcd71008948 spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: keyvault strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: app: keyvault spec: containers: - image: david900412/keyvault_web:latest imagePullPolicy: Always name: keyvault-web-5wrph resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: conditions: - lastTransitionTime: &quot;2022-02-06T14:48:45Z&quot; lastUpdateTime: &quot;2022-02-06T14:48:45Z&quot; message: Deployment does not have minimum availability. reason: MinimumReplicasUnavailable status: &quot;False&quot; type: Available - lastTransitionTime: &quot;2022-02-06T14:48:45Z&quot; lastUpdateTime: &quot;2022-02-06T14:48:46Z&quot; message: ReplicaSet &quot;keyvault-6944b7b468&quot; is progressing. reason: ReplicaSetUpdated status: &quot;True&quot; type: Progressing observedGeneration: 1 replicas: 1 unavailableReplicas: 1 updatedReplicas: 1 </code></pre> <p>This is the <code>docker compose</code> file I'm using to run the image in Docker:</p> <pre><code>version: &quot;3.9&quot; services: web: build: . 
command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/code ports: - &quot;8000:8000&quot; </code></pre> <p>This is the docker file I'm using to run the image in Docker:</p> <pre><code>FROM python:3.9 WORKDIR /code COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY . /code/ </code></pre> <p><code>Kubectl describe pod</code> Output:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 51s default-scheduler Successfully assigned default/keyvault-6944b7b468-frss4 to minikube Normal Pulled 37s kubelet Successfully pulled image &quot;david900412/keyvault_web:latest&quot; in 12.5095594s Normal Pulled 33s kubelet Successfully pulled image &quot;david900412/keyvault_web:latest&quot; in 434.2995ms Normal Pulling 17s (x3 over 49s) kubelet Pulling image &quot;david900412/keyvault_web:latest&quot; Normal Created 16s (x3 over 35s) kubelet Created container keyvault-web-5wrph Normal Started 16s (x3 over 35s) kubelet Started container keyvault-web-5wrph Normal Pulled 16s kubelet Successfully pulled image &quot;david900412/keyvault_web:latest&quot; in 395.5345ms Warning BackOff 5s (x4 over 33s) kubelet Back-off restarting failed container </code></pre> <p><code>Kubectl log pod</code> Does not show anything :(</p> <p>Thanks for your help.</p>
<p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p> <p>Based on the comments, the solution should be as shown below.</p> <ol> <li>Remove <code>volumes</code> definition from the Compose file:</li> </ol> <pre><code>version: &quot;3.9&quot; services: web: build: . command: python manage.py runserver 0.0.0.0:8000 ports: - &quot;8000:8000&quot; </code></pre> <ol start="2"> <li>Specify the startup command with <code>CMD</code> for an image in Dockerfile:</li> </ol> <pre><code> FROM python:3.9 WORKDIR /code COPY requirements.txt /code/ RUN pip install -r requirements.txt COPY . /code/ CMD [&quot;python3&quot;,&quot;manage.py&quot;,&quot;runserver&quot;] </code></pre> <p>Then translate a Docker Compose file to Kubernetes resources. This can be done with using <a href="https://kompose.io/" rel="nofollow noreferrer">Kompose</a> or another suitable solution.</p>
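<p>As a sketch of that last step, Kompose can generate the Kubernetes manifests directly from the Compose file (the output file names below are what Kompose typically produces for a service named <code>web</code>, so treat them as illustrative):</p> <pre><code>kompose convert -f docker-compose.yaml
kubectl apply -f web-deployment.yaml -f web-service.yaml
</code></pre>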
<h3>Present state</h3> <p>In v1.22 Kubernetes dropped support for <code>v1beta1</code> API. That made our release pipeline crash and we are not sure how to fix it.</p> <p>We use build pipelines to build .NET Core applications and deploy them to the Azure Container Registry. Then there are release pipelines that use <code>helm</code> to upgrade them in the cluster from that ACR. This is how it looks exactly.</p> <p>Build pipeline:</p> <ol> <li>.NET download, restore, build, test, publish</li> <li>Docker task v0: Build task</li> <li>Docker task v0: Push to the <code>ACR</code> task</li> <li>Artifact publish to Azure Pipelines</li> </ol> <p>Release pipeline:</p> <ol> <li>Helm tool installer: Install <code>helm</code> v3.2.4 (check for latest version of Helm unchecked) and install newest <code>Kubectl</code> (Check for latest version checked)</li> <li>Bash task:</li> </ol> <pre><code>az acr login --name &lt;acrname&gt; az acr helm repo add --name &lt;acrname&gt; </code></pre> <ol start="3"> <li>Helm upgrade task: <ul> <li>chart name <code>&lt;acrname&gt;/&lt;chartname&gt;</code></li> <li>version <code>empty</code></li> <li>release name `</li> </ul> </li> </ol> <p>After the upgrade to Kubernetes v1.22 we are getting the following error in Release step 3.:</p> <p><code>Error: UPGRADE FAILED: unable to recognize &quot;&quot;: no matches for kind &quot;Ingress&quot; in version &quot;extensions/v1beta1&quot;</code>.</p> <h3>What I've already tried</h3> <p>Error is pretty obvious and from <a href="https://helm.sh/docs/topics/version_skew/" rel="nofollow noreferrer">Helm compatibility table</a> it states clearly that I need to upgrade the release pipelines to use at least Helm v3.7.x. Unfortunately in this version OCI functionality (about this shortly) is still in experimental phase so at least v3.8.x has to be used.</p> <h6>Bumping helm version to v3.8.0</h6> <p>That makes release step 3. 
report:</p> <p><code>Error: looks like &quot;https://&lt;acrname&gt;.azurecr.io/helm/v1/repo&quot; is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: unknown field &quot;acrMetadata&quot;</code></p> <p>After reading Microsoft tutorial on <a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-helm-repos" rel="nofollow noreferrer">how to live with <code>helm</code> and <code>ACR</code></a> I learned that <code>az acr helm</code> commands use helm v2 so are deprecated and <code>OCI</code> artifacts should be used.</p> <h6>Switching to OCI part 1</h6> <p>After reading that I changed release step 2. to a one-liner:</p> <p><code>helm registry login &lt;acrname&gt;.azurecr.io --username &lt;username&gt; --password &lt;password&gt;</code></p> <p>That now gives me <code>Login Succeeded</code> in release step 2. but release step 3. fails with</p> <p><code>Error: failed to download &quot;&lt;acrname&gt;/&lt;reponame&gt;&quot;</code>.</p> <h6>Switching to OCI part 2</h6> <p>I thought that the helm task is incompatible or something with the new approach so I removed release step 3. and decided to make it from the command line in step 2. So now step 2. 
looks like this:</p> <pre><code>helm registry login &lt;acrname&gt;.azurecr.io --username &lt;username&gt; --password &lt;password&gt; helm upgrade --install --wait -n &lt;namespace&gt; &lt;deploymentName&gt; oci://&lt;acrname&gt;.azurecr.io/&lt;reponame&gt; --version latest --values ./values.yaml </code></pre> <p>Unfortunately, that still gives me:</p> <p><code>Error: failed to download &quot;oci://&lt;acrname&gt;.azurecr.io/&lt;reponame&gt;&quot; at version &quot;latest&quot;</code></p> <h6>Helm pull, export, upgrade instead of just upgrade</h6> <p>The next try was to split the <code>helm upgrade</code> into separate <code>helm pull</code>, <code>helm export</code> and then <code>helm upgrade</code> steps, but</p> <p><code>helm pull oci://&lt;acrname&gt;.azurecr.io/&lt;reponame&gt; --version latest</code></p> <p>gives me:</p> <p><code>Error: manifest does not contain minimum number of descriptors (2), descriptors found: 0</code></p> <h6>Changing <code>docker build</code> and <code>docker push</code> tasks to v2</h6> <p>I also tried changing the docker tasks in the build pipelines to v2. But that didn't change anything at all.</p>
<p>Have you tried changing the Ingress object's <code>apiVersion</code> to <code>networking.k8s.io/v1beta1</code> or <code>networking.k8s.io/v1</code>? <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="nofollow noreferrer">Support for Ingress in the extensions/v1beta1 API version is dropped in k8s 1.22</a>.</p> <p>Our <code>ingress.yaml</code> file in our helm chart looks something like this to support multiple k8s versions. You can ignore the AWS-specific annotations since you're using Azure. Our chart has a global value of <code>ingress.enablePathType</code> because at the time of writing the yaml file, <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/1772" rel="nofollow noreferrer">AWS Load Balancer did not support pathType</a> and so we set the value to false.</p> <pre class="lang-yaml prettyprint-override"><code>{{- if .Values.global.ingress.enabled -}} {{- $useV1Ingress := and (.Capabilities.APIVersions.Has &quot;networking.k8s.io/v1/Ingress&quot;) .Values.global.ingress.enablePathType -}} {{- if $useV1Ingress -}} apiVersion: networking.k8s.io/v1 {{- else if semverCompare &quot;&gt;=1.14-0&quot; .Capabilities.KubeVersion.GitVersion -}} apiVersion: networking.k8s.io/v1beta1 {{- else -}} apiVersion: extensions/v1beta1 {{- end }} kind: Ingress metadata: name: example-ingress labels: {{- include &quot;my-chart.labels&quot; . | nindent 4 }} annotations: {{- if .Values.global.ingress.group.enabled }} alb.ingress.kubernetes.io/group.name: {{ required &quot;ingress.group.name is required when ingress.group.enabled is true&quot; .Values.global.ingress.group.name }} {{- end }} {{- with .Values.global.ingress.annotations }} {{- toYaml . | nindent 4 }} {{- end }} # Add these tags to the AWS Application Load Balancer alb.ingress.kubernetes.io/tags: k8s.namespace/{{ .Release.Namespace }}={{ .Release.Namespace }} spec: rules: - host: {{ include &quot;my-chart.applicationOneServerUrl&quot; . 
| quote }} http: paths: {{- if $useV1Ingress }} - path: / pathType: Prefix backend: service: name: {{ $applicationOneServiceName }} port: name: http-grails {{- else }} - path: /* backend: serviceName: {{ $applicationOneServiceName }} servicePort: http-grails {{- end }} - host: {{ include &quot;my-chart.applicationTwoServerUrl&quot; . | quote }} http: paths: {{- if $useV1Ingress }} - path: / pathType: Prefix backend: service: name: {{ .Values.global.applicationTwo.serviceName }} port: name: http-grails {{- else }} - path: /* backend: serviceName: {{ .Values.global.applicationTwo.serviceName }} servicePort: http-grails {{- end }} {{- end }} </code></pre>
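<p>For context, a values snippet matching the template above might look like the following (these keys come from our chart's <code>values.yaml</code>, so adapt the names to your own chart):</p> <pre class="lang-yaml prettyprint-override"><code>global:
  ingress:
    enabled: true
    # set to false if your load balancer controller does not support pathType
    enablePathType: true
    annotations: {}
    group:
      enabled: false
</code></pre>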
<p>I have an angular application which is deployed on apache container running on Kubernetes. I wanted to setup liveness and readiness probe for pods but I am out of ideas .Any help would be appreciated.</p>
<p>Based on the link you provided, you can use the following as a start:</p> <pre><code>apiVersion: v1 kind: Pod ... spec: ... containers: - name: ... ... livenessProbe: tcpSocket: port: 80 # &lt;-- live when your web server is running initialDelaySeconds: 5 # &lt;-- wait 5s before starting to probe periodSeconds: 20 # &lt;-- do this probe every 20 seconds readinessProbe: httpGet: path: / # &lt;-- ready to accept connection when your home page is serving port: 80 initialDelaySeconds: 15 periodSeconds: 10 failureThreshold: 3 # &lt;-- must not fail &gt; 3 probes </code></pre>
<p>I am using Kubernetes to deploy all my microservices, provided by Azure Kubernetes Service.</p> <p>Whenever I release an update of a microservice, which has been happening frequently over the last month, it pulls the new image from the Azure Container Registry.</p> <p>I was trying to figure out where these images reside in the cluster.</p> <p>Just like Docker stores pulled images in /var/lib/docker, and since Kubernetes uses Docker under the hood, maybe it stores them somewhere too.</p> <p>But if this is the case, how can I delete the old images from the cluster that are not in use anymore?</p>
<p>Clusters with Linux node pools created on Kubernetes v1.19 or greater default to containerd for its container runtime (<a href="https://learn.microsoft.com/en-us/azure/aks/cluster-configuration#container-runtime-configuration" rel="nofollow noreferrer">Container runtime configuration</a>).</p> <p>To manually remove unused images on a node running containerd:</p> <p>Identity node names:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get nodes </code></pre> <p>Start an interactive debugging container on a node (<a href="https://learn.microsoft.com/en-us/azure/aks/ssh" rel="nofollow noreferrer">Connect with SSH to Azure Kubernetes Service</a>):</p> <pre class="lang-sh prettyprint-override"><code>kubectl debug node/aks-agentpool-11045208-vmss000003 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 </code></pre> <p>Setup <code>crictl</code> on the debugging container (<a href="https://github.com/kubernetes-sigs/cri-tools/releases" rel="nofollow noreferrer">check for newer releases of crictl</a>):</p> <blockquote> <p>The host node's filesystem is available at <code>/host</code>, so configure <code>crictl</code> to use the host node's <code>containerd.sock</code>.</p> </blockquote> <pre class="lang-sh prettyprint-override"><code>curl -sL https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz | tar xzf - -C /usr/local/bin \ &amp;&amp; export CONTAINER_RUNTIME_ENDPOINT=unix:///host/run/containerd/containerd.sock IMAGE_SERVICE_ENDPOINT=unix:///host/run/containerd/containerd.sock </code></pre> <p>Remove unused images on the node:</p> <pre class="lang-sh prettyprint-override"><code>crictl rmi --prune </code></pre>
<p>I've created a NiFi cluster on the AWS EKS. The initial deployment was working fine. Later I attached Persistent volume and persistent volume claim to the NiFi setup. After starting the NiFi, I'm getting this error:</p> <pre><code>ERROR in ch.qos.logback.core.rolling.RollingFileAppender[USER_FILE] - openFile(/opt/nifi/nifi-current/logs/nifi-user.log,true) call failed. java.io.FileNotFoundException: /opt/nifi/nifi-current/logs/nifi-user.log (Permission denied) </code></pre> <p>As I'm not an expert in NiFi and Kubernetes, I couldn't identify the issue. It looks like a permission issue on NiFi. The NiFi version I'm using is NiFI 1.15.0.</p> <p>What may be the possible root cause for this? Is that because NiFi is not using the root user or is that something else?</p> <p>I'm sharing the full error here:</p> <pre><code>13:56:22,449 |-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[USER_FILE] - openFile(/opt/nifi/nifi-current/logs/nifi-user.log,true) call failed. java.io.FileNotFoundException: /opt/nifi/nifi-current/logs/nifi-user.log (Permission denied) at java.io.FileNotFoundException: /opt/nifi/nifi-current/logs/nifi-user.log (Permission denied) at at java.io.FileOutputStream.open0(Native Method) at at java.io.FileOutputStream.open(FileOutputStream.java:270) at at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:213) at at ch.qos.logback.core.recovery.ResilientFileOutputStream.&lt;init&gt;(ResilientFileOutputStream.java:26) at at ch.qos.logback.core.FileAppender.openFile(FileAppender.java:204) at at ch.qos.logback.core.FileAppender.start(FileAppender.java:127) at at ch.qos.logback.core.rolling.RollingFileAppender.start(RollingFileAppender.java:100) at at ch.qos.logback.core.joran.action.AppenderAction.end(AppenderAction.java:90) at at ch.qos.logback.core.joran.spi.Interpreter.callEndAction(Interpreter.java:309) at at ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:193) at at 
ch.qos.logback.core.joran.spi.Interpreter.endElement(Interpreter.java:179) at at ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:62) at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:165) at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:152) at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:110) at at ch.qos.logback.core.joran.GenericConfigurator.doConfigure(GenericConfigurator.java:53) at at ch.qos.logback.classic.util.ContextInitializer.configureByResource(ContextInitializer.java:75) at at ch.qos.logback.classic.util.ContextInitializer.autoConfig(ContextInitializer.java:150) at at org.slf4j.impl.StaticLoggerBinder.init(StaticLoggerBinder.java:84) at at org.slf4j.impl.StaticLoggerBinder.&lt;clinit&gt;(StaticLoggerBinder.java:55) at at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150) at at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124) at at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:417) at at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:362) at at org.apache.nifi.bootstrap.RunNiFi.&lt;init&gt;(RunNiFi.java:145) at at org.apache.nifi.bootstrap.RunNiFi.main(RunNiFi.java:284) </code></pre> <p>I'm also sharing the Kubernetes manifest part that describe the pv and PVC I used for creating the NiFi cluster:</p> <pre><code> volumeMounts: - name: &quot;data&quot; mountPath: /opt/nifi/nifi-current/data - name: &quot;flowfile-repository&quot; mountPath: /opt/nifi/nifi-current/flowfile_repository - name: &quot;content-repository&quot; mountPath: /opt/nifi/nifi-current/content_repository - name: &quot;provenance-repository&quot; mountPath: /opt/nifi/nifi-current/provenance_repository - name: &quot;logs&quot; mountPath: /opt/nifi/nifi-current/logs volumeClaimTemplates: - metadata: name: &quot;data&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: 
requests: storage: 1Gi - metadata: name: &quot;flowfile-repository&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 10Gi - metadata: name: &quot;content-repository&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 10Gi - metadata: name: &quot;provenance-repository&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 10Gi - metadata: name: &quot;logs&quot; spec: accessModes: [&quot;ReadWriteOnce&quot;] storageClassName: &quot;gp2&quot; resources: requests: storage: 5Gi </code></pre> <p>Any help is appreciated.</p>
<p>Assuming you don't have any issues creating the PV and PVC, try to use an extra <code>initContainers</code> section to allow the NiFi user with UID and GID 1000 to read and write to the provisioned EBS volume:</p> <pre class="lang-yaml prettyprint-override"><code>initContainers: - name: fixmount image: busybox command: [ 'sh', '-c', 'chown -R 1000:1000 /opt/nifi/nifi-current/logs' ] volumeMounts: - name: logs mountPath: /opt/nifi/nifi-current/logs </code></pre> <p>I hope this will help solve your issues. Here is the official Kubernetes documentation page <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a>.</p>
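<p>An alternative worth trying is a pod-level <code>securityContext</code> with <code>fsGroup</code>, which asks Kubernetes to make the mounted volumes group-accessible without a chown init container (a sketch, assuming the NiFi image really runs with UID/GID 1000):</p> <pre class="lang-yaml prettyprint-override"><code>spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
</code></pre>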
<p>What am I doing wrong in the manifest below?</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /nginx backend: serviceName: nginx servicePort: 80 </code></pre> <p>The error I am getting:</p> <pre><code>error validating &quot;ingress.yaml&quot;: error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field &quot;serviceName&quot; in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field &quot;servicePort&quot; in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with --validate=false </code></pre>
<p>According to <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">this documentation</a>, you need to change <code>serviceName</code> and <code>servicePort</code> to the nested form used by <code>networking.k8s.io/v1</code>.</p> <blockquote> <p>Each HTTP rule contains (...) a list of paths (for example, <code>/testpath</code>), each of which has an associated backend defined with a <code>service.name</code> and a <code>service.port.name</code> or <code>service.port.number</code>. Both the host and path must match the content of an incoming request before the load balancer directs traffic to the referenced Service.</p> </blockquote> <p>Here is your yaml file with corrections (note that each path also requires a <code>pathType</code> in <code>networking.k8s.io/v1</code>):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /nginx pathType: Prefix backend: service: name: nginx port: number: 80 </code></pre>
<p>How do I access the Minio console?</p> <p><code>minio.yaml</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: minio labels: app: minio spec: clusterIP: None ports: - port: 9000 name: minio selector: app: minio --- apiVersion: apps/v1 kind: StatefulSet metadata: name: minio spec: serviceName: minio replicas: 4 selector: matchLabels: app: minio template: metadata: labels: app: minio spec: terminationGracePeriodSeconds: 20 affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: app operator: In values: - minio topologyKey: kubernetes.io/hostname containers: - name: minio env: - name: MINIO_ACCESS_KEY value: &quot;hengshi&quot; - name: MINIO_SECRET_KEY value: &quot;hengshi202020&quot; image: minio/minio:RELEASE.2018-08-02T23-11-36Z args: - server - http://minio-0.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/ - http://minio-1.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/ - http://minio-2.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/ - http://minio-3.minio-internal.cts-cernerdevtools-minio.svc.cluster.local/data/ ports: - containerPort: 9000 - containerPort: 9001 volumeMounts: - name: minio-data mountPath: /data volumeClaimTemplates: - metadata: name: minio-data spec: accessModes: - ReadWriteMany resources: requests: storage: 300M --- apiVersion: v1 kind: Service metadata: name: minio-service spec: type: NodePort ports: - name: server-port port: 9000 targetPort: 9000 protocol: TCP nodePort: 30009 - name: console-port port: 9001 targetPort: 9001 protocol: TCP nodePort: 30010 selector: app: minio </code></pre> <p><code>curl http://NodeIP:30010</code> is failed<br /> I tried <code>container --args --console-address &quot;:9001&quot;</code> or <code>env MINIO_BROWSER</code> still not accessible</p> <p>One more question, what is the latest image startup parameter for Minio? 
There seems to be something wrong with my args</p> <p><a href="https://i.stack.imgur.com/wgXIa.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>You can specify <code>--console-address :9001</code> in the <code>args:</code> section of your deployment.yaml file, as shown below:</p> <pre><code>args: - server - --console-address - :9001 - /data </code></pre> <p>In the same way, with the latest MinIO your Service and Ingress need to point to port 9001 for the console:</p> <pre><code>ports: - protocol: TCP port: 9001 </code></pre>
<p>This is my pod:</p> <pre class="lang-yaml prettyprint-override"><code>spec: containers: volumeMounts: - mountPath: /etc/configs/config.tmpl name: config-main readOnly: true subPath: config.tmpl volumes: - configMap: defaultMode: 420 name: config-main name: config-main </code></pre> <p>This is my config map:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: name: config-main data: config.tmpl: | # here goes my config content </code></pre> <p>This produces the following error:</p> <blockquote> <p>Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting &quot;/var/lib/kubelet/pods/dc66ebd1-90ef-4c25-bb69-4b3329f61a5a/volume-subpaths/config-main/podname/1&quot; to rootfs at &quot;/etc/configs/config.tmpl&quot; caused: mount through procfd: not a directory: unknown</p> </blockquote> <p>The container has pre-existing <code>/etc/configs/config.tmpl</code> file that I want to override</p>
<p>To mount it as a file instead of a directory, try:</p> <pre><code>spec: containers: volumeMounts: - mountPath: /etc/configs/config.tmpl name: config-main readOnly: true subPath: config.tmpl volumes: - configMap: defaultMode: 420 items: - key: config.tmpl path: config.tmpl name: config-main name: config-main </code></pre>
<p>I'm trying to configure a Horizontal Pod Autoscaler (HPA) on Google Kubernetes Engine (GKE) using External Metrics from an Ingress LoadBalancer, basing the configuration on instructions such as</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling</a> and <a href="https://blog.doit-intl.com/autoscaling-k8s-hpa-with-google-http-s-load-balancer-rps-stackdriver-metric-92db0a28e1ea" rel="noreferrer">https://blog.doit-intl.com/autoscaling-k8s-hpa-with-google-http-s-load-balancer-rps-stackdriver-metric-92db0a28e1ea</a></p> <p>With an HPA like </p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: my-api namespace: production spec: minReplicas: 1 maxReplicas: 20 metrics: - external: metricName: loadbalancing.googleapis.com|https|request_count metricSelector: matchLabels: resource.labels.forwarding_rule_name: k8s-fws-production-lb-my-api--63e2a8ddaae70 targetAverageValue: "1" type: External scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: my-api </code></pre> <p>the autoscaler does kick in when the request count rises - but placing heavy load on the service, like 100 simultaneous requests per second, doesn't increase the external metric <code>request_count</code> much beyond 6 RPS, while the observed <code>backend_latencies</code> metric in Stackdriver does increase significantly; so I'd like to utilise that metric by adding to the HPA configuration, like so:</p> <pre><code> - external: metricName: loadbalancing.googleapis.com|https|backend_latencies metricSelector: matchLabels: resource.labels.forwarding_rule_name: k8s-fws-production-lb-my-api--63e2a8ddaae70 targetValue: "3000" type: External </code></pre> <p>but that results in the error:</p> <p><code>...unable to fetch metrics from external metrics API: googleapi: Error 400: Field aggregation.perSeriesAligner had an 
invalid value of "ALIGN_RATE": The aligner cannot be applied to metrics with kind DELTA and value type DISTRIBUTION., badRequest</code></p> <p>which can be observed with the command</p> <pre><code>$ kubectl describe hpa -n production </code></pre> <p>or by visiting</p> <p><a href="http://localhost:8080/apis/external.metrics.k8s.io/v1beta1/namespaces/default/loadbalancing.googleapis.com%7Chttps%7Cbackend_latencies" rel="noreferrer">http://localhost:8080/apis/external.metrics.k8s.io/v1beta1/namespaces/default/loadbalancing.googleapis.com%7Chttps%7Cbackend_latencies</a></p> <p>after setting up a proxy with</p> <pre><code>$ kubectl proxy --port=8080 </code></pre> <p>Are <code>https/backend_latencies</code> or <a href="https://cloud.google.com/monitoring/api/metrics_gcp#gcp-loadbalancing" rel="noreferrer"><code>https/total_latencies</code></a> not supported as External Stackdriver Metrics in an HPA configuration for GKE?</p>
<p>Maybe someone would find this helpful, though the question is old.</p> <p>My working config looks like next:</p> <pre><code> metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 95 - type: External external: metric: name: loadbalancing.googleapis.com|https|backend_latencies selector: matchLabels: resource.labels.backend_name: frontend metric.labels.proxy_continent: Europe reducer: REDUCE_PERCENTILE_95 target: type: Value value: &quot;79.5&quot; </code></pre> <p><code>type: Value</code> used because it's the only way to not divide the metric value by the replica number.</p> <p><code>reducer: REDUCE_PERCENTILE_95</code> used to work only with a single value of the distribution (<a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/blob/master/custom-metrics-stackdriver-adapter/pkg/adapter/translator/query_builder.go#L44" rel="nofollow noreferrer">source</a>).</p> <p>Also, I edited <code>custom-metrics-stackdriver-adapter</code> deployment to look like this:</p> <pre><code> - image: gcr.io/gke-release/custom-metrics-stackdriver-adapter:v0.12.2-gke.0 imagePullPolicy: Always name: pod-custom-metrics-stackdriver-adapter command: - /adapter - --use-new-resource-model=true - --fallback-for-container-metrics=true - --enable-distribution-support=true </code></pre> <p>The thing is this key <code>enable-distribution-support=true</code>, which enables working with distribution kind of metrics.</p>
<p>I have a cluster and I created namespaces for different teams. Then I tried to apply an ingress to one namespace with this command: <code>kubectl apply -f ing2_dev_plat.yaml -n namespace_name</code>.</p> <p>After that, the error below was thrown. How can I properly configure the ingress controller to work across several namespaces?</p> <p>The Nginx ingress controller service is in the default namespace.</p> <pre><code>Error from server (BadRequest): error when creating &quot;ing2_dev_plat.yaml&quot;: admission webhook &quot;validate.nginx.ingress.kubernetes.io&quot; denied the request: ------------------------------------------------------------------------------- Error: exit status 1 2022/02/11 09:17:49 [warn] 3250#3250: the &quot;http2_max_field_size&quot; directive is obsolete, use the &quot;large_client_header_buffers&quot; directive instead in /tmp/nginx-cfg1414424955:143 nginx: [warn] the &quot;http2_max_field_size&quot; directive is obsolete, use the &quot;large_client_header_buffers&quot; directive instead in /tmp/nginx-cfg1414424955:143 2022/02/11 09:17:49 [warn] 3250#3250: the &quot;http2_max_header_size&quot; directive is obsolete, use the &quot;large_client_header_buffers&quot; directive instead in /tmp/nginx-cfg1414424955:144 nginx: [warn] the &quot;http2_max_header_size&quot; directive is obsolete, use the &quot;large_client_header_buffers&quot; directive instead in /tmp/nginx-cfg1414424955:144 2022/02/11 09:17:49 [warn] 3250#3250: the &quot;http2_max_requests&quot; directive is obsolete, use the &quot;keepalive_requests&quot; directive instead in /tmp/nginx-cfg1414424955:145 nginx: [warn] the &quot;http2_max_requests&quot; directive is obsolete, use the &quot;keepalive_requests&quot; directive instead in /tmp/nginx-cfg1414424955:145 2022/02/11 09:17:49 [emerg] 3250#3250: duplicate location &quot;/&quot; in /tmp/nginx-cfg1414424955:1045 nginx: [emerg] duplicate location &quot;/&quot; in /tmp/nginx-cfg1414424955:1045 nginx: configuration file /tmp/nginx-cfg1414424955 test 
failed </code></pre> <p><a href="https://i.stack.imgur.com/liqRY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/liqRY.png" alt="enter image description here" /></a></p>
<p>As I said in my comment under the question:</p> <pre class="lang-text prettyprint-override"><code>nginx: [emerg] duplicate location &quot;/&quot; in /tmp/nginx-cfg1414424955:1045 </code></pre> <p>may indicate you have the same location defined twice.</p> <p>If you have any other Ingress resources with <code>path: /</code>, you have to edit those accordingly.</p> <p>You can get all Ingress resources with their paths with</p> <pre class="lang-text prettyprint-override"><code>kubectl get ingress -A -o=jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.rules[*].http.paths[*].path}{&quot;\n&quot;}{end}' </code></pre>
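<p>For illustration, here is a minimal sketch of two Ingress resources that would no longer collide, by serving the same <code>path: /</code> under distinct hosts (the names, namespaces, and hostnames below are assumptions):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-a-ingress
  namespace: team-a
spec:
  rules:
    - host: team-a.example.com      # distinct host avoids the duplicate location
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: team-a-svc
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-b-ingress
  namespace: team-b
spec:
  rules:
    - host: team-b.example.com      # same "/" path is fine on a different host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: team-b-svc
                port:
                  number: 80
```

<p>Alternatively, keep one host and give each Ingress a distinct <code>path</code>.</p>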
<p>It works fine locally, but when I deployed it on a DigitalOcean Kubernetes server it shows the error below. Please help.</p> <p><a href="https://i.stack.imgur.com/VxIXr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VxIXr.png" alt="enter image description here" /></a></p>
<p>Check your Node.js version. This happens with newer versions of Node.</p>
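<p>If a newer Node release is indeed the culprit, one common workaround is to pin the deployment image to a specific LTS version instead of a floating tag. A minimal Dockerfile sketch (the exact tag is an assumption; pick the LTS line your app was developed against):</p>

```dockerfile
# Pin to a known-good LTS release instead of "node:latest"/"node:alpine"
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["npm", "start"]
```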
<p>Recently my kafka producer running as a cronjob on a kubernetes cluster has started doing the following when pushing new messages to the queue:</p> <pre><code>{&quot;@version&quot;:1,&quot;source_host&quot;:&quot;&lt;job pod name&gt;&quot;,&quot;message&quot;:&quot;[Producer clientId=producer-1] Resetting the last seen epoch of partition &lt;topic name&gt; to 4 since the associated topicId changed from null to JkTOJi-OSzavDEomRvAIOQ&quot;,&quot;thread_name&quot;:&quot;kafka-producer-network-thread | producer-1&quot;,&quot;@timestamp&quot;:&quot;2022-02-11T08:45:40.212+0000&quot;,&quot;level&quot;:&quot;INFO&quot;,&quot;logger_name&quot;:&quot;org.apache.kafka.clients.Metadata&quot;} </code></pre> <p>This results in the producer running into a timeout:</p> <pre><code>&quot;exception&quot;:{&quot;exception_class&quot;:&quot;java.util.concurrent.ExecutionException&quot;,&quot;exception_message&quot;:&quot;org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for &lt;topic name&gt;:120000 ms has passed since batch creation&quot;, stacktrace....} </code></pre> <p>The logs of the kafka consumer pod and kafka cluster pods don't show any out-of-the-ordinary changes.</p> <p>Has anyone seen this behavior before and if yes, how do I prevent it?</p>
<p>Reason:</p> <blockquote> <p>the Java API client (the producer) cannot connect to Kafka</p> </blockquote> <p>Solution:</p> <blockquote> <p>On each server, add the following line to the properties file (replacing the value with that server's IP):</p> </blockquote> <pre class="lang-sh prettyprint-override"><code>host.name=&lt;server IP&gt; </code></pre>
<p>I deployed Jenkins via a Helm chart (<a href="https://github.com/jenkinsci/helm-charts/releases/tag/jenkins-3.11.4" rel="nofollow noreferrer">jenkins-helm:3.11.4</a>) on my local Kubernetes cluster (Rancher Desktop). I installed Docker on the <code>jenkins/inbound-agent</code> image because it is not included; for the controller I am using the default Jenkins image as provided. When I run the docker command in the local pipeline I get a permission error, shown below.</p> <p>I am aware that the issue is the permission on the <code>/var/run/docker.sock</code> socket, but I could not fix it and am really stuck. I tried adding <code>command: [&quot;sh&quot;,&quot;-c&quot;,&quot;chmod 777 /var/run/docker.sock&quot;]</code> to the agent in values.yaml, but then Jenkins did not come up and run properly. I tried adding <code>RUN usermod -aG docker jenkins</code> to the Dockerfile, but it's still the same.</p> <pre><code>jenkins@default-cnmq7:~/agent$ id uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),0(root) jenkins@default-cnmq7:~/agent$ grep docker /etc/group docker:x:107: </code></pre> <p>So how can I grant permission on this socket through the Helm chart for the Jenkins agent pod?
Or what is the proper solution to fix this issue.</p> <pre><code>node { stage('SCM') { checkout(scm) } stage('Build') { echo 'Building Project' sh &quot;&quot;&quot; docker pull alpine &quot;&quot;&quot; } } [Pipeline] sh + docker pull alpine Using default tag: latest Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post &quot;http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/create?fromImage=alpine&amp;tag=latest&quot;: dial unix /var/run/docker.sock: connect: permission denied </code></pre> <p>values.yaml</p> <pre><code> controller: componentName: &quot;jenkins-controller&quot; image: &quot;jenkins&quot; # tag: &quot;2.319.3-jdk11&quot; tagLabel: jdk11 imagePullPolicy: &quot;Always&quot; imagePullSecretName: javaOpts: &quot;-Xms512m -Xmx2048m&quot; jenkinsUrl: &quot;http://localhost:8080&quot; agent: enabled: true defaultsProviderTemplate: &quot;&quot; # URL for connecting to the Jenkins contoller jenkinsUrl: jenkinsTunnel: image: &quot;jenkins/inbound-agent&quot; tag: &quot;4.11.2-5&quot; workingDir: &quot;/home/jenkins/agent&quot; nodeUsageMode: &quot;NORMAL&quot; componentName: &quot;jenkins-agent&quot; websocket: false privileged: true runAsUser: runAsGroup: alwaysPullImage: true podRetention: &quot;Never&quot; volumes: - type: HostPath hostPath: /Users/username/workspace mountPath: /Users/username/workspace - type: HostPath hostPath: /var/run/docker.sock mountPath: /var/run/docker.sock command: args: &quot;${computer.jnlpmac} ${computer.name}&quot; </code></pre> <p>Dockerfile for jenkins agent</p> <pre><code>FROM jenkins/inbound-agent:4.11.2-4 USER root RUN set -eux &amp;&amp; \ apt-get update &amp;&amp; \ apt-get install -y curl sudo docker.io docker-compose &amp;&amp; \ curl -sS https://raw.githubusercontent.com/HariSekhon/bash-tools/master/clean_caches.sh | sh RUN usermod -aG docker jenkins USER jenkins </code></pre>
<p>First, find the docker group id on the host:</p> <pre><code>$ grep docker /etc/group docker:x:999: </code></pre> <p>Then create a user in the Dockerfile whose group id is the same as the docker group id:</p> <pre><code>RUN groupadd -g 999 tech RUN useradd -g tech tech USER tech </code></pre>
<p>So we are in the process of moving from yarn 1.x to yarn 2 (yarn 3.1.1) and I'm getting a little confused on how to configure yarn in my CI/CD config. As of right now our pipeline does the following to deploy to our kubernetes cluster:</p> <p>On branch PR:</p> <ol> <li><p>Obtain branch repo in gitlab runner</p> </li> <li><p>Lint</p> </li> <li><p>Run jest</p> </li> <li><p>Build with environment variables, dependencies, and devdependencies</p> </li> <li><p>Publish image to container registry with tag <code>test</code></p> <p>a. If success, allow merge to main</p> </li> <li><p>Kubernetes watches for updates to test and deploys a test pod to cluster</p> </li> </ol> <p>On merge to main:</p> <ol> <li>Obtain main repo in gitlab runner</li> <li>Lint</li> <li>Run jest</li> <li>Build with environment variables and dependencies</li> <li>Publish image to container registry with tag <code>latest</code></li> <li>Kubernetes watches for updates to latest and deploys a staging pod to cluster</li> </ol> <p>(NOTE: For full-blown production releases we will be using the release feature to manually deploy releases to the production server)</p> <p>The issue is that we are using yarn 2 with zero installs and in the past we have been able prevent the production environment from using any dev dependencies by running <code>yarn install --production</code>. In yarn 2 this command is deprecated.</p> <p>Is there any ideal solution to prevent dev dependencies from being installed on production? I've seen some posts mention using workspaces but that seems to be more tailored towards mono-repos where there are more than one application.</p> <p>Thanks in advance for any help!</p>
<p>I had the same question and came to the same conclusion as you. I could not find an easy way to perform a production build on yarn 2. Yarn Workspaces comes closest but I did find the paragraph below in the documentation:</p> <blockquote> <p>Note that this command is only very moderately useful when using zero-installs, since the cache will contain all the packages anyway - meaning that the only difference between a full install and a focused install would just be a few extra lines in the .pnp.cjs file, at the cost of introducing an extra complexity.</p> </blockquote> <p>From: <a href="https://yarnpkg.com/cli/workspaces/focus#options-production" rel="nofollow noreferrer">https://yarnpkg.com/cli/workspaces/focus#options-production</a></p> <p>Does that mean that there essentially is no production install? It would be nice if that was officially addressed somewhere but this was the closest I could find.</p> <p>Personally, I am using NextJS and upgraded my project to Yarn 2. The features of Yarn 2 seem to work (no node_modules folder) but I can still use <code>yarn build</code> from NextJS to create a production build with output in the <code>.next</code> folder.</p>
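<p>For reference, the focused production install from the page quoted above looks like this on the command line (it requires the <code>workspace-tools</code> plugin; as the quote explains, with zero-installs the effect is limited):</p>

```shell
# One-time: add the plugin that provides `yarn workspaces focus`
yarn plugin import workspace-tools

# Install only the non-dev dependencies of the current workspace
yarn workspaces focus --production
```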
<p>We are trying to configure local-storage in Rancher, and the storage provisioner was configured successfully. But when I create a PVC using the local-path StorageClass, it goes into Pending state with the error below.</p> <pre><code> Normal ExternalProvisioning 4m31s (x62 over 19m) persistentvolume-controller waiting for a volume to be created, either by external provisioner &quot;rancher.io/local-path&quot; or manually created by system administrator Normal Provisioning 3m47s (x7 over 19m) rancher.io/local-path_local-path-provisioner-5f8f96cb66-8s9dj_f1bdad61-eb48-4a7a-918c-6827e75d6a27 External provisioner is provisioning volume for claim &quot;local-path-storage/test-pod-pvc-local&quot; Warning ProvisioningFailed 3m47s (x7 over 19m) rancher.io/local-path_local-path-provisioner-5f8f96cb66-8s9dj_f1bdad61-eb48-4a7a-918c-6827e75d6a27 failed to provision volume with StorageClass &quot;local-path&quot;: configuration error, no node was specified [root@n01-deployer local]# </code></pre> <p>sc configuration</p> <pre><code>[root@n01-deployer local]# kubectl edit sc local-path # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be
# apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;storage.k8s.io/v1&quot;,&quot;kind&quot;:&quot;StorageClass&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;local-path&quot;},&quot;provisioner&quot;:&quot;rancher.io/local-path&quot;,&quot;reclaimPolicy&quot;:&quot;Delete&quot;,&quot;volumeBindingMode&quot;:&quot;Immediate&quot;} creationTimestamp: &quot;2022-02-07T16:12:58Z&quot; name: local-path resourceVersion: &quot;1501275&quot; uid: e8060018-e4a8-47f9-8dd4-c63f28eef3f2 provisioner: rancher.io/local-path reclaimPolicy: Delete volumeBindingMode: Immediate </code></pre> <p>PVC configuration</p> <pre><code>[root@n01-deployer local]# cat pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: namespace: local-path-storage name: test-pod-pvc-local-1 spec: accessModes: - ReadWriteOnce storageClassName: local-path resources: requests: storage: 5Gi volumeMode: Filesystem </code></pre> <p>I have mounted the local volume on all the worker nodes, but my PVC still doesn't get created. Can someone please help me solve this issue?</p>
<p>The key to your problem was updating PSP.</p> <p>I would like to add something about PSP:</p> <p>According to <a href="https://cloud.google.com/kubernetes-engine/docs/deprecations/podsecuritypolicy" rel="nofollow noreferrer">this documentation</a> and <a href="https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/" rel="nofollow noreferrer">this blog</a>:</p> <blockquote> <p>As of Kubernetes <strong>version 1.21</strong>, PodSecurityPolicy (beta) is deprecated. The Kubernetes project aims to shut the feature down in <strong>version 1.25</strong>.</p> </blockquote> <p>However I haven't found any information in Rancher's case (the documentation is up to date).</p> <blockquote> <p>Rancher ships with two default Pod Security Policies (PSPs): the <code>restricted</code> and <code>unrestricted</code> policies.</p> </blockquote> <hr /> <p>See also:</p> <ul> <li><a href="https://codilime.com/blog/the-benefits-of-pod-security-policy-a-use-case/" rel="nofollow noreferrer"><em>The benefits of Pod Security Policy</em></a></li> <li><a href="https://docs.bitnami.com/tutorials/secure-kubernetes-cluster-psp/" rel="nofollow noreferrer"><em>Secure Kubernetes cluster PSP</em></a></li> <li><a href="https://rancher.com/docs/rancher/v2.6/en/cluster-admin/pod-security-policies/" rel="nofollow noreferrer"><em>Pod Security Policies</em></a></li> </ul>
<p>I have a small instance of influxdb running in my kubernetes cluster.<br> The data of that instance is stored in persistent storage.<br> But I also want to run the influx backup command at a scheduled interval:<br></p> <pre><code>influxd backup -portable /backuppath </code></pre> <p>What I do now is exec into the pod and run it manually.<br> Is there a way I can do this automatically?</p>
<p>You can consider running a CronJob with <a href="https://bitnami.com/stack/kubectl/containers" rel="nofollow noreferrer">bitnami kubectl</a>, which will execute the backup command. This is the same as exec-ing into the pod and running the command manually, except now it is automated with a CronJob.</p>
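<p>A sketch of such a CronJob (the schedule, names, and deployment reference are assumptions; the ServiceAccount needs RBAC permission for <code>pods/exec</code>):</p>

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: influxdb-backup
spec:
  schedule: "0 2 * * *"                  # daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: backup-sa  # must be allowed to exec into pods
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command:
                - /bin/sh
                - -c
                - kubectl exec deploy/influxdb -- influxd backup -portable /backuppath
```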
<p>I have a few questions regarding how the random string part of a Kubernetes pod name is decided.</p> <p>How is the pod-template-hash decided? (I understand that it is a random string generated by the Deployment controller.) What exactly are the inputs the Deployment controller considers before generating this random string, and is there a maximum length this hash string is limited to?</p> <p>The reason for asking: we are storing the complete pod name in a database, and it cannot exceed a certain character length.</p> <p>Most of the time I have seen the length is 10 characters. Can it go beyond 10 characters?</p>
<p>10 characters? That is only the alphanumeric suffix of the replica set name. Pods under a replica set will have additional suffixes of a dash plus 5 characters long alphanumeric string.</p> <p>The name structure of a Pod will be different depending on the controller type:</p> <ul> <li>StatefulSet: StatefulSet name + &quot;-&quot; + ordinal number starting from 0</li> <li>DaemonSet: DaemonSet name + &quot;-&quot; + 5 alphanumeric characters long string</li> <li>Deployment: ReplicaSet name (which is Deployment name + &quot;-&quot; + 10 alphanumeric characters long string) + &quot;-&quot; + 5 alphanumeric characters long string</li> </ul> <p>But then, the full name of the Pods will also include the name of their controllers, which is rather arbitrary.</p> <p>So, how do you proceed?</p> <p>You just have to prepare the length of the column to be <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/overview/working-with-objects/names/" rel="nofollow noreferrer">the maximum length of a pod name, which is <strong>253 characters</strong></a>.</p>
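<p>A quick way to sanity-check a database column against these rules, composing an assumed worst case from the suffix lengths described above (a minimal sketch, not an official API):</p>

```python
# Compose a Deployment pod name from its parts and check it against a limit.
deployment = "my-deployment"
rs_hash = "5f8f96cb66"   # up to 10 alphanumeric chars appended by the Deployment
pod_suffix = "8s9dj"     # 5 alphanumeric chars appended by the ReplicaSet

pod_name = f"{deployment}-{rs_hash}-{pod_suffix}"
print(pod_name)          # my-deployment-5f8f96cb66-8s9dj

# Kubernetes caps object names at 253 characters, so a column of that
# length is always safe regardless of the controller type.
assert len(pod_name) <= 253
```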
<p>In short, there are two services that communicate with each other via HTTP REST APIs. My deployment is running in an AKS cluster. For the ingress controller, I installed this Nginx controller helm chart: <a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx</a> <br><br> The load balancer has a fixed IP attached. My deployment running in my cluster should send usage info to the other service periodically, and vice versa. However, that service has an IP whitelist, and I need to provide a static IP for whitelisting my deployment. Currently, the problem is that my cURL call carries the node's IP, which is always changing depending on which node my deployment is running on. Also, the number of nodes is scaled dynamically. My goal is to send egress traffic through the load balancer, something like this:<br> <a href="https://i.stack.imgur.com/LQVGc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQVGc.png" alt="enter image description here" /></a> <br><br> Is there any way to route the outbound traffic from my pods through the load balancer?</p>
<p>This is possible with Azure Load Balancer using <a href="https://learn.microsoft.com/en-us/azure/load-balancer/egress-only" rel="nofollow noreferrer">outbound rules</a>: the LB will perform SNAT, and your &quot;other service&quot; will see the fixed frontend public IP. Another method is to use <a href="https://learn.microsoft.com/en-us/samples/azure-samples/aks-nat-agic/aks-nat-agic/" rel="nofollow noreferrer">Virtual Network NAT</a>, where your &quot;other service&quot; will see the fixed NAT public IP. Either way, you can then whitelist the fixed public IP.</p>
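<p>For example, an outbound rule can be added to an existing Azure Load Balancer with the CLI along these lines (all resource names below are assumptions):</p>

```shell
az network lb outbound-rule create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myOutboundRule \
  --frontend-ip-configs myFrontendIp \
  --address-pool myBackendPool \
  --protocol All
```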
<p>I want to use an API endpoint health check like <code>/healthz</code>, <code>/livez</code>, or <code>/readyz</code>. I recently heard about them and tried to search, but didn't find any place that tells me how to use them. I see that <code>/healthz</code> is deprecated. Can you tell me whether I have to use them in a pod definition file or in code? I have a basic hello world app. Can someone help me understand it and tell me where it fits in?</p> <pre><code>import time import os import redis from flask import Flask from flask_basicauth import BasicAuth app = Flask(__name__) cache = redis.Redis(host=os.environ['REDIS_HOST'], port=os.environ['REDIS_PORT']) app.config['BASIC_AUTH_USERNAME'] = os.environ['USERNAME'] app.config['BASIC_AUTH_PASSWORD'] = os.environ['PASSWORD'] basic_auth = BasicAuth(app) def get_hit_count(): retries = 5 while True: try: return cache.incr('hits') except redis.exceptions.ConnectionError as exc: if retries == 0: raise exc retries -= 1 time.sleep(0.5) @app.route('/') def hello(): count = get_hit_count() return 'Hello World! I have been seen {} times.\n'.format(count) @app.route('/supersecret') @basic_auth.required def secret_view(): count = get_hit_count() return os.environ['THEBIGSECRET']+ '\n' </code></pre> <p>I already tried something like below, and it seems to work, but if I use /supersecret instead of / it does not work:</p> <pre><code> readinessProbe: httpGet: path: / port: 5000 initialDelaySeconds: 20 periodSeconds: 5 livenessProbe: httpGet: path: / port: 5000 intialDelaySeconds: 30 periodSeconds: 5 </code></pre>
<p>You can refer to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">these docs</a> for an explanation of how &quot;health checks&quot;, or probes as they're called in Kubernetes, work. For your question you may be particularly interested in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#http-probes" rel="nofollow noreferrer">section on HTTP probes</a>.</p> <p>You can pick any HTTP path of your application for these probes, while indeed <code>/healthz</code>, <code>/livez</code> and <code>/readyz</code> are common ones and happen to be the ones <a href="https://kubernetes.io/docs/reference/using-api/health-checks/" rel="nofollow noreferrer">used by the Kubernetes API server</a> (with, as you found, the <code>/healthz</code> one being deprecated in favour of the more specific alternatives).</p> <p>So, to implement health probes, simply add a <code>GET</code> route for each of the probes you'd like to the application and make sure the status code returned is 2xx or 3xx to signal a healthy state, or anything else when Kubernetes should consider the application unhealthy.</p> <blockquote> <p>Any code greater than or equal to 200 and less than 400 indicates success. 
Any other code indicates failure.</p> </blockquote> <p>In my opinion, it makes sense to return HTTP 200 for success and 500 or 503 for failure.</p> <p>Once your application is set up to respond accordingly to the prober, you should instruct Kubernetes to use it by adding something along the following lines to the container exposing your API in the PodSpec:</p> <pre class="lang-yaml prettyprint-override"><code> livenessProbe: httpGet: path: /your-liveness-probe-path port: 8080 </code></pre> <hr /> <p>If I understood your question regarding the probe failing for the <code>/supersecret</code> path, ignoring the indentation error in your YAML definition, this is most likely because that route requires authentication. The probes you have defined do not supply that. For this, you may use <code>httpHeaders</code>:</p> <pre class="lang-yaml prettyprint-override"><code> readinessProbe: httpGet: path: / port: 5000 httpHeaders: - name: Authorization value: &quot;Bearer secret&quot; initialDelaySeconds: 20 periodSeconds: 5 </code></pre> <p>However, it is in my experience more common to not require authentication for your probe routes, but instead, to not expose those to the public internet (or even on a separate port). How to accomplish that is a whole different story and depends on how you are exposing your application in the first place, but one way could be to have an ingress with appropriate <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules" rel="nofollow noreferrer">&quot;ingress rules&quot;</a>.</p> <p>On a more general note about troubleshooting failing probes: running <code>kubectl get events</code> is a good starting point.</p>
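<p>On the application side, a minimal sketch of such probe routes in Flask (the route names are conventional but illustrative; in a real app the readiness handler would also verify dependencies such as Redis):</p>

```python
from flask import Flask

app = Flask(__name__)

@app.route("/livez")
def livez():
    # Liveness: the process is up and able to serve requests.
    return "ok", 200

@app.route("/readyz")
def readyz():
    # Readiness: also check dependencies (database, cache, ...) here
    # and return 503 when any of them is unavailable.
    return "ready", 200
```

<p>Since these routes require no authentication, the probe definitions shown above work against them without extra headers.</p>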
<p>I am running a sample Python webapp in Kubernetes. I am not able to figure out how to make use of probes here.</p> <ol> <li><p>I want the app to recover from broken states by automatically restarting pods.</p> </li> <li><p>Route the traffic to the healthy pods only.</p> </li> <li><p>Make sure the database is up and running before the application (which in my case is Redis).</p> </li> </ol> <p>I have an understanding of probes but am not sure how/what values exactly to look for.</p> <p>My definition file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: myapp-deployment labels: env: production app: frontend spec: selector: matchLabels: env: production app: frontend replicas: 1 template: metadata: name: myapp-pod labels: env: production app: frontend spec: containers: - name: myapp-container image: myimage:latest imagePullPolicy: Always ports: - containerPort: 5000 </code></pre> <p>Now I am doing something like this:</p> <pre><code> readinessProbe: httpGet: path: / port: 5000 initialDelaySeconds: 20 periodSeconds: 5 livenessProbe: httpGet: path: / port: 5000 intialDelaySeconds: 10 periodSeconds: 5 </code></pre>
<p>You need to define both:</p> <ul> <li><code>readinessProbe</code>: tells whether your <code>Deployment</code> is ready to serve traffic at any point during its lifecycle. This configuration item supports different commands, but in your case it would be an <code>httpGet</code> matching an endpoint that you implement in your web app (most modern stacks define such endpoints by default, so check the documentation of whatever framework you are using). Note that the endpoint handler would need to check the <strong>readiness</strong> of any needed dependency; in your case you would need to check that <code>http[s]://redis-host:redis-port</code> responds with success.</li> <li><code>livenessProbe</code>: lets the control plane continuously check your pods' health and decide on the actions needed to bring the cluster to the desired state, rolling out any pods that fail to report being alive. This probe supports different definitions as well, and as with the <code>readinessProbe</code>, most modern frameworks offer endpoints responding to such requests by default.</li> </ul> <p>Below you can see a sample of both probes, for which you would have two respective HTTP endpoints within your web application:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: myapp-deployment labels: env: production app: frontend spec: selector: matchLabels: env: production app: frontend replicas: 1 template: metadata: name: myapp-pod labels: env: production app: frontend spec: containers: - name: myapp-container image: myimage:latest imagePullPolicy: Always ports: - containerPort: 5000 readinessProbe: httpGet: path: /health port: 5000 livenessProbe: httpGet: path: /health port: 5000 </code></pre>
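<p>The dependency check in the readiness handler can be isolated into a small function like the sketch below (framework-agnostic; the handler shape and the use of Redis <code>PING</code> are assumptions based on the question's stack):</p>

```python
def readiness_status(cache):
    """Return (body, http_status) for a readiness endpoint.

    Reports 200 only when the Redis dependency answers a PING, so
    Kubernetes routes traffic only to pods whose backend is reachable.
    """
    try:
        cache.ping()  # redis.Redis exposes ping(); raises on connection failure
        return "ready", 200
    except Exception:
        return "redis unavailable", 503
```

<p>A Flask route for <code>/health</code> would simply return the tuple from this function.</p>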
<p>I'm having to build a demo Kubernetes cluster in AWS using Kubeadm.</p> <p>Unfortunately, for several reasons, Kops, and EKS are out of the question in my current environment.</p> <p>How do I deal with things such as auto-scaling and auto joining worker nodes back to the master if they get terminated for any reason? This is my main concern.</p> <p>I've done this with Kops in the past and it's relatively straightforward, but I'm not sure how to manage using Kubeadm.</p>
<p>If you're using Ansible, you can set up your launch configuration to pull a git repo and run a playbook that extracts the join token from the master and runs the join command on the worker nodes.</p>
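<p>Whatever the automation tool, the core of the flow is usually a sketch like the following (the S3 bucket and file names are assumptions; the same idea works with any shared store the playbook can reach):</p>

```shell
# On the master (run by your automation whenever the token needs refreshing):
kubeadm token create --print-join-command > /tmp/join.sh
# ...then publish /tmp/join.sh somewhere the workers can reach, e.g. S3:
aws s3 cp /tmp/join.sh s3://my-cluster-bootstrap/join.sh

# In the worker's user-data / first-boot script:
aws s3 cp s3://my-cluster-bootstrap/join.sh /tmp/join.sh
bash /tmp/join.sh
```

<p>With this in the launch configuration, any replacement instance the Auto Scaling group brings up rejoins the cluster on boot.</p>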
<p>I have been stuck on this problem for quite a while now:</p> <p>I have a standard NextJS app, which uses environment variables (<code>NEXT_PUBLIC_MY_VAR</code> for the client side as well as <code>MY_OTHER_VAR</code> for the server side).</p> <p>I use Gitlab CI/CD AutoDevOps with a tiny custom <code>.gitlab-ci.yml</code> file (see below).</p> <p>I have a successful connection to my Kubernetes cluster with Gitlab, and my NextJS app also gets deployed successfully. (Gitlab/K8s cache cleaned, too.)</p> <p>The only thing I struggle with is getting the <code>process.env.ENV_VARS</code> working. Whatever I tried, they are <code>undefined</code>.</p> <p>I deployed my app manually into the cluster and mounted a configMap to my deployment (so a <code>.env.local</code> file is present at <code>/app/.env.local</code>); <strong>ONLY THEN are the <code>ENV_VARS</code> set correctly.</strong></p> <p>So how do I set the <code>ENV_VARS</code> when deploying my NextJS app via Gitlab Auto DevOps?</p> <p>What I've tried so far:</p> <ul> <li>setting <code>ENV_VARS</code> in Gitlab -&gt; Settings -&gt; CI/CD -&gt; Variables</li> </ul> <p><a href="https://i.stack.imgur.com/eeMTc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eeMTc.png" alt="Gitlab ENV Vars" /></a></p> <ul> <li>I added an ARG in my Dockerfile which should pick up the Gitlab CI vars when building the Docker image:</li> </ul> <pre><code>FROM node:alpine AS deps RUN apk add --no-cache libc6-compat WORKDIR /app COPY package.json package-lock.json ./ RUN npm ci # ... # ..
# docker build --build-arg API_URL=http://myApiEndpoint ARG API_URL ENV API_URL=$API_URL RUN npm run build FROM node:alpine AS runner WORKDIR /app ENV NODE_ENV production # COPY --from=builder /app/next.config.js ./ # TRIED WITH AND WITHOUT COPY --from=builder /app/public ./public COPY --from=builder /app/.next ./.next COPY --from=builder /app/node_modules ./node_modules COPY --from=builder /app/package.json ./package.json RUN addgroup -g 1001 -S nodejs RUN adduser -S nextjs -u 1001 RUN chown -R nextjs:nodejs /app/.next USER nextjs EXPOSE 5000 CMD [&quot;npm&quot;, &quot;run&quot;, &quot;start&quot;] </code></pre> <ul> <li>I have also added the ENV_VARS in my <code>next.config.js</code>:</li> </ul> <pre class="lang-js prettyprint-override"><code> module.exports = withSvgr({ serverRuntimeConfig: {}, publicRuntimeConfig: { NEXT_PUBLIC_API_URL: process.env.NEXT_PUBLIC_API_URL, API_URL: process.env.API_URL, }, }); </code></pre> <ul> <li>I also added a custom <code>.gitlab-ci.yml</code> file (complete file here):</li> </ul> <pre class="lang-yaml prettyprint-override"><code>include: - template: Auto-DevOps.gitlab-ci.yml # added vars for build build: stage: build variables: API_URL: $API_URL </code></pre>
(node:internal/process/task_queues:96:5) at async /app/node_modules/next/dist/next-server/server/next-server.js:109:97 at async /app/node_modules/next/dist/next-server/server/next-server.js:102:142 </code></pre> <p>And this is the code that the error refers to:</p> <pre class="lang-js prettyprint-override"><code>const ApiUrl = process.env.API_URL; console.log(&quot;ApiURL alias:&quot;, ApiUrl); console.log(&quot;API_URL:&quot;, process.env.API_URL); console.log(&quot;NEXT_PUBLIC_API_URL:&quot;, process.env.NEXT_PUBLIC_API_URL); return fetch(`${ApiUrl}/items.json?${qs.stringify(options)}`).then( (response) =&gt; response.json() ); </code></pre> <p>And for completeness (but mostly useless) the tail of the failing job (which seems to be the generic error when K8s is not responding):</p> <pre><code>Error: release production failed, and has been uninstalled due to atomic being set: timed out waiting for the condition Uploading artifacts for failed job 00:01 Uploading artifacts... WARNING: environment_url.txt: no matching files WARNING: tiller.log: no matching files ERROR: No files to upload Cleaning up file based variables 00:01 ERROR: Job failed: exit code 1 </code></pre> <h2>Is <code>.env.local</code> the only way to use NextJS ENV_VARS in Kubernetes?</h2> <h2>Do I have to customize the Gitlab AutoDevOps for this particular (and common) app deployment?</h2> <p>Thank you in advance, any help would be appreciated.</p>
<p>If you stick with Auto DevOps on <strong>GitLab</strong>, then you have to set the GitLab CI variable <code>AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS</code> to</p> <p><code>--build-arg=NEXT_PUBLIC_API_URL=http://your.domain.com/api</code></p> <p>In your Dockerfile you can assign it via</p> <pre><code> ARG NEXT_PUBLIC_API_URL ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL </code></pre> <p>More information here:</p> <p><a href="https://docs.gitlab.com/ee/topics/autodevops/customize.html#passing-arguments-to-docker-build" rel="nofollow noreferrer">https://docs.gitlab.com/ee/topics/autodevops/customize.html#passing-arguments-to-docker-build</a></p>
<p>I have the cassandra operator installed, and I set up a Cassandra datacenter/cluster with 3 nodes. I created a sample keyspace and table and inserted data. I see it created 3 PVCs in my storage section. When I delete the datacenter it deletes the associated PVCs as well, so when I set up a datacenter/cluster with the same configuration, it is completely new; there is no earlier keyspace or tables. How can I make them persistent for future use? I am using the sample yaml from <a href="https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x" rel="nofollow noreferrer">https://github.com/datastax/cass-operator/tree/master/operator/example-cassdc-yaml/cassandra-3.11.x</a></p> <p>I don't find any persistentVolumeClaim configuration in it; it has <code>storageConfig: cassandraDataVolumeClaimSpec:</code>. Has anyone come across such a scenario?</p> <p>Edit: Storage class details:</p> <pre><code>allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: description: Provides RWO and RWX Filesystem volumes with Retain Policy storageclass.kubernetes.io/is-default-class: &quot;false&quot; name: ocs-storagecluster-cephfs-retain parameters: clusterID: openshift-storage csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage fsName: ocs-storagecluster-cephfilesystem provisioner: openshift-storage.cephfs.csi.ceph.com reclaimPolicy: Retain volumeBindingMode: Immediate </code></pre> <p>Here is the Cassandra cluster YAML:</p> <pre><code> apiVersion: cassandra.datastax.com/v1beta1 kind: CassandraDatacenter metadata: name: dc generation: 2 spec: size: 3 config:
cassandra-yaml: authenticator: AllowAllAuthenticator authorizer: AllowAllAuthorizer role_manager: CassandraRoleManager jvm-options: additional-jvm-opts: - '-Ddse.system_distributed_replication_dc_names=dc1' - '-Ddse.system_distributed_replication_per_dc=1' initial_heap_size: 800M max_heap_size: 800M resources: {} clusterName: cassandra systemLoggerResources: {} configBuilderResources: {} serverVersion: 3.11.7 serverType: cassandra storageConfig: cassandraDataVolumeClaimSpec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: ocs-storagecluster-cephfs-retain managementApiAuth: insecure: {} </code></pre> <p>EDIT: PV Details:</p> <pre><code>oc get pv pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: &quot;2022-02-23T20:52:54Z&quot; finalizers: - kubernetes.io/pv-protection managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:pv.kubernetes.io/provisioned-by: {} f:spec: f:accessModes: {} f:capacity: .: {} f:storage: {} f:claimRef: .: {} f:apiVersion: {} f:kind: {} f:name: {} f:namespace: {} f:resourceVersion: {} f:uid: {} f:csi: .: {} f:controllerExpandSecretRef: .: {} f:name: {} f:namespace: {} f:driver: {} f:nodeStageSecretRef: .: {} f:name: {} f:namespace: {} f:volumeAttributes: .: {} f:clusterID: {} f:fsName: {} f:storage.kubernetes.io/csiProvisionerIdentity: {} f:subvolumeName: {} f:volumeHandle: {} f:persistentVolumeReclaimPolicy: {} f:storageClassName: {} f:volumeMode: {} manager: csi-provisioner operation: Update time: &quot;2022-02-23T20:52:54Z&quot; - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:phase: {} manager: kube-controller-manager operation: Update time: &quot;2022-02-23T20:52:54Z&quot; name: pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 resourceVersion: &quot;51684941&quot; selfLink: 
/api/v1/persistentvolumes/pvc-15def0ca-6cbc-4569-a560-7b9e89a7b7a7 uid: 8ded2de5-6d4e-45a1-9b89-a385d74d6d4a spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: server-data-cstone-cassandra-cstone-dc-default-sts-1 namespace: dv01-cornerstone resourceVersion: &quot;51684914&quot; uid: 15def0ca-6cbc-4569-a560-7b9e89a7b7a7 csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1645064620191-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-92d5e07d-94ea-11ec-92e8-0a580a20028c volumeHandle: 0001-0011-openshift-storage-0000000000000001-92d5e07d-94ea-11ec-92e8-0a580a20028c persistentVolumeReclaimPolicy: Retain storageClassName: ocs-storagecluster-cephfs-retain volumeMode: Filesystem status: phase: Bound </code></pre>
<p>According to the spec:</p> <blockquote> <p>The storage configuration. This sets up a 100GB volume at /var/lib/cassandra on each server pod. The user is left to create the server-storage storage class by following these directions... <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd</a></p> </blockquote> <p>Before you deploy the Cassandra spec, first ensure your cluster already has the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver#enabling_the_on_an_existing_cluster" rel="nofollow noreferrer">CSI driver</a> installed and working properly, then proceed to create the StorageClass the spec requires:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: server-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  type: pd-ssd
</code></pre> <p>If you re-deploy your Cassandra now, the data disk should be retained upon deletion.</p>
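<p>One caveat worth noting: with <code>reclaimPolicy: Retain</code>, a PV left behind by a deleted PVC stays in the <code>Released</code> phase and will not bind to a new claim until its stale claim reference is cleared. A hedged sketch of the cleanup (the PV name is a placeholder; run against your own cluster):</p>

```
# List retained volumes; a Released PV still references its old PVC
kubectl get pv

# Clear the stale claimRef so the PV becomes Available and can bind again
kubectl patch pv <pv-name> -p '{"spec":{"claimRef": null}}'
```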
<p>I have a VPC, 4 subnets (2 public and 2 private) and an EKS cluster, all created with Terraform. My infra is working without a problem right now. I'm thinking about creating a second EKS cluster, but I'm a little bit confused about subnet tagging.</p> <p>For example, one of my private subnets was created like below:</p> <pre><code>resource &quot;aws_subnet&quot; &quot;vpc-private&quot; {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.private_cidr
  availability_zone = data.aws_availability_zones.available.names[0]

  tags = {
    Environment = var.environment
    Name        = &quot;${var.environment}-vpc-private&quot;
    &quot;kubernetes.io/cluster/${var.cluster_name}&quot; = &quot;shared&quot;
    &quot;kubernetes.io/role/internal-elb&quot;           = &quot;1&quot;
  }
}
</code></pre> <p>As you can see, it's tagged with <strong>&quot;kubernetes.io/cluster/${var.cluster_name}&quot; = &quot;shared&quot;</strong> to enable subnet discovery. Since I'm thinking about creating a second cluster on the same subnets, I'm trying to clarify how the subnets should be tagged. There is no clear information in the AWS documents; at least I didn't find anything clear about this situation. Will adding a second tag like <strong>&quot;kubernetes.io/cluster/${var.cluster_2_name}&quot; = &quot;shared&quot;</strong> be alright, like below?</p> <pre><code>resource &quot;aws_subnet&quot; &quot;vpc-private&quot; {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.private_cidr
  availability_zone = data.aws_availability_zones.available.names[0]

  tags = {
    Environment = var.environment
    Name        = &quot;${var.environment}-vpc-private&quot;
    &quot;kubernetes.io/cluster/${var.cluster_name}&quot;   = &quot;shared&quot;
    &quot;kubernetes.io/cluster/${var.cluster_2_name}&quot; = &quot;shared&quot;
    &quot;kubernetes.io/role/internal-elb&quot;             = &quot;1&quot;
  }
}
</code></pre> <p>Any help/recommendation will be highly appreciated, thank you very much.</p>
<p>To answer my own question: yes, tagging the subnets for each EKS cluster is needed; otherwise the load balancers cannot discover which subnets to provision into.</p> <p>In Terraform, I used the exact format I asked about above:</p> <pre><code>resource &quot;aws_subnet&quot; &quot;vpc-private&quot; {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.private_cidr
  availability_zone = data.aws_availability_zones.available.names[0]

  tags = {
    Environment = var.environment
    Name        = &quot;${var.environment}-vpc-private&quot;
    &quot;kubernetes.io/cluster/${var.cluster_name}&quot;   = &quot;shared&quot;
    &quot;kubernetes.io/cluster/${var.cluster_2_name}&quot; = &quot;shared&quot;
    &quot;kubernetes.io/role/internal-elb&quot;             = &quot;1&quot;
  }
}
</code></pre>
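<p>If more clusters end up sharing these subnets later, the per-cluster discovery tags can also be generated instead of listed one by one; a sketch reusing the question's variable names:</p>

```hcl
tags = merge(
  {
    Environment = var.environment
    Name        = "${var.environment}-vpc-private"
    "kubernetes.io/role/internal-elb" = "1"
  },
  # one "shared" discovery tag per cluster name
  { for c in [var.cluster_name, var.cluster_2_name] : "kubernetes.io/cluster/${c}" => "shared" }
)
```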
<p>I want to capture the <code>subdomain</code> and rewrite the URL with <code>/subdomain</code>. For example, <code>bhautik.bhau.tk</code> should rewrite to <code>bhau.tk/bhautik</code>.</p> <p>I also tried the group syntax from <a href="https://github.com/google/re2/wiki/Syntax" rel="nofollow noreferrer">https://github.com/google/re2/wiki/Syntax</a>.</p> <p>Here is my <code>nginx</code> ingress config:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: subdomain
  namespace: subdomain
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
    # nginx.ingress.kubernetes.io/rewrite-target: /$sub
    nginx.ingress.kubernetes.io/server-snippet: |
      set $prefix abcd;
      if ($host ~ ^(\w+).bhau\.tk$) {
        // TODO?
      }
    nginx.ingress.kubernetes.io/rewrite-target: /$prefix/$uri
spec:
  rules:
    - host: &quot;*.bhau.tk&quot;
      http:
        paths:
          - pathType: Prefix
            path: &quot;/&quot;
            backend:
              service:
                name: subdomain
                port:
                  number: 80
</code></pre> <p>How do I capture the subdomain from <code>$host</code>?</p>
<p>I believe you want a redirect instead of a rewrite. Here is the <code>server-snippet</code> you need:</p> <pre><code>nginx.ingress.kubernetes.io/server-snippet: |
  if ($host ~ ^(?&lt;subdom&gt;\w+)\.(?&lt;basedom&gt;bhau\.tk)$) {
    return 302 https://$basedom/$subdom/ ;
  }
</code></pre> <p>If you really want a rewrite, where the URL that the user sees remains unchanged but the request is routed to a subpath served by the same service:</p> <pre><code>nginx.ingress.kubernetes.io/server-snippet: |
  if ($host ~ ^(?&lt;subdom&gt;\w+)\.(?&lt;basedom&gt;bhau\.tk)$) {
    rewrite ^/(.*)$ /$subdom/$1 ;
  }
</code></pre> <p>Remove the <code>rewrite-target</code> annotation that specifies <code>$prefix</code>. You don't need it.</p> <p>The <code>?&lt;capturename&gt;</code> and <code>$capturename</code> pair is the trick you are looking for.</p>
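<p>To verify the redirect variant, a request with the subdomain in the <code>Host</code> header should come back with the rewritten location; a sketch (the ingress address is a placeholder for your controller's IP or hostname):</p>

```
curl -sI -H 'Host: bhautik.bhau.tk' http://<ingress-address>/
# expected (hedged): HTTP/1.1 302 ... Location: https://bhau.tk/bhautik/
```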
<p>I'm trying to deploy updates. Installation works fine, but when I change the image field in the yaml file for Job and try to roll updates, an error occurs.</p> <blockquote> <p>Error: UPGRADE FAILED: cannot patch "dev1-test-db-migrate-job" with kind Job: Job.batch "dev1-test-db-migrate-job" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"e60854c6-9a57-413c-8f19-175a755c9852", "job-name":"dev1-test-db-migrate-job", "target-app":"db-migrate", "target-domain":"dev1...", "target-service":"test"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"app", Image:"...:insurance-master-682", Command:[]string{"/bin/sh", "-c"}, Args:[]string{"java -jar ./db/liquibase.jar --logLevel=debug --classpath=./db/mariadb-java-client-2.5.3.jar --driver=org.mariadb.jdbc.Driver --changeLogFile=./db/changelog-insurance.xml --url=$DB_HOST --username=$DB_USER --password=$DB_PASSWORD update"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource{core.EnvFromSource{Prefix:"", ConfigMapRef:(*core.ConfigMapEnvSource)(nil), SecretRef:(*core.SecretEnvSource)(0xc01a48c8a0)}}, Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:200, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"200m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:268435456, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:core.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:134217728, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc014591f78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc010460000), ImagePullSecrets:[]core.LocalObjectReference{core.LocalObjectReference{Name:"artifactory-tradeplace-registry"}}, Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable</p> </blockquote> <p>I still don't understand which field is supposed to be immutable; probably the image, but that's strange, since it makes sense that I should be able to update the image.</p> <p>The error occurs
when I change the image field from <code>...:insurance-master-682</code> to <code>...:insurance-master-681</code>, for example.</p> <p>Every time I install or upgrade, I change the <code>version</code> field in the chart file. So, has anyone encountered this? For now the only way out I see is to run <code>kubectl delete job ...</code> before upgrading.</p> <p>Part of the YAML in the <code>templates</code> directory:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  labels:
    target-domain: dev1...
    target-service: test
  name: dev1-test-db-migrate-job
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        target-app: db-migrate
        target-domain: dev1...
        target-service: test
    spec:
      containers:
        - args:
            - java -jar ./db/liquibase.jar --logLevel=debug --classpath=./db/mariadb-java-client-2.5.3.jar --driver=org.mariadb.jdbc.Driver --changeLogFile=./db/changelog-insurance.xml --url=$DB_HOST --username=$DB_USER --password=$DB_PASSWORD update
          command:
            - /bin/sh
            - -c
          envFrom:
            - secretRef:
                name: dev1-secret-config-deploy-for-app-gk5b59mb86
          image: ...:insurance-master-682
          imagePullPolicy: IfNotPresent
          name: app
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
      imagePullSecrets:
        - name: artifactory-tradeplace-registry
      restartPolicy: Never
</code></pre> <p>Chart.yaml example:</p> <pre><code>apiVersion: v2
name:
description: A Helm chart for Kubernetes
type: application
version: 0.1.20200505t154055
appVersion: 1.16.0
</code></pre>
<p>The existing Job needs to be deleted, because the <code>template</code> section of a Job is immutable and cannot be updated. So you have the following 2 options:</p> <ol> <li>Always create a new Job with a unique name, so Helm leaves the old Jobs alone and creates a new one each time; including the image version in the name would be sensible.</li> <li>Automatic cleanup of finished Jobs (more info <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#clean-up-finished-jobs-automatically" rel="noreferrer">check here</a>): the Job property <code>ttlSecondsAfterFinished</code> deletes the Job automatically after a specified period, e.g.:</li> </ol> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;]
      restartPolicy: Never
</code></pre>
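<p>For option 1, since the manifest lives in a Helm chart, the Job name can be made unique per release revision so every upgrade creates a fresh Job; a sketch using the question's naming (a template fragment, not the author's exact chart):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  # .Release.Revision increments on every `helm upgrade`,
  # so each upgrade creates a brand-new Job instead of patching the old one
  name: dev1-test-db-migrate-job-{{ .Release.Revision }}
```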
<p>When using Kubernetes <code>.yml</code> files, I can do the following:</p> <pre class="lang-sh prettyprint-override"><code>$ cat configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  foo: ${FOO}
  bar: ${BAR}
  static: doesNotChange

$ export FOO=myFooVal
$ export BAR=myBarVal

$ cat configmap.yml | envsubst | kubectl apply -f -
</code></pre> <p>This would replace <code>${FOO}</code> and <code>${BAR}</code> in the <code>configmap.yml</code> file before actually applying the file to the cluster.</p> <p>How could I achieve the very same behavior with a Kubernetes secret which has its data values base64 encoded?</p> <p>I would need to read all the keys in the <code>data:</code> field, decode the values, apply the environment variables and encode it again.</p> <p>A tool to decode and encode the <code>data:</code> values in place would be much appreciated.</p>
<p>It is actually possible to store the <code>secret.yml</code> with <code>stringData</code> instead of <code>data</code>, which allows keeping the files in plain text (SOPS encryption is still possible and encouraged):</p> <pre><code>$ cat secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: default
type: Opaque
stringData:
  dotenv: |
    DATABASE_URL=&quot;postgresql://test:test@localhost:5432/test?schema=public&quot;
    API_PORT=${PORT}
    FOO=${FOO}
    BAR=${BAR}

$ export PORT=80
$ export FOO=myFooValue
$ export BAR=myBarValue
$ cat secret.yml | envsubst | kubectl apply -f -
</code></pre> <p>An added plus is that this not only allows creating the secret, but updating it as well.</p> <p>Just for documentation, here would be the full call with SOPS:</p> <pre><code>$ sops --decrypt secret.enc.yml | envsubst | kubectl apply -f -
</code></pre>
<p>I have configured a Kafka cluster with Strimzi. I have enabled TLS authentication and I have exposed the service with NodePort.</p> <p>After that I exported my CA and my password to generate a JKS to connect to Kafka. But the problem is that I'm getting the following error:</p> <blockquote> <p>java.security.cert.CertificateException: No subject alternative names matching IP address 172.26.195.44 found</p> </blockquote> <p>To export the password and CA:</p> <pre><code>kubectl get secret kafka-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 --decode &gt; ca.crt
kubectl get secret kafka-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 --decode &gt; ca.password
</code></pre> <p>To generate the JKS I took these steps:</p> <pre><code>keytool -genkey -alias kafka -keyalg RSA -keystore kafka.jks -keysize 2048
keytool -importkeystore -srckeystore kafka.jks -destkeystore kafka.jks -deststoretype pkcs12

export CERT_FILE_PATH=ca.crt
export CERT_PASSWORD_FILE_PATH=ca.password
export KEYSTORE_LOCATION=kafka.jks
export PASSWORD=`cat $CERT_PASSWORD_FILE_PATH`
export CA_CERT_ALIAS=strimzi-kafka-cert

sudo keytool -importcert -alias $CA_CERT_ALIAS -file $CERT_FILE_PATH -keystore $KEYSTORE_LOCATION -keypass $PASSWORD
sudo keytool -list -alias $CA_CERT_ALIAS -keystore $KEYSTORE_LOCATION
</code></pre> <p>I have also tried adding <code>-ext SAN=dns:test.abc.com,ip:172.26.195.44</code>.</p> <p>Any idea about this?</p>
<p>As described in the docs, when using node port listeners you have to disable hostname verification in your client by default. The reason is that the node address is not known upfront, so it cannot be added to the certificates, and including all nodes would often not work because the worker nodes might come and go.</p> <p>If you know the node addresses upfront because of your cluster configuration, you can have them added to the certificates using the <a href="https://strimzi.io/docs/operators/latest/full/using.html#property-listener-config-altnames-reference" rel="nofollow noreferrer"><code>alternativeNames</code> option</a> in the Kafka CR.</p>
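<p>For the case where the addresses are known upfront, a hedged sketch of the listener configuration in the Kafka CR (field names follow the linked Strimzi docs; the hostname is taken from the question):</p>

```yaml
listeners:
  - name: external
    port: 9094
    type: nodeport
    tls: true
    configuration:
      bootstrap:
        alternativeNames:
          - test.abc.com
```

<p>Alternatively, hostname verification can be disabled on the client side by setting the client property <code>ssl.endpoint.identification.algorithm</code> to an empty string.</p>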
<p>With Kubernetes 1.22, the beta API for the <code>CustomResourceDefinition</code> <code>apiextensions.k8s.io/v1beta1</code> was removed and replaced with <code>apiextensions.k8s.io/v1</code>. While changing the CRDs, I have come to realize that my older controller (operator pattern, originally written for <code>v1alpha1</code>) still tries to list <code>apiextensions.k8s.io/v1alpha1</code> even though I have changed the CRD to <code>apiextensions.k8s.io/v1</code>.</p> <p>I have read <a href="https://stackoverflow.com/questions/58481850/no-matches-for-kind-deployment-in-version-extensions-v1beta1">this source</a> and it states that for deployment, I should change the API version but my case is an extension of this since I don't have the controller for the new API.</p> <p>Do I need to write a new controller for the new API version?</p>
<blockquote> <p>Do I need to write a new controller for the new API version ?</p> </blockquote> <p>Unfortunately, it looks like you do. If you are unable to apply what is described in <a href="https://stackoverflow.com/questions/54778620/how-to-update-resources-after-customresourcedefinitions-changes">this similar question</a>, because you are using a custom controller, then you need to create your own new controller (if you cannot change the API inside it) that will work with the supported API. Look at the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-controllers" rel="nofollow noreferrer">Custom controllers</a> page in the official documentation.</p> <blockquote> <p>I am not sure if the controller can manage the new API version. Even after changing the API version of the CRD to v1 from v1Alpha1, I get an error message stating that the controller is trying to list CRD with API version v1alpha1.</p> </blockquote> <p>It looks like the controller has some bugs. There should be no problem referencing the new API, as described in this documentation:</p> <blockquote> <p>The <strong>v1.22</strong> release will stop serving the following deprecated API versions in favor of newer and more stable API versions:</p> <ul> <li>Ingress in the <strong>extensions/v1beta1</strong> API version will no longer be served</li> <li>Migrate to use the <strong>networking.k8s.io/v1beta1</strong> API version, available since v1.14. Existing persisted data can be retrieved/updated via the new version.</li> </ul> </blockquote> <blockquote> <p>Kubernetes 1.16 is due to be released in September 2019, so be sure to audit your configuration and integrations now!</p> <ul> <li>Change YAML files to reference the newer APIs</li> <li><strong>Update custom integrations and controllers to call the newer APIs</strong></li> <li>Update third party tools (ingress controllers, continuous delivery systems) to call the newer APIs</li> </ul> </blockquote> <p>See also <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/" rel="nofollow noreferrer">Kubernetes API and Feature Removals In 1.22: Here’s What You Need To Know</a>.</p>
<p>I am trying to create a cronjob. I have written a Spring Boot application for this and have created an <code>abc-dev.yml</code> file for the application configuration.</p> <p>error: unable to recognize &quot;src/java/k8s/abc-dev.yml&quot;: no matches for kind &quot;CronJob&quot; in version &quot;apps/v1&quot;</p> <pre><code>apiVersion: apps/v1
kind: CronJob
metadata:
  name: abc-cron-job
spec:
  schedule: &quot;* * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          container:
            - name: abc-cron-job
              image: busybox
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
</code></pre>
<p>If you are running Kubernetes 1.20 or lower, the correct apiVersion value is:</p> <p><code>apiVersion: batch/v1beta1</code></p> <p>If you are running Kubernetes 1.21 or higher, it's:</p> <p><code>apiVersion: batch/v1</code></p>
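<p>Applied to the question's manifest, the corrected header would look like this on a recent cluster (note also that the pod spec key is <code>containers</code>, not <code>container</code>):</p>

```yaml
apiVersion: batch/v1   # use batch/v1beta1 on Kubernetes 1.20 or lower
kind: CronJob
metadata:
  name: abc-cron-job
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: abc-cron-job
              image: busybox
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
```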
<p>I have deployed GitLab from the official GitLab Helm chart. When I deployed it I didn't enable LDAP. Be informed that I didn't edit the values.yaml; rather I used the <code>helm upgrade --install XXX</code> command to do it.</p> <p>My question is how do I extract the values.yaml of my existing Helm deployment (Name: <code>prime-gitlab</code>). I know how to use the <code>helm show values</code> command to download the values.yaml from GitLab / ArtifactHub, but here I would like to extract the values.yaml of my existing deployment so I can edit the LDAP part in the values.yaml file.</p> <pre class="lang-none prettyprint-override"><code>01:36 AM ✔ root on my-k8s-man-01 Δ [~]
Ω helm ls -n prime-gitlab
NAME          NAMESPACE     REVISION  UPDATED                                  STATUS    CHART         APP VERSION
prime-gitlab  prime-gitlab  1         2022-02-12 01:02:15.901215658 -0800 PST  deployed  gitlab-5.7.2  14.7.2
</code></pre>
<p>The answer here is very short. Exactly as @DavidMaze mentioned in the comments section, you're looking for <a href="https://docs.helm.sh/docs/helm/helm_get_values/" rel="nofollow noreferrer"><code>helm get values</code></a>.</p> <p>Several options can be used with this command.</p> <blockquote> <p>This command downloads a values file for a given release.</p> <pre><code>helm get values RELEASE_NAME [flags]
</code></pre> </blockquote> <p><em>Options:</em></p> <pre><code>  -a, --all             dump all (computed) values
  -h, --help            help for values
  -o, --output format   prints the output in the specified format. Allowed values: table, json, yaml (default table)
      --revision int    get the named release with revision
</code></pre>
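<p>For the question's release, a typical round trip would look like this (a sketch; the chart reference is an assumption based on the official GitLab chart):</p>

```
# Save the user-supplied values of the existing release
helm get values prime-gitlab -n prime-gitlab -o yaml > current-values.yaml

# Edit current-values.yaml (e.g. the LDAP section), then apply it back
helm upgrade prime-gitlab gitlab/gitlab -n prime-gitlab -f current-values.yaml
```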
<p>We have a data processing service that tries to utilize as much CPU and memory as possible. In VMs this service uses the maximum CPU and memory available and keeps on running. But when we run this service on Kubernetes it gets evicted as soon as it hits the resource limits. Is there a way to let the service hit maximum resource usage and not get evicted?</p>
<blockquote> <p>The kubelet is the primary &quot;node agent&quot; that runs on each node. [<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">1</a>]</p> <p>When you specify a <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pod</a>, you can optionally specify how much of each resource a <a href="https://kubernetes.io/docs/concepts/containers/" rel="nofollow noreferrer">container</a> needs. [<a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">2</a>]</p> </blockquote> <p>@mmking has a point. Indeed, the kubelet requires some resources per node, and that's the reason you're seeing evictions.</p> <p>And again, as @mmking mentioned, unfortunately there's no way around that.</p> <blockquote> <p>I'd recommend setting resource limits to whatever the math comes down to (total resources minus kubelet requirements).</p> </blockquote> <p>I agree with the sentence above. <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">Here</a> you can find the documentation.</p> <p>References:</p> <p>[1] <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer"><em>Kubelet</em></a></p> <p>[2] <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer"><em>Manage resources containers</em></a></p>
<p>I see this in Kubernetes doc,</p> <blockquote> <p>In Kubernetes, controllers are control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state.</p> </blockquote> <p>Also this,</p> <blockquote> <p>The Deployment controller and Job controller are examples of controllers that come as part of Kubernetes itself (&quot;built-in&quot; controllers).</p> </blockquote> <p>But, I couldn't find how does the control loop work. Does it check the current state of the cluster every few seconds? If yes, what is the default value?</p> <p>I also found something interesting here,</p> <p><a href="https://stackoverflow.com/questions/55453072/what-is-the-deployment-controller-sync-period-for-kube-controller-manager">What is the deployment controller sync period for kube-controller-manager?</a></p>
<p>I would like to start by explaining that the <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager" rel="nofollow noreferrer">kube-controller-manager</a> is a collection of individual control processes tied together to reduce complexity.</p> <p>That being said, the control process responsible for monitoring the node's health and a few other parameters is the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller" rel="nofollow noreferrer">Node Controller</a>, and it does that by reading the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#heartbeats" rel="nofollow noreferrer">Heartbeats</a> sent by the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">Kubelet</a> agent in the nodes.</p> <p>According to the Kubernetes documentation:</p> <blockquote> <p>For nodes there are two forms of heartbeats:</p> <ul> <li>updates to the <code>.status</code> of a Node</li> <li><a href="https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/lease-v1/" rel="nofollow noreferrer">Lease</a> objects within the <code>kube-node-lease</code> namespace. Each Node has an associated Lease object.</li> </ul> <p>Compared to updates to <code>.status</code> of a Node, a Lease is a lightweight resource. Using Leases for heartbeats reduces the performance impact of these updates for large clusters.</p> <p>The kubelet is responsible for creating and updating the <code>.status</code> of Nodes, and for updating their related Leases.</p> <ul> <li>The kubelet updates the node's <code>.status</code> either when there is change in status or if there has been no update for a configured interval.
The default interval for <code>.status</code> updates to Nodes is 5 minutes, which is much longer than the 40 second default timeout for unreachable nodes.</li> <li>The kubelet creates and then updates its Lease object every 10 seconds (the default update interval). Lease updates occur independently from updates to the Node's <code>.status</code>. If the Lease update fails, the kubelet retries, using exponential backoff that starts at 200 milliseconds and capped at 7 seconds.</li> </ul> </blockquote> <p>As for the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/" rel="nofollow noreferrer">Kubernetes Objects</a> running in the nodes:</p> <blockquote> <p>Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:</p> <ul> <li>What containerized applications are running (and on which nodes)</li> <li>The resources available to those applications</li> <li>The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance</li> </ul> <p>A Kubernetes object is a &quot;record of intent&quot;--once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's desired state.</p> </blockquote> <p>Depending on the Kubernetes Object, the controller mechanism is responsible for maintaining its desired state. 
The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment Object</a> for example, uses the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">Replica Set</a> underneath to maintain the desired described state of the Pods; while the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Statefulset Object</a> uses its own Controller for the same purpose.</p> <p>To see a complete list of Kubernetes Objects managed by your cluster, you can run the command: <code>kubectl api-resources</code></p>
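<p>The heartbeat objects described above can also be inspected directly; a sketch (node names will differ per cluster):</p>

```
# One Lease per node, renewed by its kubelet roughly every 10 seconds
kubectl get leases -n kube-node-lease

# The .status side of the heartbeat, updated by default every 5 minutes
kubectl get node <node-name> -o jsonpath='{.status.conditions}'
```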
<p>I'm installing ingress nginx using a modified yaml file</p> <pre><code>kubectl apply -f deploy.yaml
</code></pre> <p>The yaml file is just the <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml" rel="nofollow noreferrer">original deploy file</a> but with added hostPorts for the deployment:</p> <pre><code>ports:
  - name: http
    containerPort: 80
    protocol: TCP
  - name: https
    containerPort: 443
    protocol: TCP
  - name: webhook
    containerPort: 8443
    protocol: TCP
</code></pre> <p>become:</p> <pre><code>ports:
  - name: http
    containerPort: 80
    protocol: TCP
    hostPort: 80 #&lt;-- added
  - name: https
    containerPort: 443
    protocol: TCP
    hostPort: 443 #&lt;-- added
  - name: webhook
    containerPort: 8443
    protocol: TCP
    hostPort: 8443 #&lt;-- added
</code></pre> <p>So this is working for me. But I would like to <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start" rel="nofollow noreferrer">install</a> ingress nginx using helm:</p> <pre><code>helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</code></pre> <p>Is it possible to add the <code>hostPort</code> values using helm (<code>-f values.yml</code>)? I need to add <code>hostPort</code> in <code>Deployment.spec.template.containers.ports</code>, but I have two problems with writing the correct values.yml file:</p> <p><em>values.yml</em></p> <pre><code># How to access the deployment?
spec:
  template:
    containers:
      ports:
        # How to add field with existing containerPort value of each element in the array?
</code></pre>
<p>Two ways to find out:</p> <ol> <li><p>You can take a closer look at the helm chart itself <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx</a></p> <p>Here you can find the deployment spec <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml</a></p> <p>And under it you can see there's a condition that enables <code>hostPort</code> <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml#L113" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml#L113</a></p> </li> <li><p>(The proper one) Always dig through <code>values.yaml</code> <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L90" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L90</a></p> <p>and the chart documentation <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md#:%7E:text=controller.hostPort.enabled" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md#:~:text=controller.hostPort.enabled</a></p> </li> </ol>
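Putting that together, a minimal `values.yml` for the question's use case might look like this (a sketch based on the chart's `controller.hostPort` keys; double-check them against the values.yaml of your chart version):

```yaml
controller:
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
```

Then install with the same command plus `-f values.yml`.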
<p>I have a folder containing multiple values.yaml files and I would like to pass all the yaml files in that folder as an argument to helm install.</p> <p>It is possible to use something like <code>helm install example . -f values/values1.yaml -f values/values2.yaml</code></p> <p>But there are more than 10 files in the values folder. Is it possible to simply pass a folder as an argument?</p> <p>I already tried <code>helm install example . -f values/*</code> and this does not work.</p>
<p>This is not possible: <code>-f</code> expects a file or URL (&quot;specify values in a YAML file or a URL (can specify multiple)&quot;, per the help text), and helm has no option to read a whole directory.</p> <p>Maybe you should reduce your values.yaml files to a base values file plus one environment-specific values file:</p> <pre><code>helm install example . -f values.yaml -f env/values_dev.yaml </code></pre>
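As a workaround you can expand the directory in the shell yourself. A small sketch (the `values_flags` helper name is made up; the glob expands in alphabetical order, and later `-f` files override earlier ones):

```shell
# Print a "-f FILE" pair for every YAML file in the given directory
values_flags() {
  for f in "$1"/*.yaml; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    printf '%s %s ' -f "$f"
  done
}

# Usage:
# helm install example . $(values_flags values)
```

Keep in mind the override order then depends on the file names, which is usually why an explicit list of `-f` flags is preferred.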
<p>I have created a job in kubernetes through the <strong>client-go</strong> api. Now I want to get the log of the job, but I can't find a log api for jobs in <strong>client-go</strong>. Therefore, I want to obtain the names of all the pods in the job so that I can fetch each pod's logs by name, and thereby get the logs of the job.</p> <p>So, how do I get the names of the pods in a job in kubernetes through client-go?</p> <p>Thanks so much.</p>
<p>I create the pods with a label, and then I get them through a <strong>LabelSelector</strong>, like this (<code>v1</code> is <code>k8s.io/apimachinery/pkg/apis/meta/v1</code>, <code>v12</code> is <code>k8s.io/api/core/v1</code>):</p> <pre class="lang-golang prettyprint-override"><code>// &quot;~&quot; is not expanded by Go, so use the clientcmd helper for ~/.kube/config
config, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, clientcmd.RecommendedHomeFile)
if err != nil {
    panic(err)
}
client, err := kubernetes.NewForConfig(config)
if err != nil {
    panic(err)
}
pods, err := client.CoreV1().Pods(&quot;test&quot;).List(context.TODO(), v1.ListOptions{LabelSelector: &quot;name=label_name&quot;})
if err != nil {
    panic(err)
}
for _, pod := range pods.Items {
    // GetLogs only builds a *rest.Request; execute it to read the log bytes
    raw, err := client.CoreV1().Pods(&quot;test&quot;).GetLogs(pod.Name, &amp;v12.PodLogOptions{}).Do(context.TODO()).Raw()
    if err != nil {
        panic(err)
    }
    println(string(raw))
}
</code></pre>
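Side note: the Job controller already labels the pods it creates with a `job-name` label, so you can also select a job's pods without adding your own label. A sketch (assumes a job named `my-job` in the `test` namespace):

```go
pods, err := client.CoreV1().Pods("test").List(context.TODO(),
    v1.ListOptions{LabelSelector: "job-name=my-job"})
```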
<p>I am a bit new to Kubernetes and I am working with EKS. I have two main apps for which there are a number of pods, and I have set up an ELB for external access.</p> <p>I also have a small app with say 1-2 pods. I don't want to set up an ELB just for this small app. I checked NodePort, but in that case I can't use the default HTTPS port 443.</p> <p>So I feel the best thing to do in this case would be to bring the small app outside the cluster, then maybe set it up on an EC2 instance. Or is there some other way to expose the small app while keeping it inside the cluster itself?</p>
<p>You can try to use the <strong>host network</strong> of the node via a <strong>hostPort</strong> (not recommended for production use in k8s):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 443
</code></pre> <blockquote> <p>The hostPort feature allows to expose a single container port on the host IP. Using the hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section. The host IP can change when the container is restarted, two containers using the same hostPort cannot be scheduled on the same node and the usage of the hostPort is considered a privileged operation on OpenShift.</p> </blockquote> <p><strong>Extra</strong></p> <blockquote> <p>I don't want to set up a elb just for this small app.</p> </blockquote> <p>Ideally, you should use your deployments with an ingress and an ingress controller. That way there is a <strong>single ELB</strong> for the <strong>whole EKS</strong> cluster and all services share that single entry point.</p> <p>All pods and deployments keep running inside the single cluster. 
A single point of <strong>ingress</strong> handles all the traffic into the EKS cluster.</p> <p><a href="https://i.stack.imgur.com/tMSxt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tMSxt.png" alt="enter image description here" /></a></p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p> <p>You can read this article on how to set up ingress on EKS in AWS to get an idea.</p> <p>You can use different domains for exposing services.</p> <p><strong>Example</strong>:</p> <p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/</a></p>
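For illustration, a single ingress that routes both apps by hostname could look like this (hostnames and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: big-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: big-app
            port:
              number: 80
  - host: small-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: small-app
            port:
              number: 80
```

Both hosts are then served through the one load balancer that fronts the ingress controller.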
<p>I'm having to build a demo Kubernetes cluster in AWS using Kubeadm.</p> <p>Unfortunately, for several reasons, Kops, and EKS are out of the question in my current environment.</p> <p>How do I deal with things such as auto-scaling and auto joining worker nodes back to the master if they get terminated for any reason? This is my main concern.</p> <p>I've done this with Kops in the past and it's relatively straightforward, but I'm not sure how to manage using Kubeadm.</p>
<p><a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#cluster-autoscaler" rel="nofollow noreferrer">Cluster Autoscaler</a> is what you are looking for.</p> <blockquote> <p>Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true: -there are pods that failed to run in the cluster due to insufficient resources. -there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.</p> </blockquote> <p>This tool will create and remove instances in the ASG.</p> <blockquote> <p>Cluster Autoscaler requires the ability to examine and modify EC2 Auto Scaling Groups.</p> </blockquote> <p>You will only need to add a launch template with a script that joins newly created instances to your cluster.</p> <blockquote> <p>If you use kubeadm to provision your cluster, it is up to you to automatically execute <code>kubeadm join</code> at boot time via some script</p> </blockquote> <p>If you have any questions about this tool, you can find everything in the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#frequently-asked-questions" rel="nofollow noreferrer">FAQ</a></p> <p>You can check the documentation on how to set it up on <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#cluster-autoscaler-on-aws" rel="nofollow noreferrer">AWS</a></p> <blockquote> <p>On AWS, Cluster Autoscaler utilizes Amazon EC2 Auto Scaling Groups to manage node groups. Cluster Autoscaler typically runs as a <code>Deployment</code> in your cluster.</p> </blockquote>
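A sketch of such a join script, e.g. as EC2 user data in the Auto Scaling Group's launch template (the endpoint, token and hash below are placeholders; generate the real values with `kubeadm token create --print-join-command` on the control plane):

```shell
#!/bin/bash
# Placeholder values - substitute your own
API_SERVER="10.0.0.10:6443"
TOKEN="abcdef.0123456789abcdef"
CA_HASH="sha256:0000000000000000000000000000000000000000000000000000000000000000"

kubeadm join "$API_SERVER" --token "$TOKEN" --discovery-token-ca-cert-hash "$CA_HASH"
```

Keep in mind that bootstrap tokens expire (after 24h by default), so you either need a long-lived token or a mechanism that refreshes the launch template.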
<p>I have an issue when an OpenShift project is deployed with an autoscaler configuration like this:</p> <ul> <li>Min Pods = 10</li> <li>Max Pods = 15</li> </ul> <p>I can see that the deployer immediately creates 5 pods and <em>TcpDiscoveryKubernetesIpFinder</em> creates not one grid, but multiple grids with the same <em>igniteInstanceName</em>.</p> <p><strong>This issue can be solved by this workaround</strong></p> <p>I changed the autoscaler configuration to start with ONE pod:</p> <ul> <li>Min Pods = 1</li> <li>Max Pods = 15</li> </ul> <p>And then scale up to 10 pods (or replicas=10):</p> <ul> <li>Min Pods = 10</li> <li>Max Pods = 15</li> </ul> <p>It looks like <em>TcpDiscoveryKubernetesIpFinder</em> does not lock when it reads data from the Kubernetes service that maintains the list of IP addresses of all project pods. So when multiple pods are started simultaneously, multiple grids get created. But when ONE pod is started first and the grid is created with that pod, the newly autoscaled pods join this existing grid.</p> <p>PS No issues with ports 47100 or 47500; comms and discovery are working.</p>
<p>The OP confirmed in the comments that the problem is resolved:</p> <blockquote> <p>Thank you, let me know when TcpDiscoveryKubernetesIpFinder early adoption fix will be available. For now I've switched my Openshift micro-service IgniteConfiguration#discoverySpi to TcpDiscoveryJdbcIpFinder - which solved this issue (as it has this kind of lock, transactionIsolation=READ_COMMITTED).</p> </blockquote> <p>You can read more about <a href="https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/discovery/tcp/ipfinder/jdbc/TcpDiscoveryJdbcIpFinder.html" rel="nofollow noreferrer">TcpDiscoveryJdbcIpFinder here</a>.</p>
<p>I'm trying to pass toleration values into helm using terraform. But I have got different error messages.</p> <p>Default values of the <a href="https://artifacthub.io/packages/helm/victoriametrics/victoria-metrics-cluster?modal=values&amp;path=vmstorage.tolerations" rel="nofollow noreferrer">chart are here</a>.</p> <pre class="lang-yaml prettyprint-override"><code>... tolerations: [] ... </code></pre> <p>I use this code.</p> <pre><code>locals { victoria_tolerations = [{ &quot;key&quot; : &quot;k8s-app&quot;, &quot;operator&quot; : &quot;Equal&quot;, &quot;value&quot; : &quot;grafana&quot;, &quot;effect&quot; : &quot;NoSchedule&quot; }] } resource &quot;helm_release&quot; &quot;victoria_metrics&quot; { name = var.vm_release_name chart = var.vm_chart repository = var.vm_chart_repository_url version = var.vm_chart_version namespace = local.namespace_victoria max_history = var.max_history set { name = &quot;vmselect.tolerations&quot; value = jsonencode(local.victoria_tolerations) } } </code></pre> <p>And have got the error message:</p> <pre><code>Error: failed parsing key &quot;vmselect.tolerations&quot; with value [{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;k8s-app&quot;,&quot;operator&quot;:&quot;Equal&quot;,&quot;value&quot;:&quot;grafana&quot;}], key &quot;\&quot;key\&quot;:\&quot;k8s-app\&quot;&quot; has no value (cannot end with ,) </code></pre> <p>If I use this variable</p> <pre><code>victoria_tolerations = &lt;&lt;EOF - key: k8s-app operator: Equal value: grafana effect: NoSchedule EOF </code></pre> <p>I have got this error:</p> <pre><code>Error: unable to build kubernetes objects from release manifest: error validating &quot;&quot;: error validating data: ValidationError(Deployment.spec.template.spec.tolerations): invalid type for io.k8s.api.core.v1.PodSpec.tolerations: got &quot;string&quot;, expected &quot;array&quot; </code></pre> <p>P.S. Also, I tried to pass as <code>values</code>. 
This doesn't work in this case.</p> <pre><code>locals { victoria_values = { &quot;tolerations&quot; : [ { &quot;key&quot; : &quot;k8s-app&quot;, &quot;operator&quot; : &quot;Equal&quot;, &quot;value&quot; : &quot;grafana&quot;, &quot;effect&quot; : &quot;NoSchedule&quot; } ] } } </code></pre> <pre><code>resource &quot;helm_release&quot; &quot;victoria_metrics&quot; { name = var.vm_release_name ... values = [ yamlencode(local.victoria_values) ] } </code></pre>
<p>Try a <strong>dynamic</strong> block:</p> <pre><code>dynamic &quot;toleration&quot; {
  for_each = var.tolerations
  content {
    key      = toleration.value[&quot;key&quot;]
    operator = toleration.value[&quot;operator&quot;]
    value    = toleration.value[&quot;value&quot;]
    effect   = toleration.value[&quot;effect&quot;]
  }
}
</code></pre> <p><strong>variables</strong> file</p> <pre><code>variable &quot;tolerations&quot; {
  type        = list(map(string))
  default     = []
  description = &quot;Tolerations to apply to deployment&quot;
}
</code></pre> <p><strong>argument</strong></p> <pre><code>tolerations = [
  {
    key      = &quot;node.kubernetes.io/role&quot;,
    operator = &quot;Equal&quot;,
    value    = &quot;true&quot;,
    effect   = &quot;NoSchedule&quot;
  }
]
</code></pre>
<p>I don't understand why I can't get certificates on K8S using <strong>cert-manager</strong></p> <ul> <li><p>I installed cert-manager : <a href="https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml" rel="nofollow noreferrer">https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml</a></p> </li> <li><p>I created a ClusterIssuer</p> <pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: user@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: example-issuer-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
</code></pre> </li> <li><p>I created an ingress</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  rules:
  - host: mytest.example.fr
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp
            port:
              number: 80
  tls:
  - hosts:
    - mytest.example.fr
    secretName: letsencrypt-staging
</code></pre> </li> </ul> <p><a href="https://i.stack.imgur.com/IPWtg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IPWtg.png" alt="enter image description here" /></a></p> <p>But when I try to get a certificate I get 'no resources found' <a href="https://i.stack.imgur.com/0kXSe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0kXSe.png" alt="enter image description here" /></a></p> <p>Any idea?</p> <p>Thank you for your help</p>
<p>If you don't want to create a resource of <strong>kind: Certificate</strong> yourself, you can use</p> <pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: cluster-issuer-name
  namespace: development
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: harsh@example.com
    privateKeySecretRef:
      name: secret-name
    solvers:
    - http01:
        ingress:
          class: nginx-class-name
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx-class-name
    cert-manager.io/cluster-issuer: cluster-issuer-name
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: example-ingress
spec:
  rules:
  - host: sub.example.com
    http:
      .
      . #Path and service configs
      .
  tls:
  - hosts:
    - sub.example.com
    secretName: secret-name
</code></pre> <p>The <strong>ingress</strong> will call the <strong>ClusterIssuer</strong> and it will auto-create the <strong>certificate</strong> for you.</p> <p>Update the <strong>ingress</strong> resource as needed if you are on version <strong>1.18</strong> or <strong>above</strong>.</p> <p><strong>Notes</strong></p> <ul> <li><p>Make sure you are using the URL <code>https://acme-v02.api.letsencrypt.org/directory</code> in the ClusterIssuer, or else you will get a <strong>fake</strong> certificate in the browser.</p> </li> <li><p>For reference, you can read more here : <a href="https://stackoverflow.com/a/55183209/5525824" rel="nofollow noreferrer">https://stackoverflow.com/a/55183209/5525824</a></p> </li> <li><p>Also make sure your ingress is pointing to the proper <strong>ClusterIssuer</strong> if you have created a new one.</p> </li> <li><p>Also don't reuse the same <strong>privateKeySecretRef: name: secret-name</strong>; you need to delete that secret or use a <strong>new name</strong>, as the <strong>fake certificate</strong> is now stored in that secret.</p> </li> </ul>
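If it still does not work, walk down the chain of resources cert-manager creates from the ingress annotation; the last resource that exists usually carries the error in its events (the namespace name below is a placeholder):

```shell
kubectl get certificate,certificaterequest,order,challenge -A
kubectl describe certificate -n your-namespace
kubectl logs -n cert-manager deploy/cert-manager
```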