<p>I have a cronjob that keeps restarting, despite its <code>RestartPolicy</code> set to <code>Never</code>:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: cron-zombie-pod-killer spec: schedule: &quot;*/9 * * * *&quot; successfulJobsHistoryLimit: 1 jobTemplate: spec: template: metadata: name: cron-zombie-pod-killer spec: containers: - name: cron-zombie-pod-killer image: bitnami/kubectl command: - &quot;/bin/sh&quot; args: - &quot;-c&quot; - &quot;kubectl get pods --all-namespaces --field-selector=status.phase=Failed | awk '{print $2 \&quot; --namespace=\&quot; $1}' | xargs kubectl delete pod &gt; /dev/null&quot; serviceAccountName: pod-read-and-delete restartPolicy: Never </code></pre> <p>I would expect it to run every 9th minute, but that's not the case. What happens is that when there are pods to clean up (so, when there's something for the pod to do) it runs normally. Once everything is cleaned up, it keeps restarting -&gt; failing -&gt; starting, etc. in a loop every second.</p> <p>Is there something I need to do to tell k8s that the job has been successful, even if there's nothing to do (no pods to clean up)? What makes the job loop in restarts and failures?</p>
<p>That is by design. <code>restartPolicy</code> is not applied to the CronJob itself, but to the Pods it creates.</p> <p>If <code>restartPolicy</code> is set to <code>Never</code>, it will just create new Pods if the previous one failed. Setting it to <code>OnFailure</code> causes the Pod to be restarted instead, and prevents the stream of new Pods.</p> <p>This was discussed in this GitHub issue: <a href="https://github.com/kubernetes/kubernetes/issues/20255" rel="nofollow noreferrer">Job being constanly recreated despite RestartPolicy: Never #20255</a></p> <hr /> <p>Your <code>kubectl</code> command results in exit code <em>123</em> (any invocation exited with a non-zero status) if there are no Pods in the <em>Failed</em> state. This causes the Job to fail, resulting in constant restarts.</p> <p>You can fix that by forcing the <code>kubectl</code> command to exit with exit code <em>0</em>. Add <code>|| exit 0</code> to the end of it:</p> <pre class="lang-text prettyprint-override"><code>kubectl get pods --all-namespaces --field-selector=status.phase=Failed | awk '{print $2 \&quot; --namespace=\&quot; $1}' | xargs kubectl delete pod &gt; /dev/null || exit 0 </code></pre>
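<p>The exit-code behaviour is easy to reproduce without a cluster. A minimal sketch, using plain <code>sh</code> in place of <code>kubectl</code>:</p>

```shell
# xargs exits with status 123 when any invocation of the command it runs
# exits with a non-zero status (1-125) -- exactly what happens when
# "kubectl delete pod" is invoked with no pod names to delete.
rc=0
printf 'x\n' | xargs sh -c 'exit 1' _ || rc=$?
echo "without fallback: $rc"   # 123

# Appending "|| exit 0" (or "|| true") masks that failure,
# so the Job is recorded as successful.
printf 'x\n' | xargs sh -c 'exit 1' _ || true
echo "with fallback: $?"       # 0
```

<p>When <code>kubectl get pods</code> prints nothing, <code>awk</code> and <code>xargs</code> hand <code>kubectl delete pod</code> no pod names, it exits non-zero, and <code>xargs</code> reports 123; hence the fallback.</p>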
<p>I am learning the Google Cloud Platform, trying to implement my first project, and am getting lost in the tutorials. I am stuck trying to implement an nginx ingress. My ingress is stuck in CrashLoopBackoff and the logs show the following error.</p> <p>I know how to do this task with DockerCompose, but not here.</p> <p>Where do I start?</p> <pre><code>1#1: cannot load certificate &quot;/etc/letsencrypt/live/blah.com/fullchain.pem&quot;: BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/blah.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file) nginx: [emerg] cannot load certificate &quot;/etc/letsencrypt/live/blah.com/fullchain.pem&quot;: BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/blah.com/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file) </code></pre> <p>I am not yet certain this is helpful, but I have set up the Certificate Authority Service (<a href="https://cloud.google.com/certificate-authority-service/docs/best-practices" rel="nofollow noreferrer">https://cloud.google.com/certificate-authority-service/docs/best-practices</a>).</p>
<p>Instead of following the GCP CA setup, I would suggest using <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">cert-manager</a> with the ingress.</p> <p>cert-manager will obtain the <strong>TLS</strong> certificate from the <strong>Let's Encrypt CA</strong>, complete the verification, and store the verified certificate in a Kubernetes secret.</p> <p>You can then attach the secret to the ingress, per host, and use it.</p> <p><a href="https://cert-manager.io/docs/installation/" rel="nofollow noreferrer">Cert-manager installation</a></p> <p>YAML example:</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: cluster-issuer-name spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: harsh@example.com privateKeySecretRef: name: secret-name solvers: - http01: ingress: class: nginx-class-name --- apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx-class-name cert-manager.io/cluster-issuer: cluster-issuer-name nginx.ingress.kubernetes.io/rewrite-target: / name: example-ingress spec: rules: - host: sub.example.com http: paths: - path: /api backend: serviceName: service-name servicePort: 80 tls: - hosts: - sub.example.com secretName: secret-name </code></pre> <p>You can read this blog for reference: <a href="https://medium.com/@harsh.manvar111/kubernetes-nginx-ingress-and-cert-manager-ssl-setup-c82313703d0d" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/kubernetes-nginx-ingress-and-cert-manager-ssl-setup-c82313703d0d</a></p>
<p>I am new to Kubernetes and AWS and I have a problem. I am trying to run parallel Kubernetes jobs on an EKS cluster. How can I get the environment variable JOB_COMPLETION_INDEX? I have tested my Java code before with Minikube &quot;locally&quot;, there everything works fine. But when I switch to the EKS cluster, System.getenv(&quot;JOB_COMPLETION_INDEX&quot;) = null. What am I missing? What am I doing wrong?</p> <p>I used EKS version 1.21.2.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: calculator labels: jobgroup: calculator spec: parallelism: 2 completions: 4 completionMode: Indexed template: metadata: name: calculator spec: containers: - name: calculater image: fbaensch/calculator_test:latest imagePullPolicy: Always restartPolicy: Never </code></pre>
<p>Indexed Job completion mode (<code>completionMode: Indexed</code>, which is what provides the <code>JOB_COMPLETION_INDEX</code> environment variable) is a beta feature in Kubernetes v1.22. On v1.21 it is still alpha behind the <code>IndexedJob</code> feature gate, and alpha feature gates cannot be enabled on EKS, so it is not available on EKS v1.21.x.</p>
<p>I need to create a Kubernetes job that will run the below script in the mongo shell:</p> <pre><code>var operations = []; db.product.find().forEach(function(doc) { var documentLink = doc.documentLink; var operation = { updateMany :{ &quot;filter&quot; : {&quot;_id&quot; : doc._id}, &quot;update&quot; : {$set:{&quot;documentLinkMap.en&quot;:documentLink,&quot;documentLinkMap.de&quot;:&quot;&quot;}, $unset: {documentLink:&quot;&quot;,&quot;descriptionMap.tr&quot;:&quot;&quot;,&quot;news.tr&quot;:&quot;&quot;,&quot;descriptionInternal.tr&quot;:&quot;&quot;,&quot;salesDescription.tr&quot;:&quot;&quot;,&quot;salesInternal.tr&quot;:&quot;&quot;,&quot;deliveryDescription.tr&quot;:&quot;&quot;,&quot;deliveryInternal.tr&quot;:&quot;&quot;,&quot;productRoadMapDescription.tr&quot;:&quot;&quot;,&quot;productRoadMapInternal.tr&quot;:&quot;&quot;,&quot;technicalsAndIntegration.tr&quot;:&quot;&quot;,&quot;technicalsAndIntegrationInternal.tr&quot;:&quot;&quot;,&quot;versions.$[].descriptionMap.tr&quot;:&quot;&quot;,&quot;versions.$[].releaseNoteMap.tr&quot;:&quot;&quot;,&quot;versions.$[].artifacts.$[].descriptionMap.tr&quot;:&quot;&quot;,&quot;versions.$[].artifacts.$[].artifactNotes.tr&quot;:&quot;&quot;}}}}; operations.push(operation); }); operations.push( { ordered: true, writeConcern: { w: &quot;majority&quot;, wtimeout: 5000 } }); db.product.bulkWrite(operations); </code></pre> <p>I need a sample of what that job would look like. Should I create a persistent volume and claim, or is it possible to run this job without a persistent volume? I need to run this once and then remove it.</p>
<p>You can solve this much more easily with a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer"><code>configMap</code></a>: mount the <code>configMap</code> as a volume and it will be resolved to a file, so no persistent volume is needed.</p> <p>Below is an example of how to proceed (note: you will need to use a proper image for it, as well as make some other changes for how the mongo shell works):</p> <ol> <li><p>Create a <code>configMap</code> from the file. This can be done by running:</p> <pre><code>$ kubectl create cm mongoscript-cm --from-file=mongoscript.js configmap/mongoscript-cm created </code></pre> <p>You can check that your file is stored inside by running:</p> <pre><code>$ kubectl describe cm mongoscript-cm </code></pre> </li> <li><p>Create a job with a volume mount from the configMap (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-template" rel="nofollow noreferrer">the spec template is the same as the one used in pods</a>):</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: mongojob spec: template: spec: containers: - name: mongojob image: ubuntu # for testing purposes, you need to use appropriate one command: ['bin/bash', '-c', 'echo STARTED ; cat /opt/mongoscript.js ; sleep 120 ; echo FINISHED'] # same for command, that's for demo purposes volumeMounts: - name: mongoscript mountPath: /opt # where to mount the file volumes: - name: mongoscript configMap: name: mongoscript-cm # reference to previously created configmap restartPolicy: OnFailure # required for jobs </code></pre> </li> <li><p>Check how it looks inside the pod</p> <p>Connect to the pod:</p> <pre><code>$ kubectl exec -it mongojob--1-8w4ml -- /bin/bash </code></pre> <p>Check the file is present:</p> <pre><code># ls /opt mongoscript.js </code></pre> <p>Check its content:</p> <pre><code># cat /opt/mongoscript.js var operations = []; db.product.find().forEach(function(doc) { var documentLink = doc.documentLink; var operation = { updateMany :{ 
&quot;filter&quot; : {&quot;_id&quot; : doc._id}, &quot;update&quot; : {$set:{&quot;documentLinkMap.en&quot;:documentLink,&quot;documentLinkMap.de&quot;:&quot;&quot;}, $unset: {documentLink:&quot;&quot;,&quot;descriptionMap.tr&quot;:&quot;&quot;,&quot;news.tr&quot;:&quot;&quot;,&quot;descriptionInternal.tr&quot;:&quot;&quot;,&quot;salesDescription.tr&quot;:&quot;&quot;,&quot;salesInternal.tr&quot;:&quot;&quot;,&quot;deliveryDescription.tr&quot;:&quot;&quot;,&quot;deliveryInternal.tr&quot;:&quot;&quot;,&quot;productRoadMapDescription.tr&quot;:&quot;&quot;,&quot;productRoadMapInternal.tr&quot;:&quot;&quot;,&quot;technicalsAndIntegration.tr&quot;:&quot;&quot;,&quot;technicalsAndIntegrationInternal.tr&quot;:&quot;&quot;,&quot;versions.$[].descriptionMap.tr&quot;:&quot;&quot;,&quot;versions.$[].releaseNoteMap.tr&quot;:&quot;&quot;,&quot;versions.$[].artifacts.$[].descriptionMap.tr&quot;:&quot;&quot;,&quot;versions.$[].artifacts.$[].artifactNotes.tr&quot;:&quot;&quot;}}}}; operations.push(operation); }); operations.push( { ordered: true, writeConcern: { w: &quot;majority&quot;, wtimeout: 5000 } }); db.product.bulkWrite(operations); </code></pre> </li> </ol>
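<p>For an actual run, you would swap the demo image and command for a mongo one. A sketch (the image tag, MongoDB host and database name are assumptions to adjust):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mongojob
spec:
  ttlSecondsAfterFinished: 120   # optional: auto-delete the Job after it finishes
  template:
    spec:
      containers:
        - name: mongojob
          image: mongo:4.4       # assumed image tag
          # the mongo shell can execute a .js file directly;
          # host and database name here are placeholders
          command: ['mongo', '--host', 'mongodb.default.svc.cluster.local', 'mydb', '/opt/mongoscript.js']
          volumeMounts:
            - name: mongoscript
              mountPath: /opt
      volumes:
        - name: mongoscript
          configMap:
            name: mongoscript-cm
      restartPolicy: OnFailure
```

<p>Since you only need to run this once, <code>ttlSecondsAfterFinished</code> removes the Job automatically afterwards (on clusters where the TTL controller is available); otherwise just run <code>kubectl delete job mongojob</code> when it is done.</p>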
<p>I've setup a very <strong>simple local kubernetes cluster</strong> for development purposes, and for that I aim to pull a docker image for my pods from ECR.</p> <p>Here's the code</p> <pre><code> terraform { required_providers { kubernetes = { source = &quot;hashicorp/kubernetes&quot; version = &quot;&gt;= 2.0.0&quot; } } } provider &quot;kubernetes&quot; { config_path = &quot;~/.kube/config&quot; } resource &quot;kubernetes_deployment&quot; &quot;test&quot; { metadata { name = &quot;test-deployment&quot; namespace = kubernetes_namespace.test.metadata.0.name } spec { replicas = 2 selector { match_labels = { app = &quot;MyTestApp&quot; } } template { metadata { labels = { app = &quot;MyTestApp&quot; } } spec { container { image = &quot;public ECR URL&quot; &lt;--- this times out name = &quot;myTestPod&quot; port { container_port = 4000 } } } } } } </code></pre> <p>I've set that <strong>ECR repo to public</strong> and made sure that <strong>it's accessible</strong>. My challenge is that in a normal scenario <strong>you have to login to ECR</strong> in order to retrieve the image, and I do not know <strong>how to achieve that in Terraform</strong>. So on 'terraform apply', it times out and fails.</p> <p>I read the documentation on <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecr_repository" rel="nofollow noreferrer">aws_ecr_repository</a>, <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecr_authorization_token" rel="nofollow noreferrer">aws_ecr_authorization_token</a>,<a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest" rel="nofollow noreferrer">Terraform EKS module</a> and <a href="https://www.terraform.io/language/resources/provisioners/local-exec" rel="nofollow noreferrer">local-exec</a>, but none of them seem to have a solution for this.</p> <p>Achieving this in a Gitlab pipeline is fairly easy, but how can one achieve this in Terraform? 
how can I pull an image from a public <strong>ECR repo for my local Kubernetes</strong> cluster?</p>
<p>After a while, I figured out the cleanest way to achieve this.</p> <p><strong>First</strong>, retrieve your <strong>ECR authorization token</strong> data:</p> <pre><code>data &quot;aws_ecr_authorization_token&quot; &quot;token&quot; { } </code></pre> <p><strong>Second</strong>, create a <strong>secret for your kubernetes</strong> cluster:</p> <pre><code>resource &quot;kubernetes_secret&quot; &quot;docker&quot; { metadata { name = &quot;docker-cfg&quot; namespace = kubernetes_namespace.test.metadata.0.name } data = { &quot;.dockerconfigjson&quot; = jsonencode({ auths = { &quot;${data.aws_ecr_authorization_token.token.proxy_endpoint}&quot; = { auth = &quot;${data.aws_ecr_authorization_token.token.authorization_token}&quot; } } }) } type = &quot;kubernetes.io/dockerconfigjson&quot; } </code></pre> <p>Bear in mind that the example in the <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret" rel="nofollow noreferrer">docs</a> base64 encodes the username and password. 
The exported attribute <strong>authorization_token</strong> does the same thing.</p> <p><strong>Third</strong>, once the secret is created, you can then have your pods use it as the <strong>image_pull_secrets</strong>:</p> <pre><code>resource &quot;kubernetes_deployment&quot; &quot;test&quot; { metadata { name = &quot;MyTestApp&quot; namespace = kubernetes_namespace.test.metadata.0.name } spec { replicas = 2 selector { match_labels = { app = &quot;MyTestApp&quot; } } template { metadata { labels = { app = &quot;MyTestApp&quot; } } spec { image_pull_secrets { name = &quot;docker-cfg&quot; } container { image = &quot;test-image-URL&quot; name = &quot;test-image-name&quot; image_pull_policy = &quot;Always&quot; port { container_port = 4000 } } } } } depends_on = [ kubernetes_secret.docker, ] } </code></pre> <p><strong>Gotcha</strong>: the token <strong>expires after 12 hours</strong>, so you should either write a bash script that updates your secret in the corresponding namespace, or you should write a <a href="https://www.terraform.io/language/resources/provisioners/syntax" rel="nofollow noreferrer">Terraform provisioner</a> that gets triggered every time the token expires.</p> <p>I hope this was helpful.</p>
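<p>For reference, here is a sketch of the payload that ends up in that secret (the registry URL and password below are made-up placeholders). ECR's <code>authorization_token</code> is already the base64 encoding of <code>AWS:&lt;password&gt;</code>, which is why it can be dropped straight into the <code>auth</code> field:</p>

```shell
# Recreate the kubernetes.io/dockerconfigjson payload by hand.
# "AWS" is the fixed username for ECR tokens; the password is a placeholder.
AUTH=$(printf 'AWS:examplepassword' | base64)
printf '{"auths":{"https://123456789012.dkr.ecr.us-east-1.amazonaws.com":{"auth":"%s"}}}\n' "$AUTH"
```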
<p>In my project, I need to test that Guaranteed application pods evict any dummy application pods that are running. How do I ensure that application pods always have the highest priority?</p>
<p>The answer provided by <a href="https://stackoverflow.com/users/6309601/p">P....</a> is very good and useful. With <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="nofollow noreferrer">Pod Priority and Preemption</a> you can achieve what you are after.</p> <p>Apart from that, you can also use dedicated cloud solutions. Take the <a href="https://cloud.google.com/blog/products/gcp/get-the-most-out-of-google-kubernetes-engine-with-priority-and-preemption" rel="nofollow noreferrer">Google Cloud example</a>:</p> <blockquote> <p>Before priority and preemption, Kubernetes pods were scheduled purely on a first-come-first-served basis, and ran to completion (or forever, in the case of pods created by something like a Deployment or StatefulSet). This meant less important workloads could block more important, later-arriving, workloads from running—not the desired effect. Priority and preemption solves this problem.</p> <p>Priority and preemption is valuable in a number of scenarios. For example, imagine you want to cap autoscaling to a maximum cluster size to control costs, or you have clusters that you can’t grow in real-time (e.g., because they are on-premises and you need to buy and install additional hardware). Or you have high-priority cloud workloads that need to scale up faster than the cluster autoscaler can add nodes. 
In short, priority and preemption lead to better resource utilization, lower costs and better service levels for critical applications.</p> </blockquote> <p>Additional guides for other clouds:</p> <ul> <li><a href="https://cloud.ibm.com/docs/containers?topic=containers-pod_priority" rel="nofollow noreferrer">IBM cloud</a></li> <li><a href="https://www.eksworkshop.com/intermediate/201_resource_management/pod-priority/" rel="nofollow noreferrer">AWS cloud</a></li> <li><a href="https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-advanced-scheduler" rel="nofollow noreferrer">Azure cloud</a></li> <li><a href="https://docs.openshift.com/container-platform/4.7/nodes/pods/nodes-pods-priority.html" rel="nofollow noreferrer">RedHat Openshift</a></li> </ul> <p>See also <a href="https://mohaamer5.medium.com/kubernetes-pod-priority-and-preemption-943c58aee07d" rel="nofollow noreferrer">this useful tutorial</a>.</p>
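<p>As a concrete starting point, here is a sketch of a <code>PriorityClass</code> and a Pod that uses it (the names and the priority value are arbitrary; dummy pods without a <code>priorityClassName</code> default to priority 0 and can therefore be preempted):</p>

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-apps
value: 1000000                  # higher value = higher priority
globalDefault: false
description: "For application pods that may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  priorityClassName: high-priority-apps
  containers:
    - name: app
      image: nginx              # placeholder image
      resources:                # requests == limits => Guaranteed QoS class
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```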
<p>Kubernetes has support for Pod load-balancing and session affinity through its <code>kube-proxy</code>. Kubernetes' kube-proxy is essentially an L4 load balancer, so we cannot rely on it to load-balance L7 transport, e.g. multiple live gRPC connections or load-balancing based on HTTP headers, cookies, etc.</p> <p>A Service Mesh implementation such as <a href="https://istio.io/" rel="nofollow noreferrer">istio</a> can handle these patterns at the L7 level, including gRPC. But I always thought that a Service Mesh is just another layer on top of Kubernetes with additional capabilities (encrypted traffic, blue/green deployments, etc.). My assumption has always been that Kubernetes applications should be able to work both on vanilla Kubernetes without a Mesh (e.g. for development/testing) and with a Mesh on. Adding this advanced traffic management at L7 breaks this assumption: I won't be able to work on vanilla Kubernetes anymore; I will be tied to a specific implementation of the Istio data plane (Envoy).</p> <p>Please let me know whether my assumption is correct, and why or why not. There's not much information about this type of separation of concerns on the internet.</p>
<p>Let me refer first to the following statement of yours:</p> <blockquote> <p>My assumption always was that Kubernetes applications should be able to work on both vanilla Kubernetes without Mesh (e.g. for development/testing) or with a Mesh on. Adding this advanced traffic management on L7 breaks this assumption.</p> </blockquote> <p>I have a different view on that: Service Meshes are transparent to the application, so they don't break anything in it; they just add extra (network, security, monitoring) functions at no cost (well, the cost is a fairly complex configuration from the Mesh operator's perspective). A Service Mesh like Istio doesn't need to occupy all K8s namespaces, so you can still have mixed types of workloads in your cluster (with and without proxies). Speaking of Istio, to enable full interoperability between them (mixed workloads) you may combine two of its features:</p> <ul> <li>Peer authentication set to PERMISSIVE, so that workloads without sidecars (proxies) can accept both mutual TLS and plain-text traffic.</li> <li>Manual protocol selection, e.g. if you prefer that your app speaks raw TCP instead of the application protocol determined by Envoy itself (e.g. HTTP), to avoid extra decorations injected by the proxy into intercepted requests.</li> </ul> <p>Alternatively, you can write your own custom tcp_proxy EnvoyFilter to use Envoy as an L4 network proxy.</p> <p>References:</p> <p><a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/envoy-filter/</a> <a href="https://istio.io/latest/docs/concepts/security/#peer-authentication" rel="nofollow noreferrer">https://istio.io/latest/docs/concepts/security/#peer-authentication</a> <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/" rel="nofollow noreferrer">https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/</a></p>
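<p>For illustration, the PERMISSIVE peer authentication mentioned above can be set mesh-wide with a single resource (a sketch; the root namespace is typically <code>istio-system</code>):</p>

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace, so this applies mesh-wide
spec:
  mtls:
    mode: PERMISSIVE        # accept both mTLS and plain-text traffic
```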
<p>I need to create a <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer"><code>NetworkPolicy</code></a> for my pod which, from the network aspect, needs access to <strong>only one specific endpoint</strong> outside the cluster, only this endpoint.</p> <p>The endpoint looks like the following: <a href="https://140.224.232.236:8088" rel="nofollow noreferrer">https://140.224.232.236:8088</a></p> <p>Before applying the following network policy I've exec'd into the pod (the image is based on alpine) and run <code>ping www.google.com</code>, and it works as expected.</p> <p>However, when I apply the network policy and try to <code>ping google.com</code> I get <code> ping: bad address 'www.google.com'</code>, but when I ping the IP, like</p> <ol> <li><code>ping 140.224.232.236</code></li> <li><code>ping 140.224.232.236:8080</code></li> </ol> <p>it gets stuck, whereas before I was able to see something like this</p> <pre><code>64 bytes from 140.224.232.236: seq=518 ttl=245 time=137.603 ms 64 bytes from 140.224.232.236: seq=519 ttl=245 time=137.411 ms 64 bytes from 140.224.232.236: seq=520 ttl=245 time=137.279 ms 64 bytes from 140.224.232.236: seq=521 ttl=245 time=137.138 ms .... </code></pre> <p>Now it's just stuck on this; any idea?</p> <pre><code>ping 140.224.232.236 PING 140.224.232.236 (140.224.232.236): 56 data bytes </code></pre> <p>and nothing more. What does it mean?</p> <p>What I did is the following</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: dev spec: podSelector: matchLabels: app.kubernetes.io/name: foo policyTypes: - Egress egress: - to: - ipBlock: cidr: 140.224.232.236/32 ports: - protocol: TCP port: 8088 </code></pre> <ol> <li>I've applied it in the same namespace as the pod.</li> <li>the pod selector label was taken from the pod to which I want to apply the network policy, which has the following: 
<code>app.kubernetes.io/name: foo</code></li> </ol>
<p>You defined egress with ports:</p> <pre><code>ports: - protocol: TCP port: 8088 </code></pre> <p>which means allowing connections to IP 140.224.232.236 on <strong>TCP port</strong> 8088 only.</p> <p>However, <code>ping</code> (ICMP) does not know about ports, so ping will fail every time.</p> <p><a href="https://i.stack.imgur.com/1xH4D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1xH4D.png" alt="enter image description here" /></a></p> <ol> <li><p><code>ping</code> is a networking utility used to test the reachability of a remote host over <strong>Internet Protocol (IP)</strong>. The <code>ping</code> utility does so by sending out a series of <strong>Internet Control Message Protocol (ICMP)</strong> echo request packets to the remote host.</p> </li> <li><p><em><strong>You cannot probe a specific port with the ping command</strong></em> because ICMP belongs to the <strong>layer-3 IP layer</strong>, not the <strong>layer-4 transport layer</strong> (e.g., TCP/UDP).</p> </li> <li><p>In order to probe a specific port of a remote host, you need to use <strong>layer-4 transport protocols</strong>, which have a notion of port numbers, with command-line tools like:</p> </li> </ol> <ul> <li><code>paping</code> 140.224.232.236 -p 8088 -c 3</li> <li><code>nmap</code> -p 80 -sT <a href="http://www.google.com" rel="nofollow noreferrer">www.google.com</a></li> <li><code>tcpping</code> <a href="http://www.google.com" rel="nofollow noreferrer">www.google.com</a></li> </ul>
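<p>If you want to verify the egress rule from inside the pod without installing extra tools, you can probe the TCP port directly. A minimal sketch using bash's <code>/dev/tcp</code> (note: this is a bash feature, so on an alpine image you would need <code>apk add bash</code> first):</p>

```shell
# Attempt a TCP connection; succeeds only if the port is reachable.
probe() {
  timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed
}

probe 140.224.232.236 8088   # allowed by the policy; prints "open" if reachable
probe 127.0.0.1 1            # a port that is almost certainly closed
```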
<p>Can traefik act as a reverse proxy for some external endpoint? Like nginx's <code>proxy_pass</code> for a specific location. For example, I'd like to perform transparent reverse-proxying to <a href="https://app01.host.com" rel="nofollow noreferrer">https://app01.host.com</a>, which is in another datacenter</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: backend01-ingressroute-app spec: entryPoints: - websecure routes: - match: Host(`backend01.host.local`) &amp;&amp; PathPrefix(`/app`) kind: Rule services: .... </code></pre> <p>backend01.host.local/app -&gt; <a href="https://app01.host.com" rel="nofollow noreferrer">https://app01.host.com</a>? But what do I need to specify as &quot;services&quot; here to achieve that?</p>
<p>I found that ExternalName services are disabled by default when using traefik with helm. Note that this has to be set for <code>kubernetesCRD</code> and for <code>kubernetesIngress</code> separately. This is not explained well in the documentation: <a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroute" rel="nofollow noreferrer">https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroute</a></p> <p>traefik helm values file:</p> <pre><code>... # # Configure providers # providers: kubernetesCRD: enabled: true allowCrossNamespace: false allowExternalNameServices: false # &lt;- This needs to be true # ingressClass: traefik-internal # labelSelector: environment=production,method=traefik namespaces: [] # - &quot;default&quot; kubernetesIngress: enabled: true allowExternalNameServices: false # &lt;- This needs to be true # labelSelector: environment=production,method=traefik namespaces: ... </code></pre>
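<p>With ExternalName services enabled, the <code>services</code> block missing from the question can point at an <code>ExternalName</code> Service. A sketch (the service name is illustrative):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app01-external
spec:
  type: ExternalName
  externalName: app01.host.com   # the external host in the other datacenter
  ports:
    - port: 443
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: backend01-ingressroute-app
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`backend01.host.local`) && PathPrefix(`/app`)
      kind: Rule
      services:
        - name: app01-external
          port: 443
          scheme: https          # the upstream speaks HTTPS
```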
<p>Concerning the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale" rel="nofollow noreferrer">Kubernetes Horizontal Autoscaler</a>, are there any metrics related to the number of changes between certain time periods?</p>
<p>Kubernetes does not provide such metrics, but you can get <code>events</code> for a k8s resource.</p> <blockquote> <p>An event in Kubernetes is an object in the framework that is automatically generated in response to changes with other resources—like nodes, pods, or containers.</p> </blockquote> <p>The simplest way to get events for an HPA:</p> <pre class="lang-yaml prettyprint-override"><code>$ kubectl get events | grep HorizontalPodAutoscaler 7m5s Normal SuccessfulRescale HorizontalPodAutoscaler New size: 8; reason: cpu resource utilization (percentage of request) above target 3m20s Normal SuccessfulRescale HorizontalPodAutoscaler New size: 10; reason: </code></pre> <p>or</p> <pre><code>$ kubectl describe hpa &lt;yourHpaName&gt; </code></pre> <p><strong>But</strong> Kubernetes events are deleted by default after 1 hour (that is the default time-to-live; higher values might require more resources for <code>etcd</code>). Therefore <strong>you must watch for and collect important events as they happen</strong>.</p> <p>To do this you can use, for example:</p> <ul> <li><a href="https://github.com/bitnami-labs/kubewatch" rel="nofollow noreferrer">KubeWatch</a> is a great open-source tool for watching and streaming K8s events to third-party tools and webhooks.</li> <li><a href="https://github.com/heptiolabs/eventrouter?ref=thechiefio" rel="nofollow noreferrer">EventRouter</a> is another great open-source tool for collecting Kubernetes events. It is effortless to set up and aims to stream Kubernetes events to multiple sources, or <em>sinks</em> as they are referred to in its documentation. However, just like KubeWatch, it also does not offer querying or persistence features. 
You need to connect it with a third-party storage and analysis tool for a full-fledged experience.</li> <li>almost any logging tool for k8s.</li> </ul> <p><strong>Also</strong>, you can use the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">official Kubernetes API library</a> to develop some simple app for catching events from HPA. There is a good example of how to develop a simple monitoring app for k8s by yourself <a href="https://stackoverflow.com/a/63774000/17739440">in this answer</a>. They monitor pods' statuses there, but you can get a good idea of developing the app you need.</p>
<h1>Overview</h1> <p>I am writing a Kubernetes controller for a VerticalScaler CRD that can vertically scale a Deployment in the cluster. My spec references an existing Deployment object in the cluster. I'd like to enqueue a reconcile request for a VerticalScaler if the referenced Deployment is modified or deleted.</p> <pre class="lang-golang prettyprint-override"><code>// VerticalScalerSpec defines the desired state of VerticalScaler. type VerticalScalerSpec struct { // Name of the Deployment object which will be auto-scaled. DeploymentName string `json:&quot;deploymentName&quot;` } </code></pre> <h2>Question</h2> <p>Is there a good way to watch an arbitrary resource when that resource is not owned by the controller, and the resource does not hold a reference to the object whose resource is managed by the controller?</p> <h1>What I Found</h1> <p>I think this should be configured in the Kubebuilder-standard <a href="https://sdk.operatorframework.io/docs/building-operators/golang/tutorial/#resources-watched-by-the-controller" rel="noreferrer">SetupWithManager</a> function for the controller, though it's possible a watch could be set up someplace else.</p> <pre class="lang-golang prettyprint-override"><code>// SetupWithManager sets up the controller with the Manager. func (r *VerticalScalerReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&amp;v1beta1.VerticalScaler{}). Complete(r) } </code></pre> <p>I've been searching for a good approach in <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/builder" rel="noreferrer">controller-runtime/pkg/builder</a> and the Kubebuilder docs. 
The closest example I found was the section &quot;Watching Arbitrary Resources&quot; in the <a href="https://book-v1.book.kubebuilder.io/beyond_basics/controller_watches.html" rel="noreferrer">kubebuilder-v1 docs on watches</a>:</p> <blockquote> <p>Controllers may watch arbitrary Resources and map them to a key of the Resource managed by the controller. Controllers may even map an event to multiple keys, triggering Reconciles for each key.</p> <p>Example: To respond to cluster scaling events (e.g. the deletion or addition of Nodes), a Controller would watch Nodes and map the watch events to keys of objects managed by the controller.</p> </blockquote> <p>My challenge is how to map the Deployment to the depending VerticalScaler(s), since this information is not present on the Deployment. I could <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client#hdr-Indexing" rel="noreferrer">create an index</a> on the VerticalScaler and look up depending VerticalScalers from the <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/handler#MapFunc" rel="noreferrer">MapFunc</a> using a field selector, but it doesn't seem like I should do I/O inside a MapFunc. If the list-Deployments operation failed I would be unable to retry or re-enqueue the change.</p> <p>I have this code working using this imperfect approach:</p> <pre class="lang-golang prettyprint-override"><code>const deploymentNameIndexField = &quot;.metadata.deploymentName&quot; // SetupWithManager sets up the controller with the Manager. func (r *VerticalScalerReconciler) SetupWithManager(mgr ctrl.Manager) error { if err := r.createIndices(mgr); err != nil { return err } return ctrl.NewControllerManagedBy(mgr). For(&amp;v1beta1.VerticalScaler{}). Watches( &amp;source.Kind{Type: &amp;appsv1.Deployment{}}, handler.EnqueueRequestsFromMapFunc(r.mapDeploymentToRequests)). 
Complete(r) } func (r *VerticalScalerReconciler) createIndices(mgr ctrl.Manager) error { return mgr.GetFieldIndexer().IndexField( context.Background(), &amp;v1beta1.VerticalScaler{}, deploymentNameIndexField, func(object client.Object) []string { vs := object.(*v1beta1.VerticalScaler) if vs.Spec.DeploymentName == &quot;&quot; { return nil } return []string{vs.Spec.DeploymentName} }) } func (r *VerticalScalerReconciler) mapDeploymentToRequests(object client.Object) []reconcile.Request { deployment := object.(*appsv1.Deployment) ctx, cancel := context.WithTimeout(context.Background(), time.Minute) defer cancel() var vsList v1beta1.VerticalScalerList if err := r.List(ctx, &amp;vsList, client.InNamespace(deployment.Namespace), client.MatchingFields{deploymentNameIndexField: deployment.Name}, ); err != nil { r.Log.Error(err, &quot;could not list VerticalScalers; change to Deployment will not be reconciled&quot;, &quot;deployment&quot;, deployment.Name, &quot;namespace&quot;, deployment.Namespace) return nil } requests := make([]reconcile.Request, len(vsList.Items)) for i, vs := range vsList.Items { requests[i] = reconcile.Request{ NamespacedName: client.ObjectKeyFromObject(&amp;vs), } } return requests } </code></pre> <h2>Other Considered Approaches</h2> <p>Just to cover my bases, I should mention that I don't want to set the VerticalScaler as an owner of the Deployment because I don't want to garbage collect the Deployment if the VerticalScaler is deleted.
Even a non-controller ownerReference causes garbage collection.</p> <p>I also considered using a <a href="https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/source#Channel" rel="noreferrer">Channel</a> watcher, but the docs say that is for events originating from outside the cluster, which this is not.</p> <p>I could also create a separate controller for the Deployment, and update some field on the depending VerticalScaler(s) from that controller's Reconcile function, but then I would also need a finalizer to trigger a VerticalScaler reconcile when a Deployment is deleted, and that seems like overkill.</p> <p>I could have my VerticalScaler reconciler add an annotation to the Deployment, but the Deployment's annotations could be overwritten if it is managed by, for example, Helm. That approach also would not cause a reconcile request in the case where the VerticalScaler is created before the Deployment.</p>
<p>As an alternative to <code>EnqueueRequestsFromMapFunc</code>, you can use:</p> <pre class="lang-golang prettyprint-override"><code>ctrl.NewControllerManagedBy(mgr). For(&amp;v1beta1.VerticalScaler{}). Watches( &amp;source.Kind{Type: &amp;appsv1.Deployment{}}, handler.Funcs{CreateFunc: r.CreateFunc})... </code></pre> <p>The handler's callback functions that you'd define, such as the <code>CreateFunc</code> above, have the signature <code>func(event.CreateEvent, workqueue.RateLimitingInterface)</code>, giving you direct access to the workqueue. By default, if you don't call <code>Done()</code> on a workqueue item, it will be requeued with exponential backoff. This should allow you to handle errors in I/O operations.</p>
<p>I am running a <strong>3-node Kubernetes cluster with Flannel as CNI</strong>. I used kubeadm to set up the cluster, and the version is 1.23.</p> <p>My pods need to talk to external hosts using hostnames, but there is no DNS server for those hosts. For that, I have added their entries to /etc/hosts on each node in the cluster. The nodes can resolve these hosts, but the Pods cannot.</p> <p>I searched for this problem on the internet, and there are suggestions to use HostAliases or to update the /etc/hosts file inside the container. My problem is that the list of hosts is large, and it's not feasible to maintain it in the yaml file.</p> <p>I also looked for an inbuilt Kubernetes flag to make Pods look up entries in the Node's /etc/hosts, but couldn't find one.</p> <p>So my questions are -</p> <ol> <li>Why can't the pods running on a node resolve hosts present in that node's /etc/hosts file?</li> <li>Is there a way to set up a local DNS server and ask all the Pods to query this DNS server for these specific host resolutions?</li> </ol> <p>Any other suggestions or workarounds are also welcome.</p>
<p>A container's environment is isolated from other containers and machines (including its host machine), and the same goes for /etc/hosts.</p> <p>If you are using CoreDNS (the default internal DNS), you can easily add extra hosts information by modifying its configMap.</p> <p>Open the configMap with <code>kubectl edit configmap coredns -n kube-system</code> and edit it so that it includes a <code>hosts</code> section:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 data: Corefile: | .:53 { ... kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } ### Add the following section ### hosts { {ip1} {hostname1} {ip2} {hostname2} ... fallthrough } prometheus :9153 ... } </code></pre> <p>The settings will be reloaded within a few minutes, after which all the pods can resolve the hosts described in the configMap.</p>
<p>I have a multi-node cluster setup. There are Kubernetes network policies defined for the pods in the cluster. I can access the services or pods using their clusterIP/podIP only from the node where the pod resides. For services backed by multiple pods, I cannot access the service from the node at all (I guess the service only works when it happens to direct the traffic to a pod residing on the same node I am calling from).</p> <p>Is this the expected behavior? Is it a Kubernetes limitation or a security feature? For debugging etc., we might need to access the services from the node. How can I achieve it?</p>
<p>No, it is not the expected behavior for Kubernetes. Pods should be accessible from all the nodes inside the same cluster through their internal IPs. A <code>ClusterIP</code> service exposes the service on a cluster-internal IP, making it reachable from within the cluster - this is the default for all the service types, as stated in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Kubernetes documentation</a>.</p> <p>Services are <em>not</em> node-specific, and they can point to a pod regardless of where it runs in the cluster at any given moment in time. Also make sure that you are using the cluster-internal <code>port:</code> while trying to reach the services. If you can still connect to the pod only from the node where it is running, you might need to check if something is wrong with your networking - e.g., check if UDP ports are blocked.</p> <p><strong>EDIT</strong>: Concerning network policies - by default, a pod is non-isolated for both egress and ingress, i.e. if no <code>NetworkPolicy</code> resource is defined for the pod in Kubernetes, all traffic is allowed to/from this pod - the so-called <code>default-allow</code> behavior. Basically, without network policies all pods are allowed to communicate with all other pods/services in the same cluster, as described above. If one or more <code>NetworkPolicy</code> is applied to a particular pod, it will reject all traffic that is not explicitly allowed by those policies (meaning, a <code>NetworkPolicy</code> that both selects the pod and has &quot;Ingress&quot;/&quot;Egress&quot; in its policyTypes) - the <code>default-deny</code> behavior.</p> <p>What is <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">more</a>:</p> <blockquote> <p>Network policies do not conflict; they are additive.
If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow.</p> </blockquote> <p>So yes, it is expected behavior for Kubernetes <code>NetworkPolicy</code> - when a pod is isolated for ingress/egress, the only allowed connections into/from the pod are those from the pod's node and those allowed by the connection list of <code>NetworkPolicy</code> defined. To be compatible with it, <a href="https://projectcalico.docs.tigera.io/security/calico-network-policy" rel="nofollow noreferrer">Calico network policy</a> follows the same behavior for Kubernetes pods. <code>NetworkPolicy</code> is applied to pods within a particular namespace - either the same or different with the help of the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="nofollow noreferrer">selectors</a>.</p> <p>As for node specific policies - nodes can't be targeted by their Kubernetes identities, instead CIDR notation should be used in form of <code>ipBlock</code> in pod/service <code>NetworkPolicy</code> - particular <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="nofollow noreferrer">IP ranges</a> are selected to allow as ingress sources or egress destinations for pod/service.</p> <p>Whitelisting Calico IP addresses for each node might seem to be a valid option in this case, please have a look at the similar issue described <a href="https://discuss.kubernetes.io/t/how-do-we-write-an-ingress-networkpolicy-object-in-kubernetes-so-that-calls-via-api-server-proxy-can-come-through/10142/4" rel="nofollow noreferrer">here</a>.</p>
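<p>To illustrate the <code>ipBlock</code> approach described above, a policy along these lines would admit ingress from a given IP range into all pods of a namespace. The namespace and CIDR below are placeholders - substitute your own namespace and node/pod network range:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-node-range
  namespace: default            # namespace of the pods you need to reach
spec:
  podSelector: {}               # empty selector: applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24   # placeholder: your node/pod network CIDR
</code></pre> <p>Because policies are additive, this can sit alongside your existing policies without modifying them.</p>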
<p>Basically, I had installed Prometheus-Grafana from the <a href="https://github.com/prometheus-operator/kube-prometheus" rel="noreferrer">kube-prometheus-stack</a> using the provided helm chart repo <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="noreferrer">prometheus-community</a>:</p> <pre><code># helm repo add prometheus-community https://prometheus-community.github.io/helm-charts # helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack </code></pre> <p>They are working fine.</p> <p>But the problem I am facing now is integrating <strong>Thanos</strong> with this existing <strong>kube-prometheus-stack</strong>.</p> <p>I installed Thanos from the <a href="https://artifacthub.io/packages/helm/bitnami/thanos" rel="noreferrer">bitnami helm chart repo</a>:</p> <pre><code># helm repo add bitnami https://charts.bitnami.com/bitnami # helm install thanos bitnami/thanos </code></pre> <p>I can load the Thanos Query Frontend GUI, but no metrics are showing there.</p> <p><a href="https://i.stack.imgur.com/CDfVO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CDfVO.png" alt="thanos metrics" /></a> <a href="https://i.stack.imgur.com/ZlttQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZlttQ.png" alt="thanos store" /></a></p> <p>I am struggling now to get it working properly. Is it because Thanos is from a completely different helm chart than the Prometheus-operator-Grafana stack?</p> <p>My Kubernetes cluster on AWS has been created using Kops, and I use a Gitlab pipeline and helm to deploy apps to the cluster.</p>
<p>It's not enough to simply install them; you need to <strong>integrate</strong> <code>prometheus</code> with <code>thanos</code>.</p> <p>Below I'll describe all the steps you need to perform to get the result.</p> <p>First, a short bit of theory. The most common approach to integrating them is to use a <code>thanos sidecar</code> container in the <code>prometheus</code> pod. You can read more <a href="https://www.infracloud.io/blogs/prometheus-ha-thanos-sidecar-receiver/" rel="noreferrer">here</a>.</p> <p><strong>How this is done:</strong></p> <p>(considering that the installation is clean, so it can easily be deleted and reinstalled from scratch).</p> <ol> <li>Get the <code>thanos sidecar</code> added to the <code>prometheus</code> pod.</li> </ol> <p>Pull the <code>kube-prometheus-stack</code> chart:</p> <pre><code>$ helm pull prometheus-community/kube-prometheus-stack --untar </code></pre> <p>You will have a folder with the chart. You need to modify <code>values.yaml</code>, two parts to be precise:</p> <pre><code># Enable thanosService prometheus: thanosService: enabled: true # by default it's set to false # Add spec for thanos sidecar prometheus: prometheusSpec: thanos: image: &quot;quay.io/thanos/thanos:v0.24.0&quot; version: &quot;v0.24.0&quot; </code></pre> <p>Keep in mind, this feature is still experimental:</p> <blockquote> <pre><code>## This section is experimental, it may change significantly without deprecation notice in any release. ## This is experimental and may change significantly without backward compatibility in any release. ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#thanosspec </code></pre> </blockquote> <p>Once that's done, install the <code>prometheus</code> chart with the edited <code>values.yaml</code>:</p> <pre><code>$ helm install prometheus .
-n prometheus --create-namespace # installed in prometheus namespace </code></pre> <p>And check that sidecar is deployed as well:</p> <pre><code>$ kubectl get pods -n prometheus | grep prometheus-0 prometheus-prometheus-kube-prometheus-prometheus-0 3/3 Running 0 67s </code></pre> <p>It should be 3 containers running (by default it's 2). You can inspect it in more details with <code>kubectl describe</code> command.</p> <ol start="2"> <li>Setup <code>thanos</code> chart and deploy it.</li> </ol> <p>Pull the <code>thanos</code> chart:</p> <pre><code>$ helm pull bitnami/thanos --untar </code></pre> <p>Edit <code>values.yaml</code>:</p> <pre><code>query: dnsDiscovery: enabled: true sidecarsService: &quot;prometheus-kube-prometheus-thanos-discovery&quot; # service which was created before sidecarsNamespace: &quot;prometheus&quot; # namespace where prometheus is deployed </code></pre> <p>Save and install this chart with edited <code>values.yaml</code>:</p> <pre><code>$ helm install thanos . -n thanos --create-namespace </code></pre> <p>Check that it works:</p> <pre><code>$ kubectl logs thanos-query-xxxxxxxxx-yyyyy -n thanos </code></pre> <p>We are interested in this line:</p> <pre><code>level=info ts=2022-02-24T15:32:41.418475238Z caller=endpointset.go:349 component=endpointset msg=&quot;adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]&quot; address=10.44.1.213:10901 extLset=&quot;{prometheus=\&quot;prometheus/prometheus-kube-prometheus-prometheus\&quot;, prometheus_replica=\&quot;prometheus-prometheus-kube-prometheus-prometheus-0\&quot;}&quot; </code></pre> <ol start="3"> <li>Now go to the UI and see that metrics are available:</li> </ol> <p><a href="https://i.stack.imgur.com/Sii8l.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Sii8l.png" alt="enter image description here" /></a></p> <p><strong>Good article to read:</strong></p> <ul> <li><a href="https://medium.com/nerd-for-tech/deep-dive-into-thanos-part-ii-8f48b8bba132" 
rel="noreferrer">Deep Dive into Thanos-Part II</a></li> </ul>
<p>I have tried setting the maximum pods per node using the following upon install:</p> <pre class="lang-sh prettyprint-override"><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=&quot;--max-pods 250&quot; sh -s - </code></pre> <p>However, the K3s server will then fail to start. It appears that the <code>--max-pods</code> flag has been deprecated per the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubernetes docs</a>:</p> <blockquote> <p>--max-pods int32 Default: 110</p> <p>(DEPRECATED: This parameter should be set via the config file specified by the Kubelet's <code>--config</code> flag. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/</a> for more information.)</p> </blockquote> <p>So with K3s, where is that kubelet config file, and can/should it be set using something like the above method?</p>
<p>To update your existing installation with an increased max-pods, add a kubelet config file in a k3s-associated location such as <code>/etc/rancher/k3s/kubelet.config</code>:</p> <pre><code>apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration maxPods: 250 </code></pre> <p>Edit <code>/etc/systemd/system/k3s.service</code> to change the k3s server args:</p> <pre><code>ExecStart=/usr/local/bin/k3s \ server \ '--disable' \ 'servicelb' \ '--disable' \ 'traefik' \ '--kubelet-arg=config=/etc/rancher/k3s/kubelet.config' </code></pre> <p>Reload systemd to pick up the service change:</p> <p><code>sudo systemctl daemon-reload</code></p> <p>Restart k3s:</p> <p><code>sudo systemctl restart k3s</code></p> <p>Check the output of <code>kubectl describe &lt;node&gt;</code> and look for allocatable resources:</p> <pre><code>Allocatable: cpu: 32 ephemeral-storage: 199789251223 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131811756Ki pods: 250 </code></pre> <p>and a message in Events noting that the allocatable node limit has been updated:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 20m kube-proxy Starting kube-proxy. Normal Starting 20m kubelet Starting kubelet. ... Normal NodeNotReady 7m52s kubelet Node &lt;node&gt; status is now: NodeNotReady Normal NodeAllocatableEnforced 7m50s kubelet Updated Node Allocatable limit across pods Normal NodeReady 7m50s kubelet Node &lt;node&gt; status is now: NodeReady </code></pre>
<p>I'm trying to scale deployments up &amp; down from within a pod.<br /> To do that, I've created a service account and a clusterrolebinding with the following rbac:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: namespace: backups-scripts name: backups-roles rules: - apiGroups: [&quot;&quot;] resources: - pods verbs: - get - list - delete - watch - apiGroups: [&quot;apps&quot;,&quot;extensions&quot;] resources: - deployments - replicasets - statefulsets verbs: - get - list - patch - update - watch - scale </code></pre> <p>When testing with <code>auth can-i</code>, kubectl says everything I need is ok:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl auth can-i delete deployment --namespace vm-catalogue --as system:serviceaccount:backups-scripts:backups-sa no - no RBAC policy matched $ kubectl auth can-i list deployment --namespace vm-catalogue --as system:serviceaccount:backups-scripts:backups-sa yes $ kubectl auth can-i scale deployment --namespace vm-catalogue --as system:serviceaccount:backups-scripts:backups-sa yes $ kubectl auth can-i update deployment --namespace vm-catalogue --as system:serviceaccount:backups-scripts:backups-sa yes $ kubectl auth can-i patch deployment --namespace vm-catalogue --as system:serviceaccount:backups-scripts:backups-sa yes </code></pre> <p>But now when the kubectl command is executed within the pod, I get the following error:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl scale --replicas=&quot;$replicas&quot; deployment -n &quot;vm-catalogue&quot; &quot;mysql&quot; Error from server (Forbidden): deployments.extensions &quot;mysql&quot; is forbidden: User &quot;system:serviceaccount:backups-scripts:backups-sa&quot; cannot get resource &quot;deployments/scale&quot; in API group &quot;extensions&quot; in the namespace &quot;vm-catalogue&quot; </code></pre> <p>I know the &quot;list&quot; and &quot;get&quot; verbs work because I'm extracting
that information within the script (and that part works).</p> <p>So... I don't get it, what did I miss?</p>
<p>I think the error message you pasted suggests it well:</p> <pre><code>$ kubectl scale --replicas=&quot;$replicas&quot; deployment -n &quot;vm-catalogue&quot; &quot;mysql&quot; Error from server (Forbidden): deployments.extensions &quot;mysql&quot; is forbidden: User &quot;system:serviceaccount:backups-scripts:backups-sa&quot; cannot get resource &quot;deployments/scale&quot; in API group &quot;extensions&quot; in the namespace &quot;vm-catalogue&quot; </code></pre> <p><strong>cannot get resource &quot;deployments/scale&quot;</strong></p> <p>According to <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources" rel="nofollow noreferrer">Kubernetes rbac docs #referring to resources</a></p> <p><strong>&quot;To represent this in an RBAC role, use a slash (/) to delimit the resource and subresource&quot;</strong>.</p> <p>Such as:</p> <pre><code>- deployments/scale - deployments/status - pods/log </code></pre>
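<p>Applied to the role from the question, the second rule would then look like this. Note that <code>scale</code> is a subresource, not a verb, so it moves out of the <code>verbs</code> list; the other entries mirror the original manifest:</p> <pre class="lang-yaml prettyprint-override"><code>- apiGroups: [&quot;apps&quot;,&quot;extensions&quot;]
  resources:
    - deployments
    - deployments/scale      # subresource needed by `kubectl scale`
    - replicasets
    - statefulsets
    - statefulsets/scale
  verbs:
    - get
    - list
    - patch
    - update
    - watch
</code></pre>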
<p>We are using AWS EKS. I deployed Prometheus using the command below:</p> <pre><code>kubectl create namespace prometheus helm install prometheus prometheus-community/prometheus \ --namespace prometheus \ --set alertmanager.persistentVolume.storageClass=&quot;gp2&quot; \ --set server.persistentVolume.storageClass=&quot;gp2&quot; </code></pre> <p>Once this is done, I get this message: The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:</p> <p>The services in my prometheus deployment look like below:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/prometheus-alertmanager ClusterIP 10.22.210.131 &lt;none&gt; 80/TCP 20h service/prometheus-kube-state-metrics ClusterIP 10.12.43.248 &lt;none&gt; 8080/TCP 20h service/prometheus-node-exporter ClusterIP None &lt;none&gt; 9100/TCP 20h service/prometheus-pushgateway ClusterIP 10.130.54.42 &lt;none&gt; 9091/TCP 20h service/prometheus-server ClusterIP 10.90.94.70 &lt;none&gt; 80/TCP 20h </code></pre> <p>I am now using this URL in the datasource on Grafana:</p> <pre><code>datasources: datasources.yaml: apiVersion: 1 datasources: - name: Prometheus type: prometheus url: http://prometheus-alertmanager.prometheus.svc.cluster.local access: proxy isDefault: true </code></pre> <p>Grafana is also up, but the default datasource (Prometheus in this case) is unable to pull any data. When I check the datasources tab in Grafana and test the datasource, I get: Error reading Prometheus: client_error: client error: 404</p> <p>Since both these deployments are on the same cluster, Grafana should ideally be able to access it. Any help here would be highly appreciated.</p>
<p>This is because you're targeting the wrong service: you're using the Alertmanager URL instead of the Prometheus server's.<br /> The URL should be this one:</p> <pre class="lang-yaml prettyprint-override"><code>url: http://prometheus-server.prometheus.svc.cluster.local </code></pre>
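<p>With that change, the datasource block from the question becomes the following. The port can be omitted because the <code>prometheus-server</code> Service listens on 80:</p> <pre class="lang-yaml prettyprint-override"><code>datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server.prometheus.svc.cluster.local
        access: proxy
        isDefault: true
</code></pre>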
<p>I'm trying to create a GKE Ingress that points to two different backend services based on path. I've seen a few posts explaining this is only possible with an nginx Ingress because gke ingress doesn't support rewrite-target. However, this Google documentation, <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#multiple_backend_services" rel="nofollow noreferrer">GKE Ingresss - Multiple backend services</a>, seems to imply otherwise. I've followed the steps in the docs but haven't had any success. Only the service that is available on the path prefix of <code>/</code> is returned. Any other path prefix, like <code>/v2</code>, returns a 404 Not found.</p> <p>Details of my setup are below. Is there an obvious error here -- is the Google documentation incorrect and this is only possible using nginx ingress?</p> <pre><code>-- Ingress apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app-ingress annotations: kubernetes.io/ingress.global-static-ip-name: app-static-ip networking.gke.io/managed-certificates: app-managed-cert spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: api-service port: number: 80 - path: /v2 pathType: Prefix backend: service: name: api-2-service port: number: 8080 -- Service 1 apiVersion: v1 kind: Service metadata: name: api-service labels: app: api spec: type: NodePort selector: app: api ports: - port: 80 targetPort: 5000 -- Service 2 apiVersion: v1 kind: Service metadata: name: api-2-service labels: app: api-2 spec: type: NodePort selector: app: api-2 ports: - port: 8080 targetPort: 5000 </code></pre>
<p>GCP Ingress supports multiple paths. This is also well described in <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">Setting up HTTP(S) Load Balancing with Ingress</a>. For my test I've used both Hello-world v1 and v2.</p> <p>There are 3 possible issues.</p> <ul> <li>The container ports might not be opened. You can check them using netstat:</li> </ul> <pre><code>$ kubectl exec -ti first-55bb869fb8-76nvq -c container -- bin/sh / # netstat -plnt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 :::8080 :::* LISTEN 1/hello-app </code></pre> <ul> <li><p>The issue might also be caused by the <code>Firewall</code> configuration. Make sure you have the proper settings. (In general, on a new cluster I didn't need to add anything, but if you have more components and specific firewall configurations, traffic might be blocked.)</p> </li> <li><p>Misconfiguration between <code>port</code>, <code>containerPort</code> and <code>targetPort</code>.</p> </li> </ul> <p>Below is my example:</p> <p><strong>1st deployment</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: first labels: app: api spec: selector: matchLabels: app: api template: metadata: labels: app: api spec: containers: - name: container image: gcr.io/google-samples/hello-app:1.0 ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: api-service labels: app: api spec: type: NodePort selector: app: api ports: - port: 5000 targetPort: 8080 </code></pre> <p><strong>2nd deployment</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: second labels: app: api-2 spec: selector: matchLabels: app: api-2 template: metadata: labels: app: api-2 spec: containers: - name: container image: gcr.io/google-samples/hello-app:2.0 ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: api-2-service labels: app: api-2 spec: type: NodePort
selector: app: api-2 ports: - port: 6000 targetPort: 8080 </code></pre> <p><strong>Ingress</strong></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app-ingress spec: rules: - http: paths: - path: / pathType: Prefix backend: service: name: api-service port: number: 5000 - path: /v2 pathType: Prefix backend: service: name: api-2-service port: number: 6000 </code></pre> <p><strong>Outputs</strong>:</p> <pre><code>$ curl 35.190.XX.249 Hello, world! Version: 1.0.0 Hostname: first-55bb869fb8-76nvq $ curl 35.190.XX.249/v2 Hello, world! Version: 2.0.0 Hostname: second-d7d87c6d8-zv9jr </code></pre> <p>Please keep in mind that you can also use <code>Nginx Ingress</code> on GKE by adding a specific annotation:</p> <pre><code>kubernetes.io/ingress.class: &quot;nginx&quot; </code></pre> <p>The main reason people use <code>nginx ingress</code> on <code>GKE</code> is the <code>rewrite</code> annotation and the possibility of using <code>ClusterIP</code> or <code>NodePort</code> as the serviceType, whereas <code>GCP ingress</code> allows only the <code>NodePort</code> serviceType.</p> <p>Additional information can be found in <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE Ingress for HTTP(S) Load Balancing</a></p>
<p>Good afternoon everyone, I have a question about adding monitoring of the application itself to prometheus. I am using the Spring Boot actuator and can see the values for prometheus at: <a href="https://example.com/actuator/prometheus" rel="nofollow noreferrer">https://example.com/actuator/prometheus</a>. I have deployed prometheus via the default helm chart (<code> helm -n monitor upgrade -f values.yaml pg prometheus-community/kube-prometheus-stack</code>) by adding these values for it:</p> <pre><code>additionalScrapeConfigs: job_name: prometheus scrape_interval: 40s scrape_timeout: 40s metrics_path: /actuator/prometheus scheme: https </code></pre> <p>Prometheus itself can be found at <a href="http://ex.com/prometheus" rel="nofollow noreferrer">http://ex.com/prometheus</a>. The deployment.yaml file of my Spring Boot application is as follows:</p> <pre><code>apiVersion : apps/v1 kind: Deployment metadata: name: {{ .Release.Name }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app: backend template: metadata: labels: app: backend annotations: prometheus.io/path: /actuator/prometheus prometheus.io/scrape: &quot;true&quot; prometheus.io/port: &quot;8080&quot; spec: containers: - env: - name: DATABASE_PASSWORD value: {{ .Values.DATABASE_PASSWORD }} - name: DATASOURCE_USERNAME value: {{ .Values.DATASOURCE_USERNAME }} - name: DATASOURCE_URL value: jdbc:postgresql://database-postgresql:5432/dev-school name : {{ .Release.Name }} image: {{ .Values.container.image }} ports: - containerPort : 8080 </code></pre> <p>However, after that prometheus still can't see my values. Can you tell me what the error could be?</p>
<p>In prometheus-operator, <code>additionalScrapeConfigs</code> is not used in this way.</p> <p>According to the documentation <a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md" rel="nofollow noreferrer">Additional Scrape Configuration</a>: <strong>AdditionalScrapeConfigs</strong> allows specifying a key of a Secret containing additional Prometheus scrape configurations.</p> <p>The easiest way to add a new scrape config is to use a <code>ServiceMonitor</code>, like the example below (with <code>path</code> set so the actuator endpoint is scraped instead of the default <code>/metrics</code>):</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: example-app labels: team: frontend spec: selector: matchLabels: app: backend endpoints: - port: web path: /actuator/prometheus </code></pre>
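<p>A <code>ServiceMonitor</code> selects <em>Services</em> by label, so for the example above to discover your pods there must be a Service with the matching label and a named port. A sketch of such a Service follows - the names are illustrative and must line up with your own deployment's labels:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: backend
  labels:
    app: backend        # matched by spec.selector.matchLabels of the ServiceMonitor
spec:
  selector:
    app: backend        # pod label from your Deployment
  ports:
    - name: web         # matched by the endpoint port name in the ServiceMonitor
      port: 8080
      targetPort: 8080
</code></pre> <p>Also keep in mind that kube-prometheus-stack, by default, only picks up ServiceMonitors whose labels match its <code>serviceMonitorSelector</code> - typically the Helm release label.</p>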
<p>I've updated my Airbyte image from <code>0.35.2-alpha</code> to <code>0.35.37-alpha</code>. [running in kubernetes]</p> <p>When the system rolled out, the db pod wouldn't terminate, and I [a terrible mistake] deleted the pod. When it came back up, I got an error -</p> <pre><code>PostgreSQL Database directory appears to contain a database; Skipping initialization 2022-02-24 20:19:44.065 UTC [1] LOG: starting PostgreSQL 13.6 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027, 64-bit 2022-02-24 20:19:44.065 UTC [1] LOG: listening on IPv4 address &quot;0.0.0.0&quot;, port 5432 2022-02-24 20:19:44.065 UTC [1] LOG: listening on IPv6 address &quot;::&quot;, port 5432 2022-02-24 20:19:44.071 UTC [1] LOG: listening on Unix socket &quot;/var/run/postgresql/.s.PGSQL.5432&quot; 2022-02-24 20:19:44.079 UTC [21] LOG: database system was shut down at 2022-02-24 20:12:55 UTC 2022-02-24 20:19:44.079 UTC [21] LOG: invalid resource manager ID in primary checkpoint record 2022-02-24 20:19:44.079 UTC [21] PANIC: could not locate a valid checkpoint record 2022-02-24 20:19:44.530 UTC [1] LOG: startup process (PID 21) was terminated by signal 6: Aborted 2022-02-24 20:19:44.530 UTC [1] LOG: aborting startup due to startup process failure 2022-02-24 20:19:44.566 UTC [1] LOG: database system is shut down </code></pre> <p>I'm pretty sure the WAL file is corrupted, but I'm not sure how to fix this.</p>
<p><em>Warning</em> - there is a potential for data loss.</p> <p>This is a test system, so I wasn't concerned with keeping the latest transactions, and had no backup.</p> <p>First I overrode the container command to keep the container running without trying to start postgres:</p> <pre><code>... spec: containers: - name: airbyte-db-container image: airbyte/db command: [&quot;sh&quot;] args: [&quot;-c&quot;, &quot;while true; do echo $(date -u) &gt;&gt; /tmp/run.log; sleep 5; done&quot;] ... </code></pre> <p>And spawned a shell on the pod -</p> <pre><code>kubectl exec -it -n airbyte airbyte-db-xxxx -- sh </code></pre> <p>Run <code>pg_resetwal</code>:</p> <pre><code># dry-run first pg_resetwal --dry-run /var/lib/postgresql/data/pgdata </code></pre> <p>Success!</p> <pre><code>pg_resetwal /var/lib/postgresql/data/pgdata Write-ahead log reset </code></pre> <p>Then I removed the temp command in the container, and postgres started up correctly!</p>
<p>Apologies for a basic question. I have a simple Kubernetes deployment where I have 3 containers (each in their own pod) deployed to a Kubernetes cluster.</p> <p>The <code>RESTapi</code> container is dependent upon the <code>OracleDB</code> container starting. However, the <code>OracleDB</code> container takes a while to startup and by that time the <code>RESTapi</code> container has restarted a number of times due to not being able to connect and ends up in a <code>Backoff</code> state.</p> <p>Is there a more elegant solution for this?</p> <p>I’ve also noticed that when the <code>RESTapi</code> container goes into the <code>Backoff</code> state it stops retrying?</p>
<p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p> <p>The best approach in this case is to improve your “RESTapi” application so that it provides a more reliable and fault-tolerant service and can reconnect to the database on its own.</p> <p>From <a href="https://github.com/learnk8s/kubernetes-production-best-practices/blob/dfebcd934ee6d6a6b8c4b8f7aa3cfad9a980f592/application-development.md#the-app-retries-connecting-to-dependent-services" rel="nofollow noreferrer">Kubernetes production best practices</a>:</p> <blockquote> <p>When the app starts, it shouldn't crash because a dependency such as a database isn't ready.</p> <p>Instead, the app should keep retrying to connect to the database until it succeeds.</p> <p>Kubernetes expects that application components can be started in any order.</p> </blockquote> <p>Otherwise, you can use a solution with <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a>.</p> <p>You can look at this <a href="https://stackoverflow.com/questions/50838141/how-can-we-create-service-dependencies-using-kubernetes">question on stackoverflow</a>, which is just one of many others about the practical use of Init Containers for the case described.</p>
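<p>As a sketch of the Init Containers approach: the init container below blocks the main container from starting until the database answers on its port. The service name <code>oracledb</code> and port <code>1521</code> are assumptions - replace them with your actual DB service and port:</p> <pre><code>spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    # poll the DB service until the TCP port accepts connections
    command: ['sh', '-c', 'until nc -z oracledb 1521; do echo waiting for db; sleep 2; done']
  containers:
  - name: restapi
    image: my-restapi:latest   # placeholder image
</code></pre>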
<p>I am trying to make a Kubernetes deployment script using helm. I created the following 2 jobs (I skipped the container template since I guess it does not matter):</p> <p>templates/jobs/migrate.yaml</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrate
  namespace: {{ .Release.Namespace }}
  annotations:
    &quot;helm.sh/hook&quot;: post-install
    &quot;helm.sh/hook-weight&quot;: &quot;10&quot;
    &quot;helm.sh/hook-delete-policy&quot;: hook-succeeded
spec:
  ...
</code></pre> <p>templates/jobs/seed.yaml</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-seed
  namespace: {{ .Release.Namespace }}
spec:
  ...
</code></pre> <p>The first job is updating the database structure.<br /> The second job will reset the database contents and fill it with example data.</p> <p>Since I did not add a <code>post-install</code> hook to the seed job, I was expecting that job not to run automatically, but only when I manually ask it to run.<br /> But it not only ran automatically, it tried to run before migrate.</p> <p>How can I define a job that I have to manually trigger for it to run?<br /> In vanilla Kubernetes, jobs run only when I explicitly execute their files using<br /> <code>kubectl apply -f job/seed-database.yaml</code></p> <p>How can I do the same using helm?</p>
<p>Replying to your last comment and thanks to @HubertNNN for his idea:</p> <blockquote> <p>Can I run a suspended job multiple times? From documentation it seems like a one time job that cannot be rerun like normal jobs</p> </blockquote> <p>It's a normal job; you just edit the yaml file with <code>.spec.suspend: true</code> and its <code>startTime</code>:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  suspend: true
  parallelism: 1
  completions: 5
  template:
    spec:
      ...
</code></pre> <blockquote> <p>If all Jobs were created in the suspended state and placed in a pending queue, I can achieve priority-based Job scheduling by resuming Jobs in the right order.</p> </blockquote> <p>More information is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#suspending-a-job" rel="nofollow noreferrer">here</a></p>
<p>We are planning to introduce AWS spot instances in production (non-prod is already running on spot). In order to achieve HA, we are running HPA with a minimum of 2 replicas for all critical deployments. Because of spot instance behaviour, we also want to run on-demand instances, and one pod of each deployment should be running on an on-demand instance.</p> <p><strong>Question:</strong></p> <p>Is there any way I can split the pods so that one pod of the deployment is launched on an on-demand instance and all the other pods (the other one, since the minimum is 2, plus any extra pods if HPA scales up) of the same deployment on spot instances?</p> <p>We are already using nodeAffinity and podAntiAffinity since we have multiple node groups for different reasons. Below is the snippet.</p> <pre><code>nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: category
        operator: In
        values:
        - &lt;some value&gt;
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: &lt;label key&gt;
        operator: In
        values:
        - &lt;label value&gt;
    topologyKey: &quot;kubernetes.io/hostname&quot;
</code></pre>
<p>Following up on your last message:</p> <blockquote> <p>Will check the two deployments with same label in non-prod then we will update here.</p> </blockquote> <p>Just wondering how this went. Were there any issues/gotchas from this setup that you could share? Are you currently using this setup, or have you moved on to another one?</p>
<p>I'm unable to launch new pods despite resources seemingly being available.</p> <p>Judging from the below screenshot there should be room for about 40 new pods.</p> <p><a href="https://i.stack.imgur.com/CHW8S.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CHW8S.png" alt="enter image description here" /></a></p> <p>And also judging from the following screenshot the nodes seem fairly underutilized.</p> <p><a href="https://i.stack.imgur.com/myRet.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/myRet.png" alt="enter image description here" /></a></p> <p>However I'm currently facing the below error message:</p> <pre><code>0/3 nodes are available: 1 Insufficient cpu, 2 node(s) had volume node affinity conflict.
</code></pre> <p>And last night it was the following:</p> <pre><code>0/3 nodes are available: 1 Too many pods, 2 node(s) had volume node affinity conflict.
</code></pre> <p>Most of my services require very little memory and cpu, and therefore their resources are configured as seen below:</p> <pre><code>resources:
  limits:
    cpu: 100m
    memory: 64Mi
  requests:
    cpu: 100m
    memory: 32Mi
</code></pre> <p>Why can't I deploy more pods? And how can I fix this?</p>
<p>Your problem is &quot;volume node affinity conflict&quot;.</p> <p>From <a href="https://stackoverflow.com/a/55514852">Kubernetes Pod Warning: 1 node(s) had volume node affinity conflict</a>:</p> <blockquote> <p>The error &quot;volume node affinity conflict&quot; happens when the persistent volume claims that the pod is using are scheduled on different zones, rather than on one zone, and so the actual pod was not able to be scheduled because it cannot connect to the volume from another zone.</p> </blockquote> <p>First, try to investigate exactly where the problem is. You can find a <a href="https://www.datree.io/resources/kubernetes-troubleshooting-fixing-persistentvolumeclaims-error" rel="nofollow noreferrer">detailed guide here</a>. You will need commands like:</p> <pre><code>kubectl get pv kubectl describe pv kubectl get pvc kubectl describe pvc </code></pre> <p>Then you can delete the PV and PVC and move pods to the same zone along with the PV and PVC.</p>
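<p>If the investigation shows that the PV was provisioned in a different zone than the node the pod lands on, one common preventative fix is a <code>StorageClass</code> with <code>volumeBindingMode: WaitForFirstConsumer</code>, so the volume is only provisioned after the pod has been scheduled to a node. A sketch - the EBS CSI provisioner shown here is an assumption, use whatever provisioner your cluster runs:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: ebs.csi.aws.com        # assumption: AWS EBS CSI driver
volumeBindingMode: WaitForFirstConsumer
</code></pre>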
<p>We had a major outage when both our container registry and the entire K8S cluster lost power. When the cluster recovered faster than the container registry, my pod (part of a statefulset) is stuck in <code>Error: ImagePullBackOff</code>.</p> <p>Is there a config setting to retry downloading the image from the CR periodically or recover without manual intervention?</p> <p>I looked at <code>imagePullPolicy</code> but that does not apply for a situation when the CR is unavailable.</p>
<p>The <code>BackOff</code> part in the <code>ImagePullBackOff</code> status means that Kubernetes keeps trying to pull the image from the registry, with an exponential back-off delay (10s, 20s, 40s, …). The delay between attempts is increased until it reaches a compiled-in limit of 300 seconds (5 minutes) - more on it in the <a href="https://kubernetes.io/docs/concepts/containers/images/#imagepullbackoff" rel="noreferrer">Kubernetes docs</a>.</p> <p>The <code>backOffPeriod</code> parameter for image pulls is a hard-coded constant in Kubernetes and unfortunately is not tunable now, as changing it can affect node performance - otherwise, it could only be adjusted in the very <a href="https://github.com/kubernetes/kubernetes/blob/5f920426103085a28069a1ba3ec9b5301c19d075/pkg/kubelet/kubelet.go#L155" rel="noreferrer">code</a> for your custom kubelet binary. There is still an ongoing <a href="https://github.com/kubernetes/kubernetes/issues/57291" rel="noreferrer">issue</a> on making it adjustable.</p>
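<p>So no manual intervention is strictly required: the kubelet keeps retrying (at worst every 5 minutes) and the pod recovers on its own once the registry is reachable again. If you want to force an immediate retry instead of waiting for the next back-off attempt, you can delete the stuck pod so the statefulset controller recreates it; the pod and namespace names below are placeholders:</p> <pre><code>kubectl -n my-namespace delete pod my-statefulset-0
# the statefulset controller recreates the pod, triggering a fresh image pull
</code></pre>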
<p>I am trying to build and tag a docker image in Github Actions runner and am getting this error from the runner</p> <pre class="lang-sh prettyprint-override"><code>unable to prepare context: path &quot; &quot; not found
Error: Process completed with exit code 1.
</code></pre> <p>I have gone through all other similar issues on StackOverflow and implemented them but still, no way forward.</p> <p>The interesting thing is, I have other microservices using similar workflow and Dockerfile working perfectly fine.</p> <p><strong>My workflow</strong></p> <pre class="lang-yaml prettyprint-override"><code>name: some-tests

on:
  pull_request:
    branches: [ main ]

jobs:
  tests:
    runs-on: ubuntu-latest
    env:
      AWS_REGION: us-east-1
      IMAGE_NAME: service
      IMAGE_TAG: 1.1.0

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Create cluster
        uses: helm/kind-action@v1.2.0

      - name: Read secrets from AWS Secrets Manager into environment variables
        uses: abhilash1in/aws-secrets-manager-action@v1.1.0
        id: read-secrets
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
          secrets: |
            users-service/secrets
          parse-json: true

      - name: Build and Tag Image
        id: build-image
        run: |
          # Build a docker container and Tag
          docker build --file Dockerfile \
            --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
            -t $IMAGE_NAME:$IMAGE_TAG .
          echo &quot;::set-output name=image::$IMAGE_NAME:$IMAGE_TAG&quot;

      - name: Push Image to Kind cluster
        id: kind-cluster-image-push
        env:
          KIND_IMAGE: ${{ steps.build-image.outputs.image }}
          CLUSTER_NAME: chart-testing
          CLUSTER_CONTROLLER: chart-testing-control-plane
        run: |
          kind load docker-image $KIND_IMAGE --name $CLUSTER_NAME
          docker exec $CLUSTER_CONTROLLER crictl images
</code></pre> <p><strong>Dockerfile</strong></p> <pre><code>FROM node:14 AS base
WORKDIR /app

FROM base AS development
COPY .npmrc .npmrc
COPY package.json ./
RUN npm install --production
RUN cp -R node_modules /tmp/node_modules
RUN npm install
RUN rm -f .npmrc
COPY . .

FROM development AS builder
COPY .npmrc .npmrc
RUN yarn run build
RUN rm -f .npmrc
RUN ls -la

FROM node:14-alpine AS production
# Install curl
RUN apk update &amp;&amp; apk add curl
COPY --from=builder /tmp/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./

ARG APP_API

# set environmental variables
ENV APP_API=$APP_API

EXPOSE ${PORT}
CMD [ &quot;yarn&quot;, &quot;start&quot; ]
</code></pre> <p>I guess the problem is coming from the building command or something, these are the different things I have tried</p> <p><strong>I used --file explicitly with period(.)</strong></p> <pre class="lang-yaml prettyprint-override"><code>docker build --file Dockerfile \
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG .
echo &quot;::set-output name=image::$IMAGE_NAME:$IMAGE_TAG&quot;
</code></pre> <p><strong>I used only period (.)</strong></p> <pre class="lang-yaml prettyprint-override"><code>docker build \
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG .
echo &quot;::set-output name=image::$IMAGE_NAME:$IMAGE_TAG&quot;
</code></pre> <p><strong>I used relative path for Dockerfile (./Dockerfile)</strong></p> <pre class="lang-yaml prettyprint-override"><code>docker build --file ./Dockerfile \
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG .
echo &quot;::set-output name=image::$IMAGE_NAME:$IMAGE_TAG&quot;
</code></pre> <p><strong>I used relative path for the period (./)</strong></p> <pre class="lang-yaml prettyprint-override"><code>docker build \
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG ./
echo &quot;::set-output name=image::$IMAGE_NAME:$IMAGE_TAG&quot;
</code></pre> <p>I have literally exhausted everything I've read from SO</p>
<p>The problem was basically a whitespace issue: an invisible whitespace character in the workflow file, which nothing in the error output could reveal. Thanks to <a href="https://stackoverflow.com/a/69033092/16069603">this GitHub answer</a>.</p>
<p>I'm currently evaluating Knative but I've definitely found no way to use a path instead of a subdomain in the URL for accessing a service.</p> <p>By default, when creating a service, the URL is made like this: <a href="http://Name.Namespace.Domain" rel="nofollow noreferrer">http://Name.Namespace.Domain</a> and what I would like to have is something like this http://Domain/Namespace/Name</p> <p>Does anybody know if it is possible? Thanks in advance,</p> <p>Cédric</p>
<p>Knative uses subdomains rather than URL paths because the underlying container could handle many different URLs, and might encode requests with absolute URLs (which might point to a different function depending on deployment) or relative URLs (which would point within the current application).</p> <p>If you want to map multiple Knative services under a single domain name, you can use an Ingress implementation or API server like <a href="https://github.com/knative/docs/tree/main/code-samples/serving/kong-routing-go" rel="nofollow noreferrer">Kong</a>, <a href="https://github.com/knative/docs/tree/main/code-samples/serving/knative-routing-go" rel="nofollow noreferrer">Istio</a>, or many others. You will need an HTTP router which can rewrite the <code>Host</code> header to point to the Knative Service's hostname in question; the default Kubernetes <code>Ingress</code> resource doesn't expose this capability.</p> <p>If you choose to set this up, you'll also need to decide on a policy for mapping the URL paths: you could either strip the URL paths off when passing them to the Knative Service, or leave them present. It probably makes more sense to strip the URL paths off, since otherwise you'll end up needing to have a dependency between your application code and the <code>namespace</code> and <code>name</code> that you have chosen to deploy it at.</p> <p>Other gotchas to watch out for:</p> <ul> <li>Since all the Knative Services are behind a single hostname, they'll share the same cookie domain, and could inadvertently stomp or poison each other's cookies.</li> <li>Absolute vs relative URL references, as I mentioned above.
It's likely that your HTTP router doesn't have the ability to re-add the stripped URL prefix on outbound paths; doubly so if you have URLs which are being constructed in HTML or Javascript, rather than simply in URL headers.</li> <li>Programming your HTTP router as new Services are created is not automatic; you'd need to do this yourself. You <em>could</em> also write a Knative Service to do this routing and use a <a href="https://knative.dev/docs/serving/services/custom-domains/" rel="nofollow noreferrer"><code>DomainMapping</code></a> to map that one Knative Service to your desired domain name. The Knative Service could then automatically do the URL rewriting, and you could do the reverse rewriting on the outbound if you wanted.</li> </ul>
<p>The docs show <code>argocd login &lt;ARGOCD_SERVER&gt;</code> but they never say what ARGOCD_SERVER is. How can we login to ArgoCD on a kind cluster?</p>
<p><code>ARGOCD_SERVER</code> is the IP address or domain name under which the ArgoCD dashboard (the <code>argocd-server</code> API) is reachable.<br /> Examples:</p> <pre><code>argocd login 10.12.156.99:8443
argocd login argocd.xxx.com
</code></pre>
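<p>On a kind cluster there is usually no external load balancer, so a common approach is to port-forward the <code>argocd-server</code> service and log in through the forwarded port (<code>argocd</code> is the default install namespace):</p> <pre><code>kubectl port-forward svc/argocd-server -n argocd 8080:443
# in another terminal:
argocd login localhost:8080
</code></pre>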
<p>I'm having a hard time getting this working with an NLB using the ingress controller: <a href="https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#network-load-balancer-nlb</a></p> <p>Even the subnets are not taking effect here; my configuration is not being passed to the API that creates the NLB:</p> <pre><code>================================
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: &quot;eipalloc-07e3afcd4b7b5d644,eipalloc-0d9cb0154be5ab55d,eipalloc-0e4e5ec3df81aa3ea&quot;
    service.beta.kubernetes.io/aws-load-balancer-subnets: &quot;subnet-061f4a497621a7179,subnet-001c2e5df9cc93960&quot;
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
</code></pre>
<p>The number of <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/#eip-allocations" rel="nofollow noreferrer">eip allocations</a> must match the number of subnets in the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/#subnets" rel="nofollow noreferrer">subnet annotation</a>.</p> <p><code>service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xyz, eipalloc-zzz</code></p> <p><code>service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet</code></p> <p>You have 3 allocations but only 2 subnets.</p> <p>In addition, the annotation</p> <p><code>service.beta.kubernetes.io/aws-load-balancer-scheme: &quot;internet-facing&quot;</code> is missing.</p> <p>By default this will use scheme &quot;internal&quot;.</p> <p>I assume since you are allocating elastic IP addresses that you might want &quot;internet-facing&quot;.</p> <p>Also, you are using annotations that are meant for &quot;AWS Load Balancer Controller&quot; but you are using an &quot;AWS cloud provider load balancer controller&quot;</p> <blockquote> <p>The external value for aws-load-balancer-type is what causes the AWS Load Balancer Controller, rather than the AWS cloud provider load balancer controller, to create the Network Load Balancer. <a href="https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html#nlb-sample-app-verify-deployment" rel="nofollow noreferrer">docs</a></p> </blockquote> <p>You are using <code>service.beta.kubernetes.io/aws-load-balancer-type: nlb</code> which means that none of the links provided earlier in this answer pertain to your Load Balancer. 
The <code>nlb</code> type selects the &quot;AWS cloud provider load balancer controller&quot;, not the &quot;AWS Load Balancer Controller&quot;.</p> <p>For the &quot;AWS cloud provider load balancer controller&quot;, all that <a href="https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html" rel="nofollow noreferrer">the docs</a> reference is <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">this</a>.</p>
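<p>If you do want the &quot;AWS Load Balancer Controller&quot; (which the subnet/EIP annotations are meant for), the annotation block would look roughly like this. This is a sketch only - the subnet and allocation IDs are placeholders, and the two lists must have the same length:</p> <pre><code>metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # one EIP allocation per subnet
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-aaaa, subnet-bbbb, subnet-cccc
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-1111, eipalloc-2222, eipalloc-3333
</code></pre>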
<p>I am using AWS Opensearch to retrieve the logs from all my Kubernetes applications. I have the following pods: <code>Kube-proxy</code>, <code>Fluent-bit</code>, <code>aws-node</code>, <code>aws-load-balancer-controller</code>, and all my apps (around 10).</p> <p>While fluent-bit successfully send all the logs from <code>Kube-proxy</code>, <code>Fluent-bit</code>, <code>aws-node</code> and <code>aws-load-balancer-controller</code>, none of the logs from my applications are sent. My applications had <code>DEBUG</code>, <code>INFO</code>, <code>ERROR</code> logs, and none are sent by fluent bit.</p> <p>Here is my fluent bit configuration:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: fluent-bit-config namespace: my-namespace labels: k8s-app: fluent-bit data: # Configuration files: server, input, filters and output # ====================================================== fluent-bit.conf: | [SERVICE] Flush 1 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE input-kubernetes.conf @INCLUDE filter-kubernetes.conf @INCLUDE output-elasticsearch.conf input-kubernetes.conf: | [INPUT] Name tail Tag kube.* Path /var/log/containers/*.log Parser docker DB /var/log/flb_kube.db Mem_Buf_Limit 50MB Skip_Long_Lines On Refresh_Interval 10 filter-kubernetes.conf: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.var.log.containers. 
Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off output-elasticsearch.conf: | [OUTPUT] Name es Match * Host my-host.es.amazonaws.com Port 443 TLS On AWS_Auth On AWS_Region ap-southeast-1 Retry_Limit 6 parsers.conf: | [PARSER] Name apache Format regex Regex ^(?&lt;host&gt;[^ ]*) [^ ]* (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\&quot;]*?)(?: +\S*)?)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?&lt;host&gt;[^ ]*) [^ ]* (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^ ]*) +\S*)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?&lt;time&gt;[^\]]*)\] \[(?&lt;level&gt;[^\]]*)\](?: \[pid (?&lt;pid&gt;[^\]]*)\])?( \[client (?&lt;client&gt;[^\]]*)\])? (?&lt;message&gt;.*)$ [PARSER] Name nginx Format regex Regex ^(?&lt;remote&gt;[^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] &quot;(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\&quot;]*?)(?: +\S*)?)?&quot; (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: &quot;(?&lt;referer&gt;[^\&quot;]*)&quot; &quot;(?&lt;agent&gt;[^\&quot;]*)&quot;)?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep On [PARSER] Name syslog Format regex Regex ^\&lt;(?&lt;pri&gt;[0-9]+)\&gt;(?&lt;time&gt;[^ ]* {1,2}[^ ]* [^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;ident&gt;[a-zA-Z0-9_\/\.\-]*)(?:\[(?&lt;pid&gt;[0-9]+)\])?(?:[^\:]*\:)? 
*(?&lt;message&gt;.*)$ Time_Key time Time_Format %b %d %H:%M:%S </code></pre> <p>I followed <a href="https://www.eksworkshop.com/intermediate/230_logging/deploy/" rel="nofollow noreferrer">this documentation</a></p> <p>Thanks a lot for the help.</p>
<p>Finally, I did two things that solved my issue:</p> <ol> <li>Modified this configuration:</li> </ol> <pre><code># before
output-elasticsearch.conf: |
  [OUTPUT]
      Name            es
      Match           *
      Host            search-blacaz-logs-szzq6vokwwm4y5fkfwyngjwjxq.ap-southeast-1.es.amazonaws.com
      Port            443
      TLS             On
      AWS_Auth        On
      AWS_Region      ap-southeast-1
      Retry_Limit     6

# after
output-elasticsearch.conf: |
  [OUTPUT]
      Name            es
      Match           *
      Host            search-blacaz-logs-szzq6vokwwm4y5fkfwyngjwjxq.ap-southeast-1.es.amazonaws.com
      Port            443
      TLS             On
      AWS_Auth        On
      Replace_Dots    On  // added this
      AWS_Region      ap-southeast-1
      Retry_Limit     6
</code></pre> <p>Then, I had to delete the fluent-bit Elasticsearch index and re-create it. Indeed, the index was probably not well suited for my JAVA logs at first, and adjusted to them after re-creation.</p>
<p>I am trying to understand what this username field is mapped to in the Kubernetes cluster.</p> <p>This is a sample configmap:</p> <pre><code>apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eksctl-my-cluster-nodegroup
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::444455556666:user/ops-user
      username: ops-user
      groups:
        - eks-console-dashboard-full-access-group
</code></pre> <ol> <li><p>If I change the username from <code>system:node:{{EC2PrivateDNSName}}</code> to something like <code>mynode:{{EC2PrivateDNSName}}</code>, does it really make any difference? Does it make any sense to the k8s cluster to add the <code>system:</code> prefix?</p> </li> <li><p>And where can I see these users in k8s? Can I query them using <code>kubectl</code>, just like <code>k get pods</code>, as in <code>kubectl get usernames</code>? Is it a dummy user name we are providing to map with, or does it hold any special privileges?</p> </li> <li><p>Where do these names like <code>{{EC2PrivateDNSName}}</code> come from? Are there any other variables available? I can't see any information related to this in the documentation.</p> </li> </ol> <p>Thanks in advance!</p>
<p>Posting the answer as a community wiki, feel free to edit and expand.</p> <hr /> <ol> <li>As you can read in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#core-component-roles" rel="nofollow noreferrer">documentation</a>, <code>system:node</code> requires the <code>system</code> prefix. If you delete <code>system</code>, it won't work correctly:</li> </ol> <blockquote> <p><strong>system:node</strong><br /> Allows access to resources required by the kubelet, <strong>including read access to all secrets, and write access to all pod status objects</strong>. You should use the Node authorizer and NodeRestriction admission plugin instead of the system:node role, and allow granting API access to kubelets based on the Pods scheduled to run on them. The system:node role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8.</p> </blockquote> <ol start="2"> <li>You can view RBAC users using an external plugin, for example <a href="https://github.com/FairwindsOps/rbac-lookup" rel="nofollow noreferrer">RBAC Lookup</a>, with the command <code>rbac-lookup</code>:</li> </ol> <blockquote> <p>RBAC Lookup is a CLI that allows you to easily find Kubernetes roles and cluster roles bound to any user, service account, or group name. Binaries are generated with goreleaser for each release for simple installation.</p> </blockquote> <ol start="3"> <li>Names will come from your AWS IAM. You can read more about it <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">here</a>:</li> </ol> <blockquote> <p>Access to your cluster using AWS IAM entities is enabled by the AWS IAM Authenticator for Kubernetes which runs on the Amazon EKS control plane. The authenticator gets its configuration information from the <code>aws-auth</code> <code>ConfigMap</code>. For all <code>aws-auth</code> <code>ConfigMap</code> settings.</p> </blockquote>
<p>I create a k8s deployment script with Python, and to get the configuration from <code>kubectl</code>, I use the Python commands:</p> <pre><code>from kubernetes import client, config
config.load_kube_config()
</code></pre> <p>To get the Azure AKS configuration I use the following <code>az</code> commands:</p> <pre><code>az login
az aks get-credentials --resource-group [resource group name] --name [aks name]
</code></pre> <p>Is there any way to get the Azure AKS credentials only from Python, without the need for the <code>az</code> commands?</p> <p>Thanks!</p>
<p>Yes, this can be done with the Azure Python SDK.</p> <pre class="lang-py prettyprint-override"><code>import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

credential = DefaultAzureCredential(exclude_cli_credential=True)
subscription_id = os.environ[&quot;AZURE_SUBSCRIPTION_ID&quot;]
container_service_client = ContainerServiceClient(credential, subscription_id)

kubeconfig = container_service_client.managed_clusters.list_cluster_user_credentials(&quot;resourcegroup-name&quot;, &quot;cluster-name&quot;).kubeconfigs[0]
</code></pre>
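<p>To then use these credentials with the Python Kubernetes client, you can write the returned kubeconfig to a temporary file and load it - a sketch, assuming <code>kubeconfig</code> is the object obtained above and that its <code>.value</code> attribute holds the raw kubeconfig bytes:</p> <pre class="lang-py prettyprint-override"><code>import tempfile

from kubernetes import config

with tempfile.NamedTemporaryFile(suffix=&quot;.yaml&quot;) as f:
    f.write(kubeconfig.value)  # raw kubeconfig bytes returned by the SDK
    f.flush()
    config.load_kube_config(config_file=f.name)
</code></pre>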
<p>We have a <strong>daemonset</strong> (not our daemonset) and we want to make it HA. Is the following also applicable for HA for a daemonset?</p> <ul> <li>affinity (anti-affinity)</li> <li>tolerations</li> <li>pdb</li> </ul> <p>We have 3 worker nodes on each cluster. I did this in the past for a deployment, but am not sure what is also applicable for a daemonset. This is <strong>not our app</strong> but we need to make sure it is HA as it's a critical app.</p> <p><strong>update</strong></p> <p>Does it make sense to add the following to the daemonset? Let's say I have 3 worker nodes and I want it to be scheduled only on the <code>foo</code> worker nodes:</p> <pre><code>spec:
  tolerations:
  - effect: NoSchedule
    key: WorkGroup
    operator: Equal
    value: foo
  - effect: NoExecute
    key: WorkGroup
    operator: Equal
    value: foo
  nodeSelector:
    workpcloud.io/group: foo
</code></pre>
<p>You have asked two, somewhat unrelated questions.</p> <blockquote> <p>is the following also applicable for HA for a daemonset?</p> <ul> <li>affinity (anti affinity)</li> <li>tolerations</li> <li>pdb</li> </ul> </blockquote> <p>A <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">daemonset</a> (generally) runs on a policy of &quot;one pod per node&quot; -- you <strong>CAN'T</strong> make it HA (for example, by using autoscaling), and you will (assuming you use defaults) have as many replicas of the daemonset as you have nodes, unless you explicitly specify which nodes you want to run the daemonset pods, using things like <code>nodeSelector</code> and/or <code>tolerations</code>, in which case you will have fewer pods. The documentation page linked above gives more details and has some examples.</p> <blockquote> <p>this is not our app but we need to make sure it is HA as it's a critical app</p> </blockquote> <p>Are you asking how to make your critical app HA? I'm going to assume you are.</p> <p>If the app is as critical as you say, then a few starter recommendations:</p> <ol> <li>Make sure you have at least 3 replicas (4 is a good starter number)</li> <li>Add tolerations if you must schedule those pods on a node pool that has taints</li> <li>Use node selectors as needed (e.g. for regions or zones, but only if necessary due to something like disks being present in those zones)</li> <li>Use affinity to group or spread your replicas. 
Definitely would recommend using a spread so that if one node goes down, the other replicas are still up</li> <li>Use a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="nofollow noreferrer">pod priority</a> to indicate to the cluster that your pods are more important than other pods (beware this may cause issues if you set it too high)</li> <li>Setup notifications to something like PagerDuty, OpsGenie, etc, so you (or your ops team) are notified if the app goes down. If the app is critical, then you'll want to know it's down ASAP.</li> <li>Setup <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets" rel="nofollow noreferrer">pod disruption budgets</a>, and <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">horizontal pod autoscalers</a> to ensure an agreed number of pods are always up.</li> </ol>
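<p>The &quot;spread&quot; recommendation above can be sketched with <code>topologySpreadConstraints</code>, which spreads replicas evenly across nodes; the <code>app: my-critical-app</code> label is an assumption - match it to your deployment's pod labels:</p> <pre><code>spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-critical-app
</code></pre>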
<p>I am using K8S with helm.</p> <p>I need to run pods and dependencies with a predefined flow order.</p> <p>How can I create helm dependencies that run a pod only once (i.e. populate a database for the first time), and exit after the first success?</p> <p>Also, I have several pods, and I want to run a pod only when certain conditions occur, and after another pod is created.</p> <p>I need to build 2 pods, as described in the following:</p> <p>I have a database.</p> <p><strong>1st step</strong> is to create the database.</p> <p><strong>2nd step</strong> is to populate the db.</p> <p>Once I populate the db, this job needs to finish.</p> <p><strong>3rd step</strong> is another pod (not the db pod) that uses that database, and is always in listen mode (never stops).</p> <p>Can I define in which order the dependencies are run (and not always in parallel)?</p> <p>What I see from the <code>helm create</code> command is that there are templates for deployment.yaml and service.yaml; maybe pod.yaml is a better choice?</p> <p>What are the best chart types for this scenario?</p> <p>Also, I need to know the chart hierarchy.</p> <p>i.e: when having a chart of type: listener, and one pod for database creation, and one pod for the database population (that is deleted when finished), I may have a chart tree hierarchy that explains the flow.</p> <p><a href="https://i.stack.imgur.com/FpBys.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FpBys.png" alt="enter image description here" /></a></p> <p>The main chart uses the populated data (after all the sub-charts and templates have run properly - BTW, can I have several templates for the same chart?).</p> <p>What is the correct tree flow?</p> <p>Thanks.</p>
<p>You can achieve this using helm hooks and K8s Jobs, below is defining the same setup for Rails applications.</p> <p>The first step, define a k8s job to create and populate the db</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: {{ template &quot;my-chart.name&quot; . }}-db-prepare annotations: &quot;helm.sh/hook&quot;: pre-install,pre-upgrade &quot;helm.sh/hook-weight&quot;: &quot;-1&quot; &quot;helm.sh/hook-delete-policy&quot;: hook-succeeded labels: app: {{ template &quot;my-chart.name&quot; . }} chart: {{ template &quot;my-chart.chart&quot; . }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: backoffLimit: 4 template: metadata: labels: app: {{ template &quot;my-chart.name&quot; . }} release: {{ .Release.Name }} spec: containers: - name: {{ template &quot;my-chart.name&quot; . }}-db-prepare image: &quot;{{ .Values.image.repository }}:{{ .Values.image.tag }}&quot; imagePullPolicy: {{ .Values.image.pullPolicy }} command: [&quot;/docker-entrypoint.sh&quot;] args: [&quot;rake&quot;, &quot;db:extensions&quot;, &quot;db:migrate&quot;, &quot;db:seed&quot;] envFrom: - configMapRef: name: {{ template &quot;my-chart.name&quot; . }}-configmap - secretRef: name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template &quot;my-chart.name&quot; . }}-secrets{{- end }} initContainers: - name: init-wait-for-dependencies image: wshihadeh/wait_for:v1.2 imagePullPolicy: {{ .Values.image.pullPolicy }} command: [&quot;/docker-entrypoint.sh&quot;] args: [&quot;wait_for_tcp&quot;, &quot;postgress:DATABASE_HOST:DATABASE_PORT&quot;] envFrom: - configMapRef: name: {{ template &quot;my-chart.name&quot; . }}-configmap - secretRef: name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template &quot;my-chart.name&quot; . 
}}-secrets{{- end }} imagePullSecrets: - name: {{ .Values.imagePullSecretName }} restartPolicy: Never </code></pre> <p>Note the following: 1- The Job definition has helm hooks so that it runs on each deployment, as the first task</p> <pre><code> &quot;helm.sh/hook&quot;: pre-install,pre-upgrade &quot;helm.sh/hook-weight&quot;: &quot;-1&quot; &quot;helm.sh/hook-delete-policy&quot;: hook-succeeded </code></pre> <p>2- The container command will take care of preparing the db</p> <pre><code>command: [&quot;/docker-entrypoint.sh&quot;] args: [&quot;rake&quot;, &quot;db:extensions&quot;, &quot;db:migrate&quot;, &quot;db:seed&quot;] </code></pre> <p>3- The job will not start until the db-connection is up (this is achieved via initContainers)</p> <pre><code>args: [&quot;wait_for_tcp&quot;, &quot;postgress:DATABASE_HOST:DATABASE_PORT&quot;] </code></pre> <p>The second step is to define the application deployment object. This can be a regular deployment object (make sure that you don't use helm hooks). Example:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ template &quot;my-chart.name&quot; . }}-web annotations: checksum/config: {{ include (print $.Template.BasePath &quot;/configmap.yaml&quot;) . | sha256sum }} checksum/secret: {{ include (print $.Template.BasePath &quot;/secrets.yaml&quot;) . | sha256sum }} labels: app: {{ template &quot;my-chart.name&quot; . }} chart: {{ template &quot;my-chart.chart&quot; . }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: replicas: {{ .Values.webReplicaCount }} selector: matchLabels: app: {{ template &quot;my-chart.name&quot; . }} release: {{ .Release.Name }} template: metadata: annotations: checksum/config: {{ include (print $.Template.BasePath &quot;/configmap.yaml&quot;) . | sha256sum }} checksum/secret: {{ include (print $.Template.BasePath &quot;/secrets.yaml&quot;) . | sha256sum }} labels: app: {{ template &quot;my-chart.name&quot; . 
}} release: {{ .Release.Name }} service: web spec: imagePullSecrets: - name: {{ .Values.imagePullSecretName }} containers: - name: {{ template &quot;my-chart.name&quot; . }}-web image: &quot;{{ .Values.image.repository }}:{{ .Values.image.tag }}&quot; imagePullPolicy: {{ .Values.image.pullPolicy }} command: [&quot;/docker-entrypoint.sh&quot;] args: [&quot;web&quot;] envFrom: - configMapRef: name: {{ template &quot;my-chart.name&quot; . }}-configmap - secretRef: name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template &quot;my-chart.name&quot; . }}-secrets{{- end }} ports: - name: http containerPort: 8080 protocol: TCP resources: {{ toYaml .Values.resources | indent 12 }} restartPolicy: {{ .Values.restartPolicy }} {{- with .Values.nodeSelector }} nodeSelector: {{ toYaml . | indent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{ toYaml . | indent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{ toYaml . | indent 8 }} {{- end }} </code></pre>
<p>There is a fixed order with which helm with create resources, which you cannot influence apart from <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">hooks</a>.</p> <p>Helm hooks can cause more problems than they solve, in my experience. This is because most often they actually rely on resources which are only available after the hooks are done. For example, configmaps, secrets and service accounts / rolebindings. Leading you to move more and more things into the hook lifecycle, which isn't idiomatic IMO. It also leaves them dangling when uninstalling a release.</p> <p>I tend to use jobs and init containers that blocks until the jobs are done.</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: v1 kind: Pod metadata: name: mysql labels: name: mysql spec: containers: - name: mysql image: mysql --- apiVersion: batch/v1 kind: Job metadata: name: migration spec: ttlSecondsAfterFinished: 100 template: spec: initContainers: - name: wait-for-db image: bitnami/kubectl args: - wait - pod/mysql - --for=condition=ready - --timeout=120s containers: - name: migration image: myapp args: [--migrate] restartPolicy: Never --- apiVersion: apps/v1 kind: Deployment metadata: name: myapp spec: selector: matchLabels: app: myapp replicas: 3 template: metadata: labels: app: myapp spec: initContainers: - name: wait-for-migration image: bitnami/kubectl args: - wait - job/migration - --for=condition=complete - --timeout=120s containers: - name: myapp image: myapp args: [--server] </code></pre> <p>Moving the migration into its own job, is beneficial if you want to scale your application horizontally. Your migration need to run only 1 time. So it doesn't make sense to run it for each deployed replica.</p> <p>Also, in case a pod crashes and restarts, the migration doest need to run again. So having it in a separate one time job, makes sense.</p> <p>The main chart structure would look like this.</p> <pre><code>. 
├── Chart.lock ├── charts │ └── mysql-8.8.26.tgz ├── Chart.yaml ├── templates │ ├── deployment.yaml # waits for db migration job │ └── migration-job.yaml # waits for mysql statefulset master pod └── values.yaml </code></pre>
<p>Running microk8s v1.23.3 on Ubuntu 20.04.4 LTS. I have set up a minimal pod+service:</p> <pre><code>kubectl create deployment whoami --image=containous/whoami --namespace=default </code></pre> <p>This works as expected, curl <code>10.1.76.4:80</code> gives the proper reply from <code>whoami</code>. I have a service configured, see content of <code>service-whoami.yaml</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: whoami namespace: default spec: selector: app: whoami ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>This also works as expected, the pod can be reached through the clusterIP on <code>curl 10.152.183.220:80</code>. Now I want to expose the service using the <code>ingress-whoami.yaml</code>:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: whoami-ingress namespace: default annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: defaultBackend: service: name: whoami port: number: 80 rules: - http: paths: - path: /whoami pathType: Prefix backend: service: name: whoami port: number: 80 </code></pre> <p>ingress addon is enabled.</p> <pre><code>microk8s is running high-availability: no datastore master nodes: 127.0.0.1:19001 datastore standby nodes: none addons: enabled: ha-cluster # Configure high availability on the current node ingress # Ingress controller for external access </code></pre> <p>ingress seems to point to the correct pod and port. 
<code>kubectl describe ingress</code> gives</p> <pre><code>Name: whoami-ingress Labels: &lt;none&gt; Namespace: default Address: Default backend: whoami:80 (10.1.76.12:80) Rules: Host Path Backends ---- ---- -------- * /whoami whoami:80 (10.1.76.12:80) Annotations: &lt;none&gt; Events: &lt;none&gt; </code></pre> <p>Trying to reach the pod from outside with <code>curl 127.0.0.1/whoami</code> gives a <code>404</code>:</p> <pre><code>&lt;html&gt; &lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;center&gt;&lt;h1&gt;404 Not Found&lt;/h1&gt;&lt;/center&gt; &lt;hr&gt;&lt;center&gt;nginx&lt;/center&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Where did I go wrong? This setup worked a few weeks ago.</p>
<p>Ok, figured it out. I had forgotten to specify the <code>ingress.class</code> in the annotations-block. I updated <code>ingress-whoami.yaml</code>:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: whoami-ingress namespace: default annotations: kubernetes.io/ingress.class: public nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /whoami pathType: Prefix backend: service: name: whoami port: number: 80 </code></pre> <p>Now everything is working.</p>
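<p>As a side note: the <code>kubernetes.io/ingress.class</code> annotation is deprecated on newer Kubernetes versions in favor of the <code>spec.ingressClassName</code> field. A hedged sketch of the same Ingress using that field instead (assuming the class created by the microk8s ingress addon is named <code>public</code>, as in the annotation above):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: public   # replaces the kubernetes.io/ingress.class annotation
  rules:
  - http:
      paths:
      - path: /whoami
        pathType: Prefix
        backend:
          service:
            name: whoami
            port:
              number: 80
```

<p>Verify which class name your controller registered with <code>kubectl get ingressclass</code> before relying on it.</p>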
<p>Kubernetes newbie here.</p> <p>I have some strange Skaffold/Kubernetes behavior. I'm working in Google Cloud but I've changed to the local environment just for test and it's the same. So probably it's me how's doing something wrong. The problem is that though I see Skaffold syncing changes these changes aren't reflected. All the files inside the pods are the old ones.</p> <p>Skaffold.yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: skaffold/v2alpha3 kind: Config deploy: kubectl: manifests: - ./infra/k8s/* build: # local: # push: false googleCloudBuild: projectId: ts-maps-286111 artifacts: - image: us.gcr.io/ts-maps-286111/auth context: auth docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . - image: us.gcr.io/ts-maps-286111/client context: client docker: dockerfile: Dockerfile sync: manual: - src: '**/*.js' dest: . - image: us.gcr.io/ts-maps-286111/tickets context: tickets docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . - image: us.gcr.io/ts-maps-286111/orders context: orders docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . - image: us.gcr.io/ts-maps-286111/expiration context: expiration docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . 
</code></pre> <p>When a file inside one of the directories is changed I see following logs:</p> <pre class="lang-sh prettyprint-override"><code>time=&quot;2020-09-05T01:24:06+03:00&quot; level=debug msg=&quot;Change detected notify.Write: \&quot;F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\&quot;&quot; time=&quot;2020-09-05T01:24:06+03:00&quot; level=debug msg=&quot;Change detected notify.Write: \&quot;F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\&quot;&quot; time=&quot;2020-09-05T01:24:06+03:00&quot; level=debug msg=&quot;Change detected notify.Write: \&quot;F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\&quot;&quot; time=&quot;2020-09-05T01:24:06+03:00&quot; level=debug msg=&quot;Change detected notify.Write: \&quot;F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\&quot;&quot; time=&quot;2020-09-05T01:24:06+03:00&quot; level=debug msg=&quot;Change detected notify.Write: \&quot;F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\&quot;&quot; time=&quot;2020-09-05T01:24:06+03:00&quot; level=debug msg=&quot;Change detected notify.Write: \&quot;F:\\projects\\lrn_microservices\\complex\\expiration\\src\\index.ts\&quot;&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Found dependencies for dockerfile: [{package.json /app true} {. /app true}]&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Skipping excluded path: node_modules&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Found dependencies for dockerfile: [{package.json /app true} {. /app true}]&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Skipping excluded path: .next&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Skipping excluded path: node_modules&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Found dependencies for dockerfile: [{package.json /app true} {. 
/app true}]&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Skipping excluded path: node_modules&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Found dependencies for dockerfile: [{package.json /app true} {. /app true}]&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Skipping excluded path: node_modules&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Found dependencies for dockerfile: [{package.json /app true} {. /app true}]&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Skipping excluded path: node_modules&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=info msg=&quot;files modified: [expiration\\src\\index.ts]&quot; Syncing 1 files for us.gcr.io/ts-maps-286111/expiration:2aae7ff-dirty@sha256:2e31caedf3d9b2bcb2ea5693f8e22478a9d6caa21d1a478df5ff8ebcf562573e time=&quot;2020-09-05T01:24:07+03:00&quot; level=info msg=&quot;Copying files: map[expiration\\src\\index.ts:[/app/src/index.ts]] to us.gcr.io/ts-maps-286111/expiration:2aae7ff-dirty@sha256:2e31caedf3d9b2bcb2ea5693f8e22478a9d6caa21d1a478df5ff8ebcf562573e&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;getting client config for kubeContext: ``&quot; time=&quot;2020-09-05T01:24:07+03:00&quot; level=debug msg=&quot;Running command: [kubectl --context gke_ts-maps-286111_europe-west3-a_ticketing-dev exec expiration-depl-5cb997d597-p49lv --namespace default -c expiration -i -- tar xmf - -C / --no-same-owner]&quot; time=&quot;2020-09-05T01:24:09+03:00&quot; level=debug msg=&quot;Command output: [], stderr: tar: removing leading '/' from member names\n&quot; Watching for changes... 
time=&quot;2020-09-05T01:24:11+03:00&quot; level=info msg=&quot;Streaming logs from pod: expiration-depl-5cb997d597-p49lv container: expiration&quot; time=&quot;2020-09-05T01:24:11+03:00&quot; level=debug msg=&quot;Running command: [kubectl --context gke_ts-maps-286111_europe-west3-a_ticketing-dev logs --since=114s -f expiration-depl-5cb997d597-p49lv -c expiration --namespace default]&quot; [expiration] [expiration] &gt; expiration@1.0.0 start /app [expiration] &gt; ts-node-dev --watch src src/index.ts [expiration] [expiration] ts-node-dev ver. 1.0.0-pre.62 (using ts-node ver. 8.10.2, typescript ver. 3.9.7) [expiration] starting expiration!kdd [expiration] Connected to NATS! </code></pre> <p>NodeJS server inside the pod restarts. Sometimes I see this line, sometimes not, the result overall is the same</p> <pre class="lang-sh prettyprint-override"><code>[expiration] [INFO] 22:23:42 Restarting: src/index.ts has been modified </code></pre> <p>But there are no changes made. If I cat the changed file inside a pod it's the old version, if I delete a pod it starts again with an old version.</p> <p>My folder structure:</p> <pre class="lang-sh prettyprint-override"><code>+---auth | \---src | +---models | +---routes | | \---__test__ | +---services | \---test +---client | +---.next | | +---cache | | | \---next-babel-loader | | +---server | | | \---pages | | | +---auth | | | \---next | | | \---dist | | | \---pages | | \---static | | +---chunks | | | \---pages | | | +---auth | | | \---next | | | \---dist | | | \---pages | | +---development | | \---webpack | | \---pages | | \---auth | +---api | +---components | +---hooks | \---pages | \---auth +---common | +---build | | +---errors | | +---events | | | \---types | | \---middlewares | \---src | +---errors | +---events | | \---types | \---middlewares +---config +---expiration | \---src | +---events | | +---listeners | | \---publishers | +---queue | \---__mocks__ +---infra | \---k8s +---orders | \---src | +---events | | 
+---listeners | | | \---__test__ | | \---publishers | +---models | +---routes | | \---__test__ | +---test | \---__mocks__ +---payment \---tickets \---src +---events | +---listeners | | \---__test__ | \---publishers +---models | \---__test__ +---routes | \---__test__ +---test \---__mocks__ </code></pre> <p>Would be grateful for any help!</p>
<p>What worked for me was using the <code>--poll</code> flag with <code>ts-node-dev</code>. My script looks like this:</p> <pre class="lang-json prettyprint-override"><code>&quot;start&quot;: &quot;ts-node-dev --respawn --poll --inspect --exit-child src/index.ts&quot; </code></pre>
<p>I'm trying to start minikube on ubuntu 18.04 inside nginx proxy manager docker network in order to setup some kubernetes services and manage the domain names and the proxy hosts in the nginx proxy manager platform.</p> <p>so I have <code>nginxproxymanager_default</code> docker network and when I run <code>minikube start --network=nginxproxymanager_default</code> I get</p> <blockquote> <p>Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use</p> </blockquote> <p>what might I been doing wrong?</p>
<p>A similar error was reported with <a href="https://github.com/kubernetes/minikube/issues/12894" rel="nofollow noreferrer">kubernetes/minikube issue 12894</a></p> <blockquote> <p>please check whether there are other services using that IP address, and try starting minikube again.</p> </blockquote> <p>Considering <a href="https://minikube.sigs.k8s.io/docs/commands/start/" rel="nofollow noreferrer"><code>minikube start</code> man page</a></p> <blockquote> <h2><code>--network string</code></h2> <p>network to run minikube with.<br /> Now it is used by docker/podman and KVM drivers.</p> <p>If left empty, minikube will create a new network.</p> </blockquote> <p>Using an existing NGiNX network (as opposed to docker/podman) might not be supported.</p> <p>I have seen <a href="https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">NGiNX set up as ingress</a>, not directly as &quot;network&quot;.</p>
<p>I have the following cronjob which deletes pods in a specific namespace.</p> <p>I run the job as-is but it seems that the job doesn't run for each 20 min, it runs every few (2-3) min, what I need is that on each 20 min the job will start deleting the pods in the specified namespace and then terminate, any idea what could be wrong here?</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: restart spec: schedule: &quot;*/20 * * * *&quot; concurrencyPolicy: Forbid successfulJobsHistoryLimit: 0 failedJobsHistoryLimit: 0 jobTemplate: spec: backoffLimit: 0 template: spec: serviceAccountName: sa restartPolicy: Never containers: - name: kubectl image: bitnami/kubectl:1.22.3 command: - /bin/sh - -c - kubectl get pods -o name | while read -r POD; do kubectl delete &quot;$POD&quot;; sleep 30; done </code></pre> <p>I'm really not sure why this happens...</p> <p>Maybe the delete of the pod collapse</p> <p><strong>update</strong></p> <p>I tried the following but no pods were deleted,any idea?</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: restart spec: schedule: &quot;*/1 * * * *&quot; concurrencyPolicy: Forbid successfulJobsHistoryLimit: 0 failedJobsHistoryLimit: 0 jobTemplate: spec: backoffLimit: 0 template: metadata: labels: name: restart spec: serviceAccountName: pod-exterminator restartPolicy: Never containers: - name: kubectl image: bitnami/kubectl:1.22.3 command: - /bin/sh - -c - kubectl get pods -o name --selector name!=restart | while read -r POD; do kubectl delete &quot;$POD&quot;; sleep 10; done. </code></pre>
<p>This cronjob pod will delete itself at some point during the execution. Causing the job to fail and additionally resetting its back-off count.</p> <p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy" rel="nofollow noreferrer">docs</a> say:</p> <blockquote> <p>The back-off count is reset when a Job's Pod is deleted or successful without any other Pods for the Job failing around that time.</p> </blockquote> <p>You need to apply an appropriate filter. Also note that you can delete all pods with a single command.</p> <p>Add a label to <code>spec.jobTemplate.spec.template.metadata</code> that you can use for filtering.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1 kind: CronJob metadata: name: restart spec: jobTemplate: spec: template: metadata: labels: name: restart # label the pod </code></pre> <p>Then use this label to delete all pods that are not the cronjob pod.</p> <pre class="lang-sh prettyprint-override"><code>kubectl delete pod --selector name!=restart </code></pre> <hr /> <p>Since you state in the comments, you <em>need</em> a loop, a full working example may look like this.</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: batch/v1 kind: CronJob metadata: name: restart namespace: sandbox spec: schedule: &quot;*/20 * * * *&quot; concurrencyPolicy: Forbid successfulJobsHistoryLimit: 0 failedJobsHistoryLimit: 0 jobTemplate: spec: backoffLimit: 0 template: metadata: labels: name: restart spec: serviceAccountName: restart restartPolicy: Never containers: - name: kubectl image: bitnami/kubectl:1.22.3 command: - /bin/sh - -c - | kubectl get pods -o name --selector &quot;name!=restart&quot; | while read -r POD; do kubectl delete &quot;$POD&quot; sleep 30 done --- apiVersion: v1 kind: ServiceAccount metadata: name: restart namespace: sandbox --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: pod-management namespace: sandbox rules: - 
apiGroups: [&quot;&quot;] resources: [&quot;pods&quot;] verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;, &quot;delete&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: restart-pod-management namespace: sandbox subjects: - kind: ServiceAccount name: restart namespace: sandbox roleRef: kind: Role name: pod-management apiGroup: rbac.authorization.k8s.io </code></pre> <pre class="lang-sh prettyprint-override"><code>kubectl create namespace sandbox kubectl config set-context --current --namespace sandbox kubectl run pod1 --image busybox -- sleep infinity kubectl run pod2 --image busybox -- sleep infinity kubectl apply -f restart.yaml # the above file </code></pre> <p>Here you can see how the first pod is getting terminated.</p> <pre><code>$ kubectl get all NAME READY STATUS RESTARTS AGE pod/pod1 1/1 Terminating 0 43s pod/pod2 1/1 Running 0 39s pod/restart-27432801-rrtvm 1/1 Running 0 16s NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE cronjob.batch/restart */1 * * * * False 1 17s 36s NAME COMPLETIONS DURATION AGE job.batch/restart-27432801 0/1 17s 17s </code></pre> <p>Note that this is actually slightly buggy. Because from the time you're reading the pod list to the time you delete an individual pod in the list, the pod may not exist any more. You could use the below to ignore those cases, since when they are gone you don't need to delete them.</p> <pre><code>kubectl delete &quot;$POD&quot; || true </code></pre> <hr /> <p>That said, since you name your job restart, I assume the purpose of this is to restart the pods of some deployments. 
You could actually use a proper restart, leveraging Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">update strategies</a>.</p> <pre><code>kubectl rollout restart $(kubectl get deploy -o name) </code></pre> <p>With the default update strategy, this will lead to new pods being created first and making sure they are ready before terminating the old ones.</p> <pre><code>$ kubectl rollout restart $(kubectl get deploy -o name) NAME READY STATUS RESTARTS AGE pod/app1-56f87fc665-mf9th 0/1 ContainerCreating 0 2s pod/app1-5cbc776547-fh96w 1/1 Running 0 2m9s pod/app2-7b9779f767-48kpd 0/1 ContainerCreating 0 2s pod/app2-8d6454757-xj4zc 1/1 Running 0 2m9s </code></pre> <p>This also works with daemonsets.</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl rollout restart -h Restart a resource. Resource rollout will be restarted. Examples: # Restart a deployment kubectl rollout restart deployment/nginx # Restart a daemon set kubectl rollout restart daemonset/abc </code></pre>
<p>The application is in Azure Functions,</p> <p>The error that we are getting from container Pod logs is &quot;Could not load type 'Microsoft.Azure.WebJobs.Host.Scale.ConcurrencyManager' from assembly 'Microsoft.Azure.WebJobs.Host, Version=3.0.26.0&quot;.</p> <p>In our application version all of the dll ver is 3.0.30.0</p> <p><a href="https://i.stack.imgur.com/lJ5d4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lJ5d4.png" alt="enter image description here" /></a></p> <p>In the &quot;dll&quot; folder of debug is having the version with 3.0.30.0</p> <p><a href="https://i.stack.imgur.com/pAuTW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pAuTW.png" alt="enter image description here" /></a></p> <p>And in this version 3.0.30.0, it has the class &quot;Microsoft.Azure.WebJobs.Host.Scale.ConcurrencyManager&quot;</p> <p><a href="https://i.stack.imgur.com/OGWuc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OGWuc.png" alt="enter image description here" /></a></p> <p>Not sure, where this &quot;assembly 'Microsoft.Azure.WebJobs.Host, Version=3.0.26.0&quot; is coming from.</p>
<p>For me this was happening because of an <strong>Azure Functions Core Tools</strong> version mismatch caused by upgrading <strong>Visual Studio</strong> to the latest version. Removing the Azure Functions Tools from the system path <code>C:\Users\user.name\AppData\Local\AzureFunctionsTools</code> and letting Visual Studio automatically install <strong>Azure Functions Core Tools</strong> fixed the issue.</p>
<p><strong>nginx Ingress</strong> is mainly used for <em>path based routing</em> and <em>sub domain based routing</em> to route the request to particular pod and <strong>Cloud provider load balancer</strong> will provide external ip address to get the requests from external world and which in turn points to ingress.</p> <p>In Kubernetes, <strong>Service components</strong> acts as a real load balancer by balancing the load to multiple pods in the cluster. This is my understanding, Am I correct?</p>
<p>Yes, you are correct. The flow goes like this:</p> <p>You create the LB on a cloud provider, which gives you an endpoint reachable from the <strong>internet</strong>; you can use it with the ingress controller. (You can also attach an SSL/TLS certificate to the LB to serve HTTPS.)</p> <p>The <strong>Nginx ingress controller</strong> (or whichever controller you are using) handles and manages the <strong>Ingress resources</strong> in the cluster.</p> <p>Once an Ingress rule decides which Service should receive the traffic, the internal Kubernetes Service load-balances that traffic across the running Pods of the corresponding Deployment.</p> <pre><code>internet &gt; Loadbalancer &gt; ingress &gt; ingress controller checks &gt; service &gt; Loadbalance traffic across available PODs of that specific service </code></pre> <p>The default load balancing will be round robin.</p>
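<p>As a sketch, the two pieces that wire this flow together are the Service (the in-cluster load balancer) and the Ingress rule pointing at it. All names here (<code>my-app</code>, the paths and ports) are placeholders, and the <code>nginx</code> ingress class is assumed:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app            # the Service load-balances across all Pods with this label
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /my-app      # path-based routing handled by the ingress controller
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```

<p>The cloud load balancer sends all traffic to the ingress controller, the controller matches <code>/my-app</code> to the <code>my-app</code> Service, and the Service spreads requests over the matching Pods.</p>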
<p>I am creating a program that gets a list of all deployments from Kubernetes as a <code>*v1.DeploymentList</code>. I managed to do that and it works. Then I do some processing of this list and execute many actions afterwards. Now I have a new requirement: I also need to be able to pull just ONE deployment and apply the same logic to it. The problem is that when I get the deployment, what I receive is a <code>*v1.Deployment</code>, which of course is different from <code>*v1.DeploymentList</code>, as that is a list. Now, this DeploymentList is not a slice, so I can NOT just use <code>append</code>, and I do not know how to convert/cast. As a &quot;pragmatic&quot; solution, what I am trying to do is to just convert that Deployment into a DeploymentList and then apply the rest of my logic to it, as changing everything else would imply a lot of burden at this point.</p>
<p>I have the following code:</p>
<pre><code>func listK8sDeployments(the_clientset *kubernetes.Clientset, mirrorDeploy *string) *v1.DeploymentList {
	if mirrorDeploy != nil {
		tmp_deployments, err := the_clientset.AppsV1().Deployments(apiv1.NamespaceDefault).Get(context.TODO(), *mirrorDeploy, metav1.GetOptions{})
		if err != nil {
			panic(err.Error())
		}
		// Here I would need to convert the *v1.Deployment into a *v1.DeploymentList and return it according to my EXISTING logic. If I can do this, I do not need to change anything else in the program.
		// return the DeploymentList with one single deployment inside and finish.
	}
	deployments_list, err := the_clientset.AppsV1().Deployments(apiv1.NamespaceDefault).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err.Error())
	}
	return deployments_list
}
</code></pre>
<p>It returns a <code>*v1.Deployment</code>, but I need this data as a list, even if it is a <code>*v1.DeploymentList</code> containing a single item. I have tried to append, but the <code>*v1.DeploymentList</code> is not a slice, so I can not do it. Any ideas as to how to achieve this, or should I change the way things are done? Please explain. FYI: I am new to Go and to programming k8s related things too.</p>
<p>When you look at the definition of <code>v1.DeploymentList</code>, you can see where the Deployments are located:</p>
<pre><code>// DeploymentList is a list of Deployments.
type DeploymentList struct {
	metav1.TypeMeta `json:&quot;,inline&quot;`

	// Standard list metadata.
	// +optional
	metav1.ListMeta `json:&quot;metadata,omitempty&quot; protobuf:&quot;bytes,1,opt,name=metadata&quot;`

	// Items is the list of Deployments.
	Items []Deployment `json:&quot;items&quot; protobuf:&quot;bytes,2,rep,name=items&quot;`
}
</code></pre>
<p>Then you can easily create a new instance of it with your value:</p>
<pre><code>func listK8sDeployments(the_clientset *kubernetes.Clientset, mirrorDeploy *string) *v1.DeploymentList {
	if mirrorDeploy != nil &amp;&amp; *mirrorDeploy != &quot;&quot; {
		tmp_deployments, err := the_clientset.AppsV1().Deployments(apiv1.NamespaceDefault).Get(context.TODO(), *mirrorDeploy, metav1.GetOptions{})
		if err != nil {
			panic(err.Error())
		}
		// create a new list with your deployment and return it
		deployments_list := v1.DeploymentList{Items: []v1.Deployment{*tmp_deployments}}
		return &amp;deployments_list
	}
	deployments_list, err := the_clientset.AppsV1().Deployments(apiv1.NamespaceDefault).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err.Error())
	}
	return deployments_list
}
</code></pre>
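<p>The wrapping pattern itself is easier to see with simplified stand-in types (these are <em>not</em> the real client-go types, just a sketch with the same shape):</p>

```go
package main

import "fmt"

// Simplified stand-ins for v1.Deployment / v1.DeploymentList -- just
// enough to show the "wrap one item in a list" pattern.
type Deployment struct{ Name string }

type DeploymentList struct{ Items []Deployment }

// asList wraps a single Deployment in a DeploymentList so the caller
// can process "one deployment" and "all deployments" with the same code.
func asList(d *Deployment) *DeploymentList {
	return &DeploymentList{Items: []Deployment{*d}}
}

func main() {
	one := &Deployment{Name: "my-app"}
	// Downstream logic only ever sees a list:
	for _, d := range asList(one).Items {
		fmt.Println("processing", d.Name)
	}
}
```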
<p>I have an Azure Kubernetes cluster, but because of the limitation on attached default volumes per node (8 at my node size), I had to find a different solution to provision volumes.<br /> The solution was to use an Azure Files volume, and I followed this article <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume#mount-options" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/azure-files-volume#mount-options</a>, which works; I have a volume mounted.</p>
<p><strong>But the problem is with the MySQL instance, it just won't start.</strong></p>
<p>For test purposes, I created a deployment with 2 simple DB containers, one of which is using the <em>default</em> storage class volume and the second one is using <em>Azure Files</em>.</p>
<p>Here is my manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-db
  labels:
    prj: test-db
spec:
  selector:
    matchLabels:
      prj: test-db
  template:
    metadata:
      labels:
        prj: test-db
    spec:
      containers:
        - name: db-default
          image: mysql:5.7.37
          imagePullPolicy: IfNotPresent
          args:
            - &quot;--ignore-db-dir=lost+found&quot;
          ports:
            - containerPort: 3306
              name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          volumeMounts:
            - name: default-pv
              mountPath: /var/lib/mysql
              subPath: test
        - name: db-azurefiles
          image: mysql:5.7.37
          imagePullPolicy: IfNotPresent
          args:
            - &quot;--ignore-db-dir=lost+found&quot;
            - &quot;--initialize-insecure&quot;
          ports:
            - containerPort: 3306
              name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          volumeMounts:
            - name: azurefile-pv
              mountPath: /var/lib/mysql
              subPath: test
      volumes:
        - name: default-pv
          persistentVolumeClaim:
            claimName: default-pvc
        - name: azurefile-pv
          persistentVolumeClaim:
            claimName: azurefile-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 200Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azure-file-store
  resources:
    requests:
      storage: 200Mi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-file-store
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - nosharesock
parameters:
  skuName: Standard_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
</code></pre>
<p>The one with the default PV works without any problem, but the second one with Azure Files throws this error:</p>
<pre><code>[Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.37-1debian10 started.
[Note] [Entrypoint]: Switching to dedicated user 'mysql'
[Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.37-1debian10 started.
[Note] [Entrypoint]: Initializing database files
[Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
[Warning] InnoDB: New log files created, LSN=45790
[Warning] InnoDB: Creating foreign key constraint system tables.
[Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: e86bdae0-979b-11ec-abbf-f66bf9455d85.
[Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
mysqld: Can't change permissions of the file 'ca-key.pem' (Errcode: 1 - Operation not permitted)
[ERROR] Could not set file permission for ca-key.pem
[ERROR] Aborting
</code></pre>
<p>Based on the error, it seems like the database can't write to the volume mount, but that's not (entirely) true. I mounted both of those volumes to another container to be able to inspect the files. Here is the output, and we can see that the database was able to write files to the volume:</p>
<pre><code>-rwxrwxrwx 1 root root       56 Feb 27 07:07 auto.cnf
-rwxrwxrwx 1 root root     1680 Feb 27 07:07 ca-key.pem
-rwxrwxrwx 1 root root      215 Feb 27 07:07 ib_buffer_pool
-rwxrwxrwx 1 root root 50331648 Feb 27 07:07 ib_logfile0
-rwxrwxrwx 1 root root 50331648 Feb 27 07:07 ib_logfile1
-rwxrwxrwx 1 root root 12582912 Feb 27 07:07 ibdata1
</code></pre>
<p>Obviously, some files are missing, but this output disproved my thought that MySQL can't write to the folder.</p>
<p>My guess is that MySQL can't properly work with the file system used by Azure Files.</p>
<p>What I tried:</p>
<ul>
<li>different versions of MySQL (5.7.16, 5.7.24, 5.7.31, 5.7.37) and MariaDB (10.6)</li>
<li>testing different arguments for mysql</li>
<li>recreating the storage with NFS v3 enabled</li>
<li>creating a custom MySQL image with added <code>cifs-utils</code></li>
<li>testing different permissions, gid/uid, and other attributes of the container and also of the storage class</li>
</ul>
<p>It appears to be the permissions of volumes mounted this way that are causing the issue.</p>
<p>If we modify your storage class to match the uid/gid of the mysql user (999 in the official image), the pod can start:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-file-store
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=999
  - gid=999
  - mfsymlinks
  - cache=strict
  - nosharesock
parameters:
  skuName: Standard_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
</code></pre>
<p>The mount options permanently set the owner of the files contained in the mount, which doesn't work well for anything that wants to own the files it creates. Because everything is created with mode 777, anyone can read and write the directories; they just cannot own them.</p>
<p>I've got an issue: I'm trying to install Linkerd on my cluster, and all was going well.</p>
<p>I went exactly by this official README:</p>
<pre class="lang-sh prettyprint-override"><code>https://linkerd.io/2.11/tasks/install-helm/
</code></pre>
<p>installed it via helm</p>
<pre class="lang-sh prettyprint-override"><code>MacBook-Pro-6% helm list -n default
NAME      NAMESPACE  REVISION  UPDATED                               STATUS    CHART            APP VERSION
linkerd2  default    1         2021-12-15 15:47:10.823551 +0100 CET  deployed  linkerd2-2.11.1  stable-2.11.1
</code></pre>
<p>Linkerd itself works, and the <code>linkerd check</code> command as well:</p>
<pre class="lang-sh prettyprint-override"><code>MacBook-Pro-6% linkerd version
Client version: stable-2.11.1
Server version: stable-2.11.1
</code></pre>
<p>but when I try to install the <code>viz</code> dashboard as described in the <a href="https://linkerd.io/2.11/getting-started/" rel="nofollow noreferrer">getting-started</a> page, I run</p>
<pre class="lang-sh prettyprint-override"><code>linkerd viz install | kubectl apply -f -
</code></pre>
<p>and when going with</p>
<pre class="lang-sh prettyprint-override"><code>linkerd check
...
Status check results are √

Linkerd extensions checks
=========================
/ Running viz extension check
</code></pre>
<p>it keeps on checking the viz extensions forever, and when I ran <code>linkerd dashboard</code> (deprecated, I know) it shows the same error:</p>
<pre class="lang-sh prettyprint-override"><code>Waiting for linkerd-viz extension to become available
</code></pre>
<p>Anyone got any clue what I'm doing wrong? I've been stuck at this part for 2 hrs and no one seems to have any answers.</p>
<p>Note: when I ran <code>linkerd check</code> after installation of viz, I get</p>
<pre class="lang-sh prettyprint-override"><code>linkerd-viz
-----------
√ linkerd-viz Namespace exists
√ linkerd-viz ClusterRoles exist
√ linkerd-viz ClusterRoleBindings exist
√ tap API server has valid cert
√ tap API server cert is valid for at least 60 days
‼ tap API service is running
    FailedDiscoveryCheck: failing or missing response from https://10.190.101.142:8089/apis/tap.linkerd.io/v1alpha1: Get &quot;https://10.190.101.142:8089/apis/tap.linkerd.io/v1alpha1&quot;: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    see https://linkerd.io/2.11/checks/#l5d-tap-api for hints
‼ linkerd-viz pods are injected
    could not find proxy container for grafana-8d54d5f6d-cv7q5 pod
    see https://linkerd.io/2.11/checks/#l5d-viz-pods-injection for hints
√ viz extension pods are running
× viz extension proxies are healthy
    No &quot;linkerd-proxy&quot; containers found in the &quot;linkerd&quot; namespace
    see https://linkerd.io/2.11/checks/#l5d-viz-proxy-healthy for hints
</code></pre>
<p>From your problem description:</p>
<blockquote>
<p>‼ linkerd-viz pods are injected could not find proxy container for grafana-8d54d5f6d-cv7q5 pod see <a href="https://linkerd.io/2.11/checks/#l5d-viz-pods-injection" rel="nofollow noreferrer">https://linkerd.io/2.11/checks/#l5d-viz-pods-injection</a> for hints</p>
</blockquote>
<p>and:</p>
<blockquote>
<p>MacBook-Pro-6% helm list -n default</p>
</blockquote>
<p>I encountered a similar problem, but with the <code>flagger</code> pod rather than the <code>grafana</code> pod (I didn't attempt to install the <code>grafana</code> component like you did).</p>
<p>A side effect of my problem was this:</p>
<pre><code>$ linkerd viz dashboard
Waiting for linkerd-viz extension to become available
Waiting for linkerd-viz extension to become available
Waiting for linkerd-viz extension to become available
...
## repeating for 5 minutes or so before popping up the dashboard in browser.
</code></pre>
<p>The cause of my problem turned out to be that I had installed the <code>viz</code> extension into the <code>linkerd</code> namespace. It should belong to the <code>linkerd-viz</code> namespace.</p>
<p>Looking at your original problem description, it seems that you installed the control plane into the <code>default</code> namespace (as opposed to the <code>linkerd</code> namespace). While you can use any namespace you want, the control plane must be in a separate namespace from the <code>viz</code> extension. Details can be seen in the discussion I wrote here:</p>
<ul>
<li><a href="https://github.com/linkerd/website/issues/1309" rel="nofollow noreferrer">https://github.com/linkerd/website/issues/1309</a></li>
</ul>
<p>I'm getting the below error whenever I try to apply an Ingress resource/rules yaml file:</p>
<p><em><strong>failed calling webhook &quot;validate.nginx.ingress.kubernetes.io&quot;: Post &quot;https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s&quot;: EOF</strong></em></p>
<p>It seems there are multiple variants of the error &quot;failed calling webhook &quot;validate.nginx.ingress.kubernetes.io&quot;: Post &quot;https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s&quot;: <em>Error here</em>&quot;, like below:</p>
<ol>
<li>context deadline exceeded</li>
<li>x509: certificate signed by unknown authority</li>
<li>Temporary Redirect</li>
<li>EOF</li>
<li>no endpoints available for service &quot;ingress-nginx-controller-admission&quot;</li>
</ol>
<p>...and many more.</p>
<p><strong>My Observations:</strong></p>
<p>As soon as the Ingress resource/rules yaml is applied, the above error is shown and the Ingress Controller gets restarted, as shown below:</p>
<pre><code>NAME                                        READY   STATUS             RESTARTS   AGE
ingress-nginx-controller-5cf97b7d74-zvrr6   1/1     Running            6          30m
ingress-nginx-controller-5cf97b7d74-zvrr6   0/1     OOMKilled          6          30m
ingress-nginx-controller-5cf97b7d74-zvrr6   0/1     CrashLoopBackOff   6          30m
ingress-nginx-controller-5cf97b7d74-zvrr6   0/1     Running            7          31m
ingress-nginx-controller-5cf97b7d74-zvrr6   1/1     Running            7          32m
</code></pre>
<p>One possible solution could be (not sure though) the one mentioned here: <a href="https://stackoverflow.com/a/69289313/12241977">https://stackoverflow.com/a/69289313/12241977</a></p>
<p>But I am not sure if it could work in the case of managed Kubernetes services like AWS EKS, as we don't have access to the kube-api server.</p>
<p>Also, the &quot;kind: ValidatingWebhookConfiguration&quot; section has the below field in its yaml:</p>
<pre><code>clientConfig:
  service:
    namespace: ingress-nginx
    name: ingress-nginx-controller-admission
    path: /networking/v1/ingresses
</code></pre>
<p>So what does the &quot;path: /networking/v1/ingresses&quot; do, and where does it reside? Or simply, where can we find this path? I checked the validating webhook using the below command, but was not able to figure out where to find the above path:</p>
<pre><code>kubectl describe validatingwebhookconfigurations ingress-nginx-admission
</code></pre>
<p><strong>Setup Details</strong></p>
<p>I installed using the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters" rel="nofollow noreferrer">Bare-metal method</a>, exposed with NodePort.</p>
<p>Ingress Controller Version: <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/baremetal/deploy.yaml" rel="nofollow noreferrer">v1.1.0</a></p>
<p>Kubernetes Cluster Version (AWS EKS): 1.21</p>
<p>OK, I got this working now. The controller status was &quot;OOMKilled&quot; (Out Of Memory), so what I did was add a &quot;limits:&quot; section under the &quot;resources:&quot; section of the Deployment yaml, as below:</p>
<pre><code>resources:
  requests:
    cpu: 100m
    memory: 90Mi
  limits:
    cpu: 200m
    memory: 190Mi
</code></pre>
<p>Now it works fine for me.</p>
<p>Is it possible to create a kubernetes RBAC rule that allows creating a Job from an existing CronJob, but prevents creating a Job any other way?</p> <p>We want to keep our clusters tightly locked down to avoid arbitrary deployments not managed by CICD - but we also need to facilitate manual testing of CronJobs, or rerunning failed jobs off schedule. I'd like developers to be able to run a command like:</p> <pre><code>kubectl create job --from=cronjob/my-job my-job-test-run-1 </code></pre> <p>But not be able to run something like:</p> <pre><code>kubectl create job my-evil-job -f evil-job.yaml </code></pre> <p>Is that possible?</p>
<p>In this scenario, in order to successfully execute this command:</p>
<pre><code>kubectl create job --from=cronjob/&lt;cronjob_name&gt;
</code></pre>
<p>the User/ServiceAccount needs proper <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> rules (at least the two shown in the output below: create <code>Jobs</code> and get <code>CronJobs</code>).</p>
<p>In the first example I granted access to create <code>Jobs</code> and get <code>CronJobs</code>, and I was able to create both a <code>Job</code> and a <code>Job --from CronJob</code>:</p>
<pre><code>user@minikube:~$ cat test_role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: job
rules:
- apiGroups: [&quot;batch&quot;]
  resources: [&quot;jobs&quot;]
  verbs: [&quot;create&quot;]
- apiGroups: [&quot;batch&quot;]
  resources: [&quot;cronjobs&quot;]
  verbs: [&quot;get&quot;]

user@minikube:~$ kubectl create job --image=inginx testjob20
job.batch/testjob20 created
user@minikube:~$ kubectl create job --from=cronjobs/hello testjob21
job.batch/testjob21 created
</code></pre>
<p>But if I granted access only to create <code>Jobs</code>, without get <code>CronJobs</code>, I was able to create a <code>Job</code> but not a <code>Job --from CronJob</code>:</p>
<pre><code>user@minikube:~$ cat test_role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: job
rules:
- apiGroups: [&quot;batch&quot;]
  resources: [&quot;jobs&quot;]
  verbs: [&quot;create&quot;]

user@minikube:~$ kubectl create job --image=nginx testjob3
job.batch/testjob3 created
user@minikube:~$ kubectl create job --from=cronjobs/hello testjob4
Error from server (Forbidden): cronjobs.batch &quot;hello&quot; is forbidden: User &quot;system:serviceaccount:default:t1&quot; cannot get resource &quot;cronjobs&quot; in API group &quot;batch&quot; in the namespace &quot;default&quot;
</code></pre>
<p>When I removed access to create <code>Jobs</code>, I couldn't create either a <code>Job</code> or a <code>Job --from CronJob</code>:</p>
<pre><code>user@minikube:~$ cat test_role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: job
rules:
- apiGroups: [&quot;batch&quot;]
  resources: [&quot;cronjobs&quot;]
  verbs: [&quot;get&quot;]

user@minikube:~$ kubectl create job --image=inginx testjob10
error: failed to create job: jobs.batch is forbidden: User &quot;system:serviceaccount:default:t1&quot; cannot create resource &quot;jobs&quot; in API group &quot;batch&quot; in the namespace &quot;default&quot;
user@minikube:~$ kubectl create job --from=cronjobs/hello testjob11
error: failed to create job: jobs.batch is forbidden: User &quot;system:serviceaccount:default:t1&quot; cannot create resource &quot;jobs&quot; in API group &quot;batch&quot; in the namespace &quot;default&quot;
</code></pre>
<p>As you can see, if the User/ServiceAccount doesn't have both permissions, it's impossible to run the <code>--from</code> command at all, so it's impossible to create such a restriction using RBAC rules alone.</p>
<p>One possible solution is to split the permissions across two different Users/ServiceAccounts for the two different tasks (the first user can create <code>Jobs</code> + get <code>CronJobs</code>; the second user gets no permission to create <code>Jobs</code>).</p>
<p>Another possibility is to try to use a k8s admission controller, e.g. <a href="https://www.openpolicyagent.org/docs/v0.12.2/kubernetes-admission-control/" rel="nofollow noreferrer">Open Policy Agent</a>.</p>
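<p>The split-accounts option could be sketched like this (the names are illustrative, not prescribed by anything above; adjust them to your setup). The dedicated account gets exactly the two verbs shown in the first example, and everyone else simply never receives <code>create jobs</code>:</p>

```yaml
# Hypothetical dedicated account that may re-run CronJobs.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cronjob-rerunner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cronjob-rerunner
  namespace: default
rules:
# create Jobs + get CronJobs: the minimum for "kubectl create job --from"
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create"]
- apiGroups: ["batch"]
  resources: ["cronjobs"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cronjob-rerunner
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cronjob-rerunner
subjects:
- kind: ServiceAccount
  name: cronjob-rerunner
  namespace: default
```

Note that this account can still create arbitrary Jobs, which is exactly the limitation described above; the split only narrows <em>who</em> holds that permission.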
<p>I have deployed the Ambassador Edge Stack and I am using Host and Mapping resources to route my traffic. I want to implement the mapping in such a way that if there is a double slash in the path, one slash is removed from it, using regex or any other available way. For example, if a client requests <code>https://a.test.com//testapi</code>, I want it to become <code>https://a.test.com/testapi</code>.</p>
<p>I searched through the Ambassador documentation but I am unable to find anything that can be of help.</p>
<p>Thank you</p>
<p>There is the <a href="https://www.getambassador.io/docs/emissary/1.14/topics/running/ambassador/#the-module-resource" rel="nofollow noreferrer">Module Resource</a> for Emissary-ingress.</p>
<blockquote>
<p>If present, the Module defines system-wide configuration. This module can be applied to any Kubernetes service (the ambassador service itself is a common choice). You may very well not need this Module. To apply the Module to an Ambassador Service, it MUST be named ambassador, otherwise it will be ignored. To create multiple ambassador Modules in the same namespace, they should be put in the annotations of each separate Ambassador Service.</p>
</blockquote>
<p>You should add this to the Module's yaml file:</p>
<pre><code>spec:
  ...
  config:
    ...
    merge_slashes: true
</code></pre>
<blockquote>
<p>If true, Emissary-ingress will merge adjacent slashes for the purpose of route matching and request filtering. For example, a request for //foo///bar will be matched to a Mapping with prefix /foo/bar.</p>
</blockquote>
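<p>Putting it together, a complete Module could look like the sketch below. (This assumes your installation runs in a namespace named <code>ambassador</code>, and the <code>apiVersion</code> depends on your Ambassador/Emissary version, e.g. <code>getambassador.io/v2</code> for the 1.x series linked above.)</p>

```yaml
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador        # must be named "ambassador" to take effect
  namespace: ambassador   # assumed namespace of your installation
spec:
  config:
    merge_slashes: true   # //foo///bar now matches a Mapping with prefix /foo/bar
```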
<p>Service type: NodePort</p>
<p>Problem: can't access clusterIP:NodePort.</p>
<p>In the kube-proxy pod log I found the following:</p>
<pre><code>&quot;can't open port, skipping it&quot; err=&quot;listen tcp4 :32060: bind: address already in use&quot; port={Description:nodePort for default/network-example2 IP: IPFamily:4 Port:32060 Protocol:TCP}
</code></pre>
<p>What is the problem?</p>
<p>This seems to be caused by a reported <a href="https://github.com/kubernetes/kubernetes/issues/107170" rel="nofollow noreferrer">bug</a> in kube-proxy versions after v1.20.x (mine is v1.23.4). The <a href="https://github.com/kubernetes/kubernetes/pull/107413" rel="nofollow noreferrer">fix</a> is merged for the upcoming v1.24 release.</p> <p>It is also confirmed in this <a href="https://github.com/kubernetes/kubernetes/issues/107297" rel="nofollow noreferrer">issue</a> that there is no error if you downgrade to the earlier release v1.20.1.</p>
<p>With k3d, I am receiving a DNS error when the pod tries to access a URL over the internet.</p> <pre><code>ERROR: getaddrinfo EAI_AGAIN DNS could not be resolved </code></pre> <p>How can I get past this error?</p>
<p>It depends on your context, OS, and version.</p> <p>For instance, you will see various proxy issues in <a href="https://github.com/k3d-io/k3d/issues/209" rel="nofollow noreferrer"><code>k3d-io/k3d</code> issue 209</a>:</p> <p>This could be related to the way k3d creates the docker network.</p> <blockquote> <p>Indeed, k3d creates a custom docker network for each cluster and when this happens resolving is done through the docker daemon.<br /> The requests are actually forwarded to the DNS servers configured in your host's <code>resolv.conf</code>. But through a single DNS server (the embedded one of docker).</p> <p>This means that if your daemon.json is, like mine, not configured to provide extra DNS servers it defaults to 8.8.8.8 which does not resolve any company address for example.</p> <p>It would be useful to have a custom option to provide to k3d when it starts the cluster and specify the DNS servers there</p> </blockquote> <p>Which is why there is &quot;<a href="https://github.com/k3d-io/k3d/issues/220" rel="nofollow noreferrer">v3/networking: <code>--network</code> flag to attach to existing networks</a>&quot;, referring to <a href="https://k3d.io/v5.3.0/design/networking/" rel="nofollow noreferrer">Networking</a>.</p> <p>Before that new flag:</p> <blockquote> <p>For those who have the problem, a simple fix is to mount your <code>/etc/resolv.conf</code> onto the cluster:</p> <pre><code>k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf
</code></pre> </blockquote>
<p>I am getting this error in the ClusterIssuer (cert-manager version 1.7.1):</p>
<p>&quot;Error getting keypair for CA issuer: error decoding certificate PEM block&quot;</p>
<p>I have the ca.crt, tls.crt and tls.key stored in a Key Vault in Azure.</p>
<p><strong>kubectl describe clusterissuer ca-issuer</strong></p>
<pre><code>  Ca:
    Secret Name:  cert-manager-secret
Status:
  Conditions:
    Last Transition Time:  2022-02-25T11:40:49Z
    Message:               Error getting keypair for CA issuer: error decoding certificate PEM block
    Observed Generation:   1
    Reason:                ErrGetKeyPair
    Status:                False
    Type:                  Ready
Events:
  Type     Reason         Age                  From          Message
  ----     ------         ----                 ----          -------
  Warning  ErrGetKeyPair  3m1s (x17 over 58m)  cert-manager  Error getting keypair for CA issuer: error decoding certificate PEM block
  Warning  ErrInitIssuer  3m1s (x17 over 58m)  cert-manager  Error initializing issuer: error decoding certificate PEM block
</code></pre>
<p><strong>kubectl get clusterissuer</strong></p>
<pre><code>NAME        READY   AGE
ca-issuer   False   69m
</code></pre>
<p>This is the ClusterIssuer yaml file:</p>
<p><strong>ca-issuer.yaml</strong></p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
  namespace: cert-manager
spec:
  ca:
    secretName: cert-manager-secret
</code></pre>
<p>This is the Key Vault yaml file to retrieve the ca.crt, tls.crt and tls.key:</p>
<p><strong>keyvault.yaml</strong></p>
<pre><code>apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akscacrt
  namespace: cert-manager
spec:
  vault:
    name: kv-xx                   # name of key vault
    object:
      name: akscacrt              # name of the akv object
      type: secret                # akv object type
  output:
    secret:
      name: cert-manager-secret   # kubernetes secret name
      dataKey: ca.crt             # key to store object value in kubernetes secret
---
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akstlscrt
  namespace: cert-manager
spec:
  vault:
    name: kv-xx                   # name of key vault
    object:
      name: akstlscrt             # name of the akv object
      type: secret                # akv object type
  output:
    secret:
      name: cert-manager-secret   # kubernetes secret name
      dataKey: tls.crt            # key to store object value in kubernetes secret
---
apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: secret-akstlskey
  namespace: cert-manager
spec:
  vault:
    name: kv-xx                   # name of key vault
    object:
      name: akstlskey             # name of the akv object
      type: secret                # akv object type
  output:
    secret:
      name: cert-manager-secret   # kubernetes secret name
      dataKey: tls.key            # key to store object value in kubernetes secret
---
</code></pre>
<p>and these are the certificates used:</p>
<pre><code>---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: argocd-xx
  namespace: argocd
spec:
  secretName: argocd-xx
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: &quot;argocd.xx&quot;
  dnsNames:
  - &quot;argocd.xx&quot;
  privateKey:
    size: 4096
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: sonarqube-xx
  namespace: sonarqube
spec:
  secretName: &quot;sonarqube-xx&quot;
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: &quot;sonarqube.xx&quot;
  dnsNames:
  - &quot;sonarqube.xx&quot;
  privateKey:
    size: 4096
</code></pre>
<p>I can see that I can retrieve the secrets for the certificate from the Key Vault:</p>
<p><strong>kubectl get secret -n cert-manager cert-manager-secret -o yaml</strong></p>
<pre><code>apiVersion: v1
data:
  ca.crt: XXX
  tls.crt: XXX
  tls.key: XXX
</code></pre>
<p>Also, another strange thing is that I am getting other secrets in the sonarqube/argocd namespaces which I deployed previously but which are no longer in my deployment file. I cannot delete them; when I try to delete them, they are re-created automatically. It looks like they are stored in some kind of cache. I also tried to delete the akv2k8s/cert-manager namespaces, delete the cert-manager/akv2k8s controllers and re-install them again, but I hit the same issue after re-installing and applying the deployment.</p>
<pre><code>kubectl get secret -n sonarqube
NAME                  TYPE                                  DATA   AGE
cert-manager-secret   Opaque                                3      155m
default-token-c8b86   kubernetes.io/service-account-token   3      2d1h
sonarqube-xx-7v7dh    Opaque                                1      107m
sql-db-secret         Opaque                                2      170m

kubectl get secret -n argocd
NAME                       TYPE                                  DATA   AGE
argocd-xx-7b5kb            Opaque                                1      107m
cert-manager-secret-argo   Opaque                                3      157m
default-token-pjb4z        kubernetes.io/service-account-token   3      3d15h
</code></pre>
<p><strong>kubectl describe certificate sonarqube-xxx -n sonarqube</strong></p>
<pre><code>Status:
  Conditions:
    Last Transition Time:        2022-02-25T11:04:08Z
    Message:                     Issuing certificate as Secret does not exist
    Observed Generation:         1
    Reason:                      DoesNotExist
    Status:                      False
    Type:                        Ready
    Last Transition Time:        2022-02-25T11:04:08Z
    Message:                     Issuing certificate as Secret does not exist
    Observed Generation:         1
    Reason:                      DoesNotExist
    Status:                      True
    Type:                        Issuing
  Next Private Key Secret Name:  sonarqube-xxx-7v7dh
Events:                          &lt;none&gt;
</code></pre>
<p>Any idea?</p>
<p>Thanks.</p>
<p>I figured it out: upload the certificate data (<strong>ca.crt</strong>, <strong>tls.crt</strong> and <strong>tls.key</strong>) to the Key Vault secrets in Azure <strong>in plain text, without BASE64 encoding</strong>.</p>
<p>When akv2k8s retrieves the secrets from the Key Vault and stores them in Kubernetes, they are automatically encoded in BASE64. If the values in the Key Vault are already BASE64-encoded, they end up double-encoded, and cert-manager then fails to decode the PEM block.</p>
<p>Regards,</p>
<p>I am trying to back up a database from a Kubernetes cluster to my computer as a BSON file. I have connected MongoDB Compass to the Kubernetes cluster using port-forwarding. Can anyone help me with the command I need to download a particular collection (450 GB) from a database to my desktop?</p>
<p>I've been trying for a while now but I can't seem to find a way around it.</p>
<p>In MongoDB Compass there is unfortunately no way to download a collection as a BSON file. The port I have forwarded the Kubernetes pod to is 27017.</p>
<p>From the MongoDB official docs:</p>
<blockquote>
<p>Run <a href="https://docs.mongodb.com/database-tools/mongodump/#mongodb-binary-bin.mongodump" rel="nofollow noreferrer"><code>mongodump</code></a> from the system command line, not the <a href="https://docs.mongodb.com/manual/reference/program/mongo/#mongodb-binary-bin.mongo" rel="nofollow noreferrer"><code>mongo</code></a> shell.</p>
</blockquote>
<p>So, assuming you configured <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">Kubernetes Port Forwarding</a> on your <strong>local machine</strong> like this:</p>
<pre><code>$ kubectl port-forward service/mongo 28015:27017
</code></pre>
<p>And you've got output similar to this:</p>
<pre class="lang-yaml prettyprint-override"><code>Forwarding from 127.0.0.1:28015 -&gt; 27017
Forwarding from [::1]:28015 -&gt; 27017
</code></pre>
<p>You can now export data from MongoDB using the following command:</p>
<pre><code>$ mongodump --username root --port=28015 -p secretpassword
</code></pre>
<p>This will create a <code>dump</code> directory in the current working directory and export the data there.</p>
<p>Also, to export only a specific collection, use the following option:</p>
<pre><code>--collection=&lt;collection&gt;, -c=&lt;collection&gt;
</code></pre>
<blockquote>
<p>Specifies a collection to backup. If you do not specify a collection, this option copies all collections in the specified database or instance to the dump files.</p>
</blockquote>
<p>You can check the other available options <a href="https://docs.mongodb.com/database-tools/mongodump/" rel="nofollow noreferrer">here</a>.</p>
<p>Let's assume I have a node labeled with the labels <code>myKey1: 2</code>, <code>myKey2: 5</code>, <code>myKey3: 3</code>, <code>myKey4: 6</code>. I now want to check if one of those labels has a value greater than 4 and, if so, schedule my workload on this node. For that I use the following <code>nodeAffinity</code> rule:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
  - name: wl1
    image: myImage:latest
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: myKey1
            operator: Gt
            values:
            - &quot;4&quot;
        - matchExpressions:
          - key: myKey2
            operator: Gt
            values:
            - &quot;4&quot;
        - matchExpressions:
          - key: myKey3
            operator: Gt
            values:
            - &quot;4&quot;
        - matchExpressions:
          - key: myKey4
            operator: Gt
            values:
            - &quot;4&quot;
</code></pre>
<p>I would instead love to use something shorter to be able to address a bunch of similar labels, e.g.</p>
<pre class="lang-yaml prettyprint-override"><code>  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: myKey*
            operator: Gt
            values:
            - &quot;4&quot;
</code></pre>
<p>so basically using a <code>key</code>-wildcard and the different checks connected via a logical <code>OR</code>. Is this possible, or is there another solution to check the value of multiple similar labels?</p>
<p>As <a href="https://stackoverflow.com/users/1909531/matthias-m" title="9,753 reputation">Matthias M</a> wrote in the comment:</p> <blockquote> <p>I would add an extra label to all nodes, which should match. I think that was the simplest solution. Was that also a solution for you? <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node</a></p> </blockquote> <p>In your situation, it will actually be easier to just add another key and check only one condition. Alternatively, you can try to use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements" rel="nofollow noreferrer">set-based values</a>:</p> <blockquote> <p>Newer resources, such as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer"><code>Job</code></a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>Deployment</code></a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer"><code>ReplicaSet</code></a>, and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer"><code>DaemonSet</code></a>, support <em>set-based</em> requirements as well.</p> </blockquote> <pre class="lang-yaml prettyprint-override"><code>selector: matchLabels: component: redis matchExpressions: - {key: tier, operator: In, values: [cache]} - {key: environment, operator: NotIn, values: [dev]} </code></pre> <blockquote> <p><code>matchLabels</code> is a map of <code>{key,value}</code> pairs. 
A single <code>{key,value}</code> in the <code>matchLabels</code> map is equivalent to an element of <code>matchExpressions</code>, whose <code>key</code> field is &quot;key&quot;, the <code>operator</code> is &quot;In&quot;, and the <code>values</code> array contains only &quot;value&quot;. <code>matchExpressions</code> is a list of pod selector requirements. Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both <code>matchLabels</code> and <code>matchExpressions</code> are ANDed together -- they must all be satisfied in order to match.</p> </blockquote> <p>For more about it read also <a href="https://stackoverflow.com/questions/55002334/kubernetes-node-selector-regex">this question</a>.</p>
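<p>For completeness, the &quot;extra label&quot; approach reduces the original four <code>Gt</code> checks to a single expression. A minimal sketch, assuming you label the qualifying nodes yourself (the label name <code>highCapacity</code> is hypothetical):</p> <pre class="lang-yaml prettyprint-override"><code># Label every node where at least one myKeyN value is greater than 4, e.g.:
#   kubectl label node &lt;node-name&gt; highCapacity=true
# Then a single matchExpressions entry replaces the four Gt checks:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: highCapacity
          operator: In
          values:
          - &quot;true&quot;
</code></pre>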
<p>I want to force the interface, setting the <code>IP_AUTODETECTION_METHOD</code>:</p> <pre><code>$ kubectl set env daemonset/calico-node -n calico-system IP_AUTODETECTION_METHOD=interface=ens192
daemonset.apps/calico-node env updated
</code></pre> <p>But nothing happens:</p> <pre><code>$ kubectl set env daemonset/calico-node -n calico-system --list | grep IP_AUTODETECTION_METHOD
IP_AUTODETECTION_METHOD=first-found
</code></pre>
<p>On my cluster, running this:</p> <pre><code>kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=vbmgt0 </code></pre> <p>from the control node did the trick as expected.</p> <p>Are you running <code>kubectl</code> from the proper node?</p>
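<p>Note also that the question targets the <code>calico-system</code> namespace, which usually indicates an operator-managed (Tigera operator) installation. In that case the operator reconciles the DaemonSet and reverts manual <code>kubectl set env</code> changes, which would explain why the value stays at <code>first-found</code>. A sketch of setting it at the operator level instead (assuming the <code>operator.tigera.io/v1</code> <code>Installation</code> resource is named <code>default</code>, as it is by default):</p> <pre class="lang-yaml prettyprint-override"><code># kubectl edit installation default   (or apply this spec);
# the operator then rolls the setting out to calico-node itself.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    nodeAddressAutodetectionV4:
      interface: ens192
</code></pre>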
<p>All I have is a GKE cluster and there are 3 node pools and the machine size is <code>e2-standard-2</code> but when I push my deployment into this cluster I got this error on gke dashboard <a href="https://i.stack.imgur.com/FDVOl.png" rel="nofollow noreferrer">error image</a></p> <p>Although I have enabled the node auto-provisioning but it is still showing this error. Can you help me out how can I fix this issue?</p>
<p>From what I understand, you are using <code>Standard GKE</code>, not <code>Autopilot GKE</code>.</p> <p>The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning" rel="nofollow noreferrer">Using node auto-provisioning</a> GKE documentation provides much information about this feature.</p> <p>Regarding your issue, you mention that your cluster didn't have <code>node auto-provisioning</code> and you have enabled it. The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#enable" rel="nofollow noreferrer">Enabling node auto-provisioning</a> section contains this note:</p> <blockquote> <p>If you disable then re-enable auto-provisioning on your cluster, <strong>existing node pools will not have auto-provisioning enabled. To re-enable auto-provisioning for these node pools, you need to</strong> <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#mark_node_auto-provisioned" rel="nofollow noreferrer">mark individual node pools</a> as auto-provisioned.</p> </blockquote> <p>In short, if you just enabled auto-provisioning on the cluster using:</p> <pre><code>gcloud container clusters update CLUSTER_NAME \
  --enable-autoprovisioning \
  ... (other config)
</code></pre> <p><strong>it will only work for new node-pools.</strong></p> <p>To enable <code>node auto-provisioning</code> for <strong>already existing</strong> <code>node-pools</code>, you need to specify them by name:</p> <pre><code>gcloud container node-pools update NODE_POOL_NAME \
  --enable-autoprovisioning
</code></pre> <p>If you didn't change anything, the first <code>node-pool</code> in a new cluster is by default called <code>default-pool</code>. So for example you should use:</p> <pre><code>gcloud container node-pools update default-pool \
  --enable-autoprovisioning
</code></pre> <p>If this doesn't solve your issue, please provide more details about your cluster and the commands you execute to replicate this behavior.</p>
<p>The Minikube cluster on my Debian system removes all resources which have been up and running after I run <code>minikube stop</code> or shutting down the system.</p> <p>What could possibly be the reason? How can I persist it?</p> <p>Kubernetes &amp; Kubectl version 1.23.1.</p>
<p>Posting it as an answer: the issue with all deployed resources being removed from a Minikube cluster after <code>minikube stop</code> or a system shutdown was raised <a href="https://github.com/kubernetes/minikube/issues/12655" rel="nofollow noreferrer">here</a>. It is fixed in the newest release; upgrading your Minikube installation to version 1.25.2 should solve it.</p>
<p>We are working with Akka and have an application which uses Akka Cluster that we deploy to AWS EKS environment. When we run a load test against the application we observe heartbeat intervals between the components growing large and pods getting restarted. Akka documentation mentions not to use CPU resource limit, and removing the resource limit solves the issue.</p> <p>But is there any other way of getting around this, we are not sure if removing the resource limits is a good practice when deploying application.</p> <p><a href="https://doc.akka.io/docs/akka/current/additional/deploying.html?_ga=2.206580627.186827534.1646216705-1504733962.1642433119#resource-limits" rel="nofollow noreferrer">Resource limits</a></p> <blockquote> <p>To avoid CFS scheduler limits, it is best not to use resources.limits.cpu limits, but use resources.requests.cpu configuration instead.</p> </blockquote>
<p>So, I know the docs do make this recommendation to not use limits so the following is just &quot;unofficial&quot; advice you are getting from StackOverflow. (And thus, if you want support from Lightbend you will have to debate this with them.)</p> <p>Firstly, I agree with you. For many reasons, you absolutely should have resource limits. For example, if you don't have CPU limits your process ends up being designated as &quot;best effort&quot; as far as the CFS scheduler is concerned and that can actually have bad consequences.</p> <p>As I understand the history of this recommendation from Lightbend, it comes from a situation similar to yours where the CFS scheduler was preventing Akka from getting the resources it needed. Plus the broader problem that, especially when the garbage collector kicks in, it's definitely possible to consume all of your CFS quota very quickly and end up with long GC pause times. The Lightbend position has been if you use CPU resource limits, then the CFS scheduler will limit your CPU consumption and that could cause problems.</p> <p>But my position is that limiting your CPU consumption is the entire point, and is actually a good thing. Kubernetes is a shared environment and limiting CPU consumption is how the system is designed to prevent &quot;noisy neighbors&quot;, fair sharing, and often cost chargebacks. My position is that the CPU limits themselves aren't bad, the problem is only when your limits are too low.</p> <p>While I hate to make generic advice, as there may be some situations where I might make different recommendations, I would generally recommend setting CPU limits, but having them be significantly higher than your CPU requests. (2-3 times as a rule of thumb.) This type of setting will classify your Pods as &quot;Burstable&quot;. 
This is important so that your Akka node can burst up to handle high-load situations such as GC, handling failover, etc.</p> <p>Ideally you then use something like the HPA (Horizontal Pod Autoscaler) such that your Akka cluster will autoscale so that it can handle its normal load with its &quot;request&quot; allocation of CPU and only uses that burst allocation during these occasional circumstances. (Because if you are always operating past the request allocation and close to the burst allocation, then you really aren't getting the benefit of bursting.)</p> <p>For example, in your specific case, you say you have problems with heartbeats when you set CPU limits but that goes away when you turn off the CPU limits. This tells me that your CPU limits were <em>way</em> too low. If Akka isn't able to get enough CPU to do heartbeats, even under normal circumstances, then there is no way that it would be able to get enough CPU to handle peaks.</p> <p>(Also, since we are assuming that the system will normally be running at the request CPU allocation, but potentially running at the limit CPU allocation during GC, I would typically tune my Akka threads as if it had the &quot;request&quot; CPU and tune my GC threads as if it had the &quot;limit&quot; CPU allocation.)</p>
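<p>As an illustrative sketch of the rule of thumb above (the numbers are hypothetical and should be sized from your own load tests):</p> <pre class="lang-yaml prettyprint-override"><code>resources:
  requests:
    cpu: &quot;1&quot;       # sized for normal steady-state load
    memory: 2Gi
  limits:
    cpu: &quot;3&quot;       # ~2-3x the request: headroom for GC, failover, peaks
    memory: 2Gi    # memory limit kept equal to the request
</code></pre> <p>With the CPU request lower than the CPU limit, the Pod lands in the &quot;Burstable&quot; QoS class described above.</p>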
<p>database-deployment.yaml</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgres-db
spec:
  replicas:
  selector:
    matchLabels:
      app: postgres-db
  template:
    metadata:
      labels:
        app: postgres-db
    spec:
      containers:
        - name: postgres-db
          image: postgres:latest
          ports:
            - protocol: TCP
              containerPort: 1234
          env:
            - name: POSTGRES_DB
              value: &quot;classroom&quot;
            - name: POSTGRES_USER
              value: temp
            - name: POSTGRES_PASSWORD
              value: temp
</code></pre> <p>database-service.yaml</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: database-service
spec:
  selector:
    app: postgres-db
  ports:
    - protocol: TCP
      port: 1234
      targetPort: 1234
</code></pre> <p>I want to use this database-service url for other deployment so i tried to add it in configMap</p> <p>my-configMap.yaml</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: classroom-configmap
data:
  database_url: database-service
</code></pre> <p>[Not Working] Expected - database_url : database-service (will be replaced with corresponding service URL)</p> <p><code>ERROR - Driver org.postgresql.Driver claims to not accept jdbcUrl, database-service</code></p> <pre><code>$ kubectl describe configmaps classroom-configmap
</code></pre> <p>Output :</p> <pre><code>Name:         classroom-configmap
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Data
====
database_url:
----
database-service

BinaryData
====

Events:  &lt;none&gt;
</code></pre>
<p>According to the error you are having:</p> <p><code>Driver org.postgresql.Driver claims to not accept jdbcUrl</code></p> <p>It seems that there are a few issues with that URL, and the latest PostgreSQL driver may complain.</p> <ol> <li><code>jdbc:postgres:</code> isn't right; use <code>jdbc:postgresql:</code> instead</li> <li>Do not embed credentials as <code>jdbc:postgresql://&lt;username&gt;:&lt;password&gt;@...</code>; use query parameters instead: <code>jdbc:postgresql://&lt;host&gt;:&lt;port&gt;/&lt;dbname&gt;?user=&lt;username&gt;&amp;password=&lt;password&gt;</code></li> <li>In some cases you have to force an SSL connection by adding the <code>sslmode=require</code> parameter</li> </ol>
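<p>Putting those points together, the value stored in the ConfigMap would need to be a full JDBC URL rather than just the service name. A sketch using the host, port, database, and credentials from the question's manifests:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: classroom-configmap
data:
  # the Service name resolves via cluster DNS; port 1234 matches database-service
  database_url: &quot;jdbc:postgresql://database-service:1234/classroom?user=temp&amp;password=temp&quot;
</code></pre>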
<p>When adding a kustomize patch to a <code>kustomization.yaml</code>, the double quotes are replaced with single quotes, which leads to an error.</p> <p>I am using the following:</p> <pre class="lang-sh prettyprint-override"><code>kustomize edit add patch --patch &quot;- op: add\n path: /metadata/annotations/argocd.argoproj.io~1sync-wave\n. value: 1&quot; --kind Deployment
</code></pre> <p>which is converted to</p> <pre class="lang-yaml prettyprint-override"><code>- patch: '- op: add\n path: /metadata/annotations/argocd.argoproj.io~1sync-wave\n value: 1'
  target:
    kind: Deployment
</code></pre> <p>in the <code>kustomization.yaml</code>.</p> <p>This leads to the following error when you run <code>kustomize build</code>:</p> <pre><code>Error: trouble configuring builtin PatchTransformer with config: `
patch: '- op: add\n path: /metadata/annotations/argocd.argoproj.io~1sync-wave\n value: 1'
target:
  kind: Deployment
`: unable to parse SM or JSON patch from [- op: add\n path: /metadata/annotations/argocd.argoproj.io~1sync-wave\n value: 1]
</code></pre> <p>How do I make sure that the patch in <code>kustomization.yaml</code> has double quotes instead?</p>
<p>Since I had hundreds of kustomization files that needed to be updated with the annotations for ArgoCD sync-waves, I worked around the problem by using <code>commonAnnotations</code> instead (I believe this is the right way of doing it as well). So instead of adding a patch, I did the following:</p> <pre><code>kustomize edit add annotation argocd.argoproj.io/sync-wave:$wave --force </code></pre> <p>This will add the annotation to all objects. Where <code>$wave</code> was the wave number and <code>--force</code> overwrites the annotation if it already exists in the file.</p>
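<p>For reference, the command above simply adds a <code>commonAnnotations</code> entry to each <code>kustomization.yaml</code>, which sidesteps the patch-quoting problem entirely (the wave number here is hypothetical):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonAnnotations:
  argocd.argoproj.io/sync-wave: &quot;1&quot;
</code></pre>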
<p>I have installed minikube in my windows 10. I was able to create deployment and work. But when i stop minikube everything(including deployment) is lost. How to run minikube as a service startup in windows?</p>
<p>I've reproduced your problem on version <code>1.25.1</code> - indeed, all resources that were deployed in the cluster are deleted on <code>minikube stop</code>.</p> <p>As I already mentioned in the comments, this issue was raised <a href="https://github.com/kubernetes/minikube/issues/12655" rel="nofollow noreferrer">here</a>. It is fixed in the latest release; you would need to upgrade your Minikube installation to version <code>1.25.2</code> - basically, re-install it with the newest version available <a href="https://github.com/kubernetes/minikube/releases/tag/v1.25.2" rel="nofollow noreferrer">here</a>. Confirming from my side that as soon as I upgraded Minikube from 1.25.1 to 1.25.2, deployments and all other resources were present on the cluster after restarting Minikube.</p>
<p>I'm trying to setup a Google Kubernetes Engine cluster with GPU's in the nodes loosely following <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#gcloud" rel="nofollow noreferrer">these instructions</a>, because I'm programmatically deploying using the Python client.</p> <p>For some reason I can create a cluster with a NodePool that contains GPU's <a href="https://i.stack.imgur.com/5yR8q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5yR8q.png" alt="GKE NodePool with GPUs" /></a></p> <p>...But, the nodes in the NodePool don't have access to those GPUs. <a href="https://i.stack.imgur.com/DJPVm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DJPVm.png" alt="Node without access to GPUs" /></a></p> <p>I've already installed the NVIDIA DaemonSet with this yaml file: <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml</a></p> <p>You can see that it's there in this image: <a href="https://i.stack.imgur.com/PEggq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PEggq.png" alt="enter image description here" /></a></p> <p>For some reason those 2 lines always seem to be in status &quot;ContainerCreating&quot; and &quot;PodInitializing&quot;. They never flip green to status = &quot;Running&quot;. How can I get the GPU's in the NodePool to become available in the node(s)?</p> <h3>Update:</h3> <p>Based on comments I ran the following commands on the 2 NVIDIA pods; <code>kubectl describe pod POD_NAME --namespace kube-system</code>.</p> <p>To do this I opened the UI KUBECTL command terminal on the node. 
Then I ran the following commands:</p> <p><code>gcloud container clusters get-credentials CLUSTER-NAME --zone ZONE --project PROJECT-NAME</code></p> <p>Then, I called <code>kubectl describe pod nvidia-gpu-device-plugin-UID --namespace kube-system</code> and got this output:</p> <pre><code>Name: nvidia-gpu-device-plugin-UID Namespace: kube-system Priority: 2000001000 Priority Class Name: system-node-critical Node: gke-mycluster-clust-default-pool-26403abb-zqz6/X.X.X.X Start Time: Wed, 02 Mar 2022 20:19:49 +0000 Labels: controller-revision-hash=79765599fc k8s-app=nvidia-gpu-device-plugin pod-template-generation=1 Annotations: &lt;none&gt; Status: Pending IP: IPs: &lt;none&gt; Controlled By: DaemonSet/nvidia-gpu-device-plugin Containers: nvidia-gpu-device-plugin: Container ID: Image: gcr.io/gke-release/nvidia-gpu-device-plugin@sha256:aa80c85c274a8e8f78110cae33cc92240d2f9b7efb3f53212f1cefd03de3c317 Image ID: Port: 2112/TCP Host Port: 0/TCP Command: /usr/bin/nvidia-gpu-device-plugin -logtostderr --enable-container-gpu-metrics --enable-health-monitoring State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Limits: cpu: 50m memory: 50Mi Requests: cpu: 50m memory: 20Mi Environment: LD_LIBRARY_PATH: /usr/local/nvidia/lib64 Mounts: /dev from dev (rw) /device-plugin from device-plugin (rw) /etc/nvidia from nvidia-config (rw) /proc from proc (rw) /usr/local/nvidia from nvidia (rw) /var/lib/kubelet/pod-resources from pod-resources (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-qnxjr (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: device-plugin: Type: HostPath (bare host directory volume) Path: /var/lib/kubelet/device-plugins HostPathType: dev: Type: HostPath (bare host directory volume) Path: /dev HostPathType: nvidia: Type: HostPath (bare host directory volume) Path: /home/kubernetes/bin/nvidia HostPathType: Directory pod-resources: Type: HostPath (bare host directory volume) 
Path: /var/lib/kubelet/pod-resources HostPathType: proc: Type: HostPath (bare host directory volume) Path: /proc HostPathType: nvidia-config: Type: HostPath (bare host directory volume) Path: /etc/nvidia HostPathType: default-token-qnxjr: Type: Secret (a volume populated by a Secret) SecretName: default-token-qnxjr Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: :NoExecute op=Exists :NoSchedule op=Exists node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 8m55s default-scheduler Successfully assigned kube-system/nvidia-gpu-device-plugin-hxdwx to gke-opcode-trainer-clust-default-pool-26403abb-zqz6 Warning FailedMount 6m42s kubelet Unable to attach or mount volumes: unmounted volumes=[nvidia], unattached volumes=[nvidia-config default-token-qnxjr device-plugin dev nvidia pod-resources proc]: timed out waiting for the condition Warning FailedMount 4m25s kubelet Unable to attach or mount volumes: unmounted volumes=[nvidia], unattached volumes=[proc nvidia-config default-token-qnxjr device-plugin dev nvidia pod-resources]: timed out waiting for the condition Warning FailedMount 2m11s kubelet Unable to attach or mount volumes: unmounted volumes=[nvidia], unattached volumes=[pod-resources proc nvidia-config default-token-qnxjr device-plugin dev nvidia]: timed out waiting for the condition Warning FailedMount 31s (x12 over 8m45s) kubelet MountVolume.SetUp failed for volume &quot;nvidia&quot; : hostPath type check failed: /home/kubernetes/bin/nvidia is not a directory </code></pre> <p>Then, I called <code>kubectl describe pod nvidia-driver-installer-UID --namespace kube-system</code> and got this 
output:</p> <pre><code>Name: nvidia-driver-installer-UID Namespace: kube-system Priority: 0 Node: gke-mycluster-clust-default-pool-26403abb-zqz6/X.X.X.X Start Time: Wed, 02 Mar 2022 20:20:06 +0000 Labels: controller-revision-hash=6bbfc44f6d k8s-app=nvidia-driver-installer name=nvidia-driver-installer pod-template-generation=1 Annotations: &lt;none&gt; Status: Pending IP: 10.56.0.9 IPs: IP: 10.56.0.9 Controlled By: DaemonSet/nvidia-driver-installer Init Containers: nvidia-driver-installer: Container ID: Image: gke-nvidia-installer:fixed Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Requests: cpu: 150m Environment: &lt;none&gt; Mounts: /boot from boot (rw) /dev from dev (rw) /root from root-mount (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-qnxjr (ro) Containers: pause: Container ID: Image: gcr.io/google-containers/pause:2.0 Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-qnxjr (ro) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: dev: Type: HostPath (bare host directory volume) Path: /dev HostPathType: boot: Type: HostPath (bare host directory volume) Path: /boot HostPathType: root-mount: Type: HostPath (bare host directory volume) Path: / HostPathType: default-token-qnxjr: Type: Secret (a volume populated by a Secret) SecretName: default-token-qnxjr Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: op=Exists node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule 
op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m20s default-scheduler Successfully assigned kube-system/nvidia-driver-installer-tzw42 to gke-opcode-trainer-clust-default-pool-26403abb-zqz6 Normal Pulling 2m36s (x4 over 4m19s) kubelet Pulling image &quot;gke-nvidia-installer:fixed&quot; Warning Failed 2m34s (x4 over 4m10s) kubelet Failed to pull image &quot;gke-nvidia-installer:fixed&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/library/gke-nvidia-installer:fixed&quot;: failed to resolve reference &quot;docker.io/library/gke-nvidia-installer:fixed&quot;: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed Warning Failed 2m34s (x4 over 4m10s) kubelet Error: ErrImagePull Warning Failed 2m22s (x6 over 4m9s) kubelet Error: ImagePullBackOff Normal BackOff 2m7s (x7 over 4m9s) kubelet Back-off pulling image &quot;gke-nvidia-installer:fixed&quot; </code></pre>
<p>According to the Docker image that the container is trying to pull (<code>gke-nvidia-installer:fixed</code>), it looks like you're trying to use the Ubuntu daemonset instead of the <code>cos</code> one.</p> <p>You should run <code>kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml </code></p> <p>This will apply the right daemonset for your <code>cos</code> node pool, as stated <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers" rel="nofollow noreferrer">here</a>.</p> <p>In addition, please verify your node pool has the <code>https://www.googleapis.com/auth/devstorage.read_only</code> scope, which is needed to pull the image. You should see it in your node pool page in GCP Console, under Security -&gt; Access scopes (the relevant service is Storage).</p>
<p>For two Statefulsets <code>sts1</code> and <code>sts2</code>, would it be possible to schedule:</p> <ul> <li>sts1-pod-0 and sts2-pod-0 on the same node,</li> <li>sts1-pod-1 and sts2-pod-1 on the same node,</li> <li>...</li> <li>sts1-pod-n and sts2-pod-n on the same node,</li> </ul> <p>An in addition do not collocate two pods of a given Statefulset on the same node?</p>
<pre><code>sts1-pod-0 and sts2-pod-0 on the same node,
sts1-pod-1 and sts2-pod-1 on the same node,
...
sts1-pod-n and sts2-pod-n on the same node,
</code></pre> <p>One possible way is to run the paired containers in the same StatefulSet; this has the same effect as running them side by side on the same node. In this case your affinity rule only needs to ensure that no two pods of the StatefulSet run on the same node.</p>
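<p>If the workloads must stay in two separate StatefulSets, here is a hedged sketch of the pairing via inter-pod (anti-)affinity for <code>sts2</code>'s pod template (label names are hypothetical; note that plain affinity co-locates <em>some</em> sts1 pod with each sts2 pod and spreads each set across nodes, but cannot by itself guarantee matching ordinals, i.e. sts1-pod-i with sts2-pod-i specifically):</p> <pre class="lang-yaml prettyprint-override"><code>affinity:
  # never put two sts2 pods on the same node
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: sts2
      topologyKey: kubernetes.io/hostname
  # always land next to a pod of sts1
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: sts1
      topologyKey: kubernetes.io/hostname
</code></pre> <p><code>sts1</code> would carry the mirror-image anti-affinity on its own label so that its pods also spread one per node.</p>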
<p>We are <strong>Prometheus version 2.26.0</strong> and <strong>kubernetes verion 1.21.7 in Azure</strong>. We mount the data in <strong>Azure storage NFS</strong> and it was working fine. From last few days Prometheus container is crash-looping and below are the logs</p> <pre><code>level=info ts=2022-01-26T08:04:14.375Z caller=main.go:418 msg=&quot;Starting Prometheus&quot; version=&quot;(version=2.26.0, branch=HEAD, revision=3cafc58827d1ebd1a67749f88be4218f0bab3d8d)&quot; level=info ts=2022-01-26T08:04:14.375Z caller=main.go:423 build_context=&quot;(go=go1.16.2, user=root@a67cafebe6d0, date=20210331-11:56:23)&quot; level=info ts=2022-01-26T08:04:14.375Z caller=main.go:424 host_details=&quot;(Linux 5.4.0-1065-azure #68~18.04.1-Ubuntu SMP Fri Dec 3 14:08:44 UTC 2021 x86_64 prometheus-6b9d9d54f4-nc45x (none))&quot; level=info ts=2022-01-26T08:04:14.375Z caller=main.go:425 fd_limits=&quot;(soft=1048576, hard=1048576)&quot; level=info ts=2022-01-26T08:04:14.375Z caller=main.go:426 vm_limits=&quot;(soft=unlimited, hard=unlimited)&quot; level=info ts=2022-01-26T08:04:14.503Z caller=web.go:540 component=web msg=&quot;Start listening for connections&quot; address=0.0.0.0:9090 level=info ts=2022-01-26T08:04:14.507Z caller=main.go:795 msg=&quot;Starting TSDB ...&quot; level=info ts=2022-01-26T08:04:14.509Z caller=tls_config.go:191 component=web msg=&quot;TLS is disabled.&quot; http2=false level=info ts=2022-01-26T08:04:14.560Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641478251052 maxt=1641513600000 ulid=01FRSEHC4YHV3N26JY5AMNZFRW level=info ts=2022-01-26T08:04:14.593Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641513600037 maxt=1641578400000 ulid=01FRVCAP2VJGDF0Z9CS24EXAJJ level=info ts=2022-01-26T08:04:14.624Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641578400038 maxt=1641643200000 ulid=01FRXA4AQHMHAEYWRKQFGP075M level=info ts=2022-01-26T08:04:14.651Z 
caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641643200422 maxt=1641708000000 ulid=01FRZ7XQQ4RA96DCPPBP22D71N level=info ts=2022-01-26T08:04:14.679Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641708000020 maxt=1641772800000 ulid=01FS15QDG6BS7H6M6Y09HG3E12 level=info ts=2022-01-26T08:04:14.707Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641772800011 maxt=1641837600000 ulid=01FS33GT38PRSB9VP56YFXT2M0 level=info ts=2022-01-26T08:04:14.736Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641963555381 maxt=1641967200000 ulid=01FS6MRNZEWT1Z6P697K09KHD7 level=info ts=2022-01-26T08:04:14.763Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641837600100 maxt=1641902400000 ulid=01FS6R88C70TCD8CYC4XJ95X23 level=info ts=2022-01-26T08:04:14.810Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641967200019 maxt=1642032000000 ulid=01FS8WXQP3YJ7EXBVNYBQG4DVY level=info ts=2022-01-26T08:04:14.836Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642032000072 maxt=1642096800000 ulid=01FSATQBR4XBQRDM72ATFS9PQ2 level=info ts=2022-01-26T08:04:14.863Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642096800059 maxt=1642161600000 ulid=01FSCRHE2YBDX7GPRPSH6BNGRX level=info ts=2022-01-26T08:04:14.895Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642161600091 maxt=1642226400000 ulid=01FSEPB1GPGAANVCQ2VKW9BQ4G level=info ts=2022-01-26T08:04:14.948Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642226400026 maxt=1642291200000 ulid=01FSGM4J0G1D0A6H1GD3N9C372 level=info ts=2022-01-26T08:04:14.973Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642291200005 maxt=1642356000000 ulid=01FSJHY6W0FRYDHCXBVB5XPFYG level=info ts=2022-01-26T08:04:15.002Z caller=repair.go:57 
component=tsdb msg=&quot;Found healthy block&quot; mint=1642356000027 maxt=1642420800000 ulid=01FSMFR96DASV6YPN66W7C86H9 level=info ts=2022-01-26T08:04:15.077Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642420800042 maxt=1642485600000 ulid=01FSPDHGWRT65D8CKWQ2JPRHW3 level=info ts=2022-01-26T08:04:15.105Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642485600006 maxt=1642550400000 ulid=01FSRBAVP2MW71H08F32D6HGB4 level=info ts=2022-01-26T08:04:15.130Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642550400028 maxt=1642615200000 ulid=01FST9482FD0Z3PHXHNW2W616E level=info ts=2022-01-26T08:04:15.157Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642680000018 maxt=1642687200000 ulid=01FSW00TJKJ7CGCQ7JJS3XQK8G level=info ts=2022-01-26T08:04:15.187Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642687200018 maxt=1642694400000 ulid=01FSW6WHTSEAXHWV5J7PQP94X7 level=info ts=2022-01-26T08:04:15.213Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642615200021 maxt=1642680000000 ulid=01FSW6XYH2Y429PG5YRM0K45XS level=info ts=2022-01-26T08:04:15.275Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642694400018 maxt=1642701600000 ulid=01FSWDR92Y7H302NDZRX1V2PX9 level=info ts=2022-01-26T08:04:21.840Z caller=head.go:696 component=tsdb msg=&quot;Replaying on-disk memory mappable chunks if any&quot; level=info ts=2022-01-26T08:04:22.623Z caller=head.go:710 component=tsdb msg=&quot;On-disk memory mappable chunks replay completed&quot; duration=782.403397ms level=info ts=2022-01-26T08:04:22.623Z caller=head.go:716 component=tsdb msg=&quot;Replaying WAL, this may take a while&quot; level=info ts=2022-01-26T08:04:34.169Z caller=head.go:742 component=tsdb msg=&quot;WAL checkpoint loaded&quot; level=info ts=2022-01-26T08:04:38.895Z caller=head.go:768 component=tsdb 
msg=&quot;WAL segment loaded&quot; segment=299 maxSegment=7511
level=warn ts=2022-01-26T08:04:46.423Z caller=main.go:645 msg=&quot;Received SIGTERM, exiting gracefully...&quot;
level=info ts=2022-01-26T08:04:46.424Z caller=main.go:668 msg=&quot;Stopping scrape discovery manager...&quot;
level=info ts=2022-01-26T08:04:46.424Z caller=main.go:682 msg=&quot;Stopping notify discovery manager...&quot;
level=info ts=2022-01-26T08:04:46.424Z caller=main.go:704 msg=&quot;Stopping scrape manager...&quot;
level=info ts=2022-01-26T08:04:46.424Z caller=main.go:678 msg=&quot;Notify discovery manager stopped&quot;
level=info ts=2022-01-26T08:04:46.425Z caller=main.go:698 msg=&quot;Scrape manager stopped&quot;
level=info ts=2022-01-26T08:04:46.426Z caller=manager.go:934 component=&quot;rule manager&quot; msg=&quot;Stopping rule manager...&quot;
level=info ts=2022-01-26T08:04:46.426Z caller=manager.go:944 component=&quot;rule manager&quot; msg=&quot;Rule manager stopped&quot;
level=info ts=2022-01-26T08:04:46.426Z caller=notifier.go:601 component=notifier msg=&quot;Stopping notification manager...&quot;
level=info ts=2022-01-26T08:04:46.426Z caller=main.go:872 msg=&quot;Notifier manager stopped&quot;
level=info ts=2022-01-26T08:04:46.426Z caller=main.go:664 msg=&quot;Scrape discovery manager stopped&quot;
level=info ts=2022-01-26T08:04:46.792Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=300 maxSegment=7511
level=info ts=2022-01-26T08:04:46.870Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=301 maxSegment=7511
level=info ts=2022-01-26T08:04:46.901Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=302 maxSegment=7511
level=info ts=2022-01-26T08:04:46.946Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=303 maxSegment=7511
level=info ts=2022-01-26T08:04:46.974Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=304 maxSegment=7511
level=info ts=2022-01-26T08:04:47.008Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=305 maxSegment=7511
level=info ts=2022-01-26T08:04:47.034Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=306 maxSegment=7511
level=info ts=2022-01-26T08:04:47.067Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=307 maxSegment=7511
level=info ts=2022-01-26T08:04:47.098Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=308 maxSegment=7511
level=info ts=2022-01-26T08:04:47.124Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=309 maxSegment=7511
level=info ts=2022-01-26T08:04:47.158Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=310 maxSegment=7511
level=info ts=2022-01-26T08:04:47.203Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=311 maxSegment=7511
level=info ts=2022-01-26T08:04:47.254Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=312 maxSegment=7511
level=info ts=2022-01-26T08:04:47.486Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=313 maxSegment=7511
level=info ts=2022-01-26T08:04:47.511Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=314 maxSegment=7511
level=info ts=2022-01-26T08:04:47.539Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=315 maxSegment=7511
level=info ts=2022-01-26T08:04:47.564Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=316 maxSegment=7511
. . . . . . . . .
level=info ts=2022-01-26T08:05:15.161Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1401 maxSegment=7511
level=info ts=2022-01-26T08:05:15.182Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1402 maxSegment=7511
level=info ts=2022-01-26T08:05:15.205Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1403 maxSegment=7511
level=info ts=2022-01-26T08:05:15.229Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1404 maxSegment=7511
level=info ts=2022-01-26T08:05:15.251Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1405 maxSegment=7511
level=info ts=2022-01-26T08:05:15.274Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1406 maxSegment=7511
level=info ts=2022-01-26T08:05:15.297Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1407 maxSegment=7511
level=info ts=2022-01-26T08:05:15.323Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1408 maxSegment=7511
level=info ts=2022-01-26T08:05:15.349Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1409 maxSegment=7511
level=info ts=2022-01-26T08:05:15.372Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1410 maxSegment=7511
level=info ts=2022-01-26T08:05:15.426Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1411 maxSegment=7511
level=info ts=2022-01-26T08:05:15.452Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1412 maxSegment=7511
level=info ts=2022-01-26T08:05:15.475Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1413 maxSegment=7511
level=info ts=2022-01-26T08:05:15.498Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1414 maxSegment=7511
rpc error: code = NotFound desc = an error occurred when try to find container &quot;ae14079418f59b04bb80d8413e8fdc34f167bfe762317ef674e05466d34c9e1f&quot;: not found
</code></pre> <p>So I deleted the deployment and redeployed to the same storage account; then I got a new error:</p> <pre><code>level=info ts=2022-01-26T11:10:11.530Z caller=main.go:418 msg=&quot;Starting Prometheus&quot; version=&quot;(version=2.26.0, branch=HEAD, revision=3cafc58827d1ebd1a67749f88be4218f0bab3d8d)&quot;
level=info ts=2022-01-26T11:10:11.534Z caller=main.go:423 build_context=&quot;(go=go1.16.2, user=root@a67cafebe6d0, date=20210331-11:56:23)&quot;
level=info ts=2022-01-26T11:10:11.535Z caller=main.go:424 host_details=&quot;(Linux 5.4.0-1064-azure #67~18.04.1-Ubuntu SMP Wed Nov 10 11:38:21 UTC 2021 x86_64 prometheus-6b9d9d54f4-wnmzh (none))&quot;
level=info ts=2022-01-26T11:10:11.536Z caller=main.go:425 fd_limits=&quot;(soft=1048576, hard=1048576)&quot;
level=info ts=2022-01-26T11:10:11.536Z caller=main.go:426 vm_limits=&quot;(soft=unlimited, hard=unlimited)&quot;
level=info ts=2022-01-26T11:10:14.168Z caller=web.go:540 component=web msg=&quot;Start listening for connections&quot; address=0.0.0.0:9090
level=info ts=2022-01-26T11:10:15.385Z caller=main.go:795 msg=&quot;Starting TSDB ...&quot;
level=info ts=2022-01-26T11:10:16.022Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641837600024 maxt=1641902400000 ulid=01FS51ANKBFTVNRPZ68FGQQ5GA
level=info ts=2022-01-26T11:10:16.309Z caller=tls_config.go:191 component=web msg=&quot;TLS is disabled.&quot; http2=false
level=info ts=2022-01-26T11:10:16.494Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641902400005 maxt=1641967200000 ulid=01FS6Z46FGXN932K7D39D9166D
level=info ts=2022-01-26T11:10:16.806Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641967200106 maxt=1642032000000 ulid=01FS8WXRJ7Q80FKD4C8EJNR0AD
level=info ts=2022-01-26T11:10:17.011Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642032000003 maxt=1642096800000 ulid=01FSATQE1VMNR101KRW1X10Q75
level=info ts=2022-01-26T11:10:17.305Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642096800206 maxt=1642161600000 ulid=01FSCRGVT1E7562SF7EQN12JBM
level=info ts=2022-01-26T11:10:18.240Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642161600059 maxt=1642226400000 ulid=01FSEPAFP2CX03ANRB7Q1AG514
level=info ts=2022-01-26T11:10:21.046Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642226400051 maxt=1642291200000 ulid=01FSGM3WT0TKR0XW9BD4QSKPQE
level=info ts=2022-01-26T11:10:21.422Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642291200113 maxt=1642356000000 ulid=01FSJHXKHMANW0E6FXDXVM265G
level=info ts=2022-01-26T11:10:22.822Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642356000032 maxt=1642420800000 ulid=01FSMFQ6XJ97VJFKNCYQBVB4DZ
level=info ts=2022-01-26T11:10:23.536Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642420800021 maxt=1642485600000 ulid=01FSPDGM95FDDV2CDWX93BTDCS
level=info ts=2022-01-26T11:10:23.880Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642485600072 maxt=1642550400000 ulid=01FSRBA555RWY4QNP4HD9YKRBM
level=info ts=2022-01-26T11:10:25.021Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642550400031 maxt=1642615200000 ulid=01FST93N3C82K9VS20MKTMGGYC
level=info ts=2022-01-26T11:10:25.713Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642615200014 maxt=1642680000000 ulid=01FSW6X95FRNSN1XJZ2YK0MXW7
level=info ts=2022-01-26T11:10:26.634Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642680000012 maxt=1642744800000 ulid=01FSY4PXA7V1XQHHA3MC35JSWQ
level=info ts=2022-01-26T11:10:27.776Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642744800174 maxt=1642809600000 ulid=01FT02G9XGHPV8GME53ZPMYXE6
level=info ts=2022-01-26T11:10:28.760Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642809600070 maxt=1642874400000 ulid=01FT209WP8AXXVZB1NCSC55ACE
level=info ts=2022-01-26T11:10:29.618Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642874400253 maxt=1642939200000 ulid=01FT3Y3A4H72FFW318RKHEXXGA
level=info ts=2022-01-26T11:10:30.313Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642939200047 maxt=1643004000000 ulid=01FT5VX3YC838QN5VQFAERV1QX
level=info ts=2022-01-26T11:10:30.483Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643004000040 maxt=1643068800000 ulid=01FT7SPHC5EV0SS1R0WT04H9FR
level=info ts=2022-01-26T11:10:30.696Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643068800035 maxt=1643133600000 ulid=01FT9QFZXBZ7EYY2CTE8WXZTB9
level=info ts=2022-01-26T11:10:31.838Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643133600000 maxt=1643155200000 ulid=01FTA574G4M45WX97Z470DQF73
level=info ts=2022-01-26T11:10:33.686Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643176800008 maxt=1643184000000 ulid=01FTASSZCG8V5N2VGAGFBYJBSR
level=info ts=2022-01-26T11:10:36.078Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643184000000 maxt=1643191200000 ulid=01FTB0NP47JW5JCF808QZZ8WZQ
level=info ts=2022-01-26T11:10:36.442Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643155200065 maxt=1643176800000 ulid=01FTB0P9H3H09B2ADD5X1RXFW6
level=info ts=2022-01-26T11:10:40.079Z caller=main.go:668 msg=&quot;Stopping scrape discovery manager...&quot;
level=info ts=2022-01-26T11:10:40.079Z caller=main.go:682 msg=&quot;Stopping notify discovery manager...&quot;
level=info ts=2022-01-26T11:10:40.079Z caller=main.go:704 msg=&quot;Stopping scrape manager...&quot;
level=info ts=2022-01-26T11:10:40.079Z caller=main.go:678 msg=&quot;Notify discovery manager stopped&quot;
level=info ts=2022-01-26T11:10:40.079Z caller=main.go:664 msg=&quot;Scrape discovery manager stopped&quot;
level=info ts=2022-01-26T11:10:40.079Z caller=main.go:698 msg=&quot;Scrape manager stopped&quot;
level=info ts=2022-01-26T11:10:40.080Z caller=manager.go:934 component=&quot;rule manager&quot; msg=&quot;Stopping rule manager...&quot;
level=info ts=2022-01-26T11:10:40.080Z caller=manager.go:944 component=&quot;rule manager&quot; msg=&quot;Rule manager stopped&quot;
level=info ts=2022-01-26T11:10:40.080Z caller=notifier.go:601 component=notifier msg=&quot;Stopping notification manager...&quot;
level=info ts=2022-01-26T11:10:40.080Z caller=main.go:872 msg=&quot;Notifier manager stopped&quot;
level=error ts=2022-01-26T11:10:40.080Z caller=main.go:881 err=&quot;opening storage failed: lock DB directory: resource temporarily unavailable&quot;
</code></pre> <p>The YAML is provided by <strong>Istio</strong>. Below is the deployment YAML file.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: &quot;server&quot;
    app: prometheus
    release: prometheus
    chart: prometheus-14.6.1
    heritage: Helm
  name: prometheus
  namespace: istio-system
spec:
  selector:
    matchLabels:
      component: &quot;server&quot;
      app: prometheus
      release: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        component: &quot;server&quot;
        app: prometheus
        release: prometheus
        chart: prometheus-14.6.1
        heritage: Helm
        sidecar.istio.io/inject: &quot;false&quot;
    spec:
      enableServiceLinks: true
      serviceAccountName: prometheus
      containers:
        - name: prometheus-server-configmap-reload
          image: &quot;jimmidyson/configmap-reload:v0.5.0&quot;
          imagePullPolicy: &quot;IfNotPresent&quot;
          args:
            - --volume-dir=/etc/config
            - --webhook-url=http://127.0.0.1:9090/-/reload
          resources: {}
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
              readOnly: true
        - name: prometheus-server
          image: &quot;prom/prometheus:v2.26.0&quot;
          imagePullPolicy: &quot;IfNotPresent&quot;
          args:
            - --storage.tsdb.retention.time=15d
            - --config.file=/etc/config/prometheus.yml
            - --storage.tsdb.path=/data
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            - --web.enable-lifecycle
          ports:
            - containerPort: 9090
          readinessProbe:
            httpGet:
              path: /-/ready
              port: 9090
            initialDelaySeconds: 0
            periodSeconds: 5
            timeoutSeconds: 4
            failureThreshold: 3
            successThreshold: 1
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: 9090
            initialDelaySeconds: 30
            periodSeconds: 15
            timeoutSeconds: 10
            failureThreshold: 3
            successThreshold: 1
          resources: {}
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: azurefileshare
              mountPath: /data
              subPath: &quot;&quot;
      hostNetwork: false
      dnsPolicy: ClusterFirst
      securityContext:
        fsGroup: 65534
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
      terminationGracePeriodSeconds: 300
      volumes:
        - name: config-volume
          configMap:
            name: prometheus
        - name: azurefileshare
          azureFile:
            secretName: log-storage-secret
            shareName: prometheusfileshare
            readOnly: false
</code></pre> <p><strong>Expected Behavior</strong> When I mount the data to a new container, it should load the data.</p> <p><strong>Actual Behavior</strong> Not able to load the data, or not able to bind the data with the newly created pod when the old pod dies.</p> <p>Please help me out to resolve the issue.</p>
<p><em><strong>Thank you <a href="https://stackoverflow.com/users/3781502/ywh">YwH</a> for your suggestion. Posting this as an answer so it can help other community members if they encounter the same issue in the future.</strong></em></p> <p>As stated in this <a href="https://istio.io/latest/docs/ops/integrations/prometheus/" rel="nofollow noreferrer"><em><strong>document</strong></em></a>, Istio provides a basic sample installation to quickly get Prometheus up and running. It is intended for demonstration only, and is not tuned for performance or security.</p> <blockquote> <p>Note: the Istio configuration is well-suited for small clusters and monitoring over short time horizons; it is not suitable for large-scale meshes or monitoring over a period of days or weeks.</p> </blockquote> <p><em><strong>Solution: Prometheus is a stateful application, better deployed with a StatefulSet, not a Deployment.</strong></em></p> <p>StatefulSets are valuable for applications that require one or more of the following:</p> <ul> <li>Stable, persistent storage.</li> <li>Ordered, graceful deployment and scaling.</li> </ul> <p><em><strong>You can use this <a href="https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/prometheus/manifest/prometheus-statefulset.yaml" rel="nofollow noreferrer"><em><strong>StatefulSet manifest</strong></em></a> for the deployment of the Prometheus container.</strong></em></p>
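<p>For orientation, the shape of the change looks roughly like this — a trimmed sketch, not the full manifest from the link above, and the names and storage size are placeholders:</p>

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: istio-system
spec:
  serviceName: prometheus
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus-server
          image: "prom/prometheus:v2.26.0"
          args:
            - --storage.tsdb.path=/data
          volumeMounts:
            - name: data
              mountPath: /data
  # Each replica gets its own PersistentVolumeClaim, which survives
  # pod restarts and rescheduling.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```

<p>The key part is <code>volumeClaimTemplates</code>: the TSDB directory stays bound to the replica's own claim, so two pods never hold it at the same time — which is exactly what the <code>lock DB directory</code> error indicates.</p>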
<p>I got an alert while configuring the monitoring module using <code>prometheus/kube-prometheus-stack 25.1.0</code>.</p> <p><strong>Alert</strong></p> <pre><code>[FIRING:1] KubeProxyDown - critical
Alert: Target disappeared from Prometheus target discovery. - critical
Description: KubeProxy has disappeared from Prometheus target discovery.
Details:
• alertname: KubeProxyDown
• prometheus: monitoring/prometheus-kube-prometheus-prometheus
• severity: critical
</code></pre> <p>I think it is a new default rule in <code>kube-prometheus-stack 25.x.x</code>. It does not exist in <code>prometheus/kube-prometheus-stack 21.x.x</code>.</p> <p>The same issue happens in both EKS and Minikube.</p> <p><strong>KubeProxyDown</strong> rule</p> <pre><code>alert: KubeProxyDown
expr: absent(up{job=&quot;kube-proxy&quot;} == 1)
for: 15m
labels:
  severity: critical
annotations:
  description: KubeProxy has disappeared from Prometheus target discovery.
  runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeproxydown
  summary: Target disappeared from Prometheus target discovery.
</code></pre> <p>How can I resolve this issue?</p> <p>I would be thankful if anyone could help me.</p>
<p>This is what worked for me in an AWS EKS cluster v1.21:</p> <pre><code>$ kubectl edit cm/kube-proxy-config -n kube-system
---
metricsBindAddress: 127.0.0.1:10249   ### &lt;--- change to 0.0.0.0:10249

$ kubectl delete pod -l k8s-app=kube-proxy -n kube-system
</code></pre> <p>Note, the name of the config map is <code>kube-proxy-config</code>, not <code>kube-proxy</code>.</p>
<p>I have several Windows servers available and would like to set up a Kubernetes cluster on them. Is there a tool or step-by-step instructions on how to do so?</p> <p>What I have tried so far is to install Docker Desktop and enable its Kubernetes feature. That gives me a single-node cluster. However, adding additional nodes to that Docker-Kubernetes cluster (from different Windows hosts) does not seem to be possible: <a href="https://stackoverflow.com/questions/54658194/docker-desktop-kubernetes-add-node">Docker desktop kubernetes add node</a></p> <p>Should I first create a Docker Swarm and then run Kubernetes on that Swarm? Or are there other strategies?</p> <p>I guess that I need to open some ports in the Windows Firewall settings of the hosts? And map those ports to the Docker containers in which Kubernetes will be installed? Which ports?</p> <p>Is there some program that I could install on each Windows host that would help me set up a network with multiple hosts and connect the Kubernetes nodes running inside Docker containers?
Like a &quot;kubeadm for Windows&quot;?</p> <p>Would be great if you could give me some hint on the right direction.</p> <p><strong>Edit</strong>:<br> Related info about installing <strong>kubeadm inside Docker</strong> container:<br> <a href="https://github.com/kubernetes/kubernetes/issues/35712" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/35712</a><br> <a href="https://github.com/kubernetes/kubeadm/issues/17" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/issues/17</a></p> <p>Related question about <strong>Minikube</strong>:<br> <a href="https://stackoverflow.com/questions/51821057/adding-nodes-to-a-windows-minikube-kubernetes-installation-how">Adding nodes to a Windows Minikube Kubernetes Installation - How?</a></p> <p>Info on <strong>kind</strong> (kubernetes in docker) multi-node cluster:<br> <a href="https://dotnetninja.net/2021/03/running-a-multi-node-kubernetes-cluster-on-windows-with-kind/" rel="nofollow noreferrer">https://dotnetninja.net/2021/03/running-a-multi-node-kubernetes-cluster-on-windows-with-kind/</a> (Creates multi-node kubernetes cluster on <strong>single</strong> windows host)<br> Also see:</p> <ul> <li><a href="https://github.com/kubernetes-sigs/kind/issues/2652" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kind/issues/2652</a></li> <li><a href="https://hub.docker.com/r/kindest/node" rel="nofollow noreferrer">https://hub.docker.com/r/kindest/node</a></li> </ul>
<p>You can always refer to the official Kubernetes documentation, which is the right source for this kind of information and the correct way to approach this question.</p> <p>Based on <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/" rel="nofollow noreferrer">Adding Windows nodes</a>, you need two prerequisites:</p> <blockquote> <ul> <li><p>Obtain a Windows Server 2019 license (or higher) in order to configure the Windows node that hosts Windows containers. If you are using VXLAN/Overlay networking you must also have KB4489899 installed.</p> </li> <li><p>A <strong>Linux-based Kubernetes kubeadm cluster</strong> in which you have access to the control plane (<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">see Creating a single control-plane cluster with kubeadm</a>).</p> </li> </ul> </blockquote> <p>The second point is especially important, since all control plane components must run on Linux systems (you could run a Linux VM on one of the servers to host the control plane components, but networking would be much more complicated).</p> <p>Once you have a properly running control plane, there is a <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/#joining-a-windows-worker-node" rel="nofollow noreferrer"><code>kubeadm for windows</code></a> to properly join Windows nodes to the Kubernetes cluster, as well as documentation on <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-windows-nodes/" rel="nofollow noreferrer">how to upgrade Windows nodes</a>.</p> <p>For the firewall and which ports should be open, check <a href="https://kubernetes.io/docs/reference/ports-and-protocols/" rel="nofollow noreferrer">ports and protocols</a>.</p> <p>For worker nodes (which will be your Windows nodes):</p> <pre><code>Protocol   Direction   Port Range    Purpose             Used By
TCP        Inbound     10250         Kubelet API         Self, Control plane
TCP        Inbound     30000-32767   NodePort Services   All
</code></pre> <hr /> <p>Another option is running Windows nodes in cloud-managed Kubernetes, for example <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster-windows" rel="nofollow noreferrer">GKE with a Windows node pool</a> (yes, I understand that it's not your use case, but for further reference).</p>
<p>I am connecting to a pod via client-go and I want to get the properties of a file directory:</p> <pre><code>func GetPodFiles(c *gin.Context) {
    client, _ := Init.ClusterID(c)
    path := c.DefaultQuery(&quot;path&quot;, &quot;/&quot;)
    cmd := []string{
        &quot;sh&quot;,
        &quot;-c&quot;,
        fmt.Sprintf(&quot;ls -l %s&quot;, path),
    }
    config, _ := Init.ClusterCfg(c)
    req := client.CoreV1().RESTClient().Post().
        Resource(&quot;pods&quot;).
        Name(&quot;nacos-0&quot;).
        Namespace(&quot;default&quot;).SubResource(&quot;exec&quot;).Param(&quot;container&quot;, &quot;nacos&quot;)
    req.VersionedParams(
        &amp;v1.PodExecOptions{
            Command: cmd,
            Stdin:   false,
            Stdout:  true,
            Stderr:  true,
            TTY:     false,
        },
        scheme.ParameterCodec,
    )
    var stdout, stderr bytes.Buffer
    exec, err := remotecommand.NewSPDYExecutor(config, &quot;POST&quot;, req.URL())
    if err != nil {
        response.FailWithMessage(response.InternalServerError, err.Error(), c)
        return
    }
    err = exec.Stream(remotecommand.StreamOptions{
        Stdin:  nil,
        Stdout: &amp;stdout,
        Stderr: &amp;stderr,
    })
    if err != nil {
        response.FailWithMessage(response.InternalServerError, &quot;Error obtaining file&quot;, c)
        return
    }
    fmt.Println(stdout.String())
}
</code></pre> <p>Execution result output:</p> <pre><code>total 0
lrwxrwxrwx   1 root root    7 Jun  1  2018 bin -&gt; usr/bin
drwxr-xr-x   5 root root  360 Feb 16 16:39 dev
lrwxrwxrwx   1 root root    8 Jun  1  2018 sbin -&gt; usr/sbin
drwxr-xr-x   2 root root    6 Apr 11  2018 srv
</code></pre> <p>Expected result:</p> <pre><code>&quot;data&quot;: [
    {
        &quot;perm&quot;: &quot;drwxr-xr-x&quot;,
        &quot;mod_time&quot;: &quot;2022-03-02 15:02:15&quot;,
        &quot;kind&quot;: &quot;d&quot;,
        &quot;name&quot;: &quot;temp&quot;,
        &quot;size&quot;: &quot;&quot;
    },
]
</code></pre> <p>Is there a good way, or a Golang third-party library, to handle this? Please let me know. Thank you.</p>
<p>In a Kubernetes pod you can execute the <code>stat</code> Linux command instead of the <code>ls</code> command.</p> <pre><code>$ stat yourFileOrDirName
</code></pre> <p>The output of this command by default looks like this:</p> <pre class="lang-yaml prettyprint-override"><code>  File: yourFileOrDirName
  Size: 346             Blocks: 0          IO Block: 4096   directory
Device: 51h/82d         Inode: 40431      Links: 1
Access: (0755/drwxr-xr-x)  Uid: ( 1000/ username)   Gid: ( 1000/ groupname)
Access: 2022-03-02 11:59:07.384821351 +0100
Modify: 2022-03-02 11:58:48.733821177 +0100
Change: 2022-03-02 11:58:48.733821177 +0100
 Birth: 2021-12-21 11:12:05.571841723 +0100
</code></pre> <p>But you can tweak its output like this:</p> <pre><code>$ stat --printf=&quot;%n,%A,%y,%s&quot; yourFileOrDirName
</code></pre> <p>where <code>%n</code> - file name, <code>%A</code> - permission bits and file type in human-readable form, <code>%y</code> - time of last data modification, human-readable, <code>%s</code> - total size in bytes. You can also choose any character as a delimiter instead of a comma.</p> <p>The output will be:</p> <pre class="lang-yaml prettyprint-override"><code>yourFileOrDirName,drwxr-xr-x,2022-03-02 11:58:48.733821177 +0100,346
</code></pre> <p>See more info about the <code>stat</code> command <a href="https://man7.org/linux/man-pages/man1/stat.1.html" rel="nofollow noreferrer">here</a>.</p> <p>After you get such output, I believe you can easily 'convert' it to JSON format if you really need it.</p> <p><strong>Furthermore</strong>, you can run the <code>stat</code> command like this:</p> <pre><code>$ stat --printf=&quot;{\&quot;data\&quot;:[{\&quot;name\&quot;:\&quot;%n\&quot;,\&quot;perm\&quot;:\&quot;%A\&quot;,\&quot;mod_time\&quot;:\&quot;%y\&quot;,\&quot;size\&quot;:\&quot;%s\&quot;}]}&quot; yourFileOrDirName
</code></pre> <p>Or, as @mdaniel suggested, since the command does not contain any shell variables, nor a <code>'</code>, the cleaner command is:</p> <pre><code>stat --printf='{&quot;data&quot;:[{&quot;name&quot;:&quot;%n&quot;,&quot;perm&quot;:&quot;%A&quot;,&quot;mod_time&quot;:&quot;%y&quot;,&quot;size&quot;:&quot;%s&quot;}]}' yourFileOrDirName
</code></pre> <p>and get the DIY JSON output:</p> <pre class="lang-json prettyprint-override"><code>{&quot;data&quot;:[{&quot;name&quot;:&quot;yourFileOrDirName&quot;,&quot;perm&quot;:&quot;drwxrwxr-x&quot;,&quot;mod_time&quot;:&quot;2022-02-04 15:17:27.000000000 +0000&quot;,&quot;size&quot;:&quot;4096&quot;}]}</code></pre>
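<p>To get from the delimited <code>stat</code> output to the JSON shape in the question, a small parser on the Go side is enough. The sketch below is illustrative: <code>FileInfo</code> and <code>parseStatLine</code> are names I made up, and it assumes the <code>%n,%A,%y,%s</code> field order with one file per line and no commas in file names:</p>

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// FileInfo mirrors the JSON shape from the question.
type FileInfo struct {
	Perm    string `json:"perm"`
	ModTime string `json:"mod_time"`
	Kind    string `json:"kind"`
	Name    string `json:"name"`
	Size    string `json:"size"`
}

// parseStatLine parses one line of `stat --printf="%n,%A,%y,%s\n"` output.
// It assumes the file name itself contains no comma.
func parseStatLine(line string) (FileInfo, error) {
	parts := strings.SplitN(strings.TrimSpace(line), ",", 4)
	if len(parts) != 4 || len(parts[1]) == 0 {
		return FileInfo{}, fmt.Errorf("unexpected stat output: %q", line)
	}
	perm := parts[1]
	return FileInfo{
		Perm:    perm,
		ModTime: parts[2],
		Kind:    perm[:1], // first char of the mode string: "d", "-", "l", ...
		Name:    parts[0],
		Size:    parts[3],
	}, nil
}

func main() {
	// In the handler, stdout.String() from exec.Stream would be fed in here.
	out := "temp,drwxr-xr-x,2022-03-02 15:02:15.000000000 +0100,4096\n"
	var files []FileInfo
	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
		if fi, err := parseStatLine(line); err == nil {
			files = append(files, fi)
		}
	}
	b, _ := json.Marshal(map[string][]FileInfo{"data": files})
	fmt.Println(string(b))
}
```

<p>To list a whole directory in one exec round trip, you could run something like <code>stat --printf='%n,%A,%y,%s\n' dir/* dir/.[!.]*</code> inside the pod and feed each output line through the parser.</p>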
<p>We are working with Akka and have an application that uses Akka Cluster, which we deploy to an AWS EKS environment. When we run a load test against the application, we observe heartbeat intervals between the components growing large and pods getting restarted. The Akka documentation recommends not using a CPU resource limit, and removing the resource limit does solve the issue.</p> <p>But is there any other way of getting around this? We are not sure whether removing the resource limits is a good practice when deploying applications.</p> <p><a href="https://doc.akka.io/docs/akka/current/additional/deploying.html?_ga=2.206580627.186827534.1646216705-1504733962.1642433119#resource-limits" rel="nofollow noreferrer">Resource limits</a></p> <blockquote> <p>To avoid CFS scheduler limits, it is best not to use resources.limits.cpu limits, but use resources.requests.cpu configuration instead.</p> </blockquote>
<p>There are two basic flavors of CPU limiting that Kubernetes supports: CFS and shares.</p> <p>CFS places a hard limit on the host CPU time that a given container (or more properly the cgroup associated with the container) can consume: typically X milliseconds in every 100 milliseconds. Once the container has had X milliseconds it cannot run for the remainder of the 100 millisecond slice.</p> <p>Shares instead proportionally allocates CPU limits. If the containers on a host have a total share of 4000, a container with a share of 1500 will get 1500/4000 = 37.5% of the total CPU capacity. Since Kubernetes typically determines the share value based on millicores (1.0 CPU =&gt; 1000 share), in our 4000-total-share example, if the host has 8 cores the container with a 1500 share (1.5 CPU) would actually get the equivalent number of clock cycles as 3 CPU (or more if other containers aren't using their allocations). If the total shares were 8000, that container would get the equivalent of 1.5 CPU (Kubernetes in turn will ensure that the total share doesn't go above 8000).</p> <p>So under full load, there's not really a difference between the CFS limit approach and the share approach. When the host isn't fully used, CFS will have idle CPU while shares will allow containers to have more CPU. If the workloads running on the host are bursty (especially at shorter timescales), the share approach will be very effective.</p> <p>For steadier workloads (think IoT more than online retailing), the fact that you're seeing this is indicative of an issue:</p> <ul> <li>You might be underprovisioning CPU.</li> <li>It's possible that you're seeing thread starvation, especially in the dispatchers related to clustering/remoting. A thread starvation detector (e.g. the one available with a Lightbend subscription (disclaimer: I am employed by Lightbend, but can endorse its utility from when I was a client)) can help diagnose this.
In Akka 2.6, clustering actors run in a separate dispatcher from application actors, so this is less likely to be an issue; if running pre-2.6, you can have clustering use a separate dispatcher by adjusting <code>akka.cluster.use-dispatcher</code>. It is also probably worth looking for actors in your application which take a long time to process messages and moving those actors to other dispatchers.</li> <li>JVM garbage collectors all have situations where they can become stop-the-world and typically the duration of the pause is correlated with the size of the heap: more instances with smaller heaps can help here.</li> <li>Finally, if the constraints of the application and how you're deploying it are such that heartbeat arrivals are going to be volatile, you can adjust the failure detector to be less eager to declare a node failed. The primary knobs to twiddle here are <code>akka.cluster.failure-detector.threshold</code> and <code>akka.cluster.failure-detector.acceptable-heartbeat-pause</code>. Note that the former defaults to 8.0 which is considered appropriate for a LAN environment with physical hardware. Every layer of virtualization and sharing of network/compute infrastructure with workloads you don't control warrants increasing the <code>threshold</code>: I've seen 12.0 recommended for EC2 deploys and given that EKS and such gives you even less control, I'd consider something like 13.0. Likewise, my experience has been that <code>acceptable-heartbeat-pause</code> should be about 500ms per GB of Java heap. Note that doing anything to make the failure detector less eager implies being slower to respond to a node being down.</li> </ul>
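<p>As a concrete sketch, the failure-detector loosening described above might look like this in <code>application.conf</code>. The values follow the rules of thumb just mentioned — a threshold around 13.0 for EKS and roughly 500 ms of acceptable pause per GB of heap, here assuming a ~10 GB heap — so treat them as starting points, not recommendations:</p>

```hocon
akka.cluster.failure-detector {
  # Default is 8.0, appropriate for physical hardware on a LAN.
  # Raise it for virtualized, shared infrastructure such as EKS.
  threshold = 13.0

  # Rule of thumb: ~500 ms per GB of Java heap (here: ~10 GB heap).
  acceptable-heartbeat-pause = 5s
}
```

<p>Remember the trade-off: a less eager failure detector means the cluster is slower to notice a node that really is down.</p>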
<p>I made an app for video and audio transcoding using Node.js, PostgreSQL, RabbitMQ, EC2 with EBS, and Kubernetes.<br /> I am a little bit worried about storage size and computing power. Computing power is easy to solve, but storage is more problematic. The best would be a solution that automatically scales storage size, but I am not sure if Kubernetes supports that.</p> <p>Right now I am thinking of swapping EC2 for bare metal with Hadoop HDFS, but I feel that is overkill.</p> <p>The second idea is to buy a few EC2 instances and attach its own EBS volume to each instance, but again I am not sure if Kubernetes supports such a thing; in the end, every EC2 instance would need access to the other instances' EBS storage.</p> <p>What do you think? Maybe there is an easier and cheaper way.</p>
<p>One thing you can consider, since you are on AWS, is to use EFS (Elastic File System) as a volume type. By adding the EFS provisioner and an EFS storage class to your cluster, you can make use of the dynamic sizing that EFS gives you: you won't have to worry about running out of space.</p> <p>You can check out the EFS provisioner <a href="https://github.com/kubernetes-retired/external-storage/tree/master/aws/efs" rel="nofollow noreferrer">here</a>!</p> <p>I have been using this storage class for other applications that are not too intense on reads/writes and have had a lot of success with it! You can also mount the volume across multiple AZs, so you get HA (high availability) as a plus!</p>
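<p>To illustrate, consuming the provisioner looks roughly like this — treat the class and provisioner names as placeholders, since they depend on how you deploy the provisioner:</p>

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs   # must match the provisioner's configured name
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: transcoding-data
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany   # EFS volumes can be mounted by pods on multiple nodes/AZs
  resources:
    requests:
      storage: 100Gi  # EFS is elastic; this figure is mostly nominal
```

<p>Because EFS is elastic, the <code>storage</code> request is mostly nominal; the file system simply grows as your transcoding output grows.</p>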
<p>I want to expose a few webapps in EKS to the internet in a centrally managed, secure way.</p> <p>In AWS, using an ALB is nice, as it for example allows you to terminate TLS and add authentication using Cognito (<a href="https://aws.amazon.com/blogs/containers/how-to-use-application-load-balancer-and-amazon-cognito-to-authenticate-users-for-your-kubernetes-web-apps/" rel="nofollow noreferrer">see here</a>).</p> <p>To provision an ALB and connect it to the application there is the <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller" rel="nofollow noreferrer">aws-load-balancer-controller</a>. It works fine, but it requires configuring a new ALB for each and every app/ingress:</p> <pre><code>  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=test,Project=cognito
    external-dns.alpha.kubernetes.io/hostname: sample.${COK_MY_DOMAIN}
    alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;: 80}, {&quot;HTTPS&quot;:443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{&quot;Type&quot;: &quot;redirect&quot;, &quot;RedirectConfig&quot;: { &quot;Protocol&quot;: &quot;HTTPS&quot;, &quot;Port&quot;: &quot;443&quot;, &quot;StatusCode&quot;: &quot;HTTP_301&quot;}}'
    alb.ingress.kubernetes.io/auth-type: cognito
    alb.ingress.kubernetes.io/auth-scope: openid
    alb.ingress.kubernetes.io/auth-session-timeout: '3600'
    alb.ingress.kubernetes.io/auth-session-cookie: AWSELBAuthSessionCookie
    alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
    alb.ingress.kubernetes.io/auth-idp-cognito: '{&quot;UserPoolArn&quot;: &quot;$(aws cognito-idp describe-user-pool --user-pool-id $COK_COGNITO_USER_POOL_ID --region $COK_AWS_REGION --query 'UserPool.Arn' --output text)&quot;,&quot;UserPoolClientId&quot;:&quot;${COK_COGNITO_USER_POOL_CLIENT_ID}&quot;,&quot;UserPoolDomain&quot;:&quot;${COK_COGNITO_DOMAIN}.auth.${COK_AWS_REGION}.amazoncognito.com&quot;}'
    alb.ingress.kubernetes.io/certificate-arn: $COK_ACM_CERT_ARN
    alb.ingress.kubernetes.io/target-type: 'ip'
</code></pre> <p>I would love to have one central, well-defined ALB so that the applications do not need to care about this anymore.</p> <p>My idea was to have a regular <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx-ingress-controller</a> and expose it via a central ALB.</p> <p>Now the question is: <strong>How do I connect the ALB to the nginx controller?</strong></p> <p>One way would be manually configuring the ALB and building the target group by hand, which does not feel like a stable solution.</p> <p>Another way would be using the aws-load-balancer-controller to connect nginx. In that case, however, nginx does not seem to be able to publish the correct load balancer address, and external-dns will enter the wrong DNS records. (Unfortunately there seems to be no <a href="https://github.com/kubernetes/ingress-nginx/issues/5231" rel="nofollow noreferrer">--publish-ingress</a> option in the usual ingress controllers like nginx or traefik.)</p> <p><strong>Question:</strong></p> <ul> <li>Is there a way to make the nginx-ingress-controller publish the correct address?</li> <li>Is there maybe an easier way than combining two ingress controllers?</li> </ul>
<p>I think I found a good solution.</p> <p>I set up my environment using terraform. After I set up the alb ingress controller, I can create a suitable ingress object, wait until the ALB is up, use terraform to extract the address of the ALB and use <code>publish-status-address</code> to tell nginx to publish exactly that address to all its ingresses:</p> <pre><code>resource &quot;kubernetes_ingress_v1&quot; &quot;alb&quot; { wait_for_load_balancer = true metadata { name = &quot;alb&quot; namespace = &quot;kube-system&quot; annotations = { &quot;alb.ingress.kubernetes.io/scheme&quot; = &quot;internet-facing&quot; &quot;alb.ingress.kubernetes.io/listen-ports&quot; = &quot;[{\&quot;HTTP\&quot;: 80}, {\&quot;HTTPS\&quot;:443}]&quot; &quot;alb.ingress.kubernetes.io/ssl-redirect&quot; = &quot;443&quot; &quot;alb.ingress.kubernetes.io/certificate-arn&quot; = local.cert &quot;alb.ingress.kubernetes.io/target-type&quot; = &quot;ip&quot; } } spec { ingress_class_name = &quot;alb&quot; default_backend { service { name = &quot;ing-nginx-ingress-nginx-controller&quot; port { name = &quot;http&quot; } } } } } resource &quot;helm_release&quot; &quot;ing-nginx&quot; { name = &quot;ing-nginx&quot; repository = &quot;https://kubernetes.github.io/ingress-nginx&quot; chart = &quot;ingress-nginx&quot; namespace = &quot;kube-system&quot; set { name = &quot;controller.service.type&quot; value = &quot;ClusterIP&quot; } set { name = &quot;controller.publishService.enabled&quot; value = &quot;false&quot; } set { name = &quot;controller.extraArgs.publish-status-address&quot; value = kubernetes_ingress_v1.alb.status.0.load_balancer.0.ingress.0.hostname } set { name = &quot;controller.config.use-forwarded-headers&quot; value = &quot;true&quot; } set { name = &quot;controller.ingressClassResource.default&quot; value = &quot;true&quot; } } </code></pre> <p>It is a bit weird, as it introduces something like a circular dependency, but the ingress simply waits until nginx is finally up and all is 
well.</p> <p>This solution is not exactly the same as the <a href="https://github.com/kubernetes/ingress-nginx/issues/5231" rel="nofollow noreferrer">--publish-ingress</a> option, as it will not be able to adapt to any changes of the ALB address. Luckily I don't expect that address to change, so I'm fine with this solution.</p>
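<p>A quick way to sanity-check that nginx really republishes the ALB hostname (so external-dns picks up the right address) is to read the status field of the ingresses it manages. This is just a verification sketch, assuming <code>kubectl</code> access to the cluster:</p>

```shell
# List every ingress together with the address its controller published.
# With the setup above, nginx-managed ingresses should all show the ALB DNS name.
kubectl get ingress -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,ADDRESS:.status.loadBalancer.ingress[0].hostname
```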
<p>Creating jobs like the following, for example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: test-job-sebas
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;]
      restartPolicy: Never
  backoffLimit: 4
</code></pre> <p>results in the job resource being created, but no pod or event is observed. Pod statuses are as follows:</p> <p><code>Pods Statuses: 1 Running / 0 Succeeded / 0 Failed</code></p> <p>The only event visible is the notification of a successful pod creation. The problem is that this message appears only after 30 minutes of complete apparent silence.</p> <p><code>Normal SuccessfulCreate 21m job-controller Created pod: test-job-sebas-882bh</code></p> <p>From the time we can observe the <code>kube-apiserver</code> log allowing the &quot;create&quot; verb for the Job resource, we are unable to spot any other log in any of the other pods (controllers/schedulers/apiserver) that contains the text &quot;test-job-sebas&quot;, until ~30 minutes later, when the <code>kube-controller-manager</code> logs the following:</p> <pre><code>Event occurred&quot; object=&quot;test-namespace/job-test-01&quot; kind=&quot;Job&quot; apiVersion=&quot;batch/v1&quot; type=&quot;Normal&quot; reason=&quot;SuccessfulCreate&quot; message=&quot;Created pod: test-job-sebas-882bh&quot;
</code></pre> <p>This happens with any Job in this cluster, no matter the namespace or the nature of the Job, whether it comes from a CronJob or is explicitly created like the one provided in the example here.</p> <p>Looking at the code does not reveal any obvious suspect for what could be happening: <a href="https://github.com/kubernetes/kubernetes/blob/b5b0cc8bb88fb678c9b065c8da4f4c06a155a628/pkg/controller/job/job_controller.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/b5b0cc8bb88fb678c9b065c8da4f4c06a155a628/pkg/controller/job/job_controller.go</a></p> <p>edit: We currently have ~15,000 jobs in the cluster, and it seems that most of them are active, all from a single namespace. This leads us to think that we are hitting some limit or causing some sort of saturation... but we can't confirm this from any of the visible data.</p>
<p>This sounds very similar to what I encountered when we had a misbehaving webhook.</p> <p>If you have a massive number of jobs all showing as active, but no pods appearing, or pods taking a long time to appear, then that's a sign of an admission webhook interfering with pod creation. If a CronJob is affected, you will get a &quot;snowball&quot; effect:</p> <p>Writeup: <a href="https://blenderfox.com/2020/08/07/the-snowball-effect-in-kubernetes/" rel="nofollow noreferrer">https://blenderfox.com/2020/08/07/the-snowball-effect-in-kubernetes/</a></p> <p>Kubernetes Issue: <a href="https://github.com/kubernetes/kubernetes/issues/93783" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/93783</a></p> <p>As for fixing your issue, you need to find out what is interfering with the creation (in our case, an up9 webhook was misbehaving; disabling it allowed the pods to be created).</p>
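<p>To hunt for the interfering webhook, listing the admission webhook configurations and their failure policies is a good starting point. A sketch, assuming <code>kubectl</code> access; the webhook name is a placeholder:</p>

```shell
# List all admission webhooks registered in the cluster.
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations

# A webhook with failurePolicy "Fail" blocks pod creation whenever its backend is down.
kubectl get mutatingwebhookconfiguration <suspect-webhook> \
  -o jsonpath='{range .webhooks[*]}{.name}{": "}{.failurePolicy}{"\n"}{end}'
```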
<p>I am using official Helm chart for airflow. Every Pod works properly <strong>except Worker node</strong>.</p> <p>Even in that worker node, 2 of the containers (git-sync and worker-log-groomer) works fine.</p> <p>The error happened in the 3rd container (worker) with CrashLoopBackOff. Exit code status as 137 OOMkilled.</p> <p>In my openshift, memory usage is showing to be at 70%.</p> <p>Although this error comes because of memory leak. This doesn't happen to be the case for this one. Please help, I have been going on in this one for a week now.</p> <p><strong>Kubectl describe pod airflow-worker-0</strong> -&gt;</p> <pre><code>worker: Container ID: &lt;&gt; Image: &lt;&gt; Image ID: &lt;&gt; Port: &lt;&gt; Host Port: &lt;&gt; Args: bash -c exec \ airflow celery worker State: Running Started: &lt;&gt; Last State: Terminated Reason: OOMKilled Exit Code: 137 Started: &lt;&gt; Finished: &lt;&gt; Ready: True Restart Count: 3 Limits: ephemeral-storage: 30G memory: 1Gi Requests: cpu: 50m ephemeral-storage: 100M memory: 409Mi Environment: DUMB_INIT_SETSID: 0 AIRFLOW__CORE__FERNET_KEY: &lt;&gt; Optional: false Mounts: &lt;&gt; git-sync: Container ID: &lt;&gt; Image: &lt;&gt; Image ID: &lt;&gt; Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: &lt;&gt; Ready: True Restart Count: 0 Limits: ephemeral-storage: 30G memory: 1Gi Requests: cpu: 50m ephemeral-storage: 100M memory: 409Mi Environment: GIT_SYNC_REV: HEAD Mounts: &lt;&gt; worker-log-groomer: Container ID: &lt;&gt; Image: &lt;&gt; Image ID: &lt;&gt; Port: &lt;none&gt; Host Port: &lt;none&gt; Args: bash /clean-logs State: Running Started: &lt;&gt; Ready: True Restart Count: 0 Limits: ephemeral-storage: 30G memory: 1Gi Requests: cpu: 50m ephemeral-storage: 100M memory: 409Mi Environment: AIRFLOW__LOG_RETENTION_DAYS: 5 Mounts: &lt;&gt; </code></pre> <p>I am pretty much sure you know the answer. Read all your articles on airflow. 
Thank you :) <a href="https://stackoverflow.com/users/1376561/marc-lamberti">https://stackoverflow.com/users/1376561/marc-lamberti</a></p>
<p>The issue occurs due to the <strong>limit</strong> placed in &quot;<strong>resources</strong>&quot; under the Helm chart's values.yaml for any of the pods.</p> <p>By <strong>default</strong> it is:</p> <pre><code>resources: {}
</code></pre> <p>but this causes an issue, as pods can then access unlimited memory as required.</p> <p>By changing it to:</p> <pre><code>resources:
  limits:
    cpu: 200m
    memory: 2Gi
  requests:
    cpu: 100m
    memory: 512Mi
</code></pre> <p>the pod is clear on how much it can access and request. This solved my issue.</p>
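<p>If you prefer not to edit values.yaml directly, the same limits can be applied on the command line. This is only a sketch: the release name, repo alias, and the <code>workers.resources</code> values key are assumptions about the official Apache Airflow chart, so check your chart's values.yaml first:</p>

```shell
helm upgrade airflow apache-airflow/airflow \
  --set workers.resources.requests.memory=512Mi \
  --set workers.resources.limits.memory=2Gi
```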
<p>I need to retrieve SA token using output in my pipeline, i found an solution in here</p> <p><a href="https://stackoverflow.com/questions/56080359/retrieve-token-data-from-kubernetes-service-account-in-terraform">Retrieve token data from Kubernetes Service Account in Terraform</a></p> <p>but still dont work and get this error:</p> <pre><code>│ Error: Invalid function argument │ │ on access.tf line 51, in output &quot;deploy_user_token&quot;: │ 51: value = lookup(data.kubernetes_secret.deploy_user_secret.data, &quot;token&quot;) │ ├──────────────── │ │ data.kubernetes_secret.deploy_user_secret.data has a sensitive value │ │ Invalid value for &quot;inputMap&quot; parameter: argument must not be null. </code></pre> <p>My code:</p> <pre><code>resource &quot;kubernetes_service_account&quot; &quot;deploy_user&quot; { depends_on = [kubernetes_namespace.namespace] metadata { name = &quot;deploy-user&quot; namespace = var.namespace } } resource &quot;kubernetes_role&quot; &quot;deploy_user_full_access&quot; { metadata { name = &quot;deploy-user-full-access&quot; namespace = var.namespace } rule { api_groups = [&quot;&quot;, &quot;extensions&quot;, &quot;apps&quot;, &quot;networking.istio.io&quot;] resources = [&quot;*&quot;] verbs = [&quot;*&quot;] } rule { api_groups = [&quot;batch&quot;] resources = [&quot;jobs&quot;, &quot;cronjobs&quot;] verbs = [&quot;*&quot;] } } resource &quot;kubernetes_role_binding&quot; &quot;deploy_user_view&quot; { metadata { name = &quot;deploy-user-view&quot; namespace = var.namespace } role_ref { api_group = &quot;rbac.authorization.k8s.io&quot; kind = &quot;Role&quot; name = kubernetes_role.deploy_user_full_access.metadata.0.name } subject { kind = &quot;ServiceAccount&quot; name = kubernetes_service_account.deploy_user.metadata.0.name namespace = var.namespace } } data &quot;kubernetes_secret&quot; &quot;deploy_user_secret&quot; { metadata { name = kubernetes_service_account.deploy_user.default_secret_name } } output 
&quot;deploy_user_token&quot; { value = lookup(data.kubernetes_secret.deploy_user_secret.data, &quot;token&quot;) } </code></pre> <p>Does someone have an idea of what I am doing wrong?</p> <p>Thanks!</p>
<p>It seems that you are missing the namespace declaration on your data object; you need it to look like this:</p> <pre><code>data &quot;kubernetes_secret&quot; &quot;deploy_user_secret&quot; {
  metadata {
    name      = kubernetes_service_account.deploy_user.default_secret_name
    namespace = var.namespace
  }
}
</code></pre> <p>You also need to set <code>sensitive = true</code> on your output:</p> <pre><code>output &quot;deploy_user_token&quot; {
  sensitive = true
  value     = lookup(data.kubernetes_secret.deploy_user_secret.data, &quot;token&quot;)
}
</code></pre>
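<p>One caveat: on Kubernetes 1.24+ service accounts no longer get a token secret created automatically, so <code>default_secret_name</code> comes back empty. In that case you can create the token secret yourself and read it back. This is a sketch under that assumption — the secret name is made up, and it relies on the <code>kubernetes_secret_v1</code> resource of the hashicorp/kubernetes provider:</p>

```hcl
resource "kubernetes_secret_v1" "deploy_user_token" {
  metadata {
    name      = "deploy-user-token"
    namespace = var.namespace
    annotations = {
      # Tells the control plane to populate this secret with a token
      # for the referenced service account.
      "kubernetes.io/service-account.name" = kubernetes_service_account.deploy_user.metadata.0.name
    }
  }
  type                           = "kubernetes.io/service-account-token"
  wait_for_service_account_token = true
}

# A lookup-free output for 1.24+ clusters.
output "deploy_user_token_124" {
  sensitive = true
  value     = kubernetes_secret_v1.deploy_user_token.data["token"]
}
```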
<p>In Terraform I wrote a resource that deploys to AKS. I want to apply the terraform changes multiple times, but don't want to have the error below. The system automatically needs to detect whether the resource already exists / is identical. Currently it shows me 'already exists', but I don't want it to fail. Any suggestions how I can fix this issue?</p> <pre><code>│ Error: services &quot;azure-vote-back&quot; already exists │ │ with kubernetes_service.example2, │ on main.tf line 91, in resource &quot;kubernetes_service&quot; &quot;example2&quot;: │ 91: resource &quot;kubernetes_service&quot; &quot;example2&quot; { </code></pre> <pre class="lang-json prettyprint-override"><code>provider &quot;azurerm&quot; { features {} } data &quot;azurerm_kubernetes_cluster&quot; &quot;aks&quot; { name = &quot;kubernetescluster&quot; resource_group_name = &quot;myResourceGroup&quot; } provider &quot;kubernetes&quot; { host = data.azurerm_kubernetes_cluster.aks.kube_config[0].host client_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate) client_key = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.client_key) cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate) } resource &quot;kubernetes_namespace&quot; &quot;azurevote&quot; { metadata { annotations = { name = &quot;azurevote-annotation&quot; } labels = { mylabel = &quot;azurevote-value&quot; } name = &quot;azurevote&quot; } } resource &quot;kubernetes_service&quot; &quot;example&quot; { metadata { name = &quot;azure-vote-front&quot; } spec { selector = { app = kubernetes_pod.example.metadata.0.labels.app } session_affinity = &quot;ClientIP&quot; port { port = 80 target_port = 80 } type = &quot;LoadBalancer&quot; } } resource &quot;kubernetes_pod&quot; &quot;example&quot; { metadata { name = &quot;azure-vote-front&quot; labels = { app = &quot;azure-vote-front&quot; } } spec { container { image = 
&quot;mcr.microsoft.com/azuredocs/azure-vote-front:v1&quot; name = &quot;front&quot; env { name = &quot;REDIS&quot; value = &quot;azure-vote-back&quot; } } } } resource &quot;kubernetes_pod&quot; &quot;example2&quot; { metadata { name = &quot;azure-vote-back&quot; namespace = &quot;azure-vote&quot; labels = { app = &quot;azure-vote-back&quot; } } spec { container { image = &quot;mcr.microsoft.com/oss/bitnami/redis:6.0.8&quot; name = &quot;back&quot; env { name = &quot;ALLOW_EMPTY_PASSWORD&quot; value = &quot;yes&quot; } } } } resource &quot;kubernetes_service&quot; &quot;example2&quot; { metadata { name = &quot;azure-vote-back&quot; namespace = &quot;azure-vote&quot; } spec { selector = { app = kubernetes_pod.example2.metadata.0.labels.app } session_affinity = &quot;ClientIP&quot; port { port = 6379 target_port = 6379 } type = &quot;ClusterIP&quot; } } </code></pre>
<p>That's the ugly thing about deploying things inside Kubernetes with Terraform... you will meet these nice errors from time to time, and that's why it is not recommended to do it :/</p> <p>You could try to just <a href="https://www.terraform.io/cli/commands/state/rm" rel="nofollow noreferrer">remove the record from the state file</a>:</p> <p><code>terraform state rm 'kubernetes_service.example2'</code></p> <p>Terraform will no longer track this record, and the good thing is that <strong>it will not be deleted</strong> on the remote system.</p> <p>On the next run, Terraform will then recognise that this resource exists on the remote system and add the record back to the state.</p>
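<p>An alternative to dropping the record is to adopt the existing object into state with <code>terraform import</code>, so Terraform tracks it immediately instead of trying to create it again. A sketch; the import ID format for namespaced resources in the kubernetes provider is <code>namespace/name</code>:</p>

```shell
terraform import kubernetes_service.example2 azure-vote/azure-vote-back
```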
<p>I am new to AKS and trying to set up a cluster and expose it via an Application Gateway ingress controller. While I was able to set up the cluster using az commands and was able to deploy and hit it using HTTP, I am having some challenges enabling HTTPS over 443 in the App Gateway ingress and am looking for some help.</p> <ol> <li>Below is our workflow; I am trying to set up the App Gateway listener on port 443.</li> <li>Below is the k8s manifest we used for enabling the ingress. If I apply it without the ssl cert it works, but if I give the ssl cert I get a 502 bad gateway.</li> <li>The cert is uploaded to KV and the cluster has the KV add-on installed. But I am not sure how to attach this specific KV to the cluster, and whether the cert should be uploaded to the gateway or to Kubernetes.</li> </ol> <p><a href="https://i.stack.imgur.com/vDe9X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vDe9X.png" alt="enter image description here" /></a></p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-web-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: workspace-dev-cluster-cert
    appgw.ingress.kubernetes.io/cookie-based-affinity: &quot;true&quot;
    appgw.ingress.kubernetes.io/request-timeout: &quot;90&quot;
    appgw.ingress.kubernetes.io/backend-path-prefix: &quot;/&quot;
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
</code></pre>
<p>This link can help you with the KV add-on certificate on App GW: <a href="https://azure.github.io/application-gateway-kubernetes-ingress/features/appgw-ssl-certificate/" rel="nofollow noreferrer">https://azure.github.io/application-gateway-kubernetes-ingress/features/appgw-ssl-certificate/</a></p> <p>I use a different configuration to set certs on the App GW.</p> <ol> <li>I'm getting certificates via the <a href="https://akv2k8s.io/" rel="nofollow noreferrer">akv2k8s</a> tool. This creates secrets on the k8s cluster.</li> <li>Then I use those certs in the ingress configuration. Please check the tls definition under spec.</li> </ol> <blockquote> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-web-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: workspace-dev-cluster-cert
    appgw.ingress.kubernetes.io/cookie-based-affinity: &quot;true&quot;
    appgw.ingress.kubernetes.io/request-timeout: &quot;90&quot;
    appgw.ingress.kubernetes.io/backend-path-prefix: &quot;/&quot;
spec:
  tls:
    - hosts:
        - yourdomain.com
      secretName: your-tls-secret-name
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
</code></pre> </blockquote>
<p>I am trying to get the maximum capacity of pods per node. I am running <code>kubectl describe node nodename</code> and trying to grep the pods limit in the capacity section. Any help would be appreciated. The output is like this:</p> <pre><code>Capacity:
  attachable-volumes-azure-disk:  8
  cpu:                            4
  ephemeral-storage:              129900528Ki
  hugepages-1Gi:                  0
  hugepages-2Mi:                  0
  memory:                         16393308Ki
  pods:                           110
</code></pre>
<p>If you want to use just the native options of the kubectl command:</p> <p><code>kubectl get nodes &lt;nodename&gt; -o jsonpath='{.status.capacity.pods}{&quot;\n&quot;}'</code></p> <p>If you don't need a trailing newline character after the output:</p> <p><code>kubectl get nodes &lt;nodename&gt; -o jsonpath='{.status.capacity.pods}'</code></p>
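<p>The same idea extends to all nodes at once, and to <code>allocatable</code> (what the scheduler actually has left after system reservations). A sketch, assuming <code>kubectl</code> access:</p>

```shell
# Pod capacity of every node in one table.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods

# Allocatable pods for a single node.
kubectl get nodes <nodename> -o jsonpath='{.status.allocatable.pods}'
```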
<p>After a <strong>mongodump</strong> I am trying to restore using <strong>mongorestore</strong>.</p> <p>It works locally in seconds. However, when I <strong>kubectl exec -it</strong> into the pod of the primary mongodb node and run the same command, it gets stuck and endlessly repeats the line with the same progress and an updated timestamp (the first and the last line are the same except for the timestamp, so 0 progress). This goes on for about 5 hours, then I get thrown out with an OOM error.</p> <p>I am using mongo:3.6.9</p> <pre><code>2022-03-02T22:56:36.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:39.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:42.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:45.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:48.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:51.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
2022-03-02T22:56:54.043+0000 [#############...........] mydb.users 3.65MB/6.37MB (57.4%)
</code></pre> <p>The same behavior occurs when I do a mongorestore from a restore container specifying all mongo pods like so: <strong>mongorestore --db=mydb --collection=users data/mydb/users.bson --host mongo-0.mongo,mongo-1.mongo,mongo-2.mongo --port 27017</strong></p> <p>Is there anything else I could try?</p>
<p>I found my answer here: <a href="https://stackoverflow.com/a/41352269/18358598">https://stackoverflow.com/a/41352269/18358598</a></p> <p><code>--writeConcern '{w:0}'</code> works.</p>
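<p>Combined with the command from the question, the full invocation would look roughly like this (hosts and paths are the ones from the question, not canonical values):</p>

```shell
mongorestore --db=mydb --collection=users data/mydb/users.bson \
  --host mongo-0.mongo,mongo-1.mongo,mongo-2.mongo --port 27017 \
  --writeConcern '{w:0}'
```

<p>With <code>w:0</code>, mongorestore does not wait for the writes to be acknowledged by the replica set, which avoids the stall, at the cost of not detecting write errors during the restore.</p>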
<p>I want to deploy a MariaDB Galera instance onto a local Minikube cluster with 3 nodes via Helm. I used the following command for that:</p> <pre><code>helm install my-release bitnami/mariadb-galera --set rootUser.password=test --set db.name=test </code></pre> <p>The problem is, if I do that I get the following error in the log:</p> <pre><code>mariadb 10:27:41.60 mariadb 10:27:41.60 Welcome to the Bitnami mariadb-galera container mariadb 10:27:41.60 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb-galera mariadb 10:27:41.60 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb-galera/issues mariadb 10:27:41.61 mariadb 10:27:41.61 INFO ==&gt; ** Starting MariaDB setup ** mariadb 10:27:41.64 INFO ==&gt; Validating settings in MYSQL_*/MARIADB_* env vars mariadb 10:27:41.67 INFO ==&gt; Initializing mariadb database mkdir: cannot create directory '/bitnami/mariadb/data': Permission denied </code></pre> <p>The site of the image lists the possibility to use an extra init container to fix that (<a href="https://artifacthub.io/packages/helm/bitnami/mariadb-galera#extra-init-containers" rel="nofollow noreferrer">Link</a>).</p> <p>So I came up with the following configuration:</p> <p>mariadb-galera-init-config.yaml</p> <pre><code>extraInitContainers: - name: initcontainer image: bitnami/minideb command: [&quot;chown -R 1001:1001 /bitnami/mariadb/&quot;] </code></pre> <p>The problem is that when I run the command with this configuration:</p> <pre><code>helm install my-release bitnami/mariadb-galera --set rootUser.password=test --set db.name=test -f mariadb-galera-init-config.yaml </code></pre> <p>I get the following error on the Minikube dashboard:</p> <pre><code>Error: failed to start container &quot;initcontainer&quot;: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: &quot;chown -R 1001:1001 /bitnami/mariadb/&quot;: stat chown 
-R 1001:1001 /bitnami/mariadb/: no such file or directory: unknown </code></pre> <p>I don't know how to fix this configuration file, or if there is some other better way to get this working...</p>
<p>In case anyone has issues with this, may I suggest running an initContainer first:</p> <pre><code>initContainers:
  - name: mariadb-create-directory-structure
    image: busybox
    command:
      [
        &quot;sh&quot;,
        &quot;-c&quot;,
        &quot;mkdir -p /bitnami/mariadb/data &amp;&amp; chown -R 1001:1001 /bitnami&quot;,
      ]
    volumeMounts:
      - name: data
        mountPath: /bitnami
</code></pre>
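<p>As an alternative, Bitnami charts usually ship this exact fix behind a values flag, which spares you a hand-written init container. This is an assumption about the chart version in use, so verify the flag exists in its values.yaml:</p>

```shell
helm install my-release bitnami/mariadb-galera \
  --set rootUser.password=test \
  --set db.name=test \
  --set volumePermissions.enabled=true
```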
<p>In the below manifest yaml:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-operator rules: - apiGroups: [apiextensions.k8s.io] resources: [customresourcedefinitions] verbs: ['*'] - apiGroups: [monitoring.coreos.com] resources: - alertmanagers - prometheuses - prometheuses/finalizers - alertmanagers/finalizers - servicemonitors - prometheusrules verbs: ['*'] </code></pre> <hr /> <p>What does <code>rules</code> with <code>apiGroups</code> signify?</p>
<p>In Kubernetes, resources can be either <code>grouped resources</code> or <code>individual resources</code>.</p> <p>Example:</p> <pre><code>kubectl api-resources | grep 'crds\|pods\|NAME'
NAME                        SHORTNAMES   APIVERSION                NAMESPACED   KIND
pods                        po           v1                        true         Pod
customresourcedefinitions   crd,crds     apiextensions.k8s.io/v1   false        CustomResourceDefinition
</code></pre> <p>In the above output, for <code>pods</code> the <code>apiVersion</code> is displayed as <code>v1</code>, so it is an individual resource, whereas for <code>customresourcedefinitions</code> the <code>apiVersion</code> is displayed as <code>apiextensions.k8s.io/v1</code>, which indicates that it is a grouped resource under the group <code>apiextensions.k8s.io</code>.</p> <p>When we are defining RBAC rules (roles/clusterroles) for grouped resources, we need to mention <code>apiGroups</code> along with <code>resources</code>.</p>
<p>I have a etcd cluster running on coreOS. The leader was set to the first member I've started (as expected), but I started the wrong one first, so the leader isn't who it's supposed to be.</p> <p>How do I change the role of a leader to a different member?</p>
<p>To change the leader in the etcd cluster you need to run:</p> <pre><code>etcdctl move-leader id-of-the-node-you-want-to-be-the-leader
</code></pre> <p>To get <code>id-of-the-node-you-want-to-be-the-leader</code> you need to run:</p> <pre><code>etcdctl endpoint status -w table
</code></pre> <p>Full example with certificates and endpoints - you must adjust the parameters to your configuration, of course. First get the table:</p> <pre><code>etcdctl endpoint status -w table \
  --cacert=&quot;/ssl/client/ca-client.crt&quot; \
  --cert=&quot;/ssl/client/prod-v1-etcd-node-1-client.crt&quot; \
  --key=&quot;/ssl/client/prod-v1-etcd-node-1-client.key&quot; \
  --endpoints=127.0.0.1:2379,prod-v1-etcd-node-2:2379,prod-v1-etcd-node-3:2379
</code></pre> <p>...then change the leader:</p> <pre><code>etcdctl move-leader 1f648bfb63561530 \
  --cacert=&quot;/ssl/client/ca-client.crt&quot; \
  --cert=&quot;/ssl/client/prod-v1-etcd-node-1-client.crt&quot; \
  --key=&quot;/ssl/client/prod-v1-etcd-node-1-client.key&quot; \
  --endpoints=127.0.0.1:2379,prod-v1-etcd-node-2:2379,prod-v1-etcd-node-3:2379
</code></pre>
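<p>If you want to script the promotion, the member ID can be cut out of the table output with awk. A sketch against a hypothetical two-row sample of the <code>-w table</code> output (the column layout can differ slightly between etcdctl versions):</p>

```shell
# Hypothetical sample of `etcdctl endpoint status -w table` rows,
# embedded here so the parsing can be shown without a live cluster.
table='| 127.0.0.1:2379 | 1f648bfb63561530 | 3.5.0 | 25 kB | false |
| prod-v1-etcd-node-2:2379 | 8211f1d0f64f3269 | 3.5.0 | 25 kB | true |'

# The ID is field 3 when splitting on "|"; strip the padding spaces.
id=$(printf '%s\n' "$table" | awk -F'|' '/127.0.0.1:2379/ {gsub(/ /,"",$3); print $3}')
echo "$id"   # 1f648bfb63561530

# The extracted ID would then be passed on, e.g.:
# etcdctl move-leader "$id" --endpoints=...
```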
<p>I am able to access my django app deployment using LoadBalancer service type but I'm trying to switch to ClusterIP service type and ingress-nginx but I am getting 503 Service Temporarily Unavailable when I try to access the site via the host url. Describing the ingress also shows <code>error: endpoints &quot;django-service&quot; not found</code> and <code>error: endpoints &quot;default-http-backend&quot; not found</code>. What am I doing wrong?</p> <p>This is my service and ingress yaml:</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: django-service spec: type: ClusterIP ports: - name: http protocol: TCP port: 80 targetPort: 8000 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: django-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/force-ssl-redirect: 'true' nginx.ingress.kubernetes.io/ssl-redirect: 'true' spec: tls: - hosts: - django.example.com rules: - host: django.example.com http: paths: - path: / pathType: Prefix backend: service: name: django-service port: number: 80 ingressClassName: nginx </code></pre> <p>kubectl get all</p> <pre><code>$ kubectl get all NAME READY STATUS RESTARTS AGE pod/django-app-5bdd8ffff9-79xzj 1/1 Running 0 7m44s pod/postgres-58fffbb5cc-247x9 1/1 Running 0 7m44s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/django-service ClusterIP 10.233.29.58 &lt;none&gt; 80/TCP 7m44s service/pg-service ClusterIP 10.233.14.137 &lt;none&gt; 5432/TCP 7m44s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/django-app 1/1 1 1 7m44s deployment.apps/postgres 1/1 1 1 7m44s NAME DESIRED CURRENT READY AGE replicaset.apps/django-app-5bdd8ffff9 1 1 1 7m44s replicaset.apps/postgres-58fffbb5cc 1 1 1 7m44s </code></pre> <p>describe ingress</p> <pre><code>$ kubectl describe ing django-ingress Name: django-ingress Labels: &lt;none&gt; Namespace: django Address: 10.10.30.50 Default backend: default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not 
found&gt;) TLS: SNI routes django.example.com Rules: Host Path Backends ---- ---- -------- django.example.com / django-service:80 (&lt;error: endpoints &quot;django-service&quot; not found&gt;) Annotations: nginx.ingress.kubernetes.io/force-ssl-redirect: true nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/ssl-redirect: true Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 5m28s (x2 over 6m5s) nginx-ingress-controller Scheduled for sync Normal Sync 5m28s (x2 over 6m5s) nginx-ingress-controller Scheduled for sync </code></pre>
<p>I think you forgot to make the link to your Deployment in your Service, via the <code>selector</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: django-service
spec:
  type: ClusterIP
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8000
  selector:
    app: your-deployment-name
</code></pre> <p>The label must be set in your Deployment as well:</p> <pre><code>spec:
  selector:
    matchLabels:
      app: your-deployment-name
  template:
    metadata:
      labels:
        app: your-deployment-name
</code></pre>
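<p>After adding the selector, you can confirm the link took effect by checking that the Service now has endpoints, which directly addresses the <code>endpoints &quot;django-service&quot; not found</code> error from the question:</p>

```shell
# A populated ENDPOINTS column means the selector matches running pods;
# an empty one means the labels still don't line up.
kubectl get endpoints django-service
```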
<p>Is there a way to set quotas for directories inside each bucket on a MinIO server, and to monitor each directory's size and quota with the API?</p>
<p>I have found <a href="https://docs.min.io/minio/baremetal/reference/minio-mc-admin/mc-admin-bucket-quota.html#mc-admin-bucket-quota" rel="nofollow noreferrer">this documentation</a> about bucket quota, but unfortunately it is just for buckets.</p> <blockquote> <p>The <a href="https://docs.min.io/minio/baremetal/reference/minio-mc-admin/mc-admin-bucket-quota.html#command-mc.admin.bucket.quota" rel="nofollow noreferrer" title="mc.admin.bucket.quota"><code>mc admin bucket quota</code></a> command manages per-bucket storage quotas.</p> </blockquote> <p><strong>NOTE</strong>:</p> <blockquote> <p>MinIO does not support using <a href="https://docs.min.io/minio/baremetal/reference/minio-mc-admin.html#command-mc.admin" rel="nofollow noreferrer" title="mc.admin"><code>mc admin</code></a> commands with other S3-compatible services, regardless of their claimed compatibility with MinIO deployments.</p> </blockquote> <hr /> <p>Using following command you can get usage info:</p> <pre class="lang-sh prettyprint-override"><code>mc du </code></pre> <hr /> <p>See also <a href="https://docs.min.io/docs/minio-admin-complete-guide.html" rel="nofollow noreferrer">this doc</a>.</p>
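<p>Usage of <code>mc du</code> against a prefix (&quot;directory&quot;) inside a bucket would look like this; the alias, bucket, and prefix names are placeholders:</p>

```shell
# Summarize disk usage of one prefix inside a bucket.
mc du myminio/mybucket/some/prefix
```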
<p>I am trying to set up a kubernetes service where multiple pods share the same directory on an NFS volume. The NFS volume has a bunch of data pre-generated outside kubernetes in a specific directory for the pods to both read &amp; write.</p> <p>I can think of two ways to try this, neither of which is working properly. The way we are currently doing it is by using the NFS option for creating a volume directly in the pod, like this:</p> <pre><code>kind: Deployment
spec:
  template:
    spec:
      containers:
        ...
      volumes:
        - name: embeddings
          nfs:
            path: /data/embeddings
            server: fs-blahblah.efs.us-west-2.amazonaws.com
</code></pre> <p>This mostly works. All the pods see the same directory on NFS, which is pre-populated with data, and they can all access it. <em>But</em> AFAIK there is no way to specify NFS mount options. The only other allowed option is <code>readonly</code> (<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#nfsvolumesource-v1-core" rel="nofollow noreferrer">ref</a>), and we need to tune some things in the NFS client, like <code>noac</code>.</p> <p>The other way I can think of is to use a PersistentVolume. If you create a PV using NFS you <em>can</em> specify the mount options:</p> <pre><code>kind: PersistentVolume
spec:
  mountOptions:
    - nfsvers=4.1
    - noac
</code></pre> <p>But I don't know how to get a pod to access a specific directory within a PV. My understanding is this isn't possible - because pods can't mount PVs, only PVCs, and AFAIK you can't pick a specific directory for a PVC, nor have multiple pods share a PVC, which is what I specifically want. Plus the semantics for PVCs just seem wrong - I don't want k8s to put storage limits on this drive, I want all the pods to use all the space on the disk they want/need. But maybe it can work anyway?</p> <p>So, how can I get multiple pods to access the same specific directory on an NFS volume while specifying mountOptions?</p>
<p>NFS volume support the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a> <code>ReadWriteMany</code>. Meaning you can mount it multiple times.</p> <p>Since a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#lifecycle-of-a-volume-and-claim" rel="nofollow noreferrer">claim</a> only tries to find a volume that matches its spec, you still mount the volume and not the claim. The claim is just a way to express how the volume, you want to mount, should look like.<br /> Perhaps some cluster admin has already created volumes which you could claim, if they match your criteria. Or you let a storage class create the volume based on the claim.<br /> In the below example, the spec explicitly asks for a volume with that name, so in that case no other volume could match, even if it had all the other criteria given.<br /> You could further strengthen that bond by using a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume" rel="nofollow noreferrer">claimRef</a> on the volume, which would prevent any other claim, than the referenced one, to claim the volume.</p> <p>You are right, in that you have to go the volume + claim way, if you want to specify <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer">mount options</a> for the NFS.</p> <p>Note that the <a href="https://kubernetes.io/docs/concepts/storage/storage-capacity/" rel="nofollow noreferrer">capacity</a> isn't actually doing anything in this example, volumes for pre-existing NFS have the capacity of the actual NFS and not what you specify. You still need to provide some capacity in your manifest, as it's required by Kubernetes. 
It can be some arbitrary value; it doesn't matter.</p> <p>Another thing is the empty <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">storage class</a>. This is because you don't want to create the NFS via a storage class, but use a pre-created NFS outside Kubernetes. If you don't give a storage class, the default class will be used, which is not what you want. Hence, the empty string is required.</p> <p>Regarding the path: if you want to mount different paths of the NFS into different pods, or the same NFS under different paths into different locations of the same pod, you can use a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPath</a>. Although it doesn't seem to be required from your question.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-nfs-volume
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/embeddings
    server: fs-blahblah.efs.us-west-2.amazonaws.com
  mountOptions:
    - vers=4
    - minorversion=1
    - noac
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-nfs-claim
spec:
  volumeName: my-nfs-volume
  storageClassName: &quot;&quot;
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mounter-fleet
spec:
  replicas: 5
  selector:
    matchLabels:
      app: mounter-fleet
  template:
    metadata:
      labels:
        app: mounter-fleet
    spec:
      containers:
        - name: mounter
          image: busybox
          command: [&quot;sleep&quot;, &quot;infinity&quot;]
          volumeMounts:
            - name: nfs
              mountPath: /mnt/nfs
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: my-nfs-claim
</code></pre>
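<p>For completeness, a minimal sketch of the <code>subPath</code> mechanism mentioned above — mounting only one sub-directory of the claimed NFS volume into a container. The directory name <code>embeddings-v2</code> is an assumption for illustration, not from the question:</p> <pre class="lang-yaml prettyprint-override"><code>      containers:
        - name: mounter
          image: busybox
          volumeMounts:
            - name: nfs                    # the claimed NFS volume from above
              mountPath: /mnt/embeddings-v2
              subPath: embeddings-v2       # assumed sub-directory inside /data/embeddings
</code></pre> <p>Each container can pick a different <code>subPath</code> of the same claim, so several pods (or several mounts in one pod) can each see only their slice of the share.</p>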
<p>I have a simple setup that is using OAuth2 Proxy to handle authentication. It works fine locally using minikube, but when I try to use GKE, when the OAuth callback happens I get a 403 status and the following message...</p> <blockquote> <p>Login Failed: Unable to find a valid CSRF token. Please try again.</p> </blockquote> <p>The offending url is <code>http://ourdomain.co/oauth2/callback?code=J_6ao0AxSBRn4bwr&amp;state=r_aFqM9wsSpPvyKyyzE_nagGnpNKUp1pLyZafOEO0go%3A%2Fip</code></p> <p>What should be configured differently to avoid the CSRF error?</p>
<p>In my case it was because I needed to set the cookie to <code>secure = false</code>. Apparently a secure cookie still worked fine over plain HTTP with an IP address, but once I deployed with a domain it failed.</p>
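<p>For reference, with oauth2-proxy this is typically done via its <code>--cookie-secure</code> flag; a sketch of the relevant container fragment (the container name and surrounding structure are assumptions based on the question):</p> <pre class="lang-yaml prettyprint-override"><code>      containers:
        - name: oauth2-proxy
          args:
            # The CSRF/state and session cookies are marked Secure by default;
            # browsers drop Secure cookies over plain HTTP, so disable it while
            # the site is still served on http://
            - --cookie-secure=false
</code></pre> <p>Once TLS is in place and the site is served over HTTPS, switch this back to <code>true</code>.</p>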
<p>I just switched from ForkPool to gevent with concurrency (5) as the pool method for Celery workers running in Kubernetes pods. After the switch I've been getting a non-recoverable error in the worker:</p> <p><code>amqp.exceptions.PreconditionFailed: (0, 0): (406) PRECONDITION_FAILED - delivery acknowledgement on channel 1 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more</code></p> <p>The broker logs give basically the same message:</p> <p><code>2021-11-01 22:26:17.251 [warning] &lt;0.18574.1&gt; Consumer None4 on channel 1 has timed out waiting for delivery acknowledgement. Timeout used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more</code></p> <p>I have <code>CELERY_ACK_LATE</code> set up, but was not familiar with the necessity to set a timeout for the acknowledgement period. And that never happened before using processes. Tasks can be fairly long (60-120 seconds sometimes), but I can't find a specific setting to allow that.</p> <p>In a post on another forum I read about a user who set the timeout in the broker configuration to a huge number (like 24 hours) and still had the same problem, so that makes me think there may be something else related to the issue.</p> <p>Any ideas or suggestions on how to make the worker more resilient?</p>
<p>For future reference, it seems that the new RabbitMQ versions (3.8+) introduced a tight default for <code>consumer_timeout</code> (15 min, I think).</p> <p>The solution I found (that has also been added to the Celery docs not long ago <a href="https://docs.celeryproject.org/en/stable/userguide/calling.html" rel="noreferrer">here</a>) was to just set a large number for the <code>consumer_timeout</code> in RabbitMQ.</p> <p>In <a href="https://stackoverflow.com/questions/67907336/celery-task-with-a-long-eta-and-rabbitmq">this question</a>, someone mentions setting consumer_timeout to false, in a way that using a large number is not needed, but apparently there are some specifics regarding the format of the configuration for that to work.</p> <p>I'm running RabbitMQ in k8s and just did something like:</p> <pre class="lang-yaml prettyprint-override"><code>rabbitmq.conf: |
  consumer_timeout = 31622400000
</code></pre>
<p>I am aware of the hierarchical order of k8s resources. In brief,</p> <ol> <li><strong>service:</strong> a service is what exposes the application to the outer world (or within the cluster). (The <strong>service types</strong> like ClusterIP, NodePort, Ingress are not so relevant to this question.)</li> <li><strong>deployment:</strong> a deployment is what is responsible for keeping a set of pods running.</li> <li><strong>replicaset:</strong> a replica set is what a deployment in turn relies on to keep the set of pods running.</li> <li><strong>pod:</strong> a pod consists of a container or a group of containers</li> <li><strong>container</strong> - the actual required application is run inside the container.</li> </ol> <p>The thing I want to emphasise in this question is why we have <code>replicaset</code>. Why doesn't the <code>deployment</code> directly handle or take responsibility for keeping the required number of pods running? Instead, the <code>deployment</code> in turn relies on the <code>replicaset</code> for this.</p> <p>If k8s is designed this way, there should definitely be some benefit of having <code>replicaset</code>. And this is what I want to explore/understand in depth.</p>
<p>Both essentially serve the same purpose. Deployments are a higher abstraction and, as the name suggests, deal with creating, maintaining and upgrading the deployment (collection of pods) as a whole. Whereas the primary responsibility of ReplicationControllers or ReplicaSets is to maintain a set of identical replicas (which you can achieve declaratively using deployments too, but internally a deployment creates a replicaset to enable this).</p> <p>More specifically, when you are trying to perform a &quot;rolling&quot; update to your deployment, such as updating the image versions, the deployment internally creates a new replica set and performs the rollout. During the rollout you can see two replicasets for the same deployment. So in other words, the Deployment needs the lower-level &quot;encapsulation&quot; of ReplicaSets to achieve this. <a href="https://i.stack.imgur.com/C9U23.png" rel="noreferrer"><img src="https://i.stack.imgur.com/C9U23.png" alt="enter image description here" /></a></p>
<p>I wasn't really sure how to label this question, because I'll be happy with either of the solutions above (inheritance of containers, or defining parameters for the entire workflow without explicitly setting them in each step template).</p> <p>I am currently working with Argo YAMLs and I want to define certain values that will be provided once (and also be optional), and will be used by every pod in the YAML.</p> <p>I'm sure there's a better way to do this than what I have found so far, but I can't find anything in the docs. Currently, the way I found was defining that parameter as a workflow argument, and then for each container defined - defining it as an input parameter/env parameter.</p> <p>My question is this - isn't there a way to define those 'env' variables at the top level of the workflow, so that every pod will use them without me explicitly telling it to?</p> <p>Or - maybe even create one container that has those arguments defined, so that every other container I define inherits from that container and I wouldn't have to write those parameters as input/env for each one I add?</p> <p>I wouldn't want to add these three values to each container I define. It makes the YAML very big and hard to read and maintain.</p> <pre><code>container:
  env:
    - name: env_config
      value: &quot;{{workflow.parameters.env_config}}&quot;
    - name: flow_config
      value: &quot;{{workflow.parameters.flow_config}}&quot;
    - name: flow_type_config
      value: &quot;{{workflow.parameters.flow_type_config}}&quot;
</code></pre> <p>I would love to get your input, even if it's pointing me in the direction of the right doc to read, as I haven't found anything close to it yet.</p> <p>Thanks!</p>
<p>Just realised I haven't updated, so for anyone interested, what I ended up doing is setting an anchor inside a container template:</p> <pre><code>templates:
  # this template is here to define what env parameters each container has using an anchor.
  - name: env-template
    container:
      env: &amp;env_parameters
        - name: env_config
          value: &quot;{{workflow.parameters.env_config}}&quot;
        - name: flow_config
          value: &quot;{{workflow.parameters.flow_config}}&quot;
        - name: run_config
          value: &quot;{{workflow.parameters.run_config}}&quot;
</code></pre> <p>and then using that anchor in each container:</p> <pre><code>container:
  image: image
  imagePullPolicy: Always
  env: *env_parameters
</code></pre>
<p>I'm currently migrating a DAG from airflow version 1.10.10 to 2.0.0.</p> <p>This DAG uses a custom python operator where, depending on the complexity of the task, it assigns resources dynamically. The problem is that the import used in v1.10.10 (<strong>airflow.contrib.kubernetes.pod import Resources</strong>) no longer works. I read that for v2.0.0 I should use <strong>kubernetes.client.models.V1ResourceRequirements</strong>, but I need to build this resource object dynamically. This might sound dumb, but I haven't been able to find the correct way to build this object.</p> <p>For example, I've tried with</p> <pre><code> self.resources = k8s.V1ResourceRequirements( request_memory=get_k8s_resources_mapping(resource_request)['memory'], limit_memory=get_k8s_resources_mapping(resource_request)['memory_l'], request_cpu=get_k8s_resources_mapping(resource_request)['cpu'], limit_cpu=get_k8s_resources_mapping(resource_request)['cpu_l'] ) </code></pre> <p>or</p> <pre><code> self.resources = k8s.V1ResourceRequirements( requests={'cpu': get_k8s_resources_mapping(resource_request)['cpu'], 'memory': get_k8s_resources_mapping(resource_request)['memory']}, limits={'cpu': get_k8s_resources_mapping(resource_request)['cpu_l'], 'memory': get_k8s_resources_mapping(resource_request)['memory_l']} ) </code></pre> <p>(get_k8s_resources_mapping(resource_request)['xxxx'] just returns a value depending on the resource_request, like '2Gi' for memory or '2' for cpu)</p> <p>But they don't seem to work. The task fails.</p> <p>So, my question is, how would you go about correctly building a V1ResourceRequirements in Python? And, how should it look in the executor_config attribute of the task instance? Something like this, maybe?</p> <pre><code>'resources': {'limits': {'cpu': '1', 'memory': '512Mi'}, 'requests': {'cpu': '1', 'memory': '512Mi'}} </code></pre>
<p>The proper syntax is for example:</p> <pre><code>from kubernetes import client
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

KubernetesPodOperator(
    ...,
    resources=client.V1ResourceRequirements(
        requests={&quot;cpu&quot;: &quot;1000m&quot;, &quot;memory&quot;: &quot;8G&quot;},
        limits={&quot;cpu&quot;: &quot;16000m&quot;, &quot;memory&quot;: &quot;128G&quot;}
    )
)
</code></pre> <p>If you would like to generate it dynamically, simply replace the values in requests/limits with a function that returns the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1ResourceRequirements.md" rel="nofollow noreferrer">expected string</a>.</p>
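<p>To illustrate the dynamic part, a small sketch of such a mapping function — the size names and quantity values below are invented for illustration (not taken from the original DAG); only the shape of the <code>requests</code>/<code>limits</code> dicts matters:</p> <pre class="lang-py prettyprint-override"><code># Hypothetical mapping from a task "size" to k8s quantities.
RESOURCE_MAP = {
    "small": {"cpu": "1", "memory": "2Gi", "cpu_l": "2", "memory_l": "4Gi"},
    "large": {"cpu": "4", "memory": "16Gi", "cpu_l": "8", "memory_l": "32Gi"},
}

def k8s_resources(resource_request):
    """Build the requests/limits kwargs for client.V1ResourceRequirements."""
    m = RESOURCE_MAP[resource_request]
    return {
        "requests": {"cpu": m["cpu"], "memory": m["memory"]},
        "limits": {"cpu": m["cpu_l"], "memory": m["memory_l"]},
    }

# then, inside the operator definition:
# resources = client.V1ResourceRequirements(**k8s_resources("small"))
</code></pre> <p>Keeping the mapping in one dict means adding a new &quot;size&quot; is a one-line change and every task builds the same, valid structure.</p>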
<p>I'm following <a href="https://aws.amazon.com/it/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/" rel="nofollow noreferrer">this AWS documentation</a> which explains how to properly configure AWS Secrets Manager to let it work with EKS through Kubernetes Secrets.</p> <p>I successfully followed step by step all the different commands as explained in the documentation.</p> <p>The only difference I get is related to <a href="https://aws.amazon.com/it/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/#:%7E:text=kubectl%20get%20po%20%2D%2Dnamespace%3Dkube%2Dsystem" rel="nofollow noreferrer">this step</a> where I have to run:</p> <pre><code>kubectl get po --namespace=kube-system
</code></pre> <p>The expected output should be:</p> <pre><code>csi-secrets-store-qp9r8   3/3   Running   0   4m
csi-secrets-store-zrjt2   3/3   Running   0   4m
</code></pre> <p>but instead I get:</p> <pre><code>csi-secrets-store-provider-aws-lxxcz               1/1   Running   0   5d17h
csi-secrets-store-provider-aws-rhnc6               1/1   Running   0   5d17h
csi-secrets-store-secrets-store-csi-driver-ml6jf   3/3   Running   0   5d18h
csi-secrets-store-secrets-store-csi-driver-r5cbk   3/3   Running   0   5d18h
</code></pre> <p>As you can see the names are different, but I'm quite sure it's ok :-)</p> <p>The real problem starts <a href="https://aws.amazon.com/it/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/#:%7E:text=kubectl%20apply%20%2Df%20%2D-,Step%204,-%3A%20Create%20and%20deploy" rel="nofollow noreferrer">here in step 4</a>: I created the following YAML file (as you can see I added some parameters):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: &quot;mysecret&quot;
        objectType: &quot;secretsmanager&quot;
</code></pre> <p>And finally
I created a deploy (as explained <a href="https://aws.amazon.com/it/blogs/security/how-to-use-aws-secrets-configuration-provider-with-kubernetes-secrets-store-csi-driver/#:%7E:text=MySecret2%22%0A%20%20%20%20%20%20%20%20objectType%3A%20%22secretsmanager%22-,Step%205,-%3A%20Configure%20and%20deploy" rel="nofollow noreferrer">here in step 5</a>) using the following yaml file:</p> <pre class="lang-yaml prettyprint-override"><code># test-deployment.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
spec:
  serviceAccountName: iamserviceaccountforkeyvaultsecretmanagerresearch
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - name: mysecret-volume
          mountPath: &quot;/mnt/secrets-store&quot;
          readOnly: true
  volumes:
    - name: mysecret-volume
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: &quot;aws-secrets&quot;
</code></pre> <p>After the deployment through the command:</p> <pre><code>kubectl apply -f test-deployment.yaml -n mynamespace
</code></pre> <p>The pod is not able to start properly because the following error is generated:</p> <pre><code>Error from server (BadRequest): container &quot;nginx&quot; in pod &quot;nginx-secrets-store-inline&quot; is waiting to start: ContainerCreating
</code></pre> <p>But, for example, if I run the deployment with the following yaml <strong>the POD will be successfully created</strong></p> <pre class="lang-yaml prettyprint-override"><code># test-deployment.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-secrets-store-inline
spec:
  serviceAccountName: iamserviceaccountforkeyvaultsecretmanagerresearch
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - name: keyvault-credential-volume
          mountPath: &quot;/mnt/secrets-store&quot;
          readOnly: true
  volumes:
    - name: keyvault-credential-volume
      emptyDir: {} # &lt;&lt;== !! LOOK HERE !!
</code></pre> <p>as you can see I used</p> <pre><code>emptyDir: {}
</code></pre> <p>So as far as I can see the <strong>problem</strong> here is related to the following YAML lines:</p> <pre class="lang-yaml prettyprint-override"><code>      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: &quot;aws-secrets&quot;
</code></pre> <p>To be honest it's even not clear in my mind what's happening here. Probably I didn't properly enable the volume permissions in EKS?</p> <p>Sorry, but I'm a newbie in both AWS and Kubernetes configurations. Thanks for your time</p> <p>--- NEW INFO ---</p> <p>If I run</p> <pre><code>kubectl describe pod nginx-secrets-store-inline -n mynamespace
</code></pre> <p>where <em>nginx-secrets-store-inline</em> is the name of the pod, I get the following output:</p> <pre><code>Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    30s                default-scheduler  Successfully assigned mynamespace/nginx-secrets-store-inline to ip-10-0-24-252.eu-central-1.compute.internal
  Warning  FailedMount  14s (x6 over 29s)  kubelet            MountVolume.SetUp failed for volume &quot;keyvault-credential-volume&quot; : rpc error: code = Unknown desc = failed to get secretproviderclass mynamespace/aws-secrets, error: SecretProviderClass.secrets-store.csi.x-k8s.io &quot;aws-secrets&quot; not found
</code></pre> <p>Any hints?</p>
<p>Finally I realized why it wasn't working. As explained <a href="https://secrets-store-csi-driver.sigs.k8s.io/troubleshooting.html#common-errors" rel="nofollow noreferrer">here</a>, the error:</p> <pre><code> Warning FailedMount 3s (x4 over 6s) kubelet, kind-control-plane MountVolume.SetUp failed for volume &quot;secrets-store-inline&quot; : rpc error: code = Unknown desc = failed to get secretproviderclass default/azure, error: secretproviderclasses.secrets-store.csi.x-k8s.io &quot;azure&quot; not found </code></pre> <p>is related to namespace:</p> <blockquote> <p>The SecretProviderClass being referenced in the volumeMount needs to exist in the same namespace as the application pod.</p> </blockquote> <p>So both the yaml file should be deployed in the same namespace (adding, for example, the <em>-n mynamespace</em> argument). Finally I got it working!</p>
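<p>Concretely, for the manifests in the question that means declaring (or applying) the <code>SecretProviderClass</code> in the pod's namespace — a sketch reusing the names from the question:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: aws-secrets
  namespace: mynamespace   # same namespace as the pod that mounts it
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: &quot;mysecret&quot;
        objectType: &quot;secretsmanager&quot;
</code></pre> <p>Equivalently, leave the manifest as-is and apply it with <code>-n mynamespace</code>, the same flag used for the pod.</p>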
<p>I am new to Argo Workflows and following along with <a href="https://youtu.be/XySJb-WmL3Q?t=1247" rel="nofollow noreferrer">this tutorial</a>.</p> <p>Following along with it, we are to create a service account and then attach the pre-existing <code>workflow-role</code> to the service account, like this:</p> <pre class="lang-sh prettyprint-override"><code>&gt; kubectl create serviceaccount mike
serviceaccount/mike created   # Response from my terminal

&gt; kubectl create rolebinding mike --serviceaccount=argo:mike --role=workflow-role
rolebinding.rbac.authorization.k8s.io/mike created   # Response from my terminal
</code></pre> <p>But then when I tried to submit a job using that service account, it said that there is no such role <code>workflow-role</code>:</p> <pre class="lang-sh prettyprint-override"><code>Message: Error (exit code 1): pods &quot;mike-cli-hello-svlmn&quot; is forbidden: User &quot;system:serviceaccount:argo:mike&quot; cannot patch resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;argo&quot;: RBAC: role.rbac.authorization.k8s.io &quot;workflow-role&quot; not found
</code></pre> <p>(I also do not understand why my default API group is null, but I'm assuming that is unrelated.)</p> <p>I then checked, and indeed there is no such role:</p> <pre class="lang-sh prettyprint-override"><code>❯ kubectl get role
NAME                       CREATED AT
agent                      2022-02-28T21:38:31Z
argo-role                  2022-02-28T21:38:31Z
argo-server-role           2022-02-28T21:38:32Z
executor                   2022-02-28T21:38:32Z
pod-manager                2022-02-28T21:38:32Z
submit-workflow-template   2022-02-28T21:38:32Z
workflow-manager           2022-02-28T21:38:32Z
</code></pre> <p>Could it be that the role is <code>workflow-manager</code>? That sounds more like an automated service to manage the pipeline / DAG or something similar.</p> <p>I am obviously quite new to Argo.
I have successfully launched jobs, but not when trying to use that newly created service account.</p> <p>Should Argo have a default role of <code>workflow-role</code>? How do I create it?</p>
<p>Actually, I think I got it, but if someone sees this, a confirmation would be nice.</p> <p>I created a role file as follows:</p> <pre class="lang-yaml prettyprint-override"><code># role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow
rules:
  - apiGroups:
      - &quot;&quot;
    resources:
      - pods
    verbs:
      - get
      - watch
      - patch
  - apiGroups:
      - &quot;&quot;
    resources:
      - pods/log
    verbs:
      - get
      - watch
</code></pre> <p>I then created the role via the standard</p> <pre class="lang-sh prettyprint-override"><code>kubectl apply -f role.yaml
</code></pre> <p>Then created the role-binding same as above:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create rolebinding mike --serviceaccount=argo:mike --role=workflow
</code></pre> <p>Then I could submit jobs with the new service account without a problem:</p> <pre class="lang-sh prettyprint-override"><code>argo submit --serviceaccount mike --watch argo_tutorial.yaml
</code></pre>
<p>I am working with logs in my system.<br /> I want to use a log sidecar to collect the business container's logs.</p> <p>My business container writes its logs to <strong>STDOUT</strong>.</p> <p>So I want to redirect this STDOUT to a file on the pod's volume, because in a pod all containers can share the same volume, so my sidecar can collect the logs from the volume.</p> <p>How should I configure this?<br /> I mean, maybe I should write some configuration in my k8s YAML so k8s will automatically redirect the container's <strong>STDOUT</strong> to the pod's volume?</p>
<p>Adding <code>&gt; /&lt;your_path_to_volume_inside_pod&gt;/file.log 2&gt;&amp;1</code> to your <code>command</code> would redirect both <code>STDOUT</code> and <code>STDERR</code> to a file. Note the order: the file redirection has to come first and <code>2&gt;&amp;1</code> after it — the other way around (<code>2&gt;&amp;1 &gt; file.log</code>) sends <code>STDERR</code> to the original <code>STDOUT</code> instead of the file.</p>
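<p>A minimal sketch of the whole pattern — image names, paths and the sidecar command are assumptions for illustration, not from the question:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
    - name: business
      image: my-app:latest            # your application image
      command: [&quot;/bin/sh&quot;, &quot;-c&quot;]
      # redirect both STDOUT and STDERR into the shared volume
      args: [&quot;/app/start.sh &gt; /var/log/app/app.log 2&gt;&amp;1&quot;]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-collector             # the log sidecar
      image: busybox
      command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;tail -n+1 -F /var/log/app/app.log&quot;]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
</code></pre> <p>Note that once the output is redirected to the file, <code>kubectl logs &lt;pod&gt; -c business</code> will no longer show it; only the sidecar (or the file itself) has the output.</p>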
<p>I have a java process which is running on k8s.</p> <p>I set Xms and Xmx for the process.</p> <pre><code>java -Xms512M -Xmx1G -XX:SurvivorRatio=8 -XX:NewRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -jar automation.jar
</code></pre> <p>My expectation is that the pod should consume 1.5 or 2 GB memory, but it consumes much more, nearly 3.5 GB. It's too much. If I run my process on a virtual machine, it consumes much less memory.</p> <p>When I check the memory stats for the pod, I realise that the pod allocates too much cache memory.</p> <p>RSS at nearly 1.5 GB is OK, because Xmx is 1 GB. But why is the cache nearly 3 GB?</p> <p>Is there any way to tune or control this usage?</p> <pre><code>/app $ cat /sys/fs/cgroup/memory/memory.stat
cache 2881228800
rss 1069154304
rss_huge 446693376
mapped_file 1060864
swap 831488
pgpgin 1821674
pgpgout 966068
pgfault 467261
pgmajfault 47
inactive_anon 532504576
active_anon 536588288
inactive_file 426450944
active_file 2454777856
unevictable 0
hierarchical_memory_limit 16657932288
hierarchical_memsw_limit 9223372036854771712
total_cache 2881228800
total_rss 1069154304
total_rss_huge 446693376
total_mapped_file 1060864
total_swap 831488
total_pgpgin 1821674
total_pgpgout 966068
total_pgfault 467261
total_pgmajfault 47
total_inactive_anon 532504576
total_active_anon 536588288
total_inactive_file 426450944
total_active_file 2454777856
total_unevictable 0
</code></pre>
<p>A Java process may consume much more physical memory than specified in <code>-Xmx</code> - I explained it in <a href="https://stackoverflow.com/questions/53451103/java-using-much-more-memory-than-heap-size-or-size-correctly-docker-memory-limi/53624438#53624438">this answer</a>.</p> <p>However, in your case, it's not even the memory of a Java process, but rather an OS-level <a href="https://en.wikipedia.org/wiki/Page_cache" rel="nofollow noreferrer">page cache</a>. Typically you don't need to care about the page cache, since it's the shared reclaimable memory: when an application wants to allocate more memory, but there is not enough immediately available free pages, the OS will likely free a part of the page cache automatically. In this sense, page cache should not be counted as &quot;used&quot; memory - it's more like a spare memory used by the OS for a good purpose while application does not need it.</p> <p>The page cache often grows when an application does a lot of file I/O, and this is fine.</p> <p><a href="https://github.com/jvm-profiling-tools/async-profiler/" rel="nofollow noreferrer">Async-profiler</a> may help to find the exact source of growth:<br /> run it with <code>-e filemap:mm_filemap_add_to_page_cache</code></p> <p>I demonstrated this approach in <a href="https://www.youtube.com/watch?v=c755fFv1Rnk&amp;t=2784s" rel="nofollow noreferrer">my presentation</a>.</p>
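<p>As a quick sanity check, the cache-vs-RSS split can be computed from the two numbers in <code>memory.stat</code>. A sketch using the values from the question — on a live pod you would read <code>/sys/fs/cgroup/memory/memory.stat</code> instead of hard-coding them:</p> <pre class="lang-sh prettyprint-override"><code># values taken from the question's memory.stat
cache=2881228800
rss=1069154304
echo "cache MiB: $((cache / 1024 / 1024))"   # page cache kept by the OS
echo "rss MiB:   $((rss / 1024 / 1024))"     # actual process memory
</code></pre> <p>Here the page cache (~2.7 GiB) dwarfs the RSS (~1 GiB), which points at file-I/O-heavy activity rather than a JVM heap problem.</p>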
<p>We are using Linkerd 2.11.1 on Azure AKS Kubernetes. Amongst others there is a Deployment using an Alpine Linux image containing Apache/mod_php/PHP8 serving an API. HTTPS is resolved by Traefik v2 with cert-manager, so that incoming traffic to the APIs is on port 80. The Linkerd proxy container is injected as a sidecar.</p> <p>Recently I saw that the API containers return 504 errors during a short period of time when doing a rolling deployment. In the sidecar's log, I found the following:</p> <pre><code>[     0.000590s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[     0.001062s]  INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[     0.001078s]  INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[     0.001081s]  INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[     0.001083s]  INFO ThreadId(01) linkerd2_proxy: Tap interface on 0.0.0.0:4190
[     0.001085s]  INFO ThreadId(01) linkerd2_proxy: Local identity is default.my-api.serviceaccount.identity.linkerd.cluster.local
[     0.001088s]  INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.001090s]  INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.014676s]  INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity: default.my-api.serviceaccount.identity.linkerd.cluster.local
[  3674.769855s]  INFO ThreadId(01) inbound:server{port=80}: linkerd_app_inbound::detect: Handling connection as opaque timeout=linkerd_proxy_http::version::Version protocol detection timed out after 10s
</code></pre> <p>My guess is that this detection leads to the 504 errors somehow.
However, if I add the linkerd inbound port annotation to the pod template (terraform syntax):</p> <pre><code>resource &quot;kubernetes_deployment&quot; &quot;my_api&quot; {
  metadata {
    name      = &quot;my-api&quot;
    namespace = &quot;my-api&quot;
    labels = {
      app = &quot;my-api&quot;
    }
  }

  spec {
    replicas = 20

    selector {
      match_labels = {
        app = &quot;my-api&quot;
      }
    }

    template {
      metadata {
        labels = {
          app = &quot;my-api&quot;
        }
        annotations = {
          &quot;config.linkerd.io/inbound-port&quot; = &quot;80&quot;
        }
      }
</code></pre> <p>I get the following:</p> <pre><code>time=&quot;2022-03-01T14:56:44Z&quot; level=info msg=&quot;Found pre-existing key: /var/run/linkerd/identity/end-entity/key.p8&quot;
time=&quot;2022-03-01T14:56:44Z&quot; level=info msg=&quot;Found pre-existing CSR: /var/run/linkerd/identity/end-entity/csr.der&quot;
[     0.000547s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
thread 'main' panicked at 'Failed to bind inbound listener: Os { code: 13, kind: PermissionDenied, message: &quot;Permission denied&quot; }', /github/workspace/linkerd/app/src/lib.rs:195:14
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</code></pre> <p>Can somebody tell me why it fails to bind the inbound listener?</p> <p>Any help is much appreciated,</p> <p>thanks,</p> <p>Pascal</p>
<p>Found it: Kubernetes asynchronously sends the requests to shut down the pods and to no longer send traffic to them. And if the pod shuts down faster than its removal from the IP lists, it can receive requests when already being dead.</p> <p>To fix this, I added a <code>preStop</code> lifecycle hook to the application container:</p> <pre><code>lifecycle {
  pre_stop {
    exec {
      command = [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;sleep 5&quot;]
    }
  }
}
</code></pre> <p>and the following annotation to the pod template:</p> <pre><code>annotations = {
  &quot;config.alpha.linkerd.io/proxy-wait-before-exit-seconds&quot; = &quot;10&quot;
}
</code></pre> <p>Documented here:</p> <p><a href="https://linkerd.io/2.11/tasks/graceful-shutdown/" rel="nofollow noreferrer">https://linkerd.io/2.11/tasks/graceful-shutdown/</a></p> <p>and here:</p> <p><a href="https://blog.gruntwork.io/delaying-shutdown-to-wait-for-pod-deletion-propagation-445f779a8304" rel="nofollow noreferrer">https://blog.gruntwork.io/delaying-shutdown-to-wait-for-pod-deletion-propagation-445f779a8304</a></p>
<p>I am trying to deploy ArgoCD and applications located in subfolders through Terraform in an AKS cluster.</p> <p>This is my folder structure tree:</p> <p><em>I'm using the app-of-apps approach, so first I will deploy ArgoCD (this will manage itself as well) and later ArgoCD will let me SYNC the cluster-addons and application manually once installed.</em></p> <pre><code>apps
  cluster-addons
    AKV2K8S
    Cert-Manager
    Ingress-nginx
  application
    application-A
  argocd
    override-values.yaml
    Chart
</code></pre> <p>When I run the command &quot;helm install ...&quot; manually in the AKS cluster everything is installed fine. ArgoCD is installed, and later when I access ArgoCD I see that the rest of the applications are missing and I can sync them manually.</p> <p><strong>However, if I want to install it through Terraform, only ArgoCD is installed, but it looks like it does not &quot;detect&quot; the override_values.yaml file</strong>:</p> <p>I mean, ArgoCD and the ArgoCD applicationset controller are installed in the cluster, but ArgoCD does not &quot;detect&quot; the values.yaml files that are customized for my AKS cluster.
If I run &quot;helm install&quot; manually on the cluster everything works, but not through Terraform.</p> <pre><code>resource &quot;helm_release&quot; &quot;argocd_applicationset&quot; {
  name       = &quot;argocd-applicationset&quot;
  repository = &quot;https://argoproj.github.io/argo-helm&quot;
  chart      = &quot;argocd-applicationset&quot;
  namespace  = &quot;argocd&quot;
  version    = &quot;1.11.0&quot;
}

resource &quot;helm_release&quot; &quot;argocd&quot; {
  name       = &quot;argocd&quot;
  repository = &quot;https://argoproj.github.io/argo-helm&quot;
  chart      = &quot;argo-cd&quot;
  namespace  = &quot;argocd&quot;
  version    = &quot;3.33.6&quot;

  values = [
    &quot;${file(&quot;values.yaml&quot;)}&quot;
  ]
</code></pre> <p>The values.yaml file is located in the folder where I have the TF code to install argocd and argocd applicationset.</p> <p>I tried to change the name of the file &quot;values.yaml&quot; to &quot;override_values.yaml&quot; but same issue.</p> <p><strong>I have many things changed in the override_values.yaml file, so I cannot use &quot;set&quot; inside the TF code...</strong></p> <p>Also, I tried adding:</p> <pre><code>  values = [
    &quot;${yamlencode(file(&quot;values.yaml&quot;))}&quot;
  ]
</code></pre> <p>but I get this error in the &quot;apply&quot; step in the pipeline:</p> <pre><code>error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {} &quot;argo-cd:\r\n  ## ArgoCD configuration\r\n  ## Ref: https://github.com/argoproj/argo-cd\r\n
</code></pre> <p>Probably because it is not a JSON file? Does it make sense to convert this file into a JSON one?</p> <p>Any idea if I can pass this override values yaml file through Terraform?</p> <p>If not, please may you post a clear/full example with mock variables on how to do that using an Azure pipeline?</p> <p>Thanks in advance!</p>
<p>The issue was with the values indentation in the TF code.</p> <p>The issue was resolved when I fixed that:</p> <pre><code>resource &quot;helm_release&quot; &quot;argocd_applicationset&quot; {
  name       = &quot;argocd-applicationset&quot;
  repository = &quot;https://argoproj.github.io/argo-helm&quot;
  chart      = &quot;argocd-applicationset&quot;
  namespace  = &quot;argocd&quot;
  version    = &quot;1.11.0&quot;
}

resource &quot;helm_release&quot; &quot;argocd&quot; {
  name       = &quot;argocd&quot;
  repository = &quot;https://argoproj.github.io/argo-helm&quot;
  chart      = &quot;argo-cd&quot;
  namespace  = &quot;argocd&quot;
  version    = &quot;3.33.6&quot;

  values = [file(&quot;values.yaml&quot;)]
}
</code></pre> <p>It is also working fine with quoting.</p>
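<p>As a side note, if the values file itself needs to be parameterized per environment, Terraform's <code>templatefile()</code> function can render it before handing it to the chart. A sketch — the template file name and the <code>environment</code> variable are assumptions for illustration:</p> <pre><code>resource &quot;helm_release&quot; &quot;argocd&quot; {
  # ... chart/repository/namespace as above ...

  values = [
    # renders values.yaml.tpl with the given variables before Helm sees it
    templatefile(&quot;${path.module}/values.yaml.tpl&quot;, {
      environment = var.environment
    })
  ]
}
</code></pre> <p>This keeps one values template per module instead of one static values.yaml per environment.</p>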
<p>is there a set of commands to change the docker image name/tag in an existing deployment in a project in an OpenShift cluster?</p>
<p>You can use the <code>oc set image</code> command to change the image for a container in an existing Deployment / DeploymentConfig:</p> <pre><code>oc set image dc/myapp mycontainer=nginx:1.9.1 </code></pre> <p>Try <code>oc set image --help</code> for some examples.</p>
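<p>If helpful, you can then follow the change with a rollout check — a sketch, where <code>myapp</code> and <code>mycontainer</code> are the placeholder names from the example above:</p> <pre><code># Change the image and wait for the new pods to roll out
oc set image dc/myapp mycontainer=nginx:1.9.1
oc rollout status dc/myapp

# Confirm the container now references the new image
oc get dc myapp -o jsonpath='{.spec.template.spec.containers[0].image}'
</code></pre>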
<p>I'm using the SeleniumGrid in the most recent version <code>4.1.2</code> in a Kubernetes cluster.</p> <p>In many cases (I would say in about half) when I execute a test through the grid, the node fails to kill the processes and does not go back to being idle. The container then keeps using one full CPU all the time until I kill it manually.</p> <p>The log in the container is the following:</p> <pre><code>10:51:34.781 INFO [NodeServer$1.lambda$start$1] - Sending registration event... 10:51:35.680 INFO [NodeServer.lambda$createHandlers$2] - Node has been added Starting ChromeDriver 98.0.4758.102 (273bf7ac8c909cde36982d27f66f3c70846a3718-refs/branch-heads/4758@{#1151}) on port 39592 Only local connections are allowed. Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe. [1C6h4r6o1m2e9D1r2i3v.e9r8 7w]a[sS EsVtEaRrEt]e:d bsiuncdc(e)s sffauillleyd.: Cannot assign requested address (99) 11:08:24.970 WARN [SeleniumSpanExporter$1.lambda$export$0] - {&quot;traceId&quot;: &quot;99100300a4e6b4fe2afe5891b50def09&quot;,&quot;eventTime&quot;: 1646129304968456597,&quot;eventName&quot;: &quot;No slot matched the requested capabilities. &quot;,&quot;attributes&quot; 11:08:44.672 INFO [OsProcess.destroy] - Unable to drain process streams. Ignoring but the exception being swallowed follows. 
org.apache.commons.exec.ExecuteException: The stop timeout of 2000 ms was exceeded (Exit value: -559038737) at org.apache.commons.exec.PumpStreamHandler.stopThread(PumpStreamHandler.java:295) at org.apache.commons.exec.PumpStreamHandler.stop(PumpStreamHandler.java:180) at org.openqa.selenium.os.OsProcess.destroy(OsProcess.java:135) at org.openqa.selenium.os.CommandLine.destroy(CommandLine.java:152) at org.openqa.selenium.remote.service.DriverService.stop(DriverService.java:281) at org.openqa.selenium.grid.node.config.DriverServiceSessionFactory.apply(DriverServiceSessionFactory.java:183) at org.openqa.selenium.grid.node.config.DriverServiceSessionFactory.apply(DriverServiceSessionFactory.java:65) at org.openqa.selenium.grid.node.local.SessionSlot.apply(SessionSlot.java:143) at org.openqa.selenium.grid.node.local.LocalNode.newSession(LocalNode.java:314) at org.openqa.selenium.grid.node.NewNodeSession.execute(NewNodeSession.java:52) at org.openqa.selenium.remote.http.Route$TemplatizedRoute.handle(Route.java:192) at org.openqa.selenium.remote.http.Route.execute(Route.java:68) at org.openqa.selenium.grid.security.RequiresSecretFilter.lambda$apply$0(RequiresSecretFilter.java:64) at org.openqa.selenium.remote.tracing.SpanWrappedHttpHandler.execute(SpanWrappedHttpHandler.java:86) at org.openqa.selenium.remote.http.Filter$1.execute(Filter.java:64) at org.openqa.selenium.remote.http.Route$CombinedRoute.handle(Route.java:336) at org.openqa.selenium.remote.http.Route.execute(Route.java:68) at org.openqa.selenium.grid.node.Node.execute(Node.java:240) at org.openqa.selenium.remote.http.Route$CombinedRoute.handle(Route.java:336) at org.openqa.selenium.remote.http.Route.execute(Route.java:68) at org.openqa.selenium.remote.AddWebDriverSpecHeaders.lambda$apply$0(AddWebDriverSpecHeaders.java:35) at org.openqa.selenium.remote.ErrorFilter.lambda$apply$0(ErrorFilter.java:44) at org.openqa.selenium.remote.http.Filter$1.execute(Filter.java:64) at 
org.openqa.selenium.remote.ErrorFilter.lambda$apply$0(ErrorFilter.java:44) at org.openqa.selenium.remote.http.Filter$1.execute(Filter.java:64) at org.openqa.selenium.netty.server.SeleniumHandler.lambda$channelRead0$0(SeleniumHandler.java:44) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) 11:08:44.673 ERROR [OsProcess.destroy] - Unable to kill process Process[pid=75, exitValue=143] 11:08:44.675 WARN [SeleniumSpanExporter$1.lambda$export$0] - {&quot;traceId&quot;: &quot;99100300a4e6b4fe2afe5891b50def09&quot;,&quot;eventTime&quot;: 1646129316638154262,&quot;eventName&quot;: &quot;exception&quot;,&quot;attributes&quot;: {&quot;driver.url&quot;: &quot;http:\u002f\u002f </code></pre> <p>Here's an excerpt from the Kubernetes manifest:</p> <pre class="lang-yaml prettyprint-override"><code> - name: selenium-node-chrome image: selenium/node-chrome:latest ... env: - name: TZ value: Europe/Berlin - name: START_XVFB value: &quot;false&quot; - name: SE_NODE_OVERRIDE_MAX_SESSIONS value: &quot;true&quot; - name: SE_NODE_MAX_SESSIONS value: &quot;1&quot; envFrom: - configMapRef: name: selenium-event-bus-config ... volumeMounts: - name: dshm mountPath: /dev/shm ... volumes: - name: dshm emptyDir: medium: Memory </code></pre> <p>The <code>selenium-event-bus-config</code> contains the following vars:</p> <pre class="lang-yaml prettyprint-override"><code>data: SE_EVENT_BUS_HOST: selenium-hub SE_EVENT_BUS_PUBLISH_PORT: &quot;4442&quot; SE_EVENT_BUS_SUBSCRIBE_PORT: &quot;4443&quot; </code></pre> <p>Did I misconfigure anything? Has anyone any idea how I can fix this?</p>
<p>If you don't need to use Xvfb, you can remove it from your configuration and your problem will be resolved.</p> <blockquote> <p>Apparently the issue resolves when removing the <code>START_XVFB</code> parameter. With a node with only the timezone config I did not yet have the problem.</p> </blockquote> <p>As a workaround you can also try changing your driver, for example to <a href="https://sites.google.com/a/chromium.org/chromedriver/downloads" rel="nofollow noreferrer">Chromedriver</a>. You can read about the differences between them <a href="https://stackoverflow.com/questions/41460168/what-is-difference-between-xvfb-and-chromedriver-and-when-to-use-them">here</a>.</p> <p>See also <a href="https://github.com/SeleniumHQ/docker-selenium/issues/1500" rel="nofollow noreferrer">this similar problem</a>.</p>
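<p>As the quoted comment suggests, a minimal node container spec without <code>START_XVFB</code> would look like this (a sketch based on the manifest from the question):</p> <pre><code>- name: selenium-node-chrome
  image: selenium/node-chrome:latest
  env:
    # Keep only the timezone; omit START_XVFB entirely
    - name: TZ
      value: Europe/Berlin
  envFrom:
    - configMapRef:
        name: selenium-event-bus-config
</code></pre>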
<p>I'm trying to run this, after creating the folder \data\MongoDb\database and sharing it with everyone (Windows 10 doesn't seem to want to share with localsystem, but that should work anyway)</p> <p>It crashes trying to Create the container with a 'Create Container error' I think that somehow I've specified the mapping on how to mount the claim - I'm trying for /data/db which I've confirmed there is data there if I remove the 'volumeMounts' part at the bottom - but if I don't have that, then how does it know that is where I want it mounted? It appears to not mount that folder if I don't add that, and the server works fine in that case, but of course, it has the data inside the server and when it gets powered down and back up POOF! goes your data.</p> <p>Here is the YAML file I'm using</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mongodb labels: app: mongodb spec: ports: - port: 27017 targetPort: 27017 selector: app: mongodb type: LoadBalancer externalIPs: - 192.168.1.9 --- apiVersion: v1 kind: PersistentVolume metadata: name: mongo-storage labels: type: local spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: manual hostPath: path: c:/data/mongodb/database --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-mongo-storage spec: accessModes: - ReadWriteOnce storageClassName: manual resources: requests: storage: 5Gi --- apiVersion: apps/v1 kind: Deployment metadata: name: mongodb labels: app: mongodb spec: replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongo2 image: mongo ports: - containerPort: 27017 name: mongodb env: - name: MONGO_INITDB_ROOT_USERNAME value: xxx - name: MONGO_INITDB_ROOT_PASSWORD value: xxxx - name: MONGO_INITDB_DATABASE value: defaultDatabase volumeMounts: - mountPath: &quot;/data/db&quot; name: mongo-storage volumes: - name: mongo-storage persistentVolumeClaim: claimName: 
pv-mongo-storage </code></pre> <p>I would presume that there is also some vastly better way to set the password in the MongoDb Container too... This is the only way I've see that worked so far... Oh, I also tried the mountPath without the &quot; around it, because 1. that makes more sense to me, and 2 some of the examples did it that way... No luck</p> <p>The error I'm getting is 'invalid mode: /data/db' - which would imply that the image can't mount that folder because it has the wrong mode... On the other hand, is it because the host is Windows?</p> <p>I don't know... I would hope that it can mount it under Windows...</p>
<p>Adding this from the comments so it will be visible to the community.</p> <p>Docker Desktop for Windows provides the ability to access Windows files/directories from Docker containers. The Windows directories are mounted inside a docker container under the <code>/run/desktop/mnt/host/</code> directory.</p> <p>So, you should specify the path to your db directory (<code>c:/data/mongodb/database</code>) on the Windows host as:</p> <pre><code>/run/desktop/mnt/host/c/data/mongodb/database </code></pre> <p>This is only specific to Docker Desktop for Windows.</p>
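<p>Applied to the PersistentVolume from the question, only the <code>hostPath</code> needs to change — a sketch:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-storage
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    # Docker Desktop for Windows exposes C:\ under /run/desktop/mnt/host/c
    path: /run/desktop/mnt/host/c/data/mongodb/database
</code></pre>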
<p>I have a <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">metrics-server</a> and a horizontal pod autoscaler using this server, running on my cluster.<br /> This works perfectly fine, until i inject linkerd-proxies into the deployments of the namespace where my application is running. Running <code>kubectl top pod</code> in that namespace results in a <code>error: Metrics not available for pod &lt;name&gt;</code> error. However, nothing appears in the metrics-server pod's logs.<br /> The metrics-server clearly works fine in other namespaces, because top works in every namespace but the meshed one.</p> <p>At first i thought it could be because the proxies' resource requests/limits weren't set, but after running the injection with them (<code>kubectl get -n &lt;namespace&gt; deploy -o yaml | linkerd inject - --proxy-cpu-request &quot;10m&quot; --proxy-cpu-limit &quot;1&quot; --proxy-memory-request &quot;64Mi&quot; --proxy-memory-limit &quot;256Mi&quot; | kubectl apply -f -</code>), the issue stays the same.</p> <p>Is this a known problem, are there any possible solutions?</p> <p>PS: I have a <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> running in a different namespace, and this seems to be able to scrape the pod metrics from the meshed pods just fine <img src="https://i.imgur.com/bOogl3P.png" alt="grafana dashboard image showing prometheus can collect the data" /></p>
<p>The problem was apparently a bug in the cAdvisor stats provider with the CRI runtime. The linkerd-init containers keep producing metrics after they've terminated, which shouldn't happen. The metrics-server ignores stats from pods that contain containers reporting zero values (to avoid reporting invalid metrics, e.g. when a container is restarting or its metrics haven't been collected yet). You can follow up on the <a href="https://github.com/kubernetes/kubernetes/issues/103368" rel="nofollow noreferrer">issue</a> here. Solutions seem to be changing to another runtime or using the PodAndContainerStatsFromCRI flag, which will let the internal CRI stats provider be responsible instead of the cAdvisor one.</p>
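<p>For reference, the <code>PodAndContainerStatsFromCRI</code> feature gate is set in the kubelet configuration — a sketch, assuming you control the kubelet config on your nodes:</p> <pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Let the CRI runtime report container stats instead of cAdvisor
  PodAndContainerStatsFromCRI: true
</code></pre>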
<p>I am using this link to have a Keycloak setup on my K8s cluster in Azure cloud. <a href="https://www.keycloak.org/getting-started/getting-started-kube" rel="nofollow noreferrer">https://www.keycloak.org/getting-started/getting-started-kube</a></p> <p>Even after following all the steps successfully, unable to get the Keycloak Admin console or Keycloak account on my browser. I have minicube on my machine, also enabled the ingress addon.</p> <p>Deployed Keycloak deployment and service and also ingress.</p> <p>I do the echo : KEYCLOAK_URL=https://keycloak.$(minikube ip).nip.io &amp;&amp;</p> <pre><code>echo &quot;Keycloak: $KEYCLOAK_URL&quot; &amp;&amp; echo &quot;Keycloak Admin Console: $KEYCLOAK_URL/admin&quot; &amp;&amp; echo &quot;Keycloak Account Console: $KEYCLOAK_URL/realms/myrealm/account&quot; &amp;&amp; echo &quot;&quot; </code></pre> <p>and get the successful output without errors:</p> <pre><code>Keycloak: https://keycloak.&lt;IP&gt;.nip.io Keycloak Admin Console: https://keycloak.&lt;IP&gt;.nip.io:8443/admin Keycloak Account Console: https://keycloak.&lt;IP&gt;.nip.io/realms/myrealm/account </code></pre> <p>But when I try opening the Admin console link or Keycloak link, in my browser it does not open.</p> <p>Not sure as what am I missing and what else is supposed to be done?</p>
<p>You can check out my YAML files to deploy the Keycloak on Kubernetes.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: keycloak labels: app: keycloak spec: ports: - name: http port: 8080 targetPort: 8080 selector: app: keycloak type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: keycloak namespace: default labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak spec: containers: - name: keycloak image: quay.io/keycloak/keycloak:10.0.0 env: - name: KEYCLOAK_USER value: &quot;admin&quot; - name: KEYCLOAK_PASSWORD value: &quot;admin&quot; - name: PROXY_ADDRESS_FORWARDING value: &quot;true&quot; - name: DB_VENDOR value: POSTGRES - name: DB_ADDR value: postgres - name: DB_DATABASE value: keycloak - name: DB_USER value: root - name: DB_PASSWORD value: password - name : KEYCLOAK_HTTP_PORT value : &quot;80&quot; - name: KEYCLOAK_HTTPS_PORT value: &quot;443&quot; - name : KEYCLOAK_HOSTNAME value : keycloak.harshmanvar.tk #replace with ingress URL ports: - name: http containerPort: 8080 - name: https containerPort: 8443 readinessProbe: httpGet: path: /auth/realms/master port: 8080 </code></pre> <p><a href="https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment" rel="nofollow noreferrer">https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment</a></p> <p>Feel free to refer this article for more : <a href="https://faun.pub/keycloak-kubernetes-deployment-409d6ccd8a39" rel="nofollow noreferrer">https://faun.pub/keycloak-kubernetes-deployment-409d6ccd8a39</a></p>
<p>I'm using OVH cloud and K8S (with ingress/loadbalancer/nginx)</p> <p>How can I route all my node traffic (my containers) to ingress (Loadbalancer)? Actually I got one Public IP by Node, (and they change every time I setup/delete node)</p> <p><strong>The Goal</strong> : have always same public IP when I request external API, I need trust my IP to this external API (by white list)</p> <p>I looked about <code>Egress</code> but is look it's not work.</p> <p>Have you some example or tips for me?</p>
<p>You can run a proxy container (or use a proxy DNS option) alongside each running pod, which passes the traffic through a specific proxy pod so that all your traffic leaves from one single node. However, this is not a scalable solution.</p> <p>If you are using <strong>istio</strong>, you can create an Egress gateway pod fixed on a single node with <strong>affinity</strong>. That way you keep the same IP even if the container <strong>restarts</strong>, because it gets scheduled back to the same node.</p> <p><a href="https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/</a></p> <p>Another solution is to use a NAT gateway. I have not set one up with OVH cloud, but it's a scalable solution if your nodes are in a public subnet.</p> <p>Here is a nice GKE project we have been using to create a single point of egress for our K8s clusters: <a href="https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/v1.2.3/examples/gke-nat-gateway" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/v1.2.3/examples/gke-nat-gateway</a></p> <p>You can implement the same with OVH cloud if you can use a NAT gateway.</p>
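<p>For the istio option, pinning the egress gateway to one node can be done with node affinity on the gateway Deployment — a sketch, assuming a node you have labeled <code>egress=true</code> (that label is an assumption, not an istio default):</p> <pre><code>spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: egress
                    operator: In
                    values:
                      - &quot;true&quot;
</code></pre>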
<p>I have a Minikube cluster setup in WSL 2 of Windows 10 pro, where the docker-for-windows is used with WSL2 integration. Minikube was started with default docker driver.</p> <pre><code>$ minikube version minikube version: v1.25.2 commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7 </code></pre> <p>If I follow the <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">getting started guide</a>,after creating the <code>hellow-minikube</code> service, I should be able to connect to the service either via <code>&lt;minikube-ip&gt;:nodeport</code> or via <code>minikube service</code> command.</p> <p>But the first method didn't worked. Because it was impossible to even ping the minikube ip from WSL 2: (This works in Minikube setup on a pure Ubuntu installation. The problem is in WSL2 - Windows subsystem for linux).</p> <pre><code>$ minikube ip 192.168.49.2 $ ping 192.168.49.2 PING 192.168.49.2 (192.168.49.2) 56(84) bytes of data. ^C --- 192.168.49.2 ping statistics --- 293 packets transmitted, 0 received, 100% packet loss, time 303708ms </code></pre> <p>The second method <code>minikube service hello-minikube</code> also didn't worked because it was again giving the access url with minikube IP.</p> <pre><code>$ minikube service hello-minikube 🏃 Starting tunnel for service hello-minikube. 🎉 Opening service default/hello-minikube in default browser... 👉 **http://192.168.49.2:30080** ❗ Because you are using a Docker driver on linux, the terminal needs to be open to run it. </code></pre> <p>But this was actually working in previous Minikube versions, as it was actually exposing a host port to the service, and we could connect to the host port to access the service. 
It needed manual intervention, as the hostport access was available only while the <code>minikube service</code> command keeps running.</p> <p><a href="https://i.stack.imgur.com/4BOFm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4BOFm.png" alt="enter image description here" /></a></p> <p>Is there any way that I can pre-configure a port to access the service (nodePort), and access the service even if it is deployed in Minikube in WSL2?</p> <p><strong>Note:</strong></p> <p>I tried using other drivers from WSL like <code>--driver=none</code>. But that setup would be much more complicated because it has <code>systemd</code>, <code>conntrack</code> and other packages as dependencies, which WSL2 doesn't have currently.</p> <p>I also tried to set up a Virtualbox+vagrant Ubuntu box in Windows 10, installed docker and started minikube with the docker driver there. Everything works inside that VM, but I cannot access the services from the Windows host, as the minikube ip is a host-only IP address available inside that VM only.</p>
<p>Minikube in WSL2 with the docker driver creates a docker container named <code>minikube</code> when the <code>minikube start</code> command is executed. That container has some port mappings that help kubectl and clients connect to the server.</p> <p>Notice that <code>kubectl cluster-info</code> connects to one of those ports as server. (Normally, the control plane would be running at port 8443; here it is a random high port, which is a mapped one.)</p> <pre><code>$ kubectl cluster-info Kubernetes control plane is running at https://127.0.0.1:55757 CoreDNS is running at https://127.0.0.1:55757/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 9cc01654bd2f gcr.io/k8s-minikube/kicbase:v0.0.30 &quot;/usr/local/bin/entr…&quot; 7 minutes ago Up 7 minutes 127.0.0.1:55758-&gt;22/tcp, 127.0.0.1:55759-&gt;2376/tcp, 127.0.0.1:55756-&gt;5000/tcp, 127.0.0.1:55757-&gt;8443/tcp, 127.0.0.1:55760-&gt;32443/tcp minikube </code></pre> <p>To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.</p> <p>If you can provide a fixed nodePort to your app's service, then you can add a custom port mapping on minikube from that nodePort (of the minikube host/VM) to a hostPort (of WSL2). And then you can access the service with <code>localhost:hostPort</code>.</p> <p>For example, suppose you want to create a service with nodePort <code>30080</code>.</p> <p>In that case, make sure you start minikube with a custom port mapping that includes this node port:</p> <pre><code>$ minikube start --ports=127.0.0.1:30080:30080 </code></pre> <p><a href="https://i.stack.imgur.com/KHDAq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KHDAq.png" alt="enter image description here" /></a></p> <p>Now if you deploy the service with <code>nodePort=30080</code> you will be able to access it via <strong>http://localhost:30080/</strong>.</p> <p>There were issues like this in Minikube installation on MacOS. 
Here are some details about the workaround: <a href="https://github.com/kubernetes/minikube/issues/11193" rel="noreferrer">https://github.com/kubernetes/minikube/issues/11193</a></p>
<p>I need to install Grafana Loki with Prometheus in my Kubernetes cluster. So I followed the below to install them. It basically uses Helm to install it. Below is the command which I executed to install it.</p> <pre><code>helm upgrade --install loki grafana/loki-stack --set grafana.enabled=true,prometheus.enabled=true,prometheus.alertmanager.persistentVolume.enabled=false,prometheus.server.persistentVolume.enabled=false,loki.persistence.enabled=true,loki.persistence.storageClassName=standard,loki.persistence.size=5Gi -n monitoring --create-namespace </code></pre> <p>I followed the <a href="https://grafana.com/docs/loki/latest/installation/helm/" rel="nofollow noreferrer">official Grafana website</a> in this case.</p> <p>But when I execute the above helm command, I get the below error. In fact, I'm new to Helm.</p> <pre><code>Release &quot;loki&quot; does not exist. Installing it now. W0307 16:54:55.764184 1474330 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy &quot;loki-grafana&quot; in namespace &quot;&quot; exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key &quot;meta.helm.sh/release-name&quot; must equal &quot;loki&quot;: current value is &quot;loki-grafana&quot; </code></pre> <p>I don't see any Grafana chart installed.</p> <pre><code>helm list -A NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION cert-manager cert-manager 1 2021-11-26 13:07:26.103036078 +0000 UTC deployed cert-manager-v0.16.1 v0.16.1 ingress-nginx ingress-basic 1 2021-11-18 12:23:28.476712359 +0000 UTC deployed ingress-nginx-4.0.8 1.0.5 </code></pre>
<p>Well, I was able to get past my issue. The problem was the <code>PodSecurityPolicy</code>: the <code>loki-grafana</code> PSP left over from a previous release conflicted with the new install. I deleted the existing Grafana PodSecurityPolicy and it worked.</p>
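<p>A sketch of the cleanup — the PSP name <code>loki-grafana</code> comes from the error message in the question, so verify it first:</p> <pre><code># List PodSecurityPolicies and find the leftover one
kubectl get psp

# Delete the PSP left behind by the previous release
kubectl delete psp loki-grafana

# Then re-run the helm upgrade --install command
</code></pre>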