<p>Recently I got this warning message when building a <code>quarkus maven</code> project. I have tested with several later versions and I think this has to be something local to my environment.</p> <pre><code>[WARNING] Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring </code></pre> <p>And this results in a build failure ...</p> <pre><code>[error]: Build step io.quarkus.kubernetes.deployment.KubernetesDeployer#deploy threw an exception: java.lang.RuntimeException: </code></pre> <p>Although Kubernetes deployment was requested, it however cannot take place, because there was an error during communication with the API Server at <code>https://kubernetes.default.svc/</code></p> <p>Any ideas what could be wrong?</p>
<p>Like <a href="https://stackoverflow.com/users/11023331/tapani-rundgren">Tapani Rundgren</a> mentioned in the comments, the solution is to export the variables:</p> <pre><code>export KUBERNETES_MASTER=&lt;your server here&gt; export KUBERNETES_NAMESPACE=&lt;your namespace here&gt; </code></pre>
<p>I want to get specific output for a command like getting the nodeports and loadbalancer of a service. How do I do that?</p>
<p>The question is not very specific about what exactly should be retrieved from Kubernetes, but I think I can provide a good baseline.</p> <p>When you use Kubernetes, you are most probably using <code>kubectl</code> to interact with <code>kubeapi-server</code>.</p> <p>Some of the commands you can use to retrieve the information from the cluster:</p> <ul> <li><code>$ kubectl get RESOURCE --namespace NAMESPACE RESOURCE_NAME</code></li> <li><code>$ kubectl describe RESOURCE --namespace NAMESPACE RESOURCE_NAME</code></li> </ul> <hr /> <h3>Example:</h3> <p>Let's assume that you have a <code>Service</code> of type <code>LoadBalancer</code> (I've redacted some output to be more readable):</p> <ul> <li><code>$ kubectl get service nginx -o yaml</code></li> </ul> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx namespace: default spec: clusterIP: 10.2.151.123 externalTrafficPolicy: Cluster ports: - nodePort: 30531 port: 80 protocol: TCP targetPort: 80 selector: app: nginx sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: A.B.C.D </code></pre> <p>Getting a <code>nodePort</code> from this output could be done like this:</p> <ul> <li><code>kubectl get svc nginx -o jsonpath='{.spec.ports[].nodePort}'</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>30531 </code></pre> <p>Getting a <code>loadBalancer IP</code> from this output could be done like this:</p> <ul> <li><code>kubectl get svc nginx -o jsonpath=&quot;{.status.loadBalancer.ingress[0].ip}&quot;</code></li> </ul> <pre><code>A.B.C.D </code></pre> <p>You can also use <code>kubectl</code> with <code>custom-columns</code>:</p> <ul> <li><code>kubectl get service -o=custom-columns=NAME:metadata.name,IP:.spec.clusterIP</code></li> </ul> <pre class="lang-sh prettyprint-override"><code>NAME IP kubernetes 10.2.0.1 nginx 10.2.151.123 </code></pre> <hr /> <p>There are a lot of possible ways to retrieve data with <code>kubectl</code>, which you can read more about by following:</p> <ul> 
<li><code>kubectl get --help</code>:</li> </ul> <blockquote> <p>-o, --output='': Output format. One of: json|yaml|wide|name|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=... See <a href="http://kubernetes.io/docs/user-guide/kubectl-overview/#custom-columns" rel="nofollow noreferrer">custom columns</a>, <a href="http://golang.org/pkg/text/template/#pkg-overview" rel="nofollow noreferrer">golang template</a> and <a href="http://kubernetes.io/docs/user-guide/jsonpath" rel="nofollow noreferrer">jsonpath template</a>.</p> </blockquote> <ul> <li><em><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Kubectl: Cheatsheet: Formatting output</a></em></li> </ul> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer">Kubernetes.io: Docs: Reference: Kubectl: Overview</a></em></li> <li><em><a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Github.com: Kubernetes client: Python</a></em> - if you would like to retrieve this information with Python</li> <li><em><a href="https://stackoverflow.com/a/53669973/12257134">Stackoverflow.com: Answer: How to parse kubectl describe output and get the required field value</a></em></li> </ul>
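If you would rather post-process the output in a script (as the Python client link above suggests), the same fields can be pulled out of `kubectl get svc nginx -o json` with a few lines of standard-library Python. This is just a sketch: the JSON below is an abbreviated, illustrative version of the service shown earlier, not real cluster output.

```python
import json

# Abbreviated `kubectl get svc nginx -o json` output (illustrative values
# taken from the YAML example above)
svc_json = """
{
  "spec": {"ports": [{"nodePort": 30531, "port": 80}]},
  "status": {"loadBalancer": {"ingress": [{"ip": "A.B.C.D"}]}}
}
"""

svc = json.loads(svc_json)

# Equivalent of jsonpath '{.spec.ports[].nodePort}'
node_port = svc["spec"]["ports"][0]["nodePort"]

# Equivalent of jsonpath '{.status.loadBalancer.ingress[0].ip}'
lb_ip = svc["status"]["loadBalancer"]["ingress"][0]["ip"]

print(node_port, lb_ip)  # 30531 A.B.C.D
```

In practice you would feed the script with `kubectl get svc nginx -o json | python3 extract.py` and read the JSON from stdin instead of a hard-coded string.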
<p>I restarted the Prometheus pod and now the Prometheus pod is crashlooping. Found this error in the logs:</p> <pre><code>level=error ts=2021-06-09T09:27:29.066Z caller=main.go:758 err=&quot;opening storage failed: block dir: \&quot;/prometheus/01F6J0P4KBBWVJD2M8B1PE7C5E\&quot;: open /prometheus/01F6J0P4KBBWVJD2M8B1PE7C5E/meta.json: no such file or directory&quot; </code></pre> <p>Noticed that the <code>01F6J0P4KBBWVJD2M8B1PE7C5E</code> folder only has the chunks folder in it.</p> <p>Any idea why this occurs and is there a way to fix this?</p>
<p>So you have already found the related GitHub issues and asked there as well.</p> <ul> <li><p><a href="https://github.com/prometheus/prometheus/issues/7090" rel="noreferrer">Still seeing &quot;opening storage failed: block dir&quot; on more recent version of prometheus #7090</a></p> </li> <li><p><a href="https://github.com/prometheus/prometheus/issues/4058" rel="noreferrer">does not start up after corrupted meta.json file</a></p> </li> </ul> <p>It seems that currently the only way to fix the problem is to delete the <code>01F6J0P4KBBWVJD2M8B1PE7C5E</code> folder that contains an empty <code>meta.json</code> or nothing at all (<a href="https://github.com/prometheus/prometheus/issues/4058#issuecomment-426290403" rel="noreferrer">link</a>)</p>
<p>I want to patch (overwrite) a <strong>list</strong> in a <strong>kubernetes</strong> manifest with <strong>Kustomize</strong>. I am using the patchesStrategicMerge method. When I patch parameters which are not in a list, the patching works as expected - only the addressed parameters in patch.yaml are replaced, the rest is untouched. When I patch a list, the whole list is replaced.</p> <p><strong>How can I replace only specific items in the list while the rest of the items stay untouched?</strong></p> <p>I found these two resources:<br /> <a href="https://github.com/kubernetes-sigs/kustomize/issues/581" rel="noreferrer">https://github.com/kubernetes-sigs/kustomize/issues/581</a><br /> <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md" rel="noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md</a><br /> but wasn't able to derive the desired solution from them.</p> <p>Example code: orig-file.yaml</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: alertmanager-slack-config namespace: system-namespace spec: test: test other: other-stuff receivers: - name: default slackConfigs: - name: slack username: test-user channel: &quot;#alerts&quot; sendResolved: true apiURL: name: slack-webhook-url key: address </code></pre> <p>patch.yaml:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: alertmanager-slack-config namespace: system-namespace spec: test: brase-yourself receivers: - name: default slackConfigs: - name: slack username: Karl </code></pre> <p>kustomization.yaml</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - orig-file.yaml patchesStrategicMerge: - patch.yaml </code></pre> <p>What I get:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: 
alertmanager-slack-config namespace: system-namespace spec: other: other-stuff receivers: - name: default slackConfigs: - name: slack username: Karl test: brase-yourself </code></pre> <p>What I want:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: alertmanager-slack-config namespace: system-namespace spec: other: other-stuff receivers: - name: default slackConfigs: - name: slack username: Karl channel: &quot;#alerts&quot; sendResolved: true apiURL: name: slack-webhook-url key: address test: brase-yourself </code></pre>
<p>What you can do is to use <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/jsonpatch.md" rel="noreferrer">jsonpatch</a> instead of <code>patchesStrategicMerge</code>, so in your case:</p> <pre><code>cat &lt;&lt;EOF &gt;./kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - orig-file.yaml patches: - path: patch.yaml target: group: monitoring.coreos.com version: v1alpha1 kind: AlertmanagerConfig name: alertmanager-slack-config EOF </code></pre> <p>patch:</p> <pre><code>cat &lt;&lt;EOF &gt;./patch.yaml - op: replace path: /spec/receivers/0/slackConfigs/0/username value: Karl EOF </code></pre>
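To see why this patch only touches the one field, it helps to look at the semantics of the JSON Patch `replace` op (RFC 6902): the path `/spec/receivers/0/slackConfigs/0/username` walks down the document and overwrites only the final key, leaving siblings like `channel` intact. Below is a minimal Python sketch of that semantics; it is not how kustomize is implemented, just an illustration.

```python
def json_patch_replace(doc, path, value):
    """Minimal sketch of the JSON Patch 'replace' op (RFC 6902):
    walk the document along the path, then overwrite the final key."""
    keys = path.strip("/").split("/")
    target = doc
    for key in keys[:-1]:
        # numeric segments index into lists, others into dicts
        target = target[int(key)] if isinstance(target, list) else target[key]
    last = keys[-1]
    if isinstance(target, list):
        target[int(last)] = value
    else:
        target[last] = value
    return doc

# The spec from the question, abbreviated
spec = {"receivers": [{"name": "default",
                       "slackConfigs": [{"name": "slack",
                                         "username": "test-user",
                                         "channel": "#alerts"}]}]}
json_patch_replace(spec, "/receivers/0/slackConfigs/0/username", "Karl")
# username is now "Karl"; channel is untouched
```

This is exactly why the jsonpatch-style `patches:` entry achieves the "replace one item, keep the rest" behavior that `patchesStrategicMerge` cannot for this CRD.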
<p>What is the purpose of <code>args</code> if one can specify all arguments using <code>command</code> in a Kubernetes manifest file? For example, I can use the below syntax, which totally negates the usage of <code>args</code>.</p> <pre><code>command: [ &quot;bin/bash&quot;, &quot;-c&quot;, &quot;mycommand&quot; ] </code></pre> <p>or also</p> <pre><code>command: - &quot;bin/bash&quot; - &quot;-c&quot; - &quot;mycommand&quot; </code></pre>
<p>The main reason to use <code>args:</code> instead of <code>command:</code> is if the container has a specific entrypoint directive that you don't want to change. For example if the Dockerfile has <code>ENTRYPOINT [&quot;/myapp&quot;]</code> you might put <code>args: [--log-level=debug]</code> to add that one argument without changing the path to the binary. In many cases it isn't relevant though and you just use <code>command:</code> to do it all at once.</p>
<p>I am implementing a Kubernetes based solution where I am autoscaling a deployment based on a dynamic metric. I am running this deployment with autoscaling capabilities against a workload for 15 minutes. During this time, pods of this deployment are created and deleted dynamically as a result of the deployment autoscaling decisions.</p> <p>I am interested in saving (for later inspection) the logs of each of the dynamically created (and potentially deleted) pods occurring in the course of the autoscaling experiment.</p> <p>If the deployment has a label like <strong>app=myapp</strong>, can I run the below command to store all the logs of my deployment?</p> <pre><code>kubectl logs -l app=myapp &gt; myfile.txt </code></pre> <p>Any other more reliable suggestion (without the overhead of a manual central logging solution)? I am running on Google Kubernetes Engine (GKE). Does GKE keep the logs of deleted pods?</p> <p>Thank you.</p>
<p>Yes, by default GKE sends logs for <em>all</em> pods to Stackdriver and you can view/query them there.</p>
<p>I am currently deploying <a href="https://docs.openfaas.com/" rel="nofollow noreferrer">openfaas</a> on my local virtual machine's <a href="https://kubernetes.io/" rel="nofollow noreferrer">kubernetes</a> cluster. I found that the time zone of the container started after publishing the function is inconsistent with the host machine. How should I solve this problem?</p> <pre class="lang-sh prettyprint-override"><code>[root@k8s-node-1 ~]# date # Host time 2021εΉ΄ 06月 09ζ—₯ ζ˜ŸζœŸδΈ‰ 11:24:40 CST [root@k8s-node-1 ~]# docker exec -it 5410c0b41f7a date # Container time Wed Jun 9 03:24:40 UTC 2021 </code></pre>
<p>As <strong>@coderanger</strong> pointed out in the comments section, the timezone difference is not related to <code>OpenFaaS</code>.<br /> It depends on the image you are using, most of the images use <code>UTC</code> timezone. Normally this shouldn't be a problem, but in some special cases you may want to change this timezone.</p> <p>As described in this <a href="https://bobcares.com/blog/change-time-in-docker-container/" rel="nofollow noreferrer">article</a>, you can use the <code>TZ</code> environment variable to set the timezone of a container (there are also other ways to change the timezone).</p> <p>If you have your own <code>Dockerfile</code>, you can use the <a href="https://docs.docker.com/engine/reference/builder/#env" rel="nofollow noreferrer">ENV</a> instruction to set this variable:<br /> <strong>NOTE:</strong> The <code>tzdata</code> package has to be installed in the container for setting the <code>TZ</code> variable.</p> <pre><code>$ cat Dockerfile FROM nginx:latest RUN apt-get install -y tzdata ENV TZ=&quot;Europe/Warsaw&quot; $ docker build -t mattjcontainerregistry/web-app-1 . $ docker push mattjcontainerregistry/web-app-1 $ kubectl run time-test --image=mattjcontainerregistry/web-app-1 pod/time-test created $ kubectl exec -it time-test -- bash root@time-test:/# date Wed Jun 9 17:22:03 CEST 2021 root@time-test:/# echo $TZ Europe/Warsaw </code></pre>
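The effect of the <code>TZ</code> variable can be reproduced locally with a small Python sketch: the same epoch instant renders as different wall-clock times depending on <code>TZ</code>, which is exactly what happens between the host and the container. This requires a Unix system with tzdata installed, as noted above.

```python
import os
import time

def wall_clock_in(tz):
    """Render the Unix epoch (one fixed instant) as local time under a
    given TZ value -- the same variable a container process reads."""
    os.environ["TZ"] = tz
    time.tzset()  # re-read TZ (Unix only)
    return time.strftime("%H:%M %Z", time.localtime(0))

print(wall_clock_in("UTC"))            # 00:00 UTC
print(wall_clock_in("Europe/Warsaw"))  # 01:00 CET -- same instant, shifted
```

So the host and the container in the question are not out of sync; they are displaying one instant (11:24:40 CST vs 03:24:40 UTC) under different `TZ`/tzdata settings.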
<p>I have a microservice written in Haskell, the compiler is 8.8.3. I built it with <code>--profile</code> option and ran it with <code>+RTS -p</code>. It is running about 30 minutes, there is <code>&lt;my-service&gt;.prof</code> file but it is empty (literally 0 bytes). Previously I did it on my local machine and I stop the service with CTRL-C and after the exit it produced <code>&lt;my-service&gt;.prof</code> file which was not empty.</p> <p>So, I have 2 questions:</p> <ol> <li>How to collect profiling information when a Haskell microservice runs under Kubernetes in the most correct way (to be able to read this .prof file)?</li> <li>How to pass run time parameter to Haskell run-time where to save this .prof file (maybe some workaround if no such an option), for 8.8.3 - because I have feeling that the file may be big and I can hit disk space problem. Also I don't know how to flush/read/get this file while microservice is running. I suppose if I will be able to pass full path for this .prof file then I can save it somewhere else on some permanent volume, to &quot;kill&quot; the service with <code>INT</code> signal for example, and to get this .prof file from the volume.</li> </ol> <p>What is the usual/convenient way to get this .prof file when the service runs in Kubernetes?</p> <p>PS. I saw some relevant options in the documentation for newest versions, but I am with 8.8.3</p>
<p>I think the only way to do live profiling with GHC is to use the eventlog. You can insert <code>Debug.Trace.traceEvent</code> into your code at the functions that you want to measure and then compile with <code>-eventlog</code> and run with <code>+RTS -l -ol &lt;output-file-name&gt; -RTS</code>. You can use <a href="https://hackage.haskell.org/package/ghc-events-analyze" rel="nofollow noreferrer"><code>ghc-events-analyze</code></a> to analyze and visualize the produced eventlog.</p> <p>The official eventlog documentation for GHC 8.8.3 is <a href="https://downloads.haskell.org/ghc/8.8.3/docs/html/users_guide/runtime_control.html#tracing" rel="nofollow noreferrer">here</a>.</p>
<p>Is there any way to profile a runtime program written in golang? In my case, kubelet shows steady increase in memory and I want to try memory profiling it.</p>
<p>Use <code>import _ &quot;net/http/pprof&quot;</code> in your application file and use <code>go tool pprof</code> commands or the web UI to profile your application.</p> <p>References:</p> <ul> <li><a href="https://golang.org/pkg/net/http/pprof/" rel="nofollow noreferrer">golang.org/pkg/net/http/pprof</a></li> <li><a href="https://www.freecodecamp.org/news/how-i-investigated-memory-leaks-in-go-using-pprof-on-a-large-codebase-4bec4325e192/" rel="nofollow noreferrer">how-i-investigated-memory-leaks-in-go-using-pprof</a></li> <li><a href="https://jvns.ca/blog/2017/09/24/profiling-go-with-pprof/" rel="nofollow noreferrer">profiling-go-with-pprof</a></li> </ul>
<p>I am running a KEDA enabled Queue Triggered Azure Function for which <code>terminationGracePeriodSeconds</code> has been set to <code>900</code> seconds (15 minutes). The function execution time varies between 30 seconds and 600 seconds (10 minutes). So grace period of 15 minutes seems to be reasonable.</p> <p>Is there a possibility that after the SIGTERM is issued, the pod takes up a new request after finishing the current one if there is still some grace period left.</p> <p>I have come across situations where the function execution stops abruptly, leaving the message unprocessed.</p> <p>Here are the specifications in the deployment:</p> <pre><code>spec: containers: - image: ####.azurecr.io/azurefunctionqueuetriggeredk8s name: queuetrigcontainer ports: - containerPort: 80 resources: requests: memory: &quot;500Mi&quot; cpu: &quot;700m&quot; limits: memory: &quot;600Mi&quot; cpu: &quot;700m&quot; nodeSelector: agentpool: testuserpool terminationGracePeriodSeconds: 900 </code></pre> <p>Is there any plausible explanation for this behavior? How this can be resolved?</p> <p>EDIT: The container logs show &quot;Application is shutting down...&quot;</p> <p><a href="https://i.stack.imgur.com/pGg5a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pGg5a.png" alt="enter image description here" /></a></p>
<p>This looks like an event that is simply not handled on the code side, and not necessarily something related to your configuration. The most probable explanation is that SIGTERM is not handled and the function keeps processing events, no matter what.</p> <p>There are reports here on GitHub (<a href="https://github.com/Azure/azure-functions-host/issues/5365" rel="nofollow noreferrer">https://github.com/Azure/azure-functions-host/issues/5365</a>) about much the same situation with a timeout.</p> <p>If you have the chance, you could handle the <code>ApplicationStopping</code> and <code>ApplicationStopped</code> events on the <code>IApplicationLifetime</code> object and stop the function from processing if the app is in a stopping state.</p>
<blockquote> <p><em>How to change the Docker <code>ENTRYPOINT</code> in a Kubernetes deployment, without changing also the Docker <code>CMD</code>?</em></p> </blockquote> <p>In the Pod I would do</p> <pre><code>image: &quot;alpine&quot; entrypoint: &quot;/myentrypoint&quot; </code></pre> <p>but this overwrites either <code>ENTRYPOINT</code> <em><strong>and</strong></em> the <code>CMD</code> from the <code>Dockerfile</code>.</p> <p>The <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes" rel="nofollow noreferrer">documentation</a> doesn't mention it, but seems like a big use-case missed.</p>
<p>That's not a thing.</p> <ul> <li><code>ENTRYPOINT</code> (in Dockerfile) is equal to <code>command:</code> (in PodSpec)</li> <li><code>CMD</code> (in Dockerfile) equals <code>args:</code> (in PodSpec)</li> </ul> <p>So just override <code>command</code> but not <code>args</code>.</p>
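The override rules from the Kubernetes documentation's command/args table can be summarized in a tiny sketch (hypothetical helper, shown for illustration): `command` replaces `ENTRYPOINT`, `args` replaces `CMD`, and setting `command` without `args` drops the image's `CMD` as well.

```python
def effective_argv(entrypoint, cmd, command=None, args=None):
    """Sketch of how Kubernetes combines the image's ENTRYPOINT/CMD
    with the PodSpec's command/args:
      - command overrides ENTRYPOINT (and, if args is unset, CMD is ignored)
      - args overrides CMD
      - if neither is set, the image's ENTRYPOINT + CMD run as-is."""
    if command is not None:
        return command + (args or [])
    return (entrypoint or []) + (args if args is not None else (cmd or []))

# Image built with ENTRYPOINT ["/myapp"] and CMD ["--help"]:
print(effective_argv(["/myapp"], ["--help"]))
# -> ['/myapp', '--help']
print(effective_argv(["/myapp"], ["--help"], args=["--log-level=debug"]))
# -> ['/myapp', '--log-level=debug']
```

This is why overriding only `command:` in a PodSpec silently discards the image's `CMD`.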
<p>I am trying to have 1 redis master with 2 redis replicas tied to a 3 Quorum Sentinel on Kubernetes. I am very new to Kubernetes.</p> <p>My initial plan was to have the master running on a pod tied to 1 Kubernetes SVC and the 2 replicas running on their own pods tied to another Kubernetes SVC. Finally, the 3 Sentinel pods will be tied to their own SVC. The replicas will be tied to the master SVC (because without svc, ip will change). The sentinel will also be configured and tied to master and replica SVCs. But I'm not sure if this is feasible because when master pod crashes, how will one of the replica pods move to the master SVC and become the master? Is that possible?</p> <p>The second approach I had was to wrap redis pods in a replication controller and the same for sentinel as well. However, I'm not sure how to make one of the pods master and the others replicas with a replication controller.</p> <p>Would any of the two approaches work? If not, is there a better design that I can adopt? Any leads would be appreciated.</p>
<p>You can deploy Redis Sentinel using the <a href="https://helm.sh/" rel="noreferrer">Helm</a> package manager and the <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis" rel="noreferrer">Redis Helm Chart</a>.<br /> If you don't have <code>Helm3</code> installed yet, you can use this <a href="https://helm.sh/docs/intro/install/" rel="noreferrer">documentation</a> to install it.</p> <p>I will provide a few explanations to illustrate how it works.</p> <hr /> <p>First we need to get the <code>values.yaml</code> file from the Redis Helm Chart to customize our installation:</p> <pre><code>$ wget https://raw.githubusercontent.com/bitnami/charts/master/bitnami/redis/values.yaml </code></pre> <p>We can configure a lot of parameters in the <code>values.yaml</code> file, but for demonstration purposes I only enabled Sentinel and set the redis password:<br /> <strong>NOTE:</strong> For a list of parameters that can be configured during installation, see the <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis#parameters" rel="noreferrer">Redis Helm Chart Parameters</a> documentation.</p> <pre><code># values.yaml global: redis: password: redispassword ... replica: replicaCount: 3 ... sentinel: enabled: true ... </code></pre> <p>Then we can deploy Redis using the configuration from the <code>values.yaml</code> file:<br /> <strong>NOTE:</strong> It will deploy a three Pod cluster (one master and two slaves) managed by the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSets</a> with a <code>sentinel</code> container running inside each Pod.</p> <pre><code>$ helm install redis-sentinel bitnami/redis --values values.yaml </code></pre> <p>Be sure to carefully read the <em><strong>NOTES</strong></em> section of the chart installation output. It contains a lot of useful information (e.g. 
how to connect to your database from outside the cluster)</p> <p>After installation, check redis <code>StatefulSet</code>, <code>Pods</code> and <code>Services</code> (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">headless service</a> can be used for internal access):</p> <pre><code>$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP redis-sentinel-node-0 2/2 Running 0 2m13s 10.4.2.21 redis-sentinel-node-1 2/2 Running 0 86s 10.4.0.10 redis-sentinel-node-2 2/2 Running 0 47s 10.4.1.10 $ kubectl get sts NAME READY AGE redis-sentinel-node 3/3 2m41s $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE redis-sentinel ClusterIP 10.8.15.252 &lt;none&gt; 6379/TCP,26379/TCP 2m redis-sentinel-headless ClusterIP None &lt;none&gt; 6379/TCP,26379/TCP 2m </code></pre> <p>As you can see, each <code>redis-sentinel-node</code> Pod contains the <code>redis</code> and <code>sentinel</code> containers:</p> <pre><code>$ kubectl get pods redis-sentinel-node-0 -o jsonpath={.spec.containers[*].name} redis sentinel </code></pre> <p>We can check the <code>sentinel</code> container logs to find out which <code>redis-sentinel-node</code> is the master:</p> <pre><code>$ kubectl logs -f redis-sentinel-node-0 sentinel ... 1:X 09 Jun 2021 09:52:01.017 # Configuration loaded 1:X 09 Jun 2021 09:52:01.019 * monotonic clock: POSIX clock_gettime 1:X 09 Jun 2021 09:52:01.019 * Running mode=sentinel, port=26379. 1:X 09 Jun 2021 09:52:01.026 # Sentinel ID is 1bad9439401e44e749e2bf5868ad9ec7787e914e 1:X 09 Jun 2021 09:52:01.026 # +monitor master mymaster 10.4.2.21 6379 quorum 2 ... 1:X 09 Jun 2021 09:53:21.429 * +slave slave 10.4.0.10:6379 10.4.0.10 6379 @ mymaster 10.4.2.21 6379 1:X 09 Jun 2021 09:53:21.435 * +slave slave 10.4.1.10:6379 10.4.1.10 6379 @ mymaster 10.4.2.21 6379 ... 
</code></pre> <p>As you can see from the logs above, the <code>redis-sentinel-node-0</code> Pod is the master and the <code>redis-sentinel-node-1</code> &amp; <code>redis-sentinel-node-2</code> Pods are slaves.</p> <p>For testing, let's delete the master and check if sentinel will switch the master role to one of the slaves:</p> <pre><code> $ kubectl delete pod redis-sentinel-node-0 pod &quot;redis-sentinel-node-0&quot; deleted $ kubectl logs -f redis-sentinel-node-1 sentinel ... 1:X 09 Jun 2021 09:55:20.902 # Executing user requested FAILOVER of 'mymaster' ... 1:X 09 Jun 2021 09:55:22.666 # +switch-master mymaster 10.4.2.21 6379 10.4.1.10 6379 ... 1:X 09 Jun 2021 09:55:50.626 * +slave slave 10.4.0.10:6379 10.4.0.10 6379 @ mymaster 10.4.1.10 6379 1:X 09 Jun 2021 09:55:50.632 * +slave slave 10.4.2.22:6379 10.4.2.22 6379 @ mymaster 10.4.1.10 6379 </code></pre> <p>A new master (<code>redis-sentinel-node-2</code> <code>10.4.1.10</code>) has been selected, so everything works as expected.</p> <p>Additionally, we can display more information by connecting to one of the Redis nodes:</p> <pre><code>$ kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=redispassword --image docker.io/bitnami/redis:6.2.1-debian-10-r47 --command -- sleep infinity pod/redis-client created $ kubectl exec --tty -i redis-client --namespace default -- bash I have no name!@redis-client:/$ redis-cli -h redis-sentinel-node-1.redis-sentinel-headless -p 6379 -a $REDIS_PASSWORD Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe. redis-sentinel-node-1.redis-sentinel-headless:6379&gt; info replication # Replication role:slave master_host:10.4.1.10 master_port:6379 master_link_status:up ... </code></pre>
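If you want to spot failovers programmatically from those sentinel logs, the <code>+switch-master</code> line carries everything needed: the master name, the old address, and the new one. A small parsing sketch (this is just log scraping for illustration, not a redis API):

```python
def parse_switch_master(log_line):
    """Extract old/new master addresses from a sentinel
    '+switch-master' log line like the one shown above."""
    # Everything after the marker: "<name> <old-ip> <old-port> <new-ip> <new-port>"
    fields = log_line.split("+switch-master", 1)[1].split()
    name, old_ip, old_port, new_ip, new_port = fields[:5]
    return {"master": name,
            "old": f"{old_ip}:{old_port}",
            "new": f"{new_ip}:{new_port}"}

line = "1:X 09 Jun 2021 09:55:22.666 # +switch-master mymaster 10.4.2.21 6379 10.4.1.10 6379"
print(parse_switch_master(line))
# {'master': 'mymaster', 'old': '10.4.2.21:6379', 'new': '10.4.1.10:6379'}
```

In a real client you would instead ask sentinel directly (e.g. `SENTINEL get-master-addr-by-name mymaster`) rather than scrape logs; the parser above is only a quick way to audit failover history from captured output.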
<p>I am trying to access a secret on GCP Secrets and I get the following error :</p> <pre><code>in get_total_results &quot;api_key&quot;: get_credentials(&quot;somekey&quot;).get(&quot;somekey within key&quot;), File &quot;/helper.py&quot;, line 153, in get_credentials response = client.access_secret_version(request={&quot;name&quot;: resource_name}) File &quot;/usr/local/lib/python3.8/site-packages/google/cloud/secretmanager_v1/services/secret_manager_service/client.py&quot;, line 1136, in access_secret_version response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) File &quot;/usr/local/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py&quot;, line 145, in __call__ return wrapped_func(*args, **kwargs) File &quot;/usr/local/lib/python3.8/site-packages/google/api_core/retry.py&quot;, line 285, in retry_wrapped_func return retry_target( File &quot;/usr/local/lib/python3.8/site-packages/google/api_core/retry.py&quot;, line 188, in retry_target return target() File &quot;/usr/local/lib/python3.8/site-packages/google/api_core/grpc_helpers.py&quot;, line 69, in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) File &quot;&lt;string&gt;&quot;, line 3, in raise_from google.api_core.exceptions.PermissionDenied: 403 Request had insufficient authentication scopes. 
</code></pre> <p>The code is fairly simple:</p> <pre><code>def get_credentials(secret_id): project_id = os.environ.get(&quot;PROJECT_ID&quot;) resource_name = f&quot;projects/{project_id}/secrets/{secret_id}/versions/1&quot; client = secretmanager.SecretManagerServiceClient() response = client.access_secret_version(request={&quot;name&quot;: resource_name}) secret_string = response.payload.data.decode(&quot;UTF-8&quot;) secret_dict = json.loads(secret_string) return secret_dict </code></pre> <p>So, what I have is a cloud function, which is deployed using Triggers, and uses a service account which has the Owner role.</p> <p>The cloud function triggers a Kubernetes Job and creates a container, which downloads a repo inside the container and executes it.</p> <p>Dockerfile is:</p> <pre><code>FROM gcr.io/project/repo:latest FROM python:3.8-slim-buster COPY . /some_dir WORKDIR /some_dir COPY --from=0 ./repo /a_repo RUN pip install -r requirements.txt &amp; pip install -r a_repo/requirements.txt ENTRYPOINT [&quot;python3&quot; , &quot;main.py&quot;] </code></pre>
<p>The GCE instance might not have the correct authentication scope.</p> <p>From: <a href="https://developers.google.com/identity/protocols/oauth2/scopes#secretmanager" rel="noreferrer">https://developers.google.com/identity/protocols/oauth2/scopes#secretmanager</a></p> <p><code>https://www.googleapis.com/auth/cloud-platform</code> is the required scope.</p> <p>When creating the GCE instance you need to select the option that gives the instance the correct scope to call out to cloud APIs: <a href="https://i.stack.imgur.com/CQFjz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CQFjz.png" alt="from the cloud console" /></a></p>
<p>I'm running into an issue with an nginx ingress controller (ingress-nginx v0.44.0) on EKS where the X-Forwarded-* headers are set to the kubernetes worker node the controller pod is running on as opposed to the details of the request of the actual user hitting the controller itself. As we're terminating our SSL on the ingress controller this means the 'X-Forwarded-Proto' is set to 'http' instead of 'https' which causes issues on the application pods.</p> <p>I deployed a <a href="https://github.com/brndnmtthws/nginx-echo-headers" rel="nofollow noreferrer">test pod</a> which returns the headers it received to confirm the issue and I can see these headers having been received:</p> <pre><code>X-Forwarded-For: &lt;ip of the eks worker node&gt; X-Forwarded-Host: foo.bar.net X-Forwarded-Port: 8000 X-Forwarded-Proto: http </code></pre> <p>I was expecting these though:</p> <pre><code>X-Forwarded-For: &lt;ip of the origin of the original request&gt; X-Forwarded-Host: foo.bar.net X-Forwarded-Port: 443 X-Forwarded-Proto: https </code></pre> <p>Now, we do have an old legacy cluster running an older nginx ingress controller (nginx-ingress v0.34.1) which does actually behave as I expected, but I'm struggling to find how this has been configured to make it do this correctly. I did notice that the nginx.conf of this controller contains the 'full_x_forwarded_proto' variable identically as <a href="https://stackoverflow.com/questions/21230918/nginx-scheme-variable-behind-load-balancer/21911864#21911864">described here</a> but I can't find any place where this is configured as in a configmap or similar.</p> <pre><code>map $http_x_forwarded_proto $full_x_forwarded_proto { default $http_x_forwarded_proto; &quot;&quot; $scheme; } </code></pre> <p>Does anybody have any suggestions how I can configure nginx to send the correct 'X-Forwarded-*' headers?</p>
<p>It depends a lot on the exact networking setup in front of Nginx. By default, Kubernetes routes all external connections through the kube-proxy mesh which hides the true client IP. You also might have an AWS ELB of some kind in front of that which also can hide the client IP depending on settings.</p> <p>For the first part, see <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer</a> (tl;dr set <code>externalTrafficPolicy: Local</code>) but for the second you'll have to look at your specific load balancer setup.</p>
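Once the proxy chain is fixed, each hop appends to <code>X-Forwarded-For</code>, so the left-most entry is the original client. A sketch of what the application pods could do with the header (standard caveat: only trust it when a trusted proxy in front of you sets or sanitizes it):

```python
def client_ip(x_forwarded_for, remote_addr):
    """Pick the original client IP: the left-most X-Forwarded-For entry
    if the header is present, else the direct socket peer address."""
    if x_forwarded_for:
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

# Header as rewritten by two proxy hops in front of the app
print(client_ip("203.0.113.7, 10.0.12.3", "10.0.12.3"))  # 203.0.113.7
print(client_ip("", "10.0.12.3"))                        # 10.0.12.3
```

The same left-most-entry logic is what `externalTrafficPolicy: Local` plus a correctly configured load balancer preserves end to end.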
<p>When I run <code>sudo kubeadm init</code> I get the following output:</p> <pre><code>I0609 02:20:26.963781 3600 version.go:252] remote version is much newer: v1.21.1; falling back to: stable-1.18 W0609 02:20:27.069495 3600 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] [init] Using Kubernetes version: v1.18.19 [preflight] Running pre-flight checks error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR Port-10259]: Port 10259 is in use [ERROR Port-10257]: Port 10257 is in use [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists [ERROR Port-10250]: Port 10250 is in use [ERROR Port-2379]: Port 2379 is in use [ERROR Port-2380]: Port 2380 is in use [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty [preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=... To see the stack trace of this error execute with --v=5 or higher </code></pre>
<p>Hi and welcome to Stack Overflow.</p> <p><strong>&quot;Port in use&quot;</strong> means that there's a process running that is already using that port, so you need to stop that process. Since you already ran <strong>kubeadm init</strong> once, it has already changed a number of things on the node.</p> <p>First run <strong>kubeadm reset</strong> to undo all of the changes from the first time you ran it.</p> <p>Then run <strong>systemctl restart kubelet</strong>.</p> <p>Finally, when you run <strong>kubeadm init</strong> you should no longer get the error.</p> <p><strong>If, even after following the steps above, you still get this error:</strong></p> <pre><code>[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` </code></pre> <p>then remove the etcd directory (/var/lib/etcd) before you run <strong>kubeadm init</strong> again.</p> <p><strong>Note:</strong></p> <ol> <li><p>This <a href="https://www.edureka.co/community/19089/error-while-doing-kubernetes-init-command" rel="nofollow noreferrer">solution</a> worked for other users.</p> </li> <li><p>The warning itself is not an issue; it only says that kubeadm no longer validates the KubeletConfiguration and KubeProxyConfiguration that it feeds to the kubelet and kube-proxy components.</p> </li> </ol>
<p>I have a cluster that has numerous services running as pods from which I want to pull logs with fluentd. <strong>All</strong> services show logs when doing <code>kubectl logs service</code>. However, some logs don't show up in those folders:</p> <ul> <li>/var/log</li> <li>/var/log/containers</li> <li>/var/log/pods</li> </ul> <p>although the other containers are there. The containers that ARE there are created as a Cronjob, or as a Helm chart, like a MongoDB installation.</p> <p>The containers that aren't logging are created by me with a Deployment file like so:</p> <pre><code>kind: Deployment metadata: namespace: {{.Values.global.namespace | quote}} name: {{.Values.serviceName}}-deployment spec: replicas: {{.Values.replicaCount}} selector: matchLabels: app: {{.Values.serviceName}} template: metadata: labels: app: {{.Values.serviceName}} annotations: releaseTime: {{ dateInZone &quot;2006-01-02 15:04:05Z&quot; (now) &quot;UTC&quot;| quote }} spec: containers: - name: {{.Values.serviceName}} # local: use skaffold, dev: use passed tag, test: use released version image: {{ .Values.image }} {{- if (eq .Values.global.env &quot;dev&quot;) }}:{{ .Values.imageConfig.tag}}{{ end }} imagePullPolicy: {{ .Values.global.imagePullPolicy }} envFrom: - configMapRef: name: {{.Values.serviceName}}-config {{- if .Values.resources }} resources: {{- if .Values.resources.requests }} requests: memory: {{.Values.resources.requests.memory}} cpu: {{.Values.resources.requests.cpu}} {{- end }} {{- if .Values.resources.limits }} limits: memory: {{.Values.resources.limits.memory}} cpu: {{.Values.resources.limits.cpu}} {{- end }} {{- end }} imagePullSecrets: - name: {{ .Values.global.imagePullSecret }} restartPolicy: {{ .Values.global.restartPolicy }} {{- end }} </code></pre> <p>and a Dockerfile CMD like so: <code>CMD [&quot;node&quot;, &quot;./bin/www&quot;]</code></p> <p>One assumption might be that the CMD doesn't pipe to STDOUT, but why would the logs show up in <code>kubectl logs</code> 
then?</p>
<p>This is how I would proceed to find out where a container is logging:</p> <ol> <li><p>Identify the node on which the Pod is running with:</p> <pre><code>kubectl get pod pod-name -owide </code></pre> </li> <li><p>SSH on that node, you can check which logging driver is being used by the node with:</p> <pre><code>docker info | grep -i logging </code></pre> <p>if the output is <code>json-file</code>, then the logs are being written to file as expected. If there is something different, then it may depends on what the driver do (there are many drivers, they could write to <code>journald</code> for example, or other options)</p> </li> <li><p>If the logging driver writes to file, you can check the current output for a specific Pod by knowing the container id of that Pod, to do so, on a control-plane node:</p> <pre><code>kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}' </code></pre> <p>(if there are more containers in the same pod, the index to use may vary, depending on which container you want to inspect)</p> </li> <li><p>With the id extracted, which will be something like <code>docker://f834508490bd2b248a2bbc1efc4c395d0b8086aac4b6ff03b3cc8fd16d10ce2c</code>, you can inspect the container with docker, on the node on which the container is running. Just remove the <code>docker://</code> part from the id, SSH again on the node you identified before, then do a:</p> <pre><code>docker inspect container-id | grep -i logpath </code></pre> </li> </ol> <p>Which should output where the container is actively writing its logs to file.</p> <hr /> <p>In my case, the particular container I tried this procedure on, is currently logging into:</p> <pre><code>/var/lib/docker/containers/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63/289271086d977dc4e2e0b80cc28a7a6aca32c888b7ea5e1b5f24b28f7601ff63-json.log </code></pre>
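<p>Once you have the <code>docker inspect</code> output, the <code>LogPath</code> field can also be pulled out programmatically instead of grepping. A minimal Python sketch, assuming the node uses the <code>json-file</code> logging driver (the sample below is a truncated, made-up excerpt of real inspect output):</p>

```python
import json

def log_path(inspect_output: str) -> str:
    """Extract the LogPath field from `docker inspect <container-id>` output.

    `docker inspect` prints a JSON array with one object per container;
    with the json-file driver, LogPath points at the active log file.
    """
    return json.loads(inspect_output)[0]["LogPath"]

# Truncated sample of what `docker inspect` returns for one container:
sample = json.dumps([{
    "Id": "289271086d97",
    "LogPath": "/var/lib/docker/containers/289271086d97/289271086d97-json.log",
}])
print(log_path(sample))  # /var/lib/docker/containers/289271086d97/289271086d97-json.log
```

<p>On a real node you would feed it the output of <code>docker inspect container-id</code> instead of the sample string.</p>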
<p>I want to get pods with the labels</p> <p>app=vovo component=db</p> <p>When I get a pod with labels app=vovo and component=db, I can get the result with the command below.</p> <pre><code>kubectl get pod -l app=vovo,component=db </code></pre> <p>However, I want to get the result for</p> <p>app=vovo or component=db</p> <p>How can I get that result with one kubectl command?</p>
<p><code>OR</code> operations for label selection are not supported, as per the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>Caution: For both equality-based and set-based conditions there is no logical OR (||) operator. Ensure your filter statements are structured accordingly</p> </blockquote> <p>Here is a close <strong>hack</strong> you could do:</p> <pre><code>kubectl get pod -l app=vovo &amp;&amp; kubectl get pod -l component=db --no-headers </code></pre> <p>This will run two <code>kubectl</code> queries for the two different labels.</p>
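<p>To make the semantics concrete: the OR result is simply the union of the two single-label selections, de-duplicated by pod name. A small illustrative Python sketch with hypothetical pod data (this is not a kubectl feature, just a model of what the two-query hack computes):</p>

```python
# Hypothetical pods and their labels, standing in for `kubectl get pod` results.
pods = [
    {"name": "pod-a", "labels": {"app": "vovo", "component": "web"}},
    {"name": "pod-b", "labels": {"app": "other", "component": "db"}},
    {"name": "pod-c", "labels": {"app": "vovo", "component": "db"}},
]

def matches(pod, **selector):
    """Equality-based selector: every key=value must match the pod's labels."""
    return all(pod["labels"].get(k) == v for k, v in selector.items())

# app=vovo OR component=db == union of the two single-selector queries,
# which is why running two kubectl commands gets you the same set (minus dedup).
union = {p["name"] for p in pods if matches(p, app="vovo") or matches(p, component="db")}
print(sorted(union))  # ['pod-a', 'pod-b', 'pod-c']
```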
<p>I'm working on a new idea for which I've created a setup as follows on Azure Kubernetes:</p> <ol> <li>1 cluster</li> <li>1 node pool in said cluster</li> <li>1 deployment which creates 2 pods in the pool</li> <li>1 load balancer service balancing requests between the 2 pods</li> </ol> <p>I'm trying to submit a JSON request to the load balancer from outside the cluster with an AKS IP, on which I encounter 502 Bad Gateway issues.</p> <p>This is my deployment file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: deployment-1 spec: selector: matchLabels: app: deployment-1 replicas: 2 template: metadata: labels: app: deployment-1 spec: containers: - name: api-1 image: dummy imagePullPolicy: Always ports: - containerPort: 5000 nodeSelector: agentpool: analytics1 --- apiVersion: v1 kind: Service metadata: name: service-1 annotations: service.beta.kubernetes.io/azure-dns-label-name: dummy service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot; spec: type: LoadBalancer ports: - port: 5000 selector: app: deployment-1 </code></pre> <p>Checking if services are running:</p> <pre><code>LAMU02Y36ZHJG5J% kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cloud-api-v4-svc LoadBalancer 10.0.63.65 10.240.0.5 80:30054/TCP 37d kubernetes ClusterIP 10.0.0.1 &lt;none&gt; 443/TCP 47d service-1 LoadBalancer 10.0.55.129 10.240.0.9 5000:30507/TCP 41h </code></pre> <p>I use this LoadBalancer IP (10.240.0.9) to create an ingress service as below.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-1 annotations: kubernetes.io/ingress.class: addon-http-application-routing spec: rules: - host: service-1.a3333fc09cb44a6b86b8.eastus.aksapp.io - http: paths: - path: / pathType: Prefix backend: service: name: service-1 port: number: 5000 </code></pre> <p>And this is after the ingress is created.</p> <pre><code>LAMU02Y36ZHJG5J% kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE ingress-1 &lt;none&gt; service-1.a333fc09cb44a6b86b8.eastus.aksapp.io 20.84.1.20 80 65m </code></pre> <p>So I think things are good, and then I go ahead and create my AKS application gateway so it's accessible from any IP. This is what I specified for the AKS App Gateway: backend pool IP - 20.84.1.20 with port 80 (which is the Ingress IP; I'm not sure if this is where I'm going wrong).</p> <p>After configuring, when I try to access the App Gateway IP as is, I get a 502 Bad Gateway error. I'm very new to AKS.</p>
<p>I don't see the annotations below in your Ingress.</p> <p>Can you add them and try?</p> <pre><code>appgw.ingress.kubernetes.io/use-private-ip: &quot;false&quot; kubernetes.io/ingress.class: azure/application-gateway </code></pre> <p>You can see more details in <a href="https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-troubleshoot" rel="nofollow noreferrer">Troubleshoot common questions or issues with Ingress Controller</a>.</p>
<p>In the <code>Pod</code> specification, there is an option to specify the user ID that all containers should run as:</p> <pre><code>podSecurityContext: runAsUser: &lt;a numeric Id&gt; </code></pre> <p>Is there a way to specify the user name as well, the way we can for Windows pods and containers, like below?</p> <pre><code> securityContext: windowsOptions: runAsUserName: &quot;ContainerUser&quot; </code></pre>
<p>Unfortunately, there is no such way. <code>WindowsSecurityContextOptions</code> contain <strong>Windows-specific</strong> options and credentials. <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context" rel="nofollow noreferrer">PodSecurityContext</a> allows you to use:</p> <blockquote> <ul> <li><code>securityContext.runAsUser</code> (int64)</li> </ul> <p>The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container.</p> <ul> <li><code>securityContext.runAsNonRoot</code> (boolean)</li> </ul> <p>Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.</p> <ul> <li><code>securityContext.runAsGroup</code> (int64)</li> </ul> <p>The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container.</p> </blockquote> <p>Trying to use String instead of Integer for <code>runAsUser</code> will result in error:</p> <pre><code>invalid type for io.k8s.api.core.v1.SecurityContext.runAsUser: got &quot;string&quot;, expected &quot;integer&quot; </code></pre>
<p>Reading Kubernetes documentation:</p> <p><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p> <pre><code>128974848, 129e6, 129M, 123Mi </code></pre> <p>What are the differences between M and Mi here?</p> <p>If I want to request 128GB of RAMs, how many Mi is the correct number? 128000Mi? Thank you!</p>
<p>&quot;MB&quot; is the <a href="https://en.wikipedia.org/wiki/Byte#Multiple-byte_units" rel="noreferrer">metric unit</a>, where 1 MB = 10<sup>6</sup> B and 1 GB = 10<sup>9</sup> B.</p> <p>&quot;MiB&quot; is the power-of-2 based unit, where 1 MiB = 1024<sup>2</sup> B = 1048576 B.</p> <p>Thus, 128 GB = 128 &middot; 10<sup>9</sup> B = 122070.3 MiB.</p>
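<p>The conversion is easy to get wrong, so here is a short Python check. Note the distinction: if what you actually want is 128 Gi (binary), that is exactly 128 &times; 1024 Mi = 131072 Mi, whereas 128 GB (metric) is the non-round 122070.3 Mi:</p>

```python
MiB = 2**20           # 1 MiB = 1048576 bytes

# 128 GB (metric) expressed in Mi (binary), for a Kubernetes resource request:
mib = 128 * 10**9 / MiB
print(round(mib, 1))  # 122070.3

# 128 Gi, by contrast, is an exact power-of-two multiple:
print(128 * 1024)     # 131072
```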
<p>Why can't we create a PV or PVC in an imperative way?</p> <p>I tried using the create command, but it doesn't show either of them.</p> <p><code>kubectl create --help</code></p> <pre><code>Available Commands: clusterrole Create a ClusterRole. clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole configmap Create a configmap from a local file, directory or literal value cronjob Create a cronjob with the specified name. deployment Create a deployment with the specified name. ingress Create an ingress with the specified name. job Create a job with the specified name. namespace Create a namespace with the specified name poddisruptionbudget Create a pod disruption budget with the specified name. priorityclass Create a priorityclass with the specified name. quota Create a quota with the specified name. role Create a role with single rule. rolebinding Create a RoleBinding for a particular Role or ClusterRole secret Create a secret using specified subcommand service Create a service using specified subcommand. serviceaccount Create a service account with the specified name </code></pre>
<p>As described in the <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/" rel="noreferrer">documentation</a> <code>kubectl</code> uses imperative commands <strong>built into the kubectl command-line tool</strong> in order to help you creating objects quickly.</p> <p>After some checks it seems like this is not available because it has not been implemented yet. You can see the full list of the create options at <strong>kubectl/pkg/cmd/<a href="https://github.com/kubernetes/kubectl/tree/master/pkg/cmd/create" rel="noreferrer">create</a></strong>. For example, <a href="https://github.com/kubernetes/kubernetes/pull/78153" rel="noreferrer">#78153</a> was responsible for <code>kubectl create ingress</code> functionality.</p> <p>You would probably get more information and perhaps reasons why this is not implemented by asking the developers and opening a <a href="https://github.com/kubernetes/kubectl/issues/new/choose" rel="noreferrer">new issue</a>.</p>
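<p>As a workaround, you can still create a PV from the command line without writing a file by hand, since <code>kubectl apply -f -</code> accepts a manifest on stdin, and JSON works as well as YAML. A sketch that generates a minimal hostPath PV (the name, size, and path are made up for illustration):</p>

```python
import json

# Minimal PersistentVolume manifest; pipe the output into kubectl, e.g.:
#   python gen_pv.py | kubectl apply -f -
pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "example-pv"},
    "spec": {
        "capacity": {"storage": "1Gi"},
        "accessModes": ["ReadWriteOnce"],
        "hostPath": {"path": "/mnt/data"},
    },
}
print(json.dumps(pv, indent=2))
```

<p>This is only a stand-in for the missing <code>kubectl create pv</code> subcommand, not a replacement for proper declarative manifests.</p>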
<p>I'm trying to create an internal ingress for inter-cluster communication with gke. The service that I'm trying to expose is headless and points to a kafka-broker on the cluster.</p> <p>However when I try to load up the ingress, it says it cannot find the service?</p> <pre><code>Warning Sync 3m22s (x17 over 7m57s) loadbalancer-controller Error syncing to GCP: error running load balancer syncing routine: loadbalancer coilwp7v-redpanda-test-abc123-redpanda-japm3lph does not exist: googleapi: Error 400: Invalid value for field 'resource.target': 'https://www.googleapis.com/compute/v1/projects/abc-123/regions/europe-west2/targetHttpProxies/k8s2-tp-coilwp7v-redpanda-test-abc123-redpanda-japm3lph'. A reserved and active subnetwork is required in the same region and VPC as the forwarding rule., invalid </code></pre> <h2>Ingress:</h2> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: abc-redpanda namespace: redpanda-test annotations: kubernetes.io/ingress.class: &quot;gce-internal&quot; spec: defaultBackend: service: name: redpanda-service port: number: 9092 </code></pre> <h2>Service:</h2> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: redpanda-service namespace: redpanda-test annotations: io.cilium/global-service: &quot;true&quot; cloud.google.com/neg: '{&quot;ingress&quot;: true}' labels: app: abc-panda spec: type: ExternalName externalName: redpanda-cluster-0.redpanda-cluster.redpanda-test.svc.cluster.local ports: - port: 9092 targetPort: 9092 </code></pre>
<p>Setting up ingress for internal load balancing requires you to configure a proxy-only subnet on the same VPC used by your GKE cluster. This subnet will be used for the load balancer's proxies. You'll also need to create a firewall rule to allow traffic.</p> <p>Have a look at the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress#before_you_begin" rel="nofollow noreferrer">prereqs</a> for ingress and then look <a href="https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-internal#configure-a-network" rel="nofollow noreferrer">here</a> for info on how to set up the proxy-only subnet for your VPC.</p>
<p>I have deployed an EFS file system in an AWS EKS cluster. After the deployment my storage pod is up and running.</p> <pre><code>kubectl get pod -n storage NAME READY STATUS RESTARTS AGE nfs-client-provisioner-968445d79-g8wjr 1/1 Running 0 136m </code></pre> <p>When I try to deploy my application, the pod does not come up and stays in Pending state (<code>0/1</code>); at the same time the PVC is not bound and is also in Pending state.</p> <p>Here are the provisioner logs after the actual application deployment.</p> <pre><code>I0610 13:26:11.875109 1 controller.go:987] provision &quot;default/logs&quot; class &quot;efs&quot;: started E0610 13:26:11.894816 1 controller.go:1004] provision &quot;default/logs&quot; class &quot;efs&quot;: unexpected error getting claim reference: selfLink was empty, can't make reference </code></pre> <p>I'm using Kubernetes version 1.20. Could someone please help me with this?</p>
<p>Kubernetes <em>1.20</em> stopped propagating <code>selfLink</code>.<br /> There is a <a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25#issuecomment-742616668" rel="nofollow noreferrer">workaround</a> available, but it does not always work.</p> <p>After the lines</p> <pre><code>spec: containers: - command: - kube-apiserver </code></pre> <p>add</p> <pre><code>- --feature-gates=RemoveSelfLink=false </code></pre> <p>then reapply the API server configuration</p> <pre><code>kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml </code></pre> <p>This workaround will stop working in the release where <code>selfLink</code> is removed entirely, so it is only a stopgap.</p> <hr /> <p>A more durable solution is to use a newer NFS provisioner image that does not depend on <code>selfLink</code>:<br /> <code>gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0</code></p>
<p>I am executing the below-mentioned command to install Prometheus.</p> <pre><code>helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack </code></pre> <p>I am getting the below error message. Please advise.</p> <pre><code>Error: unable to build kubernetes objects from release manifest: error validating &quot;&quot;: error validating data: [ValidationError(Alertmanager.spec): unknown field &quot;alertmanagerConfigNamespaceSelector&quot; in com.coreos.monitoring.v1.Alertmanager.spec, ValidationError(Alertmanager.spec): unknown field &quot;alertmanagerConfigSelector&quot; in com.coreos.monitoring.v1.Alertmanager.spec] </code></pre>
<p>Hello @saerma and welcome to Stack Overflow!</p> <p>@rohatgisanat might be right, but without seeing your current configs it's impossible to verify that. Please check if that was the case.</p> <p>There are also two other things you should look for:</p> <ol> <li>If there were any previous installations that left other Prometheus-related manifests behind, then delete the following:</li> </ol> <ul> <li><code>crd alertmanagerconfigs.monitoring.coreos.com</code></li> <li><code>crd alertmanagers.monitoring.coreos.com</code></li> <li><code>crd podmonitors.monitoring.coreos.com</code></li> <li><code>crd probes.monitoring.coreos.com</code></li> <li><code>crd prometheuses.monitoring.coreos.com</code></li> <li><code>crd prometheusrules.monitoring.coreos.com</code></li> <li><code>crd servicemonitors.monitoring.coreos.com</code></li> <li><code>crd thanosrulers.monitoring.coreos.com</code></li> </ul> <p>Also, check if there are any other Prometheus-related config files with:</p> <pre><code>kubectl get configmap --all-namespaces </code></pre> <p>and also delete them.</p> <p>Notice that deleting the CRDs will also delete any servicemonitors and other such objects that have previously been created by other charts.</p> <p>After that you can try to install again from scratch.</p> <ol start="2"> <li>If installing fresh, run:</li> </ol> <hr /> <pre><code>kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.45.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml </code></pre> <hr /> <p>as the CRDs changed with the newer version and you need to use the updated ones.</p> <p><a href="https://github.com/prometheus-community/helm-charts/issues/557" rel="nofollow noreferrer">Source</a>.</p>
<p>I have a container with IBM MQ (Docker image <code>ibmcom/mq/9.2.2.0-r1</code>) exposing two ports (9443 - admin, 1414 - application).</p> <p>All required setup in OpenShift is done (Pod, Service, Routes).</p> <p>There are two routes, one for each port.</p> <ul> <li><a href="https://route-admin.my.domain" rel="nofollow noreferrer">https://route-admin.my.domain</a></li> <li><a href="https://route-app.my.domain" rel="nofollow noreferrer">https://route-app.my.domain</a></li> </ul> <p>pointing to the ports accordingly (external ports are default http=80, https=443).</p> <p>Admin console is accessible through the first route, hence, MQ is up and running.</p> <p>I tried to connect as a client (JMS 2.0, <code>com.ibm.mq.allclient:9.2.2.0</code>) using standard approach:</p> <pre><code> var fctFactory = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER); var conFactory = fctFactory.createConnectionFactory(); // ... other props conFactory.setObjectProperty(WMQConstants.WMQ_HOST_NAME, &quot;route-app.my.domain&quot;); conFactory.setObjectProperty(WMQConstants.WMQ_PORT, 443); </code></pre> <p>and failed to connect. Also tried to redefine route as HTTP and use port 80, and again without success.</p> <p>If it helps let's assume we use the latest version of MQ Explorer as a client.</p> <p>Each time the same connection error appears:</p> <pre><code>... Caused by: com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') reason '2009' ('MQRC_CONNECTION_BROKEN'). ... Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2009;AMQ9204: Connection to host 'route-app.my.domain(443)' rejected. [1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2009;AMQ9208: Error on receive from host 'route-app.my.domain/10.227.248.2:443 (route-app.my.domain)'. [1=-1,2=ffffffff,3=route-app.my.domain/10.227.248.2:443 (route-app.my.domain),4=TCP]], 3=route-app.my.domain(443),5=RemoteConnection.receiveTSH] ... 
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2009;AMQ9208: Error on receive from host 'route-app.my.domain/10.227.248.2:443 </code></pre> <p>Maybe, this <a href="https://www.ibm.com/support/pages/mq-connection-terminating-error-code-2009" rel="nofollow noreferrer">article</a> could give some hints about error code 2009, but still not sure what exactly affects connection errors from the OpenShift side.</p> <p>Previously, I always connected to IBM MQ specifying a port value explicitly, but here is a bit different situation.<br /> <strong>How to connect to IBM MQ in OpenShift cluster through TCP?</strong></p> <hr /> <p>Configurations in OpenShift are as follows:</p> <hr /> <pre><code>kind: Pod apiVersion: v1 metadata: name: ibm-mq labels: app: ibm-mq spec: containers: - resources: limits: cpu: '1' memory: 600Mi requests: cpu: '1' memory: 600Mi name: ibm-mq ports: - containerPort: 1414 protocol: TCP - containerPort: 9443 protocol: TCP containerStatuses: image: 'nexus-ci/docker-lib/ibm_mq:latest' --- kind: Service apiVersion: v1 metadata: name: ibm-mq spec: ports: - name: admin protocol: TCP port: 9443 targetPort: 9443 - name: application protocol: TCP port: 1414 targetPort: 1414 selector: app: ibm-mq --- kind: Route apiVersion: route.openshift.io/v1 metadata: name: ibm-mq-admin spec: host: ibm-mq-admin.my-domain.com to: kind: Service name: ibm-mq weight: 100 port: targetPort: admin tls: termination: passthrough insecureEdgeTerminationPolicy: None wildcardPolicy: None --- kind: Route apiVersion: route.openshift.io/v1 metadata: name: ibm-mq-app spec: host: ibm-mq-app.my-domain.com to: kind: Service name: ibm-mq weight: 100 port: targetPort: application tls: termination: passthrough insecureEdgeTerminationPolicy: None wildcardPolicy: None --- </code></pre> <p><strong>UPDATE:</strong> Ended up with creating and deploying to OpenShift a small web-application receiving HTTP requests and interacting with MQ via JMS (put/get text messages), like:</p> <ul> <li><code>POST 
/queue/{queueName}/send + &lt;body&gt;</code>;</li> <li><code>GET /queue/{queueName}/receive</code>.</li> </ul> <p>It interacts with MQ inside the OpenShift cluster using TCP, and accepts external HTTP connections as a regular web application.<br /> Other solutions seem to take too much effort, but I accepted one of them as it is theoretically correct and straightforward.</p>
<p>I'm not sure I fully understand your setup, but <code>&quot;Routes&quot;</code> only route HTTP traffic (on ports 80 or 443 only), not raw TCP traffic.<br /> If you want to access your MQ server from outside the cluster, there are a few solutions; one is to create a service of type <code>&quot;NodePort&quot;</code>.</p> <p>Doc: <a href="https://docs.openshift.com/container-platform/4.7/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.7/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-nodeport.html</a></p> <p>Your Service is not a NodePort Service. In your case, it should be something like</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ibm-mq spec: type: NodePort ports: - port: 1414 targetPort: 1414 nodePort: 30001 selector: app: ibm-mq </code></pre> <p>Then access it from outside with anyname.&lt;cluster domain&gt;:30001, and delete the now-useless corresponding route. As noted in the doc above, Routes only carry HTTP traffic on ports 80 or 443.</p> <p>Doc: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a></p>
<p>We are deploying a spring-boot application using spring-session-hazelcast + hazelcast-kubernetes on an OpenShift/Kubernetes cluster.</p> <p>Due to the nature of our platform, we can only use <code>service-dns</code> configuration. We expose a service on port 5701 for multicasting and set <code>service-dns</code> property to the multicast service name.</p> <p>Below is a snippet for creation of the Hazelcast instance.</p> <pre><code>@Bean public HazelcastInstance hazelcastInstance() { var config = new Config(); config.setClusterName(&quot;spring-session-cluster&quot;); var join = config.getNetworkConfig().getJoin(); join.getTcpIpConfig().setEnabled(false); join.getMulticastConfig().setEnabled(false); join.getKubernetesConfig().setEnabled(true) .setProperty(&quot;service-dns&quot;, &quot;&lt;multicast-service-name&gt;&quot;); var attribute = new AttributeConfig() .setName(Hazelcast4IndexedSessionRepository.PRINCIPAL_NAME_ATTRIBUTE) .setExtractorClassName(Hazelcast4PrincipalNameExtractor.class.getName()); config.getMapConfig(Hazelcast4IndexedSessionRepository.DEFAULT_SESSION_MAP_NAME) .addAttributeConfig(attribute) .addIndexConfig(new IndexConfig(IndexType.HASH, Hazelcast4IndexedSessionRepository.PRINCIPAL_NAME_ATTRIBUTE)); var serializer = new SerializerConfig(); serializer.setImplementation(new HazelcastSessionSerializer()) .setTypeClass(MapSession.class); config.getSerializationConfig().addSerializerConfig(serializer); return Hazelcast.newHazelcastInstance(config); } </code></pre> <p>When we run 2 pods for this application, we see the below ERROR log:</p> <pre><code>com.hazelcast.internal.cluster.impl.operations.SplitBrainMergeValidationOp Message: [&lt;private-ip&gt;]:5701 [spring-session-cluster] [4.2] Target is this node! -&gt; [&lt;private-ip&gt;]:5701 </code></pre> <p>Can someone please explain how to fix this error, still using &quot;service-dns&quot; configuration?</p>
<p>You need to enable headless mode for your service in OpenShift.</p> <p><a href="https://github.com/hazelcast/hazelcast-kubernetes#dns-lookup" rel="nofollow noreferrer">https://github.com/hazelcast/hazelcast-kubernetes#dns-lookup</a></p>
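<p>A headless Service is one with <code>clusterIP: None</code>, so the service DNS name resolves to the individual pod IPs instead of a single virtual IP, which is what the Hazelcast <code>service-dns</code> discovery needs. A minimal sketch (the name and selector label below are placeholders for your multicast service and pod labels):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hazelcast-service   # placeholder: use your multicast service name
spec:
  clusterIP: None           # makes the Service headless
  selector:
    app: my-app             # placeholder: must match your pod labels
  ports:
    - port: 5701
```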
<p>I have a 3-node system on which I have hosted a basic ArangoDB cluster with 3 dbservers. I am using the Python-arango library, which says that to connect to the cluster, we need to supply the list of IPs with the port 8529 to the ArangoClient class. My Python code is running in a different pod.</p> <p>Using Arangosh, I can access the coordinator pods using the localhost:8529 configuration. However, in Arangosh, I can only access one coordinator pod at once. Since the Arango arrangement is a cluster, I would want to connect to all three coordinator pods in a round-robin manner. Now, as per the Kubernetes documentation in the case of <a href="https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/" rel="nofollow noreferrer">MongoDB</a>, I can supply the list of &lt;pod_name.service_name&gt; to the MongoClient and it gets connected. I am trying something similar with the Arango coordinator pods, but that does not seem to work. The examples in the Arango documentation only point to writing 'localhost-1', 'localhost-2', etc.</p> <p>To summarize, the issue at hand is that I do not know how to identify the IPs of the Arango coordinator pods. I tried using the internal IPs of the coordinators, I tried using <em>http://pod_name.8529</em>, and I tried using the endpoint of the coordinator shown in the Web UI (which begins with ssl, but I replaced ssl with http), and yet had no success.</p> <p>I would be grateful for any help in this context. Please let me know if some additional information is needed. I have been stuck on this problem for a week now.</p>
<p>If you follow this: <a href="https://www.arangodb.com/2018/12/deploying-arangodb-3-4-on-kubernetes/" rel="nofollow noreferrer">https://www.arangodb.com/2018/12/deploying-arangodb-3-4-on-kubernetes/</a></p> <p>the YAML config files there create multiple Pods. Your application should connect to ArangoDB through a <code>Kubernetes service</code>.</p> <p>So your application will connect to the Kubernetes service, and by default the Kubernetes service handles the round-robin load balancing.</p> <p>In the example you can see it creates three services:</p> <pre><code>my-arangodb-cluster ClusterIP 10.11.247.191 &lt;none&gt; 8529/TCP 46m my-arangodb-cluster-ea LoadBalancer 10.11.241.253 35.239.220.180 8529:31194/TCP 46m my-arangodb-cluster-int ClusterIP None &lt;none&gt; 8529/TCP 46m </code></pre> <p>The service <code>my-arangodb-cluster-ea</code> is publicly exposed, while the service <code>my-arangodb-cluster</code> can be accessed internally by the application.</p> <p>Your flow will be something like</p> <pre><code>Application &gt; Arango service (Round robin) &gt; Arango Pods </code></pre>
<p>I had a working 3 node k8s cluster (v1.21.0 on Ubuntu 20.04 bare metal) installed using <code>kubeadm</code>. I removed one of the nodes and re-installed it from scratch (wipe disks, new OS but IP address is the same). Now it is unable to join the cluster:</p> <pre><code># kubeadm join k8s.example.com:6443 --token who21h.jolq7z79twv7bf4m \ --discovery-token-ca-cert-hash sha256:f63c5786cea2be46c999f4b5c595abd0aa24896c3b37616c347df318d7406c00 \ --control-plane ... [check-etcd] Checking that the etcd cluster is healthy error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://65.21.128.36:2379 with maintenance client: context deadline exceeded To see the stack trace of this error execute with --v=5 or higher </code></pre> <p>I ran the same (after <code>kubeadm reset</code>) with <code>--v=5</code> and it gets stuck logging these:</p> <pre><code>Failed to get etcd status for https://123.123.123.123:2379: failed to dial endpoint https://123.123.123.123:2379 with maintenance client: context deadline exceeded </code></pre> <p><code>123.123.123.123</code> is the IP address for the node I am trying to return to the cluster.</p> <p>Running <code>kubectl get nodes</code> on one of the other masters just lists the 2 remaining masters. I removed the node in question properly:</p> <pre><code>kubectl get nodes kubectl drain &lt;node-name&gt; kubectl drain &lt;node-name&gt; --ignore-daemonsets --delete-local-data kubectl delete node &lt;node-name&gt; </code></pre> <p>Any ideas? Tx.</p>
<p>Take a closer look at the error message you get:</p> <pre><code>Failed to get etcd status for https://123.123.123.123:2379: failed to dial endpoint https://123.123.123.123:2379 with maintenance client: context deadline exceeded
</code></pre> <p>This is a quite common, well-documented issue related to the <code>etcd</code> cluster. Compare with the following threads:</p> <ul> <li><a href="https://github.com/kubernetes/kubernetes/issues/81071" rel="noreferrer">Control plain won't join #81071</a></li> <li><a href="https://github.com/kubernetes/kubeadm/issues/1432" rel="noreferrer">kubeadm join is not fault tolerant to etcd endpoint failures #1432</a></li> <li><a href="https://superuser.com/questions/1473905/etcd-becomes-unhealthy-after-i-delete-one-of-master-node-i-am-look-for-a-fix">etcd becomes unhealthy after I delete one of master node, I am look for a fix</a></li> <li><a href="https://docs.openshift.com/container-platform/3.10/admin_guide/assembly_restore-etcd-quorum.html" rel="noreferrer">Restoring etcd quorum</a></li> </ul> <p>Specifically, it is related to the loss of <code>etcd</code> quorum. 
You can check it as described <a href="https://docs.openshift.com/container-platform/3.10/admin_guide/assembly_restore-etcd-quorum.html" rel="noreferrer">here</a>.</p> <p>The solution is described step by step <a href="https://github.com/kubernetes/kubernetes/issues/81071#issuecomment-519143803" rel="noreferrer">in this comment</a>:</p> <blockquote> <p>For the record here the command to run on one of the remaining etcd pod :</p> <p>Find the id of the member to remove</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list 5a4945140f0b39d9, started, sbg2-k8s001, https://192.168.208.12:2380, https://192.168.208.12:2379 740381e3c57ef823, started, gra3-k8s001, https://192.168.208.13:2380, https://192.168.208.13:2379 77a8fbb530b10f4a, started, rbx4-k8s001, https://192.168.208.14:2380, https://192.168.208.14:2379 </code></pre> <p>I want to remove 740381e3c57ef823</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member remove 740381e3c57ef823 Member 740381e3c57ef823 removed from cluster a2c90ef66bb95cc9 </code></pre> <p>Checking</p> <pre><code>ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key member list 5a4945140f0b39d9, started, sbg2-k8s001, https://192.168.208.12:2380, https://192.168.208.12:2379 77a8fbb530b10f4a, started, rbx4-k8s001, https://192.168.208.14:2380, https://192.168.208.14:2379 </code></pre> <p>Now I can join my new master.</p> </blockquote>
<p>I am trying out external secrets operator (ESO) by following below:</p> <p><a href="https://github.com/external-secrets/external-secrets" rel="nofollow noreferrer">https://github.com/external-secrets/external-secrets</a></p> <p><a href="https://external-secrets.io/guides-getting-started/" rel="nofollow noreferrer">https://external-secrets.io/guides-getting-started/</a></p> <p>I am using minikube and AWS secrets manager to do this (I also tried it out in k8s cluster hosted in EC2, but I get the same exact error).</p> <p>I followed the steps from the links above:</p> <ol> <li>Added the repo:</li> </ol> <p><code>helm repo add external-secrets https://charts.external-secrets.io</code></p> <ol start="2"> <li>Install it in the namespace. I'm already inside of the namespace, so I didn't create the namespace.</li> </ol> <p><code>helm install external-secrets external-secrets/external-secrets --set installCRDs=true</code></p> <ol start="3"> <li>Added k8s secret:</li> </ol> <p><code>k create secret generic aws-credentials --from-literal=aws-access-key-id='xxx' --from-literal=aws-secret-access-key='xxx'</code></p> <ol start="4"> <li>I created a secret in AWS secrets manager called <em>test_user_1</em></li> <li>kubectl apply -f secret-store.yaml</li> </ol> <pre><code>kind: SecretStore metadata: name: secretstore-sample spec: provider: aws: service: SecretsManager role: arn:aws:iam::123456789012:role/somerole region: us-east-1 auth: secretRef: accessKeyIDSecretRef: name: aws-credentials key: aws-access-key-id secretAccessKeySecretRef: name: aws-credentials key: aws-secret-access-key </code></pre> <ol start="6"> <li>kubectl apply -f externalsecrets.yaml</li> </ol> <pre><code>apiVersion: external-secrets.io/v1alpha1 kind: ExternalSecret metadata: name: example spec: refreshInterval: 1h secretStoreRef: name: secretstore-sample kind: SecretStore target: name: secret-to-be-created creationPolicy: Owner data: - secretKey: user-1-username remoteRef: key: test_user_1 property: 
username - secretKey: user-1-password remoteRef: key: test_user_1 property: password </code></pre> <p>Then it says</p> <p><code>externalsecret.external-secrets.io/example created</code></p> <p>When I do</p> <p><code>kubectl describe externalsecret.external-secrets.io/example</code></p> <p>Below is what I get and no <em>secret-to-be-created</em> is created:</p> <pre><code>... Status: Conditions: Last Transition Time: 2021-06-09T22:45:10Z Message: could not get secret data from provider: key &quot;test_user_1&quot; from ExternalSecret &quot;example&quot;: InvalidClientTokenId: The security token included in the request is invalid. status code: 403, request id: 5a544aa0-3953-4c0d-9dab-37bde10e328b Reason: SecretSyncedError Status: False Type: Ready Refresh Time: &lt;nil&gt; Events: &lt;none&gt; </code></pre> <p>I know this role has access to aws secrets manager (I've run python scripts to access aws secrets manager from my laptop using this role). But, I have limited knowledge of k8s, so, I appreciate any help.</p>
<p>I solved the issue. It was on the AWS side. I needed to create a new user and a new role, reference the newly created role in the <em>role:</em> section of the <code>SecretStore</code>, and allow the user to assume that role by providing the user's AWS credentials as environment variables in the k8s cluster.</p> <p>Attach this policy to the newly created role:</p> <pre><code>{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Action&quot;: [
        &quot;secretsmanager:GetResourcePolicy&quot;,
        &quot;secretsmanager:GetSecretValue&quot;,
        &quot;secretsmanager:DescribeSecret&quot;,
        &quot;secretsmanager:ListSecretVersionIds&quot;
      ],
      &quot;Resource&quot;: [
        &quot;arn:aws:secretsmanager:us-west-2:111122223333:secret:dev-*&quot;
      ]
    }
  ]
}
</code></pre> <p>And this as the trust relationship for the role:</p> <pre><code>{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Principal&quot;: {
        &quot;AWS&quot;: &quot;arn:aws:iam::...&quot;
      },
      &quot;Action&quot;: &quot;sts:AssumeRole&quot;
    }
  ]
}
</code></pre> <p>And this policy for the newly created user:</p> <pre><code>{
  &quot;Version&quot;: &quot;2012-10-17&quot;,
  &quot;Statement&quot;: {
    &quot;Effect&quot;: &quot;Allow&quot;,
    &quot;Action&quot;: &quot;sts:AssumeRole&quot;,
    &quot;Resource&quot;: &quot;arn:aws:iam::....&quot;
  }
}
</code></pre>
<p>I'm working with Filebeat 7.9.3 as a daemonset on k8s. I'm not able to parse docker container logs of a Springboot app that writes logs to stdout in json. The fact is that the every row of the Springboot app logs is written in this way:</p> <pre><code>{ &quot;@timestamp&quot;: &quot;2020-11-16T13:39:57.760Z&quot;, &quot;log.level&quot;: &quot;INFO&quot;, &quot;message&quot;: &quot;Checking comment 'se' done = true&quot;, &quot;service.name&quot;: &quot;conduit-be-moderator&quot;, &quot;event.dataset&quot;: &quot;conduit-be-moderator.log&quot;, &quot;process.thread.name&quot;: &quot;http-nio-8081-exec-2&quot;, &quot;log.logger&quot;: &quot;it.koopa.app.ModeratorController&quot;, &quot;transaction.id&quot;: &quot;1ed5c62964ff0cc2&quot;, &quot;trace.id&quot;: &quot;20b4b28a3817c9494a91de8720522972&quot;} </code></pre> <p>But the corresponding docker log file under /var/log/containers/ writes log in this way:</p> <pre><code>{ &quot;log&quot;: &quot;{\&quot;@timestamp\&quot;:\&quot;2020-11-16T11:27:32.273Z\&quot;, \&quot;log.level\&quot;: \&quot;INFO\&quot;, \&quot;message\&quot;:\&quot;Checking comment 'a'\&quot;, \&quot;service.name\&quot;:\&quot;conduit-be-moderator\&quot;,\&quot;event.dataset\&quot;:\&quot;conduit-be-moderator.log\&quot;,\&quot;process.thread.name\&quot;:\&quot;http-nio-8081-exec-4\&quot;,\&quot;log.logger\&quot;:\&quot;it.koopa.app.ModeratorController\&quot;,\&quot;transaction.id\&quot;:\&quot;9d3ad972dba65117\&quot;,\&quot;trace.id\&quot;:\&quot;8373edba92808d5e838e07c7f34af6c7\&quot;}\n&quot;, &quot;stream&quot;: &quot;stdout&quot;, &quot;time&quot;: &quot;2020-11-16T11:27:32.274816903Z&quot; } </code></pre> <p>I always receive this on filebeat logs</p> <pre><code>Error decoding JSON: json: cannot unmarshal number into Go value of type map[string]interface {} </code></pre> <p>This is my filebeat config that tries to parse json log message from docker logs where I'm using decode_json_fields to try to catch Elasticsearch standard fields (I'm 
using <a href="https://github.com/elastic/ecs-logging-java/blob/master/logback-ecs-encoder/README.md" rel="nofollow noreferrer">co.elastic.logging.logback.EcsEncoder</a>)</p> <pre><code>filebeat.yml: |- filebeat.inputs: - type: container #json.keys_under_root: true json.overwrite_keys: true json.add_error_key: true json.message_key: log paths: - /var/log/containers/*.log include_lines: &quot;conduit-be-moderator&quot; processors: - decode_json_fields: fields: [&quot;log&quot;] overwrite_keys: true - add_kubernetes_metadata: host: ${NODE_NAME} in_cluster: true matchers: - logs_path: logs_path: &quot;/var/log/containers/&quot; processors: - add_cloud_metadata: - add_host_metadata: </code></pre> <p>How can I do this???</p>
<p>As processors are applied before the JSON parser of the input, you will need to first configure the <code>decode_json_fields</code> processor, which will allow you to decode your <code>json.log</code> field. You will then be able to apply the json configuration of the input on the <code>message</code> field. Something like:</p> <pre class="lang-yaml prettyprint-override"><code>filebeat.yml: |-
  filebeat.inputs:
  - type: container
    json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.message_key: message
    paths:
      - /var/log/containers/*.log
    include_lines: &quot;conduit-be-moderator&quot;
    processors:
      - decode_json_fields:
          fields: ['log']
          expand_keys: true
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true
          matchers:
            - logs_path:
                logs_path: &quot;/var/log/containers/&quot;
processors:
  - add_cloud_metadata:
  - add_host_metadata:
</code></pre> <p>This configuration assumes that all your logs use JSON format. Otherwise you will probably need to add an exclude or include regex pattern.</p>
<p>I have a secret:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: |
    SEVMTE9PT09PT09PT09PT09PT09PCg==
</code></pre> <p>and deployment:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-ssh-auth
          mountPath: /root/.ssh
      volumes:
      - name: secret-ssh-auth
        secret:
          secretName: secret-ssh-auth
          defaultMode: 0400
</code></pre> <p>It creates a file with this path <code>/root/.ssh/ssh-privatekey</code> while I want to have <code>/root/.ssh/id_rsa</code> name instead.</p> <p>I know we can solve it by running a kubectl command, but I want to handle it inside the YAML file. So, how to do that by the YAML file?</p>
<p>Based on the Kubernetes documentation, the <code>ssh-privatekey</code> key is mandatory for the <code>kubernetes.io/ssh-auth</code> type. In this case, you can leave it empty via the <code>stringData</code> key, then define the key you actually want under <code>data</code>, like this:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: |
    -
data:
  id_rsa: |
    SEVMTE9PT09PT09PT09PT09PT09PCg==
</code></pre>
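<p>Alternatively — this is a sketch based on the question's manifests, not part of the original answer — the renaming can be done purely in the Deployment, keeping the secret untouched: the secret volume's <code>items</code> field projects the existing <code>ssh-privatekey</code> key to a file named <code>id_rsa</code>:</p>

```yaml
# in the pod spec of the Deployment from the question
volumes:
  - name: secret-ssh-auth
    secret:
      secretName: secret-ssh-auth
      defaultMode: 0400
      items:
        - key: ssh-privatekey   # key as stored in the Secret
          path: id_rsa          # file name under the mountPath,
                                # i.e. /root/.ssh/id_rsa
```

<p>With <code>items</code>, only the listed keys are projected, and each <code>path</code> sets the file name relative to the <code>mountPath</code>.</p>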
<p>After implementing Istio, a service which was working before in the browser has started giving a 403 Forbidden error. Please note that this service still works inside the k8s cluster and returns a 200 response, but in the browser it responds with 403 Forbidden when it is called from another service.</p> <p>What could be the reason?</p> <p>Any help appreciated.</p>
<p>This issue was resolved by fixing the <code>http_upgrade</code> handling in the nginx configuration.</p>
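<p>The answer does not show the exact change, but a typical fix of this kind (an assumption, not the author's verbatim config) is to forward the upgrade handshake headers in the nginx location block that proxies to the service, so that proxied WebSocket/upgrade requests are not rejected downstream:</p>

```nginx
location / {
    proxy_pass http://backend;        # "backend" is an illustrative upstream name
    proxy_http_version 1.1;
    # forward the client's upgrade handshake to the upstream
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```

<p>Without these headers, nginx strips the hop-by-hop <code>Upgrade</code>/<code>Connection</code> headers, which a strict sidecar or upstream can answer with 403.</p>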
<p>We are running our EKS node groups on AWS Spot Instances. In order to make sure that the services don't go down at any point of time we would like to run 1 pod of each service on a separate node group which uses reserved instances. Is there any way we can configure the deployments so that 1 pod runs on reserved node group and rest on the spot instances node group? Currently we are using node selector to match the label to decide on which node group the service has to run. Is there any way we can use the labels of two separate node groups(reserved and spot) in node selector and specify the weights to divide the load?</p>
<p>I couldn't find a way to do that in a single deployment, but achieved the same result using two separate deployments: one deploys to the spot node group and the other to the on-demand node group, while a single service selects the pods of both.</p>
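<p>A minimal sketch of that setup (names and labels are illustrative, not from the original answer): both deployments carry the same <code>app</code> label, so one Service load-balances across the pods of both node groups, while each deployment's <code>nodeSelector</code> pins it to its node group:</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-ondemand
spec:
  replicas: 1                       # the "always available" pod
  selector:
    matchLabels: {app: myapp, group: ondemand}
  template:
    metadata:
      labels: {app: myapp, group: ondemand}
    spec:
      nodeSelector:
        nodegroup: on-demand        # label of the reserved/on-demand node group
      containers:
      - name: myapp
        image: myapp:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-spot
spec:
  replicas: 3                       # the bulk of the load
  selector:
    matchLabels: {app: myapp, group: spot}
  template:
    metadata:
      labels: {app: myapp, group: spot}
    spec:
      nodeSelector:
        nodegroup: spot             # label of the spot node group
      containers:
      - name: myapp
        image: myapp:latest
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                      # matches pods from both deployments
  ports:
  - port: 80
```

<p>The extra <code>group</code> label keeps the two deployments' selectors disjoint, while the Service selects only on <code>app</code> and therefore spans both.</p>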
<p>I am searching for specific information regarding Kubernetes requests and limits and still haven't found an answer, or just didn't understand it quite well. Say I've defined two containers A and B for a single pod, both with their own resource limits and requests:</p> <p>A: RAM request: 1Gi, RAM limit: 2Gi; B: RAM request: 1Gi, RAM limit: 2Gi</p> <p>So, we have a pod limit of 4Gi (total). Suppose container A exceeded its limit (say +1Gi), but B is consuming only 64Mi. So, my questions are:</p> <ol> <li>What happens to the pod? Is it evicted?</li> <li>Is the container A restarted?</li> <li>Is the container A allowed to use the B's available RAM?</li> </ol> <p>Thanks!</p>
<blockquote> <p>What happens to the pod? Is it evicted?</p> </blockquote> <p>If the memory limit of a container is exceeded, the kernel's OOM killer is invoked and terminates the container's process. The Pod then starts a new container on the same Node.</p> <p>(CPU limits use a different mechanism (CFS Bandwidth Control) that throttles the processes' CPU cycles instead of terminating the process.)</p> <blockquote> <p>Is the container A restarted?</p> </blockquote> <p>Yes.</p> <blockquote> <p>Is the container A allowed to use the B's available RAM?</p> </blockquote> <p>The memory is tracked separately for each container. They are not pooled together into the same limit.</p>
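<p>To make this concrete, here is a sketch of the pod from the question (image names are illustrative) — the limits are declared per container, which is why A's overrun triggers the OOM killer no matter how little B is using:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: a
    image: myimage-a            # illustrative image
    resources:
      requests:
        memory: 1Gi
      limits:
        memory: 2Gi             # A is OOM-killed and restarted if it
                                # exceeds 2Gi, regardless of B's usage
  - name: b
    image: myimage-b
    resources:
      requests:
        memory: 1Gi
      limits:
        memory: 2Gi
```

<p>After an OOM kill, the container's restart is visible as an incremented <code>restartCount</code> and a <code>OOMKilled</code> reason in the pod's <code>containerStatuses</code>.</p>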
<p>Currently I try to use <code>KubeVirt</code> with <code>GKE</code> cluster.</p> <p>What I have done (follow the official document):</p> <ol> <li>Create a <code>GKE</code> cluster with 3 nodes via GCP console</li> <li>Install <code>kubectl</code> locally and connect to this cluster</li> <li>Install <code>kubevirt</code> via <code>kubectl</code> </li> <li>Install <code>virtctl</code> locally</li> <li>set <code>debug.useEmulation</code> to <code>true</code></li> <li>create the <code>testvm</code> (follow the demo) All the steps above work fine.</li> </ol> <p>But now I have troubles to start the vm</p> <ol> <li>If I try to start it via <code>"virtctl start testvm"</code>, I get the following error message:</li> </ol> <p><strong>"Error starting VirtualMachine the server has asked for the client to provide credentials"</strong></p> <ol start="2"> <li>If I try to modify the VM template to set it running by default, it doesn't work either.In the <code>virt-launcher</code> pod, the compute container starts successfully but the <code>volumecontainerdisk</code> fails, with the following log:</li> </ol> <pre class="lang-py prettyprint-override"><code>standard_init_linux.go:211: exec user process caused "permission denied" </code></pre> <p>Any help is appreciated, thanks.</p>
<p>It seems there are issues with the default GKE auth method when connecting from local to a cluster when using <code>gcloud container clusters get-credentials...</code>.</p> <p>What worked for me is creating a new SA and creating a kubeconfig for it:</p> <pre><code>{
kubectl create sa deployer
kubectl create clusterrolebinding deployer --clusterrole cluster-admin --serviceaccount default:deployer
KUBE_DEPLOY_SECRET_NAME=`kubectl get sa deployer -o jsonpath='{.secrets[0].name}'`
KUBE_API_EP=`kubectl get ep -o jsonpath='{.items[0].subsets[0].addresses[0].ip}'`
KUBE_API_TOKEN=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.token}'|base64 --decode`
KUBE_API_CA=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.ca\.crt}'|base64 --decode`
echo $KUBE_API_CA &gt; tmp.deploy.ca.crt
export KUBECONFIG=./my-new-kubeconfig
kubectl config set-cluster k8s --server=https://$KUBE_API_EP --certificate-authority=tmp.deploy.ca.crt --embed-certs=true
kubectl config set-credentials k8s-deployer --token=$KUBE_API_TOKEN
kubectl config set-context k8s --cluster k8s --user k8s-deployer
kubectl config use-context k8s
rm tmp.deploy.ca.crt
unset KUBECONFIG
}

kubectl virt start testvm --kubeconfig ./my-new-kubeconfig
</code></pre> <p>More explanation: <a href="https://faun.pub/manually-connect-to-your-kubernetes-cluster-from-the-outside-d852346a7f0a" rel="nofollow noreferrer">https://faun.pub/manually-connect-to-your-kubernetes-cluster-from-the-outside-d852346a7f0a</a></p>
<p>I am trying to connect to Firestore from code running on GKE Container. Simple REST GET api is working fine, but when I access the Firestore from read/write, I am getting Missing or insufficient permissions.</p> <pre><code>An unhandled exception was thrown by the application. Info 2021-06-06 21:21:20.283 EDT Grpc.Core.RpcException: Status(StatusCode=&quot;PermissionDenied&quot;, Detail=&quot;Missing or insufficient permissions.&quot;, DebugException=&quot;Grpc.Core.Internal.CoreErrorDetailException: {&quot;created&quot;:&quot;@1623028880.278990566&quot;,&quot;description&quot;:&quot;Error received from peer ipv4:172.217.193.95:443&quot;,&quot;file&quot;:&quot;/var/local/git/grpc/src/core/lib/surface/call.cc&quot;,&quot;file_line&quot;:1068,&quot;grpc_message&quot;:&quot;Missing or insufficient permissions.&quot;,&quot;grpc_status&quot;:7}&quot;) at Google.Api.Gax.Grpc.ApiCallRetryExtensions.&lt;&gt;c__DisplayClass0_0`2.&lt;&lt;WithRetry&gt;b__0&gt;d.MoveNext() </code></pre> <p><strong>Update</strong> I am trying to provide secret to pod with service account credentails. 
Here is the k8 file which deploys a pod to cluster with no issues when no secrets are provided and I can do Get Operations which don't hit Firestore, and they work fine.</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: foo-worldmanagement-production spec: replicas: 1 selector: matchLabels: app: foo role: worldmanagement env: production template: metadata: name: worldmanagement labels: app: foo role: worldmanagement env: production spec: containers: - name: worldmanagement image: gcr.io/foodev/foo/master/worldmanagement.21 resources: limits: memory: &quot;500Mi&quot; cpu: &quot;300m&quot; imagePullworld: Always readinessProbe: httpGet: path: /api/worldManagement/policies port: 80 ports: - name: worldmgmt containerPort: 80 </code></pre> <p>Now, if I try to mount secret, the pod never gets created fully, and it eventually fails</p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: foo-worldmanagement-production spec: replicas: 1 selector: matchLabels: app: foo role: worldmanagement env: production template: metadata: name: worldmanagement labels: app: foo role: worldmanagement env: production spec: volumes: - name: google-cloud-key secret: secretName: firestore-key containers: - name: worldmanagement image: gcr.io/foodev/foo/master/worldmanagement.21 volumeMounts: - name: google-cloud-key mountPath: /var/ env: - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/key.json resources: limits: memory: &quot;500Mi&quot; cpu: &quot;300m&quot; imagePullworld: Always readinessProbe: httpGet: path: /api/worldManagement/earth port: 80 ports: - name: worldmgmt containerPort: 80 </code></pre> <p>I tried to deploy the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform" rel="nofollow noreferrer">sample application</a> and it works fine.</p> <p>If I keep only the following the yaml file, the container gets deployed properly</p> <pre><code>- name: google-cloud-key secret: secretName: firestore-key 
</code></pre> <p>But once I add the following to yaml, it fails</p> <pre><code> volumeMounts: - name: google-cloud-key mountPath: /var/ env: - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/key.json </code></pre> <p>And I can see in GCP events that the container is not able to find the google-cloud-key. Any idea how to troubleshoot this issue, i.e why I am not able to mount the secrets, I can bash into the pod if needed.</p> <p>I am using multi stage docker file made of</p> <pre><code>From mcr.microsoft.com/dotnet/sdk:5.0 AS build FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS runtime </code></pre> <p>Thanks</p>
<p>Looks like the key itself might not be correctly visible to the pod. I would start by getting into the pod with <code>kubectl exec --stdin --tty &lt;podname&gt; -- /bin/bash</code> and ensuring that the <code>/var/key.json</code> (per your config) is accessible and has the correct credentials.</p> <p>The following would be a good way to mount the secret:</p> <pre><code>volumeMounts:
  - name: google-cloud-key
    mountPath: /var/run/secret/cloud.google.com
env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/run/secret/cloud.google.com/key.json
</code></pre> <p>The above assumes your secret was created with a command like:</p> <pre><code>kubectl --namespace &lt;namespace&gt; create secret generic firestore-key --from-file key.json
</code></pre> <p>Also it is important to check your Workload Identity setup. The <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#pod_cant_authenticate_to:%7E:text=Pod%20can%27t%20authenticate%20to%20Google%20Cloud" rel="nofollow noreferrer">Workload Identity | Kubernetes Engine Documentation</a> has a good section on this.</p>
<p>I need to connect a service running in a local container inside Docker on my machine to a database that's running on a Kubernetes cluster.</p> <p>Everything I found on port forwarding allowed me to connect my machine to the cluster, but not the local container to the cluster (unless I install kubectl on my container, which I cannot do).</p> <p>Is there a way to do this?</p>
<p><a href="https://www.telepresence.io/" rel="nofollow noreferrer">https://www.telepresence.io/</a> is what you're looking for. It will hook into the cluster network like a VPN and patch the services so traffic will get routed through the tunnel.</p>
<p>We are using client-go to talk to our kubernetes cluster (the api version: batchv1/appv1/corev1), and we mainly use three types of resources: Job, Deployment and Service.</p> <p>My question is: how do we judge when a Job or Deployment is ready and in running status?</p> <p>For a Job, we found that when batchV1.Spec.Active &gt; 0, the pods controlled by this job may be in either pending or running status. So to check that a Kubernetes Job's pods are all in running status, do we have to enumerate every pod of the Job and check that they are all running, and only then is the Job ready and running? Is there a simpler way to do that?</p> <p>And how about the Kubernetes Deployment and Service? Is there a simple way to check whether a Deployment is ready?</p>
<p>To check the deployment status you need to check the status of the pods created by this deployment. Example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myimage
        name: myapp
        livenessProbe:
          # your desired liveness check
</code></pre> <p>You can get the desired PodTemplate from deployments using client-go.</p> <p>For example:</p> <pre class="lang-golang prettyprint-override"><code>clientset := kubernetes.NewForConfigOrDie(config)
deploymentClient := clientset.AppsV1().Deployments(&quot;mynamespace&quot;)
deployment, err := deploymentClient.Get(&quot;myapp&quot;, metav1.GetOptions{})

for _, container := range deployment.Spec.Template.Spec.Containers {
    container.LivenessProbe // add your logic
}
</code></pre> <p><strong>Note:</strong> The Deployment only contains the desired PodTemplate, so to look at any status, you have to look at the created Pods.</p> <p>You can list the Pods created from the deployment by using the same labels as in the selector of the Deployment.</p> <p><strong>Example list of Pods:</strong></p> <pre class="lang-golang prettyprint-override"><code>pods, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{
    LabelSelector: &quot;app=myapp&quot;,
})

// check the status for the pods - to see Probe status
for _, pod := range pods.Items {
    pod.Status.Conditions // use your custom logic here

    for _, container := range pod.Status.ContainerStatuses {
        container.RestartCount // use this number in your logic
    }
}
</code></pre> <p>The Status part of a Pod contains <code>conditions:</code> with some probe information and <code>containerStatuses:</code> with <code>restartCount:</code>, also illustrated in the Go example above. 
Use your custom logic to consume this information.</p> <p>A Pod is restarted whenever the livenessProbe fails.</p> <p>Example of a Pod Status:</p> <pre class="lang-yaml prettyprint-override"><code>status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: &quot;2020-09-15T07:17:25Z&quot;
    status: &quot;True&quot;
    type: Initialized
  containerStatuses:
  - containerID: docker://25b28170c8cec18ca3af0e9c792620a3edaf36aed02849d08c56b78610dec31b
    image: myimage
    imageID: docker-pullable://myimage@sha256:a432251b2674d24858f72b1392033e0d7a79786425555714d8e9a656505fa08c
    name: myapp
    restartCount: 0
</code></pre> <p>I hope that helps you to resolve your issue.</p>
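<p>For a quick &quot;is the Deployment ready?&quot; check, the readiness condition can also be reduced to comparing the counters in <code>deployment.Status</code> against the desired replica count — the same fields <code>kubectl rollout status</code> looks at. A minimal, self-contained sketch of that predicate (the <code>status</code> struct here stands in for the client-go <code>appsv1.DeploymentStatus</code> fields, so the example runs without a cluster):</p>

```go
package main

import "fmt"

// status mirrors the fields of appsv1.DeploymentStatus that matter
// for a readiness check.
type status struct {
	ObservedGeneration int64
	Replicas           int32
	UpdatedReplicas    int32
	AvailableReplicas  int32
}

// deploymentReady reports whether a Deployment has finished rolling out:
// the controller has observed the latest spec, every desired replica runs
// the new template, and every desired replica is available.
func deploymentReady(generation int64, desired int32, s status) bool {
	return s.ObservedGeneration >= generation &&
		s.UpdatedReplicas == desired &&
		s.AvailableReplicas == desired
}

func main() {
	rolledOut := status{ObservedGeneration: 5, Replicas: 2, UpdatedReplicas: 2, AvailableReplicas: 2}
	rolling := status{ObservedGeneration: 5, Replicas: 3, UpdatedReplicas: 1, AvailableReplicas: 1}

	fmt.Println(deploymentReady(5, 2, rolledOut)) // true
	fmt.Println(deploymentReady(5, 2, rolling))   // false
}
```

<p>With real client-go objects this would be called as <code>deploymentReady(deployment.Generation, *deployment.Spec.Replicas, ...)</code>; production code should additionally inspect <code>deployment.Status.Conditions</code> for the <code>Progressing</code> and <code>Available</code> condition types.</p>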
<p>I've got the following:</p> <p><strong>ingress.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: abcxyz
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: abcxyz
    http:
      paths:
      - path: /a/
        backend:
          serviceName: service-a
          servicePort: 80
      - path: /b/
        backend:
          serviceName: service-b
          servicePort: 80
</code></pre> <p>Output of <code>kubectl describe ingress abcxyz</code>:</p> <pre><code>Name:             abcxyz
Namespace:        default
Address:
Default backend:  default-http-backend:80 (&lt;none&gt;)
Rules:
  Host    Path  Backends
  ----    ----  --------
  abcxyz
          /a/   service-a:80 (&lt;none&gt;)
          /b/   service-b:80 (&lt;none&gt;)
Annotations:
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  CREATE  16m                nginx-ingress-controller  Ingress default/abcxyz
  Normal  UPDATE  12m (x2 over 15m)  nginx-ingress-controller  Ingress default/abcxyz
</code></pre> <p>Why is the address empty? I've installed the 'nginx ingress controller' through helm using <code>helm install stable/nginx-ingress</code> - and all of its relevant pods seem to be running fine.</p> <p>How can I provide access to the ingress?</p>
<p>The solution for me was:</p> <pre><code>minikube addons enable ingress </code></pre>
<p>I want to allow <strong>only</strong> listing of resources and not the manifest content.</p> <p>Example,</p> <pre><code>NAME                                 READY   STATUS    RESTARTS   AGE
airflow-redis-0                      1/1     Running   0          32h
airflow-postgresql-0                 1/1     Running   0          32h
airflow-scheduler-9416ddfd6f-n7jcr   2/2     Running   0          32h
airflow-webserver-9bf7f3c95c-kf1fx   1/1     Running   0          32h
airflow-worker-0                     2/2     Running   0          4h8m
</code></pre> <p>With <code>GET</code> permission users can see the manifest of the pod individually. For example, <code>kubectl describe pod airflow-worker-0 -n airflow</code></p> <p>Similarly, with <code>LIST</code> permission users can see the manifest of all the pods with commands like - <code>kubectl get pod --output=json</code></p> <p>Is it possible to restrict manifest access and just allow the listing of resources in K8 RBAC?</p>
<p>Showing only a listing of resources without the object content is not possible with any of the RBAC request verbs.</p> <p>The <code>list</code> verb can't be used for this: it provides the listing, but it also allows access to the full object content (e.g. <code>kubectl get pod --output=json</code> from the question is a <code>list</code> request).</p>
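<p>To illustrate (a sketch, not part of the original answer): even the most restrictive Role you can write for this use case still hands out full objects — there is no verb between &quot;nothing&quot; and &quot;full content&quot;:</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: airflow
  name: pod-lister       # illustrative name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]        # allows `kubectl get pods`, but the API response
                         # contains the complete pod manifests, so
                         # `kubectl get pod --output=json` works too
```

<p>RBAC authorizes whole API requests; filtering the fields of a response would have to happen outside RBAC, e.g. in a proxy in front of the API server.</p>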
<p>I'm trying to follow instructions on this <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="nofollow noreferrer">guide</a> but under docker.</p> <p>I set up a folder with:</p> <pre><code>.
β”œβ”€β”€ Dockerfile
└── main.py

0 directories, 2 files
</code></pre> <p><code>main.py</code> is:</p> <pre><code>#!/usr/bin/env python3
print(&quot;Docker is magic!&quot;)
</code></pre> <p>Dockerfile is:</p> <pre><code>FROM python:latest
COPY main.py /
CMD [ &quot;python&quot;, &quot;./main.py&quot; ]

FROM python:3.7-alpine
COPY ./ /usr/src/app/
WORKDIR /usr/src/app
RUN apk add curl openssl bash --no-cache
RUN curl -LO &quot;https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl&quot; \
    &amp;&amp; chmod +x ./kubectl \
    &amp;&amp; mv ./kubectl /usr/local/bin/kubectl

kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-node --type=LoadBalancer --port=38080
minikube start --driver=docker
kubectl get pods
</code></pre> <p>When I run docker run python-test I see in terminal:</p> <pre><code>Docker is magic!
</code></pre> <p>but I don't see the get pods output.</p> <p>My goal here is to run a simple <code>minikube</code> in the docker that just print the list of the pods. What is wrong here?</p>
<p>If you want to use Kubernetes inside a docker container my suggestion is to use k3d.</p> <blockquote> <p>k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes.</p> </blockquote> <p>You can download, install and use it directly with Docker. For more information you can follow the official documentation at <a href="https://k3d.io/" rel="nofollow noreferrer">https://k3d.io/</a>.</p> <p>To get the list of pods you don't need to create a k8s cluster inside a docker container. What you need is a kubeconfig file for any k8s cluster:</p> <pre><code>.
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ config
└── main.py

0 directories, 3 files
</code></pre> <p>after that:</p> <pre><code>FROM python:latest
COPY main.py /
CMD [ &quot;python&quot;, &quot;./main.py&quot; ]

FROM python:3.7-alpine
COPY ./ /usr/src/app/
WORKDIR /usr/src/app
RUN apk add curl openssl bash --no-cache
RUN curl -LO &quot;https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl&quot; \
    &amp;&amp; chmod +x ./kubectl \
    &amp;&amp; mv ./kubectl /usr/local/bin/kubectl

COPY config ~/.kube/config

# now if you execute kubectl get pods you can get the list of pods
# Example:
RUN kubectl get pods
</code></pre> <p>To get this config file you can follow this link: <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="nofollow noreferrer">Organizing Cluster Access Using kubeconfig Files</a></p> <p>I hope that helps you to resolve your issue.</p>
<p>Currently I have Spring containers running in a Kubernetes cluster. I am going through Udacity's Spring web classes and find the Eureka server interesting.</p> <p>Is there any benefit in using the Eureka server within the cluster?</p> <p>Any help will be appreciated.</p> <p>Thank you</p>
<p>This is mostly an opinion question, but ... probably not? The core Service system does most of the same thing. But if you're specifically using Eureka's service metadata system, then maybe.</p>
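<p>For comparison, plain Kubernetes service discovery covers the common Eureka use case: a <code>Service</code> gives every matching set of pods a stable DNS name. A minimal sketch (all names here are hypothetical):</p>

```yaml
# Other pods in the cluster can reach this service at
# http://orders.shop.svc.cluster.local -- no Eureka client needed.
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: shop
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```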
<p>Upon digging deeper into the Kubernetes architecture it seems that all Kubernetes clusters (both on-premises and cloud) must use Linux as their control-plane (a.k.a. master) nodes.</p> <p>With that being said the following questions come to mind:</p> <ul> <li>How come this is the case?</li> <li>Why couldn't Windows be used as the control-plane?</li> </ul>
<p>First of all I want to say that from a technical perspective it <em>would</em> be possible to have a control plane running on Windows. It is totally doable; however, no one wants to invest time into a solution which is worse than what already exists, and it would take quite some time in order to make this work. Why eat soup with a fork if you already have a spoon?</p> <p>Now one might wonder if I am exaggerating or not. So I'll try to explain some of the issues that Windows has when it comes to containerization. For that to happen I'll have to explain <strong>how containers work</strong> first:</p> <p>Nowadays whenever people are talking about containers they are talking about Linux containers (which I am also going to do in this answer unless stated otherwise). Containers essentially use Linux kernel features, most importantly (but not limited to) <a href="https://man7.org/linux/man-pages/man7/namespaces.7.html" rel="nofollow noreferrer">Linux namespaces</a>. There are many different namespaces (PID, Network, ...) that can be used for &quot;isolation&quot;. As an example one can create a new PID namespace, assign it a process, and that process will only be able to see itself as the running process (because it is &quot;isolated&quot;). Sounds familiar? Well, if you ever executed <code>ps aux</code> in a container this is what is going to happen. Since it is not possible to cover all the different kinds of Linux features that are essential for containers to work in a single post, I hope that by now it is clear that &quot;normal&quot; containers are essentially dependent on Linux.</p> <p>Okay, so if what I am saying is true, <strong>how can containers work on Windows at all</strong>?</p> <p>Guess what...they don't. What Windows is actually doing is spinning up a lightweight Linux machine in the background which then hosts containers. Sounds ridiculous? Well, it is. 
<a href="https://learn.microsoft.com/en-us/dotnet/architecture/microservices/container-docker-introduction/docker-defined" rel="nofollow noreferrer">Here</a> is a passage out of Microsoft's documentation:</p> <blockquote> <p>However, Windows images can run only on Windows hosts and Linux images can run on Linux hosts and Windows hosts (using a Hyper-V Linux VM, so far), where host means a server or a VM.</p> </blockquote> <p>So what about <strong>Windows containers</strong> then (as opposed to Linux containers)?</p> <p>Windows containers do run natively on Windows by using features of the Windows kernel, similarly to how Linux containers do. Developers tried to mimic the behavior of Linux containers as much as possible; however, due to poor design of the Windows kernel this simply is not possible and many hacks had to be used. As one can imagine, many issues come with that decision, too many to actually mention them all. Just to mention one: Windows containers are way bigger than Linux containers. It is very common for Windows containers to actually reach gigabyte sizes. <a href="https://devblogs.microsoft.com/dotnet/we-made-windows-server-core-container-images-40-smaller/" rel="nofollow noreferrer">Even after making Windows Server Core images smaller by 40% back in 2019</a>, the insider image was still over 1GB (uncompressed even over 2.5GB).</p> <p>With all that overhead in mind, Linux simply is superior in every way when it comes to containerization (and many other things as well) and there has never been a need to have a Windows control plane.</p> <p><strong>TL;DR</strong></p> <p>Because Windows is a poor operating system when it comes to containerization (and many other things).</p>
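<p>The PID-namespace isolation described above is easy to see on any Linux host (a quick sketch; requires root and the util-linux <code>unshare</code> tool):</p>

```shell
# Run `ps aux` inside a fresh PID namespace with its own /proc mount:
# it only sees the processes of that namespace, just like in a container.
sudo unshare --pid --fork --mount-proc ps aux
```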
<blockquote> <p>readinessProbe: Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success</p> </blockquote> <p>If the readiness probe fails (and the Pod's IP address is removed from the endpoints), what happens next? Will the Pod's readiness probe conditions be checked again? Will it check again after the initial delay? Is there any chance the Pod's IP address is added to the endpoints again (if the Pod self-healed after the readiness probe fails)? Will that Pod receive traffic again in case it's healed?</p>
<blockquote> <p>Will the pod's readiness probe conditions be checked again?</p> </blockquote> <p>Yes, the condition will be checked again, according to the thresholds you have set. The readiness of the pod is re-checked on every <code>periodSeconds</code> interval.</p> <blockquote> <p>Will it check again after the initial delay?</p> </blockquote> <p>The first check happens only after the initial delay. The initial delay comes into the picture when the pod is initializing or starting: the readiness check waits for the configured time, and after that it starts checking the readiness of the pod on each interval, e.g. every <code>5</code> or <code>10</code> seconds, depending on the configuration of <code>periodSeconds</code>.</p> <blockquote> <p>Is there any chance the pod's IP address is added to the endpoints again (if the pod self-healed after the readiness probe failed)?</p> </blockquote> <p>Yes. If it auto-healed and <code>successThreshold</code> is set to <code>1</code>, a single 200 response marks the pod as healthy and running again, so its IP address is re-added to the endpoints and it gets traffic again.</p> <blockquote> <p>Will the pod ever receive traffic again in case it's healed?</p> </blockquote> <p>Yes.</p> <p>For example:</p> <pre><code>readinessProbe:
  httpGet:
    path: /k8/readiness
    port: 9595
  initialDelaySeconds: 25
  periodSeconds: 8
  timeoutSeconds: 10
  successThreshold: 1
  failureThreshold: 30
livenessProbe:
  httpGet:
    path: /k8/liveness
    port: 9595
  initialDelaySeconds: 30
  periodSeconds: 8
  timeoutSeconds: 10
  successThreshold: 1
  failureThreshold: 30
</code></pre> <p>The readiness and liveness probes check the status of the HTTP endpoints configured above.</p> <p><strong>initialDelaySeconds</strong>: only comes into the picture when your pod is initializing or starting again (e.g. after a restart). So while the pod is starting, the readiness probe won't check the service until the configured delay has passed (<strong>25</strong> seconds for the readiness probe above).</p> <p>After the initial delay it will try to check the status of the endpoint. 
If the check succeeds, the pod is marked <strong>Ready</strong> to handle traffic; otherwise it retries on every <code>periodSeconds</code> interval, i.e. after another <strong>8</strong> seconds, until it gets a <code>200 response</code>.</p> <p><strong>timeoutSeconds</strong>: how long a single probe request waits for a response from the service before it is counted as a failed check.</p> <p><strong>failureThreshold</strong>: the maximum number of consecutive failed checks, after which the pod is restarted (liveness) or changed to the <strong>Not Ready</strong> state (readiness).</p> <p><strong>successThreshold</strong>: the number of consecutive successful checks required; with a value of <code>1</code>, a single successful response changes the pod status to <strong>Ready</strong>.</p> <p>Only if <strong>30</strong> consecutive checks fail (<code>failureThreshold</code>) is the pod marked <strong>Not Ready</strong>; if a single check succeeds in between (<code>successThreshold</code>), the pod is marked <strong>Ready</strong> again. The same applies to the liveness probe.</p> <p><strong>Note</strong>: the example above is just for reference and is not tuned for an actual production scenario.</p> <p>Read more at: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</a></p>
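<p>With the example values above you can estimate the worst-case time before the pod is marked <strong>Not Ready</strong> as <code>initialDelaySeconds + periodSeconds * failureThreshold</code>. A rough sketch, assuming every check fails from the very first attempt:</p>

```shell
# Numbers taken from the readiness probe example above.
initial=25; period=8; failures=30
# Worst case before the pod is marked Not Ready, in seconds:
echo $((initial + period * failures))   # prints 265
```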
<p>I originally posted this question as an issue on the GitHub project for the AWS Load Balancer Controller here: <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2069" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2069</a>.</p> <p>I'm seeing some odd behavior that I can't trace or explain when trying to get the <code>loadBalancerDnsName</code> from an ALB created by the controller. I'm using v2.2.0 of the AWS Load Balancer Controller in a CDK project. The ingress that I deploy triggers the provisioning of an ALB, and that ALB can connect to my K8s workloads running in EKS.</p> <p>Here's my problem: I'm trying to automate the creation of a Route53 A Record that points to the <code>loadBalancerDnsName</code> of the load balancer, but the <code>loadBalancerDnsName</code> that I get in my CDK script is not the same as the <code>loadBalancerDnsName</code> that shows up in the AWS console once my stack has finished deploying. The value in the console is correct and I can get a response from that URL. My CDK script outputs the value of the DnsName as a CfnOutput value, but that URL does not point to anything.</p> <p>In CDK, I have tried to use <code>KubernetesObjectValue</code> to get the DNS name from the load balancer. 
This isn't working (see this related issue: <a href="https://github.com/aws/aws-cdk/issues/14933" rel="nofollow noreferrer">https://github.com/aws/aws-cdk/issues/14933</a>), so I'm trying to look up the Load Balancer with CDK's <code>.fromLookup</code>, using a tag that I added through my ingress annotation:</p> <pre class="lang-js prettyprint-override"><code>const alb = elbv2.ApplicationLoadBalancer.fromLookup(this, 'appAlb', {
  loadBalancerTags: {
    Environment: 'test',
  },
});
</code></pre> <p>Here's the project in which I'm running into this issue: <a href="https://github.com/briancaffey/django-cdk" rel="nofollow noreferrer">https://github.com/briancaffey/django-cdk</a></p> <p>Here are some relevant files:</p> <p>Installing the AWS Load Balancer Controller with CDK: <a href="https://github.com/briancaffey/django-cdk/blob/main/src/eks/awslbc/index.ts" rel="nofollow noreferrer">https://github.com/briancaffey/django-cdk/blob/main/src/eks/awslbc/index.ts</a></p> <p>Ingress object definition: <a href="https://github.com/briancaffey/django-cdk/tree/main/src/eks/resources/ingress" rel="nofollow noreferrer">https://github.com/briancaffey/django-cdk/tree/main/src/eks/resources/ingress</a></p> <p>ALB <code>fromLookup</code> method that is generating the wrong DnsName: <a href="https://github.com/briancaffey/django-cdk/blob/main/src/django-eks.ts#L297" rel="nofollow noreferrer">https://github.com/briancaffey/django-cdk/blob/main/src/django-eks.ts#L297</a></p> <p>Does anyone have any ideas about what could be causing this or how I can debug? 
If there is any other information that would be helpful to collect in my debugging process, please let me know.</p> <p>I'm not sure if this is possible, but it seems like the Load Balancer might be created, assigned a DnsName, and then maybe fails provisioning at some point and tries to create another ALB with another DnsName that is successful.</p> <p><strong>Update</strong>: Here are logs from the AWS Load Balancer Controller deployment:</p> <pre><code>~/git/github/django-cdk$ kubectl logs deployment/goeksstackdjangoekssamplealbingresscontroller48a16415-aws-load -n kube-system </code></pre> <pre><code>Found 2 pods, using pod/goeksstackdjangoekssamplealbingresscontroller48a16415-aws-c6d8h {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379830.094481,&quot;msg&quot;:&quot;version&quot;,&quot;GitVersion&quot;:&quot;v2.2.0&quot;,&quot;GitCommit&quot;:&quot;68c417a7ea37ff153f053d9ffef1cc5c70d7e211&quot;,&quot;BuildDate&quot;:&quot;2021-05-14T21:49:05+0000&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379830.1275837,&quot;logger&quot;:&quot;controller-runtime.metrics&quot;,&quot;msg&quot;:&quot;metrics server is starting to listen&quot;,&quot;addr&quot;:&quot;:8080&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379830.1310954,&quot;logger&quot;:&quot;setup&quot;,&quot;msg&quot;:&quot;adding health check for controller&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379830.1312613,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;registering webhook&quot;,&quot;path&quot;:&quot;/mutate-v1-pod&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379830.131368,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;registering webhook&quot;,&quot;path&quot;:&quot;/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding&quot;} 
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379830.131444,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;registering webhook&quot;,&quot;path&quot;:&quot;/validate-elbv2-k8s-aws-v1beta1-targetgroupbinding&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379830.1315207,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;registering webhook&quot;,&quot;path&quot;:&quot;/validate-networking-v1beta1-ingress&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379830.1316643,&quot;logger&quot;:&quot;setup&quot;,&quot;msg&quot;:&quot;starting podInfo repo&quot;} I0611 02:50:32.131767 1 leaderelection.go:242] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader... {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.1318467,&quot;logger&quot;:&quot;controller-runtime.manager&quot;,&quot;msg&quot;:&quot;starting metrics server&quot;,&quot;path&quot;:&quot;/metrics&quot;} I0611 02:50:32.150030 1 leaderelection.go:252] successfully acquired lease kube-system/aws-load-balancer-controller-leader {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2321155,&quot;logger&quot;:&quot;controller-runtime.webhook.webhooks&quot;,&quot;msg&quot;:&quot;starting webhook server&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2321231,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;reconcilerGroup&quot;:&quot;elbv2.k8s.aws&quot;,&quot;reconcilerKind&quot;:&quot;TargetGroupBinding&quot;,&quot;controller&quot;:&quot;targetGroupBinding&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2321854,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting 
EventSource&quot;,&quot;reconcilerGroup&quot;:&quot;elbv2.k8s.aws&quot;,&quot;reconcilerKind&quot;:&quot;TargetGroupBinding&quot;,&quot;controller&quot;:&quot;targetGroupBinding&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2322075,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;reconcilerGroup&quot;:&quot;elbv2.k8s.aws&quot;,&quot;reconcilerKind&quot;:&quot;TargetGroupBinding&quot;,&quot;controller&quot;:&quot;targetGroupBinding&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2323742,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;source&quot;:&quot;channel source: 0xc00053a820&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2324347,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;source&quot;:&quot;channel source: 0xc00053a870&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2324626,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2324827,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2325017,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} 
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.232624,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;service&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2326622,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting Controller&quot;,&quot;controller&quot;:&quot;service&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.232741,&quot;logger&quot;:&quot;controller-runtime.certwatcher&quot;,&quot;msg&quot;:&quot;Updated current TLS certificate&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2328286,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;serving webhook server&quot;,&quot;host&quot;:&quot;&quot;,&quot;port&quot;:9443} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.2332337,&quot;logger&quot;:&quot;controller-runtime.certwatcher&quot;,&quot;msg&quot;:&quot;Starting certificate watcher&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.3324778,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;reconcilerGroup&quot;:&quot;elbv2.k8s.aws&quot;,&quot;reconcilerKind&quot;:&quot;TargetGroupBinding&quot;,&quot;controller&quot;:&quot;targetGroupBinding&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.3327508,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting workers&quot;,&quot;controller&quot;:&quot;service&quot;,&quot;worker count&quot;:3} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.332836,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;source&quot;:&quot;channel source: 0xc00053a8c0&quot;} 
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.3328674,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.4330835,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting Controller&quot;,&quot;reconcilerGroup&quot;:&quot;elbv2.k8s.aws&quot;,&quot;reconcilerKind&quot;:&quot;TargetGroupBinding&quot;,&quot;controller&quot;:&quot;targetGroupBinding&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.4333706,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting workers&quot;,&quot;reconcilerGroup&quot;:&quot;elbv2.k8s.aws&quot;,&quot;reconcilerKind&quot;:&quot;TargetGroupBinding&quot;,&quot;controller&quot;:&quot;targetGroupBinding&quot;,&quot;worker count&quot;:3} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.433311,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting EventSource&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;source&quot;:&quot;kind source: /, Kind=&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.4334028,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting Controller&quot;,&quot;controller&quot;:&quot;ingress&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379832.433451,&quot;logger&quot;:&quot;controller&quot;,&quot;msg&quot;:&quot;Starting workers&quot;,&quot;controller&quot;:&quot;ingress&quot;,&quot;worker count&quot;:3} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379851.8822243,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;successfully built 
model&quot;,&quot;model&quot;:&quot;{\&quot;id\&quot;:\&quot;app/app-ingress\&quot;,\&quot;resources\&quot;:{\&quot;AWS::EC2::SecurityGroup\&quot;:{\&quot;ManagedLBSecurityGroup\&quot;:{\&quot;spec\&quot;:{\&quot;groupName\&quot;:\&quot;k8s-app-appingre-64b5836d2b\&quot;,\&quot;description\&quot;:\&quot;[k8s] Managed SecurityGroup for LoadBalancer\&quot;,\&quot;tags\&quot;:{\&quot;Environment\&quot;:\&quot;test\&quot;},\&quot;ingress\&quot;:[{\&quot;ipProtocol\&quot;:\&quot;tcp\&quot;,\&quot;fromPort\&quot;:80,\&quot;toPort\&quot;:80,\&quot;ipRanges\&quot;:[{\&quot;cidrIP\&quot;:\&quot;0.0.0.0/0\&quot;}]}]}}},\&quot;AWS::ElasticLoadBalancingV2::Listener\&quot;:{\&quot;80\&quot;:{\&quot;spec\&quot;:{\&quot;loadBalancerARN\&quot;:{\&quot;$ref\&quot;:\&quot;#/resources/AWS::ElasticLoadBalancingV2::LoadBalancer/LoadBalancer/status/loadBalancerARN\&quot;},\&quot;port\&quot;:80,\&quot;protocol\&quot;:\&quot;HTTP\&quot;,\&quot;defaultActions\&quot;:[{\&quot;type\&quot;:\&quot;fixed-response\&quot;,\&quot;fixedResponseConfig\&quot;:{\&quot;contentType\&quot;:\&quot;text/plain\&quot;,\&quot;statusCode\&quot;:\&quot;404\&quot;}}],\&quot;tags\&quot;:{\&quot;Environment\&quot;:\&quot;test\&quot;}}}},\&quot;AWS::ElasticLoadBalancingV2::ListenerRule\&quot;:{\&quot;80:1\&quot;:{\&quot;spec\&quot;:{\&quot;listenerARN\&quot;:{\&quot;$ref\&quot;:\&quot;#/resources/AWS::ElasticLoadBalancingV2::Listener/80/status/listenerARN\&quot;},\&quot;priority\&quot;:1,\&quot;actions\&quot;:[{\&quot;type\&quot;:\&quot;forward\&quot;,\&quot;forwardConfig\&quot;:{\&quot;targetGroups\&quot;:[{\&quot;targetGroupARN\&quot;:{\&quot;$ref\&quot;:\&quot;#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/app/app-ingress-api-http:80/status/targetGroupARN\&quot;}}]}}],\&quot;conditions\&quot;:[{\&quot;field\&quot;:\&quot;path-pattern\&quot;,\&quot;pathPatternConfig\&quot;:{\&quot;values\&quot;:[\&quot;/*\&quot;]}}],\&quot;tags\&quot;:{\&quot;Environment\&quot;:\&quot;test\&quot;}}}},\&quot;AWS::ElasticLoadB
alancingV2::LoadBalancer\&quot;:{\&quot;LoadBalancer\&quot;:{\&quot;spec\&quot;:{\&quot;name\&quot;:\&quot;k8s-app-appingre-df49e963f5\&quot;,\&quot;type\&quot;:\&quot;application\&quot;,\&quot;scheme\&quot;:\&quot;internet-facing\&quot;,\&quot;ipAddressType\&quot;:\&quot;ipv4\&quot;,\&quot;subnetMapping\&quot;:[{\&quot;subnetID\&quot;:\&quot;subnet-0257383f56bbc4810\&quot;},{\&quot;subnetID\&quot;:\&quot;subnet-0b8b2e282788f64bc\&quot;}],\&quot;securityGroups\&quot;:[{\&quot;$ref\&quot;:\&quot;#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\&quot;}],\&quot;tags\&quot;:{\&quot;Environment\&quot;:\&quot;test\&quot;}}}},\&quot;AWS::ElasticLoadBalancingV2::TargetGroup\&quot;:{\&quot;app/app-ingress-api-http:80\&quot;:{\&quot;spec\&quot;:{\&quot;name\&quot;:\&quot;k8s-app-apihttp-ee37d33f98\&quot;,\&quot;targetType\&quot;:\&quot;instance\&quot;,\&quot;port\&quot;:30867,\&quot;protocol\&quot;:\&quot;HTTP\&quot;,\&quot;protocolVersion\&quot;:\&quot;HTTP1\&quot;,\&quot;healthCheckConfig\&quot;:{\&quot;port\&quot;:\&quot;traffic-port\&quot;,\&quot;protocol\&quot;:\&quot;HTTP\&quot;,\&quot;path\&quot;:\&quot;/\&quot;,\&quot;matcher\&quot;:{\&quot;httpCode\&quot;:\&quot;200\&quot;},\&quot;intervalSeconds\&quot;:15,\&quot;timeoutSeconds\&quot;:5,\&quot;healthyThresholdCount\&quot;:2,\&quot;unhealthyThresholdCount\&quot;:2},\&quot;tags\&quot;:{\&quot;Environment\&quot;:\&quot;test\&quot;}}}},\&quot;K8S::ElasticLoadBalancingV2::TargetGroupBinding\&quot;:{\&quot;app/app-ingress-api-http:80\&quot;:{\&quot;spec\&quot;:{\&quot;template\&quot;:{\&quot;metadata\&quot;:{\&quot;name\&quot;:\&quot;k8s-app-apihttp-ee37d33f98\&quot;,\&quot;namespace\&quot;:\&quot;app\&quot;,\&quot;creationTimestamp\&quot;:null},\&quot;spec\&quot;:{\&quot;targetGroupARN\&quot;:{\&quot;$ref\&quot;:\&quot;#/resources/AWS::ElasticLoadBalancingV2::TargetGroup/app/app-ingress-api-http:80/status/targetGroupARN\&quot;},\&quot;targetType\&quot;:\&quot;instance\&quot;,\&quot;serviceRef\&quo
t;:{\&quot;name\&quot;:\&quot;api-http\&quot;,\&quot;port\&quot;:80},\&quot;networking\&quot;:{\&quot;ingress\&quot;:[{\&quot;from\&quot;:[{\&quot;securityGroup\&quot;:{\&quot;groupID\&quot;:{\&quot;$ref\&quot;:\&quot;#/resources/AWS::EC2::SecurityGroup/ManagedLBSecurityGroup/status/groupID\&quot;}}}],\&quot;ports\&quot;:[{\&quot;protocol\&quot;:\&quot;TCP\&quot;}]}]}}}}}}}}&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379852.083325,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;creating securityGroup&quot;,&quot;resourceID&quot;:&quot;ManagedLBSecurityGroup&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379852.2566178,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;created securityGroup&quot;,&quot;resourceID&quot;:&quot;ManagedLBSecurityGroup&quot;,&quot;securityGroupID&quot;:&quot;sg-080bffd697dc0ab27&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379852.4109924,&quot;msg&quot;:&quot;authorizing securityGroup ingress&quot;,&quot;securityGroupID&quot;:&quot;sg-080bffd697dc0ab27&quot;,&quot;permission&quot;:[{&quot;FromPort&quot;:80,&quot;IpProtocol&quot;:&quot;tcp&quot;,&quot;IpRanges&quot;:[{&quot;CidrIp&quot;:&quot;0.0.0.0/0&quot;,&quot;Description&quot;:&quot;&quot;}],&quot;Ipv6Ranges&quot;:null,&quot;PrefixListIds&quot;:null,&quot;ToPort&quot;:80,&quot;UserIdGroupPairs&quot;:null}]} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379852.5575335,&quot;msg&quot;:&quot;authorized securityGroup ingress&quot;,&quot;securityGroupID&quot;:&quot;sg-080bffd697dc0ab27&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379852.6474898,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;creating targetGroup&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;app/app-ingress-api-http:80&quot;} 
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379852.9693868,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;created targetGroup&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;app/app-ingress-api-http:80&quot;,&quot;arn&quot;:&quot;arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/k8s-app-apihttp-ee37d33f98/8e8cf427880a005f&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.0108032,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;creating loadBalancer&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;LoadBalancer&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.6124952,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;created loadBalancer&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;LoadBalancer&quot;,&quot;arn&quot;:&quot;arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/k8s-app-appingre-df49e963f5/e1e08ff54e34da78&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.6379986,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;creating listener&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;80&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.7017045,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;created listener&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;80&quot;,&quot;arn&quot;:&quot;arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/k8s-app-appingre-df49e963f5/e1e08ff54e34da78/7896e4ee12889f1d&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.7323081,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;creating listener 
rule&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;80:1&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.7993383,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;created listener rule&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;80:1&quot;,&quot;arn&quot;:&quot;arn:aws:elasticloadbalancing:us-east-1:111111111111:listener-rule/app/k8s-app-appingre-df49e963f5/e1e08ff54e34da78/7896e4ee12889f1d/2fc270d382a475d1&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.7995286,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;creating targetGroupBinding&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;app/app-ingress-api-http:80&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.836185,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;created targetGroupBinding&quot;,&quot;stackID&quot;:&quot;app/app-ingress&quot;,&quot;resourceID&quot;:&quot;app/app-ingress-api-http:80&quot;,&quot;targetGroupBinding&quot;:{&quot;namespace&quot;:&quot;app&quot;,&quot;name&quot;:&quot;k8s-app-apihttp-ee37d33f98&quot;}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379853.989902,&quot;logger&quot;:&quot;controllers.ingress&quot;,&quot;msg&quot;:&quot;successfully deployed model&quot;,&quot;ingressGroup&quot;:&quot;app/app-ingress&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379854.0531545,&quot;msg&quot;:&quot;authorizing securityGroup 
ingress&quot;,&quot;securityGroupID&quot;:&quot;sg-03aa126eb1aaeacc5&quot;,&quot;permission&quot;:[{&quot;FromPort&quot;:0,&quot;IpProtocol&quot;:&quot;tcp&quot;,&quot;IpRanges&quot;:null,&quot;Ipv6Ranges&quot;:null,&quot;PrefixListIds&quot;:null,&quot;ToPort&quot;:65535,&quot;UserIdGroupPairs&quot;:[{&quot;Description&quot;:&quot;elbv2.k8s.aws/targetGroupBinding=shared&quot;,&quot;GroupId&quot;:&quot;sg-080bffd697dc0ab27&quot;,&quot;GroupName&quot;:null,&quot;PeeringStatus&quot;:null,&quot;UserId&quot;:null,&quot;VpcId&quot;:null,&quot;VpcPeeringConnectionId&quot;:null}]}]} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379854.1990266,&quot;msg&quot;:&quot;authorized securityGroup ingress&quot;,&quot;securityGroupID&quot;:&quot;sg-03aa126eb1aaeacc5&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379854.3660266,&quot;msg&quot;:&quot;registering targets&quot;,&quot;arn&quot;:&quot;arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/k8s-app-apihttp-ee37d33f98/8e8cf427880a005f&quot;,&quot;targets&quot;:[{&quot;AvailabilityZone&quot;:null,&quot;Id&quot;:&quot;i-08b9e38bcf885a07a&quot;,&quot;Port&quot;:30867},{&quot;AvailabilityZone&quot;:null,&quot;Id&quot;:&quot;i-0cdcd5bd12476c990&quot;,&quot;Port&quot;:30867}]} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379854.5757735,&quot;msg&quot;:&quot;registered targets&quot;,&quot;arn&quot;:&quot;arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/k8s-app-apihttp-ee37d33f98/8e8cf427880a005f&quot;} </code></pre> <p>Here are the logs for the other pod in the AWS Load Balancer Controller deployment:</p> <pre><code>~/git/github/django-cdk$ kubectl logs -n kube-system goeksstackdjangoekssamplealbingresscontroller48a16415-aws-q9nmg 
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379840.7888832,&quot;msg&quot;:&quot;version&quot;,&quot;GitVersion&quot;:&quot;v2.2.0&quot;,&quot;GitCommit&quot;:&quot;68c417a7ea37ff153f053d9ffef1cc5c70d7e211&quot;,&quot;BuildDate&quot;:&quot;2021-05-14T21:49:05+0000&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379840.8260994,&quot;logger&quot;:&quot;controller-runtime.metrics&quot;,&quot;msg&quot;:&quot;metrics server is starting to listen&quot;,&quot;addr&quot;:&quot;:8080&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379840.8316746,&quot;logger&quot;:&quot;setup&quot;,&quot;msg&quot;:&quot;adding health check for controller&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379840.8317719,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;registering webhook&quot;,&quot;path&quot;:&quot;/mutate-v1-pod&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379840.8318048,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;registering webhook&quot;,&quot;path&quot;:&quot;/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379840.83182,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;registering webhook&quot;,&quot;path&quot;:&quot;/validate-elbv2-k8s-aws-v1beta1-targetgroupbinding&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379840.831849,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;registering webhook&quot;,&quot;path&quot;:&quot;/validate-networking-v1beta1-ingress&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379840.8319223,&quot;logger&quot;:&quot;setup&quot;,&quot;msg&quot;:&quot;starting podInfo repo&quot;} I0611 02:50:42.832101 1 leaderelection.go:242] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader... 
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379842.8324547,&quot;logger&quot;:&quot;controller-runtime.manager&quot;,&quot;msg&quot;:&quot;starting metrics server&quot;,&quot;path&quot;:&quot;/metrics&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379842.9327362,&quot;logger&quot;:&quot;controller-runtime.webhook.webhooks&quot;,&quot;msg&quot;:&quot;starting webhook server&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379842.9332633,&quot;logger&quot;:&quot;controller-runtime.certwatcher&quot;,&quot;msg&quot;:&quot;Updated current TLS certificate&quot;} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379842.9333658,&quot;logger&quot;:&quot;controller-runtime.webhook&quot;,&quot;msg&quot;:&quot;serving webhook server&quot;,&quot;host&quot;:&quot;&quot;,&quot;port&quot;:9443} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1623379842.9334989,&quot;logger&quot;:&quot;controller-runtime.certwatcher&quot;,&quot;msg&quot;:&quot;Starting certificate watcher&quot;} </code></pre> <p>I replaced my account ID with <code>111111111111</code>.</p> <p>Here is the ALB DNS name from my CDK Stack's CfnOutput:</p> <pre><code>k8s-app-appingre-a5bb1f9208-1217069225.us-east-1.elb.amazonaws.com </code></pre> <p>and here is the DNS name for the ALB from the EC2 &gt; Load Balancers console:</p> <pre><code>k8s-app-appingre-df49e963f5-842078657.us-east-1.elb.amazonaws.com </code></pre>
<p>I think that the answer is to use <a href="https://github.com/kubernetes-sigs/external-dns" rel="nofollow noreferrer">external-dns</a>.</p> <blockquote> <p>ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.</p> </blockquote>
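<p>Once external-dns is deployed with credentials for your DNS provider (Route 53 in this case), it watches Services and Ingresses and creates the records itself. For a Service, the hostname is declared with an annotation; a sketch (the hostname and service names are placeholders):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
  annotations:
    # external-dns reads this and creates a DNS record pointing at the load balancer
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: app
  ports:
    - port: 80
```

<p>For an Ingress, external-dns instead takes the hostnames from the Ingress rules, so no extra annotation is needed there.</p>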
<p>Helm Charts have the concept of Version and appVersion.</p> <p>We are using the Version to document whether the content of the Helm Chart changed or not (for example, a template, deployment.yaml has a new Environment value, or a configmap.yaml has an additional value); for these scenarios the Version number should increase. We are using appVersion to document docker image tag changes (so the actual Version of the Business Application; I know there can be multiple container images, but we are able to identify one of those as the main application and use its tag)...</p> <p>Now in our development process there can be multiple valid images of the Business Application (feature development, let's say feature1, feature2, feature3), so we can have a constellation like the following: [Helm Chart: myChart Version: 5.1 appVersion: feature1], [Helm Chart: myChart Version: 5.1 appVersion: feature2], [Helm Chart: myChart Version: 5.1 appVersion: feature3]. Most deployments are automated, but there can be cases where we have to ask somebody to deploy feature2.</p> <p>Now here comes the dilemma: in our Helm Repository we would have these 3 charts.</p> <pre><code>5.1-&gt;feature1 5.1-&gt;feature2 5.1-&gt;feature3 </code></pre> <p>but when I look at the Helm commands &quot;Helm Install&quot;, &quot;Helm Upgrade&quot;, &quot;Helm Pull&quot; I only see &quot;--version&quot; as a parameter but no &quot;--appVersion&quot;, so it is not possible to install</p> <pre><code>helm upgrade -i myChartFeature2 myChart --version 5.1 --appVersion feature2 </code></pre> <p>We don't want to version our charts like &quot;5.1.0-feature1&quot;, because then we will lose our ability to identify that we have a new Chart when something in the templates has changed, or a new version when the Business Logic has changed...</p> <p>So my question is</p> <ul> <li>is there a way to say I want to install this specific appVersion of my Chart?</li> </ul> <p>Thx for answers...</p>
<p>appVersion doesn't work this way, it isn't involved in dependency management. It's just for humans to know &quot;this chart packages version 1.2 of Foobar&quot;, though these days many charts support multiple versions of the underlying thing so it usually is just set to the default one, if it's set at all.</p>
<p>I was able to successfully start a <strong>keycloak server</strong> on an <strong>AWS K3S Kubernetes Cluster</strong> with an <strong>Istio Gateway</strong> and an <strong>AWS HTTPS Application Load Balancer</strong>.</p> <p>I can successfully see the <strong>Keycloak Home Page</strong>: <strong><a href="https://keycloak.skycomposer.net/auth/" rel="nofollow noreferrer">https://keycloak.skycomposer.net/auth/</a></strong></p> <p>But when I click on the <strong>Admin Console</strong> link, a <strong>Blank Page</strong> is shown: <strong><a href="https://keycloak.skycomposer.net/auth/admin/master/console/" rel="nofollow noreferrer">https://keycloak.skycomposer.net/auth/admin/master/console/</a></strong></p> <p>The <strong>Browser Inspect Tool</strong> shows that the <strong><a href="http://keycloak.skycomposer.net/auth/js/keycloak.js?version=rk826" rel="nofollow noreferrer">http://keycloak.skycomposer.net/auth/js/keycloak.js?version=rk826</a></strong> link returns the following status:</p> <pre><code>(blocked:mixed-content) </code></pre> <p>I did some research on the internet and the reason seems to be related to redirection from <strong>https</strong> to <strong>http</strong>, which is not correctly handled by the <strong>istio gateway</strong> and <strong>aws load balancer</strong>.</p> <p>But unfortunately, I couldn't find a solution for my particular environment.</p> <p>Here are my configuration files:</p> <p><strong>keycloak-config.yaml:</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: keycloak data: KEYCLOAK_USER: admin@keycloak KEYCLOAK_MGMT_USER: mgmt@keycloak JAVA_OPTS_APPEND: '-Djboss.http.port=8080' PROXY_ADDRESS_FORWARDING: 'true' KEYCLOAK_HOSTNAME: 'keycloak.skycomposer.net' KEYCLOAK_FRONTEND_URL: 'https://keycloak.skycomposer.net/auth' KEYCLOAK_LOGLEVEL: INFO ROOT_LOGLEVEL: INFO DB_VENDOR: H2 </code></pre> <p><strong>keycloak-deployment.yaml:</strong></p> <pre><code>kind: Deployment apiVersion: apps/v1 metadata: name: keycloak
labels: app: keycloak spec: replicas: 1 selector: matchLabels: app: keycloak template: metadata: labels: app: keycloak annotations: sidecar.istio.io/rewriteAppHTTPProbers: &quot;true&quot; spec: containers: - name: keycloak image: jboss/keycloak:13.0.1 imagePullPolicy: Always ports: - containerPort: 8080 hostPort: 8080 volumeMounts: - name: keycloak-data mountPath: /opt/jboss/keycloak/standalone/data env: - name: KEYCLOAK_USER valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_USER - name: KEYCLOAK_MGMT_USER valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_MGMT_USER - name: JAVA_OPTS_APPEND valueFrom: configMapKeyRef: name: keycloak key: JAVA_OPTS_APPEND - name: DB_VENDOR valueFrom: configMapKeyRef: name: keycloak key: DB_VENDOR - name: PROXY_ADDRESS_FORWARDING valueFrom: configMapKeyRef: name: keycloak key: PROXY_ADDRESS_FORWARDING - name: KEYCLOAK_HOSTNAME valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_HOSTNAME - name: KEYCLOAK_FRONTEND_URL valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_FRONTEND_URL - name: KEYCLOAK_LOGLEVEL valueFrom: configMapKeyRef: name: keycloak key: KEYCLOAK_LOGLEVEL - name: ROOT_LOGLEVEL valueFrom: configMapKeyRef: name: keycloak key: ROOT_LOGLEVEL - name: KEYCLOAK_PASSWORD valueFrom: secretKeyRef: name: keycloak key: KEYCLOAK_PASSWORD - name: KEYCLOAK_MGMT_PASSWORD valueFrom: secretKeyRef: name: keycloak key: KEYCLOAK_MGMT_PASSWORD volumes: - name: keycloak-data persistentVolumeClaim: claimName: keycloak-pvc </code></pre> <p><strong>keycloak-service.yaml:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: keycloak spec: ports: - protocol: TCP name: http port: 80 targetPort: 8080 selector: app: keycloak </code></pre> <p><strong>istio-gateway.yaml:</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - 
&quot;keycloak.skycomposer.net&quot; </code></pre> <p><strong>istio-virtualservice.yaml:</strong></p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: keycloak spec: hosts: - keycloak.skycomposer.net gateways: - istio-gateway http: - match: - uri: prefix: / route: - destination: host: keycloak.default.svc.cluster.local port: number: 80 </code></pre> <p>I successfully installed <strong>istio 1.9.1</strong> with <strong>istioctl</strong>:</p> <pre><code>istioctl install \ --set meshConfig.accessLogFile=/dev/stdout \ --skip-confirmation </code></pre> <p>Also, I labelled default namespace with <strong>istio injection</strong>, so all my pods in default namespace have <strong>istio sidecar container</strong>:</p> <pre><code>kubectl label namespace default istio-injection=enabled NAME READY STATUS RESTARTS AGE whoami-6c4757bbb5-9zkbl 2/2 Running 0 13m notification-microservice-5dfcf96b95-ll8lm 2/2 Running 0 13m customermgmt-6b48586868-ddlnw 2/2 Running 0 13m usermgmt-c5b65964-df2vc 2/2 Running 0 13m keycloak-d48f9bbbf-tsm5h 2/2 Running 0 13m </code></pre> <p>Here is also <strong>terraform</strong> configuration of <strong>AWS Load Balancer</strong>:</p> <pre><code>resource &quot;aws_lb&quot; &quot;mtc_lb&quot; { name = &quot;mtc-loadbalancer&quot; subnets = var.public_subnets security_groups = [var.public_sg] idle_timeout = 400 } resource &quot;aws_lb_target_group&quot; &quot;mtc_tg&quot; { name = &quot;mtc-lb-tg-${substr(uuid(), 0, 3)}&quot; port = var.tg_port protocol = var.tg_protocol vpc_id = var.vpc_id lifecycle { create_before_destroy = true ignore_changes = [name] } health_check { healthy_threshold = var.elb_healthy_threshold unhealthy_threshold = var.elb_unhealthy_threshold timeout = var.elb_timeout interval = var.elb_interval } } resource &quot;aws_lb_listener&quot; &quot;mtc_lb_listener_http&quot; { load_balancer_arn = aws_lb.mtc_lb.arn port = 80 protocol = &quot;HTTP&quot; default_action { type = &quot;redirect&quot; 
redirect { port = &quot;443&quot; protocol = &quot;HTTPS&quot; status_code = &quot;HTTP_301&quot; } } } resource &quot;aws_lb_listener&quot; &quot;mtc_lb_listener&quot; { load_balancer_arn = aws_lb.mtc_lb.arn port = 443 protocol = &quot;HTTPS&quot; depends_on = [aws_lb_target_group.mtc_tg] certificate_arn = var.certificate_arn default_action { type = &quot;forward&quot; target_group_arn = aws_lb_target_group.mtc_tg.arn } } </code></pre>
<p>Investigating <strong>request headers</strong>, I finally found the cause of the issue.</p> <p>This header was always &quot;<strong>http</strong>&quot; by default:</p> <pre><code>X-Forwarded-Proto: http </code></pre> <p>Changing the value to:</p> <pre><code>X-Forwarded-Proto: https </code></pre> <p>solved the issue.</p> <p>Here is the example of <strong>Istio Virtual Service</strong>, which sets &quot;<strong>X-Forwarded-Proto</strong>&quot; request header to &quot;<strong>https</strong>&quot; for all requests:</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: keycloak spec: hosts: - keycloak.skycomposer.net gateways: - istio-gateway http: - match: - uri: prefix: / route: - destination: host: keycloak.default.svc.cluster.local port: number: 80 headers: request: set: x-forwarded-proto: https </code></pre> <p>P.S. Ideal solution would be to set this value in <strong>AWS Application Load Balancer</strong>, but I wasn't sure how to do it with my <strong>terraform configuration</strong> of <strong>aws load balancer</strong>, so I decided to solve it on <strong>Istio Virtual Service</strong> level.</p>
<p><code>OpenShift 4.6</code></p> <p>There is a basic setup in OpenShift: [Pod &lt;- Service &lt;- Route]. A service running in the pod has an HTTP endpoint that can take 90 seconds or longer to respond. In some cases this is normal, so I would like to allow this behavior.</p> <p>Nevertheless, after a request to the route is sent (and a response has not arrived back), some time later (approx. 60-70 seconds) the route responds with <code>HTTP 504</code>:</p> <pre><code>&lt;html&gt; &lt;body&gt; &lt;h1&gt;504 Gateway Time-out&lt;/h1&gt; The server didn't respond in time. &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I am not sure at what point OpenShift decides to break the circuit, and I can't find any configuration options that allow changing this timeout.</p> <p><strong>How do I set a custom timeout for a service and a pod to extend the duration of the request-response cycle?</strong></p>
<p>You might be looking for the <code>haproxy.router.openshift.io/timeout</code> annotation, with which you can annotate your Route. For your 90-second endpoint, set it to something above that value, e.g. 120s:</p> <pre><code>oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=120s </code></pre> <p>You can find more information about Route configuration in the documentation: <a href="https://docs.openshift.com/container-platform/4.6/networking/routes/route-configuration.html#nw-configuring-route-timeouts_route-configuration" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.6/networking/routes/route-configuration.html#nw-configuring-route-timeouts_route-configuration</a></p>
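<p>Equivalently, the annotation can be set declaratively in the Route manifest; a sketch (the name, host, and service are placeholders):</p>

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myroute
  annotations:
    # allow slow endpoints up to 120 seconds before HAProxy returns 504
    haproxy.router.openshift.io/timeout: 120s
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myservice
```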
<p>I am learning kubernetes and got to a point where I am very confused. I have installed metallb and <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">ingress-nginx</a> so it is possible to access content from outside. I saw several examples that use the <a href="https://hub.docker.com/_/nginx" rel="nofollow noreferrer">nginx image</a> running in a pod even though they are also using ingress-nginx.</p> <p>Isn't ingress-nginx capable of doing all the work the nginx image does? Otherwise, what roles do those two play?</p> <p>I need to deploy an express server where I would like to utilize some nginx features like gzip, which is where the reverse proxy comes in.</p> <p>So do I need to make it work at the ingress-nginx level or from the nginx image? And if I need the nginx image, does it mean that I need to separately run the nginx image and my built node image with my express app?</p>
<p>Short answer: <strong>No</strong><br /> But it's complicated.</p> <p>The <code>nginx</code> <a href="https://hub.docker.com/_/nginx" rel="nofollow noreferrer">image</a> you mentioned is one of the most popular images (5th on the Docker Hub at the time of this writing), is relatively small (133MB), and is easy to remember. That's why it is widely used as an example in many tutorials.</p> <hr /> <blockquote> <p>Isn't ingress-nginx capable of doing all the work the nginx image does?</p> </blockquote> <p>To some extent.<br /> <em>Pod</em> and <em>Ingress</em> are different Kubernetes resources, and they act differently. The <code>nginx</code> image is usually deployed as a container inside a pod.</p> <p>In the case of the <em>nginx ingress controller</em>, a similar image is used for both <em>Pod</em> and <em>Ingress</em> (mentioned below).</p> <p>Whenever you deploy (for example) a rewrite rule in the ingress controller</p> <pre><code>metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 </code></pre> <p>a corresponding entry is created in the <code>nginx.conf</code> file in the <code>nginx-ingress</code> image (again, mentioned below).</p> <p>You can read more <a href="https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengsettingupingresscontroller.htm" rel="nofollow noreferrer">here</a>.</p> <hr /> <p>That being said, there is the <a href="https://hub.docker.com/r/nginx/nginx-ingress/" rel="nofollow noreferrer"><code>nginx/nginx-ingress</code></a> image that is deployed when you create an <em>Ingress Controller</em> resource.<br /> You can even pull it from Docker Hub</p> <pre><code>$ docker image pull nginx; docker image pull nginx/nginx-ingress Using default tag: latest latest: Pulling from library/nginx 69692152171a: Pull complete 30afc0b18f67: Pull complete 596b1d696923: Pull complete febe5bd23e98: Pull complete 8283eee92e2f: Pull complete 351ad75a6cfa: Pull complete Digest: sha256:6d75c99af15565a301e48297fa2d121e15d80ad526f8369c526324f0f7ccb750 Status: Downloaded newer
image for nginx:latest docker.io/library/nginx:latest Using default tag: latest latest: Pulling from nginx/nginx-ingress 69692152171a: Already exists 30afc0b18f67: Already exists 596b1d696923: Already exists febe5bd23e98: Already exists 8283eee92e2f: Already exists 351ad75a6cfa: Already exists ff0027f23312: Pull complete 18361787d4a4: Pull complete e1aba8353cbb: Pull complete f9a4438768c4: Pull complete Digest: sha256:a57fc7835c14b7294180f5c11ae6b81f2745189ef2adb57a5188952bf8aaa17a Status: Downloaded newer image for nginx/nginx-ingress:latest docker.io/nginx/nginx-ingress:latest </code></pre> <pre><code>$ docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE nginx/nginx-ingress latest 1bc8d3e90493 2 weeks ago 177MB nginx latest d1a364dc548d 2 weeks ago 133MB </code></pre> <p>As you can see <code>nginx</code> and <code>nginx-ingress</code> are two different images.</p> <p>In fact, you can even <a href="https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/" rel="nofollow noreferrer">build the image</a> yourself.</p>
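<p>Regarding the gzip part of the question: you usually don't need an extra nginx pod in front of your express app just for compression, because ingress-nginx can do it. A sketch, assuming a standard ingress-nginx install (the ConfigMap name and namespace depend on how you installed the controller):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  use-gzip: "true"                 # enables gzip in the generated nginx.conf
  gzip-types: "application/json text/css application/javascript"
```

<p>Alternatively, express can gzip responses itself with the <code>compression</code> middleware, in which case no nginx-level change is needed.</p>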
<p>I recently set up a GKE Autopilot cluster but realized it doesn't support webhooks, which cert-manager depends on. What other options do we have to add/manage SSL certificates on a GKE Autopilot cluster?</p>
<p>As of May 2021, GKE Autopilot has no support for 3rd party webhooks. Without webhooks, many Kubernetes plugins such as cert-manager cannot operate correctly. Cert-manager uses a custom mutating admission webhook to manage certificates, and webhook configurations are immutable on GKE Autopilot.</p> <p>To add/manage SSL certificates for Autopilot clusters, you should first start with this official GCP doc: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="noreferrer">Google-managed SSL certificates</a>.</p> <p>You can configure Google-managed SSL certificates using a ManagedCertificate custom resource, which is available in different API versions, depending on your GKE cluster version. It's recommended that you use a newer API version.</p> <ul> <li>The ManagedCertificate v1beta2 API is available in GKE cluster versions 1.15 and later.</li> <li>The ManagedCertificate v1 API is available in GKE cluster versions 1.17.9-gke.6300 and later.</li> </ul> <blockquote> <p><strong>Note</strong>: Google-managed SSL certificates aren't currently supported for internal HTTPS load balancers. For internal HTTPS load balancers, use self-managed SSL certificates instead. This feature is only available for Ingress for External HTTP(S) Load Balancing; you can read more <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="noreferrer">here</a>.</p> </blockquote> <p>To configure a Google-managed SSL certificate and associate it with an Ingress, start with these two basic steps:</p> <ul> <li>Create a ManagedCertificate object in the same namespace as the Ingress.</li> <li>Associate the ManagedCertificate object with an Ingress by adding the annotation networking.gke.io/managed-certificates to the Ingress. This annotation is a comma-separated list of ManagedCertificate resources, cert1,cert2,cert3 for example. This is described in detail <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#creating_an_ingress_with_a_google-managed_certificate" rel="noreferrer">here</a>.</li> </ul> <p>You have to follow some <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#prerequisites" rel="noreferrer">prerequisites</a>:</p> <ul> <li>You must own the domain name (Google Domains or another registrar).</li> <li>Your &quot;kubernetes.io/ingress.class&quot; must be &quot;gce&quot;.</li> <li>Create a reserved (static) external IP address. If you do not reserve an address, it may change, requiring you to reconfigure your domain's DNS records.</li> </ul> <p>For setting up a Google-managed certificate, go through the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#setting_up_a_google-managed_certificate" rel="noreferrer">sample ManagedCertificate manifest</a>.</p>
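<p>Putting those two steps together, a minimal sketch (the domain, static-IP name, and service name are placeholders for your own values):</p>

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert
spec:
  domains:
    - app.example.com            # a domain you own, pointing at the reserved IP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: gce
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
    networking.gke.io/managed-certificates: my-managed-cert
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80
```

<p>Provisioning can take a while after DNS points at the reserved IP; <code>kubectl describe managedcertificate my-managed-cert</code> shows the current status.</p>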
<p>I have Keycloak installed on Kubernetes via the official helm chart. I have 2 replicas in a StatefulSet. I have created a realm and configured the admin-cli client access type to confidential. When I ask for a token via this client (with the correct client secret), I get the token, but when I try to create a user, sometimes the curl works and sometimes I get an unauthorized error. I tried setting only 1 replica and it always works fine, but with more than 1 replica, curl sometimes works and sometimes doesn't. I think the problem is the sync between the pods or something like that. Can anyone help me?</p> <p>Thanks</p>
<p>Just setting 2 replicas doesn't mean that Keycloak runs in a cluster, it's just a K8s cluster, but not a Keycloak cluster. How should Keycloak know, if you don't tell it?</p> <p>You'll have to configure Infinispan and JGroups properly, using some <code>*_PING</code> method for node discovery. Try using <code>JDBC_PING</code>, <code>DNS_PING</code> or <code>KUBE_PING</code>. See <a href="http://www.jgroups.org/manual4/index.html#protlist" rel="nofollow noreferrer">JGroups docs</a> for more details.</p> <p>See also for example my YouTube video on Keycloak clustering: <a href="https://youtu.be/P96VQkBBNxU" rel="nofollow noreferrer">https://youtu.be/P96VQkBBNxU</a></p>
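<p>For illustration, with the <code>jboss/keycloak</code> image a <code>DNS_PING</code> setup can look roughly like this. This is a sketch: it assumes you also create a headless Service named <code>keycloak-headless</code> in the <code>default</code> namespace so the pods can discover each other:</p>

```yaml
# extra environment variables on the Keycloak StatefulSet containers
env:
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: dns.DNS_PING
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: dns_query=keycloak-headless.default.svc.cluster.local
  - name: CACHE_OWNERS_COUNT              # replicate cache entries to both pods
    value: "2"
  - name: CACHE_OWNERS_AUTH_SESSIONS_COUNT
    value: "2"
```

<p>With this in place the Keycloak logs should show the Infinispan cluster forming with both members; until then, each replica keeps its own caches, which explains the intermittent unauthorized errors.</p>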
<p>I'm trying to use the Python Kubernetes Client to get the run-time of each container for all my pods. Using the K8s CLI this information is available by using <code>kubectl describe pods</code>.</p> <p>I can get the output for a single pod by using</p> <pre class="lang-py prettyprint-override"><code>api_response = api_instance.read_namespaced_pod(name='pod-name',namespace='namespace-name') </code></pre> <p>However, I need this information for all the pods.</p> <p>What is the equivalent of <code>kubectl get pods</code> for the Python K8s library? I'm thinking I can use this to create a list of pods and use the command mentioned above to loop through them by their pod name and get the required information.</p>
<p>The equivalent of <code>kubectl get pods</code> is <code>list_namespaced_pod</code> (note that <code>read_namespaced_pod</code> only returns a single pod, and its response has no <code>items</code> field):</p> <pre><code>api_response = api_instance.list_namespaced_pod(namespace='namespace-name') </code></pre> <p>Then iterate over the list to get each pod's name:</p> <pre><code>for i in api_response.items: print(&quot;%s&quot; % (i.metadata.name)) </code></pre> <p>To list pods across all namespaces, use <code>api_instance.list_pod_for_all_namespaces(watch=False)</code> instead.</p> <p>That should work for you :)</p>
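<p>Since the goal is the run-time of each container, here is a slightly fuller sketch built on <code>list_namespaced_pod</code>: it computes how long each running container has been up from the <code>started_at</code> timestamp in the pod's container statuses (the function and variable names are my own, not part of the client library):</p>

```python
from datetime import datetime, timezone

def container_runtimes(pod):
    """Return {container_name: seconds_running} for the containers of one
    pod that are currently in the 'running' state."""
    now = datetime.now(timezone.utc)
    runtimes = {}
    for status in (pod.status.container_statuses or []):
        running = status.state.running  # None unless the container is running
        if running and running.started_at:
            runtimes[status.name] = (now - running.started_at).total_seconds()
    return runtimes

def all_pod_runtimes(api, namespace="namespace-name"):
    """List every pod in the namespace and map pod name -> container run-times."""
    pods = api.list_namespaced_pod(namespace=namespace)
    return {pod.metadata.name: container_runtimes(pod) for pod in pods.items}
```

<p>Against a real cluster you would call <code>all_pod_runtimes(client.CoreV1Api())</code> after <code>config.load_kube_config()</code>.</p>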
<p>We need to disable the automount of the service account token for our existing deployments in an AKS cluster. There are 2 ways to do this: by adding the property &quot;automountServiceAccountToken: false&quot; either in the service account manifest or in the pod template.</p> <p>We are using a separate service account specified in our application deployments; however, when we looked in the namespace, there is a default service account created as well.</p> <p>So in order to secure our cluster, do we need to disable the automount property for both the default and application-specific service accounts?</p> <p>Since our app is already live, will there be any impact from adding this to the service accounts?</p> <p>How do we know the service accounts used by a pod and its dependencies?</p>
<blockquote> <p>So inorder to secure our cluster, do we need to disable the automount property for both default and application specific service accounts?.</p> </blockquote> <p>The design behind the <code>default</code> ServiceAccount is that it does not have any rights unless you give it some. So from a security point of view there is not much need to disable the mount unless you granted it access for some reason. Instead, whenever an application truly needs some access, go ahead and create a ServiceAccount for that particular application and grant it the permissions it needs via RBAC.</p> <blockquote> <p>Since our app already live, will there be any impact by adding this to the service account s.</p> </blockquote> <p>In case you truly want to disable the mount there won't be an impact on your application if it didn't use the ServiceAccount beforehand. What is going to happen though, is that a new Pod will be created and the existing one is deleted. However, if you properly configured readinessProbes and a rolling update strategy, then Kubernetes will ensure that there will be no downtime.</p> <blockquote> <p>How to know the used service accounts of a pod and it's dependencies ?</p> </blockquote> <p>You can check what ServiceAccount a Pod is mounting by executing <code>kubectl get pods &lt;pod-name&gt; -o yaml</code>. The output is going to show you the entirety of the Pod's manifest, and the field <code>spec.serviceAccountName</code> contains information on which ServiceAccount the Pod is mounting.</p>
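<p>For reference, a minimal sketch of the field in both places (the names are placeholders; the pod-level setting takes precedence over the ServiceAccount-level one):</p>

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
automountServiceAccountToken: false   # applies to every pod using this SA
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: app-sa
  automountServiceAccountToken: false # pod-level override, wins over the SA
  containers:
    - name: app
      image: nginx
```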
<p>I am having a problem using Kubernetes Ingress with a ASP.NET core web API.</p> <p>Lets say I have a web API with three controllers (simplified code to demonstrate three routes /, /ep1, /ep2):</p> <pre><code>endpoints.MapGet(&quot;/&quot;, async context =&gt; { await context.Response.WriteAsync(&quot;##&quot;); }); endpoints.MapGet(&quot;/ep1&quot;, async context =&gt; { await context.Response.WriteAsync(&quot;1&quot;); }); endpoints.MapGet(&quot;/ep2&quot;, async context =&gt; { await context.Response.WriteAsync(&quot;2&quot;); }); </code></pre> <p>What I want is to define an ingress rule that will pass the internal route to the application:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: rules: - host: myhost.com http: paths: - path: /(/|$)(.*) pathType: Prefix backend: service: name: myservice port: number: 80 </code></pre> <p>Then what I expect to happen is that browsing to myhost.com/ will return &quot;##&quot;, myhost.com/ep1 will return &quot;1&quot; and myhost.com/ep2 will return &quot;2&quot; However, all three routes return &quot;##&quot;</p> <p>Am I doing something wrong while configuring the ingress? Is it something that even possible when using ingress rules? If not, how do you create a controller that is using dynamic parameters? Let's say I want a web API that returns the amount in bank for some user ID, when specifying its ID in the GET path: myhost.com/balance/1312323, how can I implement this using a web API and ingress rules with Kubernetes?</p> <p>Thanks for helping</p>
<p>Routing within the app should be handled by the app itself, so there should be no need to define dynamic (regex) paths in the Ingress. Try this.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: myhost.com http: paths: - path: / backend: service: name: myservice port: number: 80 </code></pre>
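<p>For the dynamic-parameter part of the question (<code>/balance/1312323</code>): that is also plain application routing, handled with an ASP.NET Core route template rather than an ingress rule. A sketch in the same fragment style as the question (the balance lookup is a placeholder):</p>

```csharp
endpoints.MapGet("/balance/{userId:long}", async context =>
{
    // "userId" is bound from the path, e.g. /balance/1312323
    var userId = context.Request.RouteValues["userId"];
    await context.Response.WriteAsync($"balance for user {userId}");
});
```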
<p>I got that error when deploying a k8s deployment, I tried to impersonate being a root user via the security context but it didn't help, any guess how to solve it? Unfortunately, I don't have any other ideas or a workaround to avoid this permission issue.</p> <p>The error I get is:</p> <pre><code>30: line 1: /scripts/wrapper.sh: Permission denied stream closed </code></pre> <p>The deployment is as follows:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: cluster-autoscaler-grok-exporter labels: app: cluster-autoscaler-grok-exporter spec: replicas: 1 selector: matchLabels: app: cluster-autoscaler-grok-exporter sidecar: cluster-autoscaler-grok-exporter-sidecar template: metadata: labels: app: cluster-autoscaler-grok-exporter sidecar: cluster-autoscaler-grok-exporter-sidecar spec: securityContext: runAsUser: 1001 fsGroup: 2000 serviceAccountName: flux imagePullSecrets: - name: id-docker containers: - name: get-data # 3.5.0 - helm v3.5.0, kubectl v1.20.2, alpine 3.12 image: dtzar/helm-kubectl:3.5.0 command: [&quot;sh&quot;, &quot;-c&quot;, &quot;/scripts/wrapper.sh&quot;] args: - cluster-autoscaler - &quot;90&quot; # - cluster-autoscaler - &quot;30&quot; - /scripts/get_data.sh - /logs/data.log volumeMounts: - name: logs mountPath: /logs/ - name: scripts-volume-get-data mountPath: /scripts/get_data.sh subPath: get_data.sh - name: scripts-wrapper mountPath: /scripts/wrapper.sh subPath: wrapper.sh - name: export-data image: ippendigital/grok-exporter:1.0.0.RC3 imagePullPolicy: Always ports: - containerPort: 9148 protocol: TCP volumeMounts: - name: grok-config-volume mountPath: /grok/config.yml subPath: config.yml - name: logs mountPath: /logs volumes: - name: grok-config-volume configMap: name: grok-exporter-config - name: scripts-volume-get-data configMap: name: get-data-script defaultMode: 0777 defaultMode: 0700 - name: scripts-wrapper configMap: name: wrapper-config defaultMode: 0777 defaultMode: 0700 - name: 
logs emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: cluster-autoscaler-grok-exporter-sidecar labels: sidecar: cluster-autoscaler-grok-exporter-sidecar spec: type: ClusterIP ports: - name: metrics protocol: TCP targetPort: 9144 port: 9148 selector: sidecar: cluster-autoscaler-grok-exporter-sidecar --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app.kubernetes.io/name: cluster-autoscaler-grok-exporter app.kubernetes.io/part-of: grok-exporter name: cluster-autoscaler-grok-exporter spec: endpoints: - port: metrics selector: matchLabels: sidecar: cluster-autoscaler-grok-exporter-sidecar </code></pre>
<p>From what I can see, your script does not have execute permissions.</p> <p>Remove this line from your ConfigMap volume definitions:</p> <pre><code>defaultMode: 0700 </code></pre> <p>Keep only:</p> <pre><code>defaultMode: 0777 </code></pre> <p>Also, I see a missing leading <code>/</code> in your script path</p> <pre><code>- /bin/sh scripts/get_data.sh </code></pre> <p>So, change it to</p> <pre><code>- /bin/sh /scripts/get_data.sh </code></pre>
<p>Kubernetes documentation describes a pod as a <code>wrapper</code> around one or more containers. Containers running <code>inside</code> of a pod share a set of namespaces (e.g. network), which makes me think <code>namespaces</code> are nested (I kind of doubt that). What is the <code>wrapper</code> here from the container runtime's perspective?</p> <p>Since containers are just processes constrained by e.g. <code>namespaces</code> and <code>cgroups</code>, perhaps a pod is just the first <code>container</code> launched by the Kubelet, and the rest of the containers are started and grouped by namespaces.</p>
<p>The main difference is networking, the network namespace is shared by all containers in the same Pod. Optionally, the process (pid) namespace can also be shared. That means containers in the same Pod all see the same <code>localhost</code> network (which is otherwise hidden from everything else, like normal for localhost) and optionally can send signals to processes in other containers.</p> <p>The idea is the Pods are groups of related containers, not really a wrapper per se but a set of containers that should always deploy together for whatever reason. Usually that's a primary container and then some sidecars providing support services (mesh routing, log collection, etc).</p>
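<p>The optional pid-namespace sharing mentioned above is opted into per pod; a minimal sketch (both containers share <code>localhost</code> regardless of this field):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  shareProcessNamespace: true   # containers can see and signal each other's processes
  containers:
    - name: app
      image: nginx
    - name: sidecar
      image: busybox
      command: ["sleep", "3600"]
```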
<p>I'm getting lots of errors on one of my K8s worker nodes saying &quot;http: TLS handshake error from some_ip:port: remote error: tls: bad certificate&quot;, but I'm not having any problems using any of my K8s containers. The problem is being logged in /var/log/syslog and seems to be specific to one particular K8s node.</p> <p>I assume I need to update a certificate, but I'm not sure if it's something in /etc/kubernetes/pki or /var/lib/kubelet/pki.</p> <p>I assume it's related to the cni0 interface, since that's the subnet that matches the .</p> <p>Does anybody know what it means, or better yet, how to fix it?</p>
<p>This is most likely coming from cert-manager. You can find this in the <code>cert-manager-webhook-*</code> pod, usually in the <code>cert-manager</code> namespace.</p>
<p>So, I have a specific chain of kubernetes and bash commands that I have to execute on gcloud, and I'd like to know if there is any way to execute those commands automatically via scripting, without having to actually open and interact with the gcloud CLI. Maybe using an npm package, but I don't know if there is a package for this usage.</p> <p>I already have the gcloud CLI installed locally; should exec and/or spawn commands work? I've tried but ultimately failed.</p> <p>TL;DR: I just want to know how to automate gcloud commands using code! Preferably in node.js, but I can learn another language too.</p>
<p>Please provide a little more detail as to what it is you're trying to automate as this will help guide recommendations.</p> <p>There are several challenges automating subprocesses (e.g. invoking bash and it running <code>gcloud</code>) from within another program. High among these is that shell-scripting doesn't provide very strong parameter passing (and no typing beyond strings) and error handling is difficult.</p> <p>For NodeJS you can use <a href="https://nodejs.org/api/child_process.html" rel="nofollow noreferrer"><code>child_process</code></a>. You'll need to ensure that the child process environment is able to access the <code>gcloud</code> binary and its configuration and that it can authenticate too. Since you're running a program to interact with <code>gcloud</code>, I recommend you use a service account to authenticate if you choose this approach and then you may need (if you run off-GCP) to provide a service account key to your code too.</p> <p>There are alternatives although these require more work, they provide much more robust solutions:</p> <ol> <li>Use a definitive &quot;infrastructure as code&quot; tools (e.g. <a href="https://terraform.io" rel="nofollow noreferrer">terraform</a>). Google Cloud provides <a href="https://cloud.google.com/blog/products/devops-sre/google-cloud-templates-for-terraform-and-deployment-manager-now-available" rel="nofollow noreferrer">templates</a> for Terraform</li> <li>Use Google Cloud SDKs (there are 2 flavors <a href="https://cloud.google.com/apis/docs/client-libraries-explained" rel="nofollow noreferrer">explained</a>) and these cover everything except Kubernetes itself (you get management of GKE clusters) and a NodeJS Kubernetes <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">client library</a></li> </ol> <p>If you add more details to your question, I can add more details to this answer.</p>
<p>I have two network policies (one with pod selector app=db and the other with app=proxy) and I have one pod to which both network policies should apply, but the pod config doesn't allow two different labels with the same key (app).</p> <p>How can I do it in this case without modifying any network policies?</p>
<p>If the pod/label/app selector is the only selector in each policy, then it's not possible. The network policies would probably need a <code>matchExpressions</code> selector, and the pod a new label.</p> <p>Ingress and Egress rules can supply an array of <code>podSelector</code>s for the network targets, or similar <code>matchExpressions</code>:</p> <pre><code>spec:
  podSelector:
    matchExpressions:
    - key: role
      operator: In
      values: [ &quot;db-proxy&quot;, &quot;db&quot; ]
</code></pre>
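<p>For the selector above to match, the pod would then carry the new label alongside its existing one (the <code>role</code> key mirrors the <code>matchExpressions</code> above and is an assumption, not a Kubernetes built-in):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
  labels:
    app: db      # existing label, unchanged
    role: db     # new label matched by the policies' matchExpressions
```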
<p>I have GKE Ingress with a backend referring to a kubernetes service of type ExternalName. The &quot;external&quot; service refers to another kubernetes service within the same cluster (in a different namespace, that's why the &quot;external&quot; service is used as a bridge). However, the healthcheck defined on the &quot;external&quot; service via BackendConfig is failing. When I test the &quot;external&quot; service on the healthcheck path and port via curl manually, it's working, giving 200 OK. Does anybody know if it's even possible to have an ExternalName service as a backend service for GKE Ingress? Or is there any other way how to reference a service from an Ingress across different namespaces? I know that I can create one Ingress per namespace to reference services in different namespace but each Ingress means a new IP address, new forwarding rules, more $$$. The namespaces are used to separate environments. Any hint is welcome...</p>
<p>I don't think you can use an <code>ExternalName</code> service as the backend for an Ingress. <code>ExternalName</code> is meant to be used for internal clients to alias an external service. You might be able to do what you want, but I've never tried that approach.</p> <p><strong>EDIT</strong></p> <p>As an alternative to using an ingress resource, you can try using the new <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/gateway-api" rel="nofollow noreferrer">Gateway API</a> which does support a single shared entry point for multiple services (even across namespaces). It is currently in alpha, but mainly because the actual API will likely change in the future. Moving forward, it would be the path you'd want to take.</p> <p><strong>EDIT</strong></p> <p>If you want to use ingress, I'd suggest deploying an nginx ingress controller as it does support sharing a single external IP for multiple ingresses. You should be able to create an Ingress in each namespace pointing to the appropriate service and they will &quot;share&quot; the same external IP. You can specify a different path for the backend in each Ingress resource:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello-world-1
  namespace: hello1
spec:
  rules:
  - http:
      paths:
      - path: /hello-world-1
        pathType: Prefix
        backend:
          service:
            name: service-hello1
            port:
              number: 80
</code></pre> <p>and</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello-world-2
  namespace: hello2
spec:
  rules:
  - http:
      paths:
      - path: /hello-world-2
        pathType: Prefix
        backend:
          service:
            name: service-hello2
            port:
              number: 80
</code></pre>
<p>There are build environment variables (<a href="https://argoproj.github.io/argo-cd/user-guide/build-environment/" rel="noreferrer">https://argoproj.github.io/argo-cd/user-guide/build-environment/</a>), so one can inject something like <code>$ARGOCD_APP_NAME</code> into the application/helm yaml file and it resolves to the actual value.</p> <p>Is there a way we can set custom environment variables so they can be resolved in the argocd application yaml file?</p> <p>For example, in the argocd application yaml below, I need to set the ENV value so helm can know which values.yaml to use.</p> <pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  ...
spec:
  ...
  source:
    ...
    helm:
      valueFiles:
      - values_${ENV}.yaml
</code></pre>
<p>It's a late answer, but you can. You can use the <code>plugin</code> field to add the ENV variables at the application level, as the example below shows:</p> <pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  ...
spec:
  ...
  source:
    plugin:
      env:
      - name: ENV_VARIABLE
        value: ENV_VALUE
</code></pre>
<p>I am currently setting up a Kubernetes cluster, but I noticed there are no default storage classes defined.</p> <pre><code>u@n:~$ kubectl get sc
No resources found in default namespace.
</code></pre> <p>When reading through the docs there are lots of examples of storage classes used when you're deploying your cluster on cloud providers, but not self-hosted. What kind of storage class do I need to use in my case?</p>
<p>In general, you can start from the Kubernetes documentation. <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="nofollow noreferrer">Here</a> you can find the storage classes concept. Each StorageClass has a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified. <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">Local volumes</a> could help you. Look at the example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre> <p>Local volumes do not currently support dynamic provisioning; however, a StorageClass should still be created to delay volume binding until Pod scheduling. This is specified by the <code>WaitForFirstConsumer</code> volume binding mode.</p> <p>Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim.</p> <p>If you are looking for a complete guide to configuring storage for a bare-metal cluster, you can find it <a href="https://www.weave.works/blog/kubernetes-faq-configure-storage-for-bare-metal-cluster" rel="nofollow noreferrer">here</a>. As I mentioned before, local volumes do not currently support dynamic provisioning. However, there is a workaround if you are using an NFS server. Look at <a href="https://geoffreymahugu.medium.com/kubernetes-bare-metal-dynamic-storage-allocation-e5311ac45909" rel="nofollow noreferrer">this guide</a>.</p>
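<p>A local PersistentVolume that uses this class could look like the following sketch (the path and node name are placeholders; <code>nodeAffinity</code> is required for local volumes):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1       # placeholder path on the node
  nodeAffinity:                 # pins the volume to the node that owns the disk
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node             # placeholder node name
```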
<p>I couldn't find an equivalent k8s cli command to do something like this, nor any ssh keys stored as k8s secrets. It also appears to do this in a cloud-agnostic fashion.</p> <p>Is it just using a k8s pod with special privileges or something?</p> <p>Edit: oops, it's open-source. I'll investigate and update this question accordingly</p>
<p>Posting this community wiki answer to give more visibility on the comment that was made at a github issue that addressed this question:</p> <blockquote> <p>Lens will create <code>nsenter</code> pod to the selected node</p> <blockquote> <pre><code>protected async createNodeShellPod(podId: string, nodeName: string) {
  const kc = this.getKubeConfig();
  const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
  const pod = {
    metadata: {
      name: podId,
      namespace: &quot;kube-system&quot;
    },
    spec: {
      restartPolicy: &quot;Never&quot;,
      terminationGracePeriodSeconds: 0,
      hostPID: true,
      hostIPC: true,
      hostNetwork: true,
      tolerations: [{
        operator: &quot;Exists&quot;
      }],
      containers: [{
        name: &quot;shell&quot;,
        image: &quot;docker.io/alpine:3.9&quot;,
        securityContext: {
          privileged: true,
        },
        command: [&quot;nsenter&quot;],
        args: [&quot;-t&quot;, &quot;1&quot;, &quot;-m&quot;, &quot;-u&quot;, &quot;-i&quot;, &quot;-n&quot;, &quot;sleep&quot;, &quot;14000&quot;]
      }],
      nodeSelector: {
        &quot;kubernetes.io/hostname&quot;: nodeName
      }
    }
  } as k8s.V1Pod;
</code></pre> </blockquote> <p>and exec into that container in lens terminal.</p> <p>-- <em><a href="https://github.com/lensapp/lens/issues/824#issuecomment-688826431" rel="noreferrer">Github.com: Lensapp: Issues: How Lens accessing nodes in AKS/EKS without user and SSH key under ROOT?</a></em></p> </blockquote> <hr /> <p>I've checked this and, as can be seen below, the <code>Pod</code> with <code>nsenter</code> is created in the <code>kube-system</code> namespace (checked on <code>GKE</code>):</p> <ul> <li><code>$ kubectl get pods -n kube-system</code> (output redacted)</li> </ul> <pre class="lang-sh prettyprint-override"><code>kube-system node-shell-09f6baaf-dc4a-4faa-969e-8016490eb8e0 1/1 Running 0 10m
</code></pre> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://github.com/lensapp/lens/issues/295" rel="noreferrer">Github.com: Lensapp: Lens: Issues: How does lens use terminal/ssh for worker nodes?</a></em></li> <li><em><a href="https://man7.org/linux/man-pages/man1/nsenter.1.html" rel="noreferrer">Man7.org: Linux: Man pages: Nsenter</a></em></li> </ul>
<p>Starting with Kubernetes 1.20 there has been a change regarding <code>ownerReferences</code> and how K8s performs GC. Basically, if a resource in namespace <code>x</code> spins up a pod/job in namespace <code>y</code>, with the child having <code>ownerReferences</code> referencing the parent resource in <code>x</code>, K8s terminates the child pod/job.</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md" rel="nofollow noreferrer">Reference</a>:</p> <blockquote> <ul> <li>Resolves non-deterministic behavior of the garbage collection controller when ownerReferences with incorrect data are encountered. Events with a reason of OwnerRefInvalidNamespace are recorded when namespace mismatches between child and owner objects are detected. The kubectl-check-ownerreferences tool can be run prior to upgrading to locate existing objects with invalid ownerReferences. <ul> <li>A namespaced object with an ownerReference referencing a uid of a namespaced kind which does not exist in the same namespace is now consistently treated as though that owner does not exist, and the child object is deleted.</li> <li>A cluster-scoped object with an ownerReference referencing a uid of a namespaced kind is now consistently treated as though that owner is not resolvable, and the child object is ignored by the garbage collector. (#92743, @liggitt) [SIG API Machinery, Apps and Testing]</li> </ul> </li> </ul> </blockquote> <p>If we remove the <code>ownerReferences</code>, the resource won't be garbage collected. Is there a way to deal with this situation, i.e. how to make <code>ownerReferences</code> work in multiple namespaces OR let the job/pod clean itself once completed? Thanks.</p>
<p>As per <a href="https://github.com/kubernetes/kubernetes/pull/92743#issuecomment-729701767" rel="nofollow noreferrer">Fix GC uid races and handling of conflicting ownerReferences #92743 </a></p> <blockquote> <p>namespaces are intended to be independent of each other, so cross-namespace references have not been permitted in things like ownerReferences, secret/configmap volume references, etc.</p> <p>additionally, granting permissions to namespace a is not generally intended to provide visibility or ability to interact with objects from namespace b (or cause system controllers to interact with objects from namespace b).</p> </blockquote> <p>and <a href="https://github.com/kubernetes/website/pull/25091/files" rel="nofollow noreferrer">Update GC cross-namespace note #25091</a></p> <blockquote> <p>Cross-namespace owner references are disallowed by design.</p> </blockquote> <p>So, using <code>ownerReferences</code> for garbage collection across namespaces is not possible <em>by desing</em>.</p> <hr /> <p>However, you can emulate multi-namespace GC using labels. 
You just need to configure those labels when some object creates sub-object.</p> <p>Alternatively you can delete a namespace to GC all object in that namespace, but that's probably suboptimal solution.</p> <hr /> <p><strong>EDIT</strong></p> <pre><code>$ kubectl label pods owner=my -l region=europe $ kubectl label pods owner=my -l region=pacific </code></pre> <pre><code>$ kubectl label svc owner=my -l svc=europe $ kubectl label svc owner=my -l svc=pacific </code></pre> <pre><code>$ kubectl label pod kube-proxy-2wpz2 owner=my -n kube-system $ kubectl label pod kube-proxy-cpqxt owner=my -n kube-system </code></pre> <pre><code>$ kubectl get pods,svc -l owner=my --show-labels --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE LABELS default pod/aloha-pod 1/1 Running 0 54d app=aloha,owner=my,region=pacific default pod/ciao-pod 1/1 Running 0 54d app=ciao,owner=my,region=europe default pod/hello-pod 1/1 Terminating 0 54d app=hello,owner=my,region=europe default pod/ohayo-pod 1/1 Running 0 54d app=ohayo,owner=my,region=pacific kube-system pod/kube-proxy-2wpz2 1/1 Running 2 299d controller-revision-hash=5cf956ffcf,k8s-app=kube-proxy,owner=my,pod-template-generation=1 kube-system pod/kube-proxy-cpqxt 1/1 Running 3 299d controller-revision-hash=5cf956ffcf,k8s-app=kube-proxy,owner=my,pod-template-generation=1 NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS default service/europe ClusterIP 10.109.5.102 &lt;none&gt; 80/TCP 54d owner=my,svc=europe default service/pacific ClusterIP 10.99.255.196 &lt;none&gt; 80/TCP 54d owner=my,svc=pacific </code></pre> <pre><code>$ kubectl delete pod,svc -l owner=my --dry-run --all-namespaces pod &quot;aloha-pod&quot; deleted (dry run) pod &quot;ciao-pod&quot; deleted (dry run) pod &quot;hello-pod&quot; deleted (dry run) pod &quot;ohayo-pod&quot; deleted (dry run) pod &quot;kube-proxy-2wpz2&quot; deleted (dry run) pod &quot;kube-proxy-cpqxt&quot; deleted (dry run) service &quot;europe&quot; deleted (dry run) service 
&quot;pacific&quot; deleted (dry run) </code></pre> <p>Alternatively, there could be a bash script that deletes all objects whose controller object doesn't exist, based on labels. It could also run inside the cluster with a proper service account configured.</p> <hr /> <p>There is no straightforward, built-in option to achieve what you want. You should keep <em>owner-referenced</em> objects in the same namespace.</p>
<p>When I run <code>kubectl get deploy/my-deploy -o yaml &gt; my-deployment.yaml</code>, modify only the container image tag, and run <code>kubectl apply</code> with the fixed file, is there a command with which I can check whether the configured deployment is ready or not?</p> <p>When I create a new deployment by changing both the image tag AND the deployment name, I check the new deployment’s readiness by checking <code>.status.conditions[]</code> and selecting the <code>reason MinimumReplicasAvailable</code>. However, this is not possible in the case above because MinimumReplicasAvailable is already met (as the original deployment, before modifying the image tag, is still alive).</p> <p>Any help or suggestions?</p>
<p>The field that indicates how many pods are updated is the <code>status.updatedReplicas</code>.</p> <p>According to the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#deploymentstatus-v1-apps" rel="nofollow noreferrer">documentation</a>, this is the meaning of that field:</p> <blockquote> <p>Total number of non-terminated pods targeted by this deployment that have the desired template spec.</p> </blockquote> <p>This means you should be able to tell when a Deployment rollout is complete by comparing <code>spec.replicas</code> with <code>status.updatedReplicas</code>. When they match, the rollout is complete.</p>
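<p>In practice, <code>kubectl</code> ships a command that watches exactly these status fields and blocks until the rollout finishes (it needs access to a live cluster; the deployment name and timeout below are only a sketch):</p>

```shell
# Waits until the updated ReplicaSet is fully rolled out, or fails on timeout
kubectl rollout status deployment/my-deploy --timeout=120s
```

<p>It exits non-zero if the rollout does not complete within the timeout, which makes it convenient in CI pipelines.</p>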
<p>I am trying to install jenkins on my kubernetes cluster under the <code>jenkins</code> namespace. When I deploy my pv and pvc, the pv remains available and does not bind to my pvc.</p> <p>Here are my yamls:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: jenkins
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: &quot;/mnt/data&quot;
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>Below is my Storageclass for manual. The standard class has not been changed; it should be the same as the default standard on kubernetes.</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;storage.k8s.io/v1&quot;,&quot;kind&quot;:&quot;StorageClass&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;manual&quot;},&quot;provisioner&quot;:&quot;kubernetes.io/no-provisioner&quot;,&quot;volumeBindingMode&quot;:&quot;WaitForFirstConsumer&quot;}
  creationTimestamp: &quot;2021-06-14T14:41:39Z&quot;
  name: manual
  resourceVersion: &quot;3643100822&quot;
  uid: 8254d900-58e5-49e1-a07e-1830096aac87
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Based on the storage class spec, I think the problem is the <code>volumeBindingMode</code> being set to <code>WaitForFirstConsumer</code>, which means the PV will remain unbound until there is a Pod to consume it.</p> <p>You can change it to <code>Immediate</code> to allow the PV to be bound immediately without requiring you to create a Pod.</p> <p>Note also that the PV in your manifest declares <code>storageClassName: standard</code> while the PVC requests <code>manual</code>; a claim only binds to a volume with a matching <code>storageClassName</code>, so as written these two will never bind regardless of the binding mode.</p> <p>You can read about the different volume binding modes in detail in the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode" rel="nofollow noreferrer">docs</a>.</p>
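<p>For reference, a pair that can bind might look like this, keeping the <code>manual</code> class and <code>hostPath</code> from the question (with the class still using <code>WaitForFirstConsumer</code>, the actual bind happens once a Pod uses the claim):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: jenkins
spec:
  storageClassName: manual      # must match the claim below
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```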
<p>Why is <code>kubectl cluster-info</code> reporting a <strong>control plane</strong> and not a <strong>master node</strong>? And why is the control plane running on a specific IP address, <strong><a href="https://192.168.49.2:8443" rel="nofollow noreferrer">https://192.168.49.2:8443</a></strong>, and not <strong>localhost</strong> or <strong>127.0.0.1</strong>? I am running the following commands in a terminal:</p> <ol> <li>minikube start --driver=docker</li> </ol> <blockquote> <p>😄 minikube v1.20.0 on Ubuntu 16.04 ✨ Using the docker driver based on user configuration 🎉 minikube 1.21.0 is available! Download it: <a href="https://github.com/kubernetes/minikube/releases/tag/v1.21.0" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/releases/tag/v1.21.0</a> 💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'</p> </blockquote> <blockquote> <p>👍 Starting control plane node minikube in cluster minikube 🚜 Pulling base image ... &gt; gcr.io/k8s-minikube/kicbase...: 358.10 MiB / 358.10 MiB 100.00% 797.51 K ❗ minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.22, but successfully downloaded kicbase/stable:v0.0.22 as a fallback image 🔥 Creating docker container (CPUs=2, Memory=2200MB) ... 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ... ▪ Generating certificates and keys ... ▪ Booting up control plane ... ▪ Configuring RBAC rules ... 🔎 Verifying Kubernetes components... ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 🌟 Enabled addons: storage-provisioner, default-storageclass 🏄 Done!
kubectl is now configured to use &quot;minikube&quot; cluster and &quot;default&quot; namespace by default</p> </blockquote> <ol start="2"> <li>kubectl cluster-info</li> </ol> <blockquote> <p>Kubernetes control plane is running at <a href="https://192.168.49.2:8443" rel="nofollow noreferrer">https://192.168.49.2:8443</a> KubeDNS is running at <a href="https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy" rel="nofollow noreferrer">https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy</a></p> </blockquote> <blockquote> <p>To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.</p> </blockquote>
<p>The Kubernetes project is making an effort to move away from wording that can be considered offensive, with one concrete recommendation being <a href="https://github.com/kubernetes/community/blob/master/wg-naming/recommendations/001-master-control-plane.md" rel="noreferrer">renaming master to control-plane</a>. In other words, <code>control-plane</code> and <code>master</code> mean essentially the same thing, and the goal is to switch the terminology to use <code>control-plane</code> exclusively going forward. (More info in <a href="https://stackoverflow.com/questions/66186014/what-is-the-difference-between-kubernetes-labels-node-role-kubernetes-io-master/66186779#66186779">this answer</a>)</p> <p>The <code>kubectl</code> command is a command line interface that executes on a client (i.e. your computer) and interacts with the cluster through the <code>control-plane</code>. The IP address you are seeing through <code>cluster-info</code> is the IP address through which you reach the control-plane.</p>
<p>I have broken down my workflow scenario into 2 separate <code>WorkflowTemplates</code>. <code>outer-template</code> would just define the steps, and <code>inner-template</code> would hold the <code>job</code> definition that will spin up the desired container, with all the other fancy stuff. Now when I submit a request (<code>request.yaml</code>), it does pass the parameter <code>message</code> down to the outer and inner template, yet fails with this error:</p> <pre><code>hello-59jg8-394098346:
  Boundary ID:     hello-59jg8-1953291600
  Children:        hello-59jg8-534805352
  Display Name:    [0]
  Finished At:     2021-06-15T00:41:45Z
  Id:              hello-59jg8-394098346
  Message:         child 'hello-59jg8[0].init-step[0].step-1' errored
  Name:            hello-59jg8[0].init-step[0]
  Phase:           Error
  Started At:      2021-06-15T00:41:45Z
  Template Name:   HelloWorld
  Template Scope:  namespaced/outer-template
  Type:            StepGroup
hello-59jg8-534805352:
  Boundary ID:     hello-59jg8-1953291600
  Display Name:    step-1
  Finished At:     2021-06-15T00:41:45Z
  Id:              hello-59jg8-534805352
  Message:         inputs.parameters.message was not supplied
  Name:            hello-59jg8[0].init-step[0].step-1
  Phase:           Error
  Started At:      2021-06-15T00:41:45Z
  Template Ref:
    Name:      inner-template
    Template:  InnerJob
  Template Scope:  namespaced/outer-template
  Type:            Skipped
Phase:       Failed
Started At:  2021-06-15T00:41:45Z
Stored Templates:
</code></pre> <p>The first 2 below are the <code>WorkflowTemplate</code>s and the third one is the request.</p> <pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: inner-template
  namespace: cali
  labels:
    workflows.argoproj.io/controller-instanceid: cali
spec:
  templates:
  - name: InnerJob
    metadata:
      annotations:
        sidecar.istio.io/inject: &quot;false&quot;
    inputs:
      parameters:
      - name: message
      - name: stepName
        value: &quot;&quot;
    resource:
      action: create
      successCondition: status.succeeded &gt; 0
      failureCondition: status.failed &gt; 0
      manifest: |
        apiVersion: batch/v1
        kind: Job
        metadata:
          generateName: hello-pod-
          annotations:
            sidecar.istio.io/inject: &quot;false&quot;
        spec:
          template:
            metadata:
              annotations:
                sidecar.istio.io/inject: &quot;false&quot;
            spec:
              containers:
              - name: hellopods
                image: centos:7
                command: [sh, -c]
                args: [&quot;echo ${message}; sleep 5; echo done; exit 0&quot;]
                env:
                - name: message
                  value: &quot;{{inputs.parameters.message}}&quot;
                - name: stepName
                  value: &quot;{{inputs.parameters.stepName}}&quot;
              restartPolicy: Never
    outputs:
      parameters:
      - name: job-name
        valueFrom:
          jsonPath: '{.metadata.name}'
      - name: job-obj
        valueFrom:
          jqFilter: '.'
</code></pre> <pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: outer-template
  namespace: cali
  labels:
    workflows.argoproj.io/controller-instanceid: cali
spec:
  entrypoint: HelloWorld
  templates:
  - name: HelloWorld
    inputs:
      parameters:
      - name: message
    steps:
    - - name: step-1
        templateRef:
          name: inner-template
          template: InnerJob
        arguments:
          parameters:
          - name: message
          - name: stepName
            value: &quot;this is step 1&quot;
    - - name: step-2
        templateRef:
          name: inner-template
          template: InnerJob
        arguments:
          parameters:
          - name: message
          - name: stepName
            value: &quot;this is step 2&quot;
</code></pre> <pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-
  namespace: cali
  labels:
    workflows.argoproj.io/controller-instanceid: cali
spec:
  entrypoint: HelloWorld
  serviceAccountName: argo
  templates:
  - name: HelloWorld
    steps:
    - - arguments:
          parameters:
          - name: message
            value: &quot;Hello World.....&quot;
        name: init-step
        templateRef:
          name: outer-template
          template: HelloWorld
</code></pre>
<p>When passing an argument to a template in a step, you have to explicitly set the argument value.</p> <p>In the <code>outer-template</code> WorkflowTemplate, you invoke <code>inner-template</code> twice. In each case you have half-specified the <code>message</code> argument. You have to also set the <code>value</code> for each parameter.</p> <p>You should set <code>value: &quot;{{inputs.parameters.message}}&quot;</code> in <code>step-1</code> and <code>step-2</code>. That will pull the <code>message</code> input parameter from <code>outer-template.HelloWorld</code>.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: outer-template
  namespace: cali
  labels:
    workflows.argoproj.io/controller-instanceid: cali
spec:
  entrypoint: HelloWorld
  templates:
  - name: HelloWorld
    inputs:
      parameters:
      - name: message
    steps:
    - - name: step-1
        templateRef:
          name: inner-template
          template: InnerJob
        arguments:
          parameters:
          - name: message
            value: &quot;{{inputs.parameters.message}}&quot;
          - name: stepName
            value: &quot;this is step 1&quot;
    - - name: step-2
        templateRef:
          name: inner-template
          template: InnerJob
        arguments:
          parameters:
          - name: message
            value: &quot;{{inputs.parameters.message}}&quot;
          - name: stepName
            value: &quot;this is step 2&quot;
</code></pre>
<p>Can someone explain to me what the role of the keyword &quot;template&quot; is in this code:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: {{ template &quot;identity-openidconnect&quot; . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ template &quot;microService.name&quot; . }}
    release: &quot;{{ .Release.Name }}&quot;
xxxx xxxxxxxxxxxx
</code></pre>
<p>The keyword &quot;template&quot; tells Helm to look up a previously defined named template and render it in place in the yaml file. The named template has to be defined in advance. This type of construction allows you to refer to the same scheme many times.</p> <p>For example, we can define a template to encapsulate a Kubernetes block of labels:</p> <pre class="lang-yaml prettyprint-override"><code>{{- define &quot;mychart.labels&quot; }}
labels:
  generator: helm
  date: {{ now | htmlDate }}
{{- end }}
</code></pre> <p>Now we can embed this template inside of our existing ConfigMap, and then include it with the <code>template</code> action:</p> <pre class="lang-yaml prettyprint-override"><code>{{- define &quot;mychart.labels&quot; }}
labels:
  generator: helm
  date: {{ now | htmlDate }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  {{- template &quot;mychart.labels&quot; }}
data:
  myvalue: &quot;Hello World&quot;
  {{- range $key, $val := .Values.favorite }}
  {{ $key }}: {{ $val | quote }}
  {{- end }}
</code></pre> <p>When the template engine reads this file, it will store away the reference to <code>mychart.labels</code> until <code>template &quot;mychart.labels&quot;</code> is called. Then it will render that template inline. So the result will look like this:</p> <pre class="lang-yaml prettyprint-override"><code># Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: running-panda-configmap
  labels:
    generator: helm
    date: 2016-11-02
data:
  myvalue: &quot;Hello World&quot;
  drink: &quot;coffee&quot;
  food: &quot;pizza&quot;
</code></pre> <p>Note: a <code>define</code> does not produce output unless it is called with a template, as in this example.</p> <p>For more info about <code>templates</code>, you can read <a href="https://helm.sh/docs/chart_template_guide/named_templates/" rel="noreferrer">this page</a>.</p>
<p>I am looking to understand the order in which Kubernetes examines pods using the 3 types of probes: startup, readiness and liveness.</p> <p>How should these 3 probes be understood and designed correctly for normal applications? What is the chance of a conflict, or of breaking the application, if the startup probe has wrong entries?</p>
<h2>Startup probe</h2> <p><strong>This runs first.</strong> When it succeeds, the Readiness Probe and Liveness Probe are run continuously. If this fails, the container is killed.</p> <p>Use this for &quot;slow starting apps&quot;; you can use the same command as Liveness if you want.</p> <blockquote> <p>The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.</p> </blockquote> <p>From <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">configuring probes</a></p> <h2>Liveness probe</h2> <p>This is used to kill the container, in case of a <em>deadlock</em> in the application.</p> <h2>Readiness probe</h2> <p>This is used to check that the container can receive traffic.</p>
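<p>A minimal sketch showing all three probes on one container (the paths, port, image and timings are illustrative assumptions):</p>

```yaml
containers:
- name: app
  image: my-app:latest          # hypothetical image
  startupProbe:                 # runs first; liveness/readiness are disabled until it succeeds
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30        # allow up to 30 * 10s = 5 min for startup
    periodSeconds: 10
  livenessProbe:                # restarts the container on deadlock
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
  readinessProbe:               # gates Service traffic to the pod
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
```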
<p>Does setting the following pod configuration enforce BW usage of 10MB on average, or does it also prevent spikes of over 10MB? If it does not prevent spikes, is there a way to configure the CNI to &quot;level off&quot; spikes so it will queue the data and send it over a period of a few seconds, within the BW limit of 10MB?</p> <pre><code>kubernetes.io/ingress-bandwidth: 10M
kubernetes.io/egress-bandwidth: 10M
</code></pre>
<p>There isn't a way to control the burst rate currently.</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/7735cf7f2674627283db081d61a2c45f4b70ee8f/pkg/kubelet/dockershim/network/cni/cni.go#L427-L443" rel="nofollow noreferrer">Here is the relevant code that implements bandwidth rate and burst</a>. Notice how <code>bandwidthParam.IngressBurst</code> and <code>bandwidthParam.EgressBurst</code> are set to <code>math.MaxInt32</code>.</p> <pre><code>if ingress != nil || egress != nil {
    bandwidthParam := cniBandwidthEntry{}
    if ingress != nil {
        // see: https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md and
        // https://github.com/containernetworking/plugins/blob/master/plugins/meta/bandwidth/README.md
        // Rates are in bits per second, burst values are in bits.
        bandwidthParam.IngressRate = int(ingress.Value())
        // Limit IngressBurst to math.MaxInt32, in practice limiting to 2Gbit is the equivalent of setting no limit
        bandwidthParam.IngressBurst = math.MaxInt32
    }
    if egress != nil {
        bandwidthParam.EgressRate = int(egress.Value())
        // Limit EgressBurst to math.MaxInt32, in practice limiting to 2Gbit is the equivalent of setting no limit
        bandwidthParam.EgressBurst = math.MaxInt32
    }
    rt.CapabilityArgs[bandwidthCapability] = bandwidthParam
}
</code></pre>
<p>I have multiple nodes, which are lying mostly idle, but I am getting an error while scheduling pods/services saying &quot;Insufficient CPU&quot;</p> <p>Node usage output :-</p> <pre><code>top - 17:59:45 up 17 days, 2:52, 1 user, load average: 5.61, 7.85, 8.58 Tasks: 2030 total, 3 running, 1771 sleeping, 0 stopped, 0 zombie %Cpu(s): 6.5 us, 2.3 sy, 0.4 ni, 90.4 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st KiB Mem : 39616812+total, 29776403+free, 30507960 used, 67896128 buff/cache KiB Swap: 0 total, 0 free, 0 used. 35842112+avail Mem </code></pre> <p>As can be seen, a whole bunch of memory/cpu is lying idle (~ 80 to 90 % is free)</p> <p>The same can be confirmed by :-</p> <pre><code>$ kubectl top nodes W0615 14:03:16.457271 108 top_node.go:119] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% node1 4680m 29% 14943Mi 47% node10 9524m 19% 44735Mi 11% node11 273m 0% 1614Mi 0% node12 289m 0% 1617Mi 0% node2 1736m 10% 11683Mi 37% node3 3223m 20% 17837Mi 56% node4 1680m 10% 15075Mi 47% node5 7386m 15% 39163Mi 10% node6 5392m 22% 26448Mi 20% node7 2457m 5% 28002Mi 7% node8 4853m 10% 51863Mi 13% node9 3620m 7% 18299Mi 4% </code></pre> <p>But when scheduling pods, I am getting an error (kubectl describe pod POD_NAME) :-</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 62m default-scheduler 0/12 nodes are available: 5 node(s) had taints that the pod didn't tolerate, 7 Insufficient cpu. </code></pre> <p>The reason, as I understand it, is this (kubectl describe node node10) :-</p> <pre><code>Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 47887m (99%) 92270m (192%) memory 59753371Ki (15%) 87218649344 (21%) ephemeral-storage 2Gi (0%) 2Gi (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) </code></pre> <p>The pods have been allocated all of the cpu resources across all the nodes.</p> <p>Even though the actual usage is very low, k8s thinks the nodes are fully occupied. What I am trying to achieve is to overcommit the resources. I tried editing &quot;Allocatable&quot; cpu to 2x the &quot;Capacity&quot;, but the changes don't persist. Any suggestion on how I can overcommit?</p>
<p>You cannot overcommit on requests, as those form the minimum required resources for a given pod to run. You can overcommit on limits, as you can see by your <code>192%</code> there.</p>
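<p>For illustration (the values below are made up), overcommitting happens on <code>limits</code> only: the scheduler reserves the <code>requests</code> on the node, while the sum of <code>limits</code> across pods may exceed node capacity:</p> <pre><code>resources:
  requests:
    cpu: 500m        # what the scheduler reserves on the node
    memory: 256Mi
  limits:
    cpu: 2000m       # may exceed node capacity in aggregate (overcommit)
    memory: 1Gi
</code></pre> <p>So the supported way to fit more pods per node is to lower their <code>requests</code> to match actual usage, rather than editing <code>Allocatable</code>.</p>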
<p>How can I speed up the rollout of new images in Kubernetes?</p> <p>Currently, we have an automated build job that modifies a yaml file to point to a new revision and then runs <code>kubectl apply</code> on it.</p> <p>It works, but there are long delays (up to 20 minutes PER POD) before all pods with the previous revision are replaced with the latest.</p> <p>Also, the deployment is configured for 3 replicas. We see that one pod at a time is started with the new revision. (Is this the Kubernetes &quot;surge&quot;?) But that is too slow; I would rather kill all 3 pods and have 3 new ones with the new image.</p>
<blockquote> <p>I would rather kill all 3 pods and have 3 new ones with the new image.</p> </blockquote> <p>You can do that. Set <code>strategy.type</code> to <code>Recreate</code> instead of the default <code>RollingUpdate</code> in your <code>Deployment</code>. See <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">strategy</a>.</p> <p>But you will probably get some downtime during the deployment.</p>
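<p>A minimal sketch of the relevant part of such a <code>Deployment</code>:</p> <pre><code>spec:
  replicas: 3
  strategy:
    type: Recreate   # terminate all old pods before starting the new ones
</code></pre>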
<p>I understand that a pod can have multiple containers. But, I am trying to find if there is a max limit on the number of containers within a pod. I have googled around, but could not find an answer.</p>
<p>AFAIK, there is no such pod-specific limit. There are, however, some cluster-wide limits (as of k8s v1.21):</p> <ul> <li>No more than 110 pods per node</li> <li>No more than 5000 nodes</li> <li>No more than 150000 total pods</li> <li>No more than 300000 total containers</li> </ul> <p><strong>N.B.</strong>: Since a pod is scheduled onto a single node, you are also limited by that node's resources (i.e. cpu, memory). So, when you increase the number of containers in a pod, make sure your node has enough resources to accommodate it.</p>
<p>I am trying to get the current Kubernetes version via the API, because depending on the version some things are not supported (e.g. the readyz endpoint is not available before 1.16, so I want to check the healthz endpoint instead).</p> <p>I found the following endpoint: <code>/version</code></p> <p>Example result:</p> <pre><code>{ &quot;major&quot;: &quot;1&quot;, &quot;minor&quot;: &quot;11+&quot;, &quot;gitVersion&quot;: &quot;v1.11.0+d4cacc0&quot;, &quot;gitCommit&quot;: &quot;d4cacc0&quot;, &quot;gitTreeState&quot;: &quot;clean&quot;, &quot;buildDate&quot;: &quot;2021-05-12T14:13:55Z&quot;, &quot;goVersion&quot;: &quot;go1.10.8&quot;, &quot;compiler&quot;: &quot;gc&quot;, &quot;platform&quot;: &quot;linux/amd64&quot; } </code></pre> <p>Is this the correct approach to get the version via the API? I thought Kubernetes uses semantic versioning, so why is there a &quot;+&quot; in the minor string? Is this configurable? Where can I find a specification of this API and its fields?</p> <p>Ideally this should work for standalone and managed Kubernetes flavors (Openshift, GKE...).</p>
<p><code>GitVersion</code>/<code>gitVersion</code> is the semantic versioning field.</p> <p>This is the version of the <code>kube-apiserver</code> component specifically.</p> <p>The parsing is done by <a href="https://github.com/kubernetes/apimachinery/blob/57f2a0733447cfd41294477d833cce6580faaca3/pkg/util/version/version.go#L35-L40" rel="nofollow noreferrer">regex</a> in the <a href="https://github.com/kubernetes/apimachinery" rel="nofollow noreferrer">k8s.io/apimachinery</a> package:</p> <pre><code>^\s*v?([0-9]+(?:\.[0-9]+)*)(.*)*$ </code></pre> <p>The <code>/version</code> endpoint is part of the documented API spec (<code>operationId: &quot;getCode&quot;</code>), so it should be pretty stable. It's what <code>kubectl version</code> uses, so I doubt it will change easily.</p>
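<p>As a quick sketch: you can fetch the endpoint with <code>kubectl get --raw /version</code>, then strip the non-numeric suffix from <code>minor</code> before comparing (the 1.16 threshold below reflects the cutoff from the question):</p>

```shell
# Values taken from the sample /version response in the question; in practice
# they would come from `kubectl get --raw /version` plus a JSON parser.
major="1"
minor="11+"

# Drop any non-numeric suffix such as "+" before doing arithmetic
minor_num="${minor%%[!0-9]*}"

# Pick the health endpoint based on the version (readyz exists from 1.16 on)
if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor_num" -ge 16 ]; }; then
  endpoint="/readyz"
else
  endpoint="/healthz"
fi
echo "$endpoint"   # -> /healthz for v1.11
```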
<p>I'm trying to use the free tier (autopilot mode) to learn k8s on gcp. However, I came across the following: <a href="https://stackoverflow.com/questions/63987678/is-it-possible-to-have-google-cloud-kubernetes-cluster-in-the-free-tier">Is it possible to have Google Cloud Kubernetes cluster in the free tier?</a>. But when I checked the link given in that question, I could not find the specified limitation <code>f1-micro machines are not supported due to insufficient memory</code>. Is this still valid? Can I use k8s on gcp in the free tier without incurring any cost?</p>
<p>There is no way to get a free GKE cluster on GCP, but you can get a very cheap one by following the instructions at <a href="https://github.com/Neutrollized/free-tier-gke" rel="nofollow noreferrer">https://github.com/Neutrollized/free-tier-gke</a>.</p> <p>Using a combination of GKE's free management tier and a low cost machine type, the cost estimate is less than $5 per month.</p> <p>More details on what is available as part of the free tier can be found here: <a href="https://cloud.google.com/free" rel="nofollow noreferrer">https://cloud.google.com/free</a>.</p> <p>As for your question about the f1-micro limitation in GKE: the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/min-cpu-platform#limitations" rel="nofollow noreferrer">documentation on limitations</a> states that a minimum CPU platform cannot be used with shared-core machine types. Since f1-micro machines are shared-core machine types, the limitation is still valid and they cannot be used.</p>
<p>How can I retrieve a configmap value using jsonpath?</p> <p>I'm trying to retrieve the value of haproxy.cfg from my configmap, but I can't get it just right:</p> <pre><code>kubectl get cm -l app=haproxy -o jsonpath=&quot;{['items'][0]['data']['haproxy.cfg']}&quot; </code></pre> <p>The above returns no results. But I can definitely get the configmap if I leave off the haproxy.cfg:</p> <pre><code>kubectl get cm -l app=haproxy -o jsonpath=&quot;{['items'][0]['data']}&quot; </code></pre> <p>Yields:</p> <pre><code>map[haproxy.cfg: global user root group root maxconn 256 ... ] </code></pre> <p>I can use jq:</p> <pre><code>kubectl get cm -l app=haproxy -o json | jq -r '.items[0].data[&quot;haproxy.cfg&quot;]' </code></pre> <p>which yields exactly what I want:</p> <pre><code> global user root group root maxconn 256 </code></pre> <p>If you need help creating your ConfigMap, here's the manifest for the one I am using:</p> <pre><code>apiVersion: v1 data: haproxy.cfg: &quot;\nglobal\n user root\n group root\n maxconn 256\n\n&quot; kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: haproxy meta.helm.sh/release-namespace: haproxy labels: app: haproxy app-version: 2.4.0 app.kubernetes.io/managed-by: Helm heritage: Helm release: haproxy version: 0.0.3 name: haproxy </code></pre>
<p>Escape the <code>.</code> inside single quotes</p> <pre><code>kubectl get cm -l app=haproxy -o jsonpath=&quot;{.items[0].data['haproxy\.cfg']}&quot; </code></pre> <p>* This didn't work a long time ago, pre 1.5. Then you needed to use <code>go-template</code> formatting.</p>
<p>I am trying to deploy my react app to a kubernetes cluster by serving it via an nginx server. As you can see in the below Dockerfile I am building my app and afterwards copying the build artefacts into the <code>/usr/share/nginx/html/</code> path on my nginx server.</p> <pre><code># Stage 1 - build container FROM node:12-alpine as build WORKDIR /app ENV PATH /app/node_modules/.bin:$PATH COPY yarn.lock ./ COPY package.json ./ RUN yarn install COPY . ./ ENV GENERATE_SOURCEMAP=false SASS_PATH=node_modules:src ARG env RUN yarn run build:${env} # Stage 2 - productive environment FROM nginx:stable-alpine COPY --from=build /app/build /usr/share/nginx/html/ COPY nginx.conf /etc/nginx/conf.d/default.conf RUN apk update RUN apk upgrade EXPOSE 80 CMD [&quot;nginx&quot;,&quot;-g&quot;,&quot;daemon off;&quot;] </code></pre> <p>I am using the following nginx configuration. From what I understand this should instruct the nginx server to search for resources using the specified root path.</p> <pre><code>server { listen 80; root /usr/share/nginx/html; error_page 500 502 503 504 /50x.html; location / { index index.html index.htm; try_files $uri $uri/ /index.html; } } </code></pre> <p>I can see that this works when running the docker container locally (all react app resources get loaded), but when I deploy it onto my kubernetes cluster and expose it via an ingress controller, I am getting the following errors for the build artifacts:</p> <pre><code>GET https://*host*/static/css/main.0e41ac5f.chunk.css net::ERR_ABORTED 404 </code></pre> <p>This is interesting since when I ssh into the container I can see that all the requested files still exist at the correct directory (<code>/usr/share/nginx/html/static/</code>).</p> <p>I already tried setting the <strong>homepage</strong> value in my <strong>package.json</strong> to <code>&quot;.&quot;</code>, but this didn't change anything.</p> <p>My ingress configuration looks like this:</p> <pre class="lang-yaml 
prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: search-dev annotations: kubernetes.io/ingress.class: &quot;public-iks-k8s-nginx&quot; spec: tls: hosts: - host secretName: secret rules: - host: host http: paths: - path: / pathType: Exact backend: service: name: search-frontend-dev port: number: 80 </code></pre> <p>I also tried setting this annotation:</p> <pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/rewrite-target: / </code></pre> <p>But unfortunately this didn't work either.</p> <p>Would appreciate some help with this.</p>
<p>For all of you who are having a similar problem: set <code>pathType</code> to <strong>Prefix</strong>. Otherwise only requests to &quot;/&quot; will get routed to your service. Requests to &quot;/static/...&quot; were simply not routed to the service, and therefore never reached my nginx server.</p>
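<p>For example, the first rule from the question could be rewritten without the regex (service names and ports follow the question's manifest):</p> <pre><code>- path: /api/auth
  pathType: Prefix   # matches /api/auth and everything below it
  backend:
    service:
      name: auth-srv
      port:
        number: 3000
</code></pre>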
<p>I noticed some of my clusters were reporting a CPUThrottlingHigh alert for metrics-server-nanny container (image: gke.gcr.io/addon-resizer:1.8.11-gke.0) in GKE. I couldn't see a way to configure this container to give it more CPU because it's automatically deployed as part of the metrics-server pod, and Google automatically resets any changes to the deployment/pod resource settings.</p> <p>So out of curiosity, I created a small kubernetes cluster in GKE (3 standard nodes) with autoscaling turned on to scale up to 5 nodes. No apps or anything installed. Then I installed the kube-prometheus monitoring stack (<a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus</a>) which includes the CPUThrottlingHigh alert. Soon after installing the monitoring stack, this same alert popped up for this container. I don't see anything in the logs of this container or the related metrics-server-nanny container.</p> <p>Also, I don't notice this same issue on AWS or Azure because while they do have a similar metrics-server pod in the kube-system namespace, they do not contain the sidecar metrics-server-nanny container in the pod.</p> <p>Has anyone seen this or something similar? Is there a way to give this thing more resources without Google overwriting config changes?</p>
<p>This is a known issue in <a href="https://github.com/kubernetes/kubernetes/issues/67577" rel="nofollow noreferrer">Kubernetes</a>: CFS quotas lead to throttling of pods that exhibit a spiky CPU usage pattern. Because Kubernetes / GKE uses CFS to implement CPU quotas, pods can get throttled even when they really aren't busy.</p> <p>Kubernetes uses CFS quotas to enforce CPU limits for the pods running an application. The Completely Fair Scheduler (CFS) is a process scheduler that handles CPU resource allocation for executing processes, based on time periods and not on available CPU power.</p> <p>We have no direct control over CFS via Kubernetes, so the only solution is to disable it. This is done via the node config, which allows users to tune the kubelet configs &quot;<strong>CPUManagerPolicy</strong>&quot; and &quot;<strong>CPUCFSQuota</strong>&quot;.</p> <p>The workaround is to temporarily disable Kubernetes CFS quotas entirely (kubelet's flag <strong>--cpu-cfs-quota=false</strong>):</p> <pre><code> $ cat node-config.yaml kubeletConfig: cpuCFSQuota: false cpuManagerPolicy: static $ gcloud container clusters create --system-config-from-file=node-config.yaml </code></pre> <p>gcloud will map the fields from the YAML node config file to the corresponding GKE API fields.</p>
<p>I am working with ingress-nginx in kubernetes to set up a server. The issue is that the paths are not routing at all and I get a 404 error from the nginx server on any request I make. Below is my code for ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-srv annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; spec: # defaultBackend: # service: # name: auth-srv # port: # number: 3000 rules: - host: app.dev - http: paths: - pathType: Prefix path: /api/auth/?(.*) backend: service: name: auth-srv port: number: 3000 - path: /api/coms/?(.*) pathType: Prefix backend: service: name: coms-srv port: number: 3000 </code></pre> <p>If I uncomment the default backend service I get a response, but as soon as I remove it I get the 404 nginx error. So I know it's connecting to the services I set.</p> <p>I don't know where I'm going wrong or how to go about fixing this, as I'm copying straight from the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">docs</a>. Any help or insight would be great. Thank you in advance!</p> <p><strong>Edit 1:</strong> I removed the regex from the path and commented out the /api/auth path so no requests should be going to the auth-srv. For some reason, all requests route to the auth-srv even though there is no mapping to it. NOTE: Both the auth and coms pods/services are running in the background, but ingress-nginx still isn't routing properly.</p>
<p>So the reason why this wasn't routing properly was the:</p> <pre><code> - host: app.dev - http: </code></pre> <p>The &quot;-&quot; in front of the &quot;http&quot; made the controller think it was its own ruleset, so the following routes had a host of &quot;*&quot;. After I removed the &quot;-&quot; in front of the &quot;http&quot;, the rules were set to the proper host of app.dev and it started routing accordingly.</p> <p>Thank you for your help everyone! What a long day it has been :')</p>
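<p>For reference, the fixed ruleset keeps <code>host</code> and <code>http</code> in the same list entry:</p> <pre><code>rules:
  - host: app.dev
    http:
      paths:
        # ... the paths as before ...
</code></pre>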
<p>Trying to set up MongoDB in a Kubernetes cluster on my local machine using Minikube but getting the following error. (I tried multiple MongoDB images; latest, 5.0.0, 4.0, etc. The problem continued.)</p> <p>My <code>mongodb-secret.yaml</code> is as follows:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: mongodb-secret type: Opaque data: mongo-root-username: dXNlcm5hbWUNCg== mongo-root-password: cGFzc3dvcmQNCg== </code></pre> <p>My <code>mongodb-deployment.yaml</code> file is as follows:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mongodb-deployment labels: apps: mongodb spec: replicas: 1 selector: matchLabels: app: mongodb template: metadata: labels: app: mongodb spec: containers: - name: mongodb image: mongo:latest ports: - containerPort: 27017 env: - name: MONGO_INITDB_ROOT_USERNAME valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-username - name: MONGO_INITDB_ROOT_PASSWORD valueFrom: secretKeyRef: name: mongodb-secret key: mongo-root-password </code></pre> <p>The output when I run <code>kubectl get all</code>:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/mongodb-deployment-6c587ddcbb-lnk2q 0/1 CrashLoopBackOff 6 7m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 6d NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mongodb-deployment 0/1 1 0 14m NAME DESIRED CURRENT READY AGE replicaset.apps/mongodb-deployment-59ff8f9fd7 0 0 0 9m1s replicaset.apps/mongodb-deployment-6c587ddcbb 1 1 0 7m56s replicaset.apps/mongodb-deployment-7fc5cbcf9c 0 0 0 10m replicaset.apps/mongodb-deployment-8f6675bc5 0 0 0 14m </code></pre> <p>The output when I run <code>kubectl logs mongodb-deployment-6c587ddcbb-lnk2q</code>:</p> <pre><code>about to fork child process, waiting until server is ready for connections. 
forked process: 28 {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.738+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;CONTROL&quot;, &quot;id&quot;:20698, &quot;ctx&quot;:&quot;main&quot;,&quot;msg&quot;:&quot;***** SERVER RESTARTED *****&quot;} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.740+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;CONTROL&quot;, &quot;id&quot;:23285, &quot;ctx&quot;:&quot;main&quot;,&quot;msg&quot;:&quot;Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'&quot;} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.741+00:00&quot;},&quot;s&quot;:&quot;W&quot;, &quot;c&quot;:&quot;ASIO&quot;, &quot;id&quot;:22601, &quot;ctx&quot;:&quot;main&quot;,&quot;msg&quot;:&quot;No TransportLayer configured during NetworkInterface startup&quot;} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.741+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;NETWORK&quot;, &quot;id&quot;:4648601, &quot;ctx&quot;:&quot;main&quot;,&quot;msg&quot;:&quot;Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.&quot;} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.741+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:4615611, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;MongoDB starting&quot;,&quot;attr&quot;:{&quot;pid&quot;:28,&quot;port&quot;:27017,&quot;dbPath&quot;:&quot;/data/db&quot;,&quot;architecture&quot;:&quot;64-bit&quot;,&quot;host&quot;:&quot;mongodb-deployment-6c587ddcbb-lnk2q&quot;}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.741+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;CONTROL&quot;, &quot;id&quot;:23403, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Build Info&quot;,&quot;attr&quot;:{&quot;buildInfo&quot;:{&quot;version&quot;:&quot;4.4.6&quot;,&quot;gitVersion&quot;:&quot;72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7&quot;,&quot;openSSLVersion&quot;:&quot;OpenSSL 1.1.1 11 Sep 2018&quot;,&quot;modules&quot;:[],&quot;allocator&quot;:&quot;tcmalloc&quot;,&quot;environment&quot;:{&quot;distmod&quot;:&quot;ubuntu1804&quot;,&quot;distarch&quot;:&quot;x86_64&quot;,&quot;target_arch&quot;:&quot;x86_64&quot;}}}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.741+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;CONTROL&quot;, &quot;id&quot;:51765, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Operating System&quot;,&quot;attr&quot;:{&quot;os&quot;:{&quot;name&quot;:&quot;Ubuntu&quot;,&quot;version&quot;:&quot;18.04&quot;}}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.741+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;CONTROL&quot;, &quot;id&quot;:21951, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Options set by command 
line&quot;,&quot;attr&quot;:{&quot;options&quot;:{&quot;net&quot;:{&quot;bindIp&quot;:&quot;127.0.0.1&quot;,&quot;port&quot;:27017,&quot;tls&quot;:{&quot;mode&quot;:&quot;disabled&quot;}},&quot;processManagement&quot;:{&quot;fork&quot;:true,&quot;pidFilePath&quot;:&quot;/tmp/docker-entrypoint-temp-mongod.pid&quot;},&quot;systemLog&quot;:{&quot;destination&quot;:&quot;file&quot;,&quot;logAppend&quot;:true,&quot;path&quot;:&quot;/proc/1/fd/1&quot;}}}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.741+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:22297, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem&quot;,&quot;tags&quot;:[&quot;startupWarnings&quot;]} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.742+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:22315, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Opening WiredTiger&quot;,&quot;attr&quot;:{&quot;config&quot;:&quot;create,cache_size=12285M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],&quot;}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.855+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:22430, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;WiredTiger message&quot;,&quot;attr&quot;:{&quot;message&quot;:&quot;[1623831333:855500][28:0x7f583f1b6ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)&quot;}} 
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.855+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:22430, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;WiredTiger message&quot;,&quot;attr&quot;:{&quot;message&quot;:&quot;[1623831333:855566][28:0x7f583f1b6ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)&quot;}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.865+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:4795906, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;WiredTiger opened&quot;,&quot;attr&quot;:{&quot;durationMillis&quot;:123}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.865+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;RECOVERY&quot;, &quot;id&quot;:23987, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;WiredTiger recoveryTimestamp&quot;,&quot;attr&quot;:{&quot;recoveryTimestamp&quot;:{&quot;$timestamp&quot;:{&quot;t&quot;:0,&quot;i&quot;:0}}}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.886+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:4366408, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;No table logging settings modifications are required for existing WiredTiger tables&quot;,&quot;attr&quot;:{&quot;loggingEnabled&quot;:true}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.887+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:22262, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Timestamp monitor starting&quot;} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.894+00:00&quot;},&quot;s&quot;:&quot;W&quot;, &quot;c&quot;:&quot;CONTROL&quot;, &quot;id&quot;:22120, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Access control is not 
enabled for the database. Read and write access to data and configuration is unrestricted&quot;,&quot;tags&quot;:[&quot;startupWarnings&quot;]} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.894+00:00&quot;},&quot;s&quot;:&quot;W&quot;, &quot;c&quot;:&quot;CONTROL&quot;, &quot;id&quot;:22178, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'&quot;,&quot;tags&quot;:[&quot;startupWarnings&quot;]} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.894+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:20320, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;createCollection&quot;,&quot;attr&quot;:{&quot;namespace&quot;:&quot;admin.system.version&quot;,&quot;uuidDisposition&quot;:&quot;provided&quot;,&quot;uuid&quot;:{&quot;uuid&quot;:{&quot;$uuid&quot;:&quot;5118c0eb-1d88-4b4a-837c-ec78ce97b710&quot;}},&quot;options&quot;:{&quot;uuid&quot;:{&quot;$uuid&quot;:&quot;5118c0eb-1d88-4b4a-837c-ec78ce97b710&quot;}}}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.908+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;INDEX&quot;, &quot;id&quot;:20345, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Index build: done building&quot;,&quot;attr&quot;:{&quot;buildUUID&quot;:null,&quot;namespace&quot;:&quot;admin.system.version&quot;,&quot;index&quot;:&quot;_id_&quot;,&quot;commitTimestamp&quot;:{&quot;$timestamp&quot;:{&quot;t&quot;:0,&quot;i&quot;:0}}}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.908+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;COMMAND&quot;, &quot;id&quot;:20459, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Setting featureCompatibilityVersion&quot;,&quot;attr&quot;:{&quot;newVersion&quot;:&quot;4.4&quot;}} 
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.908+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:20536, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Flow Control is enabled on this deployment&quot;} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.909+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;STORAGE&quot;, &quot;id&quot;:20320, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;createCollection&quot;,&quot;attr&quot;:{&quot;namespace&quot;:&quot;local.startup_log&quot;,&quot;uuidDisposition&quot;:&quot;generated&quot;,&quot;uuid&quot;:{&quot;uuid&quot;:{&quot;$uuid&quot;:&quot;5e09a18c-0532-4fad-8981-283eb3f2f429&quot;}},&quot;options&quot;:{&quot;capped&quot;:true,&quot;size&quot;:10485760}}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.924+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;INDEX&quot;, &quot;id&quot;:20345, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Index build: done building&quot;,&quot;attr&quot;:{&quot;buildUUID&quot;:null,&quot;namespace&quot;:&quot;local.startup_log&quot;,&quot;index&quot;:&quot;_id_&quot;,&quot;commitTimestamp&quot;:{&quot;$timestamp&quot;:{&quot;t&quot;:0,&quot;i&quot;:0}}}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.924+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;FTDC&quot;, &quot;id&quot;:20625, &quot;ctx&quot;:&quot;initandlisten&quot;,&quot;msg&quot;:&quot;Initializing full-time diagnostic data capture&quot;,&quot;attr&quot;:{&quot;dataDirectory&quot;:&quot;/data/db/diagnostic.data&quot;}} {&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.925+00:00&quot;},&quot;s&quot;:&quot;I&quot;, &quot;c&quot;:&quot;CONTROL&quot;, &quot;id&quot;:20712, &quot;ctx&quot;:&quot;LogicalSessionCacheReap&quot;,&quot;msg&quot;:&quot;Sessions collection is not set up; waiting until next sessions reap 
interval&quot;,&quot;attr&quot;:{&quot;error&quot;:&quot;NamespaceNotFound: config.system.sessions does not exist&quot;}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.925+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;STORAGE&quot;,  &quot;id&quot;:20320,   &quot;ctx&quot;:&quot;LogicalSessionCacheRefresh&quot;,&quot;msg&quot;:&quot;createCollection&quot;,&quot;attr&quot;:{&quot;namespace&quot;:&quot;config.system.sessions&quot;,&quot;uuidDisposition&quot;:&quot;generated&quot;,&quot;uuid&quot;:{&quot;uuid&quot;:{&quot;$uuid&quot;:&quot;8149a51f-47fd-4180-a6a4-310574dfae9b&quot;}},&quot;options&quot;:{}}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.925+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:23015,   &quot;ctx&quot;:&quot;listener&quot;,&quot;msg&quot;:&quot;Listening on&quot;,&quot;attr&quot;:{&quot;address&quot;:&quot;/tmp/mongodb-27017.sock&quot;}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.926+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:23015,   &quot;ctx&quot;:&quot;listener&quot;,&quot;msg&quot;:&quot;Listening on&quot;,&quot;attr&quot;:{&quot;address&quot;:&quot;127.0.0.1&quot;}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.926+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:23016,   &quot;ctx&quot;:&quot;listener&quot;,&quot;msg&quot;:&quot;Waiting for connections&quot;,&quot;attr&quot;:{&quot;port&quot;:27017,&quot;ssl&quot;:&quot;off&quot;}}
child process started successfully, parent exiting
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.948+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;INDEX&quot;,    &quot;id&quot;:20345,   &quot;ctx&quot;:&quot;LogicalSessionCacheRefresh&quot;,&quot;msg&quot;:&quot;Index build: done building&quot;,&quot;attr&quot;:{&quot;buildUUID&quot;:null,&quot;namespace&quot;:&quot;config.system.sessions&quot;,&quot;index&quot;:&quot;_id_&quot;,&quot;commitTimestamp&quot;:{&quot;$timestamp&quot;:{&quot;t&quot;:0,&quot;i&quot;:0}}}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.948+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;INDEX&quot;,    &quot;id&quot;:20345,   &quot;ctx&quot;:&quot;LogicalSessionCacheRefresh&quot;,&quot;msg&quot;:&quot;Index build: done building&quot;,&quot;attr&quot;:{&quot;buildUUID&quot;:null,&quot;namespace&quot;:&quot;config.system.sessions&quot;,&quot;index&quot;:&quot;lsidTTLIndex&quot;,&quot;commitTimestamp&quot;:{&quot;$timestamp&quot;:{&quot;t&quot;:0,&quot;i&quot;:0}}}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.961+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:22943,   &quot;ctx&quot;:&quot;listener&quot;,&quot;msg&quot;:&quot;Connection accepted&quot;,&quot;attr&quot;:{&quot;remote&quot;:&quot;127.0.0.1:55144&quot;,&quot;connectionId&quot;:1,&quot;connectionCount&quot;:1}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.961+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:51800,   &quot;ctx&quot;:&quot;conn1&quot;,&quot;msg&quot;:&quot;client metadata&quot;,&quot;attr&quot;:{&quot;remote&quot;:&quot;127.0.0.1:55144&quot;,&quot;client&quot;:&quot;conn1&quot;,&quot;doc&quot;:{&quot;application&quot;:{&quot;name&quot;:&quot;MongoDB Shell&quot;},&quot;driver&quot;:{&quot;name&quot;:&quot;MongoDB Internal Client&quot;,&quot;version&quot;:&quot;4.4.6&quot;},&quot;os&quot;:{&quot;type&quot;:&quot;Linux&quot;,&quot;name&quot;:&quot;Ubuntu&quot;,&quot;architecture&quot;:&quot;x86_64&quot;,&quot;version&quot;:&quot;18.04&quot;}}}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:33.964+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:22944,   &quot;ctx&quot;:&quot;conn1&quot;,&quot;msg&quot;:&quot;Connection ended&quot;,&quot;attr&quot;:{&quot;remote&quot;:&quot;127.0.0.1:55144&quot;,&quot;connectionId&quot;:1,&quot;connectionCount&quot;:0}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:34.003+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:22943,   &quot;ctx&quot;:&quot;listener&quot;,&quot;msg&quot;:&quot;Connection accepted&quot;,&quot;attr&quot;:{&quot;remote&quot;:&quot;127.0.0.1:55146&quot;,&quot;connectionId&quot;:2,&quot;connectionCount&quot;:1}}
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:34.004+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:51800,   &quot;ctx&quot;:&quot;conn2&quot;,&quot;msg&quot;:&quot;client metadata&quot;,&quot;attr&quot;:{&quot;remote&quot;:&quot;127.0.0.1:55146&quot;,&quot;client&quot;:&quot;conn2&quot;,&quot;doc&quot;:{&quot;application&quot;:{&quot;name&quot;:&quot;MongoDB Shell&quot;},&quot;driver&quot;:{&quot;name&quot;:&quot;MongoDB Internal Client&quot;,&quot;version&quot;:&quot;4.4.6&quot;},&quot;os&quot;:{&quot;type&quot;:&quot;Linux&quot;,&quot;name&quot;:&quot;Ubuntu&quot;,&quot;architecture&quot;:&quot;x86_64&quot;,&quot;version&quot;:&quot;18.04&quot;}}}}
uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createUser@src/mongo/shell/db.js:1386:11
@(shell):1:1
Error saving history file: FileOpenFailed Unable to open() file /home/mongodb/.dbshell: No such file or directory
{&quot;t&quot;:{&quot;$date&quot;:&quot;2021-06-16T08:15:34.016+00:00&quot;},&quot;s&quot;:&quot;I&quot;,  &quot;c&quot;:&quot;NETWORK&quot;,  &quot;id&quot;:22944,   &quot;ctx&quot;:&quot;conn2&quot;,&quot;msg&quot;:&quot;Connection ended&quot;,&quot;attr&quot;:{&quot;remote&quot;:&quot;127.0.0.1:55146&quot;,&quot;connectionId&quot;:2,&quot;connectionCount&quot;:0}}
</code></pre> <p>What am I missing?</p>
<p>Checking your secret values, something's wrong:</p> <pre><code>$ echo dXNlcm5hbWUNCg== | base64 --decode | cat -e
username^M$
$ echo cGFzc3dvcmQNCg== | base64 --decode | cat -e
password^M$
</code></pre> <p>Try re-creating your secret without those un-printable characters (<code>\r</code> as well as <code>\n</code>).</p> <p>Maybe, instead of pre-encoding your secret, you could try:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
stringData:
  mongo-root-username: username
  mongo-root-password: password
</code></pre>
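<p>If you do keep pre-encoded <code>data</code> fields, a quick local sanity check (assuming GNU coreutils' <code>base64</code>) is to encode with <code>printf</code>, which, unlike <code>echo</code>, appends no trailing newline:</p>

```shell
# Correct: no trailing newline gets encoded into the secret value
printf 'username' | base64    # dXNlcm5hbWU=
printf 'password' | base64    # cGFzc3dvcmQ=

# Broken variant: echo appends \n (and a \r can sneak in from Windows line endings)
echo 'username' | base64      # dXNlcm5hbWUK
```

<p>If the decoded value ends in anything but the plain string (as the <code>cat -e</code> output above shows with <code>^M$</code>), the secret needs to be re-created.</p>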
<p>I have spent days now trying to figure out a dependency issue I'm experiencing with (Py)Spark running on Kubernetes. I'm using the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/" rel="noreferrer">spark-on-k8s-operator</a> and Spark's Google Cloud connector.</p> <p>When I try to submit my spark job <strong>without</strong> a dependency using <code>sparkctl create sparkjob.yaml ...</code> with the below .yaml file, it works like a charm.</p> <pre><code>apiVersion: &quot;sparkoperator.k8s.io/v1beta2&quot;
kind: SparkApplication
metadata:
  name: spark-job
  namespace: my-namespace
spec:
  type: Python
  pythonVersion: &quot;3&quot;
  hadoopConf:
    &quot;fs.gs.impl&quot;: &quot;com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem&quot;
    &quot;fs.AbstractFileSystem.gs.impl&quot;: &quot;com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS&quot;
    &quot;fs.gs.project.id&quot;: &quot;our-project-id&quot;
    &quot;fs.gs.system.bucket&quot;: &quot;gcs-bucket-name&quot;
    &quot;google.cloud.auth.service.account.enable&quot;: &quot;true&quot;
    &quot;google.cloud.auth.service.account.json.keyfile&quot;: &quot;/mnt/secrets/keyfile.json&quot;
  mode: cluster
  image: &quot;image-registry/spark-base-image&quot;
  imagePullPolicy: Always
  mainApplicationFile: ./sparkjob.py
  deps:
    jars:
      - https://repo1.maven.org/maven2/org/apache/spark/spark-sql-kafka-0-10_2.11/2.4.5/spark-sql-kafka-0-10_2.11-2.4.5.jar
  sparkVersion: &quot;2.4.5&quot;
  restartPolicy:
    type: OnFailure
    onFailureRetries: 3
    onFailureRetryInterval: 10
    onSubmissionFailureRetries: 5
    onSubmissionFailureRetryInterval: 20
  driver:
    cores: 1
    coreLimit: &quot;1200m&quot;
    memory: &quot;512m&quot;
    labels:
      version: 2.4.5
    serviceAccount: spark-operator-spark
    secrets:
      - name: &quot;keyfile&quot;
        path: &quot;/mnt/secrets&quot;
        secretType: GCPServiceAccount
    envVars:
      GCS_PROJECT_ID: our-project-id
  executor:
    cores: 1
    instances: 1
    memory: &quot;512m&quot;
    labels:
      version: 2.4.5
    secrets:
      - name: &quot;keyfile&quot;
        path: &quot;/mnt/secrets&quot;
        secretType: GCPServiceAccount
    envVars:
      GCS_PROJECT_ID: our-project-id
</code></pre> <p>The Docker image <code>spark-base-image</code> is built with this Dockerfile:</p> <pre><code>FROM gcr.io/spark-operator/spark-py:v2.4.5
RUN rm $SPARK_HOME/jars/guava-14.0.1.jar
ADD https://repo1.maven.org/maven2/com/google/guava/guava/28.0-jre/guava-28.0-jre.jar $SPARK_HOME/jars
ADD https://repo1.maven.org/maven2/com/google/cloud/bigdataoss/gcs-connector/hadoop2-2.0.1/gcs-connector-hadoop2-2.0.1-shaded.jar $SPARK_HOME/jars
ENTRYPOINT [ &quot;/opt/entrypoint.sh&quot; ]
</code></pre> <p>The main application file is uploaded to GCS when submitting the application and subsequently fetched from there and copied into the driver pod upon starting the application. The problem starts whenever I want to supply my own Python module <code>deps.zip</code> as a dependency to be able to use it in my main application file <code>sparkjob.py</code>.</p> <p>Here's what I have tried so far:</p> <p>1</p> <p>Added the following lines to spark.deps in sparkjob.yaml:</p> <pre><code>pyFiles:
  - ./deps.zip
</code></pre> <p>which resulted in the operator not even being able to submit the Spark application, with error</p> <pre><code>java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found
</code></pre> <p><code>./deps.zip</code> is successfully uploaded to the GCS bucket along with the main application file, but while the main application file can be successfully fetched from GCS (I see this in the logs in jobs with no dependencies as defined above), <code>./deps.zip</code> can somehow not be fetched from there. I also tried adding the gcs-connector jar to the spark.deps.jars list explicitly - nothing changes.</p> <p>2</p> <p>I added <code>./deps.zip</code> to the base docker image used for starting up the driver and executor pods by adding <code>COPY ./deps.zip /mnt/</code> to the above Dockerfile, and adding the dependency in the sparkjob.yaml via</p> <pre><code>pyFiles:
  - local:///mnt/deps.zip
</code></pre> <p>This time the spark job can be submitted and the driver pod is started, however I get a <code>file:/mnt/deps.zip not found</code> error when the Spark context is being initialized. I also tried to additionally set <code>ENV SPARK_EXTRA_CLASSPATH=/mnt/</code> in the Dockerfile, but without any success. I even tried to explicitly mount the whole <code>/mnt/</code> directory into the driver and executor pods using volume mounts, but that also didn't work.</p> <hr /> <p>edit:</p> <p>My workaround (2), adding dependencies to the Docker image and setting <code>ENV SPARK_EXTRA_CLASSPATH=/mnt/</code> in the Dockerfile, actually worked! Turns out the tag didn't update and I've been using an old version of the Docker image all along. Duh.</p> <p>I still don't know why the (more elegant) solution 1 via the gcs-connector isn't working, but it might be related to <a href="https://stackoverflow.com/questions/62408395/mountvolume-setup-failed-for-volume-spark-conf-volume">MountVolume.Setup failed for volume &quot;spark-conf-volume&quot;</a></p>
<p>If the zip files contain jars which you always need while running your Spark job: facing a similar issue, I just added</p> <pre><code>FROM gcr.io/spark-operator/spark-py:v2.4.5
COPY mydepjars/ /opt/spark/jars/
</code></pre> <p>And everything is getting loaded within my spark session. Could be one way to do it.</p>
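<p>If you would rather not drop jars straight into <code>/opt/spark/jars</code>, an alternative sketch (the <code>/opt/spark-extra-jars</code> path is an assumption) is to copy them to a separate directory and point Spark's class path at it via <code>sparkConf</code> in the SparkApplication:</p>

```yaml
spec:
  sparkConf:
    # Both driver and executors need the extra jars on their class path;
    # the directory itself is a placeholder - match it to your COPY target.
    "spark.driver.extraClassPath": "/opt/spark-extra-jars/*"
    "spark.executor.extraClassPath": "/opt/spark-extra-jars/*"
```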
<p>I am trying to find the min/max/average memory consumed by a particular pod over a time interval.</p> <p>Currently I am using</p> <pre><code>sum(container_memory_working_set_bytes{namespace=&quot;test&quot;, pod=&quot;test1&quot;, container!=&quot;POD&quot;, container!=&quot;&quot;}) by (container)

Output -&gt; test1 = 9217675264
</code></pre> <p>For reporting purposes, I need to find the min/peak memory used by the pod over a time interval (6h), and the average too.</p>
<p>You can do that with a range vector (add an <code>[interval]</code> to a metric name/selector) and an <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#aggregation_over_time" rel="nofollow noreferrer">aggregation-over-time function</a>:</p> <pre><code>min_over_time(container_memory_usage_bytes{}[6h])
max_over_time(container_memory_usage_bytes{}[6h])
avg_over_time(container_memory_usage_bytes{}[6h])
</code></pre>
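<p>If you need the aggregation applied on top of the summed series from the question, a subquery (available since Prometheus 2.7) can be used; note the <code>1m</code> resolution step here is an arbitrary choice:</p>

```
max_over_time(sum(container_memory_working_set_bytes{namespace="test", pod="test1", container!="POD", container!=""}) by (container)[6h:1m])
```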
<p>I have a k8s ingress with more than 5 backend services behind it. The ingress spawns a GoogleCloud LoadBalancer.</p> <p>Traffic is routed to each of the services by an http <code>path</code> rule. E.g. one app is on <code>/foo</code>, another is on <code>/bar</code>, etc. All of them work fine. Then I added a new app, with a backend service and routing rule, all the same way as the others.</p> <p>But I'm constantly getting this error when I hit the URL of the new app:</p> <pre><code>Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
</code></pre> <p>When I open the ingress in the GCP console, I can see this warning: <a href="https://i.stack.imgur.com/ZyA7I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZyA7I.png" alt="enter image description here" /></a></p> <p>and the unhealthy service is the one from my newly added app.</p> <p><strong>The weird thing is that the app actually does get traffic when I hit the URL. I can see it in the logs. But I still get that 502 error and the backend service is shown as unhealthy.</strong></p> <p>I am not really sure how to debug this in order to figure out what the issue is.</p>
<p>So, the problem was that the LB health-check was hitting <code>/</code>, which was a non-existent endpoint on the app (i.e. it wasn't returning <code>200 OK</code>).</p> <p>I added a <code>readiness</code> probe to the k8s Deployment. According to the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="noreferrer">GCP Ingress docs</a>, if there is a <code>readiness</code> probe, the ingress will pick it up and use it as the LB health-check.</p> <p>I also had to manually update the path which the health-check object of the backend service was hitting. I guess the pod with the readiness probe should exist before the ingress is set up, otherwise it doesn't update the health-check object automatically.</p>
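<p>For reference, a minimal probe of this kind might look as follows; the <code>/healthz</code> path and port <code>8080</code> are placeholders, use whatever endpoint your app actually answers with a 200 on:</p>

```yaml
readinessProbe:
  httpGet:
    path: /healthz   # must return HTTP 200; this becomes the LB health-check path
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
```

<p>(As noted above, if the Ingress already existed you may still have to update the health-check path manually.)</p>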
<p>How can I speed up the rollout of new images in Kubernetes?</p> <p>Currently, we have an automated build job that modifies a yaml file to point to a new revision and then runs <code>kubectl apply</code> on it.</p> <p>It works, but there are long delays (up to 20 minutes PER POD) before all pods with the previous revision are replaced with the latest.</p> <p>Also, the deployment is configured for 3 replicas. We see that one pod at a time is started with the new revision. (Is this the Kubernetes &quot;surge&quot;?) But that is too slow; I would rather kill all 3 pods and have 3 new ones with the new image.</p>
<p>Jonas and SYN are right, but I would like to expand this topic with some additional info and examples.</p> <p>You have two types of strategies to choose from when specifying the way of updating your deployments:</p> <ul> <li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate Deployment</a>: All existing Pods are killed before new ones are created.</p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update Deployment</a>: The Deployment updates Pods in a rolling update fashion.</p> </li> </ul> <p>The default and more recommended one is <code>.spec.strategy.type==RollingUpdate</code>. See the examples below:</p> <pre><code>spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
</code></pre> <p>In this example there would be one additional Pod (<code>maxSurge: 1</code>) above the desired number of 3, and the number of available Pods cannot go lower than that number (<code>maxUnavailable: 0</code>).</p> <p>Choosing this config, Kubernetes will spin up an additional Pod, then stop an &quot;old&quot; one. If there's another Node available to deploy this Pod, the system will be able to handle the same workload during deployment. If not, the Pod will be deployed on an already used Node at the cost of resources from other Pods hosted on the same Node.</p> <p>You can also try something like this:</p> <pre><code>spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
</code></pre> <p>With the example above there would be no additional Pods (<code>maxSurge: 0</code>) and only a single Pod at a time would be unavailable (<code>maxUnavailable: 1</code>).</p> <p>In this case, Kubernetes will first stop a Pod before starting up a new one. The advantage of that is that the infrastructure doesn't need to scale up, but the maximum workload will be less.</p> <p>If you choose to use the percentage values for <code>maxSurge</code> and <code>maxUnavailable</code> you need to remember that:</p> <ul> <li><p><code>maxSurge</code> - the absolute number is calculated from the percentage by <strong>rounding up</strong></p> </li> <li><p><code>maxUnavailable</code> - the absolute number is calculated from the percentage by <strong>rounding down</strong></p> </li> </ul> <p>With the <code>RollingUpdate</code> defined correctly, you also have to make sure your applications provide endpoints to be queried by Kubernetes that return the app's status. Below it's a <code>/greeting</code> endpoint, that returns an HTTP 200 status when it's ready to handle requests, and HTTP 500 when it's not:</p> <pre><code>readinessProbe:
  httpGet:
    path: /greeting
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  successThreshold: 1
  timeoutSeconds: 1
</code></pre> <ul> <li><p><code>initialDelaySeconds</code> - Time (in seconds) before the first check for readiness is done.</p> </li> <li><p><code>periodSeconds</code> - Time (in seconds) between two readiness checks after the first one.</p> </li> <li><p><code>successThreshold</code> - Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.</p> </li> <li><p><code>timeoutSeconds</code> - Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</p> </li> </ul> <p>More on the topic of liveness/readiness probes can be found <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">here</a>.</p>
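<p>And if you truly want the behaviour described in the question (kill all old Pods first, then start the new ones, accepting a short downtime), that is exactly what the <code>Recreate</code> strategy does:</p>

```yaml
spec:
  replicas: 3
  strategy:
    type: Recreate   # all existing Pods are terminated before any new Pod is created
```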
<p>I have deployed minikube on a Windows VM and the minikube VM is created on VirtualBox with a host-only IP.</p> <p>I have deployed the Kubernetes dashboard with a NodePort so I can access it from outside the cluster. The svc is as follows:</p> <pre><code>PS C:\Users\XXX\Desktop\ingress&gt; kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.111.167.61   &lt;none&gt;        8000/TCP        5d20h
kubernetes-dashboard        NodePort    10.111.220.57   &lt;none&gt;        443:30613/TCP   5d20h
</code></pre> <p>With the help of the minikube ingress addon, I installed the Nginx Ingress controller. Its svc details are as follows:</p> <pre><code>PS C:\Users\XXX\Desktop\ingress&gt; kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.98.29.41   &lt;none&gt;        80:32628/TCP,443:31194/TCP   5d20h
ingress-nginx-controller-admission   ClusterIP   10.96.35.36   &lt;none&gt;        443/TCP                      5d20h
</code></pre> <p>Then I created an Ingress rule for my dashboard application as follows:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/add-base-url: &quot;true&quot;
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
    nginx.ingress.kubernetes.io/secure-backends: &quot;true&quot;
    nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot;
    ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/dashboard)$ $1/ permanent;
spec:
  rules:
    - host: k8s.dashboard.com
      http:
        paths:
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
</code></pre> <p>But now when I am trying to access the dashboard with the URL <code>https://k8s.dashboard.com/dashboard</code>, I am facing a 404 Not Found error. I also tried multiple URLs to access the dashboard, such as:</p> <pre><code>https://k8s.dashboard.com:30613/dashboard
http://k8s.dashboard.com:30613/dashboard
https://k8s.dashboard.com/dashboard
</code></pre> <p>But this URL is working for me: <code>https://k8s.dashboard.com:30613</code>. I have added the minikube IP to the hosts file on the Windows machine. The Ingress rule describe output is as follows:</p> <pre><code>PS C:\Users\XXX\Desktop\ingress&gt; kubectl describe ingress -n kubernetes-dashboard
Name:             dashboard-ingress
Namespace:        kubernetes-dashboard
Address:          192.168.56.100
Default backend:  default-http-backend:80 (&lt;error: endpoints &quot;default-http-backend&quot; not found&gt;)
Rules:
  Host               Path        Backends
  ----               ----        --------
  k8s.dashboard.com  /dashboard  kubernetes-dashboard:443 (172.17.0.4:8443)
Annotations:  ingress.kubernetes.io/configuration-snippet: rewrite ^(/dashboard)$ $1/ permanent;
              kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/add-base-url: true
              nginx.ingress.kubernetes.io/backend-protocol: HTTPS
              nginx.ingress.kubernetes.io/force-ssl-redirect: false
              nginx.ingress.kubernetes.io/rewrite-target: /
              nginx.ingress.kubernetes.io/secure-backends: true
Events:
  Type    Reason  Age                   From                      Message
  ----    ------  ----                  ----                      -------
  Normal  Sync    26m (x16 over 5d20h)  nginx-ingress-controller  Scheduled for sync
</code></pre> <p>Any help regarding this really helps. 
Thanks</p> <p><code>EDITED</code> My ingress controller logs are as follows:</p> <pre><code>192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] &quot;GET /dashboard HTTP/2.0&quot; 200 746 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36&quot; 418 0.019 [kubernetes-dashboard-kubernetes-dashboard-443] [] 172.17.0.4:8443 746 0.018 200 1a2793052f70031c6c9fa59b0d4374d1
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] &quot;GET /styles.aa1f928b22a88c391404.css HTTP/2.0&quot; 404 548 &quot;https://k8s.dashboard.com/dashboard&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36&quot; 101 0.002 [upstream-default-backend] [] 127.0.0.1:8181 548 0.002 404 1974258442f8b4c46d8badd1dda3e3f5
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] &quot;GET /runtime.2a456dd93bf6c4890676.js HTTP/2.0&quot; 404 548 &quot;https://k8s.dashboard.com/dashboard&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36&quot; 49 0.008 [upstream-default-backend] [] 127.0.0.1:8181 548 0.007 404 96c17c52e6337f29dd8b2b2b68b088ac
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] &quot;GET /polyfills.f4f05ad675be9638106e.js HTTP/2.0&quot; 404 548 &quot;https://k8s.dashboard.com/dashboard&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36&quot; 40 0.008 [upstream-default-backend] [] 127.0.0.1:8181 548 0.007 404 096ae29cb168523aa9191f27a967e47a
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] &quot;GET /scripts.128068f897fc721c4673.js HTTP/2.0&quot; 404 548 &quot;https://k8s.dashboard.com/dashboard&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36&quot; 38 0.008 [upstream-default-backend] [] 127.0.0.1:8181 548 0.007 404 728f73f75276167b387dc87a69b65a72
192.168.56.1 - - [16/Jun/2021:06:57:00 +0000] &quot;GET /en.main.09bf52db2dbc808e7279.js HTTP/2.0&quot; 404 548 &quot;https://k8s.dashboard.com/dashboard&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36&quot; 38 0.014 [upstream-default-backend] [] 127.0.0.1:8181 548 0.014 404 b11e5ae324a828508d488816306399c2
</code></pre> <p>and these are the dashboard logs:</p> <pre><code>172.17.0.1 - - [16/Jun/2021:06:59:46 +0000] &quot;GET / HTTP/1.1&quot; 200 6 &quot;&quot; &quot;kube-probe/1.20&quot;
172.17.0.1 - - [16/Jun/2021:06:59:56 +0000] &quot;GET / HTTP/1.1&quot; 200 6 &quot;&quot; &quot;kube-probe/1.20&quot;
172.17.0.1 - - [16/Jun/2021:07:00:00 +0000] &quot;GET /healthz HTTP/1.1&quot; 200 13 &quot;&quot; &quot;dashboard/v2.2.0&quot;
172.17.0.1 - - [16/Jun/2021:07:00:06 +0000] &quot;GET / HTTP/1.1&quot; 200 6 &quot;&quot; &quot;kube-probe/1.20&quot;
172.17.0.1 - - [16/Jun/2021:07:00:16 +0000] &quot;GET / HTTP/1.1&quot; 200 6 &quot;&quot; &quot;kube-probe/1.20&quot;
172.17.0.1 - - [16/Jun/2021:07:00:26 +0000] &quot;GET / HTTP/1.1&quot; 200 6 &quot;&quot; &quot;kube-probe/1.20&quot;
172.17.0.1 - - [16/Jun/2021:07:00:30 +0000] &quot;GET /healthz HTTP/1.1&quot; 200 13 &quot;&quot; &quot;dashboard/v2.2.0&quot;
172.17.0.1 - - [16/Jun/2021:07:00:36 +0000] &quot;GET / HTTP/1.1&quot; 200 6 &quot;&quot; &quot;kube-probe/1.20&quot;
{&quot;level&quot;:&quot;error&quot;,&quot;msg&quot;:&quot;Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)&quot;,&quot;time&quot;:&quot;2021-06-16T07:00:41Z&quot;}
</code></pre>
<p>According to <a href="https://github.com/kubernetes/dashboard/issues/5017" rel="nofollow noreferrer">this issue</a> this is a limitation/bug of the kubernetes dashboard.</p> <p>They suggest using this config as a workaround:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  labels:
    app.kubernetes.io/name: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: nginx
    # Add https backend protocol support for ingress-nginx
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Accept-Encoding &quot;&quot;;
      sub_filter '&lt;base href=&quot;/&quot;&gt;' '&lt;base href=&quot;/dashboard/&quot;&gt;';
      sub_filter_once on;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: my.example.com
      http:
        paths:
          - path: /dashboard(/|$)(.*)
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
</code></pre>
<p>I'm a beginner to Kubernetes, Helm and YAML. I'm trying to access the QuestDB Console via a Kubernetes Ingress controller set up in my minikube, but I'm getting the below error when running a helm upgrade. Could anyone advise how I can correct this?</p> <pre><code>Error: UPGRADE FAILED: failed to create resource: Ingress.extensions &quot;questdb&quot; is invalid: spec: Invalid value: []networking.IngressRule(nil): either `defaultBackend` or `rules` must be specified
</code></pre> <p>Here's my overriding values.yaml:</p> <pre><code>ingress:
  enabled: true
  rules:
    - host: localhost
      http:
        paths:
          - path: /questdb
            backend:
              serviceName: questdb-headless
              servicePort: 9000
          - path: /influxdb
            backend:
              serviceName: questdb-headless
              servicePort: 9009
</code></pre> <p>I've installed the QuestDB helm chart using a local version which has only slightly modified the original ingress.yaml to reference networking.k8s.io/v1 instead of networking.k8s.io/v1beta1. Here's what it is locally:</p> <pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include &quot;questdb.fullname&quot; . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare &quot;&gt;=1.14-0&quot; .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else -}}
apiVersion: extensions/v1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include &quot;questdb.labels&quot; . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
</code></pre> <p>I'm running on these versions:</p> <pre><code>- helm : v3.6.0
- Kubernetes :
Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.7&quot;, GitCommit:&quot;1dd5338295409edcfff11505e7bb246f0d325d15&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-01-13T13:23:52Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.2&quot;, GitCommit:&quot;faecb196815e248d3ecfb03c680a4507229c2a56&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-01-13T13:20:00Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}

NAME                 NAMESPACE   CHART                       APP VERSION
kubernetes-ingress   default     kubernetes-ingress-1.15.2   1.6.2
questdb              default     questdb-0.8.0               6.0.3
</code></pre> <p>More details on the original chart and templates can be found here: <a href="https://github.com/questdb/questdb-kubernetes/tree/master/charts/questdb" rel="nofollow noreferrer">https://github.com/questdb/questdb-kubernetes/tree/master/charts/questdb</a></p>
<p>The Ingress template expects things to stay under <code>.Values.ingress.hosts</code>, but in your values they are under <code>.Values.ingress.rules</code>.</p> <p>Additionally, paths need to stay directly under the hosts items, not under http, because the template iterates them with a</p> <pre><code>{{- range .paths }}
</code></pre> <p>under the <code>.Values.ingress.hosts</code> items. And paths are just strings, as the service name and port are directly taken from the <code>fullname</code> and the <code>.Values.service.port</code>.</p> <hr /> <p>I would try changing your values to something like:</p> <pre><code>ingress:
  enabled: true
  hosts:
    - host: localhost
      paths:
        - &quot;/questdb&quot;
        - &quot;/influxdb&quot;
</code></pre> <p>or something close to this.</p> <p>Additionally, you can see the output of a helm upgrade or install command if you add the parameters <code>--debug --dry-run</code>, which could greatly help you identify problems like these, showing the definitions as they will be created (if there's no error while building the template, of course).</p> <hr /> <p><strong>Update</strong>: since you also changed the Ingress template to use <code>networking.k8s.io/v1</code>, you need to also change how the template is created, because the new kind of Ingress expects things in a different way, as you can see in the documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p> <p>Rules could become something like this:</p> <pre><code>rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ .host | quote }}
    http:
      paths:
        {{- range .paths }}
        - path: {{ .path }}
          backend:
            service:
              name: {{ .svc }}
              port:
                number: {{ .port }}
        {{- end }}
  {{- end }}
</code></pre> <p>and remove the declarations of</p> <pre><code>{{- $fullName := include &quot;questdb.fullname&quot; . -}}
{{- $svcPort := .Values.service.port -}}
</code></pre> <p>which are now useless. With this, you can change your values to the following:</p> <pre><code>ingress:
  enabled: true
  hosts:
    - host: localhost
      paths:
        - path: &quot;/questdb&quot;
          svc: questdb-headless
          port: 9000
        - path: &quot;/influxdb&quot;
          svc: questdb-headless
          port: 9009
</code></pre> <p>But the service that you specify in the values must be created somewhere, of course (it is only referenced by the Ingress, and it needs to expose the desired ports).</p>
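<p>To inspect the rendered manifests without touching the cluster, the <code>--dry-run --debug</code> combination mentioned above can be used like this (the release name and chart path are assumptions, adjust to your setup):</p>

```shell
helm upgrade questdb ./charts/questdb -f my-values.yaml --dry-run --debug
```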
<p>Use case: get a stream from Kafka, store it in parquet files using Spark, then open these parquet files and generate a graph using GraphFrames.</p> <p>Infra: I have a Bitnami Spark infra on Kubernetes connected to Kafka.</p> <p>The goal is to call spark-submit inside a Kubernetes pod. That way all the code runs inside Kubernetes and I don't install Spark outside Kubernetes.</p> <p>Without Kubernetes, I have done the job in the Spark master container:</p> <pre><code>docker cp ./Spark/Python_code/edge_stream.py spark_spark_1:/opt/bitnami/spark/edge_stream.py
docker cp ./Spark/Python_code/config.json spark_spark_1:/opt/bitnami/spark/config.json

docker exec spark_spark_1 \
    spark-submit \
    --master spark://0.0.0.0:7077 \
    --deploy-mode client \
    --conf spark.cores.max=1 \
    --conf spark.executor.memory=1g \
    --conf spark.eventLog.enabled=true \
    --conf spark.eventLog.dir=/tmp/spark-events \
    --conf spark.eventLog.rolling.maxFileSize=256m \
    /opt/bitnami/spark/edge_stream.py
</code></pre> <p>Is it possible to do the same job in Kubernetes?</p> <p>Best regards</p>
<p>Using the <code>exec</code> command of kubectl:</p> <pre><code>minikube kubectl -- exec my-spark-master-0 -- spark-submit \
    --master spark://0.0.0.0:7077 \
    --deploy-mode client \
    --conf spark.cores.max=1 \
    --conf spark.executor.memory=1g \
    --conf spark.eventLog.enabled=true \
    --conf spark.eventLog.dir=/tmp/spark-events \
    --conf spark.eventLog.rolling.maxFileSize=256m \
    ../Python/edge_stream.py
</code></pre>
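<p>If the script and config are not already baked into the image, the <code>docker cp</code> steps from the question translate almost one-to-one to <code>kubectl cp</code> (pod name taken from the command above; the destination paths are assumptions):</p>

```shell
kubectl cp ./Spark/Python_code/edge_stream.py my-spark-master-0:/opt/bitnami/spark/edge_stream.py
kubectl cp ./Spark/Python_code/config.json my-spark-master-0:/opt/bitnami/spark/config.json
```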
<p>I am trying to make my kubernetes cluster pull from a container registry service running inside itself. I have kube dns setup, I have a registry deployment and service running. I can resolve the service internal name via host command on the node. I have added <code>--dns</code> flag to docker daemon with the address of kube dns service. I have kubelet running with <code>--cluster-dns</code> flag set to the same address as well. Yet somehow this is what I get when I try to create a pod using this registry.</p> <p><code>Failed to pull image &quot;kube-registry.kube-system.svc.cluster.local/myuser/myimage&quot;: rpc error: code = Unknown desc = Error response from daemon: Get https://kube-registry.kube-system.svc.cluster.local/v1/_ping: dial tcp: lookup kube-registry.kube-system.svc.cluster.local: no such host </code></p> <p>Somehow even with kube dns address explicitly given to both dockerd and kubelet, pulling images from the registry service fails because of name resolution. What am I missing?</p>
<p>Question: &quot;I am trying to make my kube-cluster pull from a registry running inside itself.&quot; (Note I plan to going to edit the title of your question to clarify slightly / make it easier to search)</p> <p><strong>Short Answer: You can't</strong>*<br /> *<strong>Nuanced Answer: Technically it is possible with hacks and a solid understanding of Kubernetes fundamentals</strong>. You'll probably want to avoid doing this, unless you have a really good reason, and fully understand my explanation of the fundamental issue and workaround, as this is an advanced use case that will require debugging to force it to work. This is complicated and nuanced enough to make step by step directions difficult, but I can give you a solid idea of the fundamental issue you ran into that makes this challenging, and high level overview guidance on how to pull off what you're trying to do anyways.</p> <p><strong>Why you can't / the fundamental issue you ran into:</strong><br /> In Kubernetes land 3 networks tend to exist: Internet, LAN, and Inner Cluster Network.<br /> (Resource that goes into more depth: <a href="https://oteemo.com/kubernetes-networking-and-services-101/" rel="nofollow noreferrer">https://oteemo.com/kubernetes-networking-and-services-101/</a>)<br /> AND these 3 networks each have their own DNS / there's 3 layers of DNS.</p> <ul> <li>Internet DNS: 8.8.8.8, 1.1.1.1, 9.9.9.9 (google, cloudflare, quad9 or whatever public internet DNS the router is configured to point to.)</li> <li>LAN DNS: 192.168.1.1 (LAN DNS hosted on your router)</li> <li>CoreDNS: 10.43.0.10 (10th IP of the CIDR range of the inner cluster network)</li> </ul> <p>Here's the gotcha you're running into:</p> <ul> <li>A pod can resolve DNS entries hosted at any of these 3 levels of DNS.</li> <li>The OS hosting the Kubernetes Cluster can only resolve DNS entries hosted on LAN DNS or Internet DNS. 
(the OS isn't scoped to have visibility into the existence of CoreDNS/Inner Cluster Network.)</li> <li>kubelet + docker/containerd/cri-o/other runtime are responsible for pulling images from registries, and these exist at the OS level in the form of systemd services and thus don't have scope to Inner Cluster DNS names. This is why what you're trying to do is failing.</li> </ul> <p><strong>Workaround options / hacks and nuances you can do to force what you're trying to do to work:</strong></p> <p><strong>Option 1.) (I don't suggest this, has extra difficult chicken and egg issues, sharing for information purposes only)</strong></p> <p>Host an additional instance of coredns as a LAN-facing instance of DNS on Kubernetes. Expose the registry and the 2nd instance of coredns to the LAN via explicit NodePorts (using static service manifests so they'll come up with predictable/static NodePorts, vs random NodePorts in the range of 30000 - 32768) so they're routable from the LAN (I suggest NodePorts over LB's here as one less dependency/thing that can go wrong). Have the 2nd instance of coredns use your LAN router/LAN DNS as its upstream DNS server. Reconfigure the OS to use the LAN-facing coredns as its DNS server.</p> <p><strong>Option 2.) 
More reasonable and what trow does:</strong></p> <ul> <li><p>What is trow: <a href="https://thenewstack.io/trow-a-container-registry-to-run-inside-a-kubernetes-cluster/" rel="nofollow noreferrer">https://thenewstack.io/trow-a-container-registry-to-run-inside-a-kubernetes-cluster/</a></p> </li> <li><p>Proof they use the /etc/hosts method: <a href="https://github.com/ContainerSolutions/trow/blob/main/QUICK-INSTALL.md" rel="nofollow noreferrer">https://github.com/ContainerSolutions/trow/blob/main/QUICK-INSTALL.md</a></p> </li> </ul> <ol> <li><p>Pay $12 for a domain like some-dns-name.tld</p> </li> <li><p>Use the Cert Manager Kubernetes Operator or a standalone Cert Bot docker container + proof you own the domain to get an <a href="https://registry.some-dns-name.tld" rel="nofollow noreferrer">https://registry.some-dns-name.tld</a> HTTPS cert from Let's Encrypt for free. And configure your inner-cluster-hosted registry to use this HTTPS cert.</p> </li> <li><p>Expose the registry hosted in the cluster to the LAN using a NodePort service with an explicitly pinned, convention-based port number, like 32443</p> </li> </ol> <blockquote> <p><strong>Why NodePort and not a LB?</strong> There are 3 reasons NodePort is better than a LB for this scenario:</p> <ul> <li><p>1.) Service type LB's implementation differs between deployment environment and Kubernetes distribution, while type NodePort is universal.</p> </li> <li><p>2.) If the LB changes you have to update every node's /etc/hosts file to point to &quot;LB_IP registry.some-dns-name.tld&quot; AND you have to know the LB IP, which isn't always known in advance / which means you'd have to follow some order of operations. If you use service type NodePort you can add the localhost IP entry to every node's /etc/hosts, so it looks like &quot;127.0.0.1 registry.some-dns-name.tld&quot;; it's well known, reusable, and simplifies the order of operations.</p> </li> <li><p>3.) 
If you ever need to change where your cluster is hosted, you can arrange it so you can make the change in 1 centralized location, even in scenarios where you have no access to or control over LAN DNS. You can craft services that point to a statically defined IP or external name (which could exist outside the cluster), and have the NodePort service point to the statically defined service.</p> </li> </ul> </blockquote> <ol start="4"> <li><p>Add &quot;127.0.0.1 registry.some-dns-name.tld&quot; to /etc/hosts of every node in the cluster.</p> </li> <li><p>Set your yaml manifests to pull from registry.some-dns-name.tld, or configure containerd/cri-o's registry mirroring feature to map registry.some-dns-name.tld:32443 to whatever entries are being mirrored on your local registry.</p> </li> <li><p>There are 2 more solvable chicken-and-egg problems to deal with. The 1st chicken-and-egg problem is that Kubernetes and the registry will both likely need access to container images to even get this far.</p> </li> </ol> <ul> <li>If you have internet access and your internal registry is just for caching purposes, this probably isn't a big deal.</li> <li>If you don't have internet access you'd need to .tar up the images needed by your kube distro and registry and &quot;preload&quot; them into the spot docker/containerd/cri-o expects.</li> <li>If you don't have internet access you could alternatively have another /etc/hosts entry or LAN DNS entry for a non-HA temporary docker-compose-based registry used for initial bootstrapping, or one hosted outside the cluster.</li> </ul> <ol start="7"> <li>The 2nd chicken-and-egg problem is that the registry hosted on your cluster will need some way of being seeded with images.</li> </ol> <ul> <li>If you have internet access, this should be easy to figure out and script.</li> <li>If you have no internet access you may need to come up with some kind of DR solution for backup and restoration of the registry's backend persistent storage.</li> <li>If you have no internet access you could alternatively use an 
&quot;ephemeral transport registry&quot; for seeding purposes. Basically use docker compose to spin up a non-HA registry:2 image with filesystem backing, use <a href="https://github.com/containers/skopeo" rel="nofollow noreferrer">skopeo</a> to seed that, tar up the filesystem backing and import it onto another computer, restore the filesystem backing, reload the non-HA registry:2 from the pre-populated filesystem backing, then use skopeo to copy from your &quot;ephemeral transport registry&quot; to your registry hosted in the cluster.</li> </ul>
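<p>Steps 4 and 5 above can be sketched as a short script. Everything here is illustrative: the domain and port are the example values from this answer, the <code>certs.d</code>/<code>hosts.toml</code> layout is containerd's (1.5+) registry host configuration, which you should verify against your containerd version, and files are written under <code>/tmp</code> so you can inspect them before copying into place on each node.</p>

```shell
#!/bin/sh
# Sketch of steps 4-5: per-node /etc/hosts entry plus a containerd
# hosts.toml mirror mapping. Written to /tmp for review, not applied.
REG_HOST="registry.some-dns-name.tld"
REG_PORT=32443

# Step 4: entry to append to /etc/hosts on every node
echo "127.0.0.1 ${REG_HOST}" > /tmp/hosts-entry

# Step 5 (containerd >= 1.5): registry host config, which normally
# lives at /etc/containerd/certs.d/<registry>/hosts.toml
mkdir -p "/tmp/certs.d/${REG_HOST}"
cat > "/tmp/certs.d/${REG_HOST}/hosts.toml" <<EOF
server = "https://${REG_HOST}:${REG_PORT}"

[host."https://${REG_HOST}:${REG_PORT}"]
  capabilities = ["pull", "resolve"]
EOF
```

<p>After reviewing, the two files would be appended to <code>/etc/hosts</code> and copied under <code>/etc/containerd/certs.d/</code> respectively, followed by a containerd restart.</p>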
<p>I have the following imagepolicy:</p> <pre><code>apiVersion: image.toolkit.fluxcd.io/v1alpha2
kind: ImagePolicy
metadata:
  name: cps-kul-beta-imagepolicy
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: cps-kul-image
  filterTags:
    pattern: '^sha-(?P&lt;ts&gt;.*)-.+'
    extract: '$ts'
  policy:
    numerical:
      order: asc
</code></pre> <p>And I get the error <code>version list argument cannot be empty</code> when doing kubectl describe on that object.</p>
<p>This error means that there are no images in the registry that conform to the tag pattern prescribed in the filterTags section. Uploading a new image to the registry will fix this problem.</p>
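<p>You can sanity-check which of your existing tags would match before pushing anything. The example tags below are made up for illustration; only the first follows the <code>sha-&lt;ts&gt;-&lt;suffix&gt;</code> shape the pattern expects, and the sed expression is just a POSIX approximation of the policy's <code>^sha-(?P&lt;ts&gt;.*)-.+</code> pattern:</p>

```shell
# Approximate Flux's filterTags extraction with sed: tags that
# don't match print no timestamp and would be ignored by the policy.
for tag in sha-1623855600-a1b2c3d latest v1.2.3; do
  ts=$(printf '%s' "$tag" | sed -nE 's/^sha-(.*)-.+$/\1/p')
  if [ -n "$ts" ]; then
    echo "$tag matches, extracted ts=$ts"
  else
    echo "$tag does not match (ignored by the policy)"
  fi
done
```

<p>Running this against the real tag list of your repository (e.g. from your registry's UI or API) shows quickly whether the error comes from a pattern mismatch.</p>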
<p>I'm looking for a way to create a live Kubernetes cluster without too much hassle.</p> <p>I've got a nice HP server, which could run a few VM's with kubernetes on top. The reason for VM's is to isolate this from the host machine. Ideally, the VMs should only run containerd and kubelet and are essentially disposable for node-upgrades.</p> <p>However, I get lost in what tooling would provide this. minikube? microk8s? k3s? rancher? charmed kubernetes? some existing qemu image? some existing vagrant config? The more managed it is, the better. So far I liked minikube, but it doesn't have &quot;start on reboot&quot; for example, nor the flexibility for node upgrades.</p>
<p>I have tried a lot of tools to train for the CKAD certification. For my usage, the best option for a local cluster was k3s plus multipass (for online clusters, I have used Civo). Both perform their respective tasks very fast, which allows me to create clusters at will and dispose of them to work on clean environments.</p> <ul> <li>multipass to create VMs quickly</li> <li>k3s, which is nothing other than a lightweight Kubernetes</li> </ul> <p>You can easily find tutorials to automate the creation of clusters, for example:</p> <ul> <li><a href="https://betterprogramming.pub/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c" rel="nofollow noreferrer">https://betterprogramming.pub/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c</a></li> <li><a href="https://medium.com/@yankee.exe/setting-up-multi-node-kubernetes-cluster-with-k3s-and-multipass-d4efed47fed5" rel="nofollow noreferrer">https://medium.com/@yankee.exe/setting-up-multi-node-kubernetes-cluster-with-k3s-and-multipass-d4efed47fed5</a></li> <li><a href="https://github.com/superseb/multipass-k3s" rel="nofollow noreferrer">https://github.com/superseb/multipass-k3s</a></li> </ul>
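<p>For reference, the core of those tutorials boils down to a handful of commands. The sketch below only prints the plan (via the <code>run</code> wrapper) instead of executing it; VM names and sizes are illustrative, quoting is elided in the printed plan, the <code>&lt;SERVER_IP&gt;</code>/<code>&lt;TOKEN&gt;</code> placeholders must be filled in from the server VM, and flags may differ between multipass versions:</p>

```shell
#!/bin/sh
# Dry-run sketch: one k3s server VM plus two agent VMs via multipass.
PLAN=/tmp/k3s-plan.txt
: > "$PLAN"
run() { echo "+ $*" | tee -a "$PLAN"; }   # swap 'echo' for real execution

run multipass launch --name k3s-server --cpus 2 --mem 2G --disk 10G
run multipass exec k3s-server -- sh -c 'curl -sfL https://get.k3s.io | sh -'
# The join token ends up in /var/lib/rancher/k3s/server/node-token:
run multipass exec k3s-server -- sudo cat /var/lib/rancher/k3s/server/node-token
for n in 1 2; do
  run multipass launch --name "k3s-agent-$n" --cpus 1 --mem 1G --disk 10G
  run multipass exec "k3s-agent-$n" -- sh -c \
    'curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER_IP>:6443 K3S_TOKEN=<TOKEN> sh -'
done
```

<p>Tearing everything down afterwards is a single <code>multipass delete --all --purge</code>, which is what makes this combination convenient for disposable practice clusters.</p>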
<p>I'm running kubernetes 1-21-0 on Centos7. I've set up a keycloak service to test my ingress controller and am able to access the keycloak on the host url with the keycloak port like <code>myurl.com:30872</code>. These are my running services:</p> <pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default keycloak NodePort 10.96.11.164 &lt;none&gt; 8080:30872/TCP 21h default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 11d ingress-nginx ingress-nginx-controller NodePort 10.102.201.24 &lt;none&gt; 80:31110/TCP,443:30566/TCP 9m45s ingress-nginx ingress-nginx-controller-admission ClusterIP 10.107.90.207 &lt;none&gt; 80/TCP,443/TCP 9m45s kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 11d </code></pre> <p>I've deployed the following <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml" rel="nofollow noreferrer">nginx ingress controller</a>.</p> <p>And added an HTTP webhook to the service:</p> <pre><code># Source: ingress-nginx/templates/controller-service-webhook.yaml apiVersion: v1 kind: Service metadata: labels: helm.sh/chart: ingress-nginx-3.23.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.44.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller-admission namespace: ingress-nginx spec: type: ClusterIP ports: - name: http-webhook port: 80 targetPort: webhook - name: https-webhook port: 443 targetPort: webhook selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller </code></pre> <p>With this ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: keycloak annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - http: paths: - path: /keycloak pathType: Prefix backend: 
service: name: keycloak port: number: 8080 </code></pre> <p>Now when I attempt to connect to the keycloak service through the ingress I go to <code>myurl.com/keycloak</code> but it's unable to connect and trying to curl it from within my control node shows connection refused:</p> <pre><code># curl -I http://127.0.0.1/keycloak curl: (7) Failed connect to 127.0.0.1:80; Connection refused </code></pre> <p>Can someone see what I'm missing?</p> <p><strong>Edit:</strong></p> <p>I realized the ingress controller actually works, but I need to specify its port also to reach it like this:</p> <pre><code>curl -I http://127.0.0.1:31110/keycloak </code></pre> <p>Which I'd like to avoid.</p>
<p>You have to specify port <code>31110</code> because your nginx ingress is set up with <code>NodePort</code>, which means kubernetes listens on this port and all traffic that goes here is redirected to the <code>nginx-ingress-controller</code> pod.</p> <p>Depending on your setup and goals, this can be achieved differently.</p> <p><strong>Option 1</strong> - for testing purposes only and without any changes in setup. Works only on a control plane where the <code>nginx-ingress-controller</code> pod is running.</p> <p>It's possible to forward traffic from outside port 80 directly to the <code>nginx-ingress-controller</code> pod's port 80. You can run this command (in background):</p> <pre><code>sudo kubectl port-forward ingress-nginx-controller-xxxxxxxx-yyyyy 80:80 -n ingress-nginx &amp;
</code></pre> <p>A curl test shows that it's working:</p> <pre><code>curl -I localhost/keycloak
Handling connection for 80
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:19:23 GMT
</code></pre> <p>Curl can be run on a different instance; in this case the command will look this way, without specifying any ports:</p> <pre><code>curl -I public_ip/keycloak
</code></pre> <p><strong>Option 2</strong> - this one is a bit more difficult, however it provides better results.</p> <p>It's possible to expose pods outside of the cluster. The feature is called <code>hostPort</code> - it allows exposing a single container port on the host IP. To have this work on different worker nodes, <code>ingress-nginx-controller</code> should be deployed as a <code>DaemonSet</code>.</p> <p>Below are the parts of <code>values.yaml</code> for the ingress-nginx helm chart that I changed:</p> <p>hostPort -&gt; enabled -&gt; <strong>true</strong></p> <pre><code>  ## Use host ports 80 and 443
  ## Disabled by default
  ##
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
</code></pre> <p>kind -&gt; <strong>DaemonSet</strong></p> <pre><code>  ## DaemonSet or Deployment
  ##
  kind: DaemonSet
</code></pre> <p>Then install ingress-nginx-controller from this chart. 
With this, by default the <code>ingress-nginx-controller</code> pods will listen for traffic on ports 80 and 443 on every node, which a simple test confirms:</p> <pre><code>curl -I localhost/keycloak
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:31:25 GMT
</code></pre> <p><strong>Option 3</strong> - may be considered as well if ingress-nginx is installed with the LoadBalancer type.</p> <p>Use <code>metallb</code> - a software load balancer specifically designed for bare metal clusters. <a href="https://metallb.universe.tf/installation/#installation-by-manifest" rel="nofollow noreferrer">How to install metallb</a> and <a href="https://metallb.universe.tf/configuration/" rel="nofollow noreferrer">configure</a> it.</p> <p>Once it's done and ingress-nginx is deployed, ingress-nginx will get an External-IP:</p> <p>kubectl get svc --all-namespaces</p> <pre><code>NAMESPACE       NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx   ingress-nginx-controller   LoadBalancer   10.102.135.146   192.168.1.240   80:32400/TCP,443:32206/TCP   43s
</code></pre> <p>Testing this again with <code>curl</code>:</p> <pre><code>curl -I 192.168.1.240/keycloak
HTTP/1.1 200 OK
Date: Wed, 16 Jun 2021 13:55:34 GMT
</code></pre> <p>More information about the topics above:</p> <ul> <li><a href="https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="nofollow noreferrer">hostPorts and hostNetwork</a></li> <li><a href="https://metallb.universe.tf/" rel="nofollow noreferrer">Metallb project</a></li> </ul>
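<p>If the controller is managed with Helm, the two Option 2 changes don't require editing the full values.yaml by hand — a small override file is enough. The value paths below (<code>controller.kind</code>, <code>controller.hostPort.enabled</code>) come from the ingress-nginx chart; double-check them against your chart version. The helm command itself is only echoed here:</p>

```shell
# Minimal override file for Option 2 (DaemonSet + hostPort):
cat > /tmp/ingress-values.yaml <<'EOF'
controller:
  kind: DaemonSet
  hostPort:
    enabled: true
EOF

# Then apply it with (shown, not executed):
echo "helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx" \
     "--namespace ingress-nginx -f /tmp/ingress-values.yaml"
```

<p>Keeping the overrides in a small file like this also makes later chart upgrades easier than maintaining a fully copied values.yaml.</p>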
<p>Is there any way I can have an AWS deployment using kubernetes where I have a variable and scalable number of pods for my app BUT the environment on the VM is different for a single host?</p> <p>Basically I want one of my pods to be doing a task, all the time, that none of the other pods will do. I am planning to control this using an environment variable.</p>
<p>You can have multiple deployments; however, if you are in need of <code>uniqueness</code> between the pods, you should consider <code>statefulsets</code> over deployments. That said, choosing between a <code>statefulset</code> and a <code>deployment</code> is <strong>mostly decided by the type of the application and it is a major decision</strong>, so using a <code>statefulset</code> may not fit in as a 100% replacement for a deployment.</p> <p>In <code>statefulsets</code> each pod gets its unique but non-random identity via its hostname.</p> <p>For example, you can create a <code>mysql</code> statefulset using the following:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: null
  labels:
    app: foo
  name: foo
spec:
  serviceName: mysql
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: foo
    spec:
      containers:
      - env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        image: mysql
        name: mysql
</code></pre> <p>This would create three replicas of the pod:</p> <pre><code>k get pod -l app=foo
NAME    READY   STATUS    RESTARTS   AGE
foo-0   1/1     Running   0          5m17s
foo-1   1/1     Running   0          5m14s
foo-2   1/1     Running   0          5m10s
</code></pre> <p>Now you can fetch the hostname of the pods; the first pod is always suffixed with <code>0</code>, the 2nd with <code>1</code>, and so on.</p> <pre><code>k exec -it foo-0 -- hostname
foo-0
k exec -it foo-1 -- hostname
foo-1
k exec -it foo-2 -- hostname
foo-2
</code></pre> <p>You can use the <code>hostname</code> as a selector to decide what action is required on which pod. <a href="https://stackoverflow.com/a/41733207/6309601">Here</a> is one very good read on this.</p>
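<p>Tying this back to the question: a container entrypoint can branch on that ordinal so exactly one pod runs the special task. A minimal sketch — the hostname is hard-coded here for illustration; in a real pod you would read <code>$(hostname)</code> instead, and the &quot;singleton task&quot; branch is just an example:</p>

```shell
#!/bin/sh
# StatefulSet pods are named <name>-<ordinal>; strip everything up to
# the last '-' to recover the ordinal and branch on it.
pod_name="foo-0"              # in a real pod: pod_name=$(hostname)
ordinal="${pod_name##*-}"

if [ "$ordinal" = "0" ]; then
  echo "$pod_name: running the singleton task"
else
  echo "$pod_name: running as a regular replica"
fi
```

<p>Compared to controlling this with an environment variable set per deployment, deriving the role from the ordinal keeps a single manifest and survives pod rescheduling, since the identity is stable.</p>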
<p>I've got a Next.js app which has 2 simple <code>readiness</code> and <code>liveness</code> endpoints with the following implementation:</p> <pre><code>return res.status(200).send('OK'); </code></pre> <p>I've created the endpoints as per <a href="https://nextjs.org/docs/api-routes/introduction" rel="noreferrer">the api routes docs</a>. Also, I've got a <code>/stats</code> basePath as per <a href="https://nextjs.org/docs/api-reference/next.config.js/basepath" rel="noreferrer">the docs here</a>. So, the probes endpoints are at <code>/stats/api/readiness</code> and <code>/stats/api/liveness</code>.</p> <p>When I build and run the app in a Docker container locally - the probe endpoints are accessible and returning 200 OK.</p> <p>When I deploy the app to my k8s cluster, though, the probes fail. There's plenty of <code>initialDelaySeconds</code> time, so that's not the cause.</p> <p>I connect to the <code>service</code> of the pod thru <code>port-forward</code> and when the pod has just started, before it fails, I can hit the endpoint and it returns 200 OK. And a bit after it starts failing as usual.</p> <p>I also tried accessing the failing pod thru a healthy pod:</p> <pre><code>k exec -t [healthy pod name] -- curl -l 10.133.2.35:8080/stats/api/readiness </code></pre> <p>And the same situation - in the beginning, while the pod hasn't failed yet, I get 200 OK on the curl command. And a bit after, it start failing.</p> <p>The error on the probes that I get is:</p> <pre><code>Readiness probe failed: Get http://10.133.2.35:8080/stats/api/readiness: net/http: request canceled (Client.Timeout exceeded while awaiting headers) </code></pre> <p>Funny experiment - I tried putting a random, non-existent endpoint for the probes, and I get the same error. Which leads me to the thought that the probes fail because it cannot access the proper endpoints?</p> <p>But then again, the endpoints are accessible for a period of time before the probes start failing. 
So, I have literally no idea why this is happening.</p> <p>Here is my k8s deployment config for the probes:</p> <pre><code> livenessProbe: httpGet: path: /stats/api/liveness port: 8080 scheme: HTTP initialDelaySeconds: 10 timeoutSeconds: 3 periodSeconds: 3 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /stats/api/readiness port: 8080 scheme: HTTP initialDelaySeconds: 10 timeoutSeconds: 3 periodSeconds: 3 successThreshold: 1 failureThreshold: 3 </code></pre> <p><strong>Update</strong></p> <p>used <code>curl -v</code> as requested from comments. The result is:</p> <pre><code>* Trying 10.133.0.12:8080... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 10.133.0.12 (10.133.0.12) port 8080 (#0) &gt; GET /stats/api/healthz HTTP/1.1 &gt; Host: 10.133.0.12:8080 &gt; User-Agent: curl/7.76.1 &gt; Accept: */* &gt; * Mark bundle as not supporting multiuse &lt; HTTP/1.1 200 OK &lt; ETag: &quot;2-nOO9QiTIwXgNtWtBJezz8kv3SLc&quot; &lt; Content-Length: 2 &lt; Date: Wed, 16 Jun 2021 18:42:23 GMT &lt; Connection: keep-alive &lt; Keep-Alive: timeout=5 &lt; { [2 bytes data] 100 2 100 2 0 0 666 0 --:--:-- --:--:-- --:--:-- 666 * Connection #0 to host 10.133.0.12 left intact OK% </code></pre> <p>Then, ofcourse, once it starts failing, the result is:</p> <pre><code>* Trying 10.133.0.12:8080... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* connect to 10.133.0.12 port 8080 failed: Connection refused * Failed to connect to 10.133.0.12 port 8080: Connection refused 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 * Closing connection 0 curl: (7) Failed to connect to 10.133.0.12 port 8080: Connection refused command terminated with exit code 7 </code></pre>
<p>The error tells you: <code>Client.Timeout exceeded while awaiting headers</code>. Meaning the TCP connection is established (not refused, nor timing out).</p> <p>Your liveness/readiness probe timeout is too low. Your application doesn't have enough time to respond.</p> <p>Could be due to CPU or memory allocations being smaller than when using your laptop, due to higher concurrency, or maybe a LimitRange that sets some defaults when you did not.</p> <p>Check with:</p> <pre><code>time kubectl exec -t [healthy pod name] -- curl -l 127.0.0.1:8080/stats/api/readiness
</code></pre> <p>If you can't allocate more CPU, double that time, round it up, and fix your probes:</p> <pre><code>  livenessProbe:
    ...
    timeoutSeconds: 10
  readinessProbe:
    ...
    timeoutSeconds: 10
</code></pre> <p>Alternatively, though probably less in the spirit, you could replace those httpGet checks with tcpSocket ones. They would be faster, though they may miss actual issues.</p>
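<p>In addition to <code>time kubectl exec</code>, curl's <code>-w</code> timers break the delay down per phase (DNS, connect, first byte), which helps tell a slow app apart from a slow network path. Using the pod IP from the question; the command is only printed here, since it needs cluster access to run:</p>

```shell
# Per-phase timing against the readiness endpoint; --max-time mirrors
# the probe's timeoutSeconds so the command fails the same way kubelet does.
URL="http://10.133.2.35:8080/stats/api/readiness"
echo curl -o /dev/null -sS --max-time 3 \
  -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  "$URL"
```

<p>If <code>connect</code> is small but <code>ttfb</code> approaches the limit, the app itself is slow to respond and raising <code>timeoutSeconds</code> (or the resource allocations) is the right fix.</p>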
<p>I'm new to Kubeflow and k8s. I have setup a single node k8s cluster and installed Kubeflow on this. I'm now trying the 'conditional pipeline' simple example from &quot;Kubeflow for Machine Learning&quot; book but I am getting &quot;cannot post /apis/v1beta1/experiments&quot; error ...</p> <pre><code>Reason: Not Found HTTP response headers: HTTPHeaderDict({'x-powered-by': 'Express', 'content-security-policy': &quot;default-src 'none'&quot;, 'x-content-type-options': 'nosniff', 'content-type': 'text/html; charset=utf-8', 'content-length': '164', 'date': 'Fri, 11 Jun 2021 20:47:13 GMT', 'x-envoy-upstream-service-time': '2', 'server': 'envoy'}) HTTP response body: &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;utf-8&quot;&gt; &lt;title&gt;Error&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;pre&gt;Cannot POST /apis/v1beta1/experiments&lt;/pre&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Any pointer what could be going wrong?</p> <p>If I do &quot;kubectl get -A svc | grep dashboard&quot;, I only see Kubeflow central dashboard. 
Could this error be related to k8s dashboard not running?</p> <p>This is the example I am trying:</p> <p><a href="https://github.com/intro-to-ml-with-kubeflow/intro-to-ml-with-kubeflow-examples/blob/master/pipelines/ControlStructures.ipynb" rel="nofollow noreferrer">https://github.com/intro-to-ml-with-kubeflow/intro-to-ml-with-kubeflow-examples/blob/master/pipelines/ControlStructures.ipynb</a></p> <p>Before this I've tried below MNIST example also and I faced exact same issue - <a href="https://github.com/anjuls/fashion-mnist-kfp-lab/blob/master/KF_Fashion_MNIST.ipynb" rel="nofollow noreferrer">https://github.com/anjuls/fashion-mnist-kfp-lab/blob/master/KF_Fashion_MNIST.ipynb</a></p> <p>Finally, I tried to modify the kfp.Client() line to following: kfp.Client(host='http://127.0.0.1:8001').create_run_from_pipeline_func(conditional_pipeline, arguments={})</p> <p>After this I'm getting error related to 'healtz' -</p> <pre><code>MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=8001): Max retries exceeded with url: /apis/v1beta1/healthz (Caused by ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))) </code></pre> <p>I did following: kubectl logs ml-pipeline-ui-7ddcd74489-xrss8 -c ml-pipeline-ui -n kubeflow</p> <p>It seems ml-pipeline is running on http://localhost:3000. So I modified the client call to following: client = kfp.Client(host='http://localhost:3000')</p> <p>I still get an error - this time &quot;connection refused&quot;.</p> <pre><code>MaxRetryError: HTTPConnectionPool(host='localhost', port=3000): Max retries exceeded with url: /apis/v1beta1/healthz (Caused by NewConnectionError('&lt;urllib3.connection.HTTPConnection object at 0x7fffd1ca0f28&gt;: Failed to establish a new connection: [Errno 111] Connection refused',)) </code></pre>
<p>I've made some progress. However, issues are not fully resolved but I can at least proceed with client creation, do health check, list existing pipelines.</p> <p>I could find ml-pipeline service is running on following internal IP:</p> <p>kubeflow service/ml-pipeline ClusterIP 172.19.31.229</p> <p>I then used this IP in kfp.Client() API - this resulted in RBAC access issue. I then patched my k8s with following with some hint from another issue -</p> <pre><code>apiVersion: rbac.istio.io/v1alpha1 kind: ClusterRbacConfig metadata: name: default spec: mode: &quot;OFF&quot; </code></pre> <p>This resolved issues I was facing with kfp.Client(). But now, I'm facing below error when I try to create_experiment():</p> <pre><code>ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Wed, 16 Jun 2021 22:40:54 GMT', 'x-envoy-upstream-service-time': '2', 'server': 'envoy', 'transfer-encoding': 'chunked'}) HTTP response body: {&quot;error&quot;:&quot;Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace.&quot;,&quot;message&quot;:&quot;Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace.&quot;,&quot;code&quot;:3,&quot;details&quot;:[{&quot;@type&quot;:&quot;type.googleapis.com/api.Error&quot;,&quot;error_message&quot;:&quot;Invalid resource references for experiment. ListExperiment requires filtering by namespace.&quot;,&quot;error_details&quot;:&quot;Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace.&quot;}]} </code></pre>
<p>I am trying to use the --tmpfs flag from the docker run command in the Kubernetes YAML file but could not find the way.</p> <pre><code>sudo docker run --name=ubuntu-gnome -d --rm \
     --tmpfs /run --tmpfs /run/lock --tmpfs /tmp \
     --cap-add SYS_BOOT --cap-add SYS_ADMIN \
     -v /sys/fs/cgroup:/sys/fs/cgroup \
     -p 5901:5901 -p 6901:6901 \
     darkdragon001/ubuntu-gnome-vnc
</code></pre>
<p>You're looking for an emptyDir volume, such as the following:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: container
    ...
    volumeMounts:
    - mountPath: /tmp
      name: tmp
      subPath: tmp
    - mountPath: /run
      name: tmp
      subPath: run
    - mountPath: /run/lock
      name: tmp
      subPath: run-lock
  volumes:
  - name: tmp
    emptyDir:
      medium: Memory
      sizeLimit: 64Mi
</code></pre>
<p>I am trying to copy files from a <strong>Kubernetes</strong> pod to a <strong>GCP Bucket</strong>. I can get the path of my file, but I was wondering: if I want to do this programmatically using Python, how can I do it?</p> <p>I get my buckets using <code>gcsfs</code>. How can I copy a file in my program without using <code>kubectl</code>?</p> <p>Is there any way to do this through Python?</p>
<p>I need to agree with the comment made by @anemyte:</p> <blockquote> <p>There is <a href="https://stackoverflow.com/questions/59703610/copy-file-from-pod-to-host-by-using-kubernetes-python-client">this question</a> about how to copy a file from a pod. You can download it and then use your code to upload it to the bucket.</p> </blockquote> <hr /> <p>I see 2 possible solutions to this question:</p> <ul> <li>Use <code>GCS Fuse</code> and Python code to copy the file from your <code>Pod</code> to <code>GCS</code> bucket</li> <li>Use the Python library to connect to the GCS bucket without <code>gcsfuse</code></li> </ul> <hr /> <h3>Use <code>GCS Fuse</code> and Python code to copy the file from your <code>Pod</code> to <code>GCS</code> bucket</h3> <p>Assuming that you have a <code>Pod</code> that was configured with <code>GCS Fuse</code> and it's working correctly you can use a following code snippet to copy the files (where in <code>dst</code> you pass the mounted directory of a bucket):</p> <blockquote> <pre class="lang-py prettyprint-override"><code>from shutil import copyfile copyfile(src, dst) </code></pre> <p>-- <em><a href="https://stackoverflow.com/questions/123198/how-can-a-file-be-copied">Stackoverflow.com: Questions: 123198: How can a file be copied</a></em></p> </blockquote> <hr /> <h3>Use the Python library to connect to the GCS bucket without <code>GCS Fuse</code></h3> <p>As pointed by community member @anemyte, you can use the Cloud Storage client libraries to programmatically address your question:</p> <ul> <li><em><a href="https://cloud.google.com/storage/docs/reference/libraries" rel="nofollow noreferrer">Cloud.google.com: Storage: Docs: Reference: Libraries</a></em></li> </ul> <p>There is a Python code snippet that addresses the upload operation:</p> <blockquote> <pre class="lang-py prettyprint-override"><code>from google.cloud import storage def upload_blob(bucket_name, source_file_name, destination_blob_name): &quot;&quot;&quot;Uploads a file to 
the bucket.&quot;&quot;&quot; # The ID of your GCS bucket # bucket_name = &quot;your-bucket-name&quot; # The path to your file to upload # source_file_name = &quot;local/path/to/file&quot; # The ID of your GCS object # destination_blob_name = &quot;storage-object-name&quot; storage_client = storage.Client() bucket = storage_client.bucket(bucket_name) blob = bucket.blob(destination_blob_name) blob.upload_from_filename(source_file_name) print( &quot;File {} uploaded to {}.&quot;.format( source_file_name, destination_blob_name ) ) </code></pre> </blockquote> <p>Please have in mind that you will need to have appropriate permissions to use the GCS bucket. You can read more about it by following below link:</p> <ul> <li><em><a href="https://cloud.google.com/storage/docs/uploading-objects#prereqs" rel="nofollow noreferrer">Cloud.google.com: Storage: Docs Uploading objects: Prerequisites</a></em></li> </ul> <blockquote> <p>A side note!</p> <p>You can also use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Workload Identity</a> as one of the ways to assign required permissions to your <code>Pod</code>.</p> </blockquote> <hr /> <h3>Additional resources:</h3> <ul> <li><em><a href="https://github.com/maciekrb/GCS-Fuse-sample" rel="nofollow noreferrer">Github.com: Maciekrb: Gcs fuse example</a></em></li> <li><em><a href="https://cloud.google.com/storage/docs/gcs-fuse" rel="nofollow noreferrer">Cloud.google.com: Storage: Docs: GCS Fuse</a></em></li> </ul> <p>It passed my mind that you could want to use Python <strong>outside</strong> of the <code>Pod</code> (like from your laptop) to get the file copied from the <code>Pod</code> to <code>GCS bucket</code>. I'd reckon you could follow this example:</p> <ul> <li><em><a href="https://github.com/kubernetes-client/python/blob/master/examples/pod_exec.py" rel="nofollow noreferrer">Github.com: Kubernetes client: Python: Examples: Pod exec.py</a></em></li> </ul>
<p>The cluster nodes are on-prem vmware servers; we used rancher just to build the k8s cluster.</p> <p>The build was successful. When trying to host apps that use PVCs we have problems: the dynamic volume provisioning isn't happening and PVCs are stuck in the 'Pending' state.</p> <p>The VMWare storage class is being used; our vsphere admins confirmed that the VMs have visibility to the datastores, so ideally it should work.</p> <p>While configuring the cluster we have used the cloud provider credentials according to the rancher docs.</p> <pre><code>cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    disk:
      scsicontrollertype: pvscsi
    global:
      datacenters: nxs
      insecure-flag: true
      port: '443'
      soap-roundtrip-count: 0
      user: k8s_volume_svc@vsphere.local
</code></pre> <p>Storage class yaml</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nxs01-k8s-0004
parameters:
  datastore: ds1_K8S_0004
  diskformat: zeroedthick
reclaimPolicy: Delete
</code></pre> <p>PVC yaml</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: arango
  namespace: arango
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: nxs01-k8s-0004
</code></pre> <p>Now I want to understand why my PVCs are stuck in the Pending state. 
is there any other steps missed out.</p> <p>I saw in the rancher documentation saying Storage Policy has to be given as an input <a href="https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/#creating-a-storageclass" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/vsphere/#creating-a-storageclass</a></p> <p>In a vmware document it referred that as an optional parameter, and also had a statement on the top stating it doesn't apply to the tools that use CSI(container storage Interface)</p> <p><a href="https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html" rel="nofollow noreferrer">https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/storageclass.html</a></p> <p>I found that rancher is using an CSI driver called rshared.</p> <p>So now is this storage policy a mandatory one? is this one that stopping me from provisioning a VMDK file?</p> <p>I gave the documentation of creating the storage policy to the vsphere admins, they said this is for VSAN and the datastores are in VMax. I couldn't understand the difference or find an diff doc for VMax.</p> <p>It would be a great help!! if fixed :)</p>
<p>The whole thing came down to the path defined for the storage end: in the cloud config YAML the path was wrong. The vSphere admins gave us the path where the VMs reside, when they should have given the path where the storage resides.</p> <p>Once this was corrected, the PVC came to the Bound state.</p>
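<p>For illustration, a minimal sketch of where this path lives. In the vSphere cloud provider configuration the datastore is given in the <code>workspace</code> section; all values below (server, datacenter, folder and datastore names) are placeholders, so substitute the ones from your environment:</p>
<pre><code>cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    global:
      insecure-flag: true
    workspace:
      server: vcenter.example.local    # vCenter host
      datacenter: nxs
      folder: k8s-vms                  # folder for the VMs
      default-datastore: ds1_K8S_0004  # path where the storage resides, not the VM path
</code></pre>
<p>The point of the fix above is that <code>default-datastore</code> must point at the datastore itself, not at the folder holding the VMs.</p>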
<p>I'm currently dealing with this situation.</p> <p>Whenever I create a multi-node cluster using Minikube and then stop and restart it, it loses track of the &quot;middle&quot; nodes, e.g. I create 4 nodes: <code>m1</code>, <code>m2</code>, <code>m3</code>, <code>m4</code>; for some reason Minikube loses track of <code>m2</code> and <code>m3</code>.</p> <p><strong>Scenario:</strong></p> <p>Let's say I want to create a Kubernetes cluster with Vault, so I create a profile named &quot;vault-cluster&quot; with 4 nodes (1 control plane and 3 worker nodes):</p> <pre><code>$ minikube start --nodes 4 -p vault-cluster
</code></pre> <p>Then when I stop them using:</p> <pre><code>minikube stop -p vault-cluster
</code></pre> <p><strong>Expected behaviour:</strong></p> <p><strong>Output:</strong></p> <pre><code>βœ‹  Stopping node &quot;vault-cluster&quot; ...
βœ‹  Stopping node &quot;vault-cluster-m02&quot; ...
βœ‹  Stopping node &quot;vault-cluster-m03&quot; ...
βœ‹  Stopping node &quot;vault-cluster-m04&quot; ...
πŸ›‘  4 nodes stopped.
</code></pre> <p>So when I start it again:</p> <p><strong>Output:</strong></p> <pre><code>$ minikube start -p vault-cluster
πŸ˜„  [vault-cluster] minikube v1.20.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
✨  Using the virtualbox driver based on existing profile
πŸŽ‰  minikube 1.21.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.21.0
πŸ’‘  To disable this notice, run: 'minikube config set WantUpdateNotification false'
πŸ‘  Starting control plane node vault-cluster in cluster vault-cluster
πŸ”„  Restarting existing virtualbox VM for &quot;vault-cluster&quot; ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
πŸ”—  Configuring CNI (Container Networking Interface) ...
πŸ”Ž  Verifying Kubernetes components...
    β–ͺ Using image kubernetesui/dashboard:v2.1.0
    β–ͺ Using image kubernetesui/metrics-scraper:v1.0.4
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
πŸ‘  Starting node vault-cluster-m02 in cluster vault-cluster
πŸ”„  Restarting existing virtualbox VM for &quot;vault-cluster-m02&quot; ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.99.120
    β–ͺ no_proxy=192.168.99.120
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    β–ͺ env NO_PROXY=192.168.99.120
πŸ”Ž  Verifying Kubernetes components...
πŸ‘  Starting node vault-cluster-m03 in cluster vault-cluster
πŸ”„  Restarting existing virtualbox VM for &quot;vault-cluster-m03&quot; ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.99.120,192.168.99.121
    β–ͺ no_proxy=192.168.99.120,192.168.99.121
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    β–ͺ env NO_PROXY=192.168.99.120
    β–ͺ env NO_PROXY=192.168.99.120,192.168.99.121
πŸ”Ž  Verifying Kubernetes components...
πŸ‘  Starting node vault-cluster-m04 in cluster vault-cluster
πŸ”„  Restarting existing virtualbox VM for &quot;vault-cluster-m04&quot; ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.99.120,192.168.99.121,192.168.99.122
    β–ͺ no_proxy=192.168.99.120,192.168.99.121,192.168.99.122
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    β–ͺ env NO_PROXY=192.168.99.120
    β–ͺ env NO_PROXY=192.168.99.120,192.168.99.121
    β–ͺ env NO_PROXY=192.168.99.120,192.168.99.121,192.168.99.122
πŸ”Ž  Verifying Kubernetes components...
πŸ„  Done! kubectl is now configured to use &quot;vault-cluster&quot; cluster and &quot;default&quot; namespace by default
</code></pre> <p><strong>ACTUAL BEHAVIOUR:</strong></p> <pre><code>$ minikube stop -p vault-cluster
βœ‹  Stopping node &quot;vault-cluster&quot; ...
βœ‹  Stopping node &quot;vault-cluster-m04&quot; ...
βœ‹  Stopping node &quot;vault-cluster-m04&quot; ...
βœ‹  Stopping node &quot;vault-cluster-m04&quot; ...
</code></pre> <p>So this is what happens when I try to start the cluster again:</p> <pre><code>$ minikube start -p vault-cluster
πŸ˜„  [vault-cluster] minikube v1.20.0 on Microsoft Windows 10 Pro 10.0.19042 Build 19042
✨  Using the virtualbox driver based on existing profile
πŸ‘  Starting control plane node vault-cluster in cluster vault-cluster
πŸ”„  Restarting existing virtualbox VM for &quot;vault-cluster&quot; ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
πŸ”—  Configuring CNI (Container Networking Interface) ...
πŸ”Ž  Verifying Kubernetes components...
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    β–ͺ Using image kubernetesui/metrics-scraper:v1.0.4
    β–ͺ Using image kubernetesui/dashboard:v2.1.0
🌟  Enabled addons: default-storageclass, dashboard
πŸ‘  Starting node vault-cluster-m04 in cluster vault-cluster
πŸ”„  Restarting existing virtualbox VM for &quot;vault-cluster-m04&quot; ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.99.120
    β–ͺ no_proxy=192.168.99.120
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    β–ͺ env NO_PROXY=192.168.99.120
πŸ”Ž  Verifying Kubernetes components...
πŸ‘  Starting node vault-cluster-m04 in cluster vault-cluster
πŸƒ  Updating the running virtualbox &quot;vault-cluster-m04&quot; VM ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.99.120,192.168.99.123
    β–ͺ no_proxy=192.168.99.120,192.168.99.123
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    β–ͺ env NO_PROXY=192.168.99.120
    β–ͺ env NO_PROXY=192.168.99.120,192.168.99.123
πŸ”Ž  Verifying Kubernetes components...
πŸ‘  Starting node vault-cluster-m04 in cluster vault-cluster
πŸƒ  Updating the running virtualbox &quot;vault-cluster-m04&quot; VM ...
🌐  Found network options:
    β–ͺ NO_PROXY=192.168.99.120,192.168.99.123
    β–ͺ no_proxy=192.168.99.120,192.168.99.123
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
    β–ͺ env NO_PROXY=192.168.99.120
    β–ͺ env NO_PROXY=192.168.99.120,192.168.99.123
πŸ”Ž  Verifying Kubernetes components...
πŸ„  Done!
kubectl is now configured to use &quot;vault-cluster&quot; cluster and &quot;default&quot; namespace by default
</code></pre> <p>This is the output when I list the nodes:</p> <pre><code>$ minikube node list -p vault-cluster
vault-cluster       192.168.99.120
vault-cluster-m04   192.168.99.123
vault-cluster-m04   192.168.99.123
vault-cluster-m04   192.168.99.123
</code></pre> <p><a href="https://i.stack.imgur.com/Hcde6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hcde6.png" alt="enter image description here" /></a></p> <p><strong>Any ideas what could be wrong?</strong></p> <p>Environment:</p> <ul> <li><p>Windows 10 Pro</p> </li> <li><p>VirtualBox 6.1</p> </li> </ul> <hr /> <pre><code>$ minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae
</code></pre> <hr /> <pre><code>$ kubectl version
Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.7&quot;, GitCommit:&quot;1dd5338295409edcfff11505e7bb246f0d325d15&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-01-13T13:23:52Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;windows/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.2&quot;, GitCommit:&quot;faecb196815e248d3ecfb03c680a4507229c2a56&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-01-13T13:20:00Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
</code></pre>
<p>There seems to be some issue with minikube v1.20.0, and it also happens on Linux with the kvm2 driver (my setup), so it is not OS or driver specific.</p> <p>It also happens on minikube v1.21.0, although there it doesn't occur until the cluster is stopped a second time. After the first stop and start everything seems to work fine, but after the second stop I see exactly what you see.</p> <p>If you want, you can create an <a href="https://github.com/kubernetes/minikube/issues" rel="nofollow noreferrer">issue on the minikube GitHub repo</a> and hope the developers fix it.</p>
<p>I have read a few documents and figured out how to set up readiness and liveness endpoints with Actuator, like <a href="https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot" rel="noreferrer">this</a> one. But I am not able to figure out how to set up the endpoint for the 'startup' probe.</p> <p>My application yml:</p> <pre><code>management:
  endpoints:
    web:
      exposure:
        include: &quot;*&quot;
  endpoint:
    health:
      show-details: &quot;ALWAYS&quot;
      group:
        readiness.include: readinessProbe, dataStream
        startup.include: readinessProbe, dataStream
</code></pre> <p>My deployment config:</p> <pre><code>livenessProbe:
  httpGet:
    path: &quot;/actuator/health/liveness&quot;
    port: &quot;http&quot;
  initialDelaySeconds: 600
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: &quot;/actuator/health/readiness&quot;
    port: &quot;http&quot;
  periodSeconds: 30
  failureThreshold: 15
startupProbe:
  httpGet:
    path: &quot;/actuator/health/startup&quot;
    port: &quot;http&quot;
  initialDelaySeconds: 150
  periodSeconds: 10
  failureThreshold: 30
</code></pre> <p>The actuator doesn't seem to provide the URL for the 'startup' probe; in other words, http://localhost:8080/actuator/health/startup doesn't work. How can I set it up?</p>
<p>Spring Boot does not expose a separate endpoint for the startup probe. You could use the liveness endpoint for this use case as well. The reasoning behind having a distinct probe in Kubernetes is to allow a long timeout for the initial startup of the application, which might take some time. After the first successful startup probe call, the liveness probe takes over with reduced timeout values, so that a failure is detected quickly and the application is restarted.</p>
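<p>A minimal sketch of how the deployment config could reuse the liveness endpoint for the startup probe (the port name and the threshold values here are illustrative, not prescriptive):</p>
<pre><code>startupProbe:
  httpGet:
    path: &quot;/actuator/health/liveness&quot;
    port: &quot;http&quot;
  periodSeconds: 10
  failureThreshold: 30   # tolerates up to ~300s of startup time
livenessProbe:
  httpGet:
    path: &quot;/actuator/health/liveness&quot;
    port: &quot;http&quot;
  periodSeconds: 15
</code></pre>
<p>With a <code>startupProbe</code> in place, the large <code>initialDelaySeconds</code> on the liveness probe can usually be dropped, because Kubernetes only starts running the liveness checks once the startup probe has succeeded.</p>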