<p>It may be a vague question, but I couldn't find any documentation about it. Does Google Cloud Platform have a provision to integrate with OpsGenie?</p> <p>Basically, we have set up a few alerts in GCP for our <code>Kubernetes Cluster monitoring</code> and we want them to be fed to <code>OpsGenie</code> for automatic call-outs in case of high-priority incidents.</p> <p>Is it possible?</p>
<p>Recapping for better visibility:</p> <p>OpsGenie supports multiple <a href="https://www.atlassian.com/software/opsgenie/integrations" rel="nofollow noreferrer">tools</a>, including Google Stackdriver.<br /> Instructions on how to integrate it with Stackdriver webhooks can be found <a href="https://support.atlassian.com/opsgenie/docs/integrate-opsgenie-with-google-stackdriver/" rel="nofollow noreferrer">here</a>.</p>
<p>I want to use the Atomix framework via its Java API in my application.</p> <p>The application should be deployed and scaled via Kubernetes, and every Pod should be &quot;connected&quot; to all Pods of the same Kubernetes deployment.</p> <p>I have seen in the documentation that there are different ways to set up cluster discovery, so that each node of the cluster knows all members, but no configuration seems to work for my scenario.</p> <ol> <li>Manual config: Manually configuring all members in a list will not work for Kubernetes.</li> <li>Use multicast: AFAIK multicast is also not a good option in Kubernetes.</li> <li>DNS: I also don't think that I could use DNS discovery for it (because DNS is normally per service and not per Pod).</li> </ol> <p>There is also a section about Kubernetes deployment in the Atomix manual, but it seems that this is only useful for launching multiple Atomix agents in a cluster and not for scaling a custom application which uses the Atomix API (please let me know if I got this wrong).</p> <p>I haven't found any examples for such a setup, even though it should be quite a common task to solve...</p>
<p>You can use DNS, provided you configure a service specifically for this task. In k8s, every pod can be a member of an arbitrary number of services (because a service is just a load-balancer), so you can define a headless service for this purpose:</p> <pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myLabel
  name: service-clustering
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
  - name: appName
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: myLabel
  type: ClusterIP
</code></pre> <p>Important here is <code>publishNotReadyAddresses</code>, because you want to prevent a split-brain scenario during bootup before all pods have passed their readiness checks. Afterwards you can just use DNS to discover the individual pods (using <code>dnsjava</code>):</p> <pre class="lang-java prettyprint-override"><code>private Stream&lt;String&gt; getPodUris() throws TextParseException {
    // Query the SRV records of the headless service; run() may return null
    return Optional.ofNullable(new Lookup(&quot;service-clustering&quot;, Type.SRV).run())
        .stream()
        .flatMap(Arrays::stream)
        .filter(r -&gt; r instanceof SRVRecord)
        .map(r -&gt; ((SRVRecord) r).getTarget().toString());
}
</code></pre> <p>For dynamic scaling you'd need to repeat this query in a scheduled task every couple of seconds to inform Atomix about membership changes.</p>
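For the scheduled re-query, the bookkeeping is just a set difference between two DNS snapshots. A minimal sketch of that logic (in Python for brevity; <code>resolve</code> stands in for the SRV lookup above, and the change handler would feed Atomix — both are illustrative assumptions, not Atomix API):

```python
import time
from typing import Callable, Set, Tuple


def diff_members(old: Set[str], new: Set[str]) -> Tuple[Set[str], Set[str]]:
    """Return (joined, left) between two membership snapshots."""
    return new - old, old - new


def watch_membership(resolve: Callable[[], Set[str]],
                     on_change: Callable[[Set[str], Set[str]], None],
                     interval: float = 5.0,
                     rounds: int = 3) -> None:
    """Poll `resolve` and report joins/leaves via `on_change`.

    `resolve` is any function returning the current set of pod host
    names (e.g. the SRV query from the answer); `rounds` bounds the
    loop here only so the sketch terminates.
    """
    current: Set[str] = set()
    for _ in range(rounds):
        snapshot = resolve()
        joined, left = diff_members(current, snapshot)
        if joined or left:
            on_change(joined, left)
        current = snapshot
        time.sleep(interval)
```

In a real deployment the loop would run forever in a daemon thread and the handler would update the cluster membership configuration.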
<p><strong>Problem</strong></p> <p>We are trying to create an inference API that loads a PyTorch ResNet-101 model on AWS EKS. Apparently, it always gets OOM-killed due to high CPU and memory usage. Our logs show we need around a 900m CPU resource limit. Note that we only tested it using <strong>one</strong> 1.8 MB image. Our DevOps team didn't really like it.</p> <p><strong>What we have tried</strong></p> <p>Currently we are using the standard PyTorch model-loading module. We also clean the model state dict to reduce the memory usage.</p> <p>Is there any method to reduce the CPU usage when loading a PyTorch model?</p>
<p>Have you tried limiting the CPU available to the pods?</p> <pre class="lang-yaml prettyprint-override"><code>- name: pytorch-ml-model
  image: pytorch-cpu-hog-model-haha
  resources:
    limits:
      memory: &quot;128Mi&quot;
      cpu: &quot;1000m&quot; # Replace this with the CPU amount your devops guys will be happy about
</code></pre> <p>If your error is OOM, you might want to consider adding more memory per <code>pod</code>. As outsiders we have no idea how much memory you require to execute your models; I would suggest using debugging tools like the <a href="https://pytorch.org/tutorials/recipes/recipes/profiler_recipe.html" rel="nofollow noreferrer">PyTorch profiler</a> to understand how much memory you need for your inferencing use-case.</p> <p>You might also want to consider using <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/memory-optimized-instances.html" rel="nofollow noreferrer">memory-optimized</a> worker nodes and applying <a href="https://www.eksworkshop.com/beginner/140_assigning_pods/affinity/" rel="nofollow noreferrer">deployment-node affinity</a> through labels to ensure that inferencing pods are allocated to memory-optimized nodes in your EKS cluster.</p>
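To get a first rough number before reaching for the profiler, the process peak RSS around the model load can be checked from inside the container. A crude sketch (the <code>bytearray</code> loader is a stand-in for the real <code>torch.load</code> call; on Linux <code>ru_maxrss</code> is in kilobytes, on macOS it is in bytes):

```python
import resource
from typing import Any, Callable, Tuple


def load_with_peak_rss(loader: Callable[[], Any]) -> Tuple[Any, int]:
    """Run the loader and return (object, peak RSS of the process).

    ru_maxrss is a high-water mark for the whole process, so this is a
    rough upper bound, not a per-object measurement.
    """
    obj = loader()
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return obj, peak


# placeholder loader; swap in e.g. lambda: torch.load("resnet101.pth")
model, peak_kb = load_with_peak_rss(lambda: bytearray(50 * 1024 * 1024))
```

The reported peak is a reasonable starting point for sizing the pod's memory limit.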
<p>We are having a problem installing a CronJob using a Helm chart on a GKE <strong>Autopilot</strong> cluster. (When we install the same Helm chart on a Standard GKE cluster with the same GKE version, the installation works perfectly.)</p> <p><strong>GKE version: 1.21.5-gke.1302</strong></p> <p>My CronJob.yaml:</p> <pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: test01-chronjob
</code></pre> <p><em>While using the <strong>batch/v1beta1</strong> version:</em></p> <p>[WARNING] templates/test01.yaml: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob</p> <p><em>While using the <strong>batch/v1</strong> version:</em></p> <p>W0125 15:08:32.558228 23300 warnings.go:70] Autopilot set default resource requests for Pod namespace01/test01, as resource requests were not specified. See <a href="http://g.co/gke/autopilot-defaults" rel="nofollow noreferrer">http://g.co/gke/autopilot-defaults</a>.</p> <p>Error: INSTALLATION FAILED: admission webhook &quot;workload-defaulter.common-webhooks.networking.gke.io&quot; denied the request: no kind &quot;CronJob&quot; is registered for version &quot;batch/v1&quot; in scheme &quot;pkg/runtime/scheme.go:100&quot;</p> <p>Would love some help :)</p>
<p>Sorry about this. This issue has been fixed in <strong>GKE version 1.21.9-gke.300</strong>, which is currently available in the RAPID and REGULAR channels and will eventually be available in the STABLE channel. Clusters should start being upgraded over the next few weeks.</p> <p>You can check both the default version and the available versions for a given release channel in each region using the following commands:</p> <p>Default version for a channel:</p> <pre><code>gcloud container get-server-config --flatten=&quot;channels&quot; --filter=&quot;channels.channel=${CHANNEL}&quot; \
    --format=&quot;yaml(channels.channel,channels.defaultVersion)&quot; --region ${REGION}
</code></pre> <p>Available versions for a channel:</p> <pre><code>gcloud container get-server-config --flatten=&quot;channels&quot; --filter=&quot;channels.channel=${CHANNEL}&quot; \
    --format=&quot;yaml(channels.channel,channels.validVersions)&quot; --region ${REGION}
</code></pre> <p>Replace ${CHANNEL} with one of RAPID, REGULAR, or STABLE, and ${REGION} with the region in which your cluster is deployed.</p> <p>If you want to upgrade before your cluster(s) are automatically upgraded, you can do the following, assuming you are on the RAPID or REGULAR channel:</p> <pre><code>gcloud container clusters upgrade ${CLUSTER_NAME} --master --cluster-version 1.21.9-gke.300
gcloud container clusters upgrade ${CLUSTER_NAME} --cluster-version 1.21.9-gke.300
</code></pre>
<p>I'm using the busybox image in my pod. I'm trying to curl another pod, but I get &quot;curl not found&quot;. How to fix it?</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: busybox
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
</code></pre> <p>this cmd:</p> <pre><code>k exec -it front -- sh
curl service-anotherpod:80 -&gt; 'curl not found'
</code></pre>
<p><code>busybox</code> is a single-binary program to which you can't install additional programs. You can either use <code>wget</code>, or you can use a different variant of busybox like <a href="https://github.com/progrium/busybox" rel="nofollow noreferrer">progrium</a>, which comes with a package manager that allows you to do <code>opkg-install curl</code>.</p>
<p>Additional to @gohm'c's answer, you could also try using Alpine Linux and either make your own image that has curl installed, or use <code>apk add curl</code> in the pod to install it.</p> <p>Example pod with alpine:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: alpine
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
</code></pre>
<p>I am a junior developer studying EKS.</p> <p>When you create an EKS cluster, you can see that the IAM user used to create it is granted <code>system:masters</code>.</p> <p>I can't find where <code>system:masters</code> is specified.</p> <p>I also can't see that IAM user in the generated aws-auth ConfigMap of Kubernetes.</p> <p>Can you tell me which part to look for?</p> <p>Started out of curiosity and still looking :(</p> <p>Please help, I've been looking all day.</p>
<p><code>I can't find how system:master is specified</code></p> <p>system:masters is a logical group <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/rbac/escalation_check.go#L38" rel="nofollow noreferrer">defined</a> in the Kubernetes source. This is not something created by EKS or yourself.</p> <p><code>see the contents of IAM in the generated aws-auth configmap</code></p> <pre><code>kubectl get configmap aws-auth --namespace kube-system --output yaml
</code></pre> <p>Try this <a href="https://medium.com/the-programmer/aws-eks-fundamentals-core-components-for-absolute-beginners-part1-9b16e19cedb3" rel="nofollow noreferrer">beginner guide</a>.</p>
<p><strong>Traefik version 2.5.6</strong></p> <p>I have the following ingress settings:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/app-root: /users
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
  name: users
spec:
  rules:
  - host: dev.[REDUCTED]
    http:
      paths:
      - backend:
          service:
            name: users-service
            port:
              number: 80
        path: /users
        pathType: Prefix
</code></pre> <p>But when I call:</p> <pre><code>curl -i http://dev.[REDUCTED]/users/THIS-SHOUD-BE-ROOT
</code></pre> <p>I get in the pod serving the service:</p> <pre><code>error: GET /users/THIS-SHOUD-BE-ROOT 404
</code></pre> <p>What can be the reason for that?</p>
<p>Try to use <a href="https://doc.traefik.io/traefik/v2.5/user-guides/crd-acme/#traefik-routers" rel="nofollow noreferrer">Traefik Routers</a> as in the example below:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: users
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`dev.[REDUCTED]`) &amp;&amp; PathPrefix(`/users`)
      kind: Rule
      services:
        - name: users-service
          port: 80
</code></pre>
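Note that the v1-style <code>PathPrefixStrip</code> annotation has no effect in Traefik v2, so if <code>/users</code> must be removed before the request reaches the service (so that <code>/users/THIS-SHOUD-BE-ROOT</code> arrives as <code>/THIS-SHOUD-BE-ROOT</code>), a <code>stripPrefix</code> middleware can be attached to the route. A sketch, assuming the same Traefik CRDs are installed (the middleware name is illustrative):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: users-stripprefix
  namespace: default
spec:
  stripPrefix:
    prefixes:
      - /users
```

It is then referenced from the route with <code>middlewares: [{name: users-stripprefix}]</code> under the matching rule.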
<p>I have a problem with a Helm chart that I would like to use to deploy multiple instances of my app in many namespaces, say ns1, ns2 and ns3.</p> <p>I run helm upgrade with the --install option and it goes well for ns1, but when I run it a second time for ns2 I get an error:</p> <pre><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy &quot;my-psp&quot; in namespace &quot;&quot; exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key &quot;meta.helm.sh/release-namespace&quot; must equal &quot;ns2&quot;: current value is &quot;ns1&quot;
</code></pre> <p>I found many topics about this problem, but every time the only answer is to delete the old object and install it again with Helm. I don't want to do that - I would like to have two or more instances of my app that use k8s objects common to many namespaces.</p> <p>What can I do in this situation? I know I could change the names of those objects with every deployment, but that would be really messy. A second idea is to move those objects to another chart and deploy it just once, but sadly that is a ton of work, so I would like to avoid it. Is it possible to ignore this error somehow and still make this work?</p>
<p>Found the solution. The easiest way is to add a lookup block to your templates:</p> <pre><code>{{- if not (lookup &quot;policy/v1beta1&quot; &quot;PodSecurityPolicy&quot; &quot;&quot; &quot;my-psp&quot;) }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: my-psp
...
{{- end }}
</code></pre> <p>With this config the object will be created only if an object with the same name does not already exist. Note that <code>lookup</code> returns an empty result during <code>helm template</code> and <code>--dry-run</code>, so in those modes the object is always rendered.</p> <p>It may not be a perfect solution, but if you know what you are doing it can save a lot of time.</p>
<p>Do I still need to expose a pod via a <code>clusterip</code> service?</p> <p>There are 3 pods - main, front, api. I need to allow ingress+egress connections to the main pod only from the pods api and front. I also created service-main - a service that exposes the main pod on <code>port:80</code>.</p> <p>I don't know how to test it; I tried:</p> <pre><code>k exec main -it -- sh
nc -z -v -w 5 service-main 80
</code></pre> <p>and</p> <pre><code>k exec main -it -- sh
curl front:80
</code></pre> <p>The main.yaml pod:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
    item: c18
  name: main
spec:
  containers:
  - image: busybox
    name: main
    command:
    - /bin/sh
    - -c
    - sleep 1d
</code></pre> <p>The front.yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: busybox
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
</code></pre> <p>The api.yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app: api
  name: api
spec:
  containers:
  - image: busybox
    name: api
    command:
    - /bin/sh
    - -c
    - sleep 1d
</code></pre> <p>The main-to-front-networkpolicy.yaml:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front-end-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
</code></pre> <p>What am I doing wrong? Do I still need to expose the main pod via a service? Shouldn't the network policy take care of this already?</p> <p>Also, do I need to write <code>containerPort:80</code> in the main pod? How do I test connectivity and ensure ingress/egress works only between the main pod and the api and front pods?</p> <p>I tried the lab from a CKAD prep course; it had 2 pods: secure-pod and web-pod. There was an issue with connectivity; the solution was to create a network policy and test using netcat from inside the web-pod's container:</p> <pre><code>k exec web-pod -it -- sh
nc -z -v -w 1 secure-service 80
connection open
</code></pre> <p>UPDATE: ideally I want answers to these:</p> <ul> <li><p>a clear explanation of the difference between a <code>service</code> and a <code>networkpolicy</code>. If both a service and a netpol exist - what is the order of evaluation that the traffic/request goes through? Does it first go through the netpol and then the service, or vice versa?</p> </li> <li><p>if I want the front and api pods to send/receive traffic to main - do I need separate services exposing the front and api pods?</p> </li> </ul>
<p>Network policies and services are two different and independent Kubernetes resources.</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service is</a>:</p> <blockquote> <p>An abstract way to expose an application running on a set of <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pods</a> as a network service.</p> </blockquote> <p>Good explanation <a href="https://kubernetes.io/docs/concepts/services-networking/service/#motivation" rel="nofollow noreferrer">from the Kubernetes docs</a>:</p> <blockquote> <p>Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pods</a> are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> to run your app, it can create and destroy Pods dynamically.</p> <p>Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.</p> <p>This leads to a problem: if some set of Pods (call them &quot;backends&quot;) provides functionality to other Pods (call them &quot;frontends&quot;) inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?</p> <p>Enter <em>Services</em>.</p> </blockquote> <p>Also another good explanation <a href="https://stackoverflow.com/questions/56896490/what-exactly-kubernetes-services-are-and-how-they-are-different-from-deployments/56896662#56896662">in this answer</a>.</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/#using-pods" rel="nofollow noreferrer">For production you should use</a> a <a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow 
noreferrer">workload resources</a> instead of creating pods directly:</p> <blockquote> <p>Pods are generally not created directly and are created using workload resources. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">Working with Pods</a> for more information on how Pods are used with workload resources. Here are some examples of workload resources that manage one or more Pods:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset" rel="nofollow noreferrer">DaemonSet</a></li> </ul> </blockquote> <p>And use services to make requests to your application.</p> <p>Network policies <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">are used to control traffic flow</a>:</p> <blockquote> <p>If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.</p> </blockquote> <p>Network policies target pods, not services (an abstraction). Check <a href="https://stackoverflow.com/questions/66423222/kubernetes-networkpolicy-limit-egress-traffic-to-service">this answer</a> and <a href="https://stackoverflow.com/a/55527331/16391991">this one</a>.</p> <p>Regarding your examples - your network policy is correct (as I tested it below). 
The problem is that your cluster <a href="https://stackoverflow.com/questions/65017380/kubernetes-network-policy-deny-all-policy-not-blocking-basic-communication/65022827#65022827">may not be compatible</a>:</p> <blockquote> <p>For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. <a href="https://docs.projectcalico.org/getting-started/kubernetes/" rel="nofollow noreferrer">Project Calico</a> or <a href="https://cilium.io/" rel="nofollow noreferrer">Cilium</a> are plugins that do so. This is not the default when creating a cluster!</p> </blockquote> <p>Test on a kubeadm cluster with the Calico plugin -&gt; I created similar pods as you did, but I changed the <code>container</code> part:</p> <pre><code>spec:
  containers:
  - name: main
    image: nginx
    command: [&quot;/bin/sh&quot;,&quot;-c&quot;]
    args: [&quot;sed -i 's/listen .*/listen 8080;/g' /etc/nginx/conf.d/default.conf &amp;&amp; exec nginx -g 'daemon off;'&quot;]
    ports:
    - containerPort: 8080
</code></pre> <p>So the NGINX app is available on port <code>8080</code>.</p> <p>Let's check the pod IPs:</p> <pre><code>user@shell:~$ kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE                                NOMINATED NODE   READINESS GATES
api     1/1     Running   0          48m   192.168.156.61   example-ubuntu-kubeadm-template-2   &lt;none&gt;           &lt;none&gt;
front   1/1     Running   0          48m   192.168.156.56   example-ubuntu-kubeadm-template-2   &lt;none&gt;           &lt;none&gt;
main    1/1     Running   0          48m   192.168.156.52   example-ubuntu-kubeadm-template-2   &lt;none&gt;           &lt;none&gt;
</code></pre> <p>Let's <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">exec into the running <code>main</code> pod</a> and try to make a request to the <code>api</code> pod (192.168.156.61):</p> <pre class="lang-sh prettyprint-override"><code>root@main:/# curl 192.168.156.61:8080
&lt;!DOCTYPE html&gt;
...
&lt;title&gt;Welcome to nginx!&lt;/title&gt;
</code></pre> <p>It is working.</p> <p>After applying your network policy:</p> <pre class="lang-sh prettyprint-override"><code>user@shell:~$ kubectl apply -f main-to-front.yaml
networkpolicy.networking.k8s.io/front-end-policy created
user@shell:~$ kubectl exec -it main -- bash
root@main:/# curl 192.168.156.61:8080
...
</code></pre> <p>It is not working anymore, which means the network policy was applied successfully (egress from <code>main</code> is now allowed only to the <code>front</code> pod on port 8080).</p> <p>A nice option to get more information about an applied network policy is to run the <code>kubectl describe</code> command:</p> <pre><code>user@shell:~$ kubectl describe networkpolicy front-end-policy
Name:         front-end-policy
Namespace:    default
Created on:   2022-01-26 15:17:58 +0000 UTC
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;
Spec:
  PodSelector:     app=main
  Allowing ingress traffic:
    To Port: 8080/TCP
    From:
      PodSelector: app=front
  Allowing egress traffic:
    To Port: 8080/TCP
    To:
      PodSelector: app=front
  Policy Types: Ingress, Egress
</code></pre>
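As for whether <code>main</code> still needs a service: the network policy only filters traffic; it does not give the pod a stable address. If <code>front</code> and <code>api</code> should reach <code>main</code> by name rather than pod IP, a plain ClusterIP service is still needed. A sketch using the labels from the question (the target port assumes the app listens on 8080, as in the test setup above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-main
spec:
  type: ClusterIP
  selector:
    app: main
  ports:
    - port: 80
      targetPort: 8080
```

Traffic addressed to <code>service-main:80</code> is translated to a pod IP on port 8080, and the network policy is then enforced against that pod-to-pod connection.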
<p>I have a sidecar container in a MySQL Pod which will use the MySQL socket file in order to access the database.</p> <p>I would like to be sure MySQL has successfully started, and has therefore created the socket file, before this sidecar container starts.</p> <p>I tried to add a <code>readiness</code> probe with an <code>exec.command</code> of <code>test -S /var/run/mysqld/mysqld.sock</code>, but it fails with:</p> <pre><code>Readiness probe failed: OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: &quot;test -S /var/run/mysqld/mysqld.sock&quot;: stat test -S /var/run/mysqld/mysqld.sock: no such file or directory: unknown
</code></pre> <p>When I open a terminal session in the sidecar container, I can <code>ls</code> the socket file and it's there.</p> <p>So it looks like my <code>test -S &lt;path&gt;</code> command doesn't work as expected in the context of the probe.</p> <p>How can I write my probe so that the sidecar container starts as soon as the socket file is available?</p>
<p>Try:</p> <pre><code>...
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - test -S /var/run/mysqld/mysqld.sock
</code></pre> <p>The original probe failed because exec probe commands are not run in a shell: the whole string <code>test -S /var/run/mysqld/mysqld.sock</code> was looked up as a single executable name, which does not exist. Splitting it into <code>sh -c</code> plus the test expression fixes that.</p>
<p>There is a container in my Kubernetes cluster which I want to debug.</p> <p>But there is no <code>netstat</code>, no <code>ip</code> and no <code>apk</code>.</p> <p>Is there a way to upgrade this image so that the common tools are installed?</p> <p>In this case it is the nginx container image in a K8s 1.23 cluster.</p>
<p>Alpine is a stripped-down version of the image to reduce the footprint, so the absence of those tools is expected. However, since <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#debugging-using-a-copy-of-the-pod" rel="nofollow noreferrer">Kubernetes</a> <code>1.23</code>, you can use the <code>kubectl debug</code> command to attach a debug container to the subject pod. Syntax:</p> <pre><code>kubectl debug -it &lt;POD_TO_DEBUG&gt; --image=ubuntu --target=&lt;CONTAINER_TO_DEBUG&gt; --share-processes
</code></pre> <p>Example: in the example below, an <code>ubuntu</code> container is attached to the nginx-alpine pod requiring debugging. Also note that the <code>ps -eaf</code> output shows the nginx processes running, while <code>cat /etc/os-release</code> shows Ubuntu, indicating that the process namespace is shared between the two containers.</p> <pre><code>ps@kube-master:~$ kubectl debug -it nginx --image=ubuntu --target=nginx --share-processes
Targeting container &quot;nginx&quot;. If you don't see processes from this container, the container runtime doesn't support this feature.
Defaulting debug container name to debugger-2pgtt.
If you don't see a command prompt, try pressing enter.
root@nginx:/# ps -eaf
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 19:50 ?        00:00:00 nginx: master process nginx -g daemon off;
101         33     1  0 19:50 ?        00:00:00 nginx: worker process
101         34     1  0 19:50 ?        00:00:00 nginx: worker process
101         35     1  0 19:50 ?        00:00:00 nginx: worker process
101         36     1  0 19:50 ?        00:00:00 nginx: worker process
root       248     0  1 20:00 pts/0    00:00:00 bash
root       258   248  0 20:00 pts/0    00:00:00 ps -eaf
root@nginx:/#
</code></pre> <p>Debugging from Ubuntu, as seen here, arms us with all sorts of tools:</p> <pre><code>root@nginx:/# cat /etc/os-release
NAME=&quot;Ubuntu&quot;
VERSION=&quot;20.04.3 LTS (Focal Fossa)&quot;
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME=&quot;Ubuntu 20.04.3 LTS&quot;
VERSION_ID=&quot;20.04&quot;
HOME_URL=&quot;https://www.ubuntu.com/&quot;
SUPPORT_URL=&quot;https://help.ubuntu.com/&quot;
BUG_REPORT_URL=&quot;https://bugs.launchpad.net/ubuntu/&quot;
PRIVACY_POLICY_URL=&quot;https://www.ubuntu.com/legal/terms-and-policies/privacy-policy&quot;
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
root@nginx:/#
</code></pre> <p>In case ephemeral containers need to be enabled in your cluster, you can enable them via feature gates as described <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">here</a>.</p>
<p>I have a service running in Kubernetes and currently there are two ways of making GET requests to the REST API.</p> <p>The first is</p> <pre><code>kubectl port-forward --namespace test service/test-svc 9090
</code></pre> <p>and then running</p> <pre><code>curl http://localhost:9090/sub/path \
    -d param1=abcd \
    -d param2=efgh \
    -G
</code></pre> <p>For the second one, we do a kubectl proxy</p> <pre><code>kubectl proxy --port=8080
</code></pre> <p>followed by</p> <pre><code>curl -lk 'http://127.0.0.1:8080/api/v1/namespaces/test/services/test-svc:9090/proxy/sub/path?param1=abcd&amp;param2=efgh'
</code></pre> <p>Both work nicely. However, my question is: How do we do one of these with the Python Kubernetes client (<a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">https://github.com/kubernetes-client/python</a>)?</p> <p>Many thanks for your support in advance!</p> <p><strong>Progress</strong></p> <p>I found a solution that brings us closer to the desired result:</p> <pre><code>from kubernetes import client, config

config.load_kube_config(&quot;~/.kube/config&quot;, context=&quot;my-context&quot;)

api_instance = client.CoreV1Api()
name = 'test-svc'  # str | name of the ServiceProxyOptions
namespace = 'test'  # str | object name and auth scope, such as for teams and projects

api_response = api_instance.api_client.call_api(
    '/api/v1/namespaces/{namespace}/services/{name}/proxy/ping'.format(namespace=namespace, name=name),
    'GET',
    auth_settings=['BearerToken'],
    response_type='json',
    _preload_content=False
)
print(api_response)
</code></pre> <p>yet the result is</p> <pre><code>(&lt;urllib3.response.HTTPResponse object at 0x104529340&gt;, 200, HTTPHeaderDict({'Audit-Id': '1ad9861c-f796-4e87-a16d-8328790c50c3', 'Cache-Control': 'no-cache, private', 'Content-Length': '16', 'Content-Type': 'application/json', 'Date': 'Thu, 27 Jan 2022 15:05:10 GMT', 'Server': 'uvicorn'}))
</code></pre> <p>whereas the desired output was</p> <pre><code>{
  &quot;ping&quot;: &quot;pong!&quot;
}
</code></pre> <p>Do you know how to extract it from here?</p>
<p>This should be something which uses:</p> <pre><code>from kubernetes.stream import portforward
</code></pre> <p>To find which command maps to which API call in Python, you can use</p> <pre><code>kubectl -v 10 ...
</code></pre> <p>For example:</p> <pre><code>k -v 10 port-forward --namespace znc service/znc 1666
</code></pre> <p>It spits out a lot of output; the most important part is the request log:</p> <pre><code>POST https://myk8s:16443/api/v1/namespaces/znc/pods/znc-57647bb8d8-dcq6b/portforward 101 Switching Protocols in 123 milliseconds
</code></pre> <p>This allows you to search the code of the Python client. For example, there is:</p> <pre><code>core_v1.connect_get_namespaced_pod_portforward
</code></pre> <p>However, using it is not so straightforward. Luckily, the maintainers include a great example of how to use the <a href="https://github.com/kubernetes-client/python/blob/b313b5e74f7cf222ccc39d8c5bf8a07502bd6db3/examples/pod_portforward.py" rel="nofollow noreferrer">portforward method</a>:</p> <pre><code># Copyright 2020 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the &quot;License&quot;);
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an &quot;AS IS&quot; BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

&quot;&quot;&quot;
Shows the functionality of portforward streaming using an nginx container.
&quot;&quot;&quot;

import select
import socket
import time

import six.moves.urllib.request as urllib_request

from kubernetes import config
from kubernetes.client import Configuration
from kubernetes.client.api import core_v1_api
from kubernetes.client.rest import ApiException
from kubernetes.stream import portforward

##############################################################################
# Kubernetes pod port forwarding works by directly providing a socket which
# the python application uses to send and receive data on. This is in contrast
# to the go client, which opens a local port that the go application then has
# to open to get a socket to transmit data.
#
# This simplifies the python application, there is not a local port to worry
# about if that port number is available. Nor does the python application have
# to then deal with opening this local port. The socket used to transmit data
# is immediately provided to the python application.
#
# Below also is an example of monkey patching the socket.create_connection
# function so that DNS names of the following formats will access kubernetes
# ports:
#
#    &lt;pod-name&gt;.&lt;namespace&gt;.kubernetes
#    &lt;pod-name&gt;.pod.&lt;namespace&gt;.kubernetes
#    &lt;service-name&gt;.svc.&lt;namespace&gt;.kubernetes
#    &lt;service-name&gt;.service.&lt;namespace&gt;.kubernetes
#
# These DNS name can be used to interact with pod ports using python libraries,
# such as urllib.request and http.client. For example:
#
#     response = urllib.request.urlopen(
#         'https://metrics-server.service.kube-system.kubernetes/'
#     )
#
##############################################################################


def portforward_commands(api_instance):
    name = 'portforward-example'
    resp = None
    try:
        resp = api_instance.read_namespaced_pod(name=name,
                                                namespace='default')
    except ApiException as e:
        if e.status != 404:
            print(&quot;Unknown error: %s&quot; % e)
            exit(1)

    if not resp:
        print(&quot;Pod %s does not exist. Creating it...&quot; % name)
        pod_manifest = {
            'apiVersion': 'v1',
            'kind': 'Pod',
            'metadata': {
                'name': name
            },
            'spec': {
                'containers': [{
                    'image': 'nginx',
                    'name': 'nginx',
                }]
            }
        }
        api_instance.create_namespaced_pod(body=pod_manifest,
                                           namespace='default')
        while True:
            resp = api_instance.read_namespaced_pod(name=name,
                                                    namespace='default')
            if resp.status.phase != 'Pending':
                break
            time.sleep(1)
        print(&quot;Done.&quot;)

    pf = portforward(
        api_instance.connect_get_namespaced_pod_portforward,
        name, 'default',
        ports='80',
    )
    http = pf.socket(80)
    http.setblocking(True)
    http.sendall(b'GET / HTTP/1.1\r\n')
    http.sendall(b'Host: 127.0.0.1\r\n')
    http.sendall(b'Connection: close\r\n')
    http.sendall(b'Accept: */*\r\n')
    http.sendall(b'\r\n')
    response = b''
    while True:
        select.select([http], [], [])
        data = http.recv(1024)
        if not data:
            break
        response += data
    http.close()
    print(response.decode('utf-8'))
    error = pf.error(80)
    if error is None:
        print(&quot;No port forward errors on port 80.&quot;)
    else:
        print(&quot;Port 80 has the following error: %s&quot; % error)

    # Monkey patch socket.create_connection which is used by http.client and
    # urllib.request. The same can be done with urllib3.util.connection.create_connection
    # if the &quot;requests&quot; package is used.
    socket_create_connection = socket.create_connection

    def kubernetes_create_connection(address, *args, **kwargs):
        dns_name = address[0]
        if isinstance(dns_name, bytes):
            dns_name = dns_name.decode()
        dns_name = dns_name.split(&quot;.&quot;)
        if dns_name[-1] != 'kubernetes':
            return socket_create_connection(address, *args, **kwargs)
        if len(dns_name) not in (3, 4):
            raise RuntimeError(&quot;Unexpected kubernetes DNS name.&quot;)
        namespace = dns_name[-2]
        name = dns_name[0]
        port = address[1]
        if len(dns_name) == 4:
            if dns_name[1] in ('svc', 'service'):
                service = api_instance.read_namespaced_service(name, namespace)
                for service_port in service.spec.ports:
                    if service_port.port == port:
                        port = service_port.target_port
                        break
                else:
                    raise RuntimeError(
                        &quot;Unable to find service port: %s&quot; % port)
                label_selector = []
                for key, value in service.spec.selector.items():
                    label_selector.append(&quot;%s=%s&quot; % (key, value))
                pods = api_instance.list_namespaced_pod(
                    namespace, label_selector=&quot;,&quot;.join(label_selector)
                )
                if not pods.items:
                    raise RuntimeError(&quot;Unable to find service pods.&quot;)
                name = pods.items[0].metadata.name
                if isinstance(port, str):
                    for container in pods.items[0].spec.containers:
                        for container_port in container.ports:
                            if container_port.name == port:
                                port = container_port.container_port
                                break
                        else:
                            continue
                        break
                    else:
                        raise RuntimeError(
                            &quot;Unable to find service port name: %s&quot; % port)
            elif dns_name[1] != 'pod':
                raise RuntimeError(
                    &quot;Unsupported resource type: %s&quot; % dns_name[1])
        pf = portforward(api_instance.connect_get_namespaced_pod_portforward,
                         name, namespace, ports=str(port))
        return pf.socket(port)

    socket.create_connection = kubernetes_create_connection

    # Access the nginx http server using the
    # &quot;&lt;pod-name&gt;.pod.&lt;namespace&gt;.kubernetes&quot; dns name.
</code></pre>
response = urllib_request.urlopen( 'http://%s.pod.default.kubernetes' % name) html = response.read().decode('utf-8') response.close() print('Status Code: %s' % response.code) print(html) def main(): config.load_kube_config() c = Configuration.get_default_copy() c.assert_hostname = False Configuration.set_default(c) core_v1 = core_v1_api.CoreV1Api() portforward_commands(core_v1) if __name__ == '__main__': main() </code></pre>
<p>I'm trying to set up an OKE cluster on OCI, deploy a Ghost container in it for blogging, then expose it to the internet. I've successfully done it with a load balancer service in my YAML and my blog is visible to the internet:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: blog
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: &quot;443&quot;
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
spec:
  loadBalancerIP: x.x.x.x
  type: LoadBalancer
  selector:
    app: blog
  ports:
    - protocol: TCP
      port: 443
      targetPort: 2368
</code></pre> <p>which provisioned a new Load Balancer of shape 100Mbps in OCI. The problem is that it costs quite a bit.<br /> In OCI, there are two types of load balancer:</p> <p><a href="https://i.stack.imgur.com/ZAO2C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZAO2C.png" alt="enter image description here" /></a></p> <p>and the second one (Network Load Balancer) is free. So the question is, how do I use the second type (Network Load Balancer) with a Kubernetes cluster in OCI? Is there any other way of exposing my Ghost container pod to the internet? I've read somewhere about creating a NodePort but I'm not sure if it works in OCI and don't really understand it.<br /> Any clue is welcome. Thank you!</p>
<p>I am part of the product team for OKE. OKE does not support the OCI Network Load Balancer (NLB) yet. However, we are working on it. For the time being, you can provision an NLB manually and have it load balance across your worker nodes, exposing the service on the nodes via a NodePort.</p>
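<p>As an illustration of the NodePort approach mentioned in the question: a <code>NodePort</code> service opens the same port (in the 30000-32767 range by default) on every worker node, and a manually created Network Load Balancer, or clients directly, can then target <code>&lt;node-ip&gt;:&lt;node-port&gt;</code>. A minimal sketch reusing the selector and ports from the question; the service name and node port here are placeholders, and the OCI load-balancer annotations are dropped because they only apply to provisioned load balancers:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blog-nodeport
spec:
  type: NodePort
  selector:
    app: blog
  ports:
    - protocol: TCP
      port: 443
      targetPort: 2368
      nodePort: 30443   # must fall in the cluster's NodePort range
```

<p>You will likely also need to allow the chosen node port in the worker subnet's security list or NSG so that traffic (and NLB health checks) can actually reach the nodes.</p>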
<p>What am I doing wrong in the manifest below?</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80
</code></pre> <p>The error I am getting:</p> <pre><code>error validating &quot;ingress.yaml&quot;: error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field &quot;serviceName&quot; in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field &quot;servicePort&quot; in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>The Ingress spec changed in <code>networking.k8s.io/v1</code>: the backend now nests the service name and port under <code>service</code>, and each path needs a <code>pathType</code>. Try:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx
            port:
              number: 80
</code></pre>
<p>I have a private docker registry hosted on gitlab and I would like to use this repository to pull images for my local kubernetes cluster:</p> <pre><code>NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    &lt;none&gt;        443/TCP   68m
</code></pre> <p>K8s is on <code>v1.22.5</code> and is a single-node cluster that comes 'out of the box' with Docker Desktop. I have already built and deployed an image to the gitlab container registry <code>registry.gitlab.com</code>. What I have done already:</p> <ol> <li>Executed the command <code>docker login -u &lt;username&gt; -p &lt;password&gt; registry.gitlab.com</code></li> <li>Modified the <code>~/.docker/config.json</code> file to the following: <pre><code>{
    &quot;auths&quot;: {
        &quot;registry.gitlab.com&quot;: {}
    },
    &quot;credsStore&quot;: &quot;osxkeychain&quot;
}
</code></pre> </li> <li>Created and deployed a secret to the cluster with the file: <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: registry-key
data:
  .dockerconfigjson: &lt;base-64-encoded-.config.json-file&gt;
type: kubernetes.io/dockerconfigjson
</code></pre> </li> <li>Deployed an app with the following file: <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      imagePullSecrets:
      - name: registry-key
      containers:
      - name: test-app
        image: registry.gitlab.com/&lt;image-name&gt;:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
</code></pre> </li> </ol> <hr /> <p>The deployment is created successfully but upon inspection of the pod (<code>kubectl describe pod</code>) I find the following events:</p> <pre><code>Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  21s               default-scheduler  Successfully assigned default/test-deployment-87b5747b5-xdsl9 to docker-desktop
  Normal   BackOff    19s               kubelet            Back-off pulling image &quot;registry.gitlab.com/&lt;image-name&gt;:latest&quot;
  Warning  Failed     19s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    7s (x2 over 20s)  kubelet            Pulling image &quot;registry.gitlab.com/&lt;image-name&gt;:latest&quot;
  Warning  Failed     7s (x2 over 19s)  kubelet            Failed to pull image &quot;registry.gitlab.com/&lt;image-name&gt;:latest&quot;: rpc error: code = Unknown desc = Error response from daemon: Head &quot;https://registry.gitlab.com/v2/&lt;image-name&gt;/manifests/latest&quot;: denied: access forbidden
  Warning  Failed     7s (x2 over 19s)  kubelet            Error: ErrImagePull
</code></pre> <hr /> <p>Please provide any information that might be causing these errors.</p>
<p>I managed to solve the issue by editing the default <code>config.json</code> produced by <code>$ docker login</code>:</p> <pre><code>{
    &quot;auths&quot;: {
        &quot;registry.gitlab.com&quot;: {}
    },
    &quot;credsStore&quot;: &quot;osxkeychain&quot;
}
</code></pre> <p>becomes</p> <pre><code>{
    &quot;auths&quot;: {
        &quot;registry.gitlab.com&quot;: {
            &quot;auth&quot;: &quot;&lt;access-token-in-plain-text&gt;&quot;
        }
    }
}
</code></pre> <p>Thanks Bala for suggesting this in the comments. I realise storing the access token in plain text in the file may not be secure but this can be changed to use a path if needed.</p> <p>I also created the secret as per OzzieFZI's suggestion:</p> <pre><code>$ kubectl create secret docker-registry registry-key \
    --docker-server=registry.gitlab.com \
    --docker-username=&lt;username&gt; \
    --docker-password=&quot;$(cat /path/to/token.txt)&quot;
</code></pre>
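<p>For anyone who wants to script this instead of hand-editing files: the <code>auth</code> field is just the base64 of <code>username:token</code>, and the secret's <code>.dockerconfigjson</code> is the base64 of the whole document. A small sketch; the registry, username, and token values below are placeholders:</p>

```python
import base64
import json

def docker_config_json(registry, username, token):
    # The "auth" field is base64("username:token"), the same value
    # docker login stores when no credential helper is configured.
    auth = base64.b64encode(f"{username}:{token}".encode()).decode()
    return json.dumps({"auths": {registry: {"auth": auth}}})

# Placeholders: substitute your registry user and access token.
config_json = docker_config_json("registry.gitlab.com", "myuser", "my-token")
# This base64 string is what goes into the secret's .dockerconfigjson field:
print(base64.b64encode(config_json.encode()).decode())
```

<p>Note that <code>kubectl create secret docker-registry</code> produces the same structure for you, so generating it by hand is mostly useful when templating manifests.</p>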
<p>I have an application running on kubernetes (a cluster running in the cloud) and want to set up monitoring and logging for that application. There are various possibilities for the setup. What would be the best practice, i.e. a recommended method or industry standard?</p> <ul> <li>A <strong>prometheus monitoring setup inside kubernetes cluster</strong>: prometheus-operator helm chart installed inside the cluster that can monitor the entire cluster, including the application.</li> <li>an <strong>external prometheus + grafana setup</strong> deployed with docker-compose. (But I doubt the external setup can reach the k8s cluster properly to scrape all the metrics.)</li> <li>A <strong>prometheus federation setup</strong> where one external prometheus setup gets metrics from an internal prometheus setup of k8s.</li> </ul> <p>Can anyone please help me with some suggestions regarding best practices?</p>
<p>It all depends on how many clusters you have. If you have one cluster with the application you want to monitor running on it, option 1 will be the best choice:</p> <blockquote> <ul> <li>A <strong>prometheus monitoring setup inside kubernetes cluster</strong>: prometheus-operator helm chart installed inside the cluster that can monitor the entire cluster, including the application.</li> </ul> </blockquote> <p>The advantages of such a solution include a simple and quick configuration; in addition, you have everything in one place (the application and Prometheus) and you do not need a new cluster to monitor another. <a href="https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics-with-grafana" rel="nofollow noreferrer">Here</a> you can find an example tutorial.</p> <p>However, if you plan to expand to many clusters, or you already need to monitor many clusters, option 3 will be the best choice:</p> <blockquote> <ul> <li>A <strong>prometheus federation setup</strong> where one external prometheus setup gets metrics from an internal prometheus setup of k8s.</li> </ul> </blockquote> <p>Thanks to this solution, you will have all the metrics in one place, regardless of the <a href="https://prometheus.io/docs/prometheus/latest/federation/" rel="nofollow noreferrer">number of clusters</a> you need to monitor:</p> <blockquote> <p>Commonly, it is used to either achieve scalable Prometheus monitoring setups or to pull related metrics from one service's Prometheus into another.</p> </blockquote> <p>You can find example tutorials about <a href="https://medium.com/@jotak/prometheus-federation-in-kubernetes-4ce46bda834e" rel="nofollow noreferrer">Prometheus federation in Kubernetes</a> and <a href="https://banzaicloud.com/blog/prometheus-federation/" rel="nofollow noreferrer">Monitoring multiple federated clusters with Prometheus - the secure way</a>.</p>
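<p>For illustration, a federation setup boils down to the external Prometheus scraping the in-cluster Prometheus's <code>/federate</code> endpoint. A minimal sketch of the external server's scrape config; the target address and the <code>match[]</code> selector are placeholders you would adapt to your setup:</p>

```yaml
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true          # keep labels as set by the in-cluster Prometheus
    metrics_path: '/federate'
    params:
      'match[]':                # which series to pull; placeholder selector
        - '{job=~"kubernetes-.*"}'
    static_configs:
      - targets:
        - 'prometheus.cluster-a.example.com:9090'   # placeholder address
```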
<p>We have a K8 EKS cluster with ALB ingress controller and behind that, we have a graphql gateway. On the cluster there are two node groups, one microservices the other the remaining monolith. We have not fully completed pulling out all the services. We get some pretty high volume workloads and I need to scale up the monolith node group. How can I load balance traffic across node groups or namespaces? Or some other unthought-of solution.</p>
<p>When your K8s-based application uses <code>services</code>, the traffic is load-balanced between the active pods of the target deployment, regardless of which nodes those pods are scheduled on. Kubernetes Services are themselves the most basic form of load balancing: load distribution across the matching pods, implemented at the dispatch level and operating through the kube-proxy feature.</p>
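<p>Note that which node group serves the traffic is decided by scheduling, not by the Service: the Service (or ALB target group) balances across the matching pods wherever they run. If you want the monolith's pods to land on the scaled-up monolith node group, you can pin them with a <code>nodeSelector</code>. A sketch, assuming an EKS managed node group named <code>monolith</code> (managed node groups label their nodes with <code>eks.amazonaws.com/nodegroup</code>; verify with <code>kubectl get nodes --show-labels</code>):</p>

```yaml
# Deployment fragment: pin the monolith pods to the "monolith" node group.
spec:
  template:
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: monolith   # assumed node group name
```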
<p>We have 9 microservices in our application. All of the services have the same YAML manifests (a Deployment, a Service, and a ConfigMap that supplies environment variables to the containers).</p> <p>But when we try to turn this into a Helm template, we aren't able to pass different environment variables to the different services.</p> <p>We have been using the same template with a different values.yaml file for each service. Is there any way to supply different environment variables using the same ConfigMap template?</p>
<p>a helm chart acts as a collection of templates, which you can (re-)use to deploy multiple services, especially in a microservices context.</p> <p>you configure different deployments (and therefore different environments) with the same helm chart by providing values files during installation, which contain the configuration that should be applied during installation.</p> <p>values.yaml in the helm chart folder:</p> <pre><code>service:
  port: 123
</code></pre> <p>microservice-a-values.yaml provided during installation:</p> <pre><code>service:
  port: 456
</code></pre> <p>configmap template in the helm chart's templates/ folder (note the <code>quote</code>: ConfigMap data values must be strings):</p> <pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
  name: &quot;{{ .Release.Name }}&quot;
data:
  port: {{ .Values.service.port | quote }}
</code></pre> <p>to run the installation and actually use the microservice-a-values.yaml, provide it when calling the helm install (note that <code>helm upgrade --install</code> needs both a release name and a chart):</p> <pre><code>helm upgrade --install microservice-a ./path/to/chart --values microservice-a-values.yaml
</code></pre> <p>remember that there is a &quot;hierarchy&quot; of provided values files: the last one wins and overwrites the previous ones.</p>
<p>I have a Go application running in a Kubernetes cluster which needs to read files from a large MapR cluster. The two clusters are separate and the Kubernetes cluster does not permit us to use the CSI driver. All I can do is run userspace apps in Docker containers inside Kubernetes pods and I am given <code>maprticket</code>s to connect to the MapR cluster.</p> <p>I'm able to use the <code>com.mapr.hadoop</code> <code>maprfs</code> <a href="https://repository.mapr.com/nexus/content/groups/mapr-public/com/mapr/hadoop/maprfs/6.1.0-mapr/maprfs-6.1.0-mapr.jar" rel="nofollow noreferrer">jar</a> to write a Java app which is able to connect and read files using a <code>maprticket</code>, but we need to integrate this into a Go app, which, ideally, shouldn't require a Java sidecar process.</p>
<p>This is a good question because it highlights the way that some environments impose limits that violate the assumptions external software may hold.</p> <p>And just for reference, MapR was acquired by HPE, so a MapR cluster is now an HPE Ezmeral Data Fabric cluster. I am still training myself to say that.</p> <p>Anyway, the accepted method for a generic program in language X to communicate with the Ezmeral Data Fabric (the filesystem formerly known as MapR FS) is to mount the file system and just talk to it using file APIs like open/read/write and such. This applies to Go, Python, C, Julia or whatever. Inside Kubernetes, the normal way to do this mount is to use a CSI driver that has some kind of operator working in the background. That operator isn't particularly magical ... it just does what is needful. In the case of data fabric, the operator mounts the data fabric using NFS or FUSE and then bind mounts[1] part of that into the pod's awareness.</p> <p>But this question is cool because it precludes all of that. If you can't install an operator, then this other stuff is just a dead letter.</p> <p>There are four alternative approaches that may work.</p> <ol> <li><p>NFS mounts were included in Kubernetes as a native capability before the CSI plugin approach was standardized. It might still be possible to use that on a very vanilla Kubernetes cluster, and that could give access to the data cluster.</p> </li> <li><p>It is possible to integrate a container into your pod that does the necessary FUSE mount in an unprivileged way. This will be kind of painful because you would have to tease apart the FUSE driver from the data fabric install and get it to work. That would let you see the data fabric inside the pod. Even then, there is no guarantee Kubernetes or the OS will allow this to work.</p> </li> <li><p>There is an unpublished Go file system client that uses the low-level data fabric API directly. We don't yet release that separately.
For more information on that, folks should ping me directly (my contact info is everywhere ... email to ted.dunning hpe.com or gmail.com works)</p> </li> <li><p>The data fabric allows you to access data via S3. With the 7.0 release of Ezmeral Data Fabric, this capability is heavily revamped to give massive performance especially since you can scale up the number of gateways essentially without limit (I have heard numbers like 3-5GB/s per stateless connection to a gateway, but YMMV). This will require the least futzing and should give plenty of performance. You can even access files as if they were S3 objects.</p> </li> </ol> <p>[1] <a href="https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount#:%7E:text=A%20bind%20mount%20is%20an,the%20same%20as%20the%20original">https://unix.stackexchange.com/questions/198590/what-is-a-bind-mount#:~:text=A%20bind%20mount%20is%20an,the%20same%20as%20the%20original</a>.</p>
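<p>As an illustration of the first approach (the native NFS volume type, which needs no CSI driver or operator), a pod can mount the data fabric's NFS export directly. This is only a sketch under stated assumptions: the gateway address, export path, and image name are placeholders, and the data fabric's NFS server must be reachable from the worker nodes:</p>

```yaml
# Pod fragment: mount the data fabric via Kubernetes' built-in NFS volume type.
# "mapr-nfs-gateway.example.com" and "/mapr" are placeholders for your
# cluster's NFS gateway and export path.
spec:
  containers:
  - name: app
    image: my-go-app:latest   # placeholder image
    volumeMounts:
    - name: datafabric
      mountPath: /mapr
  volumes:
  - name: datafabric
    nfs:
      server: mapr-nfs-gateway.example.com
      path: /mapr
```

<p>The Go program then just reads files under <code>/mapr</code> with the standard library's file APIs, with no Java sidecar involved.</p>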
<p>I'm using minikube version: v1.25.1, win10, k8s version 1.22</p> <p>There is 1 node, 2 pods on it: <code>main</code> and <code>front</code>, 1 service - <code>svc-main</code>.</p> <p>I'm trying to exec into front and call main thru service and see some msg confirming connection is ok.</p> <p>main.yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
  name: main
  namespace: default
spec:
  containers:
  - name: main
    image: nginx
    command: [&quot;/bin/sh&quot;,&quot;-c&quot;]
    args: [&quot;while true; do echo date; sleep 2; done&quot;]
</code></pre> <p>front.yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: nginx
    name: front
    command:
    - /bin/sh
    - -c
    - while true; echo date; sleep 2; done
</code></pre> <p>service is created like this:</p> <pre><code>k expose pod ngin --name=svc-main --type=ClusterIP --port=80 --target-port=80
k get svc
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
svc-main   ClusterIP   10.104.26.249   &lt;none&gt;        80:31775/TCP   11m
</code></pre> <p>When I try to curl from inside front it says &quot;Could not resolve host: svc-main&quot;</p> <pre><code>k exec front -it -- sh
curl svc-main:80
</code></pre> <p>or this</p> <pre><code>curl http://svc-main:80
curl 10.104.26.249:80
</code></pre> <p>I tried the port 31775, same result. What am I doing wrong?!</p>
<p>The problem is that when you create a Kubernetes pod using your yaml file, you are overwriting the default Entrypoint and Cmd defined in the nginx Docker image with your custom command and args:</p> <pre class="lang-yaml prettyprint-override"><code>command: [&quot;/bin/sh&quot;,&quot;-c&quot;]
args: [&quot;while true; do echo date; sleep 2; done&quot;]
</code></pre> <p>That's why the nginx web server doesn't work in the created pods. You should remove these lines, delete the running pods, and create new pods. After that, you will be able to reach the nginx web page by running</p> <pre class="lang-sh prettyprint-override"><code># curl svc-main
</code></pre> <p>within your front pod.</p> <p>You can read more info about defining a command and arguments for a container in a Pod <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">here</a></p> <p>And there is a good article about Docker CMD and Entrypoint <a href="https://www.cloudbees.com/blog/understanding-dockers-cmd-and-entrypoint-instructions" rel="nofollow noreferrer">here</a></p>
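<p>For illustration, once the overriding <code>command</code>/<code>args</code> are removed, the front pod manifest reduces to something like this (nginx's own entrypoint keeps the container running and serving on port 80):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: nginx
    name: front
```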
<p>We are enabling Google Cloud Groups RBAC in our existing GKE clusters.</p> <p>For that, we first created all the groups in Workspace, and also the required &quot;gke-security-groups@ourdomain.com&quot; according to documentation.</p> <p>Those groups are created in Workspace with an integration with Active Directory for Single Sign On.</p> <p>All groups are members of &quot;gke-security-groups@ourdomain&quot; as stated by documentation. And all groups can View members.</p> <p>The cluster was updated to enable the flag for Google Cloud Groups RBAC and we specified the value to be &quot;gke-security-groups@ourdomain.com&quot;.</p> <p>We then added one of the groups (let's call it group_a@ourdomain.com) to IAM and assigned a custom role which only gives access to:</p> <pre><code>&quot;container.apiServices.get&quot;,
&quot;container.apiServices.list&quot;,
&quot;container.clusters.getCredentials&quot;,
&quot;container.clusters.get&quot;,
&quot;container.clusters.list&quot;,
</code></pre> <p>This is just the minimum for the user to be able to log into the Kubernetes cluster and from there be able to apply Kubernetes RBACs.</p> <p>In Kubernetes, we applied a Role, which allows listing the pods in a specific namespace, and a RoleBinding that specifies the group we just added to IAM.</p> <pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-role
  namespace: custom-namespace
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;]
  verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;]
</code></pre> <hr /> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-rolebinding
  namespace: custom-namespace
roleRef:
  kind: Role
  name: test-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: group_a@ourdomain.com
</code></pre> <p>Everything looks good until now.
But when trying to list the pods of this namespace with the user that belongs to the group &quot;group_a@ourdomain.com&quot;, we get:</p> <blockquote> <p>Error from server (Forbidden): pods is forbidden: User &quot;my-user@ourdomain.com&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;custom-namespace&quot;: requires one of [&quot;container.pods.list&quot;] permission(s).</p> </blockquote> <p>Of course if I give container.pods.list to the group_a@ourdomain assigned role, I can list pods, but it opens for all namespaces, as this permission in GCloud is global.</p> <p>What am I missing here?</p> <p>Not sure if this is relevant, but our organisation in gcloud is called for example &quot;my-company.io&quot;, while the groups for SSO are named &quot;...@groups.my-company.io&quot;, and the gke-security-groups group was also created with the &quot;groups.my-company.io&quot; domain.</p> <p>Also, if instead of a Group in the RoleBinding, I specify the user directly, it works.</p>
<p>It turned out to be an issue with case-sensitive strings and nothing related to the actual rules defined in the RBACs, which were working as expected.</p> <p>The names of the groups were created in Azure AD in a camel-case format. These group names were then shown in Google Workspace all lowercase.</p> <p><strong>Example in Azure AD:</strong> thisIsOneGroup@groups.mycompany.com</p> <p><strong>Example configured in the RBACs as shown in Google Workspace:</strong> thisisonegroup@groups.mycompany.com</p> <p>We copied the names from the Google Workspace UI all lowercase and put them in the bindings, and that caused the issue. Kubernetes GKE is case-sensitive and it didn't match the name configured in the binding with the email configured in Google Workspace.</p> <p>After changing the RBAC bindings to have the same format, everything worked as expected.</p>
<p>I thought I understood how kubernetes services work, and have always seen them as a way to &quot;group&quot; several pods, in order to make it possible to contact the service instead of the single pods. However, it seems like I am wrong. I created a mysql deployment (with only one pod) and a service in order to reach the service whenever I want to use the mysql connection from other pods (other microservices). This is the service I made:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    run: mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    run: mysql
</code></pre> <p>I hoped this would have allowed me to connect to the mysql pod by reaching <code>&lt;clusterIp&gt;:&lt;targetPort&gt;</code>, but the connection is refused whenever I try to connect. I tried reading online and initially thought the <code>nodeport</code> service type was a good idea, but the kubernetes website says that the service is then reachable by <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>, so this got me confused. MySQL should be reachable only inside the cluster by other pods.
How can I make this happen?</p> <h5>NOTES:</h5> <ul> <li>I am working on minikube.</li> <li>Here is how I try connecting from one pod to the mysql service in python:</li> </ul> <pre><code>service = api.read_namespaced_service(name=&quot;mysql&quot;, namespace=&quot;default&quot;)
mydb = mysql.connector.connect(host=service.spec.cluster_ip, user=&quot;root&quot;,
                               password=&quot;password&quot;, database=&quot;db_name&quot;,
                               auth_plugin='mysql_native_password')
</code></pre> <p>This is the error I get:</p> <pre><code>Traceback (most recent call last):
  File &quot;/init/db_init.py&quot;, line 10, in &lt;module&gt;
    mydb = mysql.connector.connect(host=service.spec.cluster_ip, user=&quot;root&quot;,
  File &quot;/usr/local/lib/python3.9/site-packages/mysql/connector/__init__.py&quot;, line 272, in connect
    return CMySQLConnection(*args, **kwargs)
  File &quot;/usr/local/lib/python3.9/site-packages/mysql/connector/connection_cext.py&quot;, line 85, in __init__
    self.connect(**kwargs)
  File &quot;/usr/local/lib/python3.9/site-packages/mysql/connector/abstracts.py&quot;, line 1028, in connect
    self._open_connection()
  File &quot;/usr/local/lib/python3.9/site-packages/mysql/connector/connection_cext.py&quot;, line 241, in _open_connection
    raise errors.get_mysql_exception(msg=exc.msg, errno=exc.errno,
mysql.connector.errors.DatabaseError: 2003 (HY000): Can't connect to MySQL server on '10.107.203.112:3306' (111)
</code></pre> <h2>UPDATE</h2> <p>As requested, here is the whole log of the mysql pod:</p> <pre><code>2022-01-27 17:57:14+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.28-1debian10 started.
2022-01-27 17:57:15+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2022-01-27 17:57:15+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.28-1debian10 started.
2022-01-27 17:57:15+00:00 [Note] [Entrypoint]: Initializing database files
2022-01-27T17:57:15.090697Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.28) initializing of server in progress as process 43
2022-01-27T17:57:15.105399Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-01-27T17:57:16.522380Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-01-27T17:57:20.805814Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2022-01-27 17:57:29+00:00 [Note] [Entrypoint]: Database files initialized
2022-01-27 17:57:29+00:00 [Note] [Entrypoint]: Starting temporary server
2022-01-27T17:57:29.868217Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.28) starting as process 92
2022-01-27T17:57:29.892649Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-01-27T17:57:30.100941Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-01-27T17:57:30.398700Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2022-01-27T17:57:30.398743Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2022-01-27T17:57:30.419293Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2022-01-27T17:57:30.430833Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock
2022-01-27T17:57:30.430879Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.28' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL.
2022-01-27 17:57:30+00:00 [Note] [Entrypoint]: Temporary server started.
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
2022-01-27 17:57:32+00:00 [Note] [Entrypoint]: Creating database football
2022-01-27 17:57:32+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/SQL.sql
2022-01-27 17:57:33+00:00 [Note] [Entrypoint]: Stopping temporary server
2022-01-27T17:57:33.143178Z 12 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 8.0.28).
2022-01-27T17:57:36.222404Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.28)  MySQL Community Server - GPL.
2022-01-27 17:57:37+00:00 [Note] [Entrypoint]: Temporary server stopped
2022-01-27 17:57:37+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
2022-01-27T17:57:37.329690Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.28) starting as process 1
2022-01-27T17:57:37.336444Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2022-01-27T17:57:37.525143Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2022-01-27T17:57:37.738175Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2022-01-27T17:57:37.738216Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2022-01-27T17:57:37.745722Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2022-01-27T17:57:37.757638Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2022-01-27T17:57:37.757679Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.28' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
</code></pre> <p>Also, here is the deployment I used:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: repo/football-mysql
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
</code></pre>
<p>There is a label mismatch between the Service and the Pod:</p>
<pre class="lang-yaml prettyprint-override"><code># service
spec:
  selector:
    run: mysql # &lt;- this
</code></pre>
<p>vs</p>
<pre class="lang-yaml prettyprint-override"><code># deployment
spec:
  template:
    metadata:
      labels:
        app: mysql # &lt;- and this
</code></pre>
<p>Because of this, the Service does not select the Pod created by the Deployment at all. In other words, the Service looks for Pods with a <code>run</code> label whose value is <code>mysql</code>, while your MySQL Pod has an <code>app</code> label with <code>mysql</code> in it.</p>
<p>To make it work, the Service's selector has to match labels that are actually present on the Pod. In this case, replacing <code>run</code> with <code>app</code> in the selector should be enough:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    run: mysql # changing here isn't mandatory, up to you
spec:
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    app: mysql # here is the required change
</code></pre>
<p>Hope all is well. I am stuck with this Pod executing a shell script, using the BusyBox image. The one below works:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: loop
  name: busybox-loop
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - |-
      for i in 1 2 3 4 5 6 7 8 9 10; \
      do echo &quot;Welcome $i times&quot;; done
    image: busybox
    name: loop
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
</code></pre>
<p>But this one doesn't work, as I am using &quot;- &gt;&quot; as the operator:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox-loop
  name: busybox-loop
spec:
  containers:
  - image: busybox
    name: busybox-loop
    args:
    - /bin/sh
    - -c
    - &gt;
    - for i in {1..10};
    - do
    - echo (&quot;Welcome $i times&quot;);
    - done
  restartPolicy: Never
</code></pre>
<p>Is it because the for syntax &quot;for i in {1..10};&quot; will not work in the sh shell (as we know we don't have any other shells in BusyBox), or is the &quot;- &gt;&quot; operator incorrect? I don't think so, because it works for other shell scripts.</p>
<p>Also, when can we use the &quot;- |&quot; multiline operator (I hope the term is correct) and when the &quot;- &gt;&quot; operator? I know the syntax below is easy to use, but the problem is that when we use double quotes in the script, the escape sequences get confusing and never work.</p>
<p>args: [&quot;-c&quot;, &quot;while true; do echo hello; sleep 10;done&quot;]</p>
<p><code>...But this one doesn't work as I am using &quot;- &gt;&quot; as the operator...</code></p>
<p>You don't need '-' after '&gt;' in this case, try:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - ash
    - -c
    - &gt;
      for i in 1 2 3 4 5 6 7 8 9 10; do echo &quot;hello&quot;; done
</code></pre>
<p><code>kubectl logs busybox</code> will print hello 10 times.</p>
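<p>A side note on the first guess in the question: <code>for i in {1..10}</code> is indeed a separate problem. Brace expansion is a bash feature that POSIX <code>sh</code> (and, typically, BusyBox <code>ash</code>) does not perform, so the loop body runs once with the literal string <code>{1..10}</code>. A small sketch of a portable alternative using <code>seq</code>, which BusyBox ships:</p>

```shell
# brace expansion is not POSIX; use seq (or a while-loop counter) instead
sh -c 'for i in $(seq 1 3); do echo "Welcome $i times"; done'
```

<p>A <code>while [ &quot;$i&quot; -le 10 ]</code> counter loop works too if you'd rather not rely on <code>seq</code>.</p>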
<p>I have a config map that looks like this:</p>
<pre><code>kubectl describe configmap example-config --namespace kube-system

Name:         example-config
Namespace:    kube-system
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Data
====
mapRoles:
----
- rolearn: arn:aws:iam::0123456789012:role/user-role-1
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
- rolearn: arn:aws:iam::0123456789012:role/user-role-2
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes
</code></pre>
<p>I want to remove user-role-2 from the configmap. I think I need to use kubectl patch with a &quot;remove&quot; operation. What is the syntax to remove a section from a config map?</p>
<p>Here is an example command I can use to append to the config map:</p>
<pre><code>kubectl patch -n=kube-system cm/aws-auth --patch &quot;{\&quot;data\&quot;:{\&quot;mapRoles\&quot;: \&quot;- rolearn: &quot;arn:aws:iam::0123456789012:role/user-role-3&quot; \n  username: system:node:{{EC2PrivateDNSName}}\n  groups:\n    - system:bootstrappers\n    - system:nodes\n\&quot;}}&quot;
</code></pre>
<p>According to the kubernetes official docs: <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/</a></p>
<p>There is no such syntax in <code>kubectl patch</code> to remove only a section from an API object like a ConfigMap.</p>
<blockquote>
<p>Here is an example command I can use to append to the config map:</p>
<p><code>kubectl patch -n=kube-system cm/aws-auth --patch &quot;{\&quot;data\&quot;:{\&quot;mapRoles\&quot;: \&quot;- rolearn: &quot;arn:aws:iam::0123456789012:role/user-role-3&quot; \n  username: system:node:{{EC2PrivateDNSName}}\n  groups:\n    - system:bootstrappers\n    - system:nodes\n\&quot;}}&quot; </code></p>
</blockquote>
<p>The command above replaces the whole <code>mapRoles</code> value in the ConfigMap's data. So you can simply patch it with only the data you want to keep:</p>
<pre><code>kubectl patch -n=kube-system cm/example-config --patch '{&quot;data&quot;:{&quot;mapRoles&quot;: &quot;- rolearn: arn:aws:iam::0123456789012:role/user-role-1\n  username: system:node:{{EC2PrivateDNSName}}\n  groups:\n    - system:bootstrappers\n    - system:nodes\n&quot;}}'
</code></pre>
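<p>If you'd rather not hand-edit the escaped string, you can build the replacement <code>mapRoles</code> value programmatically and feed it to <code>kubectl patch</code>. A stdlib-only sketch (the naive parser below assumes the simple aws-auth layout shown above, with each entry starting at a <code>- rolearn:</code> line):</p>

```python
import json

def remove_role(map_roles: str, role_arn: str) -> str:
    """Drop the entry whose rolearn line mentions role_arn from a mapRoles string."""
    entries, current = [], []
    for line in map_roles.splitlines():
        if line.startswith("- rolearn:"):
            if current:
                entries.append(current)
            current = [line]
        elif current:
            current.append(line)
    if current:
        entries.append(current)
    kept = [e for e in entries if role_arn not in e[0]]
    return "\n".join("\n".join(e) for e in kept) + "\n"

map_roles = (
    "- rolearn: arn:aws:iam::0123456789012:role/user-role-1\n"
    "  username: system:node:{{EC2PrivateDNSName}}\n"
    "  groups:\n"
    "    - system:bootstrappers\n"
    "    - system:nodes\n"
    "- rolearn: arn:aws:iam::0123456789012:role/user-role-2\n"
    "  username: system:node:{{EC2PrivateDNSName}}\n"
    "  groups:\n"
    "    - system:bootstrappers\n"
    "    - system:nodes\n"
)

new_value = remove_role(map_roles, "role/user-role-2")
# json.dumps handles the \n escaping for the patch body:
patch = json.dumps({"data": {"mapRoles": new_value}})
print(patch)
```

<p>Then apply it with something like <code>kubectl patch -n kube-system cm/example-config --patch &quot;$patch&quot;</code>.</p>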
<p>This is my current setup:</p>
<pre><code>os1@os1:/usr/local/bin$ minikube update-check
CurrentVersion: v1.20.0
LatestVersion: v1.25.1

os1@os1:/usr/local/bin$ cat /etc/os-release
PRETTY_NAME=&quot;Debian GNU/Linux 10 (buster)&quot;
</code></pre>
<p>What should be the steps to upgrade the minikube?</p>
<p><a href="https://minikube.sigs.k8s.io/docs/" rel="nofollow noreferrer">Minikube</a> is a standalone executable, so in this case you need to <code>re-install</code> <code>minikube</code> with the desired version. There is no command to upgrade a running <code>Minikube</code> in place.</p>
<p>You would need to:</p>
<pre><code>sudo minikube delete     # remove your minikube cluster
sudo rm -rf ~/.minikube  # remove minikube
</code></pre>
<p>and reinstall it using <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">Minikube documentation - Start</a>, depending on what your requirements are (the packages used in the docs should always be up to date and should cover all your requirements regarding available drivers).</p>
<p>I have used the following configurations to deploy an app on minikube.</p>
<p><strong>Deployment</strong>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: angular-app
  template:
    metadata:
      labels:
        run: angular-app
    spec:
      containers:
      - name: angular-app
        image: nheidloff/angular-app
        ports:
        - containerPort: 80
        - containerPort: 443
</code></pre>
<p><strong>Service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: angular-app
  labels:
    run: angular-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
</code></pre>
<p><strong>Service description:</strong></p>
<pre><code>Name:                     angular-app
Namespace:                default
Labels:                   run=angular-app
Annotations:              &lt;none&gt;
Selector:                 &lt;none&gt;
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.174.98
IPs:                      10.102.174.98
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  31503/TCP
Endpoints:                172.17.0.3:80,172.17.0.4:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &lt;none&gt;
</code></pre>
<p>When I try to access the endpoints, the links are not responding. However, after using <code>minikube service angular-app</code>, the following showed up:</p>
<pre><code>|-----------|-------------|-------------|---------------------------|
| NAMESPACE |    NAME     | TARGET PORT |            URL            |
|-----------|-------------|-------------|---------------------------|
| default   | angular-app | http/80     | http://192.168.49.2:31503 |
|-----------|-------------|-------------|---------------------------|
🏃  Starting tunnel for service angular-app.
|-----------|-------------|-------------|------------------------|
| NAMESPACE |    NAME     | TARGET PORT |          URL           |
|-----------|-------------|-------------|------------------------|
| default   | angular-app |             | http://127.0.0.1:60611 |
|-----------|-------------|-------------|------------------------|
</code></pre>
<p>With this IP, <code>http://127.0.0.1:60611</code>, I'm able to access the app. What is the use of the endpoints given in the service description? How to access each replica? Say if I have 4 replicas, how do I access each one of them?</p>
<p>The answer from al-waleed-shihadeh is correct, but I want to give some additional info.</p>
<ul>
<li><p>You should be able to access the service via the NodePort, too, without needing the <code>minikube service</code> command: <a href="http://192.168.49.2:31503" rel="nofollow noreferrer">http://192.168.49.2:31503</a>. The port is assigned at random, but you can choose a fixed one in the range 30000-32767:</p>
<pre><code>spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    nodePort: 30080
</code></pre>
</li>
<li><p>If you want a 'normal' URL to access the service, you must add an Ingress that allows access to the service via a reverse proxy. It will route to one of your service's Pods using load-balancing.</p>
</li>
<li><p>If you want fixed URLs to each of your Pods separately, you could use a StatefulSet instead of a Deployment, and create 4 different Services with selectors for angular-app-0 to angular-app-3, and then have 4 different Ingresses as well.</p>
</li>
</ul>
<p>I've googled for a few days and haven't found any solutions. I've tried to update k8s from 1.19.0 to 1.19.6 on Ubuntu 20. (cluster manually installed, k81 - master and k82 - worker node)</p>
<pre><code># kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[preflight] Some fatal errors occurred:
    [ERROR CoreDNSUnsupportedPlugins]: couldn't retrieve DNS addon deployments: deployments.apps is forbidden: User &quot;system:node:k81&quot; cannot list resource &quot;deployments&quot; in API group &quot;apps&quot; in the namespace &quot;kube-system&quot;
    [ERROR CoreDNSMigration]: couldn't retrieve DNS addon deployments: deployments.apps is forbidden: User &quot;system:node:k81&quot; cannot list resource &quot;deployments&quot; in API group &quot;apps&quot; in the namespace &quot;kube-system&quot;
    [ERROR kubeDNSTranslation]: configmaps &quot;kube-dns&quot; is forbidden: User &quot;system:node:k81&quot; cannot get resource &quot;configmaps&quot; in API group &quot;&quot; in the namespace &quot;kube-system&quot;: no relationship found between node 'k81' and this object
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>When I try to list roles and permissions under the kubernetes-admin user, it shows the same permission error:</p>
<pre><code>~# kubectl get rolebindings,clusterrolebindings --all-namespaces
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User &quot;system:node:k81&quot; cannot list resource &quot;rolebindings&quot; in API group &quot;rbac.authorization.k8s.io&quot; at the cluster scope
Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User &quot;system:node:k81&quot; cannot list resource &quot;clusterrolebindings&quot; in API group &quot;rbac.authorization.k8s.io&quot; at the cluster scope
</code></pre>
<p>I can list pods and cluster nodes:</p>
<pre><code># kubectl get nodes
NAME   STATUS   ROLES    AGE    VERSION
k81    Ready    master   371d   v1.19.6
k82    Ready    &lt;none&gt;   371d   v1.19.6

# kubectl get pods --all-namespaces
NAMESPACE             NAME                                            READY   STATUS    RESTARTS   AGE
gitlab-managed-apps   gitlab-runner-gitlab-runner-6bf497d6c9-g7rhc    1/1     Running   47         27d
gitlab-managed-apps   prometheus-kube-state-metrics-c6bbb8465-8kls5   1/1     Running   3          27d
ingress-nginx         ingress-nginx-controller-848bfcb64d-r6k6k       1/1     Running   3          27d
kube-system           coredns-f9fd979d6-6dd42                         1/1     Running   1          24h
kube-system           coredns-f9fd979d6-zjsnz                         1/1     Running   1          24h
kube-system           csi-nfs-controller-5bd5cb55bc-76xdm             3/3     Running   69         27d
kube-system           csi-nfs-controller-5bd5cb55bc-mkwmv             3/3     Running   61         27d
kube-system           csi-nfs-node-b4v4g                              3/3     Running   18         49d
kube-system           etcd-k81                                        1/1     Running   30         371d
kube-system           kube-apiserver-k81                              1/1     Running   54         371d
kube-system           kube-controller-manager-k81                     1/1     Running   27         5d22h
kube-system           kube-flannel-ds-l4xkx                           1/1     Running   13         371d
kube-system           kube-flannel-ds-rdm4n                           1/1     Running   5          371d
kube-system           kube-proxy-4976l                                1/1     Running   5          371d
kube-system           kube-proxy-g2fn4                                1/1     Running   11         371d
kube-system           kube-scheduler-k81                              1/1     Running   330        371d
kube-system           tiller-deploy-f5c865db5-zlgk9                   1/1     Running   5          27d

# kubectl logs coredns-f9fd979d6-zjsnz -n kube-system
Error from server (Forbidden): pods &quot;coredns-f9fd979d6-zjsnz&quot; is forbidden: User &quot;system:node:k81&quot; cannot get resource &quot;pods/log&quot; in API group &quot;&quot; in the namespace &quot;kube-system&quot;
</code></pre>
<pre><code># kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin

# kubectl get csr
No resources found
</code></pre>
<p>The solution for the issue is to <a href="https://v1-19.docs.kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/#cmd-phase-kubeconfig" rel="nofollow noreferrer">regenerate the kubeconfig file for the admin</a>:</p>
<pre class="lang-sh prettyprint-override"><code>sudo kubeadm init phase kubeconfig admin --kubeconfig-dir=.
</code></pre>
<p>The above command will create the <code>admin.conf</code> file in the current directory (let's say it is <code>/home/user/testing/</code>), so when you are running <code>kubectl</code> commands you need to specify it using the <code>--kubeconfig {directory}/admin.conf</code> flag, for example:</p>
<pre class="lang-sh prettyprint-override"><code>sudo kubectl get rolebindings,clusterrolebindings --all-namespaces --kubeconfig /home/user/testing/admin.conf
</code></pre>
<p>As you are using the <code>/etc/kubernetes/admin.conf</code> file by default, you can delete it and create a new one in the <code>/etc/kubernetes</code> directory:</p>
<pre><code>sudo rm /etc/kubernetes/admin.conf
sudo kubeadm init phase kubeconfig admin --kubeconfig-dir=/etc/kubernetes/
</code></pre>
<p>I am debugging a problem with pod eviction in Kubernetes.</p>
<p>It looks like it is related to the configured quantity of PHP-FPM children processes.</p>
<p>I assigned a minimum memory of 128 MB and Kubernetes is evicting my pod apparently when it exceeds 10x that amount (<code>The node was low on resource: memory. Container phpfpm was using 1607600Ki, which exceeds its request of 128Mi.</code>)</p>
<p>How can I prevent this? I thought that requested resources are the minimum and that the pod can use whatever is available if there's no upper limit.</p>
<p>Requested memory is not &quot;the minimum&quot;; it is exactly what the name says: the amount of memory <em>requested</em> by the pod. When Kubernetes schedules a pod, it uses the request as guidance to choose a node which can accommodate this workload, but it doesn't guarantee that the pod won't be killed if the node is short on memory.</p>
<p>As per the docs <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run</a></p>
<blockquote>
<p>if a container exceeds its memory request and the node that it runs on becomes short of memory overall, it is likely that the Pod the container belongs to will be evicted.</p>
</blockquote>
<p>If you want to guarantee a certain memory window for your pods, you should use <code>limits</code>, but in that case, if your pod doesn't use most of this memory, it will be &quot;wasted&quot;.</p>
<p>So to answer your question &quot;How can I prevent this?&quot;, you can:</p>
<ul>
<li>reconfigure your php-fpm in a way that prevents it from using 10x the memory (i.e. reduce the worker count), and configure autoscaling. That way your overloaded pods won't be evicted, and Kubernetes will schedule new pods in the event of higher load</li>
<li>set a memory limit to guarantee a certain amount of memory to your pods</li>
<li>increase memory on your nodes</li>
<li>use affinity to schedule your demanding pods on dedicated nodes and other workloads on separate nodes</li>
</ul>
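<p>For the second bullet, this is what a memory limit looks like on the container spec. A sketch only: the numbers are placeholders and should be sized to your php-fpm worker count times the per-worker footprint:</p>

```yaml
containers:
  - name: phpfpm
    resources:
      requests:
        memory: "512Mi"   # what the scheduler reserves on the node
      limits:
        memory: "512Mi"   # hard cap; the container is OOM-killed above this
```

<p>With a limit in place, an over-consuming container is killed and restarted in place instead of growing until the node comes under memory pressure and evicts the whole pod.</p>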
<p>I know that the moment the pod receives a deletion request, it is deleted from the service endpoint and no longer receives the request. However, I'm not sure if the pod can return a response to a request it received just before it was deleted from the service endpoint. If the pod IP is missing from the service's endpoint, can it still respond to requests?</p>
<p>There are many reasons why Kubernetes might terminate a healthy container (for example, node drain, termination due to lack of resources on the node, rolling update).</p>
<h4>Once Kubernetes has decided to terminate a Pod, a series of events takes place:</h4>
<h4>1 - Pod is set to the “Terminating” State and removed from the endpoints list of all Services</h4>
<p>At this point, the pod stops getting new traffic. Containers running in the pod will not be affected.</p>
<h4>2 - preStop Hook is executed</h4>
<p>The <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details" rel="nofollow noreferrer">preStop Hook</a> is a special command or http request that is sent to the containers in the pod. If your application doesn’t gracefully shut down when receiving a SIGTERM you can use this hook to trigger a graceful shutdown. Most programs gracefully shut down when receiving a SIGTERM, but if you are using third-party code or are managing a system you don’t have control over, the preStop hook is a great way to trigger a graceful shutdown without modifying the application.</p>
<h4>3 - SIGTERM signal is sent to the pod</h4>
<p>At this point, Kubernetes will send a SIGTERM signal to the containers in the pod. This signal lets the containers know that they are going to be shut down soon. Your code should listen for this event and start shutting down cleanly at this point. This may include stopping any long-lived connections (like a database connection or WebSocket stream), saving the current state, or anything like that. Even if you are using the preStop hook, it is important that you test what happens to your application if you send it a SIGTERM signal, so you are not surprised in production!</p>
<h4>4 - Kubernetes waits for a grace period</h4>
<p>At this point, Kubernetes waits for a specified time called the termination grace period. By default, this is 30 seconds.
It’s important to note that this happens in parallel to the preStop hook and the SIGTERM signal. Kubernetes does not wait for the preStop hook to finish. If your app finishes shutting down and exits before the terminationGracePeriod is done, Kubernetes moves to the next step immediately. If your pod usually takes longer than 30 seconds to shut down, make sure you increase the grace period. You can do that by setting the terminationGracePeriodSeconds option in the Pod YAML.</p> <h4>5 - SIGKILL signal is sent to pod, and the pod is removed</h4> <p>If the containers are still running after the grace period, they are sent the SIGKILL signal and forcibly removed. At this point, all Kubernetes objects are cleaned up as well.</p> <p>I hope this gives a good idea of the Kubernetes <strong>termination lifecycle</strong> and how to handle a Pod termination <strong>gracefully</strong>.</p> <p>Based on <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">this article</a>.</p>
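<p>Putting steps 2 and 4 together, here is a minimal sketch of where the preStop hook and the grace period are configured (the image name and the preStop command are placeholders for your own):</p>

```yaml
spec:
  terminationGracePeriodSeconds: 60   # step 4: raise the default 30s if shutdown is slow
  containers:
    - name: app
      image: my-app:latest            # placeholder
      lifecycle:
        preStop:                      # step 2: runs when termination starts
          exec:
            command: ["/bin/sh", "-c", "sleep 5"]  # e.g. give the LB time to deregister
```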
<p>I have two WorkflowTemplates, <code>generate-output</code> and <code>lib-read-outputs</code>, and one Workflow, <code>output-paramter</code>, as follows:</p>
<ol>
<li><code>generate-output.yaml</code></li>
</ol>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: generate-output
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          # Generate Json for Outputs
          - name: read-outputs
            arguments:
              parameters:
                - name: outdata
                  value: |
                    {
                      &quot;version&quot;: 4,
                      &quot;terraform_version&quot;: &quot;0.14.11&quot;,
                      &quot;serial&quot;: 0,
                      &quot;lineage&quot;: &quot;732322df-5bd43-6e92-8f46-56c0dddwe83cb4&quot;,
                      &quot;outputs&quot;: {
                        &quot;key_alias_arn&quot;: {
                          &quot;value&quot;: &quot;arn:aws:kms:us-west-2:123456789:alias/tetsing-key&quot;,
                          &quot;type&quot;: &quot;string&quot;,
                          &quot;sensitive&quot;: true
                        },
                        &quot;key_arn&quot;: {
                          &quot;value&quot;: &quot;arn:aws:kms:us-west-2:123456789:alias/tetsing-key&quot;,
                          &quot;type&quot;: &quot;string&quot;,
                          &quot;sensitive&quot;: true
                        }
                      }
                    }
            template: retrieve-outputs

    # Create Json
    - name: retrieve-outputs
      inputs:
        parameters:
          - name: outdata
      script:
        image: python
        command: [python]
        env:
          - name: OUTDATA
            value: &quot;{{inputs.parameters.outdata}}&quot;
        source: |
          import json
          import os
          OUTDATA = json.loads(os.environ[&quot;OUTDATA&quot;])
          with open('/tmp/templates_lst.json', 'w') as outfile:
            outfile.write(str(json.dumps(OUTDATA['outputs'])))
        volumeMounts:
          - name: out
            mountPath: /tmp
      volumes:
        - name: out
          emptyDir: { }
      outputs:
        parameters:
          - name: message
            valueFrom:
              path: /tmp/templates_lst.json
</code></pre>
<ol start="2">
<li><code>lib-read-outputs.yaml</code></li>
</ol>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: lib-read-outputs
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          # Read Outputs
          - name: lib-wft
            templateRef:
              name: generate-output
              template: main
</code></pre>
<ol start="3">
<li><code>output-paramter.yaml</code></li>
</ol>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-paramter-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          # Json Output data task1
          - name: wf
            templateRef:
              name: lib-read-outputs
              template: main

          - name: lib-wf2
            dependencies: [wf]
            arguments:
              parameters:
                - name: outputResult
                  value: &quot;{{tasks.wf.outputs.parameters.message}}&quot;
            template: whalesay

    - name: whalesay
      inputs:
        parameters:
          - name: outputResult
      container:
        image: docker/whalesay:latest
        command: [cowsay]
        args: [&quot;{{inputs.parameters.outputResult}}&quot;]
</code></pre>
<p>I am trying to pass the output parameters generated in the WorkflowTemplate <code>generate-output</code> to the Workflow <code>output-paramter</code> via <code>lib-read-outputs</code>.</p>
<p>When I execute them, it's giving the following error: <code>Failed: invalid spec: templates.main.tasks.lib-wf2 failed to resolve {{tasks.wf.outputs.parameters.message}}</code></p>
<h1>DAG and steps templates don't produce outputs by default</h1>
<p>DAG and steps templates do not automatically produce their child templates' outputs, even if there is only one child template.</p>
<p>For example, the <code>no-parameters</code> template here does not produce an output, even though it invokes a template which <em>does</em> have an output.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
spec:
  templates:
    - name: no-parameters
      dag:
        tasks:
          - name: get-a-parameter
            template: get-a-parameter
</code></pre>
<p>This lack of outputs makes sense if you consider a DAG template with multiple tasks:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
spec:
  templates:
    - name: no-parameters
      dag:
        tasks:
          - name: get-a-parameter
            template: get-a-parameter
          - name: get-another-parameter
            depends: get-a-parameter
            template: get-another-parameter
</code></pre>
<p>Which task's outputs should <code>no-parameters</code> produce? Since it's unclear, DAG and steps templates simply do not produce outputs by default.</p>
<p>You can think of templates as being like functions. You wouldn't expect a function to implicitly return the output of a function it calls.</p>
<pre class="lang-py prettyprint-override"><code>def get_a_string():
    return &quot;Hello, world!&quot;

def call_get_a_string():
    get_a_string()

print(call_get_a_string())  # This prints &quot;None&quot; - no value is forwarded.
</code></pre>
<h1>But a DAG or steps template can <em>forward</em> outputs</h1>
<p>You can make a DAG or a steps template <em>forward</em> an output by setting its <code>outputs</code> field.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: get-parameters-wftmpl
spec:
  templates:
    - name: get-parameters
      dag:
        tasks:
          - name: get-a-parameter
            template: get-a-parameter
          - name: get-another-parameter
            depends: get-a-parameter
            template: get-another-parameter

      # This is the critical part!
      outputs:
        parameters:
          - name: parameter-1
            valueFrom:
              expression: &quot;tasks['get-a-parameter'].outputs.parameters['parameter-name']&quot;
          - name: parameter-2
            valueFrom:
              expression: &quot;tasks['get-another-parameter'].outputs.parameters['parameter-name']&quot;
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow
spec:
  templates:
    - name: print-parameter
      dag:
        tasks:
          - name: get-parameters
            templateRef:
              name: get-parameters-wftmpl
              template: get-parameters
          - name: print-parameter
            depends: get-parameters
            template: print-parameter
            arguments:
              parameters:
                - name: parameter
                  value: &quot;{{tasks.get-parameters.outputs.parameters.parameter-1}}&quot;
</code></pre>
<p>To continue the Python analogy:</p>
<pre class="lang-py prettyprint-override"><code>def get_a_string():
    return &quot;Hello, world!&quot;

def call_get_a_string():
    return get_a_string()  # Add 'return'.

print(call_get_a_string())  # This prints &quot;Hello, world!&quot;.
</code></pre>
<h1>So, in your specific case...</h1>
<ol>
<li><p>Add an <code>outputs</code> section to the <code>main</code> template in the <code>generate-output</code> WorkflowTemplate to forward the output parameter from the <code>read-outputs</code> task (which runs the <code>retrieve-outputs</code> template).</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: generate-output
spec:
  entrypoint: main
  templates:
    - name: main
      outputs:
        parameters:
          - name: message
            valueFrom:
              expression: &quot;tasks['read-outputs'].outputs.parameters.message&quot;
      dag:
        tasks:
          # ... the rest of the file ...
</code></pre>
</li>
<li><p>Add an <code>outputs</code> section to the <code>main</code> template in the <code>lib-read-outputs</code> WorkflowTemplate to forward <code>generate-output</code>'s parameter.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: lib-read-outputs
spec:
  entrypoint: main
  templates:
    - name: main
      outputs:
        parameters:
          - name: message
            valueFrom:
              expression: &quot;tasks['lib-wft'].outputs.parameters.message&quot;
      dag:
        tasks:
          # ... the rest of the file ...
</code></pre>
</li>
</ol>
<p>Akka app on Kubernetes is facing delayed heartbeats, even when there is no load.</p>
<p>There is also constantly the following warning:</p>
<pre><code>heartbeat interval is growing too large for address ...
</code></pre>
<p>I tried to add a custom dispatcher for the cluster, even for every specific actor, but it did not help. I am not doing any blocking operations, since it is just a simple Http server.</p>
<p>When the cluster has load, the nodes become Unreachable.</p>
<p>I created a repository which can be used to reproduce the issue: <a href="https://github.com/CostasChaitas/Akka-Demo" rel="nofollow noreferrer">https://github.com/CostasChaitas/Akka-Demo</a></p>
<p>I was also seeing growing heartbeat intervals, but in my case it started once I began using the cluster, even though the load was not high (I was trying only 2 tps).</p>
<p>Going through the Akka documentation I found that Akka discourages using <code>resources.limits.cpu</code>. I removed it from my deployment manifest file and it works fine without delays.</p>
<p>You can refer to the documentation here: <a href="https://doc.akka.io/docs/akka/current/additional/deploying.html?_ga=2.222760347.1686781468.1643119007-1504733962.1642433119#resource-limits" rel="nofollow noreferrer">https://doc.akka.io/docs/akka/current/additional/deploying.html?_ga=2.222760347.1686781468.1643119007-1504733962.1642433119#resource-limits</a></p>
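<p>In manifest terms the change boils down to keeping a CPU <em>request</em> (so the scheduler can still place the pod) while dropping the CPU <em>limit</em>. A sketch with placeholder values:</p>

```yaml
resources:
  requests:
    cpu: "1"        # keep: used for scheduling
    memory: "1Gi"
  limits:
    memory: "1Gi"   # a memory limit is still fine
    # no cpu limit here: CFS throttling from a cpu limit can delay Akka heartbeats
```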
<p>We are using <code>jetstack/cert-manager</code> to automate certificate management in a k8s environment.</p>
<p>Applying a Certificate with <code>kubectl apply -f cert.yaml</code> works just fine:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-cert
spec:
  secretName: test-secret
  issuerRef:
    name: letsencrypt
    kind: Issuer
  dnsNames:
    - development.my-domain.com
    - production.my-domain.com
</code></pre>
<p>However, it fails when installing a Helm template:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: {{.Values.cert}}
spec:
  secretName: {{.Values.secret}}
  issuerRef:
    name: letsencrypt
    kind: Issuer
  dnsNames: [{{.Values.dnsNames}}]
</code></pre>
<pre><code>E0129 09:57:51.911270       1 sync.go:264] cert-manager/controller/orders &quot;msg&quot;=&quot;failed to create Order resource due to bad request, marking Order as failed&quot; &quot;error&quot;=&quot;400 urn:ietf:params:acme:error:rejectedIdentifier: NewOrder request did not include a SAN short enough to fit in CN&quot; &quot;resource_kind&quot;=&quot;Order&quot; &quot;resource_name&quot;=&quot;test-cert-45hgz-605454840&quot; &quot;resource_namespace&quot;=&quot;default&quot; &quot;resource_version&quot;=&quot;v1&quot;
</code></pre>
<p>Try to inspect your Certificate object with <code>kubectl -n default describe certificate test-cert</code> and post it here if you don't find any issues with it.</p>
<p>Your Certificate object should look like the following:</p>
<pre><code>Name:         test-cert
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2022-01-28T12:25:40Z
  Generation:          4
  Managed Fields:
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:dnsNames:
        f:issuerRef:
          .:
          f:kind:
          f:name:
        f:secretName:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-01-28T12:25:40Z
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:lastFailureTime:
        f:notAfter:
        f:notBefore:
        f:renewalTime:
        f:revision:
    Manager:         controller
    Operation:       Update
    Subresource:     status
    Time:            2022-01-29T09:57:51Z
  Resource Version:  344677
  Self Link:         /apis/cert-manager.io/v1/namespaces/istio-ingress/certificates/test-cert-2
  UID:               0015cc16-06c3-4e33-bb99-0f336cf7b788
Spec:
  Dns Names:
    development.my-domain.com
    production.my-domain.com
  Issuer Ref:
    Kind:       Issuer
    Name:       letsencrypt
  Secret Name:  test-secret
</code></pre>
<p>Pay close attention to the <code>Spec.Dns Names</code> values. Sometimes Helm's template engine renders them as a single string instead of an array due to misconfiguration.</p>
<p>Also, it's a good practice to inspect Helm charts with <code>helm template mychart</code> before installing.</p>
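<p>If the describe output shows <code>Dns Names</code> as one long concatenated string, the usual culprit is interpolating the list directly with <code>{{.Values.dnsNames}}</code>. A sketch of a template that renders a proper YAML array instead (assuming <code>dnsNames</code> is defined as a list in <code>values.yaml</code>; file names are illustrative):</p>

```yaml
# values.yaml
dnsNames:
  - development.my-domain.com
  - production.my-domain.com

# templates/certificate.yaml (relevant part)
spec:
  dnsNames:
    {{- range .Values.dnsNames }}
    - {{ . | quote }}
    {{- end }}
```

<p><code>{{- toYaml .Values.dnsNames | nindent 4 }}</code> is an equivalent, more compact option.</p>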
<p>There is a folder named &quot;data-persistent&quot; in the running container that the code reads and writes from, and I want to save the changes made in that folder. When I use a persistent volume, it removes/hides the data from that folder and the code gives an error. So what should be my approach?</p>
<pre><code>FROM python:latest

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

#RUN mkdir data-persistent

ADD linkedin_scrape.py .
COPY requirements.txt ./requirements.txt
COPY final_links.csv ./final_links.csv
COPY credentials.txt ./credentials.txt
COPY vectorizer.pk ./vectorizer.pk
COPY model_IvE ./model_IvE
COPY model_JvP ./model_JvP
COPY model_NvS ./model_NvS
COPY model_TvF ./model_TvF
COPY nocopy.xlsx ./nocopy.xlsx
COPY data.db /data-persistent/
COPY textdata.txt /data-persistent/
RUN ls -la /data-persistent/*
RUN pip install -r requirements.txt

CMD python linkedin_scrape.py --bind 0.0.0.0:8080 --timeout 90
</code></pre>
<p>And my deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-first-cluster1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: scrape
  template:
    metadata:
      labels:
        app: scrape
    spec:
      containers:
        - name: scraper
          image: image-name #
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: &quot;8080&quot;
          volumeMounts:
            - mountPath: &quot;/dev/shm&quot;
              name: dshm
            - mountPath: &quot;/data-persistent/&quot;
              name: tester
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
        - name: tester
          persistentVolumeClaim:
            claimName: my-pvc-claim-1
</code></pre>
<p>Let me explain the workflow of the code. The code reads from the textdata.txt file, which contains the indices of links to be scraped, e.g. from 100 to 150; then it scrapes the profiles, inserts them into the data.db file, and then writes to the textdata.txt file the sequence to be scraped in the next run, e.g. 150 to 200.</p>
<p>First, a Kubernetes volume mount overwrites the container's original file system at <code>/data-persistent/</code>, which is why your files disappear.</p> <p>To solve such a case you have several options.</p> <p><strong>Solution 1</strong></p> <ul> <li>Edit your Dockerfile to copy the local data to <code>/tmp-data-persistent</code> instead.</li> <li>Then add an &quot;init container&quot; that copies the content of <code>/tmp-data-persistent</code> to <code>/data-persistent</code>. This seeds the volume with the data, and persistence applies from then on.</li> </ul> <p><strong>Solution 2</strong></p> <ul> <li><p>It is not good practice to bake data into Docker images: it increases image size and couples the code and data change pipelines together.</p> </li> <li><p>It is better to keep the data in shared storage such as S3 and let the &quot;init container&quot; compare and sync the data.</p> </li> </ul> <p>If a cloud service like S3 is not available:</p> <ul> <li><p>You can use a persistent volume type that supports multiple read/write mounts.</p> </li> <li><p>Attach the same volume to another deployment (using a busybox image, for example) and do the copy with &quot;kubectl cp&quot;.</p> </li> <li><p>Scale the temporary deployment to zero after finalizing the copy; you can also make this part of a CI pipeline.</p> </li> </ul>
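<p>A minimal sketch of the init-container approach from Solution 1, based on the deployment in the question (the image name and file paths are assumptions): the application image keeps its seed files in <code>/tmp-data-persistent</code>, and the init container copies them onto the volume only when the volume is still empty, so later restarts don't overwrite data the application has already written.</p> <pre class="lang-yaml prettyprint-override"><code>spec:
  template:
    spec:
      initContainers:
      - name: seed-data
        image: image-name   # same image; its Dockerfile copies data.db/textdata.txt to /tmp-data-persistent
        command:
        - sh
        - -c
        # only seed on first run, i.e. when the marker file is missing from the volume
        - '[ -e /data-persistent/data.db ] || cp -r /tmp-data-persistent/. /data-persistent/'
        volumeMounts:
        - mountPath: /data-persistent
          name: tester
</code></pre>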
<p>I am trying to get / print the name of my current <code>kubernetes</code> context as it is configured in <code>~/.kube/config</code> using <code>client-go</code>.</p> <p>I have managed to authenticate and get the <code>*rest.Config</code> object</p> <pre class="lang-golang prettyprint-override"><code>	config, err = clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&amp;clientcmd.ClientConfigLoadingRules{ExplicitPath: pathToKubeConfig},
		&amp;clientcmd.ConfigOverrides{
			CurrentContext: &quot;&quot;,
		}).ClientConfig()
</code></pre> <p>but I don't see any relevant fields in the <code>config</code> struct.</p> <p>Note that despite the fact I am passing an empty string (<code>&quot;&quot;</code>) in the <code>ConfigOverrides</code>, the <code>config</code> object returned provides me a <code>kubernetes.Clientset</code> that is based on my current <code>kubectl</code> context.</p>
<p>The function <code>ClientConfig()</code> returns the Kubernetes API client config, so it has no information about your config file.</p> <p>To get the current context, you need to call <code>RawConfig()</code>; then there is a field called <code>CurrentContext</code>.</p> <p>The following code should work.</p> <pre class="lang-golang prettyprint-override"><code>	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&amp;clientcmd.ClientConfigLoadingRules{ExplicitPath: pathToKubeConfig},
		&amp;clientcmd.ConfigOverrides{
			CurrentContext: &quot;&quot;,
		}).RawConfig()

	currentContext := config.CurrentContext
</code></pre>
<p>I use the helm-chart ingress-nginx (4.0.x) to create the ingress-controller and use it to direkt tcp traffic via a node-port to my deployment (which works perfectly). But I see that the ingress-controller constantly opens and closes tcp connection to the pod of my deployment. Does anybody know why this happens or how I can configure it?</p> <p>My current configuration for the ingress-nginx chart is</p> <pre class="lang-yaml prettyprint-override"><code>ingress-nginx: controller: replicaCount: 2 nodeSelector: beta.kubernetes.io/os: linux admissionWebhooks: patch: nodeSelector: beta.kubernetes.io/os: linux service: annotations: service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot; defaultBackend: nodeSelector: beta.kubernetes.io/os: linux tcp: 6203: storage-he10/lb-plc-in-scale-service:6203 </code></pre> <pre><code>2021-09-08 06:19:07.3733333 Tcp-Connection closed (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.0.47:48494) 2021-09-08 06:19:07.3766667 Tcp-Connection accepted (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.0.47:48576) 2021-09-08 06:19:09.0666667 Tcp-Connection closed (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.0.47:48518) 2021-09-08 06:19:09.0700000 Tcp-Connection accepted (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.1.83:58122) 2021-09-08 06:19:13.3900000 Tcp-Connection closed (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.0.47:48576) 2021-09-08 06:19:13.3933333 Tcp-Connection accepted (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.1.83:58156) 2021-09-08 06:19:15.0700000 Tcp-Connection closed (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.1.83:58122) 2021-09-08 06:19:15.0700000 Tcp-Connection accepted (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.1.83:58184) 2021-09-08 06:19:19.3933333 Tcp-Connection closed (PlcService: SVC_IN_SCALE, Port: 6203, Client: 10.244.1.83:58156) 2021-09-08 06:19:19.3966667 Tcp-Connection accepted (PlcService: SVC_IN_SCALE, Port: 6203, Client: 
10.244.0.47:48768) </code></pre> <p>and <code>helm template</code> produces following code for my deployment</p> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: v1 kind: Service metadata: name: lb-plc-in-scale-service namespace: storage-he10 labels: siteName: he10 spec: type: NodePort selector: app: plc-in-scale-service ports: # By default and for convenience, the `targetPort` is set to the same value as the `port` field. - port: 6203 ... --- # Source: he10/templates/servers.yaml kind: Deployment apiVersion: apps/v1 metadata: name: plc-in-scale-service namespace: storage-he10 labels: siteName: he10 spec: replicas: selector: matchLabels: app: plc-in-scale-service template: metadata: labels: app: plc-in-scale-service annotations: checksum/config: 4b3245c98480d806bcefeeea2890918ee4d272b2982c1f6fe0621fd323348231 spec: hostAliases: ... containers: - name: plc-in-scale-service image: ... ports: - containerPort: 6203 envFrom: - configMapRef: name: env-config-map-plc-in-scale-service terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: Always volumeMounts: - name: config-volume mountPath: /app/appsettings volumes: - name: config-volume configMap: name: appsettings-config-map-plc-in-scale-service items: - key: appsettings.XXX.json path: appsettings.Development.json nodeSelector: beta.kubernetes.io/os: linux schedulerName: default-scheduler imagePullSecrets: - name: ... ... </code></pre>
<p>I believe this is due to automated health probes coming from Azure's load balancer. There is a GitHub issue requesting an annotation to control this behavior, but no solution from Microsoft as of yet.</p> <p>See: <a href="https://github.com/Azure/AKS/issues/1394" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/1394</a></p>
<p>I have a helm 3 template created using the <code>helm create microservice</code> command. It has the below files.</p> <pre><code>/Chart.yaml
./values.yaml
./.helmignore
./templates/ingress.yaml
./templates/deployment.yaml
./templates/service.yaml
./templates/serviceaccount.yaml
./templates/hpa.yaml
./templates/NOTES.txt
./templates/_helpers.tpl
./templates/tests/test-connection.yaml
</code></pre> <p>I updated the values file based on my application. When I try to install the helm chart, it gives the below error message.</p> <pre><code>Error: UPGRADE FAILED: template: microservice/templates/ingress.yaml:20:8: executing &quot;microservice/templates/ingress.yaml&quot; at &lt;include &quot;microservice.labels&quot; .&gt;: error calling include: template: no template &quot;microservice.labels&quot; associated with template &quot;gotpl&quot;
helm.go:75: [debug] template: microservice/templates/ingress.yaml:20:8: executing &quot;microservice/templates/ingress.yaml&quot; at &lt;include &quot;microservice.labels&quot; .&gt;: error calling include: template: no template &quot;microservice.labels&quot; associated with template &quot;gotpl&quot;
</code></pre> <p>Here is the <code>ingress.yaml</code> file.</p> <pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include &quot;microservice.fullname&quot; . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare &quot;&gt;=1.18-0&quot; .Capabilities.KubeVersion.GitVersion)) }}
  {{- if not (hasKey .Values.ingress.annotations &quot;kubernetes.io/ingress.class&quot;) }}
  {{- $_ := set .Values.ingress.annotations &quot;kubernetes.io/ingress.class&quot; .Values.ingress.className}}
  {{- end }}
{{- end }}
{{- if semverCompare &quot;&gt;=1.19-0&quot; .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare &quot;&gt;=1.14-0&quot; .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include &quot;microservice.labels&quot; . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare &quot;&gt;=1.18-0&quot; .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            {{- if and .pathType (semverCompare &quot;&gt;=1.18-0&quot; $.Capabilities.KubeVersion.GitVersion) }}
            pathType: {{ .pathType }}
            {{- end }}
            backend:
              {{- if semverCompare &quot;&gt;=1.19-0&quot; $.Capabilities.KubeVersion.GitVersion }}
              service:
                name: {{ $fullName }}
                port:
                  number: {{ $svcPort }}
              {{- else }}
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
              {{- end }}
          {{- end }}
    {{- end }}
{{- end }}
</code></pre> <p>How do I add the <code>microservice.labels</code> template? Do I need to create a <code>microservice.labels.tpl</code> file?</p> <p>Any tips to fix this error?</p> <p>Thanks SR</p>
<p>I had copied the <code>ingress.yaml</code> file into a chart created by an older version of helm, and <code>microservice.labels</code> was missing from that chart's <code>_helpers.tpl</code> file. After copying in the newer version of the <code>_helpers.tpl</code> file as well, the deployment works now.</p>
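<p>For reference, the <code>microservice.labels</code> helper that recent versions of <code>helm create</code> generate in <code>templates/_helpers.tpl</code> looks roughly like this. Note that it in turn calls <code>microservice.chart</code> and <code>microservice.selectorLabels</code>, which must also be defined in the same file; copying the whole new <code>_helpers.tpl</code> is the safest fix.</p> <pre><code>{{- define &quot;microservice.labels&quot; -}}
helm.sh/chart: {{ include &quot;microservice.chart&quot; . }}
{{ include &quot;microservice.selectorLabels&quot; . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
</code></pre>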
<p>I have the following pods,</p> <pre><code>root@sea:scripts# kubectl get pods -l app=mubu7 -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES mubu71 2/2 Running 0 51m 10.244.1.215 spring &lt;none&gt; &lt;none&gt; mubu72 2/2 Running 0 51m 10.244.2.17 island &lt;none&gt; &lt;none&gt; </code></pre> <p>and each one has 2 containers:</p> <pre><code>root@sea:scripts# kubectl get pods mubu71 -o jsonpath='{.spec.containers[*].name}' | \ &gt; tr &quot; &quot; &quot;\n&quot; | uniq mubu711 mubu712 root@sea:scripts# kubectl get pods mubu72 -o jsonpath='{.spec.containers[*].name}' | \ &gt; tr &quot; &quot; &quot;\n&quot; | uniq mubu721 mubu722 </code></pre> <p>I want to <strong>pass a command to both pods and all their (4) containers</strong>, but the best I can come up with is</p> <pre><code>kubectl get pods \ -l app=mubu7 \ -o jsonpath='{.items[*].metadata.name}' | \ tr &quot; &quot; &quot;\n&quot; | uniq | \ xargs -I{} kubectl exec {} -- bash -c \ &quot;service ssh start&quot; </code></pre> <p>which outputs</p> <pre><code>Defaulted container &quot;mubu711&quot; out of: mubu711, mubu712 * Starting OpenBSD Secure Shell server sshd ...done. Defaulted container &quot;mubu721&quot; out of: mubu721, mubu722 * Starting OpenBSD Secure Shell server sshd ...done. </code></pre> <p>How can I modify the above code to <strong>run the command in all pod-containers and not just the first one</strong> or is there a preferable alternative approach?</p> <p>Any help will be much appreciated.</p>
<p>The following code iterates over all the pods with the label <code>app=mubu7</code>. It grabs each pod's name and all of its container names. The list of container names is converted into an array and iterated over for each pod, so the code works no matter how many containers each pod has (e.g. 1, 2, 3, etc.).</p> <pre><code># JSON-path expression to extract the pod name and the list of containers:
# {range .items[*]}{.metadata.name} {.spec.containers[*].name}{&quot;\n&quot;} {end}
# note that the pod name is stored in the 1st variable (pod)
# and the container(s) names are stored in the 2nd variable (containers)
kubectl get pods -l app=mubu7 -o jsonpath='{range .items[*]}{.metadata.name} {.spec.containers[*].name}{&quot;\n&quot;} {end}' |while read -r pod containers; do
    # 2nd variable consists of a space-separated container list; convert it to an array
    array=($containers);
    # inner loop to iterate over the containers of each pod (outer loop)
    for container in &quot;${array[@]}&quot;;do
        kubectl exec -i ${pod} -c &quot;$container&quot; &lt;/dev/null -- service ssh start
    done
done
</code></pre>
<p>I have been learning Kubernetes for some time (not a pro yet). I am using docker-desktop on Windows 11 with Kubernetes and everything works fine. But at some point I added an AKS (Azure Kubernetes) cluster to my test lab, and that AKS cluster was later deleted from the Azure Portal.</p> <p>So when I run <code>kubectl config view</code> I get the following output:</p> <pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://aksxxxxxxx-rg-aks-xxxxxx-xxxxxxxx.hcp.northeurope.azmk8s.io:443
  name: aksxxxxxxx
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: clusterUser_AKSRG_aksxxxxxxx
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
- name: clusterUser_RG-AKS_aksxxxxxxx
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
</code></pre> <p>So what I did: I ran <code>kubectl config get-contexts</code> and deleted the unused AKS context using <code>kubectl config delete-context aksxxxxxxx</code>, and now there is only one context, the docker-desktop one:</p> <pre><code>CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
*         docker-desktop   docker-desktop   docker-desktop
</code></pre> <p>Which is fine. My question is: how do I clean up the view so it does not show unused (dead) clusters and users that no longer exist? Or am I taking the wrong approach?</p>
<p><code>kubectl config view</code> shows you your whole configuration under <em>.kube/config</em></p> <p><code>kubectl config get-contexts</code> will give you an output about your contexts. A context is the allocation between cluster and user.</p> <p>Since you only have deleted the allocation (context) you still have the cluster and user in your <em>.kube/config</em> which is the output of your <code>kubectl config view</code>.</p> <p>To delete the cluster you can use <code>kubectl config delete-cluster aksxxxxxxx</code>. To delete the user you can use <code>kubectl config unset users.clusterUser_RG-AKS_aksxxxxxxx</code></p>
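<p>Putting it together for the entries in the question (the names below are taken from the posted <code>kubectl config view</code> output):</p> <pre><code>kubectl config delete-context aksxxxxxxx
kubectl config delete-cluster aksxxxxxxx
kubectl config unset users.clusterUser_AKSRG_aksxxxxxxx
kubectl config unset users.clusterUser_RG-AKS_aksxxxxxxx
kubectl config view   # verify only docker-desktop remains
</code></pre>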
<p>I have a kube-prometheus deployed to multiple environments using kustomize.</p> <p>kube-prometheus is a base and each environment is an overlay. Let's say I want to deploy dashboards to overlays, which means I need to deploy the same ConfigMaps and the same patch to each overlay.</p> <p>Ideally, I want to avoid changing the base as it is declared outside of my repo and to keep things DRY and not to copy the same configs all over the place.</p> <p>Is there a way to achieve this?</p> <p>Folder structure:</p> <pre><code>/base/
  /kube-prometheus/
/overlays/
  /qa/       &lt;---
  /dev/      &lt;--- I want to share resources+patches between those
  /staging/  &lt;---
</code></pre>
<p>The proper way to do this is using <strong>components</strong>.</p> <p>Components can encapsulate both resources and patches together. In my case, I wanted to add ConfigMaps (resource) and mount these ConfigMaps into my Deployment (patch) without repeating the patches.</p> <p>So my overlay would look like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base/kube-prometheus/ # Base

components:
  - ../../components/grafana-aws-dashboards/ # Folder with kustomization.yaml that includes both resources and patches
</code></pre> <p>And this is the component:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

resources:
  - grafana-dashboard-aws-apigateway.yaml
  - grafana-dashboard-aws-auto-scaling.yaml
  - grafana-dashboard-aws-ec2-jwillis.yaml
  - grafana-dashboard-aws-ec2.yaml
  - grafana-dashboard-aws-ecs.yaml
  - grafana-dashboard-aws-elasticache-redis.yaml
  - grafana-dashboard-aws-elb-application-load-balancer.yaml
  - grafana-dashboard-aws-elb-classic-load-balancer.yaml
  - grafana-dashboard-aws-lambda.yaml
  - grafana-dashboard-aws-rds-os-metrics.yaml
  - grafana-dashboard-aws-rds.yaml
  - grafana-dashboard-aws-s3.yaml
  - grafana-dashboard-aws-storagegateway.yaml

patchesStrategicMerge:
  - grafana-mount-aws-dashboards.yaml
</code></pre> <p>This approach is documented here:<br /> <a href="https://kubectl.docs.kubernetes.io/guides/config_management/components/" rel="noreferrer">https://kubectl.docs.kubernetes.io/guides/config_management/components/</a></p>
<p>I am trying to understand how we can create circuit breakers for Cloud Run services. Unlike in GKE, where we use an Istio-style service mesh, how do we implement the same thing in Cloud Run?</p>
<p>On GKE you'd <a href="https://cloud.google.com/traffic-director/docs/configure-advanced-traffic-management#circuit-breaking" rel="nofollow noreferrer">set up a circuit breaker</a> to prevent overloading your legacy backend systems from a surge in requests.</p> <p>To accomplish the same on Cloud Run or Cloud Functions, you can set a <a href="https://cloud.google.com/run/docs/configuring/max-instances" rel="nofollow noreferrer">maximum number of instances</a>. From that documentation:</p> <blockquote> <p>Specifying maximum instances in Cloud Run allows you to limit the scaling of your service in response to incoming requests, although this maximum setting can be exceeded for a brief period due to circumstances such as <a href="https://cloud.google.com/run/docs/about-instance-autoscaling#spikes" rel="nofollow noreferrer">traffic spikes</a>. Use this setting as a way to control your costs or to limit the number of connections to a backing service, such as to a database.</p> </blockquote>
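<p>For example, capping a Cloud Run service at a fixed number of container instances is a single flag (the service name and limit here are placeholders):</p> <pre><code>gcloud run services update my-service --max-instances=10
</code></pre> <p>Once the limit is reached, requests that cannot be served are briefly queued and may then be rejected, which plays the role the circuit breaker does in a mesh.</p>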
<p>I have used the following configurations to deploy an app on minikube.</p> <p><strong>Deployment</strong>:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: angular-app
  template:
    metadata:
      labels:
        run: angular-app
    spec:
      containers:
      - name: angular-app
        image: nheidloff/angular-app
        ports:
        - containerPort: 80
        - containerPort: 443
</code></pre> <p><strong>Service:</strong></p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: angular-app
  labels:
    run: angular-app
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
</code></pre> <p><strong>Service description:</strong></p> <pre><code>Name:                     angular-app
Namespace:                default
Labels:                   run=angular-app
Annotations:              &lt;none&gt;
Selector:                 &lt;none&gt;
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.174.98
IPs:                      10.102.174.98
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  31503/TCP
Endpoints:                172.17.0.3:80,172.17.0.4:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &lt;none&gt;
</code></pre> <p>When I try to access the endpoints, the links are not responding. However, after using <code>minikube service angular-app</code>, the following showed up:</p> <pre><code>|-----------|-------------|-------------|---------------------------|
| NAMESPACE |    NAME     | TARGET PORT |            URL            |
|-----------|-------------|-------------|---------------------------|
| default   | angular-app | http/80     | http://192.168.49.2:31503 |
|-----------|-------------|-------------|---------------------------|
🏃  Starting tunnel for service angular-app.
|-----------|-------------|-------------|------------------------|
| NAMESPACE |    NAME     | TARGET PORT |          URL           |
|-----------|-------------|-------------|------------------------|
| default   | angular-app |             | http://127.0.0.1:60611 |
|-----------|-------------|-------------|------------------------|
</code></pre> <p>With this IP, <code>http://127.0.0.1:60611</code>, I'm able to access the app. What is the use of the endpoints given in the service description? How do I access each replica? Say if I have 4 replicas, how do I access each one of them?</p>
<ul> <li><p>The endpoints provided in the service description are the endpoints for each of the pods <code>172.17.0.3:80,172.17.0.4:80</code>, when you deploy more replicas you will have more endpoints.</p> </li> <li><p>The angular-app service is bonded to port number <code>31503</code> and you can access your service on this port from cluster nodes (not your host machine).</p> </li> <li><p><code>minikube service angular-app</code> will create a tunnel between your host machine and the cluster nodes on port <code>60611</code>. This means anything that comes on <code>127.0.0.1:60611</code> will be redirected to <code>192.168.49.2:31503</code> and then one of the available endpoints.</p> </li> <li><p>The service will take care of balancing the load between all replicas automatically and you don't need to worry about it.</p> </li> <li><p>if you would like to access a specific pod you can use the below command:</p> </li> </ul> <pre><code>kubectl port-forward ${POD_NAME} 80:80 </code></pre> <p>you need to replace the pod name from the above command and the command assumes that port <code>80</code> is available on your machine.</p>
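<p>To reach one particular replica, a sketch using the labels from the question's deployment: list the pods first, then port-forward to the one you choose (the pod name below is a placeholder for a real generated name).</p> <pre><code># list the replicas behind the service, with their endpoint IPs
kubectl get pods -l run=angular-app -o wide

# forward local port 8080 to port 80 of one specific replica
kubectl port-forward angular-app-xxxxx-yyyyy 8080:80
</code></pre>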
<p>What might be the root cause that I got nothing from the below command?</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get --raw '/apis/custom.metrics.k8s.io/v1beta1/namespace/default/pods/*/' | jq
Error from server (NotFound): the server could not find the requested resource
</code></pre> <p>I checked '/apis/custom.metrics.k8s.io/v1beta1' as follows.</p> <pre class="lang-sh prettyprint-override"><code>kubectl get --raw '/apis/custom.metrics.k8s.io/v1beta1' | jq
{
  &quot;kind&quot;: &quot;APIResourceList&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;groupVersion&quot;: &quot;custom.metrics.k8s.io/v1beta1&quot;,
  &quot;resources&quot;: [
    {
      &quot;name&quot;: &quot;namespaces/nginx_vts_server_requests_per_second&quot;,
      &quot;singularName&quot;: &quot;&quot;,
      &quot;namespaced&quot;: false,
      &quot;kind&quot;: &quot;MetricValueList&quot;,
      &quot;verbs&quot;: [
        &quot;get&quot;
      ]
    }
  ]
}
</code></pre> <p>And the adapter, which was deployed via a <a href="https://prometheus-community.github.io/helm-charts" rel="nofollow noreferrer">Helm Chart</a>, looks healthy.</p> <pre class="lang-sh prettyprint-override"><code>kubectl get apiservice | grep adapter
v1beta1.custom.metrics.k8s.io   monitoring/prometheus-adapter   True   11m
</code></pre> <p>Can anybody help? Thanks in advance.</p>
<pre><code>kubectl get --raw '/apis/custom.metrics.k8s.io/v1beta1/namespaces/*/metrics/nginx_vts_server_requests_per_second' | jq
</code></pre> <p>Did you try the namespace metric <code>nginx_vts_server_requests_per_second</code> that you pasted from the Kubernetes discovery information?</p> <p>You may need to change the prometheus adapter's configuration to include pod-related metrics.</p> <p>The <a href="https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md" rel="nofollow noreferrer">prometheus-adapter walkthrough docs</a> may be useful to you.</p>
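<p>If pod-level metrics are missing, a sketch of a prometheus-adapter rule that exposes the metric on pods as well; the label names <code>kubernetes_namespace</code> and <code>kubernetes_pod_name</code> are assumptions and must match your Prometheus relabelling config:</p> <pre class="lang-yaml prettyprint-override"><code>rules:
  - seriesQuery: 'nginx_vts_server_requests_per_second'
    resources:
      overrides:
        # map Prometheus labels to Kubernetes resources (label names assumed)
        kubernetes_namespace: {resource: &quot;namespace&quot;}
        kubernetes_pod_name: {resource: &quot;pod&quot;}
    name:
      matches: &quot;nginx_vts_server_requests_per_second&quot;
    metricsQuery: 'sum(&lt;&lt;.Series&gt;&gt;{&lt;&lt;.LabelMatchers&gt;&gt;}) by (&lt;&lt;.GroupBy&gt;&gt;)'
</code></pre> <p>After changing the adapter config and restarting the adapter, re-run the discovery query; a <code>pods/...</code> entry should then appear in the resource list.</p>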
<p>Using Calico as CNI and CRI-O. DNS settings properly configured. Installed NGINX ingress controller via official documentation page of NGINX using helm. Set <code>replicaset</code> to 2 when installing.</p> <p>After that used this file for creating 3 objects: <code>Deployment</code>, <code>Service</code> for exposing web server and <code>Ingress</code>.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null name: test-nginx spec: replicas: 1 selector: matchLabels: app: nginx-service-pod template: metadata: labels: app: nginx-service-pod spec: containers: - image: nginx name: test-nginx --- apiVersion: v1 kind: Service metadata: name: nginx-service spec: type: ClusterIP selector: app: nginx-service-pod ports: - port: 80 targetPort: 80 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress spec: rules: - host: k8s.example.com http: paths: - path: /test pathType: Prefix backend: service: name: nginx-service port: number: 80 ... </code></pre> <p>Tried to test deployment service by curling it and it is working correct:</p> <pre><code># curl http://10.103.88.163 &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; &lt;style&gt; html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } &lt;/style&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Welcome to nginx!&lt;/h1&gt; &lt;p&gt;If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.&lt;/p&gt;

&lt;p&gt;For online documentation and support please refer to
&lt;a href=&quot;http://nginx.org/&quot;&gt;nginx.org&lt;/a&gt;.&lt;br/&gt;
Commercial support is available at
&lt;a href=&quot;http://nginx.com/&quot;&gt;nginx.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>But when I try to curl the Ingress, I get an error:</p> <pre><code># curl http://k8s.example.com/test
curl: (7) Failed to connect to k8s.example.com port 80: Connection refused
</code></pre> <p>Why is this happening? As far as I can see, there is no misconfiguration in the objects' configuration.</p>
<p>This problem should be resolved by adding</p> <pre><code>spec:
  template:
    spec:
      hostNetwork: true
</code></pre> <p>to the ingress controller yaml manifest. For more please check <a href="https://github.com/kubernetes/ingress-nginx/issues/4799" rel="nofollow noreferrer">this github issue</a> and <a href="https://github.com/kubernetes/ingress-nginx/issues/4799#issuecomment-560406420" rel="nofollow noreferrer">this answer</a>.</p>
<p>I obtained an Intermediate SSL certificate from SSL.com recently. I'm running some services in AKS (Azure Kubernetes Service). Earlier I was using Let's Encrypt with cert-manager, but I want to use SSL.com as the CA going forward. So basically, I obtained chained.crt and the private.key.</p> <p>The chained.crt consists of 4 certificates, like below.</p> <pre><code>-----BEGIN CERTIFICATE-----
abc
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
def
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
ghi
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
jkl
-----END CERTIFICATE-----
</code></pre> <p>The first step was to create a secret as below. The content I added in tls.crt and tls.key was the base64-encoded data.</p> <pre><code>cat chained.crt | base64 | tr -d '\n'
cat private.key | base64 | tr -d '\n'

apiVersion: v1
kind: Secret
metadata:
  name: ca-key-pair
  namespace: cert
data:
  tls.crt: &lt;crt&gt;
  tls.key: &lt;pvt&gt;
</code></pre> <p>Then I created the Issuer referring to the secret I created above.</p> <pre><code>apiVersion: cert-manager.io/v1beta1
kind: Issuer
metadata:
  name: my-issuer
  namespace: cert
spec:
  ca:
    secretName: ca-key-pair
</code></pre> <p>The issue I'm having here is that when I create the Issuer, it gives an error like this:</p> <pre><code>Status:
  Conditions:
    Last Transition Time:  2022-01-27T16:09:02Z
    Message:               Error getting keypair for CA issuer: certificate is not a CA
    Reason:                ErrInvalidKeyPair
    Status:                False
    Type:                  Ready
Events:
  Type     Reason             Age                From          Message
  ----     ------             ----               ----          -------
  Warning  ErrInvalidKeyPair  18s (x2 over 18s)  cert-manager  Error getting keypair for CA issuer: certificate is not a CA
</code></pre> <p>I searched and found this too: <a href="https://stackoverflow.com/questions/45796058/how-do-i-add-an-intermediate-ssl-certificate-to-kubernetes-ingress-tls-configura">How do I add an intermediate SSL certificate to Kubernetes ingress TLS configuration?</a> and followed the things mentioned there too.
But I am still getting the same error.</p>
<p>Perfect! After spending more time on this, I was lucky enough to make it work. In this case, you don't need to create an Issuer or ClusterIssuer at all.</p> <p>First, create a TLS secret by specifying your private.key and the certificate.crt.</p> <pre><code>kubectl create secret tls ssl-secret --key private.key --cert certificate.crt
</code></pre> <p>After creating the secret, you can directly refer to that Secret in the Ingress.</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend-app-ingress
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
    nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot;
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
spec:
  tls:
    - hosts:
        - &lt;domain-name&gt;
      secretName: ssl-secret
  rules:
    - host: &lt;domain-name&gt;
      http:
        paths:
          - backend:
              serviceName: backend
              servicePort: 80
            path: /(.*)
</code></pre> <p>Then verify that everything's working. For me the above process worked.</p>
<p>Is there a way to execute some code in pod containers when ConfigMaps are updated via helm, preferably without a custom sidecar doing constant file watching?</p> <p>I am thinking along the lines of the postStart and preStop lifecycle events of Kubernetes, but in my case a &quot;postPatch&quot;.</p>
<p>This might be something a <code>post-install</code> or <code>post-upgrade</code> hook would be perfect for:</p> <p><a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">https://helm.sh/docs/topics/charts_hooks/</a></p> <p>You can trigger these jobs to start after an install (<code>post-install</code>) and/or after an upgrade (<code>post-upgrade</code>), and they will run to completion before the chart is considered installed or upgraded.</p> <p>So you can do the upgrade, and as part of that upgrade the hook would trigger afterwards and run your update code. I know the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start" rel="nofollow noreferrer">nginx ingress controller</a> chart does something like this.</p>
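<p>A minimal sketch of such a hook: a Job annotated so Helm runs it only after an upgrade completes. The image and command are placeholders for whatever your &quot;post-patch&quot; code is; here the assumed action is restarting a hypothetical deployment so its pods re-read the updated ConfigMap.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: post-upgrade-refresh
  annotations:
    &quot;helm.sh/hook&quot;: post-upgrade
    &quot;helm.sh/hook-weight&quot;: &quot;0&quot;
    &quot;helm.sh/hook-delete-policy&quot;: before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: refresh
          image: bitnami/kubectl   # placeholder image with kubectl available
          # placeholder action: roll the pods so they pick up the new ConfigMap
          command: [&quot;kubectl&quot;, &quot;rollout&quot;, &quot;restart&quot;, &quot;deployment/my-app&quot;]
</code></pre>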
<p>I need to take a daily backup of CockroachDB and push it to S3. It's running in an EKS cluster as a StatefulSet. Can anyone suggest the best method to do this, please?</p> <p>Thanks.</p>
<p>The best way is to use CockroachDB's built-in scheduled backups.</p> <pre><code>CREATE SCHEDULE schedule_label
  FOR BACKUP INTO 's3://test/backups/schedule_test?AWS_ACCESS_KEY_ID=x&amp;AWS_SECRET_ACCESS_KEY=x'
    WITH revision_history
    RECURRING '@daily';
</code></pre> <p>You can schedule the exact run time using the RECURRING parameter.</p> <pre><code>CREATE SCHEDULE schedule_database
  FOR BACKUP DATABASE movr INTO 's3://test/schedule-database?AWS_ACCESS_KEY_ID=x&amp;AWS_SECRET_ACCESS_KEY=x'
    WITH revision_history
    RECURRING '1 0 * * *';
</code></pre> <p>The documentation link is <a href="https://www.cockroachlabs.com/docs/stable/create-schedule-for-backup.html" rel="nofollow noreferrer">this</a>.</p>
<p>I have a <strong>multistage</strong> <code>dockerfile</code> which I'm deploying in k8s with the script as <code>ENTRYPOINT [&quot;./entrypoint.sh&quot;]</code>.</p> <p>Deployment is done through helm and the env is Azure. While creating the container, it errors out with <strong>&quot;./entrypoint.sh&quot;: permission denied: unknown</strong>.</p> <pre><code>Warning  Failed   14s (x3 over 31s)  kubelet  Error: failed to create containerd task: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: &quot;./entrypoint.sh&quot;: permission denied: unknown
Warning  BackOff  1s (x4 over 30s)   kubelet  Back-off restarting failed container
</code></pre> <p>I have given <code>chmod +x</code> to make it executable and <code>chmod 755</code> for permission.</p> <p><strong>Dockerfile</strong></p> <pre><code>##############
##  Build    #
##############
FROM repo.azurecr.io/maven:3.8.1-jdk-11 AS BUILD
ARG WORKDIR=/opt/work
COPY . $WORKDIR/
WORKDIR ${WORKDIR}
COPY ./settings.xml /root/.m2/settings.xml
RUN --mount=type=cache,target=/root/.m2/repository \
    mvn clean package -pl app -am
RUN rm /root/.m2/settings.xml
RUN rm ./settings.xml

#################
###  Runtime   #
#################
FROM repo.azurecr.io/openjdk:11-jre-slim as RUNTIME
RUN mkdir /opt/app \
    &amp;&amp; useradd -ms /bin/bash javauser \
    &amp;&amp; chown -R javauser:javauser /opt/app \
    &amp;&amp; apt-get update \
    &amp;&amp; apt-get install curl -y \
    &amp;&amp; rm -rf /var/lib/apt/lists/*
COPY --from=BUILD /opt/work/app/target/*.jar /opt/app/service.jar
COPY --from=BUILD /opt/work/entrypoint.sh /opt/app/entrypoint.sh
RUN chmod +x /opt/app/entrypoint.sh
RUN chmod 755 /opt/app/entrypoint.sh
WORKDIR /opt/app
USER javauser
ENTRYPOINT [&quot;./entrypoint.sh&quot;]
</code></pre> <p>PS: Please don't mark this as a duplicate of <a href="https://stackoverflow.com/a/46353378/2226710">https://stackoverflow.com/a/46353378/2226710</a>, as I have added <code>RUN chmod +x entrypoint.sh</code> and it didn't solve the issue.</p>
<p>Use <code>bash</code> (or your preferred shell if not <code>bash</code>) in the entrypoint:</p> <pre><code>ENTRYPOINT [ &quot;bash&quot;, &quot;-c&quot;, &quot;./entrypoint.sh&quot; ]
</code></pre> <p>This will run the entrypoint script even if you haven't set the script as executable (which I see you have).</p> <p>You can also use this similarly with other scripts, for example with Python:</p> <pre><code>ENTRYPOINT [ &quot;python&quot;, &quot;./entrypoint.py&quot; ]
</code></pre> <p>You could also try calling the script with the full executable path:</p> <pre><code>ENTRYPOINT [ &quot;/opt/app/entrypoint.sh&quot; ]
</code></pre>
<p>I'm running:</p> <pre><code>:~$ minikube version
minikube version: v1.20.0

Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.6&quot;, GitCommit:&quot;8a62859e515889f07e3e3be6a1080413f17cf2c3&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-04-15T03:28:42Z&quot;, GoVersion:&quot;go1.15.10&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.6&quot;, GitCommit:&quot;fbf646b339dc52336b55d8ec85c181981b86331a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-12-18T12:01:36Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
</code></pre> <p>Is it possible to upgrade ONLY Kubernetes in a minikube installation?</p>
<p>You can start <code>minikube</code> with a <code>k8s</code> version of your choice:</p> <pre><code>▶ minikube start --kubernetes-version=1.22.1
😄  minikube v1.23.0 on Darwin 11.6.1
✨  Using the virtualbox driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🎉  minikube 1.25.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.1
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'
</code></pre>
<p>I'm using the kubernetes go api. I have the IP of a pod, and I want to find the v1.pod object (or just name and namespace) of the pod that has that IP. How can I go about this?</p>
<p>Turns out you need to use a FieldSelector. The field in this case is <code>status.podIP</code>.</p> <pre class="lang-golang prettyprint-override"><code>import (
    [...]
    metav1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot;
    [...]
)

[...]

pods, err := clientset.CoreV1().Pods(&quot;&quot;).List(context.Background(), metav1.ListOptions{FieldSelector: &quot;status.podIP=&quot; + address})
</code></pre> <p>Where <code>clientset</code> is a <code>*kubernetes.Clientset</code> and <code>address</code> is the string IP of the pod.</p>
<p>How do I fix this? I am trying to get the version of kubectl, but it says the server was not found. I have already installed Kubernetes on my PC.</p> <p><a href="https://i.stack.imgur.com/LjtYv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LjtYv.png" alt="enter image description here" /></a></p>
<p>If you are using minikube, just starting minikube will sort this problem:</p> <pre><code>minikube start --memory=4096
</code></pre>
<p>I have several k8s clusters and I would like to monitor pod metrics (CPU/memory mainly). For that, I already have one central instance of Prometheus/Grafana and I want to use it to monitor pod metrics from all my k8s clusters.</p> <p>Sorry if the question was already asked, but all the tutorials I have read are about installing a dedicated Prometheus/Grafana instance on the cluster itself. I don't want that, since I already have Prometheus/Grafana running somewhere else. I just want to &quot;export&quot; metrics.</p> <p>I have metrics-server installed on each cluster, but I'm not sure if I need to deploy something else. Please advise me.</p> <p>So, how can I export my pod metrics to my Prometheus/Grafana instance?</p> <p>Thanks</p>
<p>Posting the answer as a community wiki, feel free to edit and expand.</p> <hr /> <p>You need to use <code>federation</code> for prometheus for this purpose.</p> <blockquote> <p>Federation allows a Prometheus server to scrape selected time series from another Prometheus server.</p> </blockquote> <p>Main idea of using <code>federation</code> is:</p> <blockquote> <p>Prometheus is a very flexible monitoring solution wherein each Prometheus server is able to act as a target for another Prometheus server in a highly-available, secure way. By configuring and using federation, Prometheus servers can scrape selected time series data from other Prometheus servers</p> </blockquote> <p>See example <a href="https://banzaicloud.com/blog/prometheus-federation/" rel="nofollow noreferrer">here</a>.</p>
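<p>As a sketch of what this looks like on the central Prometheus, the config below scrapes the <code>/federate</code> endpoint of a Prometheus running in one of the clusters. Note that federation still assumes a small Prometheus runs inside each cluster to be federated from; the target address and <code>match[]</code> selector here are assumptions, so adjust them to your setup:</p>

```yaml
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true            # keep labels as set by the cluster-local Prometheus
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="kubernetes-pods"}'   # which series to pull from the remote server
    static_configs:
      - targets:
          - 'prometheus.cluster-a.example.com:9090'   # assumed address of a cluster's Prometheus
```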
<p>I'm trying to set up the use of dotnet-monitor in a Windows pod. But if I understand correctly, there are no images for use on Windows nodes: <a href="https://hub.docker.com/_/microsoft-dotnet-monitor" rel="nofollow noreferrer">https://hub.docker.com/_/microsoft-dotnet-monitor</a>. Is there any way to install the dotnet-monitor utilities in the Dockerfile of a Windows pod to start collecting metrics from my Windows application?</p>
<p>The sidecar approach for setting up dotnet-monitor in a Windows container to get the diagnostics logs of a different container is currently not supported, as mentioned by <em><strong>Jander-MSFT</strong></em> in this <em><strong><a href="https://github.com/dotnet/dotnet-monitor/issues/1160" rel="nofollow noreferrer"><code>GitHub Issue</code></a></strong></em>, which might get resolved by this <em><strong><a href="https://github.com/dotnet/runtime/issues/63950" rel="nofollow noreferrer"><code>Issue</code></a>.</strong></em></p> <p><em><strong>As a solution, you will have to install the tool on the same Windows container by running the below command:</strong></em></p> <pre><code>dotnet tool install --global dotnet-monitor --version 6.0.0
</code></pre> <p>You can refer to this <em><strong><a href="https://www.hanselman.com/blog/exploring-your-net-applications-with-dotnetmonitor" rel="nofollow noreferrer"><code>blog</code></a></strong></em> by <em><strong>Scott Hanselman</strong></em> for more details on the same.</p>
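<p>A rough sketch of what that could look like in a Windows Dockerfile. The base image tag, paths, and app name here are all assumptions, and note that <code>dotnet tool install</code> needs the .NET SDK, so an SDK-based image (or a build stage that has the SDK) is required:</p>

```dockerfile
# Hypothetical sketch: install dotnet-monitor into the application container
# itself, since the sidecar approach is not supported on Windows.
FROM mcr.microsoft.com/dotnet/sdk:6.0-windowsservercore-ltsc2022

RUN dotnet tool install --global dotnet-monitor --version 6.0.0

# Global tools land in %USERPROFILE%\.dotnet\tools of the build-time user;
# add that folder to the machine PATH so the tool is callable at runtime.
RUN setx /M PATH "%PATH%;C:\Users\ContainerAdministrator\.dotnet\tools"

COPY ./app C:/app
WORKDIR C:/app

# Start the app; dotnet-monitor can then be launched in the same container, e.g.
#   dotnet-monitor collect --urls http://localhost:52323
ENTRYPOINT ["dotnet", "MyApp.dll"]
```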
<p>I'm running my Backend on <strong>Kubernetes</strong> on around <strong>250</strong> pods under <strong>15</strong> deployments; the backend is written in <strong>NODEJS</strong>.</p> <p>Sometimes after X number of hours (5&lt;X&lt;30) I'm getting <code>ENOTFOUND</code> in one of the PODS, as follows:</p> <pre><code>{
  &quot;name&quot;: &quot;main&quot;,
  &quot;hostname&quot;: &quot;entrypoint-sdk-54c8788caa-aa3cj&quot;,
  &quot;pid&quot;: 19,
  &quot;level&quot;: 50,
  &quot;error&quot;: {
    &quot;errno&quot;: -3008,
    &quot;code&quot;: &quot;ENOTFOUND&quot;,
    &quot;syscall&quot;: &quot;getaddrinfo&quot;,
    &quot;hostname&quot;: &quot;employees-service&quot;
  },
  &quot;msg&quot;: &quot;Failed calling getEmployee&quot;,
  &quot;time&quot;: &quot;2022-01-28T13:44:36.549Z&quot;,
  &quot;v&quot;: 0
}
</code></pre> <p>I'm running a stress test on the Backend of YY number of users per second, but I'm keeping this stress level steady and not changing it, and then it happens out of nowhere with no specific reason.</p> <p>Kubernetes is <strong>K3S</strong> Server Version: <strong>v1.21.5+k3s2</strong></p> <p>Any idea what might cause this weird <code>ENOTFOUND</code>?</p>
<p>I already saw your <a href="https://github.com/kubernetes/kubernetes/issues/107866" rel="nofollow noreferrer">same question on GitHub</a> and the reference to <a href="https://github.com/external-secrets/kubernetes-external-secrets/issues/860" rel="nofollow noreferrer">getaddrinfo ENOTFOUND with newest versions</a>.</p> <p>As per the comments, this issue does not appear in k3s 1.21, which is 1 version below yours. I know it's almost impossible, but any chance to try a similar setup on that version?</p> <p>And it seems the error comes from <a href="https://github.com/nodejs/node/blob/1125c8a814030f16fc995d897181ac9920f9d3db/lib/dns.js#L17-L24" rel="nofollow noreferrer">node/lib/dns.js</a>.</p> <pre><code>function errnoException(err, syscall, hostname) {
  // FIXME(bnoordhuis) Remove this backwards compatibility nonsense and pass
  // the true error to the user. ENOTFOUND is not even a proper POSIX error!
  if (err === uv.UV_EAI_MEMORY ||
      err === uv.UV_EAI_NODATA ||
      err === uv.UV_EAI_NONAME) {
    err = 'ENOTFOUND';
  }
</code></pre> <p>What I wanted to suggest is to check <a href="https://tech.findmypast.com/k8s-dns-lookup/" rel="nofollow noreferrer">Solving DNS lookup failures in Kubernetes</a>. The article describes the long, hard way of catching the same error you have, which also showed up from time to time.</p> <p>After investigating all the metrics, logs, etc., the solution there was installing the K8s cluster add-on called <a href="https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/" rel="nofollow noreferrer">Node Local DNS cache</a>, which</p> <blockquote> <p>improves Cluster DNS performance by running a dns caching agent on cluster nodes as a DaemonSet. In today's architecture, Pods in ClusterFirst DNS mode reach out to a kube-dns serviceIP for DNS queries. This is translated to a kube-dns/CoreDNS endpoint via iptables rules added by kube-proxy. 
With this new architecture, Pods will reach out to the dns caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent will query kube-dns service for cache misses of cluster hostnames(cluster.local suffix by default).</p> <p>Motivation</p> <ul> <li>With the current DNS architecture, it is possible that Pods with the highest DNS QPS have to reach out to a different node, if there is no local kube-dns/CoreDNS instance. Having a local cache will help improve the latency in such scenarios.</li> <li>Skipping iptables DNAT and connection tracking will help reduce conntrack races and avoid UDP DNS entries filling up conntrack table.</li> <li>Connections from local caching agent to kube-dns service can be upgraded to TCP. TCP conntrack entries will be removed on connection<br /> close in contrast with UDP entries that have to timeout (default<br /> nf_conntrack_udp_timeout is 30 seconds)</li> <li>Upgrading DNS queries from UDP to TCP would reduce tail latency attributed to dropped UDP packets and DNS timeouts usually up to 30s<br /> (3 retries + 10s timeout). Since the nodelocal cache listens for UDP<br /> DNS queries, applications don't need to be changed.</li> <li>Metrics &amp; visibility into dns requests at a node level.</li> <li>Negative caching can be re-enabled, thereby reducing number of queries to kube-dns service.</li> </ul> </blockquote>
<p>I've been trying to get over this but I'm out of ideas for now hence I'm posting the question here.</p> <p>I'm experimenting with the Oracle Cloud Infrastructure (OCI) and I wanted to create a Kubernetes cluster which exposes some service.</p> <p><strong>The goal is:</strong></p> <ul> <li>A running managed Kubernetes cluster (OKE)</li> <li>2 nodes at least</li> <li>1 service that's accessible for external parties</li> </ul> <p><strong>The infra looks the following:</strong></p> <ul> <li>A VCN for the whole thing</li> <li>A private subnet on 10.0.1.0/24</li> <li>A public subnet on 10.0.0.0/24</li> <li>NAT gateway for the private subnet</li> <li>Internet gateway for the public subnet</li> <li>Service gateway</li> <li>The corresponding security lists for both subnets which I won't share right now unless somebody asks for it</li> <li>A containerengine K8S (OKE) cluster in the VCN with public Kubernetes API enabled</li> <li>A node pool for the K8S cluster with 2 availability domains and with 2 instances right now. 
The instances are ARM machines with 1 OCPU and 6GB RAM running Oracle-Linux-7.9-aarch64-2021.12.08-0 images.</li> <li>A namespace in the K8S cluster (call it staging for now)</li> <li>A deployment which refers to a custom NextJS application serving traffic on port 3000</li> </ul> <p>And now it's the point where I want to expose the service running on port 3000.</p> <p><strong>I have 2 obvious choices:</strong></p> <ul> <li>Create a LoadBalancer service in K8S which will spawn a classic Load Balancer in OCI, set up it's listener and set up the backendset referring to the 2 nodes in the cluster, plus it adjusts the subnet security lists to make sure traffic can flow</li> <li>Create a Network Load Balancer in OCI and create a NodePort on K8S and manually configure the NLB to the ~same settings as the classic Load Balancer</li> </ul> <p>The first one works perfectly fine but I want to use this cluster with minimal costs so I decided to experiment with option 2, the NLB since it's way cheaper (zero cost).</p> <p>Long story short, everything works and I can access the NextJS app on the IP of the NLB most of the time but sometimes I couldn't. I decided to look it up what's going on and turned out the NodePort that I exposed in the cluster isn't working how I'd imagine.</p> <p>The service behind the NodePort is only accessible on the Node that's running the pod in K8S. Assume NodeA is running the service and NodeB is just there chilling. If I try to hit the service on NodeA, everything is fine. But when I try to do the same on NodeB, I don't get a response at all.</p> <p>That's my problem and I couldn't figure out what could be the issue.</p> <p><strong>What I've tried so far:</strong></p> <ul> <li>Switching from ARM machines to AMD ones - no change</li> <li>Created a bastion host in the public subnet to test which nodes are responding to requests. 
Turned out only the node responds that's running the pod.</li> <li>Created a regular LoadBalancer in K8S with the same config as the NodePort (in this case OCI will create a classic Load Balancer), that works perfectly</li> <li>Tried upgrading to Oracle 8.4 images for the K8S nodes, didn't fix it</li> <li>Ran the Node Doctor on the nodes, everything is fine</li> <li>Checked the logs of kube-proxy, kube-flannel, core-dns, no error</li> <li>Since the cluster consists of 2 nodes, I gave it a try and added one more node and the service was not accessible on the new node either</li> <li>Recreated the cluster from scratch</li> </ul> <p><strong>Edit:</strong> Some update. I've tried to use a DaemonSet instead of a regular Deployment for the pod to ensure that, as a temporary solution, all nodes are running at least one instance of the pod and, surprise: the node that was previously not responding to requests on that specific port still does not, even though a pod is running on it.</p> <p><strong>Edit2:</strong> Originally I was running the latest K8S version for the cluster (v1.21.5) and I tried downgrading to v1.20.11 and unfortunately the issue is still present.</p> <p><strong>Edit3:</strong> Checked if the NodePort is open on the node that's not responding and it is, at least kube-proxy is listening on it.</p> <pre><code>tcp        0      0 0.0.0.0:31600           0.0.0.0:*               LISTEN      16671/kube-proxy
</code></pre> <p><strong>Edit4:</strong> Tried adding whitelisting iptables rules but it didn't change anything.</p> <pre><code>[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P FORWARD ACCEPT
[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P INPUT ACCEPT
[opc@oke-cdvpd5qrofa-nyx7mjtqw4a-svceq4qaiwq-0 ~]$ sudo iptables -P OUTPUT ACCEPT
</code></pre> <p><strong>Edit5:</strong> Just as a trial, I created a LoadBalancer once more to verify if I'd gone completely mental and just didn't notice this error before, or if it really works. Funny thing: it works perfectly fine through the classic load balancer's IP. <strong>But</strong> when I try to send a request to the nodes directly on the port that was opened for the load balancer (it's 30679 for now), I get a response only from the node that's running the pod. From the other, still nothing; yet through the load balancer, I get 100% successful responses.</p> <p>Bonus, here's the iptables from the node that's not responding to requests; not too sure what to look for:</p> <pre><code>[opc@oke-cn44eyuqdoq-n3ewna4fqra-sx5p5dalkuq-1 ~]$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
ACCEPT     all  --  10.244.0.0/16        anywhere
ACCEPT     all  --  anywhere             10.244.0.0/16

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain KUBE-EXTERNAL-SERVICES (2 references)
target     prot opt source               destination

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  --  !loopback/8          loopback/8           /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
</code></pre> <p>Service spec (the running one since it was generated using Terraform):</p> <pre><code>{
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;kind&quot;: &quot;Service&quot;,
  &quot;metadata&quot;: {
    &quot;creationTimestamp&quot;: &quot;2022-01-28T09:13:33Z&quot;,
    &quot;name&quot;: &quot;web-staging-service&quot;,
    &quot;namespace&quot;: &quot;web-staging&quot;,
    &quot;resourceVersion&quot;: &quot;22542&quot;,
    &quot;uid&quot;: &quot;c092f99b-7c72-4c32-bf27-ccfa1fe92a79&quot;
  },
  &quot;spec&quot;: {
    &quot;clusterIP&quot;: &quot;10.96.99.112&quot;,
    &quot;clusterIPs&quot;: [
      &quot;10.96.99.112&quot;
    ],
    &quot;externalTrafficPolicy&quot;: &quot;Cluster&quot;,
    &quot;ipFamilies&quot;: [
      &quot;IPv4&quot;
    ],
    &quot;ipFamilyPolicy&quot;: &quot;SingleStack&quot;,
    &quot;ports&quot;: [
      {
        &quot;nodePort&quot;: 31600,
        &quot;port&quot;: 3000,
        &quot;protocol&quot;: &quot;TCP&quot;,
        &quot;targetPort&quot;: 3000
      }
    ],
    &quot;selector&quot;: {
      &quot;app&quot;: &quot;frontend&quot;
    },
    &quot;sessionAffinity&quot;: &quot;None&quot;,
    &quot;type&quot;: &quot;NodePort&quot;
  },
  &quot;status&quot;: {
    &quot;loadBalancer&quot;: {}
  }
}
</code></pre> <p>Any ideas are appreciated. Thanks guys.</p>
<p>It might not be the ideal fix, but can you try changing the <code>externalTrafficPolicy</code> to <code>Local</code>? With this setting, the health check fails on the nodes which don't run the application, so traffic will only be forwarded to the nodes where the application is running. Setting <code>externalTrafficPolicy</code> to <code>Local</code> is also a requirement to preserve the source IP of the connection. Also, can you share the health check config for both the NLB and LB that you are using? When you change the <code>externalTrafficPolicy</code>, note that the health check for the LB would change, and the same needs to be applied to the NLB.</p> <p>Edit: Also note that you need a security list/network security group added to your node subnet/nodepool which allows traffic on all protocols from the worker node subnet.</p>
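<p>For reference, that is a one-field change on the service from the question (a sketch showing only the relevant fields):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-staging-service
  namespace: web-staging
spec:
  type: NodePort
  externalTrafficPolicy: Local   # health checks now fail on nodes without the pod
  selector:
    app: frontend
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 31600
```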
<p>I am using a Kubernetes deployment. I wish to start the pods one by one, not all at once. Is there any way? I do not want to use a StatefulSet.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-container-image
          ports:
            - name: https
              containerPort: 8443
          volumeMounts:
            - mountPath: /tmp
              name: app-vol
      restartPolicy: Always
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: app-vol
          persistentVolumeClaim:
            claimName: app-vol
</code></pre>
<blockquote> <p>I wish to start the pods one by one. Not all at once. Is there any way. I do not want to use statefulSet.</p> </blockquote> <p>Unfortunately you are trying to accomplish something with <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer"><code>Deployment</code></a> that is achievable with <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer"><code>Statefulset</code></a>. Is it possible to achieve the desired effect with deployment? A similar one can be obtained, but it will require <a href="https://stackoverflow.com/questions/66365892/limit-first-time-deployment-of-pods-in-kubernetes-using-kind-deployment/66641268#66641268">creating a custom script</a> and it will be a non-standard solution.</p> <p>However, this will not be a recommended solution. Statefulset works well for feed control during starts, and you shouldn't use anything else here.</p> <p>To sum up: you should change the assumptions a bit and accept the statefulset, thanks to which you will achieve the result you want, or you should not control the order in which the pods are run.</p> <p>As <a href="https://stackoverflow.com/users/4551228/rohatgisanat" title="620 reputation">rohatgisanat</a> mentioned in the comment:</p> <blockquote> <p>Deployments do not guarantee ordered starts. StatefulSets are the way to achieve what you require.</p> </blockquote>
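<p>For illustration, here is the Deployment from the question rewritten as a StatefulSet sketch. The headless service name is an assumption; with the default <code>OrderedReady</code> pod management policy, pod N+1 is only started once pod N is Running and Ready:</p>

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app-headless        # assumed headless Service governing the set
  replicas: 3
  podManagementPolicy: OrderedReady   # the default: start pods one by one, in order
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-container-image
          ports:
            - name: https
              containerPort: 8443
```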
<p>I deploy my app into the Kubernetes cluster using Helm. The app works with a database, so I have to run DB migrations before installing a new version of the app. I run migrations with a Kubernetes Job object using a Helm &quot;pre-upgrade&quot; hook.</p> <p>The problem is that when the migration job starts, old-version pods are still working with the database. They can block objects in the database, and because of that the migration job may fail.</p> <p>So, I want to somehow automatically stop all the pods in the cluster before the migration job starts. Is there any way to do that using Kubernetes + Helm? I will appreciate all the answers.</p>
<p>There are two ways I can see that you can do this.</p> <p>The first option is to scale down the pods before the deployment (for example, via Jenkins, CircleCI, GitLab CI, etc.):</p> <pre><code>kubectl scale deployment/{deployment-name} --replicas=0 -n {namespace}
helm install .....
</code></pre> <p>The second option (which might be easier depending on how you want to maintain this going forward) is to add an additional pre-upgrade hook with a higher priority than the migrations hook so it runs before the upgrade job, and then use that to do the <code>kubectl</code> scale down.</p>
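<p>A sketch of the second option: a hook Job that scales the old pods down before the migration hook runs. The names, image, and service account here are assumptions; with Helm hooks, a lower <code>hook-weight</code> runs first:</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: scale-down-before-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-10"                 # lower than the migration hook, so it runs first
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: deploy-scaler          # assumed SA with RBAC to scale deployments
      restartPolicy: Never
      containers:
        - name: scale-down
          image: bitnami/kubectl:latest          # any image that ships kubectl works
          command:
            - kubectl
            - scale
            - deployment/my-app
            - --replicas=0
```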
<p>I am new to the argo universe and was trying to set up Argo Workflows: <a href="https://github.com/argoproj/argo-workflows/blob/master/docs/quick-start.md#install-argo-workflows" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/blob/master/docs/quick-start.md#install-argo-workflows</a>.</p> <p>I have installed the <code>argo</code> CLI from this page: <a href="https://github.com/argoproj/argo-workflows/releases/latest" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/releases/latest</a>. I was trying it in my minikube setup and I have my kubectl already configured for the minikube cluster. I am able to run argo commands without any issues after putting the binary in my local bin folder.</p> <p>How does it work? What does the argo CLI connect to in order to operate?</p>
<p>The <code>argo</code> CLI <a href="https://github.com/argoproj/argo-workflows/blob/877d6569754be94f032e1c48d1f7226a83adfbec/cmd/argo/commands/get.go#L73-L74" rel="noreferrer">manages two API clients</a>. The first client connects to the <a href="https://argoproj.github.io/argo-workflows/rest-api/" rel="noreferrer">Argo Workflows API</a> server. The second connects to the Kubernetes API. Depending on what you're doing, the CLI might connect just to one API or the other.</p> <p>To connect to the Kubernetes API, the CLI just uses your kube config.</p> <p>To connect to the Argo server, the CLI first checks for an <code>ARGO_TOKEN</code> environment variable. If it's not available, the CLI <a href="https://github.com/argoproj/argo-workflows/blob/877d6569754be94f032e1c48d1f7226a83adfbec/cmd/argo/commands/client/conn.go#L91" rel="noreferrer">falls back to using the kube config</a>.</p> <p><code>ARGO_TOKEN</code> is <a href="https://argoproj.github.io/argo-workflows/rest-api/" rel="noreferrer">only necessary when the Argo Server is configured to require client auth</a> and then only if you're doing things which require access to the Argo API instead of just the Kubernetes API.</p>
<p>On an Azure AKS cluster with the Calico network policies plugin enabled, I want to:</p> <ol> <li>by default block all incoming traffic.</li> <li>allow all traffic within a namespace (from a pod in a namespace, to another pod in the <strong>same</strong> namespace).</li> </ol> <p>I tried something like:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny.all
  namespace: test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow.same.namespace
  namespace: test
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  policyTypes:
    - Ingress
</code></pre> <p>But it seems to block traffic between two deployments/pods in the same namespace. What am I doing wrong, am I misreading the documentation?</p> <p>Perhaps it is good to mention that the above setup seems to work on an AWS EKS based Kubernetes cluster.</p>
<p>After investigation it turned out that:</p> <ol> <li>I used terraform to create a k8s cluster with two node pools. System, and worker. (Note: that this is (not yet) possible in the GUI).</li> <li>Both node pools are in different subnets (system subnet, and worker subnet).</li> <li>AKS configures kubeproxy to masquerade traffic that goes outside the system subnet.</li> <li>Pods are deployed on the worker node, and thus use the worker subnet. All traffic that they send outside the node they are running on, is masqueraded.</li> <li>Calico managed iptables drop the masqueraded traffic. I did not look into more details here.</li> <li>However, if I change the kubeproxy masquerade setting to either a larger CIDR range, or remove it all together, it works. Azure however resets this setting after a while.</li> </ol> <p>In conclusion. I tried to use something that is not yet supported by Azure. I now use a single (larger) subnet for both node pools.</p>
<p>I have a cluster with Nginx ingress. I receive an API request, for example:</p> <pre><code>/api/v1/user?json={query} </code></pre> <p>I want to redirect this request with ingress to my service. I want to modify it like this:</p> <pre><code>/api/v2/user/{query} </code></pre>
<p>Assuming your domain name is <code>example.com</code> and you have a service called <code>example-service</code> exposing port <code>80</code>, you can achieve this task by defining the following ingress rule.</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($arg_json) {
        return 302 https://example.com/api/v2/user/$arg_json;
      }
    nginx.ingress.kubernetes.io/use-regex: 'true'
  name: ingress-rule
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              service:
                name: example-service
                port:
                  number: 80
            path: /api/v1/user(.*)
            pathType: Prefix
</code></pre>
<p>Using <code>crictl</code> and <code>containerd</code>, is there an easy way to find which pod/container a given process belongs to, using its <code>PID</code> on the host machine?</p> <p>For example, how can I retrieve the name of the pod which runs the process below (<code>1747</code>):</p> <pre><code>root@k8s-worker-node:/# ps -ef | grep mysql
1000      1747  1723  0 08:58 ?        00:00:01 mysqld
</code></pre>
<p>Assuming that you're looking at the primary process in a pod, you could do something like this:</p> <pre><code>crictl ps -q | while read cid; do
  if crictl inspect -o go-template --template '{{ .info.pid }}' $cid | grep -q $target_pid; then
    echo $cid
  fi
done
</code></pre> <p>This walks through all the crictl-managed pods and checks each pod's pid against the <code>$target_pid</code> value (which you have set beforehand to the host pid you are interested in).</p>
<p>I have a backup job running, scheduled to run every 24 hours. I have the concurrency policy set to &quot;Forbid.&quot; I am testing my backup, and I create jobs manually for testing, but these tests are not forbidding concurrent runs. I use:</p> <pre><code>kubectl create job --from=cronjob/my-backup manual-backup-(timestamp) </code></pre> <p>... and when I run them twice in close succession, I find that both begin the work.</p> <p>Does the concurrency policy only apply to jobs created by the Cron job scheduler? Does it ignore manually-created jobs? If it is ignoring those, are there other ways to manually run the job such that the Cron job scheduler knows they are there?</p>
<p><code>...Does the concurrency policy only apply to jobs created by the Cron job scheduler?</code></p> <p><code>concurrencyPolicy</code> applies to the <code>CronJob</code>, as it influences how the <code>CronJob</code> starts jobs. It is part of the <code>CronJob</code> spec and not the <code>Job</code> spec.</p> <p><code>...Does it ignore manually-created jobs?</code></p> <p>Yes.</p> <p><code>...ways to manually run the job such that the Cron job scheduler knows they are there?</code></p> <p>Beware that when <code>concurrencyPolicy</code> is set to <code>Forbid</code>, and the time has come for the CronJob to run a job but it detects that a job belonging to this <code>CronJob</code> is still running, it will count the current attempt as <strong>missed</strong>. It is better to temporarily set the <code>CronJob</code>'s <code>spec.suspend</code> to true if you manually start a job based on the <code>CronJob</code> and its execution time will span over the next scheduled time.</p>
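<p>One way to do that (a sketch; <code>my-backup</code> stands in for your CronJob's name) is to flip <code>spec.suspend</code> around the manual run:</p>

```yaml
# suspend-patch.yaml
# Apply before the manual run with:
#   kubectl patch cronjob my-backup -p '{"spec":{"suspend":true}}'
# and set it back to false after the manual job finishes.
spec:
  suspend: true
```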
<p>I want to connect a GKE (Google Kubernetes Engine) cluster to MongoDB Atlas. But I need to whitelist the IPs of my nodes (allow them). But sometimes I have 3 nodes, sometimes I have 10, and sometimes nodes go down and are re-created, so with this constant change there is no single IP.</p> <p>I have tried to create NAT on GCP following this guide: <a href="https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e" rel="nofollow noreferrer">https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e</a></p> <p>I also want to whitelist my cluster's IP in the Google Maps APIs so I can use the Directions API, for example.</p> <p>This is a common situation, since there may be many other third-party APIs that I want to enable that require incoming requests from certain IPs only, besides Atlas or Google Maps.</p> <p>How can I achieve this?</p>
<p>A private GKE cluster means the nodes do not have public IP addresses, but you mentioned</p> <blockquote> <p>the actual outbound transfer goes from the node's IP instead of the NAT's</p> </blockquote> <p>It looks like you have a public GKE cluster; you still have to use the same NAT option to get a <strong>single</strong> outbound egress <strong>IP</strong>.</p> <p>Using an <strong>ingress</strong> gives you a <strong>single entry point</strong> for incoming requests to the <strong>cluster</strong>, but if your nodes have <strong>public IPs</strong>, pods will use the <strong>node's IP</strong> for outgoing requests unless you use <strong>NAT</strong>.</p> <p>With NAT in place you get your single outbound IP: requests going out of pods won't carry the node's IP; they will use the <strong>NAT</strong> IP instead.</p> <p><strong>How to set up the NAT gateway</strong></p> <p><a href="https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway" rel="nofollow noreferrer">https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway</a></p> <p>Here is Terraform ready for GKE clusters; you just have to run this Terraform example, passing in the <strong>project ID</strong> and other <strong>vars</strong>.</p> <p>The above Terraform example will create the NAT for you, and you can verify the pods' IP as soon as NAT is set. You mostly won't require any changes to the NAT Terraform script.</p> <p>GitHub link: <a href="https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/v1.2.3/examples/gke-nat-gateway" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/v1.2.3/examples/gke-nat-gateway</a></p> <p>If you are not familiar with Terraform, you can follow this article to set up the NAT, which will stop SNAT for the pods: <a href="https://rajathithanrajasekar.medium.com/google-cloud-public-gke-clusters-egress-traffic-via-cloud-nat-for-ip-whitelisting-7fdc5656284a" rel="nofollow noreferrer">https://rajathithanrajasekar.medium.com/google-cloud-public-gke-clusters-egress-traffic-via-cloud-nat-for-ip-whitelisting-7fdc5656284a</a></p>
<p>With kubectl, I know I can run the command below if I want to see a specific resource's YAML:</p> <pre><code>kubectl -n &lt;some namespace&gt; get &lt;some resource&gt; &lt;some resource name&gt; -o yaml </code></pre> <p>How would I get this same data using python's kubernetes-client? Everything I've found so far only talks about creating a resource from a given yaml file.</p> <p>Looking at the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md" rel="nofollow noreferrer">docs</a>, I noticed that each resource type generally has a <strong>get_api_resources()</strong> function which returns a <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1APIResourceList.md" rel="nofollow noreferrer">V1ApiResourceList</a>, where each item is a <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1APIResource.md" rel="nofollow noreferrer">V1ApiResource</a>. I was hoping there would be a way to get the resource's yaml output from a V1ApiResource object, but it doesn't appear that's the way to go about it.</p> <p>Do you all have any suggestions? Is this possible with the kubernetes-client API?</p>
<p>If you take a look at the methods available on an object, e.g.:</p> <pre><code>&gt;&gt;&gt; import kubernetes.config &gt;&gt;&gt; client = kubernetes.config.new_client_from_config() &gt;&gt;&gt; core = kubernetes.client.CoreV1Api(client) &gt;&gt;&gt; res = core.read_namespace('kube-system') &gt;&gt;&gt; dir(res) ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_api_version', '_kind', '_metadata', '_spec', '_status', 'api_version', 'attribute_map', 'discriminator', 'kind', 'local_vars_configuration', 'metadata', 'openapi_types', 'spec', 'status', 'to_dict', 'to_str'] </code></pre> <p>...you'll see there is a <code>to_dict</code> method. That returns the object as a dictionary, which you can then serialize to YAML or JSON or whatever:</p> <pre><code>&gt;&gt;&gt; import yaml &gt;&gt;&gt; print(yaml.safe_dump(res.to_dict())) api_version: v1 kind: Namespace metadata: [...] </code></pre>
<p>We recently updated the deployment of a dropwizard service deployed using Docker and Kubernetes.</p> <p>It was working correctly before, the readiness probe was yielding a healthcheck ping to internal cluster IP getting 200s. Since we updated the healthcheck pings are resulting in a 301 and the service is considered down.</p> <p>I've noticed that the healthcheck is now <em>Default kubernetes L7 Loadbalancing health check for NEG.</em> (port is set to 80) where it was previously <em>Default kubernetes L7 Loadbalancing health check.</em> where the port was configurable.</p> <p>The kube file is deployed via CircleCI but the readiness probe is:</p> <pre><code> kind: Deployment metadata: name: pes-${CIRCLE_BRANCH} namespace: ${GKE_NAMESPACE_NAME} annotations: reloader.stakater.com/auto: 'true' spec: replicas: 2 selector: matchLabels: app: *** template: metadata: labels: app: *** spec: containers: - name: *** image: *** envFrom: - configMapRef: name: *** - secretRef: name: *** command: ['./gradlew', 'run'] resources: {} ports: - name: pes containerPort: 5000 readinessProbe: httpGet: path: /api/healthcheck port: pes initialDelaySeconds: 15 timeoutSeconds: 30 --- apiVersion: v1 kind: Service metadata: name: *** namespace: ${GKE_NAMESPACE_NAME} spec: ports: - name: pes port: 5000 targetPort: pes protocol: TCP selector: app: *** type: LoadBalancer </code></pre> <p>Any ideas on how this needs to be configured in GCP?</p> <p>I have a feeling that the new deployment has changed from legacy health check to non legacy but no idea what else needs to be set up for it to work. Does the kube file handle creating firewall rules or does that need to be done manually?</p> <p>Reading the docs at <a href="https://cloud.google.com/load-balancing/docs/health-check-concepts?hl=en" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/health-check-concepts?hl=en</a></p> <p>EDIT: <strong>Issue is now resolved</strong>. 
After the GKE version was updated, it now creates a NEG health check by default. We disabled this by adding the annotation below to the Service manifest.</p> <pre><code>metadata:
  annotations:
    cloud.google.com/neg: '{&quot;ingress&quot;:false}'
</code></pre>
<p>Issue is now resolved. After the GKE version was updated, it now creates a NEG health check by default. We disabled this by adding the annotation below to the Service manifest.</p> <pre><code>metadata:
  annotations:
    cloud.google.com/neg: '{&quot;ingress&quot;:false}'
</code></pre>
<p>What is the python kubernetes client equivalent of</p> <pre><code>kubectl get deploy -o yaml </code></pre> <p><a href="https://github.com/kubernetes-client/python/blob/master/examples/deployment_crud.py" rel="noreferrer">CRUD python client example</a></p> <p>I referred to this example for getting a python deployment, but there is no read deployment option.</p>
<p><code>read_namespaced_deployment()</code> does the thing:</p> <pre class="lang-py prettyprint-override"><code>from kubernetes import client, config config.load_kube_config() api = client.AppsV1Api() deployment = api.read_namespaced_deployment(name='foo', namespace='bar') </code></pre>
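<p>To reproduce the full <code>-o yaml</code>-style output as well, the returned object can be serialized. A sketch (note: <code>to_dict()</code> keeps the client's snake_case field names such as <code>api_version</code>, so the result is close to, but not byte-for-byte identical with, kubectl's output; the pruning of unset fields is my own addition to mimic how kubectl omits them):</p>

```python
def prune_nones(value):
    """Recursively drop None-valued fields from a client object's
    to_dict() output, mimicking how kubectl omits unset fields."""
    if isinstance(value, dict):
        return {k: prune_nones(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [prune_nones(v) for v in value]
    return value

# Usage against a live cluster (assumes a configured kubeconfig and PyYAML):
# import yaml
# from kubernetes import client, config
# config.load_kube_config()
# dep = client.AppsV1Api().read_namespaced_deployment(name='foo', namespace='bar')
# print(yaml.safe_dump(prune_nones(dep.to_dict())))   # close to `kubectl ... -o yaml`
```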
<p>I would like to capture the client IP address inside my .NET application running behind the GKE Ingress controller, to ensure that the client is permitted.</p> <pre><code>var requestIpAddress = request.HttpContext.Connection.RemoteIpAddress.MapToIPv4(); </code></pre> <p>Instead of the client IP address I get my GKE Ingress IP address, because the Ingress applies a forwarding rule.</p> <p>The GKE Ingress controller points to a Kubernetes service of type NodePort.</p> <p>I have tried adding the spec below to the NodePort service to preserve the client IP address, but it doesn't help, because the NodePort service is also running behind the Ingress:</p> <pre><code>externalTrafficPolicy: Local </code></pre> <p>Is it possible to preserve the client IP address with the GKE Ingress controller on Kubernetes?</p> <p>NodePort Service for Ingress:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: api-ingress-service
  labels:
    app/name: ingress.api
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app/template: api
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
</code></pre> <p>Ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: default
  labels:
    kind: ingress
    app: ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: frontend-config
spec:
  tls:
    - hosts:
        - '*.mydomain.com'
      secretName: tls-secret
  rules:
    - host: mydomain.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-ingress-service
                port:
                  number: 80
</code></pre>
<p>Posted community wiki for better visibility. Feel free to expand it.</p> <hr /> <p>Currently the only way to get the client source IP address with GKE Ingress is to <a href="https://cloud.google.com/load-balancing/docs/features#ip_addresses" rel="nofollow noreferrer">use the <code>X-Forwarded-For</code> header. It's a known limitation</a> for all GCP HTTP(S) Load Balancers (GKE Ingress uses the external HTTP(S) LB).</p> <p>If that does not suit your needs, consider migrating to a third-party Ingress controller that uses an external TCP/UDP network LoadBalancer, like the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a>.</p>
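<p>To make the migration path concrete: with an L4-based controller, preserving the source IP usually also requires <code>externalTrafficPolicy: Local</code> on the controller's own Service. A minimal sketch (the names and labels here are hypothetical, not taken from your manifests):</p>

```yaml
# Hypothetical Service exposing a third-party ingress controller.
# externalTrafficPolicy: Local preserves the client source IP, at the
# cost of only routing to nodes that actually run a controller pod.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # hypothetical name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
```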
<p>I have an interesting problem, maybe you could help me out.</p> <p>There are <em>two Spring applications</em>, called app1 and app2, and plenty of REST calls happen between the two services. I need to implement a <em>security solution</em> where they can communicate with each other over REST, protected by <em>mutual TLS</em> (mTLS, where each app has its own cert for the other).</p> <p>Implementing it the standard way is not that hard; Spring has solutions for it (with keystores etc.), but the twist is that I have to create it in a <strong>Kubernetes</strong> environment. The two apps are not in the same cluster: app1 is in our cluster, but app2 is deployed in one of our partner's systems.<br /> I am pretty new to k8s and not sure what the best method to achieve this is. Should I store the certs or the keystore(s) as secrets? Use and configure an <em>nginx</em> ingress somehow, or maybe <em>Istio</em> would be useful? I would really like to find the optimal solution, but I don't know the right way. I would really like it if I could configure this outside my app and let k8s take care of it, but I am not sure if that is the right thing to do.</p> <p>Any help would be really appreciated: some guidance to find the right path, or some kind of real-life examples.<br /> Thank you for your help!</p>
<p>Mikolaj has probably covered everything, but still, let me add my two cents.</p> <p>I don't have much experience working with <strong>Istio</strong>; however, I would also suggest checking out the <strong>Linkerd</strong> <a href="https://linkerd.io/" rel="nofollow noreferrer">service mesh</a>.</p> <p>Even if you are on multiple clouds (GKE &amp; EKS, for example), it will still work.</p> <p>The <a href="https://linkerd.io/2.10/tasks/multicluster/" rel="nofollow noreferrer">multicluster guide</a> has the details and installation steps.</p> <p>Linkerd uses a <strong>trust anchor</strong> shared between the clusters, so traffic can flow encrypted without being open to the public internet.</p> <p>You have to generate the <strong>certificate</strong> which will form the common base of trust between the clusters; each proxy gets a copy of the certificate and uses it for validation.</p>
<p>After updating onprem kubernetes from 1.18 to 1.22.5, I had to switch the ingress api versions from <code>v1beta1</code> to <code>v1</code>, and selected <code>ImplementationSpecific</code> as the new, required <code>pathType</code>.</p> <pre><code>kind: Ingress apiVersion: networking.k8s.io/v1 metadata: name: wx-ing-example spec: rules: - host: &quot;*.example.com&quot; http: paths: - path: / pathType: ImplementationSpecific backend: service: name: wx-example port: number: 80 </code></pre> <p>Since the update, subdomains beyond one level aren't being sent to the service, and instead return a 404. I.e. bar.example.com is working, but foo.bar.example.com is not.</p> <p>I've tried changing <code>pathType</code> to <code>Prefix</code> with no change in behaviour.</p> <p><code>k8s.gcr.io/ingress-nginx/controller:v1.1.0</code></p>
<p>What you are describing is expected behavior according to the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards" rel="nofollow noreferrer">official kubernetes ingress documentation</a>.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Host</th> <th>Host header</th> <th>Match?</th> </tr> </thead> <tbody> <tr> <td>*.foo.com</td> <td>bar.foo.com</td> <td>Matches based on shared suffix</td> </tr> <tr> <td>*.foo.com</td> <td>baz.bar.foo.com</td> <td>No match, wildcard only covers a single DNS label</td> </tr> <tr> <td>*.foo.com</td> <td>foo.com</td> <td>No match, wildcard only covers a single DNS label</td> </tr> </tbody> </table> </div> <p><code>PathType</code> has nothing to do with that. This is about the host header.</p> <p>One option is leaving the host out completely, so the rule will match any request that finds its way to your ingress controller. Depending on your situation, this may not be desirable.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: any-host
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample
                port:
                  number: 80
</code></pre>
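<p>Because each wildcard only covers a single DNS label, another possible workaround, sketched here under the assumption that you know in advance how many subdomain levels you need (hosts reuse the question's domain), is one explicit rule per extra level:</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wx-ing-example
spec:
  rules:
    - host: "*.example.com"       # matches bar.example.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: wx-example
                port:
                  number: 80
    - host: "*.bar.example.com"   # matches foo.bar.example.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: wx-example
                port:
                  number: 80
```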
<p>I have a kubernetes cluster that's running datadog and some microservices. Each microservice makes healthchecks every 5 seconds to make sure the service is up and running. I want to exclude these healthcheck logs from being ingested into Datadog.</p> <p>I think I need to use <code>log_processing_rules</code> and I've tried that but the healthcheck logs are still making it into the logs section of Datadog. My current Deployment looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment [ ... SNIP ... ] spec: replicas: 2 selector: matchLabels: app: my-service template: metadata: labels: app: my-service version: &quot;fac8fb13&quot; annotations: rollme: &quot;IO2ad&quot; tags.datadoghq.com/env: development tags.datadoghq.com/version: &quot;fac8fb13&quot; tags.datadoghq.com/service: my-service tags.datadoghq.com/my-service.logs: | [{ &quot;source&quot;: my-service, &quot;service&quot;: my-service, &quot;log_processing_rules&quot;: [ { &quot;type&quot;: &quot;exclude_at_match&quot;, &quot;name&quot;: &quot;exclude_healthcheck_logs&quot;, &quot;pattern&quot;: &quot;\&quot;RequestPath\&quot;: \&quot;\/health\&quot;&quot; } ] }] </code></pre> <p>and the logs coming out of the kubernetes pod:</p> <pre><code>$ kubectl logs my-service-pod { &quot;@t&quot;: &quot;2022-01-07T19:13:05.3134483Z&quot;, &quot;@m&quot;: &quot;Request finished HTTP/1.1 GET http://10.64.0.80:5000/health - - - 200 - text/plain 7.5992ms&quot;, &quot;@i&quot;: &quot;REDACTED&quot;, &quot;ElapsedMilliseconds&quot;: 7.5992, &quot;StatusCode&quot;: 200, &quot;ContentType&quot;: &quot;text/plain&quot;, &quot;ContentLength&quot;: null, &quot;Protocol&quot;: &quot;HTTP/1.1&quot;, &quot;Method&quot;: &quot;GET&quot;, &quot;Scheme&quot;: &quot;http&quot;, &quot;Host&quot;: &quot;10.64.0.80:5000&quot;, &quot;PathBase&quot;: &quot;&quot;, &quot;Path&quot;: &quot;/health&quot;, &quot;QueryString&quot;: &quot;&quot;, &quot;HostingRequestFinishedLog&quot;: &quot;Request finished HTTP/1.1 GET 
http://10.64.0.80:5000/health - - - 200 - text/plain 7.5992ms&quot;, &quot;EventId&quot;: { &quot;Id&quot;: 2, &quot;Name&quot;: &quot;RequestFinished&quot; }, &quot;SourceContext&quot;: &quot;Microsoft.AspNetCore.Hosting.Diagnostics&quot;, &quot;RequestId&quot;: &quot;REDACTED&quot;, &quot;RequestPath&quot;: &quot;/health&quot;, &quot;ConnectionId&quot;: &quot;REDACTED&quot;, &quot;dd_service&quot;: &quot;my-service&quot;, &quot;dd_version&quot;: &quot;54aae2b5&quot;, &quot;dd_env&quot;: &quot;development&quot;, &quot;dd_trace_id&quot;: &quot;REDACTED&quot;, &quot;dd_span_id&quot;: &quot;REDACTED&quot; } </code></pre> <p>EDIT: Removed 2nd element of the <code>log_processing_rules</code> array above as I've tried with 1 and 2 elements in the rules array.</p> <p>EDIT2: I've also tried changing <code>log_processing_rules</code> type to INCLUDE at match in an attempt to figure this out:</p> <pre><code>&quot;log_processing_rules&quot;: [ { &quot;type&quot;: &quot;include_at_match&quot;, &quot;name&quot;: &quot;testing_include_at_match&quot;, &quot;pattern&quot;: &quot;somepath&quot; } ] </code></pre> <p>and I'm still getting the health logs in Datadog (in theory I should not as <code>/health</code> is not part of the matching pattern)</p>
<p>All of these answers are correct in their own ways, but my specific issue was that the Datadog annotations for the <code>source</code> and <code>service</code> were not properly quoted:</p> <pre><code> ad.datadoghq.com/my-service.logs: | [{ &quot;source&quot;: &quot;my-service&quot;, # Needs Quotes &quot;service&quot;: &quot;my-service&quot;, # Needs Quotes &quot;log_processing_rules&quot;: [ { &quot;type&quot;: &quot;exclude_at_match&quot;, &quot;name&quot;: &quot;exclude_healthcheck_logs&quot;, &quot;pattern&quot;: &quot;\&quot;RequestPath\&quot;: \&quot;\/health\&quot;&quot; } ] }] </code></pre>
<p>Our pipeline by default tries to use a container that matches the name of the current stage. If this container doesn't exist, the container 'default' is used. This functionality works, but the problem is that when the container matching the stage name doesn't exist, a ProtocolException occurs, which isn't catchable because it is thrown by a thread that is out of our control. Is there a way to check whether a container actually exists when using the Kubernetes plugin for Jenkins, to prevent this exception from appearing? It seems like a basic function, but I haven't been able to find anything like this online.</p> <p>I can't show the actual code, but here's a pipeline-script extract that would trigger this exception:</p> <pre><code>node(POD_LABEL) {
  stage('Check Version (Maven)') {
    container('containerThatDoesNotExist') {
      try {
        sh 'mvn --version'
      } catch (Exception e) {
        // catch Exception
      }
    }
  }
}
</code></pre> <pre><code>java.net.ProtocolException: Expected HTTP 101 response but was '400 Bad Request'
    at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
</code></pre>
<p>You can run a pre-stage that gets the currently running container by executing a kubectl command against the API server. The tricky point is that kubectl does not exist on the worker, so in that case:</p> <ol> <li>Pull a kubectl image on the worker.</li> <li>Add a stage that gets the running container; use a label or timestamp to get the desired one.</li> <li>Use the right container: 'default' or rather 'some-container'.</li> </ol> <p>Example:</p> <pre><code>pipeline {
  environment {
    CURRENT_CONTAINER=&quot;default&quot;
  }
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: some-app
    image: XXX/some-app
    imagePullPolicy: IfNotPresent
    tty: true
  - name: kubectl
    image: gcr.io/cloud-builders/kubectl
    imagePullPolicy: IfNotPresent
    command:
    - cat
    tty: true
'''
    }
  }
  stages {
    stage('Set Container Name') {
      steps {
        container('kubectl') {
          withCredentials([
            string(credentialsId: 'minikube', variable: 'api_token')
          ]) {
            script {
              CURRENT_CONTAINER=sh(script: 'kubectl get pods -n jenkins -l job-name=pi -o jsonpath=&quot;{.items[*].spec.containers[0].name}&quot;',
                returnStdout: true
              ).trim()
              echo &quot;Exec container ${CURRENT_CONTAINER}&quot;
            }
          }
        }
      }
    }
    stage('Echo Container Name') {
      steps {
        echo &quot;CURRENT_CONTAINER is ${CURRENT_CONTAINER}&quot;
      }
    }
  }
}
</code></pre>
<p>I logged into the Linux bastion host where <code>kubectl</code> is installed for connecting to the Kubernetes cluster.</p> <p>On the Bastion host when I run any kubectl command like the one below:</p> <pre><code>kubectl get pods </code></pre> <p>I get the error below:</p> <pre><code>fatal error: runtime: out of memory runtime stack: runtime.throw(0x192cb47, 0x16) /usr/local/go/src/runtime/panic.go:617 +0x72 fp=0x7fff534ae560 sp=0x7fff534ae530 pc=0x42d482 runtime.sysMap(0xc000000000, 0x4000000, 0x2d178b8) /usr/local/go/src/runtime/mem_linux.go:170 +0xc7 fp=0x7fff534ae5a0 sp=0x7fff534ae560 pc=0x417d07 runtime.(*mheap).sysAlloc(0x2cfef80, 0x2000, 0x2cfef90, 0x1) /usr/local/go/src/runtime/malloc.go:633 +0x1cd fp=0x7fff534ae648 sp=0x7fff534ae5a0 pc=0x40ae0d runtime.(*mheap).grow(0x2cfef80, 0x1, 0x0) /usr/local/go/src/runtime/mheap.go:1222 +0x42 fp=0x7fff534ae6a0 sp=0x7fff534ae648 pc=0x425022 runtime.(*mheap).allocSpanLocked(0x2cfef80, 0x1, 0x2d178c8, 0x0) /usr/local/go/src/runtime/mheap.go:1150 +0x37f fp=0x7fff534ae6d8 sp=0x7fff534ae6a0 pc=0x424f0f runtime.(*mheap).alloc_m(0x2cfef80, 0x1, 0x2a, 0x0) /usr/local/go/src/runtime/mheap.go:977 +0xc2 fp=0x7fff534ae728 sp=0x7fff534ae6d8 pc=0x424562 runtime.(*mheap).alloc.func1() /usr/local/go/src/runtime/mheap.go:1048 +0x4c fp=0x7fff534ae760 sp=0x7fff534ae728 pc=0x456a4c runtime.(*mheap).alloc(0x2cfef80, 0x1, 0x1002a, 0x0) /usr/local/go/src/runtime/mheap.go:1047 +0x8a fp=0x7fff534ae7b0 sp=0x7fff534ae760 pc=0x42483a runtime.(*mcentral).grow(0x2cffd80, 0x0) /usr/local/go/src/runtime/mcentral.go:256 +0x95 fp=0x7fff534ae7f8 sp=0x7fff534ae7b0 pc=0x417785 runtime.(*mcentral).cacheSpan(0x2cffd80, 0x7fd1a382d000) /usr/local/go/src/runtime/mcentral.go:106 +0x2ff fp=0x7fff534ae858 sp=0x7fff534ae7f8 pc=0x41728f runtime.(*mcache).refill(0x7fd1a382d008, 0x2a) /usr/local/go/src/runtime/mcache.go:135 +0x86 fp=0x7fff534ae878 sp=0x7fff534ae858 pc=0x416d26 runtime.(*mcache).nextFree(0x7fd1a382d008, 0x2cf6e2a, 0x7fd1a382d008, 0x7fd1a382d000, 
0x8) /usr/local/go/src/runtime/malloc.go:786 +0x88 fp=0x7fff534ae8b0 sp=0x7fff534ae878 pc=0x40b648 runtime.mallocgc(0x180, 0x190fb40, 0x1, 0x2d17920) /usr/local/go/src/runtime/malloc.go:939 +0x76e fp=0x7fff534ae950 sp=0x7fff534ae8b0 pc=0x40bf5e runtime.newobject(0x190fb40, 0x4000) /usr/local/go/src/runtime/malloc.go:1068 +0x38 fp=0x7fff534ae980 sp=0x7fff534ae950 pc=0x40c368 runtime.malg(0x74ac0e00008000, 0x2d015f0) /usr/local/go/src/runtime/proc.go:3220 +0x31 fp=0x7fff534ae9c0 sp=0x7fff534ae980 pc=0x436871 runtime.mpreinit(...) /usr/local/go/src/runtime/os_linux.go:311 runtime.mcommoninit(0x2cf9240) /usr/local/go/src/runtime/proc.go:618 +0xc2 fp=0x7fff534ae9f8 sp=0x7fff534ae9c0 pc=0x430222 runtime.schedinit() /usr/local/go/src/runtime/proc.go:540 +0x74 fp=0x7fff534aea50 sp=0x7fff534ae9f8 pc=0x42feb4 runtime.rt0_go(0x7fff534aea88, 0x3, 0x7fff534aea88, 0x0, 0x0, 0x3, 0x7fff534aee07, 0x7fff534aee0f, 0x7fff534aee13, 0x0, ...) /usr/local/go/src/runtime/asm_amd64.s:195 +0x11a fp=0x7fff534aea58 sp=0x7fff534aea50 pc=0x458c4a </code></pre>
<p>I was able to fix the issue; it was due to insufficient memory on the bastion host.</p> <p><strong>Here's how I fixed it</strong>:</p> <p>First I ran the command below to check the memory utilization:</p> <pre><code>free -m </code></pre> <p>which gave me the following output, confirming the insufficient memory:</p> <pre><code>       total  used  free  shared  buff/cache  available
Mem:    3.8G  3.7G   93M    988K         34M        27M
Swap:     0B    0B    0B
</code></pre> <p>You can temporarily run the command below to free up a little memory:</p> <pre><code>free &amp;&amp; sync &amp;&amp; echo 3 &gt; /proc/sys/vm/drop_caches &amp;&amp; free </code></pre> <p>What I did next was upgrade the EC2 instance type of the bastion host (it is an AWS EC2 instance) from <strong>t2.medium (4GB)</strong> to <strong>t2.large (8GB)</strong> using the steps stated here: <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html" rel="nofollow noreferrer">Change the instance type</a>. Basically, all you have to do is:</p> <ul> <li>choose <strong>Instance state</strong> &gt; <strong>Stop instance</strong></li> <li>choose <strong>Actions</strong> &gt; <strong>Instance settings</strong> &gt; <strong>Change instance type</strong></li> <li>select a new instance type with higher capacity and choose <strong>Apply</strong></li> <li>choose <strong>Instance state</strong> &gt; <strong>Start instance</strong></li> </ul> <p>Wait for a few minutes and everything should take effect. Please take note that the <strong>public IP address</strong> of the bastion host might change after the upgrade.</p> <p><strong>Reference</strong>: <a href="https://github.com/kubeflow/kubeflow/issues/3803#issuecomment-517243802" rel="nofollow noreferrer">Kubeflow deployment error &quot;fatal error: runtime: out of memory&quot; #3803</a></p>
<p>I'm using kubectl to deploy ASP.NET Core apps to a K8S cluster. At the moment I hardcode the container ports and the connection string for the database like this:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
        - name: mydeploy
          image: harbor.com/my_app:latest
          env:
            - name: MyApp__ConnectionString
              value: &quot;Host=postgres-ap-svc;Port=5432;Database=db;Username=postgres;Password=password&quot;
          ports:
            - containerPort: 5076
</code></pre> <p>But a better solution, I think, would be to use these variables from the <code>appsettings.json</code> file (the ASP.NET config file). <strong>So my question: how can I load these vars from the JSON file and use them in the yaml file?</strong></p> <p>The JSON file looks like this:</p> <pre><code>{
  &quot;MyApp&quot;: {
    &quot;ConnectionString&quot;: &quot;Host=postgres-ap-svc;Port=5432;Database=db;Username=postgres;Password=password&quot;
  },
  &quot;ClientBaseUrls&quot;: {
    &quot;MyApp&quot;: &quot;http://mydeploy-svc:5076/api&quot;
  }
}
</code></pre>
<p>You can use the following command to generate the ConfigMap manifest (drop <code>--dry-run</code>, or pipe the output to <code>kubectl apply -f -</code>, to actually create it) and then mount it into your container:</p> <pre><code>kubectl create configmap appsettings --from-file=appsettings.json --dry-run -o yaml
</code></pre> <p>For mounting, you should add a volume to your Deployment or StatefulSet like this:</p> <pre><code>      volumes:
        - name: config
          configMap:
            name: appsettings
</code></pre> <p>And then mount it into your container:</p> <pre><code>          volumeMounts:
            - mountPath: /appropriate/path/to/config/appsettings.json
              subPath: appsettings.json
              name: config
</code></pre> <p>Also, if you want, you can use the ConfigMap as a source of environment variables like this (note it must reference the ConfigMap's name, <code>appsettings</code>):</p> <pre><code>          envFrom:
            - configMapRef:
                name: appsettings
</code></pre>
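<p>Putting those fragments together, a minimal sketch of a complete Deployment that mounts the generated ConfigMap (the image name comes from the question; the in-container path <code>/app</code> is an assumption, it should be wherever your app reads <code>appsettings.json</code> from):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
        - name: mydeploy
          image: harbor.com/my_app:latest
          ports:
            - containerPort: 5076
          volumeMounts:
            - name: config
              # subPath mounts only the single file, leaving the rest
              # of the directory from the image intact.
              mountPath: /app/appsettings.json   # assumed app directory
              subPath: appsettings.json
      volumes:
        - name: config
          configMap:
            name: appsettings
```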
<p>I faced this problem since yesterday, no problems before.<br /> My environment is</p> <ul> <li>Windows 11</li> <li>Docker Desktop 4.4.4</li> <li>minikube 1.25.1</li> <li>kubernetes-cli 1.23.3</li> </ul> <h1>Reproduce</h1> <h2>1. Start minikube and create cluster</h2> <pre><code>minikube start </code></pre> <h2>2. Check pods</h2> <pre><code>kubectl get po -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-64897985d-z7rpf 1/1 Running 0 22s kube-system etcd-minikube 1/1 Running 1 34s kube-system kube-apiserver-minikube 1/1 Running 1 34s kube-system kube-controller-manager-minikube 1/1 Running 1 33s kube-system kube-proxy-zdr9n 1/1 Running 0 22s kube-system kube-scheduler-minikube 1/1 Running 1 34s kube-system storage-provisioner 1/1 Running 0 29s </code></pre> <h2>3. Add new pod (in this case, use istio)</h2> <pre><code>istioctl manifest apply -y </code></pre> <h2>4. Check pods</h2> <pre><code>kubectl get po -A NAMESPACE NAME READY STATUS RESTARTS AGE istio-system istio-ingressgateway-c6d9f449-nhbvg 1/1 Running 0 13s istio-system istiod-5ffcccb477-5hzgs 1/1 Running 0 19s kube-system coredns-64897985d-nxhxm 1/1 Running 0 67s kube-system etcd-minikube 1/1 Running 2 79s kube-system kube-apiserver-minikube 1/1 Running 2 82s kube-system kube-controller-manager-minikube 1/1 Running 2 83s kube-system kube-proxy-8jfz7 1/1 Running 0 67s kube-system kube-scheduler-minikube 1/1 Running 2 83s kube-system storage-provisioner 1/1 Running 1 (45s ago) 77s </code></pre> <h2>5. Restart minikube</h2> <pre><code>minikube stop </code></pre> <p>then, back to 1 and and check pod, <code>kubectl get po -A</code> returns same pods as <strong>#2</strong>.<br /> (In this case, istio-system is lost.)</p> <p>Created pods etc. was retained till yesterday even restart minikube or PC.</p> <p>Does anyone face same problem or have any solution?</p>
<p>This seems to be a bug introduced with 1.25.0 version of minikube: <a href="https://github.com/kubernetes/minikube/issues/13503" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/13503</a> . A PR to revert the changes introducing the bug is already open: <a href="https://github.com/kubernetes/minikube/pull/13506" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/pull/13506</a></p> <p>The fix is scheduled for minikube v1.26.</p>
<p>I was trying to implement a PDB (PodDisruptionBudget) in Kubernetes for our application, but I am confused about how the PDB and HPA relate to each other, and whether there is any dependency between a PDB and a rolling update strategy, if one is defined.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: pdbtest
  namespace: test-fix
  labels:
    app: pdbtest
    environment: dev
spec:
  replicas: 4
  strategy:
    type: &quot;RollingUpdate&quot;
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: pdbtest
  template:
    metadata:
      labels:
        app: pdbtest
        environment: dev
    spec:
      containers:
        - name: pdbtest
          image: nginx
          imagePullPolicy: &quot;Always&quot;
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
            requests:
              memory: 256Mi
              cpu: 0.2
            limits:
              memory: 2048Mi
              cpu: 1
          env:
            - name: APPLICATIONNAME
              value: pdbtest
      nodeSelector:
        agentpool: devpool01
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: pdbtest
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pdbtest
  maxReplicas: 6
  minReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  labels:
    app: pdbtest
  name: pdbtest
  namespace: test-fix
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: pdbtest
</code></pre> <p>In order to test the PDB behavior, I tried to delete multiple pods. What I expected was that the PDB would block the deletion, since per the PDB 4 replicas must always be available; but instead 4 new pods got created.</p> <p>So I would like to understand: How does a PDB work? Does it apply to deleting pods/deployments, or to rolling updates? How is the PDB related to the HPA? If we are planning a PDB with the minAvailable property, what value should minAvailable have (should it be the same as the HPA's minimum pod count, or less)? Will a PDB have an impact on a new release's rolling update? How is a PDB different from the rolling update strategy?</p>
<p>If you read the <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">documentation</a>, you find a disclaimer that is applicable to your test scenario.</p> <blockquote> <p><strong>Caution</strong>: Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, deleting deployments or pods bypasses Pod Disruption Budgets.</p> </blockquote> <p>That's why you could delete it. Apart from that, it's different from min replicas of an HPA in that it will provide protection against disruptions, in <em>certain</em> scenarios. For example, when you drain a node.</p> <p>For your test, you could create a deployment with 6 replicas and create a disruption budget of 4 pods. Finally, try to update the deployment to a replica count of 2.</p>
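<p>The suggested test can be sketched as manifests (names here are hypothetical). Keep in mind that <code>kubectl delete pod</code> and scaling the Deployment bypass the budget, while an eviction-based operation such as <code>kubectl drain</code> is constrained by it:</p>

```yaml
# Hypothetical manifests for the test described above:
# 6 replicas, guarded by a budget of at least 4 available pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pdb-demo
spec:
  replicas: 6
  selector:
    matchLabels:
      app: pdb-demo
  template:
    metadata:
      labels:
        app: pdb-demo
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: pdb-demo
```

<p>With this in place, <code>kubectl drain &lt;node&gt; --ignore-daemonsets</code> will not evict these pods past the point where fewer than 4 would remain available, whereas deleting pods directly or scaling the Deployment down still succeeds.</p>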
<p>I have been working on creating an application which can perform verification tests on the deployed Istio components in the kube-cluster. The constraint in my case is that I have to run this application as a pod inside Kubernetes, and I cannot give the application's pod the cluster-admin role so that it can do all the operations. I have to create a restricted <code>ClusterRole</code> that provides just enough access for the application to list and get all the required deployed Istio resources. (The reason for creating a ClusterRole is that when Istio is deployed it creates both namespace-level and cluster-level resources.) Currently my application won't run at all if I use my restricted <code>ClusterRole</code>, and it outputs this error:</p> <pre><code>Error: failed to fetch istiod pod, error: pods is forbidden: User &quot;system:serviceaccount:istio-system:istio-deployment-verification-sa&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;istio-system&quot;
</code></pre> <p>The above error doesn't make sense to me, as I have explicitly mentioned the core API group in my <code>ClusterRole</code> and also mentioned pods as a resource in the <code>resources</code> child of my <code>ClusterRole</code> definition.</p> <p><strong>Clusterrole.yaml</strong></p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Values.clusterrole.name }}
  namespace: {{ .Values.clusterrole.clusterrolens}}
rules:
  - apiGroups:
      - &quot;rbac.authorization.k8s.io&quot;
      - &quot;&quot; #enabling access to core API
      - &quot;networking.istio.io&quot;
      - &quot;install.istio.io&quot;
      - &quot;autoscaling&quot;
      - &quot;apps&quot;
      - &quot;admissionregistration.k8s.io&quot;
      - &quot;policy&quot;
      - &quot;apiextensions.k8s.io&quot;
    resources:
      - &quot;clusterroles&quot;
      - &quot;clusterolebindings&quot;
      - &quot;serviceaccounts&quot;
      - &quot;roles&quot;
      - &quot;rolebindings&quot;
      - &quot;horizontalpodautoscalers&quot;
      - &quot;configmaps&quot;
      - &quot;deployments&quot;
      - &quot;mutatingwebhookconfigurations&quot;
      - &quot;poddisruptionbudgets&quot;
      - &quot;envoyfilters&quot;
      - &quot;validatingwebhookconfigurations&quot;
      - &quot;pods&quot;
      - &quot;wasmplugins&quot;
      - &quot;destinationrules&quot;
      - &quot;envoyfilters&quot;
      - &quot;gateways&quot;
      - &quot;serviceentries&quot;
      - &quot;sidecars&quot;
      - &quot;virtualservices&quot;
      - &quot;workloadentries&quot;
      - &quot;workloadgroups&quot;
      - &quot;authorizationpolicies&quot;
      - &quot;peerauthentications&quot;
      - &quot;requestauthentications&quot;
      - &quot;telemetries&quot;
      - &quot;istiooperators&quot;
    resourceNames:
      - &quot;istiod-istio-system&quot;
      - &quot;istio-reader-istio-system&quot;
      - &quot;istio-reader-service-account&quot;
      - &quot;istiod-service-account&quot;
      - &quot;wasmplugins.extensions.istio.io&quot;
      - &quot;destinationrules.networking.istio.io&quot;
      - &quot;envoyfilters.networking.istio.io&quot;
      - &quot;gateways.networking.istio.io&quot;
      - &quot;serviceentries.networking.istio.io&quot;
      - &quot;sidecars.networking.istio.io&quot;
      - &quot;virtualservices.networking.istio.io&quot;
      - &quot;workloadentries.networking.istio.io&quot;
      - &quot;workloadgroups.networking.istio.io&quot;
      - &quot;authorizationpolicies.security.istio.io&quot;
      - &quot;peerauthentications.security.istio.io&quot;
      - &quot;requestauthentications.security.istio.io&quot;
      - &quot;telemetries.telemetry.istio.io&quot;
      - &quot;istiooperators.install.istio.io&quot;
      - &quot;istiod&quot;
      - &quot;istiod-clusterrole-istio-system&quot;
      - &quot;istiod-gateway-controller-istio-system&quot;
      - &quot;istiod-clusterrole-istio-system&quot;
      - &quot;istiod-gateway-controller-istio-system&quot;
      - &quot;istio&quot;
      - &quot;istio-sidecar-injector&quot;
      - &quot;istio-reader-clusterrole-istio-system&quot;
      - &quot;stats-filter-1.10&quot;
      - &quot;tcp-stats-filter-1.10&quot;
      - &quot;stats-filter-1.11&quot;
      - &quot;tcp-stats-filter-1.11&quot;
      - &quot;stats-filter-1.12&quot;
      - &quot;tcp-stats-filter-1.12&quot;
      - &quot;istio-validator-istio-system&quot;
      - &quot;istio-ingressgateway-microservices&quot;
      - &quot;istio-ingressgateway-microservices-sds&quot;
      - &quot;istio-ingressgateway-microservices-service-account&quot;
      - &quot;istio-ingressgateway-public&quot;
      - &quot;istio-ingressgateway-public-sds&quot;
      - &quot;istio-ingressgateway-public-service-account&quot;
    verbs:
      - get
      - list
</code></pre> <p>The application I have built leverages the <code>istioctl</code> docker container published by Istio on Docker Hub: <a href="https://hub.docker.com/r/istio/istioctl/tags" rel="nofollow noreferrer">Link</a>.</p> <p>I want to understand what changes are required in the above <code>ClusterRole</code> definition so that I can perform the get and list operations on the pods in the namespace.</p> <p>I would also like to understand whether it is possible that the error I am getting is actually about some other resource in the cluster.</p> <p>Cluster information:</p> <pre><code>Kubernetes version: 1.20
Istioctl docker image version: 1.12.2
Istio version: 1.12.1
</code></pre>
<p>As OP mentioned in the comments, the problem was resolved after my suggestion:</p> <blockquote> <p>Please run the command <code>kubectl auth can-i list pods --namespace istio-system --as system:serviceaccount:istio-system:istio-deployment-verification-sa</code> and attach the result to the question. Look also <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">here</a></p> </blockquote> <p>OP has confirmed that the problem is resolved:</p> <blockquote> <p>thanx for the above command using above I was finally able to nail down the issue and found the issue to be with first resourceName and second we need to mention core api in the api group before any other. Thank you issue is resolved now.</p> </blockquote>
<p>I have a Kubernetes cluster of 3 nodes in Amazon EKS. It's running 3 pods of Cockroachdb in a StatefulSet. Now I want to use another instance type for all nodes of my cluster. So my plan was this:</p> <ol> <li>Add 1 new node to the cluster, increase replicas in my StatefulSet to 4 and wait for the new Cockroachdb pod to fully sync.</li> <li>Decommission and stop one of the old Cockroachdb nodes.</li> <li>Decrease replicas of the StatefulSet back to 3 to get rid of one of the old pods.</li> <li>Repeat steps 1-3 two more times.</li> </ol> <p>Obviously, that doesn't work because StatefulSet deletes most recent pods first when scaling down, so my new pod gets deleted instead of the old one. I guess I could just create a new StatefulSet and make it use existing PVs, but that doesn't seem like the best solution for me. Is there any other way to do the migration?</p>
<p>You can consider making a copy of your ASG's current launch template, upgrading the instance type in the copied template, pointing your ASG at the new launch template, and then performing an ASG instance refresh. For a cluster of 3 nodes, a minimum healthy percentage of 90% ensures only 1 instance is replaced at a time. Pods on the drained node will enter the Pending state for 5-10 minutes and be redeployed on the new node. This way you do not need to scale the StatefulSet up unnecessarily.</p>
<p>I just installed redis (actually a reinstall and upgrade) on GKE via helm. It was a pretty standard install and nothing too out of the norm. Unfortunately my &quot;redis-master&quot; container logs are showing sync errors over and over again:</p> <pre><code>Info 2022-02-01 12:58:22.733 MST redis 1:M 01 Feb 2022 19:58:22.733 * Waiting for end of BGSAVE for SYNC
Info 2022-02-01 12:58:22.733 MST redis 8085:C 01 Feb 2022 19:58:22.733 # Write error saving DB on disk: No space left on device
Info 2022-02-01 12:58:22.830 MST redis 1:M 01 Feb 2022 19:58:22.829 # Background saving error
Info 2022-02-01 12:58:22.830 MST redis 1:M 01 Feb 2022 19:58:22.829 # Connection with replica redis-replicas-0.:6379 lost.
Info 2022-02-01 12:58:22.830 MST redis 1:M 01 Feb 2022 19:58:22.829 # SYNC failed. BGSAVE child returned an error
Info 2022-02-01 12:58:22.830 MST redis 1:M 01 Feb 2022 19:58:22.829 # Connection with replica redis-replicas-1.:6379 lost.
Info 2022-02-01 12:58:22.830 MST redis 1:M 01 Feb 2022 19:58:22.829 # SYNC failed. BGSAVE child returned an error
Info 2022-02-01 12:58:22.832 MST redis 1:M 01 Feb 2022 19:58:22.832 * Replica redis-replicas-0.:6379 asks for synchronization
Info 2022-02-01 12:58:22.832 MST redis 1:M 01 Feb 2022 19:58:22.832 * Full resync requested by replica redis-replicas-0.:6379
Info 2022-02-01 12:58:22.832 MST redis 1:M 01 Feb 2022 19:58:22.832 * Starting BGSAVE for SYNC with target: disk
Info 2022-02-01 12:58:22.833 MST redis 1:M 01 Feb 2022 19:58:22.833 * Background saving started by pid 8086
</code></pre> <p>I then looked at my persistent volume claim specification &quot;redis-data&quot; and it is in the &quot;Pending&quot; Phase and never seems to get out of that phase. If I look at all my PVCs though then they are all bound and appear to be healthy.</p> <p>Clearly something isn't as healthy as it seems but I am not sure how to diagnose. Any help would be appreciated.</p>
<p>I know it's late to the party, but to add more: if anyone gets stuck in the same scenario and <strong>can't</strong> delete the <strong>PVC</strong>, they can increase the <strong>size of the PVC</strong> in <strong>GKE</strong>.</p> <p>Check the <strong>StorageClass</strong>:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  …
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
</code></pre> <p>Edit the <strong>PVC</strong>:</p> <pre><code>spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>The <strong>field</strong> you need to update in the <strong>PVC</strong>:</p> <pre><code>spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:         &lt;== make sure in requests section
      storage: 30Gi   &lt;=========
</code></pre> <p>Once the changes to the <strong>PVC</strong> are applied and saved, just <strong>restart</strong> the <strong>pod</strong>.</p> <p>Sharing the link below: <a href="https://medium.com/@harsh.manvar111/resizing-pvc-disk-in-gke-c5b882c90f7b" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/resizing-pvc-disk-in-gke-c5b882c90f7b</a></p>
<pre><code>from kubernetes import client, config

config.load_kube_config()
api = client.AppsV1Api()
deployment = api.read_namespaced_deployment(name='foo', namespace='bar')
</code></pre> <p>I tried to add an affinity object to the deployment spec and got this error:</p> <pre><code>deployment.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms = [{'nodeSelectorTerms':[{'matchExpressions':[{'key': 'kubernetes.io/hostname','operator': 'NotIn','values': [&quot;awesome-node&quot;]}]}]}]
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
AttributeError: 'V1DeploymentSpec' object has no attribute 'affinity'
</code></pre>
<p>You're looking at the wrong place. Affinity belongs to the pod template spec (<code>deployment.spec.template.spec.affinity</code>) while you're looking at the deployment spec (<code>deployment.spec.affinity</code>).</p> <p>Here's how to completely replace existing affinity (even if it's None):</p> <pre class="lang-py prettyprint-override"><code>from kubernetes import client, config

config.load_kube_config()
api = client.AppsV1Api()

# read current state
deployment = api.read_namespaced_deployment(name='foo', namespace='bar')

# check current state
#print(deployment.spec.template.spec.affinity)

# create affinity objects
terms = client.models.V1NodeSelectorTerm(
    match_expressions=[
        {'key': 'kubernetes.io/hostname',
         'operator': 'NotIn',
         'values': [&quot;awesome-node&quot;]}
    ]
)
node_selector = client.models.V1NodeSelector(node_selector_terms=[terms])
node_affinity = client.models.V1NodeAffinity(
    required_during_scheduling_ignored_during_execution=node_selector
)
affinity = client.models.V1Affinity(node_affinity=node_affinity)

# replace affinity in the deployment object
deployment.spec.template.spec.affinity = affinity

# finally, push the updated deployment configuration to the API-server
api.replace_namespaced_deployment(name=deployment.metadata.name,
                                  namespace=deployment.metadata.namespace,
                                  body=deployment)
</code></pre>
<p>In the Azure Kubernetes Service, you can scale up a node pool, but we only define the min and max nodes. When I checked the node pool's underlying scale set scale settings, I found them set to manual. So I assume that the node pool autoscaler doesn't rely on the underlying scale set, but I wonder: can we just rely on the scale set autoscale, with its several metric rules, instead of the very limited node pool scale settings?</p>
<p>The AKS autoscaling works slightly differently from the VMSS autoscaling.</p> <p>From the <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler" rel="nofollow noreferrer">official docs</a>:</p> <blockquote> <p>The cluster autoscaler watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes.</p> </blockquote> <p>The AKS autoscaler is tightly coupled with the control plane and the kube-scheduler, so it takes resource requests and limits into account, which is by far the better scaling method for k8s workloads compared to the VMSS autoscaler — and the latter is anyway not supported for AKS:</p> <blockquote> <p>The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI.</p> </blockquote>
<p>I'm new to k8s. I deployed an ingress on minikube, and I found its address to be <code>localhost</code>, which it shouldn't be, I guess. I don't know how to continue from here; I believe I should edit <code>/etc/hosts</code> to add a DNS entry, but I could not get that to work.<br /> <a href="https://i.stack.imgur.com/UPwcf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UPwcf.png" alt="enter image description here" /></a></p> <p>And this is my configuration file:</p> <pre><code>kiloson@ubuntu:~$ cat kubia-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubia
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: kubia.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubia-nodeport
            port:
              number: 80
</code></pre> <p>minikube version</p> <pre><code>kiloson@ubuntu:~$ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
</code></pre> <p>Ubuntu Info</p> <pre><code>kiloson@ubuntu:~$ neofetch
kiloson@ubuntu
--------------
OS: Ubuntu 20.04.3 LTS x86_64
Host: Virtual Machine 7.0
Kernel: 5.11.0-1022-azure
Uptime: 2 hours, 33 mins
Packages: 648 (dpkg), 4 (snap)
Shell: bash 5.0.17
Terminal: /dev/pts/0
CPU: Intel Xeon E5-2673 v4 (2) @ 2.294GHz
GPU: 00:08.0 Microsoft Corporation Hyper-V virtual VGA
Memory: 1493MiB / 7959MiB
</code></pre>
<p>I was butting heads with this for a while and just got it working, so I'll add color to the other answers.</p> <p>First of all, as pointed out in <a href="https://serverfault.com/questions/1052349/unable-to-connect-to-minikube-ingress-via-minikube-ip">this related question</a>, when you intially run <code>minikube addons enable ingress</code> it prints <code>After the addon is enabled, please run &quot;minikube tunnel&quot; and your ingress resources would be available at &quot;127.0.0.1&quot;</code>.</p> <p>After completing the steps in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="noreferrer">tutorial</a> and running into the issue in this question, I was able to hit the hello-world app by starting <code>minikube tunnel</code> and adding <code>127.0.0.1 hello-world.info</code> to my <code>/etc/hosts</code>.</p> <p>I am not sure why the tutorial execution of <code>kubectl get ingress</code> returns a non-localhost IP, but returns <code>localhost</code> when running locally, but it seems like the primary issue here is that while you are creating an ingress on the docker containers running minikube, you need to forward traffic to the containers through <code>minikube tunnel</code> to hit the now open ingress in minikube.</p>
<p>I want to permanently delete a single pod in Kubernetes, but it keeps getting recreated.</p> <p>I have tried many commands, but none of them help.</p> <pre><code>kubectl delete pod &lt;pod-name&gt;
</code></pre> <p>2nd</p> <pre><code>kubectl get deployments
kubectl delete deployments &lt;deployments-name&gt;
</code></pre> <pre><code>kubectl get rs --all-namespaces
kubectl delete rs your_app_name
</code></pre> <p>but none of that works.</p>
<p><code>my replica count is 0</code></p> <p><code>...it will successfully delete the pod but then after it will restart</code></p> <p>Try:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ...
spec:
  restartPolicy: Never  # &lt;-- add this
  containers:
  - name: ...
</code></pre> <p>If the pod still restarts, post the output of <code>kubectl describe pod &lt;pod name&gt; --namespace &lt;name&gt;</code> to your question.</p>
<p>I have a new OpenShift/Kubernetes cluster and I need to create multiple ResourceQuota(s) and apply them to existing projects.</p> <p>Every ResourceQuota should be applied to a given namespace and should contain a specific number of requests and limits. I would like to generate all the needed ResourceQuota YAMLs and deploy them by using Helm.</p> <p>I created a new custom Helm chart with the command</p> <pre><code>helm create resourcequota
</code></pre> <p>and I see Helm creating the following files:</p> <pre><code> .
 ..
 .helmignore
 Chart.yaml
 charts
 templates
 values.yaml
</code></pre> <p>I then wrote a <code>resourcequota.yaml</code> in the <code>templates</code> folder. I would like Helm to fill the ResourceQuota specs for me:</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: {{ .Values.namespace }}-quota
  namespace: {{ .Values.namepace }}
spec:
  hard:
    cpu: {{ .Values.namespace.requests.cpu }}
    limits.cpu: {{ .Values.namespace.limits.cpu }}
    memory: {{ .Values.namespace.requests.memory }}
    limits.memory: {{ .Values.namespace.limits.cpu }}
</code></pre> <p>The <code>values.yaml</code> file contains a first namespace name with sample values:</p> <pre><code>...
83 ...
84 namespace: &quot;123-testnamespace&quot;
85   requests:
86     cpu: &quot;2&quot;
87     memory: &quot;1Gi&quot;
88   limits:
89     cpu: &quot;2&quot;
90     memory: &quot;1Gi&quot;
</code></pre> <p>I then asked Helm to locally generate a template with <code>helm template resourcequota</code>, but it seems it doesn't know the namespace key (row 84):</p> <p><code>Error: cannot load values.yaml: error converting YAML to JSON: yaml: line 84: did not find expected key</code></p> <p>What am I doing wrong? I tried to write the <code>values.yaml</code> from scratch, but it seems like I am missing out on some Helm fundamentals here.</p> <p>Thank you in advance</p>
<blockquote> <p>it seems like I am missing out some helm fundamentals here</p> </blockquote> <p>True. As mentioned in the error, there is an issue at line 84:</p> <pre><code>84 namespace: &quot;123-testnamespace&quot;
85   requests:
86     cpu: &quot;2&quot;
87     memory: &quot;1Gi&quot;
88   limits:
89     cpu: &quot;2&quot;
90     memory: &quot;1Gi&quot;
</code></pre> <p>YAML cannot assign a scalar value to a key and also nest child keys under that same key.</p> <p>You should use a values.yaml like:</p> <pre><code>namespacename: &quot;123-testnamespace&quot;

namespace:
  requests:
    cpu: &quot;2&quot;
    memory: &quot;1Gi&quot;
  limits:
    cpu: &quot;2&quot;
    memory: &quot;1Gi&quot;
</code></pre> <p><strong>YAML template</strong></p> <p>Note the change to the <strong>name</strong> &amp; <strong>namespace</strong> values in the template (the <code>limits.memory</code> line is also corrected to reference the memory value rather than cpu):</p> <pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: {{ .Values.namespacename }}-quota
  namespace: {{ .Values.namespacename }}
spec:
  hard:
    cpu: {{ .Values.namespace.requests.cpu }}
    limits.cpu: {{ .Values.namespace.limits.cpu }}
    memory: {{ .Values.namespace.requests.memory }}
    limits.memory: {{ .Values.namespace.limits.memory }}
</code></pre> <p>Example of nested <strong>YAML</strong>:</p> <pre><code>cartParams:
  - title: Test 1
    options:
      - Oh lala
      - oh lalalalala
  - title: Title test 2
    options:
      - oh lala
      - oh lala
      - oh lalala
      - oh lalalalalalalala
      - oh lalalalalalalalalala
</code></pre>
<p>I want to delete a single node of the cluster.</p> <p>Here is my problem: I created a cluster where only 2 nodes are running, but sometimes I need more nodes for a few minutes only; then, after scaling down, I want to delete only the drained node from the cluster. I do scaling up/down manually. Here are the steps I follow:</p> <ol> <li>create a cluster with 2 nodes</li> <li>scale up the cluster and add 2 more</li> <li>after that, I want to delete those 2 nodes with all their backup pods only</li> </ol> <p>I tried it with the command</p> <pre><code>eksctl scale nodegroup --cluster=cluster-name --name=name --nodes=4 --nodes-min=1 --nodes-max=4
</code></pre> <p>but it doesn't help: it will delete random nodes, and the manager will crash.</p>
<p>One option is using a separate node group for the transient load, use taints/tolerations for the load to be scheduled on that node group, and drain/delete that particular node group if it is not needed.</p> <p>Do you manually scale up/down nodes? If you are using something like cluster autoscaler, there are annotations like <code>&quot;cluster-autoscaler.kubernetes.io/safe-to-evict&quot;: &quot;false&quot;</code> to protect pods from scaling down.</p>
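<p>As an illustrative sketch of the above (the taint key/value, label, and pod name here are made up, not from the original setup): the transient node group would carry a taint, the burst workload would tolerate it and select it, and the annotation keeps the cluster autoscaler from evicting pods that must not be moved:</p> <pre><code># assumes the transient node group is labeled workload=burst and
# tainted with workload=burst:NoSchedule (e.g. via eksctl nodegroup config)
apiVersion: v1
kind: Pod
metadata:
  name: burst-worker
  annotations:
    # prevent cluster-autoscaler from evicting this pod on scale-down
    cluster-autoscaler.kubernetes.io/safe-to-evict: &quot;false&quot;
spec:
  nodeSelector:
    workload: burst
  tolerations:
  - key: &quot;workload&quot;
    operator: &quot;Equal&quot;
    value: &quot;burst&quot;
    effect: &quot;NoSchedule&quot;
  containers:
  - name: worker
    image: busybox
    command: [&quot;sleep&quot;, &quot;600&quot;]
</code></pre> <p>Draining and deleting that node group then only affects the pods that were scheduled onto it, leaving the original 2 nodes untouched.</p>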
<p>Which service assigns the nameservers in <code>/etc/resolv.conf</code> of pods? Generally it should be picked up from the host's <code>/etc/resolv.conf</code>, but I'm seeing different nameservers in the pods' <code>/etc/resolv.conf</code>. Is there any configuration in Kubernetes (kube-dns) that I can set so that the pods' <code>/etc/resolv.conf</code> has 8.8.8.8?</p>
<p>I had the same issue with Jenkins deployed on Kubernetes. If you don't mention a nameserver, then <em>/etc/resolv.conf</em> shows the default nameserver (the IP of the k8s DNS). I solved this by modifying the deployment file with</p> <pre><code>  dnsPolicy: &quot;None&quot;
  dnsConfig:
    nameservers:
      - 8.8.8.8
</code></pre> <p>and applying it.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-vol
              mountPath: /var/jenkins_vol
      dnsPolicy: &quot;None&quot;
      dnsConfig:
        nameservers:
          - 8.8.8.8
      volumes:
        - name: jenkins-vol
          emptyDir: {}
</code></pre>
<p>We are using Envoy access logs (<a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage</a>). Does Envoy validate the fields that are passed to the access logs, e.g. the field format?</p> <p>I'm asking for basic security reasons: to verify that if I use, for example, <code>%REQ(:METHOD)</code>, I will get a real HTTP method like <code>get</code>, <code>post</code>, etc., and not something like <code>foo</code>; or that <code>[%START_TIME%]</code> is in a time format and I will not get something else.</p> <p>I think it's related to this Envoy code:</p> <p><a href="https://github.com/envoyproxy/envoy/blob/24bfe51fc0953f47ba7547f02442254b6744bed6/source/common/access_log/access_log_impl.cc#L54" rel="nofollow noreferrer">https://github.com/envoyproxy/envoy/blob/24bfe51fc0953f47ba7547f02442254b6744bed6/source/common/access_log/access_log_impl.cc#L54</a></p> <p>I'm asking since we are sending the data from the access logs to another system, and we want to verify that the data is as it's defined in the access logs and that no one can change it, from a security perspective.</p> <p>For example, that <code>ip</code> is in a real IP format, <code>path</code> is in a path format, and <code>url</code> is in a URL format.</p>
<p>I'm not sure I understand the question. Envoy doesn't have to validate anything, as it is generating those logs. Envoy is an HTTP proxy that receives the request and performs some routing/rewriting/auth/drop/... actions based on the configuration (configured by a VirtualService / DestinationRule / EnvoyFilter if we're talking about Istio). After the action it generates the log entry and fills the fields with details about the original request and the actions taken.</p> <p>Also, there is nothing like a 'real' HTTP method. An HTTP method is just a string, and it can hold any value. Envoy is just the proxy that sits between the client and the application and passes the requests through (unless you explicitly configure it to, e.g., drop some method).</p> <p>How the method is treated depends on the application that receives it. GET/POST/HEAD are commonly associated with standard HTTP and static pages. PUT/DELETE/PATCH are used in REST APIs. But nothing prevents you from developing an application that accepts a 'FOOBAR' method and runs some code on it.</p>
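<p>To illustrate that the command operators are substituted verbatim rather than validated, here is a sketch of a file access logger with a custom format, following the shape of Envoy's v3 file access log configuration (the output path and the exact format string are illustrative assumptions, not from the original setup):</p> <pre><code>access_log:
- name: envoy.access_loggers.file
  typed_config:
    &quot;@type&quot;: type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /dev/stdout
    log_format:
      text_format_source:
        inline_string: &quot;[%START_TIME%] \&quot;%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\&quot; %RESPONSE_CODE%\n&quot;
</code></pre> <p>With such a format, a client that sends <code>FOOBAR / HTTP/1.1</code> would simply be logged with <code>FOOBAR</code> as the method, so any validation of the field values has to happen in the system consuming the logs (or be enforced upstream via explicit filtering in the proxy).</p>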
<p>I've recently started using kOps as a tool to provision Kubernetes clusters, and from what I've seen so far, it stores its CA key and certificates in its S3 bucket, which is fine.</p> <p>But out of curiosity, would it be possible to store these in HashiCorp Vault instead, as opposed to S3?</p>
<blockquote> <p>But out of curiosity, would it be possible to store these in HashiCorp Vault instead, as opposed to S3?</p> </blockquote> <p>Yes. User <a href="https://stackoverflow.com/users/5343387/matt-schuchard" title="16,592 reputation">Matt Schuchard</a> has mentioned in the comments:</p> <blockquote> <p>Yes you can store them in the KV2 secrets engine, or use the PKI secrets engine to generate them instead.</p> </blockquote> <p>For more details look at this <a href="https://kops.sigs.k8s.io/state/" rel="nofollow noreferrer">kops documentation</a>. The most interesting part should be <a href="https://kops.sigs.k8s.io/state/#node-authentication-and-configuration" rel="nofollow noreferrer">Node authentication and configuration</a>:</p> <blockquote> <p>The vault store uses IAM auth to authenticate against the vault server and expects the vault auth plugin to be mounted on <code>/aws</code>.</p> <p>Instructions for configuring your vault server to accept IAM authentication are at <a href="https://learn.hashicorp.com/vault/identity-access-management/iam-authentication" rel="nofollow noreferrer">https://learn.hashicorp.com/vault/identity-access-management/iam-authentication</a></p> <p>To configure kOps to use the Vault store, add this to the cluster spec:</p> </blockquote> <pre><code>spec:
  secretStore: vault://&lt;vault&gt;:&lt;port&gt;/&lt;kv2 mount&gt;/clusters/&lt;clustername&gt;/secrets
  keyStore: vault://&lt;vault&gt;:&lt;port&gt;/&lt;kv2 mount&gt;/clusters/&lt;clustername&gt;/keys
</code></pre> <p>Look also at this <a href="https://learn.hashicorp.com/tutorials/vault/approle" rel="nofollow noreferrer">hashicorp site</a>.</p>
<p>I have an EKS cluster and a nodegroup running 6 nodes. For some reason nodes get marked as <code>unschedulable</code> randomly, once a week or two, and they stay that way. When I notice that, I uncordon them manually and everything works fine.</p> <p>Why does this happen, and how can I debug it, prevent it, or configure the cluster to fix it automatically?</p>
<p>In my case the problem was an <code>AWS Termination Handler</code> daemonset that was running. It was outdated and not really used in the cluster, and after removing it, the problems with nodes getting marked Unschedulable just went away.</p>
<p>I am trying to extract the pod name using the below jq query.</p> <pre class="lang-sh prettyprint-override"><code>❯ kubectl get pods -l app=mssql-primary --output jsonpath='{.items[0].metadata.name}'
mssqlag-primary-deployment-77b8974bb9-dbltl%

❯ kubectl get pods -l app=mssql-primary -o json | jq -r '.items[0].metadata.name'
mssqlag-primary-deployment-77b8974bb9-dbltl
</code></pre> <p>While they both provide the same output, the first one has a % character at the end of the pod name. Any reason why? Is there something wrong with the jsonpath representation in the first command?</p>
<p>I'm guessing <code>zsh</code> is your shell. The <code>%</code> is an indicator output by your shell to say that the last line output by <code>kubectl</code> had no newline character at the end. So it's not <em>extra</em> output, it's actually an indicator that the raw <code>kubectl</code> command outputs <em>less</em> than <code>jq</code>.</p> <p>You could explicitly add a newline to the jsonpath output if you want it:</p> <pre><code>kubectl get pods -l app=mssql-primary --output jsonpath='{.items[0].metadata.name}{&quot;\n&quot;}' </code></pre> <p>Or in the other direction you could tell <code>jq</code> not to add newlines at all by specifying <code>-j</code> instead of <code>-r</code>:</p> <pre><code>kubectl get pods -l app=mssql-primary -o json | jq -j '.items[0].metadata.name' </code></pre>
<p>I have configured a spring-boot pod and configured the <code>liveness</code> and <code>readiness</code> probes. When I start the pod, the <code>describe</code> command is showing the below output.</p> <pre><code>Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  92s                default-scheduler  Successfully assigned pradeep-ns/order-microservice-rs-8tqrv to pool-h4jq5h014-ukl3l
  Normal   Pulled     43s (x2 over 91s)  kubelet            Container image &quot;classpathio/order-microservice:latest&quot; already present on machine
  Normal   Created    43s (x2 over 91s)  kubelet            Created container order-microservice
  Normal   Started    43s (x2 over 91s)  kubelet            Started container order-microservice
  Warning  Unhealthy  12s (x6 over 72s)  kubelet            Liveness probe failed: Get &quot;http://10.244.0.206:8222/actuator/health/liveness&quot;: dial tcp 10.244.0.206:8222: connect: connection refused
  Normal   Killing    12s (x2 over 52s)  kubelet            Container order-microservice failed liveness probe, will be restarted
  Warning  Unhealthy  2s (x8 over 72s)   kubelet            Readiness probe failed: Get &quot;http://10.244.0.206:8222/actuator/health/readiness&quot;: dial tcp 10.244.0.206:8222: connect: connection refused
</code></pre> <p>The pod definition is like below</p> <pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: order-microservice-rs
  labels:
    app: order-microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-microservice
  template:
    metadata:
      name: order-microservice
      labels:
        app: order-microservice
    spec:
      containers:
        - name: order-microservice
          image: classpathio/order-microservice:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: dev
            - name: SPRING_DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  key: username
                  name: db-credentials
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: password
                  name: db-credentials
          volumeMounts:
            - name: app-config
              mountPath: /app/config
            - name: app-logs
              mountPath: /var/log
          livenessProbe:
            httpGet:
              port: 8222
              path: /actuator/health/liveness
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              port: 8222
              path: /actuator/health/readiness
            initialDelaySeconds: 10
            periodSeconds: 10
          resources:
            requests:
              memory: &quot;550Mi&quot;
              cpu: &quot;500m&quot;
            limits:
              memory: &quot;550Mi&quot;
              cpu: &quot;750m&quot;
      volumes:
        - name: app-config
          configMap:
            name: order-microservice-config
        - name: app-logs
          emptyDir: {}
      restartPolicy: Always
</code></pre> <p>If I disable the <code>liveness</code> and <code>readiness</code> probes in the <code>replica-set</code> manifest and I <code>exec</code> into the pod, I get a valid response when invoking the <code>http://localhost:8222/actuator/health/liveness</code> and <code>http://localhost:8222/actuator/health/readiness</code> endpoints. Why is my pod restarting and failing when Kubernetes invokes the <code>readiness</code> and <code>liveness</code> endpoints? Where am I going wrong?</p> <p><strong>Update</strong><br /> If I remove the <code>resources</code> section, the pods are running, but when the <code>resources</code> parameters are added, the probes fail.</p>
<p>When you limit the container / Spring application to 0.5 cores (500 millicores), the startup probably takes longer than the given liveness probe thresholds allow.</p> <p>You can either increase them, or use a startupProbe with more relaxed settings (e.g. a failureThreshold of 10). You can reduce the period for the liveness probe in that case and get faster feedback after a successful container start was detected.</p>
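<p>A sketch of what that could look like, reusing the actuator endpoint and port from the question (the exact thresholds are illustrative, tune them to your startup time): with <code>periodSeconds: 10</code> and <code>failureThreshold: 10</code>, the startupProbe gives the container up to 100 seconds to come up before the liveness probe takes over.</p> <pre><code>          startupProbe:
            httpGet:
              port: 8222
              path: /actuator/health/liveness
            periodSeconds: 10
            failureThreshold: 10
          livenessProbe:
            httpGet:
              port: 8222
              path: /actuator/health/liveness
            periodSeconds: 5
</code></pre> <p>While the startupProbe has not yet succeeded, the kubelet does not run the liveness and readiness probes at all, so a slow start under a tight CPU limit no longer triggers restarts.</p>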
<p>I have installed RabbitMQ to my Kubernetes Cluster via Google Cloud Platform's marketplace.</p> <p>I can connect to it fine in my other applications hosted in the Kubernetes Cluster, I can create queues and setup consumers from them without any problems too.</p> <p>I can temporarily port forward port 15672 so that I can access the management user interface from my machine. I can login fine and I get a list of queues and exchanges when visiting their pages. But as soon as I select a queue or an exchange to load that specific item, I get a 404 response and the following message. I get the same when trying to add a new queue.</p> <pre><code>Not found
The object you clicked on was not found; it may have been deleted on the server.
</code></pre> <p>They definitely exist, because when I go back to the listing page, they're there. It's really frustrating as it would be nice to test my microservices by simply publishing a message to a queue using RabbitMQ management, but I'm currently blocked from doing so!</p> <p>Any help would be appreciated, thanks!</p> <p><strong>Edit</strong><br /> A screenshot provided for clarity (after clicking the queue in the list): <a href="https://i.stack.imgur.com/nVww2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nVww2.png" alt="rabbitmq admin"></a></p> <p>If I try to add a new queue, I don't get that message, instead I get a 405.</p>
<p>This is because the default virtual-host is '/'. RabbitMQ admin uses this in the URL when you access the exchanges/queues pages. URL encoded it becomes '%2F'. However, the Ingress Controller (in my case nginx) converts that back to '/' so the admin app can't find that URL (hence the 404).</p> <p>The work-around I came up with was to change the <strong>default_vhost</strong> setting in rabbitmq to something without '/' in it (e.g. 'vhost').</p> <p>In the bitnami rabbitmq <a href="https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq" rel="nofollow noreferrer">Helm chart</a> I'm using, this is configured using:</p> <pre><code>rabbitmq:
  extraConfiguration: |-
    default_vhost = vhost
</code></pre> <p>You do have to update your clients to explicitly specify this new virtual-host though as they generally default to using '/'. In Spring Boot this is as simple as adding:</p> <pre><code>spring:
  rabbitmq:
    virtual-host: vhost
</code></pre>
<p>I need to run a python script from a KubernetesPodOperator, so I want to mount the python file into the Python docker Image. Reading some posts</p> <ul> <li><a href="https://stackoverflow.com/questions/57754521/how-to-mount-volume-of-airflow-worker-to-airflow-kubernetes-pod-operator">How to mount volume of airflow worker to airflow kubernetes pod operator?</a></li> <li><a href="https://stackoverflow.com/questions/60950206/mounting-volume-issue-through-kubernetespodoperator-in-gke-airflow">Mounting volume issue through KubernetesPodOperator in GKE airflow</a></li> <li><a href="https://stackoverflow.com/questions/69539197/mounting-folders-with-kubernetespodoperator-on-google-composer-airflow">Mounting folders with KubernetesPodOperator on Google Composer/Airflow</a></li> <li><a href="https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/example_dags/example_kubernetes.py#L107" rel="nofollow noreferrer">https://github.com/apache/airflow/blob/main/airflow/providers/cncf/kubernetes/example_dags/example_kubernetes.py#L107</a></li> <li><a href="https://www.aylakhan.tech/?p=655" rel="nofollow noreferrer">https://www.aylakhan.tech/?p=655</a></li> </ul> <p>it still isn't clear to me at all.</p> <p>The python file is located in the route <code>/opt/airflow/dags/test_dag</code>, so I would like to mount the entire folder and not only the script. 
I have tried with:</p> <pre><code>vol1 = k8s.V1VolumeMount(
    name='test_volume',
    mount_path='/opt/airflow/dags/test_dag'
)

volume = k8s.V1Volume(
    name='test-volume',
    persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name='test-volume'),
)

k = KubernetesPodOperator(
    task_id=&quot;dry_run_demo&quot;,
    cluster_name=&quot;eks&quot;,
    namespace=&quot;data&quot;,
    image=&quot;python:3.9-buster&quot;,
    volumes=[volume],
    volume_mounts=[vol1],
    arguments=[&quot;echo&quot;, &quot;10&quot;],
)
</code></pre> <p>But I am getting the error:</p> <blockquote> <p>Pod &quot;pod.388baaaa7c27489c9dd5f7f37ee8ce5b&quot; is invalid: spec.containers[0].volumeMounts[0].name: Not found: &quot;test_volume&quot;</p> </blockquote> <p>I am using Airflow 2.1.1 deployed in an EC2 with docker-compose and <code>apache-airflow-providers-cncf-kubernetes==3.0.1</code></p> <p>EDIT: with Elad's suggestion the question was &quot;solved&quot;. Then I got the error <code>Pod Event: FailedScheduling - persistentvolumeclaim &quot;test-volume&quot; not found</code>, so I just took out the <code>persistent_volume_claim</code> argument and I didn't get any error, but I am getting an empty directory in the pod, without any file. I have read something about creating the PersistentVolumeClaim in the namespace, but it would be very convenient to create it manually instead of dynamically with every operator</p>
<p>The error means that the names don't match: you defined <code>name='test_volume'</code> for the <code>V1VolumeMount</code> and <code>name='test-volume'</code> for the <code>V1Volume</code>.</p> <p>To solve your issue, the names should be identical.</p> <pre><code>vol1 = k8s.V1VolumeMount(
    name='test-volume',
    mount_path='/opt/airflow/dags/test_dag'
)

volume = k8s.V1Volume(
    name='test-volume',
    persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name='test-volume'),
)
</code></pre>
<p>I've been using a kubernetes ingress config file to assign a static external ip address created by GCP. The ingress and the deployment are managed by GKE.</p> <p>ingress.yaml</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: test-address
spec:
  backend:
    serviceName: test-service
    servicePort: 80
</code></pre> <p>With this yaml file, the already-created static ip address is successfully attached to the ingress.</p> <p>On the External IP Address page of the VPC Network menu, the ip is in use by a forwarding rule.</p> <p><code>Name External Address Region Type Version In use by</code></p> <p><code>test-address 12.34.56.78 asia-northeast2 Static IPv4 Forwarding rule k8s2-ab-blablablabla</code></p> <p>However, recently I tried to test Terraform to deploy the infrastructure to GCP and I made a Terraform config file exactly the same as the above ingress.yaml.</p> <p>ingress.tf</p> <pre><code>resource &quot;kubernetes_ingress&quot; &quot;test_ingress&quot; {
  metadata {
    name = &quot;test-ingress&quot;
    annotations = {
      &quot;kubernetes.io/ingress.global-static-ip-name&quot; = &quot;test-address&quot;
    }
  }

  spec {
    backend {
      service_name = &quot;test-service&quot;
      service_port = &quot;80&quot;
    }
  }
}
</code></pre> <p>After I apply this config to GCP, the ingress is created successfully but the ip address does not attach to the ingress.</p> <p>In the Ingress detail in GCP, an error occurred with the message</p> <p><code>Error syncing to GCP: error running load balancer syncing routine: loadbalancer blablablablabla does not exist: the given static IP name test-address doesn't translate to an existing static IP.</code></p> <p>And on the External IP Address page of the VPC Network menu, the IP address row at <code>In use by</code> shows <code>None</code>.</p> <p>What is the problem here? Did I miss something with Terraform?</p>
<p>As @MattBrowne said in the comments, the static IP needs to be a global one, not regional. This also fixed it for me.</p>
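<p>For reference, a GKE Ingress's <code>kubernetes.io/ingress.global-static-ip-name</code> annotation only resolves global addresses. A sketch of reserving one in Terraform (the resource label here is illustrative; the <code>name</code> mirrors the question's <code>test-address</code>):</p>

```hcl
resource "google_compute_global_address" "test_address" {
  name = "test-address"
}
```

<p>Unlike <code>google_compute_address</code>, this resource takes no <code>region</code> argument, which is what makes the IP visible to the Ingress controller's global load balancer.</p>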
<p>I'm running a Google Cloud Composer GKE cluster. I have a default node pool of 3 normal CPU nodes and one node pool with a GPU node. The GPU node pool has autoscaling activated.</p> <p>I want to run a script inside a docker container on that GPU node.</p> <p>For the GPU operating system I decided to go with cos_containerd instead of ubuntu.</p> <p>I've followed <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/gpus</a> and ran this line:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
</code></pre> <p>The GPU now shows up when I run &quot;kubectl describe&quot; on the GPU node, however my test script's debug information tells me that the GPU is not being used.</p> <p>When I connect to the autoprovisioned GPU node via ssh, I can see that I still need to run</p> <pre><code>cos-extensions install gpu
</code></pre> <p>in order to use the GPU.</p> <p>I now want to make my Cloud Composer GKE cluster run &quot;cos-extensions install gpu&quot; whenever a node is created by the autoscaler feature.</p> <p>I would like to apply something like this yaml:</p> <pre><code>#cloud-config

runcmd:
  - cos-extensions install gpu
</code></pre> <p>to my Cloud Composer GKE cluster.</p> <p>Can I do that with kubectl apply? Ideally I would like to run that yaml code only on the GPU node. How can I achieve that?</p> <p>I'm new to Kubernetes and I've already spent a lot of time on this without success. 
Any help would be much appreciated.</p> <p>Best, Phil</p> <p><strong>UPDATE:</strong> OK, thanks to Harsh I realized I have to go via DaemonSet + ConfigMap like here: <a href="https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial</a></p> <p>My GPU node has the label</p> <pre><code>gpu-type=t4
</code></pre> <p>so I've created and kubectl applied this ConfigMap:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: phils-init-script
  labels:
    gpu-type: t4
data:
  entrypoint.sh: |
    #!/usr/bin/env bash
    ROOT_MOUNT_DIR=&quot;${ROOT_MOUNT_DIR:-/root}&quot;
    chroot &quot;${ROOT_MOUNT_DIR}&quot; cos-extensions gpu install
</code></pre> <p>and here is my DaemonSet (I also kubectl applied this one):</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: phils-cos-extensions-gpu-installer
  labels:
    gpu-type: t4
spec:
  selector:
    matchLabels:
      gpu-type: t4
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: phils-cos-extensions-gpu-installer
        gpu-type: t4
    spec:
      volumes:
        - name: root-mount
          hostPath:
            path: /
        - name: phils-init-script
          configMap:
            name: phils-init-script
            defaultMode: 0744
      initContainers:
        - image: ubuntu:18.04
          name: phils-cos-extensions-gpu-installer
          command: [&quot;/scripts/entrypoint.sh&quot;]
          env:
            - name: ROOT_MOUNT_DIR
              value: /root
          securityContext:
            privileged: true
          volumeMounts:
            - name: root-mount
              mountPath: /root
            - name: phils-init-script
              mountPath: /scripts
      containers:
        - image: &quot;gcr.io/google-containers/pause:2.0&quot;
          name: pause
</code></pre> <p>but nothing happens, I get the message &quot;Pods are pending&quot;.</p> <p>During the run of the script I connect via ssh to the GPU node and can see that the ConfigMap shell code didn't get applied.</p> <p>What am I missing here?</p> <p>I'm desperately trying to make this work.</p> <p>Best, Phil</p> <p>Thanks for all your help so far!</p>
<blockquote> <p>Can i do that with kubectl apply ? Ideally I would like to only run that yaml code onto the GPU node. How can I achieve that?</p> </blockquote> <p>Yes, you can run a DaemonSet, which will run the command on each node.</p> <p>Since you are on GKE, the DaemonSet will also run the command or script on new nodes that get scaled up.</p> <p>A DaemonSet is mainly for running an application or deployment on every available node in the cluster.</p> <p>We can leverage this DaemonSet to run the command on every existing node and on every upcoming one.</p> <p>Example <strong>YAML</strong> :</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-initializer
  labels:
    app: default-init
spec:
  selector:
    matchLabels:
      app: default-init
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: node-initializer
        app: default-init
    spec:
      volumes:
        - name: root-mount
          hostPath:
            path: /
        - name: entrypoint
          configMap:
            name: entrypoint
            defaultMode: 0744
      initContainers:
        - image: ubuntu:18.04
          name: node-initializer
          command: [&quot;/scripts/entrypoint.sh&quot;]
          env:
            - name: ROOT_MOUNT_DIR
              value: /root
          securityContext:
            privileged: true
          volumeMounts:
            - name: root-mount
              mountPath: /root
            - name: entrypoint
              mountPath: /scripts
      containers:
        - image: &quot;gcr.io/google-containers/pause:2.0&quot;
          name: pause
</code></pre> <p>Github link for example : <a href="https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial</a></p> <p>Exact deployment step : <a href="https://cloud.google.com/solutions/automatically-bootstrapping-gke-nodes-with-daemonsets#deploying_the_daemonset" rel="nofollow noreferrer">https://cloud.google.com/solutions/automatically-bootstrapping-gke-nodes-with-daemonsets#deploying_the_daemonset</a></p> <p>Full article : <a href="https://cloud.google.com/solutions/automatically-bootstrapping-gke-nodes-with-daemonsets" rel="nofollow noreferrer">https://cloud.google.com/solutions/automatically-bootstrapping-gke-nodes-with-daemonsets</a></p>
<p>I'm trying to deploy a cluster with self-managed node groups. No matter what config options I use, I always come up with the following error:</p> <p><strong>Error: Post &quot;</strong><a href="http://localhost/api/v1/namespaces/kube-system/configmaps" rel="noreferrer"><strong>http://localhost/api/v1/namespaces/kube-system/configmaps</strong></a><strong>&quot;: dial tcp 127.0.0.1:80: connect: connection refused with module.eks-ssp.kubernetes_config_map.aws_auth[0] on .terraform/modules/eks-ssp/aws-auth-configmap.tf line 19, in resource &quot;kubernetes_config_map&quot; &quot;aws_auth&quot;: resource &quot;kubernetes_config_map&quot; &quot;aws_auth&quot; {</strong></p> <p>The .tf file looks like this:</p> <pre><code>module &quot;eks-ssp&quot; {
  source = &quot;github.com/aws-samples/aws-eks-accelerator-for-terraform&quot;

  # EKS CLUSTER
  tenant            = &quot;DevOpsLabs2&quot;
  environment       = &quot;dev-test&quot;
  zone              = &quot;&quot;
  terraform_version = &quot;Terraform v1.1.4&quot;

  # EKS Cluster VPC and Subnet mandatory config
  vpc_id             = &quot;xxx&quot;
  private_subnet_ids = [&quot;xxx&quot;, &quot;xxx&quot;, &quot;xxx&quot;, &quot;xxx&quot;]

  # EKS CONTROL PLANE VARIABLES
  create_eks         = true
  kubernetes_version = &quot;1.19&quot;

  # EKS SELF MANAGED NODE GROUPS
  self_managed_node_groups = {
    self_mg = {
      node_group_name        = &quot;DevOpsLabs2&quot;
      subnet_ids             = [&quot;xxx&quot;, &quot;xxx&quot;, &quot;xxx&quot;, &quot;xxx&quot;]
      create_launch_template = true
      launch_template_os     = &quot;bottlerocket&quot; # amazonlinux2eks or bottlerocket or windows
      custom_ami_id          = &quot;xxx&quot;
      public_ip              = true # Enable only for public subnets
      pre_userdata           = &lt;&lt;-EOT
        yum install -y amazon-ssm-agent \
        systemctl enable amazon-ssm-agent &amp;&amp; systemctl start amazon-ssm-agent \
      EOT

      disk_size     = 20
      instance_type = &quot;t2.small&quot;
      desired_size  = 2
      max_size      = 10
      min_size      = 2
      capacity_type = &quot;&quot; # Optional Use this only for SPOT capacity as capacity_type = &quot;spot&quot;

      k8s_labels = {
        Environment = &quot;dev-test&quot;
        Zone        = &quot;&quot;
        WorkerType  = &quot;SELF_MANAGED_ON_DEMAND&quot;
      }
      additional_tags = {
        ExtraTag    = &quot;t2x-on-demand&quot;
        Name        = &quot;t2x-on-demand&quot;
        subnet_type = &quot;public&quot;
      }
      create_worker_security_group = false # Creates a dedicated sec group for this Node Group
    },
  }
}

module &quot;eks-ssp-kubernetes-addons&quot; {
  source = &quot;github.com/aws-samples/aws-eks-accelerator-for-terraform//modules/kubernetes-addons&quot;

  eks_cluster_id = module.eks-ssp.eks_cluster_id

  # EKS Addons
  enable_amazon_eks_vpc_cni            = true
  enable_amazon_eks_coredns            = true
  enable_amazon_eks_kube_proxy         = true
  enable_amazon_eks_aws_ebs_csi_driver = true

  #K8s Add-ons
  enable_aws_load_balancer_controller = true
  enable_metrics_server               = true
  enable_cluster_autoscaler           = true
  enable_aws_for_fluentbit            = true
  enable_argocd                       = true
  enable_ingress_nginx                = true

  depends_on = [module.eks-ssp.self_managed_node_groups]
}
</code></pre> <p>Providers:</p> <pre><code>terraform {
  backend &quot;remote&quot; {}

  required_providers {
    aws = {
      source  = &quot;hashicorp/aws&quot;
      version = &quot;&gt;= 3.66.0&quot;
    }
    kubernetes = {
      source  = &quot;hashicorp/kubernetes&quot;
      version = &quot;&gt;= 2.6.1&quot;
    }
    helm = {
      source  = &quot;hashicorp/helm&quot;
      version = &quot;&gt;= 2.4.1&quot;
    }
  }
}
</code></pre>
<p>Based on the example provided in the Github repo [1], my guess is that the <code>provider</code> configuration blocks are missing for this to work as expected. Looking at the code provided in the question, it seems that the following needs to be added:</p> <pre><code>data &quot;aws_region&quot; &quot;current&quot; {}

data &quot;aws_eks_cluster&quot; &quot;cluster&quot; {
  name = module.eks-ssp.eks_cluster_id
}

data &quot;aws_eks_cluster_auth&quot; &quot;cluster&quot; {
  name = module.eks-ssp.eks_cluster_id
}

provider &quot;aws&quot; {
  region = data.aws_region.current.id
  alias  = &quot;default&quot; # this should match the named profile you used if at all
}

provider &quot;kubernetes&quot; {
  experiments {
    manifest_resource = true
  }
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
</code></pre> <p>If <code>helm</code> is also required, I think the following block [2] needs to be added as well:</p> <pre><code>provider &quot;helm&quot; {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    token                  = data.aws_eks_cluster_auth.cluster.token
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  }
}
</code></pre> <p>Provider argument reference for <code>kubernetes</code> and <code>helm</code> is in [3] and [4] respectively.</p> <hr /> <p>[1] <a href="https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47" rel="noreferrer">https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-self-managed-node-groups/main.tf#L23-L47</a></p> <p>[2] <a href="https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55" rel="noreferrer">https://github.com/aws-samples/aws-eks-accelerator-for-terraform/blob/main/examples/eks-cluster-with-eks-addons/main.tf#L49-L55</a></p> <p>[3] <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference" rel="noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference</a></p> <p>[4] <a href="https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference" rel="noreferrer">https://registry.terraform.io/providers/hashicorp/helm/latest/docs#argument-reference</a></p>
<p>I have created a deployment on GKE.</p> <p>When I run <code>kubectl get pods</code> from my local machine, it returns the existing pods and the deployment is running, but if I run <code>docker ps</code> on the worker node via SSH, it doesn't return any containers.</p> <p>I used cos_containerd. I have one node in my cluster and the pod has been scheduled there!</p> <p>Does anyone have any idea?</p>
<p>GKE is moving away from <code>docker</code> as the container runtime; the Docker CLI may still be available on the node, but it is not used.</p> <p>To list the containers, use the below command:</p> <pre><code>crictl ps </code></pre> <p>For the complete reference: <a href="https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md</a></p>
<p>I'm working on a microservice feature from scratch and have to make some design choices.</p> <p>My needs are :</p> <ul> <li>be able to periodically run a batch job (CronJob) that aggregates data</li> <li>then send the data set to a service for processing</li> <li>have resiliency over service failure (pod rescheduled, pod crashing, ...)</li> <li>the data processing involves long database queries and even BigQuery queries, so parallelization is required</li> </ul> <p>I came up with an architecture in <strong>two parts</strong>, composed of a Kubernetes CronJob doing the initial data aggregation and publishing messages into a queue. This queue is then consumed by the service itself, inside a Go routine, so that the load is shared across all instances (pods) of the service and resiliency is ensured.</p> <p>Now I am stuck in a decision dilemma between :</p> <ul> <li>decoupling the queue consumer from the service into separate pods</li> <li>or keeping the queue consumer in the same component (in the sense of the executable) as the service</li> </ul> <p>While on one hand I find it more logical, in terms of architecture, to decouple the consumer from the service (separate executables, separate pods), for example so both can scale independently inside the cluster, on the other hand the initial reason for introducing a queue into the design was to have resiliency against service failures and the ability to dispatch the load across the instances that consume messages, and decoupling looks <strong>overkill</strong> to me.</p> <p>I didn't choose the simple option of a CronJob doing API calls to the service because only the service instance receiving the call would do the data processing.</p> <p>Is it nonsense, in terms of design, to have a service component that also runs a queue consumer, knowing that the service will have multiple instances of itself?</p> <p>What kind of advantages or disadvantages do I have if I decouple both or keep them coupled?</p>
<p>It is easier to maintain/manage if your <strong>HTTP</strong> service (API) runs in one pod and your consumer runs as a separate service pod.</p> <p>OOM kills won't affect each other (if both run in different pods, considering a memory-leak scenario).</p> <p>Things might get weird when you scale your system.</p> <p>Consider that you have 15-20 queues, each receiving a different type of message for processing.</p> <p>If you run each consumer separately in Kubernetes, you might need 15-20 deployments.</p> <p>Will you manage 15-20 different repos, one for each different consumer script? (Mostly, people follow this trend.) If <strong>not</strong>, you will end up creating a <strong>mono</strong> repo; then it's on the DevOps side to make sure that when a single script gets changed in that mono repo, only that build gets deployed.</p>
<p>I have created a service account SA1 in namespace NS1 and set up the full configuration for SA1 (workload identity in GCP). I need to use the service account SA1 in pods from different namespaces. For now I have the pods in namespace NS1 using SA1:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: NS1
spec:
  serviceAccountName: SA1
</code></pre>
<p>ServiceAccount is a namespaced resource in Kubernetes, meaning that it can only be referenced from pods deployed in the same namespace.</p> <p>This is by design: namespaces act as logical containers to which you apply access policies, and a pod in one namespace should not be able to &quot;steal&quot; the service account from another (possibly unrelated) namespace.</p>
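<p>Since ServiceAccounts can't be shared across namespaces, the usual workaround on GKE is to create a parallel ServiceAccount in each namespace and bind each one to the same GCP service account via Workload Identity. A sketch (the GCP service account email and project are placeholders, and the names mirror the question's SA1/NS2):</p>

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: SA1
  namespace: NS2
  annotations:
    iam.gke.io/gcp-service-account: my-gcp-sa@my-project.iam.gserviceaccount.com
```

<p>You would then also grant the new namespace's identity the <code>roles/iam.workloadIdentityUser</code> role on the GCP service account, with the member <code>serviceAccount:my-project.svc.id.goog[NS2/SA1]</code>, just as was presumably done for NS1/SA1.</p>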
<p>I want to edit my <code>nginx.conf</code> file present inside the Nginx controller pod in AKS, but editing is not working via the exec command. Is there any other way I could edit my <code>nginx.conf</code>?</p> <p>The command I tried:</p> <pre><code>kubectl exec -it nginx-nginx-ingress-controller -n nginx -- cat /etc/nginx/nginx.conf </code></pre>
<p>As mentioned by CrowDev, it's not good practice to update the config of the Nginx controller like that.</p> <p>The Nginx controller is the backend of the <strong>Ingress</strong>; you can use the <strong>ConfigMap</strong> to update the configuration of the Nginx controller and redeploy the controller pod.</p> <p>Some of the <strong>Nginx controller</strong> config can also be overridden using the Ingress config and the annotations inside it.</p> <p>You can read more about annotations here : <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/</a></p> <p><strong>Update</strong> :</p> <p>You can separate out different Ingresses by their <strong>name</strong>. If you want to manage different configs or headers, you need separate Ingresses, one per config.</p> <p>Example :</p> <p><strong>ingress : 1</strong></p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-one
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;3600&quot;
spec:
  rules:
    - http:
        paths:
          - path: /one
            pathType: Prefix
            backend:
              service:
                name: one
                port:
                  number: 80
</code></pre> <p><strong>ingress : 2</strong></p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-two
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /two
            pathType: Prefix
            backend:
              service:
                name: two
                port:
                  number: 80
</code></pre> <p>Now <code>nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;3600&quot;</code> will apply to only one ingress/service.</p>
<p>I don't know if this is an error from AWS or something. I created an IAM user and gave it full admin policies. I then used this user to create an EKS cluster using the <code>eksctl</code> CLI, but when I logged in to the AWS console with the <strong>root user</strong> I got the below error while trying to access the cluster nodes.</p> <p><em><strong>Your current user or role does not have access to Kubernetes objects on this EKS cluster This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster’s auth config map.</strong></em></p> <p>I have these questions</p> <ol> <li>Does the root user not have full access to view every resource from the console?</li> <li>If the above is true, does it mean that when I create a resource from the CLI I must log in with the same user to view it?</li> <li>Or is there a way I could attach policies to the root user? I didn't see anything like that in the console.</li> </ol> <p>AWS itself does not recommend creating access keys for the root user and using them for programmatic access, so I'm so confused right now. Someone help</p> <p>All questions I have seen so far and the link to the doc <a href="https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting_iam.html#security-iam-troubleshoot-cannot-view-nodes-or-workloads" rel="noreferrer">here</a> are talking about a user or role created in AWS IAM and not the root user.</p>
<p>If you're logged in with the root user and get this error, run the below command to edit the <code>aws-auth</code> configMap:</p> <pre><code>kubectl edit configmap aws-auth -n kube-system </code></pre> <p>Then go down to <code>mapUsers</code> and add the following (replace <code>[account_id]</code> with your Account ID)</p> <pre><code>mapUsers: |
  - userarn: arn:aws:iam::[account_id]:root
    groups:
      - system:masters
</code></pre>
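<p>Alternatively, since the cluster was created with <code>eksctl</code>, the same mapping can be added without hand-editing the ConfigMap (a sketch; cluster name, region, and username are placeholders):</p>

```shell
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::[account_id]:root \
  --group system:masters \
  --username admin
```

<p>This writes the equivalent <code>mapUsers</code> entry into <code>aws-auth</code> for you.</p>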