<p>I have 2 pods, one that is writing files to a persistent volume and the other one supposedly reads those files to make some calculations.</p> <p>The first pod writes the files successfully and when I display the content of the persistent volume using <code>print(os.listdir(persistent_volume_path))</code> I get all the expected files. However, the same command on the second pod shows an empty directory. (The mountPath directory <code>/data</code> is created but empty.)</p> <p>This is the TFJob yaml file:</p> <pre><code>apiVersion: kubeflow.org/v1 kind: TFJob metadata: name: pod1 namespace: my-namespace spec: cleanPodPolicy: None tfReplicaSpecs: Worker: replicas: 1 restartPolicy: Never template: spec: containers: - name: tensorflow image: my-image:latest imagePullPolicy: Always command: - &quot;python&quot; - &quot;./program1.py&quot; - &quot;--data_path=./dataset.csv&quot; - &quot;--persistent_volume_path=/data&quot; volumeMounts: - mountPath: &quot;/data&quot; name: my-pv volumes: - name: my-pv persistentVolumeClaim: claimName: my-pvc </code></pre> <p>(respectively pod2 and program2.py for the second pod)</p> <p>And this is the volume configuration:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc namespace: my-namespace labels: type: local app: tfjob spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <hr /> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: my-pv namespace: my-namespace labels: type: local app: tfjob spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/data&quot; </code></pre> <p>Does anyone have any idea where's the problem exactly and how to fix it?</p>
<p>When two pods need concurrent access to a shared Persistent Volume with <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a> <code>ReadWriteOnce</code>, they must be running on the <strong>same node</strong>, since a volume with this access mode can only be mounted on a single node at a time.</p> <p>To achieve this, some form of <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Pod Affinity</a> must be applied, so that the pods are scheduled to the same node.</p>
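<p>A minimal sketch of such an affinity, assuming the writer pod's template carries a label such as <code>app: writer</code> (a hypothetical label for illustration) - the reader pod would then declare:</p> <pre><code>affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: writer
      topologyKey: kubernetes.io/hostname
</code></pre> <p>With <code>topologyKey: kubernetes.io/hostname</code>, the scheduler will only place the reader pod on a node that already runs a pod matching the selector, so both pods can mount the <code>ReadWriteOnce</code> volume.</p>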
<p>I'm passing the following string through values.yaml:</p> <pre><code>urls: http://example.com http://example2.com http://example3.com </code></pre> <p>Is there a way to create a list from this, so I can then do something like:</p> <pre><code>{{ range $urls }} {{ . }} {{ end }} </code></pre> <p>The problem is I'm passing the urls var in a dynamic fashion, and I also can't avoid using a single string for that (ArgoCD ApplicationSet won't let me pass a list).</p>
<p>Basically all you need to do is add this line to your template <code>yaml</code>:</p> <pre><code>{{- $urls := splitList &quot; &quot; .Values.urls }} </code></pre> <p>It will import the <code>urls</code> string from <code>values.yaml</code> <a href="http://masterminds.github.io/sprig/string_slice.html" rel="nofollow noreferrer">as a list</a>, so you will be able to run the code you posted in your question.</p> <p><strong>Simple example based on the <a href="https://helm.sh/docs/chart_template_guide/control_structures/" rel="nofollow noreferrer">helm docs</a></strong>:</p> <ol> <li><p>Let's take the simple chart used in the <a href="https://helm.sh/docs/chart_template_guide/getting_started/" rel="nofollow noreferrer">helm docs</a> and prepare it:</p> <pre><code>helm create mychart rm -rf mychart/templates/* </code></pre> </li> <li><p>Edit <code>values.yaml</code> and insert the <code>urls</code> string:</p> <pre><code>urls: http://example.com http://example2.com http://example3.com </code></pre> </li> <li><p>Create a ConfigMap in the <code>templates</code> folder (name it <code>configmap.yaml</code>):</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-configmap data: {{- $urls := splitList &quot; &quot; .Values.urls }} urls: |- {{- range $urls }} - {{ . }} {{- end }} </code></pre> <p>As you can see, I'm using your loop (with &quot;- &quot; to avoid creating empty lines).</p> </li> <li><p>Install the chart and check it:</p> <pre><code>helm install example ./mychart/ helm get manifest example </code></pre> <p>Output:</p> <pre><code>--- # Source: mychart/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: example-configmap data: urls: |- - http://example.com - http://example2.com - http://example3.com </code></pre> </li> </ol>
<p>I have a redis DB setup running on my minikube cluster. I shut down my minikube and started it after 3 days, and I can see my redis pod is failing to come up with the below error from the pod log:</p> <pre><code>Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix &lt;filename&gt;. </code></pre> <p>Below is my StatefulSet yaml file for the redis master, deployed via a helm chart:</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: annotations: meta.helm.sh/release-name: test-redis meta.helm.sh/release-namespace: test generation: 1 labels: app.kubernetes.io/component: master app.kubernetes.io/instance: test-redis app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: redis helm.sh/chart: redis-14.8.11 name: test-redis-master namespace: test resourceVersion: &quot;191902&quot; uid: 3a4e541f-154f-4c54-a379-63974d90089e spec: podManagementPolicy: OrderedReady replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/component: master app.kubernetes.io/instance: test-redis app.kubernetes.io/name: redis serviceName: test-redis-headless template: metadata: annotations: checksum/configmap: dd1f90e0231e5f9ebd1f3f687d534d9ec53df571cba9c23274b749c01e5bc2bb checksum/health: xxxxx creationTimestamp: null labels: app.kubernetes.io/component: master app.kubernetes.io/instance: test-redis app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: redis helm.sh/chart: redis-14.8.11 spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/component: master app.kubernetes.io/instance: test-redis app.kubernetes.io/name: redis namespaces: - tyk topologyKey: kubernetes.io/hostname weight: 1 containers: - args: - -c - /opt/bitnami/scripts/start-scripts/start-master.sh command: - /bin/bash env: - name: BITNAMI_DEBUG value: &quot;false&quot; - name: REDIS_REPLICATION_MODE value: master - name: ALLOW_EMPTY_PASSWORD value: 
&quot;no&quot; - name: REDIS_PASSWORD valueFrom: secretKeyRef: key: redis-password name: test-redis - name: REDIS_TLS_ENABLED value: &quot;no&quot; - name: REDIS_PORT value: &quot;6379&quot; image: docker.io/bitnami/redis:6.2.5-debian-10-r11 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - sh - -c - /health/ping_liveness_local.sh 5 failureThreshold: 5 initialDelaySeconds: 20 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 6 name: redis ports: - containerPort: 6379 name: redis protocol: TCP readinessProbe: exec: command: - sh - -c - /health/ping_readiness_local.sh 1 failureThreshold: 5 initialDelaySeconds: 20 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 2 resources: {} securityContext: runAsUser: 1001 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /opt/bitnami/scripts/start-scripts name: start-scripts - mountPath: /health name: health - mountPath: /data name: redis-data - mountPath: /opt/bitnami/redis/mounted-etc name: config - mountPath: /opt/bitnami/redis/etc/ name: redis-tmp-conf - mountPath: /tmp name: tmp dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1001 serviceAccount: test-redis serviceAccountName: test-redis terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 493 name: test-redis-scripts name: start-scripts - configMap: defaultMode: 493 name: test-redis-health name: health - configMap: defaultMode: 420 name: test-redis-configuration name: config - emptyDir: {} name: redis-tmp-conf - emptyDir: {} name: tmp updateStrategy: rollingUpdate: partition: 0 type: RollingUpdate volumeClaimTemplates: - apiVersion: v1 kind: PersistentVolumeClaim metadata: creationTimestamp: null labels: app.kubernetes.io/component: master app.kubernetes.io/instance: test-redis app.kubernetes.io/name: redis name: redis-data spec: accessModes: - ReadWriteOnce resources: requests: storage: 8Gi volumeMode: Filesystem status: phase: 
Pending </code></pre> <p>Please let me know your suggestions on how I can fix this.</p>
<p>I am not a Redis expert, but from what I can see:</p> <pre><code>kubectl describe pod red3-redis-master-0 ... Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix &lt;filename&gt; ... </code></pre> <p>This means that your <code>appendonly.aof</code> file was corrupted with invalid byte sequences in the middle.</p> <p>How can we proceed if redis-master is not working?</p> <ul> <li>Verify the <code>pvc</code> attached to the <code>redis-master-pod</code>:</li> </ul> <pre><code>kubectl get pvc NAME STATUS VOLUME redis-data-red3-redis-master-0 Bound pvc-cf59a0b2-a3ee-4f7f-9f07-8f4922518359 </code></pre> <ul> <li>Create a new <code>redis-client</code> pod with the same <code>pvc</code> <code>redis-data-red3-redis-master-0</code>:</li> </ul> <pre><code>cat &lt;&lt;EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: redis-client spec: volumes: - name: data persistentVolumeClaim: claimName: redis-data-red3-redis-master-0 containers: - name: redis image: docker.io/bitnami/redis:6.2.3-debian-10-r0 command: [&quot;/bin/bash&quot;] args: [&quot;-c&quot;, &quot;sleep infinity&quot;] volumeMounts: - mountPath: &quot;/tmp&quot; name: data EOF </code></pre> <ul> <li>Back up your files:</li> </ul> <pre><code>kubectl cp redis-client:/tmp . </code></pre> <ul> <li>Repair the appendonly.aof file:</li> </ul> <pre><code>kubectl exec -it redis-client -- /bin/bash cd /tmp # make copy of appendonly.aof file: cp appendonly.aof appendonly.aofbackup # verify appendonly.aof file: redis-check-aof appendonly.aof ... 0x 38: Expected prefix '*', got: '&quot;' AOF analyzed: size=62, ok_up_to=56, ok_up_to_line=13, diff=6 AOF is not valid. Use the --fix option to try fixing it. ... 
# repair appendonly.aof file: redis-check-aof --fix appendonly.aof # compare files using diff: diff appendonly.aof appendonly.aofbackup </code></pre> <p>Note:</p> <blockquote> <p><a href="https://redis.io/topics/persistence" rel="noreferrer">As per docs</a>:</p> <p><strong>The best thing to do is</strong> to <strong>run the redis-check-aof utility, initially without the --fix option</strong>, then understand the problem, jump at the given offset in the file, <strong>and see if it is possible to manually repair the file</strong>: the AOF uses the same format of the Redis protocol and is quite simple to fix manually. <strong>Otherwise it is possible to let the utility fix the file</strong> for us, <strong>but in that case all the AOF portion from the invalid part to the end of the file may be discarded</strong>, <strong>leading to a massive amount of data loss if the corruption happened to be in the initial part of the file</strong>.</p> </blockquote> <p>In addition as described in the comments by <a href="https://stackoverflow.com/questions/68889647/redis-pod-failing/68902852#comment121826920_68902852">@Miffa Young</a> you can verify where your data is stored using <code>k8s.io/minikube-hostpath provisioner</code>:</p> <pre><code>kubectl get pv ... NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM pvc-cf59a0b2-a3ee-4f7f-9f07-8f4922518359 8Gi RWO Delete Bound default/redis-data-red3-redis-master-0 ... kubectl describe pv pvc-cf59a0b2-a3ee-4f7f-9f07-8f4922518359 ... Source: Type: HostPath (bare host directory volume) Path: /tmp/hostpath-provisioner/default/redis-data-red3-redis-master-0 ... 
</code></pre> <p>Your redis instance is failing because your <code>appendonly.aof</code> is malformed and stored permanently at this location.</p> <p>You can ssh into your VM:</p> <pre><code>minikube -p redis ssh cd /tmp/hostpath-provisioner/default/redis-data-red3-redis-master-0 # from there you can backup/repair/remove your files: </code></pre> <p>Another solution is to install this chart under a new name; in that case a new set of PV/PVC for the redis StatefulSet will be created.</p>
<p>I wish to run a Spring Batch application in Azure Kubernetes Service.</p> <p>At present, my on-premise VM has the below configuration:</p> <ul> <li>CPU Speed: 2,593</li> <li>CPU Cores: 4</li> </ul> <p>My application uses multithreading (~15 threads).</p> <p>How do I define the CPU in AKS?</p> <pre><code>resources: limits: cpu: &quot;4&quot; requests: cpu: &quot;0.5&quot; args: - -cpus - &quot;4&quot; </code></pre> <p><strong>Reference:</strong> <a href="https://stackoverflow.com/questions/53276398/kubernetes-cpu-multithreading">Kubernetes CPU multithreading</a></p> <p><strong>AKS Node Pool:</strong> <a href="https://i.stack.imgur.com/HMpKg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HMpKg.png" alt="enter image description here" /></a></p>
<p>I'm afraid there is no easy answer to your question. Planning the right size of VM Node Pools for a Kubernetes cluster, so that it appropriately fits your workload's resource-consumption requirements, is a constant effort for cluster operators and requires you to take many factors into account. Let's mention a few of them:</p> <ol> <li><p>What Quality of Service (QoS) class (Guaranteed, Burstable, BestEffort) should I specify for my Pod application, and how many of them do I plan to run?</p> </li> <li><p>Do I really know the actual usage of CPU/Memory resources by my app vs. how much of the VM's compute resources stays idle? (Is any on-prem monitoring solution in place right now that could prove it, or that could easily be moved to an in-cluster Kubernetes one?)</p> </li> <li><p>Is my cluster a multi-tenant environment, where I need to share cluster resources with different teams?</p> </li> <li><p>Node (VM) capacity is not the same as the total resources available to workloads.</p> </li> </ol> <p>You should think here in terms of the cluster's Allocatable resources:</p> <p><strong>Allocatable</strong> = Node Capacity - <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kube-reserved</a> - <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">system-reserved</a></p> <p>In the case of the <em>Standard_D16ds_v4</em> VM size in Azure, you would have 14 CPU cores at your workloads' disposal, not 16 as assumed earlier.</p> <h2></h2> <p><strong>I hope you are aware</strong> that the number of CPUs specified through args:</p> <pre><code> args: - -cpus - &quot;2&quot; </code></pre> <p>is an app-specific approach (in this case the 'stress' utility written in Go), not a general way to spawn a declared number of threads per CPU.</p> <h2></h2> <p><strong>My suggestion:</strong></p> <p>To avoid over-provisioning or under-provisioning of cluster resources to your workload application (requested resources vs. actually utilized resources), and to optimize the costs and performance of your applications, I would in your place do a preliminary sizing estimation of the VM Node Pool size and type required by your Spring Boot multithreaded app, and first familiarize yourself with concepts like <code>bin-packing</code> and <code>app right-sizing</code>. For these last two topics I don't know a better public guide than the one recently published by the GCP tech team:</p> <blockquote> <p>&quot;<a href="https://cloud.google.com/architecture/monitoring-gke-clusters-for-cost-optimization-using-cloud-monitoring" rel="nofollow noreferrer">Monitoring gke-clusters for cost optimization using cloud monitoring</a>&quot;</p> </blockquote> <p>I would encourage you to find an answer to your question yourself: do a proof of concept on GKE first (with the <a href="https://cloud.google.com/free" rel="nofollow noreferrer">free trial</a>), replace the demo app in the guide above with your own workload, then come back here and share your observations; it would be valuable for others with a similar task!</p>
<p>I am trying to connect my Kubernetes Cluster in Digital Ocean with a Managed Database.</p> <p>I need to add the <code>CA CERTIFICATE</code> that is a file with extension <code>cer</code>. Is this the right way to add this file/certificate to a secret?</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: secret-db-ca type: kubernetes.io/tls data: .tls.ca: | &quot;&lt;base64 encoded ~/.digitalocean-db.cer&gt;&quot; </code></pre>
<p><strong>How to create a secret from a certificate</strong></p> <hr /> <p>The easiest and fastest way is to create a secret from the command line:</p> <pre><code>kubectl create secret generic secret-db-ca --from-file=.tls.ca=digitalocean-db.cer </code></pre> <p>Please note that the type of this secret is <code>generic</code>, not <code>kubernetes.io/tls</code>, because the <code>tls</code> one requires both keys to be provided: <code>tls.key</code> and <code>tls.crt</code>.</p> <p>It's also possible to create a secret from a manifest; however, you will need to provide the full <code>base64 encoded</code> string in the data field and use the type <code>Opaque</code> in the manifest (this is the same as <code>generic</code> from the command line).</p> <p>It will look like:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: secret-db-ca type: Opaque data: .tls.ca: | LS0tLS1CRUdJTiBDRVJ.......... </code></pre> <p>The option you tried to use is intended for <code>docker config</code> files. Please see <a href="https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets" rel="nofollow noreferrer">Docker config - secrets</a>.</p> <hr /> <p><strong>Note!</strong> I tested the above with a <code>cer</code> certificate.</p> <p>DER (Distinguished Encoding Rules) is a binary encoding for X.509 certificates and private keys; such files do not contain plain text (extensions .cer and .der). The secret was saved in <code>etcd</code> (generally speaking, the database for the kubernetes cluster); however, there may be issues with the workability of secrets based on this type of certificate.</p> <p>There is a chance that a different type/extension of certificate should be used (Digital Ocean has a lot of useful and good documentation).</p> <hr /> <p>Please refer to the <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secrets in kubernetes page</a>.</p>
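<p>Once the secret exists, it can be consumed by mounting it as a volume - a minimal sketch, with the pod name, image and mount path chosen purely for illustration:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-using-db-ca
spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: db-ca
      mountPath: "/etc/ssl/db"
      readOnly: true
  volumes:
  - name: db-ca
    secret:
      secretName: secret-db-ca
</code></pre> <p>Each key in the secret becomes a file under the mount path, so the certificate will appear as <code>/etc/ssl/db/.tls.ca</code>.</p>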
<p>I have installed a Kubernetes cluster using <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">kind k8s</a> as it was easier to set up and run in my local VM. I also installed Docker separately. I then created a docker image for a Spring Boot application I built that prints messages to stdout. It was then added to the <strong>kind k8s</strong> local registry. Using this newly created local image, I created a deployment in the kubernetes cluster using the <strong>kubectl apply -f config.yaml</strong> CLI command. Using a similar method I've also deployed <strong>fluentd</strong>, hoping to collect logs from <code>/var/log/containers</code>, which would be mounted into the fluentd container.</p> <p>I noticed the <code>/var/log/containers/</code> symlink doesn't exist. However there is <code>/var/lib/docker/containers/</code>, and it has folders for some containers that were created in the past. None of the new container IDs seem to exist in <code>/var/lib/docker/containers/</code> either.</p> <p>I can see logs in the console when I run <code>kubectl logs pod-name</code>, even though I'm unable to find the logs in local storage.</p> <p>Following the answer in another <a href="https://stackoverflow.com/questions/67931771/logs-of-pods-missing-from-var-logs-in-kubernetes">thread</a> given by a stackoverflow member, I was able to get some information, but not all.</p> <p>I have confirmed Docker is configured with the json logging driver by running the following command: <code>docker info | grep -i logging</code></p> <p>When I run the following command (found in the thread given above) I can get the container ID. 
<code>kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'</code></p> <p>However I cannot use it with <code>docker inspect</code>, as Docker is not aware of such a container, which I assume is due to the fact that it is managed by the <strong>kind</strong> control plane.</p> <p>I would appreciate it if the experts in the forum could help identify where the logs are written and recreate the <code>/var/log/containers</code> symlink to access the container logs.</p>
<p>It's absolutely normal that your locally installed Docker doesn't have the containers running in pods created by kind Kubernetes. Let me explain why.</p> <p>First, we need to figure out why kind Kubernetes actually needs Docker. It needs it <strong>not</strong> for running containers inside pods. It needs Docker to <strong><a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">create a container which will be the Kubernetes node</a></strong> - and on this container you will have pods, which will have the containers that you are looking for.</p> <blockquote> <p><a href="https://sigs.k8s.io/kind" rel="nofollow noreferrer">kind</a> is a tool for running local Kubernetes clusters using Docker container “nodes”.</p> </blockquote> <p>So basically the layers are: your VM -&gt; a container hosted on your VM's docker which is acting as a Kubernetes node -&gt; on this container there are pods -&gt; in those pods are containers.</p> <p>In the kind <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">quickstart section</a> you can find more detailed information about the image used by kind:</p> <blockquote> <p>This will bootstrap a Kubernetes cluster using a pre-built <a href="https://kind.sigs.k8s.io/docs/design/node-image" rel="nofollow noreferrer">node image</a>. 
Prebuilt images are hosted at <a href="https://hub.docker.com/r/kindest/node/" rel="nofollow noreferrer"><code>kindest/node</code></a>, but to find images suitable for a given release you should currently check the <a href="https://github.com/kubernetes-sigs/kind/releases" rel="nofollow noreferrer">release notes</a> for your kind version (check with <code>kind version</code>), where you'll find a complete listing of images created for a kind release.</p> </blockquote> <p>Back to your question, let's find the missing containers!</p> <p>On my local VM, I set up <a href="https://kind.sigs.k8s.io/docs/user/quick-start/#installation" rel="nofollow noreferrer">kind Kubernetes</a> and installed the <code>kubectl</code> tool. Then, I created an example <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">nginx-deployment</a>. By running <code>kubectl get pods</code> I can confirm the pods are working.</p> <p>Let's find the container which is acting as the node by running <code>docker ps -a</code>:</p> <pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1d2892110866 kindest/node:v1.21.1 &quot;/usr/local/bin/entr…&quot; 50 minutes ago Up 49 minutes 127.0.0.1:43207-&gt;6443/tcp kind-control-plane </code></pre> <p>Okay, now we can exec into it and find the containers. 
Note that the <code>kindest/node</code> image does not use docker as the container runtime; the containers inside the node are managed by a CRI runtime and can be inspected with <a href="https://github.com/kubernetes-sigs/cri-tools" rel="nofollow noreferrer"><code>crictl</code></a>.</p> <p>Let's exec into the node: <code>docker exec -it 1d2892110866 sh</code>:</p> <pre><code># ls bin boot dev etc home kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var # </code></pre> <p>Now we are in the node - time to check if the containers are here:</p> <pre><code># crictl ps -a CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 135c7ad17d096 295c7be079025 47 minutes ago Running nginx 0 4e5092cab08f6 ac3b725061e12 295c7be079025 47 minutes ago Running nginx 0 6ecda41b665da a416c226aea6b 295c7be079025 47 minutes ago Running nginx 0 17aa5c42f3512 455c69da57446 296a6d5035e2d 57 minutes ago Running coredns 0 4ff408658e04a d511d62e5294d e422121c9c5f9 57 minutes ago Running local-path-provisioner 0 86b8fcba9a3bf 116b22b4f1dcc 296a6d5035e2d 57 minutes ago Running coredns 0 9da6d9932c9e4 2ebb6d302014c 6de166512aa22 57 minutes ago Running kindnet-cni 0 6ef310d8e199a 2a5e0a2fbf2cc 0e124fb3c695b 57 minutes ago Running kube-proxy 0 54342daebcad8 1b141f55ce4b2 0369cf4303ffd 57 minutes ago Running etcd 0 32a405fa89f61 28c779bb79092 96a295389d472 57 minutes ago Running kube-controller-manager 0 2b1b556aeac42 852feaa08fcc3 94ffe308aeff9 57 minutes ago Running kube-apiserver 0 487e06bb5863a 36771dbacc50f 1248d2d503d37 58 minutes ago Running kube-scheduler 0 85ec6e38087b7 </code></pre> <p>Here they are. 
You can also notice that there are other containers which are acting as <a href="https://kubernetes.io/docs/concepts/overview/components/" rel="nofollow noreferrer">Kubernetes Components</a>.</p> <p>For further debugging of containers I would suggest reading the documentation about <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/" rel="nofollow noreferrer">debugging Kubernetes nodes with crictl</a>.</p> <p>Please also note that on your local VM there is a file <code>~/.kube/config</code> which has the information needed for <code>kubectl</code> to communicate between your VM and the Kubernetes cluster (in the case of kind Kubernetes - a docker container running locally).</p> <p>Hope it helps. Feel free to ask any questions.</p> <p><strong>EDIT - ADDED INFO ON HOW TO SET UP MOUNT POINTS</strong></p> <p>Answering the question from the comments about mounting a directory from the node to the local VM. We need to set up <a href="https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts" rel="nofollow noreferrer">&quot;Extra Mounts&quot;</a>. Let's create a definition needed for kind Kubernetes:</p> <pre><code>kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 nodes: - role: control-plane # add a mount from /path/to/my/files on the host to /files on the node extraMounts: - hostPath: /tmp/logs/ containerPath: /var/log/pods # optional: if set, the mount is read-only. # default false readOnly: false # optional: if set, the mount needs SELinux relabeling. 
# default false selinuxRelabel: false # optional: set propagation mode (None, HostToContainer or Bidirectional) # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation # default None propagation: Bidirectional </code></pre> <p>Note that I'm using <code>/var/log/pods</code> instead of <code>/var/log/containers/</code> - this is because on a cluster created by kind Kubernetes the <code>containers</code> directory contains only symlinks to the logs in the <code>pods</code> directory.</p> <p>Save this <code>yaml</code>, for example as <code>cluster-with-extra-mount.yaml</code>, then create a cluster using it (create the directory <code>/tmp/logs</code> before applying this command!):</p> <pre><code>kind create cluster --config=/tmp/cluster-with-extra-mount.yaml </code></pre> <p>Then all container logs will be in <code>/tmp/logs</code> on your VM.</p>
<p>I have an openjdk:8 image running on the Kubernetes cluster. I added memory HPA (Horizontal Pod Autoscaling) which scales up fine but since JVM doesn't release the memory back from the heap to the OS, pods do not scale down. Following is the hpa.yaml</p> <pre><code>apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: image-server spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: image-server minReplicas: 2 maxReplicas: 10 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 60 </code></pre> <p>One way to solve this is to use the right GC and make it release the memory, but since JVM has been designed to not release from the heap frequently for performance reasons, doing this isn't a good idea. Is there a way to handle this from Kubernetes? Like instead of checking OS's memory usage, can we not just check the memory usage from heap and scale on that?</p>
<p>Scaling Java applications in Kubernetes is a bit tricky. The HPA looks at system memory only and, as pointed out, the JVM generally does not release committed heap space (at least not immediately).</p> <p>There are two main approaches one could take to solve this:</p> <h2>1. Tune JVM Parameters so that the committed heap follows the used heap more closely</h2> <p>Depending on which JVM and GC is in use, the tuning options may be slightly different, but the most important ones would be:</p> <ul> <li><code>MaxHeapFreeRatio</code> - How much of the committed heap is allowed to be unused</li> <li><code>GCTimeRatio</code> - How often GC is allowed to run (impacts performance)</li> <li><code>AdaptiveSizePolicyWeight</code> - How to weigh older vs newer GC runs when calculating the new heap</li> </ul> <p>Giving exact values for these is not easy; it is a compromise between releasing memory fast and application performance. The best settings will be dependent on the load characteristics of the application.</p> <p>Patrick Dillon has written an article published by RedHat called <a href="https://cloud.redhat.com/blog/scaling-java-containers" rel="noreferrer">Scaling Java containers</a> that deep dives into this subject.</p> <h2>2. Custom scaling logic</h2> <p>Instead of using the HPA, you could create your own scaling logic and deploy it into Kubernetes as a job running periodically to do the following:</p> <ol> <li>Check the heap usage in all pods (for example by running jstat inside the pod)</li> <li>Scale out new pods if the max threshold is reached</li> <li>Scale in pods if the min threshold is reached</li> </ol> <p>This approach has the benefit of looking at the actual heap usage, but requires a custom component.</p> <p>An example of this can be found in the article <a href="https://www.powerupcloud.com/autoscaling-based-on-cpu-memory-in-kubernetes-part-ii/#1cac" rel="noreferrer">Autoscaling based on CPU/Memory in Kubernetes — Part II</a> by powerupcloud.</p>
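<p>As a sketch of the first approach, the tuning flags can be passed to the JVM through the container spec. The exact values below are illustrative only and must be tuned for your workload, and I'm assuming the image reads a <code>JAVA_OPTS</code> environment variable:</p> <pre><code>containers:
- name: image-server
  image: openjdk:8
  env:
  - name: JAVA_OPTS
    value: >-
      -XX:MaxHeapFreeRatio=30
      -XX:MinHeapFreeRatio=10
      -XX:GCTimeRatio=4
      -XX:AdaptiveSizePolicyWeight=90
</code></pre> <p>A lower <code>MaxHeapFreeRatio</code> makes the JVM shrink the committed heap sooner after GC, which is what lets the HPA's memory metric go down again, at the cost of more frequent GC work.</p>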
<p>I was trying to configure a new installation of the Lens IDE to work with my remote cluster (on a remote server, in a VM), but encountered some errors and can't find a proper explanation for this case.</p> <p>Lens expects a config file; I gave it the one from my cluster, having changed</p> <p><code>server: https://127.0.0.1:6443</code></p> <p>to</p> <p><code>server: https://</code><strong>(address of the remote server)</strong><code>:</code><strong>(assigned intermediate port to 6443 of the VM with the cluster)</strong></p> <p>After which in Lens I'm getting this:</p> <pre><code>2021/06/14 22:55:13 http: proxy error: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.1.122, not (address to the remote server) </code></pre> <p>I can see that some cert has to be reconfigured, but I'm absolutely new to this.</p> <p>Here are the full contents of the original config file:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0... server: https://127.0.0.1:6443 name: default contexts: - context: cluster: default user: default name: default current-context: default kind: Config preferences: {} users: - name: default user: client-certificate-data: LS0... client-key-data: LS0... </code></pre>
<p>The solution is quite obvious and easy.</p> <p>k3s has to add the new IP to the certificate. By default, it includes only localhost and the IP of the node it's running on, so if you (like me) have some kind of machine in front of it (like an LB or a dedicated firewall), its IP has to be added manually.</p> <p>There are two ways this can be done:</p> <ol> <li>During the installation of k3s:</li> </ol> <blockquote> <pre><code>curl -sfL https://get.k3s.io | sh -s - server --tls-san desired IP </code></pre> </blockquote> <ol start="2"> <li>Or the argument can be added to an already installed k3s:</li> </ol> <blockquote> <pre><code>sudo nano /etc/systemd/system/k3s.service </code></pre> </blockquote> <blockquote> <pre><code>ExecStart=/usr/local/bin/k3s \ server \ '--tls-san' \ 'desired IP' \ </code></pre> </blockquote> <blockquote> <pre><code>sudo systemctl daemon-reload </code></pre> </blockquote> <p>P.S. I have faced issues with the second method, though.</p>
<p>I am merging my applications from <code>docker</code> to <code>kubernetes</code>.</p> <p>I am currently working with <code>minikube</code>.</p> <p>I am using traefik as my reverse proxy, and it was installed in kubernetes using the oficial <code>helm chart</code> available at <a href="https://github.com/traefik/traefik-helm-chart" rel="nofollow noreferrer">https://github.com/traefik/traefik-helm-chart</a>.</p> <p>Here is my working docker-compose file:</p> <pre class="lang-yaml prettyprint-override"><code>version: &quot;3.7&quot; services: myapp: image: nexus/my-app labels: - &quot;traefik.enable=true&quot; - &quot;traefik.http.routers.myapp.rule=Host(`myapp.localhost`)&quot; - &quot;traefik.http.routers.myapp.entrypoints=web,web-secure&quot; - &quot;traefik.http.routers.myapp.tls=true&quot; - &quot;traefik.http.routers.myapp.service=myapp&quot; - &quot;traefik.http.middlewares.myapp.redirectscheme.scheme=https&quot; - &quot;traefik.http.middlewares.myapp.redirectscheme.permanent=true&quot; - &quot;traefik.http.services.myapp.loadbalancer.server.port=8080&quot; - &quot;traefik.http.services.myapp.loadbalancer.sticky=true&quot; - &quot;traefik.http.services.myapp.loadbalancer.sticky.cookie.name=StickyCookieMyApp&quot; - &quot;traefik.http.services.myapp.loadbalancer.sticky.cookie.secure=true&quot; # enable the property below if your are running on https - &quot;traefik.http.services.myapp.loadbalancer.server.scheme=https&quot; environment: - JAVA_OPTS=-Xmx512m -Xms256m - SPRING_PROFILES_ACTIVE=prod </code></pre> <p>In Kubernetes I have the following configuration that is working:</p> <pre class="lang-yaml prettyprint-override"><code># HTTPS ingress kind: Ingress apiVersion: extensions/v1beta1 metadata: name: myapp-ingress annotations: traefik.ingress.kubernetes.io/router.entrypoints: websecure traefik.ingress.kubernetes.io/router.tls: &quot;true&quot; spec: rules: - host: myapp.localhost http: paths: - backend: serviceName: myapp servicePort: 8080 tls: - 
secretName: myapp-cert --- # Ingresses # this middleware is used as the following annotation: # traefik.ingress.kubernetes.io/router.middlewares: default-redirect@kubernetescrd apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: redirect spec: redirectScheme: scheme: https permanent: true --- # http ingress for http-&gt;https redirection kind: Ingress apiVersion: extensions/v1beta1 metadata: name: myapp-redirect annotations: traefik.ingress.kubernetes.io/router.middlewares: default-redirect@kubernetescrd traefik.ingress.kubernetes.io/router.entrypoints: web spec: rules: - host: myapp.localhost http: paths: - backend: serviceName: myapp servicePort: 8080 </code></pre> <p>How do I transform these three lines into Kubernetes spec?</p> <pre class="lang-yaml prettyprint-override"><code> - &quot;traefik.http.services.myapp.loadbalancer.sticky=true&quot; - &quot;traefik.http.services.myapp.loadbalancer.sticky.cookie.name=StickyCookieMyApp&quot; - &quot;traefik.http.services.myapp.loadbalancer.sticky.cookie.secure=true&quot; </code></pre>
<p>You would set annotations on your service:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: traefik.ingress.kubernetes.io/service.sticky.cookie: &quot;true&quot; traefik.ingress.kubernetes.io/service.sticky.cookie.name: cookie traefik.ingress.kubernetes.io/service.sticky.cookie.secure: &quot;true&quot; [...] </code></pre> <p>See docs: <a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/" rel="nofollow noreferrer">https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/</a></p>
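<p>Putting it together, the Service could look roughly like the following. This is only a sketch: the name <code>myapp</code>, the selector, and port 8080 are assumptions carried over from your compose file, so verify them against your actual manifests:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    traefik.ingress.kubernetes.io/service.sticky.cookie: &quot;true&quot;
    traefik.ingress.kubernetes.io/service.sticky.cookie.name: StickyCookieMyApp
    traefik.ingress.kubernetes.io/service.sticky.cookie.secure: &quot;true&quot;
spec:
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
</code></pre>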
<p>I'm pretty well versed in Docker, but I haven't got Minikube/K8s working yet. I first tried setting up artifactory-oss in helm but failed to connect to the LoadBalancer. Now I'm just trying the <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">basic hello-minikube NodePort setup as a sanity check</a>.</p> <p>When I do <code>minikube start</code>, it starts up minikube in Docker:</p> <pre><code>&gt; docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ebabea521ffe gcr.io/k8s-minikube/kicbase:v0.0.18 &quot;/usr/local/bin/entr…&quot; 2 weeks ago Up 36 minutes 127.0.0.1:49167-&gt;22/tcp, 127.0.0.1:49166-&gt;2376/tcp, 127.0.0.1:49165-&gt;5000/tcp, 127.0.0.1:49164-&gt;8443/tcp, 127.0.0.1:49163-&gt;32443/tcp minikube </code></pre> <p>So Minikube only has ports 4916(3/4/5/6/7) open?</p> <p>So I installed hello-minikube:</p> <pre><code>&gt; kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4 &gt; kubectl expose deployment hello-minikube --type=NodePort --port=8080 &gt; minikube ip 192.168.49.2 &gt; minikube service list |----------------------|------------------------------------|--------------|---------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |----------------------|------------------------------------|--------------|---------------------------| | default | hello-minikube | 8080 | http://192.168.49.2:30652 | | default | kubernetes | No node port | | kube-system | ingress-nginx-controller-admission | No node port | | kube-system | kube-dns | No node port | | kubernetes-dashboard | dashboard-metrics-scraper | No node port | | kubernetes-dashboard | kubernetes-dashboard | No node port | |----------------------|------------------------------------|--------------|---------------------------| &gt; minikube service --url hello-minikube http://192.168.49.2:30652 </code></pre> <p>I check firewall, and it has the ports I've opened:</p> <pre><code>&gt; sudo firewall-cmd --list-all public (active) 
target: default icmp-block-inversion: no interfaces: ens192 sources: services: dhcpv6-client http https ssh ports: 8000-9000/tcp 30000-35000/tcp protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: &gt; kubectl get pods NAME READY STATUS RESTARTS AGE hello-minikube-6ddfcc9757-hxxmf 1/1 Running 0 40m &gt; kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-minikube NodePort 10.97.233.42 &lt;none&gt; 8080:30652/TCP 36m kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 19d &gt; kubectl describe services hello-minikube Name: hello-minikube Namespace: default Labels: app=hello-minikube Annotations: &lt;none&gt; Selector: app=hello-minikube Type: NodePort IP Families: &lt;none&gt; IP: 10.97.233.42 IPs: 10.97.233.42 Port: &lt;unset&gt; 8080/TCP TargetPort: 8080/TCP NodePort: &lt;unset&gt; 30652/TCP Endpoints: 172.17.0.6:8080 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>I've tried every IP and port combination, <code>minikube tunnel</code>, and <code>kube proxy</code> and a few other things but I just can't find any port to access this service from another machine. I can't get an 'External-IP'. nmap finds a bunch of ports if i search from the machine itself.</p> <pre><code>&gt; nmap -p 1-65000 localhost Starting Nmap 6.40 ( http://nmap.org ) at 2021-04-26 15:16 SAST Nmap scan report for localhost (127.0.0.1) Host is up (0.0013s latency). 
Other addresses for localhost (not scanned): 127.0.0.1 Not shown: 64971 closed ports PORT STATE SERVICE 22/tcp open ssh 25/tcp open smtp 80/tcp open http 111/tcp open rpcbind 443/tcp open https 631/tcp open ipp 3000/tcp open ppp 5000/tcp open upnp 5050/tcp open mmcc 8060/tcp open unknown 8080/tcp open http-proxy 8082/tcp open blackice-alerts 9090/tcp open zeus-admin 9093/tcp open unknown 9094/tcp open unknown 9100/tcp open jetdirect 9121/tcp open unknown 9168/tcp open unknown 9187/tcp open unknown 9229/tcp open unknown 9236/tcp open unknown 33757/tcp open unknown 35916/tcp open unknown 41266/tcp open unknown 49163/tcp open unknown 49164/tcp open unknown 49165/tcp open unknown 49166/tcp open unknown 49167/tcp open unknown </code></pre> <p>But if I scan that machine from another machine on the network:</p> <pre><code>&gt; nmap -p 1-65000 10.20.2.26 Starting Nmap 6.40 ( http://nmap.org ) at 2021-04-26 15:23 SAST Nmap scan report for 10.20.2.26 Host is up (0.00032s latency). Not shown: 58995 filtered ports, 6001 closed ports PORT STATE SERVICE 22/tcp open ssh 80/tcp open http 443/tcp open https 8060/tcp open unknown </code></pre> <p>those ports don't seem to be accessible. Any ideas?</p> <p>-- EDIT 1: The sys admin says only <code>10.20.x.x</code> IPs will resolve. So <code>192.168.x.x</code> and <code>10.96.x.x</code> won't work. So perhaps this <code>--service-cluster-ip-range</code> field is what I'm looking for. I will try it out next.</p>
<p>I faced a similar issue that I was banging my head on; <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">this documentation</a> was quite helpful. In my case I was accessing a Jenkins build server running in a Kubernetes cluster via minikube on macOS.</p> <p>I followed these steps to get port forwarding working:</p> <ol> <li><p>Confirm the port of your pod:</p> <p><code>kubectl get pod &lt;podname-f5d-48kbr&gt; --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{&quot;\n&quot;}}' -n &lt;namespace-name&gt;</code></p> </li> </ol> <p>Say the output displays</p> <pre><code>&gt; 27013 </code></pre> <ol start="2"> <li>Forward a local port to a port on the Pod like so:</li> </ol> <p><code>kubectl port-forward &lt;podname-deployment-f5db75f7-48kbr&gt; 8080:27013 -n &lt;namespace-name&gt;</code></p> <p>That should start the port forwarding, with output like:</p> <pre><code>Forwarding from 127.0.0.1:8080 -&gt; 27013
Forwarding from [::1]:8080 -&gt; 27013
</code></pre> <p>Now access your application in the browser via http://localhost:8080/</p>
<p>I have nginx ingress installed on my cluster. Here is the yaml</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-client annotations: kubernetes.io/ingress.class: nginx namespace: dev spec: rules: - host: example.com http: paths: - path: / pathType: Prefix backend: service: name: client-service port: number: 80 - path: /api pathType: Prefix backend: service: name: api-service port: number: 80 </code></pre> <p>When I hit / prefix it is good. curl <a href="http://example.com" rel="nofollow noreferrer">http://example.com</a> (All Good)</p> <p>Problem:</p> <p>But when I hit / api prefix, it returns /api of the service not / of the service</p> <p>curl <a href="http://example.com/api" rel="nofollow noreferrer">http://example.com/api</a> (It <strong>should link to api-service</strong>, but it is linking to api-service/api)</p> <p>Any help will be appreciated!</p>
<p>This is because a path <code>/</code> with type <code>Prefix</code> will match <code>/</code> and everything after, including <code>/api</code>. So your first rule overshadows the second rule in some sense.</p> <hr /> <p>I don't know if it's an option for you, but it would be probably most elegant and idiomatic to use different hostnames for both services. If you deploy <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">cert-manager</a>, this shouldn't be a problem.</p> <pre class="lang-yaml prettyprint-override"><code>rules: - host: example.com http: paths: - path: / pathType: Prefix backend: service: name: client-service port: number: 80 # use a different hostname for the api - host: api.example.com http: paths: - path: / pathType: Prefix backend: service: name: api-service port: number: 80 </code></pre> <hr /> <p>Another option could be to use <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">regex</a> in your frontend path rule. And let it not match when the slash is followed by api. For that, you need to set an annotation.</p> <pre class="lang-yaml prettyprint-override"><code>annotations: nginx.ingress.kubernetes.io/use-regex: &quot;true&quot; </code></pre> <p>Then you can do something like the below for your frontend service. Using a <a href="https://www.regextutorial.org/positive-and-negative-lookahead-assertions.php" rel="nofollow noreferrer">negative lookahead</a>.</p> <pre><code>- path: /(?!api).* </code></pre> <hr /> <p>Alternatively, but less pretty, you could add a path prefix to your frontend service and strip it away via <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">path rewrite</a> annotation. 
But then you may have to write two separate ingress manifests, as this annotation counts for both paths, or you need to use a more complex path rewrite rule.</p> <pre class="lang-yaml prettyprint-override"><code>annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre> <pre class="lang-yaml prettyprint-override"><code>- path: /ui(/|$)(.*)
</code></pre> <hr /> <p>Perhaps it's also sufficient to move the more specific route, <code>/api</code>, before the generic <code>/</code> route. In that case, switch the paths around in the list. If they end up in that order in the nginx config, nginx should be able to handle it as desired.</p>
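<p>For reference, the last suggestion applied to the ingress from the question would look like the following. This is only a sketch of the reordering, reusing the service names and ports from the question:</p> <pre class="lang-yaml prettyprint-override"><code>spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client-service
                port:
                  number: 80
</code></pre>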
<p>I am running the command:</p> <pre><code>kubectl run testbox -it --rm --restart=Never --image=python:buster -- python3 </code></pre> <p>which will launch a python session and then I input <code>exit()</code> to quit the session. But the session hangs forever there. If I do <code>kubectl get po testbox</code> I can see the pod is already completed.</p> <p>Then if I hit <code>enter</code> key the console will then output:</p> <pre><code>E0826 22:43:38.790348 1551782 v2.go:105] EOF </code></pre> <p>I noticed that this will not happen if I set <code>--restart=Always</code>. Not sure if it is expected?</p> <p>Thanks!</p>
<p>There is a similar issue posted on the <code>kubectl</code> GitHub page (<a href="https://github.com/kubernetes/kubectl/issues/1098" rel="nofollow noreferrer">run commands don't return when using kubectl 1.22.x #1098</a>), created 3 days ago. Currently awaiting triage.</p> <p>This is most probably a bug in version <em>1.22</em> of <code>kubectl</code>. If this issue causes you problems, I suggest downgrading to <em>1.21</em>, as this bug does not occur in older versions.</p>
<p>I have a PowerShell script that I want to run on some Azure AKS nodes (running Windows) to deploy a security tool. There is no daemon set for this by the software vendor. How would I get it done?</p> <p>Thanks a million Abdel</p>
<p>A similar question has been asked <a href="https://techcommunity.microsoft.com/t5/azure-devops/run-a-powershell-script-on-azure-aks-nodes/m-p/2689781" rel="nofollow noreferrer">here</a>. User <a href="https://techcommunity.microsoft.com/t5/user/viewprofilepage/user-id/1138854" rel="nofollow noreferrer">philipwelz</a> has written:</p> <blockquote> <p>Hey,</p> <p>although there could be ways to do this, i would recommend that you dont. The reason is that your AKS setup should not allow execute scripts inside container directly on AKS nodes. This would imply a huge security issue IMO.</p> <p>I suggest to find a way the execute your script directly on your nodes, for example with PowerShell remoting or any way that suits you.</p> <p>BR,<br /> Philip</p> </blockquote> <p>This user is right. You should avoid executing scripts on your AKS nodes. In your situation, if you want to deploy Prisma Cloud, you need to go with the <a href="https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html" rel="nofollow noreferrer">following doc</a>. You are right that install scripts work only on Linux:</p> <blockquote> <p>Install scripts work on Linux hosts only.</p> </blockquote> <p>But for Windows and macOS there are specific YAML files:</p> <blockquote> <p>For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy it with kubectl, as described in the following procedure.</p> </blockquote> <p>The entire procedure is described in detail in the document I have quoted. Pay attention to step 3 and step 4. 
As you can see, there is no need to run any powershell script:</p> <p>STEP 3:</p> <blockquote> <ul> <li>Generate a <a href="https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#" rel="nofollow noreferrer">defender.yaml</a> file, where:</li> </ul> </blockquote> <pre><code> The following command connects to Console (specified in [--address](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#)) as user &lt;ADMIN&gt; (specified in [--user](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#)), and generates a Defender DaemonSet YAML config file according to the configuration options passed to [twistcli](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#). The [--cluster-address](https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-admin-compute/install/install_kubernetes.html#) option specifies the address Defender uses to connect to Console. $ &lt;PLATFORM&gt;/twistcli defender export kubernetes \ --user &lt;ADMIN_USER&gt; \ --address &lt;PRISMA_CLOUD_COMPUTE_CONSOLE_URL&gt; \ --cluster-address &lt;PRISMA_CLOUD_COMPUTE_HOSTNAME&gt; - &lt;PLATFORM&gt; can be linux, osx, or windows. - &lt;ADMIN_USER&gt; is the name of a Prisma Cloud user with the System Admin role. </code></pre> <p>and then STEP 4:</p> <pre><code>kubectl create -f ./defender.yaml </code></pre>
<p>I deployed an Azure AKS cluster via the following terraform statements into an existing vnet. It worked, the AKS cluster is created with an Azure load balancer and an public IP address assigned to it. I need a setup with an internal Azure load balancer only. How do I have to change the terraform code to only get an internal Azure load balancer? Thanks</p> <pre class="lang-hcl prettyprint-override"><code>resource &quot;azurerm_kubernetes_cluster&quot; &quot;aks&quot; { name = &quot;${var.tags.department}-${var.tags.stage}-${var.tags.environment}_aks&quot; location = var.location resource_group_name = azurerm_resource_group.aksrg.name dns_prefix = lower(&quot;${var.tags.department}-${var.tags.stage}-${var.tags.environment}-aks&quot;) private_link_enabled = true node_resource_group = &quot;${var.tags.department}-${var.tags.stage}-${var.tags.environment}_aks_nodes_rg&quot; linux_profile { admin_username = &quot;testadmin&quot; ssh_key { key_data = file(&quot;/ssh/id_rsa.pub&quot;) #ssh-keygen } } default_node_pool { name = &quot;default&quot; vm_size = &quot;Standard_DS1_v2&quot; enable_auto_scaling = false enable_node_public_ip = false node_count = 1 vnet_subnet_id = azurerm_subnet.akssubnet.id } network_profile { network_plugin = &quot;azure&quot; service_cidr = &quot;172.100.0.0/24&quot; dns_service_ip = &quot;172.100.0.10&quot; docker_bridge_cidr = &quot;172.101.0.1/16&quot; load_balancer_sku = &quot;standard&quot; } service_principal { client_id = azurerm_azuread_service_principal.aks_sp.application_id client_secret = azurerm_azuread_service_principal_password.aks_sp_pwd.value } addon_profile { kube_dashboard { enabled = true } } role_based_access_control { enabled = false } } </code></pre>
<p>This is configured in the Kubernetes LoadBalancer service using annotations. Specifically you need to add <code>service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot;</code> to the spec of your <code>LoadBalancer</code> service.</p> <p>This is documented here <a href="https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard#additional-customizations-via-kubernetes-annotations" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard#additional-customizations-via-kubernetes-annotations</a></p>
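<p>For example, a complete Service manifest using that annotation could look like the following. This is only a sketch: the name, selector, and ports are placeholders for your own workload:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: internal-app        # placeholder name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: &quot;true&quot;
spec:
  type: LoadBalancer
  selector:
    app: internal-app       # placeholder selector
  ports:
    - port: 80
      targetPort: 80
</code></pre> <p>With this in place, AKS provisions the load balancer with a private IP from the cluster's subnet instead of a public one.</p>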
<p>I am trying to understand the retry behavior for liveness probe, its not clear from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">documentation</a>.</p> <p>I will to illustrate with an example. Consider the following spec for liveness probe</p> <pre><code>periodSeconds: 60 timeoutSeconds: 15 failureThreshold: 3 </code></pre> <p>Lets assume the service is down</p> <p>Which behavior is expected?</p> <pre><code>the probe kicks off at 0s sees a failure at 15s, (due to timeoutSeconds 15) retry1 at ~15s, fail at ~30s and retry2 at ~30s, fail at ~45 (retry immediately after failure) ultimately restart pod at ~45s (due to failureThreshold 3) </code></pre> <p>or</p> <pre><code>the probe kicks off at 0s sees a failure at 15s, (due to timeoutSeconds 15) retry1 at ~60s, fail at ~75s and retry2 at ~120s, fail at ~135s (due to periodSeconds 60, doesnt really do retry after a failure) ultimately restart pod at ~180s (due to failureThreshold 3) </code></pre>
<p><code>periodSeconds</code> is how often the probe runs; a failed probe does not trigger an immediate retry. So the second behavior is essentially what happens: a probe starts every ~60s, each failing attempt times out after 15s, and after <code>failureThreshold</code> (3) consecutive failures the container is restarted. In your example that restart would come with the third failed probe, at roughly ~135s rather than ~180s. And if you mean a retry after crossing the failure threshold: it never happens, because the container is fully restarted from scratch.</p>
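<p>To make the timing explicit, here is the probe spec from the question annotated with the resulting worst-case schedule (the comments are my reading of it, assuming the service stays down the whole time):</p> <pre><code>livenessProbe:
  periodSeconds: 60    # a new probe starts every 60s; a failure does NOT cause an immediate retry
  timeoutSeconds: 15   # each individual probe attempt is marked failed after 15s
  failureThreshold: 3  # restart after 3 consecutive failed probes
# probes start at ~0s, ~60s, ~120s and each times out 15s later,
# so the 3rd consecutive failure (and the restart) lands at ~135s
</code></pre>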
<p><strong>The problem:</strong></p> <p><code>Websockets</code> and <code>Socket.io</code> allow for rich 2-way asynchronous notifications between client and webserver.</p> <p><code>Socket.io</code> between an HTML/javascript client and ingress &quot;cookie&quot; routing creates a stateful association between a Pod in a Deployment (let's call these pods of Deployment A) and the HTML/javascript client. Other Pods in other Deployments (let's call these Pods of Deployment B and Deployment C) may want to notify the specific Pod in Deployment A of events specific to what Pod A is displaying.</p> <p>Is there a Kubernetes mechanism to allow this registration and communication between pods?</p> <p><strong>Generic Setup:</strong></p> <p>Deployments A, B, and C each have multiple replicas.</p> <p>The pods from A, B, and C can read and update records in a dumb distributed store.</p> <p>Each pod from A will be responsible for a set of pages (i.e., it is a webserver). The set of pages that a specific pod of A is responsible for can change dynamically (i.e., the user decides what page to edit). Pods of A are not stateless, since &quot;cookies&quot; control ingress routing and each pod of A maintains a <code>socket.io</code> connection to a user's html/javascript page.</p> <p>Pods in B or C update components on the pages. If a page component that is currently being edited by a pod in A is updated by B/C, the B/C pod must notify that specific pod of A of the update.</p> <p>More than one pod of A may be editing the same page.</p> <p><strong>More detail:</strong></p> <p>Deployment A is a nodejs express server hosting <code>socket.io</code> connections from html/javascript clients. Traffic is routed from an ingress using <code>nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot;</code> so the pod hosting a specific client can send unsolicited traffic to the client, i.e., asynchronous two-way traffic.</p> <p>Deployment B is a backend for Deployment A. 
A simple <code>socket.AF_INET</code> is opened from a pod in Deployment A to a Service for Deployment B. The response from B goes to A and then to the client. So far, so good; everything works, but it has only been tested on a 1-node configuration.</p> <p>Deployment C is a backend to Deployment B. A socket is opened from B to a Service for C. The response from C to B to A to the web client works fine (again on a 1-node configuration).</p> <p><strong>The problem:</strong></p> <p>Deployments B and C may get requests from other sources to process information that would change the content displayed to the user. I want to update any pod in Deployment A that is hosting a <code>socket.io</code> connection to a client displaying this page.</p> <p>The description/implementation so far does not asynchronously update the pods in A unless the user does a full page refresh.</p> <p>Multiple users may display the same page but be associated via ingress cookies with different Deployment A pods.</p> <p>As it stands now, user 1 only sees updates initiated by user 1, and user 2 only sees updates from user 2, unless each user does a page refresh.</p> <p>I want B and C to send updates to all pods in A that are displaying the page being updated.</p> <p><strong>Solution that does not feel Kubernetes clean:</strong></p> <p>A pod wanting notifications of changes to components on a page would create a record indicating it is interested in changes to this page, containing its pod's IP address and a good-until timestamp.</p> <p>When a client displays a page, the hosting A pod will update a record in the distributed datastore to indicate it wants updates to components on this page. Every so often, Pod A will update the keep-alive time in these records. When the user leaves this page, the associated Pod A will remove this record.</p> <p>Any pod updating records for a page would check these records and open a socket to the other pod to notify it of changes.</p> <p>An audit would remove any expired records that were not correctly cleaned up because of abnormal termination of a pod after it registered its interest in being notified of changes.</p> <p><strong>Question Restated:</strong></p> <p>Is this a clean Kubernetes solution, or is there something in Kubernetes to make this cleaner?</p> <p>New to Kubernetes: did I mess up Kubernetes nomenclature anywhere in my question?</p>
<p>There is no native way to reliably communicate between pods of different Deployments. It is somewhat possible in the case of StatefulSets, where all pods have known and stable names.</p> <p>In your case, assuming I understand your requirements correctly, I would recommend using message queueing software like <a href="https://www.rabbitmq.com/" rel="nofollow noreferrer">RabbitMQ</a> or <a href="https://kafka.apache.org/" rel="nofollow noreferrer">Kafka</a></p>
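<p>If the StatefulSet route is interesting, the stable names come from pairing the StatefulSet with a headless Service. A minimal sketch, with all names hypothetical:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: frontend-headless   # hypothetical name
spec:
  clusterIP: None           # headless: DNS resolves to the individual pod addresses
  selector:
    app: frontend
  ports:
    - port: 8080
</code></pre> <p>A StatefulSet with <code>serviceName: frontend-headless</code> then gets pods addressable as <code>frontend-0.frontend-headless.&lt;namespace&gt;.svc.cluster.local</code> and so on, which pods of B and C could target directly, letting a pod of A register a stable name instead of an IP address.</p>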
<p>I am trying out to have a volume mount on Kubernetes.</p> <p>Currently I have a Docker image which I run like:</p> <pre><code>docker run --mount type=bind,source=&quot;$(pwd)&quot;&lt;host_dir&gt;,target=&lt;docker_dir&gt; container </code></pre> <p>To have this run on Google Kubernetes cluster, I have:</p> <ol> <li>Create a Google Compute Disk</li> <li>Created a persistent volume which refers to the disk:</li> </ol> <pre><code> kind: PersistentVolume ... namespace: default name: pvc spec: claimRef: namespace: default name: pvc gcePersistentDisk: pdName: disk-name fsType: ext4 --- ... kind: PersistentVolumeClaim metadata: name: pvc spec: storageClassName: &quot;storage&quot; ... accessModes: - ReadWriteOnce resources: requests: storage: 2000Gi </code></pre> <ol start="3"> <li>Created pod with mount</li> </ol> <pre><code>kind: Pod apiVersion: v1 metadata: name: k8s-pod spec: volumes: - name: pvc persistentVolumeClaim: claimName: pvc containers: - name: image_name image: eu.gcr.io/container:latest volumeMounts: - mountPath: &lt;docker_dir&gt; name: dir </code></pre> <p>I am missing out where the binding between the host and container/pod directories will take place. Also where do I mention that binding in my yaml files.</p> <p>I will appreciate any help :)</p>
<p>You are on the right path here. In your Pod spec, the name of the volumeMount should match the name of the volumes. So in your case,</p> <pre><code>volumes: - name: pvc persistentVolumeClaim: claimName: pvc </code></pre> <p>volume name is <code>pvc</code>. So your volumeMount should be</p> <pre><code>volumeMounts: - mountPath: &quot;/path/in/container&quot; name: pvc </code></pre> <p>So, for example, to mount this volume at <code>/mydata</code> in your container, your Pod spec would look like</p> <pre><code>kind: Pod apiVersion: v1 metadata: name: k8s-pod spec: volumes: - name: pvc persistentVolumeClaim: claimName: pvc containers: - name: image_name image: eu.gcr.io/container:latest volumeMounts: - mountPath: &quot;/mydata&quot; name: pvc </code></pre>
<p>Let's say we have two independent Kubernetes clusters, <code>Cluster 1</code> and <code>Cluster 2</code>. Each of them has two replicas of the same application Pod: <code>Cluster 1: Pod A &amp; Pod B</code>, <code>Cluster 2: Pod C &amp; Pod D</code>. Application code in Pod A (the client) wants to connect to any Pod running in cluster 2 via a NodePort/LoadBalancer service over the UDP protocol to send messages. The only requirement is to maintain affinity so that all messages from Pod A go to one pod only (either Pod C or Pod D). Since UDP is a connectionless protocol, my concern is around session affinity based on ClientIP. Should setting sessionAffinity to ClientIP solve my issue?</p>
<blockquote> <p>Since UDP is a connectionless protocol, my concern is around session affinity based on ClientIP. Should setting sessionAffinity to ClientIP solve my issue?</p> </blockquote> <p><code>sessionAffinity</code> pins each session based on the source IP, regardless of protocol, within the same cluster. But that does not mean your real session is kept as you expect across the whole access path. <strong>In other words, using sessionAffinity alone does not ensure the whole session is kept across your access paths.</strong></p> <p>For example, Pod A's outbound IP is translated to the running node's IP (SNAT) if you do not use an egress IP solution for Pod A. It also depends on how your NodePort and LoadBalancer Services in cluster 2 are configured with respect to source IP. Refer to <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">Using Source IP</a> for more details.</p> <p>So you should consider how to keep the session safely when the clusters access each other. Personally, I think you had better consider an application-layer (L7) sticky session for keeping the session, rather than the Service's sessionAffinity.</p>
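<p>For completeness, this is what those settings look like on the Service side in cluster 2. This is only a sketch (the name, port, and selector are placeholders), and whether it is sufficient still depends on how the source IP survives the path between the clusters:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: udp-app                    # placeholder name
spec:
  type: NodePort
  externalTrafficPolicy: Local     # preserve the client source IP on NodePort/LoadBalancer
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # the default affinity timeout (3 hours)
  selector:
    app: udp-app
  ports:
    - protocol: UDP
      port: 5000
</code></pre>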
<p>I have a web application hosted in EKS, and there is a metric in place for CPU utilization for scaling the pods horizontally.</p> <p>If the current number of pods is 10 and I increase the load (increasing requests per minute), then the desired number of pods depends on how aggressively I am increasing the load, so it could be 13, 16, etc.</p> <p>But I want the number of pods to always increase in multiples of 5 and decrease in multiples of 3. Is this possible?</p>
<p>Having gone through the documentation and some code, it looks impossible to force the horizontal pod autoscaler (HPA) to scale down or up by an exact number of pods, since there are no flags/options for it.</p> <p><strong>The closest you can get</strong> is to set up <code>scaleDown</code> and <code>scaleUp</code> policies.</p> <p>Below is an example (<strong>note</strong>: this works with the <code>v2beta2</code> api version); this part should be located under <code>spec</code>:</p> <pre><code>behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
      - type: Pods
        value: 3
        periodSeconds: 15
  scaleUp:
    stabilizationWindowSeconds: 0
    policies:
      - type: Pods
        value: 5
        periodSeconds: 15
</code></pre> <p><strong>What this means:</strong></p> <ul> <li><code>scaleDown</code> will remove at most 3 pods every 15 seconds.</li> <li><code>scaleUp</code> will add at most 5 pods every 15 seconds.</li> <li><code>stabilizationWindowSeconds</code> - The stabilization window is used to restrict the flapping of replicas when the metrics used for scaling keep fluctuating. It is used by the autoscaling algorithm to consider the computed desired state from the past to prevent scaling flapping.</li> </ul> <p>This doesn't guarantee that HPA will scale up or down by the exact number of specified pods; it's just a policy. However, if the workload increases or decreases quickly, the result should be close to the behaviour you'd like to see.</p> <p><strong>Useful link:</strong></p> <ul> <li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior" rel="nofollow noreferrer">Support for configurable scaling behavior</a></li> </ul>
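<p>For context, that <code>behavior</code> block sits inside a full HPA object like the following. This is only a sketch: the Deployment name, replica bounds, and CPU target are placeholders for your own setup:</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app            # placeholder target
  minReplicas: 1
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Pods
          value: 5
          periodSeconds: 15
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 3
          periodSeconds: 15
</code></pre>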
<p>I was trying to add a postStart hook for my pod using curl, say, sending a message to my Slack channel from the shell. The command looks like this:</p> <pre class="lang-sh prettyprint-override"><code>curl -d &quot;text=Hi I am a bot that can post messages to any public channel.&quot; -d &quot;channel=C1234567&quot; -H &quot;Authorization: Bearer xoxb-xxxxxxxxxxxxxxxx&quot; -X POST https://slack.com/api/chat.postMessage
</code></pre> <p>And in my pod definition, I tried something like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: curlimages/curl
      env:
        - name: TOKEN
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: token
      command: [&quot;sleep&quot;]
      args: [&quot;3000&quot;]
      lifecycle:
        postStart:
          exec:
            command:
              - &quot;sh&quot;
              - &quot;-c&quot;
              - |
                curl -d &quot;text=Hi going to start.&quot; -d &quot;channel=C1234567&quot; -H &quot;Authorization: Bearer $(TOKEN)&quot; -X POST https://slack.com/api/chat.postMessage
</code></pre> <p>Unlike container-&gt;command, which has an <code>args</code> parameter to which I can pass a multi-line command with quotes, lifecycle-&gt;postStart-&gt;exec-&gt;command doesn't support an <code>args</code> parameter.</p> <p>I also tried something like the following, but no luck:</p> <pre><code>command: [&quot;curl&quot;,&quot;-d&quot;,&quot;text=Hi going to start.&quot;,....]
</code></pre> <p>I never got my Slack message.</p> <p>My question is: how can I pass a long curl command with quotes in lifecycle-&gt;postStart-&gt;exec-&gt;command?</p>
<p>It was finally solved by replacing <code>()</code> with <code>{}</code>: to use an environment variable in the command, it should be <code>${TOKEN}</code> instead of <code>$(TOKEN)</code>.</p>
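<p>For reference, here is the hook from the pod definition in the question with only the variable syntax changed, which is the fix described (the token still comes from the ConfigMap via the <code>TOKEN</code> env var; the channel ID is the question's placeholder):</p> <pre><code>lifecycle:
  postStart:
    exec:
      command:
        - &quot;sh&quot;
        - &quot;-c&quot;
        - |
          curl -d &quot;text=Hi going to start.&quot; -d &quot;channel=C1234567&quot; -H &quot;Authorization: Bearer ${TOKEN}&quot; -X POST https://slack.com/api/chat.postMessage
</code></pre>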
<p>I have attributes that include dashes in their names, which I have no issue using together with <code>index</code>, e.g.:</p> <pre><code>{{ toYaml index .Values.myService &quot;my-service-config&quot; | b64enc }} </code></pre> <p>However, if I want to use an <code>if or</code> for multiple evaluations, I cannot successfully use <code>index</code> for them, e.g.:</p> <pre><code>{{- if or index .Values.serviceA.appyml .Values.serviceB.log4j2xml .Values.serviceC &quot;some-service-configyml&quot; .Values.serviceC &quot;another-serviceyml&quot; }} </code></pre> <p>I receive the error message:</p> <pre><code>at &lt;index&gt;: wrong number of args for index: want at least 1 got 0 </code></pre>
<p>I had to wrap each <code>index</code> call for the attribute names containing dashes in parentheses:</p> <pre><code>{{- if or .Values.serviceA.appyml .Values.serviceB.log4j2xml ( index .Values.serviceC &quot;some-service-configyml&quot;) ( index .Values.serviceD &quot;another-serviceyml&quot;) }} </code></pre>
<p>I'm using Spark 3.0.0 with Kubernetes master. I am using cluster mode to run the Spark job. Please find the spark-submit command below:</p> <pre><code>./spark-submit \ --master=k8s://https://api.k8s.my-domain.com \ --deploy-mode cluster \ --name sparkle \ --num-executors 2 \ --executor-cores 2 \ --executor-memory 2g \ --driver-memory 2g \ --class com.myorg.sparkle.Sparkle \ --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/opt/spark/conf/log4j.properties \ --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/opt/spark/conf/log4j.properties \ --conf spark.kubernetes.submission.waitAppCompletion=false \ --conf spark.kubernetes.allocation.batch.delay=10s \ --conf spark.kubernetes.appKillPodDeletionGracePeriod=20s \ --conf spark.kubernetes.node.selector.workloadType=spark \ --conf spark.kubernetes.driver.pod.name=sparkle-driver \ --conf spark.kubernetes.container.image=custom-registry/spark:latest \ --conf spark.kubernetes.namespace=spark \ --conf spark.eventLog.dir='s3a://my-bucket/spark-logs' \ --conf spark.history.fs.logDirectory='s3a://my-bucket/spark-logs' \ --conf spark.eventLog.enabled='true' \ --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \ --conf spark.kubernetes.authenticate.executor.serviceAccountName=spark \ --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \ --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider \ --conf spark.kubernetes.driver.annotation.iam.amazonaws.com/role=K8sRoleSpark \ --conf spark.kubernetes.executor.annotation.iam.amazonaws.com/role=K8sRoleSpark \ --conf spark.kubernetes.driver.secretKeyRef.AWS_ACCESS_KEY_ID=aws-secrets:key \ --conf spark.kubernetes.driver.secretKeyRef.AWS_SECRET_ACCESS_KEY=aws-secrets:secret \ --conf spark.kubernetes.executor.secretKeyRef.AWS_ACCESS_KEY_ID=aws-secrets:key \ --conf spark.kubernetes.executor.secretKeyRef.AWS_SECRET_ACCESS_KEY=aws-secrets:secret \ --conf 
spark.hadoop.fs.s3a.endpoint=s3.ap-south-1.amazonaws.com \ --conf spark.hadoop.com.amazonaws.services.s3.enableV4=true \ --conf spark.yarn.maxAppAttempts=4 \ --conf spark.yarn.am.attemptFailuresValidityInterval=1h \ s3a://dp-spark-jobs/sparkle/jars/sparkle.jar \ --commonConfigPath https://my-bucket.s3.ap-south-1.amazonaws.com/sparkle/configs/prod_main_configs.yaml \ --jobConfigPath https://my-bucket.s3.ap-south-1.amazonaws.com/sparkle/configs/cc_configs.yaml \ --filePathDate 2021-03-29 20000 </code></pre> <p>I have hosted a different pod running history server with the same image. The history server is able to read all the event logs and shows details. The job is executed successfully.</p> <p>I do not see driver pod and worker pod's stdout and stderr logs in History server. How can I enable it?</p> <p>Similar to <a href="https://stackoverflow.com/questions/67282866/capture-kubernetes-spark-driver-and-executor-logs-in-s3-and-view-in-history-serv">this question</a></p>
<p>Unfortunately, it appears that there is no way for the driver-scoped logs to be piped to the spark-submit scope. From the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#accessing-logs" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>Logs can be accessed using the Kubernetes API and the kubectl CLI. When a Spark application is running, it’s possible to stream logs from the application using:</p> </blockquote> <pre><code>kubectl -n=&lt;namespace&gt; logs -f &lt;driver-pod-name&gt; </code></pre>
<p>While working with Kubernetes for some months now, I found a nice way to use one single existing domain name and expose the cluster-ip through a sub-domain but also most of the microservices through different sub-sub-domains using the ingress controller.</p> <p>My ingress example code:</p> <pre><code>kind: Ingress apiVersion: networking.k8s.io/v1beta1 metadata: name: cluster-ingress-basic namespace: ingress-basic selfLink: &gt;- /apis/networking.k8s.io/v1beta1/namespaces/ingress-basic/ingresses/cluster-ingress-basic uid: 5d14e959-db5f-413f-8263-858bacc62fa6 resourceVersion: '42220492' generation: 29 creationTimestamp: '2021-06-23T12:00:16Z' annotations: kubernetes.io/ingress.class: nginx managedFields: - manager: Mozilla operation: Update apiVersion: networking.k8s.io/v1beta1 time: '2021-06-23T12:00:16Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': .: {} 'f:kubernetes.io/ingress.class': {} 'f:spec': 'f:rules': {} - manager: nginx-ingress-controller operation: Update apiVersion: networking.k8s.io/v1beta1 time: '2021-06-23T12:00:45Z' fieldsType: FieldsV1 fieldsV1: 'f:status': 'f:loadBalancer': 'f:ingress': {} spec: rules: - host: microname1.subdomain.domain.com http: paths: - pathType: ImplementationSpecific backend: serviceName: kylin-job-svc servicePort: 7070 - host: microname2.subdomain.domain.com http: paths: - pathType: ImplementationSpecific backend: serviceName: superset servicePort: 80 - {} status: loadBalancer: ingress: - ip: xx.xx.xx.xx </code></pre> <p>With this configuration:</p> <ol> <li>microname1.subdomain.domain.com is pointing into Apache Kylin</li> <li>microname2.subdomain.domain.com is pointing into Apache Superset</li> </ol> <p>This way all microservices can be exposed using the same Cluster-Load-Balancer(IP) but the different sub-sub domains.</p> <p>I tried to do the same for the SQL Server but this is not working, not sure why but I have the feeling that the reason is that the SQL Server communicates using TCP and not 
HTTP.</p> <pre><code>- host: microname3.subdomain.domain.com http: paths: - pathType: ImplementationSpecific backend: serviceName: mssql-linux servicePort: 1433 </code></pre> <p>Any ideas on how I can do the same for TCP services?</p>
<p>Your understanding is good: by default the NGINX Ingress Controller only supports HTTP and HTTPS traffic configuration (Layer 7), so that is probably why your SQL server is not working.</p> <p>Your SQL service operates over raw TCP connections, so it <a href="https://stackoverflow.com/questions/56798909/resolve-domain-name-uri-when-listening-to-tcp-socket-in-c-sharp/56799133#56799133">does not take the custom domains you are trying to set up into consideration, as they all resolve to the same IP address anyway</a>.</p> <p>The solution for your issue is not to use custom sub-domain(s) for this service, but to set up <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">TCP service exposure in the NGINX Ingress Controller</a>. For example, you can make this SQL service available on the ingress IP on port 1433:</p> <blockquote> <p>Ingress controller uses the flags <code>--tcp-services-configmap</code> and <code>--udp-services-configmap</code> to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <code>&lt;namespace/service name&gt;:&lt;service port&gt;:[PROXY]:[PROXY]</code></p> </blockquote> <p>To set it up you can follow the steps provided in the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">official NGINX Ingress documentation</a>, but there are also some more detailed instructions on Stack Overflow, for example <a href="https://stackoverflow.com/questions/61430311/exposing-multiple-tcp-udp-services-using-a-single-loadbalancer-on-k8s/61461960#61461960">this one</a>.</p>
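<p>As an illustration, the config map could look like the sketch below. The namespace names here are assumptions, not from the question: the controller is assumed to run in <code>ingress-nginx</code> and to have been started with <code>--tcp-services-configmap=ingress-nginx/tcp-services</code>, and the <code>mssql-linux</code> service is assumed to live in <code>default</code>:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port 1433 -&gt; &lt;namespace&gt;/&lt;service&gt;:&lt;port&gt;
  1433: &quot;default/mssql-linux:1433&quot;
</code></pre> <p>Note that port 1433 also has to be opened on the ingress controller's own Service (an extra entry under its <code>ports</code>) for the traffic to reach it.</p>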
<p>I searched the net but couldn't find a single use case for having this empty key in the config file. I tried commenting it out, and <code>kubectl</code> worked perfectly fine.</p> <p>So my question is: what on earth is it solving? :)</p>
<p>Interesting question. Looking at the source code here <a href="https://github.com/kubernetes/client-go/blob/d412730e5f0160f6dc0a83459c14b05df8ea56fb/tools/clientcmd/api/v1/types.go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/d412730e5f0160f6dc0a83459c14b05df8ea56fb/tools/clientcmd/api/v1/types.go</a> it seems that preferences is used for ”holding general information to be use for cli interactions”. It can hold two config parameters: colors(boolean) and extensions(array of extension descriptor objects). There was also this comment here: <a href="https://github.com/kubernetes/client-go/blob/228dada99554f2e0f7ef07e24f2a4a88c0e448bb/tools/clientcmd/config.go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/228dada99554f2e0f7ef07e24f2a4a88c0e448bb/tools/clientcmd/config.go</a> saying: ” Preferences and CurrentContext should always be set in the default destination file. Since we can't distinguish between empty and missing values (no nil strings), we're forced have separate handling for them.”</p> <p>So, as I understand, preferences is there because it is required not be nil and technically it’s impossible to distinguish empty and missing values. Does this answer your question?</p>
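<p>For illustration, a kubeconfig where <code>preferences</code> is not empty could look like the sketch below (<code>colors</code> is the boolean parameter mentioned above; whether any given client actually honours it is a separate question):</p> <pre><code>apiVersion: v1
kind: Config
preferences:
  colors: true
clusters: []
contexts: []
users: []
current-context: &quot;&quot;
</code></pre>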
<p>I have created a deployment and I wanted to mount the host path into the container, but when I check the container I see only an empty folder.</p> <p>Why am I getting this error? What can be the cause?</p> <p>EDIT: I am using Windows OS.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myservicepod6 labels: app: servicepod spec: replicas: 1 selector: matchLabels: app: servicepod template: metadata: labels: app: servicepod spec: containers: - name: php image: php:7.2-apache command: [&quot;/bin/sh&quot;, &quot;-c&quot;] args: [&quot;service apache2 start; sleep infinity&quot;] ports: - name: serviceport containerPort: 80 volumeMounts: - mountPath: &quot;/data&quot; name: hostvolume volumes: - name: hostvolume hostPath: path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/objectmanagement/deployments/src/* </code></pre> <p>EDIT FOR THE ANSWER -</p> <p>I start minikube - <code> minikube start --mount-string=&quot;$HOME/test/src/code/file:/data&quot;</code></p> <p>Then I changed the deployment file like below</p> <p>Showing only volume part</p> <pre><code> spec: volumes: - name: hostvolume hostPath: path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/deployments/src containers: - name: php image: php:7.2-apache command: [&quot;/bin/sh&quot;, &quot;-c&quot;] args: [&quot;service apache2 start; sleep infinity&quot;] ports: - name: serviceport containerPort: 80 volumeMounts: - name: hostvolume mountPath: /test/src/code/file </code></pre> <p>When I logged into the pod and went to the directory (/test/src/code/file) I found the directory empty.</p> <p>Let me know what I am missing.</p>
<p>After a detailed search and some trial and error, I found the way.</p> <p>This applies only to minikube:</p> <p>First we need to mount the host folder into a directory inside minikube:</p> <p><code>minikube mount src/:/var/www/html</code></p> <p>Then we need to define both hostPath and mountPath as</p> <p><code>/var/www/html</code></p> <p>because we have now mounted the folder to that path.</p> <pre><code>volumes: - name: hostvolume hostPath: path: /var/www/html containers: - name: php image: php:7.2-apache command: [&quot;/bin/sh&quot;, &quot;-c&quot;] args: [&quot;service apache2 start; sleep infinity&quot;] workingDir: /var/www/html ports: - name: serviceport containerPort: 80 volumeMounts: - name: hostvolume mountPath: /var/www/html </code></pre>
<p>I have a web applicaton (e.g. <code>&quot;india&quot;</code>) that depends on postgres and redis (e.g. a typical Rails application).</p> <p>I have a <code>docker-compose.yml</code> file that composes the containers to start this application.</p> <pre><code>version: '3' services: redis-india: image: redis:5.0.6-alpine # ..... postgres-india: image: postgres:11.5-alpine # .... india: depends_on: - postgres-india - redis-india image: india/india:local # .... </code></pre> <p>I'd like to run this application deployment with Kubernetes. I'm trying to figure out how to build the k8s resource objects correctly, and I'm weighing two options:</p> <ol> <li><p>I can build <code>india</code>, <code>postgres-india</code>, and <code>redis-india</code> as separate deployments (and therefore separate Services) in k8s</p> </li> <li><p>I can build <code>india</code>, <code>postgres-india</code>, and <code>redis-india</code> as a single deployment (and therfore a single <code>pod</code> / <code>service</code>)</p> </li> </ol> <p>#2 makes more sense to me personally - all 3 items here comprise the entire &quot;application service&quot; that should be exposed as a single service URL (i.e. the frontend for the web application).</p> <p>However, if I use an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/" rel="nofollow noreferrer">automated tool like <code>kompose</code></a> to translate my <code>docker-compose.yml</code> file into k8s resources, it follows approach #1 and creates three individual k8s Services.</p> <p>Is there a &quot;right way&quot; or standard I should follow?</p> <p>Thanks!</p>
<h1>Independent components</h1> <p>Your three components should run as separate deployments on Kubernetes. You want these three components to be:</p> <ul> <li>Independently upgradable and deployable (e.g. you deploy a new version of Redis but not your app or database)</li> <li>Independently scalable - e.g. you might get many users and want to scale up to multiple instances (e.g. 5 replicas) of your app.</li> </ul> <h3>State</h3> <p>Your app should be designed to be <a href="https://12factor.net/processes" rel="nofollow noreferrer">stateless</a>, and can be deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>. But the Redis and PostgreSQL are <em>stateful</em> components and should be deployed as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>.</p> <h3>Availability</h3> <p>In a production environment, you typically want to:</p> <ul> <li>Avoid downtime when upgrading any application or database</li> <li>Avoid downtime if you or the cloud provider upgrade the node</li> <li>Avoid downtime if the node crash, e.g. due to hardware failure or kernel crash.</li> </ul> <p>With a <em>stateless</em> app deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>, this is trivial to solve - run at least two instances (replicas) of it - and make sure they are deployed on different nodes in the cluster. You can do this using <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">Topology Spread Constraints</a>.</p> <p>With a <em>stateful</em> component as e.g. Redis or PostgreSQL, this is more difficult. You typically need to run it as a cluster. See e.g. <a href="https://redis.io/topics/cluster-tutorial" rel="nofollow noreferrer">Redis Cluster</a>. 
But it is more difficult for PostgreSQL, you could consider a PostgreSQL-compatible db that has a distributed design, e.g. <a href="https://www.cockroachlabs.com/product/sql/" rel="nofollow noreferrer">CockroachDB</a> that is designed to be <a href="https://www.cockroachlabs.com/docs/stable/deploy-cockroachdb-with-kubernetes.html" rel="nofollow noreferrer">run on Kubernetes</a> or perhaps consider <a href="https://access.crunchydata.com/documentation/postgres-operator/v5/" rel="nofollow noreferrer">CrunchyData PostgreSQL Operator</a>.</p> <h1>Pod with multiple containers</h1> <p>When you deploy a Pod with <a href="https://kubernetes.io/docs/concepts/workloads/pods/#how-pods-manage-multiple-containers" rel="nofollow noreferrer">multiple containers</a>, one container is the &quot;main&quot; application and the other containers are supposed to be &quot;helper&quot; / &quot;utility&quot; containers to fix a problem for the &quot;main container&quot; - e.g. if your app logs to two different files - you could have helper containers to tail those files and output it to stdout, as is the recommended log output in <a href="https://12factor.net/logs" rel="nofollow noreferrer">Twelve-Factor App</a>. You typically only use &quot;multiple containers&quot; for apps that are not designed to be run on Kubernetes, or if you want to extend with some functionality like e.g. a Service Mesh.</p>
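<p>As a hedged sketch of the Topology Spread Constraints mentioned above, applied to the stateless <code>india</code> app from the question (the <code>app: india</code> label is an assumption): two replicas are forced onto different nodes, so a single node failure does not take the whole app down:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: india
spec:
  replicas: 2
  selector:
    matchLabels:
      app: india
  template:
    metadata:
      labels:
        app: india
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: india
      containers:
        - name: india
          image: india/india:local
</code></pre>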
<p>I am using Kubernetes with Minikube on a Windows 10 Home machine to &quot;host&quot; a gRPC service. I am working on getting Istio working in the cluster and have been running into the same issue over and over and I cannot figure out why. The problem is that once everything is up and running, the Istio gateway uses IPv6, seemingly for no reason at all. IPv6 is even disabled on my machine (via regedit) and network adapters. My other services are accessible from IPv4. Below are my steps for installing my environment:</p> <pre class="lang-sh prettyprint-override"><code>minikube start kubectl create namespace abc kubectl apply -f service.yml -n abc kubectl apply -f gateway.yml istioctl install --set profile=default -y kubectl label namespace abc istio-injection=enabled </code></pre> <p>Nothing is accessible over the network at this point, until I run the following in its own terminal:</p> <pre class="lang-sh prettyprint-override"><code>minikube tunnel </code></pre> <p>Now I can access the gRPC service directly using IPv4: <code>127.0.0.1:5000</code>. 
However, the gateway is inaccessible at <code>127.0.0.1:443</code> and is only reachable at <code>[::1]:443</code>.</p> <p>Here is the service.yml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: account-grpc spec: ports: - name: grpc port: 5000 protocol: TCP targetPort: 5000 selector: service: account ipc: grpc type: LoadBalancer --- apiVersion: apps/v1 kind: Deployment metadata: labels: service: account ipc: grpc name: account-grpc spec: replicas: 1 selector: matchLabels: service: account ipc: grpc template: metadata: labels: service: account ipc: grpc spec: containers: - image: account-grpc name: account-grpc imagePullPolicy: Never ports: - containerPort: 5000 </code></pre> <p>Here is the gateway.yml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway spec: selector: istio: ingressgateway servers: - port: number: 443 name: grpc protocol: GRPC hosts: - &quot;*&quot; --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-service spec: hosts: - &quot;*&quot; gateways: - gateway http: - match: - uri: prefix: /account route: - destination: host: account-grpc port: number: 5000 </code></pre> <p>And here are the results of <code>kubectl get service istio-ingressgateway -n istio-system -o yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: ... 
creationTimestamp: &quot;2021-08-27T01:21:21Z&quot; labels: app: istio-ingressgateway install.operator.istio.io/owning-resource: unknown install.operator.istio.io/owning-resource-namespace: istio-system istio: ingressgateway istio.io/rev: default operator.istio.io/component: IngressGateways operator.istio.io/managed: Reconcile operator.istio.io/version: 1.11.1 release: istio name: istio-ingressgateway namespace: istio-system resourceVersion: &quot;4379&quot; uid: b4db0e2f-0f45-4814-b187-287acb28d0c6 spec: clusterIP: 10.97.4.216 clusterIPs: - 10.97.4.216 externalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: status-port nodePort: 32329 port: 15021 protocol: TCP targetPort: 15021 - name: http2 nodePort: 31913 port: 80 protocol: TCP targetPort: 8080 - name: https nodePort: 32382 port: 443 protocol: TCP targetPort: 8443 selector: app: istio-ingressgateway istio: ingressgateway sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 127.0.0.1 </code></pre>
<p>Changing the port number to port 80 resolved my issue. The problem was that my gRPC service was not using HTTPS. Will return if I have trouble once I change the service to use HTTPS.</p>
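<p>For completeness, a sketch of what that change looks like in the gateway.yml from the question (only the port number differs; whether the <code>name</code>/<code>protocol</code> should also change once HTTPS is added is left open):</p> <pre><code>servers:
  - port:
      number: 80
      name: grpc
      protocol: GRPC
    hosts:
      - &quot;*&quot;
</code></pre>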
<p>Does anyone know if this is possible?</p> <p>All I can find in the docs is a reference to enabling Docker experimental features, but not Kubernetes experimental features.</p> <p>I tried this, but still get an error.</p> <pre><code>k alpha debug -it exchange-pricing-865d579659-s8x6d --image=busybox --target=exchange-pricing-865d579659-s8x6d error: ephemeral containers are disabled for this cluster (error from server: &quot;the server could not find the requested resource&quot;). </code></pre> <p>Thanks</p>
<p>I had the same intent (as have others in <a href="https://github.com/docker/for-mac/issues/2382" rel="nofollow noreferrer">this feature request</a>). After several hours of trial and error, I finally found out a way to do so.</p> <p>Steps:</p> <ol> <li>Depending on which file you're trying to edit, you may need to fully shut down Docker Desktop, and restart WSL. (right-click tray-icon and press &quot;Quit Docker Desktop&quot;, then run <code>wsl --shutdown</code>, then run <code>wsl</code>)</li> <li>Open the <code>[...]/kubeadm/manifests</code> folder, in the Docker filesystem.</li> </ol> <blockquote> <p>On Windows, navigate Windows Explorer to:</p> <ul> <li>For Docker Desktop 4.2.0: <code>\\wsl$\docker-desktop-data\version-pack-data\community\kubeadm\manifests</code></li> <li>For Docker Desktop 4.11.0: <code>\\wsl$\docker-desktop-data\data\kubeadm\manifests</code></li> </ul> </blockquote> <ol start="3"> <li>Open the <code>kube-controller-manager.yaml</code>, <code>kube-apiserver.yaml</code>, and <code>kube-scheduler.yaml</code> files, adding the line below:</li> </ol> <pre><code>spec: containers: - command: [...] - --feature-gates=EphemeralContainers=true &lt;-- add this line </code></pre> <ol start="4"> <li>Start Docker Desktop again.</li> </ol> <p>It looks so easy when it's already figured out, huh? Well, trust me, it was a pain to find out.</p> <p>Some of the slowdowns I hit:</p> <ol> <li>It took me quite a while to even find those manifest files. 
(eventually found it using <a href="https://tools.stefankueng.com/grepWin.html" rel="nofollow noreferrer">grepWin</a>, searching through the whole <code>\\wsl$\docker-desktop-data</code> folder for any matches of a line I grabbed from the <code>kube-apiserver-docker-desktop</code> pod's config, which I viewed using <a href="https://k8slens.dev/" rel="nofollow noreferrer">Lens</a>)</li> <li>Once I found it, I got confused by <a href="https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/#ephemeral-containers-api" rel="nofollow noreferrer">this documentation</a>. When I read <code>FEATURE STATE: Kubernetes v1.22 [alpha]</code>, I thought that meant you needed version 1.22 or higher of Kubernetes for the feature to be available. This caused a huge wild goose chase where I tried to change the version of Kubernetes that was being launched in Docker Desktop, which Docker Desktop didn't seem to like. (in retrospect, the issue may have just been the minor one in point 3 below...)</li> <li>When I first made changes to the manifest files, I was using Notepad++. And despite my liking Notepad++, it's apparently not quite as smart as vscode in the following regard: it does not automatically detect the indentation type for yaml files. Thus, when I pressed tab to create an indent, so I could add the new flag to the argument list, it added it as a tab character rather than spaces. This caused Kubernetes to fail reading of the file. That might not be so bad if Kubernetes gave a sane error message for that, but instead it merely gave the message <code>unexpected EOF</code>. And I didn't even see that error message at first because it was not being propagated to the <code>kube-controller-manager-docker-desktop</code> pod (which was the only relevant one that wasn't immediately erroring/closing). 
Anyway, I didn't realize this was the problem at the time, so...</li> <li>I decided to try bypassing the manifest-files and applying my modification to the etcd data-store directly. In retrospect, this was not a good idea, because the etcd data-store is pretty complex, the tooling is substandard, and the documentation is substandard. I spent a ton of time just trying to figure out how to send commands to read and write data to it (eventually managed to do so by calling <code>etcdctl</code> within the <code>etcd-docker-desktop</code> pod). I spent further time still writing up a NodeJS script capable of reading all the data as JSON, storing it in a dump file, and being able to write changes to entries back despite there being 3+ levels of quoting involved (I eventually was able to use <code>stdin</code> to pass the value rather than as part of the command string, to avoid quotation-mark-inception). After all the work on etcd reading/writing above, I found it didn't work anyway because Kubernetes invariably &quot;breaks&quot; if anyone else writes to its etcd data-store. (even if you write the exact same value that had been there before -- as verified by comparing the dumps before and after)</li> <li>After all of the above, I decided to have one last go with just adding the flags to mentioned manifest files. Was still getting the startup failure/error, but at the very end, I decided I wanted to see exactly what about my changes was causing Kubernetes to reject them. So I tried commenting out my added line; the error remained. I thought maybe it was a checksum-based rejection then. But then I thought, maybe the YAML parser that Kubernetes is using is just outdated and is finicky about what comments it is able to recognize. So I tried moving the comment around to different places, and was puzzled when the manifest was being accepted just by moving the comment to the root level. 
I moved it back to various locations, with it working and not working, until I thought to try making the line &quot;half-indented&quot; since it's &quot;in-between&quot; the working and non-working versions. That's when I noticed the line had a tab as its indent. And then it hit me; are the other lines also using tabs? I checked, and nope, they were using spaces. And that's when I realized I had wasted the last few hours on something I coulda just fixed with a simple indent change.</li> </ol> <p>The moral of the story for some is that YAML is a bad configuration format, because it makes it easy to make trivial errors like this. But I actually place the blame more on whatever parser Kubernetes is using for the YAML files; it is unacceptable that a YAML parser would encounter an indentation mismatch and give a message so generic as <code>unexpected EOF</code>. I don't know what the identity of that YAML parser is, but I'm tired enough of the subject that I'm not even going to look into it right now. If one of you finds it, please make an issue report for it -- perhaps including this story as a real-world example of the pain that ambiguous error messages can cause.</p>
<p>How can I configure something so that the injected istio sidecar uses the recent kubernetes container lifecycle of sidecar? The sidecar lifecycle is discussed <a href="https://medium.com/@chan_seeker/sidecar-containers-improvement-in-kubernetes-1-18-b5eb66ee2b83" rel="nofollow noreferrer">here</a> and <a href="https://banzaicloud.com/blog/k8s-sidecars/" rel="nofollow noreferrer">here</a>. More specifically is there an annotation similar to <code>sidecar.istio.io/inject: &quot;true&quot;</code>, or some other attributes in a CRD, that can do this? I am using istio 1.6. Thank you.</p>
<p>The mentioned Kubernetes sidecar proposal was withdrawn. So there is no support for that, neither in Istio 1.6 nor in more recent versions.</p> <p>See <a href="https://github.com/kubernetes/enhancements/issues/753" rel="nofollow noreferrer">https://github.com/kubernetes/enhancements/issues/753</a></p>
<p>I'm trying to install an nginx ingress controller into an Azure Kubernetes Service cluster using helm. I'm following <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip" rel="nofollow noreferrer">this Microsoft guide</a>. It's failing when I use helm to try to install the ingress controller, because it needs to pull a &quot;kube-webhook-certgen&quot; image from a local Azure Container Registry (which I created and linked to the cluster), but the kubernetes pod that's initially scheduled in the cluster fails to pull the image and shows the following error when I use <code>kubectl describe pod [pod_name]</code>:</p> <pre><code>failed to resolve reference &quot;letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068&quot;: failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized] </code></pre> <p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip#ip-and-dns-label" rel="nofollow noreferrer">This section describes using helm to create an ingress controller</a>.</p> <p>The guide describes creating an Azure Container Registry, and link it to a kubernetes cluster, which I've done successfully using:</p> <pre><code>az aks update -n myAKSCluster -g myResourceGroup --attach-acr &lt;acr-name&gt; </code></pre> <p>I then import the required 3rd party repositories successfully into my 'local' Azure Container Registry as detailed in the guide. I checked that the cluster has access to the Azure Container Registry using:</p> <pre><code>az aks check-acr --name MyAKSCluster --resource-group myResourceGroup --acr letsencryptdemoacr.azurecr.io </code></pre> <p>I also used the Azure Portal to check permissions on the Azure Container Registry and the specific repository that has the issue. 
It shows that both the cluster and repository have the ACR_PULL permission)</p> <p>When I run the helm script to create the ingress controller, it fails at the point where it's trying to create a kubernetes pod named <code>nginx-ingress-ingress-nginx-admission-create</code> in the ingress-basic namespace that I created. When I use <code>kubectl describe pod [pod_name_here]</code>, it shows the following error, which prevents creation of the ingress controller from continuing:</p> <pre><code>Failed to pull image &quot;letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen:v1.5.1@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068&quot;: [rpc error: code = NotFound desc = failed to pull and unpack image &quot;letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068&quot;: failed to resolve reference &quot;letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068&quot;: letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068: not found, rpc error: code = Unknown desc = failed to pull and unpack image &quot;letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068&quot;: failed to resolve reference &quot;letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068&quot;: failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized] </code></pre> <p>This is the helm script that I run in a linux terminal:</p> <pre><code>helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-basic --set controller.replicaCount=1 --set controller.nodeSelector.&quot;kubernetes\.io/os&quot;=linux --set controller.image.registry=$ACR_URL --set 
controller.image.image=$CONTROLLER_IMAGE --set controller.image.tag=$CONTROLLER_TAG --set controller.image.digest=&quot;&quot; --set controller.admissionWebhooks.patch.nodeSelector.&quot;kubernetes\.io/os&quot;=linux --set controller.admissionWebhooks.patch.image.registry=$ACR_URL --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG --set defaultBackend.nodeSelector.&quot;kubernetes\.io/os&quot;=linux --set defaultBackend.image.registry=$ACR_URL --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG --set controller.service.loadBalancerIP=$STATIC_IP --set controller.service.annotations.&quot;service\.beta\.kubernetes\.io/azure-dns-label-name&quot;=$DNS_LABEL </code></pre> <p>I'm using the following relevant environment variables:</p> <pre><code>$ACR_URL=letsencryptdemoacr.azurecr.io
$PATCH_IMAGE=jettech/kube-webhook-certgen
$PATCH_TAG=v1.5.1
</code></pre> <p>How do I fix the authorization?</p>
<p>The issue seems to be caused by a breaking change in the latest <code>ingress-nginx/ingress-nginx</code> Helm chart release. I fixed it by pinning the chart to version 3.36.0 instead of the latest (4.0.1):</p> <pre><code>helm upgrade -i nginx-ingress ingress-nginx/ingress-nginx \
  --version 3.36.0 \
  ...
</code></pre>
<p>I've created my cluster using minikube.</p> <p>As far as I know, the Indexed Job feature was added in Kubernetes version <a href="https://sysdig.com/blog/kubernetes-1-21-whats-new/" rel="nofollow noreferrer">1.21</a>.</p> <p>But when I create my Job, it looks like the $JOB_COMPLETION_INDEX environment variable is not set.</p> <p>Here is my YAML:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  parallelism: 2
  completions: 2
  completionMode: Indexed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: 'my-container'
          image: 'busybox'
          command: [
              &quot;sh&quot;,
              &quot;-c&quot;,
              &quot;echo My Index is $JOB_COMPLETION_INDEX &amp;&amp; sleep 3600&quot;,
            ]
</code></pre> <pre class="lang-sh prettyprint-override"><code>$ job.batch/my-job created
</code></pre> <p>But as I mentioned before, it looks like the Job is <strong>not an Indexed Job</strong>.</p> <p>Below are the logs of my pods (controlled by my-job):</p> <pre><code>$ kubectl logs pod/my-job-wxhh6
My Index is
$ kubectl logs pod/my-job-nbxkr
My Index is
</code></pre> <p>It seems that the <strong>$JOB_COMPLETION_INDEX</strong> environment variable is empty.</p> <p>I'll skip it to keep this brief, but when I accessed the container directly, there was also no $JOB_COMPLETION_INDEX.</p> <p>And below is the result of the <code>kubectl describe job.batch/my-job</code> command:</p> <pre><code>Name:           my-job
Namespace:      default
Selector:       controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
Labels:         controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
                job-name=my-job
Annotations:    &lt;none&gt;
Parallelism:    2
Completions:    2
Start Time:     Sat, 28 Aug 2021 03:56:46 +0900
Pods Statuses:  2 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=6e8bda0c-f1ee-47b9-95ec-87419b3dfaaf
           job-name=my-job
  Containers:
   my-container:
    Image:      busybox
    Port:       &lt;none&gt;
    Host Port:  &lt;none&gt;
    Command:
      sh
      -c
      echo My Index is $JOB_COMPLETION_INDEX &amp;&amp; sleep 3600
    Environment:  &lt;none&gt;
    Mounts:       &lt;none&gt;
  Volumes:        &lt;none&gt;
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  100s  job-controller  Created pod: my-job-wxhh6
  Normal  SuccessfulCreate  100s  job-controller  Created pod: my-job-nbxkr
</code></pre> <p>There is no annotation. <a href="https://kubernetes.io/blog/2021/04/19/introducing-indexed-jobs/" rel="nofollow noreferrer">According to the documentation</a>, the <code>batch.kubernetes.io/job-completion-index</code> annotation must be there.</p> <p>My version is Kubernetes <a href="https://sysdig.com/blog/kubernetes-1-21-whats-new/" rel="nofollow noreferrer">1.21</a> or later, where the Indexed Job feature was introduced.</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl version
Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.3&quot;, GitCommit:&quot;ca643a4d1f7bfe34773c74f79527be4afd95bf39&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-07-15T21:04:39Z&quot;, GoVersion:&quot;go1.16.6&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.2&quot;, GitCommit:&quot;092fbfbf53427de67cac1e9fa54aaa09a28371d7&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-06-16T12:53:14Z&quot;, GoVersion:&quot;go1.16.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}

$ minikube kubectl version
Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.2&quot;, GitCommit:&quot;092fbfbf53427de67cac1e9fa54aaa09a28371d7&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-06-16T12:59:11Z&quot;, GoVersion:&quot;go1.16.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.2&quot;, GitCommit:&quot;092fbfbf53427de67cac1e9fa54aaa09a28371d7&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-06-16T12:53:14Z&quot;, GoVersion:&quot;go1.16.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
</code></pre> <p>What should I do?</p>
<p>As of now (2021-08-29), <strong>IndexedJob</strong> is an alpha feature, so I started minikube with the feature-gates flag:</p> <pre><code>minikube start --feature-gates=IndexedJob=true
</code></pre> <p>and it works well.</p>
<p>I want to connect to a Galera cluster from an <code>haproxy</code> pod deployed in Kubernetes.</p> <p>Dockerfile for the image:</p> <pre><code>FROM haproxy:2.3
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
</code></pre> <p><code>haproxy.cfg</code> file:</p> <pre><code>defaults
    log global
    mode tcp
    retries 10
    timeout client 10000
    timeout connect 100500
    timeout server 10000

frontend mysql-router-service
    bind *:6446
    mode tcp
    option tcplog
    default_backend galera_cluster_backend

# MySQL Cluster BE configuration
backend galera_cluster_backend
    mode tcp
    #option mysql-check user haproxy
    option tcp-check
    balance source
    server mysql_cluster_01 192.168.1.2:3306 check weight 1
    server mysql_cluster_02 192.168.1.3:3306 check weight 1
    server mysql_cluster_03 192.168.1.4:3306 check weight 1
</code></pre> <p>Here the name <code>mysql-router-service</code> may be misleading, but we kept it as it was the earlier DB connectivity service.</p> <p>Kubernetes deployment manifest:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: ha-proxy
  namespace: mysql-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ha-proxy
  template:
    metadata:
      labels:
        app: ha-proxy
        version: v1
    spec:
      imagePullSecrets:
        - name: dreg
      containers:
        - name: ha-proxy
          image: our-registry:5000/haproxy:v14
          imagePullPolicy: Always
          ports:
            - containerPort: 6446
</code></pre> <p>Kubernetes service manifest:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql-router-service
  namespace: mysql-router
  labels:
    app: ha-proxy
spec:
  selector:
    app: ha-proxy
    version: v1
  ports:
    - name: ha-proxy
      port: 6446
      protocol: TCP
      targetPort: 6446
  type: LoadBalancer
  loadBalancerIP: 192.168.1.101
</code></pre> <p>The following was seen in the <code>ha-proxy</code> pod logs:</p> <pre><code>[WARNING] 237/114804 (1) : config : log format ignored for frontend 'mysql-router-service' since it has no log address.
[NOTICE] 237/114804 (1) : New worker #1 (8) forked
</code></pre> <p>If we use <code>option mysql-check user haproxy</code> in the config file, the Galera log <code>/var/log/mysql/error.log</code> has:</p> <pre><code>[Warning] Access denied for user 'haproxy'@'192.168.1.10' (using password NO)
</code></pre> <p>Here <code>192.168.1.10</code> is one of the K8s workers.</p> <p>In the Galera cluster we have the following users:</p> <pre><code>+---------------+-------------+
| Host          | User        |
+---------------+-------------+
| 192.168.1.%   | haproxy     |
| localhost     | mariadb.sys |
| localhost     | mysql       |
| localhost     | root        |
+---------------+-------------+
</code></pre> <p>All nodes are running <code>Ubuntu 18.04</code> and <code>ufw</code> is disabled.</p> <p>We can telnet to the Galera nodes from the K8s workers (<code>telnet 192.168.1.2 3306</code>).</p> <p>What have we missed here?</p>
<p>Based on:</p> <p><code>[Warning] Access denied for user 'haproxy'@'192.168.1.10' (using password NO)</code></p> <p>the health check is failing authentication for the user <code>haproxy</code>. Note that HAProxy's <code>option mysql-check</code> only performs the MySQL client handshake and does not send a password, so the check user has to exist on the Galera nodes <em>without</em> a password; a user that has a password set will always reject the check.</p>
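<p>One way to create such a passwordless check user — a sketch only; the host pattern comes from the question's user table and the user name is made up for illustration:</p>

```sql
-- Hypothetical: a check-only user with no password for HAProxy's
-- mysql-check. It needs no privileges beyond being able to connect.
CREATE USER 'haproxy_check'@'192.168.1.%';
FLUSH PRIVILEGES;
```

<p>It would then be referenced with <code>option mysql-check user haproxy_check</code> in <code>haproxy.cfg</code>.</p>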
<p>A container running behind a K8s service fails to make network requests with the error <code>x509: certificate signed by unknown authority</code>.</p> <p>The container is an API that serves incoming requests and makes external network requests before responding. It's running in a local K8s cluster managed by Docker Desktop. The third-party API being called is failing certificate validation, and I'm not using a proxy or VPN.</p> <p>What could be the cause of this?</p>
<p>I hope this helps someone else, as there are many different discussions about this topic online.</p> <p>The cause is that in a multi-stage Docker build using e.g. <code>FROM golang:alpine3.14 AS build</code> together with <code>FROM scratch</code>, the root CA certificates are not copied into the final image.</p> <p>Adding this to the Dockerfile after the <code>FROM scratch</code> line removes the error:</p> <pre><code>COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
</code></pre> <p>This was found in this <a href="https://stackoverflow.com/a/52979541/356387">Stack Overflow answer</a>.</p>
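<p>For context, a minimal multi-stage Dockerfile sketch showing where that line goes — the source layout and binary name here are made up for illustration:</p>

```dockerfile
# Build stage: compile a static Go binary
FROM golang:alpine3.14 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: scratch is completely empty, so the CA bundle
# must be copied in explicitly or TLS verification will fail
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```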
<p>I've been looking for documentation for a long time and still couldn't find any clear connection procedure. I came up with this code sample:</p> <pre><code>package aws

import (
	&quot;fmt&quot;

	&quot;github.com/aws/aws-sdk-go/aws/session&quot;
	&quot;github.com/aws/aws-sdk-go/service/eks&quot;
	&quot;github.com/joho/godotenv&quot;
)

func Connect() {
	godotenv.Load(&quot;.env&quot;)
	session := session.Must(session.NewSession())
	svc := eks.New(session)
	clusters, err := svc.ListClusters(&amp;eks.ListClustersInput{})
	if err != nil {
		fmt.Println(err.Error())
	}
	fmt.Println(clusters)
}
</code></pre> <p>I mean, this still returns a 403 Forbidden error because of an environment variable mess, but the code is valid, I guess. My question is, having this connection established: how do I convert this <code>svc</code> variable into the <code>*kubernetes.Clientset</code> one from the Go driver?</p>
<p>Have you had a look at the <a href="https://github.com/kubernetes/client-go/blob/master/examples/in-cluster-client-configuration/main.go" rel="nofollow noreferrer">client-go example</a> on how to authenticate in-cluster?</p> <p>Code that authenticates to the Kubernetes API typically starts like this:</p> <pre><code>// creates the in-cluster config
config, err := rest.InClusterConfig()
if err != nil {
	panic(err.Error())
}

// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
	panic(err.Error())
}
</code></pre>
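<p>If the code runs outside the cluster (as in the question), a common approach — a sketch, with placeholder names — is to first generate a kubeconfig for the EKS cluster with the AWS CLI, and then build the client config from that file via <code>clientcmd.BuildConfigFromFlags(&quot;&quot;, kubeconfigPath)</code> instead of <code>rest.InClusterConfig()</code>:</p>

```shell
# Writes/updates ~/.kube/config with credentials for the named cluster.
# Region and cluster name are placeholders.
aws eks update-kubeconfig --region eu-west-1 --name my-cluster
```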
<p>Is the host needed when connecting to a Postgres container behind a K8s service?</p> <p>I was able to connect via the sql driver using just the username, password, and dbname. When using the postgres driver with GORM, it fails without the host in the URL.</p> <p>The issue is that the service IP address routing to the Postgres container is only known once the service has been created and applied to the cluster. The container running the code that connects to the Postgres container doesn't have access to this host variable, short of hardcoding it or adding the host to the secrets and reapplying the secrets file.</p> <p>What is the correct way to dynamically connect to the Postgres container via the K8s service from another container?</p>
<p>The idiomatic way to achieve this is to use the <code>service name</code> as the <code>hostname</code>, together with the <code>port</code> the <code>service</code> is exposing.</p> <p>When in the same <code>namespace</code>, the <code>service name</code> alone is enough, e.g.:</p> <pre class="lang-sh prettyprint-override"><code>my-service-name:8080
</code></pre> <p>When in different namespaces, you can append the namespace to the service name, as it is part of the container's fully qualified domain name, e.g.:</p> <pre class="lang-sh prettyprint-override"><code>my-service.my-namespace:8080
</code></pre> <p>Additionally, many tools and even the browser use <code>default ports</code>. So if you were to expose your <code>service</code> on <code>port 80</code> and wanted to reach it over <code>HTTP</code> with, let's say, <code>curl</code>, you could omit the <code>port</code>, since <code>curl</code> uses <code>port 80</code> by default.</p> <pre class="lang-sh prettyprint-override"><code>curl my-service
</code></pre> <p>Note that in this instance even the protocol is left out, as <code>curl</code> uses <code>HTTP</code> by default. If you wanted to connect over <code>HTTPS</code> on <code>port 8080</code> to a <code>service</code>, then you would need to provide both the <code>protocol</code> and the <code>port</code>.</p> <pre class="lang-sh prettyprint-override"><code>curl https://my-service:8080
</code></pre> <p>This is because Kubernetes runs its own <code>DNS resolver</code>. To learn more about this, you can have a look at this question.</p> <p><a href="https://stackoverflow.com/questions/50668124/why-dig-does-not-resolve-k8s-service-by-dns-name-while-nslookup-has-no-problems/68130684#68130684">Why dig does not resolve K8s service by dns name while nslookup has no problems with it?</a></p> <p>Furthermore, it is a common pattern to use <code>environment variables</code> for your container to set the <code>URI</code> to connect to other <code>services</code>.
That has the benefit of being able to deploy your <code>service</code> anywhere without having to know upfront how the other service is reachable. Otherwise, you would potentially need to rebuild the <code>container</code> if something outside of it, such as the <code>service name</code> or <code>port</code>, changed.</p>
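<p>A minimal sketch of that pattern — all names here are hypothetical: the Postgres host is passed to the application container as an environment variable that is simply the service name, so the connection string can be assembled at startup.</p>

```yaml
# Hypothetical deployment snippet: the app reads DB_HOST/DB_PORT at
# startup and builds its Postgres URL from them, e.g.
# postgres://user:pass@$DB_HOST:$DB_PORT/mydb
containers:
  - name: my-app
    image: my-app:latest
    env:
      - name: DB_HOST
        value: postgres-service   # the Service name resolves via cluster DNS
      - name: DB_PORT
        value: "5432"
```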
<p>I deployed an OpenVPN server in the K8s cluster and deployed an OpenVPN client on a host outside the cluster. However, when I use the client to access the cluster, I can only reach the pods on the host where the OpenVPN server is located, but cannot reach pods on the other hosts in the cluster. The cluster network is Calico. I also added the following iptables rules to the OpenVPN server host in the cluster:</p> <p>I found that I did not see the return packets when I captured traffic on tun0 on the server.</p>
<p>When the server is deployed with <code>hostNetwork</code>, a FORWARD rule is missing from the host's iptables configuration.</p>
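<p>For illustration, a typical pair of rules that allows traffic from the VPN tunnel interface to be forwarded onwards — a sketch under the assumption that the tunnel device is <code>tun0</code>; the exact rules depend on your setup:</p>

```shell
# Allow packets arriving on the tunnel to be forwarded to the pod network,
# and allow the replies back. Run on the OpenVPN server host.
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -o tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```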
<p>I want a deployment in kubernetes to have the permission to restart itself, from within the cluster.</p> <p>I know I can create a serviceaccount and bind it to the pod, but I'm missing the name of the most specific permission (i.e. not just allowing <code>'*'</code>) to allow for the command</p> <pre><code>kubectl rollout restart deploy &lt;deployment&gt;
</code></pre> <p>here's what I have, and ??? is what I'm missing</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: restart-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: restarter
rules:
  - apiGroups: [&quot;apps&quot;]
    resources: [&quot;deployments&quot;]
    verbs: [&quot;list&quot;, &quot;???&quot;]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: restart-sa
    namespace: default
roleRef:
  kind: Role
  name: restarter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - image: nginx
      name: nginx
  serviceAccountName: restart-sa
</code></pre>
<p>I believe the following is the minimum permissions required to restart a deployment:</p> <pre><code>rules:
  - apiGroups: [&quot;apps&quot;, &quot;extensions&quot;]
    resources: [&quot;deployments&quot;]
    resourceNames: [$DEPLOYMENT]
    verbs: [&quot;get&quot;, &quot;patch&quot;]
</code></pre>
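<p>For background on why <code>get</code> and <code>patch</code> are enough: <code>kubectl rollout restart</code> works by patching the pod template with a <code>kubectl.kubernetes.io/restartedAt</code> annotation, which triggers a new rollout. From inside the pod you can perform the equivalent patch yourself — a sketch, replace the deployment name with your own (and note that <code>date -Iseconds</code> assumes GNU date):</p>

```shell
# Equivalent of `kubectl rollout restart deploy my-deployment`:
# bump the restartedAt annotation on the pod template.
kubectl patch deployment my-deployment \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$(date -Iseconds)\"}}}}}"
```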
<p>I have read in ingress-nginx documentation that the rewrite is being performed thanks to an annotation like this:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
    - host: rewrite.bar.com
      http:
        paths:
          - backend:
              serviceName: http-svc
              servicePort: 80
            path: /something(/|$)(.*)
</code></pre> <p>I have a case where I have multiple hosts and I want URL rewriting for some particular paths only:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot;
  name: ingress
spec:
  rules:
    - host: somehost.westeurope.cloudapp.azure.com
      http:
        paths:
          - path: /rest-smtp-sink   # I want to rewrite this path
            pathType: Prefix
            backend:
              service:
                name: rest-smtp-sink-svc
                port:
                  number: 80
          - path: /backend          # This one too
            pathType: Prefix
            backend:
              service:
                name: server-svc
                port:
                  number: 80
          - path: /                 # But not this one
            pathType: Prefix
            backend:
              service:
                name: client-svc
                port:
                  number: 80
</code></pre> <p>However, the annotation seems to be global. How do I enable URL rewriting for some paths only?</p>
<p>I managed to get the desired result with this configuration:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: ingress
spec:
  rules:
    - host: somehost.westeurope.cloudapp.azure.com
      http:
        paths:
          - path: /rest-smtp-sink(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: rest-smtp-sink-svc
                port:
                  number: 80
          - path: /backend(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-svc
                port:
                  number: 80
          - path: /()(.*)
            pathType: Prefix
            backend:
              service:
                name: client-svc
                port:
                  number: 80
</code></pre> <p>As the <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotation is global, I've used <code>/$2</code> as the rewrite target and <code>/()(.*)</code> as a no-op for the root path.</p>
<p>I'm new to running Prometheus and Grafana. I want to create an alert that fires when a Kubernetes pod has been in a Pending state for more than 15 minutes. The PromQL query I'm using is:</p> <blockquote> <p>kube_pod_status_phase{exported_namespace=&quot;mynamespace&quot;, phase=&quot;Pending&quot;} &gt; 0</p> </blockquote> <p>What I haven't been able to figure out is how to construct an alert based upon how long the pod has been in that state. I've tried a few permutations of alert conditions in Grafana along the lines of:</p> <blockquote> <p>WHEN avg() OF query (A, 15m, now) IS ABOVE 1</p> </blockquote> <p>They all fire an alert based upon the number of pods in the state rather than the duration.</p> <p>How can an alert be constructed based upon the time in the state?</p> <p>Please &amp; thank you!</p>
<pre><code>- alert: KubernetesPodNotHealthy
  expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~&quot;Pending|Unknown|Failed&quot;})[15m:1m]) &gt; 0
  for: 0m
  labels:
    severity: critical
  annotations:
    summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
    description: &quot;Pod has been in a non-ready state for longer than 15 minutes.\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}&quot;
</code></pre>
<p>There is only one version per object in K8s 1.20, as can be checked by the command:</p> <pre><code>kubectl api-resources
</code></pre> <p>Also, creating custom objects with different versions is not allowed; <code>AlreadyExists</code> is thrown on trying.</p> <p>In what use cases is providing the <code>--api-version</code> option useful, then?</p>
<p>Command:</p> <pre class="lang-yaml prettyprint-override"><code>kubectl api-resources </code></pre> <p>Print the supported API resources on the server. You can read more about this command and allowed options <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#api-resources" rel="nofollow noreferrer">here</a>. Supported flags are:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Shorthand</th> <th>Default</th> <th>Usage</th> </tr> </thead> <tbody> <tr> <td>api-group</td> <td></td> <td></td> <td>Limit to resources in the specified API group.</td> </tr> <tr> <td>cached</td> <td></td> <td>false</td> <td>Use the cached list of resources if available.</td> </tr> <tr> <td>namespaced</td> <td></td> <td>true</td> <td>If false, non-namespaced resources will be returned, otherwise returning namespaced resources by default.</td> </tr> <tr> <td>no-headers</td> <td></td> <td>false</td> <td>When using the default or custom-column output format, don't print headers (default print headers).</td> </tr> <tr> <td>output</td> <td>o</td> <td></td> <td>Output format. One of: wide</td> </tr> <tr> <td>sort-by</td> <td></td> <td></td> <td>If non-empty, sort list of resources using specified field. The field can be either 'name' or 'kind'.</td> </tr> <tr> <td>verbs</td> <td></td> <td>[]</td> <td>Limit to resources that support the specified verbs.</td> </tr> </tbody> </table> </div> <p>You can use <code>--api-group</code> option to limit to resources in the specified API group.</p> <p>There also exist the command:</p> <pre><code>kubectl api-versions </code></pre> <p>and it prints the supported API versions on the server, in the form of &quot;group/version&quot;. 
You can read more about it <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#api-versions" rel="nofollow noreferrer">here</a>.</p> <p>You can also read more about <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning" rel="nofollow noreferrer">API groups and versioning</a>.</p> <p><strong>EDIT:</strong> In the comment:</p> <blockquote> <p>No, see example &quot;kubectl explain deployment --api-version v1&quot;. In other words: when there can be more then one api version of a resource?</p> </blockquote> <p>you are referring to a completely different command which is <code>kubectl explain</code>. Option <code>--api-version</code> gets different explanations for particular API version (API group/version). You can read more about it <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#explain" rel="nofollow noreferrer">here</a>.</p>
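<p>As a concrete example of the <code>--api-version</code> flag on <code>kubectl explain</code>: where the cluster serves a resource under more than one group/version (e.g. during an API deprecation window), the schema for each version can be inspected separately:</p>

```shell
# Explain Deployment under the current apps/v1 schema
kubectl explain deployment --api-version apps/v1

# On older clusters that still served it, the deprecated schema
# could be inspected the same way (hypothetical on a modern cluster):
kubectl explain deployment --api-version extensions/v1beta1
```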
<p>I'm trying to build a self-managed Kubernetes cluster on AWS/EC2 using Ubuntu 18.04 VMs (so not EKS). I've managed to get the master built, which is integrated with an ELB/Classic LB (I couldn't get this working with an NLB) to allow me to expose services via type=LoadBalancer before moving over to an ingress controller such as nginx or Istio to do more L7 stuff.</p> <p>The master is healthy and in Ready status, running K8s version 1.20.5.</p> <p>I've managed to join a worker node to the cluster.</p> <p>If I run kubectl get node on the master, both the master and worker node are showing as Ready.</p> <p>But as the worker node joins the cluster, I see the error below, relating to uploading the crisocket.</p> <p>Anyone got any ideas why? I don't want to move on before clearing the error, even though both my master and worker nodes are 'Ready'. Thanks!</p> <p>error uploading crisocket: timed out waiting for the condition</p> <p>This is the debug output from the joining process:</p> <pre><code>I0326 11:53:48.564188    4751 join.go:395] [preflight] found NodeName empty; using OS hostname as NodeName
I0326 11:53:48.564426    4751 initconfiguration.go:104] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0326 11:53:48.564662    4751 preflight.go:90] [preflight] Running general checks
I0326 11:53:48.564821    4751 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0326 11:53:48.564946    4751 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0326 11:53:48.565004    4751 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0326 11:53:48.565050    4751 checks.go:102] validating the container runtime
I0326 11:53:48.623727    4751 checks.go:128] validating if the &quot;docker&quot; service is enabled and active
I0326 11:53:48.694853    4751 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0326 11:53:48.695050    4751 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0326 11:53:48.695164    4751 checks.go:649] validating whether swap is enabled or not
I0326 11:53:48.695282    4751 checks.go:376] validating the presence of executable conntrack
I0326 11:53:48.695382    4751 checks.go:376] validating the presence of executable ip
I0326 11:53:48.695487    4751 checks.go:376] validating the presence of executable iptables
I0326 11:53:48.695608    4751 checks.go:376] validating the presence of executable mount
I0326 11:53:48.695691    4751 checks.go:376] validating the presence of executable nsenter
I0326 11:53:48.695805    4751 checks.go:376] validating the presence of executable ebtables
I0326 11:53:48.695874    4751 checks.go:376] validating the presence of executable ethtool
I0326 11:53:48.695961    4751 checks.go:376] validating the presence of executable socat
I0326 11:53:48.696007    4751 checks.go:376] validating the presence of executable tc
I0326 11:53:48.696101    4751 checks.go:376] validating the presence of executable touch
I0326 11:53:48.696213    4751 checks.go:520] running all checks
I0326 11:53:48.766440    4751 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0326 11:53:48.767324    4751 checks.go:618] validating kubelet version
I0326 11:53:48.858929    4751 checks.go:128] validating if the &quot;kubelet&quot; service is enabled and active
I0326 11:53:48.871674    4751 checks.go:201] validating availability of port 10250
I0326 11:53:48.871944    4751 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0326 11:53:48.872045    4751 checks.go:432] validating if the connectivity type is via proxy or direct
I0326 11:53:48.872194    4751 join.go:465] [preflight] Discovering cluster-info
I0326 11:53:48.872309    4751 token.go:78] [discovery] Created cluster-info discovery client, requesting info from &quot;internal-k8-lb-1843285331.eu-west-1.elb.amazonaws.com:6443&quot;
I0326 11:53:48.901218    4751 token.go:116] [discovery] Requesting info from &quot;internal-k8-lb-1843285331.eu-west-1.elb.amazonaws.com:6443&quot; again to validate TLS against the pinned public key
I0326 11:53:48.913626    4751 token.go:133] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server &quot;internal-k8-lb-1843285331.eu-west-1.elb.amazonaws.com:6443&quot;
I0326 11:53:48.913749    4751 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0326 11:53:48.913840    4751 join.go:479] [preflight] Fetching init configuration
I0326 11:53:48.913948    4751 join.go:517] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0326 11:53:48.929632    4751 interface.go:400] Looking for default routes with IPv4 addresses
I0326 11:53:48.929749    4751 interface.go:405] Default route transits interface &quot;eth0&quot;
I0326 11:53:48.930180    4751 interface.go:208] Interface eth0 is up
I0326 11:53:48.930365    4751 interface.go:256] Interface &quot;eth0&quot; has 2 addresses :[172.31.27.238/20 fe80::47a:b6ff:fe55:969d/64].
I0326 11:53:48.930482    4751 interface.go:223] Checking addr  172.31.27.238/20.
I0326 11:53:48.930569    4751 interface.go:230] IP found 172.31.27.238
I0326 11:53:48.930674    4751 interface.go:262] Found valid IPv4 address 172.31.27.238 for interface &quot;eth0&quot;.
I0326 11:53:48.930758    4751 interface.go:411] Found active IP 172.31.27.238
I0326 11:53:48.940030    4751 preflight.go:101] [preflight] Running configuration dependant checks
I0326 11:53:48.940151    4751 controlplaneprepare.go:211] [download-certs] Skipping certs download
I0326 11:53:48.940238    4751 kubelet.go:110] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0326 11:53:48.941312    4751 kubelet.go:118] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0326 11:53:48.942266    4751 kubelet.go:139] [kubelet-start] Checking for an existing Node in the cluster with name &quot;ip-172-31-27-238&quot; and status &quot;Ready&quot;
I0326 11:53:48.946297    4751 kubelet.go:153] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file &quot;/var/lib/kubelet/config.yaml&quot;
[kubelet-start] Writing kubelet environment file with flags to file &quot;/var/lib/kubelet/kubeadm-flags.env&quot;
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0326 11:53:54.169977    4751 kubelet.go:188] [kubelet-start] preserving the crisocket information for the node
I0326 11:53:54.170123    4751 patchnode.go:30] [patchnode] Uploading the CRI Socket information &quot;/var/run/dockershim.sock&quot; to the Node API object &quot;ip-172-31-27-238&quot; as an annotation
I0326 11:53:54.170218    4751 cert_rotation.go:137] Starting client certificate rotation controller
[kubelet-check] Initial timeout of 40s passed.
timed out waiting for the condition
error uploading crisocket
</code></pre>
<pre><code>sudo kubeadm reset
sudo systemctl enable docker
sudo systemctl enable kubelet
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo netstat -lnp | grep 1025
sudo rm -rf /etc/kubernetes/kubelet.conf /etc/kubernetes/pki/ca.crt
sudo kubeadm join ipaddress:6443 --token
</code></pre>
<p>I'm trying to install Redis in a Kubernetes environment with the <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis" rel="nofollow noreferrer">Bitnami Redis Helm chart</a>. I want to use a defined password rather than a randomly generated one. But I'm getting the error below when I try to connect to the Redis master or replicas with redis-cli:</p> <pre><code>I have no name!@redis-client:/$ redis-cli -h redis-master -a $REDIS_PASSWORD
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Warning: AUTH failed
</code></pre> <p>I created a Kubernetes secret like this:</p> <pre><code>---
apiVersion: v1
kind: Secret
metadata:
  name: redis-secret
  namespace: redis
type: Opaque
data:
  redis-password: YWRtaW4xMjM0Cg==
</code></pre> <p>And in the values.yaml file I updated the auth spec as below:</p> <pre><code>auth:
  enabled: true
  sentinel: false
  existingSecret: &quot;redis-secret&quot;
  existingSecretPasswordKey: &quot;redis-password&quot;
  usePasswordFiles: false
</code></pre> <p>If I don't define the <code>existingSecret</code> field and use the randomly generated password, then I can connect without an issue. I also tried <code>AUTH admin1234</code> after the <code>Warning: AUTH failed</code> error, but it didn't work either.</p>
<p>You can achieve it in a much simpler way, i.e. by running:</p> <pre><code>$ helm install my-release \
  --set auth.password=&quot;admin1234&quot; \
  bitnami/redis
</code></pre> <p>This will update your <code>&quot;my-release-redis&quot;</code> secret, so when you run:</p> <pre><code>$ kubectl get secrets my-release-redis -o yaml
</code></pre> <p>you'll see it contains your password, already <code>base64</code>-encoded:</p> <pre><code>apiVersion: v1
data:
  redis-password: YWRtaW4xMjM0
kind: Secret
...
</code></pre> <p>Note that this differs from the value in your manually created secret: <code>YWRtaW4xMjM0Cg==</code> decodes to <code>admin1234</code> plus a trailing newline, which is most likely why <code>AUTH</code> failed for you.</p> <p>In order to get your password, you need to run:</p> <pre><code>export REDIS_PASSWORD=$(kubectl get secret --namespace default my-release-redis -o jsonpath=&quot;{.data.redis-password}&quot; | base64 --decode)
</code></pre> <p>This will set and export the <code>REDIS_PASSWORD</code> environment variable containing your Redis password.</p> <p>And then you may run your <code>redis-client</code> pod:</p> <pre><code>kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:6.2.4-debian-10-r13 --command -- sleep infinity
</code></pre> <p>which will set the <code>REDIS_PASSWORD</code> environment variable within your <code>redis-client</code> pod by assigning to it the value of <code>REDIS_PASSWORD</code> set locally in the previous step.</p>
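<p>On that note, if you do want to keep a pre-created Secret, make sure the value is encoded without a trailing newline — plain <code>echo</code> appends one, <code>echo -n</code> does not:</p>

```shell
# Correct: no trailing newline in the encoded password
echo -n 'admin1234' | base64
# Wrong: plain echo bakes a newline into the decoded password,
# which makes Redis AUTH fail
echo 'admin1234' | base64
```

<p>Alternatively, <code>kubectl create secret generic redis-secret --from-literal=redis-password='admin1234'</code> avoids the manual encoding step entirely.</p>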
<p>I am using a Kubernetes v1.20.10 bare-metal installation. It has one master node and 3 worker nodes. The application simply serves HTTP requests.</p> <p>I am scaling the deployment with the Horizontal Pod Autoscaler (HPA), and I noticed that the load is not distributed evenly across pods. The first pod gets 95% of the load and the other pod gets very little.</p> <p>I tried the answer mentioned here, but it did not work: <a href="https://stackoverflow.com/questions/61586163/kubernetes-service-does-not-distribute-requests-between-pods">Kubernetes service does not distribute requests between pods</a></p>
<p>Based on the information provided, I assume that you are using HTTP keep-alive, which keeps a TCP connection open across requests. A Kubernetes Service distributes load per (new) TCP connection. If you have persistent connections, only the additional connections get distributed to other pods, which is the effect that you observe.</p> <p>Try disabling HTTP keep-alive, or set the maximum keep-alive time to something like 15 seconds and the maximum number of requests per connection to something like 50.</p>
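<p>As a sketch of what that looks like in practice — assuming, purely for illustration, that the pods run nginx (the question doesn't say what serves the requests, and most HTTP servers expose equivalent settings):</p>

```nginx
# Close idle keep-alive connections after 15 s and recycle a connection
# after 50 requests, forcing clients to reconnect periodically so that
# kube-proxy can balance the new connections across pods.
keepalive_timeout  15s;
keepalive_requests 50;
```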
<p>I'm having some trouble getting the NGINX ingress controller working in my Minikube cluster. The fault most likely lies in my Ingress configuration, but I cannot pin it down.</p> <p>First, I deployed a service and it worked well without ingress.</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: online
  labels:
    app: online
spec:
  selector:
    app: online
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 5001
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: online
  labels:
    app: online
spec:
  replicas: 1
  selector:
    matchLabels:
      app: online
  template:
    metadata:
      labels:
        app: online
      annotations:
        dapr.io/enabled: &quot;true&quot;
        dapr.io/app-id: &quot;online&quot;
        dapr.io/app-port: &quot;5001&quot;
        dapr.io/log-level: &quot;debug&quot;
        dapr.io/sidecar-liveness-probe-threshold: &quot;300&quot;
        dapr.io/sidecar-readiness-probe-threshold: &quot;300&quot;
    spec:
      containers:
        - name: online
          image: online:latest
          ports:
            - containerPort: 5001
          env:
            - name: ADDRESS
              value: &quot;:5001&quot;
            - name: DAPR_HTTP_PORT
              value: &quot;8080&quot;
          imagePullPolicy: Never
</code></pre> <p>Then I checked its URL:</p> <pre><code>minikube service online --url
http://192.168.49.2:32323
</code></pre> <p>Requests to it look OK:</p> <pre><code>curl http://192.168.49.2:32323/userOnline
OK
</code></pre> <p>After that I tried to apply the NGINX ingress offered by minikube.
I installed ingress and ran an example by referring to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">this</a>, and it all worked.</p> <p>Lastly, I configured my Ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: online-ingress
  annotations:
spec:
  rules:
    - host: online
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: online
                port:
                  number: 8080
</code></pre> <p>And changed /etc/hosts by adding the line</p> <pre><code>192.168.49.2 online
</code></pre> <p>And tested:</p> <pre><code>curl online/userOnline
502 Bad Gateway
</code></pre> <p>The logs are like this:</p> <pre><code>192.168.49.1 - - [26/Aug/2021:09:45:56 +0000] &quot;GET /userOnline HTTP/1.1&quot; 502 150 &quot;-&quot; &quot;curl/7.68.0&quot; 80 0.002 [default-online-8080] [] 172.17.0.5:5001, 172.17.0.5:5001, 172.17.0.5:5001 0, 0, 0 0.004, 0.000, 0.000 502, 502, 502 578ea1b1471ac973a2ac45ec4c35d927
2021/08/26 09:45:56 [error] 2514#2514: *426717 upstream prematurely closed connection while reading response header from upstream, client: 192.168.49.1, server: online, request: &quot;GET /userOnline HTTP/1.1&quot;, upstream: &quot;http://172.17.0.5:5001/userOnline&quot;, host: &quot;online&quot;
2021/08/26 09:45:56 [error] 2514#2514: *426717 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.49.1, server: online, request: &quot;GET /userOnline HTTP/1.1&quot;, upstream: &quot;http://172.17.0.5:5001/userOnline&quot;, host: &quot;online&quot;
2021/08/26 09:45:56 [error] 2514#2514: *426717 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.49.1, server: online, request: &quot;GET /userOnline HTTP/1.1&quot;, upstream: &quot;http://172.17.0.5:5001/userOnline&quot;, host: &quot;online&quot;
W0826 09:45:56.918446 7 controller.go:977] Service &quot;default/online&quot; does not have any active Endpoint.
I0826 09:46:21.345177 7 status.go:281] &quot;updating Ingress status&quot; namespace=&quot;default&quot; ingress=&quot;online-ingress&quot; currentValue=[] newValue=[{IP:192.168.49.2 Hostname: Ports:[]}] I0826 09:46:21.349078 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;default&quot;, Name:&quot;online-ingress&quot;, UID:&quot;b69e2976-09e9-4cfc-a8e8-7acb51799d6d&quot;, APIVersion:&quot;networking.k8s.io/v1beta1&quot;, ResourceVersion:&quot;23100&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync </code></pre> <p>I found the error is very about annotations of Ingress. If I changed it to:</p> <pre><code> annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 </code></pre> <p>The error would be:</p> <pre><code>404 page not found </code></pre> <p>and logs:</p> <pre><code>I0826 09:59:21.342251 7 status.go:281] &quot;updating Ingress status&quot; namespace=&quot;default&quot; ingress=&quot;online-ingress&quot; currentValue=[] newValue=[{IP:192.168.49.2 Hostname: Ports:[]}] I0826 09:59:21.347860 7 event.go:282] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;default&quot;, Name:&quot;online-ingress&quot;, UID:&quot;8ba6fe97-315d-4f00-82a6-17132095fab4&quot;, APIVersion:&quot;networking.k8s.io/v1beta1&quot;, ResourceVersion:&quot;23760&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'Sync' Scheduled for sync 192.168.49.1 - - [26/Aug/2021:09:59:32 +0000] &quot;GET /userOnline HTTP/1.1&quot; 404 19 &quot;-&quot; &quot;curl/7.68.0&quot; 80 0.002 [default-online-8080] [] 172.17.0.5:5001 19 0.000 404 856ddd3224bbe2bde9d7144b857168e0 </code></pre> <p>Other infos.</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE online LoadBalancer 10.111.34.87 &lt;pending&gt; 8080:32323/TCP 6h54m </code></pre> <p>The example I mentioned above is a <code>NodePort</code> service and mine is a <code>LoadBalancer</code>, that's the biggest difference. 
But I don't know why it does not work for me.</p>
<p>Moving this out of the comments so it will be visible.</p> <hr /> <p><strong>Ingress</strong></p> <p>The main issue was with <code>path</code> in the ingress rule, since the application serves traffic on <code>online/userOnline</code>. If requests go to <code>online</code>, the ingress returns <code>404</code>.</p> <p>The rewrite annotation is not needed in this case either.</p> <p><code>ingress.yaml</code> should look like:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: online-ingress
  # annotations:
spec:
  rules:
    - host: online
      http:
        paths:
          - path: /userOnline
            pathType: Prefix
            backend:
              service:
                name: online
                port:
                  number: 8080
</code></pre> <p>More details about <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress</a>.</p> <hr /> <p><strong>LoadBalancer on Minikube</strong></p> <p>Since minikube is considered a <code>bare metal</code> installation, to get an <code>external IP</code> for a service/ingress it's necessary to use the specially designed <code>metallb</code> solution.</p> <p><a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> is a load-balancer implementation for bare-metal Kubernetes clusters, using standard routing protocols.</p> <p>It ships as an add-on for <code>minikube</code> and can be enabled with:</p> <pre><code>minikube addons enable metallb
</code></pre> <p>It also needs a <code>ConfigMap</code> with its setup. Please refer to the <a href="https://metallb.universe.tf/configuration/" rel="nofollow noreferrer">metallb configuration</a>.</p>
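<p>For completeness, a minimal layer-2 ConfigMap could look like the sketch below. The address range is an assumption; pick free addresses from the minikube network (the cluster IP in the question is <code>192.168.49.2</code>, so its subnet is used here):</p>

```yaml
# MetalLB layer-2 configuration (ConfigMap-based setup). The address pool
# is illustrative and must lie within the minikube node's subnet.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.49.100-192.168.49.120
```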
<p>I am using Kubernetes in Azure with Virtual Nodes, a plugin that creates virtual nodes using Azure Container Instances.</p> <p>The instructions to set this up require creating an AKSNet/AKSSubnet, which seems to automatically come along with a VMSS called something like <code>aks-control-xxx-vmss</code>. I followed the instructions at the link below.</p> <p><a href="https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli</a></p> <p>This comes with a single-instance VM that I am being charged full price for, regardless of the container instances I create, and I am charged extra for every container instance I provision onto my virtual node pool, even if they should fit on just 1 VM. These resources do not seem to be related.</p> <p>I am currently disputing this as unexpected billing with Microsoft, but the process has been very slow, so I am turning to here to find out if anyone else has had this experience.</p> <p>The main questions I have are:</p> <ul> <li>Can I use Azure Container Instances without the VMSS?</li> <li>If not, can I somehow make this VM visible to my cluster so I can at least use it to provision containers onto and get some value out of it?</li> <li>Have I just done something wrong?</li> </ul> <p>Update, NB: this is not my control node; that is a B2s, which I can see my system containers running on.</p> <p>Any advice would be a great help.</p>
<blockquote> <p>Can I use Azure Container Instances without the VMSS?</p> </blockquote> <p>In an AKS cluster currently you <em><strong>cannot</strong></em> have virtual nodes without a <strong>node pool</strong> of type <code>VirtualMachineScaleSets</code> or <code>AvailabilitySet</code>. An AKS cluster has at least one node, an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime. [<a href="https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#nodes-and-node-pools" rel="nofollow noreferrer">Reference</a>] Every AKS cluster must contain at least one system node pool with at least one node. System node pools serve the primary purpose of hosting critical system pods such as <code>CoreDNS</code>, <code>kube-proxy</code> and <code>metrics-server</code>. However, application pods can be scheduled on system node pools if you wish to only have one pool in your AKS cluster.</p> <p>For more information on System Node Pools please check <a href="https://learn.microsoft.com/en-us/azure/aks/use-system-pools" rel="nofollow noreferrer">this document</a>.</p> <p>In fact, if you run <code>kubectl get pods -n kube-system -o wide</code> you will see all the system pods running on the VMSS-backed node pool node including the aci-connector-linux-xxxxxxxx-xxxxx pod which connects the cluster to the virtual node, as shown below:</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES aci-connector-linux-859d9ff5-24tgq 1/1 Running 0 49m 10.240.0.30 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; azure-cni-networkmonitor-7zcvf 1/1 Running 0 58m 10.240.0.4 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; azure-ip-masq-agent-tdhnx 1/1 Running 0 58m 10.240.0.4 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; coredns-autoscaler-6699988865-k7cs5 1/1 Running 0 58m 10.240.0.31 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; coredns-d4866bcb7-4r9tj 1/1 Running 0 49m 
10.240.0.12 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; coredns-d4866bcb7-5vkhc 1/1 Running 0 58m 10.240.0.28 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; coredns-d4866bcb7-b7bzg 1/1 Running 0 49m 10.240.0.11 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; coredns-d4866bcb7-fltbf 1/1 Running 0 49m 10.240.0.29 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; coredns-d4866bcb7-n94tg 1/1 Running 0 57m 10.240.0.34 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; konnectivity-agent-7564955db-f4fm6 1/1 Running 0 58m 10.240.0.4 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; kube-proxy-lntqs 1/1 Running 0 58m 10.240.0.4 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; metrics-server-97958786-bmmv9 1/1 Running 1 58m 10.240.0.24 aks-nodepool1-29819654-vmss000000 &lt;none&gt; &lt;none&gt; </code></pre> <p>However, you can deploy <a href="https://learn.microsoft.com/en-us/azure/container-instances/container-instances-overview" rel="nofollow noreferrer">Azure Container Instances</a> [<a href="https://learn.microsoft.com/en-us/azure/container-instances/container-instances-quickstart" rel="nofollow noreferrer">How-to</a>] without an AKS cluster altogether. For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend <a href="https://learn.microsoft.com/en-us/azure/aks/" rel="nofollow noreferrer">Azure Kubernetes Service (AKS)</a>.</p> <blockquote> <p>If not can I somehow make this VM visible to my cluster so I can at least use it to provision containers onto and get some value out of it?</p> </blockquote> <p>Absolutely, you can. 
In fact if you do a <code>kubectl get nodes</code> and the node from the VMSS-backed node pool (in your case aks-control-xxx-vmss-x) shows <code>STATUS</code> as Ready, then it is available to the <code>kube-scheduler</code> for <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">scheduling</a> workloads. Please check <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#condition" rel="nofollow noreferrer">this document</a>.</p> <p>If you do a <code>kubectl describe node virtual-node-aci-linux</code> you should find the following in the output:</p> <pre><code>... Labels: alpha.service-controller.kubernetes.io/exclude-balancer=true beta.kubernetes.io/os=linux kubernetes.azure.com/managed=false kubernetes.azure.com/role=agent kubernetes.io/hostname=virtual-node-aci-linux kubernetes.io/role=agent node-role.kubernetes.io/agent= node.kubernetes.io/exclude-from-external-load-balancers=true type=virtual-kubelet ... Taints: virtual-kubelet.io/provider=azure:NoSchedule ... </code></pre> <p>In <a href="https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli" rel="nofollow noreferrer">the document</a> that you are following, in the <a href="https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli#deploy-a-sample-app" rel="nofollow noreferrer">Deploy a sample app section</a> to schedule the container on the virtual node, a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">nodeSelector</a> and <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">toleration</a> are defined in the <a href="https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli#deploy-a-sample-app" rel="nofollow noreferrer">Deploy a sample app section</a> as follows:</p> <pre><code>... 
nodeSelector: kubernetes.io/role: agent beta.kubernetes.io/os: linux type: virtual-kubelet tolerations: - key: virtual-kubelet.io/provider operator: Exists - key: azure.com/aci effect: NoSchedule </code></pre> <p>If you remove this part from the <em>Deployment</em> manifest, or do not specify this part in the manifest of a workload that you are deploying, then the corresponding resource(s) will be scheduled on a VMSS-backed node.</p> <blockquote> <p>Have I just done something wrong?</p> </blockquote> <p>Maybe you can evaluate the answer to this based on my responses to your earlier questions. However, here's a little more to help you understand:</p> <p>If a node doesn't have sufficient compute resources to run a requested pod, that pod can't progress through the scheduling process. The pod can't start unless additional compute resources are available within the node pool.</p> <p>When the cluster autoscaler notices pods that can't be scheduled because of node pool resource constraints, the number of nodes within the node pool is increased to provide the additional compute resources. When those additional nodes are successfully deployed and available for use within the node pool, the pods are then scheduled to run on them.</p> <p>If your application needs to scale rapidly, some pods may remain in a state waiting to be scheduled until the additional nodes deployed by the cluster autoscaler can accept the scheduled pods. For applications that have high burst demands, you can scale with virtual nodes and Azure Container Instances.</p> <p>This however <strong>does not mean that we can dispense with the VMSS or Availability Set backed node pools</strong>.</p>
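<p>To do the opposite, i.e. pin a workload explicitly to the VMSS-backed pool rather than the virtual node, you can select on the node-pool label that AKS puts on its nodes. This is a hedged sketch: the pool name <code>nodepool1</code> below is an assumption, so check yours with <code>kubectl get nodes --show-labels</code> first.</p>

```yaml
# Schedule onto the VMSS-backed node pool only (pool name is hypothetical).
nodeSelector:
  kubernetes.azure.com/agentpool: nodepool1
```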
<p>I am currently using the KubernetesPodOperator to run a Pod on a Kubernetes cluster. I am getting the below error:</p> <blockquote> <p>kubernetes.client.rest.ApiException: (403) Reason: Forbidden</p> <p>HTTP response headers: HTTPHeaderDict({'Audit-Id': '', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon, 30 Aug 2021 00:12:57 GMT', 'Content-Length': '309'})</p> <p>HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;pods is forbidden: User &quot;system:serviceaccount:airflow10:airflow-worker-serviceaccount&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;default&quot;&quot;,&quot;reason&quot;:&quot;Forbidden&quot;,&quot;details&quot;:{&quot;kind&quot;:&quot;pods&quot;},&quot;code&quot;:403}</p> </blockquote> <p>I can resolve this by running the below commands:</p> <blockquote> <p>kubectl create clusterrole pod-creator --verb=create,get,list,watch --resource=pods</p> <p>kubectl create clusterrolebinding pod-creator-clusterrolebinding --clusterrole=pod-creator --serviceaccount=airflow10:airflow-worker-serviceaccount</p> </blockquote> <p>But I want to be able to setup the service account with the correct permissions inside airflow automatically. What would be a good approach to do this without having to run the above commands?</p>
<p>You can't, really. You need to create and assign the roles when you deploy Airflow; otherwise you would have a huge security risk, because the deployed application would be able to grant itself more permissions.</p> <p>This can be done &quot;automatically&quot; in multiple ways if your intention was to automate the deployment. For example, if your Airflow deployment is done via a Helm chart, the chart can add and configure the right resources to create the appropriate role bindings. You can see how our official Helm chart does it:</p> <ul> <li><a href="https://github.com/apache/airflow/blob/main/chart/templates/rbac/pod-launcher-role.yaml" rel="nofollow noreferrer">https://github.com/apache/airflow/blob/main/chart/templates/rbac/pod-launcher-role.yaml</a></li> <li><a href="https://github.com/apache/airflow/blob/main/chart/templates/rbac/pod-launcher-rolebinding.yaml" rel="nofollow noreferrer">https://github.com/apache/airflow/blob/main/chart/templates/rbac/pod-launcher-rolebinding.yaml</a></li> </ul>
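<p>If you manage the deployment with plain manifests rather than the Helm chart, the two <code>kubectl create</code> commands from the question translate directly into declarative resources that can be applied together with the rest of the deployment:</p>

```yaml
# Declarative equivalent of the two imperative kubectl commands in the question.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-creator
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-creator-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-creator
subjects:
  - kind: ServiceAccount
    name: airflow-worker-serviceaccount
    namespace: airflow10
```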
<p>I have a secretsProviderClass resource defined for my Azure Kubernetes Service deployment, which allows me to create secrets from Azure Key Vault. I'd like to use Kustomize with it in order to unify my deployments across multiple environments. Here is my manifest:</p> <pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1 kind: SecretProviderClass metadata: name: azure-kvname spec: provider: azure secretObjects: - data: - key: dbuser objectName: db-user - key: dbpassword objectName: db-pass - key: admin objectName: admin-user - key: adminpass objectName: admin-password secretName: secret type: Opaque parameters: usePodIdentity: &quot;true&quot; keyvaultName: &quot;dev-keyvault&quot; cloudName: &quot;&quot; objects: | array: - | objectName: db-user objectType: secret objectVersion: &quot;&quot; - | objectName: db-pass objectType: secret objectVersion: &quot;&quot; - | objectName: admin-user objectType: secret objectVersion: &quot;&quot; - | objectName: admin-password objectType: secret objectVersion: &quot;&quot; tenantId: &quot;XXXXXXXXXXXX&quot; </code></pre> <p>This is the manifest that I use as a base. I'd like to use overlay on this and apply values depending on the environment that I am deploying to. To be specific, I'd like to modify the <code>objectName</code> property. I tried applying the Json6902 patch:</p> <pre><code>- op: replace path: /spec/parameters/objects/array/0/objectName value: &quot;dev-db-user&quot; - op: replace path: /spec/parameters/objects/array/1/objectName value: &quot;dev-db-password&quot; - op: replace path: /spec/parameters/objects/array/2/objectName value: &quot;dev-admin-user&quot; - op: replace path: /spec/parameters/objects/array/3/objectName value: &quot;dev-admin-password&quot; </code></pre> <p>Unfortunately, it's not working and it is not replacing the values. Is it possible with Kustomize?</p>
<p>Unfortunately, the value that you're trying to access is not another nested YAML array: the pipe symbol at the end of a line in YAML signifies that any indented text that follows should be interpreted as a single multi-line scalar value.</p> <p>With Kustomize you'd need to replace the whole <code>/spec/parameters/objects</code> value.</p> <p>If you are not yet committed to Kustomize, you might instead consider a templating engine like <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>, which would allow you to replace values inside this string.</p> <p>...or you can use a combination of Helm for templating and Kustomize for resource management, patches for specific configuration, and overlays.</p>
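<p>A sketch of what replacing the whole scalar could look like as a patch in a dev overlay (e.g. listed under <code>patchesStrategicMerge</code> in that overlay's kustomization; object names are taken from the question, trimmed to two of the four secrets for brevity):</p>

```yaml
# The entire objects string is restated with the environment-specific
# names, since a multi-line scalar cannot be patched element-wise.
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kvname
spec:
  parameters:
    objects: |
      array:
      - |
        objectName: dev-db-user
        objectType: secret
        objectVersion: ""
      - |
        objectName: dev-db-password
        objectType: secret
        objectVersion: ""
```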
<p>I'm using <strong>AWS EKS 1.21 with Fargate (serverless)</strong>. I'm trying to run Fluentd as a daemonset however the daemonset is not running at all.</p> <p>All the other objects like role, rolebinding, serviceaccount, configmap are already in place in the cluster.</p> <pre><code>NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE aws-node 0 0 0 0 0 &lt;none&gt; 8d fluentd-cloudwatch 0 0 0 0 0 &lt;none&gt; 3m36s kube-proxy 0 0 0 0 0 &lt;none&gt; 8d </code></pre> <p>This is my Daemonset:-</p> <pre><code>apiVersion: apps/v1 #Latest support AWS EKS 1.21 kind: DaemonSet metadata: labels: k8s-app: fluentd-cloudwatch name: fluentd-cloudwatch namespace: kube-system spec: selector: matchLabels: k8s-app: fluentd-cloudwatch template: metadata: labels: k8s-app: fluentd-cloudwatch spec: containers: - env: - name: REGION value: us-east-1 # Correct AWS EKS region should be verified before running this Daemonset - name: CLUSTER_NAME value: eks-fargate-alb-demo # AWS EKS Cluster Name should be verified before running this Daemonset image: fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch imagePullPolicy: IfNotPresent name: fluentd-cloudwatch resources: limits: memory: 200Mi requests: cpu: 100m memory: 200Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config-volume name: config-volume - mountPath: /fluentd/etc name: fluentdconf - mountPath: /var/log name: varlog - mountPath: /var/lib/docker/containers name: varlibdockercontainers readOnly: true - mountPath: /run/log/journal name: runlogjournal readOnly: true dnsPolicy: ClusterFirst initContainers: - command: - sh - -c - cp /config-volume/..data/* /fluentd/etc image: busybox imagePullPolicy: Always name: copy-fluentd-config resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /config-volume name: config-volume - mountPath: /fluentd/etc name: fluentdconf serviceAccount: fluentd 
serviceAccountName: fluentd terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 name: fluentd-config name: config-volume - emptyDir: {} name: fluentdconf - hostPath: path: /var/log type: &quot;&quot; name: varlog - hostPath: path: /var/lib/docker/containers type: &quot;&quot; name: varlibdockercontainers - hostPath: path: /run/log/journal type: &quot;&quot; name: runlogjournal </code></pre> <p>When I describe it, I do not see any events as well. I can run other pods like Nginx etc on this cluster but this is not running at all.</p> <pre><code>kubectl describe ds fluentd-cloudwatch -n kube-system Name: fluentd-cloudwatch Selector: k8s-app=fluentd-cloudwatch Node-Selector: &lt;none&gt; Labels: k8s-app=fluentd-cloudwatch Annotations: deprecated.daemonset.template.generation: 1 Desired Number of Nodes Scheduled: 0 Current Number of Nodes Scheduled: 0 Number of Nodes Scheduled with Up-to-date Pods: 0 Number of Nodes Scheduled with Available Pods: 0 Number of Nodes Misscheduled: 0 Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: k8s-app=fluentd-cloudwatch Service Account: fluentd Init Containers: copy-fluentd-config: Image: busybox Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sh -c cp /config-volume/..data/* /fluentd/etc Environment: &lt;none&gt; Mounts: /config-volume from config-volume (rw) /fluentd/etc from fluentdconf (rw) Containers: fluentd-cloudwatch: Image: fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch Port: &lt;none&gt; Host Port: &lt;none&gt; Limits: memory: 200Mi Requests: cpu: 100m memory: 200Mi Environment: REGION: us-east-1 CLUSTER_NAME: eks-fargate-alb-demo Mounts: /config-volume from config-volume (rw) /fluentd/etc from fluentdconf (rw) /run/log/journal from runlogjournal (ro) /var/lib/docker/containers from varlibdockercontainers (ro) /var/log from varlog (rw) Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: fluentd-config Optional: false 
fluentdconf: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; varlog: Type: HostPath (bare host directory volume) Path: /var/log HostPathType: varlibdockercontainers: Type: HostPath (bare host directory volume) Path: /var/lib/docker/containers HostPathType: runlogjournal: Type: HostPath (bare host directory volume) Path: /run/log/journal HostPathType: Events: &lt;none&gt; </code></pre> <p>ConfigMap:-</p> <pre><code>apiVersion: v1 data: containers.conf: | &lt;source&gt; @type tail @id in_tail_container_logs @label @containers path /var/log/containers/*.log pos_file /var/log/fluentd-containers.log.pos tag * read_from_head true &lt;parse&gt; @type json time_format %Y-%m-%dT%H:%M:%S.%NZ &lt;/parse&gt; &lt;/source&gt; &lt;label @containers&gt; &lt;filter **&gt; @type kubernetes_metadata @id filter_kube_metadata &lt;/filter&gt; &lt;filter **&gt; @type record_transformer @id filter_containers_stream_transformer &lt;record&gt; stream_name ${tag_parts[3]} &lt;/record&gt; &lt;/filter&gt; &lt;match **&gt; @type cloudwatch_logs @id out_cloudwatch_logs_containers region &quot;#{ENV.fetch('REGION')}&quot; log_group_name &quot;/k8s-nest/#{ENV.fetch('CLUSTER_NAME')}/containers&quot; log_stream_name_key stream_name remove_log_stream_name_key true auto_create_stream true &lt;buffer&gt; flush_interval 5 chunk_limit_size 2m queued_chunks_limit_size 32 retry_forever true &lt;/buffer&gt; &lt;/match&gt; &lt;/label&gt; fluent.conf: | @include containers.conf @include systemd.conf &lt;match fluent.**&gt; @type null &lt;/match&gt; systemd.conf: | &lt;source&gt; @type systemd @id in_systemd_kubelet @label @systemd filters [{ &quot;_SYSTEMD_UNIT&quot;: &quot;kubelet.service&quot; }] &lt;entry&gt; field_map {&quot;MESSAGE&quot;: &quot;message&quot;, &quot;_HOSTNAME&quot;: &quot;hostname&quot;, &quot;_SYSTEMD_UNIT&quot;: &quot;systemd_unit&quot;} field_map_strict true &lt;/entry&gt; path /run/log/journal pos_file 
/var/log/fluentd-journald-kubelet.pos read_from_head true tag kubelet.service &lt;/source&gt; &lt;source&gt; @type systemd @id in_systemd_kubeproxy @label @systemd filters [{ &quot;_SYSTEMD_UNIT&quot;: &quot;kubeproxy.service&quot; }] &lt;entry&gt; field_map {&quot;MESSAGE&quot;: &quot;message&quot;, &quot;_HOSTNAME&quot;: &quot;hostname&quot;, &quot;_SYSTEMD_UNIT&quot;: &quot;systemd_unit&quot;} field_map_strict true &lt;/entry&gt; path /run/log/journal pos_file /var/log/fluentd-journald-kubeproxy.pos read_from_head true tag kubeproxy.service &lt;/source&gt; &lt;source&gt; @type systemd @id in_systemd_docker @label @systemd filters [{ &quot;_SYSTEMD_UNIT&quot;: &quot;docker.service&quot; }] &lt;entry&gt; field_map {&quot;MESSAGE&quot;: &quot;message&quot;, &quot;_HOSTNAME&quot;: &quot;hostname&quot;, &quot;_SYSTEMD_UNIT&quot;: &quot;systemd_unit&quot;} field_map_strict true &lt;/entry&gt; path /run/log/journal pos_file /var/log/fluentd-journald-docker.pos read_from_head true tag docker.service &lt;/source&gt; &lt;label @systemd&gt; &lt;filter **&gt; @type record_transformer @id filter_systemd_stream_transformer &lt;record&gt; stream_name ${tag}-${record[&quot;hostname&quot;]} &lt;/record&gt; &lt;/filter&gt; &lt;match **&gt; @type cloudwatch_logs @id out_cloudwatch_logs_systemd region &quot;#{ENV.fetch('REGION')}&quot; log_group_name &quot;/k8s-nest/#{ENV.fetch('CLUSTER_NAME')}/systemd&quot; log_stream_name_key stream_name auto_create_stream true remove_log_stream_name_key true &lt;buffer&gt; flush_interval 5 chunk_limit_size 2m queued_chunks_limit_size 32 retry_forever true &lt;/buffer&gt; &lt;/match&gt; &lt;/label&gt; kind: ConfigMap metadata: labels: k8s-app: fluentd-cloudwatch name: fluentd-config namespace: kube-system </code></pre> <p>Please let me know where the problem is, thanks</p>
<p>As you figured, EKS/Fargate does not support DaemonSets (because there are no [real] nodes). Actually, you don't need to run Fluent Bit as a sidecar in every pod either. EKS/Fargate supports a logging feature called FireLens that allows you to just configure where you want to log (the destination), and Fargate will configure a hidden sidecar in the back end (not visible to the user) to do that. Please see <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html" rel="nofollow noreferrer">this page of the documentation</a> for the details.</p> <p>Snippet:</p> <p><code>Amazon EKS on Fargate offers a built-in log router based on Fluent Bit. This means that you don't explicitly run a Fluent Bit container as a sidecar, but Amazon runs it for you. All that you have to do is configure the log router. The configuration happens through a dedicated ConfigMap....</code></p>
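<p>Following the pattern from that documentation page, the configuration is a ConfigMap named <code>aws-logging</code> in a dedicated <code>aws-observability</code> namespace. A minimal sketch for the cluster in the question (the log group name reuses the pattern from the question's Fluentd config; the stream prefix is illustrative):</p>

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name /k8s-nest/eks-fargate-alb-demo/containers
        log_stream_prefix fargate-
        auto_create_group true
```

<p>Note that the Fargate pod execution role also needs permission to write to CloudWatch Logs, as described on the same page.</p>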
<p>I enabled Istio on GKE using the istio-addon. According to the images, the version of Istio is <code>1.6</code>. Deployment of the application, which contains a <code>RequestAuthentication</code> resource, gives the following error:</p> <pre><code>admission webhook &quot;pilot.validation.istio.io&quot; denied the request: unrecognized type RequestAuthentication
</code></pre> <p><code>RequestAuthentication</code> should be available in version <code>1.6</code>. Is there any way to check the compatibility?</p> <p>Update: on my on-premise installation everything works with Istio <code>1.9</code>. The configuration is the following:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: xxx-auth
spec:
  selector:
    matchLabels:
      app: xxx-fe
  jwtRules:
    - issuer: &quot;{{ .Values.idp.issuer }}&quot;
      jwksUri: &quot;{{ .Values.idp.jwksUri }}&quot;
</code></pre>
<p>I have posted a community wiki answer for better visibility.</p> <p>As <a href="https://stackoverflow.com/users/9496448/katya-gorshkova">Katya Gorshkova</a> mentioned in the comments:</p> <blockquote> <p>Finally, I turned off istio addon and installed the newest istio 1.11.1. It worked without any problems</p> </blockquote> <p>See also:</p> <ul> <li><a href="https://istio.io/latest/news/releases/1.11.x/announcing-1.11.1/" rel="nofollow noreferrer">release notes for istio 1.11.1</a></li> <li><a href="https://istio.io/latest/docs/setup/upgrade/" rel="nofollow noreferrer">how to upgrade istio</a></li> <li><a href="https://istio.io/latest/news/releases/1.11.x/announcing-1.11/upgrade-notes/" rel="nofollow noreferrer">Important changes to consider when upgrading to Istio 1.11.0</a></li> </ul>
<p>In my helm chart, I have a few files that need credentials to be inputted For example</p> <pre><code>&lt;Resource name=&quot;jdbc/test&quot; auth=&quot;Container&quot; driverClassName=&quot;com.microsoft.sqlserver.jdbc.SQLServerDriver&quot; url=&quot;jdbc:sqlserver://{{ .Values.DB.host }}:{{ .Values.DB.port }};selectMethod=direct;DatabaseName={{ .Values.DB.name }};User={{ Values.DB.username }};Password={{ .Values.DB.password }}&quot; /&gt; </code></pre> <p>I created a secret</p> <pre><code>Name: databaseinfo Data: username password </code></pre> <p>I then create environment variables to retrieve those secrets in my deployment.yaml:</p> <pre><code>env: - name: DBPassword valueFrom: secretKeyRef: key: password name: databaseinfo - name: DBUser valueFrom: secretKeyRef: key: username name: databaseinfo </code></pre> <p>In my values.yaml or this other file, I need to be able to reference to this secret/environment variable. I tried the following but it does not work: values.yaml</p> <pre><code>DB: username: $env.DBUser password: $env.DBPassword </code></pre>
<p>You can't pass variables from any template to <code>values.yaml</code> with helm, just from <code>values.yaml</code> to the templates.</p> <p>The answer you are seeking was posted by <a href="https://stackoverflow.com/users/3061469/mehowthe">mehowthe</a>:</p> <p>deployment.yaml =</p> <pre><code> env: {{- range .Values.env }} - name: {{ .name }} value: {{ .value }} {{- end }} </code></pre> <p>values.yaml =</p> <pre><code>env: - name: &quot;DBUser&quot; value: &quot;&quot; - name: &quot;DBPassword&quot; value: &quot;&quot; </code></pre> <p>then (since <code>env</code> is a list, the entries are addressed by index)</p> <p><code>helm install chart_name --name release_name --set env[0].value=&quot;FOO&quot; --set env[1].value=&quot;BAR&quot;</code></p>
<p>Why would I want to have multiple replicas of my DB?</p> <ol> <li>Redundancy: I have &gt; 1 replicas of my app code. Why? In case one node fails, another can fill its place when run behind a load balancer.</li> <li>Load: A load balancer can distribute traffic to multiple instances of the app.</li> <li>A/B testing. I can have one node serve one version of the app, and another serve a different one.</li> <li>Maintenance. I can bring down one instance for maintenance, and keep the other one up with 0 down-time.</li> </ol> <p>So, I assume I'd want to do the same with the backing db if possible too.</p> <p>I realize that many nosql dbs are better configured for multiple instances, but I am interested in relational dbs.</p> <p>I've played with operators like <a href="https://github.com/CrunchyData/postgres-operator" rel="nofollow noreferrer">this</a> and <a href="https://github.com/mysql/mysql-operator" rel="nofollow noreferrer">this</a> but have found problems with the docs, have not been able to get them up and running, and found the community a bit lacking. Relying on this kind of thing in production makes me nervous. The MySQL operator even has a note saying it's not for production use.</p> <p>I see that native k8s <a href="https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/" rel="nofollow noreferrer">statefulsets have scaling</a> but these docs aren't specific to dbs at all. I assume the complication is that dbs need to write persistently to disk via a volume and that data has to be synced and routed somehow if you have more than one instance.</p> <p>So, is this something that's non-trivial to do myself? Or, am I better off having a dev environment that uses a one-replica db image in the cluster in order to save on billing, and a prod environment that uses a fully managed db, something like <a href="https://cloud.google.com/sql/docs/postgres#docs" rel="nofollow noreferrer">this</a> that takes care of the scaling/HA for me? 
Then I'd use kustomize to manage the yaml variances.</p> <p><strong>Edit</strong>:</p> <p>I actually found a postgres operator that worked great. Followed the docs one time through and it all worked, and it's from <a href="https://www.kubegres.io/doc/getting-started.html" rel="nofollow noreferrer">postgres docs</a>.</p>
<p>I have created this community wiki answer to summarize the topic and to make pertinent information more visible.</p> <p>As <a href="https://stackoverflow.com/users/4216641/turing85" title="14,085 reputation">Turing85</a> rightly mentioned in the comment:</p> <blockquote> <p>Do NOT share a pvc to multiple db instances. Even if you use the right backing volume (it must be an object-based storage in order to be read-write many), with enough scaling, performance will take a hit (after all, everything goes to one file system, this will stress the FS). The proper way would be to configure clustering. All major relational databases (mssql, mysql, postgres, oracle, ...) do support clustering. To be on the secure side, however, I would recommend to buy a scalable database &quot;as a service&quot; unless you know exactly what you are doing.</p> </blockquote> <blockquote> <p>The good solution might be to use a single replica StatefulSet for development, to avoid billing and use a fully managed cloud-based sql solution in prod. Unless you have the knowledge or a sufficiently professional operator to deploy a clustered dbms.</p> </blockquote> <p>Another solution may be to use a different operator as <a href="https://stackoverflow.com/users/299180/aaron">Aaron</a> did:</p> <blockquote> <p>I actually found a postgres operator that worked great. Followed the docs one time through and it all worked, and it's from postgres: <a href="https://www.kubegres.io/doc/getting-started.html" rel="nofollow noreferrer">https://www.kubegres.io/doc/getting-started.html</a></p> </blockquote> <p>See also <a href="https://stackoverflow.com/questions/27157227/can-relational-database-scale-horizontally/27162851">this similar question</a>.</p>
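<p>For reference, the kubegres getting-started guide linked above boils down to creating a single custom resource similar to this sketch (the name, image tag, size, and secret are example values; check the linked docs for the current schema):</p> <pre><code>apiVersion: kubegres.reactive-tech.io/v1
kind: Kubegres
metadata:
  name: mypostgres
  namespace: default
spec:
  replicas: 3
  image: postgres:14.1
  database:
    size: 200Mi
  env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mypostgres-secret
          key: superUserPassword
    - name: POSTGRES_REPLICATION_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mypostgres-secret
          key: replicationUserPassword
</code></pre> <p>The operator then creates a primary and replica PostgreSQL instances with their own Services.</p>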
<p>I have a cluster deployed with: <a href="https://bitnami.com/stack/prometheus-operator/helm" rel="nofollow noreferrer">https://bitnami.com/stack/prometheus-operator/helm</a> and <a href="https://github.com/prometheus-community/postgres_exporter" rel="nofollow noreferrer">https://github.com/prometheus-community/postgres_exporter</a></p> <p>How do I properly add my own metrics to the prometheus-operator list of metrics that postgres_exporter exports?</p> <p>I tried it this way:</p> <pre><code>helm upgrade kube-prometheus bitnami/kube-prometheus \ --set prometheus.additionalScrapeConfigs.enabled=true \ --set prometheus.additionalScrapeConfigs.type=internal \ --set prometheus.additionalScrapeConfigs.internal.jobList= - job_name: 'prometheus-postgres-exporter' static_configs: - targets: ['prometheus-postgres-exporter.default:80'] </code></pre> <p>but it doesn't work (the 'job_name:' entries are not added to the Prometheus configuration).</p>
<p>Use <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus-operator-part3/" rel="nofollow noreferrer">ServiceMonitors</a> if using Prometheus Operator.</p> <p>Configure the following block under values.yaml in the postgres-exporter helm chart. This allows Prometheus Operator to read and configure scrape jobs automatically.</p> <pre><code>serviceMonitor: # When set true then use a ServiceMonitor to configure scraping enabled: true # Set the namespace the ServiceMonitor should be deployed namespace: &lt;namespace where prometheus operator is deployed.&gt; # Set how frequently Prometheus should scrape interval: 30s # Set the telemetry path the exporter exposes telemetryPath: /metrics # Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator # labels: # Set timeout for scrape # timeout: 10s # Set of labels to transfer from the Kubernetes Service onto the target # targetLabels: [] # MetricRelabelConfigs to apply to samples before ingestion # metricRelabelings: [] </code></pre>
<p>Say you have 3 or more services that communicate with each other constantly. If they are deployed remotely to the same cluster, all is good because they can see each other.</p> <p>However, I was wondering how I could deploy one of those locally, using minikube for instance, in a way that they are still able to talk to each other.</p> <p>I am aware that I can port-forward the other two so that the one I have deployed locally can send calls to the others, but I am not sure how I could make it work for the other two to also be able to send calls to the local one.</p>
<p><strong>TL;DR Yes, it is possible but not recommended; it is difficult and comes with a security risk.</strong></p> <p><a href="https://stackoverflow.com/users/4185234/charlie" title="19,519 reputation">Charlie</a> wrote very well in the comment and is absolutely right:</p> <blockquote> <p>Your local service will not be discoverable by a remote service unless you have a direct IP. One other way is to establish RTC or Web socket connection between your local and remote services using an external server.</p> </blockquote> <p>As you can see, it is possible, but also not recommended. Generally, both containerization and the use of kubernetes tend to isolate environments. If you want your services to communicate with each other anyway, while being in completely different clusters on different machines, you need to configure the appropriate network connections over the public internet. This also comes with a security risk.</p> <p>If you want to set up the environment locally, it will be a much better idea to run these 3 services as an independent whole. Also take into account that Minikube is mainly designed for learning and testing certain solutions and is not entirely suitable for production solutions.</p>
<p>I have a small Kubernetes on-prem cluster (Rancher 2.3.6) consisting of three nodes. The deployments inside the cluster are provisioned dynamically by an external application and always have their replica count set to 1, because these are stateful applications and high availability is not needed.</p> <p>The applications are exposed to the internet by NodePort services with a random port and ExternalTrafficPolicy set to Cluster. So if the user requests one of the three nodes, the k8s proxy will route and S-NAT the request to the correct node with the application pod. </p> <p>To this point, everything works fine.</p> <p>The problem started when we added applications that rely on the request's source IP. Since the S-NAT replaces the request IP with an internal IP, these applications don't work properly.</p> <p>I know that setting the service's ExternalTrafficPolicy to Local will disable S-NATing. But this will also break the architecture, because not every pod has an instance of the application running.</p> <p>Is there a way to preserve the original client IP and still make use of the internal routing, so I won't have to worry about on which node the request will land?</p>
<p>It depends on how the traffic gets into your cluster. But let's break it down a little bit:</p> <p>Generally, there are two strategies on how to handle source IP preservation:</p> <ul> <li>SNAT (packet IP)</li> <li>proxy/header (passing the original IP in an additional header)</li> </ul> <h4>1) SNAT</h4> <p>By default packets to <em>NodePort</em> and <em>LoadBalancer</em> are SourceNAT'd (to the node's IP that received the request), while packets sent to <em>ClusterIP</em> are <strong>not</strong> SourceNAT'd.</p> <p>As you mentioned already, there is a way to turn off SNAT for <em>NodePort</em> and <em>LoadBalancer</em> Services by setting <code>service.spec.externalTrafficPolicy: Local</code>, which preserves the original source IP address, but with the undesired effect that kube-proxy only proxies requests to local endpoints, and does not forward traffic to other nodes.</p> <h4>2) Header + Proxy IP preservation</h4> <p><em><strong>a) Nginx Ingress Controller and L7 LoadBalancer</strong></em></p> <ul> <li>When using L7 LoadBalancers which send an <code>X-Forwarded-For</code> header, Nginx by default evaluates the header containing the source IP if we have set the LB CIDR/address in <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr" rel="nofollow noreferrer"><code>proxy-real-ip-cidr</code></a></li> <li>you might need to set <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers" rel="nofollow noreferrer"><code>use-forwarded-headers</code></a> explicitly to make nginx forward the header information</li> <li>additionally you might want to enable <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#enable-real-ip" rel="nofollow noreferrer"><code>enable-real-ip</code></a> so the realip_module replaces the real IP that has been set in the <code>X-Forwarded-For</code> header provided by the
trusted LB specified in <code>proxy-real-ip-cidr</code>.</li> </ul> <p><em><strong>b) Proxy Protocol and L4 LoadBalancer</strong></em></p> <ul> <li>With <code>use-proxy-protocol: &quot;true&quot;</code> enabled, the header is <strong>not</strong> evaluated and the connection details will be sent before forwarding the actual TCP connection. The LBs must support this.</li> </ul>
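<p>Put together, the relevant ingress-nginx ConfigMap keys from option 2a could look like the following sketch (the CIDR is an example value; substitute your LB's address range):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: &quot;true&quot;
  enable-real-ip: &quot;true&quot;
  # example value: the CIDR of the trusted L7 LB in front of the cluster
  proxy-real-ip-cidr: &quot;203.0.113.0/24&quot;
</code></pre>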
<p>I want to schedule kubernetes cronjob in my local timezone (GMT+7), currently when I schedule cronjob in k8s I need to schedule in UTC but I want to schedule in my local timezone, As specify in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">Kubernetes document</a>, that I need to change timezone in kube-controller manager as follows</p> <blockquote> <p>All CronJob schedule: times are based on the timezone of the kube-controller-manager.</p> <p>If your control plane runs the kube-controller-manager in Pods or bare containers, the timezone set for the kube-controller-manager container determines the timezone that the cron job controller uses.</p> </blockquote> <p>But I can't find a way to set timezone for kube-controller-manager, I'm using Kuberenetes on-premise v1.17, I found controller manager manifest file in - /etc/kubernetes/manifests/kube-controller-manager.yaml but can't find a way to or document to change the it's timezone.</p>
<p>UPDATE 2023: WARNING OLD SYNTAX. See @Thanawat's answer, the official syntax has changed from what is shown in this answer.</p> <hr /> <pre class="lang-yaml prettyprint-override"><code>spec: schedule: &quot;CRON_TZ=America/New_York */5 * * * *&quot; </code></pre> <pre class="lang-yaml prettyprint-override"><code>spec: schedule: &quot;CRON_TZ=Etc/UTC */5 * * * *&quot; </code></pre> <p>By an <em>&quot;accident of implementation&quot;</em> timezone support was added to kubernetes cron scheduler.</p> <p>available in v1.22 (next release)<br /> Note: v1.22 should be released by end-of-year, Oct/Nov 2021</p> <ul> <li><a href="https://github.com/kubernetes/kubernetes/issues/47202#issuecomment-901294870" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/47202#issuecomment-901294870</a> <ul> <li>warning: very long issue thread</li> </ul> </li> <li>mentioned in <a href="https://kubernetespodcast.com/episode/160-keda/" rel="nofollow noreferrer">https://kubernetespodcast.com/episode/160-keda/</a></li> <li>this accidental feature snuck in when a dependency was updated</li> <li>warning: not entirely clear if k8s maintainers might remove/block this feature instead of supporting it <ul> <li>unit tests to cover the new user-facing feature have not yet been merged as of 2021-08-31: <a href="https://github.com/kubernetes/kubernetes/pull/104404" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/104404</a></li> <li>static TZ or daylight savings schedule is one thing, but TZ and DST schedules change over time</li> </ul> </li> </ul>
<p>I'm using <code>kubectl</code> to access the API server on my minikube cluster on Ubuntu, but when I try to use a <code>kubectl</code> command I get a certificate expired error:</p> <pre><code>/home/ayoub# kubectl get pods Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z </code></pre> <p>Here's my kubectl config:</p> <pre><code>/home/ayoub# kubectl config view apiVersion: v1 clusters: - cluster: certificate-authority-data: DATA+OMITTED server: https://127.0.0.1:16443 name: microk8s-cluster contexts: - context: cluster: microk8s-cluster user: admin name: microk8s current-context: microk8s kind: Config preferences: {} users: - name: admin user: token: REDACTED root@ayoub-Lenovo-ideapad-720S-13IKB:/home/ayoub# </code></pre> <p>How can I renew this certificate?</p>
<p>Posted community wiki for better visibility. Feel free to expand it.</p> <hr /> <p>There is similar issue opened on <a href="https://github.com/kubernetes/minikube/issues/10122" rel="nofollow noreferrer">minikube GitHub</a>.</p> <p>The temporary workaround is to remove some files in the <code>/var/lib/minikube/</code> directory, then reset Kubernetes cluster and replace keys on the host. Those steps are described in <a href="https://github.com/kubernetes/minikube/issues/10122#issuecomment-888315632" rel="nofollow noreferrer">this answer</a>.</p>
<p>According to documentation (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a>) I can create cron job in k8s with specify timezone like: <code>&quot;CRON_TZ=UTC 0 23 * * *&quot;</code></p> <p>My deployment file is:</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: scheduler spec: schedule: &quot;CRON_TZ=UTC 0 23 * * *&quot; ... </code></pre> <p>During the deploy I am getting an error:</p> <blockquote> <p>The CronJob &quot;scheduler&quot; is invalid: spec.schedule: Invalid value: &quot;CRON_TZ=UTC 0 23 * * *&quot;: Expected exactly 5 fields, found 6: CRON_TZ=UTC 0 23 * * *</p> </blockquote> <p>Cron is working without perfectly timezone (<code>schedule: &quot;0 23 * * *&quot;</code>)</p> <p>Cluster version is: <code>Kubernetes 1.21.2-do.2</code> - digitalocean.</p> <p>What is wrong?</p>
<p>The <code>CRON_TZ=&lt;timezone&gt;</code> prefix won't be available yet, not until 1.22. The inclusion in the 1.21 release docs was an error.</p> <p>Originally, the change adding the syntax was <a href="https://github.com/kubernetes/website/pull/29455/files" rel="nofollow noreferrer">included for 1.22</a>, but it appears someone got confused and <a href="https://github.com/kubernetes/website/pull/29492/files" rel="nofollow noreferrer">moved the documentation over to 1.21</a>. Supporting the <code>CRON_TZ=&lt;timezone&gt;</code> syntax is accidental, purely because the <a href="https://github.com/robfig/cron" rel="nofollow noreferrer">package used to handle the scheduling</a> was <a href="https://github.com/kubernetes/kubernetes/commit/4b36a5cbe95d9e305d67cfda2ffa87bb3a0ccd47" rel="nofollow noreferrer">recently upgraded to version 3</a>, which added support for the syntax. The package is the key component that makes the syntax possible and is only part of 1.22.</p> <p>As of <a href="https://github.com/kubernetes/website/commit/b5e83e89448bcb69c095b7a056aa3f5fa4dcd4ed" rel="nofollow noreferrer">November 2021</a> the wording in the documentation has been adjusted to state that <code>CRON_TZ</code> is not officially supported:</p> <blockquote> <p><strong>Caution</strong>:</p> <p>The v1 CronJob API does not officially support setting timezone as explained above.</p> <p>Setting variables such as <code>CRON_TZ</code> or <code>TZ</code> is not officially supported by the Kubernetes project. <code>CRON_TZ</code> or <code>TZ</code> is an implementation detail of the internal library being used for parsing and calculating the next Job creation time. 
Any usage of it is not recommended in a production cluster.</p> </blockquote> <p>If you can upgrade to 1.24, you can instead use the new <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones" rel="nofollow noreferrer"><code>CronJobTimeZone</code> feature gate</a> to enable the new, official, time-zone support added with <a href="https://github.com/kubernetes/enhancements/blob/2853299b8e330f12584624326fee186b56d4614c/keps/sig-apps/3140-TimeZone-support-in-CronJob/README.md" rel="nofollow noreferrer">KEP 3140</a>. Note that this is still an alpha-level feature; hopefully it will reach beta in 1.25. If all goes well, the feature should reach maturity in release 1.27.</p> <p>With the feature-gate enabled, you can add a <code>timeZone</code> field to your CronJob <code>spec</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1 kind: CronJob metadata: name: scheduler spec: schedule: &quot;0 23 * * *&quot; timeZone: &quot;Etc/UTC&quot; </code></pre>
<p>Can someone help me to understand if service mesh itself is a type of ingress or if there is any difference between service mesh and ingress?</p>
<p>An &quot;Ingress&quot; is responsible for routing traffic into your cluster (from the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Docs</a>: <em>An API object that manages external access to the services in a cluster, typically HTTP.</em>)</p> <p>On the other side, a service mesh is a tool that adds proxy containers as sidecars to your Pods and routes traffic between your Pods through those proxy containers.</p> <p>Use cases for service meshes include, for example:</p> <ul> <li>distributed tracing</li> <li>secure (SSL) connections between pods</li> <li>resilience (a service mesh can reroute traffic from failed requests)</li> <li>network-performance-monitoring</li> </ul>
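<p>To make the distinction concrete, this is what a minimal Ingress looks like; it only describes how HTTP traffic from outside reaches a Service inside the cluster (the host and service name are example values):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
</code></pre> <p>A service mesh, by contrast, is not a single object you create; it is installed into the cluster and transparently injects its proxies into your Pods.</p>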
<p>I have a Kubernetes cluster on Azure and I need to connect it to an external tool. This tool needs the API server address and a bearer token. I was able to get the URL of the API, but I cannot find the bearer token.</p> <p>Does anyone know how to generate this token?</p>
<p>Bearer tokens are just normal tokens from ServiceAccounts.</p> <blockquote> <p>Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API.</p> </blockquote> <p>You should definitely create a new ServiceAccount with limited permissions (maybe the tool has documentation on which permissions are needed).</p> <p>Generally I would advise never letting any external tool connect directly to the API server, but this is up to you.</p> <p>You can read about ServiceAccount tokens here: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens</a></p>
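<p>A sketch of creating such a limited ServiceAccount and obtaining its token (the names are examples; <code>kubectl create token</code> requires Kubernetes/kubectl v1.24+, on older clusters read the token from the ServiceAccount's generated Secret instead):</p> <pre><code>$ kubectl create serviceaccount external-tool
$ kubectl create clusterrolebinding external-tool-view \
    --clusterrole=view --serviceaccount=default:external-tool
$ kubectl create token external-tool
</code></pre> <p>Binding to the built-in <code>view</code> ClusterRole keeps the identity read-only; swap in a tighter Role if the tool needs less.</p>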
<p>I have a set of environment variables in my <code>deployment</code> using <code>EnvFrom</code> and <code>configMapRef</code>. The environment variables held in these configMaps were set by kustomize originally from json files.</p> <pre><code>spec.template.spec.containers[0]. envFrom: - secretRef: name: eventstore-login - configMapRef: name: environment - configMapRef: name: eventstore-connection - configMapRef: name: graylog-connection - configMapRef: name: keycloak - configMapRef: name: database </code></pre> <p>The issue is that it's not possible for me to access the specific environment variables directly.</p> <p>Here is the result of running <code>printenv</code> in the pod:</p> <pre><code>... eventstore-login={ &quot;EVENT_STORE_LOGIN&quot;: &quot;admin&quot;, &quot;EVENT_STORE_PASS&quot;: &quot;changeit&quot; } evironment={ &quot;LOTUS_ENV&quot;:&quot;dev&quot;, &quot;DEV_ENV&quot;:&quot;dev&quot; } eventstore={ &quot;EVENT_STORE_HOST&quot;: &quot;eventstore-cluster&quot;, &quot;EVENT_STORE_PORT&quot;: &quot;1113&quot; } graylog={ &quot;GRAYLOG_HOST&quot;:&quot;&quot;, &quot;GRAYLOG_SERVICE_PORT_GELF_TCP&quot;:&quot;&quot; } ... </code></pre> <p>This means that from my nodejs app I need to do something like this</p> <pre><code>&gt; process.env.graylog '{\n &quot;GRAYLOG_HOST&quot;:&quot;&quot;,\n &quot;GRAYLOG_SERVICE_PORT_GELF_TCP&quot;:&quot;&quot;\n}\n' </code></pre> <p>This only returns the json string that corresponds to my original json file. But I want to be able to do something like this:</p> <pre><code>process.env.GRAYLOG_HOST </code></pre> <p>To retrieve my environment variables. 
But I don't want to have to modify my deployment to look something like this:</p> <pre><code> env: - name: NODE_ENV value: dev - name: EVENT_STORE_HOST valueFrom: secretKeyRef: name: eventstore-secret key: EVENT_STORE_HOST - name: EVENT_STORE_PORT valueFrom: secretKeyRef: name: eventstore-secret key: EVENT_STORE_PORT - name: KEYCLOAK_REALM_PUBLIC_KEY valueFrom: configMapKeyRef: name: keycloak-local key: KEYCLOAK_REALM_PUBLIC_KEY </code></pre> <p>Where every variable is explicitly declared. I could do this but this is more of a pain to maintain.</p>
<h2>Short answer:</h2> <p>You will need to define variables explicitly or change the configmaps so they have a <code>1 environment variable = 1 value</code> structure; this way you will be able to refer to them using <code>envFrom</code>. E.g.:</p> <pre><code>&quot;apiVersion&quot;: &quot;v1&quot;, &quot;data&quot;: { &quot;EVENT_STORE_LOGIN&quot;: &quot;admin&quot;, &quot;EVENT_STORE_PASS&quot;: &quot;changeit&quot; }, &quot;kind&quot;: &quot;ConfigMap&quot;, </code></pre> <h2>More details</h2> <p><code>Configmaps</code> are key-value pairs, which means for one key there's only one value. <code>Configmaps</code> can get a <code>string</code> as data, but they can't work with a <code>map</code>.</p> <p>I tried editing the <code>configmap</code> manually to confirm the above and got the following:</p> <pre><code>invalid type for io.k8s.api.core.v1.ConfigMap.data: got &quot;map&quot;, expected &quot;string&quot; </code></pre> <p>This is the reason why the environment comes up as one string instead of a structure.</p> <p>For example, this is how <code>configmap.json</code> looks:</p> <pre><code>$ kubectl describe cm test2 Name: test2 Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Data ==== test.json: ---- environment={ &quot;LOTUS_ENV&quot;:&quot;dev&quot;, &quot;DEV_ENV&quot;:&quot;dev&quot; } </code></pre> <p>And this is how it's stored in kubernetes:</p> <pre><code>$ kubectl get cm test2 -o json { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;data&quot;: { &quot;test.json&quot;: &quot;evironment={\n \&quot;LOTUS_ENV\&quot;:\&quot;dev\&quot;,\n \&quot;DEV_ENV\&quot;:\&quot;dev\&quot;\n}\n&quot; }, </code></pre> <p>In other words, the observed behaviour is expected.</p> <h2>Useful links:</h2> <ul> <li><a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">ConfigMaps</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Configure a Pod to Use a 
ConfigMap</a></li> </ul>
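<p>Concretely, restructuring one of the ConfigMaps from the question into plain key-value pairs would look like this (the values are placeholders); with this shape, <code>envFrom</code> exposes each key directly, e.g. as <code>process.env.GRAYLOG_HOST</code>:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: graylog-connection
data:
  GRAYLOG_HOST: &quot;graylog.example.com&quot;
  GRAYLOG_SERVICE_PORT_GELF_TCP: &quot;12201&quot;
</code></pre>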
<p>In Kubernetes job, there is a spec for .spec.activeDeadlineSeconds. If you don't explicitly set it, what will be the default value? 600 secs?</p> <p>here is the example from k8s doc</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: pi-with-timeout spec: backoffLimit: 5 activeDeadlineSeconds: 100 template: spec: containers: - name: pi image: perl command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;] restartPolicy: Never </code></pre> <p>assume I remove the line</p> <pre><code>activeDeadlineSeconds: 100 </code></pre>
<p>By default, a Job will run uninterrupted. If you don't set <code>activeDeadlineSeconds</code>, the Job will not have an active deadline limit; in other words, <code>activeDeadlineSeconds</code> has no default value.</p> <p>By the way, there are several ways to terminate a Job. (Of course, when a Job completes, no more Pods are created.)</p> <ul> <li><p>Pod backoff failure policy (<code>.spec.backoffLimit</code>): You can set <code>.spec.backoffLimit</code> to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six minutes. The back-off count is reset when a Job's Pod is deleted or successful without any other Pods for the Job failing around that time.</p> </li> <li><p>Setting an active deadline (<code>.spec.activeDeadlineSeconds</code>): The activeDeadlineSeconds applies to the duration of the job, no matter how many Pods are created. Once a Job reaches activeDeadlineSeconds, all of its running Pods are terminated and the Job status will become type: Failed with reason: DeadlineExceeded.</p> </li> </ul> <p>Note that a Job's <code>.spec.activeDeadlineSeconds</code> takes precedence over its <code>.spec.backoffLimit</code>. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by activeDeadlineSeconds, even if the backoffLimit is not yet reached.</p>
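<p>So the example from the question with the deadline removed simply runs until it either completes or exhausts its retries:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi-no-timeout
spec:
  # No activeDeadlineSeconds: the Job has no time limit at all.
  # Only the retry limit below ends a failing Job (it defaults to 6 if omitted).
  backoffLimit: 5
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: [&quot;perl&quot;, &quot;-Mbignum=bpi&quot;, &quot;-wle&quot;, &quot;print bpi(2000)&quot;]
      restartPolicy: Never
</code></pre>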
<p>In <code>Chart.yaml</code> I specified dependency:</p> <pre><code>dependencies: - name: redis version: 15.0.3 repository: https://charts.bitnami.com/bitnami </code></pre> <p>In <code>deployment.yaml</code> I specify service:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: redis name: redis-svc spec: clusterIP: None ports: - port: 6355 selector: app: redis </code></pre> <p>But what I see after <code>kubectl get all</code>:</p> <pre><code>service/redis-svc ClusterIP None &lt;none&gt; 6355/TCP 36s statefulset.apps/myapp-redis-master 0/1 37s statefulset.apps/myapp-redis-replicas 0/3 37s </code></pre> <p>I want single redis instance as Service. What do I do wrong?</p>
<p>Helm supports passing arguments to dependent sub-charts. You can override the architecture of your <code>redis</code> sub-chart by adding this to your <code>values.yaml</code> file.</p> <pre><code>redis: architecture: standalone </code></pre>
<p>My own app dockerized on my MacOS M1 Silicon host machine, fails with <code>standard_init_linux.go:190: exec user process caused &quot;exec format error&quot;</code> when launched on Kubernetes cluster, with runs on Linux server.</p> <p>I have my app with this Dockerfile:</p> <pre><code>FROM openjdk:11-jre-slim as jdkbase FROM jdkbase COPY app/target/dependency-jars /run/dependency-jars COPY app/target/resources /run/resources ADD app/target/app-1.0.3.jar /run/app-1.0.3.jar CMD java -jar run/app-1.0.3.jar </code></pre> <p>,this docker-compose.yaml:</p> <pre><code>version: &quot;3.8&quot; services: myapp: build: context: . dockerfile: myapp/Dockerfile hostname: myapphost </code></pre> <p>and this pod.yaml</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: myapp spec: containers: - name: myapp-container image: my.private.repo/app_dir/myapp:1.0.3 imagePullSecrets: - name: st-creds </code></pre> <p>I've spent several hours trying to figure it out how to solve it, trying to use diff. variants of Entrypoints and cmds, but non of this was a working option.</p>
<p>The final solution was to build the app for a certain architecture, adding the platform (<code>platform: linux/amd64</code>) to the compose file:</p> <pre><code>version: &quot;3.8&quot; services: myapp: platform: linux/amd64 build: context: . dockerfile: myapp/Dockerfile hostname: myapphost </code></pre> <p>and also to change the base image in the Dockerfile to any java image which works with amd64, for example, to <code>amd64/openjdk:11-slim</code>, like this:</p> <pre><code>FROM amd64/openjdk:11-slim as jdkbase FROM jdkbase COPY app/target/dependency-jars /run/dependency-jars COPY app/target/resources /run/resources ADD app/target/app-1.0.3.jar /run/app-1.0.3.jar CMD java -jar run/app-1.0.3.jar </code></pre> <p>Hope this'll save time for anyone else who is as new to Docker as I am</p>
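<p>If you build with plain Docker instead of docker-compose, the same cross-architecture build can be done with Buildx (the tag and Dockerfile path are taken from the question; this requires a Docker version that ships buildx):</p> <pre><code>$ docker buildx build --platform linux/amd64 \
    -f myapp/Dockerfile \
    -t my.private.repo/app_dir/myapp:1.0.3 \
    --push .
</code></pre>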
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: cp1
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql:latest
          env:
            - name: MYSQL_ROOT_HOST
              value: '%'
            - name: MYSQL_LOG_CONSOLE
              value: &quot;true&quot;
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                configMapKeyRef:
                  key: MYSQL_PASSWORD
                  name: env-map-service
          ports:
            - containerPort: 3306
              name: mysql
          resources:
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: mysql-vol
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-vol
      spec:
        accessModes: [ &quot;ReadWriteOnce&quot; ]
        resources:
          requests:
            storage: 1Gi
        storageClassName: test-sc
</code></pre> <hr /> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: cp1
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: &quot;true&quot;
spec:
  ports:
    - port: 3306
      name: mysql
      targetPort: 3306
  clusterIP: None
  selector:
    app: mysql
</code></pre> <p>This is my app deployment YAML file, which works perfectly:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-child-app
  labels:
    app: kube-child-app
  namespace: cp1
spec:
  replicas: 1
  template:
    metadata:
      name: kube-child-app
      labels:
        app: kube-child-app
    spec:
      containers:
        - name: kube-child-app
          image: jahadulrakib/kube-child-app:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
      restartPolicy: Always
  selector:
    matchLabels:
      app: kube-child-app
</code></pre> <p>I want to deploy a database application in my local Kubernetes cluster, but it gives the following error and the pod does not run. I have attached my YAML files above; I created a StorageClass and PV for this.</p> <p><strong>Error: Error from server (BadRequest): pod mysql-0 does not have a host assigned</strong></p> <p><strong>UPDATE 1:</strong></p> <p>Warning FailedScheduling 5h40m default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims</p> <p>The PVC and PV are in <code>Pending</code> status.</p> <p><strong>UPDATE 2:</strong></p> <p>The PVC is in <code>Pending</code> status because the storage class can't create the needed PV:</p> <p>15m Warning FailedScheduling pod/mysql-0 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. 4m12s Normal ExternalProvisioning persistentvolumeclaim/mysql-vol-mysql-0 waiting for a volume to be created, either by external provisioner &quot;docker.io/hostpath&quot; or manually created by system administrator</p> <p><strong>UPDATE 3:</strong></p> <p>The issue seems to stem from a difference between the labels and configuration of the available PV and the ones set in the StatefulSet <code>volumeClaimTemplates</code>.</p>
<p>Looks like the PVC was not satisfied because the storageClass didn't create a new PV, and the existing PV didn't match the <code>Pending</code> PVC, so they couldn't be bound together. After changing the appropriate fields, they bound together.</p> <p>Although it only uses a single replica, a StatefulSet is still a better option than a Deployment with a PVC, mainly because the application is a database: a bird's-eye view of the project conveys the application's intent better when using a StatefulSet, since it is stateful, and it can in the future be scaled into a set of pods instead of just one.</p> <p>Check out the <a href="https://kubernetes.io/blog/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes/" rel="nofollow noreferrer">Kubernetes article</a> from when StatefulSets were first popularised, and their current <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">docs</a> on them.</p>
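<p>For a local cluster without a working dynamic provisioner, one way to let the PVC bind is to create the storage class and a matching PV by hand. This is a minimal sketch; the <code>hostPath</code> location and PV name are illustrative, while the storage class name, size and access mode match the question's <code>volumeClaimTemplates</code>:</p> <pre><code># Storage class with no provisioner: PVs must be created manually
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# Manually created PV that the PVC from the volumeClaimTemplates can bind to
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: test-sc
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/mysql
</code></pre>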
<p>I created a Kubernetes cluster using <code>kubeadm</code>. Services are declared as <code>ClusterIP</code>. At the moment I'm trying to expose my app through an Ingress of type <code>LoadBalancer</code> with MetalLB, but I have faced some problems: if I deploy my app behind the Ingress, some JS and CSS assets are not found. There was no problem running the application as a service; the problem only appeared when I used the Ingress. It is an ASP.NET Core application.</p> <p>My Ingress source:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: taco-ingress
spec:
  rules:
    - host: tasty.taco.com
      http:
        paths:
          - path: /web1
            pathType: Prefix
            backend:
              service:
                name: web1
                port:
                  number: 80
          - path: /web2
            pathType: Prefix
            backend:
              service:
                name: web2
                port:
                  number: 80
</code></pre> <p>My Deployment source:</p> <pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web1
  labels:
    app: taco
    taco: web1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: taco
      task: web1
  template:
    metadata:
      labels:
        app: taco
        task: web1
        version: v0.0.1
    spec:
      containers:
        - name: taco
          image: yatesu0x00/webapplication1:ingress-v1
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web2
  labels:
    app: taco
    taco: web2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: taco
      task: web2
  template:
    metadata:
      labels:
        app: taco
        task: web2
        version: v0.0.1
    spec:
      containers:
        - name: taco
          image: yatesu0x00/webapplication2:ingress-v1
          ports:
            - containerPort: 80
</code></pre> <p>My Service source:</p> <pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: web1
spec:
  ports:
    - targetPort: 80
      port: 80
  selector:
    app: taco
    task: web1
---
apiVersion: v1
kind: Service
metadata:
  name: web2
spec:
  ports:
    - targetPort: 80
      port: 80
  selector:
    app: taco
    task: web2
</code></pre> <p>The html file of the app:</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;utf-8&quot; /&gt; &lt;meta 
name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot; /&gt; &lt;title&gt;Epic Website - (⌐■_■)&lt;/title&gt; &lt;link rel=&quot;stylesheet&quot; href=&quot;/lib/bootstrap/dist/css/bootstrap.min.css&quot; /&gt; &lt;link rel=&quot;stylesheet&quot; href=&quot;/css/site.css&quot; /&gt; &lt;/head&gt; &lt;body&gt; &lt;header&gt; &lt;nav class=&quot;navbar navbar-expand-sm navbar-toggleable-sm navbar-light bg-white border-bottom box-shadow mb-3&quot;&gt; &lt;div class=&quot;container&quot;&gt; &lt;a class=&quot;navbar-brand&quot; href=&quot;/&quot;&gt;Home&lt;/a&gt; &lt;button class=&quot;navbar-toggler&quot; type=&quot;button&quot; data-toggle=&quot;collapse&quot; data-target=&quot;.navbar-collapse&quot; aria-controls=&quot;navbarSupportedContent&quot; aria-expanded=&quot;false&quot; aria-label=&quot;Toggle navigation&quot;&gt; &lt;span class=&quot;navbar-toggler-icon&quot;&gt;&lt;/span&gt; &lt;/button&gt; &lt;div class=&quot;navbar-collapse collapse d-sm-inline-flex flex-sm-row-reverse&quot;&gt; &lt;ul class=&quot;navbar-nav flex-grow-1&quot;&gt; &lt;li class=&quot;nav-item&quot;&gt; &lt;a class=&quot;navbar-brand&quot; href=&quot;/Home2&quot;&gt;Home2&lt;/a&gt; &lt;/li&gt; &lt;li class=&quot;nav-item&quot;&gt; &lt;a class=&quot;navbar-brand&quot; href=&quot;/Home/ItWorks&quot;&gt;Click me!&lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/div&gt; &lt;/nav&gt; &lt;/header&gt; &lt;div class=&quot;container&quot;&gt; &lt;main role=&quot;main&quot; class=&quot;pb-3&quot;&gt; &lt;h2&gt;Want this taco?&lt;/h2&gt; &lt;pre&gt; {\__/} ( ●.●) ( &gt;🌮 &lt;/pre&gt; &lt;/main&gt; &lt;/div&gt; &lt;footer class=&quot;border-top footer text-muted&quot;&gt; &lt;div class=&quot;container&quot;&gt; &amp;copy; 2020 - &lt;a href=&quot;/Home/Privacy&quot;&gt;Privacy&lt;/a&gt; &lt;/div&gt; &lt;/footer&gt; &lt;script src=&quot;/lib/jquery/dist/jquery.min.js&quot;&gt;&lt;/script&gt; &lt;script 
src=&quot;/lib/bootstrap/dist/js/bootstrap.bundle.min.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;https://cdn.jsdelivr.net/npm/darkmode-js@1.5.7/lib/darkmode-js.min.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;/js/site.js?v=8ZRc1sGeVrPBx4lD717BgRaQekyh78QKV9SKsdt638U&quot;&gt;&lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>If I open up the console in browser I can see that there is <code>404 not found</code> on all elements of type <code>&lt;script&gt;</code>.</p>
<p>If you look closer at the logs, you can find that the cause of your problem is that your app requests static content (for example the <code>css/site.css</code> file) at the path <code>tasty.taco.com/css/site.css</code>, and since the Ingress controller has no rule for the <code>/css</code> prefix in its definition, it returns a 404 error code.</p> <p>The static content is available at the path <code>tasty.taco.com/web1/css/site.css</code> - note that I used the <code>web1</code> prefix so the Ingress knows to which service to redirect this request.</p> <p>Generally, using the annotation <code>nginx.ingress.kubernetes.io/rewrite-target</code> with apps that request static content often causes issues like this.</p> <p>The fix for this is <strong>not</strong> to use this annotation and instead add to your app the possibility to set up a base URL, in your example by using the <a href="https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.builder.extensions.usepathbasemiddleware?view=aspnetcore-5.0" rel="nofollow noreferrer"><code>UsePathBaseMiddleware</code> class</a>.</p> <blockquote> <p>Represents a middleware that extracts the specified path base from request path and postpend it to the request path base.</p> </blockquote> <p>For detailed steps, I'd recommend following the steps presented in <a href="https://stackoverflow.com/questions/56625822/asp-net-core-2-2-kubernetes-ingress-not-found-static-content-for-custom-path/57212033#57212033">this answer</a>.</p>
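<p>With that approach the Ingress simply forwards the full request path to the app, so an Ingress resource along these lines should be enough. This is a sketch based on the manifests in the question, with the rewrite annotation removed; it assumes each app calls <code>UsePathBase(&quot;/web1&quot;)</code> / <code>UsePathBase(&quot;/web2&quot;)</code> respectively:</p> <pre><code># Same rules as in the question, but without rewrite-target:
# the apps themselves handle the /web1 and /web2 base paths
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: taco-ingress
spec:
  rules:
    - host: tasty.taco.com
      http:
        paths:
          - path: /web1
            pathType: Prefix
            backend:
              service:
                name: web1
                port:
                  number: 80
          - path: /web2
            pathType: Prefix
            backend:
              service:
                name: web2
                port:
                  number: 80
</code></pre>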
<p>I am trying to run a Spring Batch application in a Kubernetes cluster. I am able to enforce resource limits on the main application pod by placing the following snippet in the deployment YAML:</p> <pre><code>resources:
  limits:
    cpu: 500m
    ephemeral-storage: 500Mi
    memory: 250Mi
</code></pre> <p>These settings are applied and can be seen in the pod YAML (<code>kubectl edit pod batch</code>).</p> <p>However, these limits are not propagated to the worker pods. I tried adding the following properties to the configmap of the batch application to set the CPU and memory limits:</p> <pre><code>SPRING.CLOUD.DEPLOYER.KUBERNETES.CPU: 500m
SPRING.CLOUD.DEPLOYER.KUBERNETES.MEMORY: 250Mi
</code></pre> <p>However, the worker pods are not getting these limits. I tried providing the following env variables too, but the limits were still not applied to the worker pods:</p> <pre><code>SPRING_CLOUD_DEPLOYER_KUBERNETES_CPU: 500m
SPRING_CLOUD_DEPLOYER_KUBERNETES_MEMORY: 250Mi
</code></pre> <p>The versions involved are:</p> <ul> <li>Spring Boot: 2.1.9.RELEASE</li> <li>Spring Cloud: 2020.0.1</li> <li>Spring Cloud Deployer: 2.5.0</li> <li>Spring Cloud Task: 2.1.1.RELEASE</li> <li>Kubernetes: 1.21</li> </ul> <p>How can I set these limits?</p> <p>EDIT: Adding code for the <code>DeployerPartitionHandler</code>:</p> <pre><code>public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer) {
    Resource resource = this.resourceLoader.getResource(resourceSpec);
    DeployerPartitionHandler partitionHandler =
            new DeployerPartitionHandler(taskLauncher, jobExplorer, resource, &quot;worker&quot;);

    commandLineArgs.add(&quot;--spring.profiles.active=worker&quot;);
    commandLineArgs.add(&quot;--spring.cloud.task.initialize.enable=false&quot;);
    commandLineArgs.add(&quot;--spring.batch.initializer.enabled=false&quot;);
    commandLineArgs.add(&quot;--spring.cloud.task.closecontext_enabled=true&quot;);
    commandLineArgs.add(&quot;--logging.level.root=DEBUG&quot;);

    partitionHandler.setCommandLineArgsProvider(new PassThroughCommandLineArgsProvider(commandLineArgs));
    partitionHandler.setEnvironmentVariablesProvider(environmentVariablesProvider());
    partitionHandler.setApplicationName(appName + &quot;worker&quot;);
    partitionHandler.setMaxWorkers(maxWorkers);
    return partitionHandler;
}

@Bean
public EnvironmentVariablesProvider environmentVariablesProvider() {
    return new SimpleEnvironmentVariablesProvider(this.environment);
}
</code></pre>
<p>Since the <code>DeployerPartitionHandler</code> is created using the <code>new</code> operator in the <code>partitionHandler</code> method, it is not aware of the values from the properties file. The <code>DeployerPartitionHandler</code> provides a setter for <code>deploymentProperties</code>. You should use this setter to specify the deployment properties for the worker tasks.</p> <p>EDIT: Based on a comment by Glenn Renfro:</p> <p>The deployment property should be <code>spring.cloud.deployer.kubernetes.limits.cpu</code> and not <code>spring.cloud.deployer.kubernetes.cpu</code>.</p>
<p><code>kubectl top pod --all-namespaces | sort --reverse --key 4 --numeric | head -10</code> gives the top pods in a cluster. How do I get the top memory-consuming pods per node?</p>
<p>On Ubuntu, these are the commands that work for me (set <code>NODE_NAME</code> to the node you are interested in first):</p> <p>Sort by memory usage:</p> <p><code>kubectl get po -A -owide | grep ${NODE_NAME} | awk '{print $1, $2}' | xargs -n2 kubectl top pod --no-headers -n | sort --key 3 -nr | column -t</code></p> <p>Sort by CPU usage:</p> <p><code>kubectl get po -A -owide | grep ${NODE_NAME} | awk '{print $1, $2}' | xargs -n2 kubectl top pod --no-headers -n | sort --key 2 -nr | column -t</code></p> <p>(<code>xargs -n2</code> feeds each namespace/pod pair into <code>kubectl top pod --no-headers -n &lt;namespace&gt; &lt;pod&gt;</code>.)</p>
<p>I'm new to Kubernetes. In the YAML file that creates the service, I define an <code>externalIPs</code> value in order to access the service from outside the cluster:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: mytestservice
spec:
  type: ClusterIP
  clusterIP: 10.96.1.113
  externalIPs:
    - 172.16.80.117
  ports:
    - name: tcp-8088
      protocol: TCP
      port: 8088
      targetPort: 8088
  selector:
    service-app: mytestservice
</code></pre> <p>and it works just fine: I can reach my service by using externalIP:port (in this case 172.16.80.117:8088). But I've heard people talking about ingress controllers (and some API gateways) that provide access from outside. I've read about them a bit but still can't tell what the differences are, and whether my cluster needs them or not.</p> <p>(According to the accepted answer I found here: <a href="https://stackoverflow.com/questions/44110876/kubernetes-service-external-ip-pending">Kubernetes service external ip pending</a>)</p> <p><code>With the Ingress Controller you can setup a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress Controller.</code></p> <p>My cluster is a custom Kubernetes cluster too, set up using kubeadm. If I have no need for a domain name and just address my services directly by external IP and port, then I can totally ignore the ingress controller. Am I right?</p> <p>Thanks!</p>
<p>Welcome to the community.</p> <p><strong>Short answer:</strong></p> <p>At this point the answer to your question is yes: for simple cases you may completely ignore ingress. It will be a good option to revisit when it's time to go to production.</p> <hr /> <p><strong>A bit more detail:</strong></p> <p>The main reason to look at <code>ingress</code> is that it manages incoming traffic: it works with HTTP/HTTPS requests, provides routing based on paths, can work with TLS/SSL, can perform <a href="https://en.wikipedia.org/wiki/TLS_termination_proxy" rel="nofollow noreferrer">TLS termination</a>, and much more.</p> <p>There are different ingresses available; the most common is the <code>nginx ingress</code>. It has almost all the features regular <code>nginx</code> has. You can look through the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">nginx ingress annotations</a> to see what it can do.</p> <p>For example, if you have a microservices application, each service would otherwise need a separate load balancer, while with ingress everything can be directed to a single entry point and routed further to the services (see the examples in the useful links).</p> <p>If you are only playing with Kubernetes and a single service, there's no need even for a load balancer; you can use a <code>nodePort</code> or <code>externalIP</code>.</p> <p>Also, with <code>ingress</code> deployed, there's no need to specify a port: <code>ingress</code> usually listens on <code>80</code> and <code>443</code> respectively.</p> <p>I'd say it's worth trying, to see how it works and to make routing and service management cleaner.</p> <hr /> <p><strong>Useful links:</strong></p> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes ingress - description and examples</a></li> <li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/" rel="nofollow noreferrer">basic usage - host based routing</a></li> </ul>
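<p>For reference, a minimal Ingress for the service from the question could look like the sketch below. It assumes an ingress controller (e.g. ingress-nginx) is already installed in the cluster; the hostname and ingress name are placeholders:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mytestservice-ingress
spec:
  rules:
    # Requests for this hostname are routed to mytestservice:8088
    - host: mytest.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mytestservice
                port:
                  number: 8088
</code></pre>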
<p>When deciding on an update strategy for a Kubernetes application, there is an option to use the <code>Recreate</code> strategy.</p> <p>How would this be different from just uninstalling and reinstalling the app?</p>
<p>The <code>Recreate</code> strategy will delete all your Pods before creating new ones, so you get a short downtime, but on the other hand you will not use much extra resources during the upgrade.</p> <p>Unlike uninstalling and reinstalling the app, only the Pods are replaced: the Deployment object itself, its rollout history, and associated objects like Services stay in place.</p> <p>You typically want <code>RollingUpdate</code>, since that replaces a few Pods at a time and lets you deploy stateless applications without downtime.</p>
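<p>The strategy is set on the Deployment spec; a minimal fragment:</p> <pre><code>spec:
  strategy:
    type: Recreate   # the default is RollingUpdate
</code></pre>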
<p>How can I use <code>kubectl</code> to list all the installed operators in my cluster? For instance, running:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.9/kubegres.yaml
</code></pre> <p>installs the <strong>Kubegres</strong> (Postgres cluster provider) operator, but then how do I actually see that in a list of operators? Equally important, how do I uninstall the operator from my cluster via <code>kubectl</code>, or is that not possible to do?</p>
<p>Unless you are using <a href="https://github.com/operator-framework/operator-lifecycle-manager" rel="noreferrer">OLM</a> to manage operators, there is no universal way to list them or get rid of them. As a heuristic, you can look at what an operator left in the cluster, for example its Deployment (<code>kubectl get deploy -A</code>) or the CRDs it registered (<code>kubectl get crds</code>).</p> <p>Some operators might be installed using <a href="https://helm.sh" rel="noreferrer">Helm</a>; then it's just a matter of <code>helm delete ...</code></p> <p>You can always try to remove it using</p> <pre><code>kubectl delete -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.9/kubegres.yaml
</code></pre> <p>Generally speaking, to remove something, use the same tool that you used for the installation.</p>
<p><em>*Cross-posted to <a href="https://github.com/rancher/k3d/discussions/691" rel="noreferrer">k3d github discussions</a>, to a thread in <a href="https://forums.rancher.com/t/k3s-traefik-dashboard-activation/17142/11?u=iseric" rel="noreferrer">Rancher forums</a>, and to <a href="https://community.traefik.io/t/exposing-the-traefik-dashboard-in-k3d-k3s/11340" rel="noreferrer">traefik's community discussion board</a></em></p> <p><a href="https://www.youtube.com/watch?v=12taKl5iCpA&amp;pp=sAQA" rel="noreferrer">Tutorials from 2020</a> refer to editing the traefik configmap. Where did it go?</p> <p>The <a href="https://doc.traefik.io/traefik/getting-started/install-traefik/#exposing-the-traefik-dashboard" rel="noreferrer">traefik installation instructions</a> refer to a couple of ways to expose the dashboard:</p> <ol> <li><p>This works, but isn't persistent: Using a 1-time command <code>kubectl -n kube-system port-forward $(kubectl -n kube-system get pods --selector &quot;app.kubernetes.io/name=traefik&quot; --output=name) 9000:9000</code></p> </li> <li><p>I cannot get this to work: Creating an &quot;IngressRoute&quot; yaml file and applying it to the cluster. This might be due to the Klipper LB and/or my ignorance.</p> </li> </ol> <h3>No configmap in use by traefik deployment</h3> <p>For a 2-server, 2-agent cluster... 
<code>kubectl -n kube-system describe deploy traefik</code> does not show any configmap:</p> <pre><code>  Volumes:
   data:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   &lt;unset&gt;
   tmp:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   &lt;unset&gt;
  Priority Class Name:  system-cluster-critical
</code></pre> <h3>No &quot;traefik&quot; configmap</h3> <p>And, <code>kubectl get -n kube-system cm</code> shows:</p> <pre><code>NAME                                 DATA   AGE
chart-content-traefik                0      28m
chart-content-traefik-crd            0      28m
chart-values-traefik                 1      28m
chart-values-traefik-crd             0      28m
cluster-dns                          2      28m
coredns                              2      28m
extension-apiserver-authentication   6      28m
k3s                                  0      28m
k3s-etcd-snapshots                   0      28m
kube-root-ca.crt                     1      27m
local-path-config                    4      28m
</code></pre> <h3>No configmap in use by traefik pods</h3> <p>Describing the pod doesn't turn up anything either. <code>kubectl -n kube-system describe pod traefik-....</code> results in no configmap too.</p> <h3>Traefik ports in use, but not responding</h3> <p>However, I did see arguments to the traefik pod with ports in use:</p> <pre><code>--entryPoints.traefik.address=:9000/tcp
--entryPoints.web.address=:8000/tcp
--entryPoints.websecure.address=:8443/tcp
</code></pre> <p>However, these ports are not exposed. So, I tried port-forward with <code>kubectl port-forward pods/traefik-97b44b794-r9srz 9000:9000 8000:8000 8443:8443 -n kube-system --address 0.0.0.0</code>, but when I <code>curl -v localhost:9000</code> (or 8000 or 8443) and <code>curl -v localhost:9000/dashboard</code>, I get &quot;404 Not Found&quot;.</p> <p>After getting a terminal to traefik, I discovered the local ports that are open, but it seems nothing is responding:</p> <pre><code>/ $ netstat -lntu
Active Internet connections (only servers)
Proto  Recv-Q  Send-Q  Local Address  Foreign Address  State
tcp    0       0       :::8443        :::*             LISTEN
tcp    0       0       :::8000        :::*             LISTEN
tcp    0       0       :::9000        :::*             LISTEN

/ $ wget localhost:9000
Connecting to localhost:9000 ([::1]:9000)
wget: server returned error: HTTP/1.1 404 Not Found

/ $ wget localhost:8000
Connecting to localhost:8000 ([::1]:8000)
wget: server returned error: HTTP/1.1 404 Not Found

/ $ wget localhost:8443
Connecting to localhost:8443 ([::1]:8443)
wget: server returned error: HTTP/1.1 404 Not Found
</code></pre> <h3>Versions</h3> <pre><code>k3d version v4.4.7
k3s version v1.21.2-k3s1 (default)
</code></pre>
<p>I found a solution; hopefully someone finds a better one soon.</p> <ol> <li>You need to control your k3s cluster from your PC rather than by SSHing into the master node, so merge <code>/etc/rancher/k3s/k3s.yaml</code> into your local <code>~/.kube/config</code> (so that you can port-forward to your PC in the last step).</li> <li>Now get your pod name as follows:</li> </ol> <blockquote> <p><code>kubectl get pod -n kube-system</code></p> </blockquote> <p>and search for <code>traefik-something-somethingElse</code>; mine was <code>traefik-97b44b794-bsvjn</code>.</p> <ol start="3"> <li>Now run this part from your local PC:</li> </ol> <blockquote> <p><code>kubectl port-forward traefik-97b44b794-bsvjn -n kube-system 9000:9000</code></p> </blockquote> <ol start="4"> <li>Open <code>http://localhost:9000/dashboard/</code> in your favorite browser and <strong>don't forget the trailing slash</strong>.</li> <li>Enjoy the dashboard.</li> </ol> <p>Please note you have to enable the dashboard first in <code>/var/lib/rancher/k3s/server/manifests/traefik.yaml</code> by adding</p> <pre><code>dashboard:
  enabled: true
</code></pre>
<p>I have a <code>k3s</code> cluster that has system pods with the <code>calico</code> policy applied:</p> <pre><code>kube-system   pod/calico-node-xxxx
kube-system   pod/calico-kube-controllers-xxxxxx
kube-system   pod/metrics-server-xxxxx
kube-system   pod/local-path-provisioner-xxxxx
kube-system   pod/coredns-xxxxx
app-system    pod/my-app-xxxx
</code></pre> <p>I ran <a href="https://rancher.com/docs/k3s/latest/en/upgrades/killall/" rel="nofollow noreferrer">/usr/local/bin/k3s-killall.sh</a> to clean up containers and networks. Will this also clean/remove/reset my Calico networking? (After <code>killall.sh</code>, the Calico iptables rules are still present.)</p> <p><strong>Quoting from the killall.sh link:</strong></p> <blockquote> <p>The killall script cleans up containers, K3s directories, and networking components while also removing the iptables chain with all the associated rules.</p> </blockquote> <p>It says that networking components will also be cleaned up, but does that mean only the Kubernetes networking or any networking applied to the cluster?</p>
<p>When you install <code>k3s</code> based on the instructions <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/" rel="nofollow noreferrer">here</a>, it won't install the Calico CNI by default. The Calico CNI needs to be installed <a href="https://docs.projectcalico.org/getting-started/kubernetes/k3s/quickstart" rel="nofollow noreferrer">separately</a>.</p> <p>To answer your question, let's analyse the <code>/usr/local/bin/k3s-killall.sh</code> file, especially the part with the <code>iptables</code> command:</p> <pre><code>...
iptables-save | grep -v KUBE- | grep -v CNI- | iptables-restore
</code></pre> <p>As you can see, this command only removes <code>iptables</code> <a href="https://unix.stackexchange.com/questions/506729/what-is-a-chain-in-iptables">chains</a> starting with <code>KUBE</code> or <code>CNI</code>.</p> <p>If you run the command <code>iptables -S</code> on a cluster set up with <code>k3s</code> and the Calico CNI, you can see that the chains used by Calico start with <code>cali-</code>:</p> <pre><code>iptables -S
-A cali-FORWARD -m comment --comment &quot;cali:vjrMJCRpqwy5oRoX&quot; -j MARK --set-xmark 0x0/0xe0000
-A cali-FORWARD -m comment --comment &quot;cali:A_sPAO0mcxbT9mOV&quot; -m mark --mark 0x0/0x10000 -j cali-from-hep-forward
...
</code></pre> <p>Briefly answering your questions:</p> <blockquote> <p>I ran <a href="https://rancher.com/docs/k3s/latest/en/upgrades/killall/" rel="nofollow noreferrer">/usr/local/bin/k3s-killall.sh</a> to clean up containers and networks. Will this clean/remove/reset my calico networking also?</p> </blockquote> <p>No. Some Calico CNI components will still be there, for example the earlier-mentioned <code>iptables</code> chains, but also network interfaces:</p> <pre><code>ip addr
6: calicd9e5f8ac65@if4: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt;
7: cali6fcd2eeafde@if4: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt;
...
</code></pre> <blockquote> <p>It says that networking components will also be cleaned up, but is it kubernetes networking or any networking applied to the cluster?</p> </blockquote> <p>Those are the network components provided by <code>k3s</code> by default, like the earlier-mentioned <code>KUBE-</code> and <code>CNI-</code> <code>iptables</code> chains. To get more information about what exactly the <code>k3s-killall.sh</code> script does, I'd recommend reading its <a href="https://get.k3s.io/" rel="nofollow noreferrer">code</a> (the <code>k3s-killall.sh</code> script starting from <code># --- create killall script ---</code>, line 575).</p>
<p>I'm struggling to set up a Kubernetes secret using GoDaddy certs in order to use it with the Ingress Nginx controller in a Kubernetes cluster.</p> <p>I know that GoDaddy isn't the go-to place for that, but that's not in my hands...</p> <p>Here is what I tried (mainly based on this <a href="https://rammusxu.github.io/2019/11/06/k8s-Use-crt-and-key-in-Ingress/" rel="nofollow noreferrer">github post</a>):</p> <p>I have a mail from GoDaddy with two files: <code>generated-csr.txt</code> and <code>generated-private-key.txt</code>.</p> <p>Then I downloaded the cert package on GoDaddy's website (I took the &quot;Apache&quot; server type, as it's the recommended one for Nginx). The archive contains three files (with generated names): <code>xxxx.crt</code> and <code>xxxx.pem</code> (same content for both files; they represent the domain cert) and <code>gd_bundle-g2-g1.crt</code> (which is the intermediate cert).</p> <p>Then I proceeded to concatenate the domain cert and the intermediate cert (let's name the result chain.crt) and tried to create a secret using these files with the following command:</p> <pre><code>kubectl create secret tls organization-tls --key generated-private-key.txt --cert chain.crt
</code></pre> <p>And my struggle starts here, as it throws this error:</p> <pre><code>error: tls: failed to find any PEM data in key input
</code></pre> <p>How can I fix this, or what am I missing?</p> <p>Sorry to bother you with something trivial like this, but it's been two days and I'm really struggling to find proper documentation or an example that works for the Ingress Nginx use case...</p> <p>Any help or hint is really welcome, thanks a lot!</p>
<p><em>This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.</em></p> <p>As OP mentioned in the comments, <strong>the issue was solved by adding a newline at the beginning of the key file</strong>.</p> <blockquote> <p>&quot;The key wasn't format correctly as it was lacking a newline in the beginning of the file. So this particular problem is now resolved.&quot;</p> </blockquote> <p>A similar issue was also addressed in <a href="https://stackoverflow.com/a/57614637/11714114">this answer</a>.</p>
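<p>Once the secret is created successfully, it can be referenced from an Ingress resource so that ingress-nginx serves the GoDaddy certificate. This is a sketch only; the host, ingress name and backend service are placeholders, while the secret name matches the question:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: organization-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.org            # the domain the GoDaddy cert was issued for
      secretName: organization-tls
  rules:
    - host: example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service  # hypothetical backend service
                port:
                  number: 80
</code></pre>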
<p>I've found two similar posts here, but one hasn't been answered and the other was about Android. I have a Spring Boot project and I want to access GCP Storage files within my application.</p> <p>Locally everything works fine: I can access my bucket and read as well as store files in my storage. But when I deploy it to GCP Kubernetes, I get the following exception:</p> <blockquote> <p>&quot;java.nio.file.FileSystemNotFoundException: Provider &quot;gs&quot; not installed at java.nio.file.Paths.get(Paths.java:147) ~[na:1.8.0_212] at xx.xx.StorageService.saveFile(StorageService.java:64) ~[classes!/:0.3.20-SNAPSHOT]</p> </blockquote> <p>The line of code where it appears is as follows:</p> <pre><code>public void saveFile(MultipartFile multipartFile, String path) {
    String completePath = filesBasePath + path;
    Path filePath = Paths.get(URI.create(completePath)); // &lt;- exception appears here
    Files.createDirectories(filePath);
    multipartFile.transferTo(filePath);
}
</code></pre> <p>The completePath could result in something like &quot;gs://my-storage/path/to/file/image.jpg&quot;.</p> <p>I have the following dependencies:</p> <pre><code>&lt;dependency&gt;
  &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt;
  &lt;artifactId&gt;spring-cloud-gcp-starter-storage&lt;/artifactId&gt;
  &lt;version&gt;1.2.6.RELEASE&lt;/version&gt;
&lt;/dependency&gt;
&lt;dependency&gt;
  &lt;groupId&gt;com.google.cloud&lt;/groupId&gt;
  &lt;artifactId&gt;google-cloud-nio&lt;/artifactId&gt;
  &lt;version&gt;0.122.5&lt;/version&gt;
&lt;/dependency&gt;
</code></pre> <p>Does anyone have a clue where to look? The only real difference, apart from the infrastructure, is that I don't explicitly use authentication on Kubernetes, as it is not required according to the documentation:</p> <blockquote> <p>When using Google Cloud libraries from a Google Cloud Platform environment such as Compute Engine, Kubernetes Engine, or App Engine, no additional authentication steps are necessary.</p> </blockquote>
<p>Following <a href="https://stackoverflow.com/a/67526886/11604596">Averi Kitsch's answer</a> and using the same <a href="https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/appengine-java11/springboot-helloworld" rel="nofollow noreferrer">springboot-helloworld</a> example, I was able to get it working locally after updating pom.xml. However, much like it did for you, it only worked when I ran it locally and would fail when I deployed it on Google Cloud. The issue was that the Dockerfile I was using was ignoring all of the jar files in the /target/lib directory, and I needed to copy that directory to the image. Note that I used Google Cloud Run, but I believe this should work for most deployments using Dockerfile.</p> <p>Here's what I ended up with:</p> <p><strong>Dockerfile</strong></p> <pre><code>FROM maven:3.8-jdk-11 as builder WORKDIR /app COPY pom.xml . COPY src ./src RUN mvn package -DskipTests FROM adoptopenjdk/openjdk11:alpine-jre COPY --from=builder /app/target/springboot-helloworld-*.jar /springboot-helloworld.jar # IMPORTANT! - Copy the library jars to the production image! 
COPY --from=builder /app/target/lib /lib CMD [&quot;java&quot;, &quot;-Djava.security.egd=file:/dev/./urandom&quot;, &quot;-jar&quot;, &quot;/springboot-helloworld.jar&quot;] </code></pre> <p><strong>pom.xml</strong></p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;project xmlns=&quot;http://maven.apache.org/POM/4.0.0&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot; xsi:schemaLocation=&quot;http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd&quot;&gt; &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt; &lt;groupId&gt;com.example.appengine&lt;/groupId&gt; &lt;artifactId&gt;springboot-helloworld&lt;/artifactId&gt; &lt;version&gt;0.0.1-SNAPSHOT&lt;/version&gt; &lt;parent&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-parent&lt;/artifactId&gt; &lt;version&gt;2.5.4&lt;/version&gt; &lt;relativePath/&gt; &lt;/parent&gt; &lt;properties&gt; &lt;maven.compiler.target&gt;11&lt;/maven.compiler.target&gt; &lt;maven.compiler.source&gt;11&lt;/maven.compiler.source&gt; &lt;/properties&gt; &lt;dependencies&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-starter-web&lt;/artifactId&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;com.google.cloud&lt;/groupId&gt; &lt;artifactId&gt;google-cloud-nio&lt;/artifactId&gt; &lt;version&gt;0.123.8&lt;/version&gt; &lt;/dependency&gt; &lt;/dependencies&gt; &lt;build&gt; &lt;plugins&gt; &lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-dependency-plugin&lt;/artifactId&gt; &lt;executions&gt; &lt;execution&gt; &lt;id&gt;copy-dependencies&lt;/id&gt; &lt;phase&gt;prepare-package&lt;/phase&gt; &lt;goals&gt; &lt;goal&gt;copy-dependencies&lt;/goal&gt; &lt;/goals&gt; &lt;configuration&gt; &lt;outputDirectory&gt; ${project.build.directory}/lib &lt;/outputDirectory&gt; &lt;/configuration&gt; &lt;/execution&gt; 
&lt;/executions&gt; &lt;/plugin&gt; &lt;plugin&gt; &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt; &lt;artifactId&gt;maven-jar-plugin&lt;/artifactId&gt; &lt;configuration&gt; &lt;archive&gt; &lt;manifest&gt; &lt;addClasspath&gt;true&lt;/addClasspath&gt; &lt;classpathPrefix&gt;lib/&lt;/classpathPrefix&gt; &lt;mainClass&gt; com.example.appengine.springboot.SpringbootApplication &lt;/mainClass&gt; &lt;/manifest&gt; &lt;/archive&gt; &lt;/configuration&gt; &lt;/plugin&gt; &lt;/plugins&gt; &lt;/build&gt; &lt;/project&gt; </code></pre>
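A quick way to confirm that the thin jar really expects its dependencies under `lib/` (matching the extra `COPY` in the Dockerfile) is to inspect the manifest's `Class-Path` entries. This is a sketch; the jar name is a placeholder for your actual build output:

```
# Run inside the builder stage or after a local "mvn package".
# Each dependency should appear as lib/<artifact>.jar in Class-Path.
unzip -p target/springboot-helloworld-0.0.1-SNAPSHOT.jar META-INF/MANIFEST.MF | grep -i class-path
```

If the entries are there but the image still fails with `Provider "gs" not installed`, the `lib/` directory is most likely missing from the final image layer.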
<p>Apache Airflow version: v2.1.1</p> <p>Kubernetes version (if you are using kubernetes) (use kubectl version):- Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;21&quot;, GitVersion:&quot;v1.21.2&quot;, GitCommit:&quot;092fbfbf53427de67cac1e9fa54aaa09a28371d7&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-06-16T12:52:14Z&quot;, GoVersion:&quot;go1.16.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19+&quot;, GitVersion:&quot;v1.19.8-eks-96780e&quot;, GitCommit:&quot;96780e1b30acbf0a52c38b6030d7853e575bcdf3&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-03-10T21:32:29Z&quot;, GoVersion:&quot;go1.15.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}</p> <p>Environment: Development</p> <p>Cloud provider or hardware configuration: AWS EKS OS (e.g. from /etc/os-release): Kernel (e.g. uname -a): Install tools: Others: What happened: I am not able to create SparkApplications on the Kubernetes cluster using SparkKubernetesOperator from Airflow DAG. I have hosted Airflow and Spark-operator on EKS. I have created a connection on Airflow to connect to the Kubernetes cluster by using &quot;in cluster configuration&quot;. 
I am just running the sample application just to check the execution of spark on Kubernetes through Airflow.</p> <p>Application YAML file:-</p> <pre><code>apiVersion: &quot;sparkoperator.k8s.io/v1beta2&quot; kind: SparkApplication metadata: name: spark-pi-airflow namespace: spark-apps spec: type: Scala mode: cluster image: &quot;gcr.io/spark-operator/spark:v3.1.1&quot; imagePullPolicy: Always mainClass: org.apache.spark.examples.SparkPi mainApplicationFile: &quot;local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar&quot; sparkVersion: &quot;3.1.1&quot; restartPolicy: type: Never volumes: - name: &quot;test-volume&quot; hostPath: path: &quot;/tmp&quot; type: Directory driver: cores: 1 coreLimit: &quot;1200m&quot; memory: &quot;512m&quot; labels: version: 3.1.1 serviceAccount: spark volumeMounts: - name: &quot;test-volume&quot; mountPath: &quot;/tmp&quot; executor: cores: 1 instances: 1 memory: &quot;512m&quot; labels: version: 3.1.1 volumeMounts: - name: &quot;test-volume&quot; mountPath: &quot;/tmp&quot; </code></pre> <p>Airflow DAG:-</p> <pre><code> from datetime import timedelta # [START import_module] # The DAG object; we'll need this to instantiate a DAG from airflow import DAG # Operators; we need this to operate! 
from airflow.providers.cncf.kubernetes.operators.spark_kubernetes import SparkKubernetesOperator from airflow.providers.cncf.kubernetes.sensors.spark_kubernetes import SparkKubernetesSensor from airflow.utils.dates import days_ago # [END import_module] # [START default_args] # These args will get passed on to each operator # You can override them on a per-task basis during operator initialization default_args = { 'owner': 'airflow', 'depends_on_past': False, 'email': ['airflow@example.com'], 'email_on_failure': False, 'email_on_retry': False, 'max_active_runs': 1, } # [END default_args] # [START instantiate_dag] dag = DAG( 'spark_pi_airflow', default_args=default_args, description='submit spark-pi as sparkApplication on kubernetes', schedule_interval=timedelta(days=1), start_date=days_ago(1), ) t1 = SparkKubernetesOperator( task_id='spark_pi_submit', namespace=&quot;spark-apps&quot;, application_file=&quot;example_spark_kubernetes_spark_pi.yaml&quot;, kubernetes_conn_id=&quot;kubernetes_default&quot;, do_xcom_push=True, dag=dag, ) t2 = SparkKubernetesSensor( task_id='spark_pi_monitor', namespace=&quot;spark-apps&quot;, application_name=&quot;{{ task_instance.xcom_pull(task_ids='spark_pi_submit')['metadata']['name'] }}&quot;, kubernetes_conn_id=&quot;kubernetes_default&quot;, dag=dag, ) t1 &gt;&gt; t2 </code></pre> <p>Error Message:-</p> <pre><code> [2021-07-12 10:18:46,629] {spark_kubernetes.py:67} INFO - Creating sparkApplication [2021-07-12 10:18:46,662] {taskinstance.py:1501} ERROR - Task failed with exception Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/hooks/kubernetes.py&quot;, line 174, in create_custom_object response = api.create_namespaced_custom_object( File &quot;/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api/custom_objects_api.py&quot;, line 183, in create_namespaced_custom_object (data) = 
self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 File &quot;/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api/custom_objects_api.py&quot;, line 275, in create_namespaced_custom_object_with_http_info return self.api_client.call_api( File &quot;/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py&quot;, line 340, in call_api return self.__call_api(resource_path, method, File &quot;/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py&quot;, line 172, in __call_api response_data = self.request( File &quot;/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py&quot;, line 382, in request return self.rest_client.POST(url, File &quot;/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/rest.py&quot;, line 272, in POST return self.request(&quot;POST&quot;, url, File &quot;/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/rest.py&quot;, line 231, in request raise ApiException(http_resp=r) kubernetes.client.rest.ApiException: (403) Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Audit-Id': '45712aa7-85e3-4beb-85f7-b94a77cda196', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon, 12 Jul 2021 10:18:46 GMT', 'Content-Length': '406'}) HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;sparkapplications.sparkoperator.k8s.io is forbidden: User \&quot;system:serviceaccount:airflow:airflow-cluster\&quot; cannot create resource \&quot;sparkapplications\&quot; in API group \&quot;sparkoperator.k8s.io\&quot; in the namespace 
\&quot;spark-apps\&quot;&quot;,&quot;reason&quot;:&quot;Forbidden&quot;,&quot;details&quot;:{&quot;group&quot;:&quot;sparkoperator.k8s.io&quot;,&quot;kind&quot;:&quot;sparkapplications&quot;},&quot;code&quot;:403} During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py&quot;, line 1157, in _run_raw_task self._prepare_and_execute_task_with_callbacks(context, task) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py&quot;, line 1331, in _prepare_and_execute_task_with_callbacks result = self._execute_task(context, task_copy) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py&quot;, line 1361, in _execute_task result = task_copy.execute(context=context) File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py&quot;, line 69, in execute response = hook.create_custom_object( File &quot;/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/hooks/kubernetes.py&quot;, line 180, in create_custom_object raise AirflowException(f&quot;Exception when calling -&gt; create_custom_object: {e}\n&quot;) airflow.exceptions.AirflowException: Exception when calling -&gt; create_custom_object: (403) Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Audit-Id': '45712aa7-85e3-4beb-85f7-b94a77cda196', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon, 12 Jul 2021 10:18:46 GMT', 'Content-Length': '406'}) HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;sparkapplications.sparkoperator.k8s.io is forbidden: User \&quot;system:serviceaccount:***:***-cluster\&quot; cannot 
create resource \&quot;sparkapplications\&quot; in API group \&quot;sparkoperator.k8s.io\&quot; in the namespace \&quot;spark-apps\&quot;&quot;,&quot;reason&quot;:&quot;Forbidden&quot;,&quot;details&quot;:{&quot;group&quot;:&quot;sparkoperator.k8s.io&quot;,&quot;kind&quot;:&quot;sparkapplications&quot;},&quot;code&quot;:403} </code></pre> <p>What you expected to happen: Kubernetes Airflow should schedule and run spark job using SparkKubernetesOperator.</p> <p>How to reproduce it: Deploy Spark operator using helm on Kubernetes cluster. Deploy Airflow using helm on Kubernetes cluster. Deploy the above-mentioned application and Airflow DAG.</p> <p>Anything else we need to know:-</p> <p>I have already created service account:-</p> <pre><code>$ kubectl create serviceaccount spark </code></pre> <p>Given the service account the edit role on the cluster:-</p> <pre><code>$ kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=airflow:airflow-cluster --namespace=spark-apps </code></pre>
<p>Here are kube cluster role resources. Create with <code>kubectl -n &lt;namespace&gt; apply -f &lt;filename.yaml&gt;</code></p> <pre class="lang-yaml prettyprint-override"><code># Role for spark-on-k8s-operator to create resources on cluster apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: spark-cluster-cr labels: rbac.authorization.kubeflow.org/aggregate-to-kubeflow-edit: &quot;true&quot; rules: - apiGroups: - sparkoperator.k8s.io resources: - sparkapplications verbs: - '*' --- # Allow airflow-worker service account access for spark-on-k8s apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: airflow-spark-crb roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: spark-cluster-cr subjects: - kind: ServiceAccount name: airflow-cluster namespace: airflow </code></pre> <p>Notes:</p> <ul> <li>The above is assuming the error message <pre><code>sparkapplications.sparkoperator.k8s.io is forbidden: User &quot;system:serviceaccount:airflow:airflow-cluster\&quot; cannot create resource \&quot;sparkapplications\&quot; in API group \&quot;sparkoperator.k8s.io\&quot; in the namespace spark-apps </code></pre> <ul> <li>Airflow namespace: <code>airflow</code></li> <li>Airflow serviceaccount: <code>airflow-cluster</code></li> <li>Spark jobs namespace: <code>spark-apps</code></li> </ul> </li> <li>You should also have <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/quick-start-guide.md" rel="noreferrer">spark-on-k8s-operator</a> installed <ul> <li>With helm <code> --set webhook.enable=true</code> if you want to use <code>env</code> in your <code>spec.driver</code></li> </ul> </li> </ul>
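To verify that the binding took effect without re-triggering the DAG, you can ask the API server directly while impersonating the Airflow service account (names taken from the error message above; running this requires a kubeconfig with impersonation rights):

```
kubectl auth can-i create sparkapplications.sparkoperator.k8s.io \
  --as=system:serviceaccount:airflow:airflow-cluster -n spark-apps
# should print "yes" once the ClusterRoleBinding is in place
```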
<p>I have installed Prometheus using <a href="https://prometheus-community.github.io/helm-charts" rel="nofollow noreferrer">prometheus community</a> on my EKS cluster.</p> <p>Everything is working as expected. However I want it to scrape data from other sources. How do I add new targets? Can't find a documentation for it. Please help.</p>
<p>Prometheus has a <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config" rel="noreferrer">scrape configuration</a> that allows you to add the targets you want to scrape; the linked documentation is a good starting point.</p>
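For example, if you installed it via the prometheus-community `prometheus` chart, extra targets are usually added through the chart's `extraScrapeConfigs` value. A minimal static-target sketch (the value name, job name, and host below are assumptions — check your chart's `values.yaml`):

```yaml
# values.yaml
extraScrapeConfigs: |
  - job_name: my-external-service        # placeholder job name
    static_configs:
      - targets:
          - my-host.example.com:9100     # placeholder target
```

Then apply it with something like `helm upgrade <release> prometheus-community/prometheus -f values.yaml`.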
<p>I am learning Kubernetes by following the course <a href="https://www.udemy.com/course/kubernetes-microservices/" rel="nofollow noreferrer">https://www.udemy.com/course/kubernetes-microservices/</a>.</p> <p>When I try to build an image using the file <a href="https://github.com/fleetman-ci-cd-demo/jenkins" rel="nofollow noreferrer">https://github.com/fleetman-ci-cd-demo/jenkins</a> with minikube's Docker daemon, it fails for the following reason:</p> <p><em>curl: (6) Could not resolve host: updates.jenkins.io</em></p> <p>I logged into the minikube shell as well and did a ping; it didn't work:</p> <pre><code> ping updates.jenkins.io PING updates.jenkins.io (52.202.51.185): 56 data bytes --- updates.jenkins.io ping statistics --- 68 packets transmitted, 0 packets received, 100% packet loss </code></pre> <p>I am able to reach google.com from the minikube shell.</p> <p>Please let me know how I can fix this.</p> <p>The build log:</p> <pre><code> WARN: install-plugins.sh is deprecated, please switch to jenkins-plugin-cli Creating initial locks... Analyzing war /usr/share/jenkins/jenkins.war... Registering preinstalled plugins... curl: (6) Could not resolve host: updates.jenkins.io The command '/bin/sh -c /usr/local/bin/install-plugins.sh workflow-aggregator &amp;&amp; /usr/local/bin/install-plugins.sh github &amp;&amp; /usr/local/bin/install-plugins.sh ws-cleanup &amp;&amp; /usr/local/bin/install-plugins.sh greenballs &amp;&amp; /usr/local/bin/install-plugins.sh simple-theme-plugin &amp;&amp; /usr/local/bin/install-plugins.sh kubernetes &amp;&amp; /usr/local/bin/install-plugins.sh docker-workflow &amp;&amp; /usr/local/bin/install-plugins.sh kubernetes-cli &amp;&amp; /usr/local/bin/install-plugins.sh github-branch-source' returned a non-zero code: 6 </code></pre>
<p>I think it's the container that has an issue with connecting to the Internet. I ran into the same problem with Jenkins running in Docker.</p> <p>I restarted Docker with <code>sudo service docker restart</code>, which solved my problem.</p> <p>Now, if you're concerned about the deprecation warning:</p> <pre><code>WARN: install-plugins.sh is deprecated, please switch to jenkins-plugin-cli </code></pre> <p>You may refer to <a href="https://github.com/jenkinsci/docker/blob/ebb29a040aca944474542400a34f48e1eb8fa628/README.md#preinstalling-plugins" rel="nofollow noreferrer">Jenkins' Docker image guide</a>. To cut to the chase, you need to replace the RUN command in the Dockerfile, from something like:</p> <pre><code>RUN /usr/local/bin/install-plugins.sh &lt; /usr/share/jenkins/ref/plugins.txt </code></pre> <p>to:</p> <pre><code>RUN jenkins-plugin-cli -f /usr/share/jenkins/ref/plugins.txt </code></pre> <p>Even with <code>jenkins-plugin-cli</code>, I faced a similar problem reaching the Jenkins plugin server. It shows:</p> <pre><code>Error getting update center json </code></pre> <p>In my case, it was solved by the same method: just restarting the Docker service.</p>
<p>I'm trying to implement in Kubernetes Dask distributed with one scheduler and three workers. I have one pod (<code>frontend.yaml</code>) for the scheduler and three other pods (<code>replicas: 3 in worker.yaml</code>) for the workers. The problem is that the workers are trying to connect to the scheduler and get a timeout as they don't know the IP address of the scheduler. Since these IP addresses are defined dynamically by Kubernetes I need to understand how this should be configured.</p> <p>The IP addresses of the scheduler and the workers are determined dynamically:</p> <pre><code>sofer@abc:~/aks$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP img-python-6ff4d88876-n97g8 2/2 Running 0 4m42s 10.1.0.34 worker-5f6d948874-fn67p 1/1 Running 4 4m22s 10.1.0.18 worker-5f6d948874-hhjs7 0/1 Running 3 4m22s 10.1.0.38 worker-5f6d948874-jbh4k 1/1 Running 4 4m22s 10.1.0.22 </code></pre> <p>frontend.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: img-python namespace: default spec: replicas: 1 selector: matchLabels: app: cont-python template: metadata: labels: app: cont-python spec: containers: - name: cont-scheduler image: xxx.azurecr.io/img-python-001:latest command: [&quot;dask-scheduler&quot;] ports: - containerPort: 8786 - containerPort: 8787 </code></pre> <p>worker.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: worker namespace: default spec: replicas: 3 selector: matchLabels: app: cont-worker template: metadata: labels: app: cont-worker spec: containers: - name: cont-worker image: sofacr.azurecr.io/img-python-001:latest workingDir: /code command: [&quot;dask-worker&quot;] args: [&quot;scheduler:8786&quot;] env: - name: PYTHONPATH value: &quot;/code&quot; - name: SCHEDULER_ADDRESS value: &quot;scheduler&quot; - name: SCHEDULER_PORT value: &quot;8786&quot; </code></pre>
<p>Thank you <a href="https://stackoverflow.com/users/3888719/michael-delgado">Michael Delgado</a>. Posting your suggestion as an answer to help community members.</p> <p>You can check the <a href="https://github.com/dask/helm-chart/tree/main/dask" rel="nofollow noreferrer">dask</a> and <a href="https://github.com/dask/helm-chart/tree/main/daskhub" rel="nofollow noreferrer">daskhub</a> Helm charts. These set up full deployments with <a href="https://kubernetes.dask.org/en/latest/" rel="nofollow noreferrer">dask_kubernetes</a> or <a href="https://gateway.dask.org/" rel="nofollow noreferrer">dask-gateway</a>. Rather than manually provisioning workers and hard-coding the scheduler's IP address, these deployments create the scheduler and workers dynamically.</p> <p>As suggested by <a href="https://stackoverflow.com/users/1362485/ps0604">ps0604</a>, to deploy Dask using the Dask Helm chart, you can refer to <a href="https://docs.dask.org/en/latest/setup/kubernetes-helm.html#kubernetes-helm-single" rel="nofollow noreferrer">Helm Install Dask for a Single User</a>.</p>
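For reference, installing the plain Dask chart boils down to the following (the release name <code>my-dask</code> is arbitrary):

```
helm repo add dask https://helm.dask.org
helm repo update
helm install my-dask dask/dask
```

The chart deploys the scheduler as a Service, so workers reach it by DNS name rather than by a pod IP that changes on every restart.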
<p>Can you help me? I want to deploy Ingress for a NodePort service, but I can't work out whether this is possible.</p> <p>I tried to find some information on Google, but I only found Ingress with load balancing or some complicated examples such as Ingress with Ruby on Rails, etc.</p>
<p>Create deployment and service</p> <pre><code>kubectl create deploy test --image httpd kubectl expose deploy test --port 80 --target-port 80 </code></pre> <p>Check if the service is working</p> <pre><code>kubectl get svc </code></pre> <p>returns</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE test ClusterIP 11.105.18.110 &lt;none&gt; 80/TCP 51s </code></pre> <p>Then</p> <pre><code>curl 11.105.18.110:80 </code></pre> <p>returns</p> <pre><code>&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>Create bare-metal ingress controller</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml </code></pre> <p><a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal</a></p> <p>This returns</p> <pre><code>namespace/ingress-nginx unchanged serviceaccount/ingress-nginx unchanged configmap/ingress-nginx-controller configured clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged role.rbac.authorization.k8s.io/ingress-nginx unchanged rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged service/ingress-nginx-controller-admission created service/ingress-nginx-controller created deployment.apps/ingress-nginx-controller created ingressclass.networking.k8s.io/nginx unchanged validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured serviceaccount/ingress-nginx-admission unchanged clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged job.batch/ingress-nginx-admission-create 
created job.batch/ingress-nginx-admission-patch created </code></pre> <p>Create ingress rules for nginx controller</p> <pre><code>kubectl apply -f -&lt;&lt;EOF apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test annotations: kubernetes.io/ingress.class: nginx spec: rules: - http: paths: - backend: service: name: test port: number: 80 path: / pathType: Prefix EOF </code></pre> <p><strong>Annotation &quot;kubernetes.io/ingress.class: nginx&quot; is required to bind Ingress to the controller</strong></p> <p><a href="https://www.fairwinds.com/blog/intro-to-kubernetes-ingress-set-up-nginx-ingress-in-kubernetes-bare-metal" rel="nofollow noreferrer">https://www.fairwinds.com/blog/intro-to-kubernetes-ingress-set-up-nginx-ingress-in-kubernetes-bare-metal</a></p> <p>Get nodes ips</p> <pre><code>kubectl get no -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP master Ready control-plane,master 6d20h v1.21.0 134.156.0.81 node1 Ready &lt;none&gt; 5d v1.21.0 134.156.0.82 node2 Ready &lt;none&gt; 6d19h v1.21.0 134.156.0.83 node3 Ready &lt;none&gt; 6d19h v1.21.0 134.156.0.84 </code></pre> <p>Find ingress controller nodeport</p> <pre><code>kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller NodePort 15.234.89.16 &lt;none&gt; 80:31505/TCP,443:32191/TCP 21m </code></pre> <p>Its 31505. Access test service over ingress node port on one of your nodes, for example on node1 134.156.0.82</p> <pre><code>curl 134.156.0.82:31505 </code></pre> <p>returns</p> <pre><code>&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>This was tested on Google Cloud virtual machines but without GKE</p>
<p>We are using a self-hosted microk8s cluster (single-node for now) for our internal staging workloads. From time to time, the server becomes unresponsive and I can't even ssh into it. The only way out is a restart.</p> <p>I can see that before the server crashes, its memory usage goes to the limit and the CPU load shoots up to over 1000. So running out of resources is likely to blame.</p> <p>That brings me to the question - <strong>how can I set global limits for microk8s to not consume <em>everything</em>?</strong></p> <hr /> <p>I know there are resource limits that can be assigned to Kubernetes pods, and ResourceQuotas to limit aggregate namespace resources. But that has the downside of low resource utilization (if I understand those right). For simplicity, let's say:</p> <ul> <li>each pod is the same</li> <li>its real memory needs can go from <code>50 MiB</code> to <code>500 MiB</code></li> <li>each pod is running in its own namespace</li> <li>there are 30 pods</li> <li>the server has 8 GiB of RAM</li> </ul> <ol> <li><p>I assign <code>request: 50 Mi</code> and <code>limit: 500 Mi</code> to the pod. As long as the node has at least <code>50 * 30 Mi = 1500 Mi</code> of memory, it should run all the requested pods. But there is nothing stopping all of the pods using <code>450 Mi</code> of memory each, which is under the individual limits, but still in total being <code>450 Mi * 30 = 13500 Mi</code>, which is more than the server can handle. And I suspect this is what leads to the server crash in my case.</p> </li> <li><p>I assign <code>request: 500 Mi</code> and <code>limit: 500 Mi</code> to the pod to ensure the total memory usage never goes above what I anticipate. This will of course allow me to only schedule 16 pods. But when the pods run with no real load and using just <code>50 Mi</code> of memory, there is severe RAM underutilization.</p> </li> <li><p>I am looking for a third option. 
Something to let me schedule pods freely and <em>only</em> start evicting/killing them when the total memory usage goes above a certain limit. And that limit needs to be configurable and lower than the total memory of the server, so that it does not die.</p> </li> </ol> <hr /> <p>We are using microk8s but I expect this is a problem all self-hosted nodes face, as well as something AWS/Google/Azure have to deal with too.</p> <p>Thanks</p>
<p>Since microk8s runs on the host machine, all of the host's resources are allocated to it. That is why, if you want to keep your cluster's resource usage within bounds, you have to manage it in one of the ways below:</p> <ol> <li>Set up a <a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">LimitRange</a> policy for pods in a namespace.</li> </ol> <blockquote> <p>A <em>LimitRange</em> provides constraints that can:</p> <ul> <li>Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.</li> <li>Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.</li> <li>Enforce a ratio between request and limit for a resource in a namespace.</li> <li>Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.</li> </ul> </blockquote> <ol start="2"> <li>Use <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#compute-resource-quota" rel="nofollow noreferrer">Resource Quotas</a> per namespace.</li> </ol> <blockquote> <p>A resource quota, defined by a <em>ResourceQuota</em> object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.</p> </blockquote> <ol start="3"> <li>Assign necessary <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">requests and limits</a> for each pod.</li> </ol> <blockquote> <p>When you specify the resource <em>request</em> for Containers in a Pod, the scheduler uses this information to decide which node to place the Pod on.
When you specify a resource <em>limit</em> for a Container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The kubelet also reserves at least the <em>request</em> amount of that system resource specifically for that container to use.</p> </blockquote>
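As an illustration, the first two mechanisms can be combined per namespace. The figures below are placeholders you would tune for your 8 GiB node, not recommendations:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
  namespace: my-namespace        # placeholder namespace
spec:
  limits:
    - type: Container
      defaultRequest:
        memory: 50Mi             # injected when a pod omits its request
      default:
        memory: 500Mi            # injected when a pod omits its limit
      max:
        memory: 500Mi            # hard per-container ceiling
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota
  namespace: my-namespace
spec:
  hard:
    requests.memory: 1500Mi      # aggregate requests allowed in the namespace
    limits.memory: 6Gi           # aggregate limits, kept below host RAM
```

Capping the aggregate `limits.memory` below the host's total is what prevents the "all pods spike at once" scenario from taking the node down.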
<p>I would like to change the default tcp keep alive value in a Kubernetes pod, what's the recommended approach?</p>
<p>You could do this via sysctls on the pod manifest in AKS/Kubernetes:</p> <pre><code>spec: securityContext: sysctls: - name: &quot;net.ipv4.tcp_keepalive_time&quot; value: &quot;45&quot; </code></pre> <p>Here is also further documentation:</p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/</a></p> <p><a href="https://docs.syseleven.de/metakube/de/tutorials/confiugre-unsafe-sysctls" rel="nofollow noreferrer">https://docs.syseleven.de/metakube/de/tutorials/confiugre-unsafe-sysctls</a></p>
<p>If my pod is exceeding the memory it requested (but is still under the limit), is more memory made available at runtime on request, or does the pod need to restart to allocate more memory?</p>
<p>Memory up to the limit is available to the pod at all times (as long as the host has enough free memory), with no restart required. The &quot;request&quot; amount is only used for scheduling, while the limit is what actually restricts the pod at runtime. I would caution against setting too large a gap between request and limit, as it can result in the node itself running out of memory.</p>
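For reference, both values are set per container in the pod spec; a minimal sketch (image name and sizes are placeholders):

```yaml
containers:
  - name: app
    image: my-image:latest       # placeholder
    resources:
      requests:
        memory: 256Mi            # used by the scheduler for placement only
      limits:
        memory: 512Mi            # enforced at runtime; exceeding it OOM-kills the container
```

Growth from 256Mi toward 512Mi happens transparently; only crossing the limit triggers an OOM kill and restart.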
<p>I found that creating a yaml object description using <code>--dry-run=client</code> and providing <code>--command</code> only works when the provided arguments are in a very specific order.</p> <p>This works:</p> <pre><code>k run nginx --image=nginx --restart=Never --dry-run=client -o yaml --command -- env &gt; nginx.yaml </code></pre> <p>This does NOT:</p> <pre><code>k run nginx --image=nginx --restart=Never --command -- env --dry-run=client -o yaml &gt; nginx.yaml </code></pre> <p>I feel a bit confused because the version that does not work looks a lot more intuitive to me then the one that does work. Ideally both should work in my opinion. Is this intended behavior? I can't find any documentation about it.</p>
<p>Everything after <code>--</code> is a positional argument (up to the <code>&gt;</code>, which is a shell metacharacter handled before <code>kubectl</code> even runs), not an option. In your second command, <code>--dry-run=client -o yaml</code> is therefore passed through as part of the container command instead of being parsed as <code>kubectl</code> flags, which is why only the first ordering works.</p>
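The behaviour can be sketched with a toy parser that mimics how kubectl stops reading its own options at the first `--` (this is an illustration, not kubectl's real code):

```shell
# Flags are consumed only until the first "--"; everything after it
# is passed through verbatim as the container command.
parse() {
  dry_run=no
  while [ "$#" -gt 0 ]; do
    case "$1" in
      --dry-run=*) dry_run=yes; shift ;;
      --) shift; break ;;   # stop option parsing here
      *) shift ;;           # any other kubectl flag
    esac
  done
  echo "dry_run=$dry_run command=$*"
}

parse --dry-run=client --command -- env            # dry_run=yes command=env
parse --command -- env --dry-run=client            # dry_run=no command=env --dry-run=client
```

In the second call the flag lands after `--`, so it is never seen as an option — exactly what happens in the non-working `kubectl run` invocation.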
<p>I have VPC A with CIDR <code>10.A.0.0/16</code> and VPC B with CIDR <code>10.B.0.0/16</code>. I have VPC A and B peered and updated the route tables and from a server in <code>10.B.0.0/16</code> can ping a server in <code>10.A.0.0/16</code> and vice versa.</p> <p>The applications on VPC A also use some IPs in the <code>192.168.0.0/16</code> range. Not something I can easily change, but I need to be able to reach <code>192.168.0.0/16</code> on VPC A from VPC B.</p> <p>I've tried adding <code>192.168.0.0/16</code> to the route table used for VPC B and setting the target of the peered connection. That does not work, I believe because <code>192.168.0.0/16</code> is not in the CIDR block for VPC A.</p> <p>I'm unable to add <code>192.168.0.0/16</code> as a secondary CIDR in VPC A because it is restricted. See <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#add-cidr-block-restrictions" rel="nofollow noreferrer">CIDR block association restrictions</a> and <a href="https://stackoverflow.com/questions/66381126/how-do-we-allot-the-secondary-cidr-to-vpc-in-aws">related question</a>. I understand it is restricted, but why is it restricted? RFC1918 doesn't seem to say anything against using more than one of the private address spaces.</p> <p>I've also tried making a Transit Gateway, attaching both VPCs, and adding a static route to the Transit Gateway Route Table for <code>192.168.0.0/16</code> that targets the VPC A attachment. But still cannot reach that range from within VPC B.</p> <p><strong>Is there another way to peer to both <code>10.0.0.0/8</code> and <code>192.168.0.0/16</code> CIDR blocks on the same VPC?</strong></p> <h4>Updated, background info</h4> <p>The VPCs are used by two different kubernetes clusters. The older one uses project-calico that uses the default cluster CIDR <code>192.168.0.0/16</code> and pod IPs get assigned in that range. The newer one is an EKS cluster and pod IPs are assigned from the VPC's CIDR range. 
During the transition period I've got the two clusters' VPCs peered together.</p> <h4>Route Table</h4> <p>The route table for the private subnet for VPC A</p> <pre><code>10.A.0.0/16 local 10.B.0.0/16 pcx-[VPC A - VPC B peering connection] 0.0.0.0/0 nat-[gateway for cluster A] </code></pre> <p>Route table for the private subnet for VPC B</p> <pre><code>10.B.0.0/16 local 10.A.0.0/16 pcx-[VPC A - VPC B peering connection] 192.168.0.0/16 pcx-[VPC A - VPC B peering connection] 0.0.0.0/0 nat-[gateway for cluster B] </code></pre> <p>This does not work, of course, because <code>192.168.0.0/16</code> is not in VPC A's CIDR block, nor can it be added.</p>
<p>Calico creates an overlay network using the specified cluster CIDR (192.168.x.x) on top of VPC (A)'s CIDR, so pods/services in that k8s cluster can communicate. The overlay network's routing information is neither exposed to nor usable by AWS route tables. This is different from the k8s cluster running in VPC (B), which uses the VPC CNI and therefore draws its cluster CIDR from the VPC CIDR itself.</p> <p>Calico <a href="https://docs.projectcalico.org/networking/bgp" rel="nofollow noreferrer">BGP Peering</a> offers a way here, but it is not going to be an easy route for this case.</p> <blockquote> <p>Calico nodes can exchange routing information over BGP to enable reachability for Calico networked workloads (Kubernetes pods or OpenStack VMs).</p> </blockquote> <p>If you must achieve pod-to-pod communication across k8s clusters in different networks without going via an Ingress/LB, migrate one of the clusters' CNI to match the other's so you can fully leverage their peering capabilities.</p>
<p>I am setting up a <code>kind</code> cluster</p> <pre><code>Creating cluster &quot;kind&quot; ... ✓ Ensuring node image (kindest/node:v1.22.1) 🖼 ✓ Preparing nodes 📦 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ ✓ Installing CNI 🔌 ✓ Installing StorageClass 💾 ✓ Joining worker nodes 🚜 ✓ Waiting ≤ 5m0s for control-plane = Ready ⏳ • Ready after 0s 💚 </code></pre> <p>and then trying to install ECK operator as per <a href="https://www.elastic.co/guide/en/cloud-on-k8s/1.6/k8s-deploy-eck.html" rel="noreferrer">instructions</a> about version 1.6</p> <pre><code>kubectl apply -f https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml </code></pre> <p>However the process fails, as if <code>kind</code> does not support CRDs...Is this the case?</p> <pre><code>namespace/elastic-system created serviceaccount/elastic-operator created secret/elastic-webhook-server-cert created configmap/elastic-operator created clusterrole.rbac.authorization.k8s.io/elastic-operator created clusterrole.rbac.authorization.k8s.io/elastic-operator-view created clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created service/elastic-webhook-server created statefulset.apps/elastic-operator created unable to recognize &quot;https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml&quot;: no matches for kind &quot;CustomResourceDefinition&quot; in version &quot;apiextensions.k8s.io/v1beta1&quot; unable to recognize &quot;https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml&quot;: no matches for kind &quot;CustomResourceDefinition&quot; in version &quot;apiextensions.k8s.io/v1beta1&quot; unable to recognize &quot;https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml&quot;: no matches for kind &quot;CustomResourceDefinition&quot; in version &quot;apiextensions.k8s.io/v1beta1&quot; unable to recognize &quot;https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml&quot;: no 
matches for kind &quot;CustomResourceDefinition&quot; in version &quot;apiextensions.k8s.io/v1beta1&quot; unable to recognize &quot;https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml&quot;: no matches for kind &quot;CustomResourceDefinition&quot; in version &quot;apiextensions.k8s.io/v1beta1&quot; unable to recognize &quot;https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml&quot;: no matches for kind &quot;CustomResourceDefinition&quot; in version &quot;apiextensions.k8s.io/v1beta1&quot; unable to recognize &quot;https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml&quot;: no matches for kind &quot;CustomResourceDefinition&quot; in version &quot;apiextensions.k8s.io/v1beta1&quot; unable to recognize &quot;https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml&quot;: no matches for kind &quot;ValidatingWebhookConfiguration&quot; in version &quot;admissionregistration.k8s.io/v1beta1&quot; </code></pre>
<p>The problem you're seeing here isn't related to kind; instead, the manifest you're trying to apply uses outdated API versions, which were removed in Kubernetes 1.22.</p> <p>Specifically, the manifest uses the v1beta1 version of the CustomResourceDefinition and ValidatingWebhookConfiguration objects:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition </code></pre> <p>As noted in <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/" rel="noreferrer">this post</a>, those were among the API versions removed when 1.22 landed.</p> <p>There are a couple of fixes for this. Firstly, you could get the manifest and change the CustomResourceDefinitions to use the new API version <code>apiextensions.k8s.io/v1</code> and the ValidatingWebhookConfiguration to use <code>admissionregistration.k8s.io/v1</code>.</p> <p>The other fix would be to use an older version of Kubernetes. If you use 1.21 or earlier, the issue shouldn't occur, so something like <code>kind create cluster --image=kindest/node:v1.21.2</code> should work.</p>
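<p>For reference, the edited headers would look like the fragment below. Note this is only the mechanical part: <code>apiextensions.k8s.io/v1</code> also requires a structural schema under <code>spec.versions</code>, so a plain find-and-replace may not be enough, and a newer ECK release whose manifests already target the v1 APIs is usually the easier fix.</p>

```yaml
# Updated API versions required on Kubernetes 1.22+
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
```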
<p>I am a newbie to Kubernetes and trying to install Kubernetes locally on my Mac machine.</p> <p>I am following the below tutorial on the Kubernetes website:</p> <p><a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/start/</a></p> <p>Now, I am at a step where it tells me to deploy a simple application in my local cluster. I deployed it. Now when I try to access my service using the following command</p> <pre><code>minikube service hello-minikube </code></pre> <p>it launches the browser, though the browser never shows the service and keeps hanging at 127.0.0.1.</p> <p>Due to the above problem, I used the following command</p> <pre><code>kubectl port-forward service/hello-minikube 7080:8080 </code></pre> <p>When I try to launch my browser using http://localhost:7080/</p> <p>I get the error attached in the snapshot, which is a connection refused issue. <a href="https://i.stack.imgur.com/dW1Ba.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dW1Ba.png" alt="enter image description here" /></a></p> <p>Can someone guide me on what's going wrong here?</p> <p>Cheers,</p> <p><strong>After the below answer</strong></p> <p>I tried both of the commands, but the browser keeps spinning and the service never loads.
Here is a snapshot for you</p> <p><a href="https://i.stack.imgur.com/uC8XU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uC8XU.png" alt="enter image description here" /></a></p> <p><strong>YAML File</strong> <a href="https://i.stack.imgur.com/jmKke.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jmKke.jpg" alt="enter image description here" /></a></p> <p><strong>Added Endpoints</strong> <a href="https://i.stack.imgur.com/HjfaP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HjfaP.png" alt="enter image description here" /></a></p> <p><strong>Pods Status</strong> <a href="https://i.stack.imgur.com/r9cjd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r9cjd.png" alt="enter image description here" /></a></p> <p><strong>Adding Docker Image<a href="https://i.stack.imgur.com/y40TO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y40TO.jpg" alt="Docker Image" /></a></strong></p> <p><strong>Events Command update</strong> <a href="https://i.stack.imgur.com/wqMlZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wqMlZ.jpg" alt="Events command error" /></a></p>
<p>To access your application, either use <code>minikube tunnel</code> and then reach it via one of the endpoints (your server's IP and port).</p> <p>Another option would be to expose your application via a service of type NodePort and then use <code>minikube service $servicename</code>.</p> <p>Edit:</p> <p>Just a quick sanity check: can you try running the following commands?</p> <pre><code>$ kubectl create deployment nginx --image nginx --port 80 $ kubectl expose deployment nginx --name nginx-svc --type NodePort $ minikube service nginx-svc </code></pre> <p>--&gt; this should open your browser with the IP of the minikube cluster using the service's NodePort, and you should see the nginx welcome page.</p> <p>The other way you should be able to access the service is, after creating the deployment and service:</p> <pre><code>$ minikube tunnel $ kubectl port-forward $nginx-podname 8080:80 </code></pre> <p>--&gt; browsing to localhost:8080 should show the nginx welcome page.</p> <p>Please check if this is how your setup currently behaves. If it doesn't, there's a problem in your setup that needs to be explored further. If it does, please share your yaml resources so we might be able to find the problem.</p>
<p>I started a GKE cluster using Terraform (<a href="https://learn.hashicorp.com/terraform/kubernetes/provision-gke-cluster" rel="nofollow noreferrer">link</a>). Now I am trying to release Helm charts on the cluster, starting with the "Nginx Ingress" helm chart, which is as follows:</p> <pre><code>resource "helm_release" "ingress" { name = "ingress" repository = "https://kubernetes.github.io/ingress-nginx" chart = "ingress-nginx" } </code></pre> <p>Terraform Plan:</p> <pre><code>Terraform will perform the following actions: # helm_release.ingress will be created + resource "helm_release" "ingress" { + chart = "ingress-nginx" + disable_webhooks = false + force_update = false + id = (known after apply) + metadata = (known after apply) + name = "ingress" + namespace = "default" + recreate_pods = false + repository = "https://kubernetes.github.io/ingress-nginx" + reuse = false + reuse_values = false + status = "DEPLOYED" + timeout = 300 + verify = false + version = "2.3.0" + wait = true } </code></pre> <p>But I am getting an error</p> <pre><code>Error: Kubernetes cluster unreachable: Get https://35.232.164.12/version?timeout=32s: dial tcp 35.232.164.12:443: i/o timeout on helm.tf line 36, in resource "helm_release" "ingress": 36: resource "helm_release" "ingress" { </code></pre>
<p>Here is an example with Helm 3:</p> <p>Note:</p> <ul> <li><code>[cluster endpoint]</code> and <code>[ca certificate]</code> are outputs of the cluster that was created with Terraform</li> <li>You will need the <code>cluster.admin</code> and <code>iam.serviceAccountTokenCreator</code> roles on the service account</li> </ul> <pre> data "google_service_account_access_token" "kubernetes_sa" { target_service_account = "" scopes = ["userinfo-email", "cloud-platform"] lifetime = "3600s" } provider "kubernetes" { host = "https://[cluster endpoint]" token = data.google_service_account_access_token.kubernetes_sa.access_token cluster_ca_certificate = base64decode( module.gitlab-gke.cluster_ca_certificate ) } provider "helm" { kubernetes { host = "https://[cluster endpoint]" token = data.google_service_account_access_token.kubernetes_sa.access_token cluster_ca_certificate = base64decode( [ca certificate] ) } } </pre>
<p>I am recently new to Kubernetes and Docker in general and am experiencing issues.</p> <p>I am running a single local Kubernetes cluster via Docker and am using skaffold to control the build up and teardown of objects within the cluster. When I run <code>skaffold dev</code> the build seems successful, yet when I attempt to make a request to my cluster via Postman the request hangs. I am using an ingress-nginx controller and I feel the bug lies somewhere here. My request handling logic is simple and so I feel the issue is not in the route handling but the configuration of my cluster, specifically with the ingress controller. I will post below my skaffold yaml config and my ingress yaml config.</p> <p>Any help is greatly appreciated as I have struggled with this bug for sometime.</p> <p>ingress yaml config :</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/use-regex: 'true' spec: rules: - host: ticketing.dev http: paths: - path: /api/users/?(.*) pathType: Prefix backend: service: name: auth-srv port: number: 3000 </code></pre> <p>Note that I have a redirect in my /etc/hosts file from ticketing.dev to 127.0.0.1</p> <p>Auth service yaml config :</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: auth-depl spec: replicas: 1 selector: matchLabels: app: auth template: metadata: labels: app: auth spec: containers: - name: auth image: conorl47/auth --- kind: Service metadata: name: auth-srv spec: selector: app: auth ports: - name: auth protocol: TCP port: 3000 targetPort: 3000 </code></pre> <p>skaffold yaml config :</p> <pre><code>apiVersion: skaffold/v2alpha3 kind: Config deploy: kubectl: manifests: - ./infra/k8s/* build: local: push: false artifacts: - image: conorl47/auth context: auth docker: dockerfile: Dockerfile sync: manual: - src: 'src/**/*.ts' dest: . 
</code></pre> <p>For installing the ingress nginx controller I followed the installation instructions at <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a> , namely the Docker desktop installation instruction.</p> <p>After running that command I see the following two Docker containers running in Docker desktop</p> <p><a href="https://i.stack.imgur.com/nBnqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nBnqv.png" alt="enter image description here" /></a></p> <p>The two services created in the ingress-nginx namespace are :</p> <pre><code>❯ k get services -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.103.6.146 &lt;pending&gt; 80:30036/TCP,443:30465/TCP 22m ingress-nginx-controller-admission ClusterIP 10.108.8.26 &lt;none&gt; 443/TCP 22m </code></pre> <p>When I kubectl describe both of these services I see the following :</p> <pre><code>❯ kubectl describe service ingress-nginx-controller -n ingress-nginx Name: ingress-nginx-controller Namespace: ingress-nginx Labels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=ingress-nginx app.kubernetes.io/version=1.0.0 helm.sh/chart=ingress-nginx-4.0.1 Annotations: &lt;none&gt; Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx Type: LoadBalancer IP Family Policy: SingleStack IP Families: IPv4 IP: 10.103.6.146 IPs: 10.103.6.146 Port: http 80/TCP TargetPort: http/TCP NodePort: http 30036/TCP Endpoints: 10.1.0.10:80 Port: https 443/TCP TargetPort: https/TCP NodePort: https 30465/TCP Endpoints: 10.1.0.10:443 Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 32485 Events: &lt;none&gt; </code></pre> <p>and :</p> <pre><code>❯ kubectl describe service 
ingress-nginx-controller-admission -n ingress-nginx Name: ingress-nginx-controller-admission Namespace: ingress-nginx Labels: app.kubernetes.io/component=controller app.kubernetes.io/instance=ingress-nginx app.kubernetes.io/managed-by=Helm app.kubernetes.io/name=ingress-nginx app.kubernetes.io/version=1.0.0 helm.sh/chart=ingress-nginx-4.0.1 Annotations: &lt;none&gt; Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.108.8.26 IPs: 10.108.8.26 Port: https-webhook 443/TCP TargetPort: webhook/TCP Endpoints: 10.1.0.10:8443 Session Affinity: None Events: &lt;none&gt; </code></pre>
<p>As it seems, you have made the ingress service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>. This will usually provision an external load balancer from your cloud provider of choice. That's also why it's still pending: it's waiting for the load balancer to be ready, but that will never happen here.</p> <p>If you want to have that ingress service reachable outside your cluster, you need to use type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>.</p> <p>Since the docs are not great on this point and the manifest ships with LoadBalancer as the default, you could download the content of <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml</a> and modify it before applying, or use <code>helm</code>, where this can be configured.</p> <p>You could also do it in this dirty fashion.</p> <pre class="lang-sh prettyprint-override"><code>kubectl apply --dry-run=client -o yaml -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml \ | sed s/LoadBalancer/NodePort/g \ | kubectl apply -f - </code></pre> <p>You could also edit the controller service in place (note it is <code>ingress-nginx-controller</code>, the LoadBalancer service, that needs the change).</p> <pre class="lang-sh prettyprint-override"><code>kubectl edit svc ingress-nginx-controller -n ingress-nginx </code></pre>
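<p>If you go the <code>helm</code> route, the same change can be expressed as a values override (assuming the standard <code>ingress-nginx</code> chart, which exposes the service type under <code>controller.service.type</code>):</p>

```yaml
# values.yaml for the ingress-nginx chart
controller:
  service:
    type: NodePort
```

<p>Then install with something like <code>helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f values.yaml</code>.</p>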
<p>Previously we only use <code>helm template</code> to generate the manifest and apply to the cluster, recently we start planning to use <code>helm install</code> to manage our deployment, but running into following problems:</p> <p>Our deployment is a simple backend api which contains &quot;Ingress&quot;, &quot;Service&quot;, and &quot;Deployment&quot;, when there is a new commit, the pipeline will be triggered to deploy. We plan to use the short commit sha as the image tag and helm release name. Here is the command <code>helm upgrade --install releaseName repo/chartName -f value.yaml --set image.tag=SHA</code></p> <p>This runs perfectly fine for the first time, but when I create another release it fails with following error message</p> <pre><code>rendered manifests contain a resource that already exists. Unable to continue with install: Service &quot;app-svc&quot; in namespace &quot;ns&quot; exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key &quot;meta.helm.sh/release-name&quot; must equal &quot;rel-124&quot;: current value is &quot;rel-123&quot; </code></pre> <p>The error message is pretty clear on what the issue is, but I am just wondering what's &quot;correct&quot; way of using helm in this case?</p> <p>It is not practical that I uninstall everything for a new release, and I also dont want to keep using the same release.</p>
<p>You are already doing it the &quot;right&quot; way; just don't change the <code>release-name</code>. That's the key Helm uses to identify the resources it owns. It seems that you previously used a different name for the release (<code>rel-123</code>) than you are using now (<code>rel-124</code>).</p> <p>To fix your immediate problem, you should be able to proceed by updating the value of the <code>meta.helm.sh/release-name</code> annotation on the problematic resource (and similarly <code>meta.helm.sh/release-namespace</code>, since your error mentions that too). Something like this should do it:</p> <pre><code>kubectl annotate --overwrite service app-svc meta.helm.sh/release-name=rel-124 </code></pre>
<p>I could not understand what the package can do, the offical doc show nothing about <code>unstructured</code>. What the package used for ? Is it used for converting map[string]interface{} to K8S Obj ?</p> <p><a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" rel="nofollow noreferrer">https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured</a></p>
<p>As @kseniia-churiumova suggested, it is used when you don't know the object type at compile time. Here is a use case to understand it better. Let us say your organisation has a policy that every Kubernetes object must have an &quot;owner&quot; annotation whose value points to the email ID of a person or group, and you have been tasked with finding all the resources that violate this policy.</p> <p>You can have a configuration file that holds a list of <code>GroupVersionKind</code>s and use unstructured to query them. If a new type needs a check, you can add it to the configuration without changing the code.</p> <p><strong>Note:</strong> This is just an example. In production you would use something like Gatekeeper, which implements the OPA specification, to enforce policies.</p>
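<p>Under the hood, <code>unstructured.Unstructured</code> is essentially a <code>map[string]interface{}</code> plus helpers for safely walking nested fields (<code>NestedString</code> and friends). Here is a stdlib-only sketch of that idea, so it runs without the apimachinery dependency; the helper name and the sample object are my own, not from the package:</p>

```go
package main

import "fmt"

// nestedString walks a decoded-JSON-style map by field names, the way
// unstructured.NestedString does, returning ok=false when any hop is
// missing or has an unexpected type.
func nestedString(obj map[string]interface{}, fields ...string) (string, bool) {
	var cur interface{} = obj
	for _, f := range fields {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return "", false
		}
		if cur, ok = m[f]; !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}

func main() {
	// A Kubernetes object decoded without knowing its concrete Go type,
	// which is what unstructured.Unstructured.Object holds.
	pod := map[string]interface{}{
		"kind": "Pod",
		"metadata": map[string]interface{}{
			"name": "web",
			"annotations": map[string]interface{}{
				"owner": "team@example.com",
			},
		},
	}
	if owner, found := nestedString(pod, "metadata", "annotations", "owner"); found {
		fmt.Println("owner:", owner)
	} else {
		fmt.Println("policy violation: no owner annotation")
	}
}
```

<p>The real package adds typed accessors, deep-copying, and <code>runtime.Object</code> conformance on top of this, so objects fetched via the dynamic client can flow through generic code.</p>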
<p>Kubernetes doesn't allow RoleRef of RoleBinding to be updated.<br /> When I command like below, kubernetes shows error:&quot;cannot change roleRef&quot;</p> <pre><code>$ kubectl apply -f - &lt;&lt; EOF kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: test-crb subjects: - kind: User name: user@acme.com apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: edit apiGroup: rbac.authorization.k8s.io EOF clusterrolebinding &quot;test-crb&quot; created $ kubectl apply -f - &lt;&lt; EOF kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: test-crb subjects: - kind: User name: user@acme.com apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: view apiGroup: rbac.authorization.k8s.io EOF The ClusterRoleBinding &quot;test-crb&quot; is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:&quot;rbac.authorization.k8s.io&quot;, Kind:&quot;ClusterRole&quot;, Name:&quot;view&quot;}: cannot change roleRef </code></pre> <p>What I want to know is the reason.<br /> Why kubernetes doesn't allow users to update RoleRef?<br /> Is there some special reason?</p>
<p>roleRef is immutable, so you need to delete and recreate the RoleBinding/ClusterRoleBinding. Here is the related <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#clusterrolebinding-example" rel="nofollow noreferrer">documentation</a>.</p> <blockquote> <p>After you create a binding, you cannot change the Role or ClusterRole that it refers to. If you try to change a binding's roleRef, you get a validation error. If you do want to change the roleRef for a binding, you need to remove the binding object and create a replacement.</p> </blockquote> <p><strong>There are two reasons for this restriction:</strong></p> <blockquote> <ol> <li>Making roleRef immutable allows granting someone update permission on an existing binding object, so that they can manage the list of subjects, without being able to change the role that is granted to those subjects.</li> </ol> </blockquote> <blockquote> <ol start="2"> <li>A binding to a different role is a fundamentally different binding. Requiring a binding to be deleted/recreated in order to change the roleRef ensures the full list of subjects in the binding is intended to be granted the new role (as opposed to enabling or accidentally modifying only the roleRef without verifying all of the existing subjects should be given the new role's permissions).</li> </ol> </blockquote>
<p>I found that creating a yaml object description using <code>--dry-run=client</code> and providing <code>--command</code> only works when the provided arguments are in a very specific order.</p> <p>This works:</p> <pre><code>k run nginx --image=nginx --restart=Never --dry-run=client -o yaml --command -- env &gt; nginx.yaml </code></pre> <p>This does NOT:</p> <pre><code>k run nginx --image=nginx --restart=Never --command -- env --dry-run=client -o yaml &gt; nginx.yaml </code></pre> <p>I feel a bit confused because the version that does not work looks a lot more intuitive to me then the one that does work. Ideally both should work in my opinion. Is this intended behavior? I can't find any documentation about it.</p>
<blockquote> <p>Ideally both should work in my opinion.</p> </blockquote> <p>Unfortunately, the commands you presented are not the same, and they will never work the same either. This is correct behaviour. The <a href="https://unix.stackexchange.com/questions/11376/what-does-double-dash-mean">double dash</a> (<code>--</code>) is of special importance here:</p> <blockquote> <p>a double dash (<code>--</code>) is used in most Bash built-in commands and many other commands to signify the end of command options, after which only positional arguments are accepted.</p> </blockquote> <p>So you can't freely swap the &quot;parameters&quot; around. Only these options can be freely reordered:</p> <pre class="lang-yaml prettyprint-override"><code>--image=nginx --restart=Never --dry-run=client -o yaml --command </code></pre> <p>Then you have <code>-- env</code> (double dash, space, and another command). After <code>--</code> (double dash and space) only positional arguments are accepted.</p> <p>Additionally, <code>&gt;</code> is a shell meta-character that sets up <a href="https://www.gnu.org/software/bash/manual/html_node/Redirections.html" rel="nofollow noreferrer">redirection</a>, so it is consumed by the shell before kubectl even runs.</p>
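<p>A quick way to see this without kubectl (the <code>printargs</code> function is a stand-in of my own, not part of kubectl): the shell strips the redirection before the command runs, and everything after <code>--</code> arrives as plain positional arguments, so an option placed there is never parsed as an option.</p>

```shell
# printargs stands in for kubectl: it prints each argument it receives.
printargs() { printf '%s\n' "$@"; }

# The shell consumes '> out.txt' itself; '--dry-run=client' sits after '--',
# so it reaches the command as a positional argument, not as an option.
printargs --image=nginx --restart=Never --command -- env --dry-run=client > out.txt
cat out.txt
```

<p>The output lists <code>--dry-run=client</code> after the <code>--</code> separator, i.e. it would be handed to <code>env</code> inside the container rather than parsed by kubectl, which is why only the first ordering works.</p>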
<p>I have made a HA Kubernetes cluster. FIrst I added a node and joined the other node as master role. I basically did the multi etcd set up. This worked fine for me. I did the fail over testing which also worked fine. Now the problem is once I am done working, I drained and deleted the other node and then I shut down the other machine( a VM on GCP). But then my kubectl commands dont work... Let me share the steps:</p> <p>kubectl get node(when multi node is set up)</p> <pre><code>NAME STATUS ROLES AGE VERSION instance-1 Ready &lt;none&gt; 17d v1.15.1 instance-3 Ready &lt;none&gt; 25m v1.15.1 masternode Ready master 18d v1.16.0 </code></pre> <p>kubectl get node ( when I shut down my other node)</p> <pre><code>root@masternode:~# kubectl get nodes The connection to the server k8smaster:6443 was refused - did you specify the right host or port? </code></pre> <p>Any clue?</p>
<p>After rebooting the server, you need to do the steps below (the kubelet requires swap to be disabled, and swap comes back on after a reboot unless it is removed from <code>/etc/fstab</code>):</p> <ol> <li><p>sudo -i</p> </li> <li><p>swapoff -a</p> </li> <li><p>exit</p> </li> <li><p>strace -eopenat kubectl version</p> </li> </ol>
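<p>To avoid repeating <code>swapoff -a</code> after every reboot, the swap entries can be commented out of fstab. A small sketch (the function name, the <code>.bak</code> suffix, and the default path are my own choices, not from the original answer; try it on a copy of the file first):</p>

```shell
# Comment out swap entries so swap stays disabled across reboots.
# Takes the fstab path as an argument (defaults to /etc/fstab) and
# keeps a .bak copy of the original file.
disable_swap_in_fstab() {
  fstab="${1:-/etc/fstab}"
  sed -i.bak -e '/swap/ s/^[^#]/#&/' "$fstab"
}

# disable_swap_in_fstab   # run as root against the real /etc/fstab
```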
<p>I have been trying to deploy a Kubernetes cluster in Digital Ocean. Everything seems to work except when I try to apply the tls certificates. I have been following these steps, but with <code>Nginx Ingress Controller</code> v1.0.0 and <code>cert-manager</code> v1.5.0.</p> <p>I have two urls, let's say <code>api.example.com</code> and <code>www.example.com</code></p> <p>Checking the challenge I saw <code>Waiting for HTTP-01 challenge propagation: failed to perform self check GET request...</code></p> <p>I tried adding the following annotations to the ingress:</p> <pre><code>kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot; </code></pre> <p>Or using this service as a workaround:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ingress-nginx namespace: ingress-nginx annotations: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: &quot;true&quot; service.beta.kubernetes.io/do-loadbalancer-hostname: &quot;www.example.com&quot; labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx spec: #CHANGE/ADD THIS externalTrafficPolicy: Cluster type: LoadBalancer selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx ports: - name: http port: 80 targetPort: http - name: https port: 443 targetPort: https </code></pre> <p>If I go to the URL challenge I am able to see the hash, but I am stuck, I am not sure why it is failing or the steps to solve this.</p>
<p>As <a href="https://stackoverflow.com/users/2853555/agusgambina">agusgambina</a> has mentioned in the comment, the problem is solved:</p> <blockquote> <p>I was able to make this work, first I need to get the load balancer id executing <code> k describe svc ingress-nginx-controller --namespace=ingress-nginx</code> and then pasting in the annotation <code>kubernetes.digitalocean.com/load-balancer-id: “xxxx-xxxx-xxxx-xxxx-xxxxx”</code> thanks for your comments, it helped me to solve the issue.</p> </blockquote> <p>This problem is also described <a href="https://www.digitalocean.com/community/questions/issue-with-waiting-for-http-01-challenge-propagation-failed-to-perform-self-check-get-request-from-acme-challenges" rel="nofollow noreferrer">here</a>, and there is also a tutorial:<br /> <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes#step-2-%E2%80%94-setting-up-the-kubernetes-nginx-ingress-controller" rel="nofollow noreferrer">How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes</a>.</p>
<p>I was recently trying to create a docker container and connect it with my SQLDeveloper but I started facing some strange issues. I downloaded the docker image using below pull request:</p> <pre><code>docker pull store/oracle/database-enterprise:12.2.0.1-slim </code></pre> <p>then I started the container from my docker-desktop using port 1521. The container started with a warning.</p> <p><img src="https://i.stack.imgur.com/YKUXA.png" alt="enter image description here" /></p> <p>terminal message:</p> <pre><code>docker run -d -it -p 1521:1521 --name oracle store/oracle/database-enterprise:12.2.0.1-slim WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested 5ea14c118397ce7ef2880786ac1fac061e8c92f9b09070edffe365653dcc03af </code></pre> <p>Now when I try connect to db using below command :</p> <pre><code>docker exec -it 5ea14c118397 bash -c &quot;source /home/oracle/.bashrc; sqlplus /nolog&quot; SQL&gt; connect sys as sysdba; Enter password: ERROR: ORA-12547: TNS:lost contact </code></pre> <p>it shows this message, PASSWORD I USE IS Oradoc_db1.</p> <p>Now after seeing some suggestions I tried using the below command for connecting to sqlplus:</p> <pre><code> docker exec -it f888fa9d0247 bash -c &quot;source /home/oracle/.bashrc; sqlplus / as sysdba&quot; SQL*Plus: Release 12.2.0.1.0 Production on Mon Sep 6 06:15:58 2021 Copyright (c) 1982, 2016, Oracle. All rights reserved. ERROR: ORA-12547: TNS:lost contact </code></pre> <p>I also tried changing permissions of oracle file in $ORACLE_HOME as well for execution permissions as well but it didn't work.</p> <p>Please help me out as I am stuck and don't know what to do.</p>
<p>There are two issues here:</p> <ol> <li>Oracle Database is not supported on ARM processors, only Intel. See here: <a href="https://github.com/oracle/docker-images/issues/1814" rel="nofollow noreferrer">https://github.com/oracle/docker-images/issues/1814</a></li> <li>Oracle Database Docker images are only supported with Oracle Linux 7 or Red Hat Enterprise Linux 7 as the host OS. See here: <a href="https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance" rel="nofollow noreferrer">https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance</a></li> </ol> <blockquote> <p>Oracle Database ... is supported for Oracle Linux 7 and Red Hat Enterprise Linux (RHEL) 7. For more details please see My Oracle Support note: <strong>Oracle Support for Database Running on Docker (Doc ID 2216342.1)</strong></p> </blockquote> <p>The referenced My Oracle Support Doc ID goes on to say that the database binaries in their Docker image are built specifically for Oracle Linux hosts, and will also work on Red Hat. That's it.</p> <p>Linux being what it is (flexible), lots of people have gotten the images to run on other flavors like Ubuntu with a bit of creativity, but only on x86 processors and even then the results are not guaranteed by Oracle: you won't be able to get support or practical advice when (and it's always <em>when</em>, not <em>if</em> in IT) things don't work as expected. You might not even be able to <em>tell</em> when things aren't working as they should. This is a case where creativity is not particularly rewarded; if you want it to work and get meaningful help, my advice is to use the <em>supported</em> hardware architecture and operating system version. Anything else is a complete gamble.</p>
<p>I see a lot of heavy documentation online related to Kubernetes deployment but still can't find the definition of <code>0/0</code>.</p> <pre><code>&gt; $ k get deployment NAME READY UP-TO-DATE AVAILABLE AGE async-handler-redis-master 1/1 1 1 211d bbox-inference-4k-pilot-2d-boxes 0/0 0 0 148d </code></pre> <p>What exactly does it mean to be <code>0/0</code>? It's deployed but not ready? Why is it not ready? How do I make this deployment <code>READY</code>?</p>
<p>It means the replica count of your deployment is 0. In other words, you don't have any pods under this deployment, so <code>0/0</code> means 0 out of 0 pods are ready.</p> <p>You can scale it up with:</p> <pre><code>kubectl scale deployment &lt;deployment-name&gt; --replicas=1 </code></pre>
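<p>Equivalently, you can set the replica count declaratively in the Deployment manifest and re-apply it. A minimal sketch, using the deployment name from your output (the rest of your existing spec stays as it is):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bbox-inference-4k-pilot-2d-boxes
spec:
  replicas: 1   # was 0, which is what produces READY 0/0
  # ...rest of the existing deployment spec unchanged
```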
<p>In the past I've installed multiple Nginx ingress controllers using:</p> <pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update helm install ingress-nginx-01 ingress-nginx/ingress-nginx </code></pre> <p>and could have multiple.</p> <p>Now I'm getting this error when I try to install another:</p> <pre><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: IngressClass &quot;nginx&quot; in namespace &quot;&quot; exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key &quot;meta.helm.sh/release-name&quot; must equal &quot;ingress-nginx-02&quot;: current value is &quot;ingress-nginx-01&quot;; annotation validation error: key &quot;meta.helm.sh/release-namespace&quot; must equal &quot;ingress-02&quot;: current value is &quot;ingress-01&quot; </code></pre>
<p>You have to set the <strong>class name</strong> while installing each additional Nginx ingress controller.</p> <p>For example:</p> <pre><code>helm install stable/nginx-ingress --set controller.ingressClass=gce --namespace kube-system --set controller.replicaCount=2 --set rbac.create=false helm install stable/nginx-ingress --set controller.ingressClass=nginx --namespace kube-system --set controller.replicaCount=2 --set rbac.create=false helm install stable/nginx-ingress --set controller.ingressClass=third --namespace kube-system --set controller.replicaCount=2 --set rbac.create=false </code></pre> <p>Depending on your Helm version, you can pass the release name as you did (<code>ingress-nginx-01</code>, <code>ingress-nginx-02</code>), but the main thing is the class name: <code>--set controller.ingressClass=gce</code></p> <p>as the error says:</p> <pre><code>install: IngressClass &quot;nginx&quot; in namespace &quot;&quot; exists </code></pre> <p><strong>Multiple Ingress controllers</strong></p> <p>If you're running multiple ingress controllers or running on a cloud provider that natively handles ingress such as <strong>GKE</strong>, you need to specify the annotation <code>kubernetes.io/ingress.class: &quot;nginx&quot;</code> in all ingresses that you would like the <code>ingress-nginx</code> controller to claim.</p> <p>For instance,</p> <pre><code>metadata: name: foo annotations: kubernetes.io/ingress.class: &quot;gce&quot; </code></pre> <p>will target the <strong>GCE controller</strong>, forcing the Nginx controller to ignore it, while an annotation like</p> <pre><code>metadata: name: foo annotations: kubernetes.io/ingress.class: &quot;nginx&quot; </code></pre> <p>will target the Nginx controller, forcing the GCE controller to ignore it.</p> <p>Example: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/</a></p> <p>Ref: <a href="https://vincentlauzon.com/2018/11/28/understanding-multiple-ingress-in-aks/" rel="nofollow noreferrer">https://vincentlauzon.com/2018/11/28/understanding-multiple-ingress-in-aks/</a></p>
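<p>Note that on current Kubernetes versions the <code>kubernetes.io/ingress.class</code> annotation is deprecated in favour of the <code>spec.ingressClassName</code> field on <code>networking.k8s.io/v1</code> Ingress objects, which serves the same routing purpose. A sketch (the host and backend service names here are illustrative, not from your setup):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
spec:
  ingressClassName: nginx      # matches the class of the controller that should claim this Ingress
  rules:
  - host: foo.example.com      # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-service  # illustrative backend service
            port:
              number: 80
```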
<p>I am using this <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/multibases/README.md" rel="noreferrer">example</a>:</p> <pre><code>├── base │   ├── kustomization.yaml │   └── pod.yaml ├── dev │   └── kustomization.yaml ├── kustomization.yaml ├── production │   └── kustomization.yaml └── staging └── kustomization.yaml </code></pre> <p>and in the <code>kustomization.yaml</code> file in the root:</p> <pre><code>resources: - ./dev - ./staging - ./production </code></pre> <p>I also have the image transformer code in the <code>dev, staging, production</code> kustomization.yaml files:</p> <pre><code>images: - name: my-app newName: gcr.io/my-platform/my-app </code></pre> <p>To build a single deployment manifest, I use:</p> <pre><code>(cd dev &amp;&amp; kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 &amp;&amp; kustomize build .) </code></pre> <p>which simply works!</p> <p>To build the deployment manifest for all overlays (dev, staging, production), I use:</p> <pre><code>(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 &amp;&amp; kustomize build .) </code></pre> <p>which uses the <code>kustomization.yaml</code> in the root, which contains all resources (dev, staging, production).</p> <p>It does work and the final build is printed on the console, but without the image tag.</p> <p>It seems like <code>kustomize edit set image</code> only updates the <code>kustomization.yaml</code> of the current dir.</p> <p>Is there anything which can be done to handle this scenario in an easy and efficient way so the final output contains the image tag as well for all deployments?</p> <p><a href="https://github.com/D-GC/kustomize-multibase" rel="noreferrer">To test please use this repo</a></p>
<p>It took some time to realise what happens here. I'll explain step by step what happens and how it should work.</p> <h2>What happens</h2> <p>Firstly I re-created the same structure:</p> <pre><code>$ tree . ├── base │   ├── kustomization.yaml │   └── pod.yaml ├── dev │   └── kustomization.yaml ├── kustomization.yaml └── staging └── kustomization.yaml </code></pre> <p>When you run this command for a single deployment:</p> <pre><code>(cd dev &amp;&amp; kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 &amp;&amp; kustomize build .) </code></pre> <p>you change the working directory to <code>dev</code>, manually override the image <code>my-app</code> to <code>gcr.io/my-platform/my-app</code>, add the tag <code>0.0.2</code>, and then render the deployment.</p> <p>The thing is, the previously added <code>transformer code</code> gets overridden by the command above. You can remove the <code>transformer code</code>, run the command above, and get the same result. After running the command you will find that your <code>dev/kustomization.yaml</code> looks like:</p> <pre><code>resources: - ./../base namePrefix: dev- apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization images: - name: my-app newName: gcr.io/my-platform/my-app newTag: 0.0.2 </code></pre> <p><strong>Then</strong> what happens when you run this command from the main directory:</p> <pre><code>(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 &amp;&amp; kustomize build .) </code></pre> <p><code>kustomize</code> first goes to the overlays and applies the <code>transformer code</code> located in each overlay's <code>kustomization.yaml</code>. When this part is finished, the image name is <strong>not</strong> <code>my-app</code>, but <code>gcr.io/my-platform/my-app</code>.</p> <p>At this point the <code>kustomize edit</code> command tries to find an image with the name <code>my-app</code>, can't do so, and therefore does NOT apply the <code>tag</code>.</p> <h2>What to do</h2> <p>You need to use the transformed image name if you run <code>kustomize edit</code> in the main working directory:</p> <pre><code>$ kustomize edit set image gcr.io/my-platform/my-app=*:0.0.4 &amp;&amp; kustomize build . apiVersion: v1 kind: Pod metadata: labels: app: my-app name: dev-myapp-pod spec: containers: - image: gcr.io/my-platform/my-app:0.0.4 name: my-app --- apiVersion: v1 kind: Pod metadata: labels: app: my-app name: stag-myapp-pod spec: containers: - image: gcr.io/my-platform/my-app:0.0.4 name: my-app </code></pre>
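<p>If you'd rather not rely on <code>kustomize edit</code> at all, you can also pin the tag directly in each overlay's <code>kustomization.yaml</code> — the same file the edit command generates. A sketch for the dev overlay (the other overlays follow the same pattern):</p>

```yaml
# dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ./../base
namePrefix: dev-
images:
- name: my-app
  newName: gcr.io/my-platform/my-app
  newTag: 0.0.4   # pin the tag here instead of via `kustomize edit set image`
```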
<p>I am using the <a href="https://www.npmjs.com/package/memory-cache" rel="nofollow noreferrer">https://www.npmjs.com/package/memory-cache</a> package for caching purposes. I've given a time value to expire the cached data, but I need to create a route using express so the cache can be cleared forcefully in case of any issue. Is there any way to hit all server pods and clear the server cache? Right now, when I hit that route it just clears the data for the one pod the request goes to.</p>
<p>You can use a <strong>headless service</strong> in this case, which will return the IPs of the pods running behind the service.</p> <p><strong>Or else</strong></p> <p>you can use the command:</p> <p><code>kubectl get endpoints &lt;your-service&gt;</code>, which provides a list of <code>IPs</code> that you can use to route to each of the pods. Once you get the IPs, you can hit each pod one by one using code or a command. A headless service would be the better option.</p> <p>Example : <a href="https://dev.to/kaoskater08/building-a-headless-service-in-kubernetes-3bk8" rel="nofollow noreferrer">https://dev.to/kaoskater08/building-a-headless-service-in-kubernetes-3bk8</a></p>
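<p>A headless service is just a normal Service with <code>clusterIP: None</code>; DNS for it resolves to every pod IP instead of a single virtual IP. A minimal sketch — the service name, label selector, and port here are illustrative and must match your actual deployment:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None      # this is what makes the service headless
  selector:
    app: my-app        # must match your pods' labels
  ports:
  - port: 3000         # the port your express app listens on
```

<p>Your cache-clearing code can then resolve <code>my-app-headless.&lt;namespace&gt;.svc.cluster.local</code> to get every pod IP and call the clear-cache route on each one.</p>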
<p>I have set up a custom docker image registry on Gitlab and AKS for some reason fails to pull the image from there.<br /> The error being thrown is:</p> <pre><code>Failed to pull image &quot;{registry}/{image}:latest&quot;: rpc error: code = FailedPrecondition desc = failed to pull and unpack image &quot;{registry}/{image}:latest&quot;: failed commit on ref &quot;layer-sha256:e1acddbe380c63f0de4b77d3f287b7c81cd9d89563a230692378126b46ea6546&quot;: &quot;layer-sha256:e1acddbe380c63f0de4b77d3f287b7c81cd9d89563a230692378126b46ea6546&quot; failed size validation: 0 != 27145985: failed precondition </code></pre> <p>What is interesting is that the image does not have the layer with id</p> <pre><code>sha256:e1acddbe380c63f0de4b77d3f287b7c81cd9d89563a230692378126b46ea6546 </code></pre> <p>Perhaps something is cached on the AKS side? I deleted the pod along with the deployment before redeploying.</p> <p>I couldn't find much about this kind of error and I have no idea what may be causing it. Pulling the same image from my local docker environment works flawlessly.<br /> Any tip would be much appreciated!</p>
<p>• You can try scaling up the registry to run on all nodes. The Kubernetes controller tries to be smart and routes node requests internally, instead of sending traffic to the loadbalancer IP. The issue, though, is that if there is no registry service on that node, the packets go nowhere. So, scale up or route through a non-AKS LB.</p> <p>• Also, clean the image layer cache folder in ${containerd folder}/io.containerd.content.v1.content/ingest. Containerd does not clean this cache automatically when some layer data is broken, so try purging the contents of that path.</p> <p>• This might be a TCP network connection issue between the AKS cluster and the docker image registry on Gitlab. You can try using a proxy and configuring it to close the connection after ‘X’ bytes of data are transferred: the retry of the pull starts over at 0% for the layer, which then results in the same error, because after some time the connection closes again and the layer is once more not pulled completely. So, I would recommend using a registry located near your cluster to get higher throughput.</p> <p>• Also try restarting the communication pipeline between the AKS cluster and the docker image registry on Gitlab; it fixes this issue for the time being, until it re-occurs.</p> <p>Please find the below link for more information: -</p> <p><a href="https://docs.gitlab.com/ee/user/packages/container_registry/" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/packages/container_registry/</a></p>
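<p>For the second bullet, a sketch of locating and purging the ingest cache on an affected node. This assumes the default containerd root of <code>/var/lib/containerd</code>, which may differ on your nodes — check your containerd config first. The destructive command is left commented out:</p>

```shell
# Assumes the default containerd root; override CONTAINERD_ROOT if your nodes use a custom path.
CONTAINERD_ROOT="${CONTAINERD_ROOT:-/var/lib/containerd}"
INGEST_DIR="$CONTAINERD_ROOT/io.containerd.content.v1.content/ingest"
echo "Ingest cache directory: $INGEST_DIR"
# On the affected node, purge the partially-pulled layer data, then retry the pull:
# sudo rm -rf "$INGEST_DIR"/*
```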