<p>Currently I am setting up Kubernetes in a 1 master / 2 node environment.</p> <p>I successfully initialized the master and added the nodes to the cluster:</p> <p><a href="https://i.stack.imgur.com/w4lXG.png" rel="noreferrer">kubectl get nodes</a></p> <p>When I joined the nodes to the cluster, the kube-proxy pod started successfully, but the kube-flannel pod gets an error and runs into a CrashLoopBackOff.</p> <p><strong><em>flannel-pod.log:</em></strong></p> <pre><code>I0613 09:03:36.820387 1 main.go:475] Determining IP address of default interface
I0613 09:03:36.821180 1 main.go:488] Using interface with name ens160 and address 172.17.11.2
I0613 09:03:36.821233 1 main.go:505] Defaulting external address to interface address (172.17.11.2)
I0613 09:03:37.015163 1 kube.go:131] Waiting 10m0s for node controller to sync
I0613 09:03:37.015436 1 kube.go:294] Starting kube subnet manager
I0613 09:03:38.015675 1 kube.go:138] Node controller sync successful
I0613 09:03:38.015767 1 main.go:235] Created subnet manager: Kubernetes Subnet Manager - caasfaasslave1.XXXXXX.local
I0613 09:03:38.015828 1 main.go:238] Installing signal handlers
I0613 09:03:38.016109 1 main.go:353] Found network config - Backend type: vxlan
I0613 09:03:38.016281 1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0613 09:03:38.016872 1 main.go:280] Error registering network: failed to acquire lease: node "caasfaasslave1.XXXXXX.local" pod cidr not assigned
I0613 09:03:38.016966 1 main.go:333] Stopping shutdownHandler...
</code></pre> <p>On the node, I can verify that the podCIDR is available:</p> <pre><code>kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
172.17.12.0/24
</code></pre> <p>In the master's kube-controller-manager manifest, the pod CIDR is also there:</p> <pre><code>[root@caasfaasmaster manifests]# cat kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --leader-elect=true
    - --controllers=*,bootstrapsigner,tokencleaner
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --address=127.0.0.1
    - --use-service-account-credentials=true
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --allocate-node-cidrs=true
    - --cluster-cidr=172.17.12.0/24
    - --node-cidr-mask-size=24
    env:
    - name: http_proxy
      value: http://ntlmproxy.XXXXXX.local:3154
    - name: https_proxy
      value: http://ntlmproxy.XXXXXX.local:3154
    - name: no_proxy
      value: .XXXXX.local,172.17.11.0/24,172.17.12.0/24
    image: k8s.gcr.io/kube-controller-manager-amd64:v1.10.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: ca-certs-etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
</code></pre> <p>(XXXXX is used for anonymization.)</p> <p>I initialized the master with the following kubeadm command (which also went through without any errors):</p> <pre><code>kubeadm init --pod-network-cidr=172.17.12.0/24 --service-cidr=172.17.11.129/25 \
--service-dns-domain=dcs.XXXXX.local </code></pre> <p>Does anyone know what could cause my issues and how to fix them?</p> <pre><code>NAMESPACE     NAME                                                  READY   STATUS             RESTARTS   AGE   IP            NODE
kube-system   etcd-caasfaasmaster.XXXXXX.local                      1/1     Running            0          16h   172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-apiserver-caasfaasmaster.XXXXXX.local            1/1     Running            1          16h   172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-controller-manager-caasfaasmaster.XXXXXX.local   1/1     Running            0          16h   172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-dns-75c5968bf9-qfh96                             3/3     Running            0          16h   172.17.12.2   caasfaasmaster.XXXXXX.local
kube-system   kube-flannel-ds-4b6kf                                 0/1     CrashLoopBackOff   205        16h   172.17.11.2   caasfaasslave1.XXXXXX.local
kube-system   kube-flannel-ds-j2fz6                                 0/1     CrashLoopBackOff   191        16h   172.17.11.3   caasfassslave2.XXXXXX.local
kube-system   kube-flannel-ds-qjd89                                 1/1     Running            0          16h   172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-proxy-h4z54                                      1/1     Running            0          16h   172.17.11.3   caasfassslave2.XXXXXX.local
kube-system   kube-proxy-sjwl2                                      1/1     Running            0          16h   172.17.11.2   caasfaasslave1.XXXXXX.local
kube-system   kube-proxy-zc5xh                                      1/1     Running            0          16h   172.17.11.1   caasfaasmaster.XXXXXX.local
kube-system   kube-scheduler-caasfaasmaster.XXXXXX.local            1/1     Running            0          16h   172.17.11.1   caasfaasmaster.XXXXXX.local
</code></pre>
<p>"Failed to acquire lease" simply means the pod didn't get a podCIDR. This happened to me as well: the manifest on the master node showed the podCIDR, but it still wasn't working and flannel kept going into CrashLoopBackOff. This is what I did to fix it.</p> <p>From the master node, first find out your cluster CIDR:</p> <pre><code>sudo cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep -i cluster-cidr
</code></pre> <p>Output:</p> <pre><code>- --cluster-cidr=172.168.10.0/24
</code></pre> <p>Then run the following from the master node:</p> <pre><code>kubectl patch node slave-node-1 -p '{"spec":{"podCIDR":"172.168.10.0/24"}}'
</code></pre> <p>Here <code>slave-node-1</code> is the node where acquiring the lease is failing, and the podCIDR is the CIDR you found with the previous command.</p> <p>Hope this helps.</p>
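<p>The patch step can be scripted; here is a minimal sketch using the example node name and CIDR from above (the final <code>kubectl patch</code> call naturally needs a live cluster, so it is shown as a comment):</p>

```shell
# Build the podCIDR patch for the node that fails to acquire a lease.
# "slave-node-1" and the CIDR are the example values from the answer above.
NODE="slave-node-1"
CIDR="172.168.10.0/24"
PATCH=$(printf '{"spec":{"podCIDR":"%s"}}' "$CIDR")
echo "$PATCH"
# On a live cluster you would then run:
#   kubectl patch node "$NODE" -p "$PATCH"
```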
<p>I've done the installation of the <code>Nginx Ingress Controller</code> by using this guide (<a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a>) in 2 steps:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
</code></pre> <p>It created 2 entities for me:</p> <p><a href="https://i.stack.imgur.com/rYaJl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rYaJl.jpg" alt="enter image description here"></a></p> <p>The first entity is called <code>ingress-nginx</code> in the <code>service &amp; ingress</code> section, and the second one:</p> <p><a href="https://i.stack.imgur.com/JUvL9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JUvL9.png" alt="enter image description here"></a></p> <p>is inside the workloads section, called <code>nginx-ingress-controller</code>. The next step of my configuration process was creating an Ingress resource:</p> <p><strong>ingress.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx-ingress-controller"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    kubernetes.io/ingress.global-static-ip-name: my static ip
  name: booknotes-ingress
  namespace: 'default'
spec:
  rules:
  - host: www.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: booknotes-service
          servicePort: 80
</code></pre> <p>Then I exposed <code>booknotes-service</code> from my <code>booknotes</code> Deployment with the <code>Cluster IP</code> type. But when I go to www.domain.com it is in a pending state. What have I done wrong? I also don't really understand the whole flow from the request to my pod in this case.</p>
<p>Have you bound the external IP address of your ingress controller to your DNS? Can you resolve the DNS name? Try <code>nslookup www.domain.com</code> and check whether the IP address is the one of your ingress controller.</p> <p>Once you resolve the problem with the DNS, you may get either a 404 or a 502 as a response, which means you are resolving but not passing traffic to the service. Update your question with that and we can continue.</p> <p>PS: remove the <code>/*</code> in the path definition of the ingress resource. Just <code>/</code>.</p>
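<p>For reference, with that change the rules block of the ingress from the question would look like this (only the <code>path</code> is different; host and service names are taken from the question):</p>

```yaml
spec:
  rules:
  - host: www.domain.com
    http:
      paths:
      - path: /          # "/" instead of "/*"
        backend:
          serviceName: booknotes-service
          servicePort: 80
```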
<p>We use <code>prometheus (v: 1.7.0)</code> as the monitor for a <code>k8s (v: 1.10.11)</code> cluster. In k8s we have multiple <code>namespaces</code>. Is there a Prometheus metric that tells the <code>CPU</code> and <code>memory limit</code> in each <code>namespace</code>?</p> <p>Or, in other words: how do I find metrics in Prometheus that expose a ResourceQuota's limits.cpu and limits.memory?</p>
<p>There are no built-in CPU and memory limits in namespaces, but you can define them with <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">resource quotas</a>.</p> <p>So, you don't need Prometheus to get this information; you can just query the ResourceQuota objects through the API server.</p> <p>If you need this information in Prometheus, you can use the <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> Prometheus exporter, which exposes <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/resourcequota-metrics.md" rel="nofollow noreferrer">metrics about ResourceQuota objects</a>.</p>
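<p>Once kube-state-metrics is being scraped, the ResourceQuota objects appear under the <code>kube_resourcequota</code> metric, so queries like the following should work (the namespace name here is a placeholder):</p>

```promql
# hard limits defined in ResourceQuota objects, per namespace
kube_resourcequota{resource="limits.cpu", type="hard"}

# current usage against the memory limit in one namespace
kube_resourcequota{namespace="my-namespace", resource="limits.memory", type="used"}
```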
<p>I want to <a href="https://docs.docker.com/engine/reference/commandline/image_prune/" rel="noreferrer">prune</a> docker images, so I wrote a small Docker image using <code>node-docker-api</code> and I was able to test it locally with success.<br> After deploying the <code>DaemonSet</code> to Kubernetes, the pod fails to access the Docker socket:</p> <pre><code>Error: connect EACCES /var/run/docker.sock
</code></pre> <p>The <code>deployment.yaml</code> looks as follows:</p> <pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    name: docker-image-cleanup
  name: docker-image-cleanup
spec:
  template:
    metadata:
      labels:
        app: docker-image-cleanup
    spec:
      volumes:
        - name: docker-sock
          hostPath:
            path: "/var/run/docker.sock"
            type: File
        - name: docker-directory
          hostPath:
            path: "/var/lib/docker"
      containers:
        - name: docker-image-cleanup
          image: image:tag
          securityContext:
            privileged: true
          env:
            - name: PRUNE_INTERVAL_SECONDS
              value: "30"
            - name: PRUNE_DANGLING
              value: "true"
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-sock
              readOnly: false
            - mountPath: "/var/lib/docker"
              name: docker-directory
              readOnly: false
</code></pre> <p>Running AKS v1.13.10, if relevant.</p>
<p>There is no guarantee that your Kubernetes cluster is actually using Docker as its container engine. As there are many alternatives, like CRI-O and Kata Containers, your application/deployment should make no assumptions about the underlying container engine.</p> <p>Kubernetes takes care of cleaning up unused container images automatically. See the documentation on how to configure it, if you run the cluster yourself: <a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/" rel="noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/</a></p> <p>Aside from that, it looks like you have a simple permission problem with the socket: make sure the application in your cleanup container runs as root, or as a user with appropriate permissions to access the socket.</p>
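<p>For reference, the kubelet's image garbage collection is controlled by two thresholds (shown here with their documented defaults; on a managed offering such as AKS you usually cannot change these flags yourself):</p>

```
--image-gc-high-threshold=85   # disk usage percentage that triggers image garbage collection
--image-gc-low-threshold=80    # disk usage percentage the garbage collector tries to free down to
```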
<p><strong>Facing: fluentd log unreadable. it is excluded and would be examined next time</strong></p> <p>I have a simple configuration for a fluentd DaemonSet running in a Kubernetes setup.</p> <p>Fluentd version: <strong>fluentd-0.12.43</strong></p> <p>Below is my configuration.</p> <pre><code>&lt;source&gt;
  @type tail
  path /var/log/containers/sample*.log
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag sample.*
  format json
  read_from_head true
&lt;/source&gt;

&lt;match sample.**&gt;
  @type forward
  heartbeat_type tcp
  send_timeout 60s
  recover_wait 10s
  hard_timeout 60s
  &lt;server&gt;
    name worker-node2
    host 10.32.0.15
    port 24224
    weight 60
  &lt;/server&gt;
&lt;/match&gt;
</code></pre> <p>I am getting the warning below and NO logs are forwarded:</p> <blockquote> <p>2018-08-03 06:36:53 +0000 [warn]: /var/log/containers/samplelog-79bd66868b-t7xn9_logging1_fluentd-70e85c5d6328e7d.log unreadable. It is excluded and would be examined next time.</p> <p>2018-08-03 06:37:53 +0000 [warn]: /var/log/containers/samplelog-79bd66868b-t7xn9_logging1_fluentd-70e85c5bc89ab24.log unreadable.
It is excluded and would be examined next time.</p> </blockquote> <p><strong>Permission for log file:</strong></p> <pre><code>[root@k8s-master fluentd-daemonset]# ls -lrt /var/log/containers/
lrwxrwxrwx Jun 25 06:25 sample-77g68_kube-system_kube-proxy-9f3c3951c32ee.log -&gt; /var/log/pods/aa1f8d5b-746f-11e8-95c0-005056b9ff3a/sample/7.log
</code></pre> <p><strong>The DaemonSet YAML file has the mount instructions:</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging1
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    -----
    -----
    -----
      volumeMounts:
      - name: fluentd-config
        mountPath: /fluentd/etc/
      - name: varlog
        mountPath: /var/log
        readOnly: true
      - name: varlogpods
        mountPath: /var/log/pods
        readOnly: true
      - name: varlogcontainers
        mountPath: /var/log/containers
        readOnly: true
      - name: varlibdocker
        mountPath: /var/lib/docker
        readOnly: true
      - name: varlibdockercontainers
        mountPath: /var/lib/docker/containers
        readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: fluentd-config
        configMap:
          name: fluentd-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlogpods
        hostPath:
          path: /var/log/pods
      - name: varlogcontainers
        hostPath:
          path: /var/log/containers
      - name: varlibdocker
        hostPath:
          path: /var/lib/docker
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
</code></pre> <p>I have no clue why I am getting this warning, even though the <strong>permissions are correct</strong>, the <strong>fluentd version is correct</strong>, and the <strong>mount instructions are there in the Kubernetes DaemonSet</strong>.</p>
<p>I faced a similar issue. So, what actually happens is:<br><br> 1. Kubernetes creates symbolic links in /var/log/containers/ which are in turn symbolic links to files in /var/log/pods/, like:</p> <pre class="lang-sh prettyprint-override"><code>root@fluentd-forwarders-5bfzm:/home/fluent# ls -ltr /var/log/containers/consul-0_default_consul-c4dbf47bf46b4cacfb0db67885fdba73835e05b45b14ec7dc746cc2d5ed92ea3.log
lrwxrwxrwx. 1 root root 83 Oct 30 07:42 /var/log/containers/consul-0_default_consul-c4dbf47bf46b4cacfb0db67885fdba73835e05b45b14ec7dc746cc2d5ed92ea3.log -&gt; /var/log/pods/default_consul-0_2a206546-73b3-4d05-bd7a-0b307c8b24d1/consul/1628.log
</code></pre> <p><br> 2. The links in /var/log/pods are symbolic links to the log files on the host. In my setup I am using the /data/ directory of the host/node to store docker data:</p> <pre class="lang-sh prettyprint-override"><code>root@fluentd-forwarders-5bfzm:/home/fluent# ls -ltr /var/log/pods/default_consul-0_2a206546-73b3-4d05-bd7a-0b307c8b24d1/consul/1629.log
lrwxrwxrwx. 1 root root 162 Oct 30 07:47 /var/log/pods/default_consul-0_2a206546-73b3-4d05-bd7a-0b307c8b24d1/consul/1629.log -&gt; /data/docker/containers/478642a56a6e15e7398391a2526ec52ad1aa24341e95aa32063163da11f4cc8b/478642a56a6e15e7398391a2526ec52ad1aa24341e95aa32063163da11f4cc8b-json.log
</code></pre> <p><br></p> <p>So, in my deployment.yaml I had to mount /data/docker/containers rather than <code>/var/lib/docker/containers</code> to solve the issue, i.e.:</p> <pre class="lang-sh prettyprint-override"><code>volumeMounts:
- mountPath: /var/log
  name: varlog
- mountPath: /data/docker/containers
  name: datadockercontainers
  readOnly: true
- mountPath: /fluentd/etc
  name: config-path
</code></pre>
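<p>A quick way to see which host directory actually needs to be mounted is to resolve the whole symlink chain with <code>readlink -f</code>. A self-contained sketch with a throwaway link (on a node you would point it at <code>/var/log/containers/&lt;pod&gt;.log</code> instead):</p>

```shell
# Create a file plus a symlink to it, then resolve the link to its real path,
# the same way /var/log/containers/*.log resolves to the docker data dir.
mkdir -p /tmp/logdemo/data
echo "log line" > /tmp/logdemo/data/real.log
ln -sf /tmp/logdemo/data/real.log /tmp/logdemo/container.log
RESOLVED=$(readlink -f /tmp/logdemo/container.log)
echo "$RESOLVED"
# On a node: readlink -f /var/log/containers/<pod>.log prints the directory
# (e.g. /data/docker/containers/...) that the fluentd DaemonSet must mount.
```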
<p>Not sure why this is happening but we're seeing old replicasets with active pods running in our Kubernetes cluster despite the fact the deployments they are attached to have been long deleted (up to 82 days old). Our deployments have <code>spec.replicas</code> set to a max of 2, however we're seeing up to 6/8 active pods in these deployments. </p> <p>We are currently running k8s version 1.14.6. Also below is a sample deployment</p> <pre><code>{ "kind": "Deployment", "apiVersion": "extensions/v1beta1", "metadata": { "name": "xxxxxxxxxxxxxxxx", "namespace": "default", "annotations": { "deployment.kubernetes.io/revision": "15", } }, "spec": { "replicas": 2, "selector": { "matchLabels": { "app": "xxxxxxxx" } }, "template": { "spec": { "containers": [ { "name": "xxxxxxxx", "image": "xxxxxxxx", "ports": [ { "containerPort": 80, "protocol": "TCP" } ], "resources": {}, "imagePullPolicy": "Always" } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "securityContext": {}, "schedulerName": "default-scheduler" } }, "strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": 1, "maxSurge": 1 } }, "minReadySeconds": 10, "revisionHistoryLimit": 2147483647, "progressDeadlineSeconds": 2147483647 }, "status": { "observedGeneration": 15, "replicas": 2, "updatedReplicas": 2, "readyReplicas": 2, "availableReplicas": 2, "conditions": [ { "type": "Available", "status": "True", "reason": "MinimumReplicasAvailable", "message": "Deployment has minimum availability." } ] } } </code></pre>
<p>Changes to label selectors make existing pods fall out of the ReplicaSet's scope, so if you change the labels and the label selector, the pods are no longer "controlled" by the ReplicaSet.</p> <p>If you run <code>kubectl get pods &lt;pod_name&gt; -o yaml</code>, where <code>&lt;pod_name&gt;</code> is a pod created by the ReplicaSet, you will see the owner reference. However, if you change the labels and run the same command, the owner reference is no longer visible, because the pod fell out of the ReplicaSet's scope.</p> <p>Also, if you create bare pods and they happen to have the same labels as the ReplicaSet, they will be acquired by the ReplicaSet. This happens because an RS is not limited to pods created by its template: it can acquire pods matching its selectors, and it will terminate them if the desired number specified in the RS manifest is exceeded.</p> <p>If a bare pod is created before the RS with the same labels, the RS will count this pod and deploy only the number of pods required to achieve the desired number of replicas.</p> <p>You can also remove a ReplicaSet without affecting any of its pods by using <code>kubectl delete</code> with the <code>--cascade=false</code> option.</p>
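<p>The owner-reference check can be illustrated without a cluster. The sketch below works on a saved manifest (the file path and names are made up; on a live cluster you would use <code>kubectl get pod &lt;pod_name&gt; -o json</code> instead):</p>

```shell
# A minimal pod manifest as the API server would return it.
cat > /tmp/pod.json <<'EOF'
{"metadata": {"name": "web-abc123-x7z9q",
              "labels": {"app": "web"},
              "ownerReferences": [{"kind": "ReplicaSet", "name": "web-abc123"}]}}
EOF
# Extract the controller kind; an empty result would mean the pod has
# fallen out of the ReplicaSet's scope (e.g. after a label change).
KIND=$(python3 -c 'import json; d = json.load(open("/tmp/pod.json")); print(d["metadata"].get("ownerReferences", [{}])[0].get("kind", ""))')
echo "$KIND"
```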
<p>I am currently using the bitnami/kafka image (<a href="https://hub.docker.com/r/bitnami/kafka" rel="nofollow noreferrer">https://hub.docker.com/r/bitnami/kafka</a>) and deploying it on kubernetes.</p> <ul> <li>kubernetes master: 1</li> <li>kubernetes workers: 3</li> </ul> <p>Within the cluster the other applications are able to find kafka. The problem occurs when trying to access the kafka container from outside the cluster. Reading around a little bit, I found that we need to set the property <code>advertised.listeners=PLAINTEXT://hostname:port_number</code> for external kafka clients.</p> <p>I am currently referencing "<a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/kafka</a>". Inside my values.yaml file I have added</p> <p><strong>values.yaml</strong></p> <ul> <li>advertisedListeners1: 10.21.0.191</li> </ul> <p>and <strong>statefulset.yaml</strong></p> <pre><code>- name: KAFKA_CFG_ADVERTISED_LISTENERS
  value: 'PLAINTEXT://{{ .Values.advertisedListeners }}:9092'
</code></pre> <p><strong>For a single kafka instance it is working fine.</strong></p> <p>But for a 3-node kafka cluster, I changed the configuration as below: <strong>values.yaml</strong></p> <ul> <li>advertisedListeners1: 10.21.0.191</li> <li>advertisedListeners2: 10.21.0.192</li> <li>advertisedListeners3: 10.21.0.193</li> </ul> <p>and <strong>Statefulset.yaml</strong></p> <pre><code>- name: KAFKA_CFG_ADVERTISED_LISTENERS
  {{- if $MY_POD_NAME := "kafka-0" }}
  value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
  {{- else if $MY_POD_NAME := "kafka-1" }}
  value: 'PLAINTEXT://{{ .Values.advertisedListeners2 }}:9092'
  {{- else if $MY_POD_NAME := "kafka-2" }}
  value: 'PLAINTEXT://{{ .Values.advertisedListeners3 }}:9092'
  {{- end }}
</code></pre> <p>The expected result is that all 3 kafka instances get the advertised.listeners property set to the worker nodes' IP addresses.</p> <p>example:</p> <ul> <li><p>kafka-0 --> 
"PLAINTEXT://10.21.0.191:9092"</p></li> <li><p>kafka-1 --> "PLAINTEXT://10.21.0.192:9092"</p></li> <li><p>kafka-2 --> "PLAINTEXT://10.21.0.193:9092"</p></li> </ul> <p>Currently only one kafka pod is up and running, and the other two are going into CrashLoopBackOff state.</p> <p>The other two pods show this error:</p> <pre><code>[2019-10-20 13:09:37,753] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-10-20 13:09:37,786] ERROR [KafkaServer id=1002] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: requirement failed: Configured end points 10.21.0.191:9092 in advertised listeners are already registered by broker 1001
    at scala.Predef$.require(Predef.scala:224)
    at kafka.server.KafkaServer$$anonfun$createBrokerInfo$2.apply(KafkaServer.scala:399)
    at kafka.server.KafkaServer$$anonfun$createBrokerInfo$2.apply(KafkaServer.scala:397)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.server.KafkaServer.createBrokerInfo(KafkaServer.scala:397)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:261)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:84)
    at kafka.Kafka.main(Kafka.scala)
</code></pre> <p>That means the logic applied in statefulset.yaml is not working. Can anyone help me resolve this?
</p> <p>Any help would be appreciated..</p> <p>The output of <code>kubectl get statefulset kafka -o yaml</code></p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: creationTimestamp: "2019-10-29T07:04:12Z" generation: 1 labels: app.kubernetes.io/component: kafka app.kubernetes.io/instance: kafka app.kubernetes.io/managed-by: Tiller app.kubernetes.io/name: kafka helm.sh/chart: kafka-6.0.1 name: kafka namespace: default resourceVersion: "12189730" selfLink: /apis/apps/v1/namespaces/default/statefulsets/kafka uid: d40cfd5f-46a6-49d0-a9d3-e3a851356063 spec: podManagementPolicy: Parallel replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/component: kafka app.kubernetes.io/instance: kafka app.kubernetes.io/name: kafka serviceName: kafka-headless template: metadata: creationTimestamp: null labels: app.kubernetes.io/component: kafka app.kubernetes.io/instance: kafka app.kubernetes.io/managed-by: Tiller app.kubernetes.io/name: kafka helm.sh/chart: kafka-6.0.1 name: kafka spec: containers: - env: - name: MY_POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: KAFKA_CFG_ZOOKEEPER_CONNECT value: kafka-zookeeper - name: KAFKA_PORT_NUMBER value: "9092" - name: KAFKA_CFG_LISTENERS value: PLAINTEXT://:$(KAFKA_PORT_NUMBER) - name: KAFKA_CFG_ADVERTISED_LISTENERS value: PLAINTEXT://10.21.0.191:9092 - name: ALLOW_PLAINTEXT_LISTENER value: "yes" - name: KAFKA_CFG_BROKER_ID value: "-1" - name: KAFKA_CFG_DELETE_TOPIC_ENABLE value: "false" - name: KAFKA_HEAP_OPTS value: -Xmx1024m -Xms1024m - name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MESSAGES value: "10000" - name: KAFKA_CFG_LOG_FLUSH_INTERVAL_MS value: "1000" - name: KAFKA_CFG_LOG_RETENTION_BYTES value: "1073741824" - name: KAFKA_CFG_LOG_RETENTION_CHECK_INTERVALS_MS value: "300000" - name: KAFKA_CFG_LOG_RETENTION_HOURS value: "168" - name: KAFKA_CFG_LOG_MESSAGE_FORMAT_VERSION - name: 
KAFKA_CFG_MESSAGE_MAX_BYTES value: "1000012" - name: KAFKA_CFG_LOG_SEGMENT_BYTES value: "1073741824" - name: KAFKA_CFG_LOG_DIRS value: /bitnami/kafka/data - name: KAFKA_CFG_DEFAULT_REPLICATION_FACTOR value: "1" - name: KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR value: "1" - name: KAFKA_CFG_TRANSACTION_STATE_LOG_REPLICATION_FACTOR value: "1" - name: KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM value: https - name: KAFKA_CFG_TRANSACTION_STATE_LOG_MIN_ISR value: "1" - name: KAFKA_CFG_NUM_IO_THREADS value: "8" - name: KAFKA_CFG_NUM_NETWORK_THREADS value: "3" - name: KAFKA_CFG_NUM_PARTITIONS value: "1" - name: KAFKA_CFG_NUM_RECOVERY_THREADS_PER_DATA_DIR value: "1" - name: KAFKA_CFG_SOCKET_RECEIVE_BUFFER_BYTES value: "102400" - name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES value: "104857600" - name: KAFKA_CFG_SOCKET_SEND_BUFFER_BYTES value: "102400" - name: KAFKA_CFG_ZOOKEEPER_CONNECTION_TIMEOUT_MS value: "6000" image: docker.io/bitnami/kafka:2.3.0-debian-9-r88 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 2 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 tcpSocket: port: kafka timeoutSeconds: 5 name: kafka ports: - containerPort: 9092 name: kafka protocol: TCP readinessProbe: failureThreshold: 6 initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 tcpSocket: port: kafka timeoutSeconds: 5 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bitnami/kafka name: data dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1001 runAsUser: 1001 terminationGracePeriodSeconds: 30 updateStrategy: type: RollingUpdate volumeClaimTemplates: - metadata: creationTimestamp: null name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 8Gi volumeMode: Filesystem status: phase: Pending status: collisionCount: 0 currentReplicas: 3 currentRevision: kafka-56ff499d74 observedGeneration: 1 readyReplicas: 1 replicas: 3 
updateRevision: kafka-56ff499d74 updatedReplicas: 3 </code></pre>
<p>I see you have some trouble with passing different environment variables to different pods in a StatefulSet.</p> <p>You are trying to achieve this using helm templates:</p> <pre><code>- name: KAFKA_CFG_ADVERTISED_LISTENERS
  {{- if $MY_POD_NAME := "kafka-0" }}
  value: 'PLAINTEXT://{{ .Values.advertisedListeners1 }}:9092'
  {{- else if $MY_POD_NAME := "kafka-1" }}
  value: 'PLAINTEXT://{{ .Values.advertisedListeners2 }}:9092'
  {{- else if $MY_POD_NAME := "kafka-2" }}
  value: 'PLAINTEXT://{{ .Values.advertisedListeners3 }}:9092'
  {{- end }}
</code></pre> <p>In the <a href="https://helm.sh/docs/chart_template_guide/" rel="nofollow noreferrer">helm template guide documentation</a> you can find this explanation:</p> <blockquote> <p>In Helm templates, a variable is a named reference to another object. It follows the form $name. Variables are assigned with a special assignment operator: :=.</p> </blockquote> <p>Now let's look at your code:</p> <pre><code>{{- if $MY_POD_NAME := "kafka-0" }}
</code></pre> <p>This is a variable assignment, not a comparison, and after this assignment the <code>if</code> statement evaluates the expression to <code>true</code>. That's why in your StatefulSet <code>yaml</code> manifest you see this as the output:</p> <pre><code>- name: KAFKA_CFG_ADVERTISED_LISTENERS
  value: PLAINTEXT://10.21.0.191:9092
</code></pre> <hr> <p>To make it work as expected, you shouldn't use helm templating. 
It's not going to work.</p> <p>One way to do it would be to create a separate environment variable for every kafka node and pass all of these variables to all pods, like this:</p> <pre><code>- env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: KAFKA_0
    value: 10.21.0.191
  - name: KAFKA_1
    value: 10.21.0.192
  - name: KAFKA_2
    value: 10.21.0.193
  # - name: KAFKA_CFG_ADVERTISED_LISTENERS
  #   value: PLAINTEXT://$MY_POD_NAME:9092
</code></pre> <p>and also create your own docker image with a modified starting script that will export the <code>KAFKA_CFG_ADVERTISED_LISTENERS</code> variable with the appropriate value depending on <code>MY_POD_NAME</code>.</p> <p>If you don't want to create your own image, you can create a <code>ConfigMap</code> with a modified <code>entrypoint.sh</code> and mount it in place of the old <code>entrypoint.sh</code> (you can also use any other file; just take a look <a href="https://github.com/bitnami/bitnami-docker-kafka/tree/master/2/debian-9" rel="nofollow noreferrer">here</a> for more information on how the kafka image is built).</p> <p>Mounting the <code>ConfigMap</code> looks like this:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test-container
    image: docker.io/bitnami/kafka:2.3.0-debian-9-r88
    volumeMounts:
    - name: config-volume
      mountPath: /entrypoint.sh
      subPath: entrypoint.sh
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: kafka-entrypoint-config
      defaultMode: 0744 # remember to add proper (executable) permissions
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-entrypoint-config
  namespace: default
data:
  entrypoint.sh: |
    #!/bin/bash
    # Here add modified entrypoint script
</code></pre> <p>Please let me know if it helped.</p>
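<p>If you go the modified-entrypoint route, the wrapper script only needs to map the pod ordinal to the right address. A minimal sketch of that logic, with the environment hard-coded here exactly as the StatefulSet env entries above would inject it:</p>

```shell
# Simulated environment, as injected by the StatefulSet's env entries.
MY_POD_NAME="kafka-1"
KAFKA_0="10.21.0.191"
KAFKA_1="10.21.0.192"
KAFKA_2="10.21.0.193"

# Strip everything up to the last "-" to get the pod ordinal,
# then look up the matching KAFKA_<ordinal> variable.
ORDINAL="${MY_POD_NAME##*-}"
eval "ADDR=\${KAFKA_${ORDINAL}}"
export KAFKA_CFG_ADVERTISED_LISTENERS="PLAINTEXT://${ADDR}:9092"
echo "$KAFKA_CFG_ADVERTISED_LISTENERS"
# A real entrypoint would now exec the original start script.
```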
<p>I am currently trying to implement a CI/CD pipeline using Docker, Kubernetes, and Jenkins. When I created the pipeline's Kubernetes deployment YAML file, I did not include the time stamp. I was only using the <code>latest</code> image in the YAML file. Regarding the <code>latest</code> pull I already had a discussion here; the following is the link to that discussion:</p> <p><a href="https://stackoverflow.com/questions/58539362/docker-image-not-pulling-latest-from-dockerhub-com-registry?noredirect=1#comment103401768_58539362">Docker image not pulling latest from dockerhub.com registry</a></p> <p>After this discussion, I included the time stamp in my deployment YAML like the following:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-kube-deployment
  labels:
    app: test-kube-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-kube-deployment
  template:
    metadata:
      labels:
        app: test-kube-deployment
      annotations:
        date: "+%H:%M:%S %d/%m/%y"
    spec:
      imagePullSecrets:
        - name: "regcred"
      containers:
        - name: test-kube-deployment-container
          image: spacestudymilletech010/spacestudykubernetes:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8085
              protocol: TCP
</code></pre> <p>Here I modified my script to include the time stamp by adding the following in the template:</p> <pre><code>annotations:
  date: "+%H:%M:%S %d/%m/%y"
</code></pre> <p>My service file looks like the following:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - port: 8085
      targetPort: 8085
      protocol: TCP
      name: http
  selector:
    app: test-kube-deployment
</code></pre> <p>My Jenkinsfile contains the following:</p> <pre><code>stage ('imagebuild') {
  steps {
    sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:latest /var/lib/jenkins/workspace/jpipeline/pipeline'
    sh 'docker login --username=&lt;my-username&gt; --password=&lt;my-password&gt;'
    sh 'docker 
push spacestudymilletech010/spacestudykubernetes:latest'
  }
}
stage ('Test Deployment') {
  steps {
    sh 'kubectl apply -f deployment/testdeployment.yaml'
    sh 'kubectl apply -f deployment/testservice.yaml'
  }
}
</code></pre> <p>But the deployment is still not pulling the latest image from the Dockerhub registry. How can I modify these scripts to resolve the latest-pull problem?</p>
<blockquote> <p>The default pull policy is <code>IfNotPresent</code> which causes the kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:</p> <ul> <li>set the <code>imagePullPolicy</code> of the container to <code>Always</code>.</li> <li>omit the <code>imagePullPolicy</code> and use <code>:latest</code> as the tag for the image to use.</li> <li>omit the <code>imagePullPolicy</code> and the tag for the image to use.</li> <li>enable the <code>AlwaysPullImages</code> admission controller.</li> </ul> </blockquote> <p>Basically, either use <code>:latest</code> or use <code>imagePullPolicy: Always</code>.</p> <p>Try it and let me know how it goes!</p> <p>Referenced from <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">here</a>.</p>
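<p>One more note on the annotation in the question: Kubernetes treats <code>date: "+%H:%M:%S %d/%m/%y"</code> as a literal string; it is never evaluated, so the pod template never changes between builds and <code>kubectl apply</code> has no reason to roll new pods, even with <code>imagePullPolicy: Always</code>. A sketch of substituting a real timestamp at deploy time instead (the deployment and stage names are taken from the question):</p>

```shell
# Evaluate the date at deploy time instead of shipping the literal "+%H:%M:%S %d/%m/%y".
STAMP="$(date '+%H:%M:%S %d/%m/%y')"
# Build a strategic-merge patch that updates the pod-template annotation,
# which forces a rollout (and therefore a fresh image pull).
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"${STAMP}\"}}}}}"
echo "$PATCH"
# In the 'Test Deployment' stage, after the kubectl apply:
# kubectl patch deployment test-kube-deployment -p "$PATCH"
```

<p>This way every Jenkins build produces a different pod template, so the deployment always restarts its pods.</p>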
<p>Is there a way to restart pods automatically after some time or when they reach some memory limit?</p> <p>I want to achieve the same behavior as gunicorn (or any mainstream process manager) does.</p>
<h1>Memory limit</h1> <p>If you set a limit for the memory on a container in the pod template, the pod will be restarted if it uses more than the specified memory.</p> <pre><code>resources:
  limits:
    memory: 128Mi
</code></pre> <p>See <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">Managing Compute Resources for Containers</a> for documentation.</p> <h1>Time limit</h1> <p>This can be done in many different ways: internally, by calling <code>exit(1)</code> or by ceasing to respond on a configured <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">livenessProbe</a>; or externally, e.g. by configuring a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a>.</p>
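<p>For the livenessProbe route, a minimal sketch (the endpoint, port and timings here are assumptions, not values from the question): once the application deliberately starts failing the probe, e.g. past a self-imposed deadline or its own memory threshold, the kubelet restarts the container.</p>

```yaml
livenessProbe:
  httpGet:
    path: /healthz   # hypothetical endpoint the app fails on purpose when it wants a restart
    port: 8080
  periodSeconds: 10
  failureThreshold: 3   # restarted after 3 consecutive failures, i.e. ~30s
```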
<p>Totally new to GCP, trying to deploy my first Kubernetes cluster and getting the below error.</p> <p>(1) insufficient regional quota to satisfy request: resource "IN_USE_ADDRESSES": request requires '9.0' and is short '1.0'. project has a quota of '8.0' with '8.0' available. View and manage quotas at <a href="https://console.cloud.google.com/iam-admin/quotas?usage=USED&amp;project=test-255811" rel="noreferrer">https://console.cloud.google.com/iam-admin/quotas?usage=USED&amp;project=test-255811</a></p> <p>I have already requested a quota increase, but I want to know what this "8.0" limit means. How many IP addresses are available in "1.0"? From where can I reduce the size of my network? I am using the "default" network and the default "/20" node subnet options.</p>
<p>The easy way to check quota usage for the current project is to go to </p> <p>GCP Navigation => IAM &amp; admin => Quotas, </p> <p>then sort data by Current Usage. </p> <p>There are regional hard limits that you could have exceeded (<code>In-use IP addresses</code> in your case). </p> <p>The numbers in the error message are just decimal values in the format the <code>gcloud</code> and API commonly use for quotas. You might try the following commands to see how the quota values are actually displayed: </p> <pre><code>$ gcloud compute project-info describe --project project-name $ gcloud compute regions describe region-name </code></pre> <p>In your particular case 9 addresses were requested, and the deployment was short of 1 address because of the quota of 8 addresses. </p> <p>Google Cloud documentation provides viable explanation of quotas: </p> <p><a href="https://cloud.google.com/compute/quotas" rel="noreferrer">Resource quotas</a></p> <p><a href="https://cloud.google.com/docs/quota" rel="noreferrer">Working with Quotas</a></p>
<p>I am following the instructions from this link: <a href="https://kubecloud.io/kubernetes-dashboard-on-arm-with-rbac-61309310a640" rel="nofollow noreferrer">https://kubecloud.io/kubernetes-dashboard-on-arm-with-rbac-61309310a640</a></p> <p>and I run this command:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml
</code></pre> <p>But I'm getting this output/error:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
service/kubernetes-dashboard created
error: unable to recognize "https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml": no matches for kind "Deployment" in version "apps/v1beta2"
</code></pre> <p>I'm not sure how to proceed from here. I'm trying to install the Kubernetes Dashboard on a Raspberry Pi cluster.</p> <p>Here is my setup:</p> <pre><code>pi@k8s-master:/etc/kubernetes$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2d11h   v1.16.2
k8s-node1    Ready    worker   2d3h    v1.16.2
k8s-node2    Ready    worker   2d2h    v1.15.2
k8s-node3    Ready    worker   2d2h    v1.16.2
</code></pre>
<p>The reason behind your error is that after 1.16.0 Kubernetes stopped using <code>apps/v1beta2</code> for deployments. You should use <code>apps/v1</code> instead.</p> <p>Please download the file:</p> <pre><code>wget https://raw.githubusercontent.com/kubernetes/dashboard/72832429656c74c4c568ad5b7163fa9716c3e0ec/src/deploy/recommended/kubernetes-dashboard-arm.yaml
</code></pre> <p>Edit the file using <code>nano</code> or <code>vi</code> and change the deployment apiVersion to <code>apps/v1</code>.</p> <p>Don't forget to save the file when exiting.</p> <p>Then:</p> <pre><code>kubectl apply -f [file_name]
</code></pre> <p>You may find more about the release changes <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16" rel="nofollow noreferrer">here</a>.</p>
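<p>If you prefer not to open an editor, the same edit can be done with <code>sed</code>; a sketch, shown here on a sample line so it is self-contained (apply it to the downloaded manifest with <code>sed -i 's|apps/v1beta2|apps/v1|' kubernetes-dashboard-arm.yaml</code>):</p>

```shell
# The substitution itself: rewrite the deprecated API group/version.
echo 'apiVersion: apps/v1beta2' | sed 's|apps/v1beta2|apps/v1|'
# → apiVersion: apps/v1
```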
<p>From here I realized that if a container is not given CPU limits, it takes the default CPU limits from the namespace level: <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/</a></p> <p>My question is: what if we have not set default CPU limits (a LimitRange) at the namespace level? In that case, what CPU limit is the container assigned?</p> <p>Thanks.</p>
<p>If a container doesn't specify its own CPU request and limit, it is assigned the default CPU request and limit from the LimitRange, if such a LimitRange is configured for the namespace.</p> <p>If a LimitRange isn't configured for the namespace and the container doesn't specify its own CPU request and limit, the pod runs in the <code>BestEffort</code> QoS (Quality of Service) class. In this case, the container draws CPU from a <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/" rel="noreferrer">shared pool</a> on the node, up to whatever CPU is available in that pool. In practice, there may not be any CPU available and the pod/container could "starve" for CPU.</p>
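<p>For reference, a namespace-level default of the kind described above is configured with a LimitRange; the values here are illustrative, in the spirit of the docs page linked in the question, not taken from it:</p>

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults
spec:
  limits:
    - default:          # limit applied to containers that specify none
        cpu: "1"
      defaultRequest:   # request applied to containers that specify none
        cpu: 500m
      type: Container
```

<p>Apply it to a namespace and every container created there without its own CPU settings picks up these defaults.</p>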
<p>I'm trying to follow the Docker get-started tutorials, but I get stuck when you have to work with Kubernetes. I'm using microk8s to create the clusters.</p> <p>My Dockerfile:</p> <pre><code>FROM node:6.11.5
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
CMD [ "npm", "start" ]
</code></pre> <p>My bb.yaml:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: bb-demo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      bb: web
  template:
    metadata:
      labels:
        bb: web
    spec:
      containers:
      - name: bb-site
        image: bulletinboard:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: bb-entrypoint
  namespace: default
spec:
  type: NodePort
  selector:
    bb: web
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001
</code></pre> <p>I create the image with</p> <pre><code>docker image build -t bulletinboard:1.0 .
</code></pre> <p>And I create the pod and the service with:</p> <pre><code>microk8s.kubectl apply -f bb.yaml
</code></pre> <p>The pod is created, but, when I look for the state of my pods with</p> <pre><code>microk8s.kubectl get all
</code></pre> <p>It says:</p> <pre><code>NAME                           READY   STATUS             RESTARTS   AGE
pod/bb-demo-7ffb568776-6njfg   0/1     ImagePullBackOff   0          11m

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/bb-entrypoint   NodePort    10.152.183.2   &lt;none&gt;        8080:30001/TCP   11m
service/kubernetes      ClusterIP   10.152.183.1   &lt;none&gt;        443/TCP          4d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/bb-demo   0/1     1            0           11m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/bb-demo-7ffb568776   1         1         0       11m
</code></pre> <p>Also, when I look for it at the Kubernetes dashboard it says:</p> <p>Failed to pull image "bulletinboard:1.0": rpc error: code = Unknown desc = failed to resolve image "docker.io/library/bulletinboard:1.0": no available registry endpoint: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed</p> <p><strong>Q: Why do I get this error? I'm just following the tutorial without skipping anything.</strong></p> <p>I'm already logged in with Docker.</p>
<p>You need to push this locally built image to the Docker Hub registry. For that, you need to create a Docker Hub account if you do not have one already.</p> <p>Once you do that, you need to log in to Docker Hub from your command line.</p> <pre><code>docker login
</code></pre> <p>Tag your image so it goes to your Docker Hub repository.</p> <pre><code>docker tag bulletinboard:1.0 &lt;your docker hub user&gt;/bulletinboard:1.0
</code></pre> <p>Push your image to Docker Hub.</p> <pre><code>docker push &lt;your docker hub user&gt;/bulletinboard:1.0
</code></pre> <p>Update the YAML file to reflect the new image repo on Docker Hub.</p> <pre><code>spec:
  containers:
  - name: bb-site
    image: &lt;your docker hub user&gt;/bulletinboard:1.0
</code></pre> <p>Re-apply the YAML file.</p> <pre><code>microk8s.kubectl apply -f bb.yaml
</code></pre>
<p>I have a pod with an app server that is connecting to an external database. For redundancy, I want to run multiple pods, so I scaled the deployment up to 3 with a RollingUpdate strategy (maxSurge = 1 and maxUnavailable = 1).</p> <p>Sometimes (most of the time) the pods fail on first create, because I am using Liquibase, and all the pods try to lock the database at the same time.</p> <p>The easiest solution to me seems to be to have the pods start up sequentially: start pod 1, wait for 60 secs, start pod 2, etc.</p> <p>Is that a valid solution? How can I achieve that in k8s (v1.14)?</p> <p>Here's the output of <code>kubectl describe deploy</code>:</p> <pre><code>Name: jx-apollon Namespace: jx-staging CreationTimestamp: Sun, 27 Oct 2019 21:28:07 +0100 Labels: chart=apollon-1.0.348 draft=draft-app jenkins.io/chart-release=jx jenkins.io/namespace=jx-staging jenkins.io/version=4 Annotations: deployment.kubernetes.io/revision: 3 jenkins.io/chart: env kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{"jenkins.io/chart":"env"},"labels":{"chart":"apollon-1.0... Selector: app=jx-apollon,draft=draft-app Replicas: 0 desired | 0 updated | 0 total | 0 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 1 max unavailable, 1 max surge Pod Template: Labels: app=jx-apollon draft=draft-app Init Containers: postgres-listener-short: Image: alpine Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sh -c echo 'Waiting in init container for DB to become available.'; echo $DB; for i in $(seq 1 5); do echo 'nc ping' $i &amp;&amp; nc -z -w3 $DB 5432 &amp;&amp; echo 'DB is available, continuing now to application initialization.'
&amp;&amp; exit 0 || sleep 3; done; echo 'DB is not yet available.'; exit 1 Environment: DB: jx-apollon-postgresql-db-alias Mounts: &lt;none&gt; postgres-listener-longer: Image: alpine Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sh -c echo 'Waiting in init container for DB to become available.'; echo $DB; for i in $(seq 1 100); do echo 'nc ping' $i &amp;&amp; nc -z -w3 $DB 5432 &amp;&amp; echo 'DB is available, continuing now to application initialization.' &amp;&amp; exit 0 || sleep 3; done; echo 'DB is not yet available.'; exit 1 Environment: DB: jx-apollon-postgresql-db-alias Mounts: &lt;none&gt; Containers: apollon: Image: &lt;redacted&gt; Ports: 8080/TCP, 8443/TCP Host Ports: 0/TCP, 0/TCP Limits: cpu: 2 memory: 6Gi Requests: cpu: 100m memory: 3584Mi Liveness: http-get http://:8080/ delay=60s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: DB: jx-apollon-postgresql-db-alias POSTGRES_PASSWORD: &lt;redacted&gt; RULES_CLIENT: demo _JAVA_OPTIONS: -XX:+UseContainerSupport -XX:MaxRAMPercentage=90.0 -XX:+UseG1GC Mounts: &lt;none&gt; Volumes: &lt;none&gt; Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable OldReplicaSets: &lt;none&gt; NewReplicaSet: jx-apollon-95b4c77cb (0/0 replicas created) Events: &lt;none&gt; </code></pre>
<p>I changed the initContainer part to the following code, which waits until it reads a <code>false</code> value from the <code>databasechangeloglock</code> table created by Liquibase. Note that <code>psql</code> exits successfully even when a query returns no rows, so the loop has to test the query output rather than just the exit status.</p> <pre><code>  initContainers:
    - name: postgres-listener
      image: postgres
      env:
        - name: DB
          value: jx-apollon-postgresql-db-alias
      command: ['sh', '-c', '\
        until [ "$(psql -qtAX -h $DB -d postgres -c \
          "select count(locked) from databasechangeloglock where locked = false group by locked")" = "1" ]; \
        do echo waiting for dbchangeloglock of postgres db to be false; sleep 2; done; \
        ']
</code></pre>
<p>I have a stateful Spring application and I want to deploy it to a Kubernetes cluster. There will be more than one instance of the application, so I need to enable sticky sessions using the ingress-nginx controller. I made the following configuration:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "JSESSIONID"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-path: /ingress-test # UPDATE THIS LINE ABOVE
spec:
  rules:
  - http:
      paths:
      - path: /ingress-test
        backend:
          serviceName: ingress-test
          servicePort: 31080
</code></pre> <p>ingress-nginx redirects subsequent requests to the correct pod if login is successful. However, it sometimes switches to another pod just after JSESSIONID is changed (the JSESSIONID cookie is changed by spring-security after a successful login), and the frontend redirects back to the login page even though the user credentials are correct. Has anyone tried ingress-nginx with spring-security?</p> <p>Best Regards</p>
<p>The following change fixed the problem. Without a host definition in the rules, ingress-nginx doesn't set the session cookie.</p> <p>There is an open issue: <a href="https://github.com/kubernetes/ingress-nginx/issues/3989" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/3989</a></p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-path: /ingress-test # UPDATE THIS LINE ABOVE
spec:
  rules:
  - host: www.domainname.com
    http:
      paths:
      - path: /ingress-test
        backend:
          serviceName: ingress-test
          servicePort: 31080
</code></pre>
<p>I've been following <a href="https://www.youtube.com/watch?v=DwlIn9zOcfc" rel="nofollow noreferrer">tutorial videos</a> and trying to understand how to build a small minimalistic application. The videos I followed pull containers from registries, while I'm trying to test, build and deploy everything locally at the moment, if possible. Here's my setup.</p> <ol> <li><p>I have the latest Docker installed with Kubernetes enabled on macOS.</p></li> <li><p>A helloworld NodeJS application running with Docker and Docker Compose</p></li> </ol> <p><strong>TODO:</strong> I'd like to be able to start my instances, let's say 3, in the Kubernetes cluster</p> <hr> <p><strong>Dockerfile</strong></p> <pre><code>FROM node:alpine
COPY package.json package.json
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre> <hr> <p><strong>docker-compose.yml</strong></p> <pre><code>version: '3'

services:
  user:
    container_name: users
    build:
      context: ./user
      dockerfile: Dockerfile
</code></pre> <hr> <p>I'm creating a deployment file with the help of this <a href="https://www.mirantis.com/blog/introduction-to-yaml-creating-a-kubernetes-deployment/" rel="nofollow noreferrer">tutorial</a>, and it may have problems since I'm merging information both from YouTube and from the web link.</p> <p>I'm creating a minimalistic yml file to be able to get up and running; I will study other aspects like readiness and liveness later.</p> <hr> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: user
spec:
  selector:
    app: user
  ports:
    - port: 8080
  type: NodePort
</code></pre> <p>Please review the above yml file for correctness, so the question is: what do I do next?</p>
<p>The snippets you provide are regrettably insufficient but you have the basics.</p> <p>I had a Google for you for a tutorial and -- unfortunately -- nothing obvious jumped out. That doesn't mean that there isn't one, just that I didn't find it.</p> <p>You've got the right idea and there are quite a few levels of technology to understand but, I commend your approach and think we can get you there.</p> <ol> <li>Let's start with a helloworld Node.JS tutorial</li> </ol> <p><a href="https://nodejs.org/en/docs/guides/getting-started-guide/" rel="nofollow noreferrer">https://nodejs.org/en/docs/guides/getting-started-guide/</a></p> <ol start="2"> <li>Then you want to containerize this</li> </ol> <p><a href="https://nodejs.org/de/docs/guides/nodejs-docker-webapp/" rel="nofollow noreferrer">https://nodejs.org/de/docs/guides/nodejs-docker-webapp/</a></p> <p>For #3 below, the last step here is:</p> <pre class="lang-sh prettyprint-override"><code>docker build --tag=&lt;your username&gt;/node-web-app . </code></pre> <p>But, because you're using Kubernetes, you'll want to push this image to a public repo. 
This is so that, regardless of where your cluster runs, it will be able to access the container image.</p> <p>Since the example uses DockerHub, let's continue using that:</p> <pre class="lang-sh prettyprint-override"><code>docker push &lt;your username&gt;/node-web-app
</code></pre> <p><strong>NB</strong> There's an implicit <code>https://docker.io/&lt;your username&gt;/node-web-app:latest</code> here</p> <ol start="3"> <li>Then you'll need a Kubernetes cluster into which you can deploy your app</li> </ol> <ul> <li>I think <a href="https://microk8s.io/" rel="nofollow noreferrer">microk8s</a> is excellent</li> <li>I'm a former Googler but <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">Kubernetes Engine</a> is the benchmark (requires $$$)</li> <li>Big fan of DigitalOcean too and it has <a href="https://www.digitalocean.com/products/kubernetes/" rel="nofollow noreferrer">Kubernetes</a> (also $$$)</li> </ul> <p>My advice is (except microk8s and minikube) don't ever run your own Kubernetes clusters; leave it to a cloud provider.</p> <ol start="4"> <li>Now that you have all the pieces, I recommend you just:</li> </ol> <pre class="lang-sh prettyprint-override"><code>kubectl run yourapp \
  --image=&lt;your username&gt;/node-web-app:latest \
  --port=8080 \
  --replicas=1
</code></pre> <p>I believe <code>kubectl run</code> is deprecated but use it anyway. It will create a Kubernetes Deployment (!) for you with 1 Pod (==replica).
Feel free to adjust that value (perhaps <code>--replicas=2</code>) if you wish.</p> <p>Once you've created a Deployment, you'll want to create a Service to make your app accessible; (top of my head) this command is:</p> <pre class="lang-sh prettyprint-override"><code>kubectl expose deployment/yourapp --type=NodePort
</code></pre> <p>Now you can query the service:</p> <pre><code>kubectl get services/yourapp
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
yourapp   NodePort   10.152.183.27   &lt;none&gt;        80:32261/TCP   7s
</code></pre> <p><strong>NB</strong> The NodePort that's been assigned (in this case!) is <code>:32261</code> and so I can then interact with the app using <code>curl http://localhost:32261</code> (localhost because I'm using microk8s).</p> <p><code>kubectl</code> is powerful. Another way to determine the NodePort is:</p> <pre><code>kubectl get service/yourapp \
  --output=jsonpath=&quot;{.spec.ports[0].nodePort}&quot;
</code></pre> <p>The advantage of the approach of starting from <code>kubectl run</code> is you can then easily determine the Kubernetes configuration that is needed to recreate this Deployment|Service by:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get deployment/yourapp \
  --output=yaml \
  &gt; ./yourapp.deployment.yaml

kubectl get service/yourapp \
  --output=yaml \
  &gt; ./yourapp.service.yaml
</code></pre> <p>These commands will interrogate the cluster, retrieve the configuration for you and pump it into the files. It will include some instance data too but the gist of it shows you what you would need to recreate the deployment.
You will need to edit this file.</p> <p>But, you can test this by first deleting the deployment and the service and then recreating it from the configuration:</p> <pre class="lang-sh prettyprint-override"><code>kubectl delete deployment/yourapp
kubectl delete service/yourapp

kubectl apply --filename=./yourapp.deployment.yaml
kubectl apply --filename=./yourapp.service.yaml
</code></pre> <p><strong>NB</strong> You'll often see multiple resource configurations merged into a single YAML file. This is perfectly valid YAML but you only ever see it used by Kubernetes. The format is:</p> <pre><code>...
some: yaml
---
...
some: yaml
---
</code></pre> <p>Using this you could merge the <code>yourapp.deployment.yaml</code> and <code>yourapp.service.yaml</code> into a single Kubernetes configuration.</p>
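<p>As a sketch, that merge is just concatenation with the <code>---</code> document marker between the files (stand-in files are created below only so the example is self-contained; in practice you would use the two files generated above):</p>

```shell
# Stand-ins for the two generated configs, just to demonstrate the merge:
printf 'kind: Deployment\n' > yourapp.deployment.yaml
printf 'kind: Service\n'    > yourapp.service.yaml

# Merge into one multi-document YAML, separated by the '---' marker:
{ cat yourapp.deployment.yaml; echo '---'; cat yourapp.service.yaml; } > yourapp.yaml
cat yourapp.yaml
```

<p>The resulting <code>yourapp.yaml</code> can then be applied in one go with <code>kubectl apply --filename=./yourapp.yaml</code>.</p>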
<p>I have created common Helm charts. In the <em>values.yml</em> file, I have a set of env variables that need to be set as part of the deployment.yaml file.</p> <p>Snippet of values.yml:</p> <pre><code>env:
  name: ABC
  value: 123
  name: XYZ
  value: 567
  name: PQRS
  value: 345
</code></pre> <p>In deployment.yaml, when the values are referred to, only the last name/value pair is set; the other values are overwritten. How do I read/set all the names/values in the deployment file?</p>
<p>I've gone through a few iterations of how to handle setting sensitive environment variables. Something like the following is the simplest solution I've come up with so far:</p> <p>template:</p> <pre><code>{{- if or $.Values.env $.Values.envSecrets }}
env:
  {{- range $key, $value := $.Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}
  {{- range $key, $secret := $.Values.envSecrets }}
  - name: {{ $key }}
    valueFrom:
      secretKeyRef:
        name: {{ $secret }}
        key: {{ $key | quote }}
  {{- end }}
{{- end }}
</code></pre> <p>values:</p> <pre><code>env:
  ENV_VAR: value
envSecrets:
  SECRET_VAR: k8s-secret-name
</code></pre> <p>Pros:</p> <p>syntax is pretty straightforward</p> <p>keys are easily mergeable. This came in useful when creating CronJobs with shared secrets. I was able to easily override "global" values using the following:</p> <pre><code>{{- range $key, $secret := merge (default dict .envSecrets) $.Values.globalEnvSecrets }}
</code></pre> <p>Cons:</p> <p>This only works for secret keys that exactly match the name of the environment variable, but it seems like that is the typical use case.</p>
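<p>Note that the <em>values.yml</em> in the question collapses because a YAML mapping cannot repeat the <code>name</code> key, so only the last pair survives parsing. If you would rather keep the values file in the native Kubernetes list shape instead of the map shape above, a sketch:</p>

```yaml
env:
  - name: ABC
    value: "123"
  - name: XYZ
    value: "567"
  - name: PQRS
    value: "345"
```

<p>The template can then emit it verbatim with <code>{{- toYaml .Values.env | nindent 12 }}</code> (the indent depends on where the block sits in your deployment template).</p>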
<p>I am trying to configure a Kafka cluster behind Traefik but my producers and client (that are outside kubernetes) don't connect to the bootstrap-servers. They keep saying: </p> <blockquote> <p>"no resolvable boostrap servers in the given url"</p> </blockquote> <p>Actually here is the Traefik ingress:</p> <pre><code>{ "apiVersion": "extensions/v1beta1", "kind": "Ingress", "metadata": { "name": "nppl-ingress", "annotations": { "kubernetes.io/ingress.class": "traefik", "traefik.frontend.rule.type": "PathPrefixStrip" } }, "spec": { "rules": [ { "host": "" , "http": { "paths": [ { "path": "/zuul-gateway", "backend": { "serviceName": "zuul-gateway", "servicePort": "zuul-port" } }, { "path": "/kafka", "backend": { "serviceName": "kafka-broker", "servicePort": "kafka-port" } [..] } </code></pre> <p>What I give to the kafka consumers/producers is the public IP of Traefik. Here is the flow: [Kafka producers/consumers] -> Traefik(exposed as Load Balancer) -> [Kafka-Cluster]</p> <p>Is there any solution? Otherwise I was thinking to add a kafka-rest proxy (<a href="https://docs.confluent.io/current/kafka-rest/docs/index.html" rel="nofollow noreferrer">https://docs.confluent.io/current/kafka-rest/docs/index.html</a>) between Traefik and the kafka brokers but I think isn't the ideal solution.</p>
<p>I did this; you can refer to it. In Kubernetes, the Kafka deployment (kafka.yaml) contains:</p> <pre><code>env:
  - name: KAFKA_BROKER_ID
    value: "1"
  - name: KAFKA_CREATE_TOPICS
    value: "test:1:1"
  - name: KAFKA_ZOOKEEPER_CONNECT
    value: "zookeeper:2181"
  - name: KAFKA_ADVERTISED_LISTENERS
    value: "INSIDE://:9092,OUTSIDE://kafka-com:30322"
  - name: KAFKA_LISTENERS
    value: "INSIDE://:9092,OUTSIDE://:30322"
  - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
    value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
  - name: KAFKA_INTER_BROKER_LISTENER_NAME
    value: "INSIDE"
</code></pre> <p>The Kafka service (the external service invocation address, i.e. what the Traefik proxy would point at):</p> <pre><code>---
kind: Service
apiVersion: v1
metadata:
  name: kafka-com
  namespace: dev
  labels:
    k8s-app: kafka
spec:
  selector:
    k8s-app: kafka
  ports:
    - port: 9092
      name: innerport
      targetPort: 9092
      protocol: TCP
    - port: 30322
      name: outport
      targetPort: 30322
      protocol: TCP
      nodePort: 30322
  type: NodePort
</code></pre> <p>Ensure that the Kafka external (advertised) port and the nodePort are consistent; other services then call <code>kafka-com:30322</code>. My blog post <a href="https://bbotte.github.io/virtualization/config_kafka_in_kubernetes" rel="nofollow noreferrer">config_kafka_in_kubernetes</a> covers this setup; hope it helps you!</p>
<p>I am deploying stolon via a statefulset (the default from the stolon repo). I have defined in the statefulset config:</p> <pre><code>volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: stolon-local-storage
      resources:
        requests:
          storage: 1Gi
</code></pre> <p>and here is my storageClass:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stolon-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre> <p>The statefulset was created fine, but the pod has an error: <strong>pod has unbound immediate PersistentVolumeClaims</strong></p> <p>How can I resolve it?</p>
<blockquote> <p>pod has unbound immediate PersistentVolumeClaims</p> </blockquote> <p>In this case the PVC could not bind through the StorageClass because it wasn't marked as the <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/" rel="nofollow noreferrer">default</a>.</p> <blockquote> <p>Depending on the installation method, your Kubernetes cluster may be deployed with an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. See <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1" rel="nofollow noreferrer">PersistentVolumeClaim documentation</a> for details.</p> </blockquote> <p>This command can be used to make your newly created StorageClass the default one:</p> <pre><code>kubectl patch storageclass &lt;name_of_storageclass&gt; -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code></pre> <p>Then you can use <code>kubectl get storageclass</code> and it should look like this:</p> <pre><code>NAME                             PROVISIONER            AGE
stolon-local-storage (default)   kubernetes.io/gce-pd   1d
</code></pre>
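<p>One more thing to check: with a <code>kubernetes.io/no-provisioner</code> StorageClass, nothing provisions volumes dynamically, so a matching PersistentVolume has to exist (created by hand) before the claim can bind. A sketch, with a hypothetical path and node name:</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: stolon-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: stolon-local-storage   # must match the claim's class
  local:
    path: /mnt/disks/stolon                # hypothetical path on the node
  nodeAffinity:                            # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]           # hypothetical node name
```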
<p>I deploy my cluster on GKE with an Ingress Controller.</p> <p>I use <a href="https://kubernetes.github.io/ingress-nginx/deploy/#using-helm" rel="nofollow noreferrer">Helm</a> to install the following:</p> <ul> <li>Installed the <strong><a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Ingress Controller</a></strong></li> <li>Deployed the Load Balancer Service (which creates a Load Balancer on GCP as well)</li> </ul> <p>I also deployed the <strong>Ingress object</strong> (config as below).</p> <hr> <p>Then I observed the following status...</p> <p>The <strong>Ingress Controller</strong> is exposed (by the Load Balancer Service) with two endpoints: 35.197.XX.XX:80, 35.197.XX.XX:443</p> <p>These two endpoints are exposed by the cloud load balancer. I have no problem with it.</p> <p>However, when I execute <code>kubectl get ing ingress-service -o wide</code>, it prints out the following info:</p> <pre><code>NAME              HOSTS           ADDRESS       PORTS     AGE
ingress-service   k8s.XX.com.tw   34.87.XX.XX   80, 443   5h50m
</code></pre> <p>I really don't understand the use of the IP under the ADDRESS column.</p> <p>I can also see that Google adds some extra info about the load balancer IP to the end of my Ingress config file for me.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  ....(omitted)
spec:
  rules:
    - host: k8s.XX.com.tw
      http:
        paths:
          - backend:
              serviceName: client-cluster-ip-service
              servicePort: 3000
            path: /?(.*)
          - backend:
              serviceName: server-cluster-ip-service
              servicePort: 5000
            path: /api/?(.*)
  tls:
    - hosts:
        - k8s.XX.com.tw
      secretName: XX-com-tw
status:
  loadBalancer:
    ingress:
      - ip: 34.87.XX.XX
</code></pre> <p>According to <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_4_visit_your_application" rel="nofollow noreferrer">Google's doc</a>, this (<strong>34.87.XX.XX</strong>) looks like an external IP, but I can't access it with <a href="http://34.87.XX.XX" rel="nofollow noreferrer">http://34.87.XX.XX</a></p> <hr> <p>My question
is: since we already have an external IP (35.197.XX.XX) to receive the traffic, why do we need this ADDRESS for the <strong>ingress-service</strong>?</p> <p>Is it an internal or external IP address? What is this ADDRESS bound to? What exactly is this ADDRESS used for?</p> <p>Can anyone shed some light? Thanks a lot!</p>
<p>If you simply go take a look at the documentation, you will have your answer.</p> <p>What an Ingress resource is: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress</a></p> <p>So following the doc:</p> <blockquote> <p>Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.</p> </blockquote> <p>To be more precise: on a cloud provider, the Ingress will create a load balancer to expose the service to the internet. The documentation on the subject specific to GKE: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a></p> <p>That explains why you have an external IP for the Ingress.</p> <p>What you should do now:</p> <ul> <li>If you don't want to expose HTTP and/or HTTPS ports, just delete the Ingress resource; you don't use it, so it's pretty much useless.</li> <li>If you are using HTTP/HTTPS resources, change your service type to NodePort and leave the management of the load balancer to the Ingress.</li> </ul> <p>My opinion is that, as you are deploying the ingress-controller, you should select the second option and leave the management of the load balancer to it. For the Ingress of the ingress-controller, don't define rules, just the backend to the NodePort service; the rules should be defined in a specific Ingress for each app and be managed by the ingress-controller.</p>
<p>I'm creating a Kubernetes service in Azure with the Advanced Networking option, where I've selected a particular vNet and subnet for it.</p> <p>I'm getting the error below:</p> <pre><code>{"code":"InvalidTemplateDeployment","message":"The template deployment failed with error: 'Authorization failed for template resource '&lt;vnetid&gt;/&lt;subnetid&gt;/Microsoft.Authorization/xxx' of type 'Microsoft.Network/virtualNetworks/subnets/providers/roleAssignments'. The client '&lt;emailid&gt;' with object id 'xxx' does not have permission to perform action 'Microsoft.Authorization/roleAssignments/write' at scope '/subscriptions/&lt;subid&gt;/resourceGroups/&lt;rgid&gt;/providers/Microsoft.Network/virtualNetworks/&lt;vnetid&gt;/subnets/&lt;subnetid&gt;/providers/Microsoft.Authorization/roleAssignments/xxx'.'."} </code></pre> <p>I've got the Contributor role.</p>
<p>The existing answer is not exactly true: you can obviously get away with the <code>Owner</code> role, but you only need <code>Microsoft.Authorization/roleAssignments/write</code> over the scope of the subnet (it might be the vnet, I didn't test this), which helps lock down security a little bit. You'd need a custom role to do that. In case you don't want to go for a custom role, the existing answer will be just fine.</p>
<p>I want to run Eventstore in a Kubernetes node. I start the node with <code>minikube start</code>, then I apply this yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: eventstore-deployment spec: selector: matchLabels: app: eventstore replicas: 1 template: metadata: labels: app: eventstore spec: containers: - name: eventstore image: eventstore/eventstore ports: - containerPort: 1113 protocol: TCP - containerPort: 2113 protocol: TCP --- apiVersion: v1 kind: Service metadata: name: eventstore spec: selector: app: eventstore ports: - protocol: TCP port: 1113 targetPort: 1113 --- apiVersion: v1 kind: Service metadata: name: eventstore-dashboard spec: selector: app: eventstore ports: - protocol: TCP port: 2113 targetPort: 2113 nodePort: 30113 type: NodePort </code></pre> <p>The deployment, the replica set and the pod start, but nothing happens: Eventstore doesn't print to the log and I can't open its dashboard. Also, other services can't connect to <em>eventstore:1113</em>. There are no errors and the pods don't crash. The only thing I see in the logs is "The selected container has not logged any messages yet".</p> <p><a href="https://i.stack.imgur.com/9fTDD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9fTDD.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/YG1EH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YG1EH.png" alt="enter image description here"></a></p> <p>I've tried a clean vanilla minikube node with different VM drivers, and also a node with Ambassador + Linkerd configured. The results are the same. 
</p> <p>But when I run Eventstore in Docker with this yaml file via <em>docker-compose</em></p> <pre><code>eventstore: image: eventstore/eventstore ports: - '1113:1113' - '2113:2113' </code></pre> <p>Everything works fine: Eventstore outputs to logs, other services can connect to it and I can open its dashboard on 2113 port.</p> <p><strong>UPDATE:</strong> Eventstore started working after about 30-40 minutes after deployment. I've tried several times, and had to wait. Other pods start working almost immediately (30 secs - 1 min) after deployment. </p>
<p>As @ligowsky confirmed in the comment section, the issue was caused by VM performance. Posting this as Community Wiki for better visibility. </p> <p><code>Minikube</code> by default runs with <code>2 CPUs</code> and <code>2048 MB</code> of memory. More details can be found <a href="https://github.com/kubernetes/minikube/blob/232080ae0cbcf9cb9a388eb76cc11cf6884e19c0/pkg/minikube/constants/constants.go#L103" rel="nofollow noreferrer">here</a>.</p> <p>You can change this if your VM has more resources.</p> <p><strong>- During Minikube start</strong></p> <pre><code>$ sudo minikube start --cpus 4 --memory 8192 --vm-driver=&lt;driverType&gt; </code></pre> <p><strong>- When Minikube is already running; however, Minikube needs to be restarted</strong></p> <pre><code>$ minikube config set memory 4096 ⚠️ These changes will take effect upon a minikube delete and then a minikube start </code></pre> <p>More commands can be found in the <a href="https://minikube.sigs.k8s.io/docs/examples/" rel="nofollow noreferrer">Minikube docs</a>.</p> <p>In my case, when <code>Minikube</code> resources were 4 CPUs and 8192 MB of memory, I didn't have any issues with <code>eventstore</code>.</p> <p><strong>OP's Solution</strong></p> <p>OP used <a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer">Kind</a> to run the <code>eventstore</code> deployment. </p> <blockquote> <p>Kind is a tool for running local Kubernetes clusters using Docker container "nodes". kind is primarily designed for testing Kubernetes 1.11+</p> </blockquote> <p><code>Kind</code> documentation can be found <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">here</a>.</p>
<p>I would like to reserve some worker nodes for a namespace. I see the notes of stackflow and medium</p> <p><a href="https://stackoverflow.com/questions/52487333/how-to-assign-a-namespace-to-certain-nodes">How to assign a namespace to certain nodes?</a></p> <p><a href="https://medium.com/@alejandro.ramirez.ch/reserving-a-kubernetes-node-for-specific-nodes-e75dc8297076" rel="nofollow noreferrer">https://medium.com/@alejandro.ramirez.ch/reserving-a-kubernetes-node-for-specific-nodes-e75dc8297076</a></p> <p>I understand we can use taint and nodeselector to achieve that. My question is if people get to know the details of nodeselector or taint, how can we prevent them to deploy pods into these dedicated worker nodes.</p> <p>thank you</p>
<p>To accomplish what you need, basically you have to use a <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">taint</a>. Let's suppose you have a Kubernetes cluster with one Master and 2 Worker nodes: </p> <pre><code>$ kubectl get nodes NAME STATUS ROLES AGE VERSION knode01 Ready &lt;none&gt; 8d v1.16.2 knode02 Ready &lt;none&gt; 8d v1.16.2 kubemaster Ready master 8d v1.16.2 </code></pre> <p>As an example, I'll set up knode01 as Prod and knode02 as Dev.</p> <pre><code>$ kubectl taint nodes knode01 key=prod:NoSchedule </code></pre> <pre><code>$ kubectl taint nodes knode02 key=dev:NoSchedule </code></pre> <p>To run a pod on these nodes, we have to specify a toleration in the <code>spec</code> section of your YAML file: </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: pod1 labels: env: test spec: containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent tolerations: - key: "key" operator: "Equal" value: "dev" effect: "NoSchedule" </code></pre> <p>This pod (pod1) will always run on knode02 because it's set up as dev. 
If we want to run it on prod, our tolerations should look like this: </p> <pre><code> tolerations: - key: "key" operator: "Equal" value: "prod" effect: "NoSchedule" </code></pre> <p>Since we have only 2 nodes and both are specified to run only prod or dev, if we try to run a pod without specifying tolerations, the pod will enter a Pending state: </p> <pre><code>$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod0 1/1 Running 0 21m 192.168.25.156 knode01 &lt;none&gt; &lt;none&gt; pod1 1/1 Running 0 20m 192.168.32.83 knode02 &lt;none&gt; &lt;none&gt; pod2 1/1 Running 0 18m 192.168.25.157 knode01 &lt;none&gt; &lt;none&gt; pod3 1/1 Running 0 17m 192.168.32.84 knode02 &lt;none&gt; &lt;none&gt; shell-demo 0/1 Pending 0 16m &lt;none&gt; &lt;none&gt; &lt;none&gt; &lt;none&gt; </code></pre> <p>To remove a taint: </p> <pre class="lang-sh prettyprint-override"><code>$ kubectl taint nodes knode02 key:NoSchedule- </code></pre>
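<p>Note that a taint only keeps non-tolerating pods <em>off</em> a node; it does not force a tolerating pod <em>onto</em> it. To also pin the pod to the dedicated node, you can label the node and add a <code>nodeSelector</code>. A sketch, where the <code>env=dev</code> label is a hypothetical label you would first add with <code>kubectl label nodes knode02 env=dev</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-dev
spec:
  # Only schedule on nodes carrying the (hypothetical) env=dev label
  nodeSelector:
    env: dev
  # Tolerate the dev taint so the pod is allowed onto knode02
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "dev"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
</code></pre>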
<p>I have a spark job that runs via a Kubernetes pod. Till now I was using a YAML file to run my jobs manually. Now, I want to schedule my spark jobs via Airflow. This is the first time I am using Airflow and I am unable to figure out how I can add my YAML file in Airflow. From what I have read, I can schedule my jobs via a DAG in Airflow. A DAG example is this:</p> <pre><code>from airflow.operators import PythonOperator from airflow.models import DAG from datetime import datetime, timedelta args = {'owner':'test', 'start_date' : datetime(2019, 4, 3), 'retries': 2, 'retry_delay': timedelta(minutes=1) } dag = DAG('test_dag', default_args = args, catchup=False) def print_text1(): print("hell-world1") def print_text(): print('Hello-World2') t1 = PythonOperator(task_id='multitask1', python_callable=print_text1, dag=dag) t2 = PythonOperator(task_id='multitask2', python_callable=print_text, dag=dag) t1 &gt;&gt; t2 </code></pre> <p>In this case the above methods will get executed one after the other once I trigger the DAG. Now, in case I want to run a spark-submit job, what should I do? I am using Spark 2.4.4</p>
<p>Airflow has a concept of <a href="https://airflow.apache.org/howto/operator/index.html" rel="noreferrer">operators</a>, which represent Airflow tasks. In your example <a href="https://airflow.apache.org/howto/operator/python.html" rel="noreferrer">PythonOperator</a> is used, which simply executes Python code and is most probably not the one you are interested in, unless you submit the Spark job from within that Python code. There are several operators that you can make use of:</p> <ul> <li><a href="https://airflow.apache.org/howto/operator/bash.html" rel="noreferrer">BashOperator</a>, which executes the given bash script for you. You may run <code>kubectl</code> or <code>spark-submit</code> using it directly</li> <li><a href="https://airflow.apache.org/_api/airflow/contrib/operators/spark_submit_operator/index.html?highlight=spark#module-airflow.contrib.operators.spark_submit_operator" rel="noreferrer">SparkSubmitOperator</a>, the specific operator to call <code>spark-submit</code></li> <li><a href="https://airflow.apache.org/_api/airflow/contrib/operators/kubernetes_pod_operator/index.html?highlight=kubernetes%20operator#module-airflow.contrib.operators.kubernetes_pod_operator" rel="noreferrer">KubernetesPodOperator</a>, which creates a Kubernetes pod for you; you can launch your Driver pod directly using it</li> <li>Hybrid solutions, e.g. 
<a href="https://airflow.apache.org/_api/airflow/operators/http_operator/index.html?highlight=http%20operator#module-airflow.operators.http_operator" rel="noreferrer">HttpOperator</a> + <a href="https://github.com/jahstreet/spark-on-kubernetes-helm" rel="noreferrer">Livy on Kubernetes</a>: you spin up a Livy server on Kubernetes, which serves as a Spark Job Server and provides a REST API to be called by Airflow's HttpOperator</li> </ul> <p>Note: for each of the operators you need to ensure that your Airflow environment contains all the required dependencies for execution, as well as the credentials configured to access the required services.</p> <p>You can also refer to the existing thread:</p> <ul> <li><a href="https://stackoverflow.com/questions/53773678/airflow-sparksubmitoperator-how-to-spark-submit-in-another-server">Airflow SparkSubmitOperator - How to spark-submit in another server</a></li> </ul>
<p>I have a persistent volume claim created, and a random persistent volume name is bound to the claim. Can the persistent volume name be modified, and if so, what's the process?</p> <p>Tried the following: opened the dashboard of k8s and edited the PVC, but it throws the error below: spec: Forbidden: is immutable after creation except resources.requests for bound claims </p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: task-pv-volume namespace: xxxxxxxx labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: task-pv-claim namespace: xxxxx spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi storageClassName: manual volumeMode: Filesystem NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE task-pv-claim Bound task-pv-volume 10Gi RWX manual 16m </code></pre>
<p>Once the persistent volume is bound to a PVC, you can't change the persistent volume's name. Delete the PVC and PV, then recreate the PV with the required name.</p> <p>Alternatively, you can pre-bind the PV to a specific PVC. Follow the steps below:</p> <ol> <li>Create the PV and pre-bind it to a specific PVC</li> <li>Create the PVC with the name given in the PV</li> <li>The PV and PVC should then be bound together</li> </ol>
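<p>A sketch of the pre-binding, reusing the claim name from the question (the PV name <code>my-named-pv</code> is just an example); the PV points at the future PVC through <code>claimRef</code>:</p> <pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: my-named-pv           # the name you want the volume to have
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/data"
  # Pre-bind: only the PVC named here may bind to this PV
  claimRef:
    name: task-pv-claim
    namespace: xxxxx
</code></pre> <p>Create this PV first, then create the PVC with the matching name and namespace, and the two should bind together.</p>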
<p>I would like to run a <code>kubectl</code> command from a pre-upgrade <code>helm hook</code>, but I can't seem to find any documentation on how to achieve this. </p> <p>Do I have to create a docker image that contains <code>kubectl</code> in order to achieve this? </p> <p>Or is there some way of achieving this without using a container? </p> <p>I have a basic <code>helm hook</code> which looks like this </p> <pre><code> apiVersion: batch/v1 kind: Job metadata: name: {{ .Chart.Name }}-change-pvc-hook labels: app: {{ .Chart.Name }} annotations: "helm.sh/hook": pre-upgrade "helm.sh/hook-delete-policy": hook-succeeded, before-hook-creation spec: template: metadata: name: "{{.Release.Name}}" labels: app: {{ .Chart.Name }} spec: restartPolicy: Never containers: - name: pre-upgrade-change-pvc </code></pre> <p>If someone could explain how to run <code>kubectl</code> without a container, or how I can achieve this otherwise, that would be great</p>
<p>You can do it like the Prometheus operator does the cleanup (pre-delete hook) in their helm chart: <a href="https://github.com/helm/charts/blob/d58247103fafd5930792bb1fd3ac240547523200/stable/prometheus-operator/templates/prometheus-operator/cleanup-crds.yaml#L33" rel="noreferrer">prometheus operator kubectl usage</a></p> <p>Basically, you can use the image <code>k8s.gcr.io/hyperkube:v1.12.1</code>, something like this:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: somename-operator-cleanup namespace: somenamespace annotations: "helm.sh/hook": pre-delete "helm.sh/hook-weight": "3" "helm.sh/hook-delete-policy": hook-succeeded labels: app: someapp-operator spec: template: metadata: name: somename-operator-cleanup labels: app: someapp spec: {{- if .Values.global.rbac.create }} serviceAccountName: {{ template "prometheus-operator.operator.serviceAccountName" . }} {{- end }} containers: - name: kubectl image: "k8s.gcr.io/hyperkube:v1.12.1" imagePullPolicy: "IfNotPresent" command: - /bin/sh - -c - &gt; # your kubectl commands go here, e.g.: kubectl delete alertmanager --all; kubectl delete prometheus --all; kubectl delete prometheusrule --all; kubectl delete servicemonitor --all; sleep 10; kubectl delete crd alertmanagers.monitoring.coreos.com; kubectl delete crd prometheuses.monitoring.coreos.com; kubectl delete crd prometheusrules.monitoring.coreos.com; kubectl delete crd servicemonitors.monitoring.coreos.com; kubectl delete crd podmonitors.monitoring.coreos.com; restartPolicy: OnFailure </code></pre> <p>Another option is to curl the Kubernetes API like <a href="https://github.com/ethz-hpc/k8s-OpenNebula/blob/2bbec2e31ca20159a51abbf7d51624d8e86ac1f2/Chart/templates/auto-ssh-secret-job.yaml#L42" rel="noreferrer">here</a>; note you need <code>automountServiceAccountToken: true</code>, and then you can use the Bearer token from <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code></p> <p>You just need an image with curl for that. 
You can use zakkg3/opennebula-alpine-bootstrap for this.</p> <p>For example, here I create a secret based on a file using curl instead of kubectl:</p> <pre><code>curl -s -X POST -k https://kubernetes.default.svc/api/v1/namespaces/${NAMESPACE}/secrets \ -H "Authorization: Bearer $( cat /var/run/secrets/kubernetes.io/serviceaccount/token )" \ -H "Content-Type: application/json" \ -H "Accept: application/json" \ -d "{ \"kind\": \"Secret\", \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"{{ include "opennebula.fullname" . }}-ssh-keys\", \"namespace\": \"${NAMESPACE}\" }, \"type\": \"Opaque\", \"data\": { \"authorized_keys\": \"$( cat opennebula-ssh-keys/authorized_keys | base64 | tr -d '\n' )\", \"config\": \"$( cat opennebula-ssh-keys/config | base64 | tr -d '\n' )\", \"id_rsa\": \"$( cat opennebula-ssh-keys/id_rsa | base64 | tr -d '\n' )\", \"id_rsa.pub\": \"$( cat opennebula-ssh-keys/id_rsa.pub | base64 | tr -d '\n' )\" } }" &gt; /dev/null </code></pre> <p>Note it's good practice to redirect the output to <code>&gt; /dev/null</code>, otherwise you will end up with this output in your log management (ELK / Loki).</p>
<p>I have tars of my service-specific images. I am importing them into containerd so that they will be used by k3s to deploy PODs. The command used to import an image tar is:<br> <code>k3s ctr images import XXX.tar</code> </p> <p>By default it loads images into the <code>/var/lib/rancher/data</code> dir. However, I would like to load images into a different directory. Does anyone know how to specify a custom directory while loading images?</p>
<p>I didn't find anything that would natively allow for changing this directory. This doesn't mean nothing can be done. You can move the existing data directory to the desired location and put a symlink in its place (here <code>/path/to/new/location</code> stands for your target directory; stop k3s first so nothing writes to the directory while you move it):</p> <pre><code>systemctl stop k3s
mv /var/lib/rancher/data /path/to/new/location
ln -s /path/to/new/location /var/lib/rancher/data
systemctl start k3s
</code></pre> <p>Let me know if it helped.</p>
<p>I have to create readiness and liveness probes for a node.js container (docker) in kubernetes. My problem is that the container is NOT a server, so I cannot use an http request to see if it is live. </p> <p>My container runs a <a href="https://www.npmjs.com/package/cron" rel="nofollow noreferrer">node-cron</a> process that downloads some csv files every 12 h, parses them and inserts the result into elasticsearch. </p> <p>I know I could add express.js but I would rather not do that just for a probe.</p> <p>My question is:</p> <ol> <li>Is there a way to use some kind of <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">liveness command probe</a>? If it is possible, what command can I use?</li> <li>Inside the container, I have <a href="https://www.npmjs.com/package/pm2" rel="nofollow noreferrer">pm2</a> running the process. Can I use it in any way for my probe and, if so, how?</li> </ol>
<p><strong>Liveness command</strong></p> <p>You <em>can</em> use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">Liveness command</a> as you describe. However, I would recommend to design your job/task for Kubernetes.</p> <p><strong>Design for Kubernetes</strong></p> <blockquote> <p>My container runs a node-cron process that download some csv file <strong>every 12 h</strong>, parse them and insert the result in elasticsearch.</p> </blockquote> <p>Your job is not executing so often, if you deploy it as a service, it will <strong>take up resources all the time</strong>. And when you write that you want to use <a href="https://www.npmjs.com/package/pm2" rel="nofollow noreferrer">pm2</a> for your process, I would recommend another design. As what I understand, PM2 is a process manager, but Kubernetes is also a process manager <em>in a way</em>.</p> <p><strong>Kubernetes native CronJob</strong></p> <p>Instead of handling a <em>process</em> with pm2, implement your <strong>process</strong> as a <strong>container image</strong> and schedule your job/task with <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob</a> where you specify your <strong>image</strong> in the <code>jobTemplate</code>. With this design, you don't have any <em>livenessProbe</em> but your task will be restarted if it fails, e.g. fail to insert the result to elasticSearch due to a network problem.</p>
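<p>A minimal sketch of such a CronJob, assuming your task is packaged as a container image (the <code>csv-importer</code> name is hypothetical) and should run every 12 hours:</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: csv-import
spec:
  schedule: "0 */12 * * *"         # every 12 hours
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: csv-import
            image: csv-importer:latest   # hypothetical image containing your node script
          restartPolicy: OnFailure       # re-run the task if it fails
</code></pre> <p>With this design the container only consumes resources while the job is running, and a failed run is restarted by Kubernetes itself, so no liveness probe is needed.</p>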
<p>Right now, I am using helm to substitute variables in a file using values from my <code>values.yaml</code> file to create a ConfigMap which is working well for entries that are spaced correctly.</p> <p>I am having a problem with the following:</p> <pre><code>CELERY_BROKER_URL = "amqp://admin:admin@{{ .Values.env.RABBITMQ_HOST }}:5672/" </code></pre> <p>Results in this error when running <code>helm install</code>:</p> <pre><code>Error: render error in "mynamespace-nonprod-chart/templates/configmap.yaml": template: mynamespace-nonprod-chart/templates/configmap.yaml:10:4: executing "mynamespace-nonprod-chart/templates/configmap.yaml" at &lt;tpl (.Files.Glob "conf/*").AsConfig .&gt;: error calling tpl: Error during tpl function execution for "settings.py: \"\\\"\\\"\\\"\\nDjango settings for imagegateway project.\\n\\nGenerated by 'django-admin\n startproject' using Django 1.11.3.\\n\\nFor more information on this file, see\\nhttps://docs.djangoproject.com/en/1.11/topics/settings/\\n\\nFor\n the full list of settings and their values, see\\nhttps://docs.djangoproject.com/en/1.11/ref/settings/\\n\\\"\\\"\\\"\\n\\nimport\n os\\n\\nfrom kombu import Exchange, Queue\\n\\n\\n# Image Config\\nIMAGE_ASPECT_RATIO\n = (4,3) # 4:3\\nMLS_BASE_URL = 'http://www.torontomls.net/MLSMULTIPHOTOS/FULL'\\nCALLBACK_KEY\n = 'SECRET'\\n\\nBASE_URL = 'https://media-dev.mydomain.ca'\\n\\n#\n Build paths inside the project like this: os.path.join(BASE_DIR, ...)\\nBASE_DIR\n = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\\n\\nMAXIMUM_HASH_DISTANCE\n = 5\\n\\n# SECURITY WARNING: keep the secret key used in production secret!\\nSECRET_KEY\n = 'mysecretkey'\\n\\n# SECURITY WARNING: don't run with debug turned on in production!\\n#\n DEBUG = False\\nDEBUG = True\\n\\nALLOWED_HOSTS = ['0.0.0.0', '*']\\n\\n# Celery\\n\\n#\n {{ .Values.env.INGRESS_CONTROLLER_ENDPOINT }} TEST this\\nCELERY_BROKER_URL = \\\"amqp://admin:xWQ7ye7YotNif@{{\n .Values.env.RABBITMQ_HOST 
}}:5672/\\\"\\nCELERY_ACCEPT_CONTENT = ['application/json']\\nCELERY_TASK_SERIALIZER\n = 'json'\\nCELERY_RESULT_SERIALIZER = 'json'\\nCELERY_RESULT_BACKEND = 'redis://dummyuser:R3dis0nEKS!@redis-imagegateway-dev-shared-headless'\\nCELERY_RESULT_PERSISTENT\n = False\\nCELERY_TIMEZONE = 'Canada/Eastern'\\nCELERY_ALWAYS_EAGER = False \\nCELERY_ACKS_LATE\n = True \\nCELERYD_PREFETCH_MULTIPLIER = 1\\nCELERY_TASK_PUBLISH_RETRY = True \\nCELERY_DISABLE_RATE_LIMITS\n = False\\nCELERY_IMPORTS = ('garden.tasks', 'keeper.tasks', 'tourist.tasks',)\\nCELERY_DEFAULT_QUEUE\n = 'default'\\nCELERY_DEFAULT_EXCHANGE_TYPE = 'topic'\\nCELERY_DEFAULT_ROUTING_KEY\n = 'default'\\nCELERYD_TASK_SOFT_TIME_LIMIT = 60\\n\\nCELERY_TASK_QUEUES = (\\n Queue('tourist',\n Exchange('tourist'), routing_key='tourist.#', queue_arguments={'x-max-priority':\n 1}),\\n Queue('default', Exchange('default'), routing_key='default', queue_arguments={'x-max-priority':\n 5}),\\n Queue('garden', Exchange('garden'), routing_key='garden.#', queue_arguments={'x-max-priority':\n 10}),\\n Queue('keeper', Exchange('keeper'), routing_key='keeper.#', queue_arguments={'x-max-priority':\n 10}),\\n)\\n\\nCELERY_TASK_ROUTES = {\\n 'garden.tasks.fetch_mls_image' : {'queue':\n 'garden'},\\n 'garden.tasks.save_image' : {'queue': 'garden'},\\n 'garden.tasks.do_callback'\n : {'queue': 'garden'},\\n 'garden.tasks.clear_cloudflare_cache' : {'queue': 'garden'},\\n\n \\ 'tourist.tasks.get_fetcher' : {'queue': 'tourist'},\\n 'tourist.tasks.get_tour_urls'\n : {'queue': 'tourist'},\\n 'tourist.tasks.download_tour_images' : {'queue': 'tourist'},\\n\n \\ 'tourist.tasks.smallest_image_size' : {'queue': 'tourist'},\\n 'tourist.tasks.hash_and_compare'\n : {'queue': 'tourist'},\\n 'tourist.tasks.process_tours' : {'queue': 'tourist'},\\n\n \\ 'tourist.tasks.all_done' : {'queue': 'tourist'},\\n 'tourist.tasks.save_hd_images'\n : {'queue': 'tourist'},\\n 'keeper.tasks.create_sizes' : {'queue': 'keeper'},\\n\n \\ 'celery_tasks.debug_task': 
{'queue': 'default'},\\n}\\n\\n# Application definition\\n\\nINSTALLED_APPS\n = [\\n # Project\\n 'celery',\\n 'garden', # deals with retrieval of images\\n\n \\ 'keeper', # deals with resizing and image processing tasks\\n 'tourist', #\n deals with virtual tours\\n # Vendor Packages\\n 'djangocms_admin_style',\\n\n \\ # Django\\n 'django.contrib.admin',\\n 'django.contrib.auth',\\n 'django.contrib.contenttypes',\\n\n \\ 'django.contrib.sessions',\\n 'django.contrib.messages',\\n 'django.contrib.staticfiles',\\n]\\n\\nMIDDLEWARE\n = [\\n 'django.middleware.security.SecurityMiddleware',\\n 'django.contrib.sessions.middleware.SessionMiddleware',\\n\n \\ 'django.middleware.common.CommonMiddleware',\\n 'django.middleware.csrf.CsrfViewMiddleware',\\n\n \\ 'django.contrib.auth.middleware.AuthenticationMiddleware',\\n 'django.contrib.messages.middleware.MessageMiddleware',\\n\n \\ 'django.middleware.clickjacking.XFrameOptionsMiddleware',\\n]\\n\\nROOT_URLCONF\n = 'imagegateway.urls'\\n\\nTEMPLATES = [\\n {\\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\\n\n \\ 'DIRS': ['templates'],\\n 'APP_DIRS': True,\\n 'OPTIONS': {\\n\n \\ 'context_processors': [\\n 'django.template.context_processors.debug',\\n\n \\ 'django.template.context_processors.request',\\n 'django.contrib.auth.context_processors.auth',\\n\n \\ 'django.contrib.messages.context_processors.messages',\\n ],\\n\n \\ },\\n },\\n]\\n\\nWSGI_APPLICATION = 'imagegateway.wsgi.application'\\n\\n\\n#\n Database\\n\\nDATABASES = {\\n 'default': {\\n 'ENGINE': 'django.db.backends.postgresql',\\n\n \\ 'NAME': 'imagegateway',\\n 'USER': 'postgres',\\n 'PASSWORD':\n 'PostGresQL123',\\n 'HOST': 'postgresql-imagegateway-dev-shared-headless',\\n\n \\ 'PORT': 5432,\\n }\\n}\\n\\n\\n# Password validation\\n\\nAUTH_PASSWORD_VALIDATORS\n = [\\n {\\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\\n\n \\ },\\n {\\n 'NAME': 
'django.contrib.auth.password_validation.MinimumLengthValidator',\\n\n \\ },\\n {\\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\\n\n \\ },\\n {\\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\\n\n \\ },\\n]\\n\\n\\n# Internationalization\\n\\nLANGUAGE_CODE = 'en-us'\\n\\nTIME_ZONE =\n 'UTC'\\n\\nUSE_I18N = True\\n\\nUSE_L10N = True\\n\\nUSE_TZ = True\\n\\n\\n# Static files\n (CSS, JavaScript, Images)\\n\\nSTATIC_URL = '/static/'\\nSTATIC_ROOT = os.path.join(BASE_DIR,\n 'static/')\\n\\n\\n# Media files (Images)\\n\\nMEDIA_URL = '/media/'\\n\\n# Controls whether\n Keras and backend is imported every server restart to improve restart times\\n# Only\n set the first bool!!! Then in production when DEBUG is set to false it will default\n to True.\\n# Example: \\n# WITH_IMAGE_CLASSIFIER = True if DEBUG else True \\n#\n or WITH_IMAGE_CLASSIFIER = False if DEBUG else True\\nWITH_IMAGE_CLASSIFIER = True\n if DEBUG else True\\n\\n# 20MB\\nFILE_UPLOAD_MAX_MEMORY_SIZE = 20971520\\nDATA_UPLOAD_MAX_MEMORY_SIZE\n = 20971520\\n\\n# Cloudflare\\nCLOUDFLARE_API_URL = 'https://api.cloudflare.com/client/v4/'\\nCLOUDFLARE_API_KEY\n = ''\\nCLOUDFLARE_API_EMAIL = ''\\nCLOUDFLARE_ZONE_ID = ''\\n\"\n": parse error in "mynamespace-nonprod-chart/templates/configmap.yaml": template: mynamespace-nonprod-chart/templates/configmap.yaml:12: unexpected unclosed action in command </code></pre> <p>If I had set <code>env.RABBITMQ_HOST:rabbitmq.mydomain.com</code>, i'd expect the substituted value to be:</p> <pre><code>CELERY_BROKER_URL = "amqp://admin:admin@rabbitmq.mydomain.com:5672/" </code></pre> <p>Values:</p> <pre><code>useCustomApplicatonConfigs: - mountPath: /var/www/html/django type: py name: - settings env: CELERY_BROKER_URL: "amqp://admin:xWQ7ye7YotNif@rabbitmq-imagegateway-dev-shared-headless:5672" </code></pre> <p>Does anyone know the correct syntax for this? 
Do I need to escape the <code>@</code> or something?</p> <p>Edit: Added values section.</p>
<p>There are two things I had to change to get my configuration to work.</p> <p>The first issue was that I was using a YAML comment instead of a template comment, which Helm tried to substitute when running <code>helm install</code>, so I removed it completely. According to the Helm documentation, I should have been using <code>{{- /* This is a comment. */ -}}</code> instead of <code>#</code>.</p> <p>The second issue was with the spaces between the curly braces <code>{{</code> and <code>}}</code>.</p> <p>Before:</p> <pre><code>CELERY_BROKER_URL = "amqp://{{ .Values.env.RABBITMQ_USER }}:{{ .Values.env.RABBITMQ_PASSWORD }}@{{ .Values.env.RABBITMQ_HOST }}:{{ .Values.env.RABBITMQ_PORT }}/" </code></pre> <p>After (working):</p> <pre><code>CELERY_BROKER_URL = "amqp://{{.Values.env.RABBITMQ_USER}}:{{.Values.env.RABBITMQ_PASSWORD}}@{{.Values.env.RABBITMQ_HOST}}:{{.Values.env.RABBITMQ_PORT}}/" </code></pre>
<p>I have an application in a container which reads a YAML file which contains data like </p> <pre><code> initializationCount=0 port=980 </code></pre> <p>Now I want to remove those hard-coded values from the application and get them out of the container. Hence I created a ConfigMap with all configuration values. I used the ConfigMap keys as environment variables while deploying the pod.</p> <p>My issue is that, if I want to use these environment variables in my YAML file like </p> <pre><code> initializationCount=${iCount} port=${port} </code></pre> <p>the API which reads this YAML file throws a NumberFormatException, since the env variables are always strings. I do not have control over the API which reads my YAML file.</p> <p>I have tried </p> <pre><code> initializationCount=!!int ${iCount} </code></pre> <p>but it does not work.</p>
<p>Rather than pulling in the configmap values as environment variables, try mounting the configmap as a volume at runtime. </p> <p>The configmap should have one key, which is the name of your YAML file. The value for that key should be the contents of the file. </p> <p>This data will be mounted to the container's filesystem when the pod initializes. That way your app will read the config YAML the same way it always has, but the values will be externalized in the configmap.</p> <p>Something like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-pod spec: containers: - name: my-app image: my-app:latest volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: app-config </code></pre> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: app-config data: config.yaml: | initializationCount=0 port=980 </code></pre> <p>Kubernetes docs <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap" rel="nofollow noreferrer">here</a></p>
<p>I have a <code>secret.yaml</code> file inside the templates directory with the following data:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: appdbpassword stringData: password: password </code></pre> <p>I also have a ConfigMap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: appdbconfigmap data: jdbcUrl: jdbc:oracle:thin:@proxy:service username: bhargav </code></pre> <p>I am using the following pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: expense-pod-sample-1 spec: containers: - name: expense-container-sample-1 image: exm:1 command: [ "/bin/sh", "-c", "--" ] args: [ "while true; do sleep 30; done;" ] envFrom: - configMapRef: name: appdbconfigmap env: - name: password valueFrom: secretKeyRef: name: appdbpassword key: password </code></pre> <p>When I use the helm install command I see the pod running but if I try to use the environment variable <code>${password}</code> in my application, it just does not work. It says the password is wrong. It does not complain about the username which is a ConfigMap. This happens only if I use helm. If I don't use helm and independently-run all the YAML files using kubectl, my application access both username and password correctly.</p> <p>Am I missing anything here ?</p>
<pre><code>apiVersion: v1 kind: Secret metadata: name: test-secret data: password: cGFzc3dvcmQ= </code></pre> <p>You can also add the secret like this, converting the data into base64 format yourself (<code>stringData</code> does it automatically when you create the secret). Make sure to encode without a trailing newline, e.g. <code>echo -n password | base64</code>; a plain <code>echo password | base64</code> yields <code>cGFzc3dvcmQK</code>, the decoded value then contains a newline, and the password check will fail.</p> <p>Then try adding the secret to the environment this way:</p> <pre><code>envFrom: - secretRef: name: test-secret </code></pre>
<p>When I run <code>sudo minikube start --vm-driver=none</code>, I get this error:</p> <pre><code>Error restarting cluster: waiting for apiserver: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose
</code></pre>
<p>I found the answer myself, so I'm posting it here: run <code>minikube delete</code> and then rerun the <code>minikube start</code> command.</p>
<p>As the title suggests, I can view Kubernetes bearer tokens in the Jenkins logs (/logs/all endpoints). Isn't this a security concern? Is there a way to stop it without having to meddle with the Kubernetes plugin source code? </p> <p>Edit:</p> <p>Example log:</p> <pre><code>Aug 29, 2020 7:39:41 PM okhttp3.internal.platform.Platform log INFO: Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3Nlcn </code></pre>
<p>See the documentation for <a href="https://github.com/square/okhttp/tree/master/okhttp-logging-interceptor" rel="nofollow noreferrer">okhttp</a></p> <blockquote> <p><strong>Warning:</strong> The logs generated by this interceptor when using the HEADERS or BODY levels have the potential to leak sensitive information such as "Authorization" or "Cookie" headers and the contents of request and response bodies. This data should only be logged in a controlled way or in a non-production environment.</p> </blockquote> <p>So you should probably not activate that logging in an environment where you have <strong>sensitive tokens</strong>.</p>
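<p>If you cannot switch the logging level off at the source, one mitigation (a sketch, assuming the records go through Java's standard <code>java.util.logging</code>, as the timestamped <code>okhttp3.internal.platform.Platform log INFO</code> line in the question suggests) is to raise the level of that logger so INFO header dumps are dropped:</p>

```properties
# logging.properties fragment, passed to the JVM with
# -Djava.util.logging.config.file=/path/to/logging.properties
# (or set the same logger level via a Jenkins system log recorder)
okhttp3.internal.platform.Platform.level = WARNING
```

<p>This only hides the tokens from the logs; rotating any tokens that have already been written there is still advisable.</p>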
<p>When I run <code>sudo minikube start --vm-driver=none</code> it gives me this error. I am using Ubuntu 16.04.</p>

<pre><code>Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
output: [init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-socat]: socat not found in system path
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.1.1:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Port-10250]: Port 10250 is in use
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10251]: Port 10251 is in use
	[ERROR Port-10252]: Port 10252 is in use
	[ERROR Port-2380]: Port 2380 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1
</code></pre>
<p>The <code>none</code> driver makes a lot of assumptions that would normally be handled by the VM setup process used by all other drivers. In this case you can see that some of the ports it expects to use are already in use so it won't continue. You would need to remove whatever is using those ports. The <code>none</code> driver is generally used for very niche situations, almost always in an ephemeral CI environment, though maybe also check out KinD as a newer tool that might address that use case better. If you just want to run a local dev environment on Linux without an intermediary VM, maybe try k3s or microk8s instead.</p>
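<p>To find out what is holding the ports kubeadm complains about, <code>sudo lsof -i :10251</code> or <code>ss -ltnp</code> will name the offending process. If neither tool is installed, a plain-bash probe (a sketch; it only tells you whether something is listening, not what) also works:</p>

```shell
# bash-only: /dev/tcp/<host>/<port> is a bash builtin pseudo-path;
# the redirection succeeds only if something accepts the connection
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# the ports kubeadm reported as busy
for p in 10251 10252 2380; do
  if port_in_use "$p"; then
    echo "port $p: in use"
  else
    echo "port $p: free"
  fi
done
```

<p>Ports 10251/10252/2380 are the defaults for kube-scheduler, kube-controller-manager and etcd peers, so a leftover control plane from a previous run is the usual culprit.</p>
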
<p>I'm creating a Kubernetes service in Azure with the Advanced Networking options, where I've selected a particular vNet and subnet for it.</p> <p>I'm getting this error:</p> <pre><code>{"code":"InvalidTemplateDeployment","message":"The template deployment failed with error: 'Authorization failed for template resource '&lt;vnetid&gt;/&lt;subnetid&gt;/Microsoft.Authorization/xxx' of type 'Microsoft.Network/virtualNetworks/subnets/providers/roleAssignments'. The client '&lt;emailid&gt;' with object id 'xxx' does not have permission to perform action 'Microsoft.Authorization/roleAssignments/write' at scope '/subscriptions/&lt;subid&gt;/resourceGroups/&lt;rgid&gt;/providers/Microsoft.Network/virtualNetworks/&lt;vnetid&gt;/subnets/&lt;subnetid&gt;/providers/Microsoft.Authorization/roleAssignments/xxx'.'."}
</code></pre> <p>I've got the Contributor role.</p>
<p>As per the following article, you will need Owner privileges over the vNet to change access to it.</p> <p><a href="https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#built-in-role-descriptions" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#built-in-role-descriptions</a></p>
<p>Suppose we have a 4-node EKS cluster backed by an EC2 auto-scaling group (minimum 4 nodes), with a Kubernetes application stack deployed on it, one pod per node. Now traffic increases and HPA is triggered at the EKS level, so there are 8 pods in total, two pods per node. Cluster auto-scaling is also triggered, so there are now 6 nodes in total.</p> <p>It's observed that all pods remain in their current state, even after the autoscaling.</p> <p>Is there a direct and simpler way to have some of the already-running pods automatically launch on the additional nodes, i.e. detect the recently added idle (non-utilized) workers and reschedule onto them, for example by force-evicting pods?</p> <p>Thanks in advance.</p>
<p>One easy way is to delete all those pods by label selector using the command below, and let the deployment recreate them in the cluster:</p> <pre><code>kubectl delete po -l key=value
</code></pre> <p>There could be other possibilities; I'd be glad to hear of others.</p>
<p>I am trying to use the Kubernetes nginx ingress controller (quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0). Below is my ingress object.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cc-store-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/add-base-url: "true"
    #nginx.ingress.kubernetes.io/configuration-snippet: |
    #  sub_filter "http://my-ip:30021/" "http://my-ip:30021/app/";
    #  sub_filter_once off;
spec:
  #tls:
  #- secretName: tls-secret
  rules:
  - host: my-ip
    http:
      paths:
      - path: /app/?(.*)
        backend:
          serviceName: appsvc
          servicePort: 7201
</code></pre> <p>When I try to access this service via the ingress I hit a blank page, which I understand is because the responses (a set of JavaScript, CSS and other files) are requested from my-ip:30021/ instead of my-ip:30021/app. (Checking the nginx logs, the initial connection gives a 200 response; the subsequent loading of CSS and JS fails with 404.)</p> <p>Is there a way to overcome this? Neither "sub_filter" nor the add-base-url annotation helped.</p> <p>Is there any way to achieve path rewriting for the response? Would using another ingress controller (instead of nginx) make this easier to overcome?</p>
<p>This is a sample of how I added a base path to a service that doesn't support it at all, as well as a solution for handling redirects to the URLs without the base path.</p> <pre><code>annotations:
  kubernetes.io/ingress.class: nginx
  # catch $1 from 'path' capture group
  nginx.ingress.kubernetes.io/rewrite-target: /$1
  # handle redirects
  # nginx.ingress.kubernetes.io/proxy-redirect-from: http://&lt;host&gt;/
  # nginx.ingress.kubernetes.io/proxy-redirect-to: /&lt;basePath&gt;/
  nginx.ingress.kubernetes.io/configuration-snippet: |
    proxy_set_header Accept-Encoding "";
    sub_filter_last_modified off;
    # add base path to all static resources
    sub_filter '&lt;head&gt;' '&lt;head&gt; &lt;base href="/&lt;basePath&gt;/"&gt;';
    sub_filter 'href="/' 'href="';
    sub_filter 'src="/' 'src="';
    # set types of files to 'sub-filter'
    sub_filter_once off;
    sub_filter_types text/html text/css text/javascript application/javascript;
...
      - path: /&lt;basePath&gt;/?(.*)
</code></pre>
<p>I have deployed the metric-server on my Kubernetes cluster and it's working just fine when I run the following command:</p> <pre><code>kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
</code></pre> <p>I want to access the metric-server from a pod. For that, I use the <code>service/metric-server</code>'s IP; the pod is in the same namespace as the metric-server. The way I'm trying to access the metrics is this:</p> <pre><code>myurl := fmt.Sprintf("https://%s:%s/apis/metrics.k8s.io/v1beta1/nodes/", serviceHost, servicePort)
u, err := url.Parse(myurl)
if err != nil {
    panic(err)
}
req, err := http.NewRequest(httpMethod, u.String(), nil)
if err != nil {
    log.Printf("Can't send req: %s", err)
}

caToken, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
if err != nil {
    panic(err) // cannot find token file
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", string(caToken)))

caCertPool := x509.NewCertPool()
caCert, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
if err != nil {
    panic(err)
}
caCertPool.AppendCertsFromPEM(caCert)

client := &amp;http.Client{
    Transport: &amp;http.Transport{
        TLSClientConfig: &amp;tls.Config{
            RootCAs: caCertPool,
        },
    },
}

resp, err := client.Do(req)
if err != nil {
    log.Printf("sending helm deploy payload failed: %s", err.Error())
    panic(err)
}
</code></pre> <p>This is not working, and the <code>logs</code> output for the pod is:</p> <pre><code>Get https://METRIC-SERVER-SERVICE-IP/apis/metrics.k8s.io/v1beta1/nodes: x509: certificate is valid for 127.0.0.1, not METRIC-SERVER-SERVICE-IP
</code></pre> <p>Is this the right way to access the metric-server from a pod?</p>
<p>This is what I did to access the metrics-server on a local deployment:</p> <ol> <li>Set up proper RBAC for the pod's service account.</li> <li>Inside the pod:</li> </ol> <pre><code>export CURL_CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -H "Authorization: Bearer $TOKEN" -k https://10.100.123.57/apis/metrics.k8s.io/v1beta1/nodes
</code></pre> <p>Note that <code>-k</code> skips certificate verification, which works around the hostname mismatch shown in the error above.</p>
<p>I've searched the internet but I haven't found clear answers.</p> <p>Kops is for production-grade clusters and is vendor agnostic, and I get that, but compared to eksctl what are the differences?</p> <p>Also, most of the articles I found are a year+ old, and with the speed the K8s ecosystem moves they might be outdated.</p>
<p><code>eksctl</code> is specifically meant to bootstrap clusters using Amazon's managed Kubernetes service (EKS). With EKS, Amazon will take responsibility for managing your Kubernetes Master Nodes (at an additional cost). </p> <p><code>kops</code> is a Kubernetes Installer. It will install kubernetes on any type of node (e.g. an amazon ec2 instance, local virtual machine). But you will be responsible for maintaining the master nodes (and the complexity that comes with that).</p>
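<p>To give a feel for the difference in workflow: with <code>eksctl</code> the whole cluster, managed control plane included, is described in a short config and created in one command. A minimal sketch (cluster name, region and node group values below are placeholders):</p>

```yaml
# cluster.yaml -- created with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster    # placeholder
  region: eu-west-1     # placeholder
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
```

<p>With <code>kops</code> you would instead generate and maintain the full cluster spec yourself, master instance groups included, and run the masters on your own EC2 instances.</p>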
<p>Suppose we have a 4-node EKS cluster backed by an EC2 auto-scaling group (minimum 4 nodes), with a Kubernetes application stack deployed on it, one pod per node. Now traffic increases and HPA is triggered at the EKS level, so there are 8 pods in total, two pods per node. Cluster auto-scaling is also triggered, so there are now 6 nodes in total.</p> <p>It's observed that all pods remain in their current state, even after the autoscaling.</p> <p>Is there a direct and simpler way to have some of the already-running pods automatically launch on the additional nodes, i.e. detect the recently added idle (non-utilized) workers and reschedule onto them, for example by force-evicting pods?</p> <p>Thanks in advance.</p>
<p>Take a look at the <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">Descheduler</a>. This project runs as a Kubernetes Job that aims at killing pods when it thinks the cluster is unbalanced.</p> <p>The <a href="https://github.com/kubernetes-sigs/descheduler#lownodeutilization" rel="nofollow noreferrer"><code>LowNodeUtilization</code></a> strategy seems to fit your case:</p> <blockquote> <p>This strategy finds nodes that are under utilized and evicts pods, if possible, from other nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes.</p> </blockquote> <hr> <p>Another option is to apply a little chaos engineering manually, forcing a rolling update on your deployment, and hopefully the scheduler will fix the balance problem when the pods are recreated.</p> <p>You can use <code>kubectl rollout restart deployment/my-deployment</code>. It's way better than simply deleting the pods with <code>kubectl delete pod</code>, as the rollout will ensure availability during the "rebalancing" (although deleting the pods altogether increases your chances for a better rebalance).</p>
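<p>If you do trigger such a rolling restart, a soft pod anti-affinity rule on the deployment makes it more likely that the recreated pods actually spread over the new nodes instead of landing on the already-busy ones again. A sketch (the <code>app: my-app</code> label is a placeholder for your own pod label):</p>

```yaml
# under the deployment's pod template spec
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: kubernetes.io/hostname   # spread across nodes
        labelSelector:
          matchLabels:
            app: my-app   # placeholder: label of the pods to spread
```

<p>Being "preferred" rather than "required", this only biases scheduling and never blocks a pod when the cluster is full.</p>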
<p>I have several .NET Core applications which shut down for no obvious reason. It looks like this has been happening since the implementation of health checks, but I'm not able to see the killing commands in Kubernetes.</p> <p><strong>cmd</strong></p> <pre><code>kubectl describe pod mypod
</code></pre> <p><strong>output</strong> (the restart count is this high because of the daily shutdown in the evening; stage environment)</p> <pre><code>Name:           mypod
...
Status:         Running
...
Controlled By:  ReplicaSet/mypod-deployment-6dbb6bcb65
Containers:
  myservice:
    State:          Running
      Started:      Fri, 01 Nov 2019 09:59:40 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 01 Nov 2019 07:19:07 +0100
      Finished:     Fri, 01 Nov 2019 09:59:37 +0100
    Ready:          True
    Restart Count:  19
    Liveness:       http-get http://:80/liveness delay=10s timeout=1s period=5s #success=1 #failure=10
    Readiness:      http-get http://:80/hc delay=10s timeout=1s period=5s #success=1 #failure=10
...
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
...
Events:
  Type     Reason     Age                    From                               Message
  ----     ------     ----                   ----                               -------
  Warning  Unhealthy  18m (x103 over 3h29m)  kubelet, aks-agentpool-40946522-0  Readiness probe failed: Get http://10.244.0.146:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  18m (x29 over 122m)    kubelet, aks-agentpool-40946522-0  Liveness probe failed: Get http://10.244.0.146:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre> <p>These are the pod's logs:</p> <p><strong>cmd</strong></p> <pre><code>kubectl logs mypod --previous
</code></pre> <p><strong>output</strong></p> <pre><code>Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
Application is shutting down...
</code></pre> <p><strong>corresponding log from azure</strong> <a href="https://i.stack.imgur.com/65rxf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/65rxf.png" alt="azure-log"></a></p> <p><strong>cmd</strong></p> <pre><code>kubectl get events
</code></pre> <p><strong>output</strong> (what I'm missing here is the killing event; my assumption is that the pod was restarted because of the multiple failed health checks, but I don't see a corresponding event)</p> <pre><code>LAST SEEN   TYPE      REASON                   OBJECT                          MESSAGE
39m         Normal    NodeHasSufficientDisk    node/aks-agentpool-40946522-0   Node aks-agentpool-40946522-0 status is now: NodeHasSufficientDisk
39m         Normal    NodeHasSufficientMemory  node/aks-agentpool-40946522-0   Node aks-agentpool-40946522-0 status is now: NodeHasSufficientMemory
39m         Normal    NodeHasNoDiskPressure    node/aks-agentpool-40946522-0   Node aks-agentpool-40946522-0 status is now: NodeHasNoDiskPressure
39m         Normal    NodeReady                node/aks-agentpool-40946522-0   Node aks-agentpool-40946522-0 status is now: NodeReady
39m         Normal    CREATE                   ingress/my-ingress              Ingress default/ebizsuite-ingress
39m         Normal    CREATE                   ingress/my-ingress              Ingress default/ebizsuite-ingress
7m2s        Warning   Unhealthy                pod/otherpod2                   Readiness probe failed: Get http://10.244.0.158:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
7m1s        Warning   Unhealthy                pod/otherpod2                   Liveness probe failed: Get http://10.244.0.158:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
40m         Warning   Unhealthy                pod/otherpod2                   Liveness probe failed: Get http://10.244.0.158:80/liveness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
44m         Warning   Unhealthy                pod/otherpod1                   Liveness probe failed: Get http://10.244.0.151:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
5m35s       Warning   Unhealthy                pod/otherpod1                   Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
40m         Warning   Unhealthy                pod/otherpod1                   Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
8m8s        Warning   Unhealthy                pod/mypod                       Readiness probe failed: Get http://10.244.0.146:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
8m7s        Warning   Unhealthy                pod/mypod                       Liveness probe failed: Get http://10.244.0.146:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s          Warning   Unhealthy                pod/otherpod1                   Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre> <p><strong>curl from another pod</strong> (I've executed this in a loop every second for a long time and never received anything other than a 200 OK)</p> <pre><code>kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/hc
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
{"status":"Healthy","totalDuration":"00:00:00.0647250","entries":{"self":{"data":{},"duration":"00:00:00.0000012","status":"Healthy"},"warmup":{"data":{},"duration":"00:00:00.0000007","status":"Healthy"},"TimeDB-check":{"data":{},"duration":"00:00:00.0341533","status":"Healthy"},"time-blob-storage-check":{"data":{},"duration":"00:00:00.0108192","status":"Healthy"},"time-rabbitmqbus-check":{"data":{},"duration":"00:00:00.0646841","status":"Healthy"}}}
100   454    0   454    0     0   6579      0 --:--:-- --:--:-- --:--:--  6579
</code></pre> <p><strong>curl</strong></p> <pre><code>kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/liveness
Healthy
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100     7    0     7    0     0   7000      0 --:--:-- --:--:-- --:--:--  7000
</code></pre>
<p>From the logs it appears that the issue is with the liveness and readiness probes: they are failing, and that is why the application is getting restarted.</p> <p>Remove the probes and check whether the application comes up and stays up. Then get inside the pod and call the liveness and readiness endpoints yourself to investigate why they are failing.</p>
<p>I have several .NET Core applications which shut down for no obvious reason. It looks like this has been happening since the implementation of health checks, but I'm not able to see the killing commands in Kubernetes.</p> <p><strong>cmd</strong></p> <pre><code>kubectl describe pod mypod
</code></pre> <p><strong>output</strong> (the restart count is this high because of the daily shutdown in the evening; stage environment)</p> <pre><code>Name:           mypod
...
Status:         Running
...
Controlled By:  ReplicaSet/mypod-deployment-6dbb6bcb65
Containers:
  myservice:
    State:          Running
      Started:      Fri, 01 Nov 2019 09:59:40 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 01 Nov 2019 07:19:07 +0100
      Finished:     Fri, 01 Nov 2019 09:59:37 +0100
    Ready:          True
    Restart Count:  19
    Liveness:       http-get http://:80/liveness delay=10s timeout=1s period=5s #success=1 #failure=10
    Readiness:      http-get http://:80/hc delay=10s timeout=1s period=5s #success=1 #failure=10
...
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
...
Events:
  Type     Reason     Age                    From                               Message
  ----     ------     ----                   ----                               -------
  Warning  Unhealthy  18m (x103 over 3h29m)  kubelet, aks-agentpool-40946522-0  Readiness probe failed: Get http://10.244.0.146:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  18m (x29 over 122m)    kubelet, aks-agentpool-40946522-0  Liveness probe failed: Get http://10.244.0.146:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre> <p>These are the pod's logs:</p> <p><strong>cmd</strong></p> <pre><code>kubectl logs mypod --previous
</code></pre> <p><strong>output</strong></p> <pre><code>Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
Application is shutting down...
</code></pre> <p><strong>corresponding log from azure</strong> <a href="https://i.stack.imgur.com/65rxf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/65rxf.png" alt="azure-log"></a></p> <p><strong>cmd</strong></p> <pre><code>kubectl get events
</code></pre> <p><strong>output</strong> (what I'm missing here is the killing event; my assumption is that the pod was restarted because of the multiple failed health checks, but I don't see a corresponding event)</p> <pre><code>LAST SEEN   TYPE      REASON                   OBJECT                          MESSAGE
39m         Normal    NodeHasSufficientDisk    node/aks-agentpool-40946522-0   Node aks-agentpool-40946522-0 status is now: NodeHasSufficientDisk
39m         Normal    NodeHasSufficientMemory  node/aks-agentpool-40946522-0   Node aks-agentpool-40946522-0 status is now: NodeHasSufficientMemory
39m         Normal    NodeHasNoDiskPressure    node/aks-agentpool-40946522-0   Node aks-agentpool-40946522-0 status is now: NodeHasNoDiskPressure
39m         Normal    NodeReady                node/aks-agentpool-40946522-0   Node aks-agentpool-40946522-0 status is now: NodeReady
39m         Normal    CREATE                   ingress/my-ingress              Ingress default/ebizsuite-ingress
39m         Normal    CREATE                   ingress/my-ingress              Ingress default/ebizsuite-ingress
7m2s        Warning   Unhealthy                pod/otherpod2                   Readiness probe failed: Get http://10.244.0.158:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
7m1s        Warning   Unhealthy                pod/otherpod2                   Liveness probe failed: Get http://10.244.0.158:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
40m         Warning   Unhealthy                pod/otherpod2                   Liveness probe failed: Get http://10.244.0.158:80/liveness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
44m         Warning   Unhealthy                pod/otherpod1                   Liveness probe failed: Get http://10.244.0.151:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
5m35s       Warning   Unhealthy                pod/otherpod1                   Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
40m         Warning   Unhealthy                pod/otherpod1                   Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
8m8s        Warning   Unhealthy                pod/mypod                       Readiness probe failed: Get http://10.244.0.146:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
8m7s        Warning   Unhealthy                pod/mypod                       Liveness probe failed: Get http://10.244.0.146:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s          Warning   Unhealthy                pod/otherpod1                   Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre> <p><strong>curl from another pod</strong> (I've executed this in a loop every second for a long time and never received anything other than a 200 OK)</p> <pre><code>kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/hc
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
{"status":"Healthy","totalDuration":"00:00:00.0647250","entries":{"self":{"data":{},"duration":"00:00:00.0000012","status":"Healthy"},"warmup":{"data":{},"duration":"00:00:00.0000007","status":"Healthy"},"TimeDB-check":{"data":{},"duration":"00:00:00.0341533","status":"Healthy"},"time-blob-storage-check":{"data":{},"duration":"00:00:00.0108192","status":"Healthy"},"time-rabbitmqbus-check":{"data":{},"duration":"00:00:00.0646841","status":"Healthy"}}}
100   454    0   454    0     0   6579      0 --:--:-- --:--:-- --:--:--  6579
</code></pre> <p><strong>curl</strong></p> <pre><code>kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/liveness
Healthy
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100     7    0     7    0     0   7000      0 --:--:-- --:--:-- --:--:--  7000
</code></pre>
<p>I think you can:</p> <ol> <li><p>Modify the liveness and readiness probes to check only <code>http://:80</code>, i.e. cut the path from the URL.</p></li> <li><p>Remove the liveness and readiness probes (enabled=false).</p></li> <li><p>Increase the initial delay to 5 or 10 minutes; after that you can <code>kubectl exec -it &lt;pod-name&gt; sh/bash</code> into the pod and debug. You can use <code>netstat</code> to check whether the service you want has started on port 80. Finally, you can do the same thing the probes do with <code>curl -v http://localhost</code>; if this command returns a code other than 200, that's why your pod keeps restarting.</p></li> </ol> <p>Hope this helps.</p>
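<p>For option 3, the probe timings from the <code>describe</code> output in the question (<code>timeout=1s period=5s</code>) are quite aggressive for a .NET Core app that is slow under load or during warm-up; a sketch of more forgiving values (numbers are illustrative, tune to your app):</p>

```yaml
livenessProbe:
  httpGet:
    path: /liveness
    port: 80
  initialDelaySeconds: 60   # give the app time to warm up
  timeoutSeconds: 5         # was 1s in the failing setup
  periodSeconds: 10
  failureThreshold: 10
```

<p>A slow health endpoint then gets 5 seconds to answer instead of 1, which alone can stop spurious <code>Client.Timeout exceeded</code> failures.</p>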
<p>I specified ephemeral storage limits and requests on a container, and I'm using the latest amazon-eks-node-1.12-v20190701 image.</p> <p>My issue is that when I specify <code>ephemeral-storage: 1Mi</code> my pod is evicted for using too much. I can easily add more, but I would like to know how exactly this works, and the Kubernetes documentation is not very helpful.</p> <p>From what I read, there is a garbage collection <code>that will clean up unused images and unused containers. Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.</code></p> <p>Does that apply to ephemeral storage too?</p> <p>How do I check what's inside the ephemeral storage (via the container or the node?) and how can I reduce it?</p> <p>Is it true that when ephemeral storage is at 85% and garbage collection runs, it will be cleaned up?</p> <p>Sorry for the many questions; I'm just trying to understand, and it's difficult to find a simple answer.</p>
<p>As far as I'm aware (please correct me if this is wrong) there is no Garbage Collection for Ephemeral Storage.</p> <p>As we can read in the Kubernetes documentation regarding <a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/" rel="nofollow noreferrer">Garbage Collection</a>.</p> <blockquote> <p>Some Kubernetes objects are owners of other objects. For example, a ReplicaSet is the owner of a set of Pods. The owned objects are called <em>dependents</em> of the owner object. Every dependent object has a <code>metadata.ownerReferences</code> field that points to the owning object.</p> <p>Sometimes, Kubernetes sets the value of <code>ownerReference</code> automatically. For example, when you create a ReplicaSet, Kubernetes automatically sets the <code>ownerReference</code> field of each Pod in the ReplicaSet. In 1.8, Kubernetes automatically sets the value of <code>ownerReference</code> for objects created or adopted by ReplicationController, ReplicaSet, StatefulSet, DaemonSet, Deployment, Job and CronJob.</p> <p>You can also specify relationships between owners and dependents by manually setting the <code>ownerReference</code> field.</p> </blockquote> <p>So we can take advantage of Garbage Collection to delete dependent objects.</p> <blockquote> <p>When you delete an object, you can specify whether the object’s dependents are also deleted automatically. Deleting dependents automatically is called <em>cascading deletion</em>. 
There are two modes of <em>cascading deletion</em>: <em>background</em> and <em>foreground</em>.</p> <p>If you delete an object without deleting its dependents automatically, the dependents are said to be <em>orphaned</em>.</p> </blockquote> <p>In Kubernetes v1.16 a new resource was introduced: <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#local-ephemeral-storage" rel="nofollow noreferrer">Local ephemeral storage</a>.</p> <blockquote> <p>In each Kubernetes node, kubelet’s root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers.</p> </blockquote> <p>And to answer your question:</p> <blockquote> <p>How do I check what’s inside the ephemeral storage (via container or a node?) and how can I reduce it?</p> </blockquote> <p>It might be possible to exec into the docker container that has the storage mounted and see what's there, but other than that I don't see a way of accessing the content. As for reducing the storage (I have not tested that), it might remove the content of the ephemeral storage, because it's not extendable.</p>
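<p>On the practical side, the <code>1Mi</code> limit from the question is almost guaranteed to be exceeded immediately, because container logs and the container's writable layer count against local ephemeral storage. Something in the hundreds of megabytes to gigabytes is a more realistic starting point (the values below are illustrative, not a recommendation):</p>

```yaml
# per-container, under the pod spec
resources:
  requests:
    ephemeral-storage: "500Mi"
  limits:
    ephemeral-storage: "2Gi"
```

<p>To see what is consuming the space, something like <code>kubectl exec &lt;pod&gt; -- du -sh /tmp /var/log</code> inside the container (paths are examples) is a quick first check.</p>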
<p>I am using a Kubernetes cluster with 1 master node and 2 workers, each with a 4-core CPU and 256MB RAM. I wanted to know how much CPU and RAM is needed for the kubelet.</p> <p>Is there any way to set limits (CPU, memory) for the kubelet? I searched the documentation but found only the worker node requirements.</p>
<p>I think you should understand what <code>kubelet</code> does. This can be found in <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet documentation</a>.</p> <blockquote> <p>The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.</p> <p>The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms (primarily through the apiserver) and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.</p> <p>Other than from an PodSpec from the apiserver, there are three ways that a container manifest can be provided to the Kubelet.</p> <p>File: Path passed as a flag on the command line. Files under this path will be monitored periodically for updates. The monitoring period is 20s by default and is configurable via a flag.</p> <p>HTTP endpoint: HTTP endpoint passed as a parameter on the command line. This endpoint is checked every 20 seconds (also configurable with a flag).</p> <p>HTTP server: The kubelet can also listen for HTTP and respond to a simple API (underspec’d currently) to submit a new manifest.</p> </blockquote> <p>There are several flags that you could use with the kubelet, but most of them are <strong>DEPRECATED</strong> and parameters should be set via the config file specified by the kubelet's --config flag. 
This is explained in <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">Set Kubelet parameters via a config file</a>.</p> <p>The flags that might be interesting for you are:</p> <p><code>--application-metrics-count-limit int</code></p> <blockquote> <p>Max number of application metrics to store (per container) (default 100) (DEPRECATED)</p> </blockquote> <p><code>--cpu-cfs-quota</code></p> <blockquote> <p>Enable CPU CFS quota enforcement for containers that specify CPU limits (default true) (DEPRECATED)</p> </blockquote> <p><code>--event-qps int32</code></p> <blockquote> <p>If > 0, limit event creations per second to this value. If 0, unlimited. (default 5) (DEPRECATED)</p> </blockquote> <p><code>--event-storage-age-limit string</code></p> <blockquote> <p>Max length of time for which to store events (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is a duration. Default is applied to all non-specified event types (default "default=0") (DEPRECATED)</p> </blockquote> <p><code>--event-storage-event-limit string</code></p> <blockquote> <p>Max number of events to store (per type). Value is a comma separated list of key values, where the keys are event types (e.g.: creation, oom) or "default" and the value is an integer. Default is applied to all non-specified event types (default "default=0") (DEPRECATED)</p> </blockquote> <p><code>--log-file-max-size uint</code></p> <blockquote> <p>Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)</p> </blockquote> <p><code>--pods-per-core int32</code></p> <blockquote> <p>Number of Pods per core that can run on this Kubelet. The total number of Pods on this Kubelet cannot exceed max-pods, so max-pods will be used if this calculation results in a larger number of Pods allowed on the Kubelet. 
A value of 0 disables this limit. (DEPRECATED)</p> </blockquote> <p><code>--registry-qps int32</code></p> <blockquote> <p>If > 0, limit registry pull QPS to this value. If 0, unlimited. (default 5) (DEPRECATED)</p> </blockquote>
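<p>Since most of these flags are deprecated, the usual place to set such limits today is a <code>KubeletConfiguration</code> file passed via <code>--config</code>. A minimal sketch (the values are illustrative, not recommendations):</p> <pre><code># /var/lib/kubelet/config.yaml -- passed to the kubelet with --config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
eventRecordQPS: 5        # config-file counterpart of --event-qps
podsPerCore: 10          # config-file counterpart of --pods-per-core
registryPullQPS: 5       # config-file counterpart of --registry-qps
# Reserving resources for system daemons and Kubernetes components caps
# what pods may consume, which indirectly protects the kubelet itself:
kubeReserved:
  cpu: 100m
  memory: 256Mi
systemReserved:
  cpu: 100m
  memory: 256Mi
</code></pre> <p>Note there is no direct "limit" on the kubelet's own CPU/memory here; <code>kubeReserved</code>/<code>systemReserved</code> only reserve headroom for it and the rest of the system.</p>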
<p>I have the following in my dep.yml <code>{{ toYaml .Values.volumes | indent 8 }}</code> which takes an array from <code>values.yml</code> of volumes and then loads it on the dep.yml file.</p> <p>I want the following result on my dep.yml from the initial array</p> <pre><code> volumes: - name: volume persistentVolumeClaim: claimName: {{ Release.Name }}-volume-claim - name: volume-a persistentVolumeClaim: claimName: {{ Release.Name }}-volume-a-claim - name: volume-b persistentVolumeClaim: claimName: {{ Release.Name }}-volume-b-claim </code></pre> <p>Adding the <code>{{ Release.Name }}</code> dynamically to the volume claim name for each element of the array.</p> <p>Is there any way to achieve this modifying the <code>{{ toYaml .Values.volumes | indent 8 }}</code> directive? </p>
<p>Helm includes <a href="https://helm.sh/docs/developing_charts/#using-the-tpl-function" rel="noreferrer">a <code>tpl</code> function</a> that expands template content in a string. I would fit this into the pipeline after rendering the value to a string, but before indenting it; its parameters don't quite fit into the standard pipeline setup.</p> <pre><code>{{ tpl (toYaml .Values.volumes) . | indent 8 }} </code></pre>
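<p>With <code>tpl</code> in place, the entries in <code>values.yaml</code> can carry template actions themselves. A sketch of the corresponding values (note the leading dot in <code>.Release.Name</code>, which the snippets in the question omit):</p> <pre><code>volumes:
  - name: volume
    persistentVolumeClaim:
      claimName: "{{ .Release.Name }}-volume-claim"
  - name: volume-a
    persistentVolumeClaim:
      claimName: "{{ .Release.Name }}-volume-a-claim"
  - name: volume-b
    persistentVolumeClaim:
      claimName: "{{ .Release.Name }}-volume-b-claim"
</code></pre> <p><code>tpl</code> renders these strings against the context passed as its second argument (the <code>.</code> in the pipeline), so <code>.Release.Name</code> resolves exactly as it would in the template file itself.</p>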
<p>I have a pod in my EKS cluster and I want to edit its yaml so that I can change the <code>read-only</code> values from <code>true</code> to <code>false</code>. This way I want to be able to make changes to the pod's system/image (I haven't exactly figured out its name), which at the moment is a <code>read-only file system</code>.</p> <p>Is that possible? Can I do that?</p> <p>I tried copying the current yaml contents and creating a new yaml file with the read-only values set to false, in order to use it as a replacement for the current one.</p> <p>The command I tried to use is:</p> <pre><code>kubectl apply -f telegraf-new.yaml --namespace examplenamespace -l app=polling-telegraf-s </code></pre> <p>and the error I get is:</p> <blockquote> <p>Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply The Pod "polling-telegraf-s-79f44d578f-khdjf" is invalid: spec: Forbidden: pod updates may not change fields other than <code>spec.containers[*].image</code>, <code>spec.initContainers[*].image</code>, <code>spec.activeDeadlineSeconds</code> or <code>spec.tolerations</code> (only additions to existing tolerations)</p> </blockquote> <p>I am not sure this is a good way to approach my problem, but I spent the last few days researching it and the results are not so encouraging.<br> Any help, tip, or advice in the right direction would be appreciated.</p> <p>Edit: <br> My yaml from <code>kubectl get pod --namespace tick -l app=polling-telegraf-s -o yaml</code> is: <br></p> <pre><code>apiVersion: v1 items: - apiVersion: v1 kind: Pod metadata: annotations: checksum/config: 45cc44098254d90e88878e037f6eb5803be739890e26d9070e21ac0c0650debd kubectl.kubernetes.io/last-applied-configuration: | 
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"checksum/config":"45cc44098254d90e88878e037f6eb5803be739890e26d9070e21ac0c0650debd","kubernetes.io/psp":"eks.privileged"},"creationTimestamp":"2019-10-30T15:49:57Z","generateName":"polling-telegraf-s-79f44d578f-","labels":{"app":"polling-telegraf-s","pod-template-hash":"79f44d578f"},"name":"polling-telegraf-s-79f44d578f-khdjf","namespace":"tick","ownerReferences":[{"apiVersion":"apps/v1","blockOwnerDeletion":true,"controller":true,"kind":"ReplicaSet","name":"polling-telegraf-s-79f44d578f","uid":"ec1e6988-fb2c-11e9-bdf2-02b7fbdf557a"}],"resourceVersion":"134887","selfLink":"/api/v1/namespaces/tick/pods/polling-telegraf-s-79f44d578f-khdjf","uid":"ec1fa8a5-fb2c-11e9-bdf2-02b7fbdf557a"},"spec":{"containers":[{"image":"telegraf:1.10.3-alpine","imagePullPolicy":"IfNotPresent","name":"polling-telegraf-s","resources":{"limits":{"cpu":"1","memory":"2Gi"},"requests":{"cpu":"100m","memory":"256Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/etc/telegraf","name":"config"},{"mountPath":"/var/run/utmp","name":"varrunutmpro","readOnly":true},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"default-token-htxsr","readOnly":true}]}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"nodeName":"ip-192-168-179-5.eu-west-2.compute.internal","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"serviceAccount":"default","serviceAccountName":"default","terminationGracePeriodSeconds":30,"tolerations":[{"effect":"NoExecute","key":"node.kubernetes.io/not-ready","operator":"Exists","tolerationSeconds":300},{"effect":"NoExecute","key":"node.kubernetes.io/unreachable","operator":"Exists","tolerationSeconds":300}],"volumes":[{"hostPath":{"path":"/var/run/utmp","type":""},"name":"varrunutmpro"},{"configMap":{"defaultMode":420,"name":"polling-telegraf-s"},"name":"config"},{"name":"default-token-htxsr","se
cret":{"defaultMode":420,"secretName":"default-token-htxsr"}}]},"status":{"conditions":[{"lastProbeTime":null,"lastTransitionTime":"2019-10-30T15:49:57Z","status":"True","type":"Initialized"},{"lastProbeTime":null,"lastTransitionTime":"2019-10-30T15:49:58Z","status":"True","type":"Ready"},{"lastProbeTime":null,"lastTransitionTime":"2019-10-30T15:49:58Z","status":"True","type":"ContainersReady"},{"lastProbeTime":null,"lastTransitionTime":"2019-10-30T15:49:57Z","status":"True","type":"PodScheduled"}],"containerStatuses":[{"containerID":"docker://a66f40111474ea28d1b1b7adf6d9e0278adb6d6aefa23b345cc1559174018f27","image":"telegraf:1.10.3-alpine","imageID":"docker-pullable://telegraf@sha256:9106295bc67459633b4d6151c2e1b9949e501560b2e659fe541bda691c566bcf","lastState":{},"name":"polling-telegraf-s","ready":true,"restartCount":0,"state":{"running":{"startedAt":"2019-10-30T15:49:58Z"}}}],"hostIP":"192.168.179.5","phase":"Running","podIP":"192.168.159.179","qosClass":"Burstable","startTime":"2019-10-30T15:49:57Z"}} kubernetes.io/psp: eks.privileged creationTimestamp: "2019-10-30T15:49:57Z" generateName: polling-telegraf-s-79f44d578f- labels: app: polling-telegraf-s pod-template-hash: 79f44d578f name: polling-telegraf-s-79f44d578f-khdjf namespace: tick ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: polling-telegraf-s-79f44d578f uid: ec1e6988-fb2c-11e9-bdf2-02b7fbdf557a resourceVersion: "409255" selfLink: /api/v1/namespaces/tick/pods/polling-telegraf-s-79f44d578f-khdjf uid: ec1fa8a5-fb2c-11e9-bdf2-02b7fbdf557a spec: containers: - image: telegraf:1.10.3-alpine imagePullPolicy: IfNotPresent name: polling-telegraf-s resources: limits: cpu: "1" memory: 2Gi requests: cpu: 100m memory: 256Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/telegraf name: config - mountPath: /var/run/utmp name: varrunutmpro readOnly: true - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount name: default-token-htxsr readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: ip-192-168-179-5.eu-west-2.compute.internal priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - hostPath: path: /var/run/utmp type: "" name: varrunutmpro - configMap: defaultMode: 420 name: polling-telegraf-s name: config - name: default-token-htxsr secret: defaultMode: 420 secretName: default-token-htxsr status: conditions: - lastProbeTime: null lastTransitionTime: "2019-10-30T15:49:57Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-10-30T15:49:58Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-10-30T15:49:58Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-10-30T15:49:57Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://a66f40111474ea28d1b1b7adf6d9e0278adb6d6aefa23b345cc1559174018f27 image: telegraf:1.10.3-alpine imageID: docker-pullable://telegraf@sha256:9106295bc67459633b4d6151c2e1b9949e501560b2e659fe541bda691c566bcf lastState: {} name: polling-telegraf-s ready: true restartCount: 0 state: running: startedAt: "2019-10-30T15:49:58Z" hostIP: 192.168.179.5 phase: Running podIP: 192.168.159.179 qosClass: Burstable startTime: "2019-10-30T15:49:57Z" kind: List metadata: resourceVersion: "" selfLink: "" </code></pre> <p>and I want to change the <code>readOnly</code> values from true to false . </p>
<p>You can edit a kubernetes resource yaml using the command <code>kubectl edit [resource] [name]</code>. For instance, to change the yaml of a pod, you would run <code>kubectl edit pod $POD_NAME</code>.</p> <p>However this won't work in your case, because you are editing a <code>mount</code>, and that requires the <code>pod</code> to be restarted. In this sense, the best approach is what you already did: start by extracting the yaml from the api by running <code>kubectl get pod $POD_NAME -o yaml</code>, then edit its content and deploy it again. However, as the error message says, you are not allowed to edit some parts of the yaml, so you have to remove those parts.</p> <p>Only keep the parts that the log has already pointed to: <code>spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations</code></p> <p>Otherwise, here are the sections you would have to remove:</p> <pre><code>metadata: creationTimestamp: 2019-11-01T13:22:50Z generateName: ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: uid: 947fb7b7-f1ab-11e9-adfb-42010a8001b2 resourceVersion: "103002009" selfLink: uid: b3f96ba4-fcaa-11e9-adfb-42010a8001b2 spec: terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-ff27n readOnly: true nodeName: priority: 0 schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: default-token-ff27n secret: defaultMode: 420 secretName: default-token-ff27n status: conditions: - lastProbeTime: null lastTransitionTime: 2019-11-01T13:22:50Z status: "True" type: Initialized - 
lastProbeTime: null lastTransitionTime: 2019-11-01T13:22:55Z status: "True" type: Ready - lastProbeTime: null lastTransitionTime: null status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: 2019-11-01T13:22:50Z status: "True" type: PodScheduled containerStatuses: - containerID: image: imageID: lastState: {} name: proxy ready: true restartCount: 0 state: running: startedAt: 2019-11-01T13:22:55Z hostIP: phase: Running podIP: qosClass: Burstable startTime: 2019-11-01T13:22:50Z </code></pre>
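<p>The clean-up described above can be scripted. A minimal sketch in Python, operating on the exported manifest as a plain dict (field names follow the manifests shown above; parse and serialize the yaml with a library of your choice):</p>

```python
# Strip server-populated fields from an exported pod manifest so it can be
# re-created with `kubectl apply`, and flip readOnly on ordinary mounts.
def prepare_for_recreate(pod):
    meta = pod.get("metadata", {})
    for field in ("creationTimestamp", "generateName", "ownerReferences",
                  "resourceVersion", "selfLink", "uid"):
        meta.pop(field, None)
    pod.pop("status", None)  # status is always owned by the server
    for container in pod.get("spec", {}).get("containers", []):
        for mount in container.get("volumeMounts", []):
            # leave the service-account token mount read-only
            if mount.get("readOnly") and "serviceaccount" not in mount["mountPath"]:
                mount["readOnly"] = False
    return pod
```

<p>After cleaning, delete the old pod and apply the cleaned manifest. Better still, if the pod is managed by a Deployment/ReplicaSet (as the <code>ownerReferences</code> above show), edit the owning controller's template instead, since a managed pod will otherwise be recreated from the original template anyway.</p>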
<p>I have a cluster that was recently upgraded. Since then none of the pods running in the cluster are able to get a response from <a href="https://kubernetes.default/healthz" rel="nofollow noreferrer">https://kubernetes.default/healthz</a>.</p> <p>To be clear, they can <strong>resolve</strong> the URL, but it constantly times out or comes back with connection refused.</p> <p>I have verified that the api-server is running, as I can get a response from <a href="http://localhost:8080/healthz" rel="nofollow noreferrer">http://localhost:8080/healthz</a>, but I can't get anything from within a pod.</p> <p>I've checked all the scripts and configs and compared them to the other clusters that were upgraded at the same time, and there appears to be nothing different.</p> <p>I'm sure it's something small I've overlooked, but I don't know where else to look.</p> <p>Additional information:</p> <ul> <li>setup was with kops</li> <li>runs in AWS (not the managed services but raw)</li> <li>the above queries work as expected in other clusters</li> <li>upgrade was from 1.11 to 1.13 (yes, I know that is frowned upon. At least now I do.)</li> <li>other clusters had same upgrade path</li> </ul> <p>[edit]</p> <p>providing /etc/resolv.conf</p> <pre><code>cat /etc/resolv.conf
nameserver 100.64.0.10
search jenkins.svc.cluster.local svc.cluster.local cluster.local us-west-2.compute.internal
options ndots:5
</code></pre>
<p>Check the image I uploaded: you can only resolve the <strong>DNS</strong> name <code>kubernetes</code> from within a pod in the <code>default</code> namespace. From other namespaces use <code>kubernetes.default.svc</code>, as in the following curl. You already have the certificate and token mounted in the pod as well.</p> <p><code>curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Accept: application/json" https://kubernetes.default.svc/api/</code></p> <p><a href="https://i.stack.imgur.com/trffc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/trffc.png" alt="Image"></a></p>
<p>I'm trying to install a sample container app using a Pod in my local environment. I'm using the Kubernetes cluster that comes with Docker Desktop.</p> <p>I'm creating the Pod with the command <code>kubectl create -f test_image_pull.yml</code> and the YML file below:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  # value must be lower case
  name: sample-python-web-app
spec:
  containers:
  - name: sample-hello-world
    image: local/sample:latest
    imagePullPolicy: Always
    command: ["echo", "SUCCESS"]
</code></pre> <p>This is the Dockerfile used to build the image; the container runs without any issue with <code>docker run</code>:</p> <pre><code># Use official runtime python
FROM python:2.7-slim

# set work directory to app
WORKDIR /app

# Copy current directory
COPY . /app

# install needed packages
RUN pip install --trusted-host pypi.python.org -r requirement.txt

# Make port 80 available to outside container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python" , "app.py"]
</code></pre> <pre><code>from flask import Flask
from redis import Redis, RedisError
import os
import socket

# connect to redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "&lt;i&gt;cannot connect to Redis, counter disabled&lt;/i&gt;"

    html = "&lt;h3&gt;Hello {name}!&lt;/h3&gt;" \
           "&lt;b&gt;Hostname:&lt;/b&gt; {hostname}&lt;br/&gt;" \
           "&lt;b&gt;Visits:&lt;/b&gt; {visits}"
    return html.format(
        name=os.getenv("NAME", "world"),
        hostname=socket.gethostname(),
        visits=visits
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
</code></pre> <p>requirement.txt:</p> <pre><code>Flask
Redis
</code></pre> <p>When I describe the pod with <code>kubectl describe pod sample-python-web-app</code>, it shows me the error below:</p> <pre><code>Events:
  Type     Reason     Age                  From                     Message
  ----     ------     ----                 ----                     -------
  Normal   Scheduled  3m25s                default-scheduler        Successfully assigned default/sample-python-web-app to docker-desktop
  Normal   Pulling    97s (x4 over 3m22s)  kubelet, docker-desktop  Pulling image "local/sample:latest"
  Warning  Failed     94s (x4 over 3m17s)  kubelet, docker-desktop  Failed to pull image "local/sample:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for local/sample, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  Warning  Failed     94s (x4 over 3m17s)  kubelet, docker-desktop  Error: ErrImagePull
  Normal   BackOff    78s (x6 over 3m16s)  kubelet, docker-desktop  Back-off pulling image "local/sample:latest"
  Warning  Failed     66s (x7 over 3m16s)  kubelet, docker-desktop  Error: ImagePullBackOff
</code></pre>
<p>Kubernetes pulls container images from a Docker Registry. Per the <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="noreferrer">doc</a>:</p> <blockquote> <p>You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.</p> </blockquote> <p>Moreover:</p> <blockquote> <p>The <code>image</code> property of a container supports the same syntax as the docker command does, including private registries and tags.</p> </blockquote> <p>So, the way the image is referenced in the pod's spec - "image: local/sample:latest" - Kubernetes looks on Docker Hub for the image in repository <em>named</em> "local".</p> <p>You can push the image to Docker Hub or some other external Docker Registry, public or private; you can host Docker Registry on the Kubernetes cluster; or, you can run a Docker Registry locally, in a container.</p> <p>To <a href="https://docs.docker.com/registry/deploying/#run-a-local-registry" rel="noreferrer">run a Docker registry locally</a>:</p> <pre><code>docker run -d -p 5000:5000 --restart=always --name registry registry:2 </code></pre> <p>Next, find what is the IP address of the host - below I'll use <code>10.0.2.1</code> as an example.</p> <p>Then, assuming the image name is "local/sample:latest", tag the image:</p> <pre><code>docker tag local/sample:latest 10.0.2.1:5000/local/sample:latest </code></pre> <p>...and push the image to the local registry:</p> <pre><code>docker push 10.0.2.1:5000/local/sample:latest </code></pre> <p>Next, change in pod's configuration YAML how the image is referenced - from</p> <pre><code> image: local/sample:latest </code></pre> <p>to</p> <pre><code> image: 10.0.2.1:5000/local/sample:latest </code></pre> <p>Restart the pod.</p> <p>EDIT: Most likely the local Docker daemon will have to be configured to treat the local Docker registry as <a href="https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries" rel="noreferrer">insecure</a>. 
One way to configure that is described <a href="https://docs.docker.com/registry/insecure/" rel="noreferrer">here</a> - just replace "myregistrydomain.com" with the host's IP (e.g. <code>10.0.2.1</code>). Docker Desktop also allows you to edit the daemon's configuration file <a href="https://docs.docker.com/docker-for-mac/#docker-engine" rel="noreferrer">through the GUI</a>.</p>
<p>Most Kubernetes objects can be created with <code>kubectl create</code>, but if you need e.g. a <code>DaemonSet</code> — you're out of luck.</p> <p>On top of that, the objects being created through <code>kubectl</code> can only be customized minimally (e.g. <code>kubectl create deployment</code> allows you to only specify the image to run and nothing else).</p> <p>So, considering that Kubernetes actually expects you to either edit a minimally configured object with <code>kubectl edit</code> to suit your needs or write a spec from scratch and then use <code>kubectl apply</code> to apply it, how does one figure out all possible keywords and their meanings to properly describe the object they need?</p> <p>I expected to find something similar to <a href="https://docs.docker.com/compose/compose-file/" rel="nofollow noreferrer">Docker Compose file reference</a>, but when looking at <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet docs</a>, I found only a single example spec that doesn't even explain most of it's keys.</p>
<p>The spec of the resources in <code>.yaml</code> files that you can run <code>kubectl apply -f</code> on is described in the <a href="https://kubernetes.io/docs/reference/kubernetes-api/" rel="nofollow noreferrer">Kubernetes API reference</a>.</p> <p>Considering DaemonSet, its <code>spec</code> is described <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/daemon-set-v1/" rel="nofollow noreferrer">here</a>. Its <code>template</code> is actually the same as in the Pod resource.</p>
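<p>The same schema can also be browsed from the command line with <code>kubectl explain</code>, which prints per-field documentation for any resource (a sketch; it needs access to a cluster, since the schema comes from the API server):</p> <pre><code># Top-level documentation for the resource
kubectl explain daemonset

# Drill into nested fields
kubectl explain daemonset.spec.template.spec

# Dump the entire field tree at once
kubectl explain daemonset.spec --recursive
</code></pre>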
<p><strong>Context</strong></p> <p>We have long-running kubernetes jobs based on docker containers. The containers need resources (e.g. 15 GB memory, 2 CPU) and we use the autoscaler to scale up new worker nodes on demand.</p> <p><strong>Scenario</strong></p> <p>Users can select the version of the docker image to be used for a job, e.g. 1.0.0, 1.1.0, or even a commit hash of the code the image was built from in the test environment.</p> <p>As we leave the docker tag as free text, the user can type a non-existing docker tag. Because of this the job pod ends up in the ImagePullBackOff state. The pod stays in this state and keeps the resources locked so that they cannot be reused by any other job.</p> <p><strong>Question</strong></p> <p>What is the right solution, applicable within kubernetes itself, for failing the pod immediately, or at least quickly, if a pull fails due to a non-existing docker image:tag?</p> <p><strong>Possibilities</strong></p> <p>I looked into backoffLimit. I have set it to 0, but this doesn't fail or remove the job. The resources are of course kept as well.</p> <p>Maybe they can be killed by a cron job. Not sure how to do so.</p> <p>Ideally, resources should not even be allocated for a job with a non-existing docker image. But I'm not sure if there is a possibility to easily achieve this.</p> <p>Any other?</p>
<p>After looking at your design, I would recommend adding an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">InitContainer</a> to the Job specification to <a href="https://stackoverflow.com/questions/32113330/check-if-imagetag-combination-already-exists-on-docker-hub">check the existence</a> of the docker image with the given tag.</p> <p>If the image with the tag doesn't exist in the registry, the InitContainer can report an error and fail the Job's Pod by exiting with a non-zero exit code.</p> <p>After that, the Job's Pod will be <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#pod-backoff-failure-policy" rel="nofollow noreferrer">restarted</a>. After a certain number of attempts, the Job will get the <code>Failed</code> state. By configuring the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">.spec.ttlSecondsAfterFinished</a> option, failed jobs can be wiped out.</p> <blockquote> <p>If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. However, if the Pod has a restartPolicy of Never, Kubernetes does not restart the Pod.</p> </blockquote> <p>If the image exists, the InitContainer script exits with a zero exit code, and the main Job container image is pulled and the container starts.</p>
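<p>A sketch of such a Job (the image names, the tag, and the use of <code>skopeo</code> for the existence check are illustrative assumptions, not part of the original design):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: long-job
spec:
  backoffLimit: 2
  ttlSecondsAfterFinished: 600   # wipe the Job after it finishes or fails
  template:
    spec:
      restartPolicy: Never
      initContainers:
      - name: check-image-exists
        # skopeo queries the registry without pulling the image; it exits
        # non-zero if the tag does not exist, which fails the Pod before
        # the main container's resources are ever used
        image: quay.io/skopeo/stable
        command: ["skopeo", "inspect", "docker://docker.io/myorg/worker:1.1.0"]
      containers:
      - name: worker
        image: myorg/worker:1.1.0
        resources:
          requests:
            cpu: "2"
            memory: 15Gi
</code></pre> <p>Note the init container itself requests almost nothing, so a bad tag fails fast without holding the large request for long.</p>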
<p>When attempting to install ElasticSearch for Kubernetes on a PKS instance I am running into an issue where after running <code>kubectl get events --all-namespaces</code> I see <code>create Pod logging-es-default-0 in StatefulSet logging-es-default failed error: pods "logging-es-default-0" is forbidden: SecurityContext.RunAsUser is forbidden</code>. Does this have something to do with a pod security policy? Is there any way to be able to deploy ElasticSearch to Kubernetes if privileged containers are not allowed?</p> <p>Edit: here is the values.yml file that I am passing into the elasticsearch helm chart.</p> <pre><code>--- clusterName: "elasticsearch" nodeGroup: "master" # The service that non master groups will try to connect to when joining the cluster # This should be set to clusterName + "-" + nodeGroup for your master group masterService: "" # Elasticsearch roles that will be applied to this nodeGroup # These will be set as environment variables. E.g. node.master=true roles: master: "true" ingest: "true" data: "true" replicas: 3 minimumMasterNodes: 2 esMajorVersion: "" # Allows you to add any config files in /usr/share/elasticsearch/config/ # such as elasticsearch.yml and log4j2.properties esConfig: {} # elasticsearch.yml: | # key: # nestedkey: value # log4j2.properties: | # key = value # Extra environment variables to append to this nodeGroup # This will be appended to the current 'env:' key. 
You can use any of the kubernetes env # syntax here extraEnvs: [] # - name: MY_ENVIRONMENT_VAR # value: the_value_goes_here # A list of secrets and their paths to mount inside the pod # This is useful for mounting certificates for security and for mounting # the X-Pack license secretMounts: [] # - name: elastic-certificates # secretName: elastic-certificates # path: /usr/share/elasticsearch/config/certs image: "docker.elastic.co/elasticsearch/elasticsearch" imageTag: "7.4.1" imagePullPolicy: "IfNotPresent" podAnnotations: {} # iam.amazonaws.com/role: es-cluster # additionals labels labels: {} esJavaOpts: "-Xmx1g -Xms1g" resources: requests: cpu: "100m" memory: "2Gi" limits: cpu: "1000m" memory: "2Gi" initResources: {} # limits: # cpu: "25m" # # memory: "128Mi" # requests: # cpu: "25m" # memory: "128Mi" sidecarResources: {} # limits: # cpu: "25m" # # memory: "128Mi" # requests: # cpu: "25m" # memory: "128Mi" networkHost: "0.0.0.0" volumeClaimTemplate: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 30Gi rbac: create: false serviceAccountName: "" podSecurityPolicy: create: false name: "" spec: privileged: false fsGroup: rule: RunAsAny runAsUser: rule: RunAsAny seLinux: rule: RunAsAny supplementalGroups: rule: RunAsAny volumes: - secret - configMap - persistentVolumeClaim persistence: enabled: true annotations: {} extraVolumes: "" # - name: extras # emptyDir: {} extraVolumeMounts: "" # - name: extras # mountPath: /usr/share/extras # readOnly: true extraInitContainers: "" # - name: do-something # image: busybox # command: ['do', 'something'] # This is the PriorityClass settings as defined in # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass priorityClassName: "" # By default this will make sure two pods don't end up on the same node # Changing this to a region would allow you to spread pods across regions antiAffinityTopologyKey: "kubernetes.io/hostname" # Hard means that by default pods will only be scheduled if 
there are enough nodes for them # and that they will never end up on the same node. Setting this to soft will do this "best effort" antiAffinity: "hard" # This is the node affinity settings as defined in # https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature nodeAffinity: {} # The default is to deploy all pods serially. By setting this to parallel all pods are started at # the same time when bootstrapping the cluster podManagementPolicy: "Parallel" protocol: http httpPort: 9200 transportPort: 9300 service: labels: {} labelsHeadless: {} type: ClusterIP nodePort: "" annotations: {} httpPortName: http transportPortName: transport updateStrategy: RollingUpdate # This is the max unavailable setting for the pod disruption budget # The default value of 1 will make sure that kubernetes won't allow more than 1 # of your pods to be unavailable during maintenance maxUnavailable: 1 podSecurityContext: fsGroup: null runAsUser: null # The following value is deprecated, # please use the above podSecurityContext.fsGroup instead fsGroup: "" securityContext: capabilities: null # readOnlyRootFilesystem: true runAsNonRoot: null runAsUser: null # How long to wait for elasticsearch to stop gracefully terminationGracePeriod: 120 sysctlVmMaxMapCount: 262144 readinessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 3 timeoutSeconds: 5 # https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params wait_for_status clusterHealthCheckParams: "wait_for_status=green&amp;timeout=1s" ## Use an alternate scheduler. ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ ## schedulerName: "" imagePullSecrets: [] nodeSelector: {} tolerations: [] # Enabling this will publically expose your Elasticsearch instance. 
# Only enable this if you have security enabled on your cluster ingress: enabled: false annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" path: / hosts: - chart-example.local tls: [] # - secretName: chart-example-tls # hosts: # - chart-example.local nameOverride: "" fullnameOverride: "" # https://github.com/elastic/helm-charts/issues/63 masterTerminationFix: false lifecycle: {} # preStop: # exec: # command: ["/bin/sh", "-c", "echo Hello from the postStart handler &gt; /usr/share/message"] # postStart: # exec: # command: ["/bin/sh", "-c", "echo Hello from the postStart handler &gt; /usr/share/message"] sysctlInitContainer: enabled: false keystore: [] </code></pre> <p>The values listed above produce the following error:</p> <pre><code>create Pod elasticsearch-master-0 in StatefulSet elasticsearch-master failed error: pods "elasticsearch-master-0" is forbidden: SecurityContext.RunAsUser is forbidden </code></pre> <p>Solved: I learned that my istio deployment was causing issues when attempting to deploy any other service into my cluster. I had made a bad assumption that istio along with my cluster security policies weren't causing my issue.</p>
<blockquote> <p>is forbidden: <code>SecurityContext.RunAsUser</code> is forbidden. Does this have something to do with a pod security policy?</p> </blockquote> <p>Yes, that's exactly what it has to do with</p> <p>Evidently the <code>StatefulSet</code> has included a <code>securityContext:</code> stanza, but your cluster administrator forbids such an action</p> <blockquote> <p>Is there any way to be able to deploy ElasticSearch to Kubernetes if privileged containers are not allowed?</p> </blockquote> <p>That's not exactly what's going on here -- it's not the "privileged" part that is causing you problems -- it's the <code>PodSpec</code> requesting to run the container as a user other than the one in the docker image. In fact, I would actually be very surprised if any modern elasticsearch docker image requires modifying the user at all, since all the recent ones do not run as <code>root</code> to begin with</p> <p>Remove that <code>securityContext:</code> stanza from the <code>StatefulSet</code> and report back what new errors arise (if any)</p>
<p>I am using a kubernetes cluster with 20 worker nodes. I have set the image pull policy to IfNotPresent to reduce creation time. The image is hosted on Docker Hub.</p> <p>When I update the image on Docker Hub I need to clear the cache on all 20 worker nodes. Currently I run docker pull on all 20 worker nodes to fetch the latest image.</p> <p>Is there any kubernetes native solution, or any other best industry solution, to update the image on all nodes?</p>
<p>The best industry solution is to use a unique tag for each deployed image. Change the image tag and k8s will handle the upgrade for you. You only have this problem because you want to use the same tag even though the image changes. Whatever the reason you think it's not worth explicitly versioning your image, you're wrong :P. Explicit versions are well worth the effort of specifying them.</p>
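<p>As a sketch (the image and deployment names below are made up), each build pushes a fresh tag and rolls it out, so no node-level cache ever needs clearing:</p> <pre><code># tag the build uniquely, e.g. with the git commit SHA
docker build -t myrepo/myapp:3f2c1a9 .
docker push myrepo/myapp:3f2c1a9

# point the deployment at the new tag; kubernetes rolls the pods,
# and every node pulls the new tag even with IfNotPresent
kubectl set image deployment/myapp myapp=myrepo/myapp:3f2c1a9
</code></pre> <p>Because the tag is new, IfNotPresent never finds it in the local cache, and every node pulls the updated image as its pod is rescheduled.</p>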
<p>There are previous questions about Self managed Kubernetes clusters located here. But they don't cover the use case for Amazon Linux(AWS EKS provided AMIs) <a href="https://stackoverflow.com/questions/34113476/where-are-the-kubernetes-kubelet-logs-located">Where are the Kubernetes kubelet logs located?</a></p> <p>Where are the kubelet logs for EKS nodes?</p>
<p>The answer for the kubelet EKS use case is: <code>/var/log/messages</code> in the Amazon Linux AMI.</p> <p>You can determine this yourself from the AWS documentation, without going through Customer Support:</p> <p>To find the kubelet logs, read the troubleshooting guide, specifically the CNI Log Collection Tool section.</p> <p><a href="https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html" rel="noreferrer">AWS EKS Troubleshooting</a></p> <p>AWS ships a CNI support tool in their EKS AMI, listed in the documentation above as being at:</p> <pre><code>/opt/cni/bin/aws-cni-support.sh </code></pre> <p>Go through that script and find these lines:</p> <pre><code># collect kubelet log
cp /var/log/messages $LOG_DIR/
</code></pre> <p>This shows how AWS itself collects the kubelet logs, which leads to the file listed above.</p>
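<p>Putting that together, on an EKS worker running the Amazon Linux AMI you can read the kubelet output directly. The second command is an assumption worth checking: it only applies on AMIs where kubelet runs as a systemd unit.</p> <pre><code># kubelet output goes to syslog on the Amazon Linux EKS AMI
sudo grep kubelet /var/log/messages | tail -n 50

# if kubelet runs under systemd on your AMI, journalctl also works
sudo journalctl -u kubelet --no-pager | tail -n 50
</code></pre>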
<h1>I receive the following error when i try to download calico.yaml files for the pod network</h1> <p><strong>unable to recognize &quot;calico.yaml&quot;: no matches for kind &quot;Deployment&quot; in version &quot;apps/v1beta1&quot; unable to recognize &quot;calico.yaml&quot;: no matches for kind &quot;DaemonSet&quot; in version &quot;extensions/v1beta1&quot;</strong></p> <p><em>here is the full output when i run &quot;kubectl apply -f calico.yaml&quot;</em></p> <p>'configmap/calico-config created service/calico-typha created poddisruptionbudget.policy/calico-typha created serviceaccount/calico-node created customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created unable to recognize &quot;calico.yaml&quot;: no matches for kind &quot;Deployment&quot; in version &quot;apps/v1beta1&quot; unable to recognize &quot;calico.yaml&quot;: no matches for kind &quot;DaemonSet&quot; in version &quot;extensions/v1beta1&quot;'</p>
<p>If you are using the latest version of Kubernetes, the <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="noreferrer">API versions of a few resources have been changed</a>. Try converting calico.yaml to the updated API versions, for example with the <code>kubectl convert</code> command.</p>
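<p>For example, assuming <code>calico.yaml</code> is in your working directory (and that <code>kubectl convert</code> is still available in your kubectl version — it was later split out into a plugin):</p> <pre><code>kubectl convert -f calico.yaml --output-version apps/v1 &gt; calico-converted.yaml
kubectl apply -f calico-converted.yaml
</code></pre> <p>Alternatively, edit the file by hand: change the <code>Deployment</code> from <code>apps/v1beta1</code> to <code>apps/v1</code> and the <code>DaemonSet</code> from <code>extensions/v1beta1</code> to <code>apps/v1</code> (note that <code>apps/v1</code> also requires a <code>spec.selector</code> on both kinds). Or simply download a newer Calico manifest that already targets the current APIs.</p>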
<p>The provided methods in the Kubernetes documentation don't work, and <code>brew cask</code> no longer seems to have the minikube formula as of Mac OS Catalina.</p> <p><code>Error: Cask 'minikube' is unavailable: No Cask with this name exists.</code></p> <p>When I download it with <code>curl</code> it refuses to run with the following error:</p> <p><code>/bin/minikube: cannot execute binary file: Exec format error</code></p> <p>How can I install <strong>minikube</strong> on Mac OS Catalina? Or do I have to roll back to Mojave?</p>
<p>Minikube is no longer available as a <code>cask</code>.</p> <p>Change command</p> <pre><code>brew cask install minikube </code></pre> <p>to</p> <pre><code>brew install minikube </code></pre> <p>or use</p> <pre><code>curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64 \
  &amp;&amp; sudo install minikube-darwin-amd64 /usr/local/bin/minikube
</code></pre>
<p>I am start etcd(3.3.13) member using this command:</p> <pre><code>/usr/local/bin/etcd \ --name infra2 \ --cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ --peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \ --peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \ --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \ --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \ --initial-advertise-peer-urls https://172.19.104.230:2380 \ --listen-peer-urls https://172.19.104.230:2380 \ --listen-client-urls http://127.0.0.1:2379 \ --advertise-client-urls https://172.19.104.230:2379 \ --initial-cluster-token etcd-cluster \ --initial-cluster infra1=https://172.19.104.231:2380,infra2=https://172.19.104.230:2380,infra3=https://172.19.150.82:2380 \ --initial-cluster-state new \ --data-dir=/var/lib/etcd </code></pre> <p>but the log shows this error:</p> <pre><code>2019-08-24 13:12:07.981345 I | embed: rejected connection from "172.19.104.231:60474" (error "remote error: tls: bad certificate", ServerName "") 2019-08-24 13:12:08.003918 I | embed: rejected connection from "172.19.104.231:60478" (error "remote error: tls: bad certificate", ServerName "") 2019-08-24 13:12:08.004242 I | embed: rejected connection from "172.19.104.231:60480" (error "remote error: tls: bad certificate", ServerName "") 2019-08-24 13:12:08.045940 E | rafthttp: request cluster ID mismatch (got 52162d7b86a0617a want b125c249de626e35) 2019-08-24 13:12:08.046455 E | rafthttp: request cluster ID mismatch (got 52162d7b86a0617a want b125c249de626e35) 2019-08-24 13:12:08.081290 I | embed: rejected connection from "172.19.104.231:60484" (error "remote error: tls: bad certificate", ServerName "") 2019-08-24 13:12:08.101692 I | embed: rejected connection from "172.19.104.231:60489" (error "remote error: tls: bad certificate", ServerName "") 2019-08-24 13:12:08.102002 I | embed: rejected connection from "172.19.104.231:60488" (error "remote error: tls: bad certificate", 
ServerName "") 2019-08-24 13:12:08.144928 E | rafthttp: request cluster ID mismatch (got 52162d7b86a0617a want b125c249de626e35) 2019-08-24 13:12:08.145151 E | rafthttp: request cluster ID mismatch (got 52162d7b86a0617a want b125c249de626e35) 2019-08-24 13:12:08.181299 I | embed: rejected connection from "172.19.104.231:60494" (error "remote error: tls: bad certificate", ServerName "") 2019-08-24 13:12:08.201722 I | embed: rejected connection from "172.19.104.231:60500" (error "remote error: tls: bad certificate", ServerName "") 2019-08-24 13:12:08.202096 I | embed: rejected connection from "172.19.104.231:60498" (error "remote error: tls: bad certificate", ServerName "") </code></pre> <p>I search from internet and find the reason is: should give all etcd node ip in hosts config when generate CA cert,but I config all my etcd node ip in csr.json,this is my csr.json config:</p> <pre><code>{ "CN": "kubernetes", "hosts": [ "127.0.0.1", "172.19.104.230", "172.19.150.82", "172.19.104.231" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" } ] } </code></pre> <p>what should I do to fix the error?</p>
<p>When you run the cfssl generate command, you should provide the IPs of the hosts running etcd (comma-separated, without spaces):</p> <pre><code>cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=IP1,IP2,...,IPn \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
</code></pre>
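<p>With the IPs already listed in your <code>csr.json</code>, the command would look like this:</p> <pre><code>cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,172.19.104.230,172.19.104.231,172.19.150.82 \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
</code></pre> <p>After regenerating, distribute the new <code>kubernetes.pem</code> and <code>kubernetes-key.pem</code> to all three members and restart them.</p>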
<pre><code>paths:
  - backend:
      serviceName: booknotes-front-end-service
      servicePort: 80
    path: /
  - backend:
      serviceName: booknotes-back-end-service
      servicePort: 3000
    path: /api
</code></pre> <p>Here are the rules in my ingress-nginx resource. I am trying to direct all traffic that starts with <code>/api</code> to my back-end service, which works properly, but if a route in my back end is something like /api/users it doesn't work: my back end responds with "not found", although when I run it locally the route works properly. I've also tried deleting the /api prefix from my koa routing, changing it to <code>/users</code>, and then changing <code>path: /api</code> to <code>path: /users</code>, and that works properly. What should I do to fix this? If you need additional info, please let me know!</p>
<p>Which version of nginx-ingress are you using? They changed the way a path is defined.</p> <p><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p> <blockquote> <p>Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.</p> </blockquote> <p>For example you can use a definition like this.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: some-ingress-name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: booknotes-front-end-service
          servicePort: 80
      - path: /api/?(.*)
        backend:
          serviceName: booknotes-back-end-service
          servicePort: 3000
</code></pre>
<p>I know <a href="https://stackoverflow.com/q/105776/232794">how to restore a dump file from mysqldump</a>. Now, I am attempting to do that using kubernetes and a docker container. The database files are in persistent (nfs) mount. The docker cannot be accessed outside of the cluster as there is no need for anything external to touch it. </p> <p>I tried:</p> <pre><code>kubectl run -i -t dbtest --image=mariadb --restart=Never --rm=true --command -- mysql -uroot -ps3kr37 &lt; dump.sql </code></pre> <p>and </p> <pre><code>kubectl exec mariadb-deployment-3614069618-mn524 -i -t -- mysql -u root -p=s3kr37 &lt; dump.sql </code></pre> <p>But neither commands worked -- errors about TTY, sockets, and other things hinting that I am missing something vital here.</p> <p><em>What am I not understanding here?</em></p> <p>I <em>could</em> just stop the deployment, scp the database files, and restart the container and hope for the best. However, what can go right?</p> <hr> <p>The question <a href="https://stackoverflow.com/q/40925822/232794">Install an sql dump file to a docker container with mariaDB</a> sure looks like a duplicate but is not: first, I am on Linux not Windows and more importantly the answers all are about initialising with a dump. I want to be able to trash the data and revert to the dump data. This is a test system that will eventually be the "live" so I need to restore from many potential dumps.</p>
<p>As described <a href="https://medium.com/@madushagunasekara/export-mysql-db-dump-from-kubernetes-pod-and-restore-mysql-db-on-kubernetes-pod-6f4ecc6b5a64" rel="noreferrer">here</a>, you can use the following command to restore a DB on a kubernetes pod from a dump on your machine. Use <code>-i</code> without <code>-t</code>: stdin is redirected from the dump file, so there is no terminal to allocate a TTY for.</p> <pre><code>$ kubectl exec -i {{podName}} -n {{namespace}} -- mysql -u {{dbUser}} -p{{password}} {{DatabaseName}} &lt; &lt;scriptName&gt;.sql

Example:
$ kubectl exec -i mysql-58 -n sql -- mysql -u root -proot USERS &lt; dump_all.sql
</code></pre>
<p>I have configured gitlab runner within kubernetes , which is not able to connect to docker daemon . Showing below error .</p> <blockquote> <p>$ docker build --cache-from "${DOCKER_IMAGE_TAG}" -t "${DOCKER_IMAGE_TAG}" .</p> <p>Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ERROR: Job failed: command terminated with exit code 1</p> </blockquote> <pre><code>stages: - push_to_docker docker_image: image: 'docker:latest' services: - docker:dind stage: push_to_docker variables: DOCKER_IMAGE_TAG: 'gcr.io/abcd-project/test' script: - docker build --cache-from "${DOCKER_IMAGE_TAG}" -t "${DOCKER_IMAGE_TAG}" . - echo "$SERVICE_ACCOUNT_KEY" &gt; key.json - docker login -u _json_key --password-stdin https://gcr.io &lt; key.json - docker push ${DOCKER_IMAGE_TAG} only: - master tags: - abcd </code></pre> <p>My <strong>config.toml</strong> file is as below </p> <pre><code>listen_address = "[::]:9252" concurrent = 4 check_interval = 3 log_level = "info" [session_server] session_timeout = 1800 [[runners]] name = "runner-gitlab-runner-78c7db94bc-lzv76" request_concurrency = 1 url = "https://gitlab.com/" token = "*********" executor = "kubernetes" [runners.custom_build_dir] [runners.cache] [runners.cache.s3] [runners.cache.gcs] [runners.kubernetes] host = "" bearer_token_overwrite_allowed = false image = "ubuntu:16.04" namespace = "gitlab-managed-apps" namespace_overwrite_allowed = "" privileged = true service_account_overwrite_allowed = "" pod_annotations_overwrite_allowed = "" [runners.kubernetes.pod_security_context] [runners.kubernetes.volumes] </code></pre> <p>Checked with configuration as below</p> <pre><code> image: docker:19.03.1 services: - docker:19.03.1-dind variables: DOCKER_HOST: tcp://docker:2375 </code></pre> <p>And my .gitlab-ci.yml file after changing the configuration is as below:</p> <pre><code>stages: - push_to_docker - deploy_into_kubernetes variables: DOCKER_IMAGE_TAG: 
'gcr.io/abcd-project/test:$CI_COMMIT_SHORT_SHA' DOCKER_HOST: tcp://docker:2375 docker_image_creation: image: docker:19.03.1 services: - docker:19.03.1-dind stage: push_to_docker script: - docker build -t "${DOCKER_IMAGE_TAG}" . - echo "$SERVICE_ACCOUNT_KEY" &gt; key.json - docker login -u _json_key --password-stdin https://gcr.io &lt; key.json - docker push ${DOCKER_IMAGE_TAG} tags: - cluster - kubernetes </code></pre> <p>but getting bellow error :</p> <blockquote> <p>Skipping Git submodules setup $ docker build -t "${DOCKER_IMAGE_TAG}" . time="2019-11-04T08:07:37Z" level=error msg="failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial tcp: lookup docker on 10.0.0.10:53: no such host" error during connect: Post <a href="http://docker:2375/v1.40/build?buildargs=%7B%7D&amp;cachefrom=%5B%5D&amp;cgroupparent=&amp;cpuperiod=0&amp;cpuquota=0&amp;cpusetcpus=&amp;cpusetmems=&amp;cpushares=0&amp;dockerfile=Dockerfile&amp;labels=%7B%7D&amp;memory=0&amp;memswap=0&amp;networkmode=default&amp;rm=1&amp;session=l1ce41pzm1p9a4jdhs31z9p64&amp;shmsize=0&amp;t=gcr.io%2Fupbeat-flame-247110%2Fgitlab-runner-poc%3A25b1faa0&amp;target=&amp;ulimits=null&amp;version=1" rel="nofollow noreferrer">http://docker:2375/v1.40/build?buildargs=%7B%7D&amp;cachefrom=%5B%5D&amp;cgroupparent=&amp;cpuperiod=0&amp;cpuquota=0&amp;cpusetcpus=&amp;cpusetmems=&amp;cpushares=0&amp;dockerfile=Dockerfile&amp;labels=%7B%7D&amp;memory=0&amp;memswap=0&amp;networkmode=default&amp;rm=1&amp;session=l1ce41pzm1p9a4jdhs31z9p64&amp;shmsize=0&amp;t=gcr.io%2Fupbeat-flame-247110%2Fgitlab-runner-poc%3A25b1faa0&amp;target=&amp;ulimits=null&amp;version=1</a>: context canceled</p> </blockquote>
<p>Use docker 19.03+, which lets you point the job at the dind service explicitly via <code>DOCKER_HOST</code>:</p> <pre><code>image: docker:19.03.1

services:
  - docker:19.03.1-dind

variables:
  DOCKER_HOST: tcp://docker:2375
</code></pre> <p><a href="https://docs.gitlab.com/ee/ci/docker/using_docker_build.html" rel="noreferrer">https://docs.gitlab.com/ee/ci/docker/using_docker_build.html</a></p>
<p>I have followed instructions from <a href="https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/" rel="nofollow noreferrer">this blog</a> post to set up a k3s cluster on a couple of raspberry pi 4:</p> <p>I'm now trying to get my hands dirty with traefik as front, but I'm having issues with the way it has been deployed as a 'HelmChart' I think.</p> <p>From the <a href="https://github.com/rancher/k3s/blob/master/README.md" rel="nofollow noreferrer">k3s docs</a></p> <blockquote> <p>It is also possible to deploy Helm charts. k3s supports a CRD controller for installing charts. A YAML file specification can look as following (example taken from <code>/var/lib/rancher/k3s/server/manifests/traefik.yaml</code>):</p> </blockquote> <p>So I have been starting up my k3s with the <code>--no-deploy traefik</code> option to manually add it with settings. So I therefore manually apply a yaml like this:</p> <pre><code>apiVersion: helm.cattle.io/v1 kind: HelmChart metadata: name: traefik namespace: kube-system spec: chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz set: rbac.enabled: "true" ssl.enabled: "true" kubernetes.ingressEndpoint.useDefaultPublishedService: "true" dashboard: enabled: true domain: "traefik.k3s1.local" </code></pre> <p>But when trying to iterate over settings to get it working as I want, I'm having trouble tearing it down. If I try <code>kubectl delete -f</code> on this yaml it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.</p> <p>I've been resorting now to just reinstall my entire cluster over and over because I can't seem to cleanup properly.</p> <p>Is there a way to delete all the resources created by a chart like this without the <code>helm</code> cli (which I don't even have)?</p>
<p>Are you sure that <code>kubectl delete -f</code> is hanging?</p> <p>I had the same issue as you and it seemed like <code>kubectl delete -f</code> was hanging, but it was really just taking a long time.</p> <p>As far as I can tell, when you issue the <code>kubectl delete -f</code> a pod in the <code>kube-system</code> namespace with a name of <code>helm-delete-*</code> should spin up and try to delete the resources deployed via helm. You can get the full name of that container by running <code>kubectl -n kube-system get pods</code> and finding the one named <code>helm-delete-&lt;name of yaml&gt;-&lt;id&gt;</code>. Then use the pod name to look at the logs using <code>kubectl -n kube-system logs helm-delete-&lt;name of yaml&gt;-&lt;id&gt;</code>.</p> <p>An example of what I did was:</p> <pre><code>kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
</code></pre>
<p>I am trying to update my deployment with latest image content on Azure Kubernetes Service every time some code is committed to github . I have made a stage in my build pipeline to build and push the image on docker hub which is working perfectly fine. however in my release pipeline the image is being used as an artifact and is being deployed to the Azure Kubernetes Service , but the problem is that the image on AKS in not updating according to the image pushed on Docker Hub with latest code.</p> <p>Right now each time some commit happens i have to manually update the image on AKS via the Command</p> <p><em>kubectl set image deployment/demo-microservice demo-microservice=customerandcontact:contact</em> </p> <p>My Yaml File</p> <p><a href="https://i.stack.imgur.com/jAMCZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jAMCZ.png" alt="enter image description here"></a></p> <p>Can anyone tell the error/changes if any in my yaml file to automatically update the image on AKS.</p>
<p>When you release a new image to the container registry under the same tag, it does not mean anything to Kubernetes. If you run <code>kubectl apply -f ...</code> and the image name and tag remain the same, it still won't do anything, as there is no configuration change. There are two options:</p> <ol> <li><p>Give each build a new tag, change <code>:contact</code> to the new tag in the yaml, and run kubectl apply.</p></li> <li><p>For the dev environment only (do not do it in Stage or Prod) keep the same tag (usually the tag <code>:latest</code> is used) and, after a new image is pushed to the registry, run <code>kubectl delete pod demo-microservice</code>. Since you've set the image pull policy to Always, this causes Kubernetes to pull a new image from the registry and redeploy the pod.</p></li> </ol> <p>The second approach is a workaround just for testing.</p>
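<p>For option 1, a sketch of what the release step would run, using the names from your question (substitute your CI system's build-number variable for <code>$BUILD_ID</code>):</p> <pre><code># build stage: push a uniquely tagged image instead of reusing :contact
docker build -t customerandcontact:contact-$BUILD_ID .
docker push customerandcontact:contact-$BUILD_ID

# release stage: roll the deployment to that exact tag
kubectl set image deployment/demo-microservice \
  demo-microservice=customerandcontact:contact-$BUILD_ID
</code></pre> <p>With a unique tag per commit, the manual <code>kubectl set image</code> step becomes part of the pipeline and AKS always runs the image that was just built.</p>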
<p>I am currently switching from Service Fabric to Kubernetes and was wondering how to do custom and more complex load balancing.</p> <p>So far I already read about Kubernetes offering "Services" which do load balancing for pods hidden behind them, but this is only available in more plain ways.</p> <p>What I want to rewrite right now looks like the following in Service Fabric:</p> <p>I have this interface: </p> <pre><code>public interface IEndpointSelector { int HashableIdentifier { get; } } </code></pre> <p>A context keeping track of the account in my ASP.Net application e.g. inherits this. Then, I wrote some code which would as of now do service discovery through the service fabric cluster API and keep track of all services, updating them when any instances die or are being respawned.</p> <p>Then, based on the deterministic nature of this identifier (due to the context being cached etc.) and given multiple replicas of the target service of a frontend -> backend call, I can reliably route traffic for a certain account to a certain endpoint instance.</p> <p>Now, how would I go about doing this in Kubernetes?</p> <p>As I already mentioned, I found "Services", but it seems like their load balancing does not support custom logic and is rather only useful when working with stateless instances.</p> <p>Is there also a way to have service discovery in Kubernetes which I could use here to replace my existing code at some points?</p>
<h1>StatefulSet</h1> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSet</a> is a <em>building block</em> for stateful workload on Kubernetes with certain guarantees.</p> <h2>Stable and unique network identity</h2> <blockquote> <p>StatefulSet Pods have a unique identity that is comprised of an ordinal, a stable network identity, and stable storage.</p> </blockquote> <p>As an example, if your StatefulSet has the name <code>sharded-svc</code></p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: sharded-svc </code></pre> <p>And you have e.g. 3 replicas, those will be named by <code>&lt;name&gt;-&lt;ordinal&gt;</code> where <em>ordinal</em> starts from 0 up to replicas-1.</p> <p>The name of your pods will be:</p> <pre><code>sharded-svc-0 sharded-svc-1 sharded-svc-2 </code></pre> <p>and those pods can be reached with a dns-name:</p> <pre><code>sharded-svc-0.sharded-svc.your-namespace.svc.cluster.local sharded-svc-1.sharded-svc.your-namespace.svc.cluster.local sharded-svc-2.sharded-svc.your-namespace.svc.cluster.local </code></pre> <p>given that your <em>Headless Service</em> is named <code>sharded-svc</code> and you deploy it in namespace <code>your-namespace</code>.</p> <h1>Sharding or Partitioning</h1> <blockquote> <p>given multiple replicas of the target service of a frontend -> backend call, I can reliably route traffic for a certain account to a certain endpoint instance.</p> </blockquote> <p>What you describe here is that your stateful service is what is called <em>sharded</em> or <em>partitioned</em>. This does not come out of the box from Kubernetes, but you have all the needed <em>building blocks</em> for this kind of service. 
<em>It may be that a 3rd-party service providing this feature already exists that you can deploy, or it can be developed.</em></p> <h2>Sharding Proxy</h2> <p>You can create a service <code>sharding-proxy</code> consisting of one or more pods (possibly from a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> since it can be stateless). This app needs to watch the pods/service/<a href="https://stackoverflow.com/questions/52857825/what-is-an-endpoint-in-kubernetes">endpoints</a> in your <code>sharded-svc</code> to know where it can route traffic. This can be developed using <a href="https://github.com/kubernetes/client-go" rel="noreferrer">client-go</a> or other alternatives.</p> <p>This service implements the logic you want in your sharding, e.g. <em>account-nr</em> modulus 3 is routed to the corresponding pod <em>ordinal</em>.</p> <p><strong>Update:</strong> There are 3rd-party proxies with <strong>sharding</strong> functionality, e.g. <a href="https://github.com/gojek/weaver" rel="noreferrer">Weaver Proxy</a></p> <blockquote> <p>Sharding request based on headers/path/body fields</p> </blockquote> <p>Recommended reading: <a href="https://medium.com/@rbshetty/weaver-proxying-at-scale-b3b8b425a58e" rel="noreferrer">Weaver: Sharding with simplicity</a></p> <h1>Consuming sharded service</h1> <p>To consume your sharded service, the clients send requests to your <code>sharding-proxy</code>, which then applies your <em>routing</em> or <em>sharding logic</em> (e.g. a request with <em>account-nr</em> modulus 3 is routed to the corresponding pod <em>ordinal</em>) and forwards the request to <em>the replica</em> of <code>sharded-svc</code> that matches your logic.</p> <h1>Alternative Solutions</h1> <p><strong>Directory Service:</strong> It is probably easier to implement <code>sharded-proxy</code> as a <em>directory service</em> but it depends on your requirements. 
The clients can ask your <em>directory service</em> which statefulSet replica <em>account-nr X</em> should be sent to, and your service replies with e.g. <code>sharded-svc-2</code>.</p> <p><strong>Routing logic in client:</strong> Probably the easiest solution is to have your <em>routing logic</em> in the client, and let this logic calculate which statefulSet replica to send the request to.</p>
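<p>The routing rule described above ("account-nr modulus 3 is routed to the corresponding pod ordinal") can be sketched in a few lines. The service and namespace names follow the earlier examples and would need to match your deployment:</p>

```python
# Sketch: deterministic client-side shard routing for a StatefulSet.
# Assumes 3 replicas and the headless service name used in this answer.
REPLICAS = 3
SERVICE = "sharded-svc"
NAMESPACE = "your-namespace"

def replica_host(account_nr: int) -> str:
    """Map an account number to the stable DNS name of one replica."""
    ordinal = account_nr % REPLICAS
    return f"{SERVICE}-{ordinal}.{SERVICE}.{NAMESPACE}.svc.cluster.local"

print(replica_host(7))  # account 7 -> replica ordinal 1
```

<p>Because a StatefulSet's pod names and DNS records are stable, every client computing this function independently reaches the same replica for the same account, with no shared state needed.</p>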
<p>I am trying to set up a local storage as outlined here (<a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/</a>) . I'm getting the following error that the scheduler is unable to schedule the pods . The local storage is mapped to one of the worker node. I tried setting up the local storage on master node and I got the same error. WHere am I going wrong?</p> <p>Warning FailedScheduling 24s (x2 over 24s) default-scheduler 0/3 nodes are available: 1 node(s) didn't match node selector, 2 node(s) didn't find available persistent volumes to bind.</p> <pre><code>------------------------------------------------------------------- kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME rpi-k8-workernode-2 Ready &lt;none&gt; 92d v1.15.0 192.168.100.50 &lt;none&gt; Raspbian GNU/Linux 9 (stretch) 4.19.42-v7+ docker://18.9.0 rpi-mon-k8-worker Ready &lt;none&gt; 91d v1.15.0 192.168.100.22 &lt;none&gt; Raspbian GNU/Linux 9 (stretch) 4.19.42-v7+ docker://18.9.0 udubuntu Ready master 92d v1.15.1 192.168.100.24 &lt;none&gt; Ubuntu 18.04.3 LTS 4.15.0-55-generic docker://19.3.4 ------------------------------------------------------------------- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer ------------------------------------------------------------------------ apiVersion: v1 kind: PersistentVolume metadata: name: pv-ghost namespace: ghost spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage local: path: /mnt/mydrive/ghost-data/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - rpi-mon-k8-worker 
------------------------------------------------------------------------ apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-ghost namespace: ghost labels: pv: pv-ghost spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: name: pv-ghost ------------------------------------------------------------------------ apiVersion: apps/v1 kind: Deployment metadata: name: deployment-ghost namespace: ghost labels: env: prod app: ghost-app spec: template: metadata: name: ghost-app-pod labels: app: ghost-app env: production spec: containers: - name: ghost image: arm32v7/ghost imagePullPolicy: IfNotPresent volumeMounts: - mountPath: /var/lib/ghost/content name: ghost-blog-data securityContext: privileged: True volumes: - name: ghost-blog-data persistentVolumeClaim: claimName: pvc-ghost nodeSelector: beta.kubernetes.io/arch: arm replicas: 2 selector: matchLabels: app: ghost-app kubectl get nodes --show-labels NAME STATUS ROLES AGE VERSION LABELS rpi-k8-workernode-2 Ready &lt;none&gt; 93d v1.15.0 beta.kubernetes.io/arch=arm,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm,kubernetes.io/hostname=rpi-k8-workernode-2,kubernetes.io/os=linux rpi-mon-k8-worker Ready &lt;none&gt; 93d v1.15.0 beta.kubernetes.io/arch=arm,beta.kubernetes.io/os=linux,kubernetes.io/arch=arm,kubernetes.io/hostname=rpi-mon-k8-worker,kubernetes.io/os=linux udubuntu Ready master 93d v1.15.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=udubuntu,kubernetes.io/os=linux,node-role.kubernetes.io/master= ----------------------------------------------------------- ud@udubuntu:~/kube-files$ kubectl describe pvc pvc-ghost -n ghost Name: pvc-ghost Namespace: ghost StorageClass: manual Status: Pending Volume: Labels: pv=pv-ghost Annotations: kubectl.kubernetes.io/last-applied-configuration: 
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"pv":"pv-ghost"},"name":"pvc-ghost","namespace":"... Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal WaitForFirstConsumer 6s (x2 over 21s) persistentvolume-controller waiting for first consumer to be created before binding </code></pre>
<p>As you can see from the warning <code>1 node(s) didn't match node selector, 2 node(s) didn't find available persistent volumes to bind.</code>, you set a <code>nodeSelector</code> in the deployment-ghost, so one of your worker nodes didn't match with this selector.If you delete the <code>nodeSelector</code> field from that .yaml file. In this way the pod will be deployed to a node where the <code>PV</code> is created. AFAIK, it isn't possible to deploy a pod to a worker which the <code>PV</code> used to claim is in the another worker node. And finally, in the other nodes, no <code>PV</code>s created. You can check the created <code>PV</code>s and <code>PVC</code>s by:</p> <pre><code>kubectl get pv kubectl get pvc -n &lt;namespace&gt; </code></pre> <p>and the check the details of them by:</p> <pre><code>kubectl describe pv &lt;pv_name&gt; kubectl describe pv &lt;pv_name&gt; -n &lt;namespace&gt; </code></pre> <p>You issue is explained in the official documentation in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">her</a> which says :</p> <blockquote> <p>Claims can specify a label selector to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:</p> <p>1- matchLabels - the volume must have a label with this value</p> <p>2- matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. 
Valid operators include In, NotIn, Exists, and DoesNotExist</p> </blockquote> <p>So, edit the PersistentVolume file and add the labels field so it looks like this:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-ghost labels: name: pv-ghost spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage local: path: /mnt/mydrive/ghost-data/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - rpi-mon-k8-worker </code></pre> <p>It isn't necessary to add the <strong>namespace</strong> field to <code>kind: PersistentVolume</code>, because <code>PersistentVolume</code> binds are exclusive and <code>PersistentVolume</code>s are cluster-scoped, while <code>PersistentVolumeClaim</code>s are namespaced objects. </p> <p>I tested it and it works for me.</p>
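<p>For clarity, the binding works because the claim's <code>selector.matchLabels</code> must find a <code>PV</code> carrying the same label. A minimal sketch of the matching pair (names as in the question):</p> <pre><code># On the PersistentVolume
metadata:
  name: pv-ghost
  labels:
    name: pv-ghost      # the label the claim selects on

# On the PersistentVolumeClaim
spec:
  selector:
    matchLabels:
      name: pv-ghost    # must match the PV's label exactly
</code></pre>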
<p>I've just created a new kubernetes cluster. The only thing I have done beyond set up the cluster is install Tiller using <code>helm init</code> and install kubernetes dashboard through <code>helm install stable/kubernetes-dashboard</code>.</p> <p>The <code>helm install</code> command seems to be successful and <code>helm ls</code> outputs:</p> <pre><code>NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE exhaling-ladybug 1 Thu Oct 24 16:56:49 2019 DEPLOYED kubernetes-dashboard-1.10.0 1.10.1 default </code></pre> <p>However after waiting a few minutes the deployment is still not ready. </p> <p>Running <code>kubectl get pods</code> shows that the pod's status as <code>CrashLoopBackOff</code>.</p> <pre><code>NAME READY STATUS RESTARTS AGE exhaling-ladybug-kubernetes-dashboard 0/1 CrashLoopBackOff 10 31m </code></pre> <p>The description for the pod shows the following events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31m default-scheduler Successfully assigned default/exhaling-ladybug-kubernetes-dashboard to nodes-1 Normal Pulling 31m kubelet, nodes-1 Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" Normal Pulled 31m kubelet, nodes-1 Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" Normal Started 30m (x4 over 31m) kubelet, nodes-1 Started container kubernetes-dashboard Normal Pulled 30m (x4 over 31m) kubelet, nodes-1 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine Normal Created 30m (x5 over 31m) kubelet, nodes-1 Created container kubernetes-dashboard Warning BackOff 107s (x141 over 31m) kubelet, nodes-1 Back-off restarting failed container </code></pre> <p>And the logs show the following panic message</p> <pre><code>panic: secrets is forbidden: User "system:serviceaccount:default:exhaling-ladybug-kubernetes-dashboard" cannot create resource "secrets" in API group "" in the namespace "kube-system" </code></pre> <p>Am I doing 
something wrong? Why is it trying to create a secret somewhere it cannot?</p> <p>Is it possible to set this up without giving the dashboard account cluster-admin permissions?</p>
<p>Check this out, mate:</p> <p><a href="https://akomljen.com/installing-kubernetes-dashboard-per-namespace/" rel="nofollow noreferrer">https://akomljen.com/installing-kubernetes-dashboard-per-namespace/</a></p> <p>It walks through installing the dashboard scoped to a single namespace instead of granting it cluster-admin. You can create your own roles if you want to.</p>
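<p>The panic in the question is an RBAC denial: the dashboard's service account may not create secrets in <code>kube-system</code>. A namespace-scoped role granting it rights only where you want it to operate might look like the following; the names and the exact resource list are illustrative placeholders, not taken verbatim from the linked article:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dashboard-role          # illustrative name
  namespace: my-namespace       # the single namespace the dashboard may touch
rules:
- apiGroups: [""]
  resources: ["secrets", "services", "pods", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-rolebinding   # illustrative name
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dashboard-role
subjects:
- kind: ServiceAccount
  name: exhaling-ladybug-kubernetes-dashboard   # the service account from the error message
  namespace: default
</code></pre>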
<p>I am trying to update my deployment with the latest image content on Azure Kubernetes Service every time some code is committed to GitHub. I have made a stage in my build pipeline to build and push the image to Docker Hub, which is working perfectly fine. However, in my release pipeline the image is being used as an artifact and is being deployed to the Azure Kubernetes Service, but the problem is that the image on AKS is not updating according to the image pushed to Docker Hub with the latest code.</p> <p>Right now, each time some commit happens, I have to manually update the image on AKS via the command</p> <p><em>kubectl set image deployment/demo-microservice demo-microservice=customerandcontact:contact</em> </p> <p>My Yaml File</p> <p><a href="https://i.stack.imgur.com/jAMCZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jAMCZ.png" alt="enter image description here"></a></p> <p>Can anyone tell me the error/changes, if any, in my yaml file to automatically update the image on AKS.</p>
<p>When you specify your image with a specific image tag, Kubernetes will default the container's <code>imagePullPolicy</code> to <code>IfNotPresent</code>, which means that the image won't be pulled again, and the previously pulled image will be deployed.</p> <p>Kubernetes will default the policy to <code>Always</code> only if the tag is not present (which is effectively the same as <code>latest</code>) or if the tag is set to <code>latest</code> explicitly.</p> <p>Check what the actual <code>imagePullPolicy</code> is on your Deployment template for the particular container.</p> <pre><code>kubectl get deployment demo-microservice -o yaml | grep imagePullPolicy -A 1 </code></pre> <p>Try patching the deployment</p> <pre><code>kubectl patch deployment demo-microservice -p '{"spec": { "template" : { "spec" : { "containers" : [{"name" : "demo-microservice", "image" : "repo/image:tag", "imagePullPolicy": "Always" }]}}}}' </code></pre> <p>Make sure that <code>imagePullPolicy</code> for the container in question is set to <code>Always</code>.</p>
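<p>For reference, the same setting declared directly in the Deployment manifest might look like this (abridged sketch; the image name is taken from the question). Note that even with <code>Always</code>, a fresh pull only happens when a pod is (re)created, so pushing a new image to the same tag still requires a rollout or pod restart:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-microservice
spec:
  template:
    spec:
      containers:
      - name: demo-microservice
        image: customerandcontact:contact   # same tag as in the question
        imagePullPolicy: Always             # pull on every pod start
</code></pre>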
<p><a href="https://github.com/kubernetes-retired/contrib/tree/master/ingress/controllers/nginx/examples/tls" rel="nofollow noreferrer">https://github.com/kubernetes-retired/contrib/tree/master/ingress/controllers/nginx/examples/tls</a> </p> <p>I've tried to configure https for my ingress resource by this tutorial. I've done all the needed steps, but when I try to go to my site it sends me: </p> <p><a href="https://i.stack.imgur.com/23EGK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/23EGK.png" alt="enter image description here"></a></p> <p>Should I do some additional steps?</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: "true" spec: rules: - host: www.domain.com http: paths: - backend: serviceName: front-end-service servicePort: 80 path: / - host: www.domain.com http: paths: - backend: serviceName: back-end-service servicePort: 3000 path: /api tls: - hosts: - www.domain.com secretName: my-sectet </code></pre> <p>The secret which I've created exists. I've checked it by using the command <code>kubectl get secrets</code>, and the name is the same as the one I use in the ingress resource.</p> <p>If you need additional info, please let me know</p>
<p>As mentioned in the comments, this tutorial is guiding you through setting up a self-signed certificate, which is not trusted by your browser. You would need to provide a cert your browser trusts or temporarily ignore the error locally. LetsEncrypt is an easy and free way to get a real cert, and cert-manager is a way to do that via Kubernetes.</p>
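<p>As a sketch of the cert-manager route: you create an ACME issuer once, then request certificates against it. The manifest below is illustrative only; the API version depends on the cert-manager release you install, and the email is a placeholder:</p> <pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Production LetsEncrypt endpoint; use the staging URL while testing
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-account-key     # secret cert-manager creates for the ACME account
    solvers:
    - http01:
        ingress:
          class: nginx
</code></pre> <p>cert-manager then populates the secret named in your ingress <code>tls</code> section with a certificate the browser trusts.</p>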
<p>Working on putting a Django API into Kubernetes.</p> <p>Any traffic being sent to <code>/</code> the <code>ingress-nginx</code> controller sends to the React FE. Any traffic being sent to <code>/api</code> is sent to the Django BE.</p> <p>This is the relevant part of <code>ingress-service.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - path: /?(.*) backend: serviceName: client-cluster-ip-service servicePort: 3000 - path: /api/?(.*) backend: serviceName: server-cluster-ip-service servicePort: 5000 </code></pre> <p>This is the <code>urls.py</code>:</p> <pre><code>from django.contrib import admin from django.urls import include, path urlpatterns = [ path('auth/', include('authentication.urls'), name='auth'), path('admin/', admin.site.urls, name='admin'), ] </code></pre> <p>The <code>client</code> portion works just fine. The <code>minikube</code> IP is <code>192.168.99.105</code>. Navigating to that IP loads the react front-end.</p> <p>Navigating to <code>192.168.99.105/api/auth/test/</code> brings me to a "Hello World!" response I quickly put together.</p> <p>However, when I try to go to <code>192.168.99.105/api/admin</code>, it automatically redirects me to <code>/admin/login/?next=/admin/</code>, which doesn't exist given that <code>/api</code> is being removed. 
<strong>Is there any way to prevent this behavior?</strong></p> <p>I've also just tried this:</p> <p><strong>ingress-service.yaml</strong></p> <pre><code>- http: paths: - path: /?(.*) backend: serviceName: client-cluster-ip-service servicePort: 3000 - path: /api/?(.*) backend: serviceName: server-cluster-ip-service servicePort: 5000 - path: /admin/?(.*) backend: serviceName: server-cluster-ip-service servicePort: 5000 </code></pre> <p><strong>urls.py</strong></p> <pre><code>urlpatterns = [ path('auth/', include('authentication.urls'), name='auth'), path('/', admin.site.urls), ] </code></pre> <p>Which just produces "Not Found".</p> <p>I also tried to add a prefix using this pattern that shows up in <a href="https://docs.djangoproject.com/en/2.2/topics/http/urls/#including-other-urlconfs" rel="nofollow noreferrer">the documentation</a>:</p> <pre><code>urlpatterns = [ path('api/', include([ path('auth/', include('authentication.urls'), name='auth'), path('admin/', admin.site.urls), ])), ] </code></pre> <p>But that just made it <code>/api/api</code>.</p> <p>Here are the routes that are defined for <code>admin/</code> in <code>site-packages/django/contrib/admin/sites.py</code>:</p> <pre><code># Admin-site-wide views. urlpatterns = [ path('', wrap(self.index), name='index'), path('login/', self.login, name='login'), path('logout/', wrap(self.logout), name='logout'), path('password_change/', wrap(self.password_change, cacheable=True), name='password_change'), path( 'password_change/done/', wrap(self.password_change_done, cacheable=True), name='password_change_done', ), path('jsi18n/', wrap(self.i18n_javascript, cacheable=True), name='jsi18n'), path( 'r/&lt;int:content_type_id&gt;/&lt;path:object_id&gt;/', wrap(contenttype_views.shortcut), name='view_on_site', ), ] </code></pre> <p>I guess the <code>''</code> is what is causing Django to strip the <code>/api</code> off the URL and make it just <code>192.168.99.105/admin</code> instead of <code>192.168.99.105/api/admin</code>.</p>
<p>Ok, finally figured it out. It definitely was a Django setting. Added the following to the Django <code>settings.py</code>:</p> <pre><code>FORCE_SCRIPT_NAME = '/api/' </code></pre> <p>Then I had to update the <code>STATIC_URL</code> because it was no longer serving the assets for the admin portal:</p> <pre><code>STATIC_URL = '/api/static/' </code></pre> <p>Full <code>settings.py</code> that I'm using, but really the only things that needed changing were adding <code>FORCE_SCRIPT_NAME</code> and updating <code>STATIC_URL</code>:</p> <pre><code>""" Django settings for config project. Generated by 'django-admin startproject' using Django 2.2.6. For more information on this file, see https://docs.djangoproject.com/en/2.2/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/2.2/ref/settings/ """ import os from datetime import timedelta # Build paths inside the project like this: os.path.join(BASE_DIR, ...) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Forces Django to not strip '/api' from the URI FORCE_SCRIPT_NAME = '/api/' # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = os.environ['SECRET_KEY'] # SECURITY WARNING: don't run with debug turned on in production! 
DEBUG = os.environ['DEBUG'] == 'True' ALLOWED_HOSTS = [ 'localhost', '127.0.0.1', '192.168.64.7' ] # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # Third-party packages 'rest_framework', # Local packages 'authentication' ] MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'whitenoise.middleware.WhiteNoiseMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'config.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'config.wsgi.application' # Database # https://docs.djangoproject.com/en/2.2/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': os.environ['PGDATABASE'], 'USER': os.environ['PGUSER'], 'PASSWORD': os.environ['PGPASSWORD'], 'HOST': os.environ['PGHOST'], 'PORT': 1423 } } # REST Framework settings # https://www.django-rest-framework.org/ REST_FRAMEWORK = { 'DEFAULT_RENDERER_CLASSES': [ 'rest_framework.renderers.JSONRenderer', ], 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework_simplejwt.authentication.JWTAuthentication', ), 'DEFAULT_PERMISSION_CLASSES': [ 'rest_framework.permissions.AllowAny', ], } # SIMPLE_JWT Settings # https://github.com/davesque/django-rest-framework-simplejwt SIMPLE_JWT = { 'ACCESS_TOKEN_LIFETIME': 
timedelta(minutes=5), 'REFRESH_TOKEN_LIFETIME': timedelta(days=1), 'ROTATE_REFRESH_TOKENS': False, 'BLACKLIST_AFTER_ROTATION': True, 'ALGORITHM': 'HS256', 'SIGNING_KEY': os.environ['SECRET_KEY'], 'VERIFYING_KEY': None, 'AUDIENCE': None, 'ISSUER': None, 'AUTH_HEADER_TYPES': ('Bearer',), 'USER_ID_FIELD': 'id', 'USER_ID_CLAIM': 'user_id', 'AUTH_TOKEN_CLASSES': ('rest_framework_simplejwt.tokens.AccessToken',), 'TOKEN_TYPE_CLAIM': 'token_type', 'JTI_CLAIM': 'jti', 'SLIDING_TOKEN_REFRESH_EXP_CLAIM': 'refresh_exp', 'SLIDING_TOKEN_LIFETIME': timedelta(minutes=5), 'SLIDING_TOKEN_REFRESH_LIFETIME': timedelta(days=1), } # Password validation # https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/2.2/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'America/Los_Angeles' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/2.2/howto/static-files/ STATIC_URL = '/api/static/' STATIC_ROOT = os.path.join(BASE_DIR, 'static') </code></pre> <p><strong>Dockerfile</strong></p> <pre><code>FROM python:3.7-slim ENV PYTHONUNBUFFERED 1 WORKDIR /app EXPOSE 5000 COPY requirements*.txt ./ RUN pip install -r requirements.txt COPY . . RUN python manage.py collectstatic CMD ["gunicorn", "-b", ":5000", "--log-level", "info", "config.wsgi:application"] </code></pre>
<p>I am trying to define a collection (<code>dict</code>), and I would like to put each definition on a new line (for readability), e.g.:</p> <pre><code>{{ $deployment := dict "Release" .Release "Chart" .Chart "Values" .Values }} </code></pre> <p>But when I do this, helm responds with a parse error: </p> <pre><code>Error: parse error in "XXX": template: XXX:2: unclosed action Error: UPGRADE FAILED: parse error in "XXX": template: XXX:2: unclosed action </code></pre> <p>Is there a way in Helm to do this?</p>
<p>I achieved this by defining the dict first and then setting one key per line.</p> <pre><code>{{- $myDict := dict "" "" -}} {{- $_ := set $myDict "myKey1" "myValue1" -}} {{- $_ := set $myDict "myKey2" "myValue2" -}} {{- $_ := set $myDict "myKey3" "myValue3" -}} {{- $_ := set $myDict "myKey4" "myValue4" -}} </code></pre> <p>Bonus tip: since the dict <code>get</code> function is seemingly available only in Helm 3 and later, you can use this hack to get a value from a dict into a string.</p> <pre><code>{{/* Hack needed until helm 3 which has 'get' for 'dict' */}} {{- $myValue3Var := pluck "myKey3" $myDict | first -}} </code></pre>
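<p>Applied to the dict from the question, the same pattern might look like this (a sketch; whether you seed the dict with <code>dict "" ""</code> or an empty <code>dict</code> depends on your Helm/Sprig version):</p> <pre><code>{{- $deployment := dict "" "" -}}
{{- $_ := set $deployment "Release" .Release -}}
{{- $_ := set $deployment "Chart" .Chart -}}
{{- $_ := set $deployment "Values" .Values -}}
</code></pre> <p>The original parse error happens because the Go template engine Helm was built against (at least in the versions current when this was asked) does not allow a single <code>{{ ... }}</code> action to span newlines; splitting the construction into one complete action per line sidesteps that restriction.</p>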
<p><a href="https://github.com/kubernetes-retired/contrib/tree/master/ingress/controllers/nginx/examples/tls" rel="nofollow noreferrer">https://github.com/kubernetes-retired/contrib/tree/master/ingress/controllers/nginx/examples/tls</a> </p> <p>I've tried to configure https for my ingress resource by this tutorial. I've done all the needed steps, but when I try to go to my site it sends me: </p> <p><a href="https://i.stack.imgur.com/23EGK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/23EGK.png" alt="enter image description here"></a></p> <p>Should I do some additional steps?</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: "true" spec: rules: - host: www.domain.com http: paths: - backend: serviceName: front-end-service servicePort: 80 path: / - host: www.domain.com http: paths: - backend: serviceName: back-end-service servicePort: 3000 path: /api tls: - hosts: - www.domain.com secretName: my-sectet </code></pre> <p>The secret which I've created exists. I've checked it by using the command <code>kubectl get secrets</code>, and the name is the same as the one I use in the ingress resource.</p> <p>If you need additional info, please let me know</p>
<p>If you are open to using Jetstack's cert-manager, you can refer to <a href="https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html" rel="nofollow noreferrer">this</a> for installation via Helm chart; thereafter, following the steps in <a href="https://stackoverflow.com/questions/58423312/how-do-i-test-a-clusterissuer-solver/58436097?noredirect=1#comment103215785_58436097">this</a> Stack Overflow post, you can get this done with a secure connection.</p> <p>cert-manager will itself create the secret referenced in the ingress TLS section; just check the status of the certificate once you map the secret name in the ingress rule, and the certificate should reach the Ready state.</p>
<p>I want to redirect a domain in nginx ingress on Kubernetes.</p> <pre><code>https://test.example.io/preview/qLxiVcDGxCaQ134650121853FTg4 </code></pre> <p>If <code>preview</code> appears in the URL, redirect to the other domain:</p> <pre><code>https://test.app.example.io/preview/qLxiVcDGxCaQ134650121853FTg4 </code></pre> <p>What I was trying:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: certmanager.k8s.io/cluster-issuer: staging nginx.ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.class: nginx name: staging-ingress spec: rules: - host: test.example.io http: paths: - path: / backend: serviceName: service-1 servicePort: 80 - path: /preview/* backend: url: serviceName: service-2 servicePort: 80 tls: - hosts: - test.example.io secretName: staging </code></pre> <p>For a plain nginx config, the equivalent block would be:</p> <pre><code>location ~ /preview { rewrite /preview https://test.app.example.com$uri permanent; } </code></pre>
<p>Following that logic, try this: </p> <pre><code>metadata: annotations: nginx.ingress.kubernetes.io/configuration-snippet: | rewrite /preview https://test.app.example.com$uri permanent; spec: rules: - host: test.example.io http: paths: - path: / backend: serviceName: service-1 servicePort: 80 - host: test.app.example.io http: paths: - path: /preview/* backend: serviceName: service-2 servicePort: 80 </code></pre> <p>Hope it works! </p> <p>With the code above, you should not access the site using <a href="https://test.app.example.io/preview/" rel="noreferrer">https://test.app.example.io/preview/</a> directly (it is just the redirect target).</p>
<p>I am using a statefulset and I spin up multiple pods, but they are not replicas of each other. I want to set the hostname of the pods and pass these hostnames as an env variable to all the pods so that they can communicate with each other.</p> <p>I tried to use <code>hostname</code> under the pod spec, but the hostname is never set to the specified value. Instead, it is set to the pod name, as podname-0.</p> <pre><code># Source: testrep/templates/statefulset.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: orbiting-butterfly-testrep labels: app.kubernetes.io/name: testrep helm.sh/chart: testrep-0.1.0 app.kubernetes.io/instance: orbiting-butterfly app.kubernetes.io/managed-by: Tiller spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: testrep app.kubernetes.io/instance: orbiting-butterfly strategy: type: Recreate template: metadata: labels: app.kubernetes.io/name: testrep app.kubernetes.io/instance: orbiting-butterfly spec: nodeSelector: testol: ad3 hostname: test1 containers: - name: testrep image: "test/database:v1" imagePullPolicy: IfNotPresent env: - name: DB_HOSTS value: test1,test2,test3 </code></pre>
<ol> <li>As per documentation:</li> </ol> <blockquote> <p>StatefulSet is the workload API object used to manage stateful applications.</p> </blockquote> <blockquote> <p>Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.</p> </blockquote> <p>StatefulSets are valuable for applications that require one or more of the following:</p> <ul> <li><strong>Stable, unique network identifiers</strong>.</li> <li>Stable, persistent storage.</li> <li>Ordered, graceful deployment and scaling.</li> <li>Ordered, automated rolling updates.</li> </ul> <ol start="2"> <li>Statefulset <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations" rel="nofollow noreferrer">Limitations</a>:</li> </ol> <blockquote> <p>StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.</p> </blockquote> <ol start="2"> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity" rel="nofollow noreferrer">Pod Identity</a></li> </ol> <blockquote> <p>StatefulSet Pods have a unique identity that is comprised of an ordinal, a stable network identity, and stable storage. The identity sticks to the Pod, regardless of which node it’s (re)scheduled on. <strong>For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, from 0 up through N-1, that is unique over the Set</strong>.</p> </blockquote> <ol start="3"> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">Stable Network ID</a></li> </ol> <blockquote> <p>Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. <strong>The pattern for the constructed hostname is $(statefulset name)-$(ordinal)</strong>. 
<strong>The example above will create three Pods named web-0,web-1,web-2</strong>. A StatefulSet can use a Headless Service to control the domain of its Pods. The domain managed by this Service takes the form: $(service name).$(namespace).svc.cluster.local, where “cluster.local” is the cluster domain. As each Pod is created, it gets a matching DNS subdomain, taking the form: $(podname).$(governing service domain), where the governing service is defined by the serviceName field on the StatefulSet.</p> </blockquote> <p><strong>Note</strong>:</p> <p>You are responsible for creating the Headless Service responsible for the network identity of the pods.</p> <p>So, as described by <a href="https://stackoverflow.com/a/58669371/11207414">vjdhama</a>, please create your StatefulSet with a Headless Service.</p> <p>You can find this example in the docs:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx spec: ports: - port: 80 name: web clusterIP: None selector: app: nginx apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: selector: matchLabels: app: nginx # has to match .spec.template.metadata.labels serviceName: &quot;nginx&quot; # has to match headless Service metadata.name replicas: 3 # by default is 1 template: metadata: labels: app: nginx # has to match .spec.selector.matchLabels spec: terminationGracePeriodSeconds: 10 containers: - name: nginx image: k8s.gcr.io/nginx-slim:0.8 ports: - containerPort: 80 </code></pre> <p>In this scenario Pod DNS and Pod Hostnames should be respectively:</p> <pre><code> Pod DNS web-{0..N-1}.nginx.default.svc.cluster.local Pod Hostname web-{0..N-1} NAME READY STATUS RESTARTS AGE IP pod/web-0 1/1 Running 0 5m 192.168.148.78 pod/web-1 1/1 Running 0 4m53s 192.168.148.79 pod/web-2 1/1 Running 0 4m51s 192.168.148.80 </code></pre> <p>From the Pod perspective:</p> <pre><code> root@web-2:# nslookup nginx Server: 10.96.0.10 Address: 10.96.0.10#53 Name: nginx.default.svc.cluster.local Address: 
192.168.148.80 Name: nginx.default.svc.cluster.local Address: 192.168.148.78 Name: nginx.default.svc.cluster.local Address: 192.168.148.79 </code></pre> <p>So you can call each of the respective pods using the Pod DNS, like:</p> <pre><code>web-0.nginx.default.svc.cluster.local </code></pre> <p><strong>Update:</strong></p> <p>Exposing single pod from StatefulSet.</p> <blockquote> <h2>Pod Name Label</h2> <p>When the StatefulSet controller creates a Pod, it adds a label, <code>statefulset.kubernetes.io/pod-name</code>, that is set to the name of the Pod. This label allows you to attach a Service to a specific Pod in the StatefulSet.</p> </blockquote> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label</a></p> <p>You can find <a href="https://itnext.io/exposing-statefulsets-in-kubernetes-698730fb92a1" rel="nofollow noreferrer">here</a> tricky way. Using described above advantages of Statefulset:</p> <blockquote> <p>The pattern for the constructed hostname is $(statefulset name)-$(ordinal). The example above will create three Pods named web-0,web-1,web-2.</p> </blockquote> <p>So as an example:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: app-0 spec: type: LoadBalancer selector: statefulset.kubernetes.io/pod-name: web-0 ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>Will do this for you.</p> <p>Hope this help.</p>
<p>I have one question about Grafana. How can I use the existing Prometheus daemonset on GKE for Grafana? I do not want to spin up one more Prometheus deployment just for Grafana. I came up with this question after I spun up the GKE cluster. I have checked the <code>kube-system</code> namespace, and it turns out there is a Prometheus <code>daemonset</code> already deployed. </p> <pre><code>$ kubectl get daemonsets -n kube-system NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE prometheus-to-sd 2 2 2 2 2 beta.kubernetes.io/os=linux 19d </code></pre> <p>and I would like to use this Prometheus.</p> <p>I have a Grafana deployment with helm <code>stable/grafana</code></p> <pre><code>$ kubectl get deploy -n dev NAME READY UP-TO-DATE AVAILABLE AGE grafana 1/1 1 1 9m20s </code></pre> <p>Currently, I am using <code>stable/prometheus</code></p>
<p>prometheus-to-sd is not a Prometheus instance, but a component that allows getting data from Prometheus-format endpoints into GCP's Stackdriver. More info here: <a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd</a></p> <p>If you'd like to have Prometheus, you'll have to run it separately. (The <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">prometheus-operator helm chart</a> is able to deploy the whole monitoring stack to your GKE cluster easily, which may or may not be exactly what you need here.)</p> <p>Note that recent Grafana versions come with a Stackdriver datasource, which allows you to query Stackdriver directly from Grafana (if all the metrics you need are, or can be, in Stackdriver).</p>
<p>I am aware that it is possible to enable the master node to execute pods, and that is my concern. Since the default configuration does not allow the master to run pods, should I change it? What is the reason for the default configuration being as it is?</p> <p>If the change can be performed in some situations, I would like to ask whether my cluster is one of them. It has only three nodes with exactly the same hardware, and more nodes will most likely not be added in the foreseeable future. In my opinion, as I have three equal nodes, it would be a waste of resources to use 1/3 of my cluster's computational power just to run the Kubernetes master. Am I right?</p> <p>[Edit1]</p> <p>I have found the following reason in the Kubernetes documentation. <a href="https://i.stack.imgur.com/BGndB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BGndB.png" alt="enter image description here"></a> Is security the only reason?</p>
<p>Technically, it doesn't need to run on a dedicated node. But for your Kubernetes cluster to run, you need your masters to work properly. And one of the ways to ensure they stay secure, stable and performant is to use a separate node which runs only the master components and no regular pods. If you share the node with different pods, there are several ways they could impact the master. For example:</p> <ul> <li>The other pods will impact the performance of the masters (network or disk latencies, CPU cache etc.)</li> <li>They might be a security risk (if someone manages to hack from some other pod into the master node)</li> <li>A badly written application can cause stability issues to the node</li> </ul> <p>While it can be seen as wasting resources, you can also see it as the price to pay for the stability of your master / Kubernetes cluster. However, it doesn't have to be a waste of 1/3 of your resources. Depending on how you deploy your Kubernetes cluster, you can use different hosts for different nodes. So for example you can use a small host for the master and bigger nodes for the workers.</p>
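<p>For completeness, the mechanism that keeps regular pods off the master is a taint on the master node. If you do decide to schedule workloads there, you can inspect and remove it; the node name below is illustrative (and on newer releases the taint key may be <code>node-role.kubernetes.io/control-plane</code> instead):</p> <pre><code># See which taint is keeping workloads off the master
kubectl describe node my-master-node | grep Taints

# Remove the master taint so the scheduler may place regular pods there
kubectl taint nodes my-master-node node-role.kubernetes.io/master-
</code></pre> <p>The trailing <code>-</code> means "remove this taint"; reversing the decision later is just a matter of re-applying the taint with the <code>:NoSchedule</code> effect.</p>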
<p>I need a cron job to run every 5 minutes. If an earlier cron job is still running, another cron job should not start. I tried setting concurrency policy to Forbid, but then the cron job does not run at all.</p> <ol> <li>Job gets launched every 5 minutes as expected, but it launches even if the earlier cron job has not completed yet</li> </ol> <pre class="lang-yaml prettyprint-override"><code>spec: concurrencyPolicy: Allow schedule: '*/5 * * * *' </code></pre> <ol start="2"> <li>This is supposed to solve the problem, but the cron job never gets launched with this approach</li> </ol> <pre class="lang-yaml prettyprint-override"><code>spec: concurrencyPolicy: Forbid schedule: '*/5 * * * *' </code></pre> <ol start="3"> <li>Setting the startingDeadlineSeconds to 3600, or even to 10, did not make a difference.</li> </ol> <pre class="lang-yaml prettyprint-override"><code>spec: concurrencyPolicy: Forbid schedule: '*/5 * * * *' startingDeadlineSeconds: 10 </code></pre> <p>Could someone please help me here?</p>
<p>From the Kubernetes documentation:</p> <p>Concurrency Policy specifies how to treat concurrent executions of a job that is created by this cron job. The spec may specify only one of the following concurrency policies:</p> <p><strong>Allow (default)</strong>: The cron job allows concurrently running jobs</p> <p><strong>Forbid</strong>: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn’t finished yet, the cron job skips the new job run</p> <p><strong>Replace</strong>: If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run</p> <p>In your case <strong>'concurrencyPolicy: Forbid'</strong> should work. It will not allow a new job to be run if the previous job is still running. The problem in your case is not with concurrencyPolicy.</p> <p>It might be related to startingDeadlineSeconds. Can you remove it and try again?</p>
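<p>For reference, a complete manifest combining the schedule from the question with <code>concurrencyPolicy: Forbid</code> might look like this; the image and command are placeholders, and <code>batch/v1beta1</code> applies to the Kubernetes versions current when this was asked (newer clusters use <code>batch/v1</code>):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob          # illustrative name
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Forbid # skip a run while the previous one is still going
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-cronjob
            image: busybox  # placeholder workload
            command: ['sh', '-c', 'echo started; sleep 600']
          restartPolicy: OnFailure
</code></pre>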
<p>How can I add alertmanager to istio prometheus deployed by official helm chart?</p> <p><a href="https://istio.io/docs/setup/install/helm/" rel="nofollow noreferrer">https://istio.io/docs/setup/install/helm/</a></p> <pre><code>helm upgrade istio install/kubernetes/helm/istio --namespace istio-system --tiller-namespace istio-system \ --set tracing.enabled=true \ --set tracing.ingress.enabled=true \ --set grafana.enabled=true \ --set kiali.enabled=true \ --set "kiali.dashboard.jaegerURL=http://jaeger-query.kaws.skynet.com/jaeger" \ --set "kiali.dashboard.grafanaURL=http://grafana.kaws.skynet.com" \ --set "kiali.prometheusAddr=http://prometheus.kaws.skynet.com" </code></pre> <p>Is it possible to add alertmanager to istio setup?</p>
<blockquote> <p>Is it possible to add alertmanager to istio setup?</p> </blockquote> <p>Yes, it is possible.</p> <p>As I could read on <a href="https://github.com/istio/istio/issues/17094" rel="nofollow noreferrer">GitHub</a>:</p> <blockquote> <p>Generally Istio is not trying to manage production grade Prometheus, grafana, etc deployments. We are doing some work to make it easy to integrate istio with your own Prometheus, kiali, etc. See <a href="https://github.com/istio/installer/tree/master/istio-telemetry/prometheus-operator" rel="nofollow noreferrer">https://github.com/istio/installer/tree/master/istio-telemetry/prometheus-operator</a> as one way you can integrate with the Prometheus operator. You can define your own Prometheus setup then just add the configs to scrape istio components.</p> </blockquote> <p>You will have to change the prometheus <a href="https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio/charts/prometheus" rel="nofollow noreferrer">values and templates</a> like <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">there</a>, add the alertmanager YAMLs, and then configure it to work in the istio namespace.</p> <blockquote> <p>How can I add alertmanager to istio prometheus deployed by official helm chart?</p> </blockquote> <p>I would recommend using</p> <pre><code>helm fetch istio.io/istio --untar </code></pre> <p>which downloads the chart to your local directory so you can view it.</p> <p>Then add alertmanager, and install the istio helm chart from your local directory instead of the helm repository.</p>
<p>I have a kubernetes cluster created using kubadm with 1 master and 2 worker. flannel is being used as the network plugin. Noticed that the docker0 bridge is down on all the worker node and master node but the cluster networking is working fine. Is it by design that the docker0 bridge will be down if we are using any network plugin like flannel in kubernetes cluster?</p> <pre><code>docker0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN link/ether 02:42:ad:8f:3a:99 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 </code></pre>
<p>I am posting a community wiki answer from <a href="https://stackoverflow.com/questions/54102888/what-role-does-network-bridge-docker0-play-in-k8s-with-flannel">this</a> SO thread as I believe it answers your question.</p> <hr /> <p>There are two network models here Docker and Kubernetes.</p> <p>Docker model</p> <blockquote> <p>By default, Docker uses host-private networking. It creates a virtual bridge, called <code>docker0</code> by default, and allocates a subnet from one of the private address blocks defined in <a href="https://www.rfc-editor.org/rfc/rfc1918" rel="nofollow noreferrer">RFC1918</a> for that bridge. For each container that Docker creates, it allocates a virtual Ethernet device (called <code>veth</code>) which is attached to the bridge. The veth is mapped to appear as <code>eth0</code> in the container, using Linux namespaces. The in-container <code>eth0</code> interface is given an IP address from the bridge’s address range.</p> <p><strong>The result is that Docker containers can talk to other containers only if they are on the same machine</strong> (and thus the same virtual bridge). <strong>Containers on different machines can not reach each other</strong> - in fact they may end up with the exact same network ranges and IP addresses.</p> </blockquote> <p>Kubernetes model</p> <blockquote> <p>Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):</p> </blockquote> <ul> <li>all containers can communicate with all other containers without NAT</li> <li>all nodes can communicate with all containers (and vice-versa) without NAT</li> <li>the IP that a container sees itself as is the same IP that others see it as</li> </ul> <blockquote> <p>Kubernetes applies IP addresses at the <code>Pod</code> scope - containers within a <code>Pod</code> share their network namespaces - including their IP address. 
This means that containers within a <code>Pod</code> can all reach each other’s ports on <code>localhost</code>. This does imply that containers within a <code>Pod</code> must coordinate port usage, but this is no different than processes in a VM. This is called the “IP-per-pod” model. This is implemented, using Docker, as a “pod container” which holds the network namespace open while “app containers” (the things the user specified) join that namespace with Docker’s <code>--net=container:&lt;id&gt;</code> function.</p> <p>As with Docker, it is possible to request host ports, but this is reduced to a very niche operation. In this case a port will be allocated on the host <code>Node</code> and traffic will be forwarded to the <code>Pod</code>. The <code>Pod</code> itself is blind to the existence or non-existence of host ports.</p> </blockquote> <p>In order to integrate the platform with the underlying network infrastructure, Kubernetes provides a plugin specification called <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Container Networking Interface (CNI)</a>. If the Kubernetes fundamental requirements are met, vendors can use the network stack as they like, typically using overlay networks to support <strong>multi-subnet</strong> and <strong>multi-az</strong> clusters.</p> <p>Below is shown how overlay networks are implemented through <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">Flannel</a>, which is a popular <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">CNI</a>.</p> <p><a href="https://i.stack.imgur.com/DOxTE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DOxTE.png" alt="flannel" /></a></p> <p>You can read more about other CNIs <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">here</a>. 
The Kubernetes approach is explained in the <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Cluster Networking</a> docs. I also recommend reading <a href="https://www.contino.io/insights/kubernetes-is-hard-why-eks-makes-it-easier-for-network-and-security-architects" rel="nofollow noreferrer">Kubernetes Is Hard: Why EKS Makes It Easier for Network and Security Architects</a>, which explains how <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">Flannel</a> works, as well as another <a href="https://medium.com/all-things-about-docker/setup-hyperd-with-flannel-network-1c31a9f5f52e" rel="nofollow noreferrer">article from Medium</a>.</p> <p>Hope this answers your question.</p>
<p>I installed a "nginx ingress controller" on my GKE cluster. I followed <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">this guide</a> to install the nginx ingress controller on GKE.</p> <p>When deploying resources for the service and ingress resource, I realized that the ingress controller was at <code>0/1</code> <a href="https://i.stack.imgur.com/r3luB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r3luB.png" alt="enter image description here"></a></p> <p>Events telling me:</p> <pre><code>0/1 nodes are available: 1 node(s) didn't match node selector. </code></pre> <p>Now I checked the yaml/describe: <a href="https://pastebin.com/QG3GKxh1" rel="nofollow noreferrer">https://pastebin.com/QG3GKxh1</a> and found that:</p> <pre><code>nodeSelector: kubernetes.io/os: linux </code></pre> <p>Which looks fine in my opinion. Since I just used the command from the guide to install the controller, I have no idea what went wrong on my side.</p> <h2>Solution:</h2> <p>The provided answer showed me the way. My node was labeled with <code>beta.kubernetes.io/os: linux</code> while the controller was looking for <code>kubernetes.io/os: linux</code>. Changing the <code>nodeSelector</code> in the controller worked.</p>
<p><code>nodeSelector</code> is used to constraint the nodes on which your Pods can be scheduled.</p> <p>With:</p> <pre><code>nodeSelector: kubernetes.io/os: linux </code></pre> <p>You are saying that Pods must be assigned to a node that has the label <code>kubernetes.io/os: linux</code>. If none of your nodes has that label, the Pod will never get scheduled.</p> <p>Removing the selector from the nginx ingress controller or adding the label <code>kubernetes.io/os: linux</code> to any node should fix your issue.</p>
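<p>A quick way to verify this is to list the labels on your nodes and, if needed, add the one the controller expects (the node name below is a placeholder):</p> <pre><code>kubectl get nodes --show-labels
kubectl label node my-node kubernetes.io/os=linux
</code></pre>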
<p>Trying to Install Kubernetes 1.16.2 from the binaries and i see this issue when i try to check the component status.</p> <p>The Response object shows all are Healthy but the table below shows unknown.</p> <pre><code>root@instance:/opt/configs# kubectl get cs -v=8 I1104 05:54:48.554768 25209 round_trippers.go:420] GET http://localhost:8080/api/v1/componentstatuses?limit=500 I1104 05:54:48.555186 25209 round_trippers.go:427] Request Headers: I1104 05:54:48.555453 25209 round_trippers.go:431] Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json I1104 05:54:48.555735 25209 round_trippers.go:431] User-Agent: kubectl/v1.16.2 (linux/amd64) kubernetes/c97fe50 I1104 05:54:48.567372 25209 round_trippers.go:446] Response Status: 200 OK in 11 milliseconds I1104 05:54:48.567388 25209 round_trippers.go:449] Response Headers: I1104 05:54:48.567392 25209 round_trippers.go:452] Cache-Control: no-cache, private I1104 05:54:48.567395 25209 round_trippers.go:452] Content-Type: application/json I1104 05:54:48.567397 25209 round_trippers.go:452] Date: Mon, 04 Nov 2019 05:54:48 GMT I1104 05:54:48.567400 25209 round_trippers.go:452] Content-Length: 661 I1104 05:54:48.567442 25209 request.go:968] Response Body: {"kind":"ComponentStatusList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/componentstatuses"},"items":[{"metadata":{"name":"etcd-0","selfLink":"/api/v1/componentstatuses/etcd-0","creationTimestamp":null},"conditions":[{"type":"Healthy","status":"True","message":"{\"health\":\"true\"}"}]},{"metadata":{"name":"controller-manager","selfLink":"/api/v1/componentstatuses/controller-manager","creationTimestamp":null},"conditions":[{"type":"Healthy","status":"True","message":"ok"}]},{"metadata":{"name":"scheduler","selfLink":"/api/v1/componentstatuses/scheduler","creationTimestamp":null},"conditions":[{"type":"Healthy","status":"True","message":"ok"}]}]} I1104 05:54:48.567841 25209 table_printer.go:44] Unable to decode server response into a Table. 
Falling back to hardcoded types: attempt to decode non-Table object into a v1beta1.Table I1104 05:54:48.567879 25209 table_printer.go:44] Unable to decode server response into a Table. Falling back to hardcoded types: attempt to decode non-Table object into a v1beta1.Table I1104 05:54:48.567888 25209 table_printer.go:44] Unable to decode server response into a Table. Falling back to hardcoded types: attempt to decode non-Table object into a v1beta1.Table NAME AGE etcd-0 &lt;unknown&gt; controller-manager &lt;unknown&gt; scheduler &lt;unknown&gt; </code></pre>
<p>There appears to be an issue with the table converter for component status, specifically with k8s version 1.16.2. A PR has already been raised to address this issue; follow and track it at the link below.</p> <p>--> <a href="https://github.com/kubernetes/kubernetes/issues/83024" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/83024</a></p>
<p>The application which I am working on runs as a deployment in a Kubernetes cluster. Pods created for this deployment are spread across various nodes in the cluster. Our application can handle only one TCP connection at a time and would reject further connections. Currently we use kube-proxy (iptables mode) for distributing loads across pods on various nodes, but pods are chosen in a random way and connections are getting dropped when a connection is passed to a busy pod. Can I use Kube-router's least-connection based load balancing algorithm for my use case? I want the traffic to be load balanced across the pods running on various nodes. Can I achieve this using Kube-router? </p> <p>As far as I know, kube-proxy's IPVS mode load balances traffic only across pods on the same node, as kube-proxy runs as a DaemonSet. Is it the same with Kube-router as well?</p>
<p>Kube-proxy's IPVS mode does load balance traffic across pods placed on different nodes. </p> <p>You can refer to this blog post where have a Deep Dive into this matter: <a href="https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/" rel="nofollow noreferrer">IPVS-Based In-Cluster Load Balancing Deep Dive</a></p>
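<p>For completeness: IPVS mode has to be enabled explicitly on kube-proxy, and it supports a least-connection scheduler, which matches the use case in the question. A sketch of the relevant flags (assuming the IPVS kernel modules are loaded on the node):</p> <pre><code>kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lc   # lc = least connection
# inspect the resulting virtual servers and their schedulers on a node:
ipvsadm -Ln
</code></pre>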
<p>I have a problem with my database migration inside Kubernetes.</p> <p>Currently, I execute an initContainer script when my pods start.</p> <p>But this process is very long (10 minutes).</p> <p>When I kill my pod, the initContainer process restarts, but that's not really necessary when there is no version upgrade. 10 minutes later....</p> <p>What is the best way to run a database update only when my image version is different?</p>
<p>Usually, the migration engine, e.g. Alembic in Python or Flyway in Java, checks the DB schema version before running migrations. If you use bare SQL scripts you can mimic this behavior - add a version marker (e.g. a table called db_version), write the current version there, and check it before running the migration.</p>
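The version gate described above can be sketched in shell for the initContainer. The marker file path and target version here are illustrative assumptions; a real setup would query a db_version table inside the database itself instead of a file:

```shell
#!/bin/sh
# Skip the long migration when the stored schema version already
# matches the version this image targets.
VERSION_FILE=/tmp/db_schema_version   # assumption: stand-in for a db_version table
TARGET_VERSION=2                      # assumption: version shipped with this image

current=$(cat "$VERSION_FILE" 2>/dev/null || echo 0)
if [ "$current" -ge "$TARGET_VERSION" ]; then
  echo "schema already at version $current, skipping migration"
else
  echo "migrating schema from version $current to $TARGET_VERSION"
  # ... run the actual SQL migration scripts here ...
  echo "$TARGET_VERSION" > "$VERSION_FILE"
fi
```

Running it a second time takes the skip branch, which is the behavior wanted here: the 10-minute migration only happens when the stored version is behind the target.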
<p>I created a helm chart which has <code>secrets.yaml</code> as:</p> <pre><code>apiVersion: v1 kind: Secret type: Opaque metadata: name: appdbpassword stringData: password: password@1 </code></pre> <p>My pod is:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: expense-pod-sample-1 spec: containers: - name: expense-container-sample-1 image: exm:1 command: [ "/bin/sh", "-c", "--" ] args: [ "while true; do sleep 30; done;" ] envFrom: - secretRef: name: appdbpassword </code></pre> <p>Whenever I run the <code>kubectl get secrets</code> command, I get the following secrets:</p> <pre><code>name Type Data Age appdbpassword Opaque 1 41m sh.helm.release.v1.myhelm-1572515128.v1 helm.sh/release.v1 1 41m </code></pre> <p>Why am I getting that extra secret? Am I missing something here?</p>
<p>Helm v2 <a href="http://technosophos.com/2017/03/23/how-helm-uses-configmaps-to-store-data.html" rel="noreferrer">used ConfigMaps</a> by default to store release information. The ConfigMaps were created in the same namespace of the Tiller (generally <code>kube-system</code>).</p> <p>In Helm v3 the Tiller was removed, and the information about each release version <a href="https://helm.sh/docs/faq/changes_since_helm2/#release-names-are-now-scoped-to-the-namespace" rel="noreferrer">had to go somewhere</a>:</p> <blockquote> <p>In Helm 3, release information about a particular release is now stored in the same namespace as the release itself.</p> </blockquote> <p>Furthermore, Helm v3 <a href="https://helm.sh/docs/faq/changes_since_helm2/#secrets-as-the-default-storage-driver" rel="noreferrer">uses Secrets as default</a> storage driver instead of ConfigMaps (i.e., it's expected that you see these helm secrets for each namespace that has a release version on it).</p>
<p>So far I was convinced that one need a PVC to access a PV like in this example from k8s <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes" rel="nofollow noreferrer">doc</a>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: myfrontend image: nginx volumeMounts: - mountPath: "/var/www/html" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: myclaim </code></pre> <p>But then I saw in <a href="https://docs.docker.com/ee/ucp/kubernetes/storage/use-nfs-volumes/" rel="nofollow noreferrer">Docker doc</a> that one can use the following syntax (example using nfs):</p> <pre><code>kind: Pod apiVersion: v1 metadata: name: nfs-in-a-pod spec: containers: - name: app image: alpine volumeMounts: - name: nfs-volume mountPath: /var/nfs # Please change the destination you like the share to be mounted too command: ["/bin/sh"] args: ["-c", "sleep 500000"] volumes: - name: nfs-volume nfs: server: nfs.example.com # Please change this to your NFS server path: /share1 # Please change this to the relevant share </code></pre> <p>I am confused: </p> <ul> <li>Is this syntax creating a PVC under the hood?</li> <li>Or is any PV matching the spec mounted without a PVC?</li> <li>Or perhaps the spec selects an existing PVC?</li> </ul>
<p>An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. You don't need a PV and PVC for an emptyDir volume.</p> <p>Note that when a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.</p> <p>If you want to retain the data even if the pod crashes, restarts, or is deleted or undeployed, then you need to use a PV and PVC.</p> <p>Look at another example below, using hostPath, where you don't need a PV and PVC:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-pd spec: containers: - image: k8s.gcr.io/test-webserver name: test-container volumeMounts: - mountPath: /test-pd name: test-volume volumes: - name: test-volume hostPath: # directory location on host path: /data # this field is optional type: Directory </code></pre> <p>If you need to store the data on external storage solutions like NFS, Azure File Storage, AWS EBS, Google persistentDisk etc., then you need to create a PV and PVC.</p> <p>Mounting a PV directly to a pod is not allowed and is against Kubernetes design principles. It would cause tight coupling between the pod volume and the underlying storage.</p> <p>A PVC enables loose coupling between the pod and the persistent volume. The pod doesn't know what underlying storage is used to store the container data, and it doesn't need to know.</p> <p>PV and PVC are required for static and dynamic provisioning of storage volumes for workloads in a Kubernetes cluster.</p>
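<p>For the NFS case from the question, a minimal static-provisioning sketch with a PV/PVC pair could look like this (server, path, size and names are placeholders):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /share1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
</code></pre> <p>The pod then references <code>claimName: nfs-pvc</code> exactly as in the first example from the question.</p>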
<p>I have created three openshift container registries. For example I will use the below addresses:</p> <p><a href="https://openshift-registry-1.com" rel="nofollow noreferrer">https://openshift-registry-1.com</a><br> <a href="https://openshift-registry-2.com" rel="nofollow noreferrer">https://openshift-registry-2.com</a><br> <a href="https://openshift-registry-3.com" rel="nofollow noreferrer">https://openshift-registry-3.com</a><br></p> <p>I am looking for a way or a tool to push the same image to all three in one action.</p>
<p>I would use the <a href="https://github.com/containers/skopeo" rel="nofollow noreferrer">skopeo</a> cli for this from inside my cicd/orchestration tool of choice (e.g. Jenkins, Ansible Tower, etc). skopeo is a command line utility that performs various operations on container images and image repositories.</p> <p>Take a look at the official OpenShift blog post <a href="https://blog.openshift.com/promoting-container-images-between-registries-with-skopeo/" rel="nofollow noreferrer">Promoting container images between container registries with skopeo</a> for example usage.</p> <p>The example they post there:</p> <pre><code>def namespace, appReleaseTag, webReleaseTag, prodCluster, prodProject, prodToken pipeline { agent { label 'skopeo' } stages { stage('Choose Release Version') { steps { script { openshift.withCluster() { // Login to the production cluster namespace = openshift.project() prodCluster = env.PROD_MASTER.replace("https://","insecure://") withCredentials([usernamePassword(credentialsId: "${namespace}-prod-credentials", usernameVariable: "PROD_USER", passwordVariable: "PROD_TOKEN")]) { prodToken = env.PROD_TOKEN } // Get list of tags in the ImageStream to show the release-manager def appTags = openshift.selector("istag").objects().collect { it.metadata.name }.findAll { it.startsWith 'app:' }.collect { it.replaceAll(/app:(.*)/, "\$1") }.sort() timeout(5) { def inputs = input( ok: "Deploy", message: "Enter release version to promote to PROD", parameters: [ string(defaultValue: "prod", description: 'Name of the PROD project to create', name: 'PROD Project Name'), choice(choices: appTags.join('\n'), description: '', name: 'Application Release Version'), ] ) appReleaseTag = inputs['Application Release Version'] prodProject = inputs['PROD Project Name'] } } } } } stage('Create PROD') { steps { script { openshift.withCluster(prodCluster, prodToken) { openshift.newProject(prodProject, "--display-name='CoolStore PROD'") } } } } stage('Promote Images to PROD') { 
steps { script { openshift.withCluster() { def srcApplicationRef = openshift.selector("istag", "app:${appReleaseTag}").object().image.dockerImageReference def destApplicationRef = "${env.PROD_REGISTRY}/${prodProject}/app:${appReleaseTag}" def srcToken = readFile "/run/secrets/kubernetes.io/serviceaccount/token" sh "skopeo copy docker://${srcApplicationRef} docker://${destApplicationRef} --src-creds openshift:${srcToken} --dest-creds openshift:${prodToken}" } } } } stage('Deploy to PROD') { steps { script { openshift.withCluster(prodCluster, prodToken) { openshift.withProject(prodProject) { def template = 'https://raw.githubusercontent.com/openshift-labs/myapp/myapp-template.yaml' openshift.apply( openshift.process("-f", template, "-p", "APPLICATION_IMAGE_VERSION=${appReleaseTag}", "-p", "IMAGE_NAMESPACE=") ) } } } } } } } </code></pre>
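<p>For the original goal of pushing one image to all three registries in a single action, a simple loop around <code>skopeo copy</code> is enough (the source image, project and credentials below are placeholders):</p> <pre><code>for reg in openshift-registry-1.com openshift-registry-2.com openshift-registry-3.com; do
  skopeo copy \
    docker://source-registry.example.com/myproject/myapp:1.0 \
    docker://$reg/myproject/myapp:1.0 \
    --dest-creds myuser:mytoken
done
</code></pre>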
<p>I have some specific questions regarding <code>gitlab-ci</code> and <code>runner</code>:</p> <ol> <li><p>If my specific runner is configured in a Kubernetes cluster, how does code mirroring happen into the runner from the GitLab code repository?</p></li> <li><p>How does the build happen in the runner when it is configured within a Kubernetes cluster?</p></li> <li><p>When using any Docker image in my .gitlab-ci.yml, how are those images pulled by the runner, and how are the commands mentioned within the "script" tag executed in those Docker containers? Does the runner create pods within the Kubernetes cluster (where the runner is configured) with the image mentioned in .gitlab-ci.yml, and execute commands within those containers?</p></li> <li><p>Any additional explanations or references to learning material on how GitLab Runner works internally are highly appreciated.</p></li> </ol>
<p>I'm assuming when you say your GitLab Runner is configured in Kubernetes you mean you're using the Kubernetes executor. I marked the sections relevant to your questions. </p> <p><strong>(1)</strong> GitLab CI pulls the code from the repository (if public it's not an issue, but you can also use a <a href="https://docs.gitlab.com/runner/security/index.html#usage-of-private-docker-images-with-if-not-present-pull-policy" rel="nofollow noreferrer">private registry</a>). Basically a helper image is used to clone the repository and download any artifacts into a container.</p> <p>The Kubernetes executor lets you use an existing Kubernetes cluster to execute your pipeline/build step by calling the Kubernetes cluster API and creating a new Pod, with both build and services containers for each job. <strong>(3)</strong></p> <p>A <a href="https://docs.gitlab.com/runner/executors/kubernetes.html" rel="nofollow noreferrer">more detailed view of the steps</a> a Runner takes: </p> <blockquote> <ul> <li>Prepare: Create the Pod against the Kubernetes Cluster. This creates the containers required for the build and services to run.</li> <li>Pre-build: Clone, restore cache and download artifacts from previous stages. This is run on a special container as part of the Pod. <strong>(2)</strong></li> <li>Build: User build.</li> <li>Post-build: Create cache, upload artifacts to GitLab. This also uses the special container as part of the Pod.</li> </ul> </blockquote> <p>The <a href="https://gitlab.com/gitlab-org/gitlab-runner/tree/master" rel="nofollow noreferrer">GitLab repository for the runners</a> might also be interesting for you.</p>
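<p>For context, a minimal <code>config.toml</code> for a Runner using the Kubernetes executor could look like the following sketch (URL, token and namespace are placeholders):</p> <pre><code>[[runners]]
  name = "k8s-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab"
    image = "alpine:latest"  # default image when a job does not specify one
</code></pre>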
<p>I'm trying to use the <code>extraEnv</code> property in order to add additional environment variables to set in the pod using helm charts.</p> <p>I have a <code>values.yaml</code> file that includes: </p> <pre><code>controller: service: loadBalancerIP: extraEnvs: - name: ev1 value: - name: ev2 value </code></pre> <p>First I've set the loadBalancerIP the following way:</p> <p><code>helm template path/charts/ingress --set nginx-ingress.controller.service.loadBalancerIP=1.1.1.1 --output-dir .</code></p> <p>In order to set <code>extraEnvs</code> values I've tried to use the same logic by doing: </p> <p><code>helm template path/charts/ingress --set nginx-ingress.controller.service.loadBalancerIP=1.1.1.1 --set nginx-ingress.controller.extraEnvs[0].value=valueEnv1 --set nginx.controller.extraEnvs[1].value=valueEnv2 --output-dir .</code></p> <p>But it doesn't work. I looked for the right way to set those variables but couldn't find anything.</p>
<p>Helm <code>--set</code> has some <a href="https://helm.sh/docs/using_helm/#the-format-and-limitations-of-set" rel="nofollow noreferrer">limitations</a>.</p> <p>Your best option is to avoid using the <code>--set</code>, and use the <code>--values</code> flag with your <code>values.yaml</code> file instead:</p> <pre><code>helm template path/charts/ingress \ --values=values.yaml </code></pre> <hr> <p>If you want to use <code>--set</code> anyway, the equivalent command should have this notation:</p> <pre><code>helm template path/charts/ingress \ --set=controller.service.loadBalancerIP=1.1.1.1 \ --set=controller.extraEnvs[0].name=ev1,controller.extraEnvs[0].value=valueEnv1 \ --set=controller.extraEnvs[1].name=ev2,controller.extraEnvs[1].value=valueEnv2 \ --output-dir . </code></pre>
<p>I've recently been making use of the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="noreferrer">GKE Workload Identity</a> feature. I'd be interested to know in more detail how the <code>gke-metadata-server</code> component works.</p> <ol> <li>GCP client code (<code>gcloud</code> or other language SDKs) falls through to the GCE metadata method</li> <li>Request made to <code>http://metadata.google.internal/path</code></li> <li>(guess) Setting <code>GKE_METADATA_SERVER</code> on my node pool configures this to resolve to the <code>gke-metadata-server</code> pod on that node.</li> <li>(guess) the <code>gke-metadata-server</code> pod with --privileged and host networking has a means of determining the source (pod IP?) then looking up the pod and its service account to check for the <code>iam.gke.io/gcp-service-account</code> annotation.</li> <li>(guess) the proxy calls the metadata server with the pods 'pseudo' identity set (e.g. <code>[PROJECT_ID].svc.id.goog[[K8S_NAMESPACE]/[KSA_NAME]]</code>) to get a token for the service account annotated on its Kubernetes service account.</li> <li>If this account has token creator / workload ID user rights to the service account presumably the response from GCP is a success and contains a token, which is then packaged and set back to the calling pod for authenticated calls to other Google APIs.</li> </ol> <p>I guess the main puzzle for me right now is the verification of the calling pods identity. Originally I thought this would use the TokenReview API but now I'm not sure how the Google client tools would know to use the service account token mounted into the pod...</p> <p><em>Edit</em> follow-up questions:</p> <p>Q1: In between step 2 and 3, is the request to <code>metadata.google.internal</code> routed to the GKE metadata proxy by the setting <code>GKE_METADATA_SERVER</code> on the node pool? 
</p> <p>Q2: Why does the metadata server pod need host networking?</p> <p>Q3: In the video here: <a href="https://youtu.be/s4NYEJDFc0M?t=2243" rel="noreferrer">https://youtu.be/s4NYEJDFc0M?t=2243</a> it's taken as a given that the pod makes a GCP call. How does the GKE metadata server identify the pod making the call to start the process?</p>
<p>Before going into details, please familiarize yourself with these components:</p> <p><em>OIDC provider</em>: Runs on Google’s infrastructure, provides cluster specific metadata and signs authorized JWTs.</p> <p><em>GKE metadata server</em>: It runs as a DaemonSet, meaning one instance on every node; it exposes a pod-specific metadata server (providing backwards compatibility with old client libraries) and emulates the existing node metadata server.</p> <p><em>Google IAM</em>: issues access tokens, validates bindings, validates OIDC signatures. </p> <p><em>Google cloud</em>: accepts access tokens, does pretty much anything.</p> <p><em>JWT</em>: JSON Web Token</p> <p><em>mTLS</em>: Mutual Transport Layer Security</p> <p>The steps below explain how the GKE metadata server components work:</p> <p><strong>Step 1</strong>: An authorized user binds the cluster to the namespace.</p> <p><strong>Step 2</strong>: Workload tries to access a Google Cloud service using client libraries.</p> <p><strong>Step 3</strong>: The GKE metadata server is going to request an OIDC signed JWT from the control plane. That connection is authenticated using a mutual TLS (mTLS) connection with the node credential. </p> <p><strong>Step 4</strong>: Then the GKE metadata server is going to use that OIDC signed JWT to request an access token for the <em>[identity namespace]/[Kubernetes service account]</em> from IAM. IAM is going to validate that the appropriate bindings exist on the identity namespace and in the OIDC provider.</p> <p><strong>Step 5</strong>: And then IAM validates that it was signed by the cluster’s correct OIDC provider. It will then return an access token for the <em>[identity namespace]/[kubernetes service account].</em></p> <p><strong>Step 6</strong>: Then the metadata server sends the access token it just got back to IAM. 
IAM will then exchange that for a short lived GCP service account token after validating the appropriate bindings.</p> <p><strong>Step 7</strong>: Then GKE metadata server returns the GCP service account token to the workload.</p> <p><strong>Step 8</strong>: The workload can then use that token to make calls to any Google Cloud Service.</p> <p>I also found a <a href="https://youtu.be/s4NYEJDFc0M?t=1243" rel="noreferrer">video</a> regarding Workload Identity which you will find useful.</p> <p><strong><em>EDIT</em> Follow-up questions' answers:</strong></p> <p>Below are answers to your follow-up questions: </p> <p><strong>Q1</strong>: In between step 2 and 3, is the request to metadata.google.internal routed to the gke metadata proxy by the setting GKE_METADATA_SERVER on the node pool?</p> <p>You are right, GKE_METADATA_SERVER is set on the node pool. This exposes a metadata API to the workloads that is compatible with the V1 Compute Metadata APIs. Once workload tries to access Google Cloud service, the GKE metadata server performs a lookup (the metadata server checks to see if a pod exists in the list whose IP matches the incoming IP of the request) before it goes on to request the OIDC token from the control plane.</p> <p>Keep in mind that <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/NodeConfig#nodemetadata" rel="noreferrer">GKE_METADATA_SERVER</a> enumeration feature can only be enabled if Workload Identity is enabled at the cluster level.</p> <p><strong>Q2</strong>: Why does the metadata server pod need host networking?</p> <p>The gke-metadata-server intercepts all GCE metadata server requests from pods, however pods using the host network are not intercepted.</p> <p><strong>Q3</strong>: How does the GKE metadata server identify the pod making the call to start the process?</p> <p>The pods are identified using iptables rules.</p>
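<p>For reference, the binding from Step 1 is typically created with something like the following (project, namespace and account names are placeholders):</p> <pre><code>gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]" \
  GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

kubectl annotate serviceaccount KSA_NAME --namespace K8S_NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
</code></pre>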
<p>I am trying to set up my app on GKE and use an internal load balancer for public access. I am able to deploy the cluster / load balancer service without any issues, but when I try to access the external ip address of the load balancer, I get Connection Refused and I am not sure what is wrong / how to debug this.</p> <p>These are the steps I did:</p> <p>I applied my deployment <code>yaml</code> file via <code>kubectl apply -f file.yaml</code> then after, I applied my load balancer service <code>yaml</code> file with <code>kubectl apply -f service.yaml</code>. After both were deployed, I did <code>kubectl get service</code> to fetch the External IP Address from the Load Balancer.</p> <p>Here is my <code>deployment.yaml</code> file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-app spec: replicas: 1 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: containers: - name: my-app-api image: gcr.io/... ports: - containerPort: 8000 resources: requests: memory: "250M" cpu: "250m" limits: memory: "1G" cpu: "500m" - name: my-app image: gcr.io/... ports: - containerPort: 3000 resources: requests: memory: "250M" cpu: "250m" limits: memory: "1G" cpu: "500m" </code></pre> <p>and here is my <code>service.yaml</code> file:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-app-ilb annotations: cloud.google.com/load-balancer-type: "Internal" labels: app: my-app-ilb spec: type: LoadBalancer selector: app: my-app ports: - port: 3000 targetPort: 3000 protocol: TCP </code></pre> <p>My deployment file has two containers; a backend api and a frontend. What I want to happen is that I should be able to go on <code>[external ip address]:3000</code> and see my web app.</p> <p>I hope this is enough information; please let me know if there is anything else I may be missing / can add.</p> <p>Thank you all!</p>
<p>You need to allow traffic to flow into your cluster by creating a firewall rule.</p> <pre><code>gcloud compute firewall-rules create my-rule --allow=tcp:3000 </code></pre> <hr> <p>Also remove this annotation:</p> <pre><code> annotations: cloud.google.com/load-balancer-type: "Internal" </code></pre> <p>An internal load balancer is only reachable from within the VPC; for public access you need an external Load Balancer, which is what the Service defaults to without that annotation.</p>
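Once the firewall rule is in place and the Service has been recreated without the annotation, you can verify end to end (the Service name is taken from the question; the jsonpath expression works against any LoadBalancer Service):

```shell
# Wait for the Service to be assigned an external IP, then probe it:
EXTERNAL_IP=$(kubectl get service my-app-ilb \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -v "http://${EXTERNAL_IP}:3000/"
```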
<p>So far I was convinced that one needs a PVC to access a PV, like in this example from the k8s <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes" rel="nofollow noreferrer">doc</a>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: myfrontend image: nginx volumeMounts: - mountPath: "/var/www/html" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: myclaim </code></pre> <p>But then I saw in the <a href="https://docs.docker.com/ee/ucp/kubernetes/storage/use-nfs-volumes/" rel="nofollow noreferrer">Docker doc</a> that one can use the following syntax (example using nfs):</p> <pre><code>kind: Pod apiVersion: v1 metadata: name: nfs-in-a-pod spec: containers: - name: app image: alpine volumeMounts: - name: nfs-volume mountPath: /var/nfs # Please change the destination you like the share to be mounted to command: ["/bin/sh"] args: ["-c", "sleep 500000"] volumes: - name: nfs-volume nfs: server: nfs.example.com # Please change this to your NFS server path: /share1 # Please change this to the relevant share </code></pre> <p>I am confused:</p> <ul> <li>Is this syntax creating a PVC under the hood?</li> <li>Or is any PV matching the spec mounted without a PVC?</li> <li>Or perhaps the spec selects an existing PVC?</li> </ul>
<p>The various kinds of things you can mount are part of the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#volume-v1-core" rel="nofollow noreferrer">Volume</a> object in the Kubernetes API (which is part of a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#podspec-v1-core" rel="nofollow noreferrer">PodSpec</a>, which is part of a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#pod-v1-core" rel="nofollow noreferrer">Pod</a>). None of these are an option to mount a specific PersistentVolume by name.</p> <p>(There are some special cases you can see there for things like NFS and various clustered storage systems. Those mostly predate persistent volumes.)</p> <p>The best you can do here is to create a PVC that's very tightly bound to a single persistent volume, and then reference that in the pod spec.</p>
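A sketch of such a tightly bound claim — the PV name <code>my-existing-pv</code> is hypothetical, and the empty <code>storageClassName</code> keeps dynamic provisioning from satisfying the claim instead:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  storageClassName: ""
  volumeName: my-existing-pv   # pin the claim to exactly this PersistentVolume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The pod then references <code>my-claim</code> via <code>persistentVolumeClaim.claimName</code>, exactly as in the first example above.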
<p>I have been trying to integrate the Spark interpreter on Zeppelin (v0.7.3) on a Kubernetes cluster. However, running k8s version 1.13.10 on the servers hits a known complication: <a href="https://issues.apache.org/jira/browse/SPARK-28921" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/SPARK-28921</a></p> <p>Because of this, I needed to upgrade my Spark k8s-client to v4.6.1, as indicated here: <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/591#issuecomment-526376703" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/591#issuecomment-526376703</a></p> <p>But when I try executing a Spark command <code>sc.version</code> on the zeppelin-ui, I get:</p> <pre><code>ERROR [2019-10-25 03:45:35,430] ({pool-2-thread-4} Job.java[run]:181) - Job failed java.lang.NullPointerException at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38) at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33) at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:398) at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:387) at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146) at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) </code></pre> <p>Here are the spark-submit configurations I have, but I don't think the error comes from these (since I've run them before and they worked fine):</p> <pre><code>spark.kubernetes.driver.docker.image=x spark.kubernetes.executor.docker.image=x spark.local.dir=/tmp/spark-local spark.executor.instances=5 spark.dynamicAllocation.enabled=true spark.shuffle.service.enabled=true spark.kubernetes.shuffle.labels="x" spark.dynamicAllocation.maxExecutors=5 spark.dynamicAllocation.minExecutors=1 spark.kubernetes.docker.image.pullPolicy=IfNotPresent 
spark.kubernetes.resourceStagingServer.uri="http://xxx:xx" </code></pre> <p>I have tried downgrading the spark-k8s client to 3.x.x until 4.0.x but I get the HTTP error. Thus, I've decided to stick to v4.6.1 . Opening the zeppelin-interpreter logs, I find the following stack-trace:</p> <pre><code>ERROR [2019-10-25 03:45:35,428] ({pool-2-thread-4} Utils.java[invokeMethod]:40) - java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38) at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33) at org.apache.zeppelin.spark.SparkInterpreter.createSparkSession(SparkInterpreter.java:378) at org.apache.zeppelin.spark.SparkInterpreter.getSparkSession(SparkInterpreter.java:233) at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:841) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491) at org.apache.zeppelin.scheduler.Job.run(Job.java:175) at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:748) Caused by: java.lang.NoClassDefFoundError: io/fabric8/kubernetes/api/model/apps/Deployment at io.fabric8.kubernetes.client.internal.readiness.Readiness.isReady(Readiness.java:62) at org.apache.spark.scheduler.cluster.k8s.KubernetesExternalShuffleManagerImpl$$anonfun$start$1.apply(KubernetesExternalShuffleManager.scala:82) at org.apache.spark.scheduler.cluster.k8s.KubernetesExternalShuffleManagerImpl$$anonfun$start$1.apply(KubernetesExternalShuffleManager.scala:81) at scala.collection.Iterator$class.foreach(Iterator.scala:893) at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at scala.collection.AbstractIterable.foreach(Iterable.scala:54) at org.apache.spark.scheduler.cluster.k8s.KubernetesExternalShuffleManagerImpl.start(KubernetesExternalShuffleManager.scala:80) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$$anonfun$start$1.apply(KubernetesClusterSchedulerBackend.scala:212) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$$anonfun$start$1.apply(KubernetesClusterSchedulerBackend.scala:212) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.start(KubernetesClusterSchedulerBackend.scala:212) at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173) at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:509) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509) at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909) at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901) ... 
20 more INFO [2019-10-25 03:45:35,430] ({pool-2-thread-4} SparkInterpreter.java[createSparkSession]:379) - Created Spark session ERROR [2019-10-25 03:45:35,430] ({pool-2-thread-4} Job.java[run]:181) - Job failed java.lang.NullPointerException at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38) at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33) at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:398) at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:387) at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146) at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:843) at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70) at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491) at org.apache.zeppelin.scheduler.Job.run(Job.java:175) at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) INFO [2019-10-25 03:45:35,431] ({pool-2-thread-4} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1571975134433 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter819422312 </code></pre> <p>I expect to run this command:</p> <pre><code>%spark %sc.version </code></pre> <p>P.S. 
This is my first post here, so if I did not follow certain rules, kindly correct me. Thanks!</p>
<p>After some rigorous research and help from my colleagues, I was able to verify that <code>io/fabric8/kubernetes/api/model/apps/Deployment</code> didn't exist in kubernetes-model v2.0.0. Upgrading the jar to v3.0.0 fixed the issue.</p>
<p>I am running a Python app on production, but my pod is restarting frequently in the production environment, while in the staging environment it's not happening.</p> <p>So I thought it could be a CPU &amp; memory limit issue and updated those as well.</p> <p>On further debugging I got exit code <code>137</code>.</p> <p>To debug more, I went inside the Kubernetes node and inspected the container.</p> <p>Command used: <code>docker inspect &lt; container id &gt;</code></p> <p>Here is the output:</p> <pre><code> { "Id": "a0f18cd48fb4bba66ef128581992e919c4ddba5e13d8b6a535a9cff6e1494fa6", "Created": "2019-11-04T12:47:14.929891668Z", "Path": "/bin/sh", "Args": [ "-c", "python3 run.py" ], "State": { "Status": "exited", "Running": false, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 0, "ExitCode": 137, "Error": "", "StartedAt": "2019-11-04T12:47:21.108670992Z", "FinishedAt": "2019-11-05T00:01:30.184225387Z" }, </code></pre> <p>OOMKilled is false, so I think that is not the issue.</p> <p>Using GKE master version: <code>1.13.10-gke.0</code></p>
<p>Technically, all 137 means is that your process was terminated as a result of a SIGKILL. Unfortunately that alone doesn't carry enough info to know where the signal came from. Tools like auditd, or Falco on top of it, can help gather that data by recording those kinds of system calls, or at least get you closer.</p>
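The number itself decodes as 128 plus the signal number, and SIGKILL is signal 9. You can reproduce it locally in any POSIX shell:

```shell
# Kill a background process with SIGKILL and read back its exit status.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid"
echo "exit code: $?"   # 137 = 128 + 9 (SIGKILL)
```

The same arithmetic applies to any signal, e.g. a SIGTERM-killed process reports 143 (128 + 15).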
<p>I am trying to redirect my domain <code>www.test.example.com</code> to <code>test.example.com</code>.</p> <p>In the ingress I have added this annotation:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | if ($host = 'www.test.wotnot.io' ) { rewrite ^/(.*)$ https://app.test.wotnot.io/$1 permanent; } </code></pre> <p>It's not working as expected.</p> <p>For testing I have tried this:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: | if ($host = 'test.example.com' ) { rewrite ^/(.*)$ https://google.com/$1 permanent; } </code></pre> <p>which is working fine.</p> <p>My site is working on <code>test.example.com</code> with an SSL certificate.</p> <p>The whole ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: certmanager.k8s.io/cluster-issuer: wordpress-staging kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: "true" #nginx.ingress.kubernetes.io/configuration-snippet: | #if ($host = 'www.test.wotnot.io' ) { # rewrite ^/(.*)$ https://test.example.io/$1 permanent; #} name: wordpress-staging-ingress spec: rules: - host: test.example.io http: paths: - backend: serviceName: wordpress-site servicePort: 80 path: / tls: - hosts: - test.example.io secretName: wordpress-staging </code></pre>
<p>Ingress has an annotation <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#redirect-fromto-www" rel="noreferrer"><code>nginx.ingress.kubernetes.io/from-to-www-redirect: "true"</code></a> which already handle this:</p> <blockquote> <p>In some scenarios is required to redirect from <code>www.domain.com</code> to <code>domain.com</code> or vice versa. To enable this feature use the annotation <code>nginx.ingress.kubernetes.io/from-to-www-redirect: "true"</code></p> <p><strong>Attention</strong>: For HTTPS to HTTPS redirects is mandatory the SSL Certificate defined in the Secret, located in the TLS section of Ingress, contains both FQDN in the common name of the certificate.</p> </blockquote> <p>It's better that you use it instead of fighting/tweaking the <code>configuration-snippet</code> annotation.</p>
<p>With the storage add-on for MicroK8s, Persistent Volume Claims are by default given storage under <code>/var/snap/microk8s/common/default-storage</code> on the host system. How can that be changed?</p> <p>Viewing the declaration for the <code>hostpath-provisioner</code> pod, shows that there is an environment setting called <code>PV_DIR</code> pointing to <code>/var/snap/microk8s/common/default-storage</code> - seems like what I'd like to change, but how can that be done?</p> <p>Not sure if I'm asking a MicroK8s specific question or if this is something that applies to Kubernetes in general?</p> <pre><code>$ microk8s.kubectl describe -n kube-system pod/hostpath-provisioner-7b9cb5cdb4-q5jh9 Name: hostpath-provisioner-7b9cb5cdb4-q5jh9 Namespace: kube-system Priority: 0 Node: ... Start Time: ... Labels: k8s-app=hostpath-provisioner pod-template-hash=7b9cb5cdb4 Annotations: &lt;none&gt; Status: Running IP: ... IPs: IP: ... Controlled By: ReplicaSet/hostpath-provisioner-7b9cb5cdb4 Containers: hostpath-provisioner: Container ID: containerd://0b74a5aa06bfed0a66dbbead6306a0bc0fd7e46ec312befb3d97da32ff50968a Image: cdkbot/hostpath-provisioner-amd64:1.0.0 Image ID: docker.io/cdkbot/hostpath-provisioner-amd64@sha256:339f78eabc68ffb1656d584e41f121cb4d2b667565428c8dde836caf5b8a0228 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: ... Last State: Terminated Reason: Unknown Exit Code: 255 Started: ... Finished: ... 
Ready: True Restart Count: 3 Environment: NODE_NAME: (v1:spec.nodeName) PV_DIR: /var/snap/microk8s/common/default-storage Mounts: /var/run/secrets/kubernetes.io/serviceaccount from microk8s-hostpath-token-nsxbp (ro) /var/snap/microk8s/common/default-storage from pv-volume (rw) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: pv-volume: Type: HostPath (bare host directory volume) Path: /var/snap/microk8s/common/default-storage HostPathType: microk8s-hostpath-token-nsxbp: Type: Secret (a volume populated by a Secret) SecretName: microk8s-hostpath-token-nsxbp Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: &lt;none&gt; </code></pre>
<h2>HostPath</h2> <p>If you want to use your own path for your PersistentVolume, you can use the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="noreferrer">spec.hostPath.path</a> value.</p> <p>Example YAMLs:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: base provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate </code></pre> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume labels: type: local spec: storageClassName: base capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" </code></pre> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pv-claim spec: storageClassName: base accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre> <p><strong>Kind reminder</strong></p> <blockquote> <p>Depending on the installation method, your Kubernetes cluster may be deployed with an existing StorageClass that is marked as default. This default StorageClass is then used to dynamically provision storage for PersistentVolumeClaims that do not require any specific storage class. 
See <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1" rel="noreferrer">PersistentVolumeClaim documentation</a> for details.</p> </blockquote> <p>You can check your StorageClass by using</p> <pre><code>kubectl get storageclass </code></pre> <p>If there is no <code>&lt;your-class-name&gt;(default)</code>, that means you need to make your own default storage class.</p> <p>Mark a StorageClass as default:</p> <pre><code>kubectl patch storageclass &lt;your-class-name&gt; -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' </code></pre> <p>After you make a default <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">StorageClass</a>, you can use these YAMLs to create the PV and PVC:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume3 labels: type: local spec: storageClassName: "" capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data2" </code></pre> <pre><code> apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pv-claim3 spec: storageClassName: "" accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre> <h2>One PV for each PVC</h2> <p>Based on the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding" rel="noreferrer">Kubernetes documentation</a>:</p> <blockquote> <p>Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. <strong>A PVC to PV binding is a one-to-one mapping</strong>.</p> </blockquote>
<p>Is there a way we can have a K8s pod per user/per firm? I realise per-user/per-firm grouping mixes business-level semantics with infrastructure, but say I had this need for regulatory reasons, etc., to keep things separate. Is there then a way to create a pod on the fly when a user logs in for the first time, hold this pod reference, and route any further requests to the relevant pod, which will host a set of containers each running an instance of one of the modules?</p> <ol> <li>Is this even possible?</li> <li>If possible, what are those identifiers that can be injected into the pod on the fly that I could use to identify that this is USER-A-POD vs USER_B_POD or FIRM_A_POD vs FIRM_B_POD? Effectively, I need a pod template that helps me create identical pods of 1 replica, where the only way they differ is that each serves traffic related to one user/one firm only.</li> </ol>
<p>Generally, if you want to send traffic to a specific pod, say from a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Service</a>, you would use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">Labels and Selectors</a>. For example, using the selector <code>app: usera-app</code> in the Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: usera-service spec: selector: app: usera-app ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>Then the matching <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> for your pods would use the label <code>app: usera-app</code>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: usera-deployment spec: selector: matchLabels: app: usera-app replicas: 2 template: metadata: labels: app: usera-app spec: containers: - name: myservice image: nginx ports: - containerPort: 80 </code></pre> <p>More info <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">here</a>.</p> <p>How you assign your pods and deployments is up to you and whatever configuration you may use. If you'd like to force-create some of the labels in deployments/pods, you can take a look at <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook" rel="nofollow noreferrer">MutatingAdmissionWebhooks</a>.</p> <p>If you are looking at projects to facilitate all this, you can take a look at:</p> <ul> <li><a href="https://github.com/open-policy-agent/gatekeeper" rel="nofollow noreferrer">Gatekeeper</a>, which is an implementation of the <a href="https://www.openpolicyagent.org/" rel="nofollow noreferrer">Open Policy Agent</a> for Kubernetes admission. 
(Still in alpha as of this writing)</li> </ul> <p>Other tools that can help you with attestation and admission mechanisms (they would have to be adapted for labels):</p> <ul> <li><a href="https://github.com/grafeas/kritis" rel="nofollow noreferrer">Kritis</a></li> <li><a href="https://github.com/IBM/portieris" rel="nofollow noreferrer">Portieris</a></li> </ul>
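For the on-the-fly creation part, one low-tech approach is to template the manifests per user at login time. A sketch using <code>envsubst</code>, where <code>deployment-template.yaml</code> and <code>service-template.yaml</code> are hypothetical copies of the manifests above with <code>$USER_ID</code> substituted into their names and labels:

```shell
# Render and apply per-user copies of the templated manifests
# (the template files are hypothetical, not part of any standard tooling):
export USER_ID=usera
envsubst < deployment-template.yaml | kubectl apply -f -
envsubst < service-template.yaml | kubectl apply -f -
```

Your login handler would run the equivalent of this against the API server (e.g. via a client library) and then route the user's traffic to <code>usera-service</code>.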
<p>There is a default <code>ClusterRoleBinding</code> named <code>cluster-admin</code>.<br> When I run <code>kubectl get clusterrolebindings cluster-admin -o yaml</code> I get: </p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" creationTimestamp: 2018-06-13T12:19:26Z labels: kubernetes.io/bootstrapping: rbac-defaults name: cluster-admin resourceVersion: "98" selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin uid: 0361e9f2-6f04-11e8-b5dd-000c2904e34b roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:masters </code></pre> <p>In the <code>subjects</code> field I have: </p> <pre><code>- apiGroup: rbac.authorization.k8s.io kind: Group name: system:masters </code></pre> <p>How can I see the members of the group <code>system:masters</code>?<br> I read <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-subjects" rel="noreferrer">here</a> about groups, but I don't understand how I can see who is inside a group, as in the example above with <code>system:masters</code>.</p> <p>I noticed that when I decoded <code>/etc/kubernetes/pki/apiserver-kubelet-client.crt</code> using the command <code> openssl x509 -in apiserver-kubelet-client.crt -text -noout</code>, it contained the subject <code>system:masters</code>, but I still didn't understand who the users in this group are: </p> <pre><code>Issuer: CN=kubernetes Validity Not Before: Jul 31 19:08:36 2018 GMT Not After : Jul 31 19:08:37 2019 GMT Subject: O=system:masters, CN=kube-apiserver-kubelet-client Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: </code></pre>
<p>Admittedly, late to the party here.</p> <p>Have a read through <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="noreferrer">the Kubernetes 'Authenticating' docs</a>. Kubernetes does not have an in-built mechanism for defining and controlling users (as distinct from ServiceAccounts which are used to provide a cluster identity for Pods, and therefore services running on them).</p> <p>This means that Kubernetes does not therefore have any internal DB to reference, to determine and display group membership.</p> <p>In smaller clusters, x509 certificates are typically used to authenticate users. The API server is configured to trust a CA for the purpose, and then users are issued certificates signed by that CA. As you had noticed, if the subject contains an 'Organisation' field, that is mapped to a Kubernetes group. If you want a user to be a member of more than one group, then you specify multiple 'O' fields. (As an aside, to my mind it would have made more sense to use the 'OU' field, but that is not the case)</p> <p>In answer to your question, it appears that in the case of a cluster where users are authenticated by certificates, your only route is to have access to the issued certs, and to check for the presence of the 'O' field in the subject. I guess in more advanced cases, Kubernetes would be integrated with a centralised tool such as AD, which could be queried natively for group membership.</p>
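To make that concrete, this is roughly how such a certificate is minted and inspected. The CA generated here only stands in for the cluster CA (typically <code>/etc/kubernetes/pki/ca.crt</code> and its key), and the user "jane" is hypothetical:

```shell
# A throwaway CA standing in for the cluster CA (illustration only).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=kubernetes" -days 365 -out ca.crt

# User key + CSR; the O= component of the subject becomes the user's group.
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/O=system:masters/CN=jane" -out jane.csr
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out jane.crt

# The cert itself is the only record of the group membership:
openssl x509 -in jane.crt -noout -subject   # look for O=system:masters
```

So "who is in system:masters" is simply "whoever holds a certificate, signed by the cluster CA, with O=system:masters in its subject" — there is no registry to query.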
<p>My Kiali installation returns 520 when logging in.</p> <p>The following is being printed in my console:</p> <pre><code>W1105 08:23:28.238619 1 kiali.go:145] Kiali is missing a secret that contains both 'username' and 'passphrase' E1105 08:23:34.142346 1 authentication.go:108] Credentials are missing. Create a secret. Please refer to the documentation for more details. </code></pre> <p>This is strange, because running</p> <pre><code>kubectl describe secret kiali -n istio-system </code></pre> <p>provides me with the following output:</p> <pre><code>Name: kiali Namespace: istio-system Labels: app=kiali flux.weave.works/sync-gc-mark=sha256.ZNNGIdiNNcRZl-YCuc551EB3Edthk6kuz-PlDVn6U9k Annotations: flux.weave.works/sync-checksum: ae50afa268598e23696d4e980b1686829b3589e4 Type: Opaque Data ==== passphrase: 5 bytes username: 6 bytes </code></pre> <p>Restarting the pod does not solve the issue.</p> <p><strong>Versions used</strong> Kiali: Version: v0.18.1, Commit: ef27faa</p> <p>Istio: 1.1.2</p> <p>Kubernetes flavour and version: Azure AKS</p> <p>To reproduce: Deploy Istio to your AKS cluster using the following resource: <a href="https://github.com/timfpark/fabrikate-cloud-native" rel="nofollow noreferrer">https://github.com/timfpark/fabrikate-cloud-native</a></p> <p><strong>Edit:</strong></p> <p>It turns out updating to 1.1.5 is all that was necessary. Also, the repo I was using isn't the official version. That can be found here: <a href="https://github.com/microsoft/fabrikate-definitions/tree/master/definitions/fabrikate-cloud-native" rel="nofollow noreferrer">https://github.com/microsoft/fabrikate-definitions/tree/master/definitions/fabrikate-cloud-native</a></p>
<p>I had this issue; upgrading to Istio 1.1.5 or higher fixes it. You can also use my <a href="https://dev.azure.com/4c74356b41/_git/fabrikate" rel="nofollow noreferrer">example repo</a> with Istio 1.2.0; that would fix it as well.</p> <p><a href="https://github.com/rootsongjc/cloud-native-sandbox/issues/2" rel="nofollow noreferrer">https://github.com/rootsongjc/cloud-native-sandbox/issues/2</a></p>
<p>I applied my PVC yaml file to my GKE cluster and checked its state. It shows the following for the yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"teamcity","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"3Gi"}}}} volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/gce-pd creationTimestamp: "2019-11-05T09:45:20Z" finalizers: - kubernetes.io/pvc-protection name: teamcity namespace: default resourceVersion: "1358093" selfLink: /api/v1/namespaces/default/persistentvolumeclaims/teamcity uid: fb51d295-ffb0-11e9-af7d-42010a8400aa spec: accessModes: - ReadWriteMany dataSource: null resources: requests: storage: 3Gi storageClassName: standard volumeMode: Filesystem status: phase: Pending </code></pre> <p>I did not create anything like storage or whatever else needs to be done for that, because I read that this is provided automatically by GKE. Any idea what I am missing?</p>
<p>GKE includes default support for GCP disk PV provisioning, however those implement ReadWriteOnce and ReadOnlyMany modes. I do not think GKE includes a provisioner for ReadWriteMany by default.</p> <p>EDIT: While it's not set up by default (because it requires further configuration) <a href="https://stackoverflow.com/questions/54796639/how-do-i-create-a-persistent-volume-claim-with-readwritemany-in-gke">How do I create a persistent volume claim with ReadWriteMany in GKE?</a> shows how to use Cloud Filestore to launch a hosted NFS-compatible server and then aim a provisioner at it.</p>
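For orientation, a statically provisioned NFS-backed PersistentVolume (for example, pointing at a Cloud Filestore instance) is one way to get ReadWriteMany without a provisioner; the server address and export path below are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: teamcity-pv
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2   # placeholder: your NFS/Filestore server IP
    path: /share1      # placeholder: the exported path
```

With such a PV in place, the Pending claim from the question could bind to it (with <code>storageClassName</code> adjusted so the default GCE-PD provisioner does not try to satisfy it).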
<p><strong>What happened?</strong> Kubernetes version: 1.12, prometheus operator: release-0.1. I followed the README:</p> <pre><code>$ kubectl create -f manifests/ # It can take a few seconds for the above 'create manifests' command to fully create the following resources, so verify the resources are ready before proceeding. $ until kubectl get customresourcedefinitions servicemonitors.monitoring.coreos.com ; do date; sleep 1; echo ""; done $ until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done $ kubectl apply -f manifests/ # This command sometimes may need to be done twice (to workaround a race condition). </code></pre> <p>Then I checked the pods; the output looked like this:</p> <pre><code>[root@VM_8_3_centos /data/hansenwu/kube-prometheus/manifests]# kubectl get pod -n monitoring NAME READY STATUS RESTARTS AGE alertmanager-main-0 2/2 Running 0 66s alertmanager-main-1 1/2 Running 0 47s grafana-54f84fdf45-kt2j9 1/1 Running 0 72s kube-state-metrics-65b8dbf498-h7d8g 4/4 Running 0 57s node-exporter-7mpjw 2/2 Running 0 72s node-exporter-crfgv 2/2 Running 0 72s node-exporter-l7s9g 2/2 Running 0 72s node-exporter-lqpns 2/2 Running 0 72s prometheus-adapter-5b6f856dbc-ndfwl 1/1 Running 0 72s prometheus-k8s-0 3/3 Running 1 59s prometheus-k8s-1 3/3 Running 1 59s prometheus-operator-5c64c8969-lqvkb 1/1 Running 0 72s [root@VM_8_3_centos /data/hansenwu/kube-prometheus/manifests]# kubectl get pod -n monitoring NAME READY STATUS RESTARTS AGE alertmanager-main-0 0/2 Pending 0 0s grafana-54f84fdf45-kt2j9 1/1 Running 0 75s kube-state-metrics-65b8dbf498-h7d8g 4/4 Running 0 60s node-exporter-7mpjw 2/2 Running 0 75s node-exporter-crfgv 2/2 Running 0 75s node-exporter-l7s9g 2/2 Running 0 75s node-exporter-lqpns 2/2 Running 0 75s prometheus-adapter-5b6f856dbc-ndfwl 1/1 Running 0 75s prometheus-k8s-0 3/3 Running 1 62s prometheus-k8s-1 3/3 Running 1 62s prometheus-operator-5c64c8969-lqvkb 1/1 Running 0 75s </code></pre> <p>I don't know why the pod 
altertmanager-main-0 pending and disaply then restart. And I see the event, it is showed as:</p> <pre><code>72s Warning FailedCreate StatefulSet create Pod alertmanager-main-0 in StatefulSet alertmanager-main failed error: The POST operation against Pod could not be completed at this time, please try again. 72s Warning FailedCreate StatefulSet create Pod alertmanager-main-0 in StatefulSet alertmanager-main failed error: The POST operation against Pod could not be completed at this time, please try again. 72s Warning^Z FailedCreate StatefulSet [10]+ Stopped kubectl get events -n monitoring </code></pre>
<p>Most likely the alertmanager does not get enough time to start correctly.</p> <p>Have a look at this answer: <a href="https://github.com/coreos/prometheus-operator/issues/965#issuecomment-460223268" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/issues/965#issuecomment-460223268</a></p> <p>You can set the <code>paused</code> field to true so the operator stops reconciling the StatefulSet, and then modify the StatefulSet directly to check whether extending the liveness/readiness probe delays solves your issue.</p>
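<p>The linked workaround can be sketched with <code>kubectl</code> directly. This assumes the default kube-prometheus object names (an Alertmanager resource called <code>main</code> and its StatefulSet <code>alertmanager-main</code> in the <code>monitoring</code> namespace); the probe delay values are illustrative only:</p>

```shell
# 1. Pause the operator's reconciliation so manual StatefulSet edits stick.
kubectl -n monitoring patch alertmanager main --type merge -p '{"spec":{"paused":true}}'

# 2. Give the alertmanager container more time before probes kick in.
kubectl -n monitoring patch statefulset alertmanager-main --type json -p '[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/initialDelaySeconds", "value": 60},
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/initialDelaySeconds", "value": 60}
]'
```

<p>If the pods then start reliably, the probe timing was the problem; remember to set <code>paused</code> back to <code>false</code> afterwards.</p>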
<p>I am looking for these logs:</p> <pre><code>/var/log/kube-apiserver.log
/var/log/kube-scheduler.log
/var/log/kube-controller-manager.log
</code></pre> <p>In EKS the user does not have access to the control plane and cannot see these files directly.</p> <p>I am aware of the <a href="https://docs.aws.amazon.com/eks/latest/userguide/logging-using-cloudtrail.html" rel="noreferrer">CloudTrail</a> integration announced by AWS, but it shows events from the AWS EKS API (such as the <code>CreateCluster</code> event), not from the Kubernetes API. It also remains an open question how to get the scheduler and controller manager logs.</p> <p>There are no pods for the API server or controller manager in the pod list:</p> <pre><code>$ kubectl get po --all-namespaces
NAMESPACE     NAME                        READY   STATUS    RESTARTS   AGE
kube-system   aws-node-9f4lm              1/1     Running   0          2h
kube-system   aws-node-wj2cg              1/1     Running   0          2h
kube-system   kube-dns-64b69465b4-4gw6n   3/3     Running   0          2h
kube-system   kube-proxy-7mt7l            1/1     Running   0          2h
kube-system   kube-proxy-vflzv            1/1     Running   0          2h
</code></pre> <p>There are no master nodes in the node list:</p> <pre><code>$ kubectl get nodes
NAME                        STATUS   ROLES    AGE   VERSION
ip-10-0-0-92.ec2.internal   Ready    &lt;none&gt;   9m    v1.10.3
ip-10-0-1-63.ec2.internal   Ready    &lt;none&gt;   9m    v1.10.3
</code></pre>
<p>Logs can be sent to <a href="https://aws.amazon.com/cloudwatch/pricing/" rel="nofollow noreferrer">CloudWatch</a> (not free of charge). The following log types can be individually selected to be sent to CloudWatch:</p> <ul> <li>API server</li> <li>Audit</li> <li>Authenticator</li> <li>Controller Manager</li> <li>Scheduler</li> </ul> <p>Logging can be enabled via the UI or the AWS CLI. See <a href="https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html" rel="nofollow noreferrer">Amazon EKS Control Plane Logging</a>.</p>
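<p>For example, with the AWS CLI (the cluster name <code>my-cluster</code> and the region are placeholders):</p>

```shell
# Enable all five control plane log types for the cluster.
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'

# The logs then appear in the CloudWatch log group /aws/eks/my-cluster/cluster:
aws logs describe-log-streams --log-group-name /aws/eks/my-cluster/cluster
```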
<p>I want to list all nodes that are in the Ready state except the ones that have any kind of taint on them. How can I achieve this using jsonpath?</p> <p>I tried the statement below, taken from the k8s docs, but it doesn't print what I want. I am looking for output such as <code>node01 node02</code>. There is no master node in the output because it has a taint on it. The kind of taint is not significant here.</p> <pre><code>JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 &amp;&amp; kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
</code></pre>
<p>I have successfully listed my nodes that are <code>ready</code> and <code>not tainted</code> using <code>jq</code>.</p> <p>Here you have all the nodes:</p> <pre><code>$ kubectl get nodes
gke-standard-cluster-1-default-pool-9c101360-9lvw   Ready   &lt;none&gt;   31s   v1.13.11-gke.9
gke-standard-cluster-1-default-pool-9c101360-fdhr   Ready   &lt;none&gt;   30s   v1.13.11-gke.9
gke-standard-cluster-1-default-pool-9c101360-gq9c   Ready   &lt;none&gt;   31s   v1.13.11-gke.
</code></pre> <p>Here I have tainted one node:</p> <pre><code>$ kubectl taint node gke-standard-cluster-1-default-pool-9c101360-9lvw key=value:NoSchedule
node/gke-standard-cluster-1-default-pool-9c101360-9lvw tainted
</code></pre> <p>And finally a command that lists the <code>not tainted</code> and <code>ready</code> nodes (using <code>any</code> so that the reason and status are checked on the same condition entry):</p> <pre><code>$ kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints|not) | select(any(.status.conditions[]; .reason=="KubeletReady" and .status=="True")) | .metadata.name'
gke-standard-cluster-1-default-pool-9c101360-fdhr
gke-standard-cluster-1-default-pool-9c101360-gq9c
</code></pre>
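<p>The filter can be sanity-checked without a cluster by running it against a saved copy of the nodes JSON. A small sketch with invented sample data (two nodes, one of them tainted):</p>

```shell
# Trimmed sample of what `kubectl get nodes -o json` returns, reduced to
# the fields the filter looks at (names and values are invented).
cat > /tmp/nodes.json <<'EOF'
{
  "items": [
    {
      "metadata": {"name": "node01"},
      "spec": {},
      "status": {"conditions": [{"type": "Ready", "reason": "KubeletReady", "status": "True"}]}
    },
    {
      "metadata": {"name": "master"},
      "spec": {"taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]},
      "status": {"conditions": [{"type": "Ready", "reason": "KubeletReady", "status": "True"}]}
    }
  ]
}
EOF

# The filter, reading from the file instead of kubectl; any() pairs the
# reason and status checks on the same condition entry.
READY_UNTAINTED=$(jq -r '.items[]
  | select(.spec.taints | not)
  | select(any(.status.conditions[]; .reason=="KubeletReady" and .status=="True"))
  | .metadata.name' /tmp/nodes.json)
echo "$READY_UNTAINTED"
```

<p>Only the untainted, ready node name is printed; the tainted master is dropped by the first <code>select</code>.</p>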
<p>I am using minikube.</p> <p>My deployment file:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpdeployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: phpapp
  template:
    metadata:
      labels:
        app: phpapp
    spec:
      containers:
      - image: rajendar38/myhtmlapp:latest
        name: php
        ports:
        - containerPort: 80
</code></pre> <p>My ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: php-service
    servicePort: 80
</code></pre> <p>This is my service:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: php-service
spec:
  selector:
    app: phpapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
  type: NodePort
</code></pre> <p>It is a simple PHP application. I built the Docker image and I am able to access it both ways:</p> <ul> <li><a href="http://192.168.99.100/test.html" rel="nofollow noreferrer">http://192.168.99.100/test.html</a></li> <li><a href="http://192.168.99.100:31000/test.html" rel="nofollow noreferrer">http://192.168.99.100:31000/test.html</a></li> </ul> <p>After that I:</p> <ul> <li>updated my PHP application</li> <li>built the image again and pushed it to Docker Hub</li> <li>deleted all resources with <code>kubectl delete all --all</code></li> <li>then re-applied the deployment and service</li> </ul> <p>But via NodePort I can still access the old application, while via Ingress the changes are picked up.</p>
<p>Please take a look at a similar <a href="https://stackoverflow.com/questions/52522570/how-to-expose-a-kubernetes-service-on-a-specific-nodeport">problem</a>.</p> <p>You have to know that the container port is the port the container listens on. The service port is the port where the Kubernetes service is exposed on the cluster-internal IP and mapped to the container port. The nodePort is the port exposed on the host and mapped to the Kubernetes service.</p> <p>NodePort lets you expose a service by specifying that value in the service's type. Ingress, on the other hand, is a completely independent resource to your service; you declare, create and destroy it separately from your services. Thanks to the service type NodePort you are able to reach the application on both ports (31000 and 80).</p> <p>Your configuration files should look similar to this (note that the service selector must match the pod labels, and the Ingress must reference the service by its name and port):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpdeployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: phpapp
  template:
    metadata:
      labels:
        app: phpapp
    spec:
      containers:
      - image: rajendar38/myhtmlapp:latest
        name: php
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /example
        backend:
          serviceName: php-service
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: php-service
spec:
  selector:
    app: phpapp
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000
    protocol: TCP
  type: NodePort
</code></pre> <p>Alternatively, instead of writing the Service manifest yourself, you can expose the deployment directly:</p> <pre><code>$ kubectl expose deployment phpdeployment --type=NodePort
</code></pre> <p>Official documentation: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">kubernetes-service-nodeport</a>, <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes-ingress</a>, <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">kubernetes-deployment-exposing</a>.</p>
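<p>When NodePort and Ingress appear to serve different versions, a useful first step is to confirm which image the running pods actually use; the image ID pins the exact digest, unlike a moving <code>:latest</code> tag. A sketch of that check (label <code>app=phpapp</code> taken from the deployment above):</p>

```shell
# Which image does each running pod report?
kubectl get pods -l app=phpapp \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

# Which image does the deployment template specify?
kubectl get deployment phpdeployment \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# The pulled digest (Image ID) tells you whether the node re-pulled :latest.
kubectl describe pod -l app=phpapp | grep -i 'Image ID'
```

<p>If the digest is stale, deleting the pods (or bumping the image tag instead of reusing <code>:latest</code>) forces a fresh pull.</p>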