<p>I have an application that runs as 3 pods, and each pod needs to hold one of a fixed set of variables. So if everything is running fine, the three pods would have var1, var2, and var3 stored on the corresponding pods.</p> <p>If the first pod, the one which has var1, gets replaced, how can I determine that the other 2 pods have var2 and var3, and thus know that the new pod should be assigned var1?</p> <p>Can this be done with StatefulSets?</p>
<p>I see two ways of doing that:</p> <ol> <li>Using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a>:</li> </ol> <blockquote> <p>For a StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, from 0 up through N-1, that is unique over the Set.</p> </blockquote> <ol start="2"> <li>Creating the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/" rel="nofollow noreferrer">Pods</a> manually. Example:</li> </ol> <blockquote> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-3
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-3-ctr
    image: polinux/stress
</code></pre> </blockquote> <p>If you need your application to be aware of the Pod it is running on, there is an interesting page in the Kubernetes documentation: <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">"Expose Pod Information to Containers Through Environment Variables"</a>.</p> <p>Example:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod-var1
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
</code></pre>
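If you go the StatefulSet route, the stable pod name already encodes the ordinal, so each replica can derive its variable at startup rather than you having to track which pod lost which value. A minimal sketch, assuming the StatefulSet is named `myapp` (so hostnames are `myapp-0`, `myapp-1`, `myapp-2`):

```shell
# Derive a per-pod variable from the StatefulSet ordinal.
# Inside the pod, HOSTNAME is set automatically; hardcoded here for illustration.
HOSTNAME="myapp-0"
ordinal="${HOSTNAME##*-}"     # strip everything up to the last "-" -> "0"
MY_VAR="var$((ordinal + 1))"  # myapp-0 -> var1, myapp-1 -> var2, myapp-2 -> var3
echo "$MY_VAR"                # -> var1
```

Because a replaced pod comes back with the same ordinal, the new pod recomputes the same variable with no coordination with the other two replicas.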
<p>This is a Kubespray deployment using Calico. All the defaults were left as-is, except for the fact that there is a proxy. Kubespray ran to the end without issues.</p> <p>Access to Kubernetes services started failing, and after investigation there was <strong>no route to host</strong> to the <em>coredns</em> service. Accessing a K8S service by IP worked. Everything else seems to be correct, so I am left with a cluster that works, but without DNS.</p> <p>Here is some background information. Starting up a busybox container:</p> <pre><code># nslookup kubernetes.default
Server:         169.254.25.10
Address:        169.254.25.10:53

** server can't find kubernetes.default: NXDOMAIN
*** Can't find kubernetes.default: No answer
</code></pre> <p>Now the output while explicitly defining the IP of one of the CoreDNS pods:</p> <pre><code># nslookup kubernetes.default 10.233.0.3
;; connection timed out; no servers could be reached
</code></pre> <p>Notice that telnet to the Kubernetes API works:</p> <pre><code># telnet 10.233.0.1 443
Connected to 10.233.0.1
</code></pre> <p><strong>kube-proxy logs:</strong> 10.233.0.3 is the service IP for coredns. The last line looks concerning, even though it is INFO.</p> <pre><code>$ kubectl logs kube-proxy-45v8n -nkube-system
I1114 14:19:29.657685       1 node.go:135] Successfully retrieved node IP: X.59.172.20
I1114 14:19:29.657769       1 server_others.go:176] Using ipvs Proxier.
I1114 14:19:29.664959       1 server.go:529] Version: v1.16.0
I1114 14:19:29.665427       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I1114 14:19:29.669508       1 config.go:313] Starting service config controller
I1114 14:19:29.669566       1 shared_informer.go:197] Waiting for caches to sync for service config
I1114 14:19:29.669602       1 config.go:131] Starting endpoints config controller
I1114 14:19:29.669612       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I1114 14:19:29.769705       1 shared_informer.go:204] Caches are synced for service config
I1114 14:19:29.769756       1 shared_informer.go:204] Caches are synced for endpoints config
I1114 14:21:29.666256       1 graceful_termination.go:93] lw: remote out of the list: 10.233.0.3:53/TCP/10.233.124.23:53
I1114 14:21:29.666380       1 graceful_termination.go:93] lw: remote out of the list: 10.233.0.3:53/TCP/10.233.122.11:53
</code></pre> <p>All pods are running without crashing/restarts etc., and otherwise services behave correctly.</p> <p>IPVS looks correct. The CoreDNS service is defined there:</p> <pre><code># ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -&gt; RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.233.0.1:443 rr
  -&gt; x.59.172.19:6443             Masq    1      0          0
  -&gt; x.59.172.20:6443             Masq    1      1          0
TCP  10.233.0.3:53 rr
  -&gt; 10.233.122.12:53             Masq    1      0          0
  -&gt; 10.233.124.24:53             Masq    1      0          0
TCP  10.233.0.3:9153 rr
  -&gt; 10.233.122.12:9153           Masq    1      0          0
  -&gt; 10.233.124.24:9153           Masq    1      0          0
TCP  10.233.51.168:3306 rr
  -&gt; x.59.172.23:6446             Masq    1      0          0
TCP  10.233.53.155:44134 rr
  -&gt; 10.233.89.20:44134           Masq    1      0          0
UDP  10.233.0.3:53 rr
  -&gt; 10.233.122.12:53             Masq    1      0          314
  -&gt; 10.233.124.24:53             Masq    1      0          312
</code></pre> <p>Host routing also looks correct:</p> <pre><code># ip r
default via x.59.172.17 dev ens3 proto dhcp src x.59.172.22 metric 100
10.233.87.0/24 via x.59.172.21 dev tunl0 proto bird onlink
blackhole 10.233.89.0/24 proto bird
10.233.89.20 dev calib88cf6925c2 scope link
10.233.89.21 dev califdffa38ed52 scope link
10.233.122.0/24 via x.59.172.19 dev tunl0 proto bird onlink
10.233.124.0/24 via x.59.172.20 dev tunl0 proto bird onlink
x.59.172.16/28 dev ens3 proto kernel scope link src x.59.172.22
x.59.172.17 dev ens3 proto dhcp scope link src x.59.172.22 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
</code></pre> <p>I have redeployed this same cluster in separate environments with flannel, and with calico using iptables instead of ipvs. I have also temporarily disabled the docker http proxy after deploy. None of this makes any difference.</p> <p>Also: <code>kube_service_addresses: 10.233.0.0/18</code> and <code>kube_pods_subnet: 10.233.64.0/18</code> (they do not overlap).</p> <p>What is the next step in debugging this issue?</p>
<p>I highly recommend you avoid using the latest busybox image to troubleshoot DNS. There are a few <a href="https://github.com/docker-library/busybox/issues/48" rel="nofollow noreferrer">issues</a> reported regarding nslookup on versions newer than 1.28.</p> <p>v1.28.4:</p> <pre class="lang-sh prettyprint-override"><code>user@node1:~$ kubectl exec -ti busybox busybox | head -1
BusyBox v1.28.4 (2018-05-22 17:00:17 UTC) multi-call binary.

user@node1:~$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    169.254.25.10
Address 1: 169.254.25.10

Name:      kubernetes.default
Address 1: 10.233.0.1 kubernetes.default.svc.cluster.local
</code></pre> <p>v1.31.1:</p> <pre class="lang-sh prettyprint-override"><code>user@node1:~$ kubectl exec -ti busyboxlatest busybox | head -1
BusyBox v1.31.1 (2019-10-28 18:40:01 UTC) multi-call binary.

user@node1:~$ kubectl exec -ti busyboxlatest -- nslookup kubernetes.default
Server:         169.254.25.10
Address:        169.254.25.10:53

** server can't find kubernetes.default: NXDOMAIN
*** Can't find kubernetes.default: No answer
command terminated with exit code 1
</code></pre> <p>Going deeper and exploring more possibilities, I reproduced your problem on GCP, and after some digging I was able to figure out what is causing this communication problem.</p> <p>GCE (Google Compute Engine) blocks traffic between hosts by default; we have to allow Calico traffic to flow between containers on different hosts.</p> <p>According to the Calico <a href="https://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/gce" rel="nofollow noreferrer">documentation</a>, you can do that by creating a firewall rule allowing this communication:</p> <pre class="lang-sh prettyprint-override"><code>gcloud compute firewall-rules create calico-ipip --allow 4 --network "default" --source-ranges "10.128.0.0/9"
</code></pre> <p>You can verify the rule with this command:</p> <pre class="lang-sh prettyprint-override"><code>gcloud compute firewall-rules list
</code></pre> <p>This is not present in the most recent Calico documentation, but it's still true and necessary.</p> <p>Before creating the firewall rule:</p> <pre class="lang-sh prettyprint-override"><code>user@node1:~$ kubectl exec -ti busybox2 -- nslookup kubernetes.default
Server:    10.233.0.3
Address 1: 10.233.0.3 coredns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre> <p>After creating the firewall rule:</p> <pre class="lang-sh prettyprint-override"><code>user@node1:~$ kubectl exec -ti busybox2 -- nslookup kubernetes.default
Server:    10.233.0.3
Address 1: 10.233.0.3 coredns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.233.0.1 kubernetes.default.svc.cluster.local
</code></pre> <p>It doesn't matter whether you bootstrap your cluster using kubespray or kubeadm; this problem will happen because Calico needs to communicate between nodes, and GCE blocks that by default.</p>
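If you want to keep a pinned DNS-debugging pod around for tests like the ones above, a minimal manifest might look like this (the pod name `busybox128` is just an illustrative choice; the important part is pinning the image tag to 1.28 rather than `latest`):

```yaml
# Debugging pod pinned to busybox 1.28, whose nslookup behaves
# reliably against cluster DNS (newer tags have known issues).
apiVersion: v1
kind: Pod
metadata:
  name: busybox128
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sleep", "3600"]
  restartPolicy: Never
```

Then exercise DNS with `kubectl exec -ti busybox128 -- nslookup kubernetes.default`.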
<p>Hi, I have followed some k8s tutorials on how to get going with a local DB + WordPress installation, but the user can't connect to MySQL within my cluster (everything else seems OK in the Kubernetes Dashboard web UI).</p> <blockquote> <p>Error: [15:40:55][~]#kubectl logs -f website-56677747c7-c7lb6 [21-Nov-2019 11:07:17 UTC] PHP Warning: mysqli::__construct(): php_network_getaddresses: getaddrinfo failed: Name or service not known in Standard input code on line 22 [21-Nov-2019 11:07:17 UTC] PHP Warning: mysqli::__construct(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Name or service not known in Standard input code on line 22</p> <p>MySQL Connection Error: (2002) php_network_getaddresses: getaddrinfo failed: Name or service not known [21-Nov-2019 11:07:20 UTC] PHP Warning: mysqli::__construct(): (HY000/1045): Access denied for user 'websiteu5er'@'10.1.0.35' (using password: YES) in Standard input code on line 22</p> <p>MySQL Connection Error: (1045) Access denied for user 'websiteu5er'@'10.1.0.35' (using password: YES)</p> </blockquote> <p>My Dockerfile (which I used to create the image pushed to Docker Hub, then pulled into the k8s service + deployment):</p> <pre><code>FROM mysql:5.7

# This should create the following default root + user?
ENV MYSQL_ROOT_PASSWORD=hello123
ENV MYSQL_DATABASE=website
ENV MYSQL_USER=websiteu5er
ENV MYSQL_PASSWORD=hello123

RUN /etc/init.d/mysql start \
    &amp;&amp; mysql -u root --password='hello123' -e "GRANT ALL PRIVILEGES ON *.* TO 'websiteu5er'@'%' IDENTIFIED BY 'hello123';"

FROM wordpress:5.2.4-php7.3-apache

# Copy wp-config file over
COPY configs/wp-config.php .

RUN chown -R www-data:www-data *

COPY ./src/wp-content/themes/bam /var/www/html/wp-content/themes/bam
</code></pre>
<p>The standard Docker Hub <a href="https://hub.docker.com/_/mysql" rel="nofollow noreferrer">mysql</a> image has the ability to run arbitrary SQL scripts <em>on the very first startup of the database only</em>. It can also set up an initial database user with a known password, again on the first startup only. Details are in the linked Docker Hub page.</p> <p>In a Kubernetes context I’d use just the environment variables, and specify them in my pod spec:</p> <pre><code>containers:
- name: mysql
  image: mysql:5.7 # not a custom image
  env:
  - name: MYSQL_USER
    value: websiteu5er
  - name: MYSQL_PASSWORD
    value: hello123
</code></pre> <p>If you did need more involved setup, I’d create a ConfigMap that contained SQL scripts, and then mount that into the container at <code>/docker-entrypoint-initdb.d</code>.</p> <hr> <p>There are two things going on in your Dockerfile. One is that, when you have multiple <code>FROM</code> lines, you’re actually executing a <em>multi-stage build</em>; the image you get out at the end is only the Wordpress image, and the MySQL parts before it get skipped. The second is that you can’t actually create an image <code>FROM mysql</code> that contains any database-level configuration or content, so the image that comes out of the first stage has the environment variables set but won’t actually have executed your <code>GRANT PRIVILEGES</code> statement.</p> <p>I’d just delete everything before the last <code>FROM</code> line and not try to build a derived MySQL image; use the <code>/docker-entrypoint-initdb.d</code> mechanism at startup time instead.</p>
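To make the ConfigMap approach concrete, here is a sketch under the asker's names (`websiteu5er`, `hello123`); the ConfigMap and pod names (`mysql-initdb`, `mysql`) are illustrative choices, not anything from the original setup:

```yaml
# Ship first-boot SQL via a ConfigMap and mount it where the official
# mysql image looks for init scripts (/docker-entrypoint-initdb.d).
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-initdb
data:
  grant.sql: |
    GRANT ALL PRIVILEGES ON *.* TO 'websiteu5er'@'%';
    FLUSH PRIVILEGES;
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: hello123
    - name: MYSQL_USER       # creates the user on first boot,
      value: websiteu5er     # so the GRANT above has a user to target
    - name: MYSQL_PASSWORD
      value: hello123
    volumeMounts:
    - name: initdb
      mountPath: /docker-entrypoint-initdb.d
  volumes:
  - name: initdb
    configMap:
      name: mysql-initdb
```

Note the scripts only run when the data directory is empty, i.e. on the very first start; on an existing volume they are skipped.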
<p>I have just created a GKE cluster on Google Cloud Platform, and I have installed <code>helm</code> in the cloud console:</p> <pre><code>$ helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
</code></pre> <p>I have also created the necessary <code>serviceaccount</code> and <code>clusterrolebinding</code> objects:</p> <pre><code>$ cat helm-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system

$ kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
</code></pre> <p>However, trying to initialise <code>tiller</code> gives me the following error:</p> <pre><code>$ helm init --service-account tiller --history-max 300
Error: unknown flag: --service-account
</code></pre> <p>Why is that?</p>
<blockquote> <p>However trying to initialise tiller gives me the following error:</p> <p>Error: unknown flag: --service-account</p> <p>Why is that?</p> </blockquote> <p><a href="https://helm.sh/blog/helm-3-released/" rel="noreferrer">Helm <strong>3</strong> is a major upgrade</a>. The <strong>Tiller</strong> component is now obsolete.</p> <p>There is no <code>helm init</code> command any more, and therefore the <code>--service-account</code> flag is gone as well.</p> <blockquote> <p>The internal implementation of Helm 3 has changed considerably from Helm 2. The most apparent change is the <strong>removal of Tiller</strong>.</p> </blockquote>
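In practice that means there is nothing to initialise: with a working kubeconfig you can go straight to installing charts. A sketch of the Helm 3 flow (the `stable` repository URL below is the one in use around the Helm 3.0 release and may have changed since):

```sh
# No tiller, no `helm init` -- Helm 3 talks to the API server
# directly using your kubeconfig credentials and RBAC.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
helm install my-release stable/mysql   # Helm 3 needs an explicit release name (or --generate-name)
```

The ServiceAccount and ClusterRoleBinding you created for tiller are no longer needed and can be deleted.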
<p>I am following link: <a href="https://kubernetes.dask.org/en/latest/" rel="nofollow noreferrer">https://kubernetes.dask.org/en/latest/</a>, to run dask array on Kubernetes cluster. While running the example code, the worker pod is showing error status as below:</p> <p>Steps:</p> <ol> <li><p>Installed Kubernetes on 3 nodes(1 Master and 2 workers).</p></li> <li><p>pip install dask-kubernetes</p></li> <li><p>dask_example.py with code to run dask array (same as example given on link)</p></li> <li><p>Worker-spec.yml file with pod configuration (same as example given on link)</p></li> </ol> <pre><code>(base) [root@k8s-master example]# ls dask_example.py worker-spec.yml (base) [root@k8s-master example]# nohup python dask_example.py &amp; [1] 3660 (base) [root@k8s-master example]# cat nohup.out distributed.scheduler - INFO - Clear task state distributed.scheduler - INFO - Scheduler at: tcp://172.16.0.76:40119 distributed.scheduler - INFO - Receive client connection: Client-df4caa18-0bc8-11ea-8e4c-12bd5ffa93ff distributed.core - INFO - Starting established connection (base) [root@k8s-master example]# kubectl get pods -o wide --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default workerpod 1/1 Running 0 70s 10.32.0.2 worker-node1 &lt;none&gt; &lt;none&gt; kube-system coredns-5644d7b6d9-l4jsd 1/1 Running 0 8m19s 10.32.0.4 k8s-master &lt;none&gt; &lt;none&gt; kube-system coredns-5644d7b6d9-q679h 1/1 Running 0 8m19s 10.32.0.3 k8s-master &lt;none&gt; &lt;none&gt; kube-system etcd-k8s-master 1/1 Running 0 7m16s 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-k8s-master 1/1 Running 0 7m1s 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-k8s-master 1/1 Running 0 7m27s 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-proxy-ctgj8 1/1 Running 0 5m7s 172.16.0.114 worker-node2 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-f78bm 1/1 Running 0 8m18s 
172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-proxy-ksk59 1/1 Running 0 5m15s 172.16.0.31 worker-node1 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-k8s-master 1/1 Running 0 7m2s 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system weave-net-q2zwn 2/2 Running 0 6m22s 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system weave-net-r9tzs 2/2 Running 0 5m15s 172.16.0.31 worker-node1 &lt;none&gt; &lt;none&gt; kube-system weave-net-tm8xx 2/2 Running 0 5m7s 172.16.0.114 worker-node2 &lt;none&gt; &lt;none&gt; (base) [root@k8s-master example]# kubectl get pods -o wide --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default workerpod 0/1 Error 0 4m23s 10.32.0.2 worker-node1 &lt;none&gt; &lt;none&gt; kube-system coredns-5644d7b6d9-l4jsd 1/1 Running 0 11m 10.32.0.4 k8s-master &lt;none&gt; &lt;none&gt; kube-system coredns-5644d7b6d9-q679h 1/1 Running 0 11m 10.32.0.3 k8s-master &lt;none&gt; &lt;none&gt; kube-system etcd-k8s-master 1/1 Running 0 10m 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-k8s-master 1/1 Running 0 10m 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-k8s-master 1/1 Running 0 10m 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-proxy-ctgj8 1/1 Running 0 8m20s 172.16.0.114 worker-node2 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-f78bm 1/1 Running 0 11m 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-proxy-ksk59 1/1 Running 0 8m28s 172.16.0.31 worker-node1 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-k8s-master 1/1 Running 0 10m 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system weave-net-q2zwn 2/2 Running 0 9m35s 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system weave-net-r9tzs 2/2 Running 0 8m28s 172.16.0.31 worker-node1 &lt;none&gt; &lt;none&gt; kube-system weave-net-tm8xx 2/2 Running 0 8m20s 172.16.0.114 worker-node2 &lt;none&gt; 
&lt;none&gt; (base) [root@k8s-master example]# cat nohup.out distributed.scheduler - INFO - Clear task state distributed.scheduler - INFO - Scheduler at: tcp://172.16.0.76:40119 distributed.scheduler - INFO - Receive client connection: Client-df4caa18-0bc8-11ea-8e4c-12bd5ffa93ff distributed.core - INFO - Starting established connection (base) [root@k8s-master example]# kubectl describe pod workerpod Name: workerpod Namespace: default Priority: 0 Node: worker-node1/172.16.0.31 Start Time: Wed, 20 Nov 2019 19:06:36 +0000 Labels: app=dask dask.org/cluster-name=dask-root-99dcf768-4 dask.org/component=worker foo=bar user=root Annotations: &lt;none&gt; Status: Failed IP: 10.32.0.2 IPs: IP: 10.32.0.2 Containers: dask: Container ID: docker://578dc575fc263c4a3889a4f2cb5e06cd82a00e03cfc6acfd7a98fef703421390 Image: daskdev/dask:latest Image ID: docker-pullable://daskdev/dask@sha256:0a936daa94c82cea371c19a2c90c695688ab4e1e7acc905f8b30dfd419adfb6f Port: &lt;none&gt; Host Port: &lt;none&gt; Args: dask-worker --nthreads 2 --no-bokeh --memory-limit 6GB --death-timeout 60 State: Terminated Reason: Error Exit Code: 1 Started: Wed, 20 Nov 2019 19:06:38 +0000 Finished: Wed, 20 Nov 2019 19:08:20 +0000 Ready: False Restart Count: 0 Limits: cpu: 2 memory: 6G Requests: cpu: 2 memory: 6G Environment: EXTRA_PIP_PACKAGES: fastparquet git+https://github.com/dask/distributed DASK_SCHEDULER_ADDRESS: tcp://172.16.0.76:40119 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-p9f9v (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-p9f9v: Type: Secret (a volume populated by a Secret) SecretName: default-token-p9f9v Optional: false QoS Class: Guaranteed Node-Selectors: &lt;none&gt; Tolerations: k8s.dask.org/dedicated=worker:NoSchedule k8s.dask.org_dedicated=worker:NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message 
---- ------ ---- ---- ------- Normal Scheduled 5m47s default-scheduler Successfully assigned default/workerpod to worker-node1 Normal Pulled 5m45s kubelet, worker-node1 Container image "daskdev/dask:latest" already present on machine Normal Created 5m45s kubelet, worker-node1 Created container dask Normal Started 5m45s kubelet, worker-node1 Started container dask (base) [root@k8s-master example]# (base) [root@k8s-master example]# kubectl get events LAST SEEN TYPE REASON OBJECT MESSAGE 21m Normal Starting node/k8s-master Starting kubelet. 21m Normal NodeHasSufficientMemory node/k8s-master Node k8s-master status is now: NodeHasSufficientMemory 21m Normal NodeHasNoDiskPressure node/k8s-master Node k8s-master status is now: NodeHasNoDiskPressure 21m Normal NodeHasSufficientPID node/k8s-master Node k8s-master status is now: NodeHasSufficientPID 21m Normal NodeAllocatableEnforced node/k8s-master Updated Node Allocatable limit across pods 21m Normal RegisteredNode node/k8s-master Node k8s-master event: Registered Node k8s-master in Controller 21m Normal Starting node/k8s-master Starting kube-proxy. 18m Normal Starting node/worker-node1 Starting kubelet. 18m Normal NodeHasSufficientMemory node/worker-node1 Node worker-node1 status is now: NodeHasSufficientMemory 18m Normal NodeHasNoDiskPressure node/worker-node1 Node worker-node1 status is now: NodeHasNoDiskPressure 18m Normal NodeHasSufficientPID node/worker-node1 Node worker-node1 status is now: NodeHasSufficientPID 18m Normal NodeAllocatableEnforced node/worker-node1 Updated Node Allocatable limit across pods 18m Normal Starting node/worker-node1 Starting kube-proxy. 18m Normal RegisteredNode node/worker-node1 Node worker-node1 event: Registered Node worker-node1 in Controller 17m Normal NodeReady node/worker-node1 Node worker-node1 status is now: NodeReady 18m Normal Starting node/worker-node2 Starting kubelet. 
18m Normal NodeHasSufficientMemory node/worker-node2 Node worker-node2 status is now: NodeHasSufficientMemory 18m Normal NodeHasNoDiskPressure node/worker-node2 Node worker-node2 status is now: NodeHasNoDiskPressure 18m Normal NodeHasSufficientPID node/worker-node2 Node worker-node2 status is now: NodeHasSufficientPID 18m Normal NodeAllocatableEnforced node/worker-node2 Updated Node Allocatable limit across pods 18m Normal Starting node/worker-node2 Starting kube-proxy. 17m Normal RegisteredNode node/worker-node2 Node worker-node2 event: Registered Node worker-node2 in Controller 17m Normal NodeReady node/worker-node2 Node worker-node2 status is now: NodeReady 14m Normal Scheduled pod/workerpod Successfully assigned default/workerpod to worker-node1 14m Normal Pulled pod/workerpod Container image "daskdev/dask:latest" already present on machine 14m Normal Created pod/workerpod Created container dask 14m Normal Started pod/workerpod Started container dask (base) [root@k8s-master example]# </code></pre> <p>Update- adding pod logs (as suggested by Dawid Kruk):</p> <pre><code>(base) [root@k8s-master example]# kubectl logs workerpod + '[' '' ']' + '[' -e /opt/app/environment.yml ']' + echo 'no environment.yml' + '[' '' ']' + '[' 'fastparquet git+https://github.com/dask/distributed' ']' + echo 'EXTRA_PIP_PACKAGES environment variable found. Installing.' + /opt/conda/bin/pip install fastparquet git+https://github.com/dask/distributed no environment.yml EXTRA_PIP_PACKAGES environment variable found. Installing. 
Collecting git+https://github.com/dask/distributed Cloning https://github.com/dask/distributed to /tmp/pip-req-build-i3_1vo06 Running command git clone -q https://github.com/dask/distributed /tmp/pip-req-build-i3_1vo06 fatal: unable to access 'https://github.com/dask/distributed/': Could not resolve host: github.com ERROR: Command errored out with exit status 128: git clone -q https://github.com/dask/distributed /tmp/pip-req-build-i3_1vo06 Check the logs for full command output. + exec dask-worker --nthreads 2 --no-bokeh --memory-limit 6GB --death-timeout 60 /opt/conda/lib/python3.7/site-packages/distributed/cli/dask_worker.py:252: UserWarning: The --bokeh/--no-bokeh flag has been renamed to --dashboard/--no-dashboard. "The --bokeh/--no-bokeh flag has been renamed to --dashboard/--no-dashboard. " distributed.nanny - INFO - Start Nanny at: 'tcp://10.32.0.2:45097' distributed.worker - INFO - Start worker at: tcp://10.32.0.2:36389 distributed.worker - INFO - Listening to: tcp://10.32.0.2:36389 distributed.worker - INFO - Waiting to connect to: tcp://172.16.0.76:43389 distributed.worker - INFO - ------------------------------------------------- distributed.worker - INFO - Threads: 2 distributed.worker - INFO - Memory: 6.00 GB distributed.worker - INFO - Local Directory: /worker-55rpow8j distributed.worker - INFO - ------------------------------------------------- distributed.worker - INFO - Waiting to connect to: tcp://172.16.0.76:43389 distributed.worker - INFO - Waiting to connect to: tcp://172.16.0.76:43389 distributed.worker - INFO - Waiting to connect to: tcp://172.16.0.76:43389 distributed.worker - INFO - Waiting to connect to: tcp://172.16.0.76:43389 distributed.worker - INFO - Waiting to connect to: tcp://172.16.0.76:43389 distributed.nanny - INFO - Closing Nanny at 'tcp://10.32.0.2:45097' distributed.worker - INFO - Stopping worker at tcp://10.32.0.2:36389 distributed.worker - INFO - Closed worker has not yet started: None distributed.dask_worker - INFO - 
Timed out starting worker distributed.dask_worker - INFO - End worker (base) [root@k8s-master example]# git usage: git [--version] [--help] [-C &lt;path&gt;] [-c &lt;name&gt;=&lt;value&gt;] [--exec-path[=&lt;path&gt;]] [--html-path] [--man-path] [--info-path] </code></pre> <p>Seems to me like dask pod connection to worker node is issue but I see worker nodes in ready state and other pods (nginx) are running on worker nodes:</p> <pre><code>(base) [root@k8s-master example]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master Ready master 20h v1.16.3 worker-node1 Ready worker 20h v1.16.2 worker-node2 Ready worker 20h v1.16.2 NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default nginx-deployment-54f57cf6bf-b9nfd 1/1 Running 0 34s 10.32.0.2 worker-node1 &lt;none&gt; &lt;none&gt; default nginx-deployment-54f57cf6bf-pnp59 1/1 Running 0 34s 10.40.0.0 worker-node2 &lt;none&gt; &lt;none&gt; default workerpod 0/1 Error 0 56m 10.32.0.2 worker-node1 &lt;none&gt; &lt;none&gt; kube-system coredns-5644d7b6d9-l4jsd 1/1 Running 0 21h 10.32.0.4 k8s-master &lt;none&gt; &lt;none&gt; kube-system coredns-5644d7b6d9-q679h 1/1 Running 0 21h 10.32.0.3 k8s-master &lt;none&gt; &lt;none&gt; kube-system etcd-k8s-master 1/1 Running 0 21h 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-k8s-master 1/1 Running 0 21h 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-k8s-master 1/1 Running 0 21h 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-proxy-ctgj8 1/1 Running 0 21h 172.16.0.114 worker-node2 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-f78bm 1/1 Running 0 21h 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system kube-proxy-ksk59 1/1 Running 0 21h 172.16.0.31 worker-node1 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-k8s-master 1/1 Running 0 21h 172.16.0.76 k8s-master &lt;none&gt; &lt;none&gt; kube-system weave-net-q2zwn 2/2 Running 0 21h 172.16.0.76 
k8s-master &lt;none&gt; &lt;none&gt; kube-system weave-net-r9tzs 2/2 Running 0 21h 172.16.0.31 worker-node1 &lt;none&gt; &lt;none&gt; kube-system weave-net-tm8xx 2/2 Running 0 21h 172.16.0.114 worker-node2 &lt;none&gt; &lt;none&gt; </code></pre> <p>Update2 - Added nslookup output (Suggested by VAS) Ref: <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#create-a-simple-pod-to-use-as-a-test-environment" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#create-a-simple-pod-to-use-as-a-test-environment</a></p> <pre><code>(base) [root@k8s-master example]# kubectl exec -ti workerpod -- nslookup kubernetes.default OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"nslookup\": executable file not found in $PATH": unknown command terminated with exit code 126 </code></pre> <p>How to add executable file to pod? It is set on my host.</p> <pre><code>(base) [root@k8s-master example]# nslookup github.com Server: 172.31.0.2 Address: 172.31.0.2#53 Non-authoritative answer: Name: github.com Address: 140.82.114.3 </code></pre> <p>Update 3: nslookup for dnsutils (Suggested by VAS)</p> <pre><code>(base) [root@k8s-master example]# kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils cat /etc/resolv.conf nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal options ndots:5 pod "dnsutils" deleted (base) [root@k8s-master example]# kubectl run dnsutils -it --restart=Never --image=tutum/dnsutils nslookup github.com If you don't see a command prompt, try pressing enter. 
;; connection timed out; no servers could be reached pod default/dnsutils terminated (Error) (base) [root@k8s-master example]# kubectl logs dnsutils ;; connection timed out; no servers could be reached (base) [root@k8s-master example]# </code></pre> <p>Update 4: </p> <pre><code>(base) [root@k8s-master example]# kubectl exec -ti busybox -- nslookup kubernetes.default Server: 10.96.0.10 Address 1: 10.96.0.10 nslookup: can't resolve 'kubernetes.default' command terminated with exit code 1 (base) [root@k8s-master example]# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns NAME READY STATUS RESTARTS AGE coredns-5644d7b6d9-l4jsd 1/1 Running 0 25h coredns-5644d7b6d9-q679h 1/1 Running 0 25h (base) [root@k8s-master example]# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns NAME READY STATUS RESTARTS AGE coredns-5644d7b6d9-l4jsd 1/1 Running 0 25h coredns-5644d7b6d9-q679h 1/1 Running 0 25h (base) [root@k8s-master example]# for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done .:53 2019-11-20T19:01:42.161Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 2019-11-20T19:01:42.161Z [INFO] CoreDNS-1.6.2 2019-11-20T19:01:42.161Z [INFO] linux/amd64, go1.12.8, 795a3eb CoreDNS-1.6.2 linux/amd64, go1.12.8, 795a3eb .:53 2019-11-20T19:01:41.862Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76 2019-11-20T19:01:41.862Z [INFO] CoreDNS-1.6.2 2019-11-20T19:01:41.862Z [INFO] linux/amd64, go1.12.8, 795a3eb CoreDNS-1.6.2 linux/amd64, go1.12.8, 795a3eb (base) [root@k8s-master example]# kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 26h (base) [root@k8s-master example]# kubectl get ep kube-dns --namespace=kube-system NAME ENDPOINTS AGE kube-dns 10.32.0.3:53,10.32.0.4:53,10.32.0.3:53 + 3 more... 
26h (base) [root@k8s-master example]# kubectl -n kube-system edit configmap coredns Edit cancelled, no changes made. </code></pre> <p>nslookup from worker node1</p> <pre><code>[root@worker-node1 ec2-user]# nslookup 10.96.0.10 Server: 172.31.0.2 Address: 172.31.0.2#53 Non-authoritative answer: 10.0.96.10.in-addr.arpa name = ip-10-96-0-10.ec2.internal. Authoritative answers can be found from: [root@worker-node1 ec2-user]# nslookup 10.96.0.1 Server: 172.31.0.2 Address: 172.31.0.2#53 Non-authoritative answer: 1.0.96.10.in-addr.arpa name = ip-10-96-0-1.ec2.internal. Authoritative answers can be found from: [root@worker-node1 ec2-user]# </code></pre> <p>nslookup from worker node2</p> <pre><code>[root@worker-node2 ec2-user]# nslookup 10.96.0.10 Server: 172.31.0.2 Address: 172.31.0.2#53 Non-authoritative answer: 10.0.96.10.in-addr.arpa name = ip-10-96-0-10.ec2.internal. Authoritative answers can be found from: [root@worker-node2 ec2-user]# nslookup 10.96.0.1 Server: 172.31.0.2 Address: 172.31.0.2#53 Non-authoritative answer: 1.0.96.10.in-addr.arpa name = ip-10-96-0-1.ec2.internal. Authoritative answers can be found from: </code></pre>
<p><strong>Update:</strong></p> <p>As I can see from the comments, I also need to cover some basic concepts in the answer.</p> <p>To ensure that a Kubernetes cluster works well, some requirements have to be fulfilled.</p> <ol> <li><p>All Kubernetes nodes must have <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin" rel="nofollow noreferrer">full network connectivity</a>.<br> It means that any cluster node should be able to communicate with any other cluster node using any network protocol and any port (in case of tcp/udp), without NAT. Some cloud environments require additional custom firewall rules to accomplish that. <a href="https://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/gce" rel="nofollow noreferrer">Calico example</a></p></li> <li><p>Kubernetes Pods should be able to communicate with Pods scheduled on other nodes.<br> This functionality is provided by the CNI network add-on. Most popular add-ons require an additional option in the Kubernetes control plane, which is usually set by the <code>kubeadm init --pod-network-cidr=a.b.c.d/16</code> command line option. Note that the default IP subnets for different network add-ons are not the same.<br> If you want to use a custom Pod subnet for a particular network add-on, you have to customize the network add-on deployment YAML file before applying it to the cluster.<br> Inter-Pod connectivity can easily be tested by sending ICMP or <code>curl</code> requests from the node CLI or Pod CLI to any IP address of a Pod scheduled on another node. 
<em>Note that a Service ClusterIP doesn't respond to ICMP requests, because it's nothing more than a set of iptables forwarding rules.</em> The full list of Pods with node names can be shown using the following command:</p> <pre><code>kubectl get pods --all-namespaces -o wide
</code></pre></li> <li>For service discovery functionality, a working DNS service in the Kubernetes cluster is a must.<br> Usually it's <code>kubedns</code> for Kubernetes versions prior to v1.9 and <code>coredns</code> for newer clusters.<br> The Kubernetes DNS service usually consists of one Deployment with two replicas and one ClusterIP Service with the default IP address 10.96.0.10.</li> </ol> <hr> <p>Looking at the data in the question I suspect that you may have a problem with the network add-on. I would test it using the following commands, which should return successful results on a healthy cluster:</p> <pre><code># check connectivity from the master node to nginx pods:
k8s-master$ ping 10.32.0.2
k8s-master$ curl 10.32.0.2
k8s-master$ ping 10.40.0.0
k8s-master$ curl 10.40.0.0

# check connectivity from other nodes to coredns pods and DNS Service:
worker-node1$ nslookup github.com 10.32.0.4
worker-node1$ nslookup github.com 10.32.0.3
worker-node1$ nslookup github.com 10.96.0.10

worker-node2$ nslookup github.com 10.32.0.4
worker-node2$ nslookup github.com 10.32.0.3
worker-node2$ nslookup github.com 10.96.0.10
</code></pre> <p>Troubleshooting a network plugin is quite a big topic to cover in one answer, so if you need to fix the network add-on, please search through existing answers and ask another question in case you find nothing suitable.</p> <hr> <p>The part below describes how to check the DNS service in a Kubernetes cluster:</p> <p>In case of <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer">dnsPolicy</a> <code>ClusterFirst</code> (which is the default), any DNS query that does not match the configured cluster domain suffix, such 
as “www.kubernetes.io”, is forwarded to the upstream nameserver inherited from the node.</p> <p>How to check the DNS client configuration on the cluster node: </p> <pre><code>$ cat /etc/resolv.conf
$ systemd-resolve --status
</code></pre> <p>How to check if the DNS client on the node works well:</p> <pre><code>$ nslookup github.com
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   github.com
Address: 140.82.118.4
</code></pre> <p>How to get the Kubernetes cluster DNS configuration:</p> <pre><code>$ kubectl get svc,pods -n kube-system -o wide | grep dns
service/kube-dns   ClusterIP   10.96.0.10   &lt;none&gt;   53/UDP,53/TCP,9153/TCP   185d   k8s-app=kube-dns

pod/coredns-fb8b8dccf-5jjv8   1/1   Running   122   185d   10.244.0.16   kube-master2   &lt;none&gt;   &lt;none&gt;
pod/coredns-fb8b8dccf-5pbkg   1/1   Running   122   185d   10.244.0.17   kube-master2   &lt;none&gt;   &lt;none&gt;

$ kubectl get configmap coredns -n kube-system -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-05-20T16:10:42Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "1657005"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: d1598034-7b19-11e9-9137-42010a9c0004
</code></pre> <p>How to check if the cluster DNS service (coredns) works well:</p> <pre><code>$ nslookup github.com 10.96.0.10
Server:         10.96.0.10
Address:        10.96.0.10#53

Non-authoritative answer:
Name:   github.com
Address: 140.82.118.4
</code></pre> <p>How to check if a regular pod can resolve a particular DNS name:</p> <pre><code>$ kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

$ kubectl run dnsutils -it --rm=true --restart=Never --image=tutum/dnsutils nslookup github.com
Server:         10.96.0.10
Address:        10.96.0.10#53

Non-authoritative answer:
Name:   github.com
Address: 140.82.118.3
</code></pre> <p>More details about DNS troubleshooting can be found in the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">official documentation</a></p>
<p>I've set up a VPC consisting of a public subnet and multiple private subnets. The public subnet hosts an OpenVPN access server through which I can access my instances running in the private subnets. I have NAT and the Internet Gateway running fine, and I can access the internet from instances in the private subnets over VPN.</p> <p>Everything was running fine until I decided to run an EKS cluster in one of my private subnets with the "Public Access" feature disabled. I cannot reach my EKS endpoint (the Kubernetes API server endpoint) over VPN or from any instances running in my public/private subnets (i.e. using a jump box).</p> <p>I googled a lot and found that I have to enable <code>enableDnsHostnames</code> and <code>enableDnsSupport</code> on my VPC, but enabling these did not help. I also checked my master node security group, which allows inbound traffic from anywhere, i.e. <code>0.0.0.0/0</code>, over port 443, so the security group is not a concern.</p> <p>However, everything runs just fine if I set the "Public Access" flag to <code>Enabled</code>, but that defeats the purpose of creating the K8s cluster in a private subnet.</p> <p>Can someone please point out where I'm going wrong? Thanks in advance.</p>
<h3>Intro</h3> <p>If you are setting up an EKS Kubernetes cluster on AWS, you would probably want a cluster that is not accessible to the world, and you'll access it privately via a VPN. Considering all disclosed <a href="https://www.cvedetails.com/vulnerability-list/vendor_id-15867/product_id-34016/Kubernetes-Kubernetes.html" rel="nofollow noreferrer">vulnerabilities</a>, this setup looks like a more secure design, and it allows you to isolate the Kubernetes control plane and worker nodes within your VPC, providing an additional layer of protection to harden clusters against malicious attacks and accidental exposure. </p> <p>You do that by toggling off <em>Public Access</em> while creating the cluster; however, a problem with that is that <a href="https://github.com/aws/containers-roadmap/issues/221" rel="nofollow noreferrer">automated DNS resolution for EKS with a private endpoint</a> is still not supported the way it is with, for example, RDS private endpoints.</p> <p>When creating your EKS cluster:</p> <ul> <li>AWS does not allow you to change the DNS name of the endpoint.</li> <li>AWS creates a managed private hosted zone for the endpoint DNS (not editable).</li> </ul> <h3>Solution N°1</h3> <p>One suggested solution is to create Route53 inbound and outbound endpoints as described in this <a href="https://aws.amazon.com/blogs/compute/enabling-dns-resolution-for-amazon-eks-cluster-endpoints/" rel="nofollow noreferrer">official AWS blog post</a>. </p> <p>However, the problem with that is that every time you create a cluster you will need to add IPs to your local resolver, and if your local infrastructure is maintained by someone else, it might take days to get that done.</p> <h3>Solution N°2</h3> <p>You could solve that problem by writing a small script that updates <code>/etc/hosts</code> with the <strong>IP</strong> and <strong>DNS</strong> name of the EKS private endpoint. This is kind of a hack but works well. 
</p> <p>Here’s how the <code>eks-dns.sh</code> script looks:</p> <pre class="lang-sh prettyprint-override"><code>#!/bin/bash
#
# eg: bash ~/.aws/eks-dns.sh bb-dev-eks-BedMAWiB bb-dev-devops
#
clusterName=$1
awsProfile=$2

#
# Get EKS ip addrs
#
ips=`aws ec2 describe-network-interfaces --profile $awsProfile \
--filters Name=description,Values="Amazon EKS $clusterName" \
| grep "PrivateIpAddress\"" | cut -d ":" -f 2 | sed 's/[*",]//g' | sed 's/^\s*//'| uniq`

echo "#-----------------------------------------------------------------------#"
echo "# EKS Private IP Addresses: "
echo $ips
echo "#-----------------------------------------------------------------------#"
echo ""

#
# Get EKS API endpoint
#
endpoint=`aws eks describe-cluster --profile $awsProfile --name $clusterName \
| grep endpoint\" | cut -d ":" -f 3 | sed 's/[\/,"]//g'`

echo "#-----------------------------------------------------------------------#"
echo "# EKS Private Endpoint "
echo $endpoint
echo "#-----------------------------------------------------------------------#"
echo ""

IFS=$'\n'

#
# Create backup of /etc/hosts
#
sudo cp /etc/hosts /etc/hosts.backup.$(date +%Y-%m-%d)

#
# Clean old EKS endpoint entries from /etc/hosts
#
if grep -q $endpoint /etc/hosts; then
    echo "Removing old EKS private endpoints from /etc/hosts"
    sudo sed -i "/$endpoint/d" /etc/hosts
fi

#
# Update /etc/hosts with EKS entry
#
for item in $ips
do
    echo "Adding EKS Private Endpoint IP Addresses"
    echo "$item $endpoint" | sudo tee -a /etc/hosts
done
</code></pre> <h3>Exec Example</h3> <pre><code>╭─delivery at delivery-I7567 in ~ using ‹› 19-11-21 - 20:26:27
╰─○ bash ~/.aws/eks-dns.sh bb-dev-eks-BedMAWiB bb-dev-devops
</code></pre> <p>Resulting <code>/etc/hosts</code></p> <pre><code>╭─delivery at delivery-I7567 in ~ using ‹› 19-11-21 - 20:26:27
╰─○ cat /etc/hosts
127.0.0.1       localhost

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

172.18.3.111 D4EB2912FDB14E8DAB358D471DD0DC5B.yl4.us-east-1.eks.amazonaws.com
172.18.1.207 D4EB2912FDB14E8DAB358D471DD0DC5B.yl4.us-east-1.eks.amazonaws.com
</code></pre> <ul> <li><strong>Ref Article:</strong> studytrails.com/devops/kubernetes/local-dns-resolution-for-eks-with-private-endpoint/</li> </ul> <h3>Important Consideration</h3> <p>Also, as stated in the related question <a href="https://stackoverflow.com/questions/55510783/cant-access-eks-api-server-endpoint-within-vpc-when-private-access-is-enabled?rq=1">Can&#39;t access EKS api server endpoint within VPC when private access is enabled</a>, your VPC must have <code>enableDnsHostnames</code> and <code>enableDnsSupport</code> set to true.</p> <blockquote> <p>I had to enable enableDnsHostnames and enableDnsSupport for my VPC.</p> <p>When enabling the private access of a cluster, EKS creates a private hosted zone and associates with the same VPC. It is managed by AWS itself and you can't view it in your aws account. So, this private hosted zone to work properly, your VPC must have enableDnsHostnames and enableDnsSupport set to true.</p> <p>Note: Wait for a while for changes to be reflected(about 5 minutes).</p> </blockquote>
<p>I have a "big" microservice (website) with 3 pods deployed with a Helm chart in the production env, but when I deploy a new version of the Helm chart, the website returns <code>503 Service Unavailable</code> for about 40 seconds (the time it takes my big microservice to start).</p> <p>So, I'm looking for a way to tell Kubernetes not to kill the old pod before the new version has completely started.</p> <p>I tried <code>--wait --timeout</code> but it did not work for me.</p> <p>My EKS version: "v1.14.6-eks-5047ed"</p>
<p>Without more details about the Pods, I'd suggest:</p> <p>Use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> (if not already) so that Pods are managed by a ReplicaSet, which allows <a href="https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/" rel="nofollow noreferrer">rolling updates</a>, and combine that with a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">configured</a> Startup Probe (if on k8s v1.16+) or Readiness Probe, so that Kubernetes knows when the new Pods are ready to take on traffic (a Pod is considered ready when all of its Containers are ready).</p>
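<p>A minimal sketch of such a Deployment (the image name, port and probe path are assumptions; adjust them to your application). With <code>maxUnavailable: 0</code>, the old Pods are kept until the new ones pass the readiness probe:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: website
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all old pods until new ones are Ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
      - name: website
        image: example/website:1.0    # assumption: your application image
        ports:
        - containerPort: 8080         # assumption: your container port
        readinessProbe:
          httpGet:
            path: /healthz            # assumption: your health endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
</code></pre>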
<p>I have the same problem as the following: <a href="https://stackoverflow.com/questions/54289786/dual-nginx-in-one-kubernetes-pod">Dual nginx in one Kubernetes pod</a></p> <p>In my Kubernetes <code>Deployment</code> template, I have 2 containers that are using the same port 80. I understand that containers within a <code>Pod</code> are actually under the same network namespace, which enables accessing another container in the <code>Pod</code> with <code>localhost</code> or <code>127.0.0.1</code>. It means containers can't use the same port.</p> <p>It's very easy to achieve this with the help of <code>docker run</code> or <code>docker-compose</code>, by using <code>8001:80</code> for the first container and <code>8002:80</code> for the second container.</p> <p>Is there any similar or better solution to do this in Kubernetes Pod ? Without separating these 2 containers into different Pods.</p>
<p>Basically I totally agree with <em>@David's</em> and <em>@Patric's</em> comments, but I decided to add a few more things, expanding them into an answer.</p> <blockquote> <p>I have the same problem as the following: <a href="https://stackoverflow.com/questions/54289786/dual-nginx-in-one-kubernetes-pod">Dual nginx in one Kubernetes pod</a></p> </blockquote> <p>And there is already a pretty good answer for that problem in the mentioned thread. From the technical point of view it provides a ready solution to your particular use case; however, it doesn't question the idea itself.</p> <blockquote> <p>It's very easy to achieve this with the help of docker run or docker-compose, by using 8001:80 for the first container and 8002:80 for the second container.</p> </blockquote> <p>It's also very easy to achieve in <strong>Kubernetes</strong>. Simply put both containers in different <code>Pods</code> and you will not have to touch the nginx config to make it listen on a port other than <code>80</code>. Note that those two docker containers that you mentioned don't share a single network namespace, and that's why they can both listen on port <code>80</code>, which is mapped to different ports on the host system (<code>8001</code> and <code>8002</code>). This is not the case with <strong>Kubernetes</strong> <em>Pods</em>. Read more about the <strong>microservices architecture</strong> and especially how it is implemented on <strong>k8s</strong>, and you'll notice that placing a few containers in a single <code>Pod</code> is a really rare use case and definitely should not be applied in a case like yours. There should be a good reason to put 2 or more containers in a single <code>Pod</code>. 
Usually the second container has some complementary function to the main one.</p> <p>There are <a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/" rel="noreferrer">3 design patterns for multi-container Pods, commonly used in <strong>Kubernetes</strong></a>: sidecar, ambassador and adapter. Very often all of them are simply referred to as <strong>sidecar containers</strong>.</p> <p>Note that 2 or more containers coupled together in a single <code>Pod</code> in all the above-mentioned use cases <em>have totally different functions</em>. Even if you put more than just one container in a single <code>Pod</code> (two being the most common case), in practice it is never a container of the same type (like two nginx servers listening on different ports in your case). They should be complementary, and there should be a good reason why they are put together, why they should start and shut down at the same time and share the same network namespace. A sidecar container with a monitoring agent running in it has a complementary function to the main container, which can be e.g. an nginx webserver. You can read more about container design patterns in general in <a href="https://techbeacon.com/enterprise-it/7-container-design-patterns-you-need-know" rel="noreferrer">this</a> article.</p> <blockquote> <p>I don't have a very firm use case, because I'm still very new to Kubernetes and the concept of a cluster. </p> </blockquote> <p>So definitely don't go this way if you don't have a particular reason for such an architecture.</p> <blockquote> <p>My initial planning of the cluster is putting all my containers of the system into a pod. So that I can replicate this pod as many as I want.</p> </blockquote> <p>You don't need a single <code>Pod</code> to replicate it. 
You can have in your cluster a lot of <code>ReplicaSets</code> (usually managed by <code>Deployments</code>), each of them taking care of running the declared number of replicas of a <code>Pod</code> of a certain kind.</p> <blockquote> <p>But according to all the feedback that I have now, it seems like I going in the wrong direction.</p> </blockquote> <p>Yes, this is definitely the wrong direction, but it was actually already said. I'd just like to highlight why exactly this direction is wrong. Such an approach is totally against the idea of the <em>microservices architecture</em> that <strong>Kubernetes</strong> is designed for. Putting all your infrastructure in a single huge <code>Pod</code> and binding all your containers tightly together makes no sense. Remember that a <code>Pod</code> <em>is the smallest deployable unit in <strong>Kubernetes</strong></em>, and when one of its containers crashes, the whole <code>Pod</code> crashes. There is no way you can manually restart just one container in a <code>Pod</code>.</p> <blockquote> <p>I'll review my structure and try with the suggests you all provided. Thank you, everyone! =)</p> </blockquote> <p>This is a good idea :)</p>
<p>How do I reach my API, which is running in a Kubernetes namespace other than default? Let's say I have pods running in 3 namespaces: default, dev, prod. I have an ingress load balancer installed and the routing configured. I have no problem accessing the default namespace: <a href="https://localhost/myendpoint" rel="nofollow noreferrer">https://localhost/myendpoint</a>.... But how do I access the APIs that are running different image versions in other namespaces, e.g. dev or prod? Do I need to add additional configuration to the service or ingress-service files?</p> <p>EDIT: my pods are RESTful APIs that communicate over HTTP. All I'm asking is how to access my pod, which runs in a namespace other than default. The deployments communicate with each other with no problem. Let's say I have a front-end application running and want to access it from the browser, how is it done? I can access it if the pods are in the default namespace by hitting <a href="http://localhost/path" rel="nofollow noreferrer">http://localhost/path</a>... but if I delete all the pods from the default namespace and move all the services and deployments into the dev namespace, I cannot access it anymore from the browser with the same URL. Does it have a specific path for different namespaces, like <a href="http://localhost/dev/path" rel="nofollow noreferrer">http://localhost/dev/path</a>? Do I need to configure it?</p> <p>Hopefully it's clear enough. Thank you</p>
<h2>Route traffic with Ingress to Service</h2> <p>When you want to route requests from external clients, via an <code>Ingress</code>, to a <code>Service</code>, you should put the <code>Ingress</code> and <code>Service</code> objects in the <em>same namespace</em>. I recommend using different <em>domains</em> in your <code>Ingress</code> for the environments.</p> <h2>Route traffic from Service to Service</h2> <p>When you want to route traffic from a pod in your cluster to a <code>Service</code>, possibly in another namespace, it is easiest to use <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Service Discovery with DNS</a>, e.g. send requests to:</p> <pre><code>&lt;service-name&gt;.&lt;namespace&gt;.svc.&lt;cluster-domain&gt;
</code></pre> <p>which is most likely</p> <pre><code>&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local
</code></pre>
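<p>For example, from inside the cluster you can reach a Service in another namespace by its DNS name. A sketch (the Service name <code>backend</code>, namespace <code>dev</code> and port are assumptions; substitute your own):</p> <pre><code># run a throwaway pod and send a request to a Service in the "dev" namespace
kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://backend.dev.svc.cluster.local:80/
</code></pre>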
<p>I have a deployment, that looks as follows: <a href="https://i.stack.imgur.com/j5TPW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j5TPW.png" alt="enter image description here"></a></p> <p>The question is, what is the difference between the red border label and the violet one? </p>
<p>The violet border labels are applied to the Pod spec (the Pod template), whereas the red border labels are part of the Deployment spec itself.</p> <p>Notice that the ReplicaSet selector uses the same key/value pair to identify the related Pods.</p> <p>You can query the Deployment object using:</p> <pre><code>kubectl get deploy -l app=nginx
</code></pre> <p>In the same way, you can query the Pods using:</p> <pre><code>kubectl get po -l app=nginx
</code></pre>
<p>I have a deployment, that looks as follows: <a href="https://i.stack.imgur.com/j5TPW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j5TPW.png" alt="enter image description here"></a></p> <p>The question is, what is the difference between the red border label and the violet one? </p>
<p>These key/value fields are referred to as Labels in Kubernetes. These labels are used in order to organize your cluster. </p> <p>Labels are key/value pairs that are attached to objects and can be used to identify or group resources in Kubernetes. They can be used to select resources from a list. </p> <p>Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined. Each key must be unique for a given object.</p> <p>Going deeper, let's suppose you have this Pod in your cluster:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  namespace: default
  labels:
    env: development
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
</code></pre> <p>As you can see, we are setting one label: <code>env: development</code></p> <p>If you deploy this pod, you can run the following command to list all labels set on this pod: </p> <pre class="lang-sh prettyprint-override"><code>kubectl get pod sample-pod --show-labels

NAME         READY   STATUS    RESTARTS   AGE   LABELS
sample-pod   1/1     Running   0          28s   env=development
</code></pre> <p>You can also list all pods with the <code>development</code> label:</p> <pre><code>$ kubectl get pods -l env=development
NAME         READY   STATUS    RESTARTS   AGE
sample-pod   1/1     Running   0          106s
</code></pre> <p>You can also delete a pod using the label selection: </p> <pre><code>$ kubectl delete pod -l env=development
pod "sample-pod" deleted
</code></pre> <p>Matching objects must satisfy all of the specified label constraints, though they may have additional labels as well. Three kinds of operators are supported: <code>=</code>, <code>==</code>, <code>!=</code>. The first two represent <em>equality</em> (and are simply synonyms), while the latter represents <em>inequality</em>. 
For example:</p> <pre><code>environment = production
tier != frontend
</code></pre> <p>You can read more about labels in the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">Kubernetes documentation</a>.</p>
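<p>Labels are also what higher-level objects use to select Pods. For example, a Service routing traffic only to Pods carrying the <code>env: development</code> label from the example above could look like this sketch (the Service name and port numbers are assumptions):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: dev-service
spec:
  selector:
    env: development   # matches the sample-pod above
  ports:
  - port: 80
    targetPort: 8080   # assumption: the port your container listens on
</code></pre>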
<p>How do I use minikube's (cluster's) DNS? I want to receive all IP addresses associated with all pods of a selected headless service. I don't want to expose it outside the cluster. I am currently creating the back-end layer.</p> <p>As stated in the following answer: <a href="https://stackoverflow.com/questions/52707840/what-exactly-is-a-headless-service-what-does-it-do-accomplish-and-what-are-som/52713482">What exactly is a headless service, what does it do/accomplish, and what are some legitimate use cases for it?</a></p> <p><em>„Instead of returning a single DNS A record, the DNS server will return multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment.”</em></p> <p>Thus the pods in the back-end layer can communicate with each other.</p> <p>I can't use the <code>dig</code> command. It is not installed in minikube. How would I install it, if that's possible? There is no apt available.</p> <p>I hope this explains more accurately what I want to achieve.</p>
<p>You mentioned that you want to receive the IP addresses associated with the pods of a selected service name, for testing how a headless service works.</p> <p>For testing purposes only, you can use port-forwarding. You can forward traffic from your local machine to the DNS pod in your cluster. To do this, you need to run:</p> <pre><code>kubectl port-forward svc/kube-dns -n kube-system 5353:53
</code></pre> <p>and it will expose the kube-dns service on your host. Then all you need is to use the <code>dig</code> command (or an alternative) to query the DNS server.</p> <pre><code>dig @127.0.0.1 -p 5353 +tcp +short &lt;service&gt;.&lt;namespace&gt;.svc.cluster.local
</code></pre> <p>You can also test your DNS from inside the cluster, e.g. by running a pod with an interactive shell:</p> <pre><code>kubectl run --image tutum/dnsutils dns -it --rm -- bash
root@dns:/# dig +search &lt;service&gt;
</code></pre> <p>Let me know if it helped.</p>
<p>I am trying to enable feature gates on a Kubernetes cluster. I have tried the following kubeadm call:</p> <pre><code>kubeadm config images list --feature-gates TTLAfterFinished=true
</code></pre> <p>however I get the error:</p> <pre><code>unrecognized feature-gate key: TTLAfterFinished
</code></pre> <p>I am using Rancher to build and deploy the cluster. I am using this site to determine the names of the feature gates to enable: <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/</a></p>
<p>I struggled with this as well. The solution is to update the feature gates via the Rancher API or Rancher UI. You can use the edit-cluster-via-API feature, as shown in the answer that worked for me:</p> <p><a href="https://stackoverflow.com/a/51373855/5617140">https://stackoverflow.com/a/51373855/5617140</a></p> <p>Hope this helps.</p>
<p>I am spinning up a <code>kubernetes</code> job as a <code>helm</code> <code>pre-install</code> hook on GKE. The job uses the <code>google/cloud-sdk</code> image and I want it to create a Compute Engine persistent disk.</p> <p>Here is its <code>spec</code>:</p> <pre><code>  spec:
    restartPolicy: OnFailure
    containers:
      - name: create-db-hook-container
        image: google/cloud-sdk:latest
        command: ["gcloud"]
        args: ["compute", "disks", "create", "--size={{ .Values.volumeMounts.gceDiskSize }}", "--zone={{ .Values.volumeMounts.gceDiskZone }}", "{{ .Values.volumeMounts.gceDiskName }}"]
</code></pre> <p>However this fails with the following error:</p> <pre><code>brazen-lobster-create-pd-hook-nc2v9 create-db-hook-container ERROR: (gcloud.compute.disks.create) Could not fetch resource:
brazen-lobster-create-pd-hook-nc2v9 create-db-hook-container  - Insufficient Permission: Request had insufficient authentication scopes.
brazen-lobster-create-pd-hook-nc2v9 create-db-hook-container
</code></pre> <p>Apparently I have to grant the <code>gcloud.compute.disks.create</code> permission.</p> <p>My question is <strong>to whom I have to grant this permission</strong>.</p> <p>This is a GCP IAM permission, therefore I assume it cannot be granted specifically on a <code>k8s</code> resource, so it cannot be dealt with within the context of <code>k8s</code> RBAC, right?</p> <p><strong>edit</strong>: I have created a <code>ComputeDiskCreate</code> custom role that encompasses two permissions:</p> <ul> <li><code>gcloud.compute.disks.create</code></li> <li><code>gcloud.compute.disks.list</code></li> </ul> <p>I have attached it to the service account</p> <p><code>service-2340842080428@container-engine-robot.uam.gserviceaccount.com</code>, to which my <code>IAM</code> Google Cloud console has given the name</p> <blockquote> <p>Kubernetes Engine Service Agent</p> </blockquote> <p>but the outcome is still the same.</p>
<p>In GKE, all nodes in a cluster are actually Compute Engine VM instances. They're assigned a service account at creation time to authenticate them to other services. You can check the service account assigned to nodes by checking the corresponding node pool.</p> <p>By default, GKE nodes are assigned the Compute Engine default service account, which looks like <code>PROJECT_NUMBER-compute@developer.gserviceaccount.com</code>, unless you set a different one at cluster/node pool creation time.</p> <p>Calls to other Google services (like the <code>compute.disks.create</code> endpoint in this case) will come from the node and be authenticated with the corresponding service account credentials.</p> <p>You should therefore add the <code>compute.disks.create</code> permission to your nodes' service account (likely <code>PROJECT_NUMBER-compute@developer.gserviceaccount.com</code>) in your Developer Console's <a href="https://console.cloud.google.com/iam-admin/iam" rel="nofollow noreferrer">IAM page</a>.</p> <p><strong>EDIT</strong>: Prior to any authentication, the mere ability of a node to access a given Google service is defined by its <a href="https://cloud.google.com/compute/docs/access/service-accounts#accesscopesiam" rel="nofollow noreferrer">access scope</a>. This is defined at node pool creation time and can't be edited. You'll need to create a new node pool and ensure you grant it the <code>https://www.googleapis.com/auth/compute</code> access scope so its nodes can call Compute Engine methods. You can then instruct your particular pod to <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">run on those specific nodes</a>.</p>
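<p>For example, granting the permission and creating a node pool with the right scope could look like the following sketch (the project ID, cluster name, service account number and custom role name are placeholders; adjust them to your setup):</p> <pre><code># grant a role containing compute.disks.create to the nodes' service account
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:123456789012-compute@developer.gserviceaccount.com" \
  --role "projects/my-project/roles/ComputeDiskCreate"

# create a node pool whose nodes have the Compute Engine access scope
gcloud container node-pools create disk-creator-pool \
  --cluster my-cluster \
  --scopes "https://www.googleapis.com/auth/compute"
</code></pre>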
<p>I have one application which serves REST requests and is also listening on a Kafka topic. I deployed the application to Kubernetes and configured the readiness probe like this</p> <pre><code>readinessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
</code></pre> <p>basically following the instructions from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">[configure-liveness-readiness-startup-probes]</a></p> <p>After the deployment is done, I can see the pod readiness probe fails</p> <pre><code>Readiness probe failed: cat: can't open '/tmp/healthy': No such file or directory
</code></pre> <p>That is expected. Then I sent a Kafka message to the topic. I observed that </p> <p>1) the Kafka message has been consumed by my application and saved to the database.<br> 2) the REST API can't be accessed.</p> <p>I assumed that if the pod's readiness probe fails, the application can neither receive Kafka messages nor REST requests. But in my test, why are the REST request and the Kafka message handled differently? </p> <p>According to the Kubernetes documentation:</p> <pre><code>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic
</code></pre> <p>But it doesn't say clearly what kind of traffic it really means. Does Kubernetes only restrict HTTP traffic to the pod if the readiness probe fails, but not TCP traffic (as Kafka works over TCP)? </p> <p>My actual intention is to make my service application (a Kafka consumer) able to control when to receive Kafka messages (and REST requests as well). E.g. if there is a heavy operation, my service will delete the /tmp/healthy file and thus make the pod not ready for receiving Kafka messages and REST requests. 
When the heavy operation is finished, the app write the healthy file to make the pod ready for receiving message.</p> <p>Some more information, in my test, the kubernetes version is v1.14.3 and kafka broker is running in a separated vm outside of kubernetes.</p>
<p>This is two very different things:</p> <ul> <li><strong>Receiving requests</strong>: An <em>external service</em> is sending a request and expect a response.</li> <li><strong>Sending requests</strong>: Your service is sending a request and waiting for a response.</li> </ul> <h2>ReadinessProbe</h2> <p>When a ReadinessProbe fails, <strong>no new requests will be routed to the pod</strong>.</p> <h2>Kafka consumer</h2> <p>If your pod is a <em>Kafka consumer</em>, then your <strong>pod is initializing requests</strong> to Kafka, to retrieve messages from the <em>topic</em>.</p> <p><strong>Check for required directory</strong></p> <blockquote> <p>can't open '/tmp/healthy': No such file or directory</p> </blockquote> <p>If the directory <code>/tmp/healthy</code> is needed for your service to work correctly, your service should check for it on startup, and <code>exit(1)</code> (crash with an error message) if the required directory isn't available. This should be done before connecting to Kafka. If your application uses the directory continually, e.g. writing to it, any operations <strong>error codes should be checked and handled properly</strong> - log and crash depending on your situation.</p> <h2>Consuming Kafka messages</h2> <blockquote> <p>My actual intention is to make my service application (kafka consumer) able to control when to receive kafka messages ( and REST request as well). E.g. if there is heavy opertion, my service will delete the /tmp/healthy file and thus make the pod not ready for recieving kafka message and Rest request.</p> </blockquote> <p>Kafka consumers <strong>poll</strong> Kafka for more data, whenever the consumer want. 
In other words, the Kafka consumer <em>ask</em> for more data whenever it is ready for more data.</p> <p>Example consumer code:</p> <pre><code> while (true) { ConsumerRecords&lt;String, String&gt; records = consumer.poll(100); for (ConsumerRecord&lt;String, String&gt; record : records) { // process your records } } </code></pre> <p>Remember to <code>commit</code> the records that you have <em>processed</em> so that the messages aren't processed multiple times e.g. after a crash.</p>
<p>We have Spring Boot applications deployed on OKD (The Origin Community Distribution of Kubernetes that powers Red Hat OpenShift). Without much tweaking by devops team, we got in prometheus scraped kafka consumer metrics from kubernetes-service-endpoints exporter job, as well as some producer metrics, but only for kafka connect api, not for standard kafka producer api. This is I guess a configuration for that job:</p> <p><a href="https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml" rel="nofollow noreferrer">https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml</a></p> <p>What is needed to change in scrape config in order to collect what's been missing?</p>
<p>This <a href="https://github.com/micrometer-metrics/micrometer/issues/1095" rel="nofollow noreferrer">issue</a> with micrometer is the source of the problem. </p> <p>So, we could add jmx exporter, or wait for the issue resolution.</p>
<p>I am deploying prometheus which needs persistent volume(i have also tried with other statefulset), but persistent volume is not created and persistent volume clam shows the flowing error after kubectl describe -n {namespace} {pvc-name}.</p> <pre><code>Type: Warning Reason: ProvisioningFailed From: persistentvolume-controller Message: (combined from similar events): Failed to provision volume with StorageClass "gp2": error querying for all zones: error listing AWS instances: "UnauthorizedOperation: You are not authorized to perform this operation.\n\tstatus code: 403, request id: d502ce90-8af0-4292-b872-ca04900d41dc" </code></pre> <pre><code>kubectl get sc NAME PROVISIONER AGE gp2 (default) kubernetes.io/aws-ebs 7d17h </code></pre> <pre><code>kubectl describe sc gp2 Name: gp2 IsDefaultClass: Yes Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"} ,storageclass.kubernetes.io/is-default-class=true Provisioner: kubernetes.io/aws-ebs Parameters: fsType=ext4,type=gp2 AllowVolumeExpansion: &lt;unset&gt; MountOptions: &lt;none&gt; ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: &lt;none&gt; </code></pre> <p>K8s versions(aws eks):</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T23:41:55Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b7174d", GitCommit:"b7174db5ee0e30c94a0b9899c20ac980c0850fc8", GitTreeState:"clean", BuildDate:"2019-10-18T17:56:01Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>helm 
version</p> <pre><code>version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"} </code></pre>
<p>I solved this problem by adding <code>AmazonEKSClusterPolicy</code> and <code>AmazonEKSServicePolicy</code> to the eks cluster role.<a href="https://i.stack.imgur.com/PSKRc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PSKRc.png" alt="enter image description here"></a></p>
<p>I have a deployment, that looks as follows: <a href="https://i.stack.imgur.com/j5TPW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j5TPW.png" alt="enter image description here"></a></p> <p>The question is, what is the difference between the red border label and the violet one? </p>
<p>You can add labels to any k8s object. Pod are one of the objects that take most advantage of it as services target them through labels, but again, any object can have labels.</p> <p>About deployments, a deployment creates a replicaSet, that in turn creates the pods. The red square is the deployment labels, while the violet is the labels that will have the pods that the replicaSet will create. Note that it is under template section (Pod template).</p> <p>Now, and this is the important thing with deployments, these two labels must match, in order for a deployment to recognize its "children", otherwise the pods will be orphan. So, if you would change any of these labels, the deployment will create new pods with its labels, and the pods that do not match will run without any controller to back them up.</p>
<p>I am using kubectl in order to retrieve a list of pods:</p> <pre><code> kubectl get pods --selector=artifact=boot-example -n my-sandbox </code></pre> <p>The results which I am getting are:</p> <pre><code>NAME READY STATUS RESTARTS AGE boot-example-757c4c6d9c-kk7mg 0/1 Running 0 77m boot-example-7dd6cd8d49-d46xs 1/1 Running 0 84m boot-example-7dd6cd8d49-sktf8 1/1 Running 0 88m </code></pre> <p>I would like to get only those pods which are "<strong>ready</strong>" (passed readinessProbe). Is there any kubectl command which returns only "<strong>ready</strong>" pods? If not kubectl command, then maybe some other way?</p>
<p>You can use this command:</p> <pre><code>kubectl -n your-namespace get pods -o custom-columns=NAMESPACE:metadata.namespace,POD:metadata.name,PodIP:status.podIP,READY-true:status.containerStatuses[*].ready | grep true </code></pre> <p>This will return you the pods with containers that are &quot;<strong>ready</strong>&quot;.</p> <p>To do this without grep, you can use the following commands:</p> <pre><code>kubectl -n your-namespace get pods -o go-template='{{range $index, $element := .items}}{{range .status.containerStatuses}}{{if .ready}}{{$element.metadata.name}}{{&quot;\n&quot;}}{{end}}{{end}}{{end}}' kubectl -n your-namespace get pods -o jsonpath='{range .items[*]}{.status.containerStatuses[*].ready.true}{.metadata.name}{ &quot;\n&quot;}{end}' </code></pre> <p>This will return you the pod names that are &quot;<strong>ready</strong>&quot;.</p>
<p>beginner here. I am currently trying to configure Ingress to do two things - if the fibonacci route exists, redirect to the function and pass the parameter, if the route doesn't exist, redirect to another website and attach the input there.</p> <p>So, for example, there are two basic scenarios.</p> <ol> <li><a href="https://xxx.amazonaws.com/fibonacci/10" rel="nofollow noreferrer">https://xxx.amazonaws.com/fibonacci/10</a> -&gt; calls fibonacci function with parameter 10 (that works)</li> <li><a href="https://xxx.amazonaws.com/users/jozef" rel="nofollow noreferrer">https://xxx.amazonaws.com/users/jozef</a> -&gt; calls redirect function which redirects to <a href="https://api.github.com/users/jozef" rel="nofollow noreferrer">https://api.github.com/users/jozef</a></li> </ol> <p>I think the service doing the redirect is written correctly, it looks like this.</p> <pre><code>kind: Service apiVersion: v1 metadata: name: api-gateway-redirect-service spec: type: ExternalName externalName: api.github.com ports: - protocol: TCP targetPort: 443 port: 80 # Default port for image </code></pre> <p>This is how my Ingress looks like. Experimented with default-backend annotation as well as various placement of the default backend, nothing worked. When I try to curl <a href="https://xxx.amazonaws.com/users/jozef" rel="nofollow noreferrer">https://xxx.amazonaws.com/users/jozef</a>, I keep getting 301 message but the location is unchanged. The final output looks like this</p> <pre><code>HTTP/1.1 301 Moved Permanently Server: openresty/1.15.8.2 Date: Wed, 13 Nov 2019 15:52:14 GMT Content-Length: 0 Connection: keep-alive Location: https://xxx.amazonaws.com/users/jozef * Connection #0 to host xxx.amazonaws.com left intact * Maximum (50) redirects followed curl: (47) Maximum (50) redirects followed </code></pre> <p>Does someone have an idea what am I doing wrong? This is my Ingress. Also, if it helps, we use Kubernetes version 1.14.6. 
Thanks a million</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-nginx annotations: nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;false&quot; nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - path: /fibonacci/(.*) backend: serviceName: fibonacci-k8s-service servicePort: 80 - path: /(.*) backend: serviceName: api-gateway-redirect-service servicePort: 80 </code></pre>
<p>The resolution to the problem was the addition of the <code>'Host: hostname'</code> header in the curl command. </p> <p>The service that was handling the request needed <code>Host: hostname</code> header to properly reply to this request. After the <code>hostname</code> header was provided the respond was correct. </p> <p>Links: </p> <p><a href="https://curl.haxx.se/docs/" rel="nofollow noreferrer">Curl docs</a></p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress docs</a></p>
<p>There are multiple same pods running in one cluster but different namespaces. This is the web application running in Kubernetes. I have the URL <code>&lt;HOSTNAME&gt;:&lt;PORT&gt;/context/abc/def/.....</code>. I want to redirect to particular service based on the context. Is there a way i can achieve it using ingress controller ? Or Is there any way i can achieve it using different ports through ingress ?</p> <p>My web application works fine if the URL is <code>&lt;HOSTNAME&gt;:&lt;PORT&gt;/abc/def/.....</code>. Since i have to access the different pods using the same URL, I am adding context to it. Do we have any other way to achieve this use case ?</p>
<p>You can do that with <code>rewrite-target</code>. In example below i used <code>&lt;HOSTNAME&gt;</code> value of <code>rewrite.bar.com</code> and <code>&lt;PORT&gt;</code> with value <code>80</code>.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: context-service servicePort: 80 path: /context1(/|$)(.*) - backend: serviceName: context-service2 servicePort: 80 path: /context2(/|$)(.*) </code></pre> <p>For example, the ingress definition above will result in the following rewrites:</p> <p><code>rewrite.bar.com/context1</code> rewrites to <code>rewrite.bar.com/</code> for context 1 service.</p> <p><code>rewrite.bar.com/context2</code> rewrites to <code>rewrite.bar.com/</code> for context 2 service.</p> <p><code>rewrite.bar.com/context1/new</code> rewrites to <code>rewrite.bar.com/new</code> for context 1 service.</p> <p><code>rewrite.bar.com/context2/new</code> rewrites to <code>rewrite.bar.com/new</code> for context 2 service.</p>
<p>I have <a href="https://docs.gitlab.com/charts/installation/" rel="nofollow noreferrer">installed Gitlab CE using Helm</a>, but our AD users can't login to the platform. The following error is shown in the login UI: <code>Could not authenticate you from Ldapmain because "Invalid credentials for userX"</code> <a href="https://imgur.com/a/TmdnF64" rel="nofollow noreferrer">Invalid credentials for user</a> (but credentials are ok!) </p> <p><strong>Installation</strong>: </p> <pre><code>helm upgrade --install gitlab gitlab/gitlab --namespace my-ns --tiller-namespace tiller-ns --timeout 600 --set global.edition=ce --set global.hosts.domain=example.com --set global.hosts.externalIP=&lt;ExternalIPAddressAllocatedToTheNGINXIngressControllerLBService&gt; --set nginx-ingress.enabled=false --set global.ingress.class=mynginx-ic --set certmanager.install=false --set global.ingress.configureCertmanager=false --set gitlab-runner.install=false --set prometheus.install=false --set registry.enabled=false --set gitlab.gitaly.persistence.enabled=false --set postgresql.persistence.enabled=false --set redis.persistence.enabled=false --set minio.persistence.enabled=false --set global.appConfig.ldap.servers.main.label='LDAP' --set global.appConfig.ldap.servers.main.host=&lt;IPAddressOfMyDomainController&gt; --set global.appConfig.ldap.servers.main.port='389' --set global.appConfig.ldap.servers.main.uid='sAMAccountName' --set global.appConfig.ldap.servers.main.bind_dn='CN=testuser,OU=sampleOU3,OU=sampleOU2,OU=sampleOU1,DC=example,DC=com' --set global.appConfig.ldap.servers.main.password.secret='gitlab-ldap-secret' --set global.appConfig.ldap.servers.main.password.key='password' </code></pre> <p><strong>Notes</strong>:<br> -I have installed previously my own NGINX Ingress Controller separately: </p> <pre><code>helm install stable/nginx-ingress --name nginx-ingress --namespace my-ns --tiller-namespace tiller-ns --set controller.ingressClass=mynginx-ic </code></pre> <p>-I have previously 
created a secret with the password for the user configured as bind_dn ('CN=testuser,OU=sampleOU3,OU=sampleOU2,OU=sampleOU1,DC=example,DC=com'). The password is encoded using base64, as indicated in <a href="https://docs.gitlab.com/charts/charts/globals.html#ldap" rel="nofollow noreferrer">the documentation</a> </p> <p>File: gitlab-ldap-secret.yaml </p> <pre><code>apiVersion: v1 kind: Secret metadata: name: gitlab-ldap-secret data: password: encodedpass-blablabla </code></pre> <p>-Instead of providing all these parameters in the commandline during the chart installation, I have tried just configuring everything in the various values.yaml that <a href="https://gitlab.com/gitlab-org/charts/gitlab/tree/master" rel="nofollow noreferrer">this Gitlab Helm chart</a> provides, but it just seemed easier to document here this way, for reproduction purposes. </p> <p>-I have tried adding these parameters, no luck: </p> <pre><code>--set global.appConfig.ldap.servers.main.encryption='plain' --set global.appConfig.ldap.servers.main.base='OU=sampleOU1,DC=example,DC=com' </code></pre> <p>-To make it even simpler, we are not considering persistency for any component. That is why, these are all set to false: </p> <pre><code>--set gitlab.gitaly.persistence.enabled=false --set postgresql.persistence.enabled=false --set redis.persistence.enabled=false --set minio.persistence.enabled=false </code></pre> <p><em>*I do need persistency, but let's just focus on LDAP authentication this time, which is my main issue at the moment.</em> </p> <p>-I have checked with my sysadmin, and we use plain 389 in Active Directory. 
No encryption</p> <p><strong>My environment</strong> </p> <pre><code>kubectl.exe version Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} helm version Client: &amp;version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} helm ls --tiller-namespace tiller-ns NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE gitlab 1 Tue Oct 29 18:16:06 2019 DEPLOYED gitlab-2.3.7 12.3.5 my-ns kubectl.exe get nodes NAME STATUS ROLES AGE VERSION kubernetes01.example.com Ready master 102d v1.15.1 kubernetes02.example.com Ready &lt;none&gt; 7h16m v1.15.1 kubernetes03.example.com Ready &lt;none&gt; 102d v1.15.1 kubernetes04.example.com Ready &lt;none&gt; 11d v1.15.1 </code></pre> <p>After installing this chart, everything seems to work fine:</p> <pre><code>kubectl.exe get pods NAME READY STATUS RESTARTS AGE gitlab-gitaly-0 1/1 Running 0 65m gitlab-gitlab-exporter-5b649bfbb-5pn7q 1/1 Running 0 65m gitlab-gitlab-shell-7d9497fcd7-h5478 1/1 Running 0 65m gitlab-gitlab-shell-7d9497fcd7-jvt9p 1/1 Running 0 64m gitlab-migrations.1-gf8jr 0/1 Completed 0 65m gitlab-minio-cb5945f79-kztmj 1/1 Running 0 65m gitlab-minio-create-buckets.1-d2bh5 0/1 Completed 0 65m gitlab-postgresql-685b68b4d7-ns2rw 2/2 Running 0 65m gitlab-redis-5cb5c8b4c6-jtfnr 2/2 Running 0 65m gitlab-sidekiq-all-in-1-5b997fdffd-n5cj2 1/1 Running 0 65m gitlab-task-runner-5777748f59-gkf9v 1/1 Running 0 65m 
gitlab-unicorn-764f6548d5-fmggl 2/2 Running 0 65m gitlab-unicorn-764f6548d5-pqcm9 2/2 Running 0 64m </code></pre> <p>Now, if I try to login with a LDAP user, I get the error mentioned before. If I go inside the unicorn pod, I can see the following messages in the <code>/var/log/gitlab/production.log</code>: <a href="https://i.stack.imgur.com/UGpIp.png" rel="nofollow noreferrer">Production.log</a></p> <p>What am I missing? Do I need to configure anything else? I have configured all the parameters for LDAP Authentication mentioned <a href="https://docs.gitlab.com/charts/charts/globals.html#ldap" rel="nofollow noreferrer">here</a> but still I'm having trouble trying to authenticate. </p> <p>Sorry, but I am new with Gitlab and all its internal components. I can't seem to find where to edit this file for example: <code>/etc/gitlab/gitlab.rb</code> (<em>in which pod should I enter? I literally entered each one of them, and did not find this configuration file</em>). Also, I noticed some of the documentation says that some diagnostics tools can be executed such as <code>gitlab-rake</code> <code>gitlab:ldap:check</code>, or utilities such as <code>gitlab-ctl reconfigure</code>, but again.... where to run these?? On the unicorn pod? gitlab-shell? I noticed various Gitlab documentation pages reference to some of these tools to troubleshoot incidents, but I don't think this chart follows the same architecture.</p> <p>I have looked <a href="https://stackoverflow.com/questions/56161699/could-not-authenticate-you-from-ldapmain-because-invalid-credentials-for-user-n/56165991?noredirect=1#comment98973794_56165991">this post</a> for example, because it seems the same issue, but I can't find <code>/etc/gitlab/gitlab.rb</code> </p> <p>Any help will be much appreciated. It's been a couple of weeks since I've been dealing with this issue.</p>
<p>I forgot to answer this. I managed to solve this issue by adding again the base parameter.</p> <p>I was already trying this as explained in my original post but it turns out that I needed to escape the commas:</p> <pre><code>--set global.appConfig.ldap.servers.main.base='OU=sampleOU1\,DC=example\,DC=com' </code></pre> <p>*The same with your bind_dn, commas need to be escaped.</p> <p>I was able to realize that thanks to the LDAP diagnostic tool <code>gitlab-rake</code>, which in this helm chart is a bit different. You have to go inside <code>/srv/gitlab</code> directory on the unicorn container, and execute <code>./bin/rake gitlab:ldap:check</code>. That helped me understand where the problem was. Also you can check the <code>/srv/gitlab/config/gitlab.yaml</code> file to see if your LDAP parameters were successfully loaded/parsed.</p> <p>You can also run the following command to see more information about your Gitlab deployment: <code>./bin/rake gitlab:env:info</code></p>
<ul> <li>Istio: 1.3 (also tried 1.1 before update to 1.3)</li> <li>K8s: 1.16.2</li> <li>Cloud provider: DigitalOcean</li> </ul> <p>I have a cluster setup with Istio. I have enabled grafana/kiali and also installed kibana and RabbitMQ management UI and for all of those I have gateways and virtual services configured (all in istio-system namespace) along with HTTPS using SDS and cert-manager and all works fine. It means I can access these resources in the browser over HTTPS with a sub domain.</p> <p>Then I deployed a microservice (part of a real application) and created <code>Service</code>, <code>VirtualService</code> and <code>Gateway</code> resources for it (for now it is the only one service and gateway except rabbitmq which uses different sub domain and differend port). And it is located in default namespace.</p> <pre><code>$ kubectl get gateway NAME AGE gateway-rabbit 131m tg-gateway 45m $ kubectl get po NAME READY STATUS RESTARTS AGE rabbit-rabbitmq-0 2/2 Running 2 134m tg-app-auth-79c578b94f-mqsz9 2/2 Running 0 46m </code></pre> <p>If I try to connect to my service with port forwarding I can get a success response from <code>localhost:8000/api/me</code> (also healthz, readyz both return 200 and pod has 0 restarts) so it is working fine.</p> <pre><code>kubectl port-forward $(kubectl get pod --selector="app=tg-app-auth" --output jsonpath='{.items[0].metadata.name}') 8000:8000 </code></pre> <p>But I can't access it neither via HTTP nor HTTPS. I get <code>404</code> using HTTP and the following response using HTTPS:</p> <pre><code>* Trying MYIP... 
* TCP_NODELAY set * Connected to example.com (MYIP) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/cert.pem CApath: none * TLSv1.2 (OUT), TLS handshake, Client hello (1): * LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to www.example.com:443 * Closing connection 0 curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to www.example.com:443 </code></pre> <p>Here are my yaml files:</p> <p><strong>Gateway:</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: tg-gateway namespace: default spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.example.com tls: httpsRedirect: true - port: number: 443 name: https protocol: HTTPS hosts: - www.example.com tls: mode: SIMPLE serverCertificate: sds privateKey: sds credentialName: tg-certificate </code></pre> <p><strong>Service:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: tg-app-auth namespace: default labels: app: tg-app-auth spec: selector: app: tg-app-auth ports: - name: http port: 8000 </code></pre> <p><strong>VirtualService</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: tg-app-auth-vs namespace: default spec: hosts: - www.example.com gateways: - tg-gateway http: - match: - port: 443 - uri: prefix: /api/auth rewrite: uri: /api route: - destination: host: tg-app-auth port: number: 8000 --- apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: tg-app-auth-dr namespace: default spec: host: tg-app-auth trafficPolicy: tls: mode: DISABLE </code></pre> <p>I tried to remove all the HTTPS and TLS details and configure it with HTTP only but still can not get any response. I read all the issues on github but nothing helps and it seems like I have a very silly mistake. 
All these configurations are pretty much the same as I have for grafana/kibana/kiali/rabbit and all of them works fine.</p> <p>UPD: Tried to get response with and it also works fine but I can't get response from LB IP or domain</p> <pre><code>kubectl exec $(kubectl get pod --selector app=tg-app-auth --output jsonpath='{.items[0].metadata.name}') -c istio-proxy -- curl -v http://$(kubectl get endpoints tg-app-auth -o jsonpath='{.subsets[0].addresses[0].ip}'):8000/api/me </code></pre> <pre><code>$ kubectl get endpoints tg-app-auth NAME ENDPOINTS AGE tg-app-auth 10.244.0.37:8000 22h </code></pre> <p><strong>UPD</strong></p> <p>All statuses are OK. There are a lot more with different ports but I copied 80/443 only.</p> <pre><code>$ istioctl authn tls-check &lt;pod_name&gt; cert-manager-webhook.istio-system.svc.cluster.local:443 istio-galley.istio-system.svc.cluster.local:443 istio-ingressgateway.istio-system.svc.cluster.local:80 istio-ingressgateway.istio-system.svc.cluster.local:443 istio-sidecar-injector.istio-system.svc.cluster.local:443 kubernetes.default.svc.cluster.local:443 </code></pre> <pre><code>$ kubectl get ingress --all-namespaces No resources found. $ kubectl get gateways --all-namespaces default gateway-rabbit 3d2h default tg-gateway 17h istio-system gateway-grafana 3d2h istio-system gateway-kiali 3d2h istio-system istio-autogenerated-k8s-ingress 3d2h logging gateway-kibana 3d2h </code></pre>
<p>Issue was really simple and silly. I had enabled global.k8sIngress.enabled = true in Istio values.yml. After changing it to false all starts working.</p>
<p>I have an ingress providing routing for two microservices running on GKE, and intermittently when the microservice returns a 404/422, the ingress returns a 502.</p> <p>Here is my ingress definition:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: basic-ingress annotations: kubernetes.io/ingress.global-static-ip-name: develop-static-ip ingress.gcp.kubernetes.io/pre-shared-cert: dev-ssl-cert spec: rules: - http: paths: - path: /* backend: serviceName: srv servicePort: 80 - path: /c/* backend: serviceName: collection servicePort: 80 - path: /w/* backend: serviceName: collection servicePort: 80 </code></pre> <p>I run tests that hit the <code>srv</code> back-end where I expect a 404 or 422 response. I have verified when I hit the <code>srv</code> back-end directly (bypassing the ingress) that the service responds correctly with the 404/422.</p> <p>When I issue the same requests through the ingress, the ingress will intermittently respond with a 502 instead of the 404/422 coming from the back-end.</p> <p>How can I have the ingress just return the 404/422 response from the back-end?</p> <p>Here's some example code to demonstrate the behavior I'm seeing (the expected status is 404):</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; for i in range(10): resp = requests.get('https://&lt;server&gt;/a/v0.11/accounts/junk', cookies=&lt;token&gt;) print(resp.status_code) 502 502 404 502 502 404 404 502 404 404 </code></pre> <p>And here's the same requests issued from a python prompt within the pod, i.e. bypassing the ingress:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; for i in range(10): ... resp = requests.get('http://0.0.0.0/a/v0.11/accounts/junk', cookies=&lt;token&gt;) ... print(resp.status_code) ... 
404 404 404 404 404 404 404 404 404 404 </code></pre> <p>Here's the output of the kubectl commands to demonstrate that the loadbalancer is set up correctly (I never get a 502 for a 2xx/3xx response from the microservice):</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES srv-799976fbcb-4dxs7 2/2 Running 0 19m 10.24.3.8 gke-develop-default-pool-ea507abc-43h7 &lt;none&gt; &lt;none&gt; srv-799976fbcb-5lh9m 2/2 Running 0 19m 10.24.1.7 gke-develop-default-pool-ea507abc-q0j3 &lt;none&gt; &lt;none&gt; srv-799976fbcb-5zvmv 2/2 Running 0 19m 10.24.2.9 gke-develop-default-pool-ea507abc-jjzg &lt;none&gt; &lt;none&gt; collection-5d9f8586d8-4zngz 2/2 Running 0 19m 10.24.1.6 gke-develop-default-pool-ea507abc-q0j3 &lt;none&gt; &lt;none&gt; collection-5d9f8586d8-cxvgb 2/2 Running 0 19m 10.24.2.7 gke-develop-default-pool-ea507abc-jjzg &lt;none&gt; &lt;none&gt; collection-5d9f8586d8-tzwjc 2/2 Running 0 19m 10.24.2.8 gke-develop-default-pool-ea507abc-jjzg &lt;none&gt; &lt;none&gt; parser-7df86f57bb-9qzpn 1/1 Running 0 19m 10.24.0.8 gke-develop-parser-pool-5931b06f-6mcq &lt;none&gt; &lt;none&gt; parser-7df86f57bb-g6d4q 1/1 Running 0 19m 10.24.5.5 gke-develop-parser-pool-5931b06f-9xd5 &lt;none&gt; &lt;none&gt; parser-7df86f57bb-jchjv 1/1 Running 0 19m 10.24.0.9 gke-develop-parser-pool-5931b06f-6mcq &lt;none&gt; &lt;none&gt; $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE srv NodePort 10.0.2.110 &lt;none&gt; 80:30141/TCP 129d collection NodePort 10.0.4.237 &lt;none&gt; 80:30270/TCP 129d kubernetes ClusterIP 10.0.0.1 &lt;none&gt; 443/TCP 130d $ kubectl get endpoints NAME ENDPOINTS AGE srv 10.24.1.7:80,10.24.2.9:80,10.24.3.8:80 129d collection 10.24.1.6:80,10.24.2.7:80,10.24.2.8:80 129d kubernetes 35.237.239.186:443 130d </code></pre>
<p>tl;dr: GCP LoadBalancer/GKE Ingress will 502 if 404/422s from the back-ends don't have response bodies.</p> <p>Looking at the LoadBalancer logs, I would see the following errors:</p> <pre><code>502: backend_connection_closed_before_data_sent_to_client 404: backend_connection_closed_after_partial_response_sent </code></pre> <p>Since everything was configured correctly (even the LoadBalancer said the backends were healthy)--backend was working as expected and no failed health checks--I experimented with a few things and noticed that all of my 404 responses had empty bodies.</p> <p>Sooo, I added a body to my 404 and 422 responses and lo and behold no more 502s!</p>
<p>I'm struggling with Kubernetes' service without a selector. The cluster is installed on AWS with the kops. I have a deployment with 3 nginx pods exposing port 80:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ngix-dpl # Name of the deployment object labels: app: nginx spec: replicas: 3 # Number of instances in the deployment selector: # Selector identifies pods to be matchLabels: # part of the deployment app: nginx # by matching of the label "app" template: # Templates describes pods of the deployment metadata: labels: # Defines key-value map app: nginx # Label to be recognized by other objects spec: # as deployment or service containers: # Lists all containers in the pod - name: nginx-pod # container name image: nginx:1.17.4 # container docker image ports: - containerPort: 80 # port exposed by container </code></pre> <p>After creation of the deployment, I noted the IP addresses:</p> <pre><code>$ kubectl get pods -o wide | awk {'print $1" " $3" " $6'} | column -t NAME STATUS IP curl Running 100.96.6.40 ngix-dpl-7d6b8c8944-8zsgk Running 100.96.8.53 ngix-dpl-7d6b8c8944-l4gwk Running 100.96.6.43 ngix-dpl-7d6b8c8944-pffsg Running 100.96.8.54 </code></pre> <p>and created a service that should serve the IP addresses:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: dummy-svc labels: app: nginx spec: ports: - protocol: TCP port: 80 targetPort: 80 --- apiVersion: v1 kind: Endpoints metadata: name: dummy-svc subsets: - addresses: - ip: 100.96.8.53 - ip: 100.96.6.43 - ip: 100.96.8.54 ports: - port: 80 name: http </code></pre> <p>The service is successfully created:</p> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dummy-svc ClusterIP 100.64.222.220 &lt;none&gt; 80/TCP 32m kubernetes ClusterIP 100.64.0.1 &lt;none&gt; 443/TCP 5d14h </code></pre> <p>Unfortunately, my attempt to connect to the nginx through the service from another pod of the same namespace fails:</p> <pre><code>$ curl 100.64.222.220 curl: (7) 
Failed to connect to 100.64.222.220 port 80: Connection refused </code></pre> <p>I can successfully connect to the nginx pods directly:</p> <pre><code>$ curl 100.96.8.53 &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; .... </code></pre> <p>I noticed that my service does not have any endpoints. But I'm not sure that the manual endpoints should be shown there:</p> <pre><code>$ kubectl get svc/dummy-svc -o yaml apiVersion: v1 kind: Service metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"dummy-svc","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}]}} creationTimestamp: "2019-11-22T08:41:29Z" labels: app: nginx name: dummy-svc namespace: default resourceVersion: "4406151" selfLink: /api/v1/namespaces/default/services/dummy-svc uid: e0aa9d01-0d03-11ea-a19c-0a7942f17bf8 spec: clusterIP: 100.64.222.220 ports: - port: 80 protocol: TCP targetPort: 80 sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <p>I understand that it is not a proper use case for services and using of a pod selector will bring it to work. But I want to understend why this configuration does not work. I don't know where to look for the solution. Any hint will be appreciated.</p>
<p>It works if you remove the "name" field from the Endpoints configuration. A named port in an Endpoints object must match a port of the same name in the Service; since the Service's port is unnamed, the port in the Endpoints must be unnamed as well, otherwise Kubernetes cannot match them and the Service ends up with no usable endpoints (which is why the connection was refused). It should look like this:</p> <pre><code>apiVersion: v1 kind: Endpoints metadata: name: dummy-svc subsets: - addresses: - ip: 172.17.0.4 - ip: 172.17.0.5 - ip: 172.17.0.6 ports: - port: 80 </code></pre>
<p>I am trying to create a GKE cluster of node size 1. However, it always create a cluster of 3 nodes. Why is that? </p> <pre><code>resource "google_container_cluster" "gke-cluster" { name = "sonarqube" location = "asia-southeast1" remove_default_node_pool = true initial_node_count = 1 } resource "google_container_node_pool" "gke-node-pool" { name = "sonarqube" location = "asia-southeast1" cluster = google_container_cluster.gke-cluster.name node_count = 1 node_config { machine_type = "n1-standard-1" metadata = { disable-legacy-endpoints = "true" } labels = { app = "sonarqube" } } } </code></pre> <p><a href="https://i.stack.imgur.com/6bwMX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6bwMX.png" alt="enter image description here"></a></p>
<p>Ok, found I can do so using <code>node_locations</code>: </p> <pre><code>resource "google_container_cluster" "gke-cluster" { name = "sonarqube" location = "asia-southeast1" node_locations = [ "asia-southeast1-a" ] remove_default_node_pool = true initial_node_count = 1 } </code></pre> <p>Without that, GKE treats <code>asia-southeast1</code> as a region and creates a regional cluster, which replicates the node count in each of the region's zones: hence 3 nodes for <code>node_count = 1</code>. Limiting <code>node_locations</code> to a single zone keeps it at 1 node.</p>
<p>In kubernetes kustomize.yml, when I use configMapGenerator to pass in some values as env, can I access those variables in deployed springboot application, application.properties file?</p> <p>kustomize.yml</p> <pre><code>... configMapGenerator: - name: test-app-config env: ./test/applicationsetup.env ... </code></pre> <p>test/applicationsetup.env</p> <pre><code>some_key=data1 some_key1=data2 </code></pre> <p>application.properties</p> <pre><code>APPLICATION_KEY=${some_key} APPLICATION_KEY1=${some_key1} </code></pre>
<p>I missed to add configMapRef inside container where I was trying to access the data.</p> <pre><code>containers: - name: test-container image: &lt;image&gt; envFrom: - configMapRef: name: test-app-config </code></pre>
<p>I'm working on a personal project involving a browser-based code editor (think <a href="https://repl.it" rel="nofollow noreferrer">https://repl.it</a>). My plan:</p> <p>1) Constantly stream the code being written to a remote docker volume on kubernetes.</p> <p>2) Execute this code when the user presses "run".</p> <p>I've already started working on the streaming infrastructure, and have a good grasp on how I'd like to do it. Regarding the code execution, however, I'm in need of some guidance.</p> <p>Idea A: I was thinking that I could have two docker containers, one web server and one "environment", sitting on the same VM. When a request would come into the webserver, it would then run a <code>docker exec ...</code> on the environment. </p> <p>Idea B: I use <code>kubectl</code>, specifically <code>kubectl exec</code> to execute the code on the container.</p> <p>A few things to note. I want to make the "environment" container interchangeable, that is, my app should be able to support python, js, etc.. Any thoughts? </p>
<ol> <li>THIS IS A VERY BAD IDEA DO NOT DO IT</li> <li>You would want to run each snippet in a new container for maximum isolation.</li> </ol>
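<p>As a sketch of the second point: each "run" could launch a short-lived Kubernetes Job that mounts the streamed code and executes it in a disposable, resource-limited container. All names, images and paths below are assumptions, not part of any existing setup:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: run-snippet-42            # hypothetical: one Job per "run" click
spec:
  ttlSecondsAfterFinished: 60     # clean up finished runs automatically
  backoffLimit: 0                 # never retry user code
  activeDeadlineSeconds: 30       # kill runaway snippets
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: runner
        image: python:3.8-slim    # swap the image per language (node, etc.)
        command: ["python", "/code/main.py"]
        resources:
          limits:
            cpu: "500m"
            memory: 128Mi
        volumeMounts:
        - name: code
          mountPath: /code
          readOnly: true
      volumes:
      - name: code
        persistentVolumeClaim:
          claimName: user-code    # hypothetical volume holding the streamed code
</code></pre> <p>Swapping <code>image</code> and <code>command</code> is what makes the "environment" interchangeable. This still runs untrusted code on your cluster, so the first point stands: you would at minimum want a sandboxed runtime (gVisor/Kata), network policies, and a dedicated node pool.</p>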
<p>I have created a MySQL k8s container and a Node.js k8s container under the same namespace. I can't connect to the MySQL DB (Sequelize).</p> <p>I have tried to connect using '''<a href="http://mysql.e-commerce.svc.cluster.local:3306" rel="nofollow noreferrer">http://mysql.e-commerce.svc.cluster.local:3306</a>'''. But I got a "SequelizeHostNotFoundError" error. </p> <p>Here are my service and deployment yaml files.</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: mysql name: mysql namespace: e-commerce spec: type: NodePort ports: - port: 3306 targetPort: 3306 nodePort: 30306 selector: app: mysql --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: mysql namespace: e-commerce spec: replicas: 1 template: metadata: labels: app: mysql spec: containers: - image: mysql:5.6 name: mysql-container env: - name: MYSQL_ROOT_PASSWORD value: password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim </code></pre>
<p>Connecting via the ClusterIP worked for me, but the better way is to use the DNS name of the cluster-local service, e.g. <code>db-mysql.default.svc.cluster.local</code> (the pattern is <code>&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local</code>, so in your case <code>mysql.e-commerce.svc.cluster.local</code>). Note that Sequelize expects a plain host name, not an <code>http://...:3306</code> URL, which is likely what triggers the "SequelizeHostNotFoundError". This way, if your cluster restarts and the IP changes, you've got it covered. </p> <p><a href="https://i.stack.imgur.com/x1JIi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x1JIi.png" alt="enter image description here"></a></p>
<ul> <li><p>I have a kubernetes cluster which is spread across 2 zones- zone1 and zone2.</p></li> <li><p>I have 2 applications- a web application and a database. The web application's configurations are stored in the database. Both the application as well as the database are deployed as stateful applications.</p></li> <li><p>The idea is to deploy 2 replica sets for web application (application-0 and application-1) and 2 replica for database (database-0 and database-1). application-0 points to database-0, application-1 points to database-1.</p></li> <li><p>Pod anti-affinity has been enabled. So preferably application-0 and application-1 will not be in same zone. Also database-0 and database-1 will not be in same zone.</p></li> <li><p>I want to ensure application-0 and database-0 are in the same zone. And application-1 and database-1 are in another zone. So that the performance of the web application is not compromised. Is that possible?</p></li> </ul>
<p>If you want to have strict separation of the workloads over the two zones - I'd suggest using <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">nodeSelector</a> on a node's zone.</p> <p>A similar result is possible with pod affinity but it's more complex and to get the clear split you describe. You'd need to use the requiredDuringScheduling / execution rules which are usually best avoided unless you really need them.</p>
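<p>A sketch of the nodeSelector approach: run one StatefulSet per zone (the zone1 pair <code>application-0</code>/<code>database-0</code>, and a second pair pinned to zone2) and select nodes by their zone label. The label key and values below are assumptions; check what your nodes actually carry with <code>kubectl get nodes --show-labels</code>:</p> <pre><code># pod template fragment for the zone1 pair; the zone2 pair uses its own zone value
spec:
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: zone1   # assumed key/value, verify on your nodes
  containers:
  - name: application
    image: my-web-app:latest    # hypothetical image
</code></pre>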
<p>There are many guides, answers, etc... that specifically show how to enable the kubernetes dashboard, and several that attempt to explain how to remotely access them, but many have an issue with regard to accepting the token once you get to the login screen.</p> <p>The problem as I understand it is that the service does not (rightfully) accept remote tokens over http. Even though I can get to the login screen I can't get into the dashboard due to the inability to use the token. How can I get around this limitation?</p>
<p>Taken from <a href="https://www.edureka.co/community/31282/is-accessing-kubernetes-dashboard-remotely-possible" rel="nofollow noreferrer">https://www.edureka.co/community/31282/is-accessing-kubernetes-dashboard-remotely-possible</a>:</p> <p>You need to make the request from the remote host look like it's coming from localhost (where the dashboard is running):</p> <p><strong>From the system running kubernetes / dashboard:</strong></p> <p>Deploy the dashboard UI:</p> <blockquote> <p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml</a></p> </blockquote> <p>Start the proxy:</p> <blockquote> <p>kubectl proxy&amp;</p> </blockquote> <p>Create a service account and get its token:</p> <blockquote> <p>kubectl create serviceaccount [account name]</p> <p>kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=default:[account name]</p> <p>kubectl get secret</p> <p>kubectl describe secret [account name]</p> </blockquote> <p><strong>From the system you wish to access the dashboard:</strong></p> <p>Create an ssh tunnel to the remote system (the system running the dashboard):</p> <blockquote> <p>ssh -L 9999:127.0.0.1:8001 -N -f -l [remote system username] [ip address of remote system] -p [port you are running ssh on]</p> </blockquote> <p>You will likely need to enter a password unless you are using keys. Once you've done all this, from the system where you established the ssh tunnel, open <a href="http://localhost:9999/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:9999/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a></p> <p>You can change the port 9999 to anything you'd like. 
</p> <p>Once you open the browser url, copy the token from the "describe secret" step and paste it in.</p>
<p>I tried to use K8s to setup spark cluster (I use standalone deployment mode, and I cannot use k8s deployment mode for some reason)</p> <p>I didn't set any cpu related arguments.</p> <p>for spark, that means:</p> <blockquote> <p>Total CPU cores to allow Spark applications to use on the machine (default: all available); only on worker</p> <p><a href="http://spark.apache.org/docs/latest/spark-standalone.html" rel="nofollow noreferrer">http://spark.apache.org/docs/latest/spark-standalone.html</a></p> </blockquote> <p>for k8s pods, that means:</p> <blockquote> <p>If you do not specify a CPU limit for a Container, then one of these situations applies:</p> <ul> <li><p>The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.</p></li> <li><p>The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the CPU limit.</p></li> </ul> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/</a></p> </blockquote> <pre><code>... Addresses: InternalIP: 172.16.197.133 Hostname: ubuntu Capacity: cpu: 4 memory: 3922Mi pods: 110 Allocatable: cpu: 4 memory: 3822Mi pods: 110 ... 
</code></pre> <p>But my spark worker only use 1 core (I have 4 cores on the worker node and the namespace has no resource limits).</p> <p>That means the spark worker pod only used 1 core of the node (which should be 4).</p> <p>How can I write yaml file to set the pod to use all available cpu cores?</p> <p>Here is my yaml file:</p> <pre><code>--- apiVersion: v1 kind: Namespace metadata: name: spark-standalone --- kind: DaemonSet apiVersion: apps/v1 metadata: name: spark-slave namespace: spark-standalone labels: k8s-app: spark-slave spec: selector: matchLabels: k8s-app: spark-slave updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 template: metadata: name: spark-slave namespace: spark-standalone labels: k8s-app: spark-slave spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-role.kubernetes.io/edge operator: Exists hostNetwork: true containers: - name: spark-slave image: spark:2.4.3 command: ["/bin/sh","-c"] args: - " ${SPARK_HOME}/sbin/start-slave.sh spark://$(SPARK_MASTER_IP):$(SPARK_MASTER_PORT) --webui-port $(SPARK_SLAVE_WEBUI_PORT) &amp;&amp; tail -f ${SPARK_HOME}/logs/* " env: - name: SPARK_MASTER_IP value: "10.4.20.34" - name: SPARK_MASTER_PORT value: "7077" - name: SPARK_SLAVE_WEBUI_PORT value: "8081" --- </code></pre>
<h2>Kubernetes - No upper bound</h2> <blockquote> <p>The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running</p> </blockquote> <p>Unless you configure a <code>limit</code> on CPU for your pod, it <em>can</em> use all available CPU resources on the node.</p> <p><strong>Consider dedicated nodes</strong></p> <p>If you are running other workloads on the same node, they also consume CPU resources, and may be guaranteed CPU if they have a CPU <code>request</code> configured. Consider using a dedicated node for your workload using <code>NodeSelector</code> and <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">Taints and Tolerations</a>.</p> <h2>Spark - No upper bound</h2> <p>You <a href="https://spark.apache.org/docs/latest/spark-standalone.html" rel="nofollow noreferrer">configure the slave</a> with parameters to <code>start-slave.sh</code>, e.g. <code>--cores X</code> to <em>limit</em> CPU core usage.</p> <blockquote> <p>Total CPU cores to allow Spark applications to use on the machine (<strong>default: all available</strong>); only on worker</p> </blockquote> <h2>Multithreaded workload</h2> <p>In the end, whether a pod can use multiple CPU cores depends on how your application uses threads. Some applications only use a single thread, so the application must be designed for <strong>multithreading</strong> and have work that can be <strong>parallelized</strong>.</p>
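<p>For example, applied to the DaemonSet from the question (a sketch; the core and memory values are assumptions), you can make the worker's core count explicit and mirror it in the pod's resource requests so the scheduler reserves the capacity:</p> <pre><code>containers:
- name: spark-slave
  image: spark:2.4.3
  command: ["/bin/sh", "-c"]
  args:
  - >
    ${SPARK_HOME}/sbin/start-slave.sh
    spark://$(SPARK_MASTER_IP):$(SPARK_MASTER_PORT)
    --cores 4
    --webui-port $(SPARK_SLAVE_WEBUI_PORT)
    &amp;&amp; tail -f ${SPARK_HOME}/logs/*
  resources:
    requests:        # request only; no "limits", so the CPU ceiling stays unbounded
      cpu: "4"
      memory: 1Gi
</code></pre> <p>If the worker still reports a single core in use, also check the application side (e.g. <code>spark.cores.max</code> / executor core settings), since the driver decides how many of the offered cores it actually claims.</p>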
<p>I have some problems with a Kubernetes ExternalName Service. I want to access the server 'dummy.restapiexample.com' from the cluster. I created the following service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: dummy-svc spec: type: ExternalName externalName: dummy.restapiexample.com </code></pre> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dummy-svc ExternalName &lt;none&gt; dummy.restapiexample.com &lt;none&gt; 33m kubernetes ClusterIP 100.64.0.1 &lt;none&gt; 443/TCP 6d19h </code></pre> <p>But when I try to access the service from a pod in the same namespace, I'm getting HTTP code 403.</p> <pre><code>$ curl -v http://dummy-svc/api/v1/employee/1 &gt; GET /api/v1/employee/1 HTTP/1.1 &gt; User-Agent: curl/7.35.0 &gt; Host: dummy-svc &gt; Accept: */* &gt; &lt; HTTP/1.1 403 Forbidden &lt; Content-Type: text/plain &lt; Date: Sat, 23 Nov 2019 14:21:05 GMT &lt; Content-Length: 9 &lt; </code></pre> <p>I can access the external server w/o any problem:</p> <pre><code>$ curl -v http://dummy.restapiexample.com/api/v1/employee/1 &gt; GET /api/v1/employee/1 HTTP/1.1 &gt; User-Agent: curl/7.35.0 &gt; Host: dummy.restapiexample.com &gt; Accept: */* &lt; HTTP/1.1 200 OK ... &lt; Content-Length: 104 {"id":"1","employee_name":"56456464646","employee_salary":"2423","employee_age":"23","profile_image":""} </code></pre> <p>What is wrong with my code? Any hint will be highly appreciated. The cluster is running on AWS and installed with kops.</p>
<p>As pointed out by Patrik W, the service works correctly: it routes requests to the remote server. Ping reaches the remote server:</p> <pre><code>$ ping dummy-svc PING dummy.restapiexample.com (52.209.246.67) 56(84) bytes of data. 64 bytes from ec2-52-209-246-67.eu-west-1.compute.amazonaws.com (52.209.246.67): icmp_seq=1 ttl=62 time=1.29 ms </code></pre> <p>The 403 is returned by the remote server itself. An ExternalName service only creates a DNS alias, so the request is still sent with <code>Host: dummy-svc</code>, which the remote web server does not recognize. Passing the expected header, e.g. <code>curl -H 'Host: dummy.restapiexample.com' http://dummy-svc/api/v1/employee/1</code>, avoids the 403.</p> <p>@Patrik W: Thanks for the help.</p>
<p>I have issued the below command in AWS Firecracker to configure the VM. I have only 8 vCPUs on my host machine.</p> <pre><code>curl --unix-socket /tmp/firecracker.socket -i \ -X PUT 'http://localhost/machine-config' \ -H 'Accept: application/json' \ -H 'Content-Type: application/json' \ -d '{ "vcpu_count": 20, "mem_size_mib": 1024, "ht_enabled": false }' </code></pre> <p>In Kubernetes, if we try to configure a pod with more vCPUs than the host's maximum, it will move to the Pending state. But Firecracker showed no error or warning; it just started the VM.</p> <p>Can anyone kindly explain how Firecracker handles the vCPUs?</p>
<p>Firecracker is a VMM, and each vCPU is just a thread running on the host system. The host kernel time-slices those 20 vCPU threads across your 8 physical cores, so overcommitting vCPUs is allowed; it degrades performance rather than failing. </p> <p>I wouldn't mix up Kubernetes resource management with how VMMs behave -- they are orthogonal. Firecracker starts virtual machines, not pods. </p> <p>If you were to use an OCI runtime in Kubernetes that utilizes Firecracker for isolation, the number of requests/limits for the resulting pod would be restricted by Kubernetes (scheduler/kubelet). Again, this is orthogonal to how the VMM behaves.</p>
<p>When I am giving hostpath it is showing that it is read-only file system since I am new to kubernetes I didn't find any other way please let me know such that is there any other way of implementation of volumes and I am doing this on GKE here is my yaml code</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "10" creationTimestamp: "2019-11-22T10:52:16Z" generation: 17 labels: app: dataset name: dataset namespace: default resourceVersion: "283767" selfLink: /apis/apps/v1/namespaces/default/deployments/dataset uid: 26111fe8-0d16-11ea-a66e-42010aa00042 spec: progressDeadlineSeconds: 600 replicas: 2 revisionHistoryLimit: 10 selector: matchLabels: app: dataset strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: app: dataset spec: containers: - env: - name: RABBIT_MQ_HOST valueFrom: configMapKeyRef: key: RABBIT_MQ_HOST name: dataset-config - name: RABBIT_MQ_USER valueFrom: configMapKeyRef: key: RABBIT_MQ_USER name: dataset-config - name: RABBIT_MQ_PASSWORD valueFrom: configMapKeyRef: key: RABBIT_MQ_PASSWORD name: dataset-config - name: DATASET_DB_HOST valueFrom: configMapKeyRef: key: DATASET_DB_HOST name: dataset-config - name: DATASET_DB_NAME valueFrom: configMapKeyRef: key: DATASET_DB_NAME name: dataset-config - name: LICENSE_SERVER valueFrom: configMapKeyRef: key: LICENSE_SERVER name: dataset-config - name: DATASET_THUMBNAIL_SIZE valueFrom: configMapKeyRef: key: DATASET_THUMBNAIL_SIZE name: dataset-config - name: GATEWAY_URL valueFrom: configMapKeyRef: key: GATEWAY_URL name: dataset-config - name: DEFAULT_DATASOURCE_ID valueFrom: configMapKeyRef: key: DEFAULT_DATASOURCE_ID name: dataset-config - name: RABBIT_MQ_QUEUE_NAME valueFrom: configMapKeyRef: key: RABBIT_MQ_QUEUE_NAME name: dataset-config - name: RABBIT_MQ_PATTERN valueFrom: configMapKeyRef: key: RABBIT_MQ_PATTERN name: dataset-config image: 
gcr.io/gcr-testing-258008/dataset@sha256:8416ec9b023d4a4587a511b855c2735b25a16dbb1a15531d8974d0ef89ad3d73 imagePullPolicy: IfNotPresent name: dataset-sha256 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: ./data/uploads name: dataset-volume-uploads - mountPath: ./data/thumbnails name: dataset-volume-thumbnails dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - hostPath: path: /build/uploads type: "" name: dataset-volume-uploads - hostPath: path: /build/thumbnails type: "" name: dataset-volume-thumbnails status: availableReplicas: 2 conditions: - lastTransitionTime: "2019-11-23T07:19:13Z" lastUpdateTime: "2019-11-23T07:19:13Z" message: Deployment has minimum availability. reason: MinimumReplicasAvailable status: "True" type: Available - lastTransitionTime: "2019-11-23T06:31:03Z" lastUpdateTime: "2019-11-23T07:24:42Z" message: ReplicaSet "dataset-75b46f868f" is progressing. 
reason: ReplicaSetUpdated status: "True" type: Progressing observedGeneration: 17 readyReplicas: 2 replicas: 3 unavailableReplicas: 1 updatedReplicas: 1 </code></pre> <p>Here is the description of my pod:</p> <pre><code> Path: /build/uploads HostPathType: dataset-volume-thumbnails: Type: HostPath (bare host directory volume) Path: /build/thumbnails HostPathType: default-token-x2wmw: Type: Secret (a volume populated by a Secret) SecretName: default-token-x2wmw Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 96s default-scheduler Successfully assigned default/dataset-75b46f868f-wffm7 to gke-teric-ai-default-pool-41929025-fxnx Warning BackOff 15s (x6 over 93s) kubelet, gke-teric-ai-default-pool-41929025-fxnx Back-off restarting failed container Normal Pulled 2s (x5 over 95s) kubelet, gke-teric-ai-default-pool-41929025-fxnx Container image "gcr.io/gcr-testing-258008/dataset@sha256:8416ec9b023d4a4587a511b855c2735b25a16dbb1a15531d8974d0ef89ad3d73" already present on machine Normal Created 2s (x5 over 95s) kubelet, gke-teric-ai-default-pool-41929025-fxnx Created container Warning Failed 1s (x5 over 95s) kubelet, gke-teric-ai-default-pool-41929025-fxnx Error: failed to start container "dataset-sha256": Error response from daemon: error while creating mount source path '/build/uploads': mkdir /build/uploads: read-only file system </code></pre> <p>So here is the problem: even though I set chmod permissions dynamically, it does not allow write operations. I have also tried persistent volumes, and that did not work either, so please let me know how I should mount the volumes.</p>
<ul> <li>First of all, do not use hostPath, go for persistence storage</li> <li>I would use StatefulSets instead of Deployment if you need to store some data.</li> </ul> <p>I was able to create both hostPath on my GKE instance manually as root user. </p> <p>I guess you have to specify type for hostPath to create request directory if it doesn't exist. <code>type: DirectoryOrCreate</code> you can read more about <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> and available type values. Moreover, if you are using hostPath permissions of your user inside a container must match ownership on the node so it makes it more complicated, of course, you could run it as root, but it is not recommended way.</p> <p>To sum it up, just use persistent storage provisioned by google. If you encounter problems with permissions you probably need <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init contaner</a> to change permission or you have to set proper <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="nofollow noreferrer">fsGroup</a> for your container.</p>
<p>It's possible to perform an authorization(rule-based like) into Kubernetes ingress(like kong, nginx). For example, i have this:</p> <p>apiVersion: extensions/v1beta1</p> <pre><code>kind: Ingress metadata: name: foo-bar spec: rules: - host: api.foo.bar http: paths: - path: /service backend: serviceName: service.foo.bar servicePort: 80 </code></pre> <p>But before redirect to /service, I need to perform a call in my authorization api to valid if the request token has the rule to pass for /service.</p> <p>Or I really need to use an API gateway behind ingress like a spring zuul to do this?</p>
<p><code>Ingress</code> manifest is just input for a controller. You also need an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a>, an proxy that understand the <code>Ingress</code> object. Kong and Nginx is two examples of implementation.</p> <p><a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx Ingress Controller</a> is provided from the Kubernetes community and it has an example of <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/auth/oauth-external-auth" rel="nofollow noreferrer">configuring an external oauth2 proxy</a> using annotations</p> <pre><code>annotations: nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth" nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri" </code></pre>
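<p>Applied to the Ingress from the question, an external-auth setup with the NGINX Ingress Controller could look like this (the auth service URL is a placeholder for your authorization API; a 2xx response lets the request through to <code>/service</code>, a 401/403 rejects it):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
  annotations:
    # placeholder URL for your authorization API
    nginx.ingress.kubernetes.io/auth-url: "http://auth-api.default.svc.cluster.local/validate"
spec:
  rules:
  - host: api.foo.bar
    http:
      paths:
      - path: /service
        backend:
          serviceName: service.foo.bar
          servicePort: 80
</code></pre> <p>The auth endpoint receives the original request headers, so your authorization API can inspect the token (e.g. the <code>Authorization</code> header) and decide per request, without a separate gateway like Zuul behind the ingress.</p>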
<p>What is the best way to define a rule that allows egress only to the kube-apiserver with a Network Policy?</p> <p>While there's a <code>Service</code> resource for the kube-apiserver, there's not <code>Pods</code>, so as far as I know this can't be done with labels. With IP whitelisting, this isn't guaranteed to work across clusters. Is there any recommended practice here?</p>
<p>You have to use the IP address of the apiserver. You cannot use labels.<br> To find the IP address of the apiserver run:<br> <code>kubectl cluster-info</code> </p> <p>Look for a line like this in the output:<br> <code>Kubernetes master is running at https://&lt;ip&gt;</code><br> This is the IP address of your apiserver IP.</p> <p>The network policy should look like this (assuming the apiserver IP is 34.76.197.27):</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: egress-apiserver spec: podSelector: {} policyTypes: - Egress egress: - to: - ipBlock: cidr: 34.76.197.27/32 ports: - protocol: TCP port: 443 </code></pre> <p><a href="https://orca.tufin.io/netpol/?yaml=apiVersion:%20networking.k8s.io%2Fv1%0Akind:%20NetworkPolicy%0Ametadata:%0A3name:%20egress-apiserver%0Aspec:%0A3podSelector:%20%7B%7D%0A3policyTypes:%0A3-%20Egress%0A3egress:%0A3-%20to:%0A5-%20ipBlock:%0A9cidr:%2034.76.197.27%2F32%0A5ports:%0A5-%20protocol:%20TCP%0A7port:%20443%0A" rel="nofollow noreferrer">The policy above applies to all pods in the namespaces it is applied to</a>.<br> To select specific pods, edit the podSelector section with the tags of the pods that require apiserver access:</p> <pre><code> podSelector: matchLabels: app: apiserver-allowed </code></pre> <p>Remember that the default egress policy is ALLOW ALL which means other pods will still have access to the apiserver.<br> <a href="https://orca.tufin.io/netpol/?yaml=apiVersion:%20networking.k8s.io%2Fv1%0Akind:%20NetworkPolicy%0Ametadata:%0A3name:%20deny-all-egress%0Aspec:%0A3podSelector:%20%7B%7D%0A3policyTypes:%0A3-%20Egress%0A" rel="nofollow noreferrer">You can change this behavior by adding a "BLOCK ALL" egress policy per namespace</a> but remember not to block access to the DNS server and other essential services.<br> <a href="https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d" rel="nofollow noreferrer">For more info see "Egress and 
DNS (Pitfall!)" in this post</a>.</p> <p>Note that in some cases there may be more than one apiservers (for scalability) in which case you will need to add all the IP addresses.</p>
<p>I want to do something like</p> <pre><code>nodeSelector: role: "!database" </code></pre> <p>in order to schedule pods on nodes which don't host the database.</p> <p>Thank you</p>
<p>from here: <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/</a></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: role operator: NotIn values: - database </code></pre>
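<p>An alternative sketch with a similar effect is to taint the database nodes, so that only pods which explicitly tolerate the taint (i.e. the database pods) can be scheduled there; everything else is kept off without needing any selector:</p> <pre><code># taint the database node(s) once:
#   kubectl taint nodes &lt;node-name&gt; role=database:NoSchedule

# toleration added only to the database pod spec:
tolerations:
- key: "role"
  operator: "Equal"
  value: "database"
  effect: "NoSchedule"
</code></pre>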
<p>I have issues with CoreDNS on some nodes are in Crashloopback state due to error trying to reach the kubernetes internal service.</p> <p>This is a new K8s cluster deployed using Kubespray, the network layer is Weave with Kubernetes version 1.12.5 on Openstack. I've already tested the connection to the endpoints and have no issue reaching to 10.2.70.14:6443 for example. But telnet from the pods to 10.233.0.1:443 is failing.</p> <p>Thanks in advance for the help</p> <pre><code>kubectl describe svc kubernetes Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: &lt;none&gt; Selector: &lt;none&gt; Type: ClusterIP IP: 10.233.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 10.2.70.14:6443,10.2.70.18:6443,10.2.70.27:6443 + 2 more... Session Affinity: None Events: &lt;none&gt; </code></pre> <p>And from CoreDNS logs:</p> <pre><code>E0415 17:47:05.453762 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:311: Failed to list *v1.Service: Get https://10.233.0.1:443/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused E0415 17:47:05.456909 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Endpoints: Get https://10.233.0.1:443/api/v1/endpoints?limit=500&amp;resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused E0415 17:47:06.453258 1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:318: Failed to list *v1.Namespace: Get https://10.233.0.1:443/api/v1/namespaces?limit=500&amp;resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused </code></pre> <p>Also, checking out the logs of kube-proxy from one of the problematic nodes revealed the following errors:</p> <pre><code>I0415 19:14:32.162909 1 graceful_termination.go:160] Trying to delete rs: 10.233.0.1:443/TCP/10.2.70.36:6443 I0415 19:14:32.162979 1 graceful_termination.go:171] Not 
deleting, RS 10.233.0.1:443/TCP/10.2.70.36:6443: 1 ActiveConn, 0 InactiveConn I0415 19:14:32.162989 1 graceful_termination.go:160] Trying to delete rs: 10.233.0.1:443/TCP/10.2.70.18:6443 I0415 19:14:32.163017 1 graceful_termination.go:171] Not deleting, RS 10.233.0.1:443/TCP/10.2.70.18:6443: 1 ActiveConn, 0 InactiveConn E0415 19:14:32.215707 1 proxier.go:430] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 7 failed ) </code></pre>
<p>I had exactly the same problem, and it turned out that my kubespray config was wrong, specifically the nginx ingress setting <code>ingress_nginx_host_network</code>.</p> <p>As it turns out, you have to set <code>ingress_nginx_host_network: true</code> (defaults to false).</p> <p><strong>If you do not want to rerun the whole kubespray script, edit the nginx ingress daemon set</strong></p> <p><code>$ kubectl -n ingress-nginx edit ds ingress-nginx-controller</code></p> <ol> <li>Add <code>--report-node-internal-ip-address</code> to the command line:</li> </ol> <pre><code>spec: container: args: - /nginx-ingress-controller - --configmap=$(POD_NAMESPACE)/ingress-nginx - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services - --udp-services-configmap=$(POD_NAMESPACE)/udp-services - --annotations-prefix=nginx.ingress.kubernetes.io - --report-node-internal-ip-address # &lt;- new </code></pre> <ol start="2"> <li>Set the following two properties on the same level as e.g <code>serviceAccountName: ingress-nginx</code>:</li> </ol> <pre><code>serviceAccountName: ingress-nginx hostNetwork: true # &lt;- new dnsPolicy: ClusterFirstWithHostNet # &lt;- new </code></pre> <p>Then save and quit <code>:wq</code>, and check the pod status with <code>kubectl get pods --all-namespaces</code>.</p> <p><em>Source:</em> <a href="https://github.com/kubernetes-sigs/kubespray/issues/4357" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/issues/4357</a></p>
<p>Why the property <strong>STATUS</strong> does not show?</p> <pre><code>kubectl get componentstatuses NAME AGE scheduler &lt;unknown&gt; controller-manager &lt;unknown&gt; etcd-2 &lt;unknown&gt; etcd-3 &lt;unknown&gt; etcd-1 &lt;unknown&gt; etcd-4 &lt;unknown&gt; etcd-0 &lt;unknown&gt; </code></pre> <p>I am missing also the property <strong>MESSAGE</strong> and <strong>ERROR</strong></p>
<p>If you want to see all the details, you can do:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get componentstatuses -o yaml </code></pre> <p>This outputs the YAML representations of the objects with all the properties whereas the default <code>kubectl get</code> output format outputs only a selection of the properties.</p>
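<p>If you only want the health fields rather than the full YAML, a <code>jsonpath</code> output works as well. This is a sketch based on the standard <code>ComponentStatus</code> layout (name plus top-level <code>conditions</code>):</p> <pre><code>kubectl get componentstatuses -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.conditions[0].type}{"\t"}{.conditions[0].status}{"\t"}{.conditions[0].message}{"\n"}{end}'
</code></pre> <p>This prints one line per component with the <code>Healthy</code> condition, its status, and the message that the table view no longer shows.</p>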
<p>I have a k8s cluster with pods, deployments etc. I am using helm to deploy my app. I want to delete all the deployments and am using the command below:</p> <pre><code>helm delete myNamespace --purge
</code></pre> <p>If I look at the status of my pods, I see that they are in Terminating state; the problem is that this takes time. Is there any way to remove them instantly, with some force flag or something?</p>
<p>You can try the following command:</p> <pre><code>helm delete myNamespace --purge --no-hooks </code></pre> <p>Also, you can use kubectl to forcefully delete the pods, instead of waiting for termination.</p> <p>Here's what I got from this link. <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/</a></p> <p>If you want to delete a Pod forcibly using kubectl version >= 1.5, do the following:</p> <pre><code>kubectl delete pods &lt;pod&gt; --grace-period=0 --force </code></pre> <p>If you’re using any version of kubectl &lt;= 1.4, you should omit the --force option and use:</p> <pre><code>kubectl delete pods &lt;pod&gt; --grace-period=0 </code></pre> <p>If even after these commands the pod is stuck on Unknown state, use the following command to remove the pod from the cluster:</p> <pre><code>kubectl patch pod &lt;pod&gt; -p '{"metadata":{"finalizers":null}}' </code></pre> <p>Always perform force deletion of StatefulSet Pods carefully and with complete knowledge of the risks involved.</p>
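<p>If the goal is to clear out everything at once rather than pod by pod, the force flags can be combined with <code>--all</code> (assuming the pods live in a namespace named <code>myNamespace</code>):</p> <pre><code>kubectl delete pods --all --namespace myNamespace --grace-period=0 --force
</code></pre> <p>Keep in mind this skips graceful shutdown for every pod in the namespace, so only use it when you really don't care about in-flight work.</p>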
<p>I don't believe that it is a good idea to host a cluster of message queues that persist messages inside Kubernetes. I don't think hosting any cluster inside kubernetes nor any stateful system inside Kubernetes is a good idea. Kubernetes is designed for stateless restful services, to scale in and out, etc. First of all, am I right? If I am right, then I just need some good valid points to convince some of my colleagues. Thanks</p>
<p>Kubernetes was primarily designed for stateless applications, and managing state in Kubernetes introduces additional complexities that you have to manage (persistence, backups, recovery, replication, high availability, etc.).</p> <p>Let's cite some sources (emphases added):</p> <blockquote> <p>While it’s perfectly possible to run stateful workloads like databases in Kubernetes with enterprise-grade reliability, it requires a <strong>large investment of time and engineering</strong> that it may not make sense for your company to make [...] It’s usually <strong>more cost-effective to use managed services instead</strong>.</p> </blockquote> <p><em>Cloud Native DevOps with Kubernetes, p. 13, O'Reilly, 2019.</em></p> <blockquote> <p>The decision to use Statefulsets should be taken judiciously because usually stateful applications require <strong>much deeper management</strong> that the orchestrator cannot really manage well yet.</p> </blockquote> <p><em>Kubernetes Best Practices, Chapter 16, O'Reilly, 2019.</em></p> <p><strong>But...</strong></p> <p>Support for running stateful applications in Kubernetes is steadily increasing, the main tools being <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a></strong> and <strong><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">Operators</a></strong>:</p> <blockquote> <p>Stateful applications require much more due diligence, but the reality of running them in clusters <strong>has been accelerated by the introduction of StatefulSets and Operators</strong>.</p> </blockquote> <p><em>Kubernetes Best Practices, Chapter 16, O'Reilly, 2019.</em></p> <blockquote> <p>Managing stateful applications such as database systems in Kubernetes is still a complex distributed system and needs to be carefully orchestrated using the native Kubernetes primitives of pods, ReplicaSets, Deployments, and StatefulSets, but 
<strong>using Operators that have specific application knowledge built into them as Kubernetes-native APIs may help to elevate these systems into production-based clusters</strong>.</p> </blockquote> <p><em>Kubernetes Best Practices, Chapter 16, O'Reilly, 2019.</em></p> <h2>Conclusion</h2> <p>As a best practice at the time of this writing, I would say:</p> <ul> <li>Avoid managing state inside Kubernetes if you can. Use external services (e.g. cloud services, like <a href="https://aws.amazon.com/dynamodb/" rel="nofollow noreferrer">DynamoDB</a> or <a href="https://www.cloudamqp.com/" rel="nofollow noreferrer">CloudAMQP</a>) for managing state.</li> <li>If you have to manage state inside the Kubernetes cluster, <a href="https://operatorhub.io/" rel="nofollow noreferrer">check if an Operator exists</a> for the type of application that you want to run, and if yes, use it.</li> </ul>
<p>I am trying to perform my first deployment of an application in a Kubernetes cluster in GCP.</p> <p>I have the image of my application in Container Registration.</p> <pre><code>eu.gcr.io/diaphanum/bonsai-landing:v1 </code></pre> <p>The manifest file I use is deploy-ironia.yaml:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: bonsai-landing spec: selector: matchLabels: app: bonsai-landing replicas: 3 template: metadata: labels: app: bonsai-landing spec: containers: - name: bonsai-landing image: "eu.gcr.io/diaphanum/bonsai-landing:v1" ports: - containerPort: 8080 </code></pre> <p>Use the following command to deploy from the GCP shell:</p> <pre><code>kubectl apply -f deploy-ironia.yaml </code></pre> <p>And I get the following error:</p> <pre><code>error: error validating "deploy-ironia.yaml": error validating data: [ValidationError (Deployment.spec): unknown field "containers" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError (Deployment.spec) : "mandatory" field selector "is missing in io.k8s.api.apps.v1.DeploymentSpec, ValidationError (Deployment.spec): the mandatory field" template "is missing in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, disable validation with --validate = false </code></pre> <p>Any suggestions to solve it?</p> <p>UPDATE:1 </p> <p>When run with --validate=false the message is:</p> <pre><code>The Deployment "landing" is invalid: * spec.selector: Required value * spec.template.metadata.labels: Invalid value: map[string]string(nil): `selector` does not match template `labels` * spec.template.spec.containers: Required value </code></pre>
<p>Your <code>deploy-ironia.yaml</code> file should be:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-landing
spec:
  selector:
    matchLabels:
      app: bonsai-landing
  replicas: 3
  template:
    metadata:
      labels:
        app: bonsai-landing
    spec:
      containers:
      - name: bonsai-landing
        image: "eu.gcr.io/diaphanum/bonsai-landing:v1"
        ports:
        - containerPort: 8080
</code></pre>
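<p>As a sanity check before applying, you can let kubectl validate the manifest without creating anything (a client-side dry run; on the kubectl version from the question the flag is a plain boolean):</p> <pre><code>kubectl apply -f deploy-ironia.yaml --dry-run --validate=true
</code></pre> <p>If the indentation is wrong, this reports the same <code>ValidationError</code>s without touching the cluster.</p>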
<p>Have a couple of question on headless services:</p> <ol> <li>Is it required to use a headless service for statefulsets? Can I use the normal service.yml for the stateful sets also? </li> <li>Is it required to use a service &amp; then a headless service? Can I just use headless service instead?</li> <li>Can I not use the pods to attach to the service instead of headless service?</li> </ol>
<ol> <li>You don't have to use a Headless Service technically, but it often makes sense to use one if you want to take advantage of the sticky identity of each Pod in a StatefulSet (i.e. address a specific Pod by name rather than any of the Pods at random as it's the case with a normal Service).</li> <li>Not sure how you mean this. You create a Headless Service by defining a ClusterIP Service and setting the <code>clusterIP</code> field to <code>None</code>.</li> <li>You can use the Pod names directly too with some tools (e.g. <code>kubectl port-forward</code>), but the Headless Service creates DNS names for all the Pods so you can address them in a more general way.</li> </ol> <p><strong>References:</strong></p> <ul> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#headless-services</a></li> </ul>
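<p>For illustration, a minimal headless Service (names are placeholders) differs from a normal ClusterIP Service only in the <code>clusterIP: None</code> line:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None        # &lt;- this is what makes the Service headless
  selector:
    app: my-app
  ports:
  - port: 80
</code></pre> <p>If a StatefulSet sets <code>serviceName: my-app-headless</code>, each Pod gets a stable DNS name such as <code>my-app-0.my-app-headless.default.svc.cluster.local</code>.</p>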
<p>We have an EKS cluster running and we are looking for best practices to ship application logs from pods to Elastic. In the EKS workshop there is an option to ship the logs to cloudwatch and then to Elastic.</p> <p>Wondered if there is an option to ship the logs directly to Elastic, or to understand best practices.</p> <p>Additional requirement: We need the logs to determine from which namespace the logs is coming from and to deliver a dedicated index</p>
<p>You can deploy an EFK (Elasticsearch, Fluentd, Kibana) stack in the Kubernetes cluster. Follow this reference: <a href="https://github.com/acehko/kubernetes-examples/tree/master/efk/production" rel="nofollow noreferrer">https://github.com/acehko/kubernetes-examples/tree/master/efk/production</a></p> <p>Fluentd is deployed as a DaemonSet so that one replica runs on each node, collecting the logs from all pods and pushing them to Elasticsearch. The Kubernetes metadata filter attaches the namespace name to every log record, which you can use to route each namespace to its own index.</p>
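<p>To get a dedicated index per namespace, the Elasticsearch output of Fluentd can key the index on the namespace taken from the Kubernetes metadata. A sketch for <code>fluent-plugin-elasticsearch</code> (the host and buffer settings are placeholders, and it assumes the <code>kubernetes_metadata</code> filter runs first):</p> <pre><code>&lt;match kubernetes.**&gt;
  @type elasticsearch
  host elasticsearch.logging.svc.cluster.local
  port 9200
  logstash_format true
  # use the pod's namespace as the index prefix, e.g. "my-namespace-2019.11.25"
  logstash_prefix ${$.kubernetes.namespace_name}
  &lt;buffer $.kubernetes.namespace_name&gt;
    @type memory
    flush_interval 5s
  &lt;/buffer&gt;
&lt;/match&gt;
</code></pre>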
<p>We have 8 java microservices talking to each other in kubeneters cluster. Each microservice is bundled with auth library which intercepts and validates/renews JWT token for each REST request to controllers. </p> <p>Scenario: From Frontend, we get access token for the first time, Authentication gets successful. Lets say</p> <ol> <li>Frontend hit 'Microservice A' with access token - Successful</li> <li>'Microservice A' internally hits 'Microservice B' via restTemplate. My 'Microservice B' also needs logged in user details.</li> </ol> <p>Issue: I have to pass same access token from 'A' to 'B' but I am not able to get access token in Controller/Service logic but can get only in filters where token is being validated. I can get token in Rest Controllers by adding following argument in all rest methods in controller:</p> <pre><code>@RequestHeader (name="Authorization") String token </code></pre> <p>But I dont want to go with this approach as I have to pass this token to everywhere till end and have to declare this argument in all APIS.</p> <p>I want to get token from TokenStore by passing authentication object. We are using Oauth2 and I checked the code in library, There are many tokenStore providers.</p> <p>In DefaultTokenServices.java class, I am calling </p> <pre><code>Authentication auth = SecurityContextHolder.getContext().getAuthentication() // Passed this auth to tokenStore String token = tokenStore.getAccessToken(auth).getValue(); // NullPointerException </code></pre> <p>My code is going through JWTTokenStore provider which is returning null. I checked, there is a provider called InMemoryTokenStore.class which actually extrActs token from store. But my flow is not going into in memory implementation. </p> <p>Is there any way I can get token afterwards without grabbing it in controller via arguments? or how can I enable/use inMemoryTokenStore?</p> <p>Also recommend something better for kubernetes intercommunication authentication?</p> <p>TIA</p>
<p>It looks like you're using Spring (and Spring Security), so I believe the relevant part of the docs is the part on <a href="https://docs.spring.io/spring-security/site/docs/current/reference/html5/#bearer-token-propagation" rel="nofollow noreferrer">Bearer Token Propagation</a>.</p> <p>Its recommendation is to use a <code>WebClient</code> (the recommended replacement for <code>RestTemplate</code> as of Spring 5) that uses the provided <code>ServletBearerExchangeFilterFunction</code> to automagically propagate the JWT token from the incoming request into the outgoing request:</p> <pre><code>@Bean
public WebClient rest() {
    return WebClient.builder()
            .filter(new ServletBearerExchangeFilterFunction())
            .build();
}
</code></pre> <p>On <code>RestTemplate</code>, the docs say:</p> <blockquote> <p>"There is no dedicated support for RestTemplate at the moment, but you can achieve propagation quite simply with your own interceptor"</p> </blockquote> <p>and the following example is provided:</p> <pre><code>@Bean
RestTemplate rest() {
    RestTemplate rest = new RestTemplate();
    rest.getInterceptors().add((request, body, execution) -&gt; {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        if (authentication == null) {
            return execution.execute(request, body);
        }
        if (!(authentication.getCredentials() instanceof AbstractOAuth2Token)) {
            return execution.execute(request, body);
        }
        AbstractOAuth2Token token = (AbstractOAuth2Token) authentication.getCredentials();
        request.getHeaders().setBearerAuth(token.getTokenValue());
        return execution.execute(request, body);
    });
    return rest;
}
</code></pre> <p>I don't believe you need to be looking at <code>TokenStore</code>s if all you're trying to do is propagate the token. Remember everything relevant about a JWT should be inside the token itself.
(Which is why <a href="https://docs.spring.io/spring-security/oauth/apidocs/org/springframework/security/oauth2/provider/token/store/JwtTokenStore.html" rel="nofollow noreferrer">the doc for the JwtTokenStore</a> explains that it doesn't actually store anything, but just pulls info out of the token, and will return null for some methods, including the <code>getAccessToken()</code> method you're calling.)</p>
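<p>For completeness, calling Microservice B from Microservice A with the <code>WebClient</code> bean above could then look like this (the service URL and endpoint are hypothetical):</p> <pre><code>@Service
public class MicroserviceBClient {

    private final WebClient rest;

    public MicroserviceBClient(WebClient rest) {
        this.rest = rest;
    }

    public String getUserDetails() {
        // the bearer token of the incoming request is attached
        // automatically by ServletBearerExchangeFilterFunction
        return rest.get()
                .uri("http://microservice-b/api/user-details")
                .retrieve()
                .bodyToMono(String.class)
                .block();
    }
}
</code></pre>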
<p>I have a Jhipster application which I want to deploy to Kubernetes. I used the <code>jhipster kubernetes</code> command to create all the k8s objects and I provided a Docker Hub repository to push them to. The Docker Hub repository is a private one.</p> <p>The deployment object looks like:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: demodevices
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demodevices
      version: 'v1'
  template:
    metadata:
      labels:
        app: demodevices
        version: 'v1'
    spec:
      initContainers:
      - name: init-ds
        image: busybox:latest
        command:
        - '/bin/sh'
        - '-c'
        - |
          while true
          do
            rt=$(nc -z -w 1 demodevices-postgresql 5432)
            if [ $? -eq 0 ]; then
              echo "DB is UP"
              break
            fi
            echo "DB is not yet reachable;sleep for 10s before retry"
            sleep 10
          done
      containers:
      - name: demodevices-app
        image: myRepo/demo:demodevices-1.0.0
        env:
        ...
        resources:
        ...
        ports:
        ...
        readinessProbe:
        ...
        livenessProbe:
        ...
      imagePullSecrets:
      - name: regcred
</code></pre> <p>Because I used a private Docker Hub repo, I added the <code>imagePullSecret</code>. The secret is created and deployed to k8s.</p> <p>When applying the file, in the pods I see the following messages:</p> <pre><code>Warning  Failed   &lt;invalid&gt; (x4 over &lt;invalid&gt;)  kubelet, k8node1  Failed to pull image "busybox:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/library/busybox/manifests/latest: unauthorized: incorrect username or password
Warning  Failed   &lt;invalid&gt; (x4 over &lt;invalid&gt;)  kubelet, k8node1  Error: ErrImagePull
Normal   BackOff  &lt;invalid&gt; (x6 over &lt;invalid&gt;)  kubelet, k8node1  Back-off pulling image "busybox:latest"
Warning  Failed   &lt;invalid&gt; (x6 over &lt;invalid&gt;)  kubelet, k8node1  Error: ImagePullBackOff
</code></pre> <p>As I understand it, it tries to pull the busybox:latest image using the credentials for the private repository. The expected result is to pull busybox:latest without errors and pull my custom image from my private repo. How do I fix the above issue?</p>
<p>This error is not connected to the fact that you are using <code>imagePullSecret</code>.</p> <p><a href="https://kubernetes.io/docs/concepts/containers/images/#creating-a-secret-with-a-docker-config" rel="nofollow noreferrer">Review</a> the process you used to create your secret. Here is an example:</p> <pre><code>kubectl create secret docker-registry anyname \
   --docker-server=docker.io \
   --docker-username=&lt;username&gt; \
   --docker-password=&lt;password&gt; \
   --docker-email=&lt;email&gt;
</code></pre> <p>I have reproduced your case, and I get the same error when I create the secret with wrong credentials.</p>
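<p>A quick way to double-check what actually ended up in the secret (a command from the Kubernetes docs; adjust the secret name, here <code>regcred</code>, to yours):</p> <pre><code>kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
</code></pre> <p>The decoded JSON should show the registry, username and auth entry you expect; if not, recreate the secret with the correct credentials.</p>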
<p>I'm trying to run the playbook</p> <pre><code>ansible-playbook \ -i inventory/preprod/inventory.ini \ --private-key ~/.ssh/id_rsa_stagging \ -u cloud-user \ --become \ --become-user=root \ cluster.yml \ --tags resolvconf </code></pre> <p>And it returns this error:</p> <pre><code>fatal: [tivit-aiops-k8s-preprd-app-1]: FAILED! =&gt; { &quot;msg&quot;: &quot;The field 'environment' has an invalid value, which includes an undefined variable. The error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_hostname'&quot; } </code></pre> <p>I don't understand what I'm doing wrong ....</p>
<p>Take a look at the <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/inventory.ini" rel="nofollow noreferrer">sample inventory</a> provided from Kubespray:</p> <pre><code># ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12  # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13  # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6
</code></pre> <p>There's a name which can be used to reference this host through the inventory file (for assigning groups to the host), and there's also an <code>ansible_host</code> value which specifies the IP that Ansible will connect to (separately from the name).</p>
<p>I'm not having any luck in changing the name of a worker node in AWS EKS. I have not been able to find any documentation regarding how the node is named by default.</p> <p>Currently my nodes are named as follows, for example</p> <pre><code>NAME STATUS ROLES AGE VERSION ip-10-241-111-216.us-west-2.compute.internal Ready 44m v1.14.7-eks-1861c5 </code></pre> <p>I've tried passing in <code>--hostname-override</code> through the user data but it didn't seem to have any effect.</p>
<p>This is a known limitation in AWS EKS. It's discussed in <a href="https://github.com/kubernetes/kubernetes/issues/54482" rel="nofollow noreferrer">this GitHub issue</a> and it's still open.</p>
<p>I am new to K8s. I am researching the benefits of using k8s on GKE/AKS over the Azure VM Scale Sets used in our system.</p> <p>For k8s, let's say we deploy a worker application to a 5-node cluster, where each node can run 10 pods. Each node is a 16-core, 64G-memory VM.</p> <p>My question is: since we are paying the cloud service provider by node, not by pod, why would we want to scale pods down on a node at all?</p> <p>Does it make more sense to horizontally scale nodes up and down only, while keeping the maximum number of pods on each node?</p> <p>In other words, do 10, 20, 30, 40 or 50 pods make more sense, while 11, 21, 31, 41 pods etc. sound like wasting resources we paid for?</p> <p>I must have missed some key points of pod scaling. Please point them out. Thanks.</p>
<p>If all your pods are all identical, then yes, it likely makes sense to scale them only in node-sized equivalents (but maybe in that case you want to actually make sure the nodes can scale in increments that make sense to your workload size -- e.g. do you really need another 32 or 64 or 96 cores worth of your application every time? This is one of the things that the new Google <a href="https://cloud.google.com/batch/" rel="nofollow noreferrer">batch for Kubernetes</a> product tries to address -- including rightsizing of machines).</p> <p>But think if you have a heterogeneous set of workloads (far more likely with k8s!) -- then one of the advantages of k8s is that you can binpack different workloads onto the same node.</p> <p>Imagine if one workload needs a lot of RAM, but not much CPU, and another workload needs a lot of CPU and not a lot of RAM -- you wouldn't want them to both scale up in one-node increments, you'd want the pods to scale with the demands on each application, and the nodes to scale when you can no longer binpack onto the existing machines.</p>
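<p>To make the binpacking point concrete, here is a sketch of two pods whose requests complement each other on a 16-core/64G node (the images and numbers are made up):</p> <pre><code># a RAM-heavy workload: little CPU, lots of memory
apiVersion: v1
kind: Pod
metadata:
  name: ram-heavy
spec:
  containers:
  - name: app
    image: example/ram-heavy
    resources:
      requests:
        cpu: "2"
        memory: 48Gi
---
# a CPU-heavy workload: lots of CPU, little memory
apiVersion: v1
kind: Pod
metadata:
  name: cpu-heavy
spec:
  containers:
  - name: app
    image: example/cpu-heavy
    resources:
      requests:
        cpu: "12"
        memory: 8Gi
</code></pre> <p>Scheduled together they use the node well in both dimensions; scaled in whole-node increments, each workload would waste most of one dimension.</p>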
<p>As the title says, I can't figure out how you're supposed to do this. The pricing calculator allows for it, so I'm assuming it's possible.</p> <p>I've tried:</p> <p>1) Creating a new cluster </p> <p>2) Creating a vm and adding it to an existing cluster, then deleting the initial node (Tried with and without the scaleset option)</p> <p>For #1, I see no options to add a reserved instance during cluster initialization. For #2, I see no options to add an existing vm to an existing aks cluster.</p> <p>Anyone know how to do this?</p>
<blockquote> <p>After you buy an Azure Reserved Virtual Machine Instance, the reservation discount is automatically applied to virtual machines that match the attributes and quantity of the reservation. A reservation covers the compute costs of your virtual machines. <a href="https://learn.microsoft.com/en-us/azure/billing/billing-understand-vm-reservation-charges" rel="nofollow noreferrer">Source</a></p> </blockquote> <p>In the <a href="https://learn.microsoft.com/en-us/azure/billing/billing-understand-vm-reservation-charges#instance-size-flexibility-setting" rel="nofollow noreferrer">documentation</a> you can see that this also applies to AKS.</p> <p>In other words, you buy a reserved instance, and after you create your AKS cluster selecting instances of the same size, the discount is applied automatically. There is no option to select during cluster creation.</p> <blockquote> <p>A reservation discount applies to the base VMs that you purchase from the Azure Marketplace.</p> </blockquote> <p>The node VMs that AKS provisions are such base VMs, which is why the reservation discount covers them as well.</p>
<p>Wordpress has been written as stateful application. However, when it comes to container strategy, some hacking is needed.</p> <p>In particular, I was thinking about a storage configuration as follows:</p> <ul> <li>a Persistent Volume Claim (PVC) for the content serving: mount <code>/var/www/html/wp-content/</code></li> <li>another PVC for the WP-Admin: mount <code>/var/www/html/wp-admin/</code></li> <li>an NFS-based (Google Filestore) PVC for the user uploads: mount <code>/var/www/html/wp-content/uploads</code></li> </ul> <p>The k8s manifest should look like:</p> <pre><code>volumeMounts: - name: admin mountPath: /var/www/html/wp-admin/ - name: content mountPath: /var/www/html/ - name: uploads mountPath: /var/www/html/wp-content/uploads/ </code></pre> <p>Since I never tried this setup, does it make sense? Do you have any additional suggestions to take into account in having an ideal Wordpress deployment on Kubernetes?</p> <p><strong>Separation strategy:</strong></p> <p>Furthermore, are there well-known strategy to separate the WP-Admin in a completely separated container?</p>
<p>WordPress was not designed for containers. Your strategy will break quickly. </p> <p>Set up an NFS server in a container to store the entire configuration and files for Apache, PHP, Let's Encrypt, WordPress and logging. </p> <p>Don't forget to create a separate instance for MySQL. </p> <p>Basically, you want everything related to this WordPress instance stored on NFS file shares. You will want to hand install everything so that you know exactly where the files are being stored. </p> <p>When you are finished you will have a container that only mounts NFS shares and launches services. The data/files for the services is on NFS shares. </p> <p>MySQL provides the database. I recommend Cloud SQL to improve reliability and split the most critical service away from Kubernetes because backups and snapshots are much easier to create and schedule.</p> <blockquote> <p>Furthermore, are there well-known strategy to separate the WP-Admin in a completely separated container?</p> </blockquote> <p>Do not attempt that. You want the root and all subfolders of your WordPress website stored as an NFS share. Do not try to split the directories apart.</p> <blockquote> <p>an NFS-based (Google Filestore) PVC for the user uploads: mount /var/www/html/wp-content/uploads</p> </blockquote> <p>Make sure you double-check pricing. This is an expensive solution for storing WordPress. I recommend creating an NFS container connected to a persistent volume and using NFS to carve out the data. This does bring up a single point of failure for NFS. Depending on how reliable you want the setup to be, Filestore may be the correct solution. Otherwise, you can also look at Active/Passive NFS or even NFS clustering on multiple containers.</p>
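<p>If you go the NFS route, the share is surfaced to the WordPress pods through a PersistentVolume/PersistentVolumeClaim pair. A minimal sketch (server IP, export path and size are placeholders):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany    # NFS lets several pods mount the same share
  nfs:
    server: 10.0.0.10          # IP of the NFS server/container
    path: /exports/wordpress   # exported directory holding the site root
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 50Gi
</code></pre> <p>The WordPress deployment then mounts the claim at <code>/var/www/html</code> as a single volume, instead of splitting the directory tree apart.</p>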
<p>My yaml file.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: -http: paths: - path: /?(.*) backend: serviceName: nginx-service servicePort: 80 </code></pre> <pre><code>kubectl apply -f file.yaml error: error validating &quot;ingress.yaml&quot;: error validating data: ValidationError(Ingress.spec.rules): invalid type for io.k8s.api.networking.v1beta1.IngressSpec.rules: got &quot;map&quot;, expected &quot;array&quot;; if you choose to ignore these errors, turn validation off with --validate=false </code></pre>
<p>This is a YAML formatting error in your Ingress manifest. The <code>rules</code> field must be an array, but <code>-http:</code> (with no space after the dash) is parsed as a map key instead of a list item — hence "got &quot;map&quot;, expected &quot;array&quot;". You can use</p> <pre><code>kubectl explain ingress.spec.rules
</code></pre> <p>to see the expected structure.</p> <p>The hostname is also missing in the ingress rule; the corrected manifest should look like this:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: {replace-me-with-hostname}
    http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: nginx-service
          servicePort: 80
</code></pre> <hr />
<p><strong>Problem</strong></p> <p>I would like to host multiple services on a single domain name under different paths. The problem is that I'm unable to get request path rewriting working using <code>nginx-ingress</code>.</p> <p><strong>What I've tried</strong></p> <p>I've installed nginx-ingress using <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-on-digitalocean-kubernetes-using-helm" rel="nofollow noreferrer">these instructions</a>:</p> <pre class="lang-sh prettyprint-override"><code>helm install stable/nginx-ingress --name nginx-ingress --set controller.publishService.enabled=true
</code></pre> <pre><code>CHART                 APP VERSION
nginx-ingress-0.3.7   1.5.7
</code></pre> <p>The example works great with hostname-based backends:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
</code></pre> <p>However, I can't get path rewriting to work. This version redirects requests to the <code>hello-kubernetes-first</code> service, but doesn't do the path rewrite, so I get a 404 error from that service because it's looking for the /foo directory within that service (which doesn't exist).</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
        path: /foo
</code></pre> <p>I've also tried <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">this example</a> for paths / rewriting:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: first.testdomain.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
        path: /foo(/|$)(.*)
</code></pre> <p>But the requests aren't even directed to the <code>hello-kubernetes-first</code> service.</p> <p>It appears that my rewrite configuration isn't making it to the <code>/etc/nginx/nginx.conf</code> file. When I run the following, I get no results:</p> <pre class="lang-sh prettyprint-override"><code>kubectl exec nginx-ingress-nginx-ingress-XXXXXXXXX-XXXXX cat /etc/nginx/nginx.conf | grep rewrite
</code></pre> <p>How do I get the path rewriting to work?</p> <p><strong>Additional information:</strong></p> <ul> <li>kubectl / kubernetes version: <code>v1.14.8</code></li> <li>Hosting on Azure Kubernetes Service (AKS)</li> </ul>
<p>This is not likely to be an issue with AKS, as the components you use are working on top of Kubernetes layer. However, if you want to be sure you can deploy this on top of minikube locally and see if the problem persists. </p> <p>There are also few other things to consider:</p> <ol> <li>There is a <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-tls" rel="nofollow noreferrer">detailed guide</a> about creating ingress controller on AKS. The guide is up to date and confirmed to be working fine. </li> </ol> <blockquote> <p>This article shows you how to deploy the NGINX ingress controller in an Azure Kubernetes Service (AKS) cluster. The cert-manager project is used to automatically generate and configure Let's Encrypt certificates. Finally, two applications are run in the AKS cluster, each of which is accessible over a single IP address.</p> </blockquote> <ol start="2"> <li>You may also want to use alternative like <a href="https://github.com/helm/charts/tree/master/stable/traefik" rel="nofollow noreferrer">Traefik</a>:</li> </ol> <blockquote> <p>Traefik is a modern HTTP reverse proxy and load balancer made to deploy microservices with ease.</p> </blockquote> <ol start="3"> <li>Remember that:</li> </ol> <blockquote> <p>Operators will typically wish to install this component into the <code>kube-system</code> namespace where that namespace's default service account will ensure adequate privileges to watch Ingress resources cluster-wide.</p> </blockquote> <p>Please let me know if that helped. </p>
<p>I'm trying to get rid of a bunch of orphaned volumes in heketi. When I try, I get "Error" and then information about the volume I just tried to delete, serialized as JSON. There's nothing else. I've tried to dig into the logs but they don't reveal anything.</p> <p>This is the command I used to try and delete the volume:</p> <pre><code>heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret ${KEY} volume delete 22f1a960651f0f16ada20a15d68c7dd6 Error: {"size":30,"name":"vol_22f1a960651f0f16ada20a15d68c7dd6","durability":{"type":"none","replicate":{},"disperse":{}},"gid":2008,"glustervolumeoptions":["","cluster.post-op-delay-secs 0"," performance.client-io-threads off"," performance.open-behind off"," performance.readdir-ahead off"," performance.read-ahead off"," performance.stat-prefetch off"," performance.write-behind off"," performance.io-cache off"," cluster.consistent-metadata on"," performance.quick-read off"," performance.strict-o-direct on"," storage.health-check-interval 0",""],"snapshot":{"enable":true,"factor":1},"id":"22f1a960651f0f16ada20a15d68c7dd6","cluster":"e924a50aa93d9eae1132c60eb1f36310","mount":{"glusterfs":{"hosts":["&lt;SECRET&gt;"],"device":"&lt;SECRET&gt;:vol_22f1a960651f0f16ada20a15d68c7dd6","options":{"backup-volfile-servers":""}}},"blockinfo":{},"bricks":[{"id":"0f4c6d7f605e9368bfe3dc7cc117b69a","path":"/var/lib/heketi/mounts/vg_970f0faf60f8dfc6f6a0d6bd25bdea7c/brick_0f4c6d7f605e9368bfe3dc7cc117b69a/brick","device":"970f0faf60f8dfc6f6a0d6bd25bdea7c","node":"107894a855c9d2c34509b18272e6c298","volume":"22f1a960651f0f16ada20a15d68c7dd6","size":31457280}]} </code></pre> <p>Notice that the second line only contains Error, then the info about the volume serialized as json.</p> <p>The volume doesn't exist in gluster. 
I used the below commands to verify the volume was no longer there:</p> <pre><code>kubectl -n default exec -t -i glusterfs-rgz9g bash gluster volume info &lt;shows volume i did not delete&gt; </code></pre> <p>Kubernetes does not show a PersistentVolumeClaim or PersistentVolume:</p> <pre><code>kubectl get pvc -A No resources found. kubectl get pv -A No resources found. </code></pre> <p>I tried looking at the heketi logs, but it only reports a GET for the volume:</p> <pre><code>kubectl -n default logs heketi-56f678775c-nrbwd [negroni] 2019-11-25T21:29:19Z | 200 | 1.407715ms | &lt;SECRET&gt;:8080 | GET /volumes/22f1a960651f0f16ada20a15d68c7dd6 [negroni] 2019-11-25T21:29:19Z | 200 | 1.111984ms | &lt;SECRET&gt;:8080 | GET /volumes/22f1a960651f0f16ada20a15d68c7dd6 [negroni] 2019-11-25T21:29:19Z | 200 | 1.540357ms | &lt;SECRET&gt;:8080 | GET /volumes/22f1a960651f0f16ada20a15d68c7dd6 </code></pre> <p>I've tried setting a more verbose log level, but the setting doesn't stick:</p> <pre><code>heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret ${KEY} loglevel set debug Server log level updated heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret ${KEY} loglevel get info </code></pre> <p>My CLI uses</p> <pre><code>heketi-cli -v heketi-cli v9.0.0 </code></pre> <p>And the Heketi server is running:</p> <pre><code>kubectl -n default exec -t -i heketi-56f678775c-nrbwd bash heketi -v Heketi v9.0.0-124-gc2e2a4ab </code></pre> <p>Based on the logs, I believe heketi-cli has an issue and never actually sends the POST or DELETE request to the heketi server.</p> <p>How do I proceed to debug this? At this point my only workaround is to recreate my cluster, but I'd like to avoid that, especially if something like this comes back. </p>
<p>Looks like there's a bug in <code>heketi-cli</code> because if I manually craft the request using ruby and curl, I'm able to delete the volume:</p> <pre><code>TOKEN=$(ruby makeToken.rb DELETE /volumes/22f1a960651f0f16ada20a15d68c7dd6) curl -X DELETE -H "Authorization: Bearer $TOKEN" http://10.233.21.178:8080/volumes/22f1a960651f0f16ada20a15d68c7dd6 </code></pre> <p>Please see <a href="https://github.com/heketi/heketi/blob/master/docs/api/api.md#authentication-model" rel="nofollow noreferrer">https://github.com/heketi/heketi/blob/master/docs/api/api.md#authentication-model</a> for how to generate the JWT token.</p> <p>I manually created the request, expecting to get a better error message that the command-line tool had swallowed. Turns out the CLI was actually busted.</p> <p>Ruby code for making the JWT token (makeToken.rb). You need to fill in pass and server.</p> <pre><code>#!/usr/bin/env ruby require 'jwt' require 'digest' user = "admin" pass = "&lt;SECRET&gt;" server = "http://localhost:8080/" method = "#{ARGV[0]}" uri = "#{ARGV[1]}" payload = {} headers = { iss: 'admin', iat: Time.now.to_i, exp: Time.now.to_i + 600, qsh: Digest::SHA256.hexdigest("#{method}&amp;#{uri}") } token = JWT.encode headers, pass, 'HS256' print("#{token}") </code></pre>
<p>Is there some tool available that could tell me whether a K8s YAML configuration (to-be-supplied to <code>kubectl apply</code>) is valid for the target Kubernetes version without requiring a connection to a Kubernetes cluster?</p> <p>One concrete use-case here would be to detect incompatibilities before actual deployment to a cluster, just because some already-deprecated label has been finally dropped in a newer Kubernetes version, e.g. as has happened for Helm and the switch to Kubernetes 1.16 (see <a href="https://github.com/helm/helm/issues/6374" rel="nofollow noreferrer">Helm init fails on Kubernetes 1.16.0</a>):</p> <p>Dropped:</p> <pre><code>apiVersion: extensions/v1beta1 </code></pre> <p>New:</p> <pre><code>apiVersion: apps/v1 </code></pre> <p>I want to check these kind of incompatibilities within a CI system, so that I can reject it before even attempting to deploy it.</p>
<p>Just run the below command to validate the syntax:</p> <pre><code>kubectl create -f &lt;yaml-file&gt; --dry-run </code></pre> <p>In fact, the dry-run option validates both the YAML syntax and the object schema. You can capture the output (and exit code), and if there is no error, rerun the command without dry-run. </p>
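<p>In a CI pipeline this check can be wired up as a simple gate; a minimal sketch, assuming the manifest is called <code>deployment.yaml</code>:</p>

```shell
# Reject the build if validation fails; deploy only on success
if kubectl create -f deployment.yaml --dry-run > /dev/null; then
    kubectl create -f deployment.yaml
else
    echo "manifest failed validation" >&2
    exit 1
fi
```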
<p>I have a sample CRD defined as </p> <p>crd.yaml</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: testconfig.demo.k8s.com namespace: testns spec: group: demo.k8s.com versions: - name: v1 served: true storage: true scope: Namespaced names: plural: testconfigs singular: testconfig kind: TestConfig </code></pre> <p>I want to create a custom resource based on the above CRD, but I don't want to assign a fixed name to the resource; rather, I want to use the generateName field. So I generated the below cr.yaml. But when I apply it, it gives an error that the name field is mandatory.</p> <pre><code>apiVersion: demo.k8s.com/v1 kind: TestConfig metadata: generateName: test-name- namespace: testns spec: image: testimage </code></pre> <p>Any help is highly appreciated.</p>
<p>You should use <code>kubectl create</code> to create your CR with <code>generateName</code>.</p> <p>"<code>kubectl apply</code> will verify the existence of the resources before take action. If the resources do not exist, it will firstly create them. If use <code>generateName</code>, the resource name is not yet generated when verify the existence of the resource." <a href="https://github.com/kubernetes/kubernetes/issues/44501#issuecomment-294255660" rel="noreferrer">source</a></p>
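<p>For example (assuming the manifest above is saved as <code>cr.yaml</code>), the API server fills in a random suffix after the prefix:</p>

```shell
# "create" (not "apply") lets the API server generate the name
kubectl create -f cr.yaml

# the created object gets a name like test-name-xxxxx (random suffix)
kubectl get testconfigs -n testns
```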
<p>I want to calculate the CPU and memory percentage of resource utilization of an individual pod in Kubernetes. For that, I am using the metrics server API.</p> <ol> <li>From the metrics server, I get the utilization from this command</li> </ol> <p>kubectl top pods --all-namespaces</p> <pre><code>kube-system coredns-5644d7b6d9-9whxx 2m 6Mi kube-system coredns-5644d7b6d9-hzgjc 2m 7Mi kube-system etcd-manhattan-master 10m 53Mi kube-system kube-apiserver-manhattan-master 23m 257Mi </code></pre> <p>But I want the percentage utilization of an individual pod, both CPU% and MEM%.</p> <p>From this output of the top command, it is not clear what total amount of CPU and memory these consumed amounts should be divided by.</p> <p>I don't want to use the Prometheus operator. I saw one formula for it:</p> <pre><code>sum (rate (container_cpu_usage_seconds_total{image!=""}[1m])) by (pod_name) </code></pre> <p>Can I calculate it with the <a href="https://github.com/kubernetes-sigs/metrics-server" rel="noreferrer">MetricsServer</a> API?</p> <p>I thought to calculate it like this: </p> <p><strong>CPU%</strong> = ((2+2+10+23)/ Total CPU MILLICORES)*100</p> <p><strong>MEM%</strong> = ((6+7+53+257)/AllocatableMemory)* 100</p> <p>Please tell me if I am right or wrong, because I didn't see any standard formula for calculating pod utilization in the Kubernetes documentation.</p>
<p>Unfortunately <code>kubectl top pods</code> provides only <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/walkthrough.md#quantity-values" rel="nofollow noreferrer">quantity values</a> and not percentages.</p> <p><a href="https://github.com/kubernetes-sigs/metrics-server/issues/193#issuecomment-451309811" rel="nofollow noreferrer">Here</a> is a good explanation of how to interpret those values.</p> <p>It is currently not possible to list pod resource usage in percentages with a <code>kubectl top</code> command.</p> <p>You could still choose <a href="https://prometheus.io/docs/visualization/grafana/" rel="nofollow noreferrer">Grafana with Prometheus</a>, but it was already stated that you don't want to use it (however, maybe another member of the community with a similar problem would, so I am mentioning it here).</p> <p><strong>EDIT:</strong></p> <p>Your formulas are correct. They will calculate how much CPU/Mem is being consumed by all Pods relative to the total CPU/Mem you have got. </p> <p>I hope it helps. </p>
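<p>To illustrate those formulas with the numbers from the question (the node totals here are assumptions; substitute the allocatable values reported by <code>kubectl describe node</code>):</p>

```shell
cpu_used=$((2 + 2 + 10 + 23))    # millicores, summed from `kubectl top pods`
node_cpu=2000                    # assumed allocatable CPU: 2 cores = 2000m
mem_used=$((6 + 7 + 53 + 257))   # MiB, summed from `kubectl top pods`
node_mem=4096                    # assumed allocatable memory: 4Gi = 4096Mi

awk -v u="$cpu_used" -v t="$node_cpu" 'BEGIN { printf "CPU%%: %.2f\n", 100*u/t }'
awk -v u="$mem_used" -v t="$node_mem" 'BEGIN { printf "MEM%%: %.2f\n", 100*u/t }'
```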
<h1>Objective</h1> <p>I want to deploy Airflow on Kubernetes where pods have access to the same DAGs, in a Shared Persistent Volume. According to the documentation (<a href="https://github.com/helm/charts/tree/master/stable/airflow#using-one-volume-for-both-logs-and-dags" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/airflow#using-one-volume-for-both-logs-and-dags</a>), it seems I have to set and pass these values to Helm: <code>extraVolume</code>, <code>extraVolumeMount</code>, <code>persistence.enabled</code>, <code>logsPersistence.enabled</code>, <code>dags.path</code>, <code>logs.path</code>.</p> <h1>Problem</h1> <p>Any custom values I pass when installing the official Helm chart result in errors similar to:</p> <pre><code>Error: YAML parse error on airflow/templates/deployments-web.yaml: error converting YAML to JSON: yaml: line 69: could not find expected ':' </code></pre> <ul> <li>Works fine: <code>microk8s.helm install --namespace "airflow" --name "airflow" stable/airflow</code></li> <li><strong>Not working</strong>:</li> </ul> <pre><code>microk8s.helm install --namespace "airflow" --name "airflow" stable/airflow \ --set airflow.extraVolumes=/home/*user*/github/airflowDAGs \ --set airflow.extraVolumeMounts=/home/*user*/github/airflowDAGs \ --set dags.path=/home/*user*/github/airflowDAGs/dags \ --set logs.path=/home/*user*/github/airflowDAGs/logs \ --set persistence.enabled=false \ --set logsPersistence.enabled=false </code></pre> <ul> <li><strong>Also not working</strong>: <code>microk8s.helm install --namespace "airflow" --name "airflow" stable/airflow --values=values_pv.yaml</code>, with <code>values_pv.yaml</code>: <a href="https://pastebin.com/PryCgKnC" rel="nofollow noreferrer">https://pastebin.com/PryCgKnC</a> <ul> <li>Edit: Please change <code>/home/*user*/github/airflowDAGs</code> to a path on your machine to replicate the error.</li> </ul></li> </ul> <h1>Concerns</h1> <ol> <li>Maybe it is going wrong because of these
lines in the default <code>values.yaml</code>:</li> </ol> <pre><code>## Configure DAGs deployment and update dags: ## ## mount path for persistent volume. ## Note that this location is referred to in airflow.cfg, so if you change it, you must update airflow.cfg accordingly. path: /home/*user*/github/airflowDAGs/dags </code></pre> <p>How do I configure <code>airflow.cfg</code> in a Kubernetes deployment? In a non-containerized deployment of Airflow, this file can be found in <code>~/airflow/airflow.cfg</code>.</p> <ol start="2"> <li>Line 69 in <code>deployments-web.yaml</code> refers to: <a href="https://github.com/helm/charts/blob/master/stable/airflow/templates/deployments-web.yaml#L69" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/airflow/templates/deployments-web.yaml#L69</a></li> </ol> <p>That line contains <code>git</code>. Are the <code>.yaml</code> files wrongly configured, so that the chart falsely tries to use <code>git pull</code> but fails because no git path is specified?</p> <h1>System</h1> <ul> <li>OS: Ubuntu 18.04 (single machine)</li> <li>MicroK8s: v1.15.4 Rev:876</li> <li><code>microk8s.kubectl version</code>: v1.15.4</li> <li><code>microk8s.helm version</code>: v2.14.3</li> </ul> <h1>Question</h1> <p>How do I correctly pass the right values to the Airflow Helm chart to be able to deploy Airflow on Kubernetes with Pods having access to the same DAGs and logs on a Shared Persistent Volume?</p>
<p>Not sure if you have this solved yet, but if you haven't I think there is a pretty simple way close to what you are doing.</p> <p>All of the Deployments, Services, Pods need the persistent volume information - where it lives locally and where it should go within each kube kind. It looks like the values.yaml for the chart provides a way to do this. I'll only show this with dags below, but I think it should be roughly the same process for logs as well.</p> <p>So the basic steps are, 1) tell kube where the 'volume' (directory) lives on your computer, 2) tell kube where to put that in your containers, and 3) tell airflow where to look for the dags. So, you can copy the values.yaml file from the helm repo and alter it with the following.</p> <ol> <li>The <code>airflow</code> section</li> </ol> <p>First, you need to create a volume containing the items in your local directory (this is the <code>extraVolumes</code> below). Then, that needs to be mounted - luckily putting it here will template it into all kube files. Once that volume is created, then you should tell it to mount <code>dags</code>. So basically, <code>extraVolumes</code> creates the volume, and <code>extraVolumeMounts</code> mounts the volume.</p> <pre><code>airflow: extraVolumeMounts: # this will get the volume and mount it to that path in the container - name: dags mountPath: /usr/local/airflow/dags # location in the container it will put the directory mentioned below. 
extraVolumes: # this will create the volume from the directory - name: dags hostPath: path: "path/to/local/directory" # For you this is something like /home/*user*/github/airflowDAGs/dags </code></pre> <ol start="2"> <li>Tell the airflow config where the dags live in the container (same yaml section as above).</li> </ol> <pre><code>airflow: config: AIRFLOW__CORE__DAGS_FOLDER: "/usr/local/airflow/dags" # this needs to match the mountPath in the extraVolumeMounts section </code></pre> <ol start="3"> <li>Install with helm and your new <code>values.yaml</code> file.</li> </ol> <pre><code>helm install --namespace "airflow" --name "airflow" -f local/path/to/values.yaml stable/airflow </code></pre> <p>In the end, this should allow airflow to see your local directory in the dags folder. If you add a new file, it should show up in the container - though it may take a minute to show up in the UI - I don't think the dagbag process is constantly running? Anyway, hope this helps!</p>
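<p>Putting steps 1 and 2 together, the relevant section of the copied <code>values.yaml</code> would look something like this (the host path is the one from the question; adjust it to your machine):</p>

```yaml
airflow:
  extraVolumeMounts:            # mounts the volume into the containers
    - name: dags
      mountPath: /usr/local/airflow/dags
  extraVolumes:                 # creates the volume from the local directory
    - name: dags
      hostPath:
        path: /home/user/github/airflowDAGs/dags
  config:
    # must match the mountPath above
    AIRFLOW__CORE__DAGS_FOLDER: "/usr/local/airflow/dags"
```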
<p>We are setting up a Jenkins-based CI pipeline on our Kubernetes cluster (Rancher if that matters) and up to now we have used the official <code>maven:3-jdk-11-slim</code> image for experiments. Unfortunately it does not provide any built-in way of overriding the default settings.xml to use a mirror, which we need - preferably just by setting an environment variable. I am not very familiar with Kubernetes so I may be missing something simple.</p> <p>Is there a simple way to add a file to the image? Should I use another image with this functionality built in?</p> <pre><code> pipeline { agent { kubernetes { yaml """ kind: Pod metadata: name: kaniko spec: containers: - name: maven image: maven:3-jdk-11-slim command: - cat tty: true - name: kaniko .... etc </code></pre>
<p><strong>Summary</strong>: you can mount <strong>your</strong> settings.xml file on the pod at some specific path and use that file with the command <code>mvn -s /my/path/to/settings.xml</code>.</p> <p>Crou's ConfigMap approach is one way to do it. However, since the <code>settings.xml</code> file usually contains credentials, I would treat it as <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Secrets</a>.</p> <p>You can create a Secret in Kubernetes with the command:</p> <pre><code>$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml </code></pre> <p>The pod definition will be something like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: kaniko spec: containers: - name: maven image: maven:3-jdk-11-slim command: - cat tty: true volumeMounts: - name: mvn-settings-vol mountPath: /my/path/to volumes: - name: mvn-settings-vol secret: secretName: mvn-settings </code></pre> <p><strong>Advanced/Optional</strong>: If you practice "Infrastructure as Code", you might want to save the manifest file for that secret for recovery. This can be achieved by this command after the secret is already created: </p> <pre><code>$ kubectl get secrets mvn-settings -o yaml </code></pre> <p>You can keep the <code>secrets.yml</code> file, but do not check it into any VCS/GitHub repo, since this version of <code>secrets.yml</code> contains unencrypted data.</p> <p>Some k8s administrators may have <a href="https://github.com/bitnami-labs/sealed-secrets" rel="noreferrer">kubeseal</a> installed.
In that case, I'd recommend using kubeseal to get an encrypted version of <code>secrets.yml</code>.</p> <pre><code>$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml --dry-run -o json | kubeseal --controller-name=controller --controller-namespace=k8s-sealed-secrets --format=yaml &gt;secrets.yml # Actually create secrets $ kubectl apply -f secrets.yml </code></pre> <p>The <code>controller-name</code> and <code>controller-namespace</code> should be obtained from the k8s administrators. This <code>secrets.yml</code> contains encrypted data of your <code>settings.xml</code> and can be safely checked into a VCS/GitHub repo.</p>
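<p>Inside the pipeline's <code>maven</code> container, the build step then simply points Maven at the mounted file (the goals here are just an example):</p>

```shell
# -s overrides the default ~/.m2/settings.xml with the mounted Secret
mvn -s /my/path/to/settings.xml -B clean package
```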
<p>I am trying to setup an IPv6 kubernetes cluster. I have two IPv6 interfaces and one docker interface (172.17.0.1). The docker interface is setup by docker itself.</p> <pre><code>kahou@kahou-master:~$ ip a 1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens192: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:50:56:af:1d:25 brd ff:ff:ff:ff:ff:ff inet6 2001:420:293:242d:250:56ff:feaf:1d25/64 scope global dynamic mngtmpaddr noprefixroute valid_lft 2591949sec preferred_lft 604749sec inet6 fe80::250:56ff:feaf:1d25/64 scope link valid_lft forever preferred_lft forever 3: ens224: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:50:56:af:a5:15 brd ff:ff:ff:ff:ff:ff inet6 2000::250:56ff:feaf:a515/64 scope global dynamic mngtmpaddr noprefixroute valid_lft 2591933sec preferred_lft 604733sec inet6 2000::3/64 scope global valid_lft forever preferred_lft forever inet6 fe80::250:56ff:feaf:a515/64 scope link valid_lft forever preferred_lft forever 4: docker0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:53:f2:46:8c brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever 5: tunl0@NONE: &lt;NOARP,UP,LOWER_UP&gt; mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000 link/ipip 0.0.0.0 brd 0.0.0.0 </code></pre> <p>When I initialize my cluster thru kubeadm, all the hostnetwork pods IP are using the docker IP addresses:</p> <pre><code>etcd-kahou-master 1/1 Running 0 177m 172.17.0.1 kahou-master &lt;none&gt; kube-apiserver-kahou-master 1/1 Running 0 177m 172.17.0.1 kahou-master &lt;none&gt; 
kube-controller-manager-kahou-master 1/1 Running 0 177m 172.17.0.1 kahou-master &lt;none&gt; kube-proxy-pnq7g 1/1 Running 0 178m 172.17.0.1 kahou-master &lt;none&gt; kube-scheduler-kahou-master 1/1 Running 0 177m 172.17.0.1 kahou-master &lt;none&gt; </code></pre> <p>Is it possible to tell kubeadm which interface I use during the installation?</p> <p>Below is my api-server call (generated by kubeadm)</p> <pre><code>kube-apiserver --authorization-mode=Node,RBAC --bind-address=2001:420:293:242d:250:56ff:feaf:1d25 --service-cluster-ip-range=fd03::/112 --advertise-address=2001:420:293:242d:250:56ff:feaf:1d25 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key </code></pre> <p>This is my kubeadm config file:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha2 kind: MasterConfiguration api: advertiseAddress: 2001:420:293:242d:250:56ff:feaf:1d25 apiServerExtraArgs: bind-address: 
2001:420:293:242d:250:56ff:feaf:1d25 service-cluster-ip-range: fd03::/112 controllerManagerExtraArgs: node-cidr-mask-size: "96" cluster-cidr: fd02::/80 service-cluster-ip-range: fd03::/112 networking: serviceSubnet: fd03::/112 nodeRegistration: node-ip: 2001:420:293:242d:250:56ff:feaf:1d25 </code></pre>
<p>A helpful note for configuring <code>node-ip</code> to be passed on to kubelet via kubeadm config file: according to <a href="https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1" rel="nofollow noreferrer">https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1</a> and some experimentation, it should be under <code>kubeletExtraArgs</code> of the <code>nodeRegistration</code> section (example using IP from your config file): </p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration nodeRegistration: kubeletExtraArgs: node-ip: 2001:420:293:242d:250:56ff:feaf:1d25 </code></pre>
<p>My service running in a pod outputs too many logs and causes low ephemeral storage. As a result, the pod is evicted and other services can't deploy to k8s. </p> <p>So how can I determine the pod's ephemeral-storage resource requests and limits to avoid this situation? I can't find any best practice about ephemeral storage.</p>
<p>Note that by default, if you have not set any limits on <code>ephemeral-storage</code>, the pod has access to the entire disk of the node it is running on, so if you are certain that the pod is being evicted because of this, then you are certain that the pod consumed it all. You can check this from the kubelet logs, as kubelet is the guy in charge of detecting this behavior and evicting the pod.</p> <p>Now, here you have two options. Either you can set an <code>ephemeral-storage</code> limit and make a controlled pod eviction, or just get an external volume, map it into the container, and get the logs outside of the node.</p> <p>You can also monitor the disk usage, as suggested by shubham_asati, but if it is eating it all, it is eating it all. You are just going to watch how it gets filled up.</p>
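<p>A minimal sketch of the first option (the pod name, image, and sizes are placeholders; pick limits that fit your log volume):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logging-app
spec:
  containers:
  - name: app
    image: my-app:latest          # placeholder image
    resources:
      requests:
        ephemeral-storage: "1Gi"  # scheduler reserves this much node-local disk
      limits:
        ephemeral-storage: "2Gi"  # pod is evicted if it writes more than this
```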
<p>This question is about the k8s readiness probe. I am trying to add a curl command to the readiness probe of the new pod that is being created.</p> <p>I mean that I want to check that the new pod that is created is ready to accept traffic, before the old one is terminated. I already have a command that is executed in the readiness probe, so it is not possible for me to add an httpGet in this way:</p> <pre><code>readinessProbe: httpGet: path: /health </code></pre> <p>because I saw that there is an <a href="https://github.com/kubernetes/kubernetes/issues/37218" rel="nofollow noreferrer">issue</a> stating that it is not possible to add both httpGet &amp; a command that will be executed.</p> <p>Therefore, I must add this curl to the script that runs each time before a new pod is created.</p> <pre><code>status=$( curl -s -o -k /dev/null -w %{http_code} /health); echo "statusCode: $status" if [ "$status" -ne "200" ]; exit 1 fi </code></pre> <p>My problem is that it is not working, and using <code>kubectl describe po XXXXX</code> I see this output:</p> <pre><code> Readiness probe failed: statusCode: 000000 if [ 000000 -ne 200 ] </code></pre> <p>So, I'm not sure how to make the request to the new pod, because the only thing I know about the new pod at this level is that it includes an API named <strong>health</strong>.</p> <p>Am I making the request correctly?</p>
<p>You are missing some semicolons and a <code>then</code>.</p> <p>You have:</p> <pre><code>status=$( curl -s -o -k /dev/null -w %{http_code} /health); echo "statusCode: $status" if [ "$status" -ne "200" ]; exit 1 fi </code></pre> <p>instead of</p> <pre><code>status=$( curl -s -o -k /dev/null -w %{http_code} /health); echo "statusCode: $status"; if [ "$status" -ne "200" ]; then exit 1; fi </code></pre>
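<p>For completeness, one way such a script can be wired into the probe is via <code>exec</code>. This is only a sketch: the URL, port, and path are assumptions that must match your service, and note that curl's <code>-o</code> takes the output file as its next argument, so the flags are ordered accordingly:</p>

```yaml
readinessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - >
      status=$(curl -s -k -o /dev/null -w '%{http_code}' http://localhost:8080/health);
      [ "$status" -eq 200 ]
  initialDelaySeconds: 5
  periodSeconds: 10
```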
<p>I am facing an issue while reading a file stored on my system in a Spark cluster-mode program. It is giving me a "File not found" error, but the file is present at the defined location. Please suggest some ideas so that I can read a local file in a Spark cluster using Kubernetes.</p>
<p>You cannot refer to local files on your machine when you submit Spark on Kubernetes. </p> <p>The available solutions for your case might be:</p> <ul> <li>Use a <a href="https://apache-spark-on-k8s.github.io/userdocs/running-on-kubernetes.html#dependency-management" rel="noreferrer">Resource staging server</a>. It is not available in the main branch of the Apache Spark codebase, so the whole integration is on your side.</li> <li>Put your file in an http/hdfs-accessible location: refer to the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#dependency-management" rel="noreferrer">docs</a></li> <li>Put your file inside the Spark Docker image and refer to it as <code>local:///path/to/your-file.jar</code></li> </ul> <p>If you are running a local Kubernetes cluster like Minikube, you can also create a Kubernetes Volume with the files you are interested in and mount it to the Spark Pods: refer to the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#using-kubernetes-volumes" rel="noreferrer">docs</a>. Be sure to mount that volume to both the Driver and Executors.</p>
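<p>For example, with the third option the file is baked into the image and referenced with the <code>local://</code> scheme; a sketch where the master URL, image name, and jar path are all placeholders:</p>

```shell
spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name my-spark-app \
  --conf spark.kubernetes.container.image=<registry>/spark-app:latest \
  local:///opt/spark/jars/my-app.jar   # path inside the image, not on your machine
```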
<p>My yaml file.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: -http: paths: - path: /?(.*) backend: serviceName: nginx-service servicePort: 80 </code></pre> <pre><code>kubectl apply -f file.yaml error: error validating &quot;ingress.yaml&quot;: error validating data: ValidationError(Ingress.spec.rules): invalid type for io.k8s.api.networking.v1beta1.IngressSpec.rules: got &quot;map&quot;, expected &quot;array&quot;; if you choose to ignore these errors, turn validation off with --validate=false </code></pre>
<p><strong>The issue</strong> is with a typo in "-http" (no whitespace char between the hyphen and the 'h' letter).</p> <pre><code>spec: rules: -http: &lt;--- this doesn't create an array, and array is expected instead of map paths: </code></pre> <p><strong>How to troubleshoot</strong> such issues (and what Kubernetes does "under_the_hood"):</p> <ul> <li>yamllint. It is possible to paste the YAML code into any online <a href="http://www.yamllint.com/" rel="nofollow noreferrer">YAML syntax validator</a>. Let's check line 11. There is a <code>?</code> symbol that states that yamllint had to guess what is wrong with that line. Please note that yamllint still considers this YAML valid.</li> </ul> <p><a href="https://i.stack.imgur.com/C5s4P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C5s4P.png" alt="enter image description here"></a></p> <ul> <li>Convert YAML to JSON. That's exactly what <code>kubectl</code> does before sending data to <code>kube-apiserver</code>. It is possible to do that with the <a href="https://www.json2yaml.com/" rel="nofollow noreferrer">json2yaml</a> tool, for example. You'll immediately see that the tool parses it as a map and not as an array.</li> </ul> <p>That is how it is converted to JSON with the incorrect YAML. <a href="https://i.stack.imgur.com/ZFzhr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZFzhr.png" alt="incorrect YAML"></a></p> <p>And with the corrected YAML it is parsed as an <code>array</code> <a href="https://i.stack.imgur.com/Y3Sln.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y3Sln.png" alt="correct YAML . &lt;code&gt;rules&lt;/code&gt; contain an array"></a></p> <p><strong>How it is supposed to work</strong>. As with all other Kubernetes resources, an Ingress needs apiVersion, kind, and metadata fields. Additionally, Ingress frequently uses annotations to configure some options depending on the Ingress controller, an example of which is the rewrite-target annotation.
Different Ingress controllers support different annotations. </p> <p>If anyone needs comprehensive info on rewrite-target, that can be found <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md" rel="nofollow noreferrer">here</a>. </p> <p>IngressRule represents the rules mapping the paths under a specified host to the related backend services. Incoming requests are first evaluated for a host match, then routed to the backend associated with the matching IngressRuleValue.</p> <p><code>host</code> has to be of a <code>string</code> type and contain the fully qualified domain name (FQDN) of a network host. Current limitations (client GitVersion:"v1.16.3", Server GitVersion:"v1.14.8-gke.12"):</p> <ul> <li>IPs aren't allowed. IngressRuleValue can only apply to the IP in the Spec of the parent Ingress.</li> <li>The <code>:</code> delimiter is not respected because ports are not allowed. As of now, the port of an Ingress is implicitly :80 for http and :443 for https. </li> </ul> <p>Both of these may change in the future.</p> <p>Incoming requests are matched against the host before the IngressRuleValue.
If the host is unspecified, the Ingress routes all traffic based on the specified IngressRuleValue.</p> <p>So, the structure shall be as follows: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: dual-ingress annotations: &lt;your-annotations-go-here&gt; spec: rules: - host: &lt;your-hostname-1-goes-here&gt; http: paths: - path: &lt;host-1-path-1&gt; backend: serviceName: &lt;service1&gt; servicePort: &lt;service1-port&gt; - path: &lt;host-1-path-2&gt; backend: serviceName: &lt;service2&gt; servicePort: &lt;service2-port&gt; - host: &lt;your-hostname-2-goes-here&gt; http: paths: - path: &lt;host-2-path-1&gt; backend: serviceName: &lt;service3&gt; servicePort: &lt;service3-port&gt; - path: &lt;host-2-path-2&gt; backend: serviceName: &lt;service4&gt; servicePort: &lt;service4-port&gt; </code></pre> <p>The above YAML creates the following Ingress:</p> <pre><code>kubectl get ingress NAME HOSTS ADDRESS PORTS AGE dual-ingress your-hostname-1,your-hostname-2,bar.com 80 14s </code></pre> <p>If you don't specify host in the YAML, all traffic will be routed according to the IngressRuleValue. YAML example:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: mono-ingress annotations: &lt;your-annotations-go-here&gt; spec: rules: - http: paths: - path: &lt;path-1&gt; backend: serviceName: &lt;service1&gt; servicePort: &lt;service1-port&gt; - path: &lt;path-2&gt; backend: serviceName: &lt;service2&gt; servicePort: &lt;service2-port&gt; </code></pre> <p>The above YAML creates the following Ingress:</p> <pre><code>kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS AGE mono-ingress * 80 10s </code></pre> <p>So, to sum up, the <code>rules:</code> field shall contain an <code>array</code> (and the topic starter missed a single space char between "-" and "http", so it wasn't parsed as an <code>array</code> but as a <code>map</code>).</p> <p>Hope that helps.</p>
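<p>Applied to the YAML from the question, the fix is just the missing space after the hyphen:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:        # note the space: "- http", not "-http"
      paths:
      - path: /?(.*)
        backend:
          serviceName: nginx-service
          servicePort: 80
```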
<p>Have a Next.js project. </p> <p>This is my next.config.js file, which I followed through with on this guide: <a href="https://dev.to/tesh254/environment-variables-from-env-file-in-nextjs-570b" rel="nofollow noreferrer">https://dev.to/tesh254/environment-variables-from-env-file-in-nextjs-570b</a></p> <pre><code>module.exports = withCSS(withSass({ webpack: (config) =&gt; { config.plugins = config.plugins || [] config.module.rules.push({ test: /\.svg$/, use: ['@svgr/webpack', { loader: 'url-loader', options: { limit: 100000, name: '[name].[ext]' }}], }); config.plugins = [ ...config.plugins, // Read the .env file new Dotenv({ path: path.join(__dirname, '.env'), systemvars: true }) ] const env = Object.keys(process.env).reduce((acc, curr) =&gt; { acc[`process.env.${curr}`] = JSON.stringify(process.env[curr]); return acc; }, {}); // Fixes npm packages that depend on `fs` module config.node = { fs: 'empty' } /** Allows you to create global constants which can be configured * at compile time, which in our case is our environment variables */ config.plugins.push(new webpack.DefinePlugin(env)); return config } }), ); </code></pre> <p>I have a .env file which holds the values I need. It works when run on localhost.</p> <p>In my Kubernetes environment, within the deploy file which I can modify, I have the same environment variables set up. But when I try and identify them they come off as undefined, so my application cannot run.</p> <p>I refer to it like:</p> <pre><code>process.env.SOME_VARIABLE </code></pre> <p>which works locally. </p> <p>Does anyone have experience making environment variables function on Next.js when deployed? Not as simple as it is for a backend service. 
:(</p> <p><strong>EDIT:</strong> This is what the environment variable section looks like.</p> <p><strong>EDIT 2:</strong> Full deploy file, edited to remove some details</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "38" creationTimestamp: xx generation: 40 labels: app: appname name: appname namespace: development resourceVersion: xx selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname uid: xxx spec: progressDeadlineSeconds: xx replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: appname tier: sometier strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: app: appname tier: sometier spec: containers: - env: - name: NODE_ENV value: development - name: PORT value: "3000" - name: SOME_VAR value: xxx - name: SOME_VAR value: xxxx image: someimage imagePullPolicy: Always name: appname readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 3000 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 100m memory: 100Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: availableReplicas: 1 conditions: - lastTransitionTime: xxx lastUpdateTime: xxxx message: Deployment has minimum availability. reason: MinimumReplicasAvailable status: "True" type: Available observedGeneration: 40 readyReplicas: 1 replicas: 1 updatedReplicas: 1 </code></pre>
<p><code>.env</code> files work in Docker or docker-compose, but they do not work in Kubernetes. If you want to add the variables, you can do it via ConfigMap objects or add them directly to each deployment. An example (from the documentation):</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: envar-demo labels: purpose: demonstrate-envars spec: containers: - name: envar-demo-container image: gcr.io/google-samples/node-hello:1.0 env: - name: DEMO_GREETING value: "Hello from the environment" - name: DEMO_FAREWELL value: "Such a sweet sorrow" </code></pre> <p>Also, the best and standard way is to use ConfigMaps, for example:</p> <pre><code> containers: - env: - name: DB_DEFAULT_DATABASE valueFrom: configMapKeyRef: key: DB_DEFAULT_DATABASE name: darwined-env </code></pre> <p>And the config map:</p> <pre><code>apiVersion: v1 data: DB_DEFAULT_DATABASE: darwined_darwin_dev_1 kind: ConfigMap metadata: creationTimestamp: null labels: io.kompose.service: darwin-env name: darwined-env </code></pre> <p>Hope this helps.</p>
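<p>For completeness, a <code>.env</code> file can be converted into a ConfigMap mechanically - this is roughly what <code>kubectl create configmap my-env --from-env-file=.env</code> does. A small sketch of that transformation (the file contents and names here are just placeholders):</p>

```python
# Sketch: turn KEY=VALUE lines from a .env file into a ConfigMap manifest,
# roughly what `kubectl create configmap NAME --from-env-file=.env` does.
def env_to_configmap(env_text, name):
    data = {}
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        data[key.strip()] = value.strip()
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name},
        "data": data,
    }

cm = env_to_configmap("DB_DEFAULT_DATABASE=darwined_darwin_dev_1\nPORT=3000\n",
                      "darwined-env")
print(cm["data"])  # -> {'DB_DEFAULT_DATABASE': 'darwined_darwin_dev_1', 'PORT': '3000'}
```

<p>One caveat (an assumption about the asker's setup): since webpack's DefinePlugin inlines values at build time, variables injected only at container runtime won't be visible in the client-side bundle - they have to be present when the image is built.</p>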
<p>I need to install a helm chart with a key/value that is not present in one of the templates and I prefer not to edit the already existing templates.</p> <p>In particular, I need to change <code>resources.limits.cpu</code> and <code>resources.limits.memory</code> in <code>k8s-job-template.yaml</code> but <code>resources</code> is not even mentioned in that file.</p> <p>Is there a solution for this?</p>
<p>The only customizations it's possible to make for a Helm chart are those the chart author has written in; you can't make arbitrary additional changes to the YAML files.</p> <p>(<a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a> allows merges of arbitrary YAML content and is built into recent <code>kubectl</code>, but it doesn't have some of the lifecycle or advanced templating features of Helm.)</p>
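<p>As a sketch of the Kustomize route (all names here are assumptions - adjust them to the chart's actual Job and container names): render the chart to a file, e.g. <code>helm template . &gt; all.yaml</code>, and then merge the limits in with a strategic-merge patch:</p>

```yaml
# kustomization.yaml
resources:
  - all.yaml            # the rendered chart output
patchesStrategicMerge:
  - add-limits.yaml
```

```yaml
# add-limits.yaml -- merged into the Job from k8s-job-template.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job          # must match the Job's name in the chart
spec:
  template:
    spec:
      containers:
        - name: my-container   # must match the container's name
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
```

<p>Running <code>kubectl apply -k .</code> (or <code>kustomize build .</code>) then produces the Job with the limits added, without editing the chart's templates.</p>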
<p>I have <strong>tried reinstalling</strong> it but nothing seems to work.</p> <p>console output:</p> <pre><code>E1126 15:42:35.408904 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-scheduler:v1.16.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.436232 19976 cache_images.go:80] CacheImage gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\gcr.io\k8s-minikube\storage-provisioner_v1.8.1 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.439164 19976 cache_images.go:80] CacheImage k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\k8s-dns-dnsmasq-nanny-amd64_1.14.13 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.467462 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-proxy:v1.16.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-proxy_v1.16.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.483078 19976 cache_images.go:80] CacheImage k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\k8s-dns-sidecar-amd64_1.14.13 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.485031 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-addon-manager:v9.0 -&gt; 
C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-addon-manager_v9.0 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.492838 19976 cache_images.go:80] CacheImage k8s.gcr.io/coredns:1.6.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\coredns_1.6.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.514311 19976 cache_images.go:80] CacheImage k8s.gcr.io/kube-controller-manager:v1.16.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.516262 19976 cache_images.go:80] CacheImage k8s.gcr.io/pause:3.1 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\pause_3.1 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.536759 19976 cache_images.go:80] CacheImage k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kubernetes-dashboard-amd64_v1.10.1 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.544566 19976 cache_images.go:80] CacheImage k8s.gcr.io/etcd:3.3.15-0 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\etcd_3.3.15-0 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.544566 19976 cache_images.go:80] CacheImage 
k8s.gcr.io/kube-apiserver:v1.16.2 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-apiserver_v1.16.2 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% E1126 15:42:35.546525 19976 cache_images.go:80] CacheImage k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 -&gt; C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\k8s-dns-kube-dns-amd64_1.14.13 failed: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% * Starting existing virtualbox VM for "minikube" ... * Waiting for the host to be provisioned ... * Found network options: - NO_PROXY=192.168.99.103 - no_proxy=192.168.99.103 ! VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository * Preparing Kubernetes v1.16.2 on Docker '18.09.9' ... - env NO_PROXY=192.168.99.103 - env NO_PROXY=192.168.99.103 E1126 15:44:39.347174 19976 start.go:799] Error caching images: Caching images for kubeadm: caching images: caching image C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-scheduler_v1.16.2: getting destination path: parsing docker archive dst ref: replace a Win drive letter to a volume name: exec: "wmic": executable file not found in %PATH% * Unable to load cached images: loading cached images: loading image C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2: CreateFile C:\Users\Sanket1.Gupta\.minikube\cache\images\k8s.gcr.io\kube-controller-manager_v1.16.2: The system cannot find the path specified. </code></pre> <hr>
<p>This problem was raised in <a href="https://stackoverflow.com/questions/55403284/installing-minikube-on-windows">this SO question</a>. I am posting a community wiki answer from it:</p> <hr> <p>You did not provide how you are trying to install minikube and what else is installed on your PC, so it is hard to provide a 100% accurate answer. I will try to provide the way I use to install minikube on Windows; if that does not help, please provide more information on the steps you took that led to this error. I do not want to guess, but it seems like you did not add the minikube binary to your PATH:</p> <p><code>executable file not found in %PATH% - Preparing Kubernetes environment ...</code></p> <p>First, let's delete all traces of your current installation. Run <code>minikube delete</code>, then go to C:\Users\current-user\ and delete the <code>.kube</code> and <code>.minikube</code> folders.</p> <p>Open PowerShell and install Chocolatey as explained <a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">here</a>:</p> <p><code>Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))</code></p> <p>After installation, run <code>choco install minikube kubernetes-cli</code>.</p> <p>Now, depending on which hypervisor you want to use, you can follow the steps from this <a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">tutorial</a> (Hyper-V). You can use VirtualBox as well, but then you won't be able to use Docker for Windows (assuming you want to) - you can read more in one of my answers <a href="https://stackoverflow.com/questions/52600524/is-it-possible-to-run-minikube-with-virtualbox-on-windows-10-along-with-docker/52611412#52611412">here</a>. 
Another possibility is to use Kubernetes in Docker for Windows as explained <a href="https://docs.docker.com/docker-for-windows/install/" rel="nofollow noreferrer">here</a> - but you won't be using minikube in this scenario. </p> <hr> <p>Please let me know if that helped. </p>
<p>I know I can whitelist IPs for the entire ingress object, but is there a way to whitelist IPs for individual paths? For example, if I only want to allow <code>/admin</code> to be accessed from <code>10.0.0.0/16</code>?</p> <p><code>ingress.yml</code>:</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: frontend namespace: default labels: app: frontend annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: "letsencrypt-prod" #nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16" spec: tls: - hosts: - frontend.example.com secretName: frontend-tls rules: - host: frontend.example.com http: paths: - path: / backend: serviceName: frontend servicePort: 80 - path: /api backend: serviceName: api servicePort: 8000 - path: /admin backend: serviceName: api servicePort: 8000 - path: /staticfiles backend: serviceName: api servicePort: 80 </code></pre>
<p>If you would like to split it into two Ingresses, it would look like the example below: the first <code>Ingress</code> with the <code>/admin</code> path and the whitelist annotation, and the second <code>Ingress</code> with the other <code>paths</code>, allowed from any <code>IP</code>.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: frontend-admin namespace: default labels: app: frontend annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: "letsencrypt-prod" nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16" spec: tls: - hosts: - frontend.example.com secretName: frontend-tls rules: - host: frontend.example.com http: paths: - path: /admin backend: serviceName: api servicePort: 8000 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: frontend-all namespace: default labels: app: frontend annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: "letsencrypt-prod" spec: tls: - hosts: - frontend.example.com secretName: frontend-tls rules: - host: frontend.example.com http: paths: - path: / backend: serviceName: frontend servicePort: 80 - path: /api backend: serviceName: api servicePort: 8000 - path: /staticfiles backend: serviceName: api servicePort: 80 </code></pre> <p>Please keep in mind that the annotation <code>nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16"</code> will override some of your config. As mentioned in the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#whitelist-source-range" rel="noreferrer">Nginx docs</a>:</p> <blockquote> <p>Adding an annotation to an Ingress rule overrides any global restriction.</p> </blockquote> <p><br> Another option is to use the <code>ConfigMap</code> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#whitelist-source-range" rel="noreferrer">whitelist-source-range</a>. 
Like mentioned in <a href="http://nginx.org/en/docs/http/ngx_http_access_module.html" rel="noreferrer">this example</a>, you can use the <code>ngx_http_access_module</code>.</p> <p>Since in the Nginx config each <code>path</code> is saved as</p> <pre><code>location / { ... } location /api { ... } </code></pre> <p>you can add those restrictions there. Example below:</p> <pre><code>location / { deny 192.168.1.1; allow 192.168.1.0/24; allow 10.1.1.0/16; allow 2001:0db8::/32; deny all; } </code></pre>
<p>I made a Kafka and zookeeper as a statefulset and exposed Kafka to the outside of the cluster. However, whenever I try to delete the Kafka statefulset and re-create one, the data seemed to be gone? (when I tried to consume all the message using <code>kafkacat</code>, the old messages seemed to be gone) even if it is using the same PVC and PV. I am currently using EBS as my persistent volume. </p> <p>Can someone explain to me what is happening to PV when I delete the statefulset? Please help me.</p>
<p>I assume your scenario is that you have a StatefulSet which has a PersistentVolumeClaim definition in it - or it is just referencing an existing volume - and you try to delete it.</p> <p>In this case the persistent volume will stay there, and the PVC won't disappear either.</p> <p>This is so that you can, if you wanted to, remount the same volume to a different StatefulSet pod - or to an updated version of the previous one.</p> <p>If you want to delete the persistent volume, you should delete the PVC, and the bound PVs will disappear.</p>
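<p>One related detail: whether the PV (and the EBS volume behind it) survives deletion of the PVC depends on the PV's <code>persistentVolumeReclaimPolicy</code>. Dynamically provisioned volumes default to <code>Delete</code>, so the EBS volume is removed once the PVC is gone; with <code>Retain</code> it is kept. A sketch of the relevant field (the names and the volume ID are placeholders):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the EBS volume after the PVC is deleted
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0       # placeholder
    fsType: ext4
```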
<p>I am trying to deploy containers to local Kubernetes. For now I have installed the Docker daemon, minikube and the minikube dashboard, and these are all working fine. I have also set up a local container repository on port 5000 and pushed 2 images of my application to it. I can see them in the browser: <a href="http://localhost:5000/v2/_catalog" rel="nofollow noreferrer">http://localhost:5000/v2/_catalog</a></p> <p>Now I am trying to bring up a pod using minikube:</p> <pre><code>kubectl apply -f ./docker-compose-k.yml --record </code></pre> <p>I am getting this error on the dashboard:</p> <pre><code>Failed to pull image "localhost:5000/coremvc2": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused </code></pre> <p>Here is my compose file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: core23 labels: app: codemvc spec: replicas: 1 selector: matchLabels: app: coremvc template: metadata: labels: app: coremvc spec: containers: - name: coremvc image: localhost:5000/coremvc2 ports: - containerPort: 80 imagePullPolicy: Always </code></pre> <p>I don't know why these images are not pulled, as the Docker daemon and Kubernetes are both on the same machine. I have also tried this with a Docker Hub image and it's working fine, but I want to do this using local images. Please give me a hint or any guidance.</p> <p>Thank you,</p>
<p>Based on the comment, you started minikube with <code>minikube start</code> (without specifying the driver).</p> <p>That means that minikube is running inside a <strong>VirtualBox VM</strong>. In order to make your use case work, you have two choices:</p> <ol> <li><strong>The hard way</strong>: set up the connection between your VM and your host and use your host IP</li> <li><strong>The easy way</strong>: connect to your VM using <code>minikube ssh</code> and install your registry there. Then your deployment should work with your VM's IP.</li> </ol> <p>If you don't want to use VirtualBox, you should read the <a href="https://minikube.sigs.k8s.io/docs/reference/drivers/" rel="nofollow noreferrer">documentation</a> about other existing drivers and how to use them.</p> <p>Hope this helps!</p>
<p>Why do I get this error while running the helm upgrade command? I see my Ingress controller running fine </p> <pre><code>Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(Ingress.metadata): unknown field "kubernetes.io/ingress.class" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta, ValidationError(Ingress.metadata): unknown field "nginx.ingress.kubernetes.io/enable-cors" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta] </code></pre> <p>Below is my helm version</p> <pre><code>version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"} </code></pre> <p>I do not get and server and client separately for helm version? Not sure if its because due to the latest helm version installed.</p> <p>Any suggestion for my helm error?</p>
<p>You are missing the <code>annotation</code> scope:</p> <pre><code>metadata: annotations: kubernetes.io/ingress.class: &lt;whatever&gt; </code></pre> <p>To debug such issues in the future, you can use <code>kubectl explain</code>, which shows you the available fields. Try:</p> <pre><code>kubectl explain ingress.metadata </code></pre>
<p>I am trying to run the helloworld-go example with cert-manager on GKE.<br> I installed Istio without sidecar injection, cert-manager 0.11 and setup Auto SSL and DNS. When I run <code>kubectl get ksvc</code> it shows <code>IngressNotConfigured</code>. Any idea why?</p> <pre><code>$ kubectl get ksvc NAME URL LATESTCREATED LATESTREADY READY REASON helloworld-go https://helloworld-go.default.redhost.cloud helloworld-go-mc27h helloworld-go-mc27h Unknown IngressNotConfigured </code></pre> <pre><code>$ kubectl describe ksvc helloworld-go Status: Address: URL: http://helloworld-go.default.svc.cluster.local Conditions: Last Transition Time: 2019-11-26T15:19:51Z Status: True Type: ConfigurationsReady Last Transition Time: 2019-11-26T15:31:25Z Message: Ingress has not yet been reconciled. Reason: IngressNotConfigured Status: Unknown Type: Ready Last Transition Time: 2019-11-26T15:31:25Z Message: Ingress has not yet been reconciled. Reason: IngressNotConfigured Status: Unknown Type: RoutesReady Latest Created Revision Name: helloworld-go-mc27h Latest Ready Revision Name: helloworld-go-mc27h Observed Generation: 1 Traffic: Latest Revision: true Percent: 100 Revision Name: helloworld-go-mc27h URL: https://helloworld-go.default.redhost.cloud Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Created 23m service-controller Created Configuration "helloworld-go" Normal Created 23m service-controller Created Route "helloworld-go" Normal Updated 11m (x7 over 23m) service-controller Updated Service "helloworld-go" </code></pre> <p>Adding the label <code>serving.knative.dev/visibility=cluster-local</code> makes the problem go away, but then it is only accessible internally without SSL.</p>
<p>The problem was that I had installed regular Istio, whereas you specifically need Istio with SDS. <a href="https://knative.dev/docs/serving/using-auto-tls/" rel="nofollow noreferrer">https://knative.dev/docs/serving/using-auto-tls/</a></p> <p>Also be sure to use cert-manager 0.10.0, as 0.11.0 is currently unsupported. <a href="https://github.com/knative/serving/issues/6011" rel="nofollow noreferrer">https://github.com/knative/serving/issues/6011</a></p> <p>Once I did these, everything works:</p> <pre><code>NAME URL LATESTCREATED LATESTREADY READY REASON helloworld-go https://helloworld-go.default.redhost.cloud helloworld-go-mc27h helloworld-go-mc27h True </code></pre>
<p>Kubernetes 1.15 introduced the command</p> <pre><code>kubectl rollout restart deployment my-deployment </code></pre> <p>Which would be the endpoint to call through the API? For example if I want to scale a deployment I can call</p> <pre><code>PATCH /apis/apps/v1/namespaces/my-namespace/deployments/my-deployment/scale </code></pre>
<p>If you dig around in the <code>kubectl</code> source you can eventually find <a href="https://github.com/kubernetes/kubectl/blob/release-1.16/pkg/polymorphichelpers/objectrestarter.go#L32" rel="nofollow noreferrer"><code>(k8s.io/kubectl/pkg/polymorphichelpers).defaultObjectRestarter</code></a>. All that does is change an annotation:</p> <pre><code>apiVersion: apps/v1 kind: Deployment spec: template: metadata: annotations: kubectl.kubernetes.io/restartedAt: '2006-01-02T15:04:05Z' </code></pre> <p>Anything that changes a property of the embedded pod spec in the deployment object will cause a restart; there isn't a specific API call to do it.</p> <p>The useful corollary to this is that, if your <code>kubectl</code> and cluster versions aren't in sync, you can use <code>kubectl rollout restart</code> in <code>kubectl</code> 1.14 against older clusters, since it doesn't actually depend on any changes in the Kubernetes API.</p>
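<p>So there is no dedicated restart endpoint; the equivalent API call is a plain PATCH of the Deployment object. A sketch of building that request body (the namespace and deployment name below are placeholders):</p>

```python
import json
from datetime import datetime, timezone

# Build the same strategic-merge patch body that `kubectl rollout restart`
# sends: it only stamps an annotation into the pod template, and that
# change to the template is what triggers a new rollout.
def restart_patch(now=None):
    now = now or datetime.now(timezone.utc)
    return {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        "kubectl.kubernetes.io/restartedAt": now.strftime("%Y-%m-%dT%H:%M:%SZ")
                    }
                }
            }
        }
    }

body = json.dumps(restart_patch())
# PATCH /apis/apps/v1/namespaces/my-namespace/deployments/my-deployment
# Content-Type: application/strategic-merge-patch+json
print(body)
```

<p>The same body also works with <code>kubectl patch deployment my-deployment -p '&lt;body&gt;'</code>.</p>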
<p>I'm trying to calculate the average time a pod stays in a pending state in grafana with prometheus. I can generate a graph to get the number of pods in a pending state over time, with this query</p> <pre><code>sum(kube_pod_status_phase{phase="Pending"}) </code></pre> <p>However, I would really like to get an the value of the average time that the pods are staying in this state in the last X hours. How can I do that?</p>
<p>The metric <code>kube_pod_status_phase{phase="Pending"}</code> will only give you binary values, i.e. 0/1: 1 if the pod is in the Pending state, 0 otherwise. Also, the data is updated every 30s. So to find the total time it was Pending in the last X hours, you can do something like:</p> <pre><code>sum_over_time(kube_pod_status_phase{phase="Pending"}[Xh]) * 30 </code></pre> <p>For better visualization, you can use a table in Grafana.</p>
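<p>To make the arithmetic concrete with synthetic samples (assuming a 30s scrape interval - adjust to your Prometheus config):</p>

```python
# kube_pod_status_phase{phase="Pending"} is scraped as a 0/1 sample.
# Summing those samples over a window counts the "pending" scrapes;
# multiplying by the scrape interval approximates seconds spent Pending.
SCRAPE_INTERVAL = 30  # seconds; an assumption -- match your config

def pending_seconds(samples, interval=SCRAPE_INTERVAL):
    # samples: one 0/1 value per scrape, i.e. what
    # sum_over_time(kube_pod_status_phase{phase="Pending"}[Xh]) adds up
    return sum(samples) * interval

# A pod Pending for the first 4 scrapes (2 minutes), then scheduled:
print(pending_seconds([1, 1, 1, 1, 0, 0, 0, 0]))  # -> 120
```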
<p>I'm currently writing a Helm Chart for my multi-service application. In the application, I depend on <code>CustomResources</code>, which I apply before everything else with helm via the <code>"helm.sh/hook": crd-install</code> hook.</p> <p>Now I want to upgrade the application. Helm fails because the CRDs are already installed. In some GH issues, I read about the builtin <code>.Capabilities</code> variable in Helm templates. I want to wrap my CRDs with an "if" checking if the CRD is already installed: </p> <pre><code>{{- if (not (.Capabilities.APIVersions.Has "virtualmachineinstancepresets.kubevirt.io")) }} </code></pre> <p>Unfortunately, I misunderstood the APIVersions property.<br> So my question is, does Helm provide a way of checking whether a <code>CustomAPI</code> is already installed, so I can exclude it from my Helm pre-hook install?</p>
<p>The simple answer for Helm v2 is to manually pass the <code>--no-crd-hook</code> flag when running <code>helm install</code>.</p> <p>Using the builtin <code>.Capabilities</code> variable can be a workaround. E.g., using this:</p> <pre><code>{{- if not (.Capabilities.APIVersions.Has "virtualmachineinstancepresets.kubevirt.io/v1beta1/MyResource") }} apiVersion: ... {{- end}} </code></pre> <p>However, it also means you will never be able to manage the installed CRDs with Helm again.</p> <p>Check out the long answer in the blog post <a href="https://joejulian.name/post/helm-v2-crd-management/" rel="nofollow noreferrer">Helm V2 CRD Management</a>, which explains different approaches. However, I quote this:</p> <blockquote> <p>CRD management in helm is, to be nice about it, utterly horrible.</p> </blockquote> <p>Personally, I suggest managing CRDs via a separate chart, outside of the app/library charts that depend on them, since they have a totally different lifecycle.</p>
<p>We've hired a security consultant to perform a pentest on our Application's public IP (Kubernetes Loadbalancer) and write a report on our security flaws and the measurements required to avoid them. Their report warned us that we have TCP Timestamp enabled, and from what I've read about the issue, It would allow an attacker to predict boot time of the machine thus being able to grant control over it.</p> <p>I also read that TCP Timestamp is important for TCP performance and, most importantly, for Protection Against Wrapping Sequence.</p> <p>But since we use Kubernetes over GKE with Nginx Ingress Controller being in front of it, I wonder if that <code>TCP Timestamp</code> thing really matters for that context. Should we even care? If so, does it really make my network vulnerable for the lack of Protection Against Wrapping sequence?</p> <p>More information about TCP Timestamp on this other question: <a href="https://stackoverflow.com/questions/7880383/what-benefit-is-conferred-by-tcp-timestamp">What benefit is conferred by TCP timestamp?</a></p>
<p>According to RFC 1323 (TCP Extensions for High Performance), TCP Timestamps are used for two main mechanisms: </p> <ul> <li>PAWS (Protect Against Wrapped Sequence) </li> <li>RTT (Round Trip Time)</li> </ul> <p><strong>PAWS</strong> - a defense mechanism for identifying and rejecting packets that arrived in a previous wrap of the sequence space (data integrity). </p> <p><strong>Round Trip Time</strong> - the time for a packet to get to the destination and for the acknowledgment to be sent back to the device it originated from.</p> <p>What can happen when you disable TCP Timestamps: </p> <ul> <li>Turning off TCP Timestamps can result in performance issues, because RTT measurement would stop working. </li> <li>It will disable <a href="https://tejparkash.wordpress.com/2010/12/05/paws-tcp-sequence-number-wrapping-explained/" rel="nofollow noreferrer">PAWS</a>. </li> <li>As the <a href="https://kc.mcafee.com/corporate/index?page=content&amp;id=KB78776&amp;locale=en_US" rel="nofollow noreferrer">McAfee</a> site says, disabling timestamps can allow denial-of-service attacks. </li> </ul> <p>As the previously mentioned McAfee site puts it: </p> <blockquote> <p>For these reasons, McAfee strongly recommends keeping this feature enabled and considers the vulnerability as low..</p> <p>-- <a href="https://kc.mcafee.com/corporate/index?page=content&amp;id=KB78776&amp;locale=en_US" rel="nofollow noreferrer">McAfee</a></p> </blockquote> <p>Citation from another site: </p> <blockquote> <p>Vulnerabilities in TCP Timestamps Retrieval is a Low risk vulnerability that is one of the most frequently found on networks around the world. 
This issue has been around since at least 1990 but has proven either difficult to detect, difficult to resolve or prone to being overlooked entirely.</p> <p>-- <a href="https://beyondsecurity.com/scan-pentest-network-vulnerabilities-tcp-timestamps-retrieval.html" rel="nofollow noreferrer">Beyond Security </a></p> </blockquote> <p>I would encourage you to look at this video: <a href="https://www.youtube.com/watch?v=bXXoz5-Z9h0" rel="nofollow noreferrer">HIP15-TALK:Exploiting TCP Timestamps</a>. </p> <h3>What about GKE</h3> <p>Knowing the boot time (uptime in this case) can reveal which security patches are <strong>not</strong> applied to the cluster, which can lead to exploitation of those unpatched vulnerabilities. </p> <p>The best way to approach that is to <strong>regularly update</strong> the existing cluster. GKE implements 2 ways of doing that: </p> <ul> <li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster" rel="nofollow noreferrer">Manual way </a></li> <li><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades" rel="nofollow noreferrer">Automatic way </a></li> </ul> <p>Even if an attacker knows the boot time of your machine, it will be useless, because the system is up to date and all the security patches are applied. There is a dedicated site for Kubernetes Engine security bulletins: <a href="https://cloud.google.com/kubernetes-engine/docs/security-bulletins" rel="nofollow noreferrer">Security bulletins</a></p>
<p>I am new to Kubernetes and I would like to try different CNI.</p> <p>In my current Cluster, I am using Flannel</p> <p>Now, I would like to use Calico but I cannot find a proper guide to clean up Flannel and install Calico.</p> <p>Could you please point out the correct procedure?</p> <p>Thanks</p>
<p>Calico provides a migration tool that performs a rolling update of the nodes in the cluster. At the end, you will have a fully-functional Calico cluster using VXLAN networking between pods.</p> <p>From the <a href="https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/migration-from-flannel" rel="nofollow noreferrer">documentation</a> we have: </p> <p><strong>Procedure</strong></p> <p>1 - First, install Calico.</p> <pre><code>kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/flannel-migration/calico.yaml </code></pre> <p>Then, install the migration controller to initiate the migration.</p> <pre><code>kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/flannel-migration/migration-job.yaml </code></pre> <p>Once applied, you will see nodes begin to update one at a time.</p> <p>2 - To monitor the migration, run the following command.</p> <pre><code>kubectl get jobs -n kube-system flannel-migration </code></pre> <p>The migration controller may be rescheduled several times during the migration when the node hosting it is upgraded. The installation is complete when the output of the above command shows 1/1 completions. For example:</p> <pre><code>NAME COMPLETIONS DURATION AGE flannel-migration 1/1 2m59s 5m9s </code></pre> <p>3 - After completion, delete the migration controller with the following command.</p> <pre><code>kubectl delete -f https://docs.projectcalico.org/v3.10/manifests/flannel-migration/migration-job.yaml </code></pre> <p>To know more about it: <strong><a href="https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/migration-from-flannel" rel="nofollow noreferrer">Migrating a cluster from flannel to Calico</a></strong></p> <p>This article describes how migrate an existing Kubernetes cluster with flannel networking to use Calico networking.</p>
<p>I installed stable/prometheus from helm. By default the job_name <code>kubernetes-service-endpoints</code> contains <code>node-exporter</code> and <code>kube-state-metrics</code> as component label. I added the below configuration in prometheus.yml to include namespace, pod and node labels.</p> <pre><code> - source_labels: [__meta_kubernetes_namespace] separator: ; regex: (.*) target_label: namespace replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_name] separator: ; regex: (.*) target_label: pod replacement: $1 action: replace - source_labels: [__meta_kubernetes_pod_node_name] separator: ; regex: (.*) target_label: node replacement: $1 action: replace </code></pre> <p><code>kube_pod_info{component="kube-state-metrics"}</code> already had namespace, pod and node labels and hence exported_labels were generated. And the metric <code>node_cpu_seconds_total{component="node-exporter"}</code> now correctly has labels namespace, pod and node. </p> <p>To have these labels correctly, I need to have these 3 labels present in both the above metric names. To achieve that I can override value of exported_labels. I tried adding the below config but to no avail.</p> <pre><code> - source_labels: [__name__, exported_pod] regex: "kube_pod_info;(.+)" target_label: pod - source_labels: [__name__, exported_namespace] regex: "kube_pod_info;(.+)" target_label: namespace - source_labels: [__name__, exported_node] regex: "kube_pod_info;(.+)" target_label: node </code></pre> <p>Similar approach was mentioned <a href="https://stackoverflow.com/questions/54235797/how-to-rename-label-within-a-metric-in-prometheus">here</a>. I can't see the issue with my piece of code. 
Any directions to resolve would be very helpful.</p> <p><strong>Updated - (adding complete job)</strong> </p> <pre><code> - job_name: kubernetes-service-endpoints kubernetes_sd_configs: - role: endpoints metric_relabel_configs: - source_labels: [__name__, exported_pod] regex: "kube_pod_info;(.+)" target_label: pod - source_labels: [__name__, exported_namespace] regex: "kube_pod_info;(.+)" target_label: namespace - source_labels: [__name__, exported_node] regex: "kube_pod_info;(.+)" target_label: node relabel_configs: - action: keep regex: true source_labels: - __meta_kubernetes_service_annotation_prometheus_io_scrape - action: replace regex: (https?) source_labels: - __meta_kubernetes_service_annotation_prometheus_io_scheme target_label: __scheme__ - action: replace regex: (.+) source_labels: - __meta_kubernetes_service_annotation_prometheus_io_path target_label: __metrics_path__ - action: replace regex: ([^:]+)(?::\d+)?;(\d+) replacement: $1:$2 source_labels: - __address__ - __meta_kubernetes_service_annotation_prometheus_io_port target_label: __address__ - action: labelmap regex: __meta_kubernetes_service_label_(.+) - action: replace source_labels: - __meta_kubernetes_namespace target_label: namespace - action: replace regex: (.*) replacement: $1 separator: ; source_labels: - __meta_kubernetes_pod_name target_label: pod - action: replace source_labels: - __meta_kubernetes_pod_node_name target_label: node </code></pre> <p><strong>And the result from promql</strong></p> <p><a href="https://i.stack.imgur.com/y83a1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y83a1.png" alt="kube_pod_info results"></a></p>
<p>So your goal is to rename the metric labels <code>exported_pod</code> to <code>pod</code>, etc, for the <code>kube_pod_info</code> metric?</p> <p>In that case, you need <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs" rel="noreferrer">metric relabelling</a> which is done when metrics are fetched from targets:</p> <pre><code>- job_name: 'kubernetes-service-endpoints' kubernetes_sd_configs: - role: endpoints metric_relabel_configs: - source_labels: [__name__, exported_pod] regex: "kube_pod_info;(.+)" target_label: pod - source_labels: [__name__, exported_namespace] regex: "kube_pod_info;(.+)" target_label: namespace - source_labels: [__name__, exported_node] regex: "kube_pod_info;(.+)" target_label: node relabel_configs: # Insert the same what you have so far </code></pre> <p><strong>Background:</strong></p> <p><a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config" rel="noreferrer">Normal relabelling</a> (<code>relabel_configs</code>) is applied at service discovery time to the <em>target labels</em> that are automatically discovered by the service discovery process. It defines the definitive target labels. At scrape time, target labels are added to the metric labels of all the metrics from the target. Normal relabelling can be only used to work on labels of a target after service discovery, which are generally meta labels starting with <code>__</code>.</p> <p><a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs" rel="noreferrer">Metric relabelling</a> (<code>metric_relabel_configs</code>) is applied to the <em>metric labels</em> at scrape time. So, this can be used to rename labels that are defined by the applications exposing the metrics themselves. </p>
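<p>As an optional follow-up: after the three rules above have copied the values into <code>pod</code>, <code>namespace</code> and <code>node</code>, the original <code>exported_*</code> labels still remain on <code>kube_pod_info</code>. If you want to remove them as well, a <code>labeldrop</code> rule can be appended to the same list (a sketch; adjust the regex if you rely on other <code>exported_</code> labels):</p>

```yaml
metric_relabel_configs:
  # ... the three copy rules shown above ...
  - regex: exported_(pod|namespace|node)
    action: labeldrop
```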
<p>What is the command to delete replication controller and its pods? </p> <p>I am taking a course to learn k8s on pluralsight. I am trying to delete the pods that I have just created using Replication controller. Following is my YAML:</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: hello-rc spec: replicas: 2 selector: app: hello-world template: metadata: labels: app: hello-world spec: containers: - name: hello-ctr image: nigelpoulton/pluralsight-docker-ci:latest ports: - containerPort: 8080 </code></pre> <p>If I do 'kubectl get pods' following is the how it looks on my mac:</p> <p><a href="https://i.stack.imgur.com/CG7zf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CG7zf.png" alt="enter image description here"></a></p> <p>I have tried the following two commands to delete the pods that are created in the Minikube cluster on my mac, but they are not working:</p> <p><code>kubectl delete pods hello-world</code> <code>kubectl delete pods hello-rc</code></p> <p>Could someone help me understand what I am missing?</p>
<p>You can delete the pods by deleting the ReplicationController that created them:</p> <pre><code>kubectl delete rc hello-rc </code></pre> <p>Also, because the pods are only <em>managed</em> by the ReplicationController, you can delete just the ReplicationController and leave the pods running:</p> <pre><code>kubectl delete rc hello-rc --cascade=false </code></pre> <p>This means the pods are no longer managed. You can later create a new ReplicationController with the proper label selector and manage them again.</p> <p>Also, instead of ReplicationControllers, you can use ReplicaSets. They behave in a similar way, but they have more expressive pod selectors. For example, a ReplicationController can't match pods whose label value is one of several alternatives, while a ReplicaSet can.</p>
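<p>To illustrate the "more expressive pod selectors" point, here is a sketch of a ReplicaSet equivalent to the question's ReplicationController, using a set-based <code>matchExpressions</code> selector (the second label value is an invented example) that an equality-only ReplicationController selector cannot express:</p>

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-rs
spec:
  replicas: 2
  selector:
    matchExpressions:
      # matches pods labelled app=hello-world OR app=hello-world-canary
      - key: app
        operator: In
        values: ["hello-world", "hello-world-canary"]
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-ctr
          image: nigelpoulton/pluralsight-docker-ci:latest
          ports:
            - containerPort: 8080
```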
<p>I'm trying to apply podSecurityPolicy and try to test whether it's allowing me to create privileged pod. Below is the podSecurityPolicy resource manifest.</p> <pre><code>kind: PodSecurityPolicy apiVersion: policy/v1beta1 metadata: name: podsecplcy spec: hostIPC: false hostNetwork: false hostPID: false privileged: false readOnlyRootFilesystem: true hostPorts: - min: 10000 max: 30000 runAsUser: rule: RunAsAny fsGroup: rule: RunAsAny supplementalGroups: rule: RunAsAny seLinux: rule: RunAsAny volumes: - '*' </code></pre> <p>current psp as below</p> <pre><code>[root@master ~]# kubectl get psp NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES podsecplcy false RunAsAny RunAsAny RunAsAny RunAsAny true * [root@master ~]# </code></pre> <p>After submitted the above manifest,i'm trying to create privileged pod using below manifest.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: pod-privileged spec: containers: - name: main image: alpine command: ["/bin/sleep", "999999"] securityContext: privileged: true </code></pre> <p>Without any issues the pod is created.I hope it should throw error since privileged pod creation is restricted through podSecurityPolicy. 
Then I realized that maybe an admission controller plugin was not enabled. I checked which admission controller plugins are enabled by describing the kube-apiserver pod (some lines removed for readability) and could see that only NodeRestriction is enabled:</p> <pre><code>[root@master ~]# kubectl -n kube-system describe po kube-apiserver-master.k8s --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true </code></pre> <p><strong>Attempt:</strong> I tried to edit <code>/etc/systemd/system/multi-user.target.wants/kubelet.service</code> and changed</p> <pre><code>ExecStart=/usr/bin/kubelet --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota </code></pre> <p>then restarted the kubelet service, but no luck. Now, how do I enable the other admission controller plugins?</p>
<p><strong>1.</strong> Locate the static pod manifest path</p> <p>From the systemd status, you will be able to locate the kubelet unit file: <code>systemctl status kubelet.service</code></p> <p>Do <code>cat /etc/systemd/system/kubelet.service</code> (replace the path with the one you got from the above command) and go to the directory that <code>--pod-manifest-path=</code> points to.</p> <p><strong>2.</strong> Open the YAML which starts the kube-apiserver-master.k8s Pod</p> <p>Example steps to locate the YAML are below:</p> <pre><code>cd /etc/kubernetes/manifests/ grep kube-apiserver-master.k8s * </code></pre> <p><strong>3.</strong> Append <code>PodSecurityPolicy</code> to the <code>--enable-admission-plugins=</code> flag in the YAML file</p> <p><strong>4.</strong> Create a PSP and corresponding bindings for the kube-system namespace</p> <p>Create a PSP to grant access to pods in the kube-system namespace, including CNI:</p> <pre><code>kubectl apply -f - &lt;&lt;EOF apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: annotations: seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' name: privileged spec: allowedCapabilities: - '*' allowPrivilegeEscalation: true fsGroup: rule: 'RunAsAny' hostIPC: true hostNetwork: true hostPID: true hostPorts: - min: 0 max: 65535 privileged: true readOnlyRootFilesystem: false runAsUser: rule: 'RunAsAny' seLinux: rule: 'RunAsAny' supplementalGroups: rule: 'RunAsAny' volumes: - '*' EOF </code></pre> <p>A ClusterRole which grants access to the privileged pod security policy:</p> <pre><code>kubectl apply -f - &lt;&lt;EOF apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: privileged-psp rules: - apiGroups: - policy resourceNames: - privileged resources: - podsecuritypolicies verbs: - use EOF </code></pre> <p>Role binding:</p> <pre><code>kubectl apply -f - &lt;&lt;EOF apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kube-system-psp namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind:
ClusterRole name: privileged-psp subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:nodes - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:kube-system EOF </code></pre>
<p>We have 2 services in our cluster in the same namespace, each using their own database like below: <a href="https://i.stack.imgur.com/whVfc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/whVfc.png" alt="enter image description here"></a></p> <p>We added 2 ServiceEntry corresponding to each database:</p> <pre><code>--- apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: service-1 namespace: mynamespace spec: exportTo: - "." hosts: - service1-db.xxx.com ports: - number: 5432 name: tcp protocol: tcp resolution: DNS location: MESH_EXTERNAL ... --- apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: service-2 namespace: mynamespace spec: exportTo: - "." hosts: - service2-db.xxx.com ports: - number: 5432 name: tcp protocol: tcp resolution: DNS location: MESH_EXTERNAL ... </code></pre> <p>The resulting interaction looks like this, which is not expected:</p> <p><a href="https://i.stack.imgur.com/Onghi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Onghi.png" alt="enter image description here"></a></p> <p>Any clues on what we are missing?</p>
<p>So, at the end, it happens that the ServiceEntry does not work just based on the host names, but it needs addresses too.</p> <p>Here is what worked:</p> <pre><code>--- apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: service-1 namespace: mynamespace spec: exportTo: - &quot;.&quot; hosts: - service1-db.xxx.com addresses: - xx.xx.xx.xx/32 ports: - number: 5432 name: tcp protocol: tcp resolution: NONE location: MESH_EXTERNAL ... --- apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: service-2 namespace: mynamespace spec: exportTo: - &quot;.&quot; hosts: - service2-db.xxx.com addresses: - xx.xx.xx.yy/32 ports: - number: 5432 name: tcp protocol: tcp resolution: NONE location: MESH_EXTERNAL ... </code></pre> <p>Here are the excerpts from <a href="https://istio.io/docs/reference/config/networking/service-entry/" rel="nofollow noreferrer">the documentation</a> that led us to this conclusion.</p> <blockquote> <p>If the Addresses field is empty, traffic will be identified solely based on the destination port. In such scenarios, the port on which the service is being accessed must not be shared by any other service in the mesh.</p> <p>Note that when resolution is set to type DNS and no endpoints are specified, the host field will be used as the DNS name of the endpoint to route traffic to.</p> </blockquote> <p>NOTE: While this helped resolve this particular instance, it opens up another different question of working with dynamic ip addresses, like some app trying to access AWS secrets manager. The ip address of such services keep changing and there is no way to tie it down to a service entry. So, we added service entries only for the known external traffic and allowed others to be unknown. In Kiali (visualiser for Istio), these &quot;unknowns&quot; are displayed as PassThroughClusters, which is annoying, but only half the problem.</p>
<p>In my next.config.js, I have a part that looks like this:</p> <pre><code>module.exports = { serverRuntimeConfig: { // Will only be available on the server side mySecret: 'secret' }, publicRuntimeConfig: { // Will be available on both server and client PORT: process.env.PORT, GOOGLE_CLIENT_ID: process.env.GOOGLE_CLIENT_ID, BACKEND_URL: process.env.BACKEND_URL } </code></pre> <p>I have a .env file and when run locally, the Next.js application succesfully fetches the environment variables from the .env file.</p> <p>I refer to the env variables like this for example:</p> <pre><code>axios.get(publicRuntimeConfig.BACKOFFICE_BACKEND_URL) </code></pre> <p>However, when I have this application deployed onto my Kubernetes cluster, the environment variables set in the deploy file are not being collected. So they return as undefined. </p> <p>I read that .env files cannot be read due to the differences between frontend (browser based) and backend (Node based), but there must be some way to make this work. </p> <p>Does anyone know how to use environment variables saved in your pods/containers deploy file on your frontend (browser based) application? 
</p> <p>Thanks.</p> <p><strong>EDIT 1:</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "38" creationTimestamp: xx generation: 40 labels: app: appname name: appname namespace: development resourceVersion: xx selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname uid: xxx spec: progressDeadlineSeconds: xx replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: appname tier: sometier strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: app: appname tier: sometier spec: containers: - env: - name: NODE_ENV value: development - name: PORT value: "3000" - name: SOME_VAR value: xxx - name: SOME_VAR value: xxxx image: someimage imagePullPolicy: Always name: appname readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 3000 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 100m memory: 100Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: availableReplicas: 1 conditions: - lastTransitionTime: xxx lastUpdateTime: xxxx message: Deployment has minimum availability. reason: MinimumReplicasAvailable status: "True" type: Available observedGeneration: 40 readyReplicas: 1 replicas: 1 updatedReplicas: 1 </code></pre>
<p>You can create a config-map and then mount it as a file in your deployment with your custom environment variables.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "38" creationTimestamp: xx generation: 40 labels: app: appname name: appname namespace: development resourceVersion: xx selfLink: /apis/extensions/v1beta1/namespaces/development/deployments/appname uid: xxx spec: progressDeadlineSeconds: xx replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: appname tier: sometier strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: app: appname tier: sometier spec: containers: - env: - name: NODE_ENV value: development - name: PORT value: "3000" - name: SOME_VAR value: xxx - name: SOME_VAR value: xxxx volumeMounts: - name: environment-variables mountPath: "your/path/to/store/the/file" readOnly: true image: someimage imagePullPolicy: Always name: appname readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 3000 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 100m memory: 100Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumes: - name: environment-variables configMap: name: environment-variables items: - key: .env path: .env dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 status: availableReplicas: 1 conditions: - lastTransitionTime: xxx lastUpdateTime: xxxx message: Deployment has minimum availability. 
reason: MinimumReplicasAvailable status: "True" type: Available observedGeneration: 40 readyReplicas: 1 replicas: 1 updatedReplicas: 1 </code></pre> <p>I added the following configuration in your deployment file:</p> <pre><code> volumeMounts: - name: environment-variables mountPath: "your/path/to/store/the/file" readOnly: true volumes: - name: environment-variables configMap: name: environment-variables items: - key: .env path: .env </code></pre> <p>You can then create a ConfigMap on Kubernetes with the key <code>.env</code> containing your environment variables.</p> <p>A ConfigMap like this:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: environment-variables namespace: your-namespace data: .env: | variable1=value1 variable2=value2 </code></pre>
<p>I would like to know what is/are differences between an <strong>api gateway</strong> and <strong>Ingress controller</strong>. People tend to use these terms interchangeably due to similar functionality they offer. When I say, '<em>Ingress controller</em>'; don't confuse it with <strong>Ingress</strong> objects provided by kubernetes. Also, it would be nice if you can explain the scenario where one will be more useful than other.</p> <p>Is api gateway a generic term used for traffic routers in cloud-native world and 'Ingress controller' is implementation of api-gateway in kubernetes world? </p>
<p>An Ingress controller allows a single IP and port to expose all services running in Kubernetes through Ingress rules. The Ingress controller's Service is set to type LoadBalancer, so it is accessible from the public internet.</p> <p>An API gateway is used for application routing, rate limiting, security, request and response handling, and other application-related tasks. Say you have a microservice-based application in which a request needs information collected from multiple microservices. You need a way to distribute the user request to the different services and gather the responses from all the microservices to prepare the final response sent back to the user. An API gateway is the component that does this kind of work for you.</p>
<p>A default Google Kubernetes Engine (GKE) cluster </p> <pre><code>gcloud container clusters create [CLUSTER_NAME] \ --zone [COMPUTE_ZONE] </code></pre> <p>starts with 3 nodes. What's the idea behind that? Shouldn't 2 nodes in the same zone be sufficient for high availability?</p>
<p>Kubernetes uses <a href="https://github.com/etcd-io/etcd" rel="nofollow noreferrer">etcd</a> for state. Etcd uses <a href="https://raft.github.io/" rel="nofollow noreferrer">Raft</a> for consensus to achieve high availability properties.</p> <p>When using a consensus protocol like Raft, you need a <em>majority</em> in voting. With 3 nodes you need 2 of 3 nodes to respond for availability. With 2 nodes, you cannot get a majority with only 1 node, so you need both nodes to be available.</p>
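<p>The arithmetic behind this can be made explicit: with <em>N</em> voting members, a Raft majority is <code>floor(N/2) + 1</code>, and the failure tolerance is <code>N</code> minus that. A quick sketch (the helper names are made up for illustration):</p>

```javascript
// Votes needed for a Raft majority among n members.
function quorum(n) {
  return Math.floor(n / 2) + 1;
}

// How many members can fail while the cluster stays writable.
function failureTolerance(n) {
  return n - quorum(n);
}

for (const n of [1, 2, 3, 5]) {
  console.log(`members=${n} quorum=${quorum(n)} tolerates=${failureTolerance(n)}`);
}
// 2 members tolerate 0 failures (no better than a single member),
// while 3 members tolerate 1, which is why 3 is the minimum useful size.
```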
<p>I have a set of <code>users(dev-team)</code> who need access only to <code>dev</code> and <code>qa</code> namespaces. I created a service account, cluster role and cluster role binding as shown below.</p> <p><strong>Service Account</strong></p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: dev-team </code></pre> <p><strong>Cluster role</strong> </p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: dev-team-users rules: - apiGroups: ["rbac.authorization.k8s.io",""] resources: ["namespaces"] resourceNames: ["dev","qa"] verbs: ["get","list","create"] </code></pre> <p><strong>Cluster role binding</strong> </p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: dev-team-user-bindings roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: dev-team-users subjects: - kind: User name: dev-team namespace: kube-system apiGroup: rbac.authorization.k8s.io </code></pre> <p>When I try to verify the access <code>kubectl get namespaces --as=dev-team</code></p> <p>I get the below error message</p> <pre><code>Error from server (Forbidden): namespaces is forbidden: User "dev-team" cannot list resource "namespaces" in API group "" at the cluster scope </code></pre> <p>I am expecting to see only <code>dev</code> and <code>qa</code> namespaces to show up. Am I missing something here?</p>
<p>The <em>list</em> operation fails because you are using the <code>resourceNames</code> field in the ClusterRole to restrict the namespace objects to grant access too but <em>list</em> would return <em>all</em> namespace objects.</p> <p><strong>But I guess what you really want is to restrict access to the resources <em>in</em> a namespace, and not the namespace objects themselves (which contain not much more information than the name of the namespace).</strong></p> <p>To achieve this, you have to create Roles (or a ClusterRole) and RoleBindings in those namespaces that you want to grant access to the users.</p> <p>Here is how you can grant access to all resources for the <code>dev-team</code> user in the <code>dev</code> and <code>qa</code> namespace but deny access to any resources in any other namespace.</p> <p>Create a ClusterRole (you could also create a Role in the <code>dev</code> and <code>qa</code> namespaces, but using a ClusterRole allows you to define the permissions only once and then reference it from multiple RoleBindings):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: dev-team-users rules: - apiGroups: - '*' resources: - '*' verbs: - '*' </code></pre> <p>Create a RoleBinding in both the <code>dev</code> and <code>qa</code> namespaces:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: dev-team-user-bindings namespace: dev roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: dev-team-users subjects: - kind: User name: dev-team apiGroup: rbac.authorization.k8s.io </code></pre> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: dev-team-user-bindings namespace: qa roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: dev-team-users subjects: - kind: User name: dev-team apiGroup: rbac.authorization.k8s.io </code></pre> <p>Test access:</p> <pre><code>kubectl get pods -n qa --as=dev-team # Succeeds 
kubectl get pods -n dev --as=dev-team # Succeeds kubectl get pods -n default --as=dev-team # Fails kubectl get pods -n kube-system --as=dev-team # Fails </code></pre> <p>See <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">Kubernetes RBAC documentation</a>.</p> <h2>EDIT</h2> <p><strong>1. Identify namespaces a user has created</strong></p> <p>Can't do this with RBAC. You would need some form of <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="noreferrer">auditing</a>.</p> <p><strong>2. Identify namespaces a user has access to</strong></p> <p>Also can't do this easily with RBAC. But you could just iterate through all namespaces and test whether a given user has access:</p> <pre class="lang-sh prettyprint-override"><code>for n in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do echo -n "$n: " kubectl auth can-i get pods -n "$n" --as=dev-team done </code></pre> <p>You can vary the verb/resource part (e.g. <code>get pods</code>) as needed.</p>
<p>What exactly are the practical consequences of missing consensus on a Kubernetes cluster? Or in other words: which functions on a Kubernetes cluster require consensus? What will work, what won't work? </p> <p>For example (and really only for example):</p> <ul> <li>will existing pods keep running?</li> <li>can pods still be scaled horizontally?</li> </ul> <p>Example scenario: A cluster with two nodes loses one node. No consensus possible.</p>
<p>Consensus is fundamental to etcd - the distributed database that Kubernetes is built upon. Without consensus you can <em>read</em> but not <em>write</em> from the database. E.g. if only 1 of 3 nodes is available.</p> <blockquote> <p>When you lose quorum etcd goes into a <strong>read only</strong> state where it can respond with data, but no new actions can take place since it will be unable to decide if the action is allowed.</p> </blockquote> <p><a href="https://blog.containership.io/etcd/" rel="nofollow noreferrer">Understanding Etcd Consensus and How to Recover from Failure</a></p> <p>Kubernetes is designed so pods only need Kubernetes for changes, e.g. deployment. After that, they run independently of Kubernetes in a loosely coupled fashion.</p> <p>Kubernetes is constructed to keep the <em>desired state</em> in the etcd database. Controllers then watch etcd for changes and act upon them. This means that you cannot scale or change any configuration of pods if etcd doesn't have consensus. Kubernetes performs many self-healing operations, but they will not work if etcd is not available, since all operations are done through the ApiServer and etcd.</p> <blockquote> <p>Losing quorum means that <strong>no new actions</strong> can take place. Everything that is running will continue to run until there is a failure.</p> </blockquote> <p><a href="https://www.youtube.com/watch?v=n9VKAKwBj_0" rel="nofollow noreferrer">Understanding Distributed Consensus in etcd and Kubernetes</a></p>
<p>I try to setup a haproxy'd multi-master node setup for Kubernetes, as described in [<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/" rel="nofollow noreferrer">1</a>]. My network configurations are:</p> <ul> <li>haproxy = 192.168.1.213</li> <li>master0|1|2 = 192.168.1.210|211|212</li> <li>worker0|1|2 = 192.168.1.220|221|222 (not interesting at this point)</li> </ul> <p>all hosts are able to connect to each other (DNS is resolved for each node). Each node is running Ubuntu 18.04.3 (LTS). Docker is installed as </p> <ul> <li>docker.io/bionic-updates,bionic-security,now 18.09.7-0ubuntu1~18.04.4 amd64 [installed] </li> </ul> <p>Kubernetes packages currently installed are</p> <ul> <li>kubeadm/kubernetes-xenial,now 1.16.3-00 amd64 [installed]</li> <li>kubectl/kubernetes-xenial,now 1.16.3-00 amd64 [installed]</li> <li>kubelet/kubernetes-xenial,now 1.16.3-00 amd64 [installed,automatic]</li> <li>kubernetes-cni/kubernetes-xenial,now 0.7.5-00 amd64 [installed,automatic]</li> </ul> <p>using an additional repository as described in [<a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management" rel="nofollow noreferrer">2</a>] (i'm aware that i've installed <code>bionic</code> on my VMs, but the "newest" repo available is still <code>xenial</code>).</p> <p>My haproxy is installed as <code>haproxy/bionic,now 2.0.9-1ppa1~bionic amd64 [installed]</code> from [<a href="https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-2.0" rel="nofollow noreferrer">3</a>] repository.</p> <pre><code>global log /dev/log local0 log /dev/log local1 notice chroot /var/lib/haproxy stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners stats timeout 30s user haproxy group haproxy daemon defaults log global mode http retries 2 timeout connect 3000ms timeout client 5000ms timeout server 5000ms frontend kubernetes bind *:6443 option tcplog mode tcp default_backend kubernetes-master-nodes 
backend kubernetes-master-nodes mode tcp balance roundrobin option tcp-check server master0 192.168.1.210:6443 check fall 3 rise 2 server master1 192.168.1.211:6443 check fall 3 rise 2 server master2 192.168.1.212:6443 check fall 3 rise 2 </code></pre> <p>While trying to setup my first control plane, running <code>kubeadm init --control-plane-endpoint "haproxy.my.lan:6443" --upload-certs -v=6</code> as described in [<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#stacked-control-plane-and-etcd-nodes" rel="nofollow noreferrer">4</a>] results in this error:</p> <pre><code>Error writing Crisocket information for the control-plane node </code></pre> <p>full log in [<a href="https://pastebin.com/QD5mbiyN" rel="nofollow noreferrer">5</a>]. I'm pretty lost, if there's a mistake in my haproxy configuration or if there might be some fault in docker or kubernetes itself.</p> <p>My <code>/etc/docker/daemon.json</code> looks like this:</p> <pre><code>{ "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "storage-driver": "overlay2" } </code></pre> <ul> <li>[1] <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/</a></li> <li>[2] <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management</a></li> <li>[3] <a href="https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-2.0" rel="nofollow noreferrer">https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-2.0</a></li> <li>[4] <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#stacked-control-plane-and-etcd-nodes" rel="nofollow 
noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#stacked-control-plane-and-etcd-nodes</a></li> <li>[5] <a href="https://pastebin.com/QD5mbiyN" rel="nofollow noreferrer">https://pastebin.com/QD5mbiyN</a></li> </ul>
<p>Not being able to find a decent solution, I created an issue in the original "kubeadm" project on GitHub, see here: <a href="https://github.com/kubernetes/kubeadm/issues/1930" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/issues/1930</a>.</p> <p>Since the "triage" suggested in the issue was not feasible for me (Ubuntu is pretty much "set"), I ended up setting up another Docker distribution, as described here: <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/" rel="nofollow noreferrer">https://docs.docker.com/install/linux/docker-ce/ubuntu/</a>, purging the installed distribution before starting the new setup.</p> <p>While running Docker (Community) <code>v19.03.5</code> through kubeadm <code>v1.16.3</code> throws the following warning:</p> <pre><code>[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09 </code></pre> <p>the results are pretty fine, and I managed to set up my HA cluster as described in the original documentation.</p> <p>So, this can be considered a <strong>workaround</strong>, <em>NOT</em> a solution to my original issue!</p>
<p>I am trying to implement autoscaling of pods in my cluster. I have tried with a "dummy" deployment and hpa, and I didn't have problem. Now, I am trying to integrate it into our "real" microservices and it keeps returning </p> <pre><code>Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: missing request for memory Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetResourceMetric 18m (x5 over 19m) horizontal-pod-autoscaler unable to get metrics for resource memory: no metrics returned from resource metrics API Warning FailedComputeMetricsReplicas 18m (x5 over 19m) horizontal-pod-autoscaler failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from resource metrics API Warning FailedComputeMetricsReplicas 16m (x7 over 18m) horizontal-pod-autoscaler failed to get memory utilization: missing request for memory Warning FailedGetResourceMetric 4m38s (x56 over 18m) horizontal-pod-autoscaler missing request for memory </code></pre> <p>Here is my hpa:</p> <pre><code> apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: #{Name} namespace: #{Namespace} spec: scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: #{Name} minReplicas: 2 maxReplicas: 5 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 80 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 80 </code></pre> <p>The deployment</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: #{Name} namespace: #{Namespace} spec: replicas: 2 selector: matchLabels: app: #{Name} template: metadata: annotations: linkerd.io/inject: enabled labels: app: #{Name} spec: containers: - name: #{Name} image: #{image} resources: limits: cpu: 
500m memory: "300Mi" requests: cpu: 100m memory: "200Mi" ports: - containerPort: 80 name: #{ContainerPort} </code></pre> <p>I can see both memory and cpu when I do <code>kubectl top pods</code>. I can see the requests and limits as well when I do <code>kubectl describe pod</code>. </p> <pre><code> Limits: cpu: 500m memory: 300Mi Requests: cpu: 100m memory: 200Mi </code></pre> <p>The only difference I can think of is that my dummy service didn't have linkerd sidecar. </p>
<p>For the HPA to work with resource metrics, <em>every</em> container of the Pod needs to have a request for the given resource (CPU or memory).</p> <p>It seems that the Linkerd sidecar container in your Pod does not define a memory request (it might have a CPU request). That's why the HPA complains about <code>missing request for memory</code>.</p> <p>However, you can configure the memory and CPU requests for the Linkerd container with the <code>--proxy-cpu-request</code> and <code>--proxy-memory-request</code> <a href="https://linkerd.io/2/reference/cli/inject/" rel="noreferrer">injection flags</a>.</p> <p>Another possibility is to use <a href="https://linkerd.io/2/reference/proxy-configuration/" rel="noreferrer">these annotations</a> to configure the CPU and memory requests:</p> <ul> <li><code>config.linkerd.io/proxy-cpu-request</code></li> <li><code>config.linkerd.io/proxy-memory-request</code></li> </ul> <p>Defining a memory request in either of these ways should make the HPA work.</p> <p><strong>References:</strong></p> <ul> <li>Linkerd issue initially reported <a href="https://github.com/linkerd/linkerd2/issues/1480" rel="noreferrer">in this issue</a> and fixed in <a href="https://github.com/linkerd/linkerd2/pull/1731" rel="noreferrer">this pull request</a></li> <li>In <a href="https://github.com/linkerd/linkerd2/blob/master/CHANGES.md#stable-230" rel="noreferrer">Linkerd 2.3.0</a>, <code>--proxy-cpu-request</code> and <code>--proxy-memory-request</code> replaced <code>--proxy-cpu</code> and <code>--proxy-memory</code></li> <li>There was a similar issue with the <a href="https://github.com/istio/istio/issues/13977" rel="noreferrer">Istio sidecar</a>, but it seems to have been fixed too</li> </ul>
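<p>For illustration, the annotation-based approach could look like this in the Deployment's Pod template (the request values here are only examples, pick whatever fits your proxy's actual usage):</p> <pre><code>spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
        config.linkerd.io/proxy-cpu-request: 100m
        config.linkerd.io/proxy-memory-request: 20Mi
</code></pre> <p>After re-deploying, <code>kubectl describe pod</code> should show requests on the <code>linkerd-proxy</code> container as well, and the HPA should stop reporting <code>missing request for memory</code>.</p>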
<p>Getting following error while accessing the pod...</p> <blockquote> <p>"OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown command terminated with exit code 126"</p> </blockquote> <p>Tried with /bin/sh &amp; /bin/bash<br> Terminated the node on which this pod is running and bring up the new node, but the result is same.<br> Also tried deleting the pod, but new pod also behaves in the similar way.</p>
<p>This error means the container you're trying to exec into doesn't contain a <code>/bin/bash</code> executable (hence the exit code 126). Images built from <code>scratch</code> or from distroless bases typically ship no shell at all, which is why <code>/bin/sh</code> fails in the same way.</p> <p>If you really want to execute a shell in the container, you have to use a container image that includes a shell (e.g. <code>/bin/sh</code> or <code>/bin/bash</code>).</p>
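<p>If you're unsure what the image ships, you can try listing <code>/bin</code> first (the pod name below is a placeholder); many minimal images such as busybox or alpine only provide <code>sh</code>, not <code>bash</code>:</p> <pre><code># see whether any shell binaries exist in the image
kubectl exec mypod -- ls /bin

# busybox/alpine-style images usually have sh but not bash
kubectl exec -it mypod -- /bin/sh
</code></pre>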
<p>Suppose we have a service replicated to several pods. The first request to the service should be randomly (or by a load-balancing algorithm) routed to a pod, and the mapping 'value_of_certain_header -> pod_location' should be saved somehow so the next request will be routed to that specific pod.</p> <p>Are there any Ingress controllers or other approaches for Kubernetes to implement stickiness to a specific pod by request header? Basically I need the same behaviour that haproxy provides with its stick tables.</p>
<p>Kubernetes Ingress works on OSI Layer 7, so it can take HTTP headers into account, but it only forwards traffic to Kubernetes Services, not to individual Pods.</p> <p>Unfortunately, Kubernetes Services in turn can't deliver traffic to specific Pods depending on HTTP headers, because a Service is basically a set of iptables rules that delivers traffic to Pods while analyzing data only on OSI Layer 4 (IP address, tcp/udp, port number).</p> <p>For example, let's look at the kube-dns Service's iptables rules:</p> <pre><code># kube-dns service -A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU # random load balancing traffic between pods -A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-AALXN3QQZ3U27IAI -A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-WTX2W5TQOZGP42TM # dns pod 1 -A KUBE-SEP-AALXN3QQZ3U27IAI -p udp -m udp -j DNAT --to-destination 10.244.0.16:53 # dns pod 2 -A KUBE-SEP-WTX2W5TQOZGP42TM -p udp -m udp -j DNAT --to-destination 10.244.0.17:53 </code></pre> <p>I can imagine only one realistic way to deliver traffic to a specific Pod based on HTTP headers.</p> <ol> <li><p>Configure a custom Ingress controller that can set the headers for each client session and use the known DNS names of Pods as back-end destination points. I can't recommend a particular solution, so in the worst case it could be created using <a href="https://www.geeksforgeeks.org/creating-a-proxy-webserver-in-python-set-1/" rel="nofollow noreferrer">some examples</a>.</p></li> <li><p>A Kubernetes StatefulSet creates Pods with predictable names like statefulset-name-0, statefulset-name-1, etc. A corresponding Headless Service (ClusterIP: None) creates DNS names for each Pod.</p></li> </ol> <p>For example, for a StatefulSet nginx-ss with three replicas, three Pods would be created and the Service nginx-ss would create three DNS A records for them:</p> <pre><code>nginx-ss-0 1/1 Running 10.244.3.72 nginx-ss-1 1/1 Running 10.244.3.73 nginx-ss-2 1/1 Running 10.244.1.165 nginx-ss-0.nginx-ss.default.svc.cluster.local. 5 IN A 10.244.3.72 nginx-ss-1.nginx-ss.default.svc.cluster.local. 5 IN A 10.244.3.73 nginx-ss-2.nginx-ss.default.svc.cluster.local. 5 IN A 10.244.1.165 </code></pre>
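<p>For reference, a minimal StatefulSet plus headless Service pair like the nginx-ss example above could be defined as follows (image and port are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ss
spec:
  clusterIP: None      # headless: creates a per-Pod DNS A record
  selector:
    app: nginx-ss
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-ss
spec:
  serviceName: nginx-ss   # must reference the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ss
  template:
    metadata:
      labels:
        app: nginx-ss
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
</code></pre>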
<p>Hello every one i have just started to try out kubernetes and cant get my ingress resource and controllers to work correctly and route external traffic to my service in the cluster. My Environment details Docker-desktop for windows kubernetes version 1.10.11 (obtained by command kubectl version) OS=windows10 64 bit</p> <p>i have fetched ingress from the following link <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></p> <p>by using these 2 commands </p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml </code></pre> <p>and then i have created an ingress resource such as </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx name: sample-ingress1 spec: rules: - host: mysuperbot.com http: paths: - path: /sampleingress backend: serviceName: tomcatappservice servicePort: 8082 </code></pre> <p>my service resource is as follows</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: tomcatappservice labels: app: tomcat-app spec: ports: - port: 8082 protocol: TCP targetPort: 8080 selector: app: tomcat-app type: NodePort --- apiVersion: apps/v1beta1 kind: Deployment metadata: name: tomcat-app labels: app: tomcat-app spec: replicas: 5 selector: matchLabels: app: tomcat-app template: metadata: labels: app: tomcat-app spec: containers: - image: tomcatapp:v1.0.0 name: tomcat-app ports: - containerPort: 8080 </code></pre> <p>and my host file has entry </p> <pre><code>mysuperbot.com localhost </code></pre> <p>but after all this when i try to access my service at mysuperbot.com/sampleingress i get error <code>ERR_NAME_RESOLUTION_FAILED</code> which leads me to believe my ingress controller isnt set up rightly so i check 
it with command </p> <pre><code>kubectl get pods -n ingress-nginx </code></pre> <p>and output is as follows </p> <pre><code>NAME READY STATUS RESTARTS AGE nginx-ingress-controller-7d84dd6bdf-vnjx5 0/1 Pending 0 2h </code></pre> <p>which means my ingress pods arent starting up.Need help as to how can i test ingress on a local kubernetes cluster that comes with docker-desktop for windows</p> <p>UPDATE </p> <p>after running command </p> <pre><code>kubectl -n ingress-nginx describe pod nginx-ingress-controller-7d84dd6bdf-vnjx5 Name: nginx-ingress-controller-7d84dd6bdf-vnjx5 Namespace: ingress-nginx Node: &lt;none&gt; Labels: app.kubernetes.io/name=ingress-nginx app.kubernetes.io/part-of=ingress-nginx pod-template-hash=3840882689 Annotations: prometheus.io/port=10254 prometheus.io/scrape=true Status: Pending IP: Controlled By: ReplicaSet/nginx-ingress-controller-7d84dd6bdf Containers: nginx-ingress-controller: Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1 Ports: 80/TCP, 443/TCP Host Ports: 0/TCP, 0/TCP Args: /nginx-ingress-controller --configmap=$(POD_NAMESPACE)/nginx-configuration --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services --udp-services-configmap=$(POD_NAMESPACE)/udp-services --publish-service=$(POD_NAMESPACE)/ingress-nginx --annotations-prefix=nginx.ingress.kubernetes.io Liveness: http-get http://:10254/healthz delay=10s timeout=10s period=10s #success=1 #failure=3 Readiness: http-get http://:10254/healthz delay=0s timeout=10s period=10s #success=1 #failure=3 Environment: POD_NAME: nginx-ingress-controller-7d84dd6bdf-vnjx5 (v1:metadata.name) POD_NAMESPACE: ingress-nginx (v1:metadata.namespace) Mounts: /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-8md24 (ro) Conditions: Type Status PodScheduled False Volumes: nginx-ingress-serviceaccount-token-8md24: Type: Secret (a volume populated by a Secret) SecretName: nginx-ingress-serviceaccount-token-8md24 Optional: false QoS Class: BestEffort 
Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 14s (x583 over 3h) default-scheduler 0/1 nodes are available: 1 node(s) didn't match node selector. </code></pre>
<p>Your ingress controller's Deployment has the node selector <code>kubernetes.io/os=linux</code>, but none of your nodes carries that label, so the Pod stays <code>Pending</code> with <code>node(s) didn't match node selector</code>. You either have to label one of your nodes to match it, or edit the ingress controller's Deployment to remove/adjust the selector.</p> <pre><code>kubectl get nodes --show-labels kubectl label nodes &lt;node-name&gt; &lt;label-key&gt;=&lt;label-value&gt; </code></pre> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/</a></p>
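<p>Alternatively, if you'd rather not label nodes, you can drop the node selector from the controller's Deployment (the Deployment name below is inferred from your Pod name and may differ in your install):</p> <pre><code>kubectl -n ingress-nginx patch deployment nginx-ingress-controller \
  --type=json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector"}]'
</code></pre> <p>After either change, the scheduler should be able to place the controller Pod and it should leave the <code>Pending</code> state.</p>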
<p>I have multiple secrets created from different files. I'd like to store all of them in a common directory <code>/var/secrets/</code>. Unfortunately, I'm unable to do that because Kubernetes throws an <strong>'Invalid value: "/var/secret": must be unique'</strong> error during the Pod validation step. Below is an example of my Pod definition. </p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: run: alpine-secret name: alpine-secret spec: containers: - command: - sleep - "3600" image: alpine name: alpine-secret volumeMounts: - name: xfile mountPath: "/var/secrets/" readOnly: true - name: yfile mountPath: "/var/secrets/" readOnly: true volumes: - name: xfile secret: secretName: my-secret-one - name: yfile secret: secretName: my-secret-two </code></pre> <p>How can I store files from multiple secrets in the same directory?</p>
<h2>Projected Volume</h2> <p>You can use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/" rel="noreferrer">projected volume</a> to have two secrets in the same directory</p> <p><strong>Example</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: run: alpine-secret name: alpine-secret spec: containers: - command: - sleep - "3600" image: alpine name: alpine-secret volumeMounts: - name: xyfiles mountPath: "/var/secrets/" readOnly: true volumes: - name: xyfiles projected: sources: - secret: name: my-secret-one - secret: name: my-secret-two </code></pre>
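<p>To verify the merge worked, you can list the mount point — the keys of both secrets should appear side by side (note that if both secrets define a key with the same file name, the paths collide and the projection will fail):</p> <pre><code>kubectl exec alpine-secret -- ls /var/secrets/
</code></pre>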
<p>My kubernetes cluster looks as follow: </p> <pre><code>k get nodes NAME STATUS ROLES AGE VERSION k8s-1 Ready master 2d22h v1.16.2 k8s-2 Ready master 2d22h v1.16.2 k8s-3 Ready master 2d22h v1.16.2 k8s-4 Ready master 2d22h v1.16.2 k8s-5 Ready &lt;none&gt; 2d22h v1.16.2 k8s-6 Ready &lt;none&gt; 2d22h v1.16.2 k8s-7 Ready &lt;none&gt; 2d22h v1.16.2 </code></pre> <p>As you can see, the cluster consists of 4 master and 3 nodes. </p> <p>These are the running pods: </p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 &lt;none&gt; &lt;none&gt; default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 &lt;none&gt; &lt;none&gt; default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 &lt;none&gt; &lt;none&gt; default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 &lt;none&gt; &lt;none&gt; default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 &lt;none&gt; &lt;none&gt; default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 &lt;none&gt; &lt;none&gt; </code></pre> <p>Why the pods greeter-service-v1-8d97f9bcd-gnsvp and helloweb-77c9476f6d-7f76v are running on the master?</p>
<p>By default, there is no restriction preventing Pods from being scheduled on a master node unless there is a <code>Taint</code> like <code>node-role.kubernetes.io/master:NoSchedule</code>.</p> <p>You can verify whether there is any taint on a master node using <code>kubectl describe node k8s-1</code></p> <p>or <code>kubectl get node k8s-1 -o jsonpath={.spec.taints[]} &amp;&amp; echo</code></p> <p>If you want to put a taint on it, use:</p> <p><code>kubectl taint node k8s-1 node-role.kubernetes.io/master="":NoSchedule</code></p> <p>After adding the taint, no new Pods will be scheduled on this node unless the Pod spec has a matching toleration.</p> <p>Read more about Taints and Tolerations <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">here</a></p>
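<p>If you later need a specific Pod to run on the tainted master anyway, you can add a matching toleration to its spec, for example (Pod name and image are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"       # tolerate the taint regardless of its value
    effect: "NoSchedule"
  containers:
  - name: mypod
    image: nginx
</code></pre>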
<p>I want to deploy Grafana using Kubernetes, but I don't know how to attach provisioned dashboards to the Pod. Storing them as key-value data in a configMap seems to me like a nightmare - example here <a href="https://github.com/do-community/doks-monitoring/blob/master/manifest/dashboards-configmap.yaml" rel="nofollow noreferrer">https://github.com/do-community/doks-monitoring/blob/master/manifest/dashboards-configmap.yaml</a> - in my case it would be many more JSON dashboards - thus the harsh opinion.</p> <p>I didn't have an issue with configuring the Grafana settings, datasources and dashboard providers as configMaps since they are defined in single files, but the dashboards situation is a little bit more tricky for me. </p> <p>All of my dashboards are stored in the repo under "/files/dashboards/", and I wondered how to make them available to the Pod, besides the way described earlier. I thought about using the hostPath object for a sec, but it didn't make sense for a multi-node deployment on different hosts.</p> <p>Maybe it's easy - but I'm fairly new to Kubernetes and can't figure it out - so any help would be much appreciated. Thank you!</p>
<p>You can automatically generate a ConfigMap from a set of files in a directory. Each file will be a key-value pair in the ConfigMap with the file name being the key and the file content being the value (like in your linked example but done automatically instead of manually).</p> <p>Assuming that your dashboard files are stored as, for example:</p> <pre><code>files/dashboards/ ├── k8s-cluster-rsrc-use.json ├── k8s-node-rsrc-use.json └── k8s-resources-cluster.json </code></pre> <p>You can run the following command to directly create the ConfigMap in the cluster:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create configmap my-config --from-file=files/dashboards </code></pre> <p>If you prefer to only generate the YAML manifest for the ConfigMap, you can do:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create configmap my-config --from-file=files/dashboards --dry-run -o yaml &gt;my-config.yaml </code></pre>
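<p>You can then mount the generated ConfigMap into the Grafana Pod. A sketch of the relevant Deployment fragment — the mount path here is an example and must match the <code>path</code> configured in your dashboard provider:</p> <pre><code>spec:
  containers:
  - name: grafana
    image: grafana/grafana
    volumeMounts:
    - name: dashboards
      mountPath: /var/lib/grafana/dashboards   # provider path (assumption)
      readOnly: true
  volumes:
  - name: dashboards
    configMap:
      name: my-config
</code></pre> <p>This way each JSON file in the repo directory ends up as a file under the mount path without hand-writing the ConfigMap.</p>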
<p>We are running hazelcast in embedded mode and the application is running in kubernetes cluster. We are using Kubernetes API for discovery. </p> <p>It was all working fine and now we just started using <code>envoy</code> as sidecar for SSL. Now for both <code>inbound</code> and <code>outbound</code> on TCP at <code>hazelcast</code> port <code>5701</code> we have enabled TLS in envoy but are yet to do changes for kubernetes API call. </p> <p>Right now we are getting below Exception :</p> <blockquote> <p>"class":"com.hazelcast.internal.cluster.impl.DiscoveryJoiner","thread_name":"main","type":"log","data_version":2,"description":"[10.22.69.149]:5701 [dev] [3.9.4] Operation: [get] for kind: [Endpoints] with name: [servicename] in namespace: [namespace] failed.","stack_trace":"j.n.ssl.SSLException: Unrecognized SSL message, plaintext connection?\n\tat s.s.ssl.InputRecord.handleUnknownRecord(InputRecord.java:710)\n\tat s.s.ssl.InputRecord.read(InputRecord.java:527)\n\tat s.s.s.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)\n\tat s.s.s.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)\n\tat s.s.s.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)\n\tat s.s.s.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)\n\tat o.i.c.RealConnection.connectTls(RealConnection.java:281)\n\tat o.i.c.RealConnection.establishProtocol(RealConnection.java:251)\n\tat o.i.c.RealConnection.connect(RealConnection.java:151)\n\tat</p> </blockquote> <p>Can someone help with the overall changes which should be needed for Hazelcast k8s discovery using APIs with envoy as sidecar ?</p>
<p>You can find an example config below showing how to deploy Hazelcast with an Envoy sidecar and use it with mTLS. </p> <p><a href="https://github.com/hazelcast/hazelcast-kubernetes/issues/118#issuecomment-553588983" rel="nofollow noreferrer">https://github.com/hazelcast/hazelcast-kubernetes/issues/118#issuecomment-553588983</a></p> <p>If you want to achieve the same with an embedded architecture, you need to create a headless Kubernetes Service in addition to your microservice's regular Service. Then you pass the headless Service's name to the hazelcast-kubernetes plugin's <strong>service-name</strong> parameter.</p> <p>You can find more info in the hazelcast-kubernetes plugin's <a href="https://github.com/hazelcast/hazelcast-kubernetes/blob/master/README.md" rel="nofollow noreferrer">README.md</a> file. </p> <p><strong>EDIT:</strong> A Hazelcast-Istio-SpringBoot step-by-step guide can be found <a href="https://github.com/hazelcast-guides/hazelcast-istio" rel="nofollow noreferrer">here</a>. </p>
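<p>A minimal sketch of such a headless Service, assuming your embedded Hazelcast members carry the label <code>app: my-service</code> and use the default port 5701:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: hazelcast-headless
spec:
  clusterIP: None        # headless, so discovery sees individual member IPs
  selector:
    app: my-service
  ports:
  - name: hazelcast
    port: 5701
</code></pre> <p>You would then set <code>service-name: hazelcast-headless</code> in the hazelcast-kubernetes discovery configuration of your application.</p>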
<p>I have an <a href="https://www.openebs.io" rel="nofollow noreferrer">OpenEBS</a> setup with 3 data nodes and a default cstor storage class. Creation of a file works pretty well: </p> <pre><code>time dd if=/dev/urandom of=a.log bs=1M count=100 real 0m 0.53s </code></pre> <p>I can delete the file and create it again with roughly the same times. But when I rewrite onto the file it takes ages:</p> <pre><code>time dd if=/dev/urandom of=a.log bs=1M count=100 104857600 bytes (100.0MB) copied, 0.596577 seconds, 167.6MB/s real 0m 0.59s time dd if=/dev/urandom of=a.log bs=1M count=100 104857600 bytes (100.0MB) copied, 16.621222 seconds, 6.0MB/s real 0m 16.62s time dd if=/dev/urandom of=a.log bs=1M count=100 104857600 bytes (100.0MB) copied, 19.621924 seconds, 5.1MB/s real 0m 19.62s </code></pre> <p>When I delete the <code>a.log</code> and write it again, the ~167MB/s are back. Only writing onto an existing file takes so much time. </p> <p>My problem is that I think this is why some of my applications (e.g. databases) are too slow.
Creation of a table in mysql took over 7sek.</p> <p>Here is the spec of my testcluster: </p> <pre><code># Create the OpenEBS namespace apiVersion: v1 kind: Namespace metadata: name: openebs --- # Create Maya Service Account apiVersion: v1 kind: ServiceAccount metadata: name: openebs-maya-operator namespace: openebs --- # Define Role that allows operations on K8s pods/deployments kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: openebs-maya-operator rules: - apiGroups: ["*"] resources: ["nodes", "nodes/proxy"] verbs: ["*"] - apiGroups: ["*"] resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"] verbs: ["*"] - apiGroups: ["*"] resources: ["statefulsets", "daemonsets"] verbs: ["*"] - apiGroups: ["*"] resources: ["resourcequotas", "limitranges"] verbs: ["list", "watch"] - apiGroups: ["*"] resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"] verbs: ["list", "watch"] - apiGroups: ["*"] resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"] verbs: ["*"] - apiGroups: ["volumesnapshot.external-storage.k8s.io"] resources: ["volumesnapshots", "volumesnapshotdatas"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - apiGroups: ["apiextensions.k8s.io"] resources: ["customresourcedefinitions"] verbs: [ "get", "list", "create", "update", "delete", "patch"] - apiGroups: ["*"] resources: [ "disks", "blockdevices", "blockdeviceclaims"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "castemplates", "runtasks"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", 
"cstorvolumeclaims"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"] verbs: ["*" ] - apiGroups: ["coordination.k8s.io"] resources: ["leases"] verbs: ["get", "watch", "list", "delete", "update", "create"] - nonResourceURLs: ["/metrics"] verbs: ["get"] - apiGroups: ["*"] resources: [ "upgradetasks"] verbs: ["*" ] --- # Bind the Service Account with the Role Privileges. kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: openebs-maya-operator subjects: - kind: ServiceAccount name: openebs-maya-operator namespace: openebs - kind: User name: system:serviceaccount:default:default apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: openebs-maya-operator apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: name: maya-apiserver namespace: openebs labels: name: maya-apiserver openebs.io/component-name: maya-apiserver openebs.io/version: 1.3.0 spec: selector: matchLabels: name: maya-apiserver openebs.io/component-name: maya-apiserver replicas: 1 strategy: type: Recreate rollingUpdate: null template: metadata: labels: name: maya-apiserver openebs.io/component-name: maya-apiserver openebs.io/version: 1.3.0 spec: serviceAccountName: openebs-maya-operator containers: - name: maya-apiserver imagePullPolicy: IfNotPresent image: quay.io/openebs/m-apiserver:1.3.0 ports: - containerPort: 5656 env: - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: OPENEBS_SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName - name: OPENEBS_MAYA_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG value: "true" - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL value: "true" - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE value: "quay.io/openebs/jiva:1.3.0" - name: 
OPENEBS_IO_JIVA_REPLICA_IMAGE value: "quay.io/openebs/jiva:1.3.0" - name: OPENEBS_IO_JIVA_REPLICA_COUNT value: "1" - name: OPENEBS_IO_CSTOR_TARGET_IMAGE value: "quay.io/openebs/cstor-istgt:1.3.0" - name: OPENEBS_IO_CSTOR_POOL_IMAGE value: "quay.io/openebs/cstor-pool:1.3.0" - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE value: "quay.io/openebs/cstor-pool-mgmt:1.3.0" - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE value: "quay.io/openebs/cstor-volume-mgmt:1.3.0" - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE value: "quay.io/openebs/m-exporter:1.3.0" - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE value: "quay.io/openebs/m-exporter:1.3.0" - name: OPENEBS_IO_ENABLE_ANALYTICS value: "false" - name: OPENEBS_IO_INSTALLER_TYPE value: "openebs-operator" livenessProbe: exec: command: - /usr/local/bin/mayactl - version initialDelaySeconds: 30 periodSeconds: 60 readinessProbe: exec: command: - /usr/local/bin/mayactl - version initialDelaySeconds: 30 periodSeconds: 60 --- apiVersion: v1 kind: Service metadata: name: maya-apiserver-service namespace: openebs labels: openebs.io/component-name: maya-apiserver-svc spec: ports: - name: api port: 5656 protocol: TCP targetPort: 5656 selector: name: maya-apiserver sessionAffinity: None --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-provisioner namespace: openebs labels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner openebs.io/version: 1.3.0 spec: selector: matchLabels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner replicas: 1 strategy: type: Recreate rollingUpdate: null template: metadata: labels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner openebs.io/version: 1.3.0 spec: serviceAccountName: openebs-maya-operator containers: - name: openebs-provisioner imagePullPolicy: IfNotPresent image: quay.io/openebs/openebs-k8s-provisioner:1.3.0 env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: OPENEBS_NAMESPACE valueFrom: fieldRef: 
fieldPath: metadata.namespace livenessProbe: exec: command: - pgrep - ".*openebs" initialDelaySeconds: 30 periodSeconds: 60 --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-snapshot-operator namespace: openebs labels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator openebs.io/version: 1.3.0 spec: selector: matchLabels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator replicas: 1 strategy: type: Recreate template: metadata: labels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator openebs.io/version: 1.3.0 spec: serviceAccountName: openebs-maya-operator containers: - name: snapshot-controller image: quay.io/openebs/snapshot-controller:1.3.0 imagePullPolicy: IfNotPresent env: - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace livenessProbe: exec: command: - pgrep - ".*controller" initialDelaySeconds: 30 periodSeconds: 60 - name: snapshot-provisioner image: quay.io/openebs/snapshot-provisioner:1.3.0 imagePullPolicy: IfNotPresent env: - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace livenessProbe: exec: command: - pgrep - ".*provisioner" initialDelaySeconds: 30 periodSeconds: 60 --- # This is the node-disk-manager related config. 
# It can be used to customize the disks probes and filters apiVersion: v1 kind: ConfigMap metadata: name: openebs-ndm-config namespace: openebs labels: openebs.io/component-name: ndm-config data: node-disk-manager.config: | probeconfigs: - key: udev-probe name: udev probe state: true - key: seachest-probe name: seachest probe state: false - key: smart-probe name: smart probe state: true filterconfigs: - key: os-disk-exclude-filter name: os disk exclude filter state: true exclude: "/,/etc/hosts,/boot" - key: vendor-filter name: vendor filter state: true include: "" exclude: "CLOUDBYT,OpenEBS" - key: path-filter name: path filter state: true include: "" exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md" --- apiVersion: apps/v1 kind: DaemonSet metadata: name: openebs-ndm namespace: openebs labels: name: openebs-ndm openebs.io/component-name: ndm openebs.io/version: 1.3.0 spec: selector: matchLabels: name: openebs-ndm openebs.io/component-name: ndm updateStrategy: type: RollingUpdate template: metadata: labels: name: openebs-ndm openebs.io/component-name: ndm openebs.io/version: 1.3.0 spec: nodeSelector: "openebs.io/nodegroup": "storage-node" serviceAccountName: openebs-maya-operator hostNetwork: true containers: - name: node-disk-manager image: quay.io/openebs/node-disk-manager-amd64:v0.4.3 imagePullPolicy: Always securityContext: privileged: true volumeMounts: - name: config mountPath: /host/node-disk-manager.config subPath: node-disk-manager.config readOnly: true - name: udev mountPath: /run/udev - name: procmount mountPath: /host/proc readOnly: true - name: sparsepath mountPath: /var/openebs/sparse env: - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: SPARSE_FILE_DIR value: "/var/openebs/sparse" - name: SPARSE_FILE_SIZE value: "10737418240" - name: SPARSE_FILE_COUNT value: "3" livenessProbe: exec: command: - pgrep - ".*ndm" initialDelaySeconds: 30 periodSeconds: 60 
      volumes:
        - name: config
          configMap:
            name: openebs-ndm-config
        - name: udev
          hostPath:
            path: /run/udev
            type: Directory
        - name: procmount
          hostPath:
            path: /proc
            type: Directory
        - name: sparsepath
          hostPath:
            path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-ndm-operator
  namespace: openebs
  labels:
    name: openebs-ndm-operator
    openebs.io/component-name: ndm-operator
    openebs.io/version: 1.3.0
spec:
  selector:
    matchLabels:
      name: openebs-ndm-operator
      openebs.io/component-name: ndm-operator
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-ndm-operator
        openebs.io/component-name: ndm-operator
        openebs.io/version: 1.3.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: node-disk-operator
          image: quay.io/openebs/node-disk-operator-amd64:v0.4.3
          imagePullPolicy: Always
          readinessProbe:
            exec:
              command:
                - stat
                - /tmp/operator-sdk-ready
            initialDelaySeconds: 4
            periodSeconds: 10
            failureThreshold: 1
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # the service account of the ndm-operator pod
            - name: SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
            - name: OPERATOR_NAME
              value: "node-disk-operator"
            - name: CLEANUP_JOB_IMAGE
              value: "quay.io/openebs/linux-utils:3.9"
---
apiVersion: v1
kind: Secret
metadata:
  name: admission-server-certs
  namespace: openebs
  labels:
    app: admission-webhook
    openebs.io/component-name: admission-webhook
type: Opaque
data:
  cert.pem: &lt;...pem...&gt;
  key.pem: &lt;...pem...&gt;
---
apiVersion: v1
kind: Service
metadata:
  name: admission-server-svc
  namespace: openebs
  labels:
    app: admission-webhook
    openebs.io/component-name: admission-webhook-svc
spec:
  ports:
    - port: 443
      targetPort: 443
  selector:
    app: admission-webhook
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-admission-server
  namespace: openebs
  labels:
    app: admission-webhook
    openebs.io/component-name: admission-webhook
    openebs.io/version: 1.3.0
spec:
  replicas: 1
  strategy:
    type: Recreate
    rollingUpdate: null
  selector:
    matchLabels:
      app: admission-webhook
  template:
    metadata:
      labels:
        app: admission-webhook
        openebs.io/component-name: admission-webhook
        openebs.io/version: 1.3.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: admission-webhook
          image: quay.io/openebs/admission-server:1.3.0
          imagePullPolicy: IfNotPresent
          args:
            - -tlsCertFile=/etc/webhook/certs/cert.pem
            - -tlsKeyFile=/etc/webhook/certs/key.pem
            - -alsologtostderr
            - -v=2
            - 2&gt;&amp;1
          volumeMounts:
            - name: webhook-certs
              mountPath: /etc/webhook/certs
              readOnly: true
      volumes:
        - name: webhook-certs
          secret:
            secretName: admission-server-certs
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: validation-webhook-cfg
  labels:
    app: admission-webhook
    openebs.io/component-name: admission-webhook
webhooks:
  # failurePolicy Fail means that an error calling the webhook causes the admission to fail.
  - name: admission-webhook.openebs.io
    failurePolicy: Ignore
    clientConfig:
      service:
        name: admission-server-svc
        namespace: openebs
        path: "/validate"
      caBundle: &lt;...ca..&gt;
    rules:
      - operations: [ "CREATE", "DELETE" ]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["persistentvolumeclaims"]
      - operations: [ "CREATE", "UPDATE" ]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["cstorpoolclusters"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openebs-localpv-provisioner
  namespace: openebs
  labels:
    name: openebs-localpv-provisioner
    openebs.io/component-name: openebs-localpv-provisioner
    openebs.io/version: 1.3.0
spec:
  selector:
    matchLabels:
      name: openebs-localpv-provisioner
      openebs.io/component-name: openebs-localpv-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: openebs-localpv-provisioner
        openebs.io/component-name: openebs-localpv-provisioner
        openebs.io/version: 1.3.0
    spec:
      serviceAccountName: openebs-maya-operator
      containers:
        - name: openebs-provisioner-hostpath
          imagePullPolicy: Always
          image: quay.io/openebs/provisioner-localpv:1.3.0
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: OPENEBS_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: OPENEBS_IO_ENABLE_ANALYTICS
              value: "true"
            - name: OPENEBS_IO_INSTALLER_TYPE
              value: "openebs-operator"
            - name: OPENEBS_IO_HELPER_IMAGE
              value: "quay.io/openebs/openebs-tools:3.8"
          livenessProbe:
            exec:
              command:
                - pgrep
                - ".*localpv"
            initialDelaySeconds: 30
            periodSeconds: 60
</code></pre> <p>I am on Kubernetes: </p> <pre><code>Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
</code></pre> <p>How can I investigate why it takes so long? What could give me some more insight? </p>
<p>Spoke to Peter over Slack. The following changes helped to improve the performance:</p> <ul> <li>Increasing the queue depth of the iSCSI initiator</li> <li>Increasing the CPUs from 2 to 6 on each of the three nodes</li> <li>Creating my own StorageClass that uses ext3 instead of the default ext4</li> </ul> <p>With the above changes, the subsequent write numbers were around 140-150 MB/s.</p>
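<p>For reference, a StorageClass of the kind described in the last bullet might look like the sketch below. This assumes an OpenEBS cStor setup; the pool name <code>cstor-disk-pool</code> and the replica count are placeholders, not values from the cluster above:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-ext3
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"   # hypothetical pool name
      - name: ReplicaCount
        value: "3"
      - name: FSType
        value: "ext3"
provisioner: openebs.io/provisioner-iscsi
</code></pre> <p>A PVC referencing <code>storageClassName: openebs-cstor-ext3</code> then gets its volume formatted with ext3 instead of the default ext4.</p>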
<p>I am implementing Prometheus to monitor my Kubernetes system health, where I have multiple clusters and namespaces. </p> <p>My goal is to monitor only a specific namespace, called <code>default</code>, and just my own pods, excluding Prometheus pods and monitoring details.</p> <p>I tried to specify the namespace in the <code>kubernetes_sd_configs</code> like this:</p> <pre><code>kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
        - 'default'
</code></pre> <p>But I am still getting metrics that I don't need.</p> <p>Here is my configMap.yml:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: default
data:
  prometheus.rules: |-
    groups:
    - name: devopscube demo alert
      rules:
      - alert: High Pod Memory
        expr: sum(container_memory_usage_bytes) &gt; 1
        for: 1m
        labels:
          severity: slack
        annotations:
          summary: High Memory Usage
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    rule_files:
      - /etc/prometheus/prometheus.rules
    alerting:
      alertmanagers:
      - scheme: http
        static_configs:
        - targets:
          - "alertmanager.monitoring.svc:9093"
    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
            - 'default'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https
      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
          namespaces:
            names:
            - 'default'
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
        - role: pod
          namespaces:
            names:
            - 'default'
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name
      - job_name: 'kube-state-metrics'
        static_configs:
        - targets: ['kube-state-metrics.kube-system.svc.cluster.local:8080']
      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
          namespaces:
            names:
            - 'default'
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names:
            - 'default'
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name
</code></pre> <p>For example, I don't want details like the following to be monitored:</p> <pre><code>container_memory_rss{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",id="/system.slice/kubelet.service",instance="minikube",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="minikube",kubernetes_io_os="linux"}
container_memory_rss{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",id="/system.slice/docker.service",instance="minikube",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="minikube",kubernetes_io_os="linux"}
container_memory_rss{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",id="/kubepods/podda7b74d8-b611-4dff-885c-70ea40091b7d",instance="minikube",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="minikube",kubernetes_io_os="linux",namespace="kube-system",pod="default-http-backend-59f7ff8999-ktqnl",pod_name="default-http-backend-59f7ff8999-ktqnl"}
</code></pre>
<p>If you just want to prevent certain metrics from being ingested (i.e. prevent from being saved in the Prometheus database), you can use <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs" rel="noreferrer">metric relabelling</a> to drop them:</p> <pre><code> - job_name: kubernetes-cadvisor metric_relabel_configs: - source_labels: [__name__] regex: container_memory_rss action: drop </code></pre> <p>Note that in the <code>kubernetes-cadvisor</code> job you use the <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#node" rel="noreferrer"><code>node</code></a> service discovery role. This discovers Kubernetes nodes, which are non-namespaced resources, so your namespace restriction to <code>default</code> might not have any effect in this case.</p>
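<p>Along the same lines, if the goal is to drop only the cgroup-level cadvisor series shown in the question (those whose <code>id</code> label points under <code>/system.slice/</code>) rather than a whole metric name, a label-based drop rule works too. This is a sketch; adjust the regex to match exactly the series you want gone:</p> <pre><code>  - job_name: kubernetes-cadvisor
    metric_relabel_configs:
      # drop series coming from systemd cgroups (kubelet.service, docker.service, ...)
      - source_labels: [id]
        regex: /system\.slice/.*
        action: drop
</code></pre> <p>Like any <code>metric_relabel_configs</code> rule, this runs after scraping but before ingestion, so the dropped series never reach storage.</p>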
<p>I have a GET request URL that is ~9k characters long going into a service on my Kubernetes cluster, and the request seems to get stuck in the Kubernetes ingress. When I call the URL from within the container, or from another container in the cluster, it works fine. However, when I go through the domain name I get the following response header:</p> <p><a href="https://i.stack.imgur.com/zxvM7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zxvM7.png" alt="enter image description here"></a></p>
<p>I think the parameter you must modify is <strong>Client Body Buffer Size</strong></p> <blockquote> <p>Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule</p> </blockquote> <pre><code>nginx.ingress.kubernetes.io/client-body-buffer-size: "1000" # 1000 bytes nginx.ingress.kubernetes.io/client-body-buffer-size: 1k # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1K # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte </code></pre> <p>So you must add an <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#client-body-buffer-size" rel="nofollow noreferrer">annotation</a> to your nginx ingress config. </p>
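<p>As a sketch, the annotation goes in the Ingress metadata; the host, service name, and the 16k value below are placeholders to adapt to your setup:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress                  # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/client-body-buffer-size: "16k"
spec:
  rules:
    - host: example.mydomain.com    # hypothetical
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service   # hypothetical
              servicePort: 80
</code></pre> <p>After applying it, <code>kubectl describe ingress my-ingress</code> should show the annotation, and the nginx ingress controller will pick it up for that location.</p>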
<p>I have an application in a container which reads certain data from a configMap, which goes like this:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: hello123
</code></pre> <p>Now I created a secret for the password and mounted it as an env variable while starting the container.</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: appdbpassword
type: Opaque
stringData:
  password: hello123
</code></pre> <p>My pod looks like:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.pod.name }}
spec:
  containers:
    - name: {{ .Values.container.name }}
      image: {{ .Values.image }}
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      env:
        - name: password
          valueFrom:
            secretKeyRef:
              name: appdbpassword
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /app/app-config/application.yaml
          subPath: application.yaml
  volumes:
    - name: config-volume
      configMap:
        name: app-config
</code></pre> <p>I tried using this env variable inside the configMap:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: ${password}
</code></pre> <p>But my application is unable to read this password. Am I missing something here?</p> <p>EDIT:</p> <p>I cannot change application.yaml to any other form, as my server looks for application.yaml in the source path. Is there any way to use that environment variable in the Helm values.yaml file and use it in the configMap?</p>
<p>Your <code>${password}</code> variable will not be replaced by its value, because application.yaml is a static file: Kubernetes mounts ConfigMap data verbatim and never expands environment variables inside it. The variable is only expanded if something interprets the file at runtime.</p> <p>Consider a scenario where, instead of application.yaml, you put this file in the ConfigMap:</p> <pre><code>application.sh: |
  echo "${password}"
</code></pre> <p>Now go inside /app/app-config and you will see the application.sh file. Run <code>sh application.sh</code> and you will see the value of the environment variable, because the shell expands <code>${password}</code> when it interprets the script.</p> <p>I hope this clears up the point.</p>
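<p>One common workaround, sketched below, is to render the mounted file at container start with <code>envsubst</code>. This is not part of the original setup: it assumes the image ships <code>envsubst</code> (from the gettext package), and the application binary name is a placeholder:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-rendered-config
spec:
  containers:
    - name: app
      image: my-app-image          # hypothetical image that includes envsubst
      command: [ "/bin/sh", "-c" ]
      args:
        # substitute env vars into the template, then start the app (hypothetical binary)
        - envsubst &lt; /app/app-config/application.yaml &gt; /app/application.yaml
          &amp;&amp; exec my-app --config /app/application.yaml
      env:
        - name: password
          valueFrom:
            secretKeyRef:
              name: appdbpassword
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /app/app-config
  volumes:
    - name: config-volume
      configMap:
        name: app-config
</code></pre> <p>The ConfigMap stays a plain template containing <code>${password}</code>; only the rendered copy under /app ever holds the real secret value.</p>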