<p>DNS-based service discovery in Kubernetes already helps us establish connections between services. So what is the need for a service mesh like Istio in Kubernetes?</p>
<p>In addition to service discovery, a few other things become available when you use Istio on top of Kubernetes:</p> <ul> <li>Blue-green deployments with request routing to the new release based on a percentage value given by the user</li> <li>Rate limiting</li> <li>Integration with Grafana and Prometheus for monitoring</li> <li>Service graph visualisation with Kiali</li> <li>Circuit breaking</li> <li>Tracing and metrics without having to instrument your applications</li> <li>The ability to secure your connections at the mesh level via mTLS</li> </ul> <p>You can read more about the advantages of <a href="https://medium.com/google-cloud/istio-why-do-i-need-it-18d122838ee3" rel="nofollow noreferrer">having Istio in your cluster here</a>.</p>
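<p>As a concrete illustration of the request-routing point, a percentage-based traffic split in Istio is expressed with a <code>VirtualService</code>. The sketch below is minimal and hedged: the service host and subset names are assumptions, not taken from the question, and the subsets would need matching <code>DestinationRule</code> definitions:</p>

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service            # hypothetical in-mesh service name
  http:
  - route:
    - destination:
        host: my-service
        subset: v1        # current release (defined in a DestinationRule)
      weight: 90
    - destination:
        host: my-service
        subset: v2        # new release receives 10% of requests
      weight: 10
```

<p>Shifting traffic to the new release is then just a matter of adjusting the two weights.</p>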
<p>I have a virtual machine with Minikube inside it.<br> It works great when I run it using <code>minikube start --vm-driver=none</code> command. </p> <p>My host crashed and therefore also the virtual machine. I rebooted the host and then the virtual machine and when I tried to run minikube it failed with lots of information: </p> <pre><code>[root@ubuntu]$ minikube start --vm-driver=none Starting local Kubernetes v1.13.2 cluster... Starting VM... Getting VM IP address... Moving files into cluster... Setting up certs... Connecting to cluster... Setting up kubeconfig... Stopping extra container runtimes... Starting cluster components... E0710 01:30:34.390608 42763 start.go:376] Error starting cluster: kubeadm init: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI output: [init] Using 
Kubernetes version: v1.13.2 [preflight] Running pre-flight checks [WARNING FileExisting-ebtables]: ebtables not found in system path [WARNING FileExisting-socat]: socat not found in system path [WARNING Hostname]: hostname "minikube" could not be reached [WARNING Hostname]: hostname "minikube": lookup minikube on 10.240.0.2:53: server misbehaving [WARNING Port-10250]: Port 10250 is in use [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Activating the kubelet service [certs] Using certificateDir folder "/var/lib/minikube/certs/" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.240.0.23 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.240.0.23 127.0.0.1 ::1] [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Using existing up-to-date kubeconfig file: "/etc/kubernetes/admin.conf" [kubeconfig] Using existing up-to-date kubeconfig file: 
"/etc/kubernetes/kubelet.conf" [kubeconfig] Using existing up-to-date kubeconfig file: "/etc/kubernetes/controller-manager.conf" [kubeconfig] Using existing up-to-date kubeconfig file: "/etc/kubernetes/scheduler.conf" [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker. 
Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster : running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI .: exit status 1 ================================================================================ An error has occurred. Would you like to opt in to sending anonymized crash information to minikube to help prevent future errors? To disable this prompt, run: 'minikube config set WantReportErrorPrompt false' ================================================================================ Please enter your response [Y/n]: Prompt timed out. Bummer, perhaps next time! minikube failed :( exiting with error code 1 </code></pre> <p>When I run <code>minikube status</code> I received: </p> <pre><code>host: Running kubelet: Running apiserver: Stopped kubectl: Correctly Configured: pointing to minikube-vm at 10.240.0.23 </code></pre> <p>It seems that there is problem with the API server. 
The kubelet service is running but logs errors like this: </p> <pre><code>kubelet.go:2266] node "minikube" not found
</code></pre> <p>I ran <code>minikube logs</code> and I see the same errors: </p> <pre><code>://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&amp;limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Jul 10 01:35:52 ubuntu kubelet[43030]: E0710 01:35:52.587064   43030 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&amp;limit=500&amp;resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Jul 10 01:35:52 ubuntu kubelet[43030]: E0710 01:35:52.590979   43030 kubelet.go:2266] node "minikube" not found
</code></pre> <p>It seems that the kubelet has some problem connecting to 127.0.0.1:8443 and therefore reports that node "minikube" is not found. </p> <p>I saw that this issue was already mentioned <a href="https://github.com/kubernetes/minikube/issues/2967#issuecomment-423915639" rel="nofollow noreferrer">here</a>, but I tried the solution and it didn't work for me.<br> I stopped minikube, deleted it, and started it over again, but the same problem appeared. </p> <p>Any idea what the reason for this is? </p>
<p>I solved it like this: </p> <pre><code>minikube stop &amp;&amp; minikube delete
docker rm -f $(docker ps -aq)
reboot
</code></pre> <p>After the reboot it started correctly.<br> But I still don't understand how it could be fixed without the restart.<br> The restart was the missing piece: if you run these commands without a restart and try to start minikube again, it won't start. </p>
<p><em>Let me make a statement first: I'm new to Kubernetes, please take it easy if I'm asking the wrong questions.</em></p> <p>Ok, here is what I'm gonna do. I'm planning to build a Kubernetes cluster for my project using some <strong>physical machines</strong>.</p> <p>I have 1 server for the master and 2 worker nodes. My service dockers (pods) will be allocated by the Kubernetes master, and they will need storage for the database <code>(MySQL)</code>.</p> <p>After searching around, I came up with the solution of a <code>Persistent Volume</code>, but I don't want to use those online cloud services out there such as <strong>Google Cloud</strong> or <strong>Azure Cloud</strong>, etc. That leads me to another solution - the <code>Local Persistent Volume (LPV)</code> - and this is where I'm stuck currently. </p> <p>The problem with an <code>LPV</code> is that it's attached to a specific node, so I wouldn't be able to replicate (back up) the database on other nodes. If something happens to this node, or something goes wrong with the physical disk, I'm gonna lose all the databases, right? </p> <p>The question is: are there any solutions to set up replication of the database using a <code>Local Persistent Volume</code>? For example, I have a database on <strong>Node 1</strong> and a backup version on <strong>Node 2</strong>, so when <strong>Node 1</strong> is not available, the pods will mount the backup database on <strong>Node 2</strong>.</p> <p>Thanks in advance!</p>
<ol> <li><p>You can deploy the database as a StatefulSet using local volumes on the nodes. Just create the volumes and put them in a StorageClass.</p></li> <li><p>For backup, you need to set up replication at the database level (not the volume level) to some other database instance running somewhere else or on some other cluster. </p></li> <li><p>Pod failures are handled by Kubernetes anyway; it will restart the pod if it goes unhealthy.</p></li> <li><p>Node failures can't be handled by a StatefulSet (one node can't replace another; in other words, in a StatefulSet a pod will not be restarted on another node - Kubernetes will wait for the node to come back).</p></li> <li><p>If you are going for a simple single-pod deployment rather than a StatefulSet, you can deploy the database as a single pod and another instance as a second single pod, use node selectors to run them on different nodes, then set up replication from one instance to the other at the database level, and configure your client app to fail over to the fallback instance in case the primary is not available. This needs to be synchronous replication.</p></li> </ol> <p>Links:</p> <p><a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">Run a Single-Instance Stateful Application (MySQL)</a></p> <p><a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">Run a Replicated Stateful Application (MySQL)</a></p>
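<p>To make point 1 concrete, a local volume is declared as a <code>StorageClass</code> plus a pre-created <code>PersistentVolume</code> pinned to one node. This is a hedged sketch; the node name, disk path, and size are placeholders, not values from the question:</p>

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes are not dynamically provisioned
volumeBindingMode: WaitForFirstConsumer     # delay binding until a pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-node1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/mysql                  # hypothetical disk path on the node
  nodeAffinity:                             # ties this volume to one specific node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-node-1                   # hypothetical node name
```

<p>The StatefulSet then requests this class via its <code>volumeClaimTemplates</code>, and the node affinity is exactly why the pod is pinned to that node, as described in point 4.</p>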
<p>Backstory: I was running an Airflow job on a daily schedule, with a <code>start_date</code> of July 1, 2019. The job requested each day's data from a third party, then loaded that data into our database.</p> <p>After running the job successfully for several days, I realized that the third-party data source only refreshed their data once a month. As such, I was simply downloading the same data every day. </p> <p>At that point, I changed the <code>start_date</code> to a year ago (to get previous months' info), and changed the DAG's schedule to run once a month.</p> <p>How do I (in the Airflow UI) restart the DAG completely, such that it recognizes my new <code>start_date</code> and schedule, and runs a complete backfill as if the DAG were brand new?</p> <p>(I know this backfill can be requested via the command line. However, I don't have permissions for the command-line interface and the admin is unreachable.)</p>
<p>Click on the green circle in the Dag Runs column for the job in question in the web interface. This will bring you to a list of all successful runs.</p> <p>Tick the check mark on the top left in the header of the list to select all instances, then in the menu above it choose "With selected" and then "Delete" in the drop down menu. This should clear all existing dag run instances.</p> <p>If catchup_by_default is not enabled on your Airflow instance, make sure <code>catchup=True</code> is set on the DAG until it has finished catching up.</p>
<p>I'm trying to pull Docker images for the s390x architecture (available as Hyperprotect VS on IBM public Cloud), and the web-based Docker Hub search interface doesn't really have a way to list only the specific tags where a Docker image exists for a particular architecture. </p> <p>I tried using <code>docker pull</code>, <code>docker search</code>, and <code>docker manifest</code>, along with some of the "experimental" features. If a Docker image exists, the command will pull it (for example <code>docker pull node:8.11.2</code>), but what if I wanted to see which Node images actually exist in Docker Hub (or any other repository for that matter) for the s390x, arm, ppc64le, etc. architectures?</p> <p>Ideas anyone?</p> <pre><code>$ docker search node
docker pull node:8.11.2-alpine
8.11.2-alpine: Pulling from library/node
no matching manifest for unknown in the manifest list entries
</code></pre>
<p>I am posting the answer from <a href="https://stackoverflow.com/questions/31251356/how-to-get-a-list-of-images-on-docker-registry-v2">this question</a>:</p> <blockquote> <p>For the latest (as of 2015-07-31) version of Registry V2, you can get <a href="https://registry.hub.docker.com/u/distribution/registry/" rel="nofollow noreferrer">this image</a> from DockerHub:</p> <pre><code>docker pull distribution/registry:master </code></pre> <p>List all repositories (effectively images):</p> <pre><code>curl -X GET https://myregistry:5000/v2/_catalog &gt; {"repositories":["redis","ubuntu"]} </code></pre> <p>List all tags for a repository:</p> <pre><code>curl -X GET https://myregistry:5000/v2/ubuntu/tags/list &gt; {"name":"ubuntu","tags":["14.04"]} </code></pre> </blockquote>
<p>I'm trying to reach a service from another Pod, but I'm unable to access it.</p> <p>I'm using GKE. I tried different ports and setups, and looked into the code: <a href="https://github.com/spreaker/prometheus-pgbouncer-exporter" rel="nofollow noreferrer">https://github.com/spreaker/prometheus-pgbouncer-exporter</a></p> <p>My deployment file has:</p> <pre><code>spec:
  containers:
  - name: exporter
    image: ...
    ports:
    - containerPort: 9127
    env:
    ...
</code></pre> <p>And the service:</p> <pre><code>type: NodePort
ports:
- port: 9127
  protocol: "TCP"
  name: exporter
</code></pre> <p>When I describe the svc:</p> <pre><code>Name:                     ...-pg-bouncer-exporter-service
Namespace:                backend
Labels:                   app=...-pg-bouncer-exporter
Annotations:              &lt;none&gt;
Selector:                 app=...-pg-bouncer-exporter
Type:                     NodePort
IP:                       10.0.19.80
Port:                     exporter  9127/TCP
TargetPort:               9127/TCP
NodePort:                 exporter  31296/TCP
Endpoints:                10.36.7.40:9127
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &lt;none&gt;
</code></pre> <p>And the Pod itself:</p> <pre><code>Containers:
  exporter:
    Container ID:   docker://...
    Image:          ...
    Image ID:       docker-pullable:...
    Port:           9127/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 10 Jul 2019 11:17:38 +0200
    Ready:          True
    Restart Count:  0
</code></pre> <p>If I exec into the container, I correctly receive data from curl:</p> <pre><code>/ # curl localhost:9127/metrics
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes ....
</code></pre> <p>Also, doing a port-forward to the service works:</p> <pre><code>$ kubectl port-forward services/...-pg-bouncer-exporter-service 9127:9127 -n backend
Forwarding from 127.0.0.1:9127 -&gt; 9127
Forwarding from [::1]:9127 -&gt; 9127
Handling connection for 9127
</code></pre> <p>But I'm getting this error from another pod in the same network:</p> <pre><code>curl 10.36.7.40:9127/metrics
curl: (7) Failed to connect to 10.36.7.40 port 9127: Connection refused
</code></pre> <p>Also, if I create a TCP liveness probe on 9127, I'm getting this error:</p> <pre><code>Liveness probe failed: dial tcp 10.36.7.40:9127: connect: connection refused
</code></pre> <p>I don't see what I'm doing wrong; I'm using the same setup for other services.</p> <p>Thanks in advance!</p>
<p>So the problem was:</p> <p>Instead of using <code>127.0.0.1</code> for <code>PGBOUNCER_EXPORTER_HOST</code>, I had to use <code>0.0.0.0</code>.</p> <p>That solves the issue.</p>
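<p>For reference, this is roughly where the change goes in the Deployment from the question. It is a sketch: <code>PGBOUNCER_EXPORTER_HOST</code> is the exporter's bind-address environment variable, and the elided fields are kept as in the question:</p>

```yaml
spec:
  containers:
  - name: exporter
    image: ...                       # same exporter image as in the question
    ports:
    - containerPort: 9127
    env:
    - name: PGBOUNCER_EXPORTER_HOST
      value: "0.0.0.0"               # was 127.0.0.1, which only accepts connections from inside the pod
```

<p>Binding to <code>127.0.0.1</code> explains the symptoms exactly: <code>curl localhost:9127</code> from inside the container works, but connections arriving on the pod IP (other pods, the liveness probe) are refused.</p>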
<p>I have a cluster created on IBM Cloud that I can't access. When I try the following command </p> <pre><code>ibmcloud ks cluster-config mycluster </code></pre> <p>I get this response :</p> <p>FAILED</p> <pre><code>{"incidentID":"4d426527e73acd83-CDG","code":"A0006","description":"The specified cluster could not be found. Target a region. If you're using resource groups, make sure that you target the correct resource group.","type":"Authentication","recoveryCLI":"To list the clusters you have access to, run 'ibmcloud ks clusters'. To check the resource group ID of the cluster, run 'ibmcloud ks cluster-get \u003ccluster_name_or_ID\u003e'. To list the resource groups that you have access, run 'ibmcloud resource groups'. To target the resource group, run 'ibmcloud target -g \u003cresource_group\u003e'. To target a region, run 'ibmcloud ks region-set'."} </code></pre> <p>I've tried to contact the IBM support team, but I still have no helpful answer to my issue. But when I try this to get the list of my clusters, it works, and I can see that my cluster actually exists.</p> <p>I also tried all the commands from the JSON error message, but still it doesn't work.</p> <pre><code>ibmcloud ks clusters </code></pre> <p>From what I saw on IBM the <code>ibmcloud ks cluster-config mycluster</code> command is supposed to download the configuration file, but since it doesn't even find my cluster, I don't get anything. 
</p> <p>Hopefully someone has had this issue before and/or can help me figure it out; I'm running out of ideas.</p> <p><strong>UPDATE</strong> </p> <p>I also tried </p> <pre><code>ibmcloud ks cluster-config --cluster mycluster </code></pre> <p>It returns the same JSON error message.</p> <p>My OS is Ubuntu 16.04.</p> <p><strong>UPDATE 2</strong></p> <p>Even though I managed to manually get the cluster config and deploy a "hello-world" app, all commands referring to the cluster, such as <code>ibmcloud ks workers &lt;cluster_name_or_ID&gt;</code> and <code>ibmcloud ks cluster-config &lt;cluster_name_or_ID&gt;</code>, still don't work, and I firmly believe there is no way to fully use IBM Cloud without these commands working correctly.</p>
<p>I have seen problems using <code>ibmcloud ks cluster-config</code> with <em>free clusters</em> in the eu-gb region. If this applies to you, try using the appropriate regional endpoint to see if it helps.</p> <p>For example:</p> <pre><code>ibmcloud ks init --host https://eu-gb.containers.cloud.ibm.com </code></pre> <p>See <a href="https://cloud.ibm.com/docs/containers?topic=containers-regions-and-zones#regions_free" rel="nofollow noreferrer">https://cloud.ibm.com/docs/containers?topic=containers-regions-and-zones#regions_free</a></p>
<p>I want to know how I can start my deployments in a specific order. I am aware of <code>initContainers</code> but that is not working for me. I have a huge platform with around 20 deployments and 5 statefulsets that each of them has their own service, environment variables, volumes, horizontal autoscaler, etc. So it is not possible (or I don't know how) to define them in another yaml deployment as <code>initContainers</code>.</p> <p>Is there another option to launch deployments in a specific order?</p>
<p>To create a dependency between deployments, you wait for a certain condition to become true before rolling out the next one.</p> <p>For example, wait for the pod &quot;busybox1&quot; to contain the status condition of type &quot;Ready&quot;:</p> <pre><code>kubectl wait --for=condition=Ready pod/busybox1
</code></pre> <p>After that, you can roll out the next deployment.</p> <p>For further detail see <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="noreferrer">kubectl-wait</a>.</p> <p>Here is another example from @Michael Hausenblas, <a href="https://hackernoon.com/kubectl-tip-of-the-day-wait-like-a-boss-40a818c423ac" rel="noreferrer">job-dependencies</a>, showing dependencies among Job objects.</p> <p>If you'd like to kick off another job after <code>worker</code> has completed, here you go:</p> <pre><code>$ kubectl -n waitplayground \
    wait --for=condition=complete --timeout=32s \
    job/worker
job.batch/worker condition met
</code></pre>
<p>I got OutOfcpu in Kubernetes on Google Cloud. What does it mean? My pods seem to be working now; however, there were pods in this same revision which got OutOfcpu.</p>
<p>It means that the <a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#kube-scheduler" rel="noreferrer">kube-scheduler</a> can't find any node with available CPU to schedule your pods:</p> <blockquote> <p>kube-scheduler selects a node for the pod in a 2-step operation:</p> <ol> <li>Filtering</li> <li>Scoring</li> </ol> <p>The <em>filtering</em> step finds the set of Nodes where it’s feasible to schedule the Pod. For example, the <code>PodFitsResources</code> filter checks whether a candidate Node has enough available resource to meet a Pod’s specific resource requests.<br> [...]<br> <code>PodFitsResources</code>: Checks if the Node has free resources (eg, CPU and Memory) to meet the requirement of the Pod.</p> </blockquote> <p>Also, as per <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="noreferrer">Assigning Pods to Nodes</a>:</p> <blockquote> <p>If the named node does not have the resources to accommodate the pod, the pod will fail and its reason will indicate why, e.g. OutOfmemory or <strong>OutOfcpu</strong>.</p> </blockquote>
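<p>The CPU that the scheduler's <code>PodFitsResources</code> check reserves comes from the pod's resource requests. As a hedged illustration (the pod name and values below are placeholders, not from the question), requests and limits are declared like this:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo            # hypothetical pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:             # what the scheduler reserves; filtering uses these values
        cpu: "500m"
        memory: "256Mi"
      limits:               # hard cap enforced at runtime
        cpu: "1"
        memory: "512Mi"
```

<p>If no node has 500m of unreserved CPU left, the pod cannot be placed, which surfaces as errors like OutOfcpu.</p>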
<p>My VM (virtual machine) has multiple virtual network cards, so it has multiple IPs. When I installed Kubernetes, etcd was automatically installed and configured, and it automatically selected a default IP, but this IP is not the one I want it to listen on. Where and how can I configure etcd to listen on the right IP?</p> <p>I installed Kubernetes and the first control plane node (master01) works (Ready), but when I join the second control plane node (master02), I get an error like this: "error execution phase check-etcd: error syncing endpoints with etcd: dial tcp 10.0.2.15:2379: connect: connection refused". So I checked the etcd process and found that one of its flags is <code>--advertise-client-urls=10.0.2.15:2379</code>; that IP is not the one I want it to listen on. My real IP is 192.168.56.101, and I want etcd to listen on that IP. What should I do? </p> <p>My Kubernetes cluster version is v1.14.1. </p> <p>I hope etcd can listen on the correct IP so that the second Kubernetes master node can join the cluster successfully.</p>
<p>Judging by the error message it looks like you're using <code>kubeadm</code>. You need to add <code>extraArgs</code> to the etcd section of your <code>ClusterConfiguration</code>, something like (untested):</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  local:
    ...
    extraArgs:
      advertise-client-urls: "https://192.168.56.101:2379"
      listen-client-urls: "https://192.168.56.101:2379,https://127.0.0.1:2379"
    ...
</code></pre> <p>Also see the <code>ClusterConfiguration</code> documentation: <a href="https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1#LocalEtcd" rel="nofollow noreferrer">https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1#LocalEtcd</a></p>
<p>I see the following error when I run my deployment:</p> <p><code>Error from server (NotFound): error when creating "n3deployment.yaml": namespaces "n2" not found</code></p> <p>My n3deployment.yaml has no reference to n2?</p> <p><strong>Step By Step</strong></p> <ol> <li>Ensure everything is empty</li> </ol> <pre><code>c:\temp\k8s&gt;kubectl get pods
No resources found.
c:\temp\k8s&gt;kubectl get svc
No resources found.
c:\temp\k8s&gt;kubectl get deployments
No resources found.
c:\temp\k8s&gt;kubectl get namespaces
NAME          STATUS    AGE
default       Active    20h
docker        Active    20h
kube-public   Active    20h
kube-system   Active    20h
</code></pre> <ol start="2"> <li>Create files</li> </ol> <pre><code>n3namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: n3

n3service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-n3
  namespace: n3
  labels:
    app: my-app-n3
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
    protocol: TCP
  selector:
    app: my-app-n3

n3deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n3
  labels:
    app: my-app-n3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n3
  template:
    metadata:
      labels:
        app: my-app-n3
    spec:
      containers:
      - name: waiter
        image: waiter:v1
        ports:
        - containerPort: 80
</code></pre> <ol start="3"> <li>Apply configuration</li> </ol> <pre><code>c:\temp\k8s&gt;kubectl apply -f n3namespace.yaml
namespace "n3" created
c:\temp\k8s&gt;kubectl apply -f n3service.yaml
service "my-app-n3" created
c:\temp\k8s&gt;kubectl apply -f n3deployment.yaml
Error from server (NotFound): error when creating "n3deployment.yaml": namespaces "n2" not found
</code></pre> <p>I used to have a namespace called <code>n2</code> but, as you can see, it no longer exists.</p>
<p>I had previously created two contexts and my minikube cluster was set to still be in the <code>n2</code> context. I deleted the context, re-ran the command and it worked.</p>
<p>I have a meta Helm chart which wraps up several charts:</p> <p>meta-helm-chart</p> <pre><code>.
|-- Chart.yaml
|-- README.md
|-- charts
|-- dashboards
|-- deployment.yaml
|-- templates
`-- values.yaml
</code></pre> <p>I have like 6 sub-Helm charts in the folder <strong>charts/</strong>: </p> <pre><code>tree charts/ -L 1
charts/
|-- chart-1
|-- chart-2
|-- chart-3
|-- chart-4
|-- prometheus-rabbitmq-exporter
`-- chart-6
</code></pre> <p>When opening the main <strong>values.yaml</strong>:</p> <pre><code>...
rabbitmqTarget: 10.20.30.40
...
</code></pre> <p>In this fashion, the <strong>prometheus-rabbitmq-exporter</strong> Helm chart will be deployed and the RabbitMQ at IP address <code>10.20.30.40</code> will be scraped.</p> <p><strong>What I am trying to do:</strong></p> <p>I would like to have in my <strong>values.yaml</strong>:</p> <pre><code>...
rabbitmqTarget: [10.20.30.40, 50.60.70.80]
...
</code></pre> <p>Unfortunately, the <a href="https://github.com/kbudde/rabbitmq_exporter" rel="nofollow noreferrer">RabbitMQ Prometheus exporter</a> can't scrape multiple targets.</p> <p>Can anybody advise how to <strong>schedule</strong> as many deployments as specified in <code>rabbitmqTarget: [10.20.30.40, 50.60.70.80]</code>, simply by using a Helm chart?</p>
<p>To my knowledge there is no way to do this "simply by using helm". You would need some sort of wrapper / script to run multiple helm installs. </p>
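<p>Such a wrapper could be as simple as a shell loop that runs one <code>helm install</code> per target. This is a hypothetical sketch: the chart path, release naming, and the <code>rabbitmqTarget</code> value key are assumptions based on the question. By default the script only prints the commands it would run; set <code>EXECUTE=1</code> to actually invoke helm:</p>

```shell
#!/bin/sh
# Hypothetical wrapper: one helm release per RabbitMQ target.
TARGETS="10.20.30.40 50.60.70.80"

i=0
cmds=""
for target in $TARGETS; do
  i=$((i + 1))
  # Build the helm invocation for this target (chart path is a placeholder).
  cmd="helm upgrade --install rabbitmq-exporter-$i ./charts/prometheus-rabbitmq-exporter --set rabbitmqTarget=$target"
  cmds="$cmds$cmd
"
  if [ "${EXECUTE:-0}" = "1" ]; then
    $cmd               # actually run helm
  else
    echo "$cmd"        # dry run: just show what would be executed
  fi
done
```

<p>Running it without <code>EXECUTE</code> set prints the two generated helm commands, one per target, so you can review them before installing anything.</p>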
<p>We have big problems with setting limits/requests for our Django web servers, Python processors, and Celery workers. Our current strategy is to look at the usage graphs of the last 7 days:</p> <p>1) get the raw average, excluding peaks</p> <p>2) add a 30% buffer</p> <p>3) set limits to 2 times the requests</p> <p>It works more or less, but then a service's code changes and the limits which were set before are not valid anymore. What are the other strategies?</p> <p>How would you set up limits/requests for these graphs:</p> <p>1) Processors:</p> <p><a href="https://i.stack.imgur.com/si4CG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/si4CG.png" alt="enter image description here"></a></p> <p>2) Celery beat</p> <p><a href="https://i.stack.imgur.com/ny7BE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ny7BE.png" alt="enter image description here"></a></p> <p>3) Django (artifacts probably connected somehow to rollouts)</p> <p><a href="https://i.stack.imgur.com/KDmmB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KDmmB.png" alt="enter image description here"></a></p>
<p>I would suggest you start with the average CPU and memory values that the application uses and then enable <strong>autoscaling</strong>. Kubernetes has multiple kinds of autoscaling:</p> <ul> <li>Horizontal Pod Autoscaler</li> <li>Vertical Pod Autoscaler</li> </ul> <p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal pod autoscaling</a> is commonly used these days. The HPA will automatically create new pods if the pods' CPU or memory usage exceeds the percentage of CPU or amount of memory set as the threshold.</p> <p>Monitor new releases before deployment and see why exactly a new release needs more memory. Troubleshoot and try to reduce the resource consumption. If that is not possible, update the resource requests with the new CPU and memory values.</p>
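<p>Once baseline requests are in place, a CPU-based HPA can be declared like this. It is a sketch: the Deployment name and thresholds are placeholders, and <code>autoscaling/v2beta2</code> is assumed as the API version available on clusters of that era:</p>

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: django-web                # hypothetical Deployment to scale
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: django-web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70    # add pods when average CPU passes 70% of requests
```

<p>Note that the HPA computes utilization relative to the pods' CPU <em>requests</em>, so setting reasonable request values still matters even with autoscaling enabled.</p>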
<p>I tried to set up a Kubernetes Ingress to route external HTTP traffic towards a frontend pod (with path /) and a backend pod (with path /rest/*), but I always get a 400 error instead of the main nginx index.html.</p> <p>So I tried the Google Kubernetes example at <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a>, but I still get a 400 error. Any idea?</p> <p>Following is the deployment descriptor for the frontend "cup-fe" (running nginx with an Angular app):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-fe
  namespace: default
  labels:
    app: cup-fe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "cup-fe"
  template:
    metadata:
      labels:
        app: "cup-fe"
    spec:
      containers:
      - image: "eu.gcr.io/gpi-cup-242708/cup-fe:latest"
        name: "cup-fe"
</code></pre> <p>Next, the NodePort service:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: cup-fe
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: cup-fe
  type: NodePort
</code></pre> <p>And last, but not least, the Ingress to expose the frontend outside:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - host: cup-fe
    http:
      paths:
      - path: /
        backend:
          serviceName: cup-fe
          servicePort: 80
      - path: /rest/*
        backend:
          serviceName: cup-be
          servicePort: 8080
</code></pre> <p>I left out the "cup-be" deployment descriptor (running WildFly) because it is pretty similar to the "cup-fe" one. Please note also that if I create a LoadBalancer service instead of NodePort, I can reach the web page, but I have some CORS problems calling the backend.</p>
<p>I assume that you have used the wrong <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">selector</a> <code>run: cup-fe</code> in your service configuration. Once I replaced that label with <code>app: cup-fe</code> in the <code>cup-fe</code> service configuration, the relevant Pod <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">endpoints</a> showed up and I received successful responses as well.</p> <pre><code>$ kubectl get ep | grep cup-fe|awk '{print $2}' &lt;IP_address&gt;:80,&lt;IP_address&gt;:80 </code></pre> <p>If the issue still persists, just let me know and post a comment below my answer.</p>
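<p>For reference, the corrected Service manifest is the same as yours with only the selector changed:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: cup-fe
  namespace: default
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: cup-fe
  type: NodePort
</code></pre>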
<p>I am looking for the way to pass JNLP through a Kubernetes nginx ingress. Any advice would be strongly appreciated. Specifically I am using a "Rancher 2" cluster with the built-in Nginx ingress.</p> <p>In detail, my issue is: I am executing a Jenkins master as a workflow on top of a Kubernetes cluster and I would like to add an external machine as a slave node to this Jenkins instance.</p> <p>Jenkins JNLP is exposed by an L7 nginx ingress with a dedicated hostname, and this hostname is configured in <code>Advanced/Tunnel connection through</code> of the slave node. I can query this address over http from the slave with curl and it returns a valid response:</p> <p><code>Jenkins-Agent-Protocols: JNLP-connect, JNLP2-connect, JNLP3-connect, JNLP4-connect, Ping Jenkins-Version: 2.176.1 Jenkins-Session: 0fe8c345 Client: 10.42.0.0 Server: 10.42.1.37 Remoting-Minimum-Version: 3.4</code></p> <p>However with JNLP it doesn't work. When I try to register a new node with this command:</p> <p><code>java -jar agent.jar -jnlpUrl http://devops.xxxx.local/jenkins/computer/EPHEMERAL-WIN-NODE/slave-agent.jnlp -secret xxxxxxxxxxxxxxx</code></p> <p>It returns the following error: <code>ConnectionRefusalException: Server didn't accept the handshake: HTTP/1.1 400 Bad Request</code></p> <p>To make sure that it is not a connectivity issue, I changed the tunneling address of the node to the direct address of a worker node and it works well in that case. However, it can't be the solution, because Kubernetes can change this address dynamically.</p>
<p>The JNLP port uses a tcp protocol, not http. You can't http proxy it through nginx. You could try with nginx tcp proxying <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a></p>
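<p>As a rough sketch, assuming the JNLP port is 50000 and Jenkins runs as a service named <code>jenkins</code> in the <code>jenkins</code> namespace (adjust both to your setup), the TCP exposure ConfigMap would look like:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -&gt; namespace/service:port
  "50000": jenkins/jenkins:50000
</code></pre> <p>Note that the ingress controller's own Service also needs port 50000 exposed, and the slave then connects to that port directly instead of going through the HTTP vhost.</p>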
<p>On Windows 10 Pro, I installed docker and the Kubernetes cli. I upgraded the kubectl.exe to version 1.15 by replacing the old one in the docker folder. When I run “kubectl version”, it shows the client version as 1.15, but the server version still shows as 1.10. How can I upgrade the server version to 1.15?</p>
<p>Welcome to SO! I assume you are using the Kubernetes cluster that is available as an installation option of Docker Desktop for Windows. In that case you cannot easily upgrade your Kubernetes cluster (server side), as its version is bundled with the Docker Desktop installer (e.g. <a href="https://docs.docker.com/docker-for-windows/release-notes/" rel="nofollow noreferrer">Docker Community Edition 2.0.0.2 2019-01-16</a> comes with Kubernetes version 1.10.11).</p> <p>If you want full control over the Kubernetes version (server side/control plane), please check the <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="nofollow noreferrer">minikube</a> tool, which lets you specify it by adding the '<code>--kubernetes-version</code>' argument (<code>minikube start --kubernetes-version v1.15.0</code>). With minikube there is still an option to <a href="https://github.com/kubernetes/minikube/blob/master/docs/reusing_the_docker_daemon.md" rel="nofollow noreferrer">re-use</a> the Docker daemon inside the VM (started in the background with the 'minikube start' command).</p>
<p>I have a single node k8s cluster. I have two namespaces, call them <code>n1</code> and <code>n2</code>. I want to deploy the same image, on the same port but in different namespaces.</p> <p>How do I do this?</p> <p>namespace yamls:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: n1 and apiVersion: v1 kind: Namespace metadata: name: n2 </code></pre> <p>service yamls:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-app-n1 namespace: n1 labels: app: my-app-n1 spec: type: LoadBalancer ports: - name: http port: 80 targetPort: http protocol: TCP selector: app: my-app-n1 and apiVersion: v1 kind: Service metadata: name: my-app-n2 namespace: n2 labels: app: my-app-n2 spec: type: LoadBalancer ports: - name: http port: 80 targetPort: http protocol: TCP selector: app: my-app-n2 </code></pre> <p>deployment yamls:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-app-n1 labels: app: my-app-n1 spec: replicas: 1 selector: matchLabels: app: my-app-n1 template: metadata: labels: app: my-app-n1 spec: containers: - name: waiter image: waiter:v1 ports: - containerPort: 80 and apiVersion: apps/v1 kind: Deployment metadata: name: my-app-n2 labels: app: my-app-n2 spec: replicas: 1 selector: matchLabels: app: my-app-n2 template: metadata: labels: app: my-app-n2 spec: containers: - name: waiter image: waiter:v1 ports: - containerPort: 80 </code></pre> <p><code>waiter:v1</code> corresponds to this repo: <a href="https://hub.docker.com/r/adamgardnerdt/waiter" rel="nofollow noreferrer">https://hub.docker.com/r/adamgardnerdt/waiter</a></p> <p>Surely I can do this as namespaces are supposed to represent different environments? eg. nonprod vs. prod. So surely I can deploy identically into two different "environments" aka "namespaces"?</p>
<p>For the Services you have specified namespaces, that is correct.</p> <p>For the Deployments you should also specify <strong>namespaces, otherwise they will go to the default namespace.</strong></p> <p><strong>Example:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-app-n1 namespace: n1 labels: app: my-app-n1 spec: replicas: 1 selector: matchLabels: app: my-app-n1 template: metadata: labels: app: my-app-n1 spec: containers: - name: waiter image: waiter:v1 ports: - containerPort: 80 </code></pre>
<p>I have a GKE cluster (n1-standard-1, master version 1.13.6-gke.13) with 3 nodes on which I have 7 deployments, each running a Spring Boot application. A default Horizontal Pod Autoscaler was created for each deployment, with target CPU 80% and min 1 / max 5 replicas.</p> <p>During normal operation, there is typically 1 pod per deployment and CPU usage at 1-5%. But when the application starts, e.g after performing a rolling update, the CPU usage spikes and the HPA scales up to max number of replicas reporting CPU usage at 500% or more.</p> <p>When multiple deployments are started at the same time, e.g after a cluster upgrade, it often causes various pods to be unschedulable because it's out of CPU, and some pods are at "Preemting" state. </p> <p>I have changed the HPAs to max 2 replicas since currently that's enough. But I will be adding more deployments in the future and it would be nice to know how to handle this correctly. I'm quite new to Kubernetes and GCP so I'm not sure how to approach this.</p> <p>Here is the CPU chart for one of the containers after a cluster upgrade earlier today:</p> <p><a href="https://i.stack.imgur.com/cCh4m.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cCh4m.png" alt="CPU usage"></a></p> <p>Everything runs in the default namespace and I haven't touched the default LimitRange with 100m default CPU request. Should I modify this and set limits? Given that the initialization is resource demanding, what would the proper limits be? Or do I need to upgrade the machine type with more CPU?</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">HPA only takes into account ready pods</a>. Since your pods only experience a spike in CPU usage during the early stages, your best bet is to configure a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">readiness</a> probe that only shows as ready once the CPU usage comes down, or has an <code>initialDelaySeconds</code> set longer than the startup period, to ensure the spike in CPU usage is not taken into account by the HPA.</p>
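<p>As an illustration, the container spec could get something like the following (the endpoint and the delay are assumptions to tune to your app's actual startup time; <code>/actuator/health</code> is the usual Spring Boot health endpoint):</p> <pre><code>containers:
- name: my-spring-boot-app
  image: my-app:latest
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8080
    initialDelaySeconds: 60  # longer than the startup CPU spike
    periodSeconds: 10
</code></pre>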
<p>We are trying to run a Golang app on Kubernetes which talks to Bigtable. The application seems to be stuck at creating the client:</p> <pre><code>bigtableClient := bigtable.NewClient() </code></pre> <p>upon setting the log level to info using:</p> <pre><code>export GRPC_GO_LOG_SEVERITY_LEVEL="INFO" </code></pre> <p>the error message is like this:</p> <pre><code>WARNING: 2019/06/05 08:14:13 grpc: addrConn.createTransport failed to connect to {dns:///bigtable.googleapis.com:443 0 1}. Err :connection error: desc = "transport: Error while dialing dial tcp: address dns:///bigtable.googleapis.com:443: too many colons in address". Reconnecting... WA </code></pre> <p>We tried using the Alpine docker image but doesn't seem to work. Has anybody faced this before?</p>
<p>Upon debugging, the issue was with one of the dependencies used while building the container. Using Go modules <a href="https://github.com/golang/go/wiki/Modules" rel="nofollow noreferrer">https://github.com/golang/go/wiki/Modules</a> to manage package versions solved the problem.</p>
<p>Prometheus deployed using <a href="https://github.com/coreos/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a> can't scrape resources on namespaces other than <code>default</code>, <code>monitoring</code> and <code>kube-system</code>. I added additional namespaces on my <code>jsonnet</code> as described in <a href="https://github.com/coreos/kube-prometheus" rel="nofollow noreferrer">kube-prometheus README</a> but no success...</p> <p>I also tried to create a new <code>ServiceMonitor</code> manually, but no success...</p> <p>I appreciate any help. Thanks.</p>
<p>If you used the pre-compiled manifests <a href="https://github.com/coreos/kube-prometheus/blob/3c109369d4d4e19d2bab9e073797eb971a5eb957/manifests/prometheus-roleBindingSpecificNamespaces.yaml" rel="nofollow noreferrer">here</a> you will only have your service account with 3 rolebindings allowing access to the namespaces you mentioned.</p> <p>You can add more namespaces for example by applying the same <code>roleBinding</code> in more namespaces.</p> <p>This is more secure as opposed to using a <code>clusterRoleBinding</code> since it allows for more finegrained permissions.</p>
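<p>For example, to let Prometheus scrape targets in a namespace called <code>my-namespace</code> (a placeholder), you could add a RoleBinding like the one below; the subject matches the default kube-prometheus service account, and a matching <code>Role</code> has to exist in that namespace as well:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: monitoring
</code></pre>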
<p>I have been trying to authenticate OIDC using DEX for LDAP. I have succeeded in authenticating but the problem is, LDAP search is not returning the groups. Following are my DEX configs and LDAP Data. Please help me out</p> <p>Screenshot: Login successful, groups are empty</p> <p><a href="https://i.stack.imgur.com/YbUwj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YbUwj.png" alt="enter image description here"></a></p> <p><strong>My Dex Config</strong></p> <pre><code># User search maps a username and password entered by a user to a LDAP entry. userSearch: # BaseDN to start the search from. It will translate to the query # "(&amp;(objectClass=person)(uid=&lt;username&gt;))". baseDN: ou=People,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com # Optional filter to apply when searching the directory. #filter: "(objectClass=posixAccount)" # username attribute used for comparing user entries. This will be translated # and combine with the other filter as "(&lt;attr&gt;=&lt;username&gt;)". username: mail # The following three fields are direct mappings of attributes on the user entry. # String representation of the user. idAttr: uid # Required. Attribute to map to Email. emailAttr: mail # Maps to display name of users. No default value. nameAttr: uid # Group search queries for groups given a user entry. groupSearch: # BaseDN to start the search from. It will translate to the query # "(&amp;(objectClass=group)(member=&lt;user uid&gt;))". baseDN: dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com # Optional filter to apply when searching the directory. #filter: "(objectClass=posixGroup)" # Following two fields are used to match a user to a group. It adds an additional # requirement to the filter that an attribute in the group must match the user's # attribute value. userAttr: uid groupAttr: memberUid # Represents group name. 
nameAttr: cn </code></pre> <p><strong>My LDAP Data</strong></p> <blockquote> <p>dn: ou=People,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com ou: People objectClass: organizationalUnit</p> <p>dn: uid=johndoe,ou=People,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com gecos: John Doe uid: johndoe loginShell: / bin / bash mail: john.doe@example.org homeDirectory: / home / jdoe cn: John Doe sn: Doe uidNumber: 10002 objectClass: posixAccount objectClass: inetOrgPerson objectClass: top userPassword: bar gidNumber: 10002</p> <p>dn: uid=janedoe,ou=People,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com gecos: Jane Doe uid: janedoe loginShell: / bin / bash mail: jane.doe@example.org homeDirectory: / home / jdoe cn: Jane Doe sn: Doe uidNumber: 10001 objectClass: posixAccount objectClass: inetOrgPerson objectClass: top userPassword: foo gidNumber: 10001</p> <p>dn: ou=Groups,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com ou: Groups objectClass: organizationalUnit</p> <p>dn: cn=admins,ou=Groups,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com cn: admins objectClass: posixGroup objectClass: top gidNumber: 20001 memberUid: janedoe memberUid: johndoe</p> <p>dn: cn=developers,ou=Groups,dc=ec2-54-185-211-121,dc=us-west-2,dc=compute,dc=amazonaws,dc=com cn: developers objectClass: posixGroup objectClass: top gidNumber: 20002 memberUid: janedoe</p> </blockquote>
<p>Sorry for a late reply but I didn't know the answer until now :)</p> <p>I had the same problem; in my setup I used <code>dex (quay.io/dexidp/dex:v2.16.0)</code> to use MS AD. I used <strong>kubernetes 1.13</strong> in my tests.</p> <p>To generate kubeconfig I used <code>heptiolabs/gangway (gcr.io/heptio-images/gangway:v3.0.0)</code> and to handle dashboard login I used <code>pusher/oauth2_proxy (quay.io/pusher/oauth2_proxy)</code>.</p> <p>I spent a lot of time trying different ldap setups in dex but didn't get the AD groups to show up in the dex log or get them to work in kubernetes, and every example I read was using only users.</p> <p>The problem and solution for me was not in the dex config; dex will request groups from ldap if you tell dex to do so. It's all in the clients. OIDC has a "concept" of scopes and I guess that most (all?) oidc clients implement it, at least both gangway and oauth2-proxy do. So the solution for me was to configure the client (gangway and oauth2-proxy in my case) so that they also ask dex for groups.</p> <p>In gangway I used the following config (including the comments)</p> <pre><code># Used to specify the scope of the requested Oauth authorization. # scopes: ["openid", "profile", "email", "offline_access"] scopes: ["openid", "profile", "email", "offline_access", "groups"] </code></pre> <p>For oauth2-proxy I added this to the args of the deployment</p> <pre><code>- args: - --scope=openid profile email groups </code></pre> <p>And then I could use groups instead of users in my rolebindings. Don't forget to also configure the api-server to use dex for its oidc.</p> <p>Hope that helps</p> <p>-Robert</p>
<p>How do I need to configure my ingress so that my Angular 7 app works?</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: myingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/use-regex: "true" spec: rules: - http: paths: - path: backend: serviceName: angular-service servicePort: 80 - path: /test backend: serviceName: angular-service servicePort: 80 </code></pre> <p>Angular is hosted by the nginx image:</p> <pre><code>FROM nginx:alpine COPY . /usr/share/nginx/html </code></pre> <p>And on Kubernetes:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: test-service spec: selector: app: test ports: - port: 80 --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: test spec: replicas: 2 template: metadata: labels: app: test spec: nodeSelector: "beta.kubernetes.io/os": linux containers: - name: test image: xxx.azurecr.io/test:latest imagePullPolicy: Always ports: - containerPort: 80 </code></pre> <p>On path / everything works. But on /test it doesn't work. Output in console:</p> <blockquote> <p>Uncaught SyntaxError: Unexpected token &lt; runtime.js:1</p> <p>Uncaught SyntaxError: Unexpected token &lt; polyfills.js:1</p> <p>Uncaught SyntaxError: Unexpected token &lt; main.js:1</p> </blockquote> <p>That's why I changed angular.json:</p> <pre><code>"baseHref" : "/test", </code></pre> <p>But now I get the same error on both locations. 
What am I doing wrong?</p> <h2>edit details</h2> <h3>ingress-controller (Version 0.25.0):</h3> <p><a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml</a></p> <h3>ingress-service (for azure):</h3> <p><a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml</a></p> <h3>Test Procedure:</h3> <p>The application is built with </p> <pre><code>"baseHref" : "", </code></pre> <p>When I run the application on my server everything works (by that baseHref is verified, too).</p> <pre><code>$ sudo docker run --name test -p 80:80 test:v1 </code></pre> <p>On Kubernetes the application works on location / (only if I use annotation nginx.ingress.kubernetes.io/rewrite-target: /). 
If I try to enter /test I get an empty page.</p> <h3>Logs:</h3> <p>$ sudo kubectl logs app-589ff89cfb-9plfs</p> <pre><code>10.xxx.x.xx - - [11/Jul/2019:18:16:13 +0000] "GET / HTTP/1.1" 200 574 "https://52.xxx.xx.xx/test" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" "93.xxx.xxx.xxx" </code></pre> <p>$ sudo kubectl logs -n ingress-nginx nginx-ingress-controller-6df4d8b446-6rq65</p> <p>Location /</p> <pre><code>93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET / HTTP/2.0" 200 279 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 261 0.002 [default-app2-service-80] [] 10.xxx.x.48:80 574 0.004 200 95abb8e14b1dd95976cd44f23a2d829a 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /runtime.js HTTP/2.0" 200 2565 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 44 0.001 [default-app2-service-80] [] 10.xxx.x.49:80 9088 0.004 200 d0c2396f6955e82824b1dec60d43b4ef 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /polyfills.js HTTP/2.0" 200 49116 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 25 0.000 [default-app2-service-80] [] 10.xxx.x.48:80 242129 0.000 200 96b5d57f9baf00932818f850abdfecca 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /styles.js HTTP/2.0" 200 5464 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 23 0.009 [default-app2-service-80] [] 10.xxx.x.49:80 16955 0.008 200 c3a7f1f937227a04c9eec9e1eab107b3 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /main.js HTTP/2.0" 200 3193 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; 
x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 21 0.019 [default-app2-service-80] [] 10.xxx.x.49:80 12440 0.016 200 c0e12c3eaec99212444cf916c7d6b27b 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /runtime.js.map HTTP/2.0" 200 9220 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 24 0.004 [default-app2-service-80] [] 10.xxx.x.48:80 9220 0.008 200 f1a820a384ee9e7a61c74ebb8f3cbf68 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /vendor.js HTTP/2.0" 200 643193 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 23 0.130 [default-app2-service-80] [] 10.xxx.x.48:80 3391734 0.132 200 1cf47ed0d8a0e470a131dddb22e8fc48 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /polyfills.js.map HTTP/2.0" 200 241031 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 26 0.001 [default-app2-service-80] [] 10.xxx.x.49:80 241031 0.000 200 75413e809cd9739dc0b9b300826dd107 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /styles.js.map HTTP/2.0" 200 19626 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 23 0.000 [default-app2-service-80] [] 10.xxx.x.48:80 19626 0.004 200 1aa0865cbf07ffb1753d0a3eb630b4d7 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:32 +0000] "GET /favicon.ico HTTP/2.0" 200 1471 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 55 0.048 [default-app2-service-80] [] 10.xxx.x.49:80 5430 0.000 200 2c015813c697c61f3cc6f67bb3bf7f75 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:32 +0000] "GET /vendor.js.map HTTP/2.0" 200 3493937 "-" "Mozilla/5.0 
(Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 24 0.377 [default-app2-service-80] [] 10.xxx.x.49:80 3493937 0.380 200 7b41bbbecafc2fb037c934b5509de245 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:32 +0000] "GET /main.js.map HTTP/2.0" 200 6410 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 22 0.169 [default-app2-service-80] [] 10.xxx.x.48:80 6410 0.104 200 cc23aa543c19ddc0f55b4a922cc05d04 </code></pre> <p>Location /test</p> <pre><code>93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/ HTTP/2.0" 200 274 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 294 0.000 [default-test-service-80] [] 10.xxx.x.xx:80 574 0.000 200 2b560857eba8dd1242e359d5eea0a84b 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/runtime.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 53 0.000 [default-test-service-80] [] 10.xxx.x.49:80 574 0.000 200 2695a85077c64d40a5806fb53e2977e5 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/polyfills.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 30 0.003 [default-test-service-80] [] 10.xxx.x.48:80 574 0.000 200 8bd4e421ee8f7f9db6b002ac40bf2025 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/styles.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 28 0.001 [default-test-service-80] [] 10.xxx.x.49:80 574 0.000 200 db7f0cb93b90a41d623552694e5e74b6 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - 
[12/Jul/2019:06:43:08 +0000] "GET /test/vendor.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 28 0.001 [default-test-service-80] [] 10.xxx.x.48:80 574 0.000 200 0e5eb8fc77a6fb94b87e64384ac083e0 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/main.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 27 0.001 [default-test-service-80] [] 10.xxx.x.49:80 574 0.000 200 408aa3cbfda25f65cb607e1b1ce47566 93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/favicon.ico HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 58 0.001 [default-test-service-80] [] 10.xxx.x.48:80 574 0.000 200 d491507ae073de55a480909b4fab0484 </code></pre>
<p>Without knowing the version of nginx-ingress this is just a guess.</p> <p>Per the documentation at <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target</a> it says:</p> <blockquote> <p>Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.</p> </blockquote> <p>This means that you need to explicitly pass the paths like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: myingress annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/use-regex: "true" spec: rules: - http: paths: - path: / backend: serviceName: angular-service servicePort: 80 - path: /test(/|$)(.*) backend: serviceName: angular-service servicePort: 80 </code></pre>
<p>I'm fairly new into Jenkins Kubernetes Plugin and Kubernetes in general - <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">https://github.com/jenkinsci/kubernetes-plugin</a></p> <p>I want to use the plugin for E2E tests setup inside my CI.</p> <p>Inside my <code>Jenkinsfile</code> I have a <code>podTemplate</code> which looks and used as follows:</p> <pre><code>def podTemplate = """ apiVersion: v1 kind: Pod spec: containers: - name: website image: ${WEBSITE_INTEGRATION_IMAGE_PATH} command: - cat tty: true ports: - containerPort: 3000 - name: cypress resources: requests: memory: 2Gi limit: memory: 4Gi image: ${CYPRESS_IMAGE_PATH} command: - cat tty: true """ pipeline { agent { label 'docker' } stages { stage('Prepare') { steps { timeout(time: 15) { script { ci_machine = docker.build("${WEBSITE_IMAGE_PATH}") } } } } stage('Build') { steps { timeout(time: 15) { script { ci_machine.inside("-u root") { sh "yarn build" } } } } post { success { timeout(time: 15) { script { docker.withRegistry("https://${REGISTRY}", REGISTRY_CREDENTIALS) { integrationImage = docker.build("${WEBSITE_INTEGRATION_IMAGE_PATH}") integrationImage.push() } } } } } } stage('Browser Tests') { agent { kubernetes { label "${KUBERNETES_LABEL}" yaml podTemplate } } steps { timeout(time: 5, unit: 'MINUTES') { container("website") { sh "yarn start" } container("cypress") { sh "yarn test:e2e" } } } } } </code></pre> <p>In <code>Dockerfile</code> that builds an image I added an <code>ENTRYPOINT</code></p> <pre><code>ENTRYPOINT ["bash", "./docker-entrypoint.sh"] </code></pre> <p>However it seems that it's not executed by the kubernetes plugin.</p> <p>Am I missing something?</p>
<p>As per <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">Define a Command and Arguments for a Container</a> docs:</p> <blockquote> <p>The command and arguments that you define in the configuration file override the default command and arguments provided by the container image.</p> </blockquote> <p><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes" rel="nofollow noreferrer">This</a> table summarizes the field names used by Docker and Kubernetes:</p> <pre><code>| Docker field name | K8s field name | |------------------:|:--------------:| | ENTRYPOINT | command | | CMD | args | </code></pre> <p>Defining a <code>command</code> implies ignoring your Dockerfile <code>ENTRYPOINT</code>:</p> <blockquote> <p>When you override the default <code>ENTRYPOINT</code> and <code>CMD</code>, these rules apply:</p> <ul> <li>If you supply a <code>command</code> but no <code>args</code> for a Container, only the supplied <code>command</code> is used. The default <code>ENTRYPOINT</code> and the default <code>CMD</code> defined in the Docker image are ignored.</li> <li>If you supply only <code>args</code> for a Container, the default <code>ENTRYPOINT</code> defined in the Docker image is run with the <code>args</code> that you supplied.</li> </ul> </blockquote> <p>So you need to replace the <code>command</code> in your pod template by <code>args</code>, which will preserve your Dockerfile <code>ENTRYPOINT</code> (acting equivalent to a Dockerfile <code>CMD</code>).</p>
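<p>Applied to your pod template, the cypress container entry would then look roughly like this (this assumes your <code>docker-entrypoint.sh</code> ends with something like <code>exec "$@"</code>, so that the supplied args still keep the container alive for Jenkins):</p> <pre><code>- name: cypress
  image: ${CYPRESS_IMAGE_PATH}
  args:
  - cat        # passed to the image's ENTRYPOINT instead of replacing it
  tty: true
</code></pre>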
<p>I've done this previously in python using:</p> <pre><code> with open(path.join(path.dirname(__file__), "job.yaml")) as f: body= yaml.safe_load(f) try: api_response = api_instance.create_namespaced_job(namespace, body) </code></pre> <p>Looking at source of the nodejs api client:</p> <pre><code> public createNamespacedJob (namespace: string, body: V1Job, includeUninitialized?: boolean, pretty?: string, dryRun?: string, options: any = {}) : Promise&lt;{ response: http.IncomingMessage; body: V1Job; }&gt; { </code></pre> <p>How can I generate that the <code>V1Job</code>?</p> <hr> <p>I've tried the below but get back a very verbose error message / response:</p> <pre><code>const k8s = require('@kubernetes/client-node'); const yaml = require('js-yaml'); const fs = require('fs'); const kc = new k8s.KubeConfig(); kc.loadFromDefault(); const k8sApi = kc.makeApiClient(k8s.BatchV1Api); var namespace = { metadata: { name: 'test123', }, }; try { var job = yaml.safeLoad(fs.readFileSync('job.yaml', 'utf8')); k8sApi.createNamespacedJob(namespace, job).then( (response) =&gt; { console.log('Created namespace'); console.log("Success!") }, (err) =&gt; { console.log(err); console.log(job); console.log("Err") }, ); } catch (e) { console.log(e); } </code></pre>
<ol> <li><code>V1Job</code> seems to be an ordinary object so the below worked. </li> <li>Namespace had to be a <code>string</code> rather than an object...</li> </ol> <pre><code>const k8s = require('@kubernetes/client-node'); const yaml = require('js-yaml'); const fs = require('fs'); const kc = new k8s.KubeConfig(); kc.loadFromDefault(); const k8sApi = kc.makeApiClient(k8s.BatchV1Api); try { var job = yaml.safeLoad(fs.readFileSync('job.yaml', 'utf8')); k8sApi.createNamespacedJob("default", job).then( (response) =&gt; { console.log("Success") }, (err) =&gt; { console.log(err); process.exit(1); }, ); } catch (e) { console.log(e); process.exit(1); } </code></pre>
<p>I have setup Kubernetes in CentOS with 1 master and a separate node. Added Ambassador GW and later a service with mapping. When I try to access the end service using the GW mapping it responds with <code>no healthy upstream</code> message.</p>
<p>In my case the mapping was right but the service was actually in a different namespace than Ambassador. I was missing the <code>.namespace</code> suffix in the service name:</p> <p>So change the service from <code>service: &lt;service.name&gt;</code> to <br><code>service: &lt;service.name&gt;.&lt;namespace&gt;</code></p>
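<p>As an illustration, a Mapping defined via the service annotation with a namespace-qualified backend could look like this (service and namespace names are placeholders):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: my-service_mapping
      prefix: /my-service/
      service: my-service.my-namespace:8080
</code></pre>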
<p>I made a Kafka and zookeeper as a statefulset and exposed Kafka to the outside of the cluster. However, whenever I try to delete the Kafka statefulset and re-create one, the data seemed to be gone? (when I tried to consume all the message using <code>kafkacat</code>, the old messages seemed to be gone) even if it is using the same PVC and PV. I am currently using EBS as my persistent volume. </p> <p>Can someone explain to me what is happening to PV when I delete the statefulset? Please help me.</p>
<p>This is the expected behaviour, because the new StatefulSet will create a new set of PVs and start over (if there is no other choice it can randomly land on old PVs as well, for example with local volumes).</p> <p>A StatefulSet doesn't mean that Kubernetes will remember what you were doing in some other old StatefulSet that you have deleted.</p> <p>StatefulSet means that if the pod is restarted or re-created for some reason, the same volume will be assigned to it. This doesn't mean that the volume will be assigned across StatefulSets.</p>
<p>Is it possible to invoke a Kubernetes Cron job from inside a pod? That is, I have to run this job from the application running in the pod.</p> <p>Do I have to use kubectl inside the pod to execute the job?</p> <p>Appreciate your help</p>
<blockquote> <p>Use the Default Service Account to access the API server. When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/ -o yaml), you can see the spec.serviceAccountName field has been automatically set.</p> <p>You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster. The API permissions of the service account depend on the authorization plugin and policy in use.</p> <p>In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account</p> </blockquote> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p> <ol> <li><p>So the first task is to either grant the required permissions to the pod's default service account, OR create a custom service account and use it inside the pod</p></li> <li><p>Programmatically access the API server using that service account to create the job you need</p></li> <li><p>It could be just a simple curl POST to the API server from inside the pod with the JSON for the job creation</p></li> </ol> <p><a href="https://stackoverflow.com/questions/30690186/how-do-i-access-the-kubernetes-api-from-within-a-pod-container">How do I access the Kubernetes api from within a pod container?</a></p> <p>You can also use an application-specific SDK; for example, if you have a Python application, you can import kubernetes and run the job.</p>
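<p>For the first step, a minimal RBAC sketch (the <code>job-creator</code> names are made up) that lets the <code>default</code> service account create Jobs in its own namespace:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
  namespace: default
rules:
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-creator-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: job-creator
  apiGroup: rbac.authorization.k8s.io
</code></pre>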
<p>I have a very simple web app based on HTML, javascript + little bit jquery, angularjs. It is tested locally on eclipse Jee and on Tomcat and working fine. And its image is working fine on docker locally.</p> <p>I can access on browser using <code>localhost:8080/xxxx</code>, <code>127.0.0.1:8080/xxxx</code>, <code>0.0.0.0:8080</code>. But when I deploy to google Kubernetes, I'm getting "This site can not be reached" if I use the external IP on the browser. I can ping my external IP, but curl is not working. It's not a firewall issue because sample voting app from dockerhub is working fine on my Kubernetes.</p> <p><strong>my Dockerfile:</strong></p> <pre><code>FROM tomcat:9.0 ADD GeoWebv3.war /usr/local/tomcat/webapps/GeoWeb.war expose 8080 </code></pre> <p><strong>my pod yaml</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: front-app-pod labels: name: front-app-pod app: demo-geo-app spec: containers: - name: front-app image: myrepo/mywebapp:v2 ports: - containerPort: 80 </code></pre> <p><strong>my service yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: front-service labels: name: front-service app: demo-geo-app spec: type: LoadBalancer ports: - port: 80 targetPort: 80 selector: name: front-app-pod app: demo-geo-app </code></pre>
<p>Change your yamls like this</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: front-service labels: name: front-service app: demo-geo-app spec: type: LoadBalancer ports: - port: 80 targetPort: 8080 selector: name: front-app-pod app: demo-geo-app </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: front-app-pod labels: name: front-app-pod app: demo-geo-app spec: containers: - name: front-app image: myrepo/mywebapp:v2 ports: - containerPort: 8080 </code></pre> <p>You expose the port 8080 in the docker image. Hence in the service you have to say that the <code>targetPort: 8080</code> to redirect the traffic coming to load balancer on port 80 to the container's port 8080</p>
<p>I am following instructions on <a href="https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/" rel="nofollow noreferrer">site</a> to spin up a multi-node kubernetes cluster using vagrant/ansible. Unfortunately, I get following error:</p> <pre><code>TASK [Configure node ip] ******************************************************* fatal: [k8s-master]: FAILED! =&gt; {"changed": false, "msg": "Destination /etc/default/kubelet does not exist !", "rc": 257} </code></pre> <p>The relevant passage in the Vagrantfile is:</p> <pre><code>- name: Install Kubernetes binaries apt: name: "{{ packages }}" state: present update_cache: yes vars: packages: - kubelet - kubeadm - kubectl - name: Configure node ip lineinfile: path: /etc/default/kubelet line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }} </code></pre> <p>Is it just the wrong path? Which one would it be then ?</p> <p>P.S.: I also get a warning beforehand indicating:</p> <pre><code>[WARNING]: Could not find aptitude. Using apt-get instead </code></pre> <p>Is it not installing the kubelet package and might that be the reason why it doesn't find the file ? How to fix it in that case ?</p>
<p>Updating the node IP in the config file is not required. If you still want to change it for any specific reason, below is the solution.</p> <p>You can change the path to <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> as per this <a href="https://github.com/kubernetes/release/issues/654" rel="noreferrer">change</a>.</p> <p>Before you change it, please check whether this file exists on the nodes.</p> <p><code>/etc/default/kubelet</code> is used by the deb (apt) package; the rpm/yum equivalent is <code>/etc/sysconfig/kubelet</code>.</p>
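<p>If you do want to keep the <code>Configure node ip</code> task, the kubelet's environment file only needs a single line; a sketch of what you would place in the file (the IP is an example) before restarting the kubelet:</p> <pre><code>KUBELET_EXTRA_ARGS=--node-ip=192.168.50.10
</code></pre>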
<p>I have set up an EKS cluster using eksctl using all the default settings and now need to communicate with an external service which uses IP whitelisting. Obviously requests made to the service from my cluster come from whichever node the request was made from, but the list of nodes (and their ips) can and will change frequently so I cannot supply a single IP address for them to whitelist. After looking into this I found that I need to use a NAT Gateway.</p> <p>I am having some trouble getting this to work, I have tried setting AWS_VPC_K8S_CNI_EXTERNALSNAT to true however doing so prevents all outgoing traffic on my cluster, I assume because the return packets do not know where to go so I never get the response. I've tried playing around with the route tables to no avail.</p> <p>Any assistance is much appreciated.</p>
<p>You can follow <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html" rel="noreferrer">this guide</a> to create public subnets and private subnets in your VPC.</p> <p>Then create NAT gateways in public subnets. Also run all EKS nodes in private subnets. The pods in K8S will use NAT gateway to access the internet services.</p>
<p>I created a single-node kubeadm cluster on bare-metal and after some research I would go for a host network approach (<a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network</a>), since NodePort is not an option due to network restrictions. </p> <p>I tried installing nginx-ingress with helm chart through the command:</p> <pre><code> helm install stable/nginx-ingress \ --set controller.hostNetwork=true </code></pre> <p>The problem is that it is creating a LoadBalancer service which is Pending forever and my ingress objects are not being routed:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/whopping-kitten-nginx-ingress-controller-5db858b48c-dp2j8 1/1 Running 0 5m34s pod/whopping-kitten-nginx-ingress-default-backend-5c574f4449-dr4xm 1/1 Running 0 5m34s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 6m43s service/whopping-kitten-nginx-ingress-controller LoadBalancer 10.97.143.40 &lt;pending&gt; 80:30068/TCP,443:30663/TCP 5m34s service/whopping-kitten-nginx-ingress-default-backend ClusterIP 10.106.217.96 &lt;none&gt; 80/TCP 5m34s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/whopping-kitten-nginx-ingress-controller 1/1 1 1 5m34s deployment.apps/whopping-kitten-nginx-ingress-default-backend 1/1 1 1 5m34s NAME DESIRED CURRENT READY AGE replicaset.apps/whopping-kitten-nginx-ingress-controller-5db858b48c 1 1 1 5m34s replicaset.apps/whopping-kitten-nginx-ingress-default-backend-5c574f4449 1 1 1 5m34s </code></pre> <p>Is there any other configuration that needs to be done to succeed in this approach?</p> <p>UPDATE: here are the logs for the ingress-controller pod</p> <pre><code>------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.24.1 Build: git-ce418168f Repository: https://github.com/kubernetes/ingress-nginx 
------------------------------------------------------------------------------- I0707 19:02:50.552631 6 flags.go:185] Watching for Ingress class: nginx W0707 19:02:50.552882 6 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.10 W0707 19:02:50.556215 6 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0707 19:02:50.556368 6 main.go:205] Creating API client for https://10.96.0.1:443 I0707 19:02:50.562296 6 main.go:249] Running in Kubernetes cluster version v1.15 (v1.15.0) - git (clean) commit e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529 - platform linux/amd64 I0707 19:02:51.357524 6 main.go:102] Validated default/precise-bunny-nginx-ingress-default-backend as the default backend. I0707 19:02:51.832384 6 main.go:124] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem W0707 19:02:53.516654 6 store.go:613] Unexpected error reading configuration configmap: configmaps "precise-bunny-nginx-ingress-controller" not found I0707 19:02:53.527297 6 nginx.go:265] Starting NGINX Ingress controller I0707 19:02:54.630002 6 event.go:209] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"staging-ingress", UID:"9852d27b-d8ad-4410-9fa0-57b92fdd6f90", APIVersion:"extensions/v1beta1", ResourceVersion:"801", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/staging-ingress I0707 19:02:54.727989 6 nginx.go:311] Starting NGINX process I0707 19:02:54.728249 6 leaderelection.go:217] attempting to acquire leader lease default/ingress-controller-leader-nginx... W0707 19:02:54.729235 6 controller.go:373] Service "default/precise-bunny-nginx-ingress-default-backend" does not have any active Endpoint W0707 19:02:54.729334 6 controller.go:797] Service "default/face" does not have any active Endpoint. 
W0707 19:02:54.729442 6 controller.go:797] Service "default/test" does not have any active Endpoint. I0707 19:02:54.729535 6 controller.go:170] Configuration changes detected, backend reload required. I0707 19:02:54.891620 6 controller.go:188] Backend successfully reloaded. I0707 19:02:54.891654 6 controller.go:202] Initial sync, sleeping for 1 second. I0707 19:02:54.948639 6 leaderelection.go:227] successfully acquired lease default/ingress-controller-leader-nginx I0707 19:02:54.949148 6 status.go:86] new leader elected: precise-bunny-nginx-ingress-controller-679b9557ff-n57mc [07/Jul/2019:19:02:55 +0000]TCP200000.000 W0707 19:02:58.062645 6 controller.go:373] Service "default/precise-bunny-nginx-ingress-default-backend" does not have any active Endpoint W0707 19:02:58.062676 6 controller.go:797] Service "default/face" does not have any active Endpoint. W0707 19:02:58.062686 6 controller.go:797] Service "default/test" does not have any active Endpoint. W0707 19:03:02.406151 6 controller.go:373] Service "default/precise-bunny-nginx-ingress-default-backend" does not have any active Endpoint W0707 19:03:02.406188 6 controller.go:797] Service "default/face" does not have any active Endpoint. W0707 19:03:02.406357 6 controller.go:797] Service "default/test" does not have any active Endpoint. [07/Jul/2019:19:03:02 +0000]TCP200000.000 W0707 19:03:05.739438 6 controller.go:797] Service "default/face" does not have any active Endpoint. W0707 19:03:05.739467 6 controller.go:797] Service "default/test" does not have any active Endpoint. [07/Jul/2019:19:03:05 +0000]TCP200000.001 W0707 19:03:09.072793 6 controller.go:797] Service "default/face" does not have any active Endpoint. W0707 19:03:09.072820 6 controller.go:797] Service "default/test" does not have any active Endpoint. W0707 19:03:12.406121 6 controller.go:797] Service "default/face" does not have any active Endpoint. W0707 19:03:12.406143 6 controller.go:797] Service "default/test" does not have any active Endpoint. 
[07/Jul/2019:19:03:15 +0000]TCP200000.000 I0707 19:03:54.959607 6 status.go:295] updating Ingress default/staging-ingress status from [] to [{ }] I0707 19:03:54.961925 6 event.go:209] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"staging-ingress", UID:"9852d27b-d8ad-4410-9fa0-57b92fdd6f90", APIVersion:"extensions/v1beta1", ResourceVersion:"1033", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/staging-ingress </code></pre>
<p>@ijaz-ahmad-khan @vkr gave good ideas for solving the problem but the complete steps for setup are:</p> <p>1) Install nginx-ingress with: </p> <pre><code>helm install stable/nginx-ingress --set controller.hostNetwork=true,controller.service.type="",controller.kind=DaemonSet </code></pre> <p>2) In your deployments put:</p> <pre><code>spec: template: spec: hostNetwork: true </code></pre> <p>3) In all your Ingress objects put:</p> <pre><code>metadata: annotations: kubernetes.io/ingress.class: "nginx" </code></pre>
<p>I have a deployment with 2 replicas of Nginx. It has only the liveness probe to monitor the health of the service. Due to high traffic, my liveness probe fails and the Nginx container is restarted, but the pod stays in the Running state and its condition is Ready. Because of that, the Pod IP is not removed from the service endpoints, and requests were sent to the restarting pod, which results in some failures.</p>
<p>As per <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">container probes</a>:</p> <blockquote> <ul> <li><p><code>livenessProbe</code>: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy [...]</p></li> <li><p><code>readinessProbe</code>: Indicates whether the Container is ready to service requests. <strong>If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services</strong> that match the Pod [...]</p></li> </ul> </blockquote> <p>You need to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">add a readinessProbe</a> to grant that the endpoints of unhealthy containers will be removed.</p> <blockquote> <p>Readiness probes are configured similarly to liveness probes. The only difference is that you use the <code>readinessProbe</code> field instead of the <code>livenessProbe</code> field.</p> </blockquote>
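<p>A minimal sketch of such a readiness probe for an Nginx container (the path, port and timings are illustrative and should be tuned to the service):</p> <pre class="lang-yaml prettyprint-override"><code>readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
</code></pre>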
<p>I made a Kafka and zookeeper as a statefulset and exposed Kafka to the outside of the cluster. However, whenever I try to delete the Kafka statefulset and re-create one, the data seemed to be gone? (when I tried to consume all the message using <code>kafkacat</code>, the old messages seemed to be gone) even if it is using the same PVC and PV. I am currently using EBS as my persistent volume. </p> <p>Can someone explain to me what is happening to PV when I delete the statefulset? Please help me.</p>
<p>I would probably look at how the persistent volume is created. If you run the command <code>kubectl get pv</code> you can see the <code>Reclaim policy</code>; if it is set to <code>Retain</code>, then your volume will survive even when the StatefulSet is deleted.</p>
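<p>If your volumes are dynamically provisioned, the reclaim policy comes from the <code>StorageClass</code>; a sketch of a class that keeps EBS volumes around after their claims are deleted (the name is made up; the provisioner assumes the in-tree AWS provisioner):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-retain
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
</code></pre> <p>An existing PV can also be patched in place with <code>kubectl patch pv &lt;pv-name&gt; -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'</code>.</p>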
<p>I know we can set up application with internal or external ip address using load balancer. If I use external Ip address I can reserve it in Azure beforehand as public. Now my question is what if I don't want that ip address to be visible from outside the cluster ?</p> <p>Configuration for internal ip address in kubernetes yaml would be:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: internal-app annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true" spec: loadBalancerIP: 10.240.1.90 type: LoadBalancer ports: - port: 80 selector: app: internal-app </code></pre> <p>Now I've read that the specified IP address must reside in the same subnet as the AKS cluster and must not already be assigned to a resource.</p> <p>If I have ip address for my aks agentpool set up as X.X.0.0/16 and I use for example X.X.0.1 as Ip address for my internal load balancer I'm getting error: 'Private IP address is in reserved subnet range'</p> <p>I see I also have something like internal endpoints in AKS. Can those be used for internal application-to-application communication ?</p> <p>I'm just looking for any way for my apps to talk with each other internally with out exposing them to outside world. Also I'd like for that to be repeatable that means that something like dynamic ip addresses wouldn't be too good. I need the set up to be repeatable so I don't have to change all of the apps internal settings every time Ip address changes accidentally.</p>
<p>The easiest solution is just to use a service of type ClusterIP. It creates a virtual IP address inside the cluster that your apps can use to reach each other. You can also use the DNS name of the service to reach it:</p> <pre><code>service-name.namespace.svc.cluster.local </code></pre> <p>from any pod inside Kubernetes. Either way you don't have to care about IP addresses at all; Kubernetes manages them.</p>
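<p>A sketch of the <code>internal-app</code> service from the question as a purely internal one — drop the load balancer annotation and IP and use <code>ClusterIP</code> (which is also the default type):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: internal-app
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: internal-app
</code></pre> <p>Other pods can then reach it at <code>http://internal-app.&lt;namespace&gt;.svc.cluster.local</code>.</p>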
<p>I have a very simple web app and deployed to google Kubernetes(LoadBalancer) and it is working fine. I can access my <code>index.html</code> like this <code>my_external_IP:8080/myweb/html/index.html</code>.</p> <p>The <code>index.html</code> is a frameset which loads other htmls. But the html URLs are hardcoded like <a href="http://localhost/html/my_frame1.html" rel="nofollow noreferrer">http://localhost/html/my_frame1.html</a> in my code. But Google Kubernetes is complaining about localhost refusing to connect. I can not define localhost to my external IP before I create my war. The external IP is known to me long after.</p> <p>Deployment info is in the following: <a href="https://stackoverflow.com/questions/56997861/loadbalancer-service-not-reachable">LoadBalancer service not reachable</a></p>
<p>As @malathi mentioned, this is not related to Google Kubernetes at all.</p> <p>Instead of using hardcoded values I recommend using <a href="https://stackoverflow.com/a/24028813/2989261">Relative Paths</a>. Basically, all your paths in your initial index.html would need to direct to <code>html/...</code>.</p>
<p>One can create <code>Role</code> or <code>ClusterRole</code> and assign it to user via <code>RoleBinding</code> or <code>ClusterRoleBinding</code>.</p> <p>from user view that have a token, how to get all granted permissions or roles\rolebindings applied to him via <code>kubectl</code>?</p>
<pre><code> # Check to see if I can do everything in my current namespace ("*" means all) kubectl auth can-i '*' '*' # Check to see if I can create pods in any namespace kubectl auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace kubectl auth can-i list deployments.extensions </code></pre> <p>you can get further information with <code>kubectl auth --help</code> command </p> <p>You can also impersonate as a different user to check their permission with the following flag <code>--as</code> or <code>--as-group</code></p> <pre><code>kubectl auth can-i create deployments --namespace default --as john.cena </code></pre>
<p>I have GKE and I need to use customised Ubuntu image for GKE nodes. I am planning to enable autoscaling. So I require to install TLS certificates to trust the private docker registry in each nodes. It's possible for existing nodes manually. But when I enable auto scale the cluster, it will spin up the nodes. Then docker image pull request will fail, because of the docker cannot trust the private docker registry which hosted in my on premise.</p> <p>I have created a customised Ubuntu image and uploaded to image in GCP. I was trying to create a GKE and tried to set the node's OS image as that image which I created.</p> <p>Do you know how to create a GKE cluster with customised Ubuntu Image? Has anyone experienced with incidents like this?</p>
<p>You can add a custom node pool where the image is Ubuntu, set the special GCE instance metadata key <code>startup-script</code>, and put your customization in it.</p> <p>But my advice is to use <code>startup-script-url</code> instead, pointing at a shell script stored in a bucket of the same project: GCE will download it every time a new node is created and execute it on startup as root.</p> <p><a href="https://cloud.google.com/compute/docs/startupscript#cloud-storage" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/startupscript#cloud-storage</a></p>
<p>I am new to helm and I want to know how I can pass variables from the parent package to children. I have a multi microservices application which consists of different helm packages as below:</p> <pre><code>$ tree parent/ parent/ ├── charts/ │ ├── child-A-0.0.1.tgz │ ├── child-B-0.0.1.tgz │ └── child-C-0.0.1.tgz ├── Chart.yaml ├── templates/ │ ├── NOTES.txt │   └── secret.yaml └── values.yaml </code></pre> <p>And for example, I want to set <code>public-IP</code> in the parent <code>values.yaml</code> and when I install the parent that <code>public-IP</code> passed and set in the children as well.</p>
<p><a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">Overriding values of a child chart</a> is described in the Helm documentation.</p> <p>In the parent chart's <code>values.yaml</code> file (or an external file you pass to <code>helm install -f</code>) you can explicitly override values for a subchart:</p> <pre class="lang-yaml prettyprint-override"><code>childA: publicIP: 10.10.10.10 </code></pre> <p>The top-level values key <code>global</code> is also always passed to child charts, but the chart needs to know to look for it.</p> <pre class="lang-yaml prettyprint-override"><code>global: publicIP: 10.10.10.10 </code></pre> <pre class="lang-yaml prettyprint-override"><code>env: - name: PUBLIC_IP value: {{ .Values.global.publicIP }} </code></pre> <p>This resolution happens fairly early in the Helm setup phase, so there's no way to pass a computed value to a subchart, or to pass the same value from the parent chart's top-level config into subcharts without restructuring things into <code>global</code>.</p>
<p>My goal is to have multiple ports defined for a service of type <code>LoadBalancer</code> and I do not want to copy paste the same thing over and over again.</p> <p>I did come to a solution, but sure how I could define the range - I need all values from 50000 to 50999.</p> <p>In my service, I define the range:</p> <pre><code>{{- range $service.ports }} - name: tport protocol: TCP port: {{ . }} {{- end }} </code></pre> <p>And in my values file:</p> <pre><code>ports: - 50000 - 50001 - 50999 </code></pre> <p>How could I define the ports or update the service template to do this?</p>
<p>Put the min and max port as two different values in your values.yaml and use the range in your template like this:</p> <pre><code>{{- range untilStep (.Values.config.min_port|int) (.Values.config.max_port|int) 1 }} - port: {{ . }} targetPort: "tcp-{{ . }}" protocol: TCP name: "tcp-{{ . }}" {{- end }} </code></pre>
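<p>The corresponding <code>values.yaml</code> then carries just the two bounds (the <code>config</code> key matches the paths used in the template above; <code>untilStep</code> excludes the upper bound, so 51000 covers ports 50000 through 50999):</p> <pre class="lang-yaml prettyprint-override"><code>config:
  min_port: 50000
  max_port: 51000
</code></pre>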
<p>I need to connect to windows remote server(shared drive) from GO API hosted in the alpine linux. I tried using tcp,ssh and ftp none of them didn't work. Any suggestions or ideas to tackle this?</p>
<p>Before proceeding with debugging the Go code, it's worth doing some "unskilled labour" within the container in order to ensure the prerequisites are met: </p> <ol> <li>samba client is installed and daemons are running; </li> <li>the target name gets resolved; </li> <li>there are no connectivity issues (routing, firewall rules, etc); </li> <li>there are share access permissions; </li> <li>mounting remote volume is allowed for the container.</li> </ol> <p>Connect to the container: </p> <pre><code>$ docker ps $ docker exec -it container_id /bin/bash </code></pre> <p>Samba daemons are running: </p> <pre><code>$ smbd status $ nmbd status </code></pre> <p>Check that you use the right name format in your code and command lines: </p> <pre><code>UNC notation =&gt; \\server_name\share_name URL notation =&gt; smb://server_name/share_name </code></pre> <p>Target name is resolvable</p> <pre><code>$ nslookup server_name.domain_name $ nmblookup netbios_name $ ping server_name </code></pre> <p>Samba shares are visible</p> <pre><code>$ smbclient -L //server [-U user] # list of shares </code></pre> <p>and accessible (<code>ls</code>, <code>get</code>, <code>put</code> commands provide expected output here)</p> <pre><code>$ smbclient //server/share &gt; ls </code></pre> <p>Try to mount the remote share as suggested by <a href="https://stackoverflow.com/users/7053644/cwadley">@cwadley</a> (mount could be prohibited by default in a Docker container): </p> <pre><code>$ sudo mount -t cifs -o username=geeko,password=pass //server/share /mnt/smbshare </code></pre> <p>For investigation purposes you might use the Samba docker container available at <a href="https://github.com/dperson/samba" rel="nofollow noreferrer">GitHub</a>, or even deploy your application in it since it contains the Samba client and helpful command line tools: </p> <pre><code>$ sudo docker run -it -p 139:139 -p 445:445 -d dperson/samba </code></pre> <p>After you get this working at the Docker level, you could easily reproduce this in 
Kubernetes. </p> <p>You might do the checks from within the running Pod in Kubernetes: </p> <pre><code>$ kubectl get deployments --show-labels $ LABEL=label_value; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name $ kubectl exec pod_name -c container_name -- ping -c1 server_name </code></pre> <p>Having got it working on the command line in Docker and Kubernetes, you should be able to get your program code working as well. </p> <p>Also, there is a really thoughtful discussion on Stack Overflow regarding the Samba topic:<br> <a href="https://stackoverflow.com/questions/27989751/mount-smb-cifs-share-within-a-docker-container">Mount SMB/CIFS share within a Docker container</a></p>
<p>I would like to list all objects that are present in a specific namespace in Kubernetes.</p> <pre><code>kubectl get all -n &lt;namespace&gt; </code></pre> <p>The above command doesn't list all available objects from the given namespace. Is there a way to list them using kubectl?</p> <p>I can list all the objects I want by passing them to kubectl explicitly, but I don't want that:</p> <pre><code>kubectl -n &lt;namespace&gt; get deployment,rs,sts,ds,job,cronjobs -oyaml </code></pre>
<p>First of all, the following rules decide whether a resource will be part of the <strong>all</strong> category or not. </p> <blockquote> <p>Here are the rules to add a new resource to the <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-cli/kubectl-conventions.md#rules-for-extending-special-resource-alias---all" rel="nofollow noreferrer">kubectl get all</a> output.</p> </blockquote> <ul> <li><p>No cluster scoped resources</p></li> <li><p>No namespace admin level resources (limits, quota, policy,<br> authorization rules)</p></li> <li><p>No resources that are potentially unrecoverable (secrets and pvc)</p></li> <li><p>Resources that are considered "similar" to #3 should be grouped the<br> same (configmaps)</p></li> </ul> <p>To <strong>answer your question</strong>, this is taken from <a href="https://stackoverflow.com/questions/47691479/listing-all-resources-in-a-namespace">rcorre's answer</a>:</p> <pre><code>kubectl api-resources --verbs=list --namespaced -o name \ | xargs -n 1 kubectl get --show-kind --ignore-not-found -l &lt;label&gt;=&lt;value&gt; -n &lt;namespace&gt; </code></pre> <p>Lastly, if you want to add a Custom Resource to the <strong>all</strong> category, you need to provide this field in your CRD spec: <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#categories" rel="nofollow noreferrer">custom-resource-definitions:categories</a> </p> <pre><code># categories is a list of grouped resources the custom resource belongs to. categories: - all </code></pre>
<p>I am running selenium hubs and my pods are getting terminated frequently. I would like to look at the logs of the pods which are terminated. How to do it?</p> <pre><code>NAME READY STATUS RESTARTS AGE chrome-75-0-0e5d3b3d-3580-49d1-bc25-3296fdb52666 0/2 Terminating 0 49s chrome-75-0-29bea6df-1b1a-458c-ad10-701fe44bb478 0/2 Terminating 0 23s chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 0/2 ContainerCreating 0 7s kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 Error from server (NotFound): pods &quot;chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5&quot; not found $ kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 --previous Error from server (NotFound): pods &quot;chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5&quot; not found </code></pre>
<p>Running <code>kubectl logs -p</code> will fetch logs from existing resources at API level. This means that terminated pods' logs will be unavailable using this command.</p> <p>As mentioned in other answers, the best way is to have your logs centralized via <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures" rel="noreferrer">logging agents or directly pushing these logs into an external service</a>.</p> <p>Alternatively and given the <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level" rel="noreferrer">logging architecture in Kubernetes</a>, you might be able to <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#system-component-logs" rel="noreferrer">fetch the logs directly from the log-rotate files</a> in the node hosting the pods. However, this option might depend on the Kubernetes implementation as log files might be deleted when the pod eviction is triggered.</p>
<p><strong>Getting this in log while deploying image in openshift:</strong></p> <blockquote> <p>AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.13. Set the 'ServerName' directive globally to suppress this message</p> <p>(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80</p> <p>(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down AH00015: Unable to open logs</p> </blockquote> <p><strong>Dockerfile:</strong></p> <pre><code>FROM httpd:2.4 RUN echo "hello app" &gt; /usr/local/apache2/htdocs/hello.html </code></pre> <p>also getting the error if i use EXPOSE 80</p>
<p>Ports below 1024 are so-called <code>privileged ports</code>: in order to bind to one, a process needs root privileges (specifically the <code>CAP_NET_BIND_SERVICE</code> capability). In your case, you are trying to have httpd listen on port 80, which is in that privileged range. By default, OpenShift does not run any containers inside Pods as root, so the bind fails with <code>Permission denied</code>.</p> <p>You will either have to adjust the user the container runs as, or have httpd listen on an unprivileged port (1024 or above).</p>
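<p>A minimal sketch of the second option, switching httpd to an unprivileged port (the <code>sed</code> expression assumes the stock <code>httpd.conf</code> shipped in the official <code>httpd:2.4</code> image):</p>

```dockerfile
FROM httpd:2.4
# Rewrite the Listen directive so a non-root user can bind the port
RUN sed -i 's/^Listen 80$/Listen 8080/' /usr/local/apache2/conf/httpd.conf
RUN echo "hello app" > /usr/local/apache2/htdocs/hello.html
EXPOSE 8080
```

<p>Your Service or Route would then target port 8080 instead of 80.</p>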
<p>I have GKE and I need to use a customised Ubuntu image for the GKE nodes. I am planning to enable autoscaling, so I need each node to trust the TLS certificate of my private docker registry, which is hosted on premise. Installing the certificate manually works for the existing nodes, but once autoscaling spins up new nodes, docker image pull requests from them will fail because docker on those nodes does not trust the registry.</p> <p>I have created a customised Ubuntu image and uploaded it as an image in GCP. I was trying to create a GKE cluster with the node OS set to that image.</p> <p>Do you know how to create a GKE cluster with a customised Ubuntu image? Has anyone experienced incidents like this?</p>
<p>Node pools in GKE are based off GCE instance templates and can't be modified. That means that you aren't allowed to set metadata such as <em>startup-scripts</em> or make them based on custom images.</p> <p>However, an alternative approach might be <a href="https://serverfault.com/a/940757/504596">deploying a privileged DaemonSet</a> that <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers" rel="nofollow noreferrer">manipulates the underlying OS settings and resources</a>.</p> <p>It is important to mention that granting <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">privileges to resources in Kubernetes</a> must be done carefully.</p>
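<p>A sketch of that approach for this use case: a DaemonSet that copies your registry's CA certificate into docker's certificate directory on every node, so newly autoscaled nodes pick it up automatically. The registry hostname, ConfigMap name, and paths below are assumptions to adapt to your setup:</p>

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-ca-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: registry-ca-installer
  template:
    metadata:
      labels:
        app: registry-ca-installer
    spec:
      containers:
      - name: installer
        image: busybox:1.31
        securityContext:
          privileged: true
        # Copy the CA cert onto the host, then idle so the pod stays Running
        command: [/bin/sh, -c]
        args:
        - mkdir -p /host-certs/registry.example.com &&
          cp /certs/ca.crt /host-certs/registry.example.com/ca.crt &&
          sleep infinity
        volumeMounts:
        - name: ca-cert
          mountPath: /certs
        - name: host-certs
          mountPath: /host-certs
      volumes:
      - name: ca-cert
        configMap:
          name: registry-ca        # ConfigMap holding ca.crt (assumed to exist)
      - name: host-certs
        hostPath:
          path: /etc/docker/certs.d
          type: DirectoryOrCreate
```

<p>Because a DaemonSet schedules one pod per node, every node the autoscaler adds gets the certificate installed shortly after it joins the cluster.</p>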
<p>I can't find in the official <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">docs</a> whether <code>NetworkPolicy</code> blocks <code>NodePort</code> ingress traffic.</p> <pre><code>  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
</code></pre> <p>Considering the NetworkPolicy above - is it expected that it'll block any ingress traffic to <code>NodePort</code>-type services within my namespace?</p> <p>If so, how do I allow such traffic into my namespace? Will <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#ipblock-v1-networking-k8s-io" rel="nofollow noreferrer"><code>ipBlock</code></a> solve it?</p>
<blockquote> <p>Do Kubernetes NetworkPolicies block NodePort traffic?</p> </blockquote> <p>Not really.</p> <p>As it says in the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">docs</a>: "<em>is a specification of how groups of <strong>pods</strong>...</em>". Basically, they are applied to pods or groups of pods. <code>NodePort</code> is defined in Kubernetes as a type of <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a>.</p> <p>You can restrict traffic by applying a NetworkPolicy to your namespace and specifying <strong>ipBlock</strong> as you mentioned. This restricts traffic to your pods specifically, but not to the NodePort itself; it may be all that you need.</p> <p>To restrict traffic to NodePorts you will have to use an external solution, and the right one really depends on your setup. For example, if you are using a cloud provider like AWS you could use <a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html" rel="nofollow noreferrer">security groups</a>.</p> <p>Alternatively, GCP provides <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig" rel="nofollow noreferrer">Google Cloud Armor</a> with a BackendConfig that allows you to control traffic to a service/ingress.</p>
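<p>A sketch of the <code>ipBlock</code> variant, allowing ingress to all pods in the namespace only from an internal range (the CIDR is an assumption; substitute your own network range):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-internal-cidr
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/8
```

<p>One caveat: for traffic arriving via a NodePort, the source IP the policy sees may be the node's own IP after SNAT, depending on the service's <code>externalTrafficPolicy</code>.</p>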
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-armor-backendconfig</a></p> <p>I have only seen examples assigning one securityPolicy, but I want to assign multiple ones.</p> <p>I created the following backend config with 2 policies and applied it to my service with <code>beta.cloud.google.com/backend-config: my-backend-config</code>:</p> <pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backend-config
spec:
  securityPolicy:
    name: "policy-one"
    name: "policy-two"
</code></pre> <p>When I deploy, only "policy-two" is applied. Can I assign two policies somehow? I see no docs for this.</p>
<p>There's nothing in the docs that says you can specify more than one policy. Even the spec uses the singular <em>securityPolicy</em>, and the YAML structure is not an array.</p> <p>Furthermore, if you look at your spec:</p> <pre><code>spec:
  securityPolicy:
    name: "policy-one"
    name: "policy-two"
</code></pre> <p>these are duplicate keys in the same mapping, and YAML parsers typically keep only the last occurrence, dropping <code>name: "policy-one"</code>, which explains why only <code>name: "policy-two"</code> is used. You can check it on <a href="http://www.yamllint.com/" rel="nofollow noreferrer">YAMLlint</a>. To have one more value in your YAML you would have to convert <code>securityPolicy</code> to an array. Something like this:</p> <pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backend-config
spec:
  securityPolicy:
  - name: "policy-one"
  - name: "policy-two"
</code></pre> <p>The issue with this is that it's probably not supported by GCP.</p>
<p>I have not configured any LimitRange or pod limit, but my nodes show requests and limits. Is that a limit, or the max-seen value?</p> <p>I have around 20 active nodes, all of them the same hardware size, but each node shows a different limit with <code>kubectl describe node nodeXX</code>.</p> <p>Does that mean I cannot use more than the limit?</p> <p><a href="https://i.stack.imgur.com/1xMKX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1xMKX.jpg" alt="usage"></a></p> <p><a href="https://i.stack.imgur.com/gwPWC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gwPWC.png" alt="cli"></a></p>
<p>If you check the result of <code>kubectl describe node nodeXX</code> again more carefully, you can see that each pod has the columns <em>CPU Requests</em>, <em>CPU Limits</em>, <em>Memory Requests</em> and <em>Memory Limits</em>. The total <em>Requests</em> and <em>Limits</em> shown in your screenshot should be the sum of your pods' requests and limits.<br /> If you haven't configured limits for your pods then they will show 0%. However, I can see in your screenshot that you have a node-exporter pod on your node. You probably also have pods in the kube-system namespace that you haven't scheduled yourself but are essential for Kubernetes to work.</p> <p>About your question:</p> <blockquote> <p>Does that mean I cannot use more than the limit?</p> </blockquote> <p>This <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">article</a> is great at explaining requests and limits:</p> <blockquote> <p>Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource.</p> <p>Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.</p> </blockquote> <p>For example: if your pod requests 1000Mi of memory and your node only has 500Mi of requested memory left, the pod will never be scheduled. If your pod requests 300Mi and has a limit of 1000Mi it will be scheduled, and Kubernetes will try to not allocate more than 1000Mi of memory to it.</p> <p>It may be OK to surpass the 100% limit, especially in development environments, where we trade performance for capacity. Example:</p> <p><a href="https://i.stack.imgur.com/wGfYC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wGfYC.png" alt="enter image description here" /></a></p>
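<p>For reference, requests and limits are set per container in the pod spec; the totals in <code>kubectl describe node</code> are just the sums of these values across the pods scheduled on that node. A fragment with illustrative values:</p>

```yaml
# Per-container resource settings (values here are examples, not recommendations)
containers:
- name: app
  image: my-app:latest
  resources:
    requests:
      cpu: "250m"        # guaranteed; used by the scheduler for placement
      memory: "300Mi"
    limits:
      cpu: "500m"        # hard ceiling; CPU is throttled, memory overuse is OOM-killed
      memory: "1000Mi"
```
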
<p>I have created a Kubernetes cluster and installed docker on each node. When I try to pull or push an image to my local registry, using <code>docker push local_registry_addr:port/image_id</code>, I get the following response: <code>Get local_registry_addr:port/v2: http: server gave HTTP response to HTTPS client</code>.</p> <p>This happens although I got the certificate from the registry server and added it as a certificate on my docker server. If I try <code>wget local_registry_addr:port</code>, I get <code>200 OK</code>.</p> <p>How can I fix it? Is there anything I need to configure, perhaps?</p>
<p>The problem was that I wasn't supposed to add the port: using <code>docker push local_registry_addr/image_id</code> worked fine.</p>
<p>K8 Version:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I tried to launch spinnaker pods(<a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple/svcs" rel="noreferrer">yaml files here</a>). I choose <code>Flannel</code>(<code>kubectl apply -f kube-flannel.yml</code>) while installing K8. Then I see the pods are not starting, it is struck in "ContainerCreating" status. I <code>kubectl describe</code> a pod, showing <code>NetworkPlugin cni failed to set up pod</code></p> <pre><code>veeru@ubuntu:/opt/spinnaker/experimental/kubernetes/simple$ kubectl describe pod data-redis-master-v000-38j80 --namespace=spinnaker Name: data-redis-master-v000-38j80 Namespace: spinnaker Node: ubuntu/192.168.6.136 Start Time: Thu, 01 Jun 2017 02:54:14 -0700 Labels: load-balancer-data-redis-server=true replication-controller=data-redis-master-v000 Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"spinnaker","name":"data-redis-master-v000","uid":"43d4a44c-46b0-11e7-b0e1-000c29b... 
Status: Pending IP: Controllers: ReplicaSet/data-redis-master-v000 Containers: redis-master: Container ID: Image: gcr.io/kubernetes-spinnaker/redis-cluster:v2 Image ID: Port: 6379/TCP State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Limits: cpu: 100m Requests: cpu: 100m Environment: MASTER: true Mounts: /redis-master-data from data (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-71p4q (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: data: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: default-token-71p4q: Type: Secret (a volume populated by a Secret) SecretName: default-token-71p4q Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 45m 45m 1 default-scheduler Normal Scheduled Successfully assigned data-redis-master-v000-38j80 to ubuntu 43m 43m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 8265d80732e7b73ebf8f1493d40403021064b61436c4c559b41330e7592fd47f" 43m 43m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: b972862d763e621e026728073deb9a304748c4ec4522982db0a168663ab59d36 42m 42m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed 
to retrieve network namespace path: Error: No such container: 72b39083a3a81c0da1d4b7fa65b5d6450b62a3562a05452c27b185bc33197327" 41m 41m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d315511bfa9f6f09d7ef4cd277bde44e4885291ea566e3089460356c1ed34413" 40m 40m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: a03d776d2d7c5c4ae9c1ec31681b0b6e40759326a452916cff0e60c4d4e2c954" 40m 40m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: acf30a4aacda0c53bdbb8bc2d416704720bd1b623c43874052b4029f15950052" 39m 39m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: ea49f5f9428d585be7138f4ebce54f713eef549b16104a3d7aa728175b6ebc2a" 38m 38m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "447d302c-46b0-11e7-b0e1-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace 
path: Error: No such container: ec2483435b4b22576c9bd7bffac5d67d53893c189c0cf26aca1ae6af79d09914" 38m 1m 39 kubelet, ubuntu Warning FailedSync (events with common reason combined) 45m 1s 448 kubelet, ubuntu Normal SandboxChanged Pod sandbox changed, it will be killed and re-created. 45m 0s 412 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "data-redis-master-v000-38j80_spinnaker(447d302c-46b0-11e7-b0e1-000c29b1270f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"data-redis-master-v000-38j80_spinnaker(447d302c-46b0-11e7-b0e1-000c29b1270f)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"data-redis-master-v000-38j80_spinnaker\" network: open /run/flannel/subnet.env: no such file or directory" </code></pre> <p>How can I resolve above issue? </p> <p><strong>UPDATE-1</strong></p> <p>I have reinitialized K8 with <code>kubeadm init --pod-network-cidr=10.244.0.0/16</code> and deployed sample <a href="http://containertutorials.com/get_started_kubernetes/k8s_example.html" rel="noreferrer">nginx pod</a>. 
Still getting same error</p> <pre><code>-----------------OUTPUT REMOVED------------------------------- Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 3m 3m 1 default-scheduler Normal Scheduled Successfully assigned nginx-622qj to ubuntu 1m 1m 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "0728fece-46fe-11e7-ae5d-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 38250afd765f0108aeff6e31bbe5a642a60db99b97cbbf15711f810cbe8f3829" 24s 24s 1 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "0728fece-46fe-11e7-ae5d-000c29b1270f" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 3bebcef02cb5f6645a65dcf06b2730144080f9d3c4fb18267feca5c5ce21031c" 2m 9s 33 kubelet, ubuntu Normal SandboxChanged Pod sandbox changed, it will be killed and re-created. 3m 7s 32 kubelet, ubuntu Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "nginx-622qj_default(0728fece-46fe-11e7-ae5d-000c29b1270f)" with CreatePodSandboxError: "CreatePodSandbox for pod \"nginx-622qj_default(0728fece-46fe-11e7-ae5d-000c29b1270f)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"nginx-622qj_default\" network: open /run/flannel/subnet.env: no such file or directory" </code></pre>
<p>Running the following command resolved my issue:</p> <pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre> <p>When Flannel is the CNI plugin, <code>kubeadm init</code> needs the <code>--pod-network-cidr</code> argument set to Flannel's overlay network. Otherwise flannel never writes <code>/run/flannel/subnet.env</code>, and pod sandbox creation fails with the <code>open /run/flannel/subnet.env: no such file or directory</code> error shown in the question.</p>
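<p>The CIDR passed to <code>--pod-network-cidr</code> has to match the <code>Network</code> field in the <code>net-conf.json</code> section of <code>kube-flannel.yml</code>, which in the upstream manifest looks like this:</p>

```yaml
# Excerpt from flannel's ConfigMap in kube-flannel.yml
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```

<p>If you change one side, change the other to match.</p>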
<p>I am looking for an option to remove a pod from a deployment/replication controller without deleting it. I found a good solution by <a href="https://www.ianlewis.org/en/performing-maintenance-pods" rel="nofollow noreferrer">using selectors and labels here</a>, but it's impossible in my case since I am not the pod/service creator, so I can't force selector creation. My application just manipulates pods of existing services.</p> <p>Many thanks for any help.</p>
<p>You could use a readiness probe on the pod, and then find your own way to make that probe fail. As soon as a Service detects a pod is not ready, the pod is removed from the set of endpoints the Service forwards connections to. This doesn't kill the pod, nor cause it to restart (a liveness probe does that). It also does not disconnect current clients, so you may want to wait for connected clients to drain away if your plan is to eventually restart the pod.</p> <p>One idea for doing this: use JMX or a REST call on the specific pod's IP that tells the application to report not-ready from its readiness endpoint. Use the same trick to switch it back to ready, or to delay readiness on startup until you manually trigger that pod (e.g. if the pod does some expensive startup like a DB scan/update).</p>
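<p>A sketch of such a probe; the <code>/ready</code> path and port are assumptions for an endpoint your application would expose and could be told to fail on demand:</p>

```yaml
# Container fragment: the Service drops this pod from its endpoints
# after a single failed check (failureThreshold: 1)
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
  failureThreshold: 1
```
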
<p>I'm trying to debug my application, which is running in Kubernetes in a docker container. The image is built using the sbt native-packager docker plugin. I would like to log in to the pod and investigate there, but the default user account (<code>demiourgos728</code>) has no privileges and I don't know how to switch to a root account.</p> <p>I've tried running <code>kubectl exec --user=root image -- bash</code> (user not found), and also tried to run <code>su -</code> in the image (authentication failure), with no success. The base image used for building the app is java:8.</p> <p>These are my docker settings in build.sbt:</p> <pre><code>.settings(
  Seq(
    dockerBaseImage := "java:8",
    publishArtifact := false,
    dockerExposedPorts ++= Seq(8088),
    packageName in Docker := "api-endpoint"
  ))
</code></pre>
<p>The <code>--user=root</code> flag in <code>kubectl</code> selects a user for authenticating against the kube-apiserver; it does not force the container to execute a command as that user inside the container.</p> <p>If you are running docker on your nodes and want to escalate to <code>root</code>, you can ssh to the node where your container is running and then run:</p> <pre><code>$ docker exec -it --user=root &lt;container-id&gt; bash
# or
$ docker exec -it -u root &lt;container-id&gt; bash
</code></pre> <p>You can find out the node where your pod/container is running by using:</p> <pre><code>$ kubectl describe pod &lt;pod-id&gt;
</code></pre>
<p>To set up Kubernetes, I started by creating a namespace, a deployment, and a service. To clean up the resources, do I need to follow any order, like removing the service first, then the pods, then the deployment, and finally the namespace? How do I clean up the resources in a proper way? I deleted the pods and the service, but I could see the pods and services running again; the resources keep getting deployed again, so this question came up here for expert answers.</p>
<p>First, a note on why your pods and services reappeared: a Deployment (through its ReplicaSet) recreates any pods you delete, so you have to delete the owning Deployment rather than the pods themselves. There is no strict ordering required for cleanup; deleting the namespace removes everything inside it in one step (<code>kubectl delete namespace &lt;name&gt;</code>).</p> <p>Just in case you are running them in the default namespace and there are many of them, and you don't want to spend time deleting them one by one:</p> <pre><code>kubectl delete deployments --all
kubectl delete services --all
kubectl delete pods --all
kubectl delete daemonset --all
</code></pre>
<p>So, I'm trying to work with Kubernetes (minikube). I'm a total beginner, with some basic experience with docker. It turns out I installed Kubernetes 2 days ago and haven't managed to do a single thing. I barely managed to connect to the dashboard, and spent an ungodly amount of time finding how to get authenticated for that.</p> <p>All I'm trying to do is to deploy a single docker image, but I can't even do a basic hello world tutorial, as pretty much whatever command I type, I get an error message about not being authorized.</p> <p>At the moment, I'm trying to write a deployment file, but I'm getting this "unauthorized" error as soon as I use "kubectl create". I'm totally at a loss as to what to do.</p> <p><strong>kubectl create -f deployment.yaml</strong> </p> <pre><code>error: unable to recognize "deployment.yaml": Unauthorized
</code></pre> <p>I don't know what information to give you. Here is the minikube status:</p> <p><strong>minikube status</strong></p> <pre><code>host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
</code></pre> <p><strong>Minikube version</strong> : v1.2.0</p> <p><strong>Docker version</strong> : 18.06.3-ce, build d7080c1</p> <p><strong>kubectl version</strong> :</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre> <p>Do you guys have any idea what to do? The issue here is really that I don't understand what is happening: - Why do I need to authenticate? - What do I need to authenticate myself against?
- Why is it not self-evident what to do?</p> <p>I find that most pages about the topic online are either outdated, or ask me to perform actions that just end up returning "unable to recognize ... : Unauthorized". The tutorials online don't address this problem. They all seem to be able to use "kubectl create" without any need for authentication.</p> <p>Do you know what I'm supposed to do? Have you had this problem?</p>
<p>I solved the problem by deleting ~/.kube and removing the minikube and kubectl binaries from /usr/local/bin.</p> <p>I re-downloaded and re-installed minikube and kubectl. I then launched <code>minikube start</code>, and everything is now working fine.</p> <p>The origin of my problem seems to have been the installation of the dashboard. I followed some instructions online, not knowing exactly what I was doing. In the process, I had to create some security roles and something involving a token. I managed to connect to the dashboard, but from then on, every kubectl command told me that I was unauthorized.</p>
<p>I am thinking about partitioning my Kubernetes cluster into zones of dedicated nodes for exclusive use by dedicated sets of users as discussed <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#example-use-cases" rel="nofollow noreferrer">here</a>. I am wondering how tainting nodes would affect <code>DaemonSets</code>, including those that are vital to cluster operation (e.g. <code>kube-proxy</code>, <code>kube-flannel-ds-amd64</code>)?</p> <p>The <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">documentation</a> says daemon pods respect taints and tolerations. But if so, how can the system schedule e.g. kube-proxy pods on nodes tainted with <code>kubectl taint nodes node-x zone=zone-y:NoSchedule</code> when the pod (which is not under my control but owned by Kubernetes' own <code>DaemonSet kube-proxy</code>) does not carry a corresponding toleration?</p> <p>What I have found empirically so far is that Kubernetes 1.14 reschedules a kube-proxy pod regardless (after I have deleted it on the tainted <code>node-x</code>), which seems to contradict the documentation. On the other hand, this does not seem to be the case for my own <code>DaemonSet</code>. When I kill its pod on <code>node-x</code> it only gets rescheduled after I remove the node's taint (or presumably after I add a toleration to the pod's spec inside the <code>DaemonSet</code>).</p> <p>So how do <code>DaemonSet</code>s and tolerations interoperate in detail? Could it be that certain <code>DaemonSet</code>s (such as <code>kube-proxy</code>, <code>kube-flannel-ds-amd64</code>) are treated specially?</p>
<p>Your <code>kube-proxy</code> and flannel daemonsets will have many tolerations defined in their manifest that mean they will get scheduled even on tainted nodes.</p> <p>Here are a couple from my canal daemonset:</p> <pre><code>tolerations:
- effect: NoSchedule
  operator: Exists
- key: CriticalAddonsOnly
  operator: Exists
- effect: NoExecute
  operator: Exists
</code></pre> <p>Here are the taints from one of my master nodes:</p> <pre><code>taints:
- effect: NoSchedule
  key: node-role.kubernetes.io/controlplane
  value: "true"
- effect: NoExecute
  key: node-role.kubernetes.io/etcd
  value: "true"
</code></pre> <p>Even though most workloads won't be scheduled on the master because of its <code>NoSchedule</code> and <code>NoExecute</code> taints, a canal pod will be run there because the daemonset <em>tolerates</em> those taints specifically.</p> <p>The <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration" rel="noreferrer">doc you already linked to</a> goes into detail.</p>
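<p>So for your own DaemonSet to keep scheduling onto nodes tainted with <code>zone=zone-y:NoSchedule</code> from your example, you would add a matching toleration to its pod template; a fragment:</p>

```yaml
# DaemonSet pod-template fragment tolerating the taint from the question
spec:
  template:
    spec:
      tolerations:
      - key: "zone"
        operator: "Equal"
        value: "zone-y"
        effect: "NoSchedule"
```
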
<p>I have been implementing a helm subchart by referring to the <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md" rel="nofollow noreferrer">helm subchart documentation</a>. It works as documented with the default value files, but when I try to use my own value file, the values do not appear in the ConfigMap. My value file is values.staging.yaml.</p> <p>eg:</p> <p>config.yaml in mysubchart</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  salad: {{ .Values.dessert }}
</code></pre> <p>values.staging.yaml in mysubchart</p> <pre><code>dessert: banana
</code></pre> <p>values.yaml in mysubchart</p> <pre><code>dessert: cake
</code></pre> <p>Only 'cake' is referenced as the value; I need 'banana' to be referenced instead.</p> <p>I have tried the following commands:</p> <ol> <li>helm install --dry-run --debug mychart --values mychart/charts/mysubchart/values.staging.yaml</li> <li>helm install --dry-run --debug --name mychart mychart -f mychart/charts/mysubchart/values.staging.yaml</li> <li>helm install --name mychart mychart -f mychart/charts/mysubchart/values.staging.yaml</li> </ol> <p>In each case the ConfigMap does not pick up the value from values.staging.yaml.</p> <p>Is there a way to do this?</p> <p>Thank you!</p>
<p>As described in <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">Overriding Values of a Child Chart</a> in your link, you need to wrap the subchart values in a key matching the name of the subchart.</p> <p>Any values file you pass with <code>helm install -f</code> is always interpreted at the top level, even if it's physically located in a subchart's directory. A typical values file could look like</p> <pre><code>mysubchart: dessert: banana </code></pre>
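<p>Concretely, for the chart in the question, you could keep a staging values file at the parent chart's root (the file name and location are a suggestion) with the subchart's values nested under its name:</p>

```yaml
# values.staging.yaml at the top level of mychart, overriding mysubchart's default
mysubchart:
  dessert: banana
```

<p>Then <code>helm install --dry-run --debug mychart -f mychart/values.staging.yaml</code> should render <code>salad: banana</code> in the ConfigMap.</p>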
<p>Right after enabling cert-manager in the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Ingress Controller</a>, my TTFB (time to first byte) increased by 200+ms in most of the regions. Without SSL, it was &lt;200ms in 80% of the regions; after enabling SSL, only 30% have TTFB &lt;200ms.</p> <p>without SSL <a href="https://i.stack.imgur.com/RHQhZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RHQhZ.png" alt="enter image description here"></a></p> <p>with SSL <a href="https://i.stack.imgur.com/froJ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/froJ0.png" alt="enter image description here"></a></p> <p>My Ingress definition:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: gce.wpspeedmatters.com
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpress
          servicePort: 80
  tls:
  - secretName: tls-prod-cert
    hosts:
    - gce.wpspeedmatters.com
</code></pre>
<p>I switched to TLS 1.3 and was able to shave off an extra 50-150ms!</p> <p>I wrote a detailed blog post too: <a href="https://wpspeedmatters.com/tls-1-3-in-wordpress/" rel="nofollow noreferrer">https://wpspeedmatters.com/tls-1-3-in-wordpress/</a></p> <p>With TLS 1.3: <a href="https://i.stack.imgur.com/FGitc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FGitc.png" alt="enter image description here"></a></p>
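<p>For ingress-nginx this is configured through the controller's ConfigMap; the <code>ssl-protocols</code> key is documented, but the ConfigMap name and namespace below depend on how your controller was installed:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # name/namespace vary per installation
  namespace: ingress-nginx
data:
  ssl-protocols: "TLSv1.2 TLSv1.3"
```

<p>Note that TLS 1.3 also requires the controller's nginx to be built against OpenSSL 1.1.1 or newer.</p>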
<p>I have 3 JSON files in a <code>postman</code> folder. How can I create a ConfigMap from them using a Helm YAML template?</p> <pre><code>kubectl create configmap test-config --from-file=clusterfitusecaseapihelm/data/postman/
</code></pre> <p>The above command works, but I need this as a YAML file, as I am using Helm.</p>
<p>Inside a Helm template, you can use the <code>Files.Glob</code> and <code>AsConfig</code> helper functions to achieve this:</p> <pre><code>{{- (.Files.Glob &quot;postman/**.json&quot;).AsConfig | nindent 2 }} </code></pre> <p>Or, to give a full example of a Helm template:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: test-config data: {{- (.Files.Glob &quot;postman/**.json&quot;).AsConfig | nindent 2 }} </code></pre> <p>See <a href="https://helm.sh/docs/chart_template_guide/accessing_files/#configmap-and-secrets-utility-functions" rel="noreferrer">the documentation</a> (especially the section on &quot;ConfigMap and secrets utility functions&quot;) for more information.</p>
<p>I have the following deployment...</p> <pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-disk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              subPath: "mysql"
              name: mysql-data
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: ROOT_PASSWORD
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-data-disk
</code></pre> <p>This works great, and I can access the db like this...</p> <pre><code>kubectl exec -it mysql-deployment-&lt;POD-ID&gt; -- /bin/bash
</code></pre> <p>Then I run...</p> <pre><code>mysql -u root -h localhost -p
</code></pre> <p>And I can log into it. However, when I try to access it as a service by using the following yaml...</p> <pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
</code></pre> <p>I can see it by running <code>kubectl describe service mysql-service</code>, which gives...</p> <pre><code>Name:              mysql-service
Namespace:         default
Labels:            &lt;none&gt;
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mysql-service","namespace":"default"},"spec":{"ports":[{"port":33...
Selector: app=mysql Type: ClusterIP IP: 10.101.1.232 Port: &lt;unset&gt; 3306/TCP TargetPort: 3306/TCP Endpoints: 172.17.0.4:3306 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>and I get the ip by running <code>kubectl cluster-info</code> </p> <pre><code>#kubectl cluster-info Kubernetes master is running at https://192.168.99.100:8443 KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code></pre> <p>but when I try to connect using Oracle SQL Developer like this...</p> <p><a href="https://i.stack.imgur.com/zk2M4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zk2M4.png" alt="enter image description here"></a></p> <p>It says it cannot connect.</p> <p><a href="https://i.stack.imgur.com/ZeEeW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZeEeW.png" alt="enter image description here"></a></p> <p>How do I connect to the MySQL running on K8s?</p>
<p>A Service of type <code>ClusterIP</code> is not accessible from outside the pod network. If the <code>LoadBalancer</code> option is not available to you, you have to use either a Service of type <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer"><code>NodePort</code></a> or <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer"><code>kubectl port-forward</code></a>.</p>
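For example, a NodePort variant of the service from the question might look like this (a sketch; the nodePort value 30306 is an arbitrary pick from the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  type: NodePort          # exposes the service on every node's IP
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306          # cluster-internal port
      targetPort: 3306    # container port
      nodePort: 30306     # external port on each node (30000-32767)
```

You would then connect your SQL client to `<node-ip>:30306` (for minikube, the IP from `minikube ip`), or skip the service change entirely and run `kubectl port-forward svc/mysql-service 3306:3306` to tunnel the port to localhost.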
<p>I've just read the <em>Deploying Cassandra with Stateful Sets</em> topic in the Kubernetes documentation. The deployment process is:</p> <ol> <li>Creation of a StorageClass</li> <li>Creation of PersistentVolumes (in my case 4), setting the storageClassName created in 1)</li> <li>Creation of a Cassandra headless Service</li> <li>Using a StatefulSet to create a Cassandra ring, setting the storageClassName created in 1) in the StatefulSet yml definition</li> </ol> <p>As a result, there are 4 pods: cassandra-0, cassandra-1, cassandra-2, cassandra-3, which are mounted to the volumes created in 2) (pv-0, pv-1, pv-2, pv-3). I wonder how / if these persistent volumes synchronize data with each other.</p> <p>E.g. if I add some record, which is written by pod cassandra-0 to persistent volume pv-0, will someone who retrieves data from the database a moment later, through the cassandra-1 pod/pv, see the data that was added to pv-0? Can anyone tell me how it works exactly?</p>
<ol> <li><p>This is not related to Kubernetes.</p></li> <li><p>The replication is done by the database itself and is configurable.</p></li> <li><p>See the CAP theorem and eventual consistency for Cassandra.</p></li> <li><p>You can control the level of consistency in Cassandra; whether a record is updated across replicas immediately or later depends on the configuration you choose in Cassandra.</p></li> <li><p>See also: synchronous replication, asynchronous replication.</p></li> </ol> <p>Cassandra consistency:</p> <p><a href="https://stackoverflow.com/questions/35731452/how-to-set-cassandra-read-and-write-consistency">how to set cassandra read and write consistency</a></p> <p><a href="https://docs.datastax.com/en/archived/cassandra/3.0/cassandra/dml/dmlConfigConsistency.html" rel="nofollow noreferrer">How is the consistency level configured?</a></p>
<p>I'm trying to set up a bare metal Kubernetes cluster. I have the basic cluster set up, no problem, but I can't seem to get MetalLB working correctly to expose an external IP to a service.</p> <p>My end goal is to be able to deploy an application with 2+ replicas and have a single IP/Port that I can reference in order to hit any of the running instances.</p> <p>So far, what I've done (to test this out,) is:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml kubectl apply -f metallb-layer-2.yml kubectl run nginx --image=nginx --port=80 kubectl expose deployment nginx --type=LoadBalancer --name=nginx-service </code></pre> <p>metallb-layer-2.yml:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: k8s-master-ip-space protocol: layer2 addresses: - 192.168.0.240-192.168.0.250 </code></pre> <p>and then when I run <code>kubectl get svc</code>, I get:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-service LoadBalancer 10.101.122.140 &lt;pending&gt; 80:30930/TCP 9m29s </code></pre> <p>No matter what I do, I can't get the service to have an external IP. 
Does anyone have an idea?</p> <p><strong>EDIT:</strong> After finding another post about using NodePort, I did the following:</p> <pre><code>iptables -A FORWARD -j ACCEPT </code></pre> <p>found <a href="https://stackoverflow.com/questions/46667659/kubernetes-cannot-access-nodeport-from-other-machines">here</a>.</p> <p>Now, unfortunately, when I try to curl the nginx endpoint, I get:</p> <pre><code>&gt; kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-service LoadBalancer 10.101.122.140 192.168.0.240 80:30930/TCP 13h &gt; curl 192.168.0.240:30930 curl: (7) Failed to connect to 192.168.0.240 port 30930: No route to host &gt; curl 192.168.0.240:80 curl: (7) Failed to connect to 192.168.0.240 port 80: No route to host </code></pre> <p>I'm not sure what exactly this means now. </p> <p><strong>EDIT 2:</strong> When I do a TCP Dump on the worker where the pod is running, I get:</p> <pre><code>15:51:44.705699 IP 192.168.0.223.54602 &gt; 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375711117 ecr 0,nop,wscale 7], length 0 15:51:44.709940 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28 15:51:45.760613 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28 15:51:45.775511 IP 192.168.0.223.54602 &gt; 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375712189 ecr 0,nop,wscale 7], length 0 15:51:46.800622 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28 15:51:47.843262 IP 192.168.0.223.54602 &gt; 192.168.0.240.http: Flags [S], seq 1839813500, win 29200, options [mss 1460,sackOK,TS val 375714257 ecr 0,nop,wscale 7], length 0 15:51:47.843482 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28 15:51:48.880572 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, length 28 15:51:49.774953 ARP, Request who-has 192.168.0.240 tell 192.168.0.223, length 46 15:51:49.920602 ARP, Request who-has 192.168.0.240 tell 192.168.0.169, 
length 28 </code></pre>
<p>After working through this with the MetalLB maintainer, we were able to figure out that the issue was Debian Buster's new nftables firewall backend. To switch back to the legacy iptables backend:</p> <pre><code># update-alternatives --set iptables /usr/sbin/iptables-legacy # update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy # update-alternatives --set arptables /usr/sbin/arptables-legacy # update-alternatives --set ebtables /usr/sbin/ebtables-legacy </code></pre> <p>and restart the nodes in the cluster!</p>
<p>I am trying to set up Elasticsearch OpenDistro but I am not able to understand what is meant by <code>&lt;CERTIFICATE_DATA_REDACTED&gt;</code> in this link: </p> <p><a href="https://github.com/opendistro-for-elasticsearch/community/blob/master/open-distro-elasticsearch-kubernetes/elasticsearch/35-es-bootstrap-secrets.yml" rel="nofollow noreferrer">https://github.com/opendistro-for-elasticsearch/community/blob/master/open-distro-elasticsearch-kubernetes/elasticsearch/35-es-bootstrap-secrets.yml</a> . </p> <p>How should I use this yaml file? Do I need to put the base64 encoded values here, replacing the placeholders, and then <code>kubectl apply -f secrets.yaml</code>, or what? </p> <p>Can someone provide any reference link which explains this?</p> <pre><code>kind: Secret metadata: name: elasticsearch-tls-data namespace: elasticsearch type: Opaque stringData: elk-crt.pem: |- &lt;CERTIFICATE_DATA_REDACTED&gt; elk-key.pem: |- &lt;CERTIFICATE_DATA_REDACTED&gt; elk-root-ca.pem: |- &lt;CERTIFICATE_DATA_REDACTED&gt; admin-crt.pem: |- &lt;CERTIFICATE_DATA_REDACTED&gt; admin-key.pem: |- &lt;CERTIFICATE_DATA_REDACTED&gt; admin-root-ca.pem: |- &lt;CERTIFICATE_DATA_REDACTED&gt; </code></pre>
<p>I have not used this configuration before, but in my opinion what you should do is create your own certificates (<code>elk-crt.pem, elk-key.pem, elk-root-ca.pem, admin-crt.pem, admin-key.pem, admin-root-ca.pem</code>, and the same for Kibana if you will use it), then create your Secret with the raw PEM values.</p> <p>Please read this:</p> <blockquote>For certain scenarios, you may wish to use the stringData field instead. This field allows you to put a non-base64 encoded string directly into the Secret, and the string will be encoded for you when the Secret is created or updated.</blockquote> <p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p>
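A sketch of what the filled-in Secret might look like. The certificate bodies below are placeholders, not real key material; with `stringData` you paste the PEM text as-is and Kubernetes base64-encodes it for you on apply:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-tls-data
  namespace: elasticsearch
type: Opaque
stringData:
  # paste the PEM files verbatim; the |- block scalar preserves line breaks
  elk-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    ...your node certificate body here...
    -----END CERTIFICATE-----
  elk-key.pem: |-
    -----BEGIN PRIVATE KEY-----
    ...your node private key body here...
    -----END PRIVATE KEY-----
  # ...repeat for the root-ca and admin files...
```

Then `kubectl apply -f secrets.yaml` creates it; `kubectl get secret elasticsearch-tls-data -o yaml` will show the values base64-encoded under `data:`.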
<p>I am trying to deploy a Kubernetes cluster, my master node is UP and running but some pods are stuck in pending state. Below is the output of get pods.</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system calico-kube-controllers-65b4876956-29tj9 0/1 Pending 0 9h &lt;none&gt; &lt;none&gt; &lt;none&gt; &lt;none&gt; kube-system calico-node-bf25l 2/2 Running 2 9h &lt;none&gt; master-0-eccdtest &lt;none&gt; &lt;none&gt; kube-system coredns-7d6cf57b54-b55zw 0/1 Pending 0 9h &lt;none&gt; &lt;none&gt; &lt;none&gt; &lt;none&gt; kube-system coredns-7d6cf57b54-bk6j5 0/1 Pending 0 12m &lt;none&gt; &lt;none&gt; &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-master-0-eccdtest 1/1 Running 1 9h &lt;none&gt; master-0-eccdtest &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-master-0-eccdtest 1/1 Running 1 9h &lt;none&gt; master-0-eccdtest &lt;none&gt; &lt;none&gt; kube-system kube-proxy-jhfjj 1/1 Running 1 9h &lt;none&gt; master-0-eccdtest &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-master-0-eccdtest 1/1 Running 1 9h &lt;none&gt; master-0-eccdtest &lt;none&gt; &lt;none&gt; kube-system openstack-cloud-controller-manager-tlp4m 1/1 CrashLoopBackOff 114 9h &lt;none&gt; master-0-eccdtest &lt;none&gt; &lt;none&gt; </code></pre> <p>When I am trying to check the pod logs I am getting the below error.</p> <pre><code>Error from server: no preferred addresses found; known addresses: [] </code></pre> <p>Kubectl get events has lot of warnings.</p> <pre><code>NAMESPACE LAST SEEN TYPE REASON KIND MESSAGE default 23m Normal Starting Node Starting kubelet. 
default 23m Normal NodeHasSufficientMemory Node Node master-0-eccdtest status is now: NodeHasSufficientMemory default 23m Normal NodeHasNoDiskPressure Node Node master-0-eccdtest status is now: NodeHasNoDiskPressure default 23m Normal NodeHasSufficientPID Node Node master-0-eccdtest status is now: NodeHasSufficientPID default 23m Normal NodeAllocatableEnforced Node Updated Node Allocatable limit across pods default 23m Normal Starting Node Starting kube-proxy. default 23m Normal RegisteredNode Node Node master-0-eccdtest event: Registered Node master-0-eccdtest in Controller kube-system 26m Warning FailedScheduling Pod 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. kube-system 3m15s Warning FailedScheduling Pod 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. kube-system 25m Warning DNSConfigForming Pod Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.96.0.10 10.51.40.100 10.51.40.103 kube-system 23m Normal SandboxChanged Pod Pod sandbox changed, it will be killed and re-created. 
kube-system 23m Normal Pulled Pod Container image "registry.eccd.local:5000/node:v3.6.1-26684321" already present on machine kube-system 23m Normal Created Pod Created container kube-system 23m Normal Started Pod Started container kube-system 23m Normal Pulled Pod Container image "registry.eccd.local:5000/cni:v3.6.1-26684321" already present on machine kube-system 23m Normal Created Pod Created container kube-system 23m Normal Started Pod Started container kube-system 23m Warning Unhealthy Pod Readiness probe failed: Threshold time for bird readiness check: 30s calico/node is not ready: felix is not ready: Get http://localhost:9099/readiness: dial tcp [::1]:9099: connect: connection refused kube-system 23m Warning Unhealthy Pod Liveness probe failed: Get http://localhost:9099/liveness: dial tcp [::1]:9099: connect: connection refused kube-system 26m Warning FailedScheduling Pod 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. kube-system 3m15s Warning FailedScheduling Pod 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. kube-system 105s Warning FailedScheduling Pod 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. kube-system 26m Warning FailedScheduling Pod 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. kube-system 22m Warning FailedScheduling Pod 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. kube-system 21m Warning FailedScheduling Pod skip schedule deleting pod: kube-system/coredns-7d6cf57b54-w95g4 kube-system 21m Normal SuccessfulCreate ReplicaSet Created pod: coredns-7d6cf57b54-bk6j5 kube-system 26m Warning DNSConfigForming Pod Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.96.0.10 10.51.40.100 10.51.40.103 kube-system 23m Normal SandboxChanged Pod Pod sandbox changed, it will be killed and re-created. 
kube-system 23m Normal Pulled Pod Container image "registry.eccd.local:5000/kube-apiserver:v1.13.5-1-80cc0db3" already present on machine kube-system 23m Normal Created Pod Created container kube-system 23m Normal Started Pod Started container kube-system 26m Warning DNSConfigForming Pod Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.96.0.10 10.51.40.100 10.51.40.103 kube-system 23m Normal SandboxChanged Pod Pod sandbox changed, it will be killed and re-created. kube-system 23m Normal Pulled Pod Container image "registry.eccd.local:5000/kube-controller-manager:v1.13.5-1-80cc0db3" already present on machine kube-system 23m Normal Created Pod Created container kube-system 23m Normal Started Pod Started container kube-system 23m Normal LeaderElection Endpoints master-0-eccdtest_ed8f0ece-a6cd-11e9-9dd7-fa163e182aab became leader kube-system 26m Warning DNSConfigForming Pod Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.96.0.10 10.51.40.100 10.51.40.103 kube-system 23m Normal SandboxChanged Pod Pod sandbox changed, it will be killed and re-created. kube-system 23m Normal Pulled Pod Container image "registry.eccd.local:5000/kube-proxy:v1.13.5-1-80cc0db3" already present on machine kube-system 23m Normal Created Pod Created container kube-system 23m Normal Started Pod Started container kube-system 26m Warning DNSConfigForming Pod Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.96.0.10 10.51.40.100 10.51.40.103 kube-system 23m Normal SandboxChanged Pod Pod sandbox changed, it will be killed and re-created. 
kube-system 23m Normal Pulled Pod Container image "registry.eccd.local:5000/kube-scheduler:v1.13.5-1-80cc0db3" already present on machine kube-system 23m Normal Created Pod Created container kube-system 23m Normal Started Pod Started container kube-system 23m Normal LeaderElection Endpoints master-0-eccdtest_ee2520c1-a6cd-11e9-96a3-fa163e182aab became leader kube-system 26m Warning DNSConfigForming Pod Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.96.0.10 10.51.40.100 10.51.40.103 kube-system 36m Warning BackOff Pod Back-off restarting failed container kube-system 23m Normal SandboxChanged Pod Pod sandbox changed, it will be killed and re-created. kube-system 20m Normal Pulled Pod Container image "registry.eccd.local:5000/openstack-cloud-controller-manager:v1.14.0-1-11023d82" already present on machine kube-system 20m Normal Created Pod Created container kube-system 20m Normal Started Pod Started container kube-system 3m20s Warning BackOff Pod Back-off restarting failed container </code></pre> <p>The only nameserver in reslov.conf is</p> <pre><code>nameserver 10.96.0.10 </code></pre> <p>I have extensively used google for these problems but did not get any working solution. Any suggestions would be appreciated.</p> <p>TIA</p>
<p>Your main issue here is the <code>0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate</code> warning message. You are getting this due to the <code>node-role.kubernetes.io/master:NoSchedule</code> and <code>node.kubernetes.io/not-ready:NoSchedule</code> <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">taints</a>.</p> <p>These taints prevent scheduling pods on the current node.</p> <p>If you want to be able to schedule pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:</p> <pre><code>kubectl taint nodes instance-1 node-role.kubernetes.io/master- kubectl taint nodes instance-1 node.kubernetes.io/not-ready:NoSchedule- </code></pre> <p>But from my point of view it is better to:</p> <p>-<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="noreferrer">initiate the cluster using kubeadm</a></p> <p>-<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="noreferrer">apply a CNI</a></p> <p>-<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#join-nodes" rel="noreferrer">add a new worker node</a></p> <p>-and let all your new pods be scheduled on the worker node.</p> <pre><code>sudo kubeadm init --pod-network-cidr=192.168.0.0/16 [init] Using Kubernetes version: v1.15.0 ... Your Kubernetes control-plane has initialized successfully! 
$ mkdir -p $HOME/.kube $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config $ sudo chown $(id -u):$(id -g) $HOME/.kube/config $ kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml configmap/calico-config created customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created clusterrole.rbac.authorization.k8s.io/calico-node created clusterrolebinding.rbac.authorization.k8s.io/calico-node created daemonset.extensions/calico-node created serviceaccount/calico-node created deployment.extensions/calico-kube-controllers created serviceaccount/calico-kube-controllers created -ADD worker node using kubeadm join 
string on slave node $ kubectl get nodes NAME STATUS ROLES AGE VERSION instance-1 Ready master 21m v1.15.0 instance-2 Ready &lt;none&gt; 34s v1.15.0 $ kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system calico-kube-controllers-658558ddf8-v2rqx 1/1 Running 0 11m 192.168.23.129 instance-1 &lt;none&gt; &lt;none&gt; kube-system calico-node-c2tkt 1/1 Running 0 11m 10.132.0.36 instance-1 &lt;none&gt; &lt;none&gt; kube-system calico-node-dhc66 1/1 Running 0 107s 10.132.0.38 instance-2 &lt;none&gt; &lt;none&gt; kube-system coredns-5c98db65d4-dqjm7 1/1 Running 0 22m 192.168.23.130 instance-1 &lt;none&gt; &lt;none&gt; kube-system coredns-5c98db65d4-hh7vd 1/1 Running 0 22m 192.168.23.131 instance-1 &lt;none&gt; &lt;none&gt; kube-system etcd-instance-1 1/1 Running 0 21m 10.132.0.36 instance-1 &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-instance-1 1/1 Running 0 21m 10.132.0.36 instance-1 &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-instance-1 1/1 Running 0 21m 10.132.0.36 instance-1 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-qwvkq 1/1 Running 0 107s 10.132.0.38 instance-2 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-s9gng 1/1 Running 0 22m 10.132.0.36 instance-1 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-instance-1 1/1 Running 0 21m 10.132.0.36 instance-1 &lt;none&gt; &lt;n </code></pre>
<p>SFTP server is not accessible when exposed using a NodePort service and an Kubernetes Ingress. However, if the same deployment is exposed using a Service of type LoadBalancer it works fine.</p> <p>Below is the deployment file for SFTP in <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">GKE</a> using <a href="https://github.com/atmoz/sftp" rel="nofollow noreferrer">atmoz/sftp</a> Dockerfile.</p> <pre><code>kind: Deployment apiVersion: extensions/v1beta1 metadata: name: sftp labels: environment: production app: sftp spec: replicas: 1 minReadySeconds: 10 template: metadata: labels: environment: production app: sftp annotations: container.apparmor.security.beta.kubernetes.io/sftp: runtime/default spec: containers: - name: sftp image: atmoz/sftp:alpine imagePullPolicy: Always args: ["user:pass:1001:100:upload"] ports: - containerPort: 22 securityContext: capabilities: add: ["SYS_ADMIN"] resources: {} </code></pre> <p>If I expose this deployment normally using a Kubernetes Service of type LoadBalancer like below:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: environment: production name: sftp-service spec: type: LoadBalancer ports: - name: sftp-port port: 22 protocol: TCP targetPort: 22 selector: app: sftp </code></pre> <p>Above Service gets an external IP which I can simply use in the command <code>sftp xxx.xx.xx.xxx</code> command to get access using the <code>pass</code> password.</p> <p>However, I try to expose the same deployment using GKE Ingress it does not work. 
Below is the manifest for the ingress:</p> <pre><code> # First I create a NodePort service to expose the deployment internally --- apiVersion: v1 kind: Service metadata: labels: environment: production name: sftp-service spec: type: NodePort ports: - name: sftp-port port: 22 protocol: TCP targetPort: 22 nodePort: 30063 selector: app: sftp # Ingress service has SFTP service as it's default backend --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: basic-ingress-2 spec: backend: serviceName: sftp-service servicePort: 22 rules: - http: paths: # "http-service-sample" is a service exposing a simple hello-word app deployment - path: /sample backend: serviceName: http-service-sample servicePort: 80 </code></pre> <p>After an external IP is assigned to the Ingress (I know it takes a few minutes to completely set up) and <code>xxx.xx.xx.xxx/sample</code> starts working but <code>sftp -P 80 xxx.xx.xx.xxx</code> doesn't work.</p> <p>Below is the error I get from the server:</p> <pre><code>ssh_exchange_identification: Connection closed by remote host Connection closed </code></pre> <p>What am I doing wrong in the above set-up? Why does LoadBalancer service is able to allow access to SFTP service, while Ingress fails?</p>
<p>Kubernetes Ingress currently does not support routing any traffic other than HTTP/HTTPS (see the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="noreferrer">docs</a>), which is why SFTP works through the LoadBalancer Service but not through the Ingress.</p> <p>You can try the workaround described here: <a href="https://stackoverflow.com/questions/49439899/kubernetes-routing-non-http-request-via-ingress-to-container">Kubernetes: Routing non HTTP Request via Ingress to Container</a>.</p>
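As a side note: if you run the community nginx ingress controller instead of the GKE one, it can expose plain TCP services through a dedicated ConfigMap rather than an Ingress rule. A sketch, assuming the controller lives in the `ingress-nginx` namespace and is started with the matching `--tcp-services-configmap` flag:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<external port>": "<namespace>/<service name>:<service port>"
  "22": "default/sftp-service:22"
```

With that in place the controller listens on port 22 itself and forwards raw TCP to the Service, so no HTTP parsing (and no `ssh_exchange_identification` failure) is involved.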
<p>I am setting up a pod say <em>test-pod</em> on my google kubernetes engine. When I deploy the pod and see in workloads using google console, I am able to see <code>100m CPU</code> getting allocated to my pod by default, but I am not able to see how much memory my pod has consumed. The memory requested section always shows <code>0</code> there. I know we can restrict memory limits and initial allocation in the deployment YAML. But I want to know how much default memory a pod gets allocated when no values are specified through YAML and what is the maximum limit it can avail?</p>
<p>If your pod has no resource requests, it can be scheduled anywhere at all, even on the busiest node in your cluster, as though you had requested 0 memory and 0 CPU. If it has no resource limits, it can consume all available memory and CPU on its node.</p> <p>(If it's not obvious: realistic resource requests and limits are a best practice!)</p>
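A sketch of how you would make the allocation explicit in the pod spec (the image and values are placeholders); without a `resources` block like this, the request is effectively 0 and the only ceiling is the node's capacity, or a namespace LimitRange if one exists:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: nginx          # placeholder image
      resources:
        requests:
          memory: "128Mi"   # what the scheduler reserves on a node
          cpu: "100m"
        limits:
          memory: "256Mi"   # container is OOM-killed if it exceeds this
          cpu: "500m"       # container is throttled above this
```

The "Memory requested" column in the GKE console reflects the `requests.memory` value, which is why it shows 0 when none is set.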
<p>I have containerized my ASP.NET Core 2.2 application into Docker image and then deployed it to Google Kubernetes Engine. Application regularly starts, but every now and then it just randomly shuts down. Log gives no special hints on what is going on, all I get is:</p> <pre><code>I 2019-07-11T19:36:07.692416088Z Application started. Press Ctrl+C to shut down. I 2019-07-11T20:03:59.679718522Z Application is shutting down... </code></pre> <p>Is there any way I can get reason on why application is shutting down? I know you can register event handler on Shutdown like:</p> <pre><code>public class Startup { public void Configure(IApplicationBuilder app, IApplicationLifetime applicationLifetime) { applicationLifetime.ApplicationStopping.Register(OnShutdown); } private void OnShutdown() { //this code is called when the application stops } } </code></pre> <p>But how would I extract reason for application shutdown from there?</p>
<p>The problem was that by default my ASP.NET Core Web API project did not handle the root path. So <code>/</code> was hit by the health check, and when it didn't get <code>200 OK</code> back, GKE would shut down the Kubernetes pod.</p>
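If you prefer to keep the app unchanged, another option is to point the pod's health checks at a path the API actually serves. A sketch for the container section of the deployment; `/api/values` is a placeholder for whatever endpoint your controllers expose:

```yaml
# container-level probe: only restarts the pod when this path stops returning 2xx/3xx
livenessProbe:
  httpGet:
    path: /api/values   # use a real endpoint, not the unhandled "/"
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 20
```

Either way, the fix is to make sure whatever URL the health check probes returns `200 OK`.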
<p>I'm new to kubernetes and I'm trying to deploy elasticsearch on it. Currently, I have a problem with the number of file descriptors required by elasticsearch versus the number allowed by docker.</p> <pre><code>[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536] </code></pre> <p>So to fix that I have tried 3 different ways:</p> <h1>way 1</h1> <p>From the docker documentation, dockerd should use the system value as the default value.</p> <ol> <li>set <code>/etc/security/limits.conf</code> with <code>* - nofile 65536</code></li> <li>reboot</li> <li>execute <code>ulimit -Hn &amp;&amp; ulimit -Sn</code>: returns 65536 twice</li> <li>execute <code>docker run --rm centos:7 /bin/bash -c 'ulimit -Hn &amp;&amp; ulimit -Sn'</code> (should return 65536 twice but instead returns 4096 and 1024)</li> </ol> <h1>way 2</h1> <ol> <li>add <code>--default-ulimit nofile=65536:65536</code> to <code>/var/snap/microk8s/current/args/dockerd</code></li> <li>reboot</li> <li>execute <code>docker run --rm centos:7 /bin/bash -c 'ulimit -Hn &amp;&amp; ulimit -Sn'</code> (should return 65536 twice but instead returns 4096 and 1024)</li> </ol> <h1>way 3</h1> <ol> <li><p>add</p> <p><code>"default-ulimit" : { "nofile": { "Name": "nofile", "Hard": "65536", "Soft": "65536" } }</code></p> <p>to <code>/var/snap/microk8s/354/args/docker-daemon.json</code></p></li> <li>execute <code>systemctl restart snap.microk8s.daemon-docker.service</code></li> <li>execute <code>journalctl -u snap.microk8s.daemon-docker.service -f</code>, which returns <code>unable to configure the Docker daemon with file /var/snap/microk8s/354/args/docker-daemon.json: the following directives don't match any configuration option: nofile</code></li> </ol> <p>The only way I found to set the ulimit is to pass <code>--ulimit nofile=65536:65536</code> to the docker run command. But I cannot do that inside my kubernetes StatefulSet config.</p> <p>So do you know how I can solve this problem? Did I do something wrong here?</p> <p>Thanks in advance for your help.</p> <p>ps: I'm on ubuntu 18.0.1 with docker 18.06.1-ce and microk8s installed with snap</p>
<p>A bit late, but if someone else has this problem, you can add this line to <code>/var/snap/microk8s/current/args/containerd-env</code>:</p> <pre><code>ulimit -n 65536 </code></pre> <p>Then stop/start microk8s to apply the fix. If you execute the command <code>docker run --rm centos:7 /bin/bash -c 'ulimit -Hn &amp;&amp; ulimit -Sn'</code> you should now see 65536 twice.</p> <p>More information in the microk8s GitHub issue <a href="https://github.com/ubuntu/microk8s/issues/253" rel="nofollow noreferrer">#253</a>. Microk8s has merged a fix for this, so it may soon be available in a release.</p>
<p>I have a local running cluster deployed with minikube. Spring Cloud Data Flow is deployed according to <a href="https://dataflow.spring.io/docs/installation/kubernetes/kubectl/" rel="nofollow noreferrer">this tutorial</a>. At this time, I'm able to create a kubernetes task on the SCDF dashboard and launch it. Although I have a task which reads file from file system and I would like to read that file from a shared NFS directory mounted in the POD.</p> <p>I have a NFS server configured and running in another virtual machine and there is a persistent volume created in my kubernetes cluster pointing to the NFS host. When launching a task, some parameters are provided.</p> <pre><code> deployer.job-import-access-file.kubernetes.volumes=[ { name: accessFilesDir, persistentVolumeClaim: { claimName: 'apache-volume-claim' } }, { name: processedFilesDir, persistentVolumeClaim: { claimName: 'apache-volume-claim' } } ]deployer.job-import-access-file.kubernetes.volumeMounts=[ { name: 'accessFilesDir', mountPath: '/data/apache/access' }, { name: 'processedFilesDir', mountPath: '/data/apache/processed' } ] </code></pre> <p>nfs-volume.yaml</p> <pre><code>--- apiVersion: v1 kind: PersistentVolume metadata: name: nfs-apache-volume spec: capacity: storage: 1Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: standard nfs: server: 10.255.254.10 path: '/var/nfs/apache' </code></pre> <p>nfs-volume-claim.yaml</p> <pre><code>--- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: apache-volume-claim namespace: default spec: storageClassName: standard accessModes: - ReadWriteMany resources: requests: storage: 1Gi </code></pre> <p>Application Docker file</p> <pre><code>FROM openjdk:8-jdk-alpine COPY target/job-import-access-file-0.1.0.jar /opt/job-import-access-file-0.1.0.jar VOLUME ["/data/apache/access", "/data/apache/processed"] ENTRYPOINT ["java","-jar","/opt/job-import-access-file-0.1.0.jar"] </code></pre> <p>It is expected that my task 
reads files from the mounted directory, but the directory is empty: it is mounted, yet nothing from the NFS share appears in it.</p>
<p>Looks like the actual issue in your case is the <code>name</code> of the volumes you specified in the configuration properties. Since K8s doesn't allow upper-case letters in names (see <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names" rel="nofollow noreferrer">here</a>), you need to use lowercase for your <code>name</code> values (currently you have accessFilesDir and processedFilesDir).</p> <p>I tried to pass similar settings on minikube (without the NFS mounting etc.) just to see if the task launch passes the volume and volume-mount K8s deployer properties, and they seem to work fine:</p> <pre><code>dataflow:&gt;task create a1 --definition "timestamp" dataflow:&gt;task launch a1 --properties "deployer.timestamp.kubernetes.volumes=[{name: accessfilesdir, persistentVolumeClaim: { claimName: 'apache-volume-claim' }},{name: processedfilesdir, persistentVolumeClaim: { claimName: 'apache-volume-claim' }}],deployer.timestamp.kubernetes.volumeMounts=[{name: 'accessfilesdir', mountPath: '/data/apache/access'},{name: 'processedfilesdir', mountPath: '/data/apache/processed'}]" </code></pre> <p>and this resulted in the following config when I describe the pod (<code>kubectl describe</code>) of the launched task:</p> <pre><code>Volumes: accessfilesdir: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: apache-volume-claim ReadOnly: false processedfilesdir: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: apache-volume-claim ReadOnly: false </code></pre>
<p>I have an entrypoint defined as <code>ENTRYPOINT ["/bin/sh"]</code> in my <code>Dockerfile</code>. The docker image mostly contains shell scripts, but later on I added a Go binary, which I want to run from a Kubernetes job. Because the entrypoint is defined as <code>/bin/sh</code>, I get the error <code>No such file or directory</code> when I try to run the compiled and installed Go binary through <code>args</code> in the YAML of the Kubernetes job's deployment descriptor.</p> <p>Is there a well-defined way to achieve this goal?</p>
<p>As stated in the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes" rel="nofollow noreferrer">documentation</a>, you can override Docker's entrypoint by using the <code>command</code> section of the pod's definition in your deployment.yaml. Example from the docs:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: command-demo labels: purpose: demonstrate-command spec: containers: - name: command-demo-container image: debian command: ["printenv"] args: ["HOSTNAME", "KUBERNETES_PORT"] restartPolicy: OnFailure </code></pre>
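<p>For the Go-binary use case in the question above, the same idea applies to a Job's pod template. A minimal sketch (the image name, the binary path <code>/opt/tools/mytool</code>, and the arguments are made-up placeholders):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: go-tool-job
spec:
  template:
    spec:
      containers:
      - name: go-tool
        image: my-registry/my-image:latest   # placeholder image
        command: ["/opt/tools/mytool"]       # overrides ENTRYPOINT ["/bin/sh"]
        args: ["--flag=value"]               # placeholder arguments
      restartPolicy: Never
```

<p>With <code>command</code> set, the image's <code>/bin/sh</code> entrypoint is bypassed entirely, so the binary is executed directly instead of being passed to the shell.</p>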
<p>I've read the docs a thousand times but I still don't understand how I would use the <code>nginx.ingress.kubernetes.io/app-root</code> annotation.</p> <p>Can someone please provide an example + description in plain english?</p> <p>Docs: <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p> <p>I might as well copy paste the entire text; it's not that much.</p> <blockquote> <p>nginx.ingress.kubernetes.io/app-root Defines the Application Root that the Controller must redirect if it's in '/' context string</p> </blockquote> <p><strong>App Root</strong></p> <p>Create an Ingress rule with a app-root annotation:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /app1 name: approot namespace: default spec: rules: - host: approot.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / </code></pre> <p>Check the rewrite is working</p> <pre><code>$ curl -I -k http://approot.bar.com/ HTTP/1.1 302 Moved Temporarily Server: nginx/1.11.10 Date: Mon, 13 Mar 2017 14:57:15 GMT Content-Type: text/html Content-Length: 162 Location: http://stickyingress.example.com/app1 Connection: keep-alive </code></pre>
<p>It simply <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="noreferrer">redirects requests coming to <code>'/'</code> to a different path</a> internally. This is useful if your application's root path is different from <code>'/'</code>.</p> <p>For example, say your application is listening on <code>'/server'</code>; you'd set your Nginx annotations like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /server </code></pre> <p>At this point, any incoming request to <code>example.com/</code> would be <a href="https://stackoverflow.com/a/64754302/10892354">redirected with HTTP status code 302</a> to indicate where the resource is: <code>example.com/server</code>.</p>
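<p>Conceptually, the behaviour the annotation produces is equivalent to this plain nginx rule (a sketch of the idea, not the literal config the controller generates):</p>

```nginx
# Redirect only the exact root path; all other paths are served as usual.
location = / {
    return 302 /server;
}
```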
<p>I have the following <code>docker-compose</code> file and I don't get how I can set the <code>working_dir</code> and <code>entrypoint</code> in the helm <code>deployment.yaml</code>. Does someone have an example on how to do this?</p> <p><code>docker-compose</code></p> <pre><code>version: "3.5" services: checklist: image: ... working_dir: /checklist entrypoint: ["dotnet", "Checklist.dll"] ... </code></pre>
<p>Helm uses Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer"><code>Deployment</code></a> with a different terminology than Docker. You'll want to define:</p> <ul> <li><code>command</code> in Helm for <code>entrypoint</code> in Docker Compose (see <a href="https://stackoverflow.com/questions/44316361/difference-between-docker-entrypoint-and-kubernetes-container-spec-command">this post</a>)</li> <li><code>workingDir</code> in Helm for <code>working_dir</code> in Docker Compose (see <a href="https://stackoverflow.com/questions/36634614/how-to-set-the-workdir-of-a-container-launched-by-kubernetes">this post</a>)</li> </ul> <p>For your example it would be:</p> <pre><code>... containers: - name: checklist ... command: ["dotnet", "Checklist.dll"] # Docker entrypoint equivalent workingDir: "/checklist" # Docker working_dir equivalent </code></pre>
<p>As the title says, I want to get the running status of an Istio object from the Kubernetes platform, but the Kubernetes API does not seem to support the status of CRD objects. I hope someone can help me out with a solution, thank you!</p>
<p>You might not have permissions to update a custom resource's <code>status</code>, but you can get it by doing a <code>GET</code> API call on the object. The <code>GET</code> API call will return the complete object, along with the status part.</p> <p>Alternatively, you can also see the status by doing a <code>kubectl get virtualservice -o="jsonpath={.status}"</code>.</p> <p><a href="https://github.com/kubernetes/sample-controller/blob/499fb3ff94b9068498a9474e813cd5526a3821f0/controller.go#L251" rel="nofollow noreferrer">Here</a> is an example on how to use the API to issue a <code>GET</code> API call on a custom resource.</p>
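<p>To illustrate that the status really is just part of the object returned by the GET call, here is a local sketch that extracts the status stanza from a canned response (the VirtualService name and status fields are made up; against a real cluster you would pipe <code>kubectl get virtualservice my-vs -o json</code> instead of the hard-coded string):</p>

```shell
# Canned copy of what a GET on a custom resource might return (status included).
response='{"metadata":{"name":"my-vs"},"status":{"observedGeneration":1}}'

# Pull the status stanza out of the returned object
generation=$(echo "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"]["observedGeneration"])')
echo "observedGeneration: $generation"
```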
<p>I've set up a Kubernetes cluster on a custom OpenStack platform to which I don't have any administration access. It is only possible to create instances and assign firewall rules to them. Each new instance is automatically provided with a static external IPv4 address which can be reached globally. This means that I can't even create OpenStack routers to my internal network.</p> <p>So far so good, I've set up a Kubernetes cluster using <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer"><code>kubeadm</code></a>, <code>CoreDNS</code> and <code>flannel</code> as the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#network-plugin-requirements" rel="nofollow noreferrer"><code>CNI</code></a>. The cluster hardware setup is as follows:</p> <ul> <li>Kubernetes-Client and Server-Version: <code>1.14.3</code> linux/amd64</li> <li>All Servers run on Fedora Cloud Base 28</li> <li>1 Kubernetes Master</li> <li>5 Worker Nodes</li> <li>6 static external IPv4 addresses (one for each of the nodes)</li> </ul> <p>After the setup, I deployed my required services using <code>deployment</code>-files. Everything works as it should.</p> <p>My question is now: how can I make the services externally accessible, since I have no <code>LoadBalancer</code> provided by OpenStack? What is the best approach for this?</p> <p>I'm asking this question after an estimated four hours of Googling (maybe I'm just bad at it). 
I tried the suggested approaches from the <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">Documentation</a>, but it remains totally unclear to me what the concept and the right approach for this task are.</p> <p>For example, I tried to assign external IPs to the Service by using</p> <pre class="lang-sh prettyprint-override"><code>kubectl expose deployment $DEPLOYMENT_NAME \ --name=$SERVICE_NAME \ --port=$HOST_PORT \ --target-port=$TARGET_PORT \ --type=NodePort </code></pre> <p>or this</p> <pre class="lang-sh prettyprint-override"><code>kubectl patch service $SERVICE_NAME -p '{"spec":{"externalIPs":["&lt;worker_host_ip&gt;"]}}' </code></pre> <p>Even though the external IP is now assigned, traffic to my destination service is still not routed properly, because, as I understand it, Kubernetes automatically assigns the hosts and random ports to the Pods (which is the desired behaviour), but with that in mind, every redeployment could break the assigned IP-to-service mapping.</p> <p>With your help (and a big "Thank You!" in advance), I expect to be able to assign the application ports of the containers to the static IPv4 address of one of the hosts, and for Kubernetes to automatically know that the deployed service is routed over this specific IP, even if the Pods run on a different worker.</p>
<p>After a while of researching, I stumbled upon <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>, a bare-metal load-balancing solution for Kubernetes. It helped me solve the issue described above. However, from my point of view, MetalLB should only be used as a last resort, since it is not production ready and requires extensive configuration together with the NGINX Ingress Controller to properly map the IPv4 distribution of a cluster.</p> <p>Anyway, a big thank you to the gentlemen above, who were willing to support me and give advice!</p>
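<p>For anyone hitting the same wall: a minimal MetalLB Layer 2 configuration looks roughly like the sketch below (the address range is a placeholder; in a setup like the one described above it would cover the static IPv4 addresses handed over to MetalLB):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.10-203.0.113.15   # placeholder range
```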
<p>I have a Dockerfile that uses the official nginx image to serve a static website (from an Angular app):</p> <pre><code>FROM nginx:1.17 COPY ./dist/thepath/ /usr/share/nginx/html COPY ./nginx-custom.conf /etc/nginx/conf.d/default.conf CMD ["nginx", "-g", "daemon off;"] </code></pre> <p>It works perfectly when running locally via Docker. But I'm trying to use it with Red Hat's OpenShift (owned by the company I work for). When I start a build</p> <p>oc start-build somelabel --from-dir . --follow -n someprojectname</p> <p>OpenShift uses an image from the Red Hat registry instead of the official one:</p> <blockquote> <p>Step 1/7 : FROM registry.access.redhat.com/rhscl/nginx-112-rhel7@sha256:ba3352b9f577e80cc5bd6805f86d0cee5965b3b6caf26c10e84f54b6b33178e5</p> </blockquote> <p>Is it possible to "force" usage of the image from Docker Hub (<a href="https://hub.docker.com/_/nginx" rel="nofollow noreferrer">https://hub.docker.com/_/nginx</a>)?</p>
<p>OpenShift doesn't support running containers as root user. Make sure the container that you are trying to run runs as non root user. </p>
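<p>If the goal is to keep an nginx-based image that also satisfies this constraint, one common approach (a sketch, untested on OpenShift) is to build from the unprivileged nginx variant on Docker Hub, which runs as a non-root user and listens on port 8080 instead of 80:</p>

```dockerfile
FROM nginxinc/nginx-unprivileged:1.17
COPY ./dist/thepath/ /usr/share/nginx/html
COPY ./nginx-custom.conf /etc/nginx/conf.d/default.conf
# This variant already drops root and listens on 8080
EXPOSE 8080
```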
<p>I'm trying to deploy a model on GKE with TensorFlow model serving, using a GPU. I created a container with Docker and it works great on a cloud VM. I'm trying to scale using GKE, but the deployment exits with the above error.</p> <p>I created the GKE cluster with only 1 node, with a GPU (Tesla T4). I installed the drivers according to the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#installing_drivers" rel="nofollow noreferrer">docs</a>.</p> <p>As far as I understand, this was successful (a pod named <code>nvidia-driver-installer-tckv4</code> was added to the pods list on the node, and it's running without errors).</p> <p>Next I created the deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: reph-deployment spec: replicas: 1 template: metadata: labels: app: reph spec: containers: - name: reph-container image: gcr.io/&lt;project-id&gt;/reph_serving_gpu resources: limits: nvidia.com/gpu: 1 ports: - containerPort: 8500 args: - "--runtime=nvidia" </code></pre> <p>Then I ran <code>kubectl create -f d1.yaml</code> and the container exited with the above error in the logs.</p> <p>I also tried to switch the OS from COS to Ubuntu and run an example from the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#pods_gpus" rel="nofollow noreferrer">docs</a>.</p> <p>I installed the drivers as above, this time for Ubuntu, and applied this YAML taken from the GKE docs (only changing the number of GPUs to consume):</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-gpu-pod spec: containers: - name: my-gpu-container image: nvidia/cuda:10.0-runtime-ubuntu18.04 resources: limits: nvidia.com/gpu: 1 </code></pre> <p>This time I'm getting CrashLoopBackOff without anything more in the logs.</p> <p>Any idea what's wrong? I'm a total newcomer to Kubernetes and Docker, so I may be missing something trivial, but I really tried to stick to the GKE docs.</p>
<p>ok, I think the docs aren't clear enough on this, but it seems that what was missing was including <code>/usr/local/nvidia/lib64</code> in the <code>LD_LIBRARY_PATH</code> environment variable. The following yaml file runs successfully:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: reph-deployment spec: replicas: 1 template: metadata: labels: app: reph spec: containers: - name: reph-container env: - name: LD_LIBRARY_PATH value: "$LD_LIBRARY_PATH:/usr/local/nvidia/lib64" image: gcr.io/&lt;project-id&gt;/reph_serving_gpu imagePullPolicy: IfNotPresent resources: limits: nvidia.com/gpu: 1 requests: nvidia.com/gpu: 1 ports: - containerPort: 8500 args: - "--runtime=nvidia" </code></pre> <p>Here's the relevant part in the GKE <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#cuda" rel="nofollow noreferrer">docs</a> </p>
<p>How to install kubeadm for Kubernetes in macOS. When tempting to use </p> <blockquote> <p>brew install kubeadm</p> </blockquote> <p>I get this error</p> <pre><code>Error: No available formula with the name "kubeadm" ==&gt; Searching for a previously deleted formula (in the last month).. </code></pre> <p>NB : In macOS I can't use apt-get </p>
<p>Not sure about macOS.</p> <p>The supported platforms on their list are:</p> <pre><code>Ubuntu 16.04+ Debian 9 CentOS 7 RHEL 7 Fedora 25/26 (best-effort) HypriotOS v1.0.1+ Container Linux (tested with 1800.6.0) </code></pre> <p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/</a></p> <p><code>kubeadm</code> is not meant for a local desktop environment.</p> <p>You can install Docker Desktop for Mac, which can set up a local Kubernetes environment for you.</p>
<p>The IP whitelisting/blacklisting example explained here <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/</a> uses source.ip attribute. However, in kubernetes (kubernetes cluster running on docker-for-desktop) source.ip returns the IP of kube-proxy. A suggested workaround is to use <code>request.headers[&quot;X-Real-IP&quot;]</code>, however it doesn't seem to work and returns kube-proxy IP in docker-for-desktop in mac.</p> <p><a href="https://github.com/istio/istio/issues/7328" rel="nofollow noreferrer">https://github.com/istio/istio/issues/7328</a> mentions this issue and states:</p> <blockquote> <p>With a proxy that terminates the client connection and opens a new connection to your nodes/endpoints. In such cases the source IP will always be that of the cloud LB, not that of the client.</p> <p>With a packet forwarder, such that requests from the client sent to the loadbalancer VIP end up at the node with the source IP of the client, not an intermediate proxy.</p> <p>Loadbalancers in the first category must use an <em>agreed upon protocol</em> between the loadbalancer and backend to communicate the true client IP such as the HTTP X-FORWARDED-FOR header, or the proxy protocol.</p> </blockquote> <p>Can someone please help how can we define a protocol to get the client IP from the loadbalancer?</p>
<p>Maybe you are confusing kube-proxy and Istio. By default Kubernetes uses kube-proxy, but you can install Istio, which injects a new proxy per pod to control the traffic in both directions to the services inside the pod.</p> <p>With that said, you can install Istio on your cluster, enable it for only the services you need, and apply blacklisting using the Istio mechanisms:</p> <p><a href="https://istio.io/docs/tasks/policy-enforcement/denial-and-list/" rel="nofollow noreferrer">https://istio.io/docs/tasks/policy-enforcement/denial-and-list/</a> </p> <p>To make a blacklist using the source IP, we have to let Istio manage how the source IP address is fetched, and use some configuration like this, taken from the docs:</p> <pre><code>apiVersion: config.istio.io/v1alpha2 kind: handler metadata: name: whitelistip spec: compiledAdapter: listchecker params: # providerUrl: ordinarily black and white lists are maintained # externally and fetched asynchronously using the providerUrl. overrides: ["10.57.0.0/16"] # overrides provide a static list blacklist: false entryType: IP_ADDRESSES --- apiVersion: config.istio.io/v1alpha2 kind: instance metadata: name: sourceip spec: compiledTemplate: listentry params: value: source.ip | ip("0.0.0.0") --- apiVersion: config.istio.io/v1alpha2 kind: rule metadata: name: checkip spec: match: source.labels["istio"] == "ingressgateway" actions: - handler: whitelistip instances: [ sourceip ] --- </code></pre> <p>You can use the param <code>providerURL</code> to maintain an external list.</p> <p>Also check the use of <code>externalTrafficPolicy: Local</code> on the ingress-gateway service of Istio.</p> <p>As per the comments, my last advice is to use a different ingress controller to avoid the use of kube-proxy; my recommendation is the nginx-controller:</p> <p><a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p> <p>You can configure this ingress as a regular nginx acting as a proxy.</p>
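<p>For reference, a sketch of the <code>externalTrafficPolicy</code> setting mentioned above, applied to the Istio ingress gateway Service (the ports and selector here are simplified placeholders; check the actual Service in your <code>istio-system</code> namespace):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    istio: ingressgateway
  ports:
  - port: 80
    targetPort: 80
```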
<p>This question is more advice-related, so I hope it's not flagged for anything. I just really need help :(</p> <p>I am trying to implement CI/CD using GitHub/Jenkins/Kubernetes.</p> <p>On a high level, this is what should happen:</p> <ol> <li>Build on Jenkins</li> <li>Push to container registry</li> <li>Deploy built image on Kubernetes development cluster</li> <li>Once testing is finished on the development cluster, deploy it on a client testing cluster and finally the production cluster</li> </ol> <p>So far, I have created a job on Jenkins which is triggered by a GitHub hook. This job is responsible for the following things:</p> <ol> <li>Checkout from GitHub</li> <li>Run unit tests / call REST API and send unit test results</li> <li>Build artifacts using maven / call REST API and report whether the build succeeded or failed</li> <li>Build docker image</li> <li>Push docker image to container registry (the docker image will have incremented versions which match the BUILD_NUMBER environment variable)</li> </ol> <p>The above tasks are more or less completed and I don't need much assistance with them (unless anyone thinks the aforementioned steps are not best practice).</p> <p>I do need help with the part where I deploy to the Kubernetes cluster.</p> <p>For local testing, I have set up a local cluster using Vagrant boxes and it works. In order to deploy the built image on the development cluster, I am thinking about doing it like this:</p> <ol> <li>Point the Jenkins build server to the Kubernetes development cluster</li> <li>Deploy using deployment.yml and service.yml (available in my repo)</li> </ol> <p>This part I need help with...</p> <p>Is this wrong practice? Is there a better/easier way to do it?</p> <p>Also, is there a way to migrate between clusters? E.g. development cluster to client testing cluster, and client testing cluster to production cluster, etc.</p> <p>When searching on the internet, the name Helm comes up a lot, but I am not sure if it is applicable to my use case. I would test it and see, but I am a bit hard-pressed for time, which is why I can't.</p> <p>Would appreciate any help y'all could provide.</p> <p>Thanks a lot</p>
<p>There are countless ways of doing this. Leave Helm out for now, as you are just starting.</p> <p>If you are already using GitHub and Docker, then I would just recommend that you push your code/changes/config/Dockerfile to GitHub, which will auto-trigger a Docker build on Docker Hub (or maybe Jenkins in your case, if you don't want to use Docker Hub for builds). It can be a multi-stage Docker build where you build code, run tests, throw away the dev environment, and finally produce a production Docker image. Once the image is produced, it will trigger a webhook to your Kubernetes deployment job/manifests to deploy to the test environment, followed by a manual trigger to deploy to production.</p> <p>The Docker images can be tagged based on the SHA of the commits in GitHub/Git, so that you can deploy and roll back based on commits.</p> <p>Reference: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build</a></p> <p>Here is my GitLab implementation of the GitOps workflow:</p> <pre><code># Author , IjazAhmad image: docker:latest stages: - build - test - deploy services: - docker:dind variables: CI_REGISTRY: dockerhub.example.com CI_REGISTRY_IMAGE: $CI_REGISTRY/$CI_PROJECT_PATH DOCKER_DRIVER: overlay2 before_script: - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY docker-build: stage: build script: - docker pull $CI_REGISTRY_IMAGE:latest || true - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest . 
docker-push: stage: build script: - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA - docker push $CI_REGISTRY_IMAGE:latest unit-tests: stage: test script: - echo "running unit testson the image" - echo "running security testing on the image" - echo "pushing the results to build/test pipeline dashboard" sast: stage: test script: - echo "running security testing on the image" - echo "pushing the results to build/test pipeline dashboard" dast: stage: test script: - echo "running security testing on the image" - echo "pushing the results to build/test pipeline dashboard" testing: stage: deploy script: - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml - kubectl apply --namespace webproduction-test -f k8s-configs/ environment: name: testing url: https://testing.example.com only: - branches staging: stage: deploy script: - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml - kubectl apply --namespace webproduction-stage -f k8s-configs/ environment: name: staging url: https://staging.example.com only: - master production: stage: deploy script: - sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" k8s-configs/deployment.yaml - sed -i "s|TAG|$CI_COMMIT_SHA|g" k8s-configs/deployment.yaml - kubectl apply --namespace webproduction-prod -f k8s-configs/ environment: name: production url: https://production.example.com when: manual only: - master </code></pre> <p><a href="https://i.stack.imgur.com/KAcu3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KAcu3.png" alt="Pipeline"></a></p> <p>Links:</p> <p><a href="https://www.fourkitchens.com/blog/article/trigger-jenkins-builds-pushing-github/" rel="nofollow noreferrer">Trigger Jenkins builds by pushing to Github</a></p> <p><a href="https://medium.com/@marc_best/trigger-a-jenkins-build-from-a-github-push-b922468ef1ae" rel="nofollow noreferrer">Triggering a Jenkins 
build from a push to Github</a></p> <p><a href="https://embeddedartistry.com/blog/2017/12/21/jenkins-kick-off-a-ci-build-with-github-push-notifications" rel="nofollow noreferrer">Jenkins: Kick off a CI Build with GitHub Push Notifications</a></p>
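<p>The <code>sed</code> substitutions used in the deploy stages above can be tried out locally; this sketch uses made-up values for the registry image and commit SHA:</p>

```shell
# A manifest fragment committed to the repo with CI_IMAGE/TAG placeholders
cat > /tmp/deployment.yaml <<'EOF'
    spec:
      containers:
      - name: web
        image: CI_IMAGE:TAG
EOF

# Values GitLab CI would provide at runtime (made up here)
CI_REGISTRY_IMAGE="dockerhub.example.com/group/web"
CI_COMMIT_SHA="a1b2c3d"

# The same substitutions as in the pipeline's deploy stages
sed -i "s|CI_IMAGE|$CI_REGISTRY_IMAGE|g" /tmp/deployment.yaml
sed -i "s|TAG|$CI_COMMIT_SHA|g" /tmp/deployment.yaml

grep image: /tmp/deployment.yaml
```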
<p>Is there a way to access a pod by its hostname? I have a pod with hostname <code>my-pod-1</code> that needs to connect to another pod with hostname <code>my-pod-2</code>.</p> <p>What is the best way to achieve this without services?</p>
<p>Based on your description, a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">Headless Service</a> is what you are looking for. With a headless service, and with the pods' <code>hostname</code> and <code>subdomain</code> fields set (the <code>subdomain</code> must match the service name), you can reach a pod at <code>&lt;hostname&gt;.&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local</code>.</p> <p>Or access the pod by its IP address directly.</p>
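<p>A sketch of the manifests (the pod name is taken from the question; the image and service name are placeholders):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None              # this is what makes the service headless
  selector:
    app: my-app
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-1
  labels:
    app: my-app
spec:
  hostname: my-pod-1
  subdomain: my-headless-svc   # must match the headless service name
  containers:
  - name: app
    image: nginx               # placeholder image
```

<p>With this in place, <code>my-pod-2</code> can reach the pod at <code>my-pod-1.my-headless-svc.default.svc.cluster.local</code>.</p>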
<p>After creating a NFS Persistent Volume for one of Deployments running in a cluster the containers are able to store and share the file data between each other. The file data is persistent between the containers life cycles too. And that's great! But I wonder where exactly is this file data stored: where is it "physically" located? Is it saved onto the container itself or is it saved somewhere onto a VM's disk - the VM that is used to run the Deployment? </p> <p>The VM that is used to host the Deployment has only 20 Gb available disk space by default. Let's say I am running a Docker container inside a pod on a Node (aka VM) running some file server. What happens if I attempt to transfer a 100 Gb file to that File Server? Where will be this gigantic file saved if the VM disk itself has only 20 Gb available space? </p> <h3>Edited later by appending the portion of yaml file:</h3> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-claim labels: app: deployment spec: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi # --- apiVersion: apps/v1 kind: Deployment metadata: name: deployment spec: selector: matchLabels: app: app replicas: 1 minReadySeconds: 10 strategy: type: RollingUpdate # Recreate rollingUpdate: maxUnavailable: 1 maxSurge: 1 template: metadata: labels: app: app spec: containers: - name: container image: 12345.dkr.ecr.us-west-2.amazonaws.com/container:v001 ports: - containerPort: 80 imagePullPolicy: IfNotPresent volumeMounts: - name: volume-mount mountPath: /data volumes: - name: volume-mount persistentVolumeClaim: claimName: pv-claim </code></pre>
<p>The "physical" location of the volume is defined by the provisioner, which is defined by the storage class. Your PV claim doesn't have a storage class assigned. That means that the default storage class is used, and it can be anything. I suspect that in EKS default storage class will be EBS, but you should double check that.</p> <p>First, see what storage class is actually assigned to your persistent volumes:</p> <pre><code>kubectl get pv -o wide </code></pre> <p>Then see what provisioner is assigned to that storage class:</p> <pre><code>kubectl get storageclass </code></pre> <p>Most likely you will see something like <code>kubernetes.io/aws-ebs</code>. Then google documentation for a specific provisioner to understand where the volume is stored "physically".</p>
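<p>For illustration, a typical EBS-backed storage class on EKS looks like this sketch; if something similar shows up as the default, the persistent volume's data lives on a dynamically provisioned EBS volume attached to the node, not on the node's 20 Gb boot disk:</p>

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
```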
<p>My problem is that I am trying to use the <code>unstructured.Unstructured</code> type to create a deployment, as such:</p> <pre><code>// +kubebuilder:rbac:groups=stable.resource.operator.io,resources=resource,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=stable.resource.operator.io,resources=resource/status,verbs=get;update;patch // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get;list;watch;create;update;patch;delete func (r *ResourceReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) { ctx := context.Background() log := r.Log.WithValues("resource", req.NamespacedName) instance := &amp;stablev1.Resource{} // your logic here if err := r.Get(ctx, req.NamespacedName, instance); err != nil { log.Error(err, "unable to fetch Resource") // we'll ignore not-found errors, since they can't be fixed by an immediate // requeue (we'll need to wait for a new notification), and we can get them // on deleted requests. 
return ctrl.Result{}, ignoreNotFound(err) } // your logic here u := &amp;unstructured.Unstructured{} u.Object = map[string]interface{}{ "name": "name", "namespace": "namespace", "spec": map[string]interface{}{ "replicas": 2, "selector": map[string]interface{}{ "matchLabels": map[string]interface{}{ "foo": "bar", }, }, "template": map[string]interface{}{ "labels": map[string]interface{}{ "foo": "bar", }, "spec": map[string]interface{}{ "containers": []map[string]interface{}{ { "name": "nginx", "image": "nginx", }, }, }, }, }, } u.SetGroupVersionKind(schema.GroupVersionKind{ Group: "apps", Kind: "Deployment", Version: "v1", }) err = r.Create(context.Background(), u) log.Error(err, "unable to get object") log.V(1).Info("reconciling") return ctrl.Result{}, nil } </code></pre> <p>My understanding is that I have specified the rbac rules, my operator should be able to create said Deployment, but I am still getting the error:</p> <pre><code>the server does not allow this method on the requested resource </code></pre> <p>All the examples I have seen are based of using the actual deployment type, I cant find anywhere where there is an example of doing this with type unstructured, am i missing something? Just to save time, I have tried:</p> <ul> <li>Applying the clusterroles manually</li> <li>Given the operator cluster-admin</li> <li>used both make run and make deploy (obviously after running make manifests etc..)</li> <li>The roles generator is working</li> <li>I have started a new project to make sure me playing around with env is not the cause</li> </ul>
<p>So, separately from what the docs state at <a href="https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client" rel="nofollow noreferrer">https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client</a>, when you define an object of type unstructured.Unstructured you need to set the namespace and name inside the metadata field, like so:</p> <pre><code>u.Object = map[string]interface{}{ "metadata": map[string]interface{}{ "name": "name", "namespace": "namespace"}, "spec": map[string]interface{}{ "replicas": 2, "selector": map[string]interface{}{ "matchLabels": map[string]interface{}{ "foo": "bar", }, }, "template": map[string]interface{}{ "labels": map[string]interface{}{ "foo": "bar", }, "spec": map[string]interface{}{ "containers": []map[string]interface{}{ { "name": "nginx", "image": "nginx", }, }, }, }, }, } </code></pre> <p>Otherwise the unstructured client reads it as a cluster-scoped resource, and a cluster-scoped Deployment does not exist.</p>
<p>I have a cluster running on GKE. I created 2 separated node pools. My first node pool (let's call it <code>main-pool</code>) scales from 1 to 10 nodes. The second one (let's call it <code>db-pool</code>) scales from 0 to 10 nodes. The <code>db-pool</code> nodes have specific needs as I have to dynamically create some pretty big databases, requesting a lot of memory, while the <code>main-pool</code> is for "light" workers. I used node selectors for my workers to be created on the right nodes and everything works fine.</p> <p>The problem I have is that the <code>db-pool</code> nodes, because they request a lot of memory, are way more expensive and I want them to scale down to 0 when no database is running. It was working fine until I added the node selectors (I am not 100% sure but it seems to be when it happened), but now it will not scale down to less than 1 node. I believe it is because some kube-system pods are running on this node:</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE heapster-v1.6.0-beta.1-6c9dfdb9f5-2htn7 3/3 Running 0 39m 10.56.18.22 gke-padawan-cluster-ipf-db-pool-bb2827a7-99pm &lt;none&gt; metrics-server-v0.3.1-5b4d6d8d98-7h659 2/2 Running 0 39m 10.56.18.21 gke-padawan-cluster-ipf-db-pool-bb2827a7-99pm &lt;none&gt; fluentd-gcp-v3.2.0-jmlcv 2/2 Running 0 1h 10.132.15.241 gke-padawan-cluster-ipf-db-pool-bb2827a7-99pm &lt;none&gt; kube-proxy-gke-padawan-cluster-ipf-db-pool-bb2827a7-99pm 1/1 Running 0 1h 10.132.15.241 gke-padawan-cluster-ipf-db-pool-bb2827a7-99pm &lt;none&gt; prometheus-to-sd-stfz4 1/1 Running 0 1h 10.132.15.241 gke-padawan-cluster-ipf-db-pool-bb2827a7-99pm &lt;none&gt; </code></pre> <p>Is there any way to prevent it from happening?</p>
<p>System pods like Fluentd and (eventually kube-proxy) are daemonsets and are required on each node; these shouldn't stop scaling down though. Pods like Heapster and metrics-server are not required and those can block the node pool from scaling down to 0.</p> <p>The best way to stop these non-node critical system pods from scheduling on your expensive node pool is to use <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">taints and tolerations</a>. The taints will prevent pods from being scheduled to the nodes, you just need to make sure that the db pods do get scheduled on the larger node pool by setting tolerations along with the node selector.</p> <p>You should configure the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-taints#creating_a_node_pool_with_node_taints" rel="noreferrer">node taints when you create the node pool</a> so that new nodes are created with the taint already in place. With proper taints and tolerations, your node pool should be able to scale down to 0 without issue.</p>
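<p>A sketch of what that could look like for the pools in the question (the taint key/value <code>dedicated=db</code> is made up; the node pool would be created with <code>--node-taints=dedicated=db:NoSchedule</code>):</p>

```yaml
# Pod spec fragment for the database workloads: keep the existing nodeSelector
# and add a toleration so these pods can land on the tainted db-pool nodes,
# while everything else (including optional system pods) is kept off.
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: db-pool
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "db"
    effect: "NoSchedule"
```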
<p>I am trying to use "autoscaling/v2beta2" apiVersion in my configuration following <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">this tutorial</a>. And also I am on Google Kubernetes Engine.</p> <p>However I get this error:</p> <p><code>error: unable to recognize "backend-hpa.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2beta2"</code></p> <p>When I list the available api-versions:</p> <pre><code>$ kubectl api-versions admissionregistration.k8s.io/v1beta1 apiextensions.k8s.io/v1beta1 apiregistration.k8s.io/v1 apiregistration.k8s.io/v1beta1 apps/v1 apps/v1beta1 apps/v1beta2 authentication.k8s.io/v1 authentication.k8s.io/v1beta1 authorization.k8s.io/v1 authorization.k8s.io/v1beta1 autoscaling/v1 autoscaling/v2beta1 batch/v1 batch/v1beta1 certificates.k8s.io/v1beta1 certmanager.k8s.io/v1alpha1 cloud.google.com/v1beta1 coordination.k8s.io/v1beta1 custom.metrics.k8s.io/v1beta1 extensions/v1beta1 external.metrics.k8s.io/v1beta1 internal.autoscaling.k8s.io/v1alpha1 metrics.k8s.io/v1beta1 networking.gke.io/v1beta1 networking.k8s.io/v1 policy/v1beta1 rbac.authorization.k8s.io/v1 rbac.authorization.k8s.io/v1beta1 scalingpolicy.kope.io/v1alpha1 scheduling.k8s.io/v1beta1 storage.k8s.io/v1 storage.k8s.io/v1beta1 v1 </code></pre> <p>So indeed I am missing autoscaling/v2beta2.</p> <p>Then I check my kubernetes version:</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.6", GitCommit:"abdda3f9fefa29172298a2e42f5102e777a8ec25", GitTreeState:"clean", BuildDate:"2019-05-08T13:53:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.6-gke.13", GitCommit:"fcbc1d20b6bca1936c0317743055ac75aef608ce", GitTreeState:"clean", BuildDate:"2019-06-19T20:50:07Z", GoVersion:"go1.11.5b4", Compiler:"gc", 
Platform:"linux/amd64"} </code></pre> <p>So it looks like I am on version 1.13.6, and supposedly autoscaling/v2beta2 has been available since 1.12.</p> <p>So why is it not available for me?</p>
<p>Unfortunately, the HPA autoscaling API <code>v2beta2</code> is not available on GKE yet. It can be freely used with kubeadm and Minikube.</p> <p>There is already an open issue on the issue tracker: <a href="https://issuetracker.google.com/135624588" rel="nofollow noreferrer">https://issuetracker.google.com/135624588</a></p>
<p>We can use a DaemonSet object to deploy one replica on each node. How can we deploy, say, 2 or 3 replicas per node? How can we achieve that? Please let us know.</p>
<p>There is no way to force x pods per node the way a DaemonSet does. However, with some planning, you can force a fairly even pod distribution across your nodes using <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="noreferrer">pod anti-affinity</a>.</p> <p>Let's say we have 10 nodes. First, we need a ReplicaSet (deployment) with 30 pods (3 per node). Next, we want to set the pod anti-affinity to use <code>preferredDuringSchedulingIgnoredDuringExecution</code> with a relatively high weight and match the deployment's labels. This will cause the scheduler to prefer not scheduling pods where the same pod already exists. Once there is 1 pod per node, the cycle starts over again. A node with 2 pods will be weighted lower than one with 1 pod, so the next pod should try to go there.</p> <p>Note this is not as precise as a DaemonSet and may run into some limitations when it comes time to scale the cluster up or down.</p> <p>A more reliable way, if the cluster will be scaled, is to simply create multiple DaemonSets with different names, but identical in every other way. Since the DaemonSets will have the same labels, they can all be exposed through the same service.</p>
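<p>A minimal sketch of the anti-affinity approach described above (the <code>app: my-worker</code> label is illustrative):</p>

```yaml
spec:
  replicas: 30                 # 3 per node on a 10-node cluster
  selector:
    matchLabels:
      app: my-worker
  template:
    metadata:
      labels:
        app: my-worker
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-worker
              topologyKey: kubernetes.io/hostname   # spread across nodes
```

<p>Because the rule is <code>preferred</code> rather than <code>required</code>, the scheduler still places all 30 pods; it just weights nodes with fewer matching pods higher, giving the roughly-even spread.</p>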
<p>I've created an nginx ingress controller that is linked to two services. The website works fine, but the js and css files are not loaded in the HTML page (404 errors). I've created the nginx pod using helm and included the nginx config in the ingress.yaml. The error is raised only when I use nginx; if I run the docker image locally, it works fine. Also, if I set the services' type to LoadBalancer, the applications work fine.</p> <p><a href="https://i.stack.imgur.com/F256W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F256W.png" alt="enter image description here"></a></p> <p>here are the GKE services:</p> <p><a href="https://i.stack.imgur.com/HgDvT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HgDvT.png" alt="enter image description here"></a></p> <p>ingress.yaml:</p> <pre class="lang-py prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: creationTimestamp: 2019-07-08T08:35:52Z generation: 1 name: www namespace: default resourceVersion: "171166" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/www uid: 659594d6-a15b-11e9-a671-42010a980160 annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/ssl-redirect: "true" spec: rules: - host: twitter.domain.com http: paths: - backend: serviceName: twitter servicePort: 6020 - host: events.omarjs.com http: paths: - backend: serviceName: events servicePort: 6010 - http: paths: - backend: serviceName: twitter servicePort: 6020 path: /twitter - backend: serviceName: events servicePort: 6010 path: /events tls: - secretName: omarjs-ssl status: loadBalancer: {} </code></pre> <p>twitter.yaml:</p> <pre class="lang-py prettyprint-override"><code>apiVersion: v1 kind: Service metadata:
creationTimestamp: 2019-07-07T20:43:49Z labels: run: twitter name: twitter namespace: default resourceVersion: "27299" selfLink: /api/v1/namespaces/default/services/twitter uid: ec8920ca-a0f7-11e9-ac47-42010a98008f spec: clusterIP: 10.7.253.177 externalTrafficPolicy: Cluster ports: - nodePort: 31066 port: 6020 protocol: TCP targetPort: 3020 selector: run: twitter sessionAffinity: None type: NodePort status: loadBalancer: {} </code></pre>
<p>You can probably solve your problem by adding additional ingress rules which will route the requests for static files to proper directories. As far as I can see on the attached print screen, your static files are placed in <code>css</code> and <code>js</code> directories respectively, so you need to add 4 additional rules to your <code>ingress.yaml</code>. The final version of this fragment may look like that:</p> <pre><code> - http: paths: - backend: serviceName: twitter servicePort: 6020 path: /twitter - backend: serviceName: twitter servicePort: 6020 path: /css - backend: serviceName: twitter servicePort: 6020 path: /js - backend: serviceName: events servicePort: 6010 path: /events - backend: serviceName: events servicePort: 6010 path: /css - backend: serviceName: events servicePort: 6010 path: /js </code></pre>
<p>We have a lot of long-running, memory/cpu-intensive jobs in k8s which are run with celery on kubernetes on google cloud platform. However we have big problems with scaling/retrying/monitoring/alerting/guarantee of delivery. We want to move from celery to some more advanced framework. </p> <p>There is a comparison: <a href="https://github.com/argoproj/argo/issues/849" rel="noreferrer">https://github.com/argoproj/argo/issues/849</a> but it's not enough.</p> <p>Airflow:</p> <ul> <li>has better support in community: ~400 vs ~12 tags on SO, 13k stars vs ~3.5k stars</li> <li>python way of defining flows feels better than using just yamls</li> <li>support in GCP as a product: Cloud Composer</li> <li>better dashboard</li> <li>some nice operators like email operator</li> </ul> <p>Argoproj:</p> <ul> <li>Native support of Kubernetes (which i suppose is somehow better)</li> <li>Supports CI/CD/events which could be useful in the future</li> <li>(Probably) better support for passing results from one job to another (Airflow has the xcom mechanism)</li> </ul> <p>Our DAGs are not that complicated. Which of those frameworks should we choose? </p>
<p>Idiomatic Airflow isn't really designed to execute long-running jobs by itself. Rather, Airflow is meant to serve as the facilitator for kicking off compute jobs within another service (this is done with Operators) while monitoring the status of the given compute job (this is done with Sensors).</p> <p>Given your example, any compute task necessary within Airflow would be initiated with the appropriate Operator for the given service being used (Airflow has GCP hooks for simplifying this) and the appropriate Sensor would determine when the task was completed and no longer blocked downstream tasks dependent on that operation.</p> <p>While not intimately familiar on the details of Argoproj, it appears to be less of a "scheduling system" like Airflow and more of a system used to orchestrate and actually execute much of the compute.</p>
<p>With a Horizontal Pod <strong>Autoscaler</strong> implemented for the <strong>Deployment</strong> running a <strong>Pod</strong> with two different containers: <strong>Container-A</strong> and <strong>Container-B</strong>, I want to make sure that it only scales the number of Container-A. The number of Container-B should always be the same and not be affected by the Pod's <strong>Autoscaler</strong>. I wonder if isolating Container-B from the Autoscaler would be possible. If so, how to achieve it?</p> <pre><code>kubectl autoscale deployment my-deployment --min=10 --max=15 --cpu-percent=80 </code></pre>
<p>As the name suggests <strong>"Horizontal Pod Autoscaler"</strong> scales pods, not containers.</p> <p>However, I do not see a reason why you would want this behavior.</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#understanding-pods" rel="noreferrer">You should have more than one container in a Pod only if those containers are tightly coupled together and need to share resources.</a> The fact that you want to scale container A and B independently, tells me that those containers are not tightly coupled.</p> <p>I would suggest the following approach:</p> <ol> <li><p>Deployment A that manages Pods with container A. The Autoscaler will only scale the pods for this deployment.</p></li> <li><p>Deployment B that manages Pods with container B. This deployment will not be affected by the autoscaler.</p></li> </ol>
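<p>A sketch of that split (the deployment names are made up): deployment A gets the autoscaler, deployment B keeps a fixed replica count.</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a          # scaled by the HPA
spec:
  replicas: 10
  # ... selector, template with container A ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-b          # fixed size, untouched by the HPA
spec:
  replicas: 3
  # ... selector, template with container B ...
```

<p>Then <code>kubectl autoscale deployment app-a --min=10 --max=15 --cpu-percent=80</code> scales only the pods running container A.</p>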
<p>I'm trying to run gitlab-runner on my kubernetes cluster on raspberry pi. </p> <p>The gitlab pipeline generates the following output:</p> <pre><code>Running with gitlab-runner 10.4.0 (857480b6) on hello-world-gitlab-runner-6548-tq4mr (123) Using Kubernetes namespace: gitlab Using Kubernetes executor with image arm32v7/node ... Waiting for pod gitlab/runner-123-project-456-concurrent-789 to be running, status is Pending Waiting for pod gitlab/runner-123-project-456-concurrent-789 to be running, status is Pending Waiting for pod gitlab/runner-123-project-456-concurrent-789 to be running, status is Pending ERROR: Job failed (system failure): unable to upgrade connection: container not found ("helper") </code></pre> <p>The gitlab-runner pod logs the following output: </p> <pre><code>+ cp /scripts/config.toml /etc/gitlab-runner/ + /entrypoint register --non-interactive --executor kubernetes Running in system-mode. Registering runner... succeeded runner=xyz Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded! + /entrypoint run --user=gitlab-runner --working-directory=/home/gitlab-runner Starting multi-runner from /etc/gitlab-runner/config.toml ... builds=0 Running in system-mode. Configuration loaded builds=0 Metrics server disabled Checking for jobs... received job=63348569 repo_url=https://gitlab.com/mypublicaccount/helloworld-docker.git runner=123 ERROR: Job failed (system failure): unable to upgrade connection: container not found ("helper") job=456 project=789 runner=123 Checking for jobs... received job=456 repo_url=https://gitlab.com/mypublicaccount/helloworld-docker.git runner=789 ERROR: Job failed (system failure): unable to upgrade connection: container not found ("helper") job=123 project=456 runner=789 </code></pre> <p>Any idea to fix this? </p>
<p>I managed to specify helper container version (my runner was configured with <a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="noreferrer">helm chart</a> and <a href="https://docs.gitlab.com/runner/install/kubernetes.html#additional-configuration" rel="noreferrer">values.yaml</a>) - official image supports ARM</p> <pre><code>runners: helpers: image: gitlab/gitlab-runner-helper:arm-latest </code></pre>
<p>I've got a local deployment system that is mirroring our production system. Both are deployed by calling kubectl apply -f deployments-and-services.yaml</p> <p>I'm tagging all builds with the current git hash, which means that for clean deploys to GKE, all the services have a new docker image tag which means that apply will restart them, but locally to minikube the tag is often not changing which means that new code is not run. Before I was working around this by calling kubectl delete and then kubectl create for deploying to minikube, but as the number of services I'm deploying has increased, that is starting to stretch the dev cycle too far. </p> <p>Ideally, I'd like a better way to tell kubectl apply to restart a deployment rather than just depending on the tag?</p> <p>I'm curious how people have been approaching this problem.</p> <p>Additionally, I'm building everything with bazel which means that I have to be pretty explicit about setting up my build commands. I'm thinking maybe I should switch to just delete/creating the one service I'm working on and leave the others running. </p> <p>But in that case, maybe I should just look at telepresence and run the service I'm dev'ing on outside of minikube all together? What are best practices here?</p>
<p>Kubernetes only triggers a deployment when something has changed. If your image pull policy is <code>Always</code>, you can delete your pods to get the new image. If you want Kubernetes to handle the rollout, you can update the yaml file to contain a constantly changing metadata field (I use seconds since epoch), which will trigger a change. Ideally, you should be tagging your images with unique tags from your CI/CD pipeline, such as the commit reference they have been built from. This gets around the issue entirely and allows you to take full advantage of the Kubernetes rollback feature. </p>
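<p>One way to express the "constantly changing metadata field" idea is an annotation on the pod template that the deploy script rewrites on every run (the annotation key here is arbitrary, not a Kubernetes convention):</p>

```yaml
spec:
  template:
    metadata:
      annotations:
        deploy/timestamp: "1562749870"   # e.g. $(date +%s), substituted by the deploy script
```

<p>Because the pod template itself changed, <code>kubectl apply</code> triggers a new rollout even when the image tag is identical.</p>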
<h3>What I am trying to do:</h3> <p>The app that runs in the Pod does some refreshing of its data files on start. I need to restart the container each time I want to refresh the data. (A refresh can take a few minutes, so I have a Probe checking for readiness.)</p> <h3>What I <em>think</em> is a solution:</h3> <p>I will run a <a href="http://kubernetes.io/docs/user-guide/scheduled-jobs/" rel="noreferrer">scheduled job</a> to do a <em>rolling-update</em> kind of deploy, which will take the old Pods out, one at a time and replace them, without downtime.</p> <h3>Where I'm stuck:</h3> <p>How do I trigger a deploy, if I haven't changed anything??</p> <p>Also, I need to be able to do this from the scheduled job, obviously, so no manual editing..</p> <p>Any other ways of doing this?</p>
<p>As of kubectl 1.15, you can run:</p> <pre><code>kubectl rollout restart deployment &lt;deploymentname&gt; </code></pre> <p>What this does internally, is patch the deployment with a <code>kubectl.kubernetes.io/restartedAt</code> annotation so the scheduler performs a rollout according to the deployment update strategy.</p> <p>For previous versions of Kubernetes, you can simulate a similar thing:</p> <pre><code> kubectl set env deployment --env="LAST_MANUAL_RESTART=$(date +%s)" "deploymentname" </code></pre> <p>And even replace all in a single namespace:</p> <pre><code> kubectl set env --all deployment --env="LAST_MANUAL_RESTART=$(date +%s)" --namespace=... </code></pre>
<p>I just upgraded kubernetes cluster, but kubectl is very inconsistent in showing me the version. How can I verify this. Any source of truth?</p> <pre><code>[iahmad@web-prod-ijaz001 k8s-test]$ kubectl version Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} [iahmad@web-prod-ijaz001 k8s-test]$ kubectl get nodes NAME STATUS ROLES AGE VERSION wp-np3-0b Ready worker 55d v1.14.1 wp-np3-0f Ready worker 55d v1.14.1 wp-np3-45 Ready master 104d v1.13.5 wp-np3-46 Ready worker 104d v1.13.5 wp-np3-47 Ready worker 104d v1.13.5 wp-np3-48 Ready worker 43d v1.14.1 wp-np3-49 Ready worker 95d v1.13.5 wp-np3-76 Ready worker 55d v1.14.1 [iahmad@web-prod-ijaz001 k8s-test]$ </code></pre>
<p>IIRC: <code>kubectl version</code> is telling you what version the APIServer is at (1.14.3). <code>kubectl get nodes</code> is telling you what version the <code>kubelet</code> is on those nodes.</p>
<p>This question is somewhat related to one of <a href="https://stackoverflow.com/questions/57043048/best-practices-when-implementing-ci-cd-pipeline-using-github-jenkins-kubernetes">my previous questions</a> as in it gives a clearer idea on what I am trying to achieve.. This question is about an issue I ran into when trying to achieve the task in that previous question...</p> <p>I am trying to test if my <code>kubectl</code> works from within the Jenkins container. When I start up my Jenkins container I use the following command:</p> <pre><code>docker run \ -v /home/student/Desktop/jenkins_home:/var/jenkins_home \ -v $(which kubectl):/usr/local/bin/kubectl \ #bind docker host binary to docker container binary -v ~/.kube:/home/jenkins/.kube \ #docker host kube config file stored in /.kube directory. Binding this to $HOME/.kube in the docker container -v /var/run/docker.sock:/var/run/docker.sock \ -v $(which docker):/usr/bin/docker -v ~/.kube:/home/root/.kube \ --group-add 998 -p 8080:8080 -p 50000:50000 -d --name jenkins jenkins/jenkins:lts </code></pre> <p>The container starts up and I can login/create jobs/run pipeline scripts all no issue.</p> <p>I created a pipeline script just to check if I can access my cluster like this:</p> <pre><code>pipeline { agent any stages { stage('Kubernetes test') { steps { sh "kubectl cluster-info" } } } } </code></pre> <p>When running this job, it fails with the following error:</p> <pre><code>+ kubectl cluster-info // this is the step To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. error: the server doesn't have a resource type "services" </code></pre> <p>Thanks!</p>
<p>I'm not getting why there is:</p> <p><code>-v $(which kubectl):/usr/local/bin/kubectl -v ~/.kube:/home/jenkins/.kube</code></p> <p><code>/usr/local/bin/kubectl</code> is a kubectl binary and <code>~/.kube:/home/jenkins/.kube</code> should be the location where the kubectl binary looks for the cluster context file i.e. <code>kubeconfig</code>. First, you should make sure that the <code>kubeconfig</code> is mounted to the container at <code>/home/jenkins/.kube</code> and is accessible to <code>kubectl</code> binary. After appropriate volume mounts, you can verify by creating a session in the jenkins container with <code>docker container exec -it jenkins /bin/bash</code> and test with <code>kubectl get svc</code>. Make sure you have <code>KUBECONFIG</code> env var set in the session with:</p> <pre><code>export KUBECONFIG=/home/jenkins/.kube/kubeconfig </code></pre> <p>Before you run the verification test and</p> <pre><code>withEnv(["KUBECONFIG=$HOME/.kube/kubeconfig"]) { // Your stuff here } </code></pre> <p>In your pipeline code. If it works with the session, it should work in the pipeline as well.</p> <p>I would personally recommend to create a custom Docker image for Jenkins which will contain <code>kubectl</code> binary and other utilities necessary (such as <code>aws-iam-authenticator</code> for AWS EKS IAM-based authentication) for working with Kubernetes cluster. This creates isolation between your host system binaries and your Jenkins binaries. </p> <p>Below is the <code>Dockerfile</code> I'm using which contains, <code>helm</code>, <code>kubectl</code> and <code>aws-iam-authenticator</code>.</p> <pre><code># This Dockerfile contains Helm, Docker client-only, aws-iam-authenticator, kubectl with Jenkins LTS. 
FROM jenkins/jenkins:lts USER root ENV VERSION v2.9.1 ENV FILENAME helm-${VERSION}-linux-amd64.tar.gz ENV HELM_URL https://storage.googleapis.com/kubernetes-helm/${FILENAME} ENV KUBE_LATEST_VERSION="v1.11.0" # Install the latest Docker CE binaries RUN apt-get update &amp;&amp; \ apt-get -y install apt-transport-https \ ca-certificates \ curl \ gnupg2 \ software-properties-common &amp;&amp; \ curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg &gt; /tmp/dkey; apt-key add /tmp/dkey &amp;&amp; \ add-apt-repository \ "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \ $(lsb_release -cs) \ stable" &amp;&amp; \ apt-get update &amp;&amp; \ apt-get -y install docker-ce \ &amp;&amp; curl -o /tmp/$FILENAME ${HELM_URL} \ &amp;&amp; tar -zxvf /tmp/${FILENAME} -C /tmp \ &amp;&amp; mv /tmp/linux-amd64/helm /bin/helm \ &amp;&amp; rm -rf /tmp/linux-amd64/helm \ &amp;&amp; curl -L https://storage.googleapis.com/kubernetes-release/release/${KUBE_LATEST_VERSION}/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl \ &amp;&amp; chmod +x /usr/local/bin/kubectl \ &amp;&amp; curl -L https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.5/2018-12-06/bin/linux/amd64/aws-iam-authenticator -o /usr/local/bin/aws-iam-authenticator \ &amp;&amp; chmod +x /usr/local/bin/aws-iam-authenticator </code></pre>
<p>I have declared the following Variable group in Azure DevOps.</p> <pre><code>KUBERNETES_CLUSTER = myAksCluster RESOURCE_GROUP = myResourceGroup </code></pre> <p><a href="https://i.stack.imgur.com/KHMWp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KHMWp.png" alt="enter image description here"></a></p> <p>In the Release pipeline process, I want to create a Public Static IP address in order to assigned to some application in <code>myAksCluster</code>.</p> <p>Via Azure cli commands I am creating the Static IP address of this way via <strong>az cli bash small script</strong>. We can assume here that I already have created the kubernetes cluster</p> <pre><code>#!/usr/bin/env bash KUBERNETES_CLUSTER=myAksCluster RESOURCE_GROUP=myResourceGroup # Getting the name of the node resource group. -o tsv is to get the value without " " NODE_RESOURCE_GROUP=$(az aks show --resource-group $RESOURCE_GROUP --name $KUBERNETES_CLUSTER --query nodeResourceGroup -o tsv) echo "Node Resource Group:" $NODE_RESOURCE_GROUP # Creating Public Static IP PUBLIC_IP_NAME=DevelopmentStaticIp az network public-ip create --resource-group $NODE_RESOURCE_GROUP --name $PUBLIC_IP_NAME --allocation-method static # Query the ip PUBLIC_IP_ADDRESS=$(az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress --output tsv) # Output # I want to use the value of PUBLIC_IP_ADDRESS variable in Azure DevOps variable groups of the release pipeline </code></pre> <p>If I execute <code>az network public-ip list ...</code> command I get my public Ip address.</p> <pre><code>⟩ az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress -o tsv 10.x.x.x </code></pre> <p>I want to use that <code>PUBLIC_IP_ADDRESS</code> value to assign it to a new Azure DevOps variable groups in my release, but doing all this process <strong>from a CLI task or Azure Cli task like part of the release pipeline process.</strong></p> <p>The idea is when my previous 
<strong>az cli bash small script</strong> to be executed in the release pipeline, I have a new variable in my ReleaseDev azure variable groups of this way:</p> <p><a href="https://i.stack.imgur.com/Dfiq0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dfiq0.png" alt="enter image description here"></a></p> <p>And after that, I can use <code>PUBLIC_STATIC_IP_ADDRESS</code> which will be an azure devops variable like arguments to the application which will use that IP value inside my kubernetes cluster.</p> <p>I have been checking some information and maybe I could create an Azure CLI task in my release pipeline to execute the <strong>az cli bash small script</strong> which is creating the public static ip address of this way: <a href="https://i.stack.imgur.com/187Po.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/187Po.png" alt="enter image description here"></a></p> <p>But is finally at the end when I get the public ip address value, that I don't know how to create from this Azure CLI task (my script) the variable <code>PUBLIC_STATIC_IP_ADDRESS</code> with its respective value that I got here.</p> <p>Can I use an Azure CLI task from the release pipeline to get this small workflow? I have been checking somethings like <a href="https://github.com/Microsoft/azure-pipelines-tasks/issues/9416#issuecomment-470791784" rel="nofollow noreferrer">this recommendation</a> but is not clear for me</p> <p>how to create an Az variable group with some value passed from my release pipeline? 
Is Azure CLI release pipeline task, the proper task for do that?</p> <p><strong>UPDATE</strong></p> <p>I am following the approach suugested by <strong>Lu Mike</strong>, so I have created a Powershell task and executing the following script in Inline type/mode:</p> <pre><code># Connect-AzAccount Install-Module -Name Az -AllowClobber -Force @{KUBERNETES_CLUSTER = "$KUBERNETES_CLUSTER"} @{RESOURCE_GROUP = "$RESOURCE_GROUP"} @{NODE_RESOURCE_GROUP="$(az aks show --resource-group $RESOURCE_GROUP --name $KUBERNETES_CLUSTER --query nodeResourceGroup -o tsv)"} # echo "Node Resource Group:" $NODE_RESOURCE_GROUP @{PUBLIC_IP_NAME="Zcrm365DevelopmentStaticIpAddress"} az network public-ip create --resource-group $NODE_RESOURCE_GROUP --name $PUBLIC_IP_NAME --allocation-method static @{PUBLIC_IP_ADDRESS="$(az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress --output tsv)"} echo "##vso[task.setvaraible variable=ipAddress;]%PUBLIC_IP_ADDRESS%" $orgUrl="https://dev.azure.com/my-org/" $projectName = "ZCRM365" ########################################################################## $personalToken = $PAT # "&lt;your PAT&gt;" # I am creating a varaible environment inside the power shell task and reference it here. 
########################################################################## $token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)")) $header = @{authorization = "Basic $token"} $projectsUrl = "$($orgUrl)/$($projectName)/_apis/distributedtask/variablegroups/1?api-version=5.0-preview.1" $requestBody = @{ "variables" = @{ "PUBLIC_STATIC_IP_ADDRESS" = @{ "value" = "$PUBLIC_IP_ADDRESS" } } "@type" = "Vsts" "name" = "ReleaseVariablesDev" "description" = "Updated variable group" } | ConvertTo-Json Invoke-RestMethod -Uri $projectsUrl -Method Put -ContentType "application/json" -Headers $header -Body $requestBody Invoke-RestMethod -Uri $projectsUrl -Method Put -ContentType "application/json" -Headers $header -Body $requestBody </code></pre> <hr> <p><strong>IMPORTANT</strong></p> <p>As you can see, in this process I am mixing <code>az cli</code> commands and powershell language in this task. I am not sure if it is good. By the way I am using a Linux Agent.</p> <p>I had to include <code>-Force</code> flag to the <code>Install-Module -Name Az -AllowClobber -Force</code> command to install azure powershell module </p> <hr> <p>My output is the following:</p> <pre><code>2019-07-19T06:01:29.6372873Z Name Value 2019-07-19T06:01:29.6373433Z ---- ----- 2019-07-19T06:01:29.6373706Z KUBERNETES_CLUSTER 2019-07-19T06:01:29.6373856Z RESOURCE_GROUP 2019-07-19T06:01:38.0177665Z ERROR: az aks show: error: argument --resource-group/-g: expected one argument 2019-07-19T06:01:38.0469751Z usage: az aks show [-h] [--verbose] [--debug] 2019-07-19T06:01:38.0470669Z [--output {json,jsonc,table,tsv,yaml,none}] 2019-07-19T06:01:38.0471442Z [--query JMESPATH] --resource-group RESOURCE_GROUP_NAME 2019-07-19T06:01:38.0472050Z --name NAME [--subscription _SUBSCRIPTION] 2019-07-19T06:01:38.1381959Z NODE_RESOURCE_GROUP 2019-07-19T06:01:38.1382691Z PUBLIC_IP_NAME Zcrm365DevelopmentStaticIpAddress 2019-07-19T06:01:39.5094672Z ERROR: az network public-ip create: 
error: argument --resource-group/-g: expected one argument 2019-07-19T06:01:39.5231190Z usage: az network public-ip create [-h] [--verbose] [--debug] 2019-07-19T06:01:39.5232152Z [--output {json,jsonc,table,tsv,yaml,none}] 2019-07-19T06:01:39.5232671Z [--query JMESPATH] --resource-group 2019-07-19T06:01:39.5233234Z RESOURCE_GROUP_NAME --name NAME 2019-07-19T06:01:39.5233957Z [--location LOCATION] 2019-07-19T06:01:39.5234866Z [--tags [TAGS [TAGS ...]]] 2019-07-19T06:01:39.5235731Z [--allocation-method {Static,Dynamic}] 2019-07-19T06:01:39.5236428Z [--dns-name DNS_NAME] 2019-07-19T06:01:39.5236795Z [--idle-timeout IDLE_TIMEOUT] 2019-07-19T06:01:39.5237070Z [--reverse-fqdn REVERSE_FQDN] 2019-07-19T06:01:39.5240483Z [--version {IPv4,IPv6}] 2019-07-19T06:01:39.5250084Z [--sku {Basic,Standard}] [--zone {1,2,3}] 2019-07-19T06:01:39.5250439Z [--ip-tags IP_TAGS [IP_TAGS ...]] 2019-07-19T06:01:39.5251048Z [--public-ip-prefix PUBLIC_IP_PREFIX] 2019-07-19T06:01:39.5251594Z [--subscription _SUBSCRIPTION] 2019-07-19T06:01:40.4262896Z ERROR: az network public-ip list: error: argument --resource-group/-g: expected one argument 2019-07-19T06:01:40.4381683Z usage: az network public-ip list [-h] [--verbose] [--debug] 2019-07-19T06:01:40.4382086Z [--output {json,jsonc,table,tsv,yaml,none}] 2019-07-19T06:01:40.4382346Z [--query JMESPATH] 2019-07-19T06:01:40.4382668Z [--resource-group RESOURCE_GROUP_NAME] 2019-07-19T06:01:40.4382931Z [--subscription _SUBSCRIPTION] 2019-07-19T06:01:40.5103276Z PUBLIC_IP_ADDRESS 2019-07-19T06:01:40.5133644Z ##[error]Unable to process command '##vso[task.setvaraible variable=ipAddress;]%PUBLIC_IP_ADDRESS%' successfully. Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296) 2019-07-19T06:01:40.5147351Z ##[error]##vso[task.setvaraible] is not a recognized command for Task command extension. 
Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296) </code></pre> <p>And maybe that is why I have some problem to execute <code>az</code> commands</p> <p>Power shell and its respective azure task are new for me, and I am not sure about how can I get along of this process.</p>
<p>You can try to invoke the REST API (<a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/distributedtask/variablegroups/update?view=azure-devops-rest-5.0" rel="nofollow noreferrer">Variablegroups - Update</a>) to add or update the variable group in the script. Please refer to the following script.</p> <pre><code>$orgUrl = "https://dev.azure.com/&lt;your organization &gt;" $projectName = "&lt;your project&gt;" $personalToken = "&lt;your PAT&gt;" $token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)")) $header = @{authorization = "Basic $token"} $projectsUrl = "$($orgUrl)/$($projectName)/_apis/distributedtask/variablegroups/1?api-version=5.0-preview.1" $requestBody = @{ "variables" = @{ "PUBLIC_STATIC_IP_ADDRESS" = @{ "value" = "&lt;the static ip you got&gt;" } } "@type" = "Vsts" "name" = "&lt;your variable group name&gt;" "description" = "Updated variable group" } | ConvertTo-Json Invoke-RestMethod -Uri $projectsUrl -Method Put -ContentType "application/json" -Headers $header -Body $requestBody </code></pre> <p>Then, you will find the variable in the group.</p> <p><a href="https://i.stack.imgur.com/s5qz8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s5qz8.png" alt="enter image description here"></a></p> <p><strong>UPDATE:</strong></p> <p>You can use <code>az</code> commands directly in a PowerShell script; once you have installed the Az module for PowerShell, az commands work in both PowerShell and bash.
Please refer to the following script.</p> <pre><code>$KUBERNETES_CLUSTER = "KUBERNETES_CLUSTER"
$RESOURCE_GROUP = "RESOURCE_GROUP"
$PUBLIC_IP_NAME = "Zcrm365DevelopmentStaticIpAddress"

$NODE_RESOURCE_GROUP = az aks show --resource-group $RESOURCE_GROUP --name $KUBERNETES_CLUSTER --query nodeResourceGroup -o tsv

az network public-ip create --resource-group $NODE_RESOURCE_GROUP --name $PUBLIC_IP_NAME --allocation-method static

$PUBLIC_IP_ADDRESS = az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress --output tsv

echo "##vso[task.setvariable variable=ipAddress;]$PUBLIC_IP_ADDRESS"

$orgUrl = "https://dev.azure.com/&lt;org&gt;/"
....
</code></pre> <p>Please refer to the following links to learn the usage of az commands in PowerShell.</p> <p><a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-script" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-script</a></p> <p><a href="https://stackoverflow.com/questions/45585000/azure-cli-vs-powershell">Azure CLI vs Powershell?</a></p>
<p>I am using the Go client for Kubernetes to control deployments on my GKE cluster, but this client runs behind a proxy and needs to make all its internet-bound requests through it. I cannot seem to find a way to configure my KubeClient to make all HTTP requests through a proxy. </p> <p>My code is not very different from the sample here - <a href="https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go</a></p>
<p>When you are setting up the new client with the config (<code>kubernetes.NewForConfig(config)</code>) you can customize your transport before building the client:</p> <pre><code>proxyURL := url.URL{Host: proxy}
transport := http.Transport{Proxy: http.ProxyURL(&amp;proxyURL)} // plus TLS settings, timeouts, etc.
config.Transport = &amp;transport
</code></pre> <p>Or you can make use of <code>config.WrapTransport</code>: </p> <ul> <li>Transport http.RoundTripper</li> </ul> <blockquote> <p>Transport may be used for custom HTTP behavior. This attribute may not be specified with the TLS client certificate options. Use WrapTransport for most client level operations.</p> </blockquote> <ul> <li>WrapTransport func(rt http.RoundTripper) http.RoundTripper </li> </ul> <blockquote> <p>WrapTransport will be invoked for custom HTTP behavior after the underlying transport is initialized (either the transport created from TLSClientConfig, Transport, or http.DefaultTransport). The config may layer other RoundTrippers on top of the returned RoundTripper.</p> </blockquote> <hr> <p>Sadly it is not straightforward to make this work, and relying on the standard <code>HTTP_PROXY</code> and <code>no_proxy</code> environment variables is often easier. </p>
<p>I have two containers in a pod:</p> <ol> <li>Webapp</li> <li>Nginx</li> </ol> <p>I would like to share the data from the webapp container (<code>/var/www/webapp/</code>) with the nginx container (<code>/var/www/html</code>).</p> <pre><code>/var/www/webapp ( folder structure )
│   index.php
│
│
└───folder1
│   │   service1.php
│   │
│   └───subfolder1
│   │   app.php
│
└───folder2
│   service2.php
</code></pre> <p>The folder is mounted properly but all the files are missing.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
spec:
  volumes:
    - name: webapp-data
      persistentVolumeClaim:
        claimName: webapp-data
  containers:
    - name: webapp
      image: webapp
      imagePullPolicy: Always
      volumeMounts:
        - name: webapp-data
          mountPath: /var/www/webapp/
    - name: nginx
      imagePullPolicy: Always
      image: nginx
      volumeMounts:
        - name: webapp-data
          mountPath: /var/www/html/
</code></pre> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webapp-data
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre> <p>When mounting a volume under Docker, all the folders and files from within the container are available, but not in Kubernetes.</p>
<p>Kubernetes will not automatically populate an empty volume with contents from an image. (This is a difference from <code>docker run</code>.) Your application needs to figure out how to set up the shared-data directory itself, if it's empty.</p> <p>For standard database containers this doesn't really matter since they typically start off with some sort of <code>initdb</code> type call which will create the required file structure. Similarly, if you're using a persistent volume as a cache or upload space, it doesn't matter.</p> <p>For the use case you're describing where you want one container to just have a copy of files from the other, you don't really need a persistent volume. I'd use an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="noreferrer">emptyDir volume</a> that can be shared between the two containers, and then an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init container</a> to copy data into the volume. Do not mount anything over the application content.</p> <p>This would roughly look like (in reality use a Deployment):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ...
spec:
  volumes:
    - name: webapp-data
      emptyDir: {}
  initContainers:
    - name: populate
      image: webapp
      volumeMounts:
        - name: webapp-data
          mountPath: /data
      # the trailing "/." copies the directory contents, not the directory itself
      command: [cp, -a, /var/www/webapp/., /data/]
  containers:
    - name: webapp
      image: webapp
      # no volumeMounts; default command
    - name: nginx
      image: nginx
      volumeMounts:
        - name: webapp-data
          mountPath: /var/www/html
</code></pre> <p>With this setup there's also not a hard requirement that the two containers run in the same pod; you could have one deployment that runs the back-end service, and a second deployment that runs nginx (starting up by copying data from the back-end image).</p> <p>(The example in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#creating-a-pod-that-has-an-init-container" rel="noreferrer">Configure Pod Initialization</a> in the Kubernetes docs is very similar, but fetches the nginx content from an external site.)</p>
<p>I have declared the following Variable group in Azure DevOps.</p> <pre><code>KUBERNETES_CLUSTER = myAksCluster
RESOURCE_GROUP = myResourceGroup
</code></pre> <p><a href="https://i.stack.imgur.com/KHMWp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KHMWp.png" alt="enter image description here"></a></p> <p>In the release pipeline process, I want to create a public static IP address in order to assign it to some application in <code>myAksCluster</code>.</p> <p>I am creating the static IP address with the following <strong>small az cli bash script</strong>. We can assume here that I have already created the Kubernetes cluster.</p> <pre><code>#!/usr/bin/env bash

KUBERNETES_CLUSTER=myAksCluster
RESOURCE_GROUP=myResourceGroup

# Getting the name of the node resource group. -o tsv is to get the value without " "
NODE_RESOURCE_GROUP=$(az aks show --resource-group $RESOURCE_GROUP --name $KUBERNETES_CLUSTER --query nodeResourceGroup -o tsv)
echo "Node Resource Group:" $NODE_RESOURCE_GROUP

# Creating Public Static IP
PUBLIC_IP_NAME=DevelopmentStaticIp
az network public-ip create --resource-group $NODE_RESOURCE_GROUP --name $PUBLIC_IP_NAME --allocation-method static

# Query the ip
PUBLIC_IP_ADDRESS=$(az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress --output tsv)

# Output
# I want to use the value of PUBLIC_IP_ADDRESS variable in Azure DevOps variable groups of the release pipeline
</code></pre> <p>If I execute the <code>az network public-ip list ...</code> command, I get my public IP address.</p> <pre><code>⟩ az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress -o tsv
10.x.x.x
</code></pre> <p>I want to assign that <code>PUBLIC_IP_ADDRESS</code> value to a new Azure DevOps variable group in my release, doing all of this <strong>from a CLI task or Azure CLI task as part of the release pipeline process.</strong></p> <p>The idea is that when my previous <strong>small az cli bash script</strong> is executed in the release pipeline, I end up with a new variable in the ReleaseDev Azure variable group of my release, like this:</p> <p><a href="https://i.stack.imgur.com/Dfiq0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dfiq0.png" alt="enter image description here"></a></p> <p>After that, I can use <code>PUBLIC_STATIC_IP_ADDRESS</code> as an Azure DevOps variable and pass it as an argument to the application that will use that IP value inside my Kubernetes cluster.</p> <p>I have been checking some information, and maybe I could create an Azure CLI task in my release pipeline to execute the <strong>small az cli bash script</strong> which creates the public static IP address, like this: <a href="https://i.stack.imgur.com/187Po.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/187Po.png" alt="enter image description here"></a></p> <p>But at the end, when I get the public IP address value, I don't know how to create, from this Azure CLI task (my script), the variable <code>PUBLIC_STATIC_IP_ADDRESS</code> with the value that I got.</p> <p>Can I use an Azure CLI task from the release pipeline to achieve this small workflow? I have been checking things like <a href="https://github.com/Microsoft/azure-pipelines-tasks/issues/9416#issuecomment-470791784" rel="nofollow noreferrer">this recommendation</a>, but it is not clear to me.</p> <p>How can I create an Azure variable group with some value passed from my release pipeline?
Is the Azure CLI release pipeline task the proper task to do that?</p> <p><strong>UPDATE</strong></p> <p>I am following the approach suggested by <strong>Lu Mike</strong>, so I have created a PowerShell task and I am executing the following script in Inline mode:</p> <pre><code># Connect-AzAccount
Install-Module -Name Az -AllowClobber -Force

@{KUBERNETES_CLUSTER = "$KUBERNETES_CLUSTER"}
@{RESOURCE_GROUP = "$RESOURCE_GROUP"}

@{NODE_RESOURCE_GROUP="$(az aks show --resource-group $RESOURCE_GROUP --name $KUBERNETES_CLUSTER --query nodeResourceGroup -o tsv)"}
# echo "Node Resource Group:" $NODE_RESOURCE_GROUP

@{PUBLIC_IP_NAME="Zcrm365DevelopmentStaticIpAddress"}

az network public-ip create --resource-group $NODE_RESOURCE_GROUP --name $PUBLIC_IP_NAME --allocation-method static

@{PUBLIC_IP_ADDRESS="$(az network public-ip list --resource-group $NODE_RESOURCE_GROUP --query [1].ipAddress --output tsv)"}

echo "##vso[task.setvaraible variable=ipAddress;]%PUBLIC_IP_ADDRESS%"

$orgUrl="https://dev.azure.com/my-org/"
$projectName = "ZCRM365"

##########################################################################
$personalToken = $PAT # "&lt;your PAT&gt;"
# I am creating a variable environment inside the PowerShell task and referencing it here.
##########################################################################

$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{authorization = "Basic $token"}

$projectsUrl = "$($orgUrl)/$($projectName)/_apis/distributedtask/variablegroups/1?api-version=5.0-preview.1"

$requestBody = @{
  "variables" = @{
    "PUBLIC_STATIC_IP_ADDRESS" = @{
      "value" = "$PUBLIC_IP_ADDRESS"
    }
  }
  "@type" = "Vsts"
  "name" = "ReleaseVariablesDev"
  "description" = "Updated variable group"
} | ConvertTo-Json

Invoke-RestMethod -Uri $projectsUrl -Method Put -ContentType "application/json" -Headers $header -Body $requestBody
</code></pre> <hr> <p><strong>IMPORTANT</strong></p> <p>As you can see, in this process I am mixing <code>az cli</code> commands and PowerShell in this task. I am not sure if that is good. By the way, I am using a Linux agent.</p> <p>I had to include the <code>-Force</code> flag in the <code>Install-Module -Name Az -AllowClobber -Force</code> command to install the Azure PowerShell module.</p> <hr> <p>My output is the following:</p> <pre><code>2019-07-19T06:01:29.6372873Z Name                Value
2019-07-19T06:01:29.6373433Z ----                -----
2019-07-19T06:01:29.6373706Z KUBERNETES_CLUSTER
2019-07-19T06:01:29.6373856Z RESOURCE_GROUP
2019-07-19T06:01:38.0177665Z ERROR: az aks show: error: argument --resource-group/-g: expected one argument
2019-07-19T06:01:38.0469751Z usage: az aks show [-h] [--verbose] [--debug]
2019-07-19T06:01:38.0470669Z        [--output {json,jsonc,table,tsv,yaml,none}]
2019-07-19T06:01:38.0471442Z        [--query JMESPATH] --resource-group RESOURCE_GROUP_NAME
2019-07-19T06:01:38.0472050Z        --name NAME [--subscription _SUBSCRIPTION]
2019-07-19T06:01:38.1381959Z NODE_RESOURCE_GROUP
2019-07-19T06:01:38.1382691Z PUBLIC_IP_NAME      Zcrm365DevelopmentStaticIpAddress
2019-07-19T06:01:39.5094672Z ERROR: az network public-ip create: error: argument --resource-group/-g: expected one argument
2019-07-19T06:01:39.5231190Z usage: az network public-ip create [-h] [--verbose] [--debug]
2019-07-19T06:01:39.5232152Z        [--output {json,jsonc,table,tsv,yaml,none}]
2019-07-19T06:01:39.5232671Z        [--query JMESPATH] --resource-group
2019-07-19T06:01:39.5233234Z        RESOURCE_GROUP_NAME --name NAME
2019-07-19T06:01:39.5233957Z        [--location LOCATION]
2019-07-19T06:01:39.5234866Z        [--tags [TAGS [TAGS ...]]]
2019-07-19T06:01:39.5235731Z        [--allocation-method {Static,Dynamic}]
2019-07-19T06:01:39.5236428Z        [--dns-name DNS_NAME]
2019-07-19T06:01:39.5236795Z        [--idle-timeout IDLE_TIMEOUT]
2019-07-19T06:01:39.5237070Z        [--reverse-fqdn REVERSE_FQDN]
2019-07-19T06:01:39.5240483Z        [--version {IPv4,IPv6}]
2019-07-19T06:01:39.5250084Z        [--sku {Basic,Standard}] [--zone {1,2,3}]
2019-07-19T06:01:39.5250439Z        [--ip-tags IP_TAGS [IP_TAGS ...]]
2019-07-19T06:01:39.5251048Z        [--public-ip-prefix PUBLIC_IP_PREFIX]
2019-07-19T06:01:39.5251594Z        [--subscription _SUBSCRIPTION]
2019-07-19T06:01:40.4262896Z ERROR: az network public-ip list: error: argument --resource-group/-g: expected one argument
2019-07-19T06:01:40.4381683Z usage: az network public-ip list [-h] [--verbose] [--debug]
2019-07-19T06:01:40.4382086Z        [--output {json,jsonc,table,tsv,yaml,none}]
2019-07-19T06:01:40.4382346Z        [--query JMESPATH]
2019-07-19T06:01:40.4382668Z        [--resource-group RESOURCE_GROUP_NAME]
2019-07-19T06:01:40.4382931Z        [--subscription _SUBSCRIPTION]
2019-07-19T06:01:40.5103276Z PUBLIC_IP_ADDRESS
2019-07-19T06:01:40.5133644Z ##[error]Unable to process command '##vso[task.setvaraible variable=ipAddress;]%PUBLIC_IP_ADDRESS%' successfully. Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296)
2019-07-19T06:01:40.5147351Z ##[error]##vso[task.setvaraible] is not a recognized command for Task command extension. Please reference documentation (http://go.microsoft.com/fwlink/?LinkId=817296)
</code></pre> <p>And maybe that is why I have some problems executing <code>az</code> commands.</p>

<p>PowerShell and its respective Azure task are new to me, and I am not sure how to work through this process.</p>
<p>You can use the Rest API as <strong>Lu Mike</strong> mentioned to update the variable group, or you can install the <a href="https://marketplace.visualstudio.com/items?itemName=Lanalua.lanalua-variableupdater" rel="nofollow noreferrer">Shared variable updater</a> extension that does it for you.</p> <p>But the important thing is that you need to set a variable in the Azure CLI task so you can use it in the second task (whether it's the Rest API task or the Shared variable updater task). So you need to add this line to your script. Note the spelling: the logging command is <code>task.setvariable</code>, not <code>task.setvaraible</code>, which is exactly why the agent reported an unrecognized command in your output:</p> <pre><code>echo "##vso[task.setvariable variable=ipAddress;]$PUBLIC_IP_ADDRESS"
</code></pre> <p>Now in the next tasks you can use the variable <code>$(ipAddress)</code> that contains the IP value:</p> <p><a href="https://i.stack.imgur.com/1CPeu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1CPeu.png" alt="enter image description here"></a></p>
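<p>To see the mechanism in isolation: the logging command is just a specially formatted line written to stdout. A minimal sketch, with a hypothetical IP value standing in for the <code>az network public-ip list</code> result:</p>

```shell
# The agent scans stdout for ##vso[...] lines; everything after the closing
# bracket becomes the variable's value for subsequent tasks.
PUBLIC_IP_ADDRESS="40.112.10.25"   # hypothetical value
echo "##vso[task.setvariable variable=ipAddress;]$PUBLIC_IP_ADDRESS"
```

<p>A later task in the same job can then read it as <code>$(ipAddress)</code>.</p>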
<p>So a typical k8s deployment file that I'm working on looks like this:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    ...
  name: ${service-name}
spec:
  replicas: 1
  strategy:
    ...
  template:
    metadata:
      ...
    spec:
      serviceAccountName: test
      ...
</code></pre> <p>The goal is to create multiple services that all use the same <code>serviceAccount</code>. This structure works fine when <code>test</code> exists in </p> <pre><code>kubectl get serviceaccount
</code></pre> <p>The question is: how can I set <code>serviceAccountName</code> to the <code>default</code> service account if <code>test</code> does not exist in the namespace (for any reason)? I don't want the deployment to fail.</p> <p>I essentially need to have something like </p> <pre><code>serviceAccountName: {test:-default}
</code></pre> <p>P.S. Clearly I can assign a variable to <code>serviceAccountName</code> and parse the YAML file from outside, but I wanted to see if there's a better option.</p>
<p>As long as you want to run this validation inside the cluster, the only way would be to use a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">MutatingAdmissionWebhook</a>. </p> <p>It intercepts requests matching the rules defined in a MutatingWebhookConfiguration before they are persisted into etcd. The MutatingAdmissionWebhook executes the mutation by sending admission requests to a webhook server, which is just a plain HTTP server that adheres to the API.</p> <p>Thus, you can check whether the service account exists and set the default service account if it does not. </p> <p>Here is an <a href="https://github.com/banzaicloud/admission-webhook-example/blob/master/webhook.go" rel="nofollow noreferrer">example</a> of such a webhook, which validates and sets custom labels.</p> <p>More info about <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">Admission Controller Webhooks</a></p>
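<p>For illustration, the mutation itself is returned as a JSON Patch inside the admission response. Assuming the webhook is registered for Pod CREATE operations (an assumption on my part, not something taken from the linked example), the patch that falls back to the default service account could look like this:</p>

```json
[
  { "op": "replace", "path": "/spec/serviceAccountName", "value": "default" }
]
```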
<p>How do I properly change the TCP keepalive time for node-proxy?</p> <p>I am running Kubernetes in Google Container Engine and have set up an ingress backed by HTTP(S) Google Load Balancer. When I continuously make POST requests to the ingress, I get a 502 error exactly once every 80 seconds or so. <code>backend_connection_closed_before_data_sent_to_client</code> error in Cloud Logging, which is because GLB's tcp keepalive (600 seconds) is larger than node-proxy's keepalive (no clue what it is).</p> <p>The logged error is detailed in <a href="https://cloud.google.com/compute/docs/load-balancing/http/" rel="noreferrer">https://cloud.google.com/compute/docs/load-balancing/http/</a>.</p> <p>Thanks!</p>
<p>You can use the custom resource <code>BackendConfig</code>, which exists on every GKE cluster, to configure timeouts and other parameters such as CDN; <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig" rel="nofollow noreferrer">here</a> is the documentation.</p> <p>An example from <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service" rel="nofollow noreferrer">here</a> shows how to configure it for an Ingress.</p> <p>This is the <code>BackendConfig</code> definition:</p> <pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
</code></pre> <p>And this is how to use it, through an annotation on the Service that the Ingress routes to:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
</code></pre>
<p>I am perfectly aware of the Kubernetes API and that a manifest can be broken down into several K8S API calls.</p> <p>I am wondering if there is a way to apply a whole manifest at once, in a single API call: a REST equivalent to <code>kubectl apply</code>.</p>
<p>The feature is still in alpha but yes, it's called "server-side apply". Given the in-flux nature of alpha APIs, you should definitely check the KEPs before using it, but it's a new mode for the PATCH method on objects.</p>
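<p>For illustration, server-side apply is an ordinary PATCH with the <code>application/apply-patch+yaml</code> content type and a <code>fieldManager</code> query parameter. A hedged sketch: the object name, the namespace, and the use of <code>kubectl proxy</code> on port 8001 are all assumptions, and the details may change while the feature is alpha:</p>

```shell
# Assumes `kubectl proxy` is serving the API on localhost:8001 and that the
# ServerSideApply feature gate is enabled on the cluster.
curl -X PATCH \
  "http://localhost:8001/api/v1/namespaces/default/configmaps/demo-config?fieldManager=my-tool" \
  -H "Content-Type: application/apply-patch+yaml" \
  --data-binary @- <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  greeting: hello
EOF
```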
<p>I have a Neo4j service, but before the deployment starts up, I need to pre-fill it with data (about 2GB of data). Currently, I wrote a Kubernetes <code>Job</code> to transform the data from a CSV and format it for the database using the <code>neo4j-admin</code> tool. It saves the formatted data to a <em>persistent volume</em>. After waiting for the job to complete, I mount the volume in the Neo4j container and the container is effectively read-only on this data for the rest of its life. </p> <p><strong>Is there a better way to do this more automatically?</strong></p> <p>I don't want to have to wait for the job to complete to run another command to create the Neo4j deployment. I looked into initContainers, but that isn't suitable because I don't want to redo the data filling when a pod is re-created. I just want subsequent pods to read from the same persistent volume. Is there a way to wait for the job to complete first?</p>
<p>As Jobs can't natively spawn new objects once finished (and a <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">PreStop</a> hook won't help either, since it does not fire when a container simply exits on its own), you might want to monitor the API objects instead.</p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#programmatic-access-to-the-api" rel="nofollow noreferrer">Programmatically accessing the API</a> to determine when the Job is finished and then <a href="https://github.com/kubernetes/client-go/tree/master/examples/create-update-delete-deployment" rel="nofollow noreferrer">creating your Deployment object</a> might be a feasible, automated way to do it.</p> <p>Doing it this way, you don't have to worry about redoing the data processing with <em>initContainers</em>, as you can essentially create the Deployment and remount the already existing volume.</p> <p>Also, using the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries" rel="nofollow noreferrer">official Go library</a> allows you to <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries" rel="nofollow noreferrer">run either within the cluster, in a pod, or externally</a>.</p>
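<p>If a full client-go watcher is more than you need, the same wait-then-deploy flow can also be scripted; the Job and manifest names here are placeholders:</p>

```shell
# Block until the import Job reports the Complete condition,
# then create the Deployment that mounts the already-populated volume.
kubectl wait --for=condition=complete --timeout=30m job/neo4j-import
kubectl apply -f neo4j-deployment.yaml
```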
<p>There is a documentation article <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="nofollow noreferrer">here</a> explaining how one can reserve resources on a node for system use.</p> <p>What I did not manage to figure out is how one can get these values. If I understand things correctly, <code>kubectl top nodes</code> will return available resources, but I would like to see <code>kube-reserved</code>, <code>system-reserved</code> and <code>eviction-threshold</code> as well.</p> <p>Is it possible?</p>
<p>By checking the kubelet's flags, you can get the values of <code>kube-reserved</code>, <code>system-reserved</code> and the eviction threshold. </p> <p>SSH into the <code>$NODE</code>; <code>ps aufx | grep kubelet</code> will list the running kubelet and its flags.</p> <p>The <code>kube-reserved</code> and <code>system-reserved</code> values matter mainly for scheduling, since the scheduler only sees the node's allocatable resources. </p>
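<p>You can also infer the combined reservation without reading the flags: allocatable is capacity minus <code>kube-reserved</code>, <code>system-reserved</code> and the hard eviction threshold, so the gap between the <code>Capacity</code> and <code>Allocatable</code> sections of <code>kubectl describe node</code> is their sum. A sketch with made-up numbers:</p>

```shell
# Hypothetical values copied from `kubectl describe node` (memory, in Ki).
CAPACITY_MEMORY_KI=8167724
ALLOCATABLE_MEMORY_KI=7378732

# capacity - allocatable = kube-reserved + system-reserved + eviction-hard
TOTAL_RESERVED_KI=$((CAPACITY_MEMORY_KI - ALLOCATABLE_MEMORY_KI))
echo "total reserved memory: ${TOTAL_RESERVED_KI}Ki"
```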