<p>I'm trying to integrate socket.io into an application deployed on Google Kubernetes Engine. Developing locally, everything works great. But once deployed, I am continuously getting the dreaded 400 response when my sockets try to connect. I've been searching on SO and other sites for a few days now and I haven't found anything that fixes my issue.</p> <p>Unfortunately this architecture was set up by a developer who is no longer at our company, and I'm certainly not a Kubernetes or GKE expert, so I'm definitely not sure I've got everything set up correctly.</p> <p>Here's our setup:</p> <ul> <li><p>we have 5 app pods that serve our application distributed across 5 cloud nodes (GCE vm instances)</p></li> <li><p>we are using the nginx ingress controller (<a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a>) to create a load balancer to distribute traffic between our nodes</p></li> </ul> <p>Here's what I've tried so far:</p> <ul> <li>adding the following annotations to the ingress:</li> </ul> <p><code>nginx.ingress.kubernetes.io/affinity: "cookie" nginx.ingress.kubernetes.io/session-cookie-hash: "sha1" nginx.ingress.kubernetes.io/session-cookie-name: "route"</code></p> <ul> <li>adding <code>sessionAffinity: ClientIP</code> to the backend service referenced by the ingress</li> </ul> <p>These measures don't seem to have made any difference; I'm still getting a 400 response. If anyone has handled a similar situation or has any advice to point me in the right direction I'd be very, very appreciative!</p>
<p>I just set up nginx ingress with the same config where we are using socket.io.</p> <p>Here is my ingress config:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: core-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/websocket-services: "app-test"
    nginx.ingress.kubernetes.io/rewrite-target: /
    certmanager.k8s.io/cluster-issuer: core-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/websocket-services: "socket-service"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
spec:
  tls:
  - hosts:
    - &lt;domain&gt;
    secretName: core-prod
  rules:
  - host: &lt;domain&gt;
    http:
      paths:
      - backend:
          serviceName: service-name
          servicePort: 80
</code></pre> <p>I was also facing the same issue, so I added <code>proxy-send-timeout</code> and <code>proxy-read-timeout</code>.</p>
<p>I have multiple contexts and I want to be able to run commands against a context that I have access to but am not pointed to, while in another context.</p> <p>How do I run a command, say <code>$ kubectl get pods</code> against context B while I'm currently pointed to context A?</p>
<p><code>--context</code> is a global option for all kubectl commands. Simply run:</p> <pre><code>$ kubectl get pods --context &lt;context_B&gt; </code></pre> <p>For a list of all global kubectl options, run <code>$ kubectl options</code></p>
<p>I created a Helm chart for my Kubernetes resources and I am trying to deploy it on my remote Kubernetes cluster from my local machine, where I have configured the helm client and kubectl. I created the Helm chart using the following command:</p> <pre><code>helm create my-chart
</code></pre> <p>After creating it, I edited the image values in my-chart/values.yaml. Now I need to deploy this docker image on my remote Kubernetes cluster.</p> <p><strong>My confusion</strong></p> <ol> <li>When deploying, do I only need to use the 'helm install' command? Will that deploy it on my cluster?</li> <li>If I need to access the service, how can I access the result from the cluster? Do I need to use the cluster IP or a node IP?</li> </ol> <p>I am new to Helm charts with Kubernetes.</p>
<ol> <li><p>To install or upgrade a helm chart, you can run <code>helm upgrade --install RELEASE_NAME CHART_PATH --values values.yaml</code>. This behaves like an upsert: it creates the release if it does not exist, and otherwise updates the existing resources so that they reflect your <code>values.yaml</code>.</p> </li> <li><p>Accessing the service depends on how your Helm chart exposes it. If your Helm chart created a ClusterIP-type service, then you can access it only within the cluster. If it's exposed as a NodePort/LoadBalancer/Ingress, then you can access it externally. To test ClusterIP services, you can use <code>kubectl port-forward svc/your-service-name 8000:8000</code> (or some port number mapping applicable to your service). With the port forwarding, you can access the service on <code>localhost:8000</code>.</p> </li> </ol>
<p>I know that k8s has a <em>readinessProbe</em> with an <em>httpGet</em> method to check whether a service is ready to work or not. Is there something like an <em>httpPost</em> method to run a <em>POST</em> request to /api/postService with some body and check the return code? Or some <em>tricky</em> way to do it in the yaml file?</p>
<p>This can be done by running curl as an exec readinessProbe. Note that each flag and its value must be a separate list item; otherwise curl receives them as a single malformed argument:</p> <pre><code>readinessProbe:
  exec:
    command:
    - "curl"
    - "-X"
    - "POST"
    - "-f"
    - "http://localhost/api/postService"
</code></pre> <p>Of course you'd need to make sure to install <em>curl</em> in the Docker image that packages your service.</p>
<p>I am trying to figure out the minimum number of Kubernetes master nodes in a master replica set that will allow the entire cluster to behave as normal. The official docs mention that you need a minimum of three master nodes.</p> <p>What happens when you lose 1 of the 3 master nodes? Can you lose two of the master nodes in a replica set and still have the cluster behave as normal?</p>
<p>The Kubernetes API works as long as the Etcd cluster works. An Etcd cluster works when there is quorum, i.e. at least 2 of 3 Etcd pods are alive. If only 1 of 3 Etcd pods is alive, the cluster goes into a read-only state where no new pods can be scheduled and no resource creates/updates/deletes are allowed.</p>
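The quorum arithmetic behind this can be sketched in a few lines of Python (an illustration only; the function names are mine, not part of any Kubernetes or etcd API):

```python
# Etcd needs a majority (quorum) of members alive to accept writes.
# quorum(n) = floor(n/2) + 1; the remainder is the failure tolerance.

def quorum(members: int) -> int:
    """Members that must be alive for the cluster to stay writable."""
    return members // 2 + 1

def failure_tolerance(members: int) -> int:
    """How many members can be lost while writes are still possible."""
    return members - quorum(members)

if __name__ == "__main__":
    for n in (1, 3, 5, 7):
        print(f"{n} members: quorum {quorum(n)}, "
              f"tolerates {failure_tolerance(n)} failure(s)")
```

This is also why etcd clusters use odd sizes: going from 3 to 4 members raises the quorum from 2 to 3 without improving the failure tolerance.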
<p>Every article I've seen on the internet about running a Cassandra cluster on Kubernetes were either omitting the <code>podManagementPolicy</code> field or setting it to <code>OrderedReady</code> which is essentially the same thing because it is the default value. </p> <p>I was wondering if it is possible to use <code>podManagementPolicy: Parallel</code> to speed up the process of synchronizing when multiple nodes of the Cassandra cluster restart.</p>
<p>As far as I know, this is a bad idea. I tried it and the last node went into CrashLoopBackOff. The reason seems to be that joining nodes crash if they see another node trying to join at the same time.</p> <p><code>podManagementPolicy: OrderedReady</code> should be the way to go.</p>
<p>What are the steps followed by Google when doing a node upgrade or maintenance on a node? I've been assuming it was:</p> <ul> <li>Drain node</li> <li>Perform any operation</li> <li>Bring node up again</li> </ul> <p>or</p> <ul> <li>Drain node</li> <li>Delete node</li> <li>Bring new node up</li> </ul> <p>But recently during a node upgrade operation, some of our pods that weren't replicated died and there was downtime for a couple of minutes. Later, checking the age of the new pod, it matched the age of the node. I was also able to see that the node where it was deployed changed at the time of the node upgrade.</p> <p>So, does anybody know what procedure Google follows in order to perform a node upgrade?</p>
<p>Check out - <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime</a></p> <p>Rolling updates are the default (other option is Node Pool migration)</p> <p>"A rolling update works in the following way. One by one, a node is drained and cordoned so that there are no more pods running on that node. Then the node is deleted, and a new node is created with the updated Kubernetes version. Once that node is up and running, the next node is updated. This goes on until all nodes are updated.</p> <p>You can let Kubernetes Engine manage this process for you completely by enabling automatic node upgrades on the node pool. One drawback is that you get one less node of capacity in your cluster. This issue is easily solved by scaling up your node pool to add extra capacity, and then scaling it back down once the upgrade is finished. The fully automated nature of the rolling update makes it easy to do, but you have less control over the process. It also takes time to roll back to the old version if there is a problem, as you have to stop the rolling update and then undo it."</p>
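As for the non-replicated pods that died during the upgrade: the usual mitigation is to run more than one replica and add a PodDisruptionBudget, so the drain step waits until enough replicas are available elsewhere. A minimal sketch (the name and <code>app</code> label are placeholders for your own workload):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb        # placeholder name
spec:
  minAvailable: 1         # keep at least one pod up during a drain
  selector:
    matchLabels:
      app: my-app         # placeholder: must match your pods' labels
```

A budget only helps workloads with multiple replicas; a single-replica pod will still see downtime while it is rescheduled.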
<p>I am using the following command to check logs in Kubernetes.</p> <blockquote> <p>kubectl logs pod_name -n namespace</p> </blockquote> <p>It is printing all the logs from the beginning.</p> <ol> <li>Is there any way to tail the logs or check logs between a given window?</li> <li>Is it possible to rotate docker logs based on size or date?</li> </ol>
<p>Yes, you can extract logs for a time window by using the <code>--since</code> flag, like below:</p> <p><code>kubectl logs --since=48h podname &gt; 24Logs.txt</code></p> <p>Then you can easily check the logs for a specific time within the last 48 hours. To tail the logs instead, use <code>kubectl logs -f --tail=100 podname</code>, which follows the log stream starting from the last 100 lines.</p>
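For the second question (rotation): Docker's <code>json-file</code> logging driver supports size- and count-based rotation, configured per node in <code>/etc/docker/daemon.json</code>. The values below are examples, and the Docker daemon must be restarted for them to take effect:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With this config, each container keeps at most three log files of 10 MB each on the node.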
<p>I installed <code>Docker Toolbox</code> on Windows 10 Pro and I want to run <code>iis</code> on it, but when I run the <code>docker pull microsoft/iis:latest</code> command it says:</p> <p><code>no matching manifest for unknown in the manifest list entries</code></p> <p>as does this command:</p> <p><code>docker pull mcr.microsoft.com/windows/nanoserver:1809</code></p> <p>which says:</p> <p><code>no matching manifest for unknown in the manifest list entries</code></p> <p>I also tried to run <code>Docker Desktop</code>, but when I run it, it says:</p> <p><code>Hardware assisted virtualization and data execution protection must be enabled in the BIOS. See https://docs.docker.com/docker-for-windows/troubleshoot/#virtualization-must-be-enabled</code></p> <p>But I'm sure hardware assisted virtualization is enabled.</p> <p>My goal and purpose is to install <code>Docker</code> on Windows, then install <code>Kubernetes</code>, and after that join my Windows <code>Kubernetes</code> node to my Linux <code>Kubernetes</code> cluster.</p>
<p>You cannot run Windows containers in Docker Toolbox because it uses VirtualBox with a Linux VM, which is suited only for Linux containers. You should use Docker Desktop instead, and make sure you have switched to Windows containers.</p>
<p>I have a problem with service (DNS) discovery in <strong>kubernetes 1.14</strong> version in <strong>ubuntu bionic</strong>.</p> <p>Right now my 2 pods communicate using IP addresses. How can I enable <strong>coredns</strong> for service (DNS) discovery?</p> <p>Here is the output of kubectl for service and pods from the kube-system namespace:</p> <pre><code>kubectl get pods,svc --namespace=kube-system | grep dns
pod/coredns-fb8b8dccf-6plz2   1/1   Running   0   6d23h
pod/coredns-fb8b8dccf-thxh6   1/1   Running   0   6d23h
service/kube-dns   ClusterIP   10.96.0.10   &lt;none&gt;   53/UDP,53/TCP,9153/TCP   6d23h
</code></pre> <h2>I installed kubernetes on the master node (ubuntu bionic machine) using the steps below</h2> <pre><code>apt-get update
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository &quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable&quot;
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
apt-get update &amp;&amp; apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubectl version
apt-mark hold kubelet kubeadm kubectl
kubeadm config images pull
swapoff -a
kubeadm init
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f &quot;https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&quot;
kubectl get pods --all-namespaces
</code></pre> <h3>This is on the worker node</h3> <pre><code># Docker is already installed, so directly installing kubernetes on the worker node
apt-get update &amp;&amp; apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubectl version
apt-mark hold kubelet kubeadm kubectl
swapoff -a
# Now joined the worker node to the master
</code></pre> <h2>Answer:-</h2> <p>I think everything was set up correctly by default. I misunderstood: I thought I could call a server running in one pod from another pod using the container name and port which I specified in the spec, but instead I should use the service name and port.</p> <h2>Below are my deployment spec and service spec:-</h2> <h3>Deployment spec:-</h3> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-server1-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-server1
    spec:
      hostname: node-server1
      containers:
      - name: node-server1
        image: bvenkatr/node-server1:1
        ports:
        - containerPort: 5551
</code></pre> <h3>Service spec:</h3> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: node-server1-service
spec:
  selector:
    app: node-server1
  ports:
  - protocol: TCP
    port: 5551
</code></pre>
<blockquote> <p>As of Kubernetes v1.12, CoreDNS is the recommended DNS server, replacing kube-dns. In Kubernetes, CoreDNS is installed with the following default Corefile configuration:</p> </blockquote> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
</code></pre> <p>You can find more info <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">here</a>.</p> <p>You can verify your env by running:</p> <pre><code>kubectl get cm coredns -n kube-system -o yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
</code></pre> <p>and <code>kubeadm config view</code>, which should show:</p> <pre><code>dns:
  type: CoreDNS
</code></pre> <p>During kubeadm init you should have noticed:</p> <pre><code>[addons] Applied essential addon: CoreDNS
</code></pre> <p>If you are moving from kube-dns to CoreDNS, make sure to set the CoreDNS <code>feature gate</code> to <strong>true</strong> during an upgrade. For example, here is what a v1.11.0 upgrade would look like: <code>kubeadm upgrade apply v1.11.0 --feature-gates=CoreDNS=true</code></p> <blockquote> <p>In Kubernetes version 1.13 and later the CoreDNS feature gate is removed and CoreDNS is used by default. More information <a href="https://kubernetes.io/docs/tasks/administer-cluster/coredns/" rel="nofollow noreferrer">here</a>.</p> </blockquote> <p>You can see if your coredns pod is working properly by running:</p> <pre><code>kubectl logs &lt;your coredns pod&gt; -n kube-system
.:53
2019-05-02T13:32:41.438Z [INFO] CoreDNS-1.3.1
CoreDNS-1.3.1
.
.
</code></pre>
<p>I am running a Spring Boot application in Docker with Kubernetes.</p> <p>While downloading an image I am getting the below error:</p> <blockquote> <p>sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target</p> </blockquote> <p>How can I solve this problem?</p>
<p>Try adding the certificates to the Docker image and importing them into the JVM truststore via keytool.</p>
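A sketch of what that could look like in a Dockerfile. The base image, certificate filename, and alias are assumptions for illustration, and the cacerts path can differ between base images:

```dockerfile
FROM openjdk:8-jre

# Copy the CA certificate into the image and import it into the JVM's
# default truststore (default password is "changeit").
COPY my-ca.crt /tmp/my-ca.crt
RUN keytool -importcert -noprompt -trustcacerts \
      -alias my-ca \
      -file /tmp/my-ca.crt \
      -keystore "$JAVA_HOME/lib/security/cacerts" \
      -storepass changeit
```

Rebuild the image and redeploy; the PKIX error goes away once the JVM trusts the CA that signed the target's certificate.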
<p>How do I redirect all the <code>hrefs</code> within my response to hit my new path? For example, my ingress file is:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-odin
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /odin/?(.*)
        backend:
          serviceName: flask-app-tutorial
          servicePort: 8080
</code></pre> <p>When I visit the page at <a href="https://mysite/odin" rel="nofollow noreferrer">https://mysite/odin</a>. It works and returns the response.</p> <p>The HTML response is:</p> <pre><code>&lt;html&gt;
..
&lt;body&gt;
  &lt;div&gt;
    &lt;a href="/v0/index"&gt;Home&lt;/a&gt;
    &lt;a href="/v0/login"&gt;Login&lt;/a&gt;
  &lt;/div&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>However, as you can see, the relative links are like <code>&lt;a href="/v0/index"&gt;Home&lt;/a&gt;</code>. If I click on one, it won't work since there is no link like <code>http://mysite/v0/index</code>. If I click on the link, I want it to go to <code>http://mysite/odin/v0/index</code>. Is it possible, either by modifying the links in the response to include <code>odin</code>, or, when I do click on one, by having it look at the source url, i.e. <code>http://mysite/odin</code>, and resolve the link relative to that?</p> <pre><code>Nginx Version: 1.15.10
ingress-nginx: 0.24.0
</code></pre> <p>So far, I have tried the following:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header Accept-Encoding ""; #disable compression
  sub_filter '&lt;head&gt;' '&lt;head&gt; &lt;base href="/odin/"&gt;';
nginx.ingress.kubernetes.io/add-base-url: ":true"
nginx.ingress.kubernetes.io/app-root: /odin
nginx.ingress.kubernetes.io/use-regex: "true"
</code></pre> <p>I have also tried <a href="https://github.com/kubernetes/ingress-nginx/pull/3174#issuecomment-483315744" rel="nofollow noreferrer">this</a>, i.e.</p> <pre><code>change the spec.rules.host.paths.path from /odin/?(.*) to /(odin/.*)
</code></pre> <p>There may be a typo in the advice above. I think it should be <code>http</code> instead of <code>host</code>.</p>
<p>I had a similar problem and, inspired by your question, I found a solution which worked for me. Your annotation should work like this:</p> <pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  sub_filter_once on;
  sub_filter '&lt;base href="/"' '&lt;base href="/odin/"';
</code></pre> <p>As you can see, sub_filter is now enabled, and I matched on <em>base</em> instead of <em>head</em>.</p>
<p>My pod declares two different volumes.</p> <p>I use some definition templating, and depending on the environment, in some cases I would like to reuse the same claim for the two volumes.</p> <p>This results in an error:</p> <pre><code>Unable to mount volumes for pod &quot;task-pv-pod_&lt;...&gt;&quot;: timeout expired waiting for volumes to attach/mount for pod &quot;&lt;...&gt;&quot;/&quot;task-pv-pod&quot;. list of unattached/unmounted volumes=[task-pv-storage1]
</code></pre> <ul> <li>It works fine with two different claims.</li> <li>It is possible to reuse the same claim in several pods.</li> </ul> <p>This is a simplified pod definition:</p> <pre><code>kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage1
    persistentVolumeClaim:
      claimName: task-pv-claim
  - name: task-pv-storage2
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: &quot;http-server&quot;
    volumeMounts:
    - mountPath: &quot;/usr/share/nginx/html&quot;
      name: task-pv-storage1
    - mountPath: &quot;/usr/share/nginx/something-else&quot;
      name: task-pv-storage2
</code></pre> <p>So why is it not working?</p>
<p>To answer your question "is it possible to use the same claim in several pods?", let's take a look at different claim attach access modes:</p> <p>When you create a PVC with the default settings, you are creating a Persistent Volume and a Claim that sits on top of it, with the attach access mode <code>ReadWriteOnce</code>. </p> <blockquote> <p>ReadWriteOnce – the volume can be mounted as read-write by a single node</p> </blockquote> <p>So this claim can only be mounted on pods on the same node. There is a workaround to be able to mount this volume onto multiple pods. One is to schedule all of your pods on the same node, which technically defeats the purpose of using container orchestration. This could be achieved by <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="noreferrer">assigning pods to nodes</a>. Read the linked doc for details.</p> <p>Another method is to use disk persistent volumes / NFS. Depending on the cloud provider you are using there are different FileSystem provisioners you can use. </p> <ul> <li>AWS: <a href="https://github.com/helm/charts/tree/master/stable/efs-provisioner" rel="noreferrer">EFS Provisioner - helm</a></li> <li>GKE: <a href="https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266" rel="noreferrer">GCE NFS Disk</a></li> <li>Other cloud providers: <a href="https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner" rel="noreferrer">NFS Server Provisioner</a></li> </ul> <p>This way you can change your access mode to <code>ReadWriteMany</code>:</p> <blockquote> <p>ReadWriteMany – the volume can be mounted as read-write by many nodes</p> </blockquote> <p>With this access policy you can mount your volume onto multiple pods in your cluster regardless of the node they are running on.</p>
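Once a <code>ReadWriteMany</code>-capable provisioner is in place, the claim itself is ordinary. A minimal sketch, assuming a storage class named <code>nfs</code> created by one of the provisioners above (the names are placeholders):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-claim       # placeholder name
spec:
  accessModes:
  - ReadWriteMany          # mountable read-write by pods on many nodes
  storageClassName: nfs    # placeholder: your RWX-capable storage class
  resources:
    requests:
      storage: 10Gi
```

Any number of pods, on any nodes, can then reference <code>shared-claim</code> in their volume definitions.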
<p>I am working on setting up a CI/CD pipeline for a Spring Boot application on GKE. The CI build step worked correctly, but the delivery build step is failing due to an 'error: no objects passed to apply' error. I can see the following logs in the cloud build:</p> <pre><code>Starting Step #0 - "Deploy"
Step #0 - "Deploy": Already have image (with digest): gcr.io/cloud-builders/kubectl
Step #0 - "Deploy": Running: gcloud container clusters get-credentials --project="location-finder-kubernetes" --zone="us-central1-b" "location-finder"
Step #0 - "Deploy": Fetching cluster endpoint and auth data.
Step #0 - "Deploy": kubeconfig entry generated for location-finder.
Step #0 - "Deploy": Running: kubectl apply -f kubernetes.yaml
Step #0 - "Deploy": error: no objects passed to apply
Finished Step #0 - "Deploy"
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/kubectl" failed: exit status 1
</code></pre> <p>where location-finder is the name of the cluster.</p> <p>To set up this pipeline, I followed all the guidelines mentioned at <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/gitops-cloud-build</a></p> <p>The cloudbuild.yaml content for this failing step is:</p> <pre><code>steps:
- name: 'gcr.io/cloud-builders/kubectl'
  id: Deploy
  args:
  - 'apply'
  - '-f'
  - 'kubernetes.yaml'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-b'
  - 'CLOUDSDK_CONTAINER_CLUSTER=location-finder'
</code></pre> <p>Kubectl version:</p> <pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.9-dispatcher", GitCommit:"e3f5193e8f1091a162af7e17a781e6a3129bcfd0", GitTreeState:"clean", BuildDate:"2019-03-28T18:13:46Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.8-gke.6", GitCommit:"394ee507d00f15a63cef577a14026096c310698e", GitTreeState:"clean", BuildDate:"2019-03-30T19:31:43Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre> <p>Am I missing any configuration?</p>
<p>To test, I ran the same <code>kubectl apply -f kubernetes.yaml</code>, but passed an empty yaml file and got the same error as you. Is there anything actually in your yaml file?</p>
<p>Trying to statically provision a PV with GCP SSD storage. It errors out with the following message:</p> <pre><code>The PersistentVolume "monitoring" is invalid: spec: Required value: must specify a volume type
</code></pre> <p>Steps to reproduce:</p> <pre><code>$ cat storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Retain

$ kubectl apply -f storage.yaml
storageclass.storage.k8s.io/ssd created

$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring
spec:
  storageClassName: ssd
  capacity:
    storage: 50Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce

$ kubectl apply -f pv.yaml
The PersistentVolume "monitoring" is invalid: spec: Required value: must specify a volume type
</code></pre> <p>Kubernetes version:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-gke.10", GitCommit:"aaf0906400b5fc1d858ce0566a571e4f3ed06b9f", GitTreeState:"clean", BuildDate:"2019-03-30T19:30:48Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>The error occurs because a statically provisioned PersistentVolume must include a volume source (such as <code>gcePersistentDisk</code>) in its spec. When using a provisioner, though, you usually don't create the PV on your own. Just create a PVC requesting the created storage class, and GKE will provide a PV with the requested storage size and kind for you:</p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-retain-ssd-storage
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: ssd
</code></pre>
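If you really do want a statically provisioned PV, the validation error goes away once the spec names a volume source. A sketch, assuming a GCE persistent disk has already been created out of band (the disk name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring
spec:
  storageClassName: ssd
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:            # the required volume source
    pdName: monitoring-disk     # placeholder: a pre-created GCE disk
    fsType: ext4
```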
<p>How can I access environment variables in Vue that are passed to the container at runtime and not during the build?</p> <p>Stack is as follows:</p> <ul> <li>VueCLI 3.0.5</li> <li>Docker</li> <li>Kubernetes</li> </ul> <p>There are suggested solutions on stackoverflow and elsewhere to use a .env file to pass variables (and using mode), but that's at build-time and gets baked into the docker image.</p> <p>I would like to pass the variable into Vue at run-time as follows:</p> <ul> <li>Create Kubernetes ConfigMap (I get this right)</li> <li>Pass ConfigMap value into K8s pod env variable when running deployment yaml file (I get this right)</li> <li>Read from env variable created above eg. VUE_APP_MyURL and do something with that value in my Vue App (I DO NOT get this right)</li> </ul> <p>I've tried the following in helloworld.vue:</p> <pre><code>&lt;template&gt;
  &lt;div&gt;{{displayURL}}
    &lt;p&gt;Hello World&lt;/p&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script&gt;
export default {
  data() {
    return {
      displayURL: &quot;&quot;
    }
  },
  mounted() {
    console.log(&quot;check 1&quot;)
    this.displayURL = process.env.VUE_APP_ENV_MyURL
    console.log(process.env.VUE_APP_ENV_MyURL)
    console.log(&quot;check 3&quot;)
  }
}
&lt;/script&gt;
</code></pre> <p>I get back &quot;undefined&quot; in the console log and nothing showing on the helloworld page.</p> <p>I've also tried passing the value into a vue.config file and reading it from there. Same &quot;undefined&quot; result in console.log:</p> <pre><code>&lt;template&gt;
  &lt;div&gt;{{displayURL}}
    &lt;p&gt;Hello World&lt;/p&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script&gt;
const vueconfig = require('../../vue.config');

export default {
  data() {
    return {
      displayURL: &quot;&quot;
    }
  },
  mounted() {
    console.log(&quot;check 1&quot;)
    this.displayURL = vueconfig.VUE_APP_MyURL
    console.log(vueconfig.VUE_APP_MyURL)
    console.log(&quot;check 3&quot;)
  }
}
&lt;/script&gt;
</code></pre> <p>With vue.config looking like this:</p> <pre><code>module.exports = {
  VUE_APP_MyURL: process.env.VUE_APP_ENV_MyURL
}
</code></pre> <p>If I hardcode a value into VUE_APP_MyURL in the vue.config file it shows successfully on the helloworld page.</p> <p>VUE_APP_ENV_MyURL is successfully populated with the correct value when I interrogate it with <code>kubectl describe pod</code>.</p> <p>process.env.VUE_APP_MyURL doesn't seem to successfully retrieve the value.</p> <p>For what it is worth... I am able to use process.env.VUE_APP_3rdURL successfully to pass values into a Node.js app at runtime.</p>
<p>Create a file <code>config.js</code> with your desired configuration. We will use that later to create a config map that we deploy to Kubernetes. Put it into your Vue.js project where your other JavaScript files are. Although we will exclude it later from minification, it is useful to have it there so that IDE tooling works with it.</p> <pre class="lang-js prettyprint-override"><code>const config = (() =&gt; {
  return {
    &quot;VUE_APP_ENV_MyURL&quot;: &quot;...&quot;,
  };
})();
</code></pre> <p>Now make sure that your script is excluded from minification. To do that, create a file vue.config.js with the following content that preserves our config file.</p> <pre class="lang-js prettyprint-override"><code>const path = require(&quot;path&quot;);

module.exports = {
  publicPath: '/',
  configureWebpack: {
    module: {
      rules: [
        {
          test: /config.*config\.js$/,
          use: [
            {
              loader: 'file-loader',
              options: { name: 'config.js' },
            }
          ]
        }
      ]
    }
  }
}
</code></pre> <p>In your index.html, add a script block to load the config file manually. Note that the config file won't be there as we just excluded it. Later, we will mount it from a <code>ConfigMap</code> into our container. In this example, we assume that we will mount it into the same directory as our HTML document.</p> <pre class="lang-html prettyprint-override"><code>&lt;script src=&quot;&lt;%= BASE_URL %&gt;config.js&quot;&gt;&lt;/script&gt;
</code></pre> <p>Change your code to use our runtime config:</p> <pre class="lang-js prettyprint-override"><code>this.displayURL = config.VUE_APP_ENV_MyURL || process.env.VUE_APP_ENV_MyURL
</code></pre> <p>In Kubernetes, create a config map that uses the content of your config file. Of course, you will want to read the content from your actual config file.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  ...
data:
  config.js: |
    var config = (() =&gt; {
      return {
        &quot;VUE_APP_ENV_MyURL&quot;: &quot;...&quot;,
      };
    })();
</code></pre> <p>Reference the config map in your deployment. This mounts the config map as a file into your container. The <code>mountPath</code> already contains our minified index.html. We mount the config file that we referenced before.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      volumes:
      - name: config-volume
        configMap:
          name: ...
      containers:
      - ...
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/nginx/html/config.js
          subPath: config.js
</code></pre> <p>Now you can access the config file at <code>&lt;Base URL&gt;/config.js</code> and you should see the exact content that you put into the ConfigMap entry. Your HTML document loads that config map as it loads the rest of your minified Vue.js code. Voila!</p>
<p>I'm studying Kubernetes (without regard to a specific cloud provider) and it's not clear to me whether the most generic Service (not the Service of type LoadBalancer) works as an internal load balancer among the various replicas of a single microservice.</p> <p>So <strong>how do I implement internal load balancing among replicas without exposing the microservice to outside traffic</strong>?</p>
<p>You can use the Kubernetes Service object, which sits on top of the pods.</p> <p>The Service object manages connections and traffic, and its default ClusterIP type is reachable only from inside the cluster, so it can be used as an internal load balancer.</p> <p>You can create a service with a yaml file:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: myapp-service
spec:
  selector:
    app: Myapp
  ports:
  - port: 80
    targetPort: 9376
</code></pre> <p>Based on the selector matching the labels in the pod metadata, the service diverts traffic to those pods. Just use the proper selector in the spec section of the service and matching labels on the pods.</p>
<p>I would like to expose my Kubernetes Managed Digital Ocean (single node) cluster's service on port 80 without the use of Digital Ocean's load balancer. Is this possible? How would I do this? </p> <p>This is essentially a hobby project (I am beginning with Kubernetes) and just want to keep the cost very low. </p>
<p>You can deploy an Ingress configured to use the host network and port 80/443.</p> <ol> <li><p>DO's firewall for your cluster doesn't have 80/443 inbound open by default.</p> <p>If you edit the auto-created firewall the rules <a href="https://www.digitalocean.com/community/questions/how-to-customize-firewall-rules-for-managed-kubernetes-service" rel="noreferrer">will eventually reset themselves</a>. The solution is to create a separate firewall also pointing at the same Kubernetes worker nodes:</p> </li> </ol> <pre><code>$ doctl compute firewall create \ --inbound-rules=&quot;protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0&quot; \ --tag-names=k8s:CLUSTER_UUID \ --name=k8s-extra-mycluster </code></pre> <p>(Get the <code>CLUSTER_UUID</code> value from the dashboard or the ID column from <code>doctl kubernetes cluster list</code>)</p> <ol start="2"> <li>Create the <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">nginx ingress</a> using the host network. 
I've included the <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="noreferrer">helm chart</a> config below, but you could do it via the direct install process too.</li> </ol> <p>EDIT: The Helm chart in the above link has been DEPRECATED. Therefore the correct way of installing the chart (<a href="https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx" rel="noreferrer">as per the new docs</a>) is:</p> <pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update </code></pre> <p>After this repo is added &amp; updated</p> <pre><code># For Helm 2 $ helm install stable/nginx-ingress --name=myingress -f myingress.values.yml # For Helm 3 $ helm install myingress stable/nginx-ingress -f myingress.values.yml # EDIT: the new way to install in Helm 3 helm install myingress ingress-nginx/ingress-nginx -f myingress.values.yaml </code></pre> <p><code>myingress.values.yml</code> for the chart:</p> <pre class="lang-yaml prettyprint-override"><code>--- controller: kind: DaemonSet hostNetwork: true dnsPolicy: ClusterFirstWithHostNet daemonset: useHostPort: true service: type: ClusterIP rbac: create: true </code></pre> <ol start="3"> <li><p>you should be able to access the cluster on :80 and :443 via any worker node IP and it'll route traffic to your ingress.</p> </li> <li><p>since node IPs can &amp; do change, look at deploying <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">external-dns</a> to manage DNS entries to point to your worker nodes. 
Again, using the helm chart and assuming your DNS domain is hosted by DigitalOcean (though any supported DNS provider will work):</p> </li> </ol> <pre><code># For Helm 2 $ helm install --name=mydns -f mydns.values.yml stable/external-dns # For Helm 3 $ helm install mydns stable/external-dns -f mydns.values.yml </code></pre> <p><code>mydns.values.yml</code> for the chart:</p> <pre class="lang-yaml prettyprint-override"><code>--- provider: digitalocean digitalocean: # create the API token at https://cloud.digitalocean.com/account/api/tokens # needs read + write apiToken: &quot;DIGITALOCEAN_API_TOKEN&quot; domainFilters: # domains you want external-dns to be able to edit - example.com rbac: create: true </code></pre> <ol start="5"> <li>create a Kubernetes <a href="https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/" rel="noreferrer">Ingress resource</a> to route requests to an existing Kubernetes service:</li> </ol> <pre class="lang-yaml prettyprint-override"><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: testing123-ingress annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: testing123.example.com # the domain you want associated http: paths: - path: / backend: serviceName: testing123-service # existing service servicePort: 8000 # existing service port </code></pre> <ol start="6"> <li>after a minute or so you should see the DNS records appear and be resolvable:</li> </ol> <pre><code>$ dig testing123.example.com # should return worker IP address $ curl -v http://testing123.example.com # should send the request through the Ingress to your backend service </code></pre> <p>(Edit: editing the automatically created firewall rules eventually breaks, add a separate firewall instead).</p>
<p>I am using <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">kubespray</a> for the deployment of a kubernetes cluster and want to set some API Server parameters for the deployment. In specific I want to configure the authentication via OpenID Connect (e.g set the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server" rel="nofollow noreferrer"><code>oidc-issuer-url</code></a> parameter). I saw that kubespray has some vars to set (<a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md</a>), but not the ones I am looking for.</p> <p>Is there a way to set these parameters via kubespray? I don't want to configure each master manually (e.g by editing the <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> files).</p> <p>Thanks for your help</p>
<p>At the bottom of the page you are referring to there is a description of how to define custom flags for various components of k8s:</p> <pre><code>kubelet_custom_flags: - "--eviction-hard=memory.available&lt;100Mi" - "--eviction-soft-grace-period=memory.available=30s" - "--eviction-soft=memory.available&lt;300Mi" </code></pre> <p>The possible vars are:</p> <pre><code>apiserver_custom_flags controller_mgr_custom_flags scheduler_custom_flags kubelet_custom_flags kubelet_node_custom_flags </code></pre>
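<p>Applied to the OIDC question, a sketch using the apiserver variable (the issuer URL and client ID below are placeholders you would replace with your provider's values; the flags themselves are the standard kube-apiserver OIDC flags):</p>

<pre><code>apiserver_custom_flags:
  - "--oidc-issuer-url=https://accounts.example.com"
  - "--oidc-client-id=kubernetes"
  - "--oidc-username-claim=sub"
  - "--oidc-groups-claim=groups"
</code></pre>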
<p>I have an Angular 7 web app, and I am trying to consume a REST API deployed in Kubernetes whose port is not open to the Internet.</p> <p>I tried the HTTP client module ... but this executes client side ... so there is no way to reach the service that is running in Kubernetes.</p> <p>Is it possible to consume this without exposing the service to the Internet?</p>
<p>Your Angular application is running on your clients so you have to publicly expose the REST API in order to consume it from there. If you only want to expose the API for specific IPs (if your Angular application should only work within your intranet for example), then you can use an ingress controller (e. g. nginx) and configure it with annotations. Example:</p> <pre><code>nginx.ingress.kubernetes.io/whitelist-source-range: &lt;YourNetworkCIDR&gt; </code></pre>
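<p>For context, the annotation goes on the Ingress resource itself. A sketch (the host, service name, and CIDR are placeholders):</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api-service
          servicePort: 80
</code></pre>

<p>Requests from outside the whitelisted range then receive a 403 from the ingress controller.</p>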
<p>I am using <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">kubespray</a> for the deployment of a kubernetes cluster and want to set some API Server parameters for the deployment. In specific I want to configure the authentication via OpenID Connect (e.g set the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server" rel="nofollow noreferrer"><code>oidc-issuer-url</code></a> parameter). I saw that kubespray has some vars to set (<a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md</a>), but not the ones I am looking for.</p> <p>Is there a way to set these parameters via kubespray? I don't want to configure each master manually (e.g by editing the <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> files).</p> <p>Thanks for your help</p>
<p>The <a href="https://github.com/kubernetes-sigs/kubespray/blob/b708db4cd5256e2c114ff872d4a7a56e86b7ac34/inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml#L63" rel="nofollow noreferrer">k8s-cluster.yml</a> file has some parameters which allow to set the OID configuration:</p> <pre><code>kube_oidc_auth: true ... kube_oidc_url: https:// ... kube_oidc_client_id: kubernetes kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem" kube_oidc_username_claim: sub kube_oidc_username_prefix: oidc: kube_oidc_groups_claim: groups kube_oidc_groups_prefix: oidc: </code></pre> <p>These parameters are the counter parts to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server" rel="nofollow noreferrer">oidc api server parameters</a></p>
<p>I have the code below; I would like to convert yaml into client-go data structures and get the object name from it</p> <pre><code>package main import ( "fmt" "k8s.io/api/extensions/v1beta1" "k8s.io/client-go/kubernetes/scheme" ) var yml = ` apiVersion: extensions/v1beta1 kind: Deployment metadata: name: testnginx replicas: 1 spec: template: metadata: labels: run: testnginx spec: containers: - image: nginx name: testnginx ports: - containerPort: 8080 ` func main() { decode := scheme.Codecs.UniversalDeserializer().Decode obj, _, err := decode([]byte(yml), nil, nil) if err != nil { fmt.Printf("%#v", err) } //fmt.Printf("%#v\n", obj) deployment := obj.(*v1beta1.Deployment) fmt.Printf("%#v\n", deployment.ObjectMeta.Name) } </code></pre> <p>The returned result is supposed to be <code>testnginx</code>, but it is empty</p> <pre><code>$ ./decode-k8s-exercise "" </code></pre> <p>Not sure why. Thanks.</p>
<p>The problem is in the yaml. The <code>name</code> field should be an attribute inside <code>metadata</code>, but they are currently at the same level. If you indent <code>name</code> under <code>metadata</code> then it should work (the same applies to <code>template</code> later on). <code>replicas</code> should also be inside <code>spec</code>:</p> <pre><code>var yml = ` apiVersion: extensions/v1beta1 kind: Deployment metadata: name: testnginx spec: replicas: 1 template: metadata: labels: run: testnginx spec: containers: - image: nginx name: testnginx ports: - containerPort: 8080 ` </code></pre>
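<p>A side note on the Go code: the unchecked type assertion panics if the decoder returns a different kind. A checked assertion (a sketch reusing the variables and imports from the program in the question) fails more gracefully:</p>

<pre><code>deployment, ok := obj.(*v1beta1.Deployment)
if !ok {
    fmt.Printf("expected *v1beta1.Deployment, got %T\n", obj)
    return
}
fmt.Println(deployment.ObjectMeta.Name) // "testnginx" with the corrected yaml
</code></pre>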
<p>I have my assets on S3 and my service is deployed on Kubernetes. Is it possible to define a proxy pass in nginx-ingress? My current nginx config proxies assets to S3, and I want to replicate that in Kubernetes.</p> <pre><code>location /assets/ { proxy_pass https://s3.ap-south-1.amazonaws.com; } </code></pre> <p>I tried this but it's not working</p> <pre><code>nginx.ingress.kubernetes.io/server-snippet: | location /assets/ { proxy_pass https://s3.ap-south-1.amazonaws.com/assets/; } </code></pre>
<p>You can try to use service of type ExternalName here like that:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: s3-ap-south spec: type: ExternalName externalName: s3.ap-south-1.amazonaws.com --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: s3-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" nginx.ingress.kubernetes.io/ssl-passthrough: "true" spec: rules: - host: YOUR_HOSTNAME http: paths: - backend: serviceName: s3-ap-south servicePort: 443 </code></pre>
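<p>One caveat, hedged: S3 routes requests by the <code>Host</code> header, and the ingress forwards the original request host by default. If S3 responds with errors about an unknown bucket or host, you may additionally need the <code>upstream-vhost</code> annotation so nginx rewrites the header to the S3 endpoint (the annotation exists in ingress-nginx; the value below simply mirrors this question's region):</p>

<pre><code>nginx.ingress.kubernetes.io/upstream-vhost: "s3.ap-south-1.amazonaws.com"
</code></pre>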
<p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">Reference</a></p> <p>I want to add the audit-policy-file param; the file is present at /etc/kubernetes/audit-policy.yaml</p> <p>It has a basic Metadata logging rule.</p> <p>But once I restart the service, the apiserver does not come up. If I keep the value empty then it works fine, and the logs in /var/log/containers say the file read failed.</p> <blockquote> <p>{"log":"\n","stream":"stderr","time":"2018-12-24T12:23:36.82013719Z"} {"log":"error: loading audit policy file: failed to read file path \"/etc/kubernetes/audit-policy.yaml\": open /etc/kubernetes/audit-policy.yaml: no such file or directory\n","stream":"stderr","time":"2018-12-24T12:23:36.820146912Z"}</p> </blockquote> <pre><code>[root@kube2-master containers]# kubectl version Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <pre><code>[root@kube2-master containers]# cat /etc/kubernetes/audit-policy.yaml rules: - level: Metadata </code></pre> <pre><code>[root@kube2-master containers]# cat /etc/kubernetes/manifests/kube-apiserver.yaml apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: null labels: component: kube-apiserver tier: control-plane name: kube-apiserver namespace: kube-system spec: containers: - command: - kube-apiserver - --authorization-mode=Node,RBAC - --advertise-address=192.168.213.23 - --allow-privileged=true - --client-ca-file=/etc/kubernetes/pki/ca.crt - --enable-admission-plugins=NodeRestriction - 
--enable-bootstrap-token-auth=true - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key - --etcd-servers=https://127.0.0.1:2379 - --insecure-port=0 - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key - --requestheader-allowed-names=front-proxy-client - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt - --requestheader-extra-headers-prefix=X-Remote-Extra- - --requestheader-group-headers=X-Remote-Group - --requestheader-username-headers=X-Remote-User - --secure-port=6443 - --service-account-key-file=/etc/kubernetes/pki/sa.pub - --service-cluster-ip-range=10.96.0.0/12 - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key - --audit-policy-file=/etc/kubernetes/audit-policy.yaml image: k8s.gcr.io/kube-apiserver:v1.13.1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 192.168.213.23 path: /healthz port: 6443 scheme: HTTPS initialDelaySeconds: 15 timeoutSeconds: 15 name: kube-apiserver resources: requests: cpu: 250m volumeMounts: - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /etc/pki name: etc-pki readOnly: true - mountPath: /etc/kubernetes/pki name: k8s-certs readOnly: true hostNetwork: true priorityClassName: system-cluster-critical volumes: - hostPath: path: /etc/ssl/certs type: DirectoryOrCreate name: ca-certs - hostPath: path: /etc/pki type: DirectoryOrCreate name: etc-pki - hostPath: path: /etc/kubernetes/pki type: DirectoryOrCreate name: k8s-certs status: {} </code></pre>
<p>You're running the <code>kube-apiserver</code> as a pod, so it's looking for that audit file on the filesystem <em>inside</em> the container, whereas you're putting it on the filesystem of the host. You need to mount that path through to your <code>kube-apiserver</code> pod. Assuming you're using <code>kubeadm</code>, add the following to your <code>ClusterConfiguration</code>:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta1 kind: ClusterConfiguration apiServer: extraVolumes: - name: audit-policy hostPath: /etc/kubernetes/audit-policy.yaml mountPath: /etc/kubernetes/audit-policy.yaml readOnly: true </code></pre>
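<p>If you would rather not re-render the manifest through kubeadm, the equivalent manual edit to <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> is a hostPath volume plus a matching mount (a sketch; the kubelet restarts the static pod automatically when the manifest changes):</p>

<pre><code>    volumeMounts:
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit-policy
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
    name: audit-policy
</code></pre>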
<p>I have downloaded Kubernetes from GitHub and now I want to run it from the downloaded files, not from GitHub. Could you please help me?</p>
<p>You can follow:</p> <ul> <li>"<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">Assign Memory Resources to Containers and Pods</a>"</li> <li>"<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">Assign CPU Resources to Containers and Pods</a>"</li> </ul> <p>That is:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: cpu-demo namespace: cpu-example spec: containers: - name: demo-ctr image: vish/stress resources: limits: cpu: "1" memory: "200Mi" requests: cpu: "0.5" memory: "100Mi" args: - -cpus - "2" </code></pre> <p>At the pod level: "<a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/" rel="nofollow noreferrer">Configure a Pod Quota for a Namespace</a>".</p>
<p>I would like to know: if the memory request of a pod is 200m and the limit is 400m, what would the memory resource consumption be? Does it always consume the full 400m specified in limits, or can it be lower than that?</p>
<p>In short, yes, it can be lower than what is specified in limits. But it is not about consumption, it is about allocation. Requests mean the pod is guaranteed to be allocated this amount of memory, whereas limits specify the maximum the pod will be allowed to use/claim.</p> <p>Although it might seem that these settings control resource usage, they mostly impact node selection (for pod scheduling) and horizontal auto-scalers.</p> <p>Please note that memory usage is not elastic in nature like CPU, because once a pod/container is granted memory, applications usually have no mechanism to give it back to the node. So at times it might appear that load has decreased on a pod but the memory claimed/consumed is not given back.</p>
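<p>Expressed as a manifest fragment (using Mi units, which is presumably what the question's 200m/400m means for memory), the scenario looks like this. Actual usage can sit anywhere below the limit; only exceeding the limit gets the container OOM-killed:</p>

<pre><code>resources:
  requests:
    memory: "200Mi"   # reserved on the node for scheduling purposes
  limits:
    memory: "400Mi"   # hard cap; the container is OOM-killed if it exceeds this
</code></pre>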
<p>We are getting very frequently below message in <code>/var/log/messages</code></p> <pre><code>kernel: SLUB: Unable to allocate memory on node -1 (gfp=0x8020) </code></pre> <p>In some cases followed by an allocation table</p> <pre><code>kernel: cache: sigqueue(12019:454c4ebd186d964699132181ad7367c669700f7d8991c47d4bc053ed101675bc), object size: 160, buffer size: 160, default order: 0, min order: 0 kernel: node 0: slabs: 57, objs: 23313, free: 0 kernel: node 1: slabs: 35, objs: 14315, free: 0 </code></pre> <p>Ok, free is 0, but how may this be tuned?</p> <p>Following is set information</p> <hr> <pre><code>OS - Centos7.3 Kernel - 3.10.0-327.36.3.el7.x86_64 Docker - 1.12.6 Kubernetes - 1.5.5 </code></pre> <p>We have private cloud powered by kurbernetes, having 10 nodes; it was working fine till last month and now we are getting these alerts very frequently on every nodes, pods/container also increased in last few days.</p> <p>We have enough <code>memory</code> and <code>cpu</code> available on each node. </p> <p>Any fine tuning for these alert will be very helpful.</p> <p>Additional information: <strong><code>sysctl.conf</code> options</strong></p> <pre><code>net.ipv4.tcp_timestamps = 0 net.ipv4.tcp_max_syn_backlog = 4096 net.core.somaxconn = 1024 net.ipv4.tcp_syncookies = 1 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.core.rmem_default = 65535 net.core.wmem_default = 65535 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.ip_local_port_range = 1024 65535 vm.max_map_count = 262144 vm.swappiness=10 vm.vfs_cache_pressure=100 </code></pre>
<p>Please look at this: <a href="https://pingcap.com/blog/try-to-fix-two-linux-kernel-bugs-while-testing-tidb-operator-in-k8s/" rel="nofollow noreferrer">https://pingcap.com/blog/try-to-fix-two-linux-kernel-bugs-while-testing-tidb-operator-in-k8s/</a>. It's a kernel bug.</p>
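<p>For reference, the workaround described in that article for the kmem-accounting leak is to disable kernel memory accounting via a boot parameter. A sketch for CentOS 7 (verify against the article and your kernel version before applying this to production nodes):</p>

<pre><code># append to GRUB_CMDLINE_LINUX in /etc/default/grub
cgroup.memory=nokmem

# then rebuild the grub config and reboot the node
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
</code></pre>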
<p>I'm using kube-prometheus with Prometheus-Operator to monitor my K8s cluster. I've deployed Jenkins on my cluster and want to start getting metrics from it using a ServiceMonitor.</p> <p>I've installed the Prometheus plugin, which exposes the metrics at <code>/prometheus</code> or at <code>/metrics/API_KEY/metrics</code>; this works fine if I create a new static job. However, if I want to use a ServiceMonitor, it does not work.</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: jenkins name: jenkins namespace: monitoring spec: endpoints: - interval: 30s port: http path: /metrics/y1H6G16T-DhqpHdW9XwHWnP9FWAXMMfy4XnXVnyoIOEV3-gPJZKN284OFUcVkPxL/metrics selector: matchLabels: jenkins: main </code></pre>
<p>I don't know about ServiceMonitor, but I monitor my Jenkins instance without any problem, using annotations on Jenkins' service :</p> <pre><code>prometheus.io/scrape: "true" prometheus.io/port: "8080" prometheus.io/path: "/prometheus" </code></pre>
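<p>For context, those annotations sit in the metadata of the Jenkins Service object. A sketch (the name, selector labels, and port are placeholders matching a typical Jenkins install):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/prometheus"
spec:
  selector:
    jenkins: main
  ports:
  - port: 8080
</code></pre>

<p>Note that these annotations only take effect if your Prometheus scrape config includes the conventional annotation-based service-discovery job; a stock Prometheus-Operator setup ignores them and expects ServiceMonitor objects instead.</p>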
<p>Hello and thanks in advance for your help!</p> <p>I am trying to deploy a Kubernetes cluster using Kubespray (an ansible playbook). I am trying to deploy the cluster on 17 KVM hosts (all nodes are running CentOS 7, and are hosted on a bare-metal server).</p> <p>However, when I try to run the playbook, at the task <code>[download : file_download | Download item]</code> I get the following error (which stops the installation):</p> <pre><code>An exception occurred during task execution. To see the full traceback, use -vvv. The error was: SSLError: ('The read operation timed out',) fatal: [node1]: FAILED! =&gt; {"attempts": 4, "changed": false, "msg": "failed to create temporary content file: ('The read operation timed out',)"} </code></pre> <p>This error is the same for most of the nodes, BUT some nodes do manage to download.</p> <p>Here is the error in verbose mode:</p> <pre><code>The full traceback is: Traceback (most recent call last): File "/tmp/ansible_get_url_payload_72qREk/__main__.py", line 360, in url_get shutil.copyfileobj(rsp, f) File "/usr/lib64/python2.7/shutil.py", line 49, in copyfileobj buf = fsrc.read(length) File "/usr/lib64/python2.7/socket.py", line 380, in read data = self._sock.recv(left) File "/usr/lib64/python2.7/httplib.py", line 602, in read s = self.fp.read(amt) File "/usr/lib64/python2.7/socket.py", line 380, in read data = self._sock.recv(left) File "/usr/lib64/python2.7/ssl.py", line 757, in recv return self.read(buflen) File "/usr/lib64/python2.7/ssl.py", line 651, in read v = self._sslobj.read(len or 1024) SSLError: ('The read operation timed out',) fatal: [node14]: FAILED! 
=&gt; { "attempts": 4, "changed": false, "invocation": { "module_args": { "attributes": null, "backup": null, "checksum": "", "client_cert": null, "client_key": null, "content": null, "delimiter": null, "dest": "/tmp/releases/kubeadm", "directory_mode": null, "follow": false, "force": false, "force_basic_auth": false, "group": null, "headers": null, "http_agent": "ansible-httpget", "mode": "0755", "owner": "root", "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "sha256sum": "c4fc478572b5623857f5d820e1c107ae02049ca02cf2993e512a091a0196957b", "src": null, "timeout": 10, "tmp_dest": null, "unsafe_writes": null, "url": "https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm", "url_password": null, "url_username": null, "use_proxy": true, "validate_certs": true } }, "msg": "failed to create temporary content file: ('The read operation timed out',)" } </code></pre> <p>I tried to connect to the nodes and try to download an item (I tried to download the Kubspray zip) and it worked , all nodes reach the internet and can install package.</p> <p>From the verbose output , I somewhat understood that the error come from python , but I really don't know how to solve it...</p> <p>Let me know if I can give you another piece of information and angain , thanks in advance !</p>
<p>So, I "solved" the problem.</p> <p>In fact, Ansible has a timeout of 10 seconds for <strong>all ssh related command/read/write</strong> tasks. For unknown reasons, the write task on my nodes was a bit slow, so I got this error. Fortunately, you can change the time before a timeout.</p> <p>You can either use the argument -T (or --timeout) 'time_wanted' or change ansible.cfg. The 'basic' cfg file is located at <code>/etc/ansible/ansible.cfg</code>, but be careful: some custom ansible playbooks will have their own ansible.cfg, so you will need to find it and change it to your needs.</p> <p>I was lucky; the fourth time I ran the playbook, it installed K8s smoothly.</p> <p>For testing reasons I am running my nodes on KVM, so maybe the write slowness comes from there.</p> <p>Hope that someone will find my answer useful!</p>
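<p>Concretely, either form works (30 is an arbitrary example value and the inventory path is illustrative):</p>

<pre><code># one-off, on the command line
ansible-playbook --timeout 30 -i inventory/mycluster/hosts.ini cluster.yml

# or persistently, in ansible.cfg
[defaults]
timeout = 30
</code></pre>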
<p>Looking into it for making awesome api things, but I don't know Go yet. I see there are generators for go, but I'd rather use a more familiar language at first. Is this possible or should I start learning Go?</p>
<p>Client libraries for Kubernetes are available in many languages. See here for examples: <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/client-libraries/</a></p>
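<p>Independent of language choice, you can also explore the REST API directly: <code>kubectl proxy</code> handles authentication for you and exposes the API on localhost, so any HTTP client works. A sketch:</p>

<pre><code>kubectl proxy --port=8001 &amp;
curl http://localhost:8001/api/v1/namespaces/default/pods
</code></pre>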
<p>When I want to generate yaml by running <code>kubectl</code>, it indicates that I should pass a <code>--generator=something</code> flag within the command.</p> <p>For example, to get the deployment template via <code>kubectl</code>, I should run the below command:</p> <pre><code>kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run -o yaml </code></pre> <p>Without the <code>--generator</code> flag, the CLI states in some way that I should mention the generator flag with a proper value (e.g. <code>run-pod/v1</code>).</p> <p>My question is, what essentially is a generator? What does it do? Are they some sort of object creation templates or something else?</p>
<p>That was introduced in <a href="https://github.com/kubernetes/kubernetes/commit/426ef9335865ebef43f682da90796bd8bf976637" rel="noreferrer">commit 426ef93</a>, Jan. 2016 for Kubernetes v1.2.0-alpha.8.</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/426ef9335865ebef43f682da90796bd8bf976637/docs/devel/kubectl-conventions.md#generators" rel="noreferrer">The generators</a> were described as:</p> <blockquote> <p>Generators are <code>kubectl</code> commands that generate resources based on a set of inputs (other resources, flags, or a combination of both).</p> <p>The point of generators is:</p> <ul> <li>to enable users using <code>kubectl</code> in a scripted fashion to pin to a particular behavior which may change in the future.<br> Explicit use of a generator will always guarantee that the expected behavior stays the same.</li> <li>to enable potential expansion of the generated resources for scenarios other than just creation, similar to how <code>-f</code> is supported for most general-purpose commands.</li> </ul> </blockquote> <p>And:</p> <blockquote> <p>Generator commands should obey to the following conventions:</p> <ul> <li>A <code>--generator</code> flag should be defined. Users then can choose between different generators, if the command supports them (for example, <code>kubectl run</code> currently supports generators for pods, jobs, replication controllers, and deployments), or between different versions of a generator so that users depending on a specific behavior may pin to that version (for example, <code>kubectl</code> expose currently supports two different versions of a service generator).</li> <li>Generation should be decoupled from creation.<br> A generator should implement the <code>kubectl.StructuredGenerator</code> interface and have no dependencies on cobra or the Factory</li> </ul> </blockquote>
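<p>To see the effect, the same <code>kubectl run</code> invocation produces different resource kinds depending on the generator (this matches v1.12/v1.13-era kubectl; later versions deprecate most of these generators):</p>

<pre><code># a bare Pod
kubectl run nginx --image=nginx --generator=run-pod/v1 --dry-run -o yaml

# a Deployment
kubectl run nginx --image=nginx --generator=deployment/v1beta1 --dry-run -o yaml
</code></pre>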
<p>We have several pods (as services/deployments) in our k8s workflow that are dependent on each other, such that if one goes into a <code>CrashLoopBackOff</code> state, then all these services need to be redeployed.</p> <p>Instead of having to do this manually, is there a programmatic way of handling this?</p> <p>Of course we are trying to figure out why the pod in question is crashing.</p>
<p>If these are so tightly dependent on each other, I would consider these options:</p> <p>a) Rearchitect your system to be more resilient towards failure and tolerate it when a pod is temporarily unavailable</p> <p>b) Put all parts into one pod as separate containers, making the atomic design more explicit</p> <p>If these don't fit your needs, you can use the Kubernetes API to create a program that automates the task of restarting all dependent parts. There are client libraries for multiple languages and integration is quite easy. The next step would be a custom resource definition (CRD) so you can manage your own system using an extension to the Kubernetes API.</p>
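<p>As a stopgap before building anything against the API, the same restart can be scripted with <code>kubectl</code>: patching a throwaway annotation into the pod template forces a rolling restart of each dependent deployment (the deployment names below are placeholders):</p>

<pre><code>#!/bin/sh
# bump a pod-template annotation to trigger a rolling restart of each deployment
for d in service-a service-b service-c; do
  kubectl patch deployment "$d" -p \
    "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restartedAt\":\"$(date +%s)\"}}}}}"
done
</code></pre>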
<p>I have one node that is giving me some trouble at the moment. I have not found a solution as of yet, but that might be a skill-level problem, Google coming up empty, or I may have found some unsolvable issue. The latter is highly unlikely.</p> <pre><code>kubectl version v1.8.5 docker version 1.12.6 </code></pre> <p>Doing some normal maintenance on my nodes I noticed the following:</p> <pre><code>NAME STATUS ROLES AGE VERSION ip-192-168-4-14.ourdomain.pro Ready master 213d v1.8.5 ip-192-168-4-143.ourdomain.pro Ready master 213d v1.8.5 ip-192-168-4-174.ourdomain.pro Ready &lt;none&gt; 213d v1.8.5 ip-192-168-4-182.ourdomain.pro Ready &lt;none&gt; 46d v1.8.5 ip-192-168-4-221.ourdomain.pro Ready &lt;none&gt; 213d v1.8.5 ip-192-168-4-249.ourdomain.pro Ready master 213d v1.8.5 ip-192-168-4-251.ourdomain.pro NotReady &lt;none&gt; 206d v1.8.5 </code></pre> <p>On the <strong>NotReady</strong> node, I am unable to <strong>attach</strong> or <strong>exec</strong> myself in, which seems normal when in a <strong>NotReady</strong> state unless I am misreading it. I am not able to look at any specific logs on that node for the same reason.</p> <p>At this point, I restarted <strong>kubelet</strong> and attached myself to the logs simultaneously to see if anything out of the ordinary would appear.</p> <p>I have attached the things I spent a day Googling, but I cannot confirm they are actually connected to the problem.</p> <p><strong>ERROR 1</strong></p> <pre><code>unable to connect to Rkt api service </code></pre> <p>We are not using this so I put this on the ignore list.</p> <p><strong>ERROR 2</strong></p> <pre><code>unable to connect to CRI-O api service </code></pre> <p>We are not using this so I put this on the ignore list.</p> <p><strong>ERROR 3</strong></p> <pre><code>Image garbage collection failed once. 
Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container / </code></pre> <p>I have not been able to exclude this as a potential pitfall, but the things I have found thus far do not seem to relate to the version I am running.</p> <p><strong>ERROR 4</strong></p> <pre><code>skipping pod synchronization - [container runtime is down PLEG is not healthy </code></pre> <p>I do not have an answer for this one except for the fact that the garbage collection error above appears a second time after this message.</p> <p><strong>ERROR 5</strong></p> <pre><code>Registration of the rkt container factory failed </code></pre> <p>Not using this so it should fail, unless I am mistaken.</p> <p><strong>ERROR 6</strong></p> <pre><code>Registration of the crio container factory failed </code></pre> <p>Not using this so it should fail, unless, again, I am mistaken.</p> <p><strong>ERROR 7</strong></p> <pre><code>28087 docker_sandbox.go:343] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "kube-dns-545bc4bfd4-rt7qp_kube-system": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container </code></pre> <p>I found a GitHub ticket for this one, but it seems it's fixed, so I am not sure how it relates.</p> <p><strong>ERROR 8</strong></p> <pre><code>28087 kubelet_node_status.go:791] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-05-06 05:00:40.664331773 +0000 UTC LastTransitionTime:2019-05-06 05:00:40.664331773 +0000 UTC Reason:KubeletNotReady Message:container runtime is down} </code></pre> <p>And here the node goes into NotReady.</p> <p><strong>Last log messages and status</strong></p> <pre><code> systemctl status kubelet kubelet.service - kubelet: The Kubernetes Node Agent Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/kubelet.service.d └─10-kubeadm.conf Active: active 
(running) since Mon 2019-05-06 05:00:39 UTC; 1h 58min ago Docs: http://kubernetes.io/docs/ Main PID: 28087 (kubelet) Tasks: 21 Memory: 42.3M CGroup: /system.slice/kubelet.service └─28087 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manife... May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310305 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for vo...4a414b9c") May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310330 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for vo...4a414b9c") May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310359 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "varl... May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310385 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cali... May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310408 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube... May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310435 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for vo...4a414b9c") May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310456 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for vo...4a414b9c") May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310480 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-c... May 06 05:00:45 kube-master-1 kubelet[28087]: I0506 05:00:45.310504 28087 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-... 
May 06 05:14:29 kube-master-1 kubelet[28087]: E0506 05:14:29.848530 28087 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs </code></pre> <p>Here is the <strong>kubectl get po -o wide</strong> output.</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE docker-image-prune-fhjkl 1/1 Running 4 213d 100.96.67.87 ip-192-168-4-249 docker-image-prune-ltfpf 1/1 Running 4 213d 100.96.152.74 ip-192-168-4-143 docker-image-prune-nmg29 1/1 Running 3 213d 100.96.22.236 ip-192-168-4-221 docker-image-prune-pdw5h 1/1 Running 7 213d 100.96.90.116 ip-192-168-4-174 docker-image-prune-swbhc 1/1 Running 0 46d 100.96.191.129 ip-192-168-4-182 docker-image-prune-vtsr4 1/1 NodeLost 1 206d 100.96.182.197 ip-192-168-4-251 fluentd-es-4bgdz 1/1 Running 6 213d 192.168.4.249 ip-192-168-4-249 fluentd-es-fb4gw 1/1 Running 7 213d 192.168.4.14 ip-192-168-4-14 fluentd-es-fs8gp 1/1 Running 6 213d 192.168.4.143 ip-192-168-4-143 fluentd-es-k572w 1/1 Running 0 46d 192.168.4.182 ip-192-168-4-182 fluentd-es-lpxhn 1/1 Running 5 213d 192.168.4.174 ip-192-168-4-174 fluentd-es-pjp9w 1/1 Unknown 2 206d 192.168.4.251 ip-192-168-4-251 fluentd-es-wbwkp 1/1 Running 4 213d 192.168.4.221 ip-192-168-4-221 grafana-76c7dbb678-p8hzb 1/1 Running 3 213d 100.96.90.115 ip-192-168-4-174 model-5bbe4862e4b0aa4f77d0d499-7cb4f74648-g8xmp 2/2 Running 2 101d 100.96.22.234 ip-192-168-4-221 model-5bbe4862e4b0aa4f77d0d499-7cb4f74648-tvp4m 2/2 Running 2 101d 100.96.22.235 ip-192-168-4-221 prometheus-65b4b68d97-82vr7 1/1 Running 3 213d 100.96.90.87 ip-192-168-4-174 pushgateway-79f575d754-75l6r 1/1 Running 3 213d 100.96.90.83 ip-192-168-4-174 rabbitmq-cluster-58db9b6978-g6ssb 2/2 Running 4 181d 100.96.90.117 ip-192-168-4-174 replicator-56x7v 1/1 Running 3 213d 100.96.90.84 ip-192-168-4-174 traefik-ingress-6dc9779596-6ghwv 1/1 Running 3 213d 100.96.90.85 ip-192-168-4-174 traefik-ingress-6dc9779596-ckzbk 1/1 Running 4 213d 100.96.152.73 ip-192-168-4-143 traefik-ingress-6dc9779596-sbt4n 1/1 Running 3 
213d 100.96.22.232 ip-192-168-4-221 </code></pre> <p>Output of <strong>kubectl get po -n kube-system -o wide</strong></p> <pre><code>NAME READY STATUS RESTARTS AGE IP calico-kube-controllers-78f554c7bb-s7tmj 1/1 Running 4 213d 192.168.4.14 calico-node-5cgc6 2/2 Running 9 213d 192.168.4.249 calico-node-bbwtm 2/2 Running 8 213d 192.168.4.14 calico-node-clwqk 2/2 NodeLost 4 206d 192.168.4.251 calico-node-d2zqz 2/2 Running 0 46d 192.168.4.182 calico-node-m4x2t 2/2 Running 6 213d 192.168.4.221 calico-node-m8xwk 2/2 Running 9 213d 192.168.4.143 calico-node-q7r7g 2/2 Running 8 213d 192.168.4.174 cluster-autoscaler-65d6d7f544-dpbfk 1/1 Running 10 207d 100.96.67.88 kube-apiserver-ip-192-168-4-14 1/1 Running 6 213d 192.168.4.14 kube-apiserver-ip-192-168-4-143 1/1 Running 6 213d 192.168.4.143 kube-apiserver-ip-192-168-4-249 1/1 Running 6 213d 192.168.4.249 kube-controller-manager-ip-192-168-4-14 1/1 Running 5 213d 192.168.4.14 kube-controller-manager-ip-192-168-4-143 1/1 Running 6 213d 192.168.4.143 kube-controller-manager-ip-192-168-4-249 1/1 Running 6 213d 192.168.4.249 kube-dns-545bc4bfd4-rt7qp 3/3 Running 13 213d 100.96.19.197 kube-proxy-2bn42 1/1 Running 0 46d 192.168.4.182 kube-proxy-95cvh 1/1 Running 4 213d 192.168.4.174 kube-proxy-bqrhw 1/1 NodeLost 2 206d 192.168.4.251 kube-proxy-cqh67 1/1 Running 6 213d 192.168.4.14 kube-proxy-fbdvx 1/1 Running 4 213d 192.168.4.221 kube-proxy-gcjxg 1/1 Running 5 213d 192.168.4.249 kube-proxy-mt62x 1/1 Running 4 213d 192.168.4.143 kube-scheduler-ip-192-168-4-14 1/1 Running 6 213d 192.168.4.14 kube-scheduler-ip-192-168-4-143 1/1 Running 6 213d 192.168.4.143 kube-scheduler-ip-192-168-4-249 1/1 Running 6 213d 192.168.4.249 kubernetes-dashboard-7c5d596d8c-q6sf2 1/1 Running 5 213d 100.96.22.230 tiller-deploy-6d9f596465-svpql 1/1 Running 3 213d 100.96.22.231 </code></pre> <p>I am a bit lost at this point of where to go from here. Any suggestions are welcome.</p>
<p>Most likely the kubelet is down.</p> <p>Share the output from the below command:</p> <pre><code>journalctl -u kubelet
</code></pre> <p>Share the output from the below command:</p> <pre><code>kubectl get po -n kube-system -o wide
</code></pre> <p>It appears the node is not able to communicate with the control plane. You can follow the below steps:</p> <ol> <li>detach the node from the cluster (cordon the node, drain the node and finally delete the node)</li> <li>reset the node</li> <li>rejoin the node to the cluster as a fresh node</li> </ol>
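<p>Since the kubelet on this node is kubeadm-managed (it uses <code>/etc/kubernetes/bootstrap-kubelet.conf</code>), those three steps might look like the following sketch. The node name, API server endpoint, token and hash are placeholders for your cluster's values:</p>

```shell
# Sketch of the detach/reset/rejoin sequence for a kubeadm-managed cluster.
recover_node() {
  node="$1"
  kubectl cordon "$node"                 # stop new pods being scheduled here
  kubectl drain "$node" --ignore-daemonsets --delete-local-data --force
  kubectl delete node "$node"            # remove it from the cluster
  # then, on the node itself:
  #   kubeadm reset
  #   kubeadm join <api-server>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
}
# recover_node ip-192-168-4-251.ourdomain.pro
```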
<p>I want my pods to be gracefully recycled from my deployments after a certain period of time, such as every week or month. I know I can add a cron job for that if I know the right Kubernetes command. </p> <p>The question is what is the best approach to do this in Kubernetes. Which command will let me achieve this goal?</p> <p>Thank you very much for helping me out on this.</p>
<p>As the <a href="https://stackoverflow.com/users/1527879/rayhan">OP rayhan</a> has <a href="https://stackoverflow.com/a/53947838/6309">found out</a>, and as commented in <a href="https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-446101458" rel="noreferrer"><code>kubernetes/kubernetes</code> issue 13488</a>, a kubectl patch of an environment variable is enough.</p> <p>But... K8s 1.15 will bring <a href="https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-481023838" rel="noreferrer"><code>kubectl rollout restart</code></a>... that is when <a href="https://github.com/kubernetes/kubernetes/pull/77423" rel="noreferrer">PR 77423</a> is accepted and merged.</p> <blockquote> <p><code>kubectl rollout restart</code> now works for daemonsets and statefulsets.</p> </blockquote>
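<p>Until <code>rollout restart</code> is available, the patch workaround from that issue can be scripted. Here is a sketch; the deployment name and the <code>restartedAt</code> annotation key are arbitrary examples, the point is that changing any field of the pod template triggers a rolling update:</p>

```shell
# Pre-1.15 workaround: patch a throwaway pod-template annotation so the
# Deployment performs a rolling restart of its pods.
restart_deployment() {
  kubectl patch deployment "$1" -p \
    "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restartedAt\":\"$(date +%s)\"}}}}}"
}
# restart_deployment my-deployment
# On 1.15+ this becomes: kubectl rollout restart deployment/my-deployment
```

Run from a CronJob (with a service account allowed to patch deployments), this gives the weekly or monthly recycle the question asks for.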
<p>I use Kubernetes on Docker for Windows.<br> I want to use Kaniko, but I could not build an image on local Kubernetes.</p> <p>Dockerfile</p> <pre><code>FROM ubuntu:18.04
RUN apt update
RUN apt install -y ssh
</code></pre> <p>kanikopod.yaml</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=/mnt/Dockerfile
    - --context= /mnt
    - --no-push
    name: kaniko
    command:
    - sleep
    - infinity
    volumeMounts:
    - mountPath: /mnt
      name: mount-volume
  restartPolicy: Never
  volumes:
  - name: mount-volume
    persistentVolumeClaim:
      claimName: kaniko-workspace
</code></pre> <p>But a ContainerCannotRun error occurred.</p> <p>The result of <code>kubectl describe pods kaniko</code> is:</p> <pre><code>Name:               kaniko
Namespace:          default
Priority:           0
PriorityClassName:  &lt;none&gt;
Node:               docker-desktop/192.168.65.3
Start Time:         Mon, 06 May 2019 18:13:47 +0900
Labels:             &lt;none&gt;
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"kaniko","namespace":"default"},"spec":{"containers":[{"args":["--dock... 
Status: Pending IP: Containers: kaniko: Container ID: Image: gcr.io/kaniko-project/executor:latest Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sleep infinity Args: --dockerfile=/mnt/Dockerfile --context= /mnt --no-push State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /mnt from mount-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-spjnr (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: mount-volume: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: kaniko-workspace ReadOnly: false default-token-spjnr: Type: Secret (a volume populated by a Secret) SecretName: default-token-spjnr Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4s default-scheduler Successfully assigned default/kaniko to docker-desktop </code></pre> <p>kubectl logs kaniko<br> But there was no output.</p> <p>I think "--destination=" option is needed for kaniko, but I cannot find the information.<br> What should I do?</p>
<p>Try removing these lines:</p> <pre><code>    command:
    - sleep
    - infinity
</code></pre>
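<p>With the <code>command</code> override removed, the container runs the Kaniko executor entrypoint directly and the <code>args</code> are actually used. The container section of the pod would then look roughly like this (note also that <code>--context=/mnt</code> is written here without the stray space from the original manifest):</p>

```yaml
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=/mnt/Dockerfile
    - --context=/mnt
    - --no-push
    volumeMounts:
    - mountPath: /mnt
      name: mount-volume
```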
<p>First off, I am totally new to deploying CICD builds.</p> <p>I started with a successful setup of Jenkins X on an AWS EKS Cluster via this <a href="https://aws.amazon.com/blogs/opensource/continuous-delivery-eks-jenkins-x/#" rel="nofollow noreferrer">guide</a>.</p> <p>I am able to run the pipeline via GitHub, and it builds successfully on a normal jx quickstart.</p> <p>Problems arose when I started pushing my Node Express application.</p> <p>On an Alpine Node base, my Dockerfile looked like this:</p> <pre><code>FROM node:10.15.3-alpine

RUN mkdir -p /app/node_modules &amp;&amp; chown -R node:node /app
WORKDIR /app
COPY package*.json ./
RUN npm ci --prod

FROM alpine:3.7
COPY --from=0 /usr/bin/node /usr/bin/
COPY --from=0 /usr/lib/libgcc* /usr/lib/libstdc* /usr/lib/
WORKDIR /app
COPY --from=0 /app .

EXPOSE 3000
CMD ["node", "server.js"]
</code></pre> <p>And it terminated with an error:</p> <pre><code>Step 5/14 : RUN npm ci --prod
 ---&gt; Running in c7f038a80dcc
npm ERR! code EAI_AGAIN
npm ERR! errno EAI_AGAIN
npm ERR! request to https://registry.npmjs.org/express/-/express-4.16.4.tgz failed, reason: getaddrinfo EAI_AGAIN registry.npmjs.org registry.npmjs.org:443
time="2019-03-28T08:26:00Z" level=fatal msg="build failed: building [community]: build artifact: The command '/bin/sh -c npm ci --prod' returned a non-zero code: 1"
</code></pre> <p>I tried using a non-Alpine base, and this is how it looked:</p> <pre><code>FROM node:10-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV PORT 3000
EXPOSE 3000
CMD ["npm", "start"]
</code></pre> <p>But then the problem was that the build hangs (or takes very long) when it hits the RUN npm install step.</p> <p>I have scoured for possible answers and duplicate questions, but to no avail, so as a last resort I am asking here.</p> <p>Honestly, I have no idea what's going on.</p>
<p>I managed to solve this issue by enabling the docker bridge network when bootstrapping the EKS worker nodes.</p> <pre><code>#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --enable-docker-bridge true 'your-cluster-name'
</code></pre> <p>More detail in this GitHub issue: <a href="https://github.com/awslabs/amazon-eks-ami/issues/183" rel="nofollow noreferrer">https://github.com/awslabs/amazon-eks-ami/issues/183</a></p>
<p>We have several pods (as service/deployments) in our k8s workflow that are dependent on each other, such that if one goes into a <code>CrashLoopBackOff</code> state, then all these services need to be redeployed. </p> <p>Instead of having to manually do this, is there a programatic way of handling this?</p> <p>Of course we are trying to figure out why the pod in question is crashing.</p>
<p>The first thing to do is make sure that pods are started in the correct sequence. This can be done using initContainers, like this:</p> <pre><code>spec:
  initContainers:
  - name: waitfor
    image: jwilder/dockerize
    args:
    - -wait
    - "http://config-srv/actuator/health"
    - -wait
    - "http://registry-srv/actuator/health"
    - -wait
    - "http://rabbitmq:15672"
    - -timeout
    - 600s
</code></pre> <p>Here your pod will not start until all the services in the list are responding to HTTP probes.</p> <p>Next, you may want to define a liveness probe that periodically executes curl against the same services:</p> <pre><code>spec:
  livenessProbe:
    exec:
      command:
      - /bin/sh
      - -c
      - curl http://config-srv/actuator/health &amp;&amp; curl http://registry-srv/actuator/health &amp;&amp; curl http://rabbitmq:15672
</code></pre> <p>Now if any of those services fail, your pod will fail the liveness probe, be restarted, and wait for the services to come back online.</p> <p>That's just an example of how it can be done. In your case the checks can be different, of course.</p>
<p>I am using Kubernetes within Google Cloud Kubernetes Engine and have setup the following: - Nginx docker image (nginx:latest), which is hosting a web application - Kubernetes Deployment (yaml file) - Kubernetes Service (yaml file) - Kubernetes Secret with existing key and certificates (Wildcard PositiveSSL) - Kubernetes Ingress</p> <p>Currently I have both HTTP and HTTPS working. However, I want to redirect any and all HTTP calls to HTTPS automatically, but don't seem to get it to be working.</p> <p>I have tried many variations of the conf and script files below, and it doesn't seem to be able to redirect HTTP to HTTPS.</p> <p>Any idea what I might be doing wrong here?</p> <p>Please see below for my conf, yaml and docker files.</p> <p>Nginx Conf:</p> <pre><code>server { listen 80; charset utf-8; root /usr/share/nginx/html; location / { proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_redirect http:// https://; proxy_pass http://portal.domain.com; proxy_http_version 1.1; proxy_request_buffering off; } } server { listen 443 ssl; charset utf-8; root /usr/share/nginx/html; ssl_certificate /etc/nginx/ssl/domain_com_full.crt; ssl_certificate_key /etc/nginx/ssl/domain_com.key; location / { proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_redirect http:// https://; proxy_pass http://portal.domain.com; proxy_http_version 1.1; proxy_request_buffering off; } } </code></pre> <p>Docker file:</p> <pre><code>FROM nginx:latest COPY nginx.conf /etc/nginx/conf.d/default.conf COPY domain_com_full.crt /etc/nginx/ssl/domain_com_full.crt COPY domain_com.key /etc/nginx/ssl/domain_com.key COPY dist /usr/share/nginx/html EXPOSE 443 80 </code></pre> <p>Deployment YAML (I use a script to fill in the 
revision part of the image):</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: domain-frontend-prd spec: replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 selector: matchLabels: run: domain-frontend-prd template: metadata: labels: run: domain-frontend-prd spec: containers: - name: domain-frontend-image image: eu.gcr.io/domain-service/domain-frontend-image:{{REVISION_ID}} ports: - containerPort: 80 - containerPort: 443 readinessProbe: httpGet: path: / port: 80 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 </code></pre> <p>Service YAML:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: domain-frontend-service-prd spec: type: NodePort selector: run: domain-frontend-prd ports: - protocol: TCP port: 443 targetPort: 443 name: https-port - protocol: TCP port: 80 targetPort: 80 name: http-port </code></pre> <p>Ingress YAML (The Secret is working, as the HTTPS call also works + static IP is also there and working):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: domain-frontend-ingress-prd annotations: kubernetes.io/ingress.global-static-ip-name: kubernetes-ingress nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/ssl-redirect: "true" spec: tls: - hosts: - portal.domain.com secretName: domain-tls backend: serviceName: domain-frontend-service-prd servicePort: 80 rules: - host: portal.domain.com http: paths: - path: / backend: serviceName: domain-frontend-service-prd servicePort: 80 </code></pre>
<p>Through extensive searching I have found out that the standard Ingress controller in Google Kubernetes Engine apparently does not support redirecting HTTP to HTTPS.</p> <p>In order to redirect HTTP traffic to HTTPS, you need to install the NGINX Ingress controller according to this tutorial/documentation:</p> <p><a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">https://cloud.google.com/community/tutorials/nginx-ingress-gke</a></p> <p>This has resolved my issue.</p>
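<p>Once the NGINX controller is installed, an Ingress along these lines (names taken from the question; the annotations are the NGINX controller's, not GCE's) opts out of the GCE controller and enables the redirect — a sketch, to be checked against the controller's documentation:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: domain-frontend-ingress-prd
  annotations:
    kubernetes.io/ingress.class: "nginx"               # use the NGINX controller, not GCE
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # redirect HTTP to HTTPS
spec:
  tls:
  - hosts:
    - portal.domain.com
    secretName: domain-tls
  rules:
  - host: portal.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: domain-frontend-service-prd
          servicePort: 80
```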
<p>I have created an EKS cluster by following the getting started guide by AWS, with k8s version 1.11. I have not changed any configs as such for kube-dns. If I create a service, let's say myservice, I would like to access it from some other EC2 instance which is not part of this EKS cluster but is in the same VPC. Basically, I want this DNS to work as my DNS server for instances outside the cluster as well. How will I be able to do that?</p> <p>I have seen that the kube-dns service gets a cluster IP but doesn't get an external IP; is that necessary for me to be able to access it from outside the cluster?</p> <p>This is the current response:</p> <pre><code>[ec2-user@ip-10-0-0-149 ~]$ kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   172.20.0.10   &lt;none&gt;        53/UDP,53/TCP   4d
</code></pre> <p>My VPC subnet is 10.0.0.0/16.</p> <p>I am trying to reach this 172.20.0.10 IP from other instances in my VPC and I am not able to, which I think is expected because my VPC is not aware of any subnet range that contains 172.20.0.10. But then how do I make this DNS service accessible to all my instances in the VPC?</p>
<p>The problem you are facing is mostly not related to DNS. As you said, you cannot reach the ClusterIP from your other instances because it belongs to the internal cluster network and is unreachable from outside of Kubernetes. </p> <p>Instead of going in the wrong direction, I recommend you make use of <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx Ingress</a>, which allows you to create an Nginx backed by an AWS Load Balancer and expose your services on that Nginx.</p> <p>You can further integrate your Ingresses with the <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">External-DNS</a> addon, which will allow you to dynamically create DNS records in Route 53. </p> <p>This will take some time to configure, but this is the Kubernetes way.</p>
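<p>As an illustration (the hostname and service name are hypothetical), External-DNS watches Ingress resources and creates matching Route 53 records from their hosts, so exposing a service to the rest of the VPC can be as small as:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myservice.example.com   # External-DNS creates this record in Route 53
    http:
      paths:
      - backend:
          serviceName: myservice
          servicePort: 80
```

Instances in the VPC then reach the service through ordinary DNS and the load balancer, with no dependency on kube-dns.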
<p>I have created a new cluster in GKE with version 1.10.5-gke.0. I see that my applications cannot reach IPs in the same network, basically instances running on compute.</p> <p>I have ssh'd to one of the Kubernetes nodes, and by using the toolbox included i can ping those IP addresses, but I can't if I try from a container running on this cluster.</p> <p>I saw that since 1.10 google disables access scopes for compute &amp; storage, and even if I enable those scopes I still get the same.</p> <p>I find it a bit puzzling, as this used to work for all other clusters in the past without any extra config needed</p> <p>Am I missing something here?</p>
<p>An easy way of fixing this is using the Google Cloud Console. </p> <p>Go to </p> <blockquote> <p>Navigation Menu -> VPC network -> Firewall rules</p> </blockquote> <p>Normally when a cluster is created, a number of rules are created automatically with certain prefixes and suffixes. Look in the table for a rule with a <strong>gke-</strong> prefix and an <strong>-all</strong> suffix, e.g. gke-[my_cluster_name]-all. You'll notice that this rule has the source ranges for your pods within the cluster and quite a few protocols (tcp, udp, icmp, esp, etc.) allowed.</p> <p><strong>Select this rule</strong> and go to <strong>Edit</strong>. Under <strong>Targets</strong>, open the drop-down and change it to <strong>All instances in the network</strong>. </p> <p>Alternatively, you can choose <strong>Specified target tags</strong> or <strong>Specified service account</strong>, inputting the correct values below, like the correct service account for the compute engine instance you're trying to reach. </p> <p>You can also look here if your Kubernetes version is 1.9.x or later for another alternative way. <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#autofirewall" rel="noreferrer">Troubleshooting</a></p> <p>Hope all this helps. </p>
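<p>The same fix can be scripted with the gcloud CLI. This sketch creates a fresh rule instead of editing the auto-generated one; the rule name, network and pod CIDR are placeholders for your cluster's values:</p>

```shell
# Allow the cluster's pod range to reach all instances in the network.
# Read your actual pod CIDR from the cluster description before using this.
allow_pods_to_vms() {
  rule_name="$1"; pod_cidr="$2"
  gcloud compute firewall-rules create "$rule_name" \
    --network=default \
    --direction=INGRESS \
    --source-ranges="$pod_cidr" \
    --allow=tcp,udp,icmp
}
# allow_pods_to_vms gke-mycluster-pods-to-vms 10.4.0.0/14
```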
<p>I made a Jenkins Deployment on Kubernetes. It works fine, but when I try to show container logs an error occurs.</p> <pre><code>$ kubectl logs jenkins-5b587f57b8-tfz2f
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
</code></pre>
<p>A good working <a href="https://www.thoutam.com/2018/08/13/jenkins-copy_reference_file-log-permission-denied-issue-in-k8s/" rel="noreferrer">solution</a> I found is adding a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">Security Context</a>:</p> <pre><code>securityContext:
  fsGroup: 1000
  runAsUser: 0
</code></pre> <p>to the Jenkins Deployment spec.</p>
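<p>For clarity, the block goes under the pod template's spec, so the mounted Jenkins home is group-owned by GID 1000; the container name and image below are assumptions for a typical Jenkins deployment:</p>

```yaml
spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000   # group ownership applied to mounted volumes (e.g. /var/jenkins_home)
        runAsUser: 0
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
```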
<p>I have a namespace <code>namespace</code> which has ~10-15 deployments. I am creating a big yaml file and applying it on a "deploy".</p> <p>How do I validate, wait, watch, or block until all deployments have been rolled out?</p> <p>Currently I am thinking of:</p> <ul> <li>get the list of deployments</li> <li>for each deployment, make an API call to get its status</li> <li>once all deployments are "green", end the process, signaling the deployment/ship is done.</li> </ul> <p>What are the statuses of deployments? Is there already a similar tool that can do this? <a href="https://github.com/Shopify/kubernetes-deploy" rel="nofollow noreferrer">https://github.com/Shopify/kubernetes-deploy</a> is kind of what I am searching for, but it forces a yml structure and so on.</p> <p>What would be the best approach?</p>
<p>Set a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">readiness probe</a> and use <code>kubectl rollout status deployment &lt;deployment_name&gt;</code> to see the <code>deployment</code> rollout status</p>
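<p>Scripted across a whole namespace, that might look like the following sketch (the namespace name is a placeholder; <code>kubectl rollout status</code> blocks until each deployment finishes or its rollout fails):</p>

```shell
# Block until every deployment in the namespace has finished rolling out.
wait_for_rollouts() {
  ns="$1"
  for d in $(kubectl get deployments -n "$ns" -o name); do
    kubectl rollout status "$d" -n "$ns" || return 1
  done
}
# wait_for_rollouts my-namespace
```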
<p>We are using Kubernetes and we need to do "Smart partitioning" of data. We want to split the space between 1 to 1000 between n running Pods, And each pod should know which part of the space is his to handle (for pooling partitioned tasks).</p> <p>So for example, if we have 1 pod he will handle the whole space from 1-1000. </p> <p>When we scale out to 3 pods, each of them will get the same share.</p> <p>Pod 1 - will handle 1-333</p> <p>Pod 2 - 334-667</p> <p>Pod 3 667-1000</p> <p>Right now the best way that we find to handle this issue is to create a Stateful-set, that pooling the number of running pods and his instance number and decide which part of the space he needs to handle. Is there a smarter/built-in way in Kubernetes to partition the space between nodes in this manner?</p> <p>Service fabric has this feature built-in. </p>
<p>There are <strong>NO</strong> native tools for scaling at the <a href="https://learn.microsoft.com/en-in/azure/service-fabric/service-fabric-concepts-scalability#scaling-at-the-partition-level" rel="nofollow noreferrer">partition level</a> in K8s yet.</p> <p>Only custom solutions similar to what you have came up with in your original post.</p>
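<p>The StatefulSet approach from the question can be sketched as a few lines of shell run at pod start-up; the pod name and replica count below are fixed example values (in a real pod the name would come from the Downward API and the replica count from config):</p>

```shell
# Derive this pod's slice of the 1..1000 keyspace from its StatefulSet ordinal.
POD_NAME="worker-1"
REPLICAS=3
TOTAL=1000
ORDINAL="${POD_NAME##*-}"                        # StatefulSet pods are named "<set>-<ordinal>"
START=$(( ORDINAL * TOTAL / REPLICAS + 1 ))
END=$(( (ORDINAL + 1) * TOTAL / REPLICAS ))
echo "pod $POD_NAME handles keys $START-$END"    # -> pod worker-1 handles keys 334-666
```

Each replica runs the same arithmetic and gets a disjoint range (1-333, 334-666, 667-1000 for three replicas); scaling the StatefulSet and restarting the pods re-partitions the space.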
<p>I have set up the official Kubernetes Python Client (<a href="https://github.com/kubernetes-client/python" rel="noreferrer">https://github.com/kubernetes-client/python</a>).</p> <p>I am running Python code to connect to a Kubernetes cluster and it's throwing an <strong>error</strong>:</p> <blockquote> <p>urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='****', port=6443): Max retries exceeded with url: /api/v1/namespaces/default/pods (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out',))</p> </blockquote> <p><strong>Installed requirements:</strong></p> <pre><code>cachetools 3.1.0
certifi 2019.3.9
chardet 3.0.4
google-auth 1.6.3
idna 2.8
ipaddress 1.0.22
kubernetes 10.0.0-snapshot
oauthlib 3.0.1
pip 19.0.3
pyasn1 0.4.5
pyasn1-modules 0.2.4
python-dateutil 2.8.0
PyYAML 5.1
requests 2.21.0
requests-oauthlib 1.2.0
rsa 4.0
setuptools 41.0.0
six 1.12.0
urllib3 1.24.1
websocket-client 0.56.0
wheel 0.33.1
</code></pre> <p><code>kubectl</code> is working fine with the same config <code>$HOME/.kube/config</code> and the same proxy settings.</p> <p>Below is my code:</p> <pre><code>from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
</code></pre>
<p>In my case simply loading the config via <code>load_kube_config</code> didn't work either. I ended up first running a proxy:</p> <pre><code>kubectl proxy --port=8888
</code></pre> <p>and then changing the host using the <code>Configuration</code> object:</p> <pre><code>configuration = client.Configuration()
configuration.host = "http://localhost:8888"
api_client = client.CoreV1Api(client.ApiClient(configuration))
</code></pre>
<p>I have a <strong>console</strong> app in .NET Core. How do I implement a Kubernetes readiness/liveness probe?</p> <p>My app processes RabbitMQ messages in a loop and doesn't listen on any HTTP ports. </p>
<p>In this case, it may be better to create a sidecar container in the same pod. This container would host an ASP.NET Core site, specifically to leverage the new health checks API recently introduced in .NET Core 2.2. There are extensions for monitoring RabbitMQ connectivity. You would then expose the ASP.NET Core website for health checks.</p> <p><a href="https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/implement-resilient-applications/monitor-app-health" rel="nofollow noreferrer">ASP.NET Core Health Monitoring</a></p> <p><a href="https://www.nuget.org/packages/AspNetCore.HealthChecks.Rabbitmq/" rel="nofollow noreferrer">AspNetCore.HealthChecks.Rabbitmq</a></p>
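<p>A rough sketch of that pod layout (the image names, port and endpoint path are assumptions; the <code>/health</code> endpoint would be served by the ASP.NET Core health checks middleware in the sidecar):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
  - name: worker                         # the .NET Core console app, no HTTP port
    image: myregistry/worker:latest
  - name: health-sidecar                 # minimal ASP.NET Core site exposing /health
    image: myregistry/worker-health:latest
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      periodSeconds: 30
```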
<p>I'm setting up traefik 2.0-alpha with Let's Encrypt certificates inside GKE, but now i'm in stupor with "server.go:3012: http: TLS handshake error from 10.32.0.1:2244: remote error: tls: bad certificate" error in container logs.</p> <p>Connections via http working fine. When i try to connect via https, traefik return 404 with its own default certificates. I found same problem for traefik v1 on github. Solution was in adding to config:</p> <pre><code>InsecureSkipVerify = true passHostHeader = true </code></pre> <p>It doesn't help me.</p> <p>Here is my configmap</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: traefik-ingress-configmap namespace: kube-system data: traefik.toml: | [Global] sendAnonymousUsage = true debug = true logLevel = "DEBUG" [ServersTransport] InsecureSkipVerify = true [entrypoints] [entrypoints.web] address = ":80" [entryPoints.web-secure] address = ":443" [entrypoints.mongo-port] address = ":11111" [providers] [providers.file] [tcp] # YAY! [tcp.routers] [tcp.routers.everything-to-mongo] entrypoints = ["mongo-port"] rule = "HostSNI(`*`)" # Catches every request service = "database" [tcp.services] [tcp.services.database.LoadBalancer] [[tcp.services.database.LoadBalancer.servers]] address = "mongodb-service.default.svc:11111" [http] [http.routers] [http.routers.for-jupyterx-https] entryPoints = ["web-secure"] # won't listen to entrypoint mongo-port # rule = "Host(`clients-ui.ddns.net`)" # rule = "Path(`/jupyterx`)" # abo /jupyterx/* rule = "PathPrefix(`/jupyterx`)" service = "jupyterx" [http.routers.for-jupyterx.tls] [http.routers.for-jupyterx-http] entryPoints = ["web"] # won't listen to entrypoint mongo-port # rule = "Host(`clients-ui.ddns.net`)" # rule = "Path(`/jupyterx`)" # abo /jupyterx/* rule = "PathPrefix(`/jupyterx`)" service = "jupyterx" [http.services] [http.services.jupyterx.LoadBalancer] PassHostHeader = true # InsecureSkipVerify = true [[http.services.jupyterx.LoadBalancer.servers]] url = 
"http://jupyter-service.default.svc/" weight = 100 [acme] # every router with TLS enabled will now be able to use ACME for its certificates email = "account@mail.com" storage = "acme.json" # onHostRule = true # dynamic generation based on the Host() &amp; HostSNI() matchers caServer = "https://acme-staging-v02.api.letsencrypt.org/directory" [acme.httpChallenge] entryPoint = "web" # used during the challenge </code></pre> <p>And DaemonSet yaml:</p> <pre><code># --- # apiVersion: v1 # kind: ServiceAccount # metadata: # name: traefik-ingress-controller # namespace: kube-system --- kind: DaemonSet apiVersion: extensions/v1beta1 metadata: name: traefik-ingress-controller namespace: kube-system labels: k8s-app: traefik-ingress-lb spec: template: metadata: labels: k8s-app: traefik-ingress-lb name: traefik-ingress-lb spec: serviceAccountName: traefik-ingress-controller terminationGracePeriodSeconds: 60 volumes: # - name: traefik-ui-tls-cert # secret: # secretName: traefik-ui-tls-cert - name: traefik-ingress-configmap configMap: name: traefik-ingress-configmap containers: - image: traefik:2.0 # The official v2.0 Traefik docker image name: traefik-ingress-lb ports: - name: http containerPort: 80 hostPort: 80 - name: web-secure containerPort: 443 hostPort: 443 - name: admin containerPort: 8080 - name: mongodb containerPort: 11111 volumeMounts: - mountPath: "/config" name: "traefik-ingress-configmap" args: - --api - --configfile=/config/traefik.toml --- kind: Service apiVersion: v1 metadata: name: traefik-ingress-service namespace: kube-system spec: selector: k8s-app: traefik-ingress-lb ports: - protocol: TCP port: 80 name: web - protocol: TCP port: 443 name: web-secure - protocol: TCP port: 8080 name: admin - port: 11111 protocol: TCP name: mongodb type: LoadBalancer loadBalancerIP: 1.1.1.1 </code></pre> <p>Have any suggestions, how to fix it?</p>
<p>Due to the lack of manuals for traefik 2.0-alpha, the config file was written using only the manual from the official Traefik page. The "routers for HTTP &amp; HTTPS" configuration example here <a href="https://docs.traefik.io/v2.0/routing/routers/" rel="nofollow noreferrer">https://docs.traefik.io/v2.0/routing/routers/</a> looks like:</p> <pre><code>[http.routers]

  [http.routers.Router-1-https]
    rule = "Host(`foo-domain`) &amp;&amp; Path(`/foo-path/`)"
    service = "service-id"
    [http.routers.Router-1.tls]  # will terminate the TLS request

  [http.routers.Router-1-http]
    rule = "Host(`foo-domain`) &amp;&amp; Path(`/foo-path/`)"
    service = "service-id"
</code></pre> <p>But a working config looks like:</p> <pre><code>[http.routers]

  [http.routers.Router-1-https]
    rule = "Host(`foo-domain`) &amp;&amp; Path(`/foo-path/`)"
    service = "service-id"
    [http.routers.Router-1-https.tls]  # will terminate the TLS request

  [http.routers.Router-1-http]
    rule = "Host(`foo-domain`) &amp;&amp; Path(`/foo-path/`)"
    service = "service-id"
</code></pre> <p>So, in my config the line</p> <pre><code>[http.routers.for-jupyterx.tls]
</code></pre> <p>should be changed to</p> <pre><code>[http.routers.for-jupyterx-https.tls]
</code></pre>
<p>I would like to add an iSCSI volume to a pod as in <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/iscsi" rel="nofollow noreferrer">this example</a>. I have already prepared an iSCSI target on a Debian server and installed <code>open-iscsi</code> on all my worker nodes. I have also confirmed that I can mount the iSCSI target on a worker node with command line tools (i.e. still outside Kubernetes). This works fine. For simplicity, there is no authentication (CHAP) in play yet, and there is already an <code>ext4</code> file system present on the target.</p> <p>I would now like for Kubernetes 1.14 to mount the same iSCSI target into a pod with the following manifest:</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: iscsipd spec: containers: - name: iscsipd-ro image: kubernetes/pause volumeMounts: - mountPath: "/mnt/iscsipd" name: iscsivol volumes: - name: iscsivol iscsi: targetPortal: 1.2.3.4 # my target iqn: iqn.2019-04.my-domain.com:lun1 lun: 0 fsType: ext4 readOnly: true </code></pre> <p>According to <code>kubectl describe pod</code> this works in the initial phase (<code>SuccessfulAttachVolume</code>), but then fails (<code>FailedMount</code>). The exact error message reads:</p> <pre><code>Warning FailedMount ... Unable to mount volumes for pod "iscsipd_default(...)": timeout expired waiting for volumes to attach or mount for pod "default"/"iscsipd". list of unmounted volumes=[iscsivol]. list of unattached volumes=[iscsivol default-token-7bxnn] Warning FailedMount ... MountVolume.WaitForAttach failed for volume "iscsivol" : failed to get any path for iscsi disk, last err seen: Could not attach disk: Timeout after 10s </code></pre> <p>How can I further diagnose and overcome this problem?</p> <p><strong>UPDATE</strong> In <a href="https://github.com/rancher/rancher/issues/12433" rel="nofollow noreferrer">this</a> related issue the solution consisted of using a numeric IP address for the target.
However, this does not help in my case, since I am already using a <code>targetPortal</code> of the form <code>1.2.3.4</code> (I have also tried both with and without port number 3260).</p> <p><strong>UPDATE</strong> Stopping <code>scsid.service</code> and/or <code>open-iscsi.service</code> (as suggested <a href="https://monach.us/operations/rke-and-iscsi-race-condition/" rel="nofollow noreferrer">here</a>) did not make a difference either.</p> <p><strong>UPDATE</strong> The error apparently gets triggered in <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/iscsi/iscsi_util.go" rel="nofollow noreferrer"><code>pkg/volume/iscsi/iscsi_util.go</code></a> if <code>waitForPathToExist(&amp;devicePath, multipathDeviceTimeout, iscsiTransport)</code> fails. However, what is strange is that when it is triggered, the file at <code>devicePath</code> (<code>/dev/disk/by-path/ip-...-iscsi-...-lun-...</code>) does actually exist on the node.</p> <p><strong>UPDATE</strong> I have used this procedure for defining a simple iSCSI target for these test purposes:</p> <pre><code>pvcreate /dev/sdb vgcreate iscsi /dev/sdb lvcreate -L 10G -n iscsi_1 iscsi apt-get install tgt cat &gt;/etc/tgt/conf.d/iscsi_1.conf &lt;&lt;EOL &lt;target iqn.2019-04.my-domain.com:lun1&gt; backing-store /dev/mapper/iscsi-iscsi_1 initiator-address 5.6.7.8 # my cluster node #1 ... # my cluster node #2, etc. &lt;/target&gt; EOL systemctl restart tgt tgtadm --mode target --op show </code></pre>
<p>This is probably because of an authentication issue with your iSCSI target. </p> <p>If you don't use CHAP authentication yet, you still have to disable authentication explicitly. For example, if you use <code>targetcli</code>, you can run the commands below to disable it.</p> <pre><code>$ sudo targetcli /&gt; /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute authentication=0 # will disable auth /&gt; /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute generate_node_acls=1 # will force to use tpg1 auth mode by default </code></pre> <p>If this doesn't help you, please share your iSCSI target configuration or the guide that you followed.</p>
<p>I have deployed an Istio service mesh in my AKS cluster. I have UI and backend services, both configured in Istio for service communication. I am able to invoke the backend service through Istio, and there is no problem accessing my backend services.</p> <p>But I am facing an issue in the test-ui pod. When I look at the pod status:</p> <pre><code>NAME READY STATUS RESTARTS AGE Test-api-deployment-59f6c6f67-ml4xm 2/2 Running 0 3d21h Test-ui-deployment-b54fd89b-2ndsv 1/2 Running 0 52m </code></pre> <p>In the above status for my UI pod, one of the Istio containers is not in a ready state to serve requests. When I looked into the container state, I got the error below:</p> <pre><code>Warning Unhealthy 2m24s (x299 over 12m) kubelet, aks-wmsdevk8s-25812762-4 Readiness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <p>I see there is an open <a href="https://github.com/istio/istio/issues/12446" rel="nofollow noreferrer">issue in GitHub</a>. Is there any workaround for this?</p> <p>EDIT</p> <p>My Istio version:</p> <blockquote> <p>version.BuildInfo{Version:"1.1.5", GitRevision:"9b6d31b74d1c0cc9358cc82d395b53f71393326b", User:"root", Host:"3e29fde4-6c3f-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.4-10-g9b6d31b"}</p> </blockquote> <pre><code>istio proxy version - Image: docker.io/istio/proxyv2:1.1.2 </code></pre>
<p>It should work with Istio 1.1.5. I suppose you didn't configure Istio to rewrite HTTP probes. You need to install Istio with <code>sidecarInjectorWebhook.rewriteAppHTTPProbe=true</code> (<a href="https://preliminary.istio.io/help/ops/setup/app-health-check/#configure-istio-to-rewrite-liveness-http-probes" rel="nofollow noreferrer">source</a>).</p> <p>You can check your <code>istio-sidecar-injector</code> configmap; it should have <code>rewriteAppHTTPProbe: true</code>.</p> <p>P.S. It works for me with Istio 1.1.2, I believe.</p>
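<p>For reference, when installing with Helm this flag sits under the injector values. A sketch of the corresponding values-file fragment (key path as in the Istio 1.1 Helm chart; verify against your chart version):</p>

```yaml
# values override passed to the Istio Helm chart, e.g.
#   helm template install/kubernetes/helm/istio ... -f values-override.yaml
sidecarInjectorWebhook:
  # let the injector rewrite HTTP liveness/readiness probes
  # so kubelet probes go through the sidecar correctly
  rewriteAppHTTPProbe: true
```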
<p>We stand up a lot of clusters for testing/PoC/dev, and it's up to us to remember to delete them.</p> <p>What I would like is a way of setting a TTL on an entire GKE cluster and having it get deleted/purged automatically.</p> <p>I could tag the clusters with a timestamp at creation and have an external process running on a schedule that reaps old clusters, but it'd be great if I didn't have to do that. It might be the only way, but maybe there is a GKE/k8s feature for this?</p> <p>Is there a way to have the cluster delete itself without relying on an external service? I suppose it could spawn a cloud function itself, but I'm wondering if there is a native GKE/k8s feature to do this more elegantly.</p>
<p>You can spawn a GKE cluster with alpha features. Such clusters exist for one month at maximum and are then auto-deleted. Read more: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters</a></p>
<p>I'm trying to deploy Node.js code to a Kubernetes cluster, and I'm seeing in my reference (provided by the maintainer of the cluster) that the <code>yaml</code> files are all prefixed by numbers:</p> <pre><code>00-service.yaml 10-deployment.yaml etc. </code></pre> <p>I don't think that this file format is specified by <code>kubectl</code>, but I found another example of it online: <a href="https://imti.co/kibana-kubernetes/" rel="nofollow noreferrer">https://imti.co/kibana-kubernetes/</a> (but the numbering scheme isn't the same).</p> <p>Is this a Kubernetes thing? A file naming convention? Is it to keep files ordered in a folder?</p>
<p>This is to handle the resource creation order. There's an open issue in Kubernetes: <a href="https://github.com/kubernetes/kubernetes/issues/16448#issue-113878195" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/16448#issue-113878195</a></p> <p>tl;dr <code>kubectl apply -f k8s/*</code> should handle the order, but it does not.</p> <p>However, other than the namespace, I cannot think of a case where the order matters. Every relation except the namespace is handled by label selectors, so it fixes itself once all resources are deployed. You can just do 00-namespace.yaml and leave everything else without prefixes. Or skip prefixes altogether unless you really hit the issue (I never faced it).</p>
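<p>The numeric prefixes simply exploit lexical file ordering: when kubectl processes a directory or a shell glob, the file names are handled in sorted order, so a lower prefix is created first. A quick demonstration of the ordering itself:</p>

```shell
# numeric prefixes control processing order because file names
# are handled in lexical (sorted) order - '0' < '1' < '2' in ASCII
printf '%s\n' 10-deployment.yaml 20-service.yaml 00-namespace.yaml | sort
# 00-namespace.yaml
# 10-deployment.yaml
# 20-service.yaml
```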
<p>I am setting up a k8s test cluster environment, but the pods deployed by k8s can't reach external IP addresses.</p> <p>The pod IP address is 173.16.2.5/24. The node has IP 10.168.99.198/24 on interface eth0 and 173.16.2.1/24 on the cni network.</p> <ol> <li>Ping 10.168.99.197 from the node works:</li> </ol> <pre><code>#ping 10.168.99.197 PING 10.168.99.197 (10.168.99.197) 56(84) bytes of data. 64 bytes from 10.168.99.197: icmp_seq=1 ttl=64 time=0.120 ms </code></pre> <ol start="2"> <li>But pinging the same IP from the busybox pod fails:</li> </ol> <pre><code>#ping 10.168.99.197 PING 10.168.99.197 (10.168.99.197): 56 data bytes &lt;-- no response </code></pre> <p>Route on the busybox container created by k8s:</p> <pre><code># ip route default via 173.16.2.1 dev eth0 10.244.0.0/16 via 173.16.2.1 dev eth0 173.16.2.0/24 dev eth0 scope link src 173.16.2.5 </code></pre> <p>If I start a busybox container that was not created by k8s, the network is fine. Route on the busybox container created by docker:</p> <pre><code># ip route default via 172.17.0.1 dev eth0 172.17.0.0/16 dev eth0 scope link src 172.17.0.2 </code></pre> <pre><code># ping 10.168.99.197 PING 10.168.99.197 (10.168.99.197): 56 data bytes 64 bytes from 10.168.99.197: seq=0 ttl=63 time=0.554 ms </code></pre> <p>Route table on the node:</p> <pre><code># route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default localhost 0.0.0.0 UG 0 0 0 eth0 10.168.99.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 link-local 0.0.0.0 255.255.0.0 U 1002 0 0 eth0 172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0 173.16.0.0 173-16-0-0.clie 255.255.255.0 UG 0 0 0 flannel.1 173.16.1.0 173-16-1-0.clie 255.255.255.0 UG 0 0 0 flannel.1 173.16.2.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0 </code></pre> <p>How can I resolve this problem so that the pods created by k8s can reach external IPs?</p>
<p>The reason the pod is unable to reach external IPs is that the flannel network configuration does not match the cni network; changing the flannel setting resolved this problem:</p> <pre><code># kubectl get configmap -n kube-system -o yaml kube-flannel-cfg ... net-conf.json: | { "Network": "172.30.0.0/16", "Backend": { "Type": "vxlan" } } ... </code></pre>
<p>I want to set up an in-cluster NFS server in Kubernetes to provide shares for my pods (nginx webroot etc.).</p> <p>In theory there should be a persistent volume, a volume claim and the NFS server, which, as I understand it, is a deployment.</p> <p>To use the PV and PVC I need to assign the NFS server's IP address, which I don't know, because it is automatically generated when I expose the NFS server with a service.</p> <p>The same problem appears if I want to deploy the nfs-server deployment itself, because I am using the PVCs as volumes. But I can't deploy the PV and PVCs without giving them the NFS server IP.</p> <p>I think I am lost, maybe you can help me.</p> <ol> <li>PV</li> </ol> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-nfs-pv1 labels: type: local spec: storageClassName: manual capacity: storage: 1Gi accessModes: - ReadWriteMany nfs: path: "/exports/www" server: SERVER_NAME:PORT </code></pre> <ol start="2"> <li>PVC</li> </ol> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-nfs-pv1 labels: type: local spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 500Mi </code></pre> <ol start="3"> <li>NFS-Deployment</li> </ol> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nfs-server spec: replicas: 1 selector: matchLabels: role: nfs-server template: metadata: labels: role: nfs-server spec: containers: - name: nfs-server image: gcr.io/google_containers/volume-nfs:0.8 ports: - name: nfs containerPort: 2049 - name: mountd containerPort: 20048 - name: rpcbind containerPort: 111 securityContext: privileged: true volumeMounts: - mountPath: /exports/www name: pv-nfs-pv1 volumes: - name: pv-nfs-pv1 gcePersistentDisk: pdName: pv-nfs-pv1 # fsType: ext4 </code></pre>
<p>1) You create the NFS-server deployment.</p> <p>2) You expose the NFS-server deployment by creating a service, say "nfs-server", exposing TCP port 2049 (assuming you use NFSv4). </p> <p>3) You create the PV with the following information:</p> <pre><code> nfs: path: /exports/www server: nfs-server </code></pre> <p>4) You create a PVC and mount it wherever you need it.</p>
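<p>A minimal sketch of the service from step 2, assuming the NFS-server pods carry the label <code>role: nfs-server</code> as in the question's deployment (the extra mountd/rpcbind ports mirror the containerPorts there):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-server          # this name is what the PV's `server:` field refers to
spec:
  selector:
    role: nfs-server        # matches the pod labels from the deployment
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
```

<p>Because the PV then references the stable service name instead of a pod IP, the chicken-and-egg problem with the auto-generated IP goes away.</p>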
<p>I used this tutorial: <a href="https://severalnines.com/blog/using-kubernetes-deploy-postgresql" rel="nofollow noreferrer">https://severalnines.com/blog/using-kubernetes-deploy-postgresql</a></p> <p>With my configuration on Kubernetes which is based off the official Docker Image I keep getting:</p> <pre><code>psql -h &lt;publicworkernodeip&gt; -U postgres -p &lt;mynodeport&gt; postgres Password for user postgres: example psql: FATAL: password authentication failed for user "postgres" DETAIL: Role "postgres" does not exist. Connection matched pg_hba.conf line 95: "host all all all md5" </code></pre> <p>yamls:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: postgres-config labels: app: postgres data: POSTGRES_DB: postgres POSTGRES_USER: postgres POSTGRES_PASSWORD: example </code></pre> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres spec: replicas: 1 template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:11 imagePullPolicy: Always ports: - containerPort: 5432 envFrom: - configMapRef: name: postgres-config volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres volumes: - name: postgres persistentVolumeClaim: claimName: postgres-pv-claim </code></pre> <pre><code>apiVersion: v1 kind: Service metadata: name: postgres labels: app: postgres spec: type: NodePort ports: - port: 5432 selector: app: postgres </code></pre> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: postgres-pv-volume labels: type: local app: postgres spec: storageClassName: manual capacity: storage: 12Gi accessModes: - ReadWriteMany hostPath: path: "/mnt/data" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pv-claim labels: app: postgres spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 12Gi </code></pre>
<p>Try to log in using the command below:</p> <p><code>psql -h $(hostname -i) -U postgres</code></p> <pre><code>kubectl exec -it postgres-566fbfb87c-rcbvd sh # env POSTGRES_PASSWORD=example POSTGRES_USER=postgres POSTGRES_DB=postgres # psql -h $(hostname -i) -U postgres Password for user postgres: psql (11.2 (Debian 11.2-1.pgdg90+1)) Type "help" for help. postgres=# \c postgres You are now connected to database "postgres" as user "postgres". postgres=# </code></pre>
<p>I installed <em>istio</em> on <em>kubernetes</em> without helm.</p> <p>I can see pods and services are created in the istio-system namespace.</p> <p>All services like Grafana and Prometheus are created, but their ports are not exposed.</p> <p>As a load balancer service is created, a load balancer is also created in AWS. I wanted to access the Grafana, Prometheus, etc. dashboards from an external network through the newly created load balancer endpoint, but the dashboards are not accessible from the load balancer endpoint.</p> <p>I tried the port forwarding recommended by the istio docs:</p> <pre><code>kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &amp; </code></pre> <p>This works only with <a href="http://localhost:3000" rel="nofollow noreferrer">http://localhost:3000</a> but is not accessible at <a href="http://publicip:3000" rel="nofollow noreferrer">http://publicip:3000</a></p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana ClusterIP 172.20.192.71 &lt;none&gt; 3000/TCP 1m istio-citadel ClusterIP 172.20.111.103 &lt;none&gt; 8060/TCP,15014/TCP 1m istio-egressgateway ClusterIP 172.20.123.112 &lt;none&gt; 80/TCP,443/TCP,15443/TCP 1m istio-galley ClusterIP 172.20.45.229 &lt;none&gt; 443/TCP,15014/TCP,9901/TCP 1m istio-ingressgateway LoadBalancer 172.20.94.157 xxxx-yyyy.us-west-2.elb.amazonaws.com 15020:31336/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32146/TCP,15030:30126/TCP,15031:31506/TCP,15032:30501/TCP,15443:31053/TCP 1m istio-pilot ClusterIP 172.20.27.87 &lt;none&gt; 15010/TCP,15011/TCP,8080/TCP,15014/TCP 1m istio-policy ClusterIP 172.20.222.108 &lt;none&gt; 9091/TCP,15004/TCP,15014/TCP 1m istio-sidecar-injector ClusterIP 172.20.240.198 &lt;none&gt; 443/TCP 1m istio-telemetry ClusterIP 172.20.157.227 &lt;none&gt; 9091/TCP,15004/TCP,15014/TCP,42422/TCP 1m jaeger-agent ClusterIP None &lt;none&gt; 5775/UDP,6831/UDP,6832/UDP 1m jaeger-collector ClusterIP 172.20.92.248
&lt;none&gt; 14267/TCP,14268/TCP 1m jaeger-query ClusterIP 172.20.168.197 &lt;none&gt; 16686/TCP 1m kiali ClusterIP 172.20.236.20 &lt;none&gt; 20001/TCP 1m prometheus ClusterIP 172.20.21.205 &lt;none&gt; 9090/TCP 1m tracing ClusterIP 172.20.231.66 &lt;none&gt; 80/TCP 1m zipkin ClusterIP 172.20.200.32 &lt;none&gt; 9411/TCP 1m </code></pre> <p>As shown above, I'm trying to access the Grafana dashboard using the load balancer as well as port forwarding, but I can't reach the Grafana dashboard.</p>
<p>You can create an <a href="https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway" rel="noreferrer">Istio Gateway and VirtualService</a> in order to forward your requests to the <code>grafana</code> service, running by default on port 3000.</p> <p>Firstly, let's check the <code>grafana</code> and <code>istio-ingressgateway</code> services:</p> <pre><code>kubectl get svc grafana istio-ingressgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana ClusterIP 100.71.67.105 &lt;none&gt; 3000/TCP 18h istio-ingressgateway LoadBalancer 100.64.42.106 &lt;Public IP address&gt; 15020:31766/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32576/TCP,15030:30728/TCP,15031:31037/TCP,15032:31613/TCP,15443:32501/TCP 18h </code></pre> <p>So, we have the <code>grafana</code> service running and listening on port 3000, and the default <code>istio-ingressgateway</code> LoadBalancer service running with an assigned public IP address.</p> <p>Then we create a <code>gateway</code> to use this default LoadBalancer:</p> <pre><code>$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: grafana-gateway namespace: istio-system # Use same namespace with backend service spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 80 name: HTTP protocol: HTTP hosts: - "*" EOF </code></pre> <p>Then configure a route to the <code>grafana</code> service for traffic entering via this gateway:</p> <pre><code>$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: grafana namespace: istio-system # Use same namespace with backend service spec: hosts: - "*" gateways: - grafana-gateway # define gateway name http: - match: - uri: prefix: "/" route: - destination: port: number: 3000 # Backend service port host: grafana # Backend service name EOF </code></pre> <p>Then hit the
<code>http://&lt;public_ip_istio_ingressgateway&gt;</code>, and you should see the Grafana dashboard.</p> <p>I hope it will be helpful for you.</p>
<p>In my 10-machine bare-metal Kubernetes cluster, one service needs to call another HTTPS-based service which is using a self-signed certificate. However, since this self-signed certificate is not added to the pods' trusted root CAs, the call fails, saying it can't validate the x.509 certificate.</p> <p>All pods are based on Ubuntu docker images. However, the way to add a CA cert to the trust list on Ubuntu (using dpkg-reconfigure ca-certificates) is not working on a pod any longer. And of course, even if I succeeded in adding the CA cert to the trusted roots on one pod, it's gone when another pod is kicked off.</p> <p>I searched the Kubernetes documents and, surprisingly, found nothing except configuring a cert to talk to the API service, which is not what I'm looking for. It should be quite a common scenario whenever a secure channel is needed between pods. Any ideas?</p>
<p><strong>Updated edit: read option 3:</strong></p> <p>I can think of 3 options to solve your issue if I were in your scenario:</p> <h1>Option 1)</h1> <p><strong>(The only complete solution I can offer; my other solutions are half solutions, unfortunately. Credit to Paras Patidar/the following <a href="https://medium.com/@paraspatidar/add-ssl-tls-certificate-or-pem-file-to-kubernetes-pod-s-trusted-root-ca-store-7bed5cd683d" rel="nofollow noreferrer">site</a> :)</strong></p> <ol> <li><p>Add the certificate to a config map: let's say your pem file is my-cert.pem <br> <code>kubectl -n &lt;namespace-for-config-map-optional&gt; create configmap ca-pemstore --from-file=my-cert.pem</code></p> </li> <li><p>Mount the configmap as a volume into the container's existing CA root location: mount that config map's file one-to-one into the directory /etc/ssl/certs/, for example:</p> </li> </ol> <pre><code>apiVersion: v1 kind: Pod metadata: name: cacheconnectsample spec: containers: - name: cacheconnectsample image: cacheconnectsample:v1 volumeMounts: - name: ca-pemstore mountPath: /etc/ssl/certs/my-cert.pem subPath: my-cert.pem readOnly: false ports: - containerPort: 80 command: [ &quot;dotnet&quot; ] args: [ &quot;cacheconnectsample.dll&quot; ] volumes: - name: ca-pemstore configMap: name: ca-pemstore </code></pre> <p>So I believe the idea here is that /etc/ssl/certs/ is the location of TLS certs that are trusted by pods, and the subPath method allows you to add a file without wiping out the contents of the folder, which would contain the k8s secrets. <br> If all pods share this mountPath, then you might be able to add a pod preset and configmap to every namespace, but that's in alpha and is only helpful for static namespaces.
(but if this were true, then all your pods would trust that cert.)</p> <h1>Option 2)</h1> <p><strong>(Half solution/idea + doesn't exactly answer your question, but solves your problem. I'm fairly confident it will work in theory; it will require research on your part, but I think you'll find it's the best option :)</strong></p> <p>In theory you should be able to leverage cert-manager + external-dns + Let's Encrypt Free + a public domain name to replace the self-signed cert with a public cert.<br> (cert-manager's end result is to auto-generate a k8s TLS secret signed by Let's Encrypt Free in your cluster; it has a dns01 challenge that can be used to prove you own the domain, which means that you should be able to leverage that solution even without an ingress / even if the cluster is only meant for a private network.)</p> <h1>Option 3)</h1> <p><strong>Edit: (After gaining more hands-on experience with Kubernetes)</strong></p> <p>I believe that switchboard.op's answer is probably the best/should be the accepted answer. This &quot;can&quot; be done at runtime, but I'd argue that it should never be done at runtime; doing it at runtime is super hacky and full of edge cases/there's no universal way of doing it.</p> <p>Also it turns out that my Option 1 is only half correct. Mounting the ca.crt on the pod alone isn't enough. After that file is mounted on the pod, you'd need to run a command to trust it. And that means you probably need to override the pod's startup command. For example, you can't do something like connect to the database (the default startup command) and then run the command that updates the trusted CA certs. You'd have to override the startup file with a hand-jammed one: overwrite the default startup script, update the trusted CA certs, then connect to the database.
And the problem is that Ubuntu, RHEL, Alpine, and others have different locations where you have to mount the CA certs, and sometimes different commands to trust them, so a universal runtime solution that you can apply to all pods in the cluster to update their ca.crt isn't impossible, but would require tons of if statements and mutating webhooks/complexity. (A hand-crafted per-pod solution is very possible, though, if you just need to be able to dynamically update it for a single pod.)</p> <p>switchboard.op's answer is the way I'd do it if I had to do it. Build a new custom docker image with your custom ca.crt baked in as trusted. This is a universal solution and greatly simplifies the YAML side. And it's relatively easy to do on the docker image side.</p>
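<p>To make the bake-it-into-the-image approach concrete, here is a minimal sketch for a Debian/Ubuntu base (the base image tag and the cert filename are assumptions; RHEL-based images instead drop the cert into /etc/pki/ca-trust/source/anchors/ and run update-ca-trust):</p>

```dockerfile
FROM ubuntu:20.04
# copy the self-signed CA into the distro's trusted-cert drop-in directory
COPY my-cert.pem /usr/local/share/ca-certificates/my-cert.crt
# regenerate /etc/ssl/certs at build time so every container
# started from this image trusts the CA, with no runtime hook needed
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && update-ca-certificates \
 && rm -rf /var/lib/apt/lists/*
```

<p>Note that update-ca-certificates expects the file to end in .crt and contain a PEM-encoded certificate.</p>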
<p>My problem is the same: I have an extra laptop and want to use it as a Kubernetes worker node, instead of having the master node and worker node on the same machine (like minikube). The machine with the master node and the laptop with the worker node are on the same LAN, but I have no idea which technology I have to use (OpenShift or something else). Thanks for all.</p>
<p>If you opt for creating highly available clusters on bare metal, just follow the official kubernetes.io tutorial: <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">Creating Highly Available Clusters with kubeadm</a>, taking the option with stacked control plane nodes.</p>
<p>I would like to run telegraf as a DaemonSet, but it complains about volume mounts.</p> <p>I've followed <a href="https://www.howtoforge.com/tutorial/centos-kubernetes-docker-cluster/" rel="nofollow noreferrer">this</a> article to create my kubernetes cluster.</p> <p>Now I am trying to follow <a href="https://www.influxdata.com/blog/monitoring-kubernetes-architecture/" rel="nofollow noreferrer">this</a> article to set up telegraf metrics sending to our kafka endpoint.</p> <p><a href="https://pastebin.com/jzq6aUpE" rel="nofollow noreferrer">This</a> is my yaml configuration.</p> <p>When I try to run kubectl apply -f /tmp/telegraf.yaml I get this error:</p> <pre><code>configmap/telegraf unchanged The DaemonSet "telegraf" is invalid: * spec.template.spec.containers[0].volumeMounts[1].name: Not found: "docker" * spec.template.spec.containers[0].volumeMounts[3].mountPath: Invalid value: "/var/run/docker.sock": must be unique </code></pre> <p>What is the issue, and how do I move forward?</p>
<p>Remove these lines from the manifest:</p> <pre><code>- name: docker mountPath: /var/run/docker.sock readOnly: true </code></pre>
<p>I am setting up Kubernetes for a Django webapp.</p> <p>I am passing an environment variable while creating the deployment as below:</p> <pre><code>kubectl create -f deployment.yml -l key1=value1 </code></pre> <p>I am getting the error below:</p> <pre><code>error: no objects passed to create </code></pre> <p>I am able to create the deployment successfully if I remove the env variable <code>-l key1=value1</code> while creating the deployment.</p> <p>deployment.yaml is as below:</p> <pre><code>#Deployment apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: service: sigma-service name: $key1 </code></pre> <p>What could be the reason for the above error while creating the deployment?</p>
<p>I used envsubst (<a href="https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html" rel="noreferrer">https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html</a>) for this. Create a <code>deployment.yaml</code>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: $NAME labels: app: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 </code></pre> <p>Then:</p> <pre><code>export NAME=my-test-nginx envsubst &lt; deployment.yaml | kubectl apply -f - </code></pre> <p>Not sure what OS you are using to run this. On macOS, envsubst is installed like:</p> <pre><code>brew install gettext brew link --force gettext </code></pre>
<p>I'm struggling with writing a K8s network policy to deny traffic from pod to pod; the pod must be specified by label or name.</p> <p>Can someone help me?</p> <p>I'm not a devops/sysadmin, and I need this for integration testing. I checked the <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-network-policy-recipes</a> examples, but still can't do it. </p> <p>Here is an example that allows access from only one service:</p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: api-allow spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore </code></pre> <p>But I need the opposite: deny traffic from only one pod.</p>
<p>You should define two network policies, as mentioned below:</p> <ol> <li>a default policy that denies all traffic</li> <li>a policy that allows traffic to the pod from the other pods/namespaces</li> </ol> <p>Here are some good recipes that would be helpful. Follow the link: <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes" rel="nofollow noreferrer">network policy recipes</a></p>
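<p>For the first policy, the canonical default-deny for ingress in a namespace looks like this (the empty <code>podSelector: {}</code> matches every pod in the namespace):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # selects all pods in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so all ingress is denied
```

<p>Once this is in place, each additional policy (like the bookstore example in the question) acts as an allow-list on top, which is the only way to express "deny from one pod" with NetworkPolicy: policies are additive and can only allow, so you deny everything and then allow everything except the offending pod. Note that this requires a CNI plugin that enforces NetworkPolicy (Calico, Cilium, etc.).</p>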
<p>For publicly exposed Kubernetes services - type LoadBalancer - we have a dns record that points to the external ip address.</p> <p>In each service we do TLS termination with a LetsEncrypt certificate that is tied to the external name. It's not allowed to add the internal name to the certificate as well.</p> <p>What I would like to do is use any application in the same Kubernetes cluster connect using the external name but keep communication internal.</p> <p>Is there any way to provide a dns record in the Kubernetes cluster that resolves the same name to the internal service ip address to do so?</p>
<p>For reference, you can achieve this <a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">configuring CoreDNS</a>. Example configuration that will link from external DNS name <code>foo.example.com</code> to the internal IP of <code>foo</code> service in <code>default</code> namespace</p> <pre><code> .:53 { errors log health rewrite name foo.example.com foo.default.svc.cluster.local kubernetes cluster.local 10.0.0.0/24 file /etc/coredns/example.db example.org proxy . /etc/resolv.conf cache 30 } </code></pre> <p>Thanks @Jorrit Salverda for opening the kubernetes issue with this.</p>
<p>I am trying to change the istio global config parameter <code>initialDelaySeconds</code> value in inject-config.yaml:</p> <pre><code>initialDelaySeconds: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/initialDelaySeconds` 1 ]] </code></pre> <p>When I try the command below for my <code>initialDelaySeconds</code>, I am getting an error:</p> <pre><code>$ kubectl get cm istio-sidecar-injector -n istio-system -o yaml | sed -e "s initialDelaySeconds: 1/ initialDelaySeconds: 10/" | kubectl apply -f - </code></pre> <p>Getting the error below:</p> <pre><code> sed: -e expression #1, char 28: unknown option to `s' error: no objects passed to apply error: write /dev/stdout: The pipe has been ended. </code></pre> <p>What is the correct syntax to change my global parameter in the sidecar inject-config.yaml?</p> <p>Also, the snippet below works for me for <code>rewriteAppHTTPProbe</code>:</p> <pre><code>$ kubectl get cm istio-sidecar-injector -n istio-system -o yaml | sed -e "s/ rewriteAppHTTPProbe: false/ rewriteAppHTTPProbe: true/" | kubectl apply -f - </code></pre>
<p>You could just use <code>kubectl edit</code> to edit the configmap:</p> <pre><code>kubectl edit cm istio-sidecar-injector -n istio-system </code></pre>
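<p>Side note: the sed pipeline in the question fails because the <code>s</code> command is missing the delimiter right after <code>s</code> (the syntax is <code>s/pattern/replacement/</code>). A corrected expression would look like this, anchored on end-of-line so that <code>1</code> does not also match the <code>1</code> in <code>10</code>:</p>

```shell
# demo of the fixed substitution on a sample line;
# in the real pipeline the input would come from `kubectl get cm ... -o yaml`
echo "        initialDelaySeconds: 1" \
  | sed -e 's/initialDelaySeconds: 1$/initialDelaySeconds: 10/'
#         initialDelaySeconds: 10
```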
<p><strong>Is it a request for help?</strong> <br> Yes</p> <p><strong>What am I trying to do?</strong> <br> I am trying to use stable/mongodb as a <em>sub chart</em> in my main chart with replica set enabled.</p> <p><strong>Steps performed</strong> <br> </p> <ul> <li>added mongodb as a sub chart in the charts folder</li> <li>mentioned the dependency on this chart in requirements.yaml</li> </ul> <pre><code>dependencies: - name: mongodb version: "5.17.0" repository: "file://charts/mongodb" </code></pre> <ul> <li>overrode the sub chart values in the main chart</li> </ul> <pre><code>mongodb: usePassword: true existingSecret: skeleton mongodbUsername: skeleton mongodbDatabase: skeleton replicaSet: ## Whether to create a MongoDB replica set for high availability or not enabled: true useHostnames: true name: rs0 replicas: secondary: 1 arbiter: 1 </code></pre> <p><strong>- defined the secret for this</strong> in the skeleton secret</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: skeleton type: Opaque data: # mongo ha--- mongodb-root-password: &lt;64-bit-encoded-root-password&gt; mongodb-replica-set-key: &lt;64-bit-encoded-Key&gt; mongodb-password: &lt;64-bit-encoded-password&gt; </code></pre> <p><strong>- in deployment.yaml provided the headless-service MongoDB URL to connect to this</strong></p> <p>from the application hosted inside the same k8s cluster:</p> <p><code>mongodb://&lt;mongodbUsername-value-define-in-value.yaml&gt;:&lt;mongodb-password-value-present-in-secret&gt;@skeleton-mongodb-headless:27017/skeleton?authSource=admin&amp;replicaSet=rs0 </code></p> <ul> <li><strong>the error I am getting</strong> when using the headless service</li> </ul> <pre><code>2019-05-07 05:06:16,085 =&gt; INFO [main] com.mongodb.diagnostics.logging.SLF4JLogger: Cluster created with settings {hosts=[skeleton-mongodb-headless:27017], mode=MULTIPLE, requiredClusterType=REPLICA_SET, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500, requiredReplicaSetName='rs0'}
[1;30m2019-05-07 05:06:16,085[0;39m =&gt; [34mINFO [0;39m [[32mmain[0;39m] [33mcom.mongodb.diagnostics.logging.SLF4JLogger[0;39m: Adding discovered server skeleton-mongodb-headless:27017 to client view of cluster [1;30m2019-05-07 05:06:16,175[0;39m =&gt; [34mINFO [0;39m [[32mmain[0;39m] [33mcom.mongodb.diagnostics.logging.SLF4JLogger[0;39m: No server chosen by com.mongodb.Mongo$4@359ff4d9 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=skeleton-mongodb-headless:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out [1;30m2019-05-07 05:06:16,182[0;39m =&gt; [34mINFO [0;39m [[32mcluster-ClusterId{value='5cd11248b2cfc400017b5805', description='null'}-skeleton-mongodb-headless:27017[0;39m] [33mcom.mongodb.diagnostics.logging.SLF4JLogger[0;39m: Exception in monitor thread while connecting to server skeleton-mongodb-headless:27017 com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=null, userName='skeleton', source='admin', password=&lt;hidden&gt;, mechanismProperties={}} at com.mongodb.connection.SaslAuthenticator.wrapException(SaslAuthenticator.java:162) at com.mongodb.connection.SaslAuthenticator.access$200(SaslAuthenticator.java:39) at com.mongodb.connection.SaslAuthenticator$1.run(SaslAuthenticator.java:68) at com.mongodb.connection.SaslAuthenticator$1.run(SaslAuthenticator.java:46) at com.mongodb.connection.SaslAuthenticator.doAsSubject(SaslAuthenticator.java:168) at com.mongodb.connection.SaslAuthenticator.authenticate(SaslAuthenticator.java:46) </code></pre> <p><strong>queries</strong></p> <ul> <li>am i missing something for connection </li> <li>Do we need to provide key in connection url for internal communication ?</li> </ul>
<p>I was able to resolve this authentication issue after deleting the PV and the Azure Disk. I also used <em>release-name-mongodb</em> instead of <em>release-name-mongodb-headless</em></p>
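<p>For reference, a sketch of what the corrected connection string would look like. The service name <code>skeleton-mongodb</code> is an assumption derived from the release name used in the question; verify the actual name with <code>kubectl get svc</code>:</p> <pre><code>mongodb://skeleton:&lt;mongodb-password&gt;@skeleton-mongodb:27017/skeleton?authSource=admin&amp;replicaSet=rs0
</code></pre>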
<p>My docker-compose file consists of a tomcat7 server and a mysql database</p> <pre><code>version: '3'
services:
  mysql:
    image: suji165475/vignesh:latest
    ports:
      - "3066:3066"
  tomcat:
    image: suji165475/vignesh:tomcatserver
    container_name: tomcat7hope
    ports:
      - "8080:8080"
</code></pre> <p>I built the images using these Dockerfiles</p> <pre><code>FROM mysql
ENV MYSQL_ROOT_PASSWORD=root
ADD init.sql /docker-entrypoint-initdb.d
</code></pre> <pre><code>FROM picoded/tomcat7
COPY data-core-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/data-core-0.0.1-SNAPSHOT.war
</code></pre> <p>How do I run this on a Kubernetes cluster? I already tried <code>kompose convert</code>, and my war file won't start in Tomcat's application manager. But the war file starts successfully using docker-compose up.</p> <p>Why am I facing this issue only in Kubernetes and not when directly running docker-compose up? Please help me by letting me know what changes I should make to the Kubernetes yaml files.</p> <p>mysql-deployment.yaml</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: mysql
  name: mysql
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: mysql
    spec:
      containers:
      - image: suji165475/vignesh:latest
        name: mysql
        ports:
        - containerPort: 3066
        resources: {}
      restartPolicy: Always
status: {}
</code></pre> <p>tomcat-deployment.yaml</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: tomcat
  name: tomcat
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: tomcat
    spec:
      containers:
      - image: suji165475/vignesh:tomcatserver
        name: tomcat7hope
        ports:
        - containerPort: 8080
        resources: {}
      restartPolicy: Always
status: {}
</code></pre> 
<p>mysql-service.yaml</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: mysql
  name: mysql
spec:
  ports:
  - name: "3066"
    port: 3066
    targetPort: 3066
  selector:
    io.kompose.service: mysql
status:
  loadBalancer: {}
</code></pre> <p>tomcat-service.yaml</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: tomcat
  name: tomcat
spec:
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  selector:
    io.kompose.service: tomcat
status:
  loadBalancer: {}
</code></pre>
<p>Let me make it clear: you can't "run docker-compose" on Kubernetes. If you wanted to do that, you should not be using Kubernetes.</p> <p>With that said, using <code>kompose</code> is not such a great idea. Docker compose yaml files are all different and <code>kompose</code> simply makes a guess as to what the associated Kubernetes manifests would look like. If your application is very simple and you are lucky, <code>kompose</code> might give you your manifests ready to be deployed, but that is not usually the case.</p> <p>There could be a few reasons why this is not working for you:</p> <ul> <li>your <code>tomcat</code> application is not correctly referencing your <code>mysql</code> service</li> <li>your <code>mysql</code> deployment is missing some variables or mounted volumes/files</li> </ul> <p>Your <code>tomcat</code> app can refer to your <code>mysql</code> db through <code>mysql:3066</code>, and you might need to add some environment variables in your <code>tomcat</code> deployment, such as the db name, db username and db password for authentication. Here are your edited <code>tomcat</code> manifests:</p> <p><em>deployment.yaml</em></p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tomcat
  namespace: default
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - image: suji165475/vignesh:tomcatserver
        name: tomcat7hope
        ports:
        - containerPort: 8080
</code></pre> <p><em>service.yaml</em></p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
  labels:
    app: tomcat
spec:
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  selector:
    app: tomcat
</code></pre> <p>Your <code>mysql</code> deployment might require a <code>PersistentVolumeClaim</code> or <code>emptyDir</code> in order to keep your data. Take a look at attached volumes and confirm that you are not missing any volume mounts or environment variables that are needed. 
Here are your edited <code>mysql</code> manifests:</p> <p><em>deployment.yaml</em></p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  namespace: default
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: suji165475/vignesh:latest
        name: mysql
        ports:
        - containerPort: 3066
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
</code></pre> <p><em>service.yaml</em></p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
  labels:
    app: mysql
spec:
  ports:
  - name: "3066"
    port: 3066
    targetPort: 3066
  selector:
    app: mysql
</code></pre>
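<p>If the application inside the <code>tomcat</code> image reads its database settings from the environment, the variables could be added to the tomcat container spec roughly as below. The variable names (<code>DB_HOST</code>, <code>DB_PORT</code>, etc.) are assumptions for illustration only; use whatever names your application actually expects:</p> <pre><code>        env:
        - name: DB_HOST
          value: mysql          # resolves to the mysql Service inside the cluster
        - name: DB_PORT
          value: "3066"
        - name: DB_USER
          value: root
        - name: DB_PASSWORD
          value: root
</code></pre>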
<p>I want to run <a href="https://podman.io" rel="noreferrer">podman</a> as a container to run CI/CD pipelines. However, I keep getting this error from the podman container:</p> <pre class="lang-sh prettyprint-override"><code>$ podman info
ERRO[0000] 'overlay' is not supported over overlayfs
Error: could not get runtime: 'overlay' is not supported over overlayfs: backing file system is unsupported for this graph driver
</code></pre> <p>I am using the <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="noreferrer">Jenkins Kubernetes plugin</a> to write CI/CD pipelines that run as containers within a Kubernetes cluster. I've been successful at writing pipelines that use a Docker-in-Docker container to run <code>docker build</code> and <code>docker push</code> commands.</p> <p>However, running a Docker client and a Docker daemon inside a container makes the CI/CD environment very bloated, hard to configure, and just not ideal to work with. So I figured I could use <a href="https://podman.io" rel="noreferrer">podman</a> to build Docker images from Dockerfiles without using a fat Docker daemon.</p> <p>The problem is that <strong>podman</strong> is so new that I have not seen anyone attempt this before, nor am I enough of a podman expert to properly execute this.</p> <p>So, using the <a href="https://github.com/containers/libpod/blob/master/install.md#ubuntu" rel="noreferrer">podman installation instructions for Ubuntu</a> I created the following Dockerfile:</p> <pre><code>FROM ubuntu:16.04

RUN apt-get update -qq \
    &amp;&amp; apt-get install -qq -y software-properties-common uidmap \
    &amp;&amp; add-apt-repository -y ppa:projectatomic/ppa \
    &amp;&amp; apt-get update -qq \
    &amp;&amp; apt-get -qq -y install podman

# To keep it running
CMD tail -f /dev/null
</code></pre> <p>So I built the image and ran it as follows:</p> <pre class="lang-sh prettyprint-override"><code># Build
docker build -t podman:ubuntu-16.04 .

# Run
docker run --name podman -d podman:ubuntu-16.04
</code></pre> <p>Then when running this command on the running container, I get an error:</p> <pre class="lang-sh prettyprint-override"><code>$ docker exec -ti podman bash -c "podman info"
ERRO[0000] 'overlay' is not supported over overlayfs
Error: could not get runtime: 'overlay' is not supported over overlayfs: backing file system is unsupported for this graph driver
</code></pre> <p>When I installed podman on an Ubuntu 16.04 machine I had and ran the same <code>podman info</code> command, I got the expected results:</p> <pre class="lang-sh prettyprint-override"><code>host:
  BuildahVersion: 1.8-dev
  Conmon:
    package: 'conmon: /usr/libexec/crio/conmon'
    path: /usr/libexec/crio/conmon
    version: 'conmon version , commit: '
  Distribution:
    distribution: ubuntu
    version: "16.04"
  MemFree: 2275770368
  MemTotal: 4142137344
  OCIRuntime:
    package: 'cri-o-runc: /usr/lib/cri-o-runc/sbin/runc'
    path: /usr/lib/cri-o-runc/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 2146758656
  SwapTotal: 2146758656
  arch: amd64
  cpus: 2
  hostname: jumpbox-4b3620b3
  kernel: 4.4.0-141-generic
  os: linux
  rootless: false
  uptime: 222h 46m 33.48s (Approximately 9.25 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 15
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
</code></pre> <p>Does anyone know how I can fix this error and get podman working from a container?</p>
<p>Your Dockerfile should install iptables as well:</p> <pre><code>FROM ubuntu:16.04

RUN apt-get update -qq \
    &amp;&amp; apt-get install -qq -y software-properties-common uidmap \
    &amp;&amp; add-apt-repository -y ppa:projectatomic/ppa \
    &amp;&amp; apt-get update -qq \
    &amp;&amp; apt-get -qq -y install podman \
    &amp;&amp; apt-get install -y iptables

# To keep it running
CMD tail -f /dev/null
</code></pre> <p>Then run the command with:</p> <pre><code>docker run -ti --rm podman:test bash -c "podman --storage-driver=vfs info"
</code></pre> <p>This should give you the response you expect.</p>
<p>I have Keycloak behind a Kong Ingress Controller. I'm able to see the Keycloak welcome page at my {url}/auth/. However, when I click on Administration Console I am redirected to {url}:8443/auth/admin/master/console/</p> <p>When I click on Administration Console I should be redirected to {url}/auth/admin/master/console/</p> <p>When I install Keycloak (with helm) on minikube, exposing the service as a NodePort service without using an ingress and load balancer, I'm able to access the Administration Console page.</p> <p>There is detailed information about this problem in this link -&gt; <a href="https://github.com/codecentric/helm-charts/issues/17" rel="nofollow noreferrer">https://github.com/codecentric/helm-charts/issues/17</a></p> <p>I'm stuck on this and have no idea how to solve the problem.</p>
<p>I faced this issue maybe a year ago. I remember that stupid redirect, but I was not using the Kong Ingress Controller, just plain Kong. The problem I faced is that Kong runs as an unprivileged user and cannot bind to low-numbered ports. So Kong binds to 8443 for SSL and places a redirect from 443 to 8443. I could not fix that cleanly, so I reinvented the wheel.</p> <p>I used ports 80 and 443 for Kong:</p> <pre><code>ports:
- name: kong-proxy
  containerPort: 80
- name: kong-proxy-ssl
  containerPort: 443
- name: kong-admin
  containerPort: 8001
- name: kong-admin-ssl
  containerPort: 8444
</code></pre> <p>Then defined the new ports and a capability:</p> <pre><code>securityContext:
  capabilities:
    add:
    - NET_BIND_SERVICE
env:
- name: KONG_PROXY_LISTEN
  value: 0.0.0.0:80, 0.0.0.0:443 ssl
- name: KONG_ADMIN_LISTEN
  value: 0.0.0.0:8001, 0.0.0.0:8444 ssl
</code></pre> <p>After that the redirect disappeared.</p> <p>Hope that helps.</p> <p><strong>UPDATE</strong></p> <p>Sorry, forgot to mention that for ports 80 and 443 to work I built a custom Docker image with these lines:</p> <pre><code>FROM kong:1.1.1-centos

RUN chown -R kong:kong /usr/local/kong \
    &amp;&amp; setcap 'cap_net_bind_service=+ep' /usr/local/bin/kong \
    &amp;&amp; setcap 'cap_net_bind_service=+ep' /usr/local/openresty/nginx/sbin/nginx
</code></pre>
<p>Is it possible to simplify this chain that runs on bare metal:</p> <ol> <li>StatefulSet with replicas count that will change over the time</li> <li>Service</li> <li>Nginx-ingress with <code>proxy-next-upstream: "error http_502 timeout invalid_header non_idempotent"</code></li> <li>Pod with Nginx for caching and many other things that ingress can't do</li> <li>Service type: LoadBalancer</li> <li><a href="https://github.com/danderson/metallb" rel="nofollow noreferrer">MetalLB</a></li> </ol> <p>Is it possible to simplify this stack?</p>
<p>Yes, if you turn nginx into a sidecar (deployed in every pod) and remove the ingress. The cache is not shared between replicas in this case:</p> <ol> <li>StatefulSet with replicas count that will change over the time</li> <li>Sidecar (meaning in every replica) with nginx for caching and the many other things that ingress can't do, including the ingress settings you used. Proxy pass to <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication" rel="nofollow noreferrer">localhost</a> in this case.</li> <li>Service: LoadBalancer</li> <li>MetalLB</li> </ol> <p>Or if you need a common cache - just throw away the ingress:</p> <ol> <li>StatefulSet</li> <li>ServiceA (pointing to StatefulSet): <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">ClusterIP</a></li> <li>nginx with caching and hacks. Proxy pass to ServiceA.namespace.svc.cluster.local</li> <li>ServiceB (pointing to nginx deployment): LoadBalancer</li> <li>MetalLB</li> </ol>
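<p>A minimal sketch of the sidecar variant. The image names, ports and the nginx config ConfigMap are placeholders, not a tested manifest for your setup:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: my-app:latest        # listens on 127.0.0.1:8080
      - name: nginx-cache
        image: nginx:stable         # proxy_pass http://127.0.0.1:8080 with proxy_cache
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-cache-conf
</code></pre>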
<p>I am doing a setup of Fabric using a multinode environment with dynamic provisioning (dynamic pod allocation) over Google Cloud Platform using the native Kubernetes engine.</p> <p>I am using helm for the deployment. So let me tell you the entities which are running successfully.</p> <p>1-&gt; 1 Orderer with pod name <code>ord1-hlf-ord-54bbffd6f7-4qtxd</code></p> <p>2-&gt; 1 couchdb with pod name <code>cdb-peer1-hlf-couchdb-7cf88678b8-74426</code></p> <p>3-&gt; 1 peer with pod name <code>peer1-hlf-peer-6c5f6886d4-wrddj</code></p> <p>Till now everything is fine. Let me paste the results of some useful commands below</p> <p><code>kubectl get nodes</code></p> <pre><code>master    Ready   master   138m   v1.14.1
worker1   Ready   &lt;none&gt;   135m   v1.14.1
</code></pre> <p><code>kubectl get ns</code></p> <pre><code>default           Active   141m
kube-node-lease   Active   141m
kube-public       Active   141m
kube-system       Active   141m
orderers          Active   122m
peers             Active   122m
</code></pre> <p><code>kubectl get pods -n peers</code></p> <pre><code>cdb-peer1-hlf-couchdb-7cf88678b8-74426   1/1   Running   0   114m
peer1-hlf-peer-6c5f6886d4-wrddj          1/1   Running   0   33m
</code></pre> <p><code>kubectl get pods -n orderers</code></p> <pre><code>NAME                            READY   STATUS    RESTARTS   AGE
ord1-hlf-ord-54bbffd6f7-4qtxd   1/1     Running   0          120m
</code></pre> <p><code>kubectl get svc -A</code></p> <pre><code>NAMESPACE     NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes              ClusterIP   10.96.0.1        &lt;none&gt;        443/TCP                  144m
kube-system   kube-dns                ClusterIP   10.96.0.10       &lt;none&gt;        53/UDP,53/TCP,9153/TCP   144m
kube-system   tiller-deploy           ClusterIP   10.111.43.221    &lt;none&gt;        44134/TCP                138m
orderers      ord1-hlf-ord            ClusterIP   10.105.9.161     &lt;none&gt;        7050/TCP                 121m
peers         cdb-peer1-hlf-couchdb   ClusterIP   10.99.87.182     &lt;none&gt;        5984/TCP                 117m
peers         peer1-hlf-peer          ClusterIP   10.110.219.142   &lt;none&gt;        7051/TCP,7053/TCP        35m
</code></pre> <p>Now, as you can see, the orderer is running with service name <code>ord1-hlf-ord</code>, so I am following the Kubernetes pattern of addressing it as <code>service-name.namespace.svc.cluster.local</code>, and I am trying to connect to the orderer on <code>ord1-hlf-ord.orderers.svc.cluster.local:7050</code> in the command below:</p> <pre><code>kubectl exec -n peers $PEER_POD -- su -c "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/admin_msp peer channel create -o ord1-hlf-ord.orderers.svc.cluster.local:7050 -c mychannel -f /hl_config/channel/channel.tx"
</code></pre> <p>But I am not able to connect to the orderer.</p> <p>One more thing I want to mention: when I enter the peer pod using the command <code>kubectl exec -it peer1-hlf-peer-6c5f6886d4-wrddj bash -n peers</code> and try to run <code>apt-get update</code>, so that by installing ping I can check whether the peer is able to reach the orderer, I get this error:</p> <pre><code>root@peer1-hlf-peer-6c5f6886d4-wrddj:/# apt-get update
Err:1 http://security.ubuntu.com/ubuntu xenial-security InRelease
  Temporary failure resolving 'security.ubuntu.com'
Err:2 http://archive.ubuntu.com/ubuntu xenial InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Err:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Err:4 http://archive.ubuntu.com/ubuntu xenial-backports InRelease
  Temporary failure resolving 'archive.ubuntu.com'
Reading package lists... Done
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial-updates/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial-backports/InRelease  Temporary failure resolving 'archive.ubuntu.com'
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/xenial-security/InRelease  Temporary failure resolving 'security.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.
</code></pre> <p>If anyone has faced this issue or knows the cause, please let me know.</p> <p>Thanks in advance</p>
<p>If it works when using the orderer IP address directly, then it is likely DNS resolution that is failing. Try adding an entry for it in your <code>/etc/hosts</code> file. The entry will look like</p> <pre><code>10.105.9.161 ord1-hlf-ord.orderers.svc.cluster.local
</code></pre> <p>This should allow you to resolve the DNS name, which you can check by pinging <code>ord1-hlf-ord.orderers.svc.cluster.local</code></p>
<p>If I have the following Kubernetes objects:</p> <ul> <li><code>Deployment</code> with <code>rollingUpdate.maxUnavailable</code> set to <code>1</code>.</li> <li><code>PodDisruptionBudget</code> with <code>maxUnavailable</code> set to <code>1</code>.</li> <li><code>HorizontalPodAutoscaler</code> setup to allow auto scaling.</li> <li>Cluster auto-scaling is enabled.</li> </ul> <p>If the cluster was under load and is in the middle of scaling up, what happens:</p> <ol> <li>During a rolling update? Do the new <code>Pod</code>'s added due to the scale up use the new version of the <code>Pod</code>?</li> <li>When a node needs to be restarted or replaced? Does the <code>PodDisruptionBudget</code> stop the restart completely? Does the <code>HorizontalPodAutoscaler</code> to scale up the number of nodes before taking down another node?</li> <li>When the <code>Pod</code> affinity is set to avoid placing two <code>Pod</code>'s from the same <code>Deployment</code> on the same node.</li> </ol>
<ol> <li>As in the <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-disruption-budgets-work" rel="nofollow noreferrer">documentation</a>:</li> </ol> <blockquote> <p>Pods which are deleted or unavailable due to a rolling upgrade to an application do count against the disruption budget, but controllers (like deployment and stateful-set) are not limited by PDBs when doing rolling upgrades – the handling of failures during application updates is configured in the controller spec.</p> </blockquote> <p>So it partially depends on the controller configuration and implementation. I believe new pods added by the autoscaler will use the new version of the Pod, because that's the version present in the deployment's definition at that point.</p> <ol start="2"> <li><p>That depends on the way you execute the node restart. If you just cut the power, nothing can be done ;) If you execute a proper <code>drain</code> before shutting the node down, then the <code>PodDisruptionBudget</code> will be taken into account and the draining procedure won't violate it. The disruption budget is respected by the Eviction API, but can be violated with low-level operations like manual pod deletion. It is more of a suggestion that some APIs respect than a hard limit enforced by Kubernetes as a whole.</p></li> <li><p>According to the <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">official documentation</a>, if the affinity is set to be a "soft" one, the pods will be scheduled on the same node anyway. If it's "hard", then the deployment will get stuck, not being able to schedule the required amount of pods. Rolling updates will still be possible, but the HPA won't be able to grow the pod pool anymore.</p></li> </ol>
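<p>For reference, a node drain that goes through the Eviction API (and therefore honours the <code>PodDisruptionBudget</code>) looks roughly like this; the node name is a placeholder:</p> <pre><code># Evicts pods while respecting PDBs; retries if an eviction would violate the budget
kubectl drain my-node-1 --ignore-daemonsets --delete-local-data

# After maintenance, allow scheduling on the node again
kubectl uncordon my-node-1
</code></pre>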
<p>I'm trying to make an SSL-based connection to memsql from an OpenShift pod/container. I have to insert a self-signed CA certificate inside the pod so that the connection can be established between memsql and the pod. I have the following ideas but am not sure how to implement them:</p> <ol> <li>Mount a path in volumeMounts, like /etc/ssl/certs, insert the certificate in that path, and reference a secret for that file name - but how do I copy the file to the specified path?</li> <li>Copy the contents of the certificate into the deployment config under secrets - but will it be validated, and how do I achieve that?</li> </ol>
<p>First create TLS secret:</p> <pre><code>oc create secret tls mycert --cert /tmp/cert.pem --key /tmp/key.pem </code></pre> <p>Then mount this secret into your container into /certs directory</p> <pre><code>oc set volume dc/myapp --add -t secret -m /certs --name cert --secret-name mycert </code></pre> <p>You cert will be available inside pod at <strong>/certs/tls.crt</strong> and key will be at <strong>/certs/tls.key</strong></p>
<p>I am running a python application which serves a React bundle.</p> <p>Currently, I am running a single pod of the application and everything seems to work fine, but when I scale up to 2-3 pods, in some cases users are not able to log in to the application.</p> <p>In ingress, I have added this ingress yaml config:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: admin-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/cluster-issuer: admin-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "session"
spec:
  tls:
  - hosts:
    - domain
    secretName: admin-prod
  rules:
  - host: domain
    http:
      paths:
      - path: /
        backend:
          serviceName: admin-service
          servicePort: 80
</code></pre>
<p>It appears that you are storing user session info in a cookie.</p> <p><code>nginx.ingress.kubernetes.io/affinity: "cookie"</code></p> <p>The above setting enables sticky sessions using the cookie. Say a user is connected to replica-1; their session info is stored in the cookie. If they get logged out and log back in, they will be taken to replica-1 again.</p>
<p>minikube doesn't start with the error "vswitch not found" even though the virtual switch is available</p> <p><strong>Operating System:</strong> Windows 10 <br> <strong>Hypervisor:</strong> Hyper-V <br> <strong>Minikube version:</strong> v1.0.0</p> <p><strong>Available virtual switches:</strong></p> <pre><code>PS C:\WINDOWS\system32&gt; Get-VMSwitch

Name           SwitchType NetAdapterInterfaceDescription
----           ---------- ------------------------------
DockerNAT      Internal
MinikubeVS     External   Intel(R) Dual Band Wireless-AC 7265
Default Switch Internal
</code></pre> <p><strong>Error:</strong></p> <pre><code>λ minikube start --vm-driver hyperv --hyperv-virtual-switch "MinikubeVS" --v 99
o   minikube v1.0.0 on windows (amd64)
$   Downloading Kubernetes v1.14.0 images in the background ...
&gt;   Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@   Downloading Minikube ISO ...
 142.88 MB / 142.88 MB [============================================] 100.00% 0s
Creating CA: C:\Users\sudhi\.minikube\certs\ca.pem
Creating client certificate: C:\Users\sudhi\.minikube\certs\cert.pem
[executing ==&gt;] : C:\windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
[stdout =====&gt;] : Hyper-V
[stderr =====&gt;] :
[executing ==&gt;] : C:\windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole("S-1-5-32-578")
[stdout =====&gt;] : False
[stderr =====&gt;] :
[executing ==&gt;] : C:\windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
[stdout =====&gt;] : True
[stderr =====&gt;] :
[executing ==&gt;] : C:\windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; (Hyper-V\Get-VMSwitch).Name
[stdout =====&gt;] :
[stderr =====&gt;] : Hyper-V\Get-VMSwitch : The operation on computer 'LAPTOP-CLH1EQAJ' failed: The specified service does not exist as an installed service.
At line:1 char:53
+ ... ::OutputEncoding = [Text.Encoding]::UTF8; (Hyper-V\Get-VMSwitch).Name
+                                                ~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-VMSwitch], VirtualizationException
    + FullyQualifiedErrorId : Unspecified,Microsoft.HyperV.PowerShell.Commands.GetVMSwitch

!   Unable to start VM: create: precreate: vswitch "MinikubeVS" not found

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
    - https://github.com/kubernetes/minikube/issues/new
</code></pre>
<p>I've had similar situations a couple of times, especially after playing with creating/deleting/reusing the <code>hyperv-virtual-switch</code>.</p> <p>What I can suggest is to remove <code>C:\Users\%USERNAME%\.minikube</code> and try to start minikube again.</p> <p>This helps in many cases.</p>
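<p>For example, from an elevated PowerShell prompt (the path below is the default location of the minikube state directory; adjust if yours differs):</p> <pre><code>minikube delete
Remove-Item -Recurse -Force "$env:USERPROFILE\.minikube"
minikube start --vm-driver hyperv --hyperv-virtual-switch "MinikubeVS"
</code></pre>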
<p>Is it possible to have some sort of condition check before declaring a variable in a helm chart for deployment?</p> <p>For example, let's say I have</p> <pre><code>- name: EXAMPLE_VAR
  valueFrom:
    secretKeyRef:
      name: "name"
      key: "key"
</code></pre> <p>but I only want to include this in my deployment for a specific configuration (based on, let's say, an environment variable) and don't want to have to maintain a separate yaml configuration just for this configuration option</p>
<p>Let's say you have this in your chart's values.yaml:</p> <pre><code>someCondition: true
</code></pre> <p>Then in the deployment template you may do this:</p> <pre><code>spec:
  env:
  {{- if .Values.someCondition }}
  - name: EXAMPLE_VAR
    valueFrom:
      secretKeyRef:
        name: "name"
        key: "key"
  {{- end }}
</code></pre> <p>Note that dot notation only works if the key contains no hyphens; a key like <code>some-condition</code> would have to be accessed with <code>{{- if index .Values "some-condition" }}</code> instead.</p>
<p>I'm trying to send a GET request from Lambda to a pod without exposing it externally. That pod has a ClusterIP service attached to it. I can access this pod directly via the internet (via ingress) so I know it works properly.</p> <p>Here is the part of the service attached to the pod:</p> <pre><code>spec:
  clusterIP: 10.xxx.xxx.xx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: app_name
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
</code></pre> <p>I attached the Lambda to the VPC and subnets, but if I use the code below I get an error. I tried using the pod IP address and the cluster IP address, but with the same error. It works for google/other sites and when the pod is exposed to the internet.</p> <pre><code>const http = require('http');

exports.handler = async (event, context) =&gt; {
  return new Promise((resolve, reject) =&gt; {
    const options = {
      host: 'www.google.com',
      path: '/api/',
      port: 80,
      method: 'GET'
    };

    const req = http.request(options, (res) =&gt; {
      let data = '';

      // A chunk of data has been received.
      res.on('data', (chunk) =&gt; {
        data += chunk;
      });

      // The whole response has been received. Print out the result.
      res.on('end', () =&gt; {
        console.log(JSON.parse(data));
      });
    });

    req.write('');
    req.end();
  });
};
</code></pre> <p>Response:</p> <pre><code>{
  "errorMessage": "Task timed out after 3.00 seconds"
}
</code></pre> <p>I understand all of the below and I'm happy to change the service type, but I don't know how I am supposed to address the pod in my Lambda (what to replace www.google.com with). Happy to try any other code or a python script.</p> <blockquote> <p>A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access.</p> <p>A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.</p> <p>A LoadBalancer service is the standard way to expose a service to the internet. On GKE, this will spin up a Network Load Balancer that will give you a single IP address that will forward all traffic to your service.</p> </blockquote> <p>Anyone tried something like that?</p>
<p>One of the easiest ways is to expose the service behind an internal Load Balancer. This way your service will not be exposed to the Internet but will still be available within the VPC.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
</code></pre> <p>A little more advanced but more flexible solution is to use an <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx Ingress</a> behind the same internal Load Balancer.</p>
<p>Due to a legacy application that relies on the Host header to function correctly, I need an Ingress (proxy, etc.) that is capable of rewriting the Host header and passing it to the downstream (backend). Is there any Ingress Controller that supports this functionality?</p> <p>Example:</p> <p>End users access our website through <code>foo.com/a</code> for backend <code>a</code> and <code>foo.com/b</code> for backend <code>b</code>. But since <code>a</code> and <code>b</code> are legacy apps, they only accept:</p> <ul> <li><code>a</code> accepts connections when <code>Host: a.foo.com</code></li> <li><code>b</code> accepts connections when <code>Host: b.foo.com</code></li> </ul>
<p>This can be done using this annotation: <code>nginx.ingress.kubernetes.io/upstream-vhost: host.example.com</code></p>
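<p>A minimal sketch, assuming each legacy app is exposed as a Service (named <code>a</code> and <code>b</code> on port 80 here; adjust to your setup). Since the annotation applies to the whole Ingress resource, use one Ingress per rewritten Host:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: a-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # nginx proxies upstream with this Host header instead of foo.com
    nginx.ingress.kubernetes.io/upstream-vhost: "a.foo.com"
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /a
        backend:
          serviceName: a
          servicePort: 80
</code></pre> <p>A second Ingress with <code>upstream-vhost: "b.foo.com"</code> and <code>path: /b</code> covers backend <code>b</code>.</p>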
<p>The example has:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: usernamekey
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: passwordkey
  restartPolicy: Never
</code></pre> <p>the above from:</p> <p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p> <p>I've created a secret like this:</p> <pre><code>kubectl --namespace=mycustomnamespace create secret generic mysecret --from-literal=passwordkey="abc123" --from-literal=usernamekey="mememe"
</code></pre> <p>I understand that the above secrets exist under the namespace.</p> <p>But if I try this:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
  namespace: mycustomnamespace
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: usernamekey
          namespace: mycustomnamespace
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: passwordkey
          namespace: mycustomnamespace
  restartPolicy: Never
</code></pre> <p>(note that I added a namespace declaration under metadata)</p> <p>I get this error:</p> <blockquote> <p>Error validating data: [ValidationError(Pod.spec.containers[1].env[2].valueFrom.secretKeyRef): unknown field "namespace" in io.k8s.api.core.v1.SecretKeySelector, ValidationError(Pod.spec.containers[1].env[6].valueFrom.secretKeyRef): unknown field "namespace" in io.k8s.api.core.v1.SecretKeySelector];</p> </blockquote> <p>If I take out the namespace(s) (under the secretKeyRef(s)), the pod fails with</p> <blockquote> <p>Warning Failed 2s (x8 over 1m) kubelet, minikube Error: secret "mysecret" not found</p> </blockquote> <p>Yes, my secrets are in the namespace:</p> <pre><code>kubectl get secrets --namespace mycustomnamespace
NAME                  TYPE                                  DATA      AGE
</code></pre>
default-token-55bzp   kubernetes.io/service-account-token   3         10m
mysecret              Opaque                                2         10m
</code></pre> <p>APPEND: (resolution)</p> <p>It was an error on my part. Check my comment under Vasily's answer.</p> <p>But basically, the magic sauce is that the below yml...</p> <pre><code>metadata:
  name: secret-env-pod
  namespace: mycustomnamespace
</code></pre> <p>...should "drive" the namespace (i.e., set the namespace scope) for the rest of the configuration (yml).</p> <p>(If you are a future reader of this question, double and triple check that you have everything under the correct namespace. ALL of your normal "get" statements need to use -n (aka --namespace) as part of the command.)</p> <p>Example:</p> <pre><code>kubectl get pods
</code></pre> <p>the above will only get pods under "default".</p> <p>You have to do</p> <pre><code>kubectl get pods --namespace mycustomnamespace
</code></pre>
<p>Simply remove <code>namespace: mycustomnamespace</code> from the pod's <code>secretKeyRef</code> definitions: <code>secretKeyRef</code> does not take a <code>namespace</code> field. A pod can only reference secrets in its own namespace, which is set by the pod's <code>metadata.namespace</code>.</p> <p>Also make sure your secret is created in that same namespace:</p> <pre><code>kubectl --namespace=mycustomnamespace create secret generic mysecret --from-literal=passwordkey="abc123" --from-literal=usernamekey="mememe"
</code></pre>
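<p>A working version of the env section as a sketch: the pod-level <code>namespace: mycustomnamespace</code> from the question stays, and only the <code>namespace</code> lines under <code>secretKeyRef</code> are dropped:</p> <pre><code>env:
- name: SECRET_USERNAME
  valueFrom:
    secretKeyRef:
      name: mysecret      # resolved in the pod's own namespace
      key: usernamekey
- name: SECRET_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: passwordkey
</code></pre>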
<p>Heketi pod was restarted on our Kubernetes Cluster and now I'm struggling with how to change glusterfs storage class resturl with new heketi endpoint.<br> What are the safest options without any data loss on our PVCs? I was able to recreate Kubernetes Cluster v1.11.10 on our test environment and start investigating on it. When I tried to edit storage class I got:</p> <pre><code>"StorageClass.storage.k8s.io "glusterfs" is invalid: parameters Forbidden: updates to parameters are forbidden." </code></pre> <p>We are using Kubernetes v.1.11.10.<br> I tried to create new storage class with correct heketi endpoint, but I couldn't edit PVCs:</p> <pre><code>PersistentVolumeClaim "test-pvc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims </code></pre> <p>I was able only to delete old storage class and create new with correct heketi resturl.</p>
<p>You may try to use <code>kubectl replace</code> like this:</p> <pre><code>kubectl replace -f storage-class.yaml --force
</code></pre> <p>Just make sure that you use the Heketi Service name as the REST URL, so the <code>resturl</code> stays valid even if the Heketi pod is restarted and gets a new IP.</p>
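<p>A sketch of such a storage class (the service name <code>heketi</code> and namespace <code>default</code> are assumptions; substitute what <code>kubectl get svc</code> shows in your cluster):</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  # Service DNS name instead of a pod IP, so pod restarts don't break it
  resturl: "http://heketi.default.svc.cluster.local:8080"
  restauthenabled: "false"
</code></pre>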
<p><a href="https://i.stack.imgur.com/aWsNv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aWsNv.png" alt="enter image description here"></a></p> <p>Please refer to the above image, I am trying to access Kubernetes Clusters Using the Kubernetes API and for that purpose as mentioned in <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">documents</a> I am using Python Kuberentes Client, that needs a token to communicate with <code>Kube API</code>.</p> <p>When I am trying to authenticate the user with the token added in <code>Kube-config</code> file by Kube login everything works fine, where token provided by <code>Keycloak</code> REST API results in an un-authorization error.</p> <p>I would be glad if anyone could provide any insight on this issue.</p>
<p>We resolved the issue; below are our findings and their resolution:</p> <ul> <li>There was a discrepancy between the token generated with the KubeLogin client and the one from the Keycloak REST API: the issuer URL is a hostname in the KubeLogin token, whereas it is an IP address in the token generated with the Keycloak REST API.</li> <li>Also, the certificate generated for our Keycloak instance has the hostname as the issuer name, which might be the reason for the un-authorization errors. We reconfigured Keycloak with new certificates that have a DNS name.</li> <li>We also had to create RBAC with cluster scope.</li> </ul>
<p>I am having trouble with a Jenkins pod and with kube-dns. It seems that my Jenkins is offline and I can't install any plugin, and this is because of DNS.</p> <p>This is the error that I'm getting: "UnknownHostException: updates.jenkins.io"</p> <p>Because of this error, I can't download any plugin, so my Jenkins is totally useless.</p> <p>I have exposed my deployment using "HostPort", in order to have my Jenkins on a private IP. It is important to say that this private IP has access to the Internet, so my pod should have it too.</p> <p>This is the content of the /etc/resolv.conf of the pod:</p> <pre><code>jenkins@jenkins-86b49f4cb4-zchct:/$ cat /etc/resolv.conf
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
options ndots:5
</code></pre> <p>And this is my kube-dns:</p> <pre><code>[root@azureutils1 ~]# kubectl get svc --namespace kube-system --insecure-skip-tls-verify
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   10.0.243.103   &lt;none&gt;        80/TCP          5d
kube-dns               ClusterIP   10.0.0.10      &lt;none&gt;        53/UDP,53/TCP   5d
kubernetes-dashboard   ClusterIP   10.0.237.105   &lt;none&gt;        80/TCP          5d
metrics-server         ClusterIP   10.0.34.149    &lt;none&gt;        443/TCP         5d
</code></pre> <p>So the IP in resolv.conf matches the IP of the kube-dns service. If I expose my Jenkins using a public LoadBalancer I don't have this problem, but I want my Jenkins to stay on a private IP.</p> <p>Any thoughts on this issue?</p> <p>Greetings</p>
<p>This kind of issue happens because your <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">pod DNS</a> is not configured. It can be fixed by adding:</p> <pre><code>hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
</code></pre> <p>to the spec of your pod.</p>
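<p>In a Deployment manifest the two fields go at the pod template's spec level, next to <code>containers</code>. A sketch with the container details elided (image and name are placeholders):</p> <pre><code>spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
</code></pre> <p>With <code>ClusterFirstWithHostNet</code>, the pod keeps resolving cluster services through kube-dns while using the host's network.</p>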
<p>When I installed our cluster, I used a self-signed cert from our internal CA. Everything was fine until I started getting cert errors from applications that I was deploying to the OKD cluster. We decided that instead of trying to fix the errors one at a time for all time, we would simply purchase a commercial cert and install that. So we bought a SAN cert, with wildcards (identical to the one we got from our internal CA originally), from GlobalSign, and I'm having huge problems installing it.</p> <p>Keep in mind, I have tried dozens of iterations here. I'm just documenting the last one I have tried in an attempt to figure out what the hell is the problem. This is on my test cluster, which is a VM server, and I revert to a snapshot after every attempt. The snapshot is the operational cluster using the internal CA certs.</p> <p>So, my first step was to build my CAfile to be passed in. I downloaded the root and intermediate certs for GlobalSign and put them in the <code>ca-globalsign.crt</code> file.
(PEM formatted)</p> <p>When I run</p> <pre><code>openssl verify -CAfile ../ca-globalsign.crt labtest.mycompany.com.pem
</code></pre> <p>I get:</p> <pre><code>labtest.mycompany.com.pem: OK
</code></pre> <p>and <code>openssl x509 -in labtest.mycompany.com.pem -text -noout</code> gives me (redacted)</p> <pre><code>Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: (redacted)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=BE, O=GlobalSign nv-sa, CN=GlobalSign Organization Validation CA - SHA256 - G2
        Validity
            Not Before: Apr 29 16:11:07 2019 GMT
            Not After : Apr 29 16:11:07 2020 GMT
        Subject: C=US, ST=(redacted), L=(redacted), OU=Information Technology, O=(redacted), CN=labtest.mycompany.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus: (redacted)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            Authority Information Access:
                CA Issuers - URI:http://secure.globalsign.com/cacert/gsorganizationvalsha2g2r1.crt
                OCSP - URI:http://ocsp2.globalsign.com/gsorganizationvalsha2g2
            X509v3 Certificate Policies:
                Policy: 1.3.6.1.4.1.4146.1.20
                  CPS: https://www.globalsign.com/repository/
                Policy: 2.23.140.1.2.2
            X509v3 Basic Constraints:
                CA:FALSE
            X509v3 Subject Alternative Name:
                DNS:labtest.mycompany.com, DNS:*.labtest.mycompany.com, DNS:*.apps.labtest.mycompany.com
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier:
                (redacted)
            X509v3 Authority Key Identifier:
                (redacted)
(redacted)
</code></pre> <p>on my local machine. Everything I know about SSL says the cert is fine.
These new files are put in the project that I use to hold the configs and such for my OKD install.</p> <p>Then I updated the cert files in my ansible inventory project and ran the command</p> <pre><code>ansible-playbook -i ../okd_install/inventory/okd_labtest_inventory.yml playbooks/redeploy-certificates.yml
</code></pre> <p>When I read the docs, everything tells me that it should simply roll through its process and come up with the new certs. This doesn't happen. When I use <code>openshift_master_overwrite_named_certificates: false</code> in my inventory file, the install completes, but it only replaces the cert on the <code>*.apps.labtest</code> domain; <code>console.labtest</code> keeps the original cert, although it does come online (apart from the monitoring saying <code>bad gateway</code> in the cluster console).</p> <p>Now, if I try to run the command again, using <code>openshift_master_overwrite_named_certificates: true</code>, my <code>/var/log/containers/master-api*.log</code> is flooded with errors like this</p> <pre><code>{"log":"I0507 15:53:28.451851 1 logs.go:49] http: TLS handshake error from 10.128.0.56:46796: EOF\n","stream":"stderr","time":"2019-05-07T19:53:28.451894391Z"}
{"log":"I0507 15:53:28.455218 1 logs.go:49] http: TLS handshake error from 10.128.0.56:46798: EOF\n","stream":"stderr","time":"2019-05-07T19:53:28.455272658Z"}
{"log":"I0507 15:53:28.458742 1 logs.go:49] http: TLS handshake error from 10.128.0.56:46800: EOF\n","stream":"stderr","time":"2019-05-07T19:53:28.461070768Z"}
{"log":"I0507 15:53:28.462093 1 logs.go:49] http: TLS handshake error from 10.128.0.56:46802: EOF\n","stream":"stderr","time":"2019-05-07T19:53:28.463719816Z"}
</code></pre> <p>and these</p> <pre><code>{"log":"I0507 15:53:29.355463 1 logs.go:49] http: TLS handshake error from 10.70.25.131:44424: remote error: tls: bad certificate\n","stream":"stderr","time":"2019-05-07T19:53:29.357218793Z"}
{"log":"I0507 15:53:29.357961 1 logs.go:49] http: TLS handshake error from 10.70.25.132:43128: remote error: tls: bad certificate\n","stream":"stderr","time":"2019-05-07T19:53:29.358779155Z"}
{"log":"I0507 15:53:29.357993 1 logs.go:49] http: TLS handshake error from 10.70.25.132:43126: remote error: tls: bad certificate\n","stream":"stderr","time":"2019-05-07T19:53:29.358790397Z"}
{"log":"I0507 15:53:29.405532 1 logs.go:49] http: TLS handshake error from 10.70.25.131:44428: remote error: tls: bad certificate\n","stream":"stderr","time":"2019-05-07T19:53:29.406873158Z"}
{"log":"I0507 15:53:29.527221 1 logs.go:49] http: TLS handshake error from 10.70.25.132:43130: remote error: tls: bad certificate\n","stream":"stderr","time":"2019-05-07T19:53
</code></pre> <p>and the install hangs on the ansible task <code>TASK [Remove web console pods]</code>. It will sit there for hours. When I go into the master's console and run <code>oc get pods</code> on <code>openshift-web-console</code>, the pod is in the <code>terminating</code> state. When I describe the pod that is trying to start and is stuck in <code>pending</code>, it comes back saying the hard disk is full. I'm assuming that's because it's not capable of communicating with the storage system because of all those TLS errors above. It just stays there. I can bring the cluster back up if I force delete the terminating pod, then reboot the master, then delete the new pod that is attempting to start, then reboot a second time. Then the web console comes online, but all my log files are flooded with those TLS errors. The more concerning thing is that the install hangs at that spot, so I'm assuming there are additional steps after bringing the web console online that cause me problems as well.</p> <p>So, I have also attempted to redeploy the server CA. That yielded problems because my new cert isn't a CA cert. Then when I just ran the redeploy CA playbook, to have the cluster recreate the server CAs, it finished fine, but when I then tried to run <code>redeploy-certificates.yml</code>, I got the same results.
</p> <p>Here is my inventory file:</p> <pre><code>all:
  children:
    etcd:
      hosts:
        okdmastertest.labtest.mycompany.com:
    masters:
      hosts:
        okdmastertest.labtest.mycompany.com:
    nodes:
      hosts:
        okdmastertest.labtest.mycompany.com:
          openshift_node_group_name: node-config-master-infra
        okdnodetest1.labtest.mycompany.com:
          openshift_node_group_name: node-config-compute
          openshift_schedulable: True
    OSEv3:
      children:
        etcd:
        masters:
        nodes:
        # https://docs.okd.io/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#overview-containerized-glusterfs
        # https://github.com/openshift/openshift-ansible/tree/master/playbooks/openshift-glusterfs
        # glusterfs:
      vars:
        openshift_deployment_type: origin
        ansible_user: root
        openshift_master_cluster_method: native
        openshift_master_default_subdomain: apps.labtest.mycompany.com
        openshift_install_examples: true
        openshift_master_cluster_hostname: console.labtest.mycompany.com
        openshift_master_cluster_public_hostname: console.labtest.mycompany.com
        openshift_hosted_registry_routehost: registry.apps.labtest.mycompany.com
        openshift_certificate_expiry_warning_days: 30
        openshift_certificate_expiry_fail_on_warn: false
        openshift_master_overwrite_named_certificates: true
        openshift_hosted_registry_routetermination: reencrypt
        openshift_master_named_certificates:
        - certfile: "/Users/me/code/devops/okd_install/certs/labtest/commercial.04.29.2019.labtest.mycompany.com.pem"
          keyfile: "/Users/me/code/devops/okd_install/certs/labtest/commercial.04.29.2019.labtest.mycompany.com.key"
          cafile: "/Users/me/code/devops/okd_install/certs/ca-globalsign.crt"
          names:
          - "console.labtest.mycompany.com"
          # - "labtest.mycompany.com"
          # - "*.labtest.mycompany.com"
          # - "*.apps.labtest.mycompany.com"
        openshift_hosted_router_certificate:
          certfile: "/Users/me/code/devops/okd_install/certs/labtest/commercial.04.29.2019.labtest.mycompany.com.pem"
          keyfile: "/Users/me/code/devops/okd_install/certs/labtest/commercial.04.29.2019.labtest.mycompany.com.key"
          cafile: "/Users/me/code/devops/okd_install/certs/ca-globalsign.crt"
        openshift_hosted_registry_routecertificates:
          certfile: "/Users/me/code/devops/okd_install/certs/labtest/commercial.04.29.2019.labtest.mycompany.com.pem"
          keyfile: "/Users/me/code/devops/okd_install/certs/labtest/commercial.04.29.2019.labtest.mycompany.com.key"
          cafile: "/Users/me/code/devops/okd_install/certs/ca-globalsign.crt"
        # LDAP auth
        openshift_master_identity_providers:
        - name: 'mycompany_ldap_provider'
          challenge: true
          login: true
          kind: LDAPPasswordIdentityProvider
          attributes:
            id:
            - dn
            email:
            - mail
            name:
            - cn
            preferredUsername:
            - sAMAccountName
          bindDN: 'ldapbind@int.mycompany.com'
          bindPassword: (redacted)
          insecure: true
          url: 'ldap://dc-pa1.int.mycompany.com/ou=mycompany,dc=int,dc=mycompany,dc=com'
</code></pre> <p>What am I missing here? I thought this <code>redeploy-certificates.yml</code> playbook was designed to update the certificates. Why can't I get it to switch to my new commercial cert? It's almost like it's replacing the certs on the router (kinda), but in the process screwing up the internal server cert. I'm really at my wits' end here; I don't know what else to try.</p>
<p>You should configure <code>openshift_master_cluster_hostname</code> and <code>openshift_master_cluster_public_hostname</code> with hostnames that differ from each other. Both hostnames must also be resolvable in DNS. Your commercial certificates are then used for the external access point.</p> <pre><code>The openshift_master_cluster_public_hostname and openshift_master_cluster_hostname parameters in the Ansible inventory file, by default /etc/ansible/hosts, must be different. If they are the same, the named certificates will fail and you will need to re-install them.

# Native HA with External LB VIPs
openshift_master_cluster_hostname=internal.paas.example.com
openshift_master_cluster_public_hostname=external.paas.example.com
</code></pre> <p>It is also better to configure and verify the certificates component by component. For example, first <a href="https://docs.openshift.com/container-platform/3.11/install_config/certificate_customization.html#configuring-custom-certificates-master" rel="nofollow noreferrer">Configure a Custom Master Host Certificate</a> and verify. Then <a href="https://docs.openshift.com/container-platform/3.11/install_config/certificate_customization.html#configuring-custom-certificates-wildcard" rel="nofollow noreferrer">Configure a Custom Wildcard Certificate for the Default Router</a> and verify. And so on. Once all the certificate-redeployment tasks succeed, you can run with the complete set of parameters for your commercial certificate maintenance.</p> <p>Refer to <a href="https://docs.openshift.com/container-platform/3.11/install_config/certificate_customization.html" rel="nofollow noreferrer">Configuring Custom Certificates</a> for more details. I hope it helps.</p>
<p>I have created an AKS cluster with the below versions.</p> <pre><code>Kubernetes version: 1.12.6
Istio version: 1.1.4
Cloud Provider: Azure
</code></pre> <p>I have also successfully installed Istio as my ingress gateway with an external IP address, and I have enabled istio-injection for the namespace where I have deployed my service. I can see that the sidecar injection is happening successfully:</p> <pre><code>NAME                                      READY   STATUS    RESTARTS   AGE
club-finder-deployment-7dcf4479f7-8jlpc   2/2     Running   0          11h
club-finder-deployment-7dcf4479f7-jzfv7   2/2     Running   0          11h
</code></pre> <p>My tls-gateway:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tls-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
</code></pre> <p><strong>Note: I am using self-signed certs for testing.</strong></p> <p>I have applied the below virtual service:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways:
  # The default `mesh` value used when left blank doesn't seem to propagate the rule properly.
  # For now, always use a list of FQDN gateways
  - tls-gateway
  hosts:
  - "*"
  # APIM Manager URL
  http:
  - match:
    - uri:
        prefix: /dev/clubfinder/service/clubs
    rewrite:
      uri: /v1/clubfinder/clubs/
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /dev/clubfinder/service/status
    rewrite:
      uri: /status
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 8080
</code></pre> <p>Now when I try to test my service using the ingress external IP like</p> <pre><code>curl -kv https://&lt;external-ip-of-ingress&gt;/dev/clubfinder/service/status
</code></pre> <p>I get the below error:</p> <pre><code>* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe5e800d600)
&gt; GET /dev/clubfinder/service/status HTTP/2
&gt; Host: x.x.x.x --&gt; Replacing IP intentionally
&gt; User-Agent: curl/7.54.0
&gt; Accept: */*
&gt;
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
&lt; HTTP/2 503
&lt; date: Tue, 07 May 2019 05:15:01 GMT
&lt; server: istio-envoy
&lt;
* Connection #0 to host x.x.x.x left intact
</code></pre> <p>Can someone please point out what is wrong here?</p>
<p>I was incorrectly defining my "VirtualService" yaml. In the route destination I was specifying port 8080, which is my application's listening (container) port, instead of the Service port, here the default HTTP port 80. The below yaml worked for me:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways:
  # The default `mesh` value used when left blank doesn't seem to propagate the rule properly.
  # For now, always use a list of FQDN gateways
  - tls-gateway
  hosts:
  - "*"
  # APIM Manager URL
  http:
  - match:
    - uri:
        prefix: /dev/clubfinder/service/clubs
    rewrite:
      uri: /v1/clubfinder/clubs/
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 80
  - match:
    - uri:
        prefix: /dev/clubfinder/service/status
    rewrite:
      uri: /status
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 80
</code></pre>
<p>I would like to use <a href="https://github.com/kubernetes-sigs/kind" rel="noreferrer">kind</a> (Kubernetes in Docker) to test a tool I am writing. I would love to have a test matrix of different Kubernetes versions to test against, is there a way to configure the kubernetes version in <a href="https://github.com/kubernetes-sigs/kind" rel="noreferrer">kind</a> somehow?</p>
<p>You can specify the image to be used for the nodes and choose any other published version:</p> <pre><code>kind create cluster --image "kindest/node:v1.14.1" </code></pre> <p>Available tags can be found at <a href="https://hub.docker.com/r/kindest/node/tags" rel="noreferrer">https://hub.docker.com/r/kindest/node/tags</a></p>
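<p>For a test matrix, you can loop over several published node images. A sketch (the version list and the <code>run-tests.sh</code> entrypoint are placeholders; the actual <code>kind</code> invocations are shown commented since they need <code>kind</code> on the PATH):</p>

```shell
# Map a Kubernetes version to the published kind node image.
node_image() {
  echo "kindest/node:$1"
}

# Kind cluster names may not contain dots, so replace them.
cluster_name() {
  echo "matrix-$1" | tr '.' '-'
}

# Hypothetical matrix loop (requires kind installed):
#   for v in v1.12.8 v1.13.6 v1.14.1; do
#     kind create cluster --name "$(cluster_name "$v")" --image "$(node_image "$v")"
#     ./run-tests.sh                      # your tool's test entrypoint
#     kind delete cluster --name "$(cluster_name "$v")"
#   done
```

<p>Each iteration gives you a throwaway cluster running exactly one of the published Kubernetes versions.</p>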
<p>I have installed Kafka on a local Minikube by using the Helm charts <a href="https://github.com/confluentinc/cp-helm-charts" rel="noreferrer">https://github.com/confluentinc/cp-helm-charts</a> following these instructions <a href="https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html" rel="noreferrer">https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html</a> like so:</p> <pre><code>helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360 </code></pre> <p>The kafka_config.yaml is almost identical to the default yaml, with the one exception being that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local minikube; hopefully that's not relevant to my problem).</p> <p>Also running on Minikube is a MySQL instance. Here's the output of <code>kubectl get pods --namespace myNamespace</code>:</p> <p><a href="https://i.stack.imgur.com/agWGv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/agWGv.png" alt="enter image description here"></a></p> <p>I want to connect MySQL and Kafka, using one of the connectors (like <a href="https://www.confluent.io/connector/debezium-mysql-cdc-connector/" rel="noreferrer">Debezium MySQL CDC</a>, for instance). In the instructions, it says:</p> <blockquote> <p><strong>Install your connector</strong></p> <p>Use the Confluent Hub client to install this connector with:</p> <p><code>confluent-hub install debezium/debezium-connector-mysql:0.9.2</code></p> </blockquote> <p>Sounds good, except 1) I don't know which pod to run this command on, 2) None of the pods seem to have a confluent-hub command available.</p> <p>Questions:</p> <ol> <li>Does confluent-hub not come installed via those Helm charts?</li> <li>Do I have to install confluent-hub myself?</li> <li>If so, which pod do I have to install it on?</li> </ol>
<p>Ideally this should be configurable as part of the <code>helm</code> script, but unfortunately it is not as of now. One way to work around this is to build a new Docker image from Confluent's Kafka Connect image. Download the connector manually, extract the contents into a folder, and copy the contents of that folder to a path in the container, something like below.</p> <p>Contents of Dockerfile:</p> <pre><code>FROM confluentinc/cp-kafka-connect:5.2.1
COPY &lt;connector-directory&gt; /usr/share/java
</code></pre> <p><code>/usr/share/java</code> is the default location where Kafka Connect looks for plugins. You could also use a different location and provide the new location (<code>plugin.path</code>) during your <code>helm</code> installation.</p> <p>Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the <code>helm</code> installation.</p> <p><a href="https://github.com/confluentinc/cp-helm-charts/blob/e17565cd5a6985a594155b12b08068cb5882e51f/charts/cp-kafka-connect/values.yaml" rel="noreferrer">Here</a> is the path to the <code>values.yaml</code> file. You can find the <code>image</code> and <code>plugin.path</code> values there.</p>
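<p>For instance, a sketch of the override file (the registry, image name, and tag are placeholders for whatever you pushed; check the chart's <code>values.yaml</code> for the exact keys in your chart version):</p> <pre><code># my-values.yaml, passed as: helm install -f my-values.yaml ...
cp-kafka-connect:
  image: myregistry.example.com/cp-kafka-connect-debezium
  imageTag: 5.2.1-dbz-0.9.2
</code></pre>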
<p>I have a stateful set for MQ, exposing two ports, 1414 for TCP and 9443 for HTTPS, and created a service of type LoadBalancer. Port 1414 for TCP is working fine: I'm able to telnet from other pods in the same cluster using the service name/cluster IP, and also able to connect to 1414 from outside the GKE cluster.</p> <p>But the problem is that port 9443 is not accessible from other pods in the cluster (service name/cluster IP) or from outside the cluster (external IP). Telnet works fine when I exec into the pod and test locally: telnet 127.0.0.1 9443</p> <p>Is there any configuration missing for the HTTPS service?</p> <p>Note: Port forwarding is working fine and I'm able to connect to the API: kubectl port-forward svc/mq-qmdtest 9443:9443</p> <p>Service definition:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: {{.Values.name}}
  namespace: {{.Values.namespace}}
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    run: {{.Values.name}}
spec:
  type: LoadBalancer
  loadBalancerIP: {{.Values.loadBalancerIP}}
  ports:
  - name: webui
    port: 9443
    protocol: TCP
  - name: mq
    port: 1414
    protocol: TCP
  selector:
    run: {{.Values.name}}
</code></pre> <p>Stateful set – container port configuration:</p> <pre><code>ports:
- containerPort: 9443
  protocol: TCP
  name: webui
- containerPort: 1414
  protocol: TCP
  name: mq
</code></pre>
<blockquote> <p>Telnet works fine when I exec into the pod and test locally: telnet 127.0.0.1 9443 ... Port forwarding is working fine and I'm able to connect to the API: kubectl port-forward svc/mq-qmdtest 9443:9443</p> </blockquote> <p>This is almost certainly caused by the pod only listening on localhost; <code>port-forward</code> also engages with localhost, so the fact that you cannot reach it from other pods in the cluster but you can from the pod itself and via port-forward means the service is only listening for <em>local</em> connections.</p> <p>Without knowing more about the software I can't offer you "open this file, change this value" type instructions, but be on the lookout for a "bind host" or any "listen" configuration that accepts both a host and a port, and in that case set the "bind host" to <code>0.0.0.0</code> or the "listen" configuration to <code>0.0.0.0:9443</code>.</p>
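<p>(If the workload here is IBM MQ's web console specifically, the bind host is usually adjusted through the mqweb server configuration. Treat the following as a hypothetical pointer and verify the exact property name against the MQ documentation for your version:)</p> <pre><code># run inside the container/pod as the mqm user, then restart the mqweb server
setmqweb properties -k httpHost -v "*"
</code></pre>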
<p>I'm developing a controller with a CRD. The CRD includes our custom stuff along with an embedded core.v1.PodSpec. (v1.13.1)</p> <p>I define a validation section in the CRD that can validate and enforce constraints on our custom fields, but I can't figure out how to do this for the embedded PodSpec. PodSpec is far too large, with far too many options, to add manually to the validation section of the CRD:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mystuff.example.com
spec:
  group: mystuff.example.com
  versions:
  - name: v1alpha1
    served: true
    storage: true
  names:
    kind: MyStuff
    plural: mystuffs
    singular: mystuff
    shortNames:
    - ms
  scope: Namespaced
  additionalPrinterColumns:
  - JSONPath: .status.phase
    name: Status
    type: string
  - JSONPath: .metadata.resourceVersion
    name: Version
    type: string
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
          - myVar1
          - myVar2
          - podSpec
          properties:
            myVar1:
              type: boolean
            myVar2:
              type: boolean
            # Here I need to validate a valid core.v1.PodSpec
            podSpec:
              type: core.v1.PodSpec
</code></pre> <p>How do other people approach this?</p> <p>I also need validation for any mechanism by which a user can submit a workload, i.e. directly against the kube apiserver or with kubectl.</p>
<p>In general, CRDs don't allow references to other objects. There was a discussion on that: <a href="https://github.com/kubernetes/kubernetes/issues/54579" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/54579</a>. The decision was made not to add references.</p> <p>A workaround is described in this comment: <a href="https://github.com/kubernetes/kubernetes/issues/54579#issuecomment-370372942" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/54579#issuecomment-370372942</a> I haven't used it, but you may try.</p>
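<p>As a sketch, one common approach is to leave the embedded pod spec as an opaque object in the CRD schema and do the real validation in the controller (or an admission webhook) by decoding the field into <code>core.v1.PodSpec</code>:</p> <pre><code>validation:
  openAPIV3Schema:
    properties:
      spec:
        properties:
          podSpec:
            # accept any object here; decode and validate it as a
            # core.v1.PodSpec in the controller / admission webhook
            type: object
</code></pre>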
<pre><code>[root@vpct-k8s-1 kubernetes]# kubectl get pods --all-namespaces
NAMESPACE     NAME               READY     STATUS    RESTARTS   AGE
kube-system   kube-ui-v2-ck0yw   0/1       Pending   0          1h
[root@vpct-k8s-1 kubernetes]# kubectl get rc --all-namespaces
NAMESPACE     CONTROLLER   CONTAINER(S)   IMAGE(S)                              SELECTOR                     REPLICAS   AGE
kube-system   kube-ui-v2   kube-ui        gcr.io/google_containers/kube-ui:v2   k8s-app=kube-ui,version=v2   1          1h
</code></pre> <p>Can't delete pods in pending state?</p>
<pre><code>kubectl get ns
kubectl get pods --all-namespaces
kubectl get deployment -n (namespacename)
kubectl get deployments --all-namespaces
kubectl delete deployment (deploymentname) -n (namespacename)
</code></pre>
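<p>Note that in the output above the pending pod is owned by a replication controller rather than a deployment, so deleting the pod alone will just get it recreated. Assuming you really want the kube-ui gone, the equivalent command would be along these lines:</p> <pre><code># delete the owning replication controller, which removes its pod
kubectl delete rc kube-ui-v2 --namespace=kube-system
</code></pre>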
<p>When trying to follow the <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">Kubernetes Configure Multiple Schedulers guide</a>, I cannot successfully get the my-scheduler pod running.</p> <p>I succeeded in building the image and pushing it to a registry, and the image shows in my Docker and Google Cloud Platform. But when I apply the deployment file to set up the pods, the status is <code>CrashLoopBackOff</code>.</p> <pre class="lang-sh prettyprint-override"><code>my-scheduler-64576b7897-8b549   0/1   CrashLoopBackOff   14   84m
</code></pre> <p>Here is the output for <code>kubectl describe pod my-scheduler-64576b7897-8b549 -n kube-system</code>:</p> <pre class="lang-sh prettyprint-override"><code>Name:               my-scheduler-64576b7897-8b549
Namespace:          kube-system
Priority:           0
PriorityClassName:  &lt;none&gt;
Node:               minikube/10.0.2.15
Start Time:         Tue, 07 May 2019 15:06:59 -0500
Labels:             component=scheduler
                    pod-template-hash=64576b7897
                    tier=control-plane
                    version=second
Annotations:        &lt;none&gt;
Status:             Running
IP:                 172.17.0.15
Controlled By:      ReplicaSet/my-scheduler-64576b7897
Containers:
  kube-second-scheduler:
    Container ID:  docker://a0cadf03dd5ffd11cbb6f1f8e669fc04cd6aa00aa6f361550f06f32785a4d63c
    Image:         gcr.io/my-gcp-project-239900/my-kube-scheduler:1.0
    Image ID:      docker-pullable://gcr.io/my-gcp-project-239900/my-kube-scheduler@sha256:0895687b7646d159438f086e8f8b23b253332930e169cf29d8b9951ebf3ba653
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Command:
      /usr/local/bin/kube-scheduler
      --address=1.1.1.1
      --leader-elect=false
      --scheduler-name=my-scheduler
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 07 May 2019 15:07:45 -0500
      Finished:     Tue, 07 May 2019 15:07:45 -0500
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 07 May 2019 15:07:24 -0500
      Finished:     Tue, 07 May 2019 15:07:24 -0500
    Ready:          False
    Restart Count:  3
    Requests:
      cpu:        100m
    Liveness:     http-get http://:10251/healthz delay=15s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:10251/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  &lt;none&gt;
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from my-scheduler-token-lzwvk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  my-scheduler-token-lzwvk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  my-scheduler-token-lzwvk
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  48s               default-scheduler  Successfully assigned kube-system/my-scheduler-64576b7897-8b549 to minikube
  Normal   Pulled     2s (x4 over 47s)  kubelet, minikube  Container image "gcr.io/my-gcp-project-239900/my-kube-scheduler:1.0" already present on machine
  Normal   Created    2s (x4 over 47s)  kubelet, minikube  Created container kube-second-scheduler
  Normal   Started    2s (x4 over 47s)  kubelet, minikube  Started container kube-second-scheduler
  Warning  BackOff    1s (x6 over 45s)  kubelet, minikube  Back-off restarting failed container
</code></pre> <p>And when I try to see the log of this pod, this error message appears:</p> <pre class="lang-sh prettyprint-override"><code>standard_init_linux.go:207: exec user process caused "exec format error"
</code></pre> <p>And my <code>Dockerfile</code> is a bit different from the guide; it looks like:</p> <pre><code>FROM busybox
ADD ./_output/local/bin/darwin/amd64/kube-scheduler /usr/local/bin/kube-scheduler
</code></pre> <p>I changed</p> <pre><code>ADD ./_output/dockerized/bin/linux/amd64/kube-scheduler /usr/local/bin/kube-scheduler
</code></pre> <p>into</p> <pre><code>ADD ./_output/local/bin/darwin/amd64/kube-scheduler /usr/local/bin/kube-scheduler
</code></pre> <p>because <code>./_output/local/bin/darwin/amd64/kube-scheduler</code> is where the scheduler file is
is actually located. I would also appreciate it if anyone could tell me why this would happen.</p> <hr> <p><strong>Some Updates</strong></p> <h2>Different file location</h2> <p>Following the guide, I typed these commands in my shell:</p> <pre class="lang-sh prettyprint-override"><code>git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make
</code></pre> <p>Executing <code>make</code> makes the <code>Makefile</code> generate the Kubernetes executables (including the kube-scheduler we want) in a certain path, and that path depends on your system. Since I'm using <strong>minikube</strong> to run Kubernetes and Docker, and it runs on <strong>darwin (amd64)</strong>, I think that's the reason why my path is different from the guide.</p> <h2>The "exec format error"</h2> <p>For this error</p> <pre class="lang-sh prettyprint-override"><code>standard_init_linux.go:207: exec user process caused "exec format error"
</code></pre> <p>I found <a href="https://github.com/moby/moby/issues/23865" rel="nofollow noreferrer">this issue</a> suggesting to add <code>#!/bin/bash</code> before the commands you try to run in the container. This tells the system to treat the commands as bash commands. Here in busybox it should be <code>/bin/sh</code>. (I don't know much about shell mechanisms, so correct me if I'm wrong.)</p> <p>So I changed these lines in <code>my-scheduler.yaml</code>:</p> <pre><code>- command:
  - /usr/local/bin/kube-scheduler
  - --address=0.0.0.0
  - --leader-elect=false
  - --scheduler-name=my-scheduler
</code></pre> <p>into</p> <pre><code>- command:
  - /bin/sh
  - /usr/local/bin/kube-scheduler
  - --address=0.0.0.0
  - --leader-elect=false
  - --scheduler-name=my-scheduler
</code></pre> <p>Now the <code>my-scheduler</code> pods still fail and get the <code>CrashLoopBackOff</code> error. The good news is that I now get logs when executing <code>kubectl logs</code>.
The bad news is that the logs are unreadable, like this:</p> <pre class="lang-sh prettyprint-override"><code>/usr/local/bin/kube-scheduler: line 1: ����: not found
/usr/local/bin/kube-scheduler: line 2: syntax error: unexpected ")"
</code></pre> <p>I assume this happens because of something in the Docker image. But since I'm using the scripts right from the <a href="https://github.com/kubernetes/kubernetes" rel="nofollow noreferrer">kubernetes GitHub page</a>, there shouldn't be any bug.</p> <p>The same log shows up when I run <code>docker run -it {my-image-id} /bin/sh</code>, go into my image, and run the <code>kube-scheduler</code> executable.</p> <p>I can't figure out what in the Go binary causes this problem.</p> <hr> <h1>Another alternative</h1> <p>I ended up trying another way to build my custom scheduler.</p> <p>I followed this tutorial: <a href="https://www.youtube.com/watch?v=IYcL0Un1io0&amp;t=1807s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=IYcL0Un1io0&amp;t=1807s</a></p> <p>And you can find the source code here: <a href="https://github.com/kelseyhightower/scheduler" rel="nofollow noreferrer">https://github.com/kelseyhightower/scheduler</a></p> <p>Compared to the official guide, this one is more lightweight (in the official guide you have to build the whole thing just to get the scheduler binary).</p> <p>Hope this helps if someone runs into the same problem.</p>
<p>This totally looks like you are using the <code>darwin</code> <code>kube-scheduler</code> Go binary to run in a <a href="https://hub.docker.com/_/busybox" rel="nofollow noreferrer">Linux container</a>.</p> <p>You probably ran this on your Mac:</p> <pre><code>git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make
</code></pre> <p>which created the <code>darwin</code> binaries and not the Linux binaries. I would try to run that from a Linux box, or change the make target to use <a href="https://gist.github.com/asukakenji/f15ba7e588ac42795f421b48b8aede63" rel="nofollow noreferrer"><code>GOOS=linux</code> and <code>GOARCH=amd64</code></a>.</p> <p>Note that K8s mainly supports Linux, with Windows supported only for worker nodes as of this writing.</p>
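<p>As a quick way to confirm the diagnosis, and a sketch of the Linux rebuild (paths follow the upstream build scripts; verify against your checkout):</p>

```shell
# A Mach-O (macOS) binary cannot exec inside a Linux container:
file ./_output/local/bin/darwin/amd64/kube-scheduler   # reports Mach-O

# Rebuild inside the project's Linux build container instead:
build/run.sh make kube-scheduler
# The Linux binary lands under ./_output/dockerized/bin/linux/amd64/,
# which is exactly the path the guide's Dockerfile ADDs.

file ./_output/dockerized/bin/linux/amd64/kube-scheduler   # reports ELF
```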
<p>I am currently working on deploying the WSO2 APIM on a Kubernetes cluster. All the pods and services are configured and running correctly. On the ingress settings we want to use expose default HTTP ports 80/443 instead of the default ports used by the product i.e. 9763/9443. We are not using any offsets for the port. </p> <p>When testing the config we notice that the URL is always being redirected to port 9443 which is being blocked by the ingress. </p> <p>Enabling HTTP protocol for admin also does not work as the redirection is happening to the default HTTP port configured in the product i.e. 9763</p> <p>Is there a configuration in the carbon.xml/axis2.xml which can be changed to achieve the desired result or do we need to make changes in the Ingress setting for rewriting the URLs.</p>
<p>You can set proxy ports in the <code>PRODUCT_HOME/repository/conf/tomcat/catalina-server.xml</code> file. For the 9443 connector set the <code>proxyPort</code> to 443, and for the 9763 connector set it to 80:</p> <pre><code>&lt;Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           proxyPort="80"
           ............

&lt;Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="443"
           ...........
</code></pre> <p>Please refer to <a href="https://docs.wso2.com/display/Carbon430/Adding+a+Custom+Proxy+Path" rel="nofollow noreferrer">https://docs.wso2.com/display/Carbon430/Adding+a+Custom+Proxy+Path</a></p>
<p>I want to deploy a Jupyter notebook on a Kubernetes cluster. Following the official documentation (<a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub.html" rel="nofollow noreferrer">https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub.html</a>), I ran the following command:</p> <pre><code># Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub

helm upgrade --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --version=0.8.2 \
  --values jupyter-hub-config.yaml
</code></pre> <p>Where <code>jupyter-hub-config.yaml</code> is a config file as mentioned in the doc, containing a token generated by the command <code>openssl rand -hex 32</code>.</p> <p>While running the aforementioned command, I get the following error:</p> <pre><code>Error: release jhub failed: persistentvolumeclaims "hub-db-dir" is forbidden: Internal error occurred: 8 default StorageClasses were found
</code></pre> <p>I tried looking into various methods of installing <code>jhub</code>, but none pointed me to any difference in this approach that I would consider to be causing the error here.</p> <p>The output of the command <code>kubectl get storageclass</code> is:</p> <pre><code>NAME                                     PROVISIONER             AGE
aviral-worker-volume (default)           kubernetes.io/aws-ebs   14d
default (default)                        kubernetes.io/aws-ebs   14d
es-ebs-storage (default)                 kubernetes.io/aws-ebs   7d
gp2 (default)                            kubernetes.io/aws-ebs   14d
prometheus-monitoring-volume (default)   kubernetes.io/aws-ebs   8d
replicated (default)                     kubernetes.io/aws-ebs   14d
replicated-premkit (default)             kubernetes.io/aws-ebs   14d
replicated-statsd (default)              kubernetes.io/aws-ebs   14d
</code></pre>
<p>You have 8 default storage classes in your cluster, which is definitely not normal. You should make sure you have only one default storage class.</p> <p>I don't know which one should be default, it's totally up to your cluster, I don't wanna be responsible for that decision. But for all storage classes except the default you need to do this: </p> <pre><code>kubectl patch storageclass &lt;your-class-name&gt; -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' </code></pre>
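<p>For example (hypothetical choice; keeping <code>gp2</code> from the question's list as the one default), you could unset the flag on every other class in a loop:</p>

```shell
KEEP=storageclass.storage.k8s.io/gp2
for sc in $(kubectl get storageclass -o name); do
  [ "$sc" = "$KEEP" ] && continue
  # Mark every other class as non-default
  kubectl patch "$sc" -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
done
```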
<p>I have a JSON file with some keys like this:</p> <pre><code>{
  &quot;a&quot;: &quot;someval&quot;,
  &quot;b&quot;: &quot;someval&quot;
  .. more keys
}
</code></pre> <p>How do I add these keys to a secret in Kubernetes?</p> <p>When I try <code>$ kubectl create secret generic mysecret --from-file=file.json</code> it returns a secret containing the file, but <strong>I want to map the contents of the file into the secret</strong>, not add the file itself as a secret.</p> <p>Output:</p> <p><code>$ kubectl get secret mysecret -o yaml</code></p> <pre><code>apiVersion: v1
data:
  file.json: #base64 encoded stuff here.
kind: Secret
</code></pre> <hr /> <p><strong>Wanted output:</strong></p> <p><code>$ kubectl get secret mysecret -o yaml</code></p> <pre><code>apiVersion: v1
data:
  a: someval
  b: someval
kind: Secret
</code></pre> <p>What am I doing wrong?</p>
<p>If you have flat (not nested) JSON then try this (assuming you have <a href="https://stedolan.github.io/jq/download/" rel="noreferrer"><code>jq</code></a> tool installed):</p> <pre><code>kubectl create secret generic test --from-env-file &lt;(jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" YOUR_FILE.json) </code></pre>
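<p>To see what that filter produces before feeding it to <code>kubectl</code> (a sketch; assumes <code>jq</code> is installed), run it on a sample file:</p>

```shell
# Sample flat JSON, like the one in the question
cat > file.json <<'EOF'
{"a":"someval","b":"someval"}
EOF

# The same jq program, single-quoted so no backslash-escaping is needed
jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' file.json
# a=someval
# b=someval
```

<p>Those <code>key=value</code> lines are exactly the format <code>--from-env-file</code> expects, which is why each JSON key becomes its own entry in the secret.</p>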
<p>I am using k8s in mac-docker-desktop. I deploy a MySQL pod with the below config.</p> <p>Run with: <code>kubectl apply -f mysql.yaml</code></p> <pre><code># secret
apiVersion: v1
kind: Secret
metadata:
  name: mysql
type: Opaque
data:
  # root
  mysql-root-password: cm9vdAo=
---
# configMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-conf
data:
  database: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      volumes:
      - name: mysql
        persistentVolumeClaim:
          claimName: mysql
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql
              key: mysql-root-password
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: mysql-conf
              key: database
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql
          mountPath: /var/lib/mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# services
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
</code></pre> <p>After that, it shows OK. Then I wanted to connect to the MySQL server with the node IP, but it failed.
Then I exec'ed into the pod, and that failed as well.</p> <p>I execute a shell in the pod but can't log in:</p> <pre><code>☁  gogs-k8s  kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
blog-59fb8cbd44-frmtx    1/1     Running   0          37m
blog-59fb8cbd44-gdskp    1/1     Running   0          37m
blog-59fb8cbd44-qrs8f    1/1     Running   0          37m
mysql-6c794ccb7b-dz9f4   1/1     Running   0          31s
☁  gogs-k8s  kubectl exec mysql-6c794ccb7b-dz9f4 -it bash
root@mysql-6c794ccb7b-dz9f4:/# ls
bin  boot  dev  docker-entrypoint-initdb.d  entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
root@mysql-6c794ccb7b-dz9f4:/# echo $MYSQL_ROOT_PASSWORD
root
root@mysql-6c794ccb7b-dz9f4:/# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
</code></pre> <p>Is there any problem with my config file?</p>
<p>Probably you have an invalid base64-encoded password. Try this one:</p> <pre><code>data:
  pass: cm9vdA==
</code></pre>
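<p>The root cause is worth spelling out: the question's value <code>cm9vdAo=</code> decodes to <code>root</code> plus a trailing newline, because plain <code>echo</code> appends one before <code>base64</code> runs. Encode without the newline:</p>

```shell
echo root | base64      # cm9vdAo=  (encodes "root\n" - the broken value)
printf root | base64    # cm9vdA==  (encodes just "root" - use this)
```

<p>(<code>echo -n root | base64</code> works too, on shells whose <code>echo</code> supports <code>-n</code>.)</p>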
<p>I have a container running at a URL like <a href="http://localhost:8000/ps/app/ui/?pid=201" rel="nofollow noreferrer">http://localhost:8000/ps/app/ui/?pid=201</a>. The container is deployed on Kubernetes and exposed through a service as "ps-app-ui:8000". I want to create an ingress which is accessible from outside. The ingress template is:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ps-ingress
spec:
  rules:
  - http:
      paths:
      - path: /myapp/
        backend:
          serviceName: ps-app-ui
          servicePort: 8000
</code></pre> <p>The problem is it's not working with this ingress. I also tried adding "ingress.kubernetes.io/rewrite-target: /" but had no success. Can anyone help me make my application accessible via "<a href="http://INGRESS-IP/myapp/ps/app/ui/?pid=201" rel="nofollow noreferrer">http://INGRESS-IP/myapp/ps/app/ui/?pid=201</a>"?</p> <p>I will be really grateful.</p>
<p>nginx ingress controller version 0.22.0 or higher changed the way <code>rewrite-target</code> works. You need to regex-match the path and reference the capture group in the rewrite target:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$2
...
rules:
- host: rewrite.bar.com
  http:
    paths:
    - backend:
        serviceName: http-svc
        servicePort: 80
      path: /something(/|$)(.*)
</code></pre> <p>Refer to the changelog <a href="https://github.com/kubernetes/ingress-nginx/releases" rel="nofollow noreferrer">here</a> and the how-to article <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">here</a>.</p>
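<p>Applied to the ingress from the question (service name, port and the <code>/myapp</code> prefix taken from there), it would look like this; with it, a request for <code>/myapp/ps/app/ui/?pid=201</code> is forwarded to the backend as <code>/ps/app/ui/?pid=201</code>:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ps-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /myapp(/|$)(.*)
        backend:
          serviceName: ps-app-ui
          servicePort: 8000
```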
<p>By default, <code>kubectl get pods</code> outputs</p> <pre><code>NAME READY STATUS RESTARTS AGE my-podob-6f7f9d798c-grhmd 1/1 Running 0 6d17h </code></pre> <p>Now I'd like to add a few extra columns to this, so I've set up a template.txt file with:</p> <pre><code>NAME IMAGES metadata.name metadata.annotations.imageTag </code></pre> <p>I can't seem to figure out how <code>READY</code> is calculated from the output. Ideally, I would append to the <code>get pods</code> output with a few custom columns but I can't tell if that's possible.</p>
<p>There is a feature request for appending extra columns <a href="https://github.com/kubernetes/kubernetes/issues/71612" rel="nofollow noreferrer">here</a>.</p> <p>The "READY" column is a special case in the source (<a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/printers/internalversion/printers.go#L558" rel="nofollow noreferrer">printers.go#L558</a>), and its value is calculated inside the printer implementation.</p> <p>Unfortunately it seems that it's impossible to "count things" like this in both JSONPath and Go templates.</p> <p>Maybe consider creating an alias that runs the ordinary <code>kubectl get pods</code> and then the one with your custom columns?</p>
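<p>If you only need the extra columns side by side for eyeballing (a sketch; it assumes both invocations return pods in the same order, which <code>kubectl</code> normally does):</p>

```shell
# Glue the default view to a custom-columns view of the same pod list
paste <(kubectl get pods) \
      <(kubectl get pods -o custom-columns=IMAGES:.metadata.annotations.imageTag)
```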
<p>I built an application using Docker Compose which included an Nginx instance accepting connections on port 80:</p> <pre><code> nginx: image: nginx:1.15.12-alpine container_name: nginx volumes: - etc. ports: - 80:80 </code></pre> <p>I'd like to spin up this application on Kubernetes running on my local machine (macOS). So I've run <code>kompose convert</code>, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/#kompose-convert" rel="nofollow noreferrer">documented here</a>.</p> <p>This generated <code>nginx-service.yaml</code> which looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: kompose convert kompose.version: 1.18.0 () creationTimestamp: null labels: io.kompose.service: nginx name: nginx spec: ports: - name: "80" port: 80 targetPort: 80 selector: io.kompose.service: nginx status: loadBalancer: {} </code></pre> <p>I ran <code>kubectl apply</code> with all of the YAML files produced by <code>kompose</code>, and then <code>kubectl describe svc nginx</code>:</p> <pre><code>Name: nginx Namespace: myproject Labels: io.kompose.service=nginx Annotations: kompose.cmd=kompose convert kompose.version=1.18.0 () kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"kompose.cmd":"kompose convert","kompose.version":"1.18.0 ()"},"creationTimestamp":null,... Selector: io.kompose.service=nginx Type: ClusterIP IP: 172.30.110.242 Port: 80 80/TCP TargetPort: 80/TCP Endpoints: Session Affinity: None Events: &lt;none&gt; </code></pre> <p>However, I cannot access the web server by navigating to <code>http://172.30.110.242:80</code> on the same machine.</p> <p>There is documentation on <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer">accessing services running on clusters</a>. 
I'm new to k8s and I'm not sure how to diagnose the problem and pick the right solution of the options they list.</p> <p>Is it a defect in <code>kompose</code> that it did not generate a comparable service config file? </p>
<p>See:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p> <p>Your "connect to" URL from your local machine into the K8s world will not be "172.x.x.x". It will probably be something like 192.168.99.100:33333 (the port number will be different). Run this:</p> <p><code>minikube service myservicename -n "default" --url</code></p> <p>and see what that gives you.</p> <p>Basically, you need to "expose" the K8s world to the outside world. This YAML should help:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: myservicename
  namespace: mycustomnamespace
  labels:
    name: myservicemetadatalabel
spec:
  type: NodePort
  ports:
  - name: myfirstportname
    port: 80
    targetPort: 80
  selector:
    myexamplelabelone: mylabelonevalue
    myexamplelabeltwo: mylabeltwovalue
</code></pre> <p>The selector refers to your pod/container setup:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myfirstpodmetadataname
  namespace: mycustomnamespace
  labels:
    myexamplelabelone: mylabelonevalue
    myexamplelabeltwo: mylabeltwovalue
</code></pre> <p>"Selectors" are outside the scope of this question... but the above will give you the breadcrumb you need.</p> <p>Also see:</p> <p><a href="https://stackoverflow.com/questions/48361841/how-to-expose-k8-pods-to-the-public-internet/48366206#48366206">How to expose k8 pods to the public internet?</a></p>
<p>I want to utilize <a href="https://github.com/helm/helm/blob/master/docs/charts_hooks.md" rel="nofollow noreferrer">Charts Hooks</a>'s <code>post-install</code> to do some action on my deployment's container.</p> <p>For example, I have a <code>php-fpm</code> container that consists of a <code>Laravel</code> application, and I want to run <code>php artisan key:gen</code> on install. Since it's a one time command so I couldn't place it on the <code>postStart</code> lifecycle, otherwise it would keep overwriting the <code>APP_KEY</code>.</p> <p>How can I use <strong><em>Charts Hooks</em></strong> to achieve it? Or is there a better way?</p>
<p>Your job needs to run a container that contains <code>kubectl</code>, and you would execute this script to exec into the application container. Since <code>kubectl exec</code> <a href="https://github.com/kubernetes/kubernetes/issues/8876" rel="nofollow noreferrer">doesn't support selection by labels</a> you need to retrieve the pod name beforehand:</p> <pre><code>pod=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -l YOUR-LABELS=YOUR-VALUES)
kubectl exec "$pod" -- php artisan key:gen
</code></pre>
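<p>A minimal sketch of the hook itself (the image, label selector and chart names are assumptions, not taken from your chart): a Job annotated as a <code>post-install</code> hook that runs the commands above. The Job's service account also needs RBAC permissions for <code>get pods</code> and <code>pods/exec</code>.</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-key-gen"
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: key-gen
        image: bitnami/kubectl:latest  # any image that ships kubectl
        command: ["/bin/sh", "-c"]
        args:
        - |
          pod=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" -l app=my-laravel-app)
          kubectl exec "$pod" -- php artisan key:gen
```

<p>Because the hook runs once after install (and <code>hook-succeeded</code> deletes the Job afterwards), the <code>APP_KEY</code> is only generated one time, unlike a <code>postStart</code> lifecycle handler which fires on every container start.</p>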
<p>I have been searching for ages for a community-driven Helm chart for Jenkins X. I can find them for regular Jenkins, but I need to set up Jenkins X, and I just can't seem to find any information.</p>
<p>We recommend you don't try to install Jenkins X via a chart, but via the CLI install tool we've built: use <a href="https://jenkins-x.io/getting-started/create-cluster/" rel="nofollow noreferrer">jx create cluster</a> if you don't have a Kubernetes cluster, or <a href="https://jenkins-x.io/getting-started/install-on-cluster/" rel="nofollow noreferrer">jx install</a> if you do.</p> <p>Jenkins X requires lots of things to be set up (git repos, git/docker registry/jenkins tokens, ingress, TLS, DNS et al); doing it by hand via a helm chart is gonna be very hard.</p>
<p>We have a number of services behind an API gateway which is itself behind ingress-nginx. We're trying to use HTTP/2 to speed up data transfer to the front-end but all of our connections are still being done with HTTP/1.1.</p> <p>The connection from client to nginx is over HTTPS, but nginx communicates with our API gateway using HTTP, and the gateway also uses HTTP to communicate with the backend services.</p> <p>Do we need to use HTTPS from end-to-end to get HTTP/2 to work? If so, what's the best way to set this up re: using certificates? If not, what could be causing the connection to drop to HTTP/1.1?</p> <p>We are using ingress-nginx version 0.21.0, which has nginx 1.15.6 and OpenSSL 1.1.1, which should be sufficient to support TLS 1.3/ALPN/HTTP2. Our nginx-configuration configmap has <code>use-http2</code> set to <code>true</code> and I can see that the pod's <code>/etc/nginx.conf</code> has a <code>listen ... http2;</code> line.</p> <hr> <p><em>Edit 10/05/2019:</em></p> <p>Further to the comments of @Barry Pollard and @Rico, I've found out that AWS Elastic Load Balancer, which sits in front of our ingress-nginx controller, doesn't support HTTP/2. I've cut nginx out of the stack and our API Gateway is being provisioned its own Network Load Balancer. However, we're still on HTTP/1.1. It looks like ASP.Net Core 2.2's HTTP server Kestrel uses HTTP/2 by default, so I'm not sure why the connection is still dropping to 1.1.</p>
<p>Like @BarryPollard said you shouldn't need HTTP/2 end to end to establish HTTP/2 connections on your browser. </p> <p>It sounds like whatever you are using for a client is dropping to HTTP/1.1, make sure you try with one of the following:</p> <ul> <li>Chrome 51</li> <li>Firefox 53</li> <li>Edge 12</li> <li>Internet Explorer 11</li> <li>Opera 38</li> </ul> <p>You didn't specify what architecture is fronting your nginx. Is it connected directly to the internet? or it's going through cloud load balancer? CDN? You can also test with Wireshark as described <a href="https://stackoverflow.com/a/51644816/2989261">here</a>.</p>
<p>I created a Kubernetes cluster through Kops. The configuration and the ssh keys were in a machine that I don't have access to anymore. Is it possible to ssh to the nodes through kops even if I have lost the key? I see there is a command - </p> <blockquote> <p>kops get secrets</p> </blockquote> <p>This gives me all the secrets. Can I use this to get ssh access to the nodes and how to do it?</p> <p>I see the cluster state is stored in S3. Does it store the secret key as well?</p>
<p>You can't recover the private key, but you should be able to install a new public key following this procedure:</p> <pre><code>kops delete secret --name &lt;clustername&gt; sshpublickey admin
kops create secret --name &lt;clustername&gt; sshpublickey admin -i ~/.ssh/newkey.pub
# reconfigure the auto-scaling groups
kops update cluster --yes
# immediately roll all the machines so they have the new key (optional)
kops rolling-update cluster --name &lt;clustername&gt; --yes
</code></pre> <p>Taken from this document:</p> <p><a href="https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access" rel="noreferrer">https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access</a></p>
<p>I just started to learn Kubernetes. I know what a rollback is, but I have never heard of rollout. Is "<strong>rollout</strong>" related to rollback in any way? Or "<strong>rollout</strong> is similar to deploying something? </p> <p><a href="https://i.stack.imgur.com/p77yB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p77yB.png" alt="("></a></p>
<p>Rollout simply means a rolling update of the application. A rolling update means that the application is updated gradually, gracefully and with no downtime. So when you push a new version of your application's Docker image and then trigger a rollout of your deployment, Kubernetes first launches a new pod with the new image while keeping the old version running. When the new pod settles down (passes its readiness probe), Kubernetes kills the old pod and switches the Service endpoints to point to the new version. When you have multiple replicas, this happens gradually until all replicas are replaced with the new version.</p>

<p>This behavior, however, is not the only one possible. You can tune the rolling update settings in your deployment's <code>spec.strategy</code> section.</p>

<p>The official docs even have an interactive tutorial on the Rolling Update feature; it perfectly explains how it works: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/</a></p>
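<p>The <code>kubectl rollout</code> subcommands are what tie rollout and rollback together (the deployment and image names below are hypothetical):</p>

```shell
kubectl set image deployment/myapp myapp=myrepo/myapp:v2  # triggers a rollout
kubectl rollout status  deployment/myapp                  # watch it progress
kubectl rollout history deployment/myapp                  # list recorded revisions
kubectl rollout undo    deployment/myapp                  # rollback to the previous revision
```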
<p><strong>Question</strong></p> <p>Is it problematic to create two Services for the same pod, one for internal access and the other for external access?</p> <p><strong>Context</strong></p> <p>I have a simple app running on GKE.</p> <p>There are two pods, each with one container:</p> <ol> <li>flask-pod, which runs a containerized flask app</li> <li>postgres-pod, which runs a containerized postgres DB</li> </ol> <p>The flask app accesses the postgres DB through a ClusterIP Service around the postgres DB.</p> <p><strong>Concern</strong></p> <p>I also have connected a client app, TablePlus (running on my machine), to the postgres DB through a LoadBalancer Service. Now I have 2 separate services to access my postgres DB. Is this redundant, or can this cause problems?</p> <p>Thanks for your help.</p>
<p>It is perfectly fine. If you look at StatefulSets, you define one headless service that is used for internal purpose and another service to allow access from clients.</p>
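<p>A sketch of that two-Service setup for the question's postgres pod (names and the <code>app: postgres</code> label are assumptions; both Services track the same endpoints):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-internal   # used by the flask app inside the cluster
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-external   # used by TablePlus from outside
spec:
  type: LoadBalancer
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
```

<p>If exposing the database publicly is a concern, you can restrict the LoadBalancer with <code>spec.loadBalancerSourceRanges</code> so only your machine's IP can reach it.</p>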
<p>Currently, I am trying to deploy my microservice endpoint Docker image on a Kubernetes cluster by creating a Helm chart. For this, I created the chart and changed the parameters in values.yaml and deployment.yaml for the port change. I also want to access it from my Angular front end, so I added service type=NodePort. When I described the service, it gave me the port <code>30983</code> to access.</p> <p>And I accessed it like <a href="http://node-ip:30983/endpoint" rel="noreferrer">http://node-ip:30983/endpoint</a></p> <p>But I am only getting the "site can't be reached" message. Let me add the details of what I did here.</p> <p>My values.yaml file contains the following to set the service type:</p> <p><a href="https://i.stack.imgur.com/f0KZK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/f0KZK.png" alt="enter image description here"></a></p> <p>And my templates/service.yaml file contains the following:</p> <p><a href="https://i.stack.imgur.com/eF0zY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/eF0zY.png" alt="enter image description here"></a></p> <p>And my templates/deployment.yaml file contains the following:</p> <p><a href="https://i.stack.imgur.com/63VTS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/63VTS.png" alt="enter image description here"></a></p> <p>And I tried to access it like the following:</p> <p><a href="http://192.168.16.177:30983/" rel="noreferrer">http://192.168.16.177:30983/</a></p> <p>And I am only getting "site can't be reached".</p> <p>NB: when I described the service, I got the following:</p> <p><a href="https://i.stack.imgur.com/Gdo5c.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Gdo5c.png" alt="enter image description here"></a></p> <p>The output of <strong>kubectl get pod --show-labels</strong> is in the following screenshot:</p> <p><a href="https://i.stack.imgur.com/C0ziT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/C0ziT.png" alt="enter
image description here"></a></p> <p><strong>Updated</strong></p> <p>When I use the <code>kubectl describe pod</code> command, I get the following:</p> <p><a href="https://i.stack.imgur.com/O6qx2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/O6qx2.png" alt="enter image description here"></a></p> <p><strong>Updated Error</strong></p> <pre><code>Readiness probe failed: HTTP probe failed with statuscode: 404
Liveness probe failed: HTTP probe failed with statuscode: 404
</code></pre> <p>How can I access my endpoint from the deployment?</p>
<p>Try this for the healthcheck probes:</p> <pre><code>livenessProbe:
  tcpSocket:
    port: 8085
readinessProbe:
  tcpSocket:
    port: 8085
</code></pre>