<p>I'm trying to deploy two Node.js applications as two separate containers using Kubernetes, with a defined service and an ingress with a static IP and an SSL certificate.</p> <p>I would like to deploy these micro-services using GCP's Kubernetes Engine. I added the second micro-service later than the first one. With only one container in the pod, everything works fine. I've defined three YAML files: deployment.yaml, service.yaml, ingress.yaml.</p> <p>deployment.yaml</p> <pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: qa-chatbot-backend-deployment
spec:
  selector:
    matchLabels:
      app: service-backend1
  replicas: 1
  template:
    metadata:
      labels:
        app: service-backend1
    spec:
      containers:
      - name: serice-backend1
        image: gcr.io/project-id/serice-backend1:v1.0.1
        imagePullPolicy: Always
        command: ["npm", "start"]
        livenessProbe:
          httpGet:
            path: /
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 25
          periodSeconds: 30
          successThreshold: 1
          failureThreshold: 2
        readinessProbe:
          httpGet:
            path: /
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 25
          periodSeconds: 30
          successThreshold: 1
          failureThreshold: 2
        ports:
        - name: service1-port
          containerPort: 8081
      - name: service-backend2
        image: gcr.io/project-id/serice-backend2:v1.0.1
        imagePullPolicy: Always
        command: ["npm", "start"]
        livenessProbe:
          httpGet:
            path: /api/test
            port: 8082
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 25
          periodSeconds: 30
          successThreshold: 1
          failureThreshold: 2
        readinessProbe:
          httpGet:
            path: /api/test
            port: 8082
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 25
          periodSeconds: 30
          successThreshold: 1
          failureThreshold: 2
        ports:
        - name: service2-port
          containerPort: 8082
</code></pre> <p>service.yaml</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-kube
spec:
  type: LoadBalancer
  ports:
  - targetPort: service1-port
    port: 80
    protocol: TCP
  selector:
    app: service-backend1
</code></pre> <p>ingress.yaml</p> <pre><code>apiVersion: extensions/v1beta1
kind: 
Ingress
metadata:
  labels:
    app: service-backend1
  name: ingress-kube
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app-static-ip
spec:
  tls:
  - hosts:
    - custom-host.com
    secretName: custom-host-secret-name
  rules:
  - host: custom-host.com
    http:
      paths:
      - backend:
          serviceName: service-kube
          servicePort: 80
</code></pre> <p>With this configuration, only the first service is reachable.</p> <p>I've tried to add multiple ports in service.yaml:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-kube
spec:
  type: LoadBalancer
  ports:
  - targetPort: service1-port
    port: 80
    protocol: TCP
  - targetPort: service2-port
    port: 80
    protocol: TCP
  selector:
    app: service-backend1
</code></pre> <p>But I receive an error:</p> <pre><code>The Service "service-kube" is invalid: spec.ports[1]: Duplicate value: core.ServicePort{Name:"", Protocol:"TCP", Port:80, TargetPort:intstr.IntOrString{Type:0, IntVal:0, StrVal:""}, NodePort:0}
</code></pre> <p>My goal is to expose two backends on the domain custom-host.com: one reachable on a specific path (api/*), the other reachable on all the other endpoints.</p> <p>Thank you for your help</p>
<p>You can't have a single service port send traffic to two different target ports. There must be two different ports on your service (or use two separate services). Then you should have two <code>paths</code> in your ingress that route to the appropriate service port.</p> <p>You need to do something like this...</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-kube
spec:
  type: LoadBalancer
  ports:
  - targetPort: service1-port
    port: 81
    protocol: TCP
  - targetPort: service2-port
    port: 82
    protocol: TCP
  selector:
    app: service-backend1
</code></pre> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: service-backend1
  name: ingress-kube
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app-static-ip
spec:
  tls:
  - hosts:
    - custom-host.com
    secretName: custom-host-secret-name
  rules:
  - host: custom-host.com
    http:
      paths:
      - backend:
          serviceName: service-kube
          servicePort: 81
        path: /api
      - backend:
          serviceName: service-kube
          servicePort: 82
        path: /
</code></pre>
<p>An MCVE is here: <a href="https://github.com/chrissound/k8s-metallb-nginx-ingress-minikube" rel="nofollow noreferrer">https://github.com/chrissound/k8s-metallb-nginx-ingress-minikube</a> (just run <code>./init.sh</code> and <code>minikube addons enable ingress</code>).</p> <p>The IP assigned to the ingress keeps getting reset, and I don't know what is causing it. Do I perhaps need additional configuration?</p> <pre><code>kubectl get ingress --all-namespaces
NAMESPACE       NAME          HOSTS         ADDRESS           PORTS     AGE
chris-example   app-ingress   example.com   192.168.122.253   80, 443   61m
</code></pre> <p>And a minute later:</p> <pre><code>NAMESPACE       NAME          HOSTS         ADDRESS   PORTS     AGE
chris-example   app-ingress   example.com             80, 443   60m
</code></pre> <hr> <p>In terms of configuration, I've just applied:</p> <pre><code># metallb
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

# nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
</code></pre> <hr> <p>Ingress controller logs:</p> <pre><code>I0714 22:00:38.056148       7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8681", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
I0714 22:01:19.153298       7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8743", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
I0714 22:01:38.051694       7 status.go:296] updating Ingress chris-example/app-ingress status from [{192.168.122.253 }] to []
I0714 22:01:38.060044       7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8773", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
</code></pre> <p>And the metallb controller logs:</p> <pre><code>{"caller":"main.go:72","event":"noChange","msg":"service converged, no change","service":"kube-system/kube-dns","ts":"2019-07-14T21:58:39.656725017Z"}
{"caller":"main.go:73","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2019-07-14T21:58:39.656741267Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.6567588Z"}
{"caller":"main.go:72","event":"noChange","msg":"service converged, no change","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.656842026Z"}
{"caller":"main.go:73","event":"endUpdate","msg":"end of service update","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.656873586Z"}
</code></pre> <hr> <p>As a test I deleted the deployment and daemonset relating to metallb:</p> <pre><code>kubectl delete deployment -n metallb-system controller
kubectl delete daemonset -n metallb-system speaker
</code></pre> <p>And after the external IP is set, it once again resets...</p>
<p>I was curious and recreated your case. I was able to properly expose the service.</p> <p>First of all: you don't need to use the minikube ingress addon when deploying your own NGINX. If you do, you have two ingress controllers in the cluster, and that leads to confusion later. Run: <code>minikube addons disable ingress</code></p> <p>Sidenote: you can see this confusion in the IP your ingress got assigned: <code>192.168.122.253</code>, which is not in the CIDR range <code>192.168.39.160/28</code> you defined in <code>configmap-metallb.yaml</code>.</p> <hr /> <p>You need to change the service type of <code>ingress-nginx</code> to <code>LoadBalancer</code>. You can do this by running:</p> <pre><code>kubectl edit -n ingress-nginx service ingress-nginx
</code></pre> <p>Additionally, you can change the <code>app-lb</code> service to <code>NodePort</code>, since it doesn't need to be exposed outside of the cluster - the ingress controller will take care of it.</p> <hr /> <h3>Explanation</h3> <p>It's easier to think of the <code>Ingress</code> object as a <code>ConfigMap</code> rather than a <code>Service</code>.</p> <p>MetalLB takes the configuration you provided in its <code>ConfigMap</code> and waits for an IP request API call. When it gets one, it provides an IP from the CIDR range you specified.</p> <p>In a similar way, the ingress controller (NGINX in your case) takes the configuration described in the <code>Ingress</code> object and uses it to route traffic to the desired place in the cluster.</p> <p>Then the <code>ingress-nginx</code> service is exposed outside of the cluster with the assigned IP.</p> <p>Inbound traffic is directed by the ingress controller (NGINX), based on the rules described in the <code>Ingress</code> object, to a service in front of your application.</p> <h3>Diagram</h3> <pre><code>ConfigMap                         Ingress
    |                                 |
    | CIDR range to provision         | rules described in spec
    v                                 |
MetalLB load balancer                 |
    |                                 |
    | external IP assigned            |
    | to service                      |
    v                                 v
Inbound traffic --&gt; ingress-nginx service --&gt; Ingress controller (NGINX pod)
                                                          |
                                                          v
                                               Backend service (app-lb)
                                                          |
                                                          v
                                                Backend pod (httpbin)
</code></pre>
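As a sketch, the relevant change to the <code>ingress-nginx</code> service could also be expressed as a manifest instead of an interactive <code>kubectl edit</code>. The namespace, labels, and port names below are assumptions based on the upstream ingress-nginx manifests of that era; verify them against your own deployed service.

```yaml
# Sketch (assumed names from the upstream ingress-nginx manifests):
# the ingress-nginx Service switched from NodePort to LoadBalancer,
# so MetalLB can assign it an external IP from the configured CIDR range.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer      # was NodePort in the baremetal service-nodeport.yaml
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```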
<p>I have services with ClusterIP in Kubernetes and use <code>nginx</code> (<a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a>) to expose these services to the internet. When I try to get the client IP address in the application, I get the cluster node's IP. How can I retrieve the actual client IP?</p> <p>I looked into the <code>"externalTrafficPolicy": "Local"</code> setting on the service, but for that the service type must be <code>LoadBalancer</code>.</p> <p>I also tried updating the ingress annotations with:</p> <pre><code>nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,X-Forwarded-For,csrf-token"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"
</code></pre> <p>But it's still not working. Please advise!</p>
<p>This is unfortunately not possible today. Please see <a href="https://github.com/kubernetes/kubernetes/issues/67202" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/67202</a> and <a href="https://github.com/kubernetes/kubernetes/issues/69811" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/69811</a> for more discussion around this.</p> <p>If you want to get the client IP address, you'll need to use <code>NodePort</code> or <code>LoadBalancer</code> types.</p>
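For reference, a minimal sketch of a Service that preserves the client source IP, assuming you switch to the <code>LoadBalancer</code> type as suggested (the service name, port numbers, and labels are placeholders):

```yaml
# Sketch: a LoadBalancer Service with externalTrafficPolicy: Local,
# which preserves the original client source IP. All names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # route only to endpoints on the receiving node,
                                 # skipping the SNAT that hides the client IP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-app
```

One trade-off of <code>Local</code>: nodes that have no ready pod for the service fail the load balancer health checks, so traffic distribution can become uneven.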
<p>I just tested Rancher RKE, upgrading Kubernetes 1.13.x to 1.14.x, and an already running nginx Pod got restarted during the upgrade. Is this expected behavior?</p> <p>Can we have Kubernetes cluster upgrades without user pods restarting?</p> <p>Which tool supports uninterrupted upgrades?</p> <p>What are the downtimes that we can never avoid? (apart from the control plane)</p>
<p>The default way Kubernetes upgrades is by doing a rolling upgrade of the nodes, one at a time.</p> <p>This works by draining and cordoning (marking the node as unavailable for new deployments) each node that is being upgraded, so that there are no pods running on that node.</p> <p>It does that by creating a new revision of the existing pods on another node (if one is available), and when the new pod starts running (and answering the readiness/health probes), it stops and removes the old pod (sending <code>SIGTERM</code> to each pod container) on the node that was being upgraded.</p> <p>The amount of time Kubernetes waits for the pod to shut down gracefully is controlled by <code>terminationGracePeriodSeconds</code> in the pod spec; if the pod takes longer than that, it is killed with <code>SIGKILL</code>.</p> <p>The point is, to have a graceful Kubernetes upgrade, you need to have enough nodes available, and your pods must have correct liveness and readiness probes (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a>).</p> <p>Some interesting material that is worth a read:</p> <p><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-upgrading-your-clusters-with-zero-downtime</a> (specific to GKE but has some insights)<br> <a href="https://blog.gruntwork.io/zero-downtime-server-updates-for-your-kubernetes-cluster-902009df5b33" rel="nofollow noreferrer">https://blog.gruntwork.io/zero-downtime-server-updates-for-your-kubernetes-cluster-902009df5b33</a></p>
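To make the two knobs above concrete, here is a minimal sketch of a Deployment that tolerates a node drain: multiple replicas, a readiness probe, and an explicit grace period. All names, images, paths, and values are placeholders, not taken from the question.

```yaml
# Sketch (placeholder names/values): a Deployment set up to survive a
# rolling node upgrade gracefully.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                          # enough replicas so draining one node keeps capacity
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      terminationGracePeriodSeconds: 60   # time allowed after SIGTERM before SIGKILL
      containers:
      - name: my-app
        image: my-registry/my-app:1.0.0
        readinessProbe:                   # new pod only receives traffic once ready
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```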
<p>I have successfully deployed <code>efs-provisioner</code> following the steps outlined in <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs" rel="nofollow noreferrer">efs-provisioner</a>.</p> <p><a href="https://i.stack.imgur.com/Y4AHo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y4AHo.png" alt="enter image description here"></a></p> <p>But the PVC is stuck in the <code>Pending</code> state, displaying the message:</p> <p><code>waiting for a volume to be created, either by external provisioner "example.com/aws-efs" or manually created by system administrator</code>.</p> <p><a href="https://i.stack.imgur.com/vhFAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vhFAw.png" alt="enter image description here"></a></p> <p>What could be the reason why the PVC is not created properly?</p>
<p>The solution was described by ParaSwarm and posted <a href="https://github.com/kubernetes-incubator/external-storage/issues/754#issuecomment-418207930" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p><em>"...The quick fix is to give the cluster-admin role to the default service account. Of course, depending on your environment and security, you may need a more elaborate fix. If you elect to go the easy way, you can simply apply this:"</em></p> </blockquote> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default-admin-rbac (or whatever)
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
</code></pre>
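If granting <code>cluster-admin</code> is too broad for your environment, a narrower ClusterRole is the usual "more elaborate fix". The rule set below is a sketch of the permissions an external provisioner typically needs; the exact resources and verbs may differ by efs-provisioner version, so verify them against the RBAC manifests shipped with your release before applying.

```yaml
# Sketch of a least-privilege alternative to cluster-admin (assumed rule set,
# modeled on the external-storage provisioner examples - verify per version).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: efs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
```

Bind it to the provisioner's service account with a ClusterRoleBinding instead of binding <code>cluster-admin</code> to <code>default</code>.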
<p>I have an <code>nginx</code> deployment in a k8s cluster which proxies my <code>api/</code> calls like this:</p> <pre><code>server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    location /api {
        proxy_pass http://backend-dev/api;
    }
}
</code></pre> <p>This works most of the time; however, sometimes when the <code>api</code> pods aren't ready, nginx fails with the error:</p> <pre class="lang-sh prettyprint-override"><code>nginx: [emerg] host not found in upstream &quot;backend-dev&quot; in /etc/nginx/conf.d/default.conf:12
</code></pre> <p>After a couple of hours exploring the internet, I found an <a href="https://ilhicas.com/2018/04/14/Nginx-Upstream-Unavalailble-Docker.html" rel="nofollow noreferrer">article</a> describing pretty much the same issue. I've tried this:</p> <pre><code>location /api {
    set $upstreamName backend-dev;
    proxy_pass http://$upstreamName/api;
}
</code></pre> <p>Now nginx returns <strong>502</strong>. And this:</p> <pre><code>location /api {
    resolver 10.0.0.10 valid=10s;
    set $upstreamName backend-dev;
    proxy_pass http://$upstreamName/api;
}
</code></pre> <p>Nginx returns <strong>503</strong>.</p> <p>What's the correct way to fix it on k8s?</p>
<p>If your API pods are not ready, Nginx wouldn't be able to route traffic to them.</p> <p>From the Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p> </blockquote> <p>If you are not using liveness or readiness probes, then your pod will be marked as "ready" even if the application running inside the container has not finished its startup process and is not yet ready to accept traffic.</p> <p>The relevant section regarding Pods and DNS records can be found <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">here</a>:</p> <blockquote> <p>Because A records are not created for Pod names, hostname is required for the Pod’s A record to be created. A Pod with no hostname but with subdomain will only create the A record for the headless service (default-subdomain.my-namespace.svc.cluster-domain.example), pointing to the Pod’s IP address. 
Also, Pod needs to become ready in order to have a record unless publishNotReadyAddresses=True is set on the Service.</p> </blockquote> <p><strong>UPDATE:</strong> I would suggest using NGINX as an <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">ingress controller</a>.</p> <p>When you use NGINX as an ingress controller, the NGINX service starts successfully and whenever an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">ingress rule</a> is deployed, the NGINX configuration is <a href="https://kubernetes.github.io/ingress-nginx/how-it-works/" rel="nofollow noreferrer">reloaded on the fly</a>.</p> <p>This will help you avoid NGINX pod restarts.</p>
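A minimal sketch of the ingress-controller approach suggested above, moving the <code>/api</code> routing out of the hand-written nginx.conf and into an Ingress rule. The backend service name <code>backend-dev</code> follows the question; the <code>frontend</code> service for the static files is a placeholder you would replace with your own.

```yaml
# Sketch: routing /api via an Ingress rule instead of a hard-coded proxy_pass,
# so the controller re-resolves backends as pods come and go.
# "frontend" is a placeholder service name.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: backend-dev
          servicePort: 80
      - path: /
        backend:
          serviceName: frontend      # placeholder for the static-files service
          servicePort: 80
```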
<p>I can't create a wildcard SSL certificate with cert-manager. I added my domain to Cloudflare, but cert-manager can't verify the ACME account. How can I resolve this problem?</p> <p>I want a wildcard SSL certificate for my domain, to use with any deployments. How could I do that?</p> <p><strong>I found the error, but I don't know how to resolve it: my k8s cluster doesn't resolve the DNS name acme-v02.api.letsencrypt.org</strong></p> <p>My k8s version is</p> <pre><code>Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3-k3s.1", GitCommit:"8343999292c55c807be4406fcaa9f047e8751ffd", GitTreeState:"clean", BuildDate:"2019-06-12T04:56+00:00Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
</code></pre> <p>Error log:</p> <pre><code>I0716 13:06:11.712878 1 controller.go:153] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="default/issuer-letsencrypt"
I0716 13:06:11.713218 1 setup.go:162] cert-manager/controller/issuers "level"=0 "msg"="ACME server URL host and ACME private key registration host differ. 
Re-checking ACME account registration" "related_resource_kind"="Secret" "related_resource_name"="issuer-letsencrypt" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default" I0716 13:06:11.713245 1 logger.go:88] Calling GetAccount E0716 13:06:16.714911 1 setup.go:172] cert-manager/controller/issuers "msg"="failed to verify ACME account" "error"="Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "related_resource_kind"="Secret" "related_resource_name"="issuer-letsencrypt" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default" I0716 13:06:16.715527 1 sync.go:76] cert-manager/controller/issuers "level"=0 "msg"="Error initializing issuer: Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "resource_kind"="Issuer" "resource_name"="issuer-letsencrypt" "resource_namespace"="default" E0716 13:06:16.715609 1 controller.go:155] cert-manager/controller/issuers "msg"="re-queuing item due to error processing" "error"="Get https://acme-v02.api.letsencrypt.org/directory: dial tcp: i/o timeout" "key"="default/issuer-letsencrypt" </code></pre> <p>my Issuer</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1 kind: Issuer metadata: name: issuer-letsencrypt spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: yusufkaan142@gmail.com privateKeySecretRef: name: issuer-letsencrypt dns01: providers: - name: cf-dns cloudflare: email: mail@gmail.com apiKeySecretRef: name: cloudflare-api-key key: api-key.txt </code></pre> <p>Secret:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: cloudflare-api-key namespace: cert-manager type: Opaque data: api-key.txt: base64encoded </code></pre> <p>My Certificate:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1 kind: Certificate metadata: name: wilcard-theykk-net namespace: cert-manager spec: secretName: 
wilcard-theykk-net issuerRef: name: issuer-letsencrypt kind: Issuer commonName: '*.example.net' dnsNames: - '*.example.net' acme: config: - dns01: provider: cf-dns domains: - '*.example.net' - 'example.net' </code></pre> <p>Dns for k8s</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: labels: addonmanager.kubernetes.io/mode: EnsureExists name: kube-dns namespace: kube-system data: upstreamNameservers: | ["1.1.1.1","8.8.8.8"] </code></pre>
<p>I would start with debugging the DNS resolution function within your K8s cluster:</p> <p>Spin up a container with basic network tools on board:</p> <p><code>kubectl run -i -t busybox --image=radial/busyboxplus:curl --restart=Never</code></p> <p>From within the <code>busybox</code> container, check the <code>/etc/resolv.conf</code> file and ensure that you can resolve the Kubernetes DNS <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="noreferrer">service</a> <code>kube-dns</code>:</p> <pre><code>$ cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local c.org-int.internal google.internal
options ndots:5
</code></pre> <p>Make a lookup request for <code>kubernetes.default</code>, which should resolve against the DNS nameserver without any issues:</p> <pre><code>$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
</code></pre> <p>Since you've defined <code>upstreamNameservers</code> in the corresponding <code>kube-dns</code> ConfigMap, check whether you can ping the upstream nameservers <code>1.1.1.1</code> and <code>8.8.8.8</code>, which should be accessible from within a Pod.</p> <p>Verify the DNS pod logs for any suspicious events in each container (kubedns, dnsmasq, sidecar):</p> <pre><code>kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c dnsmasq
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c sidecar
</code></pre> <p>If all the preceding steps check out, then DNS discovery is working properly; in that case you can also inspect the <a href="https://www.cloudflare.com/dns/dns-firewall/" rel="noreferrer">Cloudflare</a> DNS firewall configuration to exclude potential restrictions. You can find more relevant information about troubleshooting DNS issues in the official K8s <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">documentation</a>.</p>
<p>I'm reading a lot of documentation about the <a href="https://github.com/kubernetes/sample-controller" rel="nofollow noreferrer">CRD Controller</a>.</p> <p>I've implemented one with my business logic, and sometimes I hit this race condition:</p> <ul> <li>I create a custom object, let's call it <code>Foo</code>, with name <code>bar</code></li> <li>My business logic applies; let's say it creates a <code>Deployment</code> with a <strong>generated</strong> name, and I save the name (as a reference) in the <code>Foo</code> object</li> <li>I remove the custom object</li> <li>I quickly recreate it with the same name, and sometimes I get this log:</li> </ul> <pre><code>error syncing 'default/bar': Operation cannot be fulfilled on Foo.k8s.io "bar": the object has been modified; please apply your changes to the latest version and try again, requeuing
</code></pre> <p>The thing is, because my <code>Deployment</code> has a generated name and the save of the <code>Foo</code> may have failed, I end up with two <code>Deployment</code>s with two different names.</p> <p>I have not found a way to fix it yet, but it raises a question.</p> <p><strong>What if I have multiple controllers running?</strong></p> <p>I started two controllers, and I got the same race condition just by creating a new object.</p> <p>So, what is the best design to scale a CRD controller and avoid this kind of race condition?</p>
<p>Generally you only run one copy of a controller, or at least only one is active at any given time. As long as you were careful to write your code convergently then it shouldn't technically matter, but there really isn't much reason to run multiple.</p>
<p>I tried to deploy to Kubernetes using minikube, both from a local Docker image and from Docker Hub, but neither works.</p> <p>Method 1: using save and load with a tar file, I created the image, and it is available to kubectl.</p> <pre><code>root@arun-desktop-e470:/var/local/dprojects/elasticsearch# kubectl get pods --all-namespaces -o jsonpath="{..image}" |tr -s '[[:space:]]' '\n' |sort |uniq -c|grep elk
      2 elk/elasticsearch:latest
</code></pre> <p>I executed the commands below to create the deployment:</p> <pre><code>kubectl run elastic --image=elk/elasticsearch:latest --port=9200
kubectl expose deployment elastic --target-port=9200 --type=NodePort
minikube service elastic --url
</code></pre> <p>From the kubectl describe pod command:</p> <pre><code>Warning  Failed  122m (x4 over 124m)  kubelet, minikube  Failed to pull image "elk/elasticsearch:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for elk/elasticsearch, repository does not exist or may require 'docker login'
</code></pre> <p>Method 2: I pushed the image to my Docker Hub repository (<a href="https://hub.docker.com/r/get2arun/elk/tags" rel="nofollow noreferrer">https://hub.docker.com/r/get2arun/elk/tags</a>), then logged in to Docker Hub in the terminal and created the deployment again.</p> <p>I pushed to Docker Hub as below, and hence I have permission to push and pull the images in my Docker Hub account. 
I have checked the "collaborators" under manage repositories, and it has my Docker Hub id.</p> <pre><code>root@arun-desktop-e470:~# docker push get2arun/elk:elasticsearch_v1
The push refers to repository [docker.io/get2arun/elk]
19b7091eba36: Layer already exists
237c06a69e1c: Layer already exists
c84fa0f11212: Layer already exists
6ca6c301e2ab: Layer already exists
76dd25653d9b: Layer already exists
602956e7a499: Layer already exists
bde76be259f3: Layer already exists
2333287a7524: Layer already exists
d108ac3bd6ab: Layer already exists
elasticsearch_v1: digest: sha256:6f0b981b5dedfbe3f8e0291dc17fc09d32739ec3e0dab6195190ab0cc3071821 size: 2214
</code></pre> <p>kubectl run elasticsearch-v2 --image=get2arun/elk:elasticsearch_v1 --port=9200</p> <p>From the kubectl describe pods command:</p> <pre><code>Normal   BackOff  21s               kubelet, minikube  Back-off pulling image "get2arun/elk:elasticsearch_v1"
Warning  Failed   21s               kubelet, minikube  Error: ImagePullBackOff
Normal   Pulling  7s (x2 over 24s)  kubelet, minikube  Pulling image "get2arun/elk:elasticsearch_v1"
Warning  Failed   4s (x2 over 21s)  kubelet, minikube  Failed to pull image "get2arun/elk:elasticsearch_v1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for get2arun/elk, repository does not exist or may require 'docker login'
</code></pre> <p>I removed the proxy settings and tried from an open wifi network, but I am still seeing permission denied.</p> <p>This error message is not sufficient to identify the issue, and I hope there is some way to narrow down this kind of issue.</p> <ol> <li>What happens in the background when Kubernetes is asked to use a local Docker image or pull the image from Docker Hub?</li> <li>How can I get all the log information when a deployment is started?</li> <li>What are the other sources of logs?</li> </ol>
<p>In method 1, since the image is not pushed to a registry, you have to set the <code>imagePullPolicy</code>.</p> <h3>Never try to pull the image</h3> <pre><code>imagePullPolicy: Never
</code></pre> <h3>Try to pull the image, if it is not present</h3> <pre><code>imagePullPolicy: IfNotPresent
</code></pre> <p>I think <code>IfNotPresent</code> is ideal if you want to use a local image/repository. Use it as per your requirement.</p> <h3>kubectl</h3> <pre><code>kubectl run elastic --image=elk/elasticsearch:latest --port=9200 --image-pull-policy IfNotPresent
</code></pre>
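For reference, the same policy expressed as a Deployment manifest rather than a <code>kubectl run</code> flag (a sketch; the names and image follow the question, the label is a placeholder):

```yaml
# Sketch: imagePullPolicy set in a Deployment manifest, so the kubelet uses
# the locally loaded image instead of trying to pull from a registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastic
  template:
    metadata:
      labels:
        app: elastic
    spec:
      containers:
      - name: elastic
        image: elk/elasticsearch:latest
        imagePullPolicy: IfNotPresent   # use the local image if already loaded
        ports:
        - containerPort: 9200
```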
<p>I use <a href="https://k3s.io/" rel="nofollow noreferrer">K3S</a> for my Kubernetes cluster. It's really fast and efficient. By default K3S uses <a href="https://traefik.io/" rel="nofollow noreferrer">Traefik</a> as the ingress controller, which has also worked well so far.</p> <p>The only issue I have is that I want HTTP/2 server push. The service I have behind the ingress generates a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Link" rel="nofollow noreferrer">Link header</a>, which in the case of <a href="https://www.nginx.com/" rel="nofollow noreferrer">NGINX</a> I can simply turn into an HTTP/2 server push (explained <a href="https://www.nginx.com/blog/nginx-1-13-9-http2-server-push/" rel="nofollow noreferrer">here</a>). Is there an equivalent solution for Traefik? Or is it possible to switch to NGINX in K3S?</p>
<p>HTTP/2 push is <strong>not</strong> supported in Traefik yet. See the open GitHub issue <a href="https://github.com/containous/traefik/issues/906" rel="nofollow noreferrer">#906</a> for progress on the matter.</p> <p>You can, however, safely switch to the nginx ingress controller to accomplish HTTP/2 push:</p> <p>a) <code>helm install stable/nginx-ingress</code></p> <p>b) in your ingress YAML, set the appropriate annotation:</p> <pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
</code></pre>
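A sketch of how the push itself could then be enabled, assuming the nginx-ingress <code>configuration-snippet</code> annotation and NGINX's <code>http2_push_preload</code> directive (available since NGINX 1.13.9, which converts <code>Link: ...; rel=preload</code> headers from the backend into pushes). Host, secret, and service names are placeholders; verify the annotation name against your chart version.

```yaml
# Sketch (placeholder names): Ingress enabling HTTP/2 server push from
# backend Link headers via an injected NGINX directive.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      http2_push_preload on;
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls      # TLS is required for HTTP/2 in practice
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
```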
<p>I have an AKS cluster and I would like to check a node's disk type. I know there are 4 types of disk at the moment: standard HDD, standard SSD, premium SSD and ultra SSD (in preview). The node itself is set up as <code>Standard_DS2_v2</code> (via terraform), but there is no option (or I don't see one) for setting a certain disk type. How can I check the disk type on Kubernetes node(s)?</p>
<p>Terraform, just like the Azure portal and <code>az aks create</code>, only allows you to select a predefined VM size.</p> <p><a href="https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general#dsv2-series" rel="noreferrer">Standard_DS2_v2</a> has "Premium SSD". All AKS nodes use SSD storage, in the provided link the ones with "Premium SSD" are listed as "Premium Storage: Supported".</p> <p>Alternatively in the Azure Portal, create a new AKS resource (no need to really create it, just open the wizard), then in the first step, click the link "Change size" next to "Node size" and you'll get a list of available VM sizes with a column "Premium Disk Supported".</p> <p>On an existing cluster, if you are using VMSS, you can also check the storage tier in the portal under the VMSS "Storage" tab, or by issuing <code>az vmss list</code> (storageProfile.osDisk.managedDisk.storageAccountType). Otherwise simply check the disk type in the virtual machine nodes AKS creates.</p>
<p>I have a Kubernetes MySQL pod which is exposed via a NodePort service, as shown below</p> <pre><code>apiVersion: v1 kind: Service metadata: name: demo-mysql labels: app: demo spec: type: NodePort ports: - port: 3306 nodePort: 32695 </code></pre> <p>I am trying to access this MySQL server using the command below</p> <pre><code>mysql -u root -h 117.213.118.86 -p 32695 </code></pre> <p>but I get this error</p> <pre><code>ERROR 2003 (HY000): Can't connect to MySQL server on '117.213.118.86' (111) </code></pre> <p>What am I doing wrong here?</p>
<p>If you want to connect to a remote MySQL service, you have to define an Endpoints object that carries the remote service's IP address, like this: </p> <pre><code>apiVersion: v1 kind: Endpoints metadata: name: demo-mysql subsets: - addresses: - ip: 192.0.2.42 ports: - port: 3306 </code></pre> <p>More details <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">here</a>.</p>
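For the Endpoints object above to take effect, the Service of the same name has to be defined without a selector, so that Kubernetes does not manage the Endpoints itself. A minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-mysql   # must match the Endpoints object's name
spec:
  # no selector: traffic to this Service is routed to the
  # manually-defined Endpoints of the same name
  ports:
    - port: 3306
```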
<p>I have a Kubernetes MySQL pod which is exposed via a NodePort service, as shown below</p> <pre><code>apiVersion: v1 kind: Service metadata: name: demo-mysql labels: app: demo spec: type: NodePort ports: - port: 3306 nodePort: 32695 </code></pre> <p>I am trying to access this MySQL server using the command below</p> <pre><code>mysql -u root -h 117.213.118.86 -p 32695 </code></pre> <p>but I get this error</p> <pre><code>ERROR 2003 (HY000): Can't connect to MySQL server on '117.213.118.86' (111) </code></pre> <p>What am I doing wrong here?</p>
<p>Try this:</p> <pre><code>mysql -u root --password=&lt;PASSWORD&gt; -h &lt;CLUSTER_HOST&gt; --port=32695 </code></pre> <p>In the original command, <code>-p 32695</code> does not set the port: <code>-p</code> is the password option, so the client still connects on the default port 3306, where nothing answers. The port has to be given with <code>--port</code> (or uppercase <code>-P</code>).</p>
<p>I installed a Kubernetes cluster (one master and two nodes), and the status of the nodes is Ready on the master. When I deploy the dashboard and open it via the link <code>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</code>, I get the error</p> <blockquote> <p>'dial tcp 10.32.0.2:8443: connect: connection refused' Trying to reach: '<a href="https://10.32.0.2:8443/" rel="nofollow noreferrer">https://10.32.0.2:8443/</a>'</p> </blockquote> <p>The dashboard pod's state is Ready, and I tried to ping 10.32.0.2 (the dashboard's IP) without success.</p> <p>I run the dashboard as the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">Web UI (Dashboard)</a> guide suggests.</p> <p>How can I fix this?</p>
<p>There are a few options here:</p> <ol> <li><p>Most of the time, if there is some kind of connection refused, timeout or similar error, it is most likely a configuration problem. If you can't get the Dashboard running, try to deploy another application and access it. If that also fails, then it is not a Dashboard issue.</p></li> <li><p>Check if you are using root/sudo.</p></li> <li><p>Have you properly installed flannel or another container network?</p></li> <li><p>Have you checked your API server logs? If not, please do so.</p></li> <li><p>Check the description of the dashboard pod (<code>kubectl describe</code>) for anything suspicious.</p></li> <li><p>Similarly, check the description of the service.</p></li> <li><p>What is your cluster version? Check if any updates are required.</p></li> </ol> <p>Please let me know if any of the above helped.</p>
<p>I'm using the <a href="https://github.com/operator-framework/operator-sdk" rel="noreferrer">Operator SDK</a> to build a custom Kubernetes operator. I have created a custom resource definition and a controller using the respective Operator SDK commands:</p> <pre><code>operator-sdk add api --api-version example.com/v1alpha1 --kind=Example operator-sdk add controller --api-version example.com/v1alpha1 --kind=Example </code></pre> <p>Within the main reconciliation loop (for the example above, the auto-generated <code>ReconcileExample.Reconcile</code> method) I have some custom business logic that requires me to query the Kubernetes API for other objects of the same kind that have a certain field value. It's occurred to me that I might be able to use the default API client (that is provided by the controller) with a custom field selector:</p> <pre><code>func (r *ReconcileExample) Reconcile(request reconcile.Request) (reconcile.Result, error) { ctx := context.TODO() listOptions := client.ListOptions{ FieldSelector: fields.SelectorFromSet(fields.Set{"spec.someField": "someValue"}), Namespace: request.Namespace, } otherExamples := v1alpha1.ExampleList{} if err := r.client.List(ctx, &amp;listOptions, &amp;otherExamples); err != nil { return reconcile.Result{}, err } // do stuff... 
return reconcile.Result{}, nil } </code></pre> <p>When I run the operator and create a new <code>Example</code> resource, the operator fails with the following error message:</p> <pre><code>{"level":"info","ts":1563388786.825384,"logger":"controller_example","msg":"Reconciling Example","Request.Namespace":"default","Request.Name":"example-test"} {"level":"error","ts":1563388786.8255732,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"example-controller","request":"default/example-test","error":"Index with name field:spec.someField does not exist","stacktrace":"..."} </code></pre> <p>The most important part being</p> <blockquote> <p>Index with name field:spec.someField does not exist</p> </blockquote> <p>I've already searched <a href="https://github.com/operator-framework/operator-sdk/blob/master/doc/user/client.md" rel="noreferrer">the Operator SDK's documentation</a> on the default API client and learned a bit about the inner workings of the client, but no detailed explanation on this error or how to fix it.</p> <p><strong>What does this error message mean, and how can I create this missing index to efficiently list objects by this field value?</strong></p>
<p>The default API client that is provided by the controller is a <em>split client</em> -- it serves <code>Get</code> and <code>List</code> requests from a locally-held cache and forwards other methods like <code>Create</code> and <code>Update</code> directly to the Kubernetes API server. This is also explained in <a href="https://github.com/operator-framework/operator-sdk/blob/master/doc/user/client.md#default-client" rel="noreferrer">the respective documentation</a>:</p> <blockquote> <p>The SDK will generate code to create a Manager, which holds a Cache and a Client to be used in CRUD operations and communicate with the API server. By default a Controller's Reconciler will be populated with the Manager's Client which is a split-client. [...] A split client reads (Get and List) from the Cache and writes (Create, Update, Delete) to the API server. Reading from the Cache significantly reduces request load on the API server; as long as the Cache is updated by the API server, read operations are eventually consistent.</p> </blockquote> <p>To query values from the cache using a custom field selector, the cache needs to have a <em>search index</em> for this field. This indexer can be defined right after the cache has been set up.</p> <p>To register a custom indexer, add the following code into the bootstrapping logic of the operator (in the auto-generated code, this is done directly in <code>main</code>). This needs to be done <strong>after</strong> the controller manager has been instantiated (<code>manager.New</code>) and <strong>also after</strong> the custom API types have been added to the <code>runtime.Scheme</code>:</p> <pre><code>package main import ( k8sruntime "k8s.io/apimachinery/pkg/runtime" "example.com/example-operator/pkg/apis/example/v1alpha1" // ... ) func main() { // ... 
cache := mgr.GetCache() indexFunc := func(obj k8sruntime.Object) []string { return []string{obj.(*v1alpha1.Example).Spec.SomeField} } if err := cache.IndexField(&amp;v1alpha1.Example{}, "spec.someField", indexFunc); err != nil { panic(err) } // ... } </code></pre> <p>When a respective indexer function is defined, field selectors on <code>spec.someField</code> will work from the local cache as expected.</p>
<p>I am trying to deploy my application using Helm charts. I have defined a StatefulSet as the kind in deployment.yaml and provided a headless service under spec.serviceName. A code snippet from deployment.yaml is given below.</p> <p>Post deployment, when I run <code>kubectl get pods</code>, it shows the pod name as "MyApp-100-deployment-n" where n &gt;= 0, based on replicas.</p> <p>If I go inside the pod using kubectl exec and run the <code>hostname</code> command, I get "MyApp-100-deployment-n" as the hostname, and when I run <code>hostname --fqdn</code>, I get something like below:</p> <pre><code>MyApp-100-deployment-n.&lt;name of Service&gt;.&lt;Namespace&gt;.svc.cluster.local </code></pre> <p>These results are fine and good, but when my application, written in Java and now deployed, tries to get the hostname using <code>InetAddress.getLocalHost().getHostName()</code>, it gets the entire FQDN of the pod, not the hostname. This is something which is bothering me. Why is Java's InetAddress not able to get only the hostname? Is there any way, via configuration in the yaml files, to stop the headless service from modifying the hostnames? Eventually, I need only the hostname, not the FQDN, in the Java code to do the processing.</p> <p>If I remove the headless service name from the deployment.yaml, deploy the application and run the hostname and hostname --fqdn commands from inside the container, I get only "MyApp-100-deployment-n" as the result for both. Adding the headless service is what introduces the FQDN.</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}-deployment" labels: app: Myapp-deployment {{ include "metadata.labels.standard" . | indent 4 }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}" serviceName: "{{ .Chart.Name }}-{{ .Chart.AppVersion | replace "." "" }}" </code></pre>
<p>You can use the environment variable <code>HOSTNAME</code>, which is present in each container and gives you only the hostname.</p> <p>Also, try executing <code>env</code> in any pod to see all available variables:</p> <pre class="lang-sh prettyprint-override"><code>kubectl exec &lt;pod-name&gt; env </code></pre> <p>An explanation of why using <code>InetAddress.getLocalHost().getHostName()</code> is not the right way of retrieving the hostname is <a href="https://stackoverflow.com/questions/7348711/recommended-way-to-get-hostname-in-java">here</a></p>
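If you do need to derive the short name inside Java code, a minimal sketch could read the <code>HOSTNAME</code> variable and cut at the first dot. The class name and the fallback FQDN below are made up for illustration:

```java
public class PodHostname {

    // Reduce a fully-qualified name to the short hostname by cutting
    // at the first dot; a name without dots is returned unchanged.
    static String shortHostname(String name) {
        int dot = name.indexOf('.');
        return dot == -1 ? name : name.substring(0, dot);
    }

    public static void main(String[] args) {
        // Prefer the HOSTNAME variable set by the container runtime;
        // the fallback value here is purely illustrative.
        String raw = System.getenv("HOSTNAME");
        if (raw == null) {
            raw = "MyApp-100-deployment-0.myservice.myns.svc.cluster.local";
        }
        System.out.println(shortHostname(raw));
    }
}
```

This avoids the reverse-DNS lookup that <code>InetAddress.getLocalHost()</code> performs, which is where the FQDN comes from.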
<p>I have installed minikube on my system and <code>minikube start</code> works as expected for me. I want to use local Docker images, so I tried to run <code>sudo eval $(minikube docker-env)</code>. </p> <p>This gives me an error: </p> <blockquote> <p>sudo: eval: command not found</p> </blockquote> <p>Any guidance or solution for this? I am running this on macOS Mojave.</p>
<p>You ran <code>sudo eval $(minikube docker-env)</code> and got <code>sudo: eval: command not found</code>, which means <code>eval</code> was not found. <code>eval</code> is a shell built-in, so running it through <code>sudo</code> without <code>-s</code> (which starts a shell) will always produce this error, like so:</p> <pre><code>shubuntu1@shubuntu1:~/777$ sudo eval sudo: eval: command not found shubuntu1@shubuntu1:~/777$ sudo -s eval shubuntu1@shubuntu1:~/777$ </code></pre> <ul> <li><p>If you want to execute it as the root account:</p> <pre><code>$ sudo -s -H $ eval $(minikube docker-env) </code></pre></li> <li><p>If you just intend to execute it as the current account:</p> <pre><code>$ eval $(minikube docker-env) </code></pre></li> </ul>
<p>I'm trying to set up a full CI/CD pipeline for a SpringBoot application, starting from a GitLab repository (see <a href="https://gitlab.com/pietrom/clock-api" rel="nofollow noreferrer">https://gitlab.com/pietrom/clock-api</a>) and automatically deploying to a Kubernetes Cluster backed by Google Cloud Platform.</p> <p>My pipeline works quite fine (the app is built, it is packaged as a Docker image, the image is published on my project registry and containers are started for both the <code>staging</code> and <code>production</code> environments), except for one detail: the <code>Operations/Environments</code> page shows me both environments, with the following warning:</p> <pre><code>Kubernetes deployment not found To see deployment progress for your environments, make sure your deployments are in Kubernetes namespace &lt;projectname&gt;, and annotated with app.gitlab.com/app=$CI_PROJECT_PATH_SLUG and app.gitlab.com/env=$CI_ENVIRONMENT_SLUG. </code></pre> <p>I googled around a bit but I can't resolve this problem: my <code>deployment.yml</code> contains the requested annotations, both for the <em>deployment</em> and the <em>pod</em>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: clock-api-ENVIRONMENT annotations: app.gitlab.com/app: "PROJECT_PATH_SLUG" app.gitlab.com/env: "ENVIRONMENT" spec: replicas: 1 template: metadata: labels: app: ENVIRONMENT annotations: prometheus.io/scrape: "true" prometheus.io/port: "8080" prometheus.io/path: "/actuator/prometheus" app.gitlab.com/app: "PROJECT_PATH_SLUG" app.gitlab.com/env: "ENVIRONMENT" spec: containers: - name: clock-api-ENVIRONMENT image: registry.gitlab.com/pietrom/clock-api imagePullPolicy: Always ports: - containerPort: 8080 imagePullSecrets: - name: registry.gitlab.com </code></pre> <p>The <code>PROJECT_PATH_SLUG</code> and <code>ENVIRONMENT</code> placeholders are substituted (using <code>sed</code>) during pipeline execution with values provided by the GitLab infrastructure (<code>$CI_PROJECT_PATH_SLUG</code> and <code>$CI_ENVIRONMENT_SLUG</code>, respectively) and I can see the expected values in my GCP console, but the GitLab integration does not seem to work.</p> <p>I'm missing something, but I can't figure out what differences there are between my deployment setup and the official documentation available <a href="https://docs.gitlab.com/ee/user/project/deploy_boards.html" rel="nofollow noreferrer">here</a>.</p> <p>Thanks in advance for your help!</p>
<p>This is also an important part:</p> <blockquote> <p>make sure your deployments are in Kubernetes namespace</p> </blockquote> <p>GitLab tries to manage namespaces in the attached Kubernetes cluster - creates a new namespace for every new GitLab project. It generates namespace from project name and project id.</p> <p>Sometimes GitLab fails to create namespace, for example when cluster is added <strong>after</strong> project is created. It's probably a bug, and here is how they overcome it <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/691d88b71d51786983b823207d876cee7c93f5d4/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml#L506" rel="nofollow noreferrer">in AutoDevOps</a>:</p> <pre><code> function ensure_namespace() { kubectl get namespace "$KUBE_NAMESPACE" || kubectl create namespace "$KUBE_NAMESPACE" } </code></pre> <p>This env var <code>$KUBE_NAMESPACE</code> - is defined by GitLab automatically, as well as many other Kubernetes-related variables: <a href="https://docs.gitlab.com/ee/user/project/clusters/#deployment-variables" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/project/clusters/#deployment-variables</a></p> <p>Then GitLab relies on this namespace internally and uses for various UI pages of the project, including Operations/Environment. So if you follow their convention, and apply your Kubernetes deployment to this namespace, GitLab will see your application.</p> <p>In our work we go one step further and at the beginning of a job even <a href="https://gitlab.com/softmill/template-gitlab-ci/blob/2656a064c8e92be0f93e036e0f74f79b39d8d1fb/.gitlab-ci-kube.yml#L128" rel="nofollow noreferrer">set this namespace as the default</a>. </p> <pre><code>kubectl config set-context --current --namespace="$KUBE_NAMESPACE" </code></pre> <p>Then all further <code>kubectl</code> commands will by default use this namespace.</p>
<p>I have a Pod with two containers.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test spec: containers: - name: my-container image: google/my-container:v1 - name: third-party image: google/third-party:v1 </code></pre> <p>One container is my image and the second is a third-party image whose stdout/stderr I can't control.<br> I need my-container to access the logs written by the third-party container.<br> Inside "my-container" I want to collect all the stdout and stderr from the "third-party" container, add some metadata and write it with my logger.</p> <p>I can't use a privileged container with volumeMounts.</p> <p>If I could do something like this, it would be great.</p> <pre><code> containers: - name: my-container image: google/my-container:v1 volumeMounts: - name: varlog mountPath: /var/log - name: third-party image: google/third-party:v1 stdout: /var/log/stdout stderr: /var/log/stderr volumes: - name: varlog emptyDir: {} </code></pre>
<p>Based on the <a href="https://docs.docker.com/v17.09/engine/admin/logging/overview/" rel="nofollow noreferrer">logging driver</a> specified for Docker, Docker captures the containers' logs. The default logging driver is <code>json-file</code>, which writes each container's <code>stdout</code> and <code>stderr</code> to files under <code>/var/lib/docker/containers</code> on the host machine that runs Docker.</p> <p>In the case of Kubernetes, these log files are symlinked into the <code>/var/log/containers</code> folder on each worker node.</p> <p>Probably, what you are looking for is the <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="nofollow noreferrer">fluentd daemonset</a>, which runs a pod on each worker node and helps you ship the logs to S3, CloudWatch or Elasticsearch. Fluentd comes with many sinks; you can use the one that suits your needs. I hope this is what you want to do with your <code>my-container</code>. </p>
<p>I have a conceptual question: do ReplicaSets use Pod settings? Before applying my ReplicaSet I deleted my Pods, so there is no information about my old Pods. If I apply the ReplicaSet now, does it reference the Pod settings, with everything like readinessProbe/livenessProbe, etc.? My question came up because my replicaset.yml has a container section where I specified my Docker image, but why does it need that information? Isn't it redundant, since that information is in my pods.yml?</p> <pre><code>apiVersion: extensions/v1beta1 kind: ReplicaSet metadata: name: test1 spec: replicas: 2 template: metadata: labels: app: web spec: containers: - name: test1 image: test/test </code></pre>
<blockquote> <p>Pods are the smallest deployable units of computing that can be created and managed in Kubernetes.</p> <p>A Pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers.</p> </blockquote> <p>See <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/</a>.</p> <p>So, you can specify how your Pod will be scheduled (one or more containers, ports, probes, volumes, etc.). But in case of a node failure or anything else that harms the Pod, the Pod won't be rescheduled (you would have to reschedule it manually). For that, you need a controller. Kubernetes provides several controllers (each one for a different purpose). They are -</p> <ol> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/" rel="nofollow noreferrer">ReplicationController</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a></li> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a></li> </ol> <p>All of the above controllers and the Pod together are called Workloads, because they all have a <code>podTemplate</code> section. 
And they all create some number of identical Pods as specified by the <code>spec.replicas</code> field (if this field exists in the corresponding workload manifest). They are all higher-level concepts than the Pod.</p> <blockquote> <p>Though the <code>Deployment</code> is more suitable than the <code>ReplicaSet</code>, this answer focuses on <code>ReplicaSet</code> over <code>Pod</code> because the question is about the <code>Pod</code> versus the <code>ReplicaSet</code>.</p> </blockquote> <p>In addition, each of the above controllers has its own purpose. For example, a ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.</p> <p>A ReplicaSet contains a selector to identify and acquire Pod(s), and a <code>podTemplate</code> specifying the configuration of the new Pods it should create to meet the number-of-replicas criteria. It creates and deletes Pod(s) as needed to reach the desired number. When a ReplicaSet needs to create new Pod(s), it uses its Pod template.</p> <p>The Pod(s) maintained by a ReplicaSet have a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents" rel="nofollow noreferrer">metadata.ownerReferences</a> field, to tell which resource owns the current Pod(s).</p> <p>A ReplicaSet identifies new Pods to acquire by using its selector. 
If there is a Pod that has no OwnerReference, or whose OwnerReference is not a controller, and it matches a ReplicaSet's selector, it will be immediately acquired by said ReplicaSet.</p> <p>Ref: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/</a></p> <h2>Now, it's time to answer your questions</h2> <p>Since a ReplicaSet is one of the Pod controllers (listed above), it obviously needs a <code>podTemplate</code> (using this template, your Pods will be scheduled). All of the Pods the ReplicaSet creates will have the same Pod configuration (same containers, same ports, same readiness/livenessProbe, volumes, etc.). Having this <code>podTemplate</code> is not redundant info; it's needed. So, if you have a Pod controller like a ReplicaSet or another one (as you need), you don't need the Pod itself anymore, because the ReplicaSet (or the other controller) will create the Pod(s).</p> <p>I hope this answers your question.</p>
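As a side note, the manifest in the question uses the deprecated extensions/v1beta1 API. Against the stable apps/v1 API the selector is mandatory, so an equivalent sketch of the same ReplicaSet would be:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web          # must match the template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: test1
          image: test/test
```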
<p>I am using the kubernetes sample controller and I want to increase the log verbosity</p> <p>On starting up the controller I tried <code>./sample-controller -kubeconfig=kubeconfig.yaml -v=8</code></p> <p>Does klog require a flag to be passed in on the flag.Parse() step or can I set some env variable to increase log level?</p>
<p>This was fixed by calling <code>klog.InitFlags(nil)</code> before <code>flag.Parse()</code>, which registers klog's flags (including <code>-v</code>) on the default flag set; see this PR: <a href="https://github.com/kubernetes/kubernetes/pull/79219/files" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/79219/files</a></p>
<p>I have an Angular 7 application where the <code>app.module.ts</code> files looks like the following. Note that in this example, there are 2 modules with each having 1 key whose value needs to be externalized. By <code>externalized</code>, I mean the values should be acquired from the environment at runtime.</p> <pre><code>@NgModule({ declarations: [ ... ], imports: [ SomeModule.forRoot({ apiKey1: "needs to be externalized" }), AnotherModule.forRoot({ apiKey2: "needs to also be externalized" }) ], providers: [ ... ], bootstrap: [AppComponent] }) export class AppModule { } </code></pre> <p>What I do is build this application (e.g. <code>ng build</code> and then containerize it using Docker). At deployment time, the <code>DevOps</code> person wants to run the docker container as follows.</p> <pre><code>docker run -e API_KEY_1='somekey' -e API_KEY_2='anotherkey' -p 80:80 my-container:production </code></pre> <p>Note that <code>API_KEY_1</code> should map to <code>apiKey1</code> and <code>API_KEY_2</code> should map to <code>apiKey2</code>. </p> <p>Is there any disciplined way of externalizing values for an Angular application?</p> <p>I thought about writing a helper script to do string substitution against the file, but I think this approach is not very disciplined (as the transpiled Angular app files are obfuscated and minified). The script would run at container startup, read the environment variables (key and value), and then look at the files to replace the old values with the ones from the environment.</p> <p>Eventually, the Angular app will be orchestrated with Kubernetes. I'm wondering if there's anything there that might help or influence how to externalize the values in a best practice way.</p> <p>Any help is appreciated.</p>
<p>You could use a substitution in a custom entry-point.</p> <pre class="lang-sh prettyprint-override"><code>FROM nginx RUN apt-get update &amp;&amp; apt-get -y install gettext-base nginx-extras ADD docker-entrypoint.sh / ADD settings.json.template / COPY dist /usr/share/nginx/html ENTRYPOINT ["/docker-entrypoint.sh"] CMD ["nginx", "-g", "daemon off;"] </code></pre> <p>With a <code>docker-entrypoint.sh</code> like this:</p> <pre class="lang-sh prettyprint-override"><code>#!/bin/bash envsubst &lt; "settings.json.template" &gt; "settings.json" cp settings.json /usr/share/nginx/html/assets/ # Launch nginx exec "$@" </code></pre> <p>And a <code>settings.json.template</code>:</p> <pre><code>{ "apiKey2": "$API_KEY_2", "apiKey1": "$API_KEY_1" } </code></pre> <p>Then in your source you add a file <code>settings-loader.ts</code></p> <pre><code>export const settingsLoader = new Promise&lt;any&gt;((resolve, reject) =&gt; { const xmlhttp = new XMLHttpRequest(); const method = 'GET'; const url = './assets/settings.json'; xmlhttp.open(method, url, true); xmlhttp.onload = function() { if (xmlhttp.status === 200) { const _environment = JSON.parse(xmlhttp.responseText); resolve(_environment); } else { resolve(); } }; xmlhttp.send(); }); </code></pre> <p>And in your <code>main.ts</code>:</p> <pre><code>import {enableProdMode} from '@angular/core'; import {platformBrowserDynamic} from '@angular/platform-browser-dynamic'; import {AppModule} from './app/app.module'; import {environment} from './environments/environment'; import {settingsLoader} from './settings-loader'; settingsLoader.then((settings) =&gt; { if (settings != null) { environment.settings = Object.assign(environment.settings, settings); } if (environment.production) { enableProdMode(); } platformBrowserDynamic().bootstrapModule(AppModule) .catch(err =&gt; console.log(err)); }); </code></pre> <p>Then you should have access in your code with </p> <pre><code>import {environment} from '../environments/environment'; console.log(environment.settings.apiKey1); </code></pre>
<p>On my Kubernetes setup, I have 2 services - A and B.<br> Service B depends on Service A being fully started. I would now like to set a TCP readiness probe in the Pods of Service B, so they test whether any Pod of Service A is fully operating.</p> <p>The readinessProbe section of the deployment of Service B looks like:</p> <pre><code>readinessProbe: tcpSocket: host: serviceA.mynamespace.svc.cluster.local port: 1101 # same port as Service A's readiness check </code></pre> <p>I can apply these changes, but the readiness probe fails with:</p> <pre><code>Readiness probe failed: dial tcp: lookup serviceB.mynamespace.svc.cluster.local: no such host </code></pre> <p>I use the same hostname in other places (e.g. I pass it as an ENV to the container) and it works and gets resolved. </p> <p>Does anyone have an idea how to get the readiness probe working against another service, or how to do some other kind of dependency checking between services? Thanks :)</p>
<p>Because readiness and liveness <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">probes</a> are fully managed by the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> node agent, and the kubelet inherits its DNS configuration from the particular node, it is not able to resolve the cluster's internal <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS</a> records:</p> <blockquote> <p>For a probe, the kubelet makes the probe connection at the <strong>node</strong>, not in the <strong>pod</strong>, which means that you can not use a service name in the host parameter since the kubelet is unable to resolve it.</p> </blockquote> <p>You can consider a scenario where your source <strong>Pod A</strong> uses the node's IP address by setting the <code>hostNetwork: true</code> parameter; then the kubelet can reach Pod A and the readiness probe in <strong>Pod B</strong> succeeds, as described in the official k8s <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes" rel="nofollow noreferrer">documentation</a>:</p> <pre><code>tcpSocket: host: Node Hostname or IP address where Pod A is residing port: 1101 </code></pre> <p>However, I've found a Stack <a href="https://stackoverflow.com/questions/51079849/kubernetes-wait-for-other-pod-to-be-ready">thread</a> with a more efficient solution to achieve the same result through <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#what-can-init-containers-be-used-for" rel="nofollow noreferrer">Init Containers</a>.</p>
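A common shape for that init-container approach is to block Pod B's startup until Service A's port answers. The sketch below reuses the service name and port from the question; the container names, the busybox tag, and Pod B's image are placeholders:

```yaml
spec:
  initContainers:
    - name: wait-for-service-a
      image: busybox:1.31
      # Loop until a TCP connection to Service A succeeds, then exit 0
      # so the regular containers of Pod B are allowed to start.
      command: ['sh', '-c',
                'until nc -z serviceA.mynamespace.svc.cluster.local 1101; do echo waiting for service A; sleep 2; done']
  containers:
    - name: service-b
      image: my-service-b:latest   # placeholder image
```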
<p>I'm trying to create a chart with multiple subcharts (2 instances of ibm-db2oltp-dev). Is there a way to define different configurations for each instance in the same values.yaml file?</p> <p>I need two databases:</p> <pre><code>db2inst.instname: user1 db2inst.password: password1 options.databaseName: dbname1 db2inst.instname: user2 db2inst.password: password2 options.databaseName: dbname2 </code></pre> <p>I saw it could be done via an alias, but I didn't find an example explaining how to do it. Is it possible?</p>
<p>Yes, it is possible:</p> <p>In <strong>Chart.yaml</strong> for Helm 3 or in <strong>requirements.yaml</strong> for Helm 2:</p> <pre><code>dependencies: - name: ibm-db2oltp-dev *(full chart name here)* repository: http://localhost:10191 *(Actual repository url here)* version: 0.1.0 *(Required version)* alias: db1inst *(The name of the chart locally)* - name: ibm-db2oltp-dev repository: http://localhost:10191 version: 0.1.0 alias: db2inst </code></pre> <p><a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-of-a-child-chart" rel="noreferrer"><strong>parentChart/values.yaml:</strong></a></p> <pre><code>someParentChartValueX: x someParentChartValueY: y db1inst: instname: user1 password: password1 db2inst: instname: user2 password: password2 </code></pre>
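Each aliased dependency then receives its own block as that subchart's `.Values`; from the parent chart's templates the two instances can be addressed by alias. An illustrative fragment (the template file and ConfigMap are made up, not part of the charts above):

```yaml
# parentChart/templates/instance-users-configmap.yaml (illustrative)
apiVersion: v1
kind: ConfigMap
metadata:
  name: instance-users
data:
  # Values under each alias are independent, so the two subchart
  # instances can be configured differently in one values.yaml.
  first: {{ .Values.db1inst.instname }}
  second: {{ .Values.db2inst.instname }}
```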
<p>I have an AKS cluster in Azure. When I scale down the cluster with <code>az aks scale</code>, for example, I want to control which node is removed.</p> <p>I cannot find documentation that describes how Azure decides. Will Azure prefer removing nodes that are already cordoned or drained?</p> <p>Deleting a node from the Azure portal is not an option, because I want an application to communicate with Azure via the CLI or API.</p>
<p>First of all, it's impossible to control which node is removed when you scale down an AKS cluster. That said, here is how the nodes change when you scale:</p> <p>When the agent pool is not backed by a VMSS, the AKS cluster uses individual VMs as nodes. Scaling up adds nodes with indexes following the existing ones: for example, if the cluster has one node with index 0, scaling up by one node uses index 1. Scaling down removes the node with the highest index first.</p> <p>When the agent pool is backed by a VMSS, scaling follows the VMSS scale-in rules; see <a href="https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-faq#if-i-reduce-my-scale-set-capacity-from-20-to-15-which-vms-are-removed" rel="nofollow noreferrer">the changes of VMSS scale up and down</a>.</p> <p>Also, you can take a look at the Azure CLI command <a href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-scale" rel="nofollow noreferrer"><code>az aks scale</code></a> that scales the AKS cluster, and the corresponding <a href="https://learn.microsoft.com/en-us/rest/api/aks/agentpools/createorupdate" rel="nofollow noreferrer">REST API</a>.</p>
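<p>As a quick illustration, scaling down is a single CLI call — the resource group, cluster and node pool names below are placeholders:</p> <pre><code># Scale the given node pool of myAKSCluster down to 2 nodes;
# AKS (not the caller) picks which node is removed, per the rules above.
az aks scale \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --nodepool-name nodepool1 \
  --node-count 2
</code></pre>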
<p><a href="https://i.stack.imgur.com/4Tej0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4Tej0.png" alt="enter image description here"></a></p> <p>I get "is forbidden" all over the dashboard site in Kubernetes (see image).</p> <p>To reproduce:</p> <ol> <li><p>Create a Google Kubernetes cluster via the site, not from the shell.</p></li> <li><p>Select Kubernetes version 1.8.6</p></li> <li><p>Open a shell via the connect button: <code>gcloud container clusters get-credentials cluster-1 --zone us-central1-a --project awear-cloud</code></p></li> <li><p><code>kubectl proxy</code></p></li> <li><code>echo http://127.0.0.1:8001/ui</code></li> <li>click the link from <code>echo</code></li> </ol> <p>Note: also tried: <code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</code></p> <p>Do you know why?</p>
<p>1 - Create a file <strong>sa.yaml</strong> and paste the contents below into it.</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
</code></pre> <p>2 - Apply it - <strong>kubectl apply -f sa.yaml</strong></p> <p>3 - Create a file <strong>rbac.yaml</strong> and paste the contents below into it.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
</code></pre> <p>4 - Apply it - <strong>kubectl apply -f rbac.yaml</strong></p> <p>5 - Fetch the service account's token - <strong>kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')</strong></p> <p>6 - Now, <strong>login</strong> to your dashboard with that token.</p> <p>Let me know if this works.</p>
<p>I have a Kubernetes 1.13 cluster running on Azure, and I'm using multiple persistent volumes for multiple applications. I have set up monitoring with Prometheus, Alertmanager and Grafana.</p> <p>But I'm unable to get any metrics related to the PVs.</p> <p>It seems that kubelet started exposing some of these metrics in Kubernetes 1.8, but stopped again in 1.12.</p> <p>I have already spoken to the Azure team about any workaround to collect the metrics directly from the actual filesystem (Azure Disk in my case). But even that is not possible.</p> <p>I have also heard of some people using sidecars in the Pods to gather PV metrics, but I'm not getting any help on that either.</p> <p>It would be great even if I got just basic details like consumed / available free space.</p>
<p>I was having the same issue and solved it by joining two metrics:</p> <pre><code>avg(
  label_replace(
    1 - node_filesystem_free_bytes{mountpoint=~".*pvc.*"} / node_filesystem_size_bytes,
    "volumename", "$1", "mountpoint", ".*(pvc-[^/]*).*"
  )
) by (volumename)
+ on(volumename) group_left(namespace, persistentvolumeclaim)
  (0 * kube_persistentvolumeclaim_info)
</code></pre> <p>As an explanation, I'm adding a label <code>volumename</code> to every time series of <code>node_filesystem*</code>, cut out of the existing <code>mountpoint</code> label, and then joining with the other metric, which carries the additional labels. Multiplying by 0 ensures the join is otherwise a no-op.</p> <p>Also, a quick warning: you or I may be using relabeling configs that keep this from working immediately without adaptation.</p>
<p>I was following this <a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple" rel="noreferrer">documentation</a> to set up Spinnaker on Kubernetes. I ran the scripts as specified. The replication controllers and services start, but some of the pods do not:</p> <pre><code>root@nveeru~# kubectl get pods --namespace=spinnaker
NAME                           READY     STATUS             RESTARTS   AGE
data-redis-master-v000-zsn7e   1/1       Running            0          2h
spin-clouddriver-v000-6yr88    1/1       Running            0          47m
spin-deck-v000-as4v7           1/1       Running            0          2h
spin-echo-v000-g737r           1/1       Running            0          2h
spin-front50-v000-v1g6e        0/1       CrashLoopBackOff   21         2h
spin-gate-v000-9k401           0/1       Running            0          2h
spin-igor-v000-zfc02           1/1       Running            0          2h
spin-orca-v000-umxj1           0/1       CrashLoopBackOff   20         2h
</code></pre> <p>Then I ran <code>kubectl describe</code> on the pods:</p> <pre><code>root@veeru:~# kubectl describe pod spin-orca-v000-umxj1 --namespace=spinnaker
Name:           spin-orca-v000-umxj1
Namespace:      spinnaker
Node:           172.25.30.21/172.25.30.21
Start Time:     Mon, 19 Sep 2016 00:53:00 -0700
Labels:         load-balancer-spin-orca=true,replication-controller=spin-orca-v000
Status:         Running
IP:             172.16.33.8
Controllers:    ReplicationController/spin-orca-v000
Containers:
  orca:
    Container ID:   docker://e6d77e9fd92dc9614328d09a5bfda319dc7883b82f50cc352ff58dec2e933d04
    Image:          quay.io/spinnaker/orca:latest
    Image ID:       docker://sha256:2400633b89c1c7aa48e5195c040c669511238af9b55ff92201703895bd67a131
    Port:           8083/TCP
    QoS Tier:
      cpu:      BestEffort
      memory:   BestEffort
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 19 Sep 2016 02:59:09 -0700
      Finished:     Mon, 19 Sep 2016 02:59:39 -0700
    Ready:          False
    Restart Count:  21
    Readiness:      http-get http://:8083/env delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type      Status
  Ready     False
Volumes:
  spinnaker-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  spinnaker-config
  default-token-6irrl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6irrl
Events:
  FirstSeen  LastSeen   Count  From                    SubobjectPath          Type     Reason      Message
  ---------  --------   -----  ----                    -------------          ----     ------      -------
  1h         3m         22     {kubelet 172.25.30.21}  spec.containers{orca}  Normal   Pulling     pulling image "quay.io/spinnaker/orca:latest"
  1h         3m         22     {kubelet 172.25.30.21}  spec.containers{orca}  Normal   Pulled      Successfully pulled image "quay.io/spinnaker/orca:latest"
  1h         3m         13     {kubelet 172.25.30.21}  spec.containers{orca}  Normal   Created     (events with common reason combined)
  1h         3m         13     {kubelet 172.25.30.21}  spec.containers{orca}  Normal   Started     (events with common reason combined)
  1h         3m         23     {kubelet 172.25.30.21}  spec.containers{orca}  Warning  Unhealthy   Readiness probe failed: Get http://172.16.33.8:8083/env: dial tcp 172.16.33.8:8083: connection refused
  1h         &lt;invalid&gt;  399    {kubelet 172.25.30.21}  spec.containers{orca}  Warning  BackOff     Back-off restarting failed docker container
  1h         &lt;invalid&gt;  373    {kubelet 172.25.30.21}                         Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "orca" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=orca pod=spin-orca-v000-umxj1_spinnaker(ee2511f0-7e3d-11e6-ab16-0022195df673)"
</code></pre> <p><strong>spin-front50-v000-v1g6e</strong></p> <pre><code>root@veeru:~# kubectl describe pod spin-front50-v000-v1g6e --namespace=spinnaker
Name:           spin-front50-v000-v1g6e
Namespace:      spinnaker
Node:           172.25.30.21/172.25.30.21
Start Time:     Mon, 19 Sep 2016 00:53:00 -0700
Labels:         load-balancer-spin-front50=true,replication-controller=spin-front50-v000
Status:         Running
IP:             172.16.33.9
Controllers:    ReplicationController/spin-front50-v000
Containers:
  front50:
    Container ID:   docker://f5559638e9ea4e30b3455ed9fea2ab1dd52be95f177b4b520a7e5bfbc033fc3b
    Image:          quay.io/spinnaker/front50:latest
    Image ID:       docker://sha256:e774808d76b096f45d85c43386c211a0a839c41c8d0dccb3b7ee62d17e977eb4
    Port:           8080/TCP
    QoS Tier:
      memory:   BestEffort
      cpu:      BestEffort
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 19 Sep 2016 03:02:08 -0700
      Finished:     Mon, 19 Sep 2016 03:02:15 -0700
    Ready:          False
    Restart Count:  23
    Readiness:      http-get http://:8080/env delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type      Status
  Ready     False
Volumes:
  spinnaker-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  spinnaker-config
  creds-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  creds-config
  aws-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  aws-config
  default-token-6irrl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6irrl
Events:
  FirstSeen  LastSeen   Count  From                    SubobjectPath             Type     Reason      Message
  ---------  --------   -----  ----                    -------------             ----     ------      -------
  1h         3m         24     {kubelet 172.25.30.21}  spec.containers{front50}  Normal   Pulling     pulling image "quay.io/spinnaker/front50:latest"
  1h         3m         24     {kubelet 172.25.30.21}  spec.containers{front50}  Normal   Pulled      Successfully pulled image "quay.io/spinnaker/front50:latest"
  1h         3m         15     {kubelet 172.25.30.21}  spec.containers{front50}  Normal   Created     (events with common reason combined)
  1h         3m         15     {kubelet 172.25.30.21}  spec.containers{front50}  Normal   Started     (events with common reason combined)
  1h         &lt;invalid&gt;  443    {kubelet 172.25.30.21}  spec.containers{front50}  Warning  BackOff     Back-off restarting failed docker container
  1h         &lt;invalid&gt;  417    {kubelet 172.25.30.21}                            Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "front50" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=front50 pod=spin-front50-v000-v1g6e_spinnaker(edf85f41-7e3d-11e6-ab16-0022195df673)"
</code></pre> <p><strong>spin-gate-v000-9k401</strong></p> <pre><code>root@n42-poweredge-5:~# kubectl describe pod spin-gate-v000-9k401 --namespace=spinnaker
Name:           spin-gate-v000-9k401
Namespace:      spinnaker
Node:           172.25.30.21/172.25.30.21
Start Time:     Mon, 19 Sep 2016 00:53:00 -0700
Labels:         load-balancer-spin-gate=true,replication-controller=spin-gate-v000
Status:         Running
IP:             172.16.33.6
Controllers:    ReplicationController/spin-gate-v000
Containers:
  gate:
    Container ID:   docker://7507c9d7c00e5834572cde2c0b0b54086288e9e30d3af161f0a1dbdf44672332
    Image:          quay.io/spinnaker/gate:latest
    Image ID:       docker://sha256:074d9616a43de8690c0a6a00345e422c903344f6876d9886f7357505082d06c7
    Port:           8084/TCP
    QoS Tier:
      memory:   BestEffort
      cpu:      BestEffort
    State:          Running
      Started:      Mon, 19 Sep 2016 01:14:54 -0700
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:8084/env delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type      Status
  Ready     False
Volumes:
  spinnaker-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  spinnaker-config
  default-token-6irrl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6irrl
Events:
  FirstSeen  LastSeen   Count  From                    SubobjectPath          Type     Reason     Message
  ---------  --------   -----  ----                    -------------          ----     ------     -------
  1h         &lt;invalid&gt;  696    {kubelet 172.25.30.21}  spec.containers{gate}  Warning  Unhealthy  Readiness probe failed: Get http://172.16.33.6:8084/env: dial tcp 172.16.33.6:8084: connection refused
</code></pre> <p>What's wrong here?</p> <p><strong>UPDATE1</strong></p> <p>Logs (please check the full logs <a href="https://docs.google.com/document/d/1g270nTtVPK1JPTKALYf94Ktw0hgDKms-f0P3gvBMS60/edit?usp=sharing" rel="noreferrer">here</a>)</p> <pre><code>2016-09-20 06:49:45.062 ERROR 1 --- [main] o.s.boot.SpringApplication : Application startup failed

org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:133)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:532)
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:690)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:322)
    at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:134)
    at org.springframework.boot.builder.SpringApplicationBuilder$run$0.call(Unknown Source)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
    at com.netflix.spinnaker.front50.Main.main(Main.groovy:47)
Caused by: org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
    at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.initialize(TomcatEmbeddedServletContainer.java:99)
    at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.&lt;init&gt;(TomcatEmbeddedServletContainer.java:76)
    at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getTomcatEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:384)
    at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:156)
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.createEmbeddedServletContainer(EmbeddedWebApplicationContext.java:159)
    at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:130)
    ... 10 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration'........
..........
</code></pre> <p><strong>UPDATE-1 (02-06-2017)</strong></p> <p>I tried the above setup again on the latest version of Kubernetes:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre> <p>Still not all pods are up:</p> <pre><code>ubuntu@ip-172-31-18-78:~/spinnaker/experimental/kubernetes/simple$ kubectl get pods --namespace=spinnaker
NAME                           READY     STATUS             RESTARTS   AGE
data-redis-master-v000-rzmzq   1/1       Running            0          31m
spin-clouddriver-v000-qhz97    1/1       Running            0          31m
spin-deck-v000-0sz8q           1/1       Running            0          31m
spin-echo-v000-q9xv5           1/1       Running            0          31m
spin-front50-v000-646vg        0/1       CrashLoopBackOff   10         31m
spin-gate-v000-vfvhg           0/1       Running            0          31m
spin-igor-v000-8j4r0           1/1       Running            0          31m
spin-orca-v000-ndpcx           0/1       CrashLoopBackOff   9          31m
</code></pre> <p>Here are the log links:</p> <p>Front50 <a href="https://pastebin.com/ge5TR4eR" rel="noreferrer">https://pastebin.com/ge5TR4eR</a></p> <p>Orca <a href="https://pastebin.com/wStmBtst" rel="noreferrer">https://pastebin.com/wStmBtst</a></p> <p>Gate <a href="https://pastebin.com/T8vjqL2K" rel="noreferrer">https://pastebin.com/T8vjqL2K</a></p> <p>Deck <a href="https://pastebin.com/kZnzN62W" rel="noreferrer">https://pastebin.com/kZnzN62W</a></p> <p>Clouddriver <a href="https://pastebin.com/1pEU6V5D" rel="noreferrer">https://pastebin.com/1pEU6V5D</a></p> <p>Echo <a href="https://pastebin.com/cvJ4dVta" rel="noreferrer">https://pastebin.com/cvJ4dVta</a></p> <p>Igor <a href="https://pastebin.com/QYkHBxkr" rel="noreferrer">https://pastebin.com/QYkHBxkr</a></p> <p>Did I miss any configuration? I have not touched the <a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple/config" rel="noreferrer">yaml config</a> (I only updated the Jenkins URL, username and password); is that why I'm getting errors? I'm new to Spinnaker and have only basic knowledge of a normal Spinnaker installation. Please guide me through the installation.</p> <p>Thanks</p>
<p>Based on these errors, it seems that the Front50 service cannot reach its backend (if one is configured). I would not modify the <code>spinnaker-local.yml</code> file directly, but instead use <a href="https://www.spinnaker.io/setup/install/halyard/" rel="nofollow noreferrer">Halyard</a> to install the Spinnaker services into Kubernetes.</p> <p>I have set up Spinnaker services in Kubernetes successfully using the instructions and scripts in <a href="https://github.com/grizzthedj/kubernetes-spinnaker" rel="nofollow noreferrer">this repo</a>. Just omit/skip the components that you don't need.</p> <p><a href="https://github.com/grizzthedj/kubernetes-spinnaker" rel="nofollow noreferrer">https://github.com/grizzthedj/kubernetes-spinnaker</a></p>
<p>What is the best method for checking whether a custom resource definition exists before running a script, using only the <code>kubectl</code> command line?</p> <p>We have a yaml file that contains definitions for a NATS cluster <code>ServiceAccount</code>, <code>Role</code>, <code>ClusterRoleBinding</code> and <code>Deployment</code>. The image used in the <code>Deployment</code> creates the <code>crd</code>, and the second script uses that <code>crd</code> to deploy a set of <code>pods</code>. At the moment our CI pipeline needs to run the second script a few times, only completing successfully once the <code>crd</code> has been fully created. I've tried to use <code>kubectl wait</code> but cannot figure out what condition to use that applies to the completion of a <code>crd</code>.</p> <p>Below is my most recent, albeit completely wrong, attempt; however, it illustrates the general sequence we'd like:</p> <pre><code>kubectl wait --for=condition=complete
kubectl apply -f 1.nats-cluster-operator.yaml
kubectl apply -f 2.nats-cluster.yaml
</code></pre>
<p>The condition for a CRD would be <code>established</code>:</p> <pre><code>kubectl -n &lt;namespace-here&gt; wait --for condition=established --timeout=60s crd/&lt;crd-name-here&gt; </code></pre> <p>You may want to adjust <code>--timeout</code> appropriately.</p>
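<p>Putting it together with the sequence from the question, the pipeline can apply the operator, block until the CRD is registered, and only then apply the cluster manifest. A sketch — the CRD name <code>natsclusters.nats.io</code> is an assumption; substitute whatever <code>kubectl get crd</code> reports:</p> <pre><code>kubectl apply -f 1.nats-cluster-operator.yaml

# CRDs are cluster-scoped; wait until the API server reports the
# Established condition before using the new resource type.
kubectl wait --for condition=established --timeout=60s crd/natsclusters.nats.io

kubectl apply -f 2.nats-cluster.yaml
</code></pre>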
<p>I am unable to figure out how to change my kube-apiserver. The current version I am using from Azure AKS is 1.13.7.</p> <p>Below is what I need to change in the kube-apiserver:</p> <p>The kube-apiserver process accepts an argument --encryption-provider-config that controls how API data is encrypted in etcd.</p> <p>Additionally, I am unable to find the kube-apiserver.</p> <p><a href="https://i.stack.imgur.com/wEvPM.png" rel="nofollow noreferrer">Yaml File Formatted</a></p> <pre><code>apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - identity: {}
      - aesgcm:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - aescbc:
          keys:
            - name: key1
              secret: c2VjcmV0IGlzIHNlY3VyZQ==
            - name: key2
              secret: dGhpcyBpcyBwYXNzd29yZA==
      - secretbox:
          keys:
            - name: key1
              secret: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
</code></pre> <p>I have tried to apply this yaml file, but I get the error below:</p> <blockquote> <p>error: unable to recognize "examplesecret.yaml": no matches for kind "EncryptionConfiguration" in version "apiserver.config.k8s.io/v1"</p> </blockquote> <p>I created an AKS cluster in Azure and used the example encryption yaml file, expecting to be able to create encrypted-at-rest secrets, but the file cannot be applied.</p>
<p>The <code>Kind: EncryptionConfiguration</code> is understood only by the api-server via the flag <code>--encryption-provider-config=</code> (<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options" rel="nofollow noreferrer">ref</a>); in AKS, there’s no way to pass that flag to the api-server, as it’s a managed service. Feel free to request the feature in the <a href="https://feedback.azure.com/forums/914020-azure-kubernetes-service-aks" rel="nofollow noreferrer">public forum</a>.</p>
<pre><code>Kubernetes version - v1.11.2
Prometheus helm chart version - 6.7.0
</code></pre> <p>I have my service running on 2 ports - 80 and 9000. I only need to monitor port 80, and I used the configuration below to achieve that.</p> <pre><code>- job_name: '&lt;service-name&gt;'
  honor_labels: true

  kubernetes_sd_configs:
    - role: service

  relabel_configs:
    - source_labels: [__meta_kubernetes_service_label_app]
      action: keep
      regex: &lt;service-name&gt;
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
</code></pre> <p>The above configuration adds both service endpoints to Prometheus:</p> <pre><code>http://&lt;service-name&gt;.default.svc:80/metrics
http://&lt;service-name&gt;.default.svc:9000/metrics
</code></pre> <p>To scrape only port 80 I added the config below, but now it cannot scrape any service endpoints at all.</p> <pre><code>- source_labels: [__meta_kubernetes_service_port_number]
  action: keep
  regex: 8\d{1}
</code></pre> <p>Is there a way to restrict scraping to specific port numbers?</p>
<p>I had a similar issue; specifying the port in the <code>relabel_configs</code> worked for me.</p> <pre><code>relabel_configs:
  - source_labels: [__meta_kubernetes_pod_label_app, __meta_kubernetes_pod_container_port_number]
    action: keep
    regex: myapp;8081
</code></pre> <p>After this, my service was scraped only on port 8081.</p>
<p>I had to stop a job in k8s by killing the pod, and now the job is not scheduled anymore.</p> <pre><code># Import
- name: cron-xml-import-foreman
  schedule: "*/7 * * * *"
  args:
    - /bin/sh
    - -c
      /var/www/bash.sh; /usr/bin/php /var/www/import-products.php --&gt;env=prod;
  resources:
    request_memory: "3Gi"
    request_cpu: "2"
    limit_memory: "4Gi"
    limit_cpu: "4"
</code></pre> <p>Error:</p> <blockquote> <p>Warning FailedNeedsStart 5m34s (x7883 over 29h) cronjob-controller Cannot determine if job needs to be started: Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew.</p> </blockquote>
<p>According to the official <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/?source=post_page---------------------------#cron-job-limitations" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>If startingDeadlineSeconds is set to a large value or left unset (the default) and if concurrencyPolicy is set to Allow, the jobs will always run at least once.</p> </blockquote> <hr> <blockquote> <p>A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, If concurrencyPolicy is set to Forbid and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed.</p> </blockquote> <hr> <p>And regarding the <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="nofollow noreferrer">concurrencyPolicy</a></p> <blockquote> <p>It specifies how to treat concurrent executions of a job that is created by this cron job.</p> </blockquote> <p>Check your <code>CronJob</code> configuration and adjust those values accordingly.</p> <p>Please let me know if that helped.</p>
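<p>As a sketch of the knobs the documentation mentions — the name and schedule are taken from the question, while the 200-second deadline, the image and the command are example placeholders — a CronJob that avoids this state could set an explicit starting deadline and concurrency policy:</p> <pre><code>apiVersion: batch/v1beta1   # CronJob API group for clusters of that era
kind: CronJob
metadata:
  name: cron-xml-import-foreman
spec:
  schedule: "*/7 * * * *"
  # With a deadline set, the controller only counts starts missed within
  # the last 200 seconds, so it cannot hit the 100-missed-starts limit.
  startingDeadlineSeconds: 200
  # Replace a still-running job instead of counting a miss.
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: import
              image: my-import-image:latest   # placeholder image
              command: ["/bin/sh", "-c", "/var/www/bash.sh"]
</code></pre>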
<p>Programmatically creating a pod using the Kubernetes client-go gives me the following error: <code>an empty namespace may not be set during creation</code></p> <p>Started from this example: <a href="https://github.com/feiskyer/go-examples/blob/master/kubernetes/pod-create/pod.go" rel="nofollow noreferrer">https://github.com/feiskyer/go-examples/blob/master/kubernetes/pod-create/pod.go</a></p> <pre><code>// handler is of type v1.PodInterface
handler := clientset.CoreV1().Pods("")

pod := apiv1.Pod{
    TypeMeta: metav1.TypeMeta{
        Kind:       "Pod",
        APIVersion: "v1",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name:      "my-pod",
        Namespace: "my-namespace",
    },
    Spec: apiv1.PodSpec{
        Containers: []apiv1.Container{
            {
                Name:  "my-container",
                Image: "my-container",
            },
        },
    },
}

result, err := handler.Create(&amp;pod)
</code></pre> <p><strong>Expectation</strong>: Pod is created.<br> <strong>Actual</strong>: Creation fails with the k8s error: <em>an empty namespace may not be set during creation</em></p>
<p>To fix the issue above, I had to specify the namespace in the following line:</p> <pre><code>handler := clientset.CoreV1().Pods("my-namespace")
</code></pre> <p>This fixed the error, because it is not allowed to create a pod outside of a namespace. Even though the namespace was provided in the pod object's metadata, it must also be passed to the <code>Pods()</code> call.</p> <p>This is similar to passing the <code>--namespace</code> flag to <code>kubectl</code> (see the flag in the command below):</p> <pre><code># my-pod-file-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  containers:
    - name: my-container
      image: my-image
</code></pre> <p><code>kubectl apply -f my-pod-file-definition.yaml --namespace=my-namespace</code></p>
<p>Apache Ignite nodes deployed as pods discover each other using TcpDiscoveryKubernetesIpFinder but cannot communicate, and therefore do not join the same cluster.</p> <p>I set up a Kubernetes deployment on Azure for an Ignite-based application using the "official" tutorials. The deployments succeed, but there is always only one server in the topology for each pod. When I log on to a pod directly and try to connect to the other pod on port 47500, it does not work. More interestingly, port 47500 is only reachable on 127.0.0.1 on the current pod, not via its external IP.</p> <p>Here are the debug messages on pod/node 1. As you can see, the TcpDiscoveryKubernetesIpFinder discovers the two Ignite pods/nodes, but it cannot connect to the other Ignite node:</p> <pre><code>INFO  [org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] (ServerService Thread Pool -- 5) Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
DEBUG [org.apache.ignite.internal.managers.communication.GridIoManager] (ServerService Thread Pool -- 5) Starting SPI: TcpCommunicationSpi [connectGate=null, connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@48ca2359, enableForcibleNodeKill=false, enableTroubleshootingLog=false, locAddr=null, locHost=0.0.0.0/0.0.0.0, locPort=47100, locPortRange=100, shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=600000, connTimeout=5000, maxConnTimeout=600000, reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=GridNioServer [selectorSpins=0, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=org.apache.ignite.internal.util.nio.GridDirectParser@30a29315, directMode=true], GridConnectionBytesVerifyFilter], closed=false, directBuf=true, tcpNoDelay=true, sockSndBuf=32768, sockRcvBuf=32768, writeTimeout=2000, idleTimeout=600000, skipWrite=false, skipRead=false, locAddr=0.0.0.0/0.0.0.0:47100, order=LITTLE_ENDIAN, sndQueueLimit=0, directMode=true, sslFilter=null, msgQueueLsnr=null, readerMoveCnt=0, writerMoveCnt=0, readWriteSelectorsAssign=false], shmemSrv=null, usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteTimeout=2000, boundTcpPort=47100, boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null, ctxInitLatch=java.util.concurrent.CountDownLatch@4186e275[Count = 1], stopping=false]
DEBUG [org.apache.ignite.internal.managers.communication.GridIoManager] (ServerService Thread Pool -- 5) Starting SPI implementation: org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi
DEBUG [org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] (ServerService Thread Pool -- 5) Using parameter [locAddr=null]
DEBUG [org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi] (ServerService Thread Pool -- 5) Using parameter [locPort=47100]
DEBUG [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] Grid runnable started: tcp-disco-srvr
DEBUG [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder] (ServerService Thread Pool -- 5) Getting Apache Ignite endpoints from: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/ignite
DEBUG [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder] (ServerService Thread Pool -- 5) Added an address to the list: 10.244.0.93
DEBUG [org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder] (ServerService Thread Pool -- 5) Added an address to the list: 10.244.0.94
ERROR [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] (ServerService Thread Pool -- 5) Exception on direct send: Invalid argument (connect failed): java.net.ConnectException: Invalid argument (connect failed)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
</code></pre> <p>I logged on the pods directly and tried a ping on the other node/pod which works BUT neither <code>echo &gt; /dev/tcp/10.244.0.93/47500</code> nor <code>echo &gt; /dev/tcp/10.244.0.94/47500</code> worked. On the other end <code>echo &gt; /dev/tcp/127.0.0.1/47500</code> does. Which leads me to think that ignite is just listening to the local loopback address.</p> <p>There are similar logs on pods/node 2</p> <p>Here is the kubernetes configuration</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pgdata namespace: default annotations: volume.alpha.kubernetes.io/storage-class: default spec: accessModes: [ReadWriteOnce] resources: requests: storage: 1Gi --- apiVersion: v1 kind: ServiceAccount metadata: name: ignite namespace: default --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: ignite namespace: default rules: - apiGroups: - "" resources: - pods - endpoints verbs: - get - list - watch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: ignite roleRef: kind: ClusterRole name: ignite apiGroup: rbac.authorization.k8s.io subjects: - kind: ServiceAccount name: ignite namespace: default --- apiVersion: v1 kind: Service metadata: name: ignite namespace: default spec: clusterIP: None # custom value. ports: - port: 9042 # custom value. 
selector: type: processing-engine-node --- apiVersion: apps/v1 kind: Deployment metadata: name: database-tenant-1 namespace: default spec: replicas: 1 selector: matchLabels: app: database-tenant-1 template: metadata: labels: app: database-tenant-1 spec: containers: - name: database-tenant-1 image: postgres:12 env: - name: "POSTGRES_USER" value: "admin" - name: "POSTGRES_PASSWORD" value: "admin" - name: "POSTGRES_DB" value: "tenant1" volumeMounts: - name: pgdata mountPath: /var/lib/postgresql/data subPath: postgres ports: - containerPort: 5432 readinessProbe: exec: command: ["psql", "-W", "admin", "-U", "admin", "-d", "tenant1", "-c", "SELECT 1"] initialDelaySeconds: 15 timeoutSeconds: 2 livenessProbe: exec: command: ["psql", "-W", "admin", "-U", "admin", "-d", "tenant1", "-c", "SELECT 1"] initialDelaySeconds: 45 timeoutSeconds: 2 volumes: - name: pgdata persistentVolumeClaim: claimName: pgdata --- apiVersion: v1 kind: Service metadata: name: database-tenant-1 namespace: default labels: app: database-tenant-1 spec: type: NodePort ports: - port: 5432 selector: app: database-tenant-1 --- apiVersion: apps/v1 kind: Deployment metadata: name: processing-engine-master namespace: default spec: replicas: 1 selector: matchLabels: app: processing-engine-master template: metadata: labels: app: processing-engine-master type: processing-engine-node spec: serviceAccountName: ignite initContainers: - name: check-db-ready image: postgres:12 command: ['sh', '-c', 'until pg_isready -h database-tenant-1 -p 5432; do echo waiting for database; sleep 2; done;'] containers: - name: xxxx-engine-master image: shostettlerprivateregistry.azurecr.io/xxx/xxx-application:4.2.5 ports: - containerPort: 8081 - containerPort: 11211 # REST port number. - containerPort: 47100 # communication SPI port number. - containerPort: 47500 # discovery SPI port number. - containerPort: 49112 # JMX port number. - containerPort: 10800 # SQL port number. - containerPort: 10900 # Thin clients port number. 
volumeMounts: - name: config-volume mountPath: /opt/project-postgres.yml subPath: project-postgres.yml volumes: - name: config-volume configMap: name: pe-config --- apiVersion: apps/v1 kind: Deployment metadata: name: processing-engine-worker namespace: default spec: replicas: 1 selector: matchLabels: app: processing-engine-worker template: metadata: labels: app: processing-engine-worker type: processing-engine-node spec: serviceAccountName: ignite initContainers: - name: check-db-ready image: postgres:12 command: ['sh', '-c', 'until pg_isready -h database-tenant-1 -p 5432; do echo waiting for database; sleep 2; done;'] containers: - name: xxx-engine-worker image: shostettlerprivateregistry.azurecr.io/xxx/xxx-worker:4.2.5 ports: - containerPort: 8081 - containerPort: 11211 # REST port number. - containerPort: 47100 # communication SPI port number. - containerPort: 47500 # discovery SPI port number. - containerPort: 49112 # JMX port number. - containerPort: 10800 # SQL port number. - containerPort: 10900 # Thin clients port number. 
volumeMounts: - name: config-volume mountPath: /opt/project-postgres.yml subPath: project-postgres.yml volumes: - name: config-volume configMap: name: pe-config </code></pre> <p>and the Ignite config:</p> <pre><code>&lt;bean id="tcpDiscoveryKubernetesIpFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/&gt; &lt;property name="discoverySpi"&gt; &lt;bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"&gt; &lt;property name="localPort" value="47500" /&gt; &lt;property name="localAddress" value="127.0.0.1" /&gt; &lt;property name="networkTimeout" value="10000" /&gt; &lt;property name="ipFinder"&gt; &lt;bean id="tcpDiscoveryKubernetesIpFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/&gt; &lt;/property&gt; &lt;/bean&gt; &lt;/property&gt; </code></pre> <p>I expect the pods to be able to communicate and to end up with the following topology snapshot:</p> <pre><code>[ver=1, locNode=a8e6a058, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=0.24GB, heap=1.5GB] </code></pre>
<p>You configured discovery to bind to localhost:</p> <pre><code>&lt;property name="localAddress" value="127.0.0.1" /&gt; </code></pre> <p>This means that nodes in different pods will not be able to reach each other's discovery ports, so they cannot join each other. Try removing this line from the configuration.</p>
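<p>Without the <code>localAddress</code> override, the discovery section of the configuration from the question would look like this (everything else unchanged):</p> <pre><code>&lt;property name="discoverySpi"&gt;
  &lt;bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"&gt;
    &lt;property name="localPort" value="47500" /&gt;
    &lt;!-- no localAddress: the node binds to the pod's own addresses --&gt;
    &lt;property name="networkTimeout" value="10000" /&gt;
    &lt;property name="ipFinder"&gt;
      &lt;bean id="tcpDiscoveryKubernetesIpFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/&gt;
    &lt;/property&gt;
  &lt;/bean&gt;
&lt;/property&gt;
</code></pre>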
<p>We have an issue in an AKS cluster running Kubernetes 1.13.5. The symptoms are:</p> <ul> <li>Pods are randomly restarted</li> <li>The "Last State" is "Terminated", the "Reason" is "Error" and the "Exit Code" is "137"</li> <li>The pod events show no errors, either related to lack of resources or failed liveness checks</li> <li>The Docker container shows "OOMKilled" as "false" for the stopped container</li> <li>The Linux logs show no OOM killed pods</li> </ul> <p>The issues we are experiencing match those described in <a href="https://github.com/moby/moby/issues/38768" rel="nofollow noreferrer">https://github.com/moby/moby/issues/38768</a>. However, I can find no way to determine whether the version of Docker running on the AKS nodes is affected by this bug, because AKS seems to use a custom build of Docker whose version is something like 3.0.4, and I can't find any relationship between these custom version numbers and the upstream Docker releases.</p> <p>Does anyone know how to match internal AKS Docker build numbers to upstream Docker releases, or better yet how one might prevent pods from being randomly killed?</p> <p><strong>Update</strong></p> <p>This is still an ongoing issue, and I thought I would document how we debugged it for future AKS users.</p> <p>This is the typical description of a pod with a container that has been killed with an exit code of 137. 
The common factors are the <code>Last State</code> set to <code>Terminated</code>, the <code>Reason</code> set to <code>Error</code>, <code>Exit Code</code> set to 137 and no events.</p> <pre><code>Containers: octopus: Container ID: docker://3a5707ab02f4c9cbd66db14d1a1b52395d74e2a979093aa35a16be856193c37a Image: index.docker.io/octopusdeploy/linuxoctopus:2019.5.10-hosted.462 Image ID: docker-pullable://octopusdeploy/linuxoctopus@sha256:0ea2a0b2943921dc7d8a0e3d7d9402eb63b82de07d6a97cc928cc3f816a69574 Ports: 10943/TCP, 80/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Mon, 08 Jul 2019 07:51:52 +1000 Last State: Terminated Reason: Error Exit Code: 137 Started: Thu, 04 Jul 2019 21:04:55 +1000 Finished: Mon, 08 Jul 2019 07:51:51 +1000 Ready: True Restart Count: 2 ... Events: &lt;none&gt; </code></pre> <p>The lack of events is caused by the event TTL set in Kubernetes itself resulting in the events expiring. However with Azure monitoring enabled we can see that there were no events around the time of the restart other than the container starting again.</p> <p><a href="https://i.stack.imgur.com/1H4uT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1H4uT.png" alt="enter image description here"></a></p> <p>In our case, running <code>kubectl logs octopus-i002680-596954c5f5-sbrgs --previous --tail 500 -n i002680</code> shows no application errors before the restart.</p> <p>Running <code>docker ps --all --filter 'exited=137'</code> on the Kubernetes node hosting the pod shows the container 593f857910ff with an exit code of 137.</p> <pre><code>Enable succeeded: [stdout] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 20930700810f 7c23e4d2be70 "./install.sh " 14 hours ago Exited (137) 12 hours ago k8s_octopus_octopus-i002525-55f69565f8-s488l_i002525_b08125ab-9e2e-11e9-99be-422b98e8f214_2 593f857910ff 7c23e4d2be70 "./install.sh " 4 days ago Exited (137) 25 hours ago 
k8s_octopus_octopus-i002680-596954c5f5-sbrgs_i002680_01eb1b4d-9e03-11e9-99be-422b98e8f214_1 d792afb85c6f 7c23e4d2be70 "./install.sh " 4 days ago Exited (137) 4 days ago k8s_octopus_octopus-i002521-76bb77b5fd-twsdx_i002521_035093c5-9e2e-11e9-99be-422b98e8f214_0 0361bc71bf14 7c23e4d2be70 "./install.sh " 4 days ago Exited (137) 2 days ago k8s_octopus_octopus-i002684-769bd954-f89km_i002684_d832682d-9e03-11e9-99be-422b98e8f214_0 [stderr] </code></pre> <p>Running <code>docker inspect 593f857910ff | jq .[0] | jq .State</code> shows the container was not <code>OOMKilled</code>.</p> <pre><code>Enable succeeded: [stdout] { "Status": "exited", "Running": false, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 0, "ExitCode": 137, "Error": "", "StartedAt": "2019-07-04T11:04:55.037288884Z", "FinishedAt": "2019-07-07T21:51:51.080928603Z" } [stderr] </code></pre>
<p>This issue appears to have been resolved by updating to AKS 1.13.7, which includes an update to Moby 3.0.6. Since updating a few days ago we have not seen containers killed in the manner described in the Docker bug at <a href="https://github.com/moby/moby/issues/38768" rel="nofollow noreferrer">https://github.com/moby/moby/issues/38768</a>.</p>
<p>I am trying to deploy my microservice on a Kubernetes cluster in 2 different environments, dev and test. I am using a Helm chart to deploy my Kubernetes service, and a Jenkinsfile to deploy the chart. Inside the Jenkinsfile I added the helm command within a stage like the following:</p> <pre><code>stage ('helmchartinstall') { steps { sh 'helm upgrade --install kubekubedeploy --namespace test pipeline/spacestudychart' } } } </code></pre> <p>Here I am defining the <code>--namespace</code> test parameter. But when it deploys, the console output shows the default namespace. I already created namespaces test and prod.</p> <p>When I checked the Helm version, I got a response like the following:</p> <pre><code>docker@mildevdcr01:~$ helm version Client: &amp;version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"} </code></pre> <p>Have I made any mistake here in defining the namespace?</p>
<p>The most likely issue here is that the chart already specifies <code>default</code> as <code>metadata.namespace</code>, which in Helm 2 is not overwritten by the <code>--namespace</code> parameter.</p> <p>If this is the cause, a solution would be to remove the namespace specified in <code>metadata.namespace</code>, or to make it a template parameter (aka a release <code>value</code>).</p> <p>Also see <a href="https://stackoverflow.com/a/51137448/1977182">https://stackoverflow.com/a/51137448/1977182</a>.</p>
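<p>For the second option, the namespace in the chart's templates can be taken from the release instead of being hard-coded; a sketch (the template file name is illustrative, <code>.Release.Namespace</code> is a Helm built-in that resolves to the namespace passed with <code>--namespace</code>):</p> <pre><code># templates/deployment.yaml (illustrative)
metadata:
  name: spacestudychart
  namespace: {{ .Release.Namespace }}
</code></pre>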
<p>So, in my use case I need to scale up an application when an event happens in the system.</p> <p>HPA will not help here - I don't want the pods to scale down when the metric is met, nor do I want the pods to keep scaling up (which is what HPA does - it scales up until it meets its criteria, and then scales down).</p> <p>The scale down should happen when another event happens, thus the logic is:</p> <ol> <li>Event is raised in the system</li> <li>Deployment is scaled up by 1</li> <li>Work continues</li> <li>Another "scale down" event is raised from the system</li> <li>Deployment is scaled down by 1</li> </ol> <p>Should I just write a separate pod that accesses the k8s API and scales the deployment up/down? It feels to me like something like that already exists, but I'm missing it.</p>
<p>In this scenario I would look at admission <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#experimenting-with-admission-webhooks" rel="nofollow noreferrer">webhooks</a>, a comprehensive way to interact with the K8s API server that can modify the basic behavior of some Kubernetes features, following the <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#response" rel="nofollow noreferrer">dynamic admission control</a> model.</p> <p>This means you can implement special controllers that intercept Kubernetes API requests and modify or reject them based on custom logic. You can create either <code>ValidatingWebhookConfiguration</code> or <code>MutatingWebhookConfiguration</code> admission webhooks; the only difference is that validating webhooks can reject a request but cannot modify the object they receive in the admission HTTP request, while mutating webhooks can modify objects by creating a patch that is sent back in the admission HTTP response. You can find more relevant information in this <a href="https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74" rel="nofollow noreferrer">tutorial</a>.</p> <p>Given the above, it might be feasible to scale the relevant Deployment resource up or down based on the mutating rules in a <code>MutatingWebhookConfiguration</code>, using the <code>AdmissionReview</code> API object as described in the official K8s <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#webhook-request-and-response" rel="nofollow noreferrer">documentation</a>.</p>
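<p>If you do end up writing a small controller pod as the question suggests, the scaling step itself is a one-liner against the API; a sketch with kubectl (the deployment name <code>my-app</code> is a placeholder):</p> <pre><code># "scale up" event: add one replica to the current count
current=$(kubectl get deployment my-app -o jsonpath='{.spec.replicas}')
kubectl scale deployment my-app --replicas=$((current + 1))

# "scale down" event: remove one replica
current=$(kubectl get deployment my-app -o jsonpath='{.spec.replicas}')
kubectl scale deployment my-app --replicas=$((current - 1))
</code></pre>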
<p>I think my focus is on how to use the configuration parameter "controlPlaneEndpoint". It is currently buggy to use "controlPlaneEndpoint". <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/high-availability/</a></p> <p>I really hope you can be patient and look at my actual situation.</p> <p>First, the configuration parameter "controlPlaneEndpoint" is a VIP or a load balancer, right? So I configured "controlPlaneEndpoint" with a layer 4 load balancer; I tried AWS and Ali. In all cases there was a probability of timeouts during use, and "nodexxx not found" appeared 100% of the time during installation with kubeadm.</p> <p>Why is this happening? If I use a layer 4 load balancer in the "controlPlaneEndpoint" parameter, there will be network problems. For example, I have three masters: ServerA, ServerB and ServerC. If I enter the command "kubectl get pod" on ServerA, there is a 33 percent probability of a timeout. Everything is fine when ServerA's request is directed to either ServerB or ServerC through the layer 4 load balancer. If the request is directed back to ServerA itself through the layer 4 load balancer, a timeout is bound to occur.</p> <p>This is because a layer 4 load balancer cannot be used when ServerA is the backend server as well as the requestor; this is a network characteristic of layer 4 load balancing. For the same reason, when I create a new cluster with kubeadm, my first master is ServerA. Although ServerA's apiserver is already running in Docker and I can telnet to ServerA-IP:6443 successfully, kubelet checks the load-balancer-IP:port given in the "controlPlaneEndpoint" parameter. So "nodexxx not found" appeared 100% of the time during installation with kubeadm when I configured "controlPlaneEndpoint".</p> <p>In a public cloud environment, such as Ali, I can't use keepalived+haproxy. 
This means I have to use a layer 7 load balancer for kube-apiserver if I want to use the "controlPlaneEndpoint" parameter, right?</p> <p>How do I configure kubeadm-config with a layer 7 load balancer? It is HTTPS, and I had a problem with kubeadm certificates. Is there any documentation?</p>
<p>We are suffering the exact same problem, but with the Azure Load Balancer (layer 4).</p> <p>1) It fails on the first master node where "kubeadm init" is executed, because the node tries to communicate with itself through the load balancer.</p> <p>2) On all the other master nodes where "kubeadm join" is executed, there's a 1/N chance of failure when the load balancer selects the node itself and not one of the (N-1) nodes that are already in the cluster.</p> <p>We hacked our way around it using iptables rules. For instance, on the first node, before "kubeadm init", we add an iptables rule to route the load balancer IP to 127.0.0.1:</p> <blockquote> <p>iptables -t nat -A OUTPUT -p all -d ${FRONTEND_IP} -j DNAT --to-destination 127.0.0.1</p> </blockquote> <p>Of course we delete the iptables rule after kubeadm init. I'm not recommending that anybody do this; it's a nasty hack, and my intention with this post is to compel somebody who may know what we are missing to please post the right solution.</p> <p>To the original poster: I don't think the intention is that we use a layer 7 LB. The documentation is clear when it says that layer 4 is all that's needed.</p> <p>I'll post again if we find the right solution.</p>
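<p>The sequence described above, as a sketch (<code>FRONTEND_IP</code> is the load balancer frontend address as in the rule above; the kubeadm config file name is illustrative):</p> <pre><code># temporarily short-circuit the LB address to loopback
iptables -t nat -A OUTPUT -p all -d ${FRONTEND_IP} -j DNAT --to-destination 127.0.0.1

# controlPlaneEndpoint in the config file points at ${FRONTEND_IP}
kubeadm init --config kubeadm-config.yaml

# remove the hack again once the control plane is up
iptables -t nat -D OUTPUT -p all -d ${FRONTEND_IP} -j DNAT --to-destination 127.0.0.1
</code></pre>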
<p>I have the following yaml:</p> <pre><code> volumeMounts: - name: app-secret mountPath: /app readOnly: true volumes: - name: app-secret secret: secretName: app-secret items: - key: app-secret.json path: appsettings.secret.json </code></pre> <p>I expect the secret to be mounted at <code>/app/appsettings.secret.json</code>, but it isn't. I don't know where it is mounted, and since the container crashes I don't have a chance to <code>kubectl exec</code> into it to inspect where the secret ended up. My guess is that the mount wipes out the content of <code>/app</code>. Any advice and insight is appreciated.</p>
<p>This works:</p> <pre><code> volumeMounts: - name: app-secret mountPath: /app/appsettings.secret.json subPath: appsettings.secret.json readOnly: true volumes: - name: app-secret secret: secretName: app-secret items: - key: app-secret.json path: appsettings.secret.json </code></pre> <p>The difference is the <code>subPath</code>: with it, only the single file is mounted into the existing <code>/app</code> directory, whereas mounting the volume at <code>/app</code> directly shadows the directory and wipes out its original content.</p>
<p>Request: the limits of a pod may be set too low at the beginning; to make full use of the node's resources, we need to set the limits higher. However, when the node's resources are not enough, to keep the node working well, we need to set the limits lower. It is better not to kill the pod, because that may influence the cluster.</p> <p>Background: I am currently a beginner in k8s and Docker, and my mentor gave me this request. Can this request be fulfilled normally? Or is there a better way to solve this kind of problem? Thanks for your help! What I tried: I am trying to do this by editing the cgroups, but I can only do this inside a container, so maybe the container should be run in privileged mode.</p> <p>I expect a reasonable plan for this request. Thanks...</p>
<p>The clue is you want to change limits <strong>without killing the pod</strong>.</p> <p>This is not the way Kubernetes works, as <a href="https://stackoverflow.com/users/1296707/markus-w-mahlberg">Markus W Mahlberg</a> explained in his comment above. In Kubernetes there are no "hot plug CPU/memory" or "live migration" facilities of the kind that conventional hypervisors provide. Kubernetes treats pods as ephemeral instances and does not take care to keep any particular pod instance running. Whether you need to change resource limits for the application, change the app configuration, install app updates or repair a misbehaving application, the <strong>"kill-and-recreate"</strong> approach is applied to pods.</p> <p>Unfortunately, the solutions suggested here will not work for you:</p> <ul> <li>Increasing limits for the running container within the pod (the <code>docker update</code> command) will lead to breaching the pod limits, and the pod will be killed by Kubernetes.</li> <li>The Vertical Pod Autoscaler is part of the Kubernetes project and relies on the "kill-and-recreate" approach as well.</li> </ul> <p>If you really need to keep the containers running and manage the allocated resource limits for them "on the fly", perhaps Kubernetes is not a suitable solution in this particular case. Probably you should consider using pure Docker or a VM-based solution.</p>
<p>I have a SpringBoot project with graceful shutdown configured, deployed on k8s <code>1.12.7</code>. Here are the logs:</p> <pre><code>2019-07-20 10:23:16.180 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Received shutdown event 2019-07-20 10:23:16.180 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Waiting for 30s to finish 2019-07-20 10:23:16.273 INFO [service,fd964ebaa631a860,75a07c123397e4ff,false] 1 --- [io-8080-exec-10] com.jay.resource.ProductResource : GET /products?id=59 2019-07-20 10:23:16.374 INFO [service,9a569ecd8c448e98,00bc11ef2776d7fb,false] 1 --- [nio-8080-exec-1] com.jay.resource.ProductResource : GET /products?id=68 ... 2019-07-20 10:23:33.711 INFO [service,1532d6298acce718,08cfb8085553b02e,false] 1 --- [nio-8080-exec-9] com.jay.resource.ProductResource : GET /products?id=209 2019-07-20 10:23:46.181 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Resumed after hibernation 2019-07-20 10:23:46.216 INFO [service,,,] 1 --- [ Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor' </code></pre> <p>The application received <code>SIGTERM</code> at <code>10:23:16.180</code> from Kubernetes. <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">Termination of Pods</a> <code>point#5</code> says that the terminating pod is removed from the endpoints list of the service, but contradicting that, requests were still forwarded for 17 seconds (until <code>10:23:33.711</code>) after the <code>SIGTERM</code> signal was sent. 
Is there any configuration missing?</p> <p><code>Dockerfile</code></p> <pre><code>FROM openjdk:8-jre-slim MAINTAINER Jay RUN apt update &amp;&amp; apt install -y curl libtcnative-1 gcc &amp;&amp; apt clean ADD build/libs/sample-service.jar / CMD ["java", "-jar" , "sample-service.jar"] </code></pre> <p><code>GracefulShutdown</code></p> <pre><code>// https://github.com/spring-projects/spring-boot/issues/4657 class GracefulShutdown(val waitTime: Long, val timeout: Long) : TomcatConnectorCustomizer, ApplicationListener&lt;ContextClosedEvent&gt; { @Volatile private var connector: Connector? = null override fun customize(connector: Connector) { this.connector = connector } override fun onApplicationEvent(event: ContextClosedEvent) { log.info("Received shutdown event") val executor = this.connector?.protocolHandler?.executor if (executor is ThreadPoolExecutor) { try { val threadPoolExecutor: ThreadPoolExecutor = executor log.info("Waiting for ${waitTime}s to finish") hibernate(waitTime * 1000) log.info("Resumed after hibernation") this.connector?.pause() threadPoolExecutor.shutdown() if (!threadPoolExecutor.awaitTermination(timeout, TimeUnit.SECONDS)) { log.warn("Tomcat thread pool did not shut down gracefully within $timeout seconds. 
Proceeding with forceful shutdown") threadPoolExecutor.shutdownNow() if (!threadPoolExecutor.awaitTermination(timeout, TimeUnit.SECONDS)) { log.error("Tomcat thread pool did not terminate") } } } catch (ex: InterruptedException) { log.info("Interrupted") Thread.currentThread().interrupt() } }else this.connector?.pause() } private fun hibernate(time: Long){ try { Thread.sleep(time) }catch (ex: Exception){} } companion object { private val log = LoggerFactory.getLogger(GracefulShutdown::class.java) } } @Configuration class GracefulShutdownConfig(@Value("\${app.shutdown.graceful.wait-time:30}") val waitTime: Long, @Value("\${app.shutdown.graceful.timeout:30}") val timeout: Long) { companion object { private val log = LoggerFactory.getLogger(GracefulShutdownConfig::class.java) } @Bean fun gracefulShutdown(): GracefulShutdown { return GracefulShutdown(waitTime, timeout) } @Bean fun webServerFactory(gracefulShutdown: GracefulShutdown): ConfigurableServletWebServerFactory { log.info("GracefulShutdown configured with wait: ${waitTime}s and timeout: ${timeout}s") val factory = TomcatServletWebServerFactory() factory.addConnectorCustomizers(gracefulShutdown) return factory } } </code></pre> <p><code>deployment file</code></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: k8s-app: service name: service spec: progressDeadlineSeconds: 420 replicas: 1 revisionHistoryLimit: 1 selector: matchLabels: k8s-app: service strategy: rollingUpdate: maxSurge: 2 maxUnavailable: 0 type: RollingUpdate template: metadata: labels: k8s-app: service spec: terminationGracePeriodSeconds: 60 containers: - env: - name: SPRING_PROFILES_ACTIVE value: dev image: service:2 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 20 httpGet: path: /actuator/health port: 8080 initialDelaySeconds: 60 periodSeconds: 30 timeoutSeconds: 5 name: service ports: - containerPort: 8080 protocol: TCP readinessProbe: failureThreshold: 60 httpGet: path: /actuator/health port: 8080 
initialDelaySeconds: 100 periodSeconds: 10 timeoutSeconds: 5 </code></pre> <p><strong>UPDATE:</strong></p> <p>Added custom health check endpoint</p> <pre><code>@RestControllerEndpoint(id = "live") @Component class LiveEndpoint { companion object { private val log = LoggerFactory.getLogger(LiveEndpoint::class.java) } @Autowired private lateinit var gracefulShutdownStatus: GracefulShutdownStatus @GetMapping fun live(): ResponseEntity&lt;Any&gt; { val status = if(gracefulShutdownStatus.isTerminating()) HttpStatus.INTERNAL_SERVER_ERROR.value() else HttpStatus.OK.value() log.info("Status: $status") return ResponseEntity.status(status).build() } } </code></pre> <p>Changed the <code>livenessProbe</code>,</p> <pre><code> livenessProbe: httpGet: path: /actuator/live port: 8080 initialDelaySeconds: 100 periodSeconds: 5 timeoutSeconds: 5 failureThreshold: 3 </code></pre> <p>Here are the logs after the change,</p> <pre><code>2019-07-21 14:13:01.431 INFO [service,9b65b26907f2cf8f,9b65b26907f2cf8f,false] 1 --- [nio-8080-exec-2] com.jay.util.LiveEndpoint : Status: 200 2019-07-21 14:13:01.444 INFO [service,3da259976f9c286c,64b0d5973fddd577,false] 1 --- [nio-8080-exec-3] com.jay.resource.ProductResource : GET /products?id=52 2019-07-21 14:13:01.609 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Received shutdown event 2019-07-21 14:13:01.610 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Waiting for 30s to finish ... 2019-07-21 14:13:06.431 INFO [service,002c0da2133cf3b0,002c0da2133cf3b0,false] 1 --- [nio-8080-exec-3] com.jay.util.LiveEndpoint : Status: 500 2019-07-21 14:13:06.433 INFO [service,072abbd7275103ce,d1ead06b4abf2a34,false] 1 --- [nio-8080-exec-4] com.jay.resource.ProductResource : GET /products?id=96 ... 
2019-07-21 14:13:11.431 INFO [service,35aa09a8aea64ae6,35aa09a8aea64ae6,false] 1 --- [io-8080-exec-10] com.jay.util.LiveEndpoint : Status: 500 2019-07-21 14:13:11.508 INFO [service,a78c924f75538a50,0314f77f21076313,false] 1 --- [nio-8080-exec-2] com.jay.resource.ProductResource : GET /products?id=110 ... 2019-07-21 14:13:16.431 INFO [service,38a940dfda03956b,38a940dfda03956b,false] 1 --- [nio-8080-exec-9] com.jay.util.LiveEndpoint : Status: 500 2019-07-21 14:13:16.593 INFO [service,d76e81012934805f,b61cb062154bb7f0,false] 1 --- [io-8080-exec-10] com.jay.resource.ProductResource : GET /products?id=152 ... 2019-07-21 14:13:29.634 INFO [service,38a32a20358a7cc4,2029de1ed90e9539,false] 1 --- [nio-8080-exec-6] com.jay.resource.ProductResource : GET /products?id=191 2019-07-21 14:13:31.610 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Resumed after hibernation 2019-07-21 14:13:31.692 INFO [service,,,] 1 --- [ Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor' </code></pre> <p>With the <code>livenessProbe</code> of 3 failures, kubernetes served the traffic for 13 seconds after liveness failures i.e., from <code>14:13:16.431</code> to <code>14:13:29.634</code>.</p> <p><strong>UPDATE 2:</strong> The sequence of events (thanks to <code>Eamonn McEvoy</code>)</p> <pre><code>seconds | healthy | events 0 | ✔ | * liveness probe healthy 1 | ✔ | - SIGTERM 2 | ✔ | 3 | ✔ | 4 | ✔ | 5 | ✔ | * liveness probe unhealthy (1/3) 6 | ✔ | 7 | ✔ | 8 | ✔ | 9 | ✔ | 10 | ✔ | * liveness probe unhealthy (2/3) 11 | ✔ | 12 | ✔ | 13 | ✔ | 14 | ✔ | 15 | ✘ | * liveness probe unhealthy (3/3) .. | ✔ | * traffic is served 28 | ✔ | * traffic is served 29 | ✘ | * pod restarts </code></pre>
<p>SIGTERM isn't putting the pod into a terminating state immediately. You can see in the logs your application begins graceful shutdown at 10:23:16.180 and takes >20 seconds to complete. At this point, the container stops and pod can enter the terminating state.</p> <p>As far as kubernetes is concerned the pod looks ok during the graceful shutdown period. You need to add a liveness probe to your deployment; when it becomes unhealthy the traffic will stop.</p> <pre><code>livenessProbe: httpGet: path: /actuator/health port: 8080 initialDelaySeconds: 100 periodSeconds: 10 timeoutSeconds: 5 </code></pre> <p>Update:</p> <p>This is because you have a failure threshold of 3, so you are allowing traffic for up to 15 seconds after the sigterm;</p> <p>e.g.</p> <pre><code>seconds | healthy | events 0 | ✔ | * liveness probe healthy 1 | ✔ | - SIGTERM 2 | ✔ | 3 | ✔ | 4 | ✔ | 5 | ✔ | * liveness probe issued 6 | ✔ | . 7 | ✔ | . 8 | ✔ | . 9 | ✔ | . 10 | ✔ | * liveness probe timeout - unhealthy (1/3) 11 | ✔ | 12 | ✔ | 13 | ✔ | 14 | ✔ | 15 | ✔ | * liveness probe issued 16 | ✔ | . 17 | ✔ | . 18 | ✔ | . 19 | ✔ | . 20 | ✔ | * liveness probe timeout - unhealthy (2/3) 21 | ✔ | 22 | ✔ | 23 | ✔ | 24 | ✔ | 25 | ✔ | * liveness probe issued 26 | ✔ | . 27 | ✔ | . 28 | ✔ | . 29 | ✔ | . 30 | ✘ | * liveness probe timeout - unhealthy (3/3) | | * pod restarts </code></pre> <p>This is assuming that the endpoint returns an unhealthy response during the graceful shutdown. Since you have <code>timeoutSeconds: 5</code>, if the probe simply times out this will take much longer, with a 5 second delay between issuing a liveness probe request and receiving its response. It could be the case that the container actually dies before the liveness threshold is hit and you are still seeing the original behaviour</p>
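<p>The timing in the table can be written down as a small calculation; this follows the serialized model above, where in the worst case every probe first runs into its full timeout before it counts as a failure:</p> <pre><code>def max_unhealthy_window(period_seconds, timeout_seconds, failure_threshold):
    """Worst-case seconds of traffic served after the app becomes unhealthy,
    assuming each probe waits out its timeout before registering a failure."""
    return failure_threshold * (period_seconds + timeout_seconds)

# The probe settings from the question: periodSeconds=5, timeoutSeconds=5,
# failureThreshold=3 -> up to 30 seconds before the pod restarts.
print(max_unhealthy_window(5, 5, 3))  # 30
</code></pre> <p>If the probe instead fails fast with a 500 response, the timeout term drops out and you get roughly <code>failureThreshold * periodSeconds</code>, i.e. the 15 seconds mentioned above.</p>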
<p>When listing all the API resources in K8s you get:</p> <pre><code>$ kubectl api-resources -owide NAME SHORTNAMES APIGROUP NAMESPACED KIND VERBS bindings true Binding [create] componentstatuses cs false ComponentStatus [get list] configmaps cm true ConfigMap [create delete deletecollection get list patch update watch] endpoints ep true Endpoints [create delete deletecollection get list patch update watch] events ev true Event [create delete deletecollection get list patch update watch] limitranges limits true LimitRange [create delete deletecollection get list patch update watch] namespaces ns false Namespace [create delete get list patch update watch] nodes no false Node [create delete deletecollection get list patch update watch] persistentvolumeclaims pvc true PersistentVolumeClaim [create delete deletecollection get list patch update watch] persistentvolumes pv false PersistentVolume [create delete deletecollection get list patch update watch] pods po true Pod [create delete deletecollection get list patch update watch] podtemplates true PodTemplate [create delete deletecollection get list patch update watch] replicationcontrollers rc true ReplicationController [create delete deletecollection get list patch update watch] resourcequotas quota true ResourceQuota [create delete deletecollection get list patch update watch] secrets true Secret [create delete deletecollection get list patch update watch] serviceaccounts sa true ServiceAccount [create delete deletecollection get list patch update watch] services svc true Service [create delete get list patch update watch] mutatingwebhookconfigurations admissionregistration.k8s.io false MutatingWebhookConfiguration [create delete deletecollection get list patch update watch] ... etc ... </code></pre> <p>Many list the verb <code>deletecollection</code> which sounds useful, but I can't run it e.g. 
</p> <pre><code>$ kubectl deletecollection Error: unknown command "deletecollection" for "kubectl" Run 'kubectl --help' for usage. unknown command "deletecollection" for "kubectl" </code></pre> <p>Nor can I find it in the docs, except where it appears in the api-resources output above or is mentioned as a verb.</p> <p><strong>Is there a way to deletecollection?</strong></p> <p>It sounds like it would be better than the sequence of grep/awk/xargs that I normally end up doing, if it does what I think it should do, i.e. delete all the pods of a certain type.</p>
<p>The <code>delete</code> verb refers to deleting a single resource, for example a single Pod. The <code>deletecollection</code> verb refers to deleting multiple resources at the same time, for example multiple Pods using a label or field selector or all Pods in a namespace.</p> <p>To give some examples from the API documentation:</p> <ol> <li>To <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#delete-pod-v1-core" rel="noreferrer">delete a single Pod</a>: <code>DELETE /api/v1/namespaces/{namespace}/pods/{name}</code></li> <li>To <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#delete-collection-pod-v1-core" rel="noreferrer">delete multiple Pods</a> (or, <code>deletecollection</code>): <ol> <li>All pods in a namespace <code>DELETE /api/v1/namespaces/{namespace}/pods</code></li> <li>All pods in a namespace matching a given label selector: <code>DELETE /api/v1/namespaces/{namespace}/pods?labelSelector=someLabel%3dsomeValue</code></li> </ol></li> </ol> <p>Regarding kubectl: You cannot invoke <code>deletecollection</code> explicitly with <code>kubectl</code>. </p> <p>Instead, <code>kubectl</code> will infer on its own whether to use <code>delete</code> or <code>deletecollection</code> depending on how you invoke <code>kubectl delete</code>. When deleting a single source (<code>kubectl delete pod $POD_NAME</code>), kubectl will use a <code>delete</code> call and when using a label selector or simply deleting all Pods (<code>kubectl delete pods -l $LABEL=$VALUE</code> or <code>kubectl delete pods --all</code>), it will use the <code>deletecollection</code> verb.</p>
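<p>So for the "delete all the pods of a certain type" case from the question, the grep/awk/xargs pipeline can be replaced by a label selector, which kubectl turns into a single <code>deletecollection</code> call (the label and namespace are placeholders):</p> <pre><code># all Pods matching a label, in one deletecollection call
kubectl delete pods -l app=myapp -n mynamespace

# or every Pod in the namespace
kubectl delete pods --all -n mynamespace
</code></pre>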
<p>Env: Vagrant + VirtualBox</p> <ul> <li>kubernetes: 1.14</li> <li>docker 18.06.3~ce~3-0~debian</li> <li>os: debian stretch</li> </ul> <p>I have priority classes:</p> <pre><code>root@k8s-master:/# kubectl get priorityclass NAME VALUE GLOBAL-DEFAULT AGE cluster-health-priority 1000000000 false 33m &lt; -- created by me default-priority 100 true 33m &lt; -- created by me system-cluster-critical 2000000000 false 33m &lt; -- system system-node-critical 2000001000 false 33m &lt; -- system </code></pre> <p>default-priority - has been set as <code>globalDefault</code></p> <pre><code>root@k8s-master:/# kubectl get priorityclass default-priority -o yaml apiVersion: scheduling.k8s.io/v1 description: Used for all Pods without priorityClassName globalDefault: true &lt;------------------ kind: PriorityClass metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"scheduling.k8s.io/v1","description":"Used for all Pods without priorityClassName","globalDefault":true,"kind":"PriorityClass","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile"},"name":"default-priority"},"value":100} creationTimestamp: "2019-07-15T16:48:23Z" generation: 1 labels: addonmanager.kubernetes.io/mode: Reconcile name: default-priority resourceVersion: "304" selfLink: /apis/scheduling.k8s.io/v1/priorityclasses/default-priority uid: 5bea6f73-a720-11e9-8343-0800278dc04d value: 100 </code></pre> <p>I have some pods, which were created after the priority classes were created.</p> <p>This one:</p> <pre><code>kube-state-metrics-874ccb958-b5spd 1/1 Running 0 9m18s 10.20.59.67 k8s-master &lt;none&gt; &lt;none&gt; </code></pre> <p>And this one:</p> <pre><code>tmp-shell-one-59fb949cb5-b8khc 1/1 Running 1 47s 10.20.59.73 k8s-master &lt;none&gt; &lt;none&gt; </code></pre> <p>The kube-state-metrics pod is using priorityClass <code>cluster-health-priority</code>:</p> <pre><code>root@k8s-master:/etc/kubernetes/addons# kubectl -n kube-system get pod 
kube-state-metrics-874ccb958-b5spd -o yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: "2019-07-15T16:48:24Z" generateName: kube-state-metrics-874ccb958- labels: k8s-app: kube-state-metrics pod-template-hash: 874ccb958 name: kube-state-metrics-874ccb958-b5spd namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: kube-state-metrics-874ccb958 uid: 5c64bf85-a720-11e9-8343-0800278dc04d resourceVersion: "548" selfLink: /api/v1/namespaces/kube-system/pods/kube-state-metrics-874ccb958-b5spd uid: 5c88143e-a720-11e9-8343-0800278dc04d spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kube-role operator: In values: - master containers: - image: gcr.io/google_containers/kube-state-metrics:v1.6.0 imagePullPolicy: Always name: kube-state-metrics ports: - containerPort: 8080 name: http-metrics protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-state-metrics-token-jvz5b readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: k8s-master nodeSelector: namespaces/default: "true" priorityClassName: cluster-health-priority &lt;------------------------ restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: kube-state-metrics serviceAccountName: kube-state-metrics terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: dedicated operator: Equal value: master - key: CriticalAddonsOnly operator: Exists volumes: - name: kube-state-metrics-token-jvz5b secret: defaultMode: 420 secretName: kube-state-metrics-token-jvz5b status: conditions: - lastProbeTime: null 
lastTransitionTime: "2019-07-15T16:48:24Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-07-15T16:48:58Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-07-15T16:48:58Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-07-15T16:48:24Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://a736dce98492b7d746079728b683a2c62f6adb1068075ccc521c5e57ba1e02d1 image: gcr.io/google_containers/kube-state-metrics:v1.6.0 imageID: docker-pullable://gcr.io/google_containers/kube-state-metrics@sha256:c98991f50115fe6188d7b4213690628f0149cf160ac47daf9f21366d7cc62740 lastState: {} name: kube-state-metrics ready: true restartCount: 0 state: running: startedAt: "2019-07-15T16:48:46Z" hostIP: 10.0.2.15 phase: Running podIP: 10.20.59.67 qosClass: BestEffort startTime: "2019-07-15T16:48:24Z" </code></pre> <p><code>tmp-shell</code> pod has nothing about priority classes at all:</p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: "2019-07-15T16:56:49Z" generateName: tmp-shell-one-59fb949cb5- labels: pod-template-hash: 59fb949cb5 run: tmp-shell-one name: tmp-shell-one-59fb949cb5-b8khc namespace: monitoring ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: tmp-shell-one-59fb949cb5 uid: 89c3caa3-a721-11e9-8343-0800278dc04d resourceVersion: "1350" selfLink: /api/v1/namespaces/monitoring/pods/tmp-shell-one-59fb949cb5-b8khc uid: 89c71bad-a721-11e9-8343-0800278dc04d spec: containers: - args: - /bin/bash image: nicolaka/netshoot imagePullPolicy: Always name: tmp-shell-one resources: {} stdin: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File tty: true volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-g9lnc readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: k8s-master nodeSelector: namespaces/default: "true" 
restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 volumes: - name: default-token-g9lnc secret: defaultMode: 420 secretName: default-token-g9lnc status: conditions: - lastProbeTime: null lastTransitionTime: "2019-07-15T16:56:49Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-07-15T16:57:20Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-07-15T16:57:20Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-07-15T16:56:49Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://545d4d029b440ebb694386abb09e0377164c87d1170ac79704f39d3167748bf5 image: nicolaka/netshoot:latest imageID: docker-pullable://nicolaka/netshoot@sha256:b3e662a8730ee51c6b877b6043c5b2fa61862e15d535e9f90cf667267407753f lastState: terminated: containerID: docker://dfdfd0d991151e94411029f2d5a1a81d67b5b55d43dcda017aec28320bafc7d3 exitCode: 130 finishedAt: "2019-07-15T16:57:17Z" reason: Error startedAt: "2019-07-15T16:57:03Z" name: tmp-shell-one ready: true restartCount: 1 state: running: startedAt: "2019-07-15T16:57:19Z" hostIP: 10.0.2.15 phase: Running podIP: 10.20.59.73 qosClass: BestEffort startTime: "2019-07-15T16:56:49Z" </code></pre> <p>According to the docs: </p> <blockquote> <p>The globalDefault field indicates that the value of this PriorityClass should be used for Pods without a priorityClassName</p> </blockquote> <p>and</p> <blockquote> <p>Pod priority is specified by setting the priorityClassName field of podSpec. 
The integer value of priority is then resolved and populated to the priority field of podSpec</p> </blockquote> <p>So, the questions are:</p> <ol> <li>Why is the <code>tmp-shell</code> pod not using the priority class <code>default-priority</code>, even though it was created after the priority class with <code>globalDefault</code> set to <code>true</code>?</li> <li>Why does the <code>kube-state-metrics</code> pod not have the field <code>priority</code> with the resolved value from the priority class <code>cluster-health-priority</code> in its podSpec? (look at the .yaml above)</li> <li>What am I doing wrong?</li> </ol>
<p>The only way I can reproduce it is by disabling the <code>Priority</code> Admission Controller by adding this argument <code>--disable-admission-plugins=Priority</code> to the <code>kube-api-server</code> definition which is under <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> of the Host running the API Server.</p> <p>According to the <a href="https://v1-14.docs.kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#which-plugins-are-enabled-by-default" rel="nofollow noreferrer">documentation</a> in v1.14 this is enabled by default. Please make sure that it is enabled in your cluster as well.</p>
<p>We are seeing an issue with the GKE kubernetes scheduler being unable or unwilling to schedule Daemonset pods on nodes in an auto scaling node pool.</p> <p>We have three node pools in the cluster, however the <code>pool-x</code> pool is used to exclusively schedule a single Deployment backed by an HPA, with the nodes having the taint "node-use=pool-x:NoSchedule" applied to them in this pool. We have also deployed a filebeat Daemonset on which we have set a very lenient tolerations spec of <code>operator: Exists</code> (hopefully this is correct) set to ensure the Daemonset is scheduled on every node in the cluster.</p> <p>Our assumption is that, as <code>pool-x</code> is auto-scaled up, the filebeat Daemonset would be scheduled on the node prior to scheduling any of the pods assigned to on that node. However, we are noticing that as new nodes are added to the pool, the filebeat pods are failing to be placed on the node and are in a perpetual "Pending" state. Here is an example of the describe output (truncated) of one the filebeat Daemonset:</p> <pre><code>Number of Nodes Scheduled with Up-to-date Pods: 108 Number of Nodes Scheduled with Available Pods: 103 Number of Nodes Misscheduled: 0 Pods Status: 103 Running / 5 Waiting / 0 Succeeded / 0 Failed </code></pre> <p>And the events for one of the "Pending" filebeat pods:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 18m (x96 over 68m) default-scheduler 0/106 nodes are available: 105 node(s) didn't match node selector, 5 Insufficient cpu. Normal NotTriggerScaleUp 3m56s (x594 over 119m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 6 node(s) didn't match node selector Warning FailedScheduling 3m14s (x23 over 15m) default-scheduler 0/108 nodes are available: 107 node(s) didn't match node selector, 5 Insufficient cpu. 
</code></pre> <p>As you can see, the node does not have enough resources to schedule the filebeat pod: CPU requests are exhausted by the other pods running on the node. However, why is the Daemonset pod not placed on the node prior to scheduling any other pods? It seems like the very definition of a Daemonset necessitates priority scheduling.</p> <p>Also of note: if I delete a pod on a node where filebeat is "Pending" scheduling due to being unable to satisfy the CPU requests, filebeat is immediately scheduled on that node, indicating that there is some scheduling precedence being observed.</p> <p>Ultimately, we just want to ensure the filebeat Daemonset is able to schedule a pod on every single node in the cluster and have that priority work nicely with our cluster autoscaling and HPAs. Any ideas on how we can achieve this?</p> <p>We'd like to avoid having to use <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority" rel="nofollow noreferrer">Pod Priority</a>, as it's <em>apparently</em> an alpha feature in GKE and we are unable to make use of it at this time.</p>
<p>Before Kubernetes 1.12, DaemonSet pods were scheduled by the DaemonSet controller itself. Starting with that version, the <code>ScheduleDaemonSetPods</code> feature gate is enabled by default and DaemonSet pods are scheduled by the default scheduler instead, in the hope that priority, preemption and tolerations cover all the cases. If you want DaemonSet pods to be scheduled by the DaemonSet controller again, disable the <code>ScheduleDaemonSetPods</code> feature gate.</p> <p><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/</a></p>
<p>I have two containers in my POD. The first container is my main application and the second is used as a sidecar container, built from the following Dockerfile:</p> <pre><code>FROM scratch EXPOSE 8080 ADD my-binary / ENV GOROOT=/usr/lib/go ENTRYPOINT ["/my-binary"] </code></pre> <p>Basically it's using <strong>scratch</strong>, and my-binary is a Go application running as a process. So I cannot <strong>exec</strong> into this sidecar container. I have a requirement to restart the sidecar container (my-binary), but there should be no change in the main container. The main container should not be altered in any way.</p> <p>Is there any possibility how I can achieve this?</p> <p>Thank you so much for looking into this.</p> <p>Someone asked for the complete details of the POD, so you can consider the following pod structure:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-deployment labels: app: my-deploy spec: template: metadata: labels: app: my-app spec: containers: - name: main image: my-main-app-image ports: - containerPort: 80 - name: my-go-binary image: my-go-binary-image </code></pre> <p>Please note:</p> <pre><code>kubectl exec POD_NAME -c CONTAINER_NAME reboot </code></pre> <p>this is not going to work for the 2nd container as it is a scratch image.</p>
<p>So, your <code>reboot</code> command wasn't working (of course it will never work) because of using <code>scratch</code> as the base image.</p> <blockquote> <p>This image is most useful in the context of building base images (such as <a href="https://registry.hub.docker.com/_/debian/" rel="nofollow noreferrer">debian</a> and <a href="https://registry.hub.docker.com/_/busybox/" rel="nofollow noreferrer">busybox</a>) or super minimal images (that contain only a single binary and whatever it requires, such as hello-world).</p> </blockquote> <p>See <a href="https://hub.docker.com/_/scratch" rel="nofollow noreferrer">https://hub.docker.com/_/scratch</a></p> <blockquote> <p>The base image <code>scratch</code> is Docker's reserved minimal image. It can be a starting point for building small-sized containers. Using the <code>scratch</code> “image” signals to the build process that you want the next command in the Dockerfile to be the first filesystem layer in your image.</p> </blockquote> <p>Ref: <a href="https://docs.docker.com/develop/develop-images/baseimages/#create-a-simple-parent-image-using-scratch" rel="nofollow noreferrer">https://docs.docker.com/develop/develop-images/baseimages/#create-a-simple-parent-image-using-scratch</a></p> <p>In the Dockerfile you provided, the only thing in the filesystem is your Go binary. There is nothing other than this. That's why you can't (couldn't) run the <code>reboot</code> command. If you change the base image to <code>busybox</code> or <code>alpine</code> or anything else that ships a shell, you will be able to run the <code>reboot</code> command.</p> <blockquote> <p>But keep in mind that your new base image must have a proper shell to run your expected command. For example, the <code>busybox</code> image has a shell (<code>sh</code>), so it's possible to run the <code>reboot</code> command. And the <code>alpine</code> image has a shell (<code>sh</code>) too.</p> </blockquote>
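<p>For example (a sketch, keeping the rest of your build unchanged), rebasing the same Dockerfile onto <code>busybox</code> gives the container a shell and basic utilities, so <code>kubectl exec</code> into it becomes possible:</p>

```dockerfile
# Same image as before, but based on busybox instead of scratch,
# so a shell (/bin/sh) and basic utilities are available.
FROM busybox
EXPOSE 8080
ADD my-binary /
ENV GOROOT=/usr/lib/go
ENTRYPOINT ["/my-binary"]
```

<p>The trade-off is a slightly larger image; the main container stays untouched, since only the sidecar's image changes.</p>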
<p>I am using GCP and Kubernetes.</p> <p>I have a GCP repository and a container registry.</p> <p>I have a trigger that builds the container after pushing to the master branch.</p> <p>I don't know how to set up an auto-trigger to deploy the new version of the container image.</p> <p>How can I automate the deployment process?</p>
<p>You need some extra pieces to do it. For example, if you use Helm to package your deployment, you can use Flux to trigger the automated deployment.</p> <p><a href="https://helm.sh/" rel="nofollow noreferrer">https://helm.sh/</a></p> <p><a href="https://fluxcd.github.io/flux/" rel="nofollow noreferrer">https://fluxcd.github.io/flux/</a></p>
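<p>With Flux v1, for example, automated image updates are opted into per workload with an annotation (a sketch; the Deployment name is hypothetical):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Tell Flux to roll out this workload automatically whenever a new
    # image tag appears in the container registry.
    flux.weave.works/automated: "true"
```

<p>Flux then watches the registry and applies the updated manifest for you, closing the loop between your build trigger and the cluster.</p>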
<p>I occasionally get the error "the object has been modified; please apply your changes to the latest version and try again" when I update a node or deployment with client-go. My goal is to add a taint/toleration and a label to one node/deployment.</p> <p>Some people said I should use</p> <pre><code>err = retry.RetryOnConflict(retry.DefaultBackoff, func() error {}) </code></pre> <p>but it seems that does not work.</p> <pre><code>func AddFaultToleration(deploy *appsv1.Deployment, ns string, client kubernetes.Interface) (*appsv1.Deployment, error) { updateDeploy, err := client.AppsV1().Deployments(ns).Get(deploy.Name, metav1.GetOptions{}) if updateDeploy == nil || err != nil { return deploy, fmt.Errorf("Failed to get latest version of Deployment: %v", err) } effect := apiv1.TaintEffectNoExecute updateDeploy.Spec.Template.Spec.Tolerations = append(updateDeploy.Spec.Template.Spec.Tolerations, apiv1.Toleration{ Key: ToBeFaultTaintKey, Value: ToBeFaultTaintValue, Effect: effect, Operator: apiv1.TolerationOpEqual, }) updatedDeployWithTolera, err := client.AppsV1().Deployments(ns).Update(updateDeploy) if err != nil || updatedDeployWithTolera == nil { return deploy, fmt.Errorf("failed to update deploy %v after adding toleration: %v", deploy.Name, err) } log.Info("Successfully added toleration on pod:", updatedDeployWithTolera.Name) return updatedDeployWithTolera, nil } </code></pre>
<p>I have solved the problem. The reason is that the conflict error is hidden (wrapped) by</p> <pre><code>updatedDeployWithTolera, err := client.AppsV1().Deployments(ns).Update(updateDeploy) if err != nil || updatedDeployWithTolera == nil { return deploy, fmt.Errorf("failed to update deploy %v after adding toleration: %v", deploy.Name, err)} </code></pre> <p>Because <code>fmt.Errorf</code> creates a brand-new error, <code>retry.RetryOnConflict</code> can no longer recognize it as a conflict and therefore never retries. Returning the original error unwrapped from the function passed to <code>RetryOnConflict</code> makes the retry logic work.</p>
<p>I have created a deployment and a service on Google Kubernetes Engine. These are running on Cloud Compute instances.</p> <p>I need to make my k8s application reachable from other Compute instances, but not from the outside world. That is because there are some legacy instances running outside the cluster and those cannot be migrated (yet, at least).</p> <p>My understanding is that a <code>Service</code> makes the pod reachable from other cluster nodes, whereas an <code>Ingress</code> exposes the pod to the external traffic with an external IP.</p> <p>What I need is something in the middle: I need to expose my pod outside the cluster, but only to other local Compute instances (in the same zone). I don't understand how I am supposed to do it.</p>
<p>In Google Kubernetes Engine this is accomplished with a <code>LoadBalancer</code> type Service that is annotated to be an internal load balancer. The documentation for it is at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing</a>.</p> <p>Assuming you had a Deployment with label <code>app: echo-pod</code> that listened on port 8080 and you wanted to expose it as port 80 to GCE instances, the Service would look something like:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: echo-internal annotations: cloud.google.com/load-balancer-type: "Internal" labels: app: echo-pod spec: type: LoadBalancer selector: app: echo-pod ports: - port: 80 targetPort: 8080 protocol: TCP </code></pre> <p>It will take a moment to create the Service and internal load balancer. It will have an external IP once created:</p> <pre><code>$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE echo-internal LoadBalancer 10.4.11.165 10.128.0.35 80:31706/TCP 2m33s kubernetes ClusterIP 10.4.0.1 &lt;none&gt; 443/TCP 20m </code></pre> <p>The <code>10.128.0.35</code> IP is actually an <em>internal</em> IP address only accessible inside your VPC. From another GCE instance you can access it on the exposed port:</p> <pre><code>$ curl http://10.128.0.35 Hostname: echo-deployment-5f55bb9855-hxl7b </code></pre> <p>Note: You need to have the "Load balancing" add-on enabled when you provision your cluster. But it is enabled by default and should be working unless you explicitly disabled the "Enable HTTP load balancing" option at cluster creation.</p>
<p>I have created a kubernetes cluster and I successfully deployed my spring boot application + nginx reverse proxy for testing purposes.</p> <p>Now I'm moving to production, the only difference between test and prod is the connection to the database and the nginx basic auth (of course scaling parameters are also different).</p> <p>In this case, considering I'm using a cloud provider infrastructure, what are the best practices for kubernetes? Should I create a new cluster only for prod? Or I could use the same cluster and use labels to identify test and production machines?</p> <p>For now having 2 clusters seems a waste to me: the provider assures me that I have the hardware capabilities and I can put different request/limit/replication parameters according to the environment. Also, for now, I just have 2 images to deploy per environment (even though for production I will opt for an horizontal scaling of 2).</p>
<p>I would absolutely 100% set up a separate test cluster. (...assuming a setup large enough where Kubernetes makes sense; I might consider an easier deployment system for a simple three-tier app like what you're describing.)</p> <p>At a financial level this shouldn't make much difference to you. You'll need some amount of hardware to run the test copy of your application, and your organization will be paying for it whether it's in the same cluster or a different cluster. The additional cost will only be the cost of the management plane, which shouldn't be excessive.</p> <p>At an operational level, there are all kinds of things that can go wrong during a deployment, and in particular there are cases where one Kubernetes resource can "step on" another. Deploying to a physically separate cluster helps minimize the risk of accidents in production; you won't accidentally overwrite the prod deployment's ConfigMap holding its database configuration, for example. If you have some sort of crash reporting or alerting set up, "it came from the test cluster" is a very clear check you can use to not wake up the DevOps team. It also gives you a place to try out possibly risky configuration changes: if you run your update script once in the test cluster and it passes then you can re-run it in prod, but if the first time you run it is in prod and it fails, that's an outage.</p> <p>Depending on what you're using for a CI system, the other thing you can set up is fully automated deploys to the test environment. If a commit passes its own unit tests, you can have the test environment always running current <code>master</code> and run integration tests there. If and only if those integration tests pass, you can promote to the production environment.</p>
<p>For deploying container images to Kubernetes, we generally write deployment configs (YAML files).</p> <p>Now, these may differ between staging and development environments, and with some new feature there may come a new environment variable which needs to be present in the YAML.</p> <p>My questions here are:</p> <pre><code>1. How are the YAMLs managed? For example, is manual effort required if there is some change in a YAML? 2. How can it be made automated? </code></pre>
<p>Use <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a>, the k8s package manager. Helm will let you define a separate set of values for each of your environments (thanks @xun for pointing that out), such as development, canary and production, and use them in a single <a href="https://helm.sh/docs/developing_charts/#charts" rel="nofollow noreferrer">chart</a>, which is rendered into a regular Kubernetes <code>.yml</code> file.</p> <p>Helm will also let you share and use deployment-ready charts from the <a href="https://hub.helm.sh/" rel="nofollow noreferrer">helm hub</a> and the <a href="https://chartmuseum.com/" rel="nofollow noreferrer">chart museums</a>.</p>
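<p>For example (hypothetical file names and keys), each environment gets its own values file that is merged into the same chart at install time:</p>

```yaml
# values-staging.yaml (hypothetical) -- overrides for the staging environment
image:
  tag: "1.2.3-rc1"
replicaCount: 1
env:
  DATABASE_HOST: staging-db.internal
```

<p>which is then applied with something like <code>helm install -f values-staging.yaml ./mychart</code>, while production uses <code>-f values-production.yaml</code> against the very same chart, so a new environment variable is added once in the template and set per environment in the values files.</p>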
<p>I’m currently using Kubernetes and I came across Helm. Let’s say I don’t like the idea of “infecting” my Kubernetes cluster with a process that is not related to my applications, but I would gladly accept it if it could be beneficial.</p> <p>So I did some research, but I still can’t find anything I can’t easily do by using my YAML descriptors and kubectl, so for now I can’t find a use except, maybe, for environment templating.</p> <p>For example (taking it from guides I read):</p> <ol> <li>you can easily install an application, e.g. <code>helm install nginx</code> -> I add an nginx image to my deployment descriptor, done</li> <li>repositories -> I have Docker ones (where I pull my images from)</li> <li>you can easily <code>helm rollback</code> in case of release failure -> I just change the image version to the previous one in my Kubernetes descriptor, easy</li> </ol> <p>What bothers me is that, at the level of commands, I do pretty much the same effort (<code>helm upgrade</code> -> <code>kubectl apply</code>). In exchange for that I have a lot of boilerplate because of keeping the directory structure Helm wants, and I feel like I'm missing the control I have with plain deployment descriptors... what am I missing?</p>
<p>Your question is totally understandable. For small and simple deployments the benefit is not actually that great. But when the deployment of something is very complex, Helm helps a lot.</p> <p>Imagine you have a couple of squads that develop microservices for some company. If you can make a Chart that works for most of them, the deployment of each microservice would differ only by the image and the resources required. This way you get a standardized deployment that is easier for all developers.</p> <p>Another use case is deploying applications which require a lot of moving parts. For example, if you want to deploy a Grafana server on Kubernetes you're probably going to need at least a Deployment and a ConfigMap, then you would need a Service that matches this Deployment. And if you want to expose it to the internet you need an Ingress too.</p> <p>One relatively simple application would require 4 different YAMLs that you would have to manually configure and verify. Instead, you could do a simple <code>helm install</code> and reuse the configuration that someone has already made, sometimes even the company who created the application.</p> <p>There are a lot of other use cases, but these two are the ones that I would say are the most common.</p>
<p>I have a flask app with uwsgi and gevent.<br> Here is my <code>app.ini</code>. How could I write a readinessProbe and livenessProbe on Kubernetes to check the flask app?</p> <pre><code>[uwsgi] socket = /tmp/uwsgi.sock chdir = /usr/src/app/ chmod-socket = 666 module = flasky callable = app master = false processes = 1 vacuum = true die-on-term = true gevent = 1000 listen = 1024 </code></pre>
<p>I think what you are really asking is "How to health check a uWSGI application". There are some example tools to do this. Particularly:</p> <ul> <li><a href="https://github.com/andreif/uwsgi-tools" rel="nofollow noreferrer">https://github.com/andreif/uwsgi-tools</a></li> <li><a href="https://github.com/che0/uwping" rel="nofollow noreferrer">https://github.com/che0/uwping</a></li> <li><a href="https://github.com/m-messiah/uwget" rel="nofollow noreferrer">https://github.com/m-messiah/uwget</a></li> </ul> <p>The <code>uwsgi-tools</code> project seems to have the most complete example at <a href="https://github.com/andreif/uwsgi-tools/issues/2#issuecomment-345195583" rel="nofollow noreferrer">https://github.com/andreif/uwsgi-tools/issues/2#issuecomment-345195583</a>. In a Kubernetes Pod spec context this might end up looking like:</p> <pre><code>apiVersion: v1 kind: Pod metadata: labels: test: liveness name: liveness-exec spec: containers: - name: myapp image: myimage livenessProbe: exec: command: - uwsgi_curl - -H - Host:host.name - /path/to/unix/socket - /health initialDelaySeconds: 5 periodSeconds: 5 </code></pre> <p>This would also assume your application responded to <code>/health</code> as a health endpoint.</p>
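<p>The <code>/health</code> endpoint itself is trivial to add on the Flask side (a sketch; the route path is just a convention, match it to whatever the probe requests):</p>

```python
from flask import Flask

app = Flask(__name__)

@app.route("/health")
def health():
    # Answer the kubelet's probes: HTTP 200 means "able to serve traffic".
    # Add checks for downstream dependencies here if the readiness probe
    # should fail while, e.g., the database is unreachable.
    return "OK", 200
```

<p>Since uWSGI serves the app over a Unix socket here, the probe has to go through a socket-aware client such as <code>uwsgi_curl</code> above rather than Kubernetes' built-in <code>httpGet</code> probe.</p>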
<p>I ran <code>minikube start</code> on Windows 10 and got this error. I have installed Minikube, VirtualBox and kubectl.</p> <pre><code>--&gt;&gt;minikube start * minikube v1.2.0 on windows (amd64) * Tip: Use 'minikube start -p &lt;name&gt;' to create a new cluster, or 'minikube delete' to delete this one. * Re-using the currently running virtualbox VM for "minikube" ... * Waiting for SSH access ... * Found network options: - NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.1/24,192.168.39.0/24 * Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6 - env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.1/24,192.168.39.0/24 * Relaunching Kubernetes v1.15.0 using kubeadm ... X Error restarting cluster: waiting for apiserver: timed out waiting for the condition * Sorry that minikube crashed. If this was unexpected, we would love to hear from you: - https://github.com/kubernetes/minikube/issues/new </code></pre> <pre><code>--&gt;minikube status host: Running kubelet: Running apiserver: Stopped kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.101 </code></pre> <p>If there is a way to handle this problem, please let me know.</p>
<p>There are a few things you should try:</p> <ol> <li><p>You might not be waiting long enough for the apiserver to become healthy. Increase the apiserver wait time.</p></li> <li><p>Use a different version of Minikube. Remember to run <code>minikube delete</code> to remove the previous cluster state.</p></li> <li><p>If your environment is behind a proxy, then set up a correct <code>NO_PROXY</code> env. More about this can be found <a href="https://github.com/kubernetes/minikube/blob/master/docs/http_proxy.md" rel="nofollow noreferrer">here</a>.</p></li> <li><p>Run <code>minikube delete</code> and then <code>minikube start</code> again.</p></li> </ol> <p>Please let me know if that helped.</p>
<p>Created a cluster in EKS (Kubernetes 1.11.5) with multiple node groups; however, I'm noticing that in the <code>extension-apiserver-authentication</code> configmap the <code>client-ca-file</code> key is missing.</p> <p>I assume this is due to the way the Kubernetes API service is initiated. Has anyone else come across this issue?</p> <p>I came across this problem while deploying cert-manager, which queries the api server with <code>GET https://10.100.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication</code>.</p> <p>In GKE this isn't a problem as the <code>extension-apiserver-authentication</code> configmap already includes <code>client-ca-file</code>.</p> <p>The <code>extension-apiserver-authentication</code> configmap in AWS:</p> <pre><code>apiVersion: v1 data: requestheader-allowed-names: '["front-proxy-client"]' requestheader-client-ca-file: | &lt;certificate file&gt; requestheader-extra-headers-prefix: '["X-Remote-Extra-"]' requestheader-group-headers: '["X-Remote-Group"]' requestheader-username-headers: '["X-Remote-User"]' kind: ConfigMap metadata: creationTimestamp: 2019-01-14T04:56:51Z name: extension-apiserver-authentication namespace: kube-system resourceVersion: "39" selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication uid: ce2b6f64-17b8-11e9-a6dd-021a269d3ce8 </code></pre> <p>However, in GKE:</p> <pre><code>apiVersion: v1 data: client-ca-file: | &lt;client certificate file&gt; requestheader-allowed-names: '["aggregator"]' requestheader-client-ca-file: | &lt;certificate file&gt; requestheader-extra-headers-prefix: '["X-Remote-Extra-"]' requestheader-group-headers: '["X-Remote-Group"]' requestheader-username-headers: '["X-Remote-User"]' kind: ConfigMap metadata: creationTimestamp: 2018-05-24T12:06:33Z name: extension-apiserver-authentication namespace: kube-system resourceVersion: "32" selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication uid: 
e6c0c431-5f4a-11e8-8d8c-42010a9a0191 </code></pre>
<p>I've also run into this issue while trying to use cert-manager on an AWS EKS cluster. It is possible to inject the certificate yourself using the certificate obtained from the AWS CLI. Follow these steps to address this issue:</p> <p><strong>Obtain the Certificate</strong></p> <p>The certificate is stored Base64 encoded and can be retrieved using</p> <pre class="lang-sh prettyprint-override"><code>aws eks describe-cluster \ --region=${AWS_DEFAULT_REGION} \ --name=${CLUSTER_NAME} \ --output=text \ --query 'cluster.{certificateAuthorityData: certificateAuthority.data}' | base64 -D </code></pre> <p>(Note: <code>base64 -D</code> is the macOS flag; on Linux the flag is lowercase, <code>base64 -d</code>.)</p> <p><strong>Inject the Certificate</strong></p> <p>Edit <code>configMap/extension-apiserver-authentication</code> under the kube-system namespace: <code>kubectl -n kube-system edit cm extension-apiserver-authentication</code></p> <p>Under the data section, add the CA under a new config entry named <code>client-ca-file</code>. For example:</p> <pre><code> client-ca-file: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- </code></pre>
<p>I am looking to migrate from old GKE clusters to the new Alias IP, however I need to migrate my statefulsets and their PersistentVolumeClaims to the new GKE clusters. I can't seem to find a good answer anywhere stating it's possible, but I imagine it should be as long as it's within the same region. Both new/old k8s cluster still in the same GCP Project, and same Region.</p> <p>I've searched, but can't find an answer and I can't figure out how to recreate the statefulset without creating a new PV.</p>
<p>You might want to look into the <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" rel="nofollow noreferrer">Volume Snapshots and volume snapshot content</a> direction.</p> <blockquote> <p>Similar to how API resources PersistentVolume and PersistentVolumeClaim are used to provision volumes for users and administrators, VolumeSnapshotContent and VolumeSnapshot API resources are provided to create volume snapshots for users and administrators.</p> <p>A VolumeSnapshotContent is a snapshot taken from a volume in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a PersistentVolume is a cluster resource.</p> <p>A VolumeSnapshot is a request for snapshot of a volume by a user. It is similar to a PersistentVolumeClaim.</p> </blockquote> <p>Example of Volume Snapshot Contents:</p> <pre><code>apiVersion: snapshot.storage.k8s.io/v1alpha1 kind: VolumeSnapshotContent metadata: name: new-snapshot-content-test spec: snapshotClassName: csi-hostpath-snapclass source: name: pvc-test kind: PersistentVolumeClaim volumeSnapshotSource: csiVolumeSnapshotSource: creationTime: 1535478900692119403 driver: csi-hostpath restoreSize: 10Gi snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002 </code></pre> <p>Example of VolumeSnapshot:</p> <pre><code>apiVersion: snapshot.storage.k8s.io/v1alpha1 kind: VolumeSnapshot metadata: name: new-snapshot-test spec: snapshotClassName: csi-hostpath-snapclass source: name: pvc-test kind: PersistentVolumeClaim </code></pre> <p><a href="https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/" rel="nofollow noreferrer">Volume Snapshot Alpha for Kubernetes</a> was introduces in v1.12. This feature allows creating/deleting volume snapshots, and the ability to create new volumes from a snapshot natively using the Kubernetes API.</p>
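<p>To actually get the data into the new cluster, the snapshot is then referenced as the <code>dataSource</code> of a new PersistentVolumeClaim (a sketch based on the alpha snapshot API quoted above; the names reuse those from the examples):</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-restored
spec:
  # Provision the new volume from the snapshot instead of empty storage
  dataSource:
    name: new-snapshot-test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

<p>Pointing the StatefulSet's volume claim at the restored PVC in the new cluster avoids provisioning a fresh, empty PV.</p>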
<p>I'm using Cloudflare as a CDN, and it's hiding the real IP address of my clients. I'm using an NGINX ingress controller as a load balancer, running in Google Kubernetes Engine. I'm trying to restore the original IP address by following this link: <a href="https://support.cloudflare.com/hc/en-us/articles/200170706-How-do-I-restore-original-visitor-IP-with-Nginx-" rel="nofollow noreferrer">https://support.cloudflare.com/hc/en-us/articles/200170706-How-do-I-restore-original-visitor-IP-with-Nginx-</a>. How can I implement this in the ConfigMap for my NGINX ingress, since I need multiple values for the same key "set-real-ip-from"?</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingressname
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  tls:
  - hosts:
    - example.com
    secretName: sslcertificate
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: service
          servicePort: 80
        path: /
</code></pre>
<p>I also had this problem and it took me forever to fix, but apparently all I needed was this configuration:</p> <pre><code>apiVersion: v1
data:
  # Cloudflare IP ranges which you can find on https://www.cloudflare.com/ips/
  proxy-real-ip-cidr: &quot;173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/13,104.24.0.0/14,172.64.0.0/13,131.0.72.0/22,2400:cb00::/32,2606:4700::/32,2803:f800::/32,2405:b500::/32,2405:8100::/32,2a06:98c0::/29,2c0f:f248::/32&quot;
  # This is the important part
  use-forwarded-headers: &quot;true&quot;
  # Still works without this line because it defaults to X-Forwarded-For, but I use it anyways
  forwarded-for-header: &quot;CF-Connecting-IP&quot;
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
</code></pre> <p>IMO this is all really unclear from the documentation. I had to search through tons of issues and the actual template file itself to figure it out.</p>
<p>I have a node with 64 cores and another with just 8. I need multiple replicas of my Kubernetes pods (at least 6), and my 8-core node can only handle one instance. How can I ask Kubernetes to schedule the remaining five on the more powerful node?</p> <p>It would be good if I could scale up only on the required node; is that possible?</p>
<p>While Kubernetes is intelligent enough to spread pods across nodes with sufficient resources (CPU cores in this case), the following mechanisms can be used to fine-tune how pods are spread/load-balanced across the nodes in a cluster:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/" rel="nofollow noreferrer">Adding labels to nodes and pods</a></li> <li><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container" rel="nofollow noreferrer">Resource requests and limits for pods</a></li> <li><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">nodeSelector, node affinity/anti-affinity, nodeName</a></li> <li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a></li> <li><a href="https://github.com/kubernetes-incubator/descheduler" rel="nofollow noreferrer">K8s Descheduler</a></li> </ul>
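<p>As a sketch of how these can be combined: give each replica a CPU request large enough that the 8-core node only fits one, and optionally add a preferred node affinity toward the big node. The <code>node-size=large</code> label, names, and image below are hypothetical:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-size      # hypothetical label added to the 64-core node
                operator: In
                values: ["large"]
      containers:
      - name: my-app
        image: my-app:latest
        resources:
          requests:
            cpu: "4"   # the 8-core node only fits one replica of this size
</code></pre> <p>With a 4-CPU request, the 8-core node (minus system overhead) fits a single replica, and the scheduler has to place the remaining ones on the 64-core node.</p>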
<p>I'm using <a href="https://github.com/sirupsen/logrus" rel="nofollow noreferrer"><code>logrus</code></a> for logging in our applications, which run on K8s as Docker containers. We have an env variable with which we can set the <code>log-level</code>, and we change it when we restart our application. Now we want to change the log level at runtime, i.e. change it while the container is running, without a restart, so that we can switch from <code>error</code> to <code>debug</code>. I think this is a legitimate request, but I didn't find any reference or any open-source project doing this. Any idea?</p> <pre><code>package logs

import (
    "fmt"
    "os"

    "github.com/sirupsen/logrus"
)

const (
    AppLogLevel = "APP_LOG_LEVEL"
    DefLvl      = "info"
)

var Logger *logrus.Logger

func NewLogger() *logrus.Logger {
    var level logrus.Level
    lvl := getLogLevel()
    // If the level is not set, no messages will be printed
    level = logLevel(lvl)
    logger := &amp;logrus.Logger{
        Out:   os.Stdout,
        Level: level,
    }
    Logger = logger
    return Logger
}

// use from env
func getLogLevel() string {
    lvl, _ := os.LookupEnv(AppLogLevel)
    if lvl != "" {
        return lvl
    }
    return DefLvl
}

func logLevel(lvl string) logrus.Level {
    switch lvl {
    case "debug":
        // Used for tracing
        return logrus.DebugLevel
    case "info":
        return logrus.InfoLevel
    case "error":
        return logrus.ErrorLevel
    case "fatal":
        return logrus.FatalLevel
    default:
        panic(fmt.Sprintf("the specified %s log level is not supported", lvl))
    }
}
</code></pre> <p>I know how to change the log level, but I need a way to influence the running logger to change its level.</p>
<p>As a general Un*x statement, you cannot change an environment variable in a process after it has started. (You can <strong>setenv</strong>(3) your own environment, and you can specify a new process's environment when you <strong>execve</strong>(2) it, but once it's started, you can't change it again.)</p> <p>This restriction carries through to higher levels. If you've <code>docker run</code> a container, its <code>-e</code> option to set an environment variable is one of the things you have to delete and recreate a container to change. The <code>env:</code> is one of the many immutable parts of a Kubernetes Pod specification; you also can't change it without deleting and recreating the pod.</p> <p>If you've deployed the pod via a Deployment (and you really should), you can change the environment variable setting in the Deployment spec (edit the YAML file in source control and <code>kubectl apply -f</code> it, or directly <code>kubectl edit</code>). This will cause Kubernetes to start new pods with the new log value and shut down old ones, in that order, doing a zero-downtime update. <em>Deleting and recreating pods like this is totally normal</em> and happens whenever you want to, for example, change the image inside the deployment to have today's build.</p> <p>If your application is capable of noticing changes to config files it's loaded (and it would have to be specially coded to do that) one other path that could work for you is to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">mount a ConfigMap into a container</a>; if you change the ConfigMap contents, the files the container sees will change but it will not restart. I wouldn't go out of my way to write this just to avoid restarting a pod, though.</p>
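<p>As a sketch of the ConfigMap approach (all names are illustrative), the log level would live in a ConfigMap mounted as a file; editing the ConfigMap later updates the file inside the running container after a short propagation delay, without a restart:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: log-config
data:
  log-level: "debug"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: log-config
      mountPath: /etc/config   # the key appears as the file /etc/config/log-level
  volumes:
  - name: log-config
    configMap:
      name: log-config
</code></pre> <p>The application still has to watch and re-read <code>/etc/config/log-level</code> itself; Kubernetes only updates the file.</p>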
<p>In Kubernetes services talk to each other via a service ip. With iptables or something similar each TCP connection is transparently routed to one of the pods that are available for the called service. If the calling service is not closing the TCP connection (e.g. using TCP keepalive or a connection pool) it will connect to one pod and not use the other pods that may be spawned.</p> <p>What is the correct way to handle such a situation?</p> <hr> <p>My own unsatisfying ideas:</p> <h2>Closing connection after each api call</h2> <p>Am I making every call slower only to be able to distribute requests to different pods? Doesn't feel right.</p> <h2>Minimum number of connections</h2> <p>I could force the caller to open multiple connections (assuming it would then distribute the requests across these connections) but how many should be open? The caller has (and probably should not have) no idea how many pods there are.</p> <h2>Disable bursting</h2> <p>I could limit the resources of the called services so it gets slow on multiple requests and the caller will open more connections (hopefully to other pods). Again I don't like the idea of arbitrarily slowing down the requests and this will only work on cpu bound services.</p>
<p>The keep-alive behavior can be tuned by options specified in the <code>Keep-Alive</code> general header, e.g.:</p> <pre><code>Connection: Keep-Alive
Keep-Alive: max=10, timeout=60
</code></pre> <p>Thus, you could re-open a TCP connection after a specific timeout, or after a maximum number of HTTP transactions, instead of after each API request.</p> <p>Keep in mind that <code>timeout</code> and <code>max</code> are not guaranteed.</p> <p>EDIT:</p> <p>Note that if you use a k8s Service you can choose between two LB modes:</p> <ul> <li><p>iptables proxy mode (by default, kube-proxy in iptables mode chooses a backend at random)</p></li> <li><p>IPVS proxy mode, where you have different load balancing options</p></li> </ul> <p>IPVS provides more options for balancing traffic to backend Pods; these are:</p> <ul> <li>rr: round-robin</li> <li>lc: least connection (smallest number of open connections)</li> <li>dh: destination hashing</li> <li>sh: source hashing</li> <li>sed: shortest expected delay</li> <li>nq: never queue</li> </ul> <p>Check <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">this link</a>.</p>
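<p>As a sketch, switching kube-proxy to IPVS mode with an explicit scheduler is done in the kube-proxy configuration (for example in its ConfigMap on kubeadm-based clusters):</p> <pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; lc, dh, sh, sed, nq are the other options
</code></pre> <p>Note that the IPVS kernel modules must be available on the nodes; otherwise kube-proxy falls back to iptables mode.</p>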
<p>I am trying to convert the following <code>ConfigMap</code> yaml file (<a href="https://github.com/GoogleCloudPlatform/opentsdb-bigtable/blob/master/configmaps/opentsdb-config.yaml" rel="nofollow noreferrer">link here</a>) into a <a href="https://www.terraform.io/docs/providers/kubernetes/r/config_map.html" rel="nofollow noreferrer"><code>kubernetes_config_map</code></a> but am running into syntax errors when trying to define it.</p> <p>In particular, I can't get around the dot notation inside the <code>opentsdb.conf</code> file:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: opentsdb-config
data:
  opentsdb.conf: |
    google.bigtable.project.id = REPLACE_WITH_PROJECT
    google.bigtable.instance.id = REPLACE_WITH_INSTANCE
    google.bigtable.zone.id = REPLACE_WITH_ZONE
    hbase.client.connection.impl = com.google.cloud.bigtable.hbase1_2.BigtableConnection
    google.bigtable.auth.service.account.enable = true
    tsd.network.port = 4242
    tsd.core.auto_create_metrics = true
    tsd.core.meta.enable_realtime_ts = true
    tsd.core.meta.enable_realtime_uid = true
    tsd.core.meta.enable_tsuid_tracking = true
    tsd.http.request.enable_chunked = true
    tsd.http.request.max_chunk = 131072
    tsd.storage.fix_duplicates = true
    tsd.storage.enable_compaction = false
    tsd.storage.max_tags = 12
    tsd.http.staticroot = /opentsdb/build/staticroot
    tsd.http.cachedir = /tmp/opentsdb
</code></pre> <p>This is my current attempt, which is erroring out on the <code>"opentsdb.conf"</code>:</p> <pre><code>resource "kubernetes_config_map" "opentsdb" {
  metadata {
    name      = "opentsdb-config",
    namespace = "dev"
  }

  data {
    "opentsdb.conf" = {
      google.bigtable.project.id = var.project_id,
      google.bigtable.instance.id = google_bigtable_instance.development-instance.name,
      google.bigtable.zone.id = var.zone,
      hbase.client.connection.impl = "com.google.cloud.bigtable.hbase1_2.BigtableConnection",
      google.bigtable.auth.service.account.enable = true
      tsd.network.port = 4242
      tsd.core.auto_create_metrics = true
      tsd.core.meta.enable_realtime_ts = true
      tsd.core.meta.enable_realtime_uid = true
      tsd.core.meta.enable_tsuid_tracking = true
      tsd.http.request.enable_chunked = true
      tsd.http.request.max_chunk = 131072
      tsd.storage.fix_duplicates = true
      tsd.storage.enable_compaction = false
      tsd.storage.max_tags = 12
      tsd.http.staticroot = "/opentsdb/build/staticroot"
      tsd.http.cachedir = "/tmp/opentsdb"
    }
  }
}
</code></pre>
<p>The issue that I had is that I was trying to assign an object to a string literal.</p> <p>I needed to use the <code>EOF</code> syntax as follows:</p> <pre><code>resource "kubernetes_config_map" "opentsdb" {
  metadata {
    name      = "opentsdb-config"
    namespace = "dev"
  }

  data = {
    "opentsdb.conf" = &lt;&lt;EOF
google.bigtable.project.id = ${var.project_id}
google.bigtable.instance.id = ${var.bigtable_instance_id}
google.bigtable.zone.id = ${var.zone}
hbase.client.connection.impl = com.google.cloud.bigtable.hbase1_2.BigtableConnection
google.bigtable.auth.service.account.enable = true
tsd.network.port = 4242
tsd.core.auto_create_metrics = true
tsd.core.meta.enable_realtime_ts = true
tsd.core.meta.enable_realtime_uid = true
tsd.core.meta.enable_tsuid_tracking = true
tsd.http.request.enable_chunked = true
tsd.http.request.max_chunk = 131072
tsd.storage.fix_duplicates = true
tsd.storage.enable_compaction = false
tsd.storage.max_tags = 12
tsd.http.staticroot = /opentsdb/build/staticroot
tsd.http.cachedir = /tmp/opentsdb
EOF
  }
}
</code></pre>
<p>I want to manage data redundancy for stability, and I would also like to duplicate the Persistent Volume. Is there any way to replicate persistent volumes?</p>
<p>As of recently, the CSI interface supports snapshots: <a href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volume-snapshots/</a></p>
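<p>Once a VolumeSnapshot exists, a duplicate volume can be provisioned from it by referencing the snapshot as the <code>dataSource</code> of a new PVC. A sketch with hypothetical names (this requires a CSI driver with snapshot support and the relevant feature gates enabled on the cluster):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: my-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</code></pre>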
<p>I want to start out by saying I do not know the exact architecture of the servers involved. All I know is that they are Ubuntu machines on the cloud.</p> <p>I have set up a 1 master/1 worker k8s cluster using two servers.</p> <p><code>kubectl cluster-info</code> gives me:</p> <pre><code>Kubernetes master is running at https://10.62.194.4:6443
KubeDNS is running at https://10.62.194.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre> <p>I have created a simple deployment as such:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
</code></pre> <p>Which spins up an nginx pod exposed on container port 80.</p> <p>I have exposed this deployment using:</p> <pre><code>kubectl expose deployment nginx-deploy --type NodePort
</code></pre> <p>When I run <code>kubectl get svc</code>, I get:</p> <pre><code>nginx-deploy   NodePort   10.99.103.239   &lt;none&gt;   80:30682/TCP   29m
</code></pre> <p><code>kubectl get pods -o wide</code> gives me:</p> <pre><code>nginx-deploy-7c45b84548-ckqzb   1/1   Running   0   33m   192.168.1.5   myserver1   &lt;none&gt;   &lt;none&gt;
nginx-deploy-7c45b84548-vl4kh   1/1   Running   0   33m   192.168.1.4   myserver1   &lt;none&gt;   &lt;none&gt;
</code></pre> <p>Since I exposed the deployment using NodePort, I was under the impression I can access the deployment using <code>&lt; Node IP &gt; : &lt; Node Port &gt;</code></p> <p>The Node IP of the worker node is 10.62.194.5 and when I try to access <a href="http://10.62.194.5:30682" rel="nofollow noreferrer">http://10.62.194.5:30682</a> I do not get the nginx landing page.</p> <p>One part I do not understand is that when I do <code>kubectl describe node myserver1</code>, in the long output I receive I can see:</p> <pre><code>Addresses:
  InternalIP:  10.62.194.5
  Hostname:    myserver1
</code></pre> <p>Why does it say InternalIP? I can ping this IP.</p> <p>EDIT: Output of <code>sudo lsof -i -P -n | grep LISTEN</code>:</p> <pre><code>systemd-r  846   systemd-resolve  13u  IPv4  24990   0t0  TCP 127.0.0.53:53 (LISTEN)
sshd       1157  root             3u   IPv4  30168   0t0  TCP *:22 (LISTEN)
sshd       1157  root             4u   IPv6  30170   0t0  TCP *:22 (LISTEN)
xrdp-sesm  9840  root             7u   IPv6  116948  0t0  TCP [::1]:3350 (LISTEN)
xrdp       9862  xrdp             11u  IPv6  117849  0t0  TCP *:3389 (LISTEN)
kubelet    51562 root             9u   IPv4  560219  0t0  TCP 127.0.0.1:42735 (LISTEN)
kubelet    51562 root             24u  IPv4  554677  0t0  TCP 127.0.0.1:10248 (LISTEN)
kubelet    51562 root             35u  IPv6  558616  0t0  TCP *:10250 (LISTEN)
kube-prox  52427 root             10u  IPv4  563401  0t0  TCP 127.0.0.1:10249 (LISTEN)
kube-prox  52427 root             11u  IPv6  564298  0t0  TCP *:10256 (LISTEN)
kube-prox  52427 root             12u  IPv6  618851  0t0  TCP *:30682 (LISTEN)
bird       52925 root             7u   IPv4  562993  0t0  TCP *:179 (LISTEN)
calico-fe  52927 root             3u   IPv6  562998  0t0  TCP *:9099 (LISTEN)
</code></pre> <p>Output of <code>ss -ntlp | grep 30682</code>:</p> <pre><code>LISTEN   0   128   *:30682   *:*
</code></pre>
<p>As far as I understand, you are trying to access <code>10.62.194.5</code> from a host which is in a different subnet, for example your terminal. In Azure, I guess you have a public IP and a private IP for each node. So, if you are trying to access the Kubernetes <code>Service</code> from your terminal, you should use the public IP of the host together with the NodePort, and also make sure that the port is open in your Azure firewall.</p>
<p>I want to enable dynamic auditing in GKE and send logs to some endpoint. How can I enable it?</p> <p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#dynamic-backend" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#dynamic-backend</a></p> <p>According to the official doc I need to restart the API server with the mentioned flags, but I am not able to access the kube-apiserver pod in GKE.</p> <p>I need to set these 3 flags:</p> <pre><code>--audit-dynamic-configuration
--feature-gates=DynamicAuditing=true
--runtime-config=auditregistration.k8s.io/v1alpha1=true
</code></pre> <p>I am expecting it to enable dynamic auditing.</p>
<p>Consider that GKE is a managed version of Kubernetes and the kube-api server is completely managed by Google, so there is no way to pass these flags and restart the server.</p> <p>However, GKE is constantly implementing the new features released in the Open Source version of Kubernetes, and such flags might be enabled by default on an <a href="https://cloud.google.com/kubernetes-engine/docs/release-notes" rel="noreferrer">oncoming version</a>.</p> <p>Unfortunately for this in specific, doesn't seem to be the case (<a href="https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters#about_feature_stages" rel="noreferrer">not even for alpha clusters</a>). If you check the API resources enabled with <code>kubectl api-resources</code>, you'll notice that there is no <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#auditsink-v1alpha1-auditregistration" rel="noreferrer"><code>auditregistration.k8s.io</code></a>.</p> <p>Furthermore, this feature has <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noreferrer">CRDs kinds</a> (<code>AuditSink</code>) not available in GKE yet.</p> <p>At this point, you can either wait for the feature to be rolled out on GKE or switch to the Open Source version.</p>
<p>I am trying to use Istio on bare metal, and I want to use the minimum resources needed just to get an ingress controller with Envoy and Cert-Manager (maybe later evolving to the use of more advanced service mesh features). I tried following these docs: <a href="https://istio.io/docs/tasks/traffic-management/ingress/ingress-certmgr/" rel="nofollow noreferrer">Istio Kubernetes Ingress with Cert-Manager</a>, which demonstrate how to obtain Let's Encrypt TLS certificates for Kubernetes Ingress automatically using Cert-Manager.</p> <p>My main problem is that I am on bare metal and want to use neither LoadBalancer nor NodePort. I was going for a host-network approach, analogous to the nginx solution <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network" rel="nofollow noreferrer">here</a>. 1) Can I use Istio to replace my current nginx-ingress controller with host network?</p> <p>Setup tried (with no success):</p> <pre><code>helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system

helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
    --values install/kubernetes/helm/istio/values-istio-minimal.yaml
</code></pre> <p>2) If (1) is possible, can I use the Istio Helm chart with istio-minimal (just istio-pilot) for that? What is the recommended minimal profile setup in this case?</p>
<p>The <em>istio.io</em> document for <a href="https://istio.io/docs/tasks/traffic-management/ingress/ingress-certmgr/" rel="nofollow noreferrer">Ingress with Cert-Manager</a> needs you to use the <code>ingress-gateway</code> object to attach it to a load balancer, so it's not an alternative in this case.</p> <p>The Nginx approach is feasible as <a href="https://docs.cert-manager.io/en/latest/tutorials/acme/quick-start/" rel="nofollow noreferrer">you can use Cert-Manager with the Nginx ingress class</a> to automatically manage your certificates (replacing the Envoy-based Istio resources).</p> <p>Now, the issue is that you have to redirect all the Nginx incoming traffic into the Istio mesh.</p> <p>Although this is integration is not yet natively supported, <a href="https://github.com/kubernetes/ingress-nginx/issues/2126#issuecomment-367323042" rel="nofollow noreferrer">there are ways to make them work together</a> that might end up rather hacky.</p> <p>Unless you're having an issue that is not described in the question, I don't think having the minimal Istio installation has any relationship with this scenario.</p>
<p>I'd like to install Kubeflow under a large kubernetes cluster for which I'm a namespace admin, but not a cluster admin.</p> <p>I've been following this related git issue:</p> <p><a href="https://github.com/kubeflow/kubeflow/issues/1915" rel="nofollow noreferrer">https://github.com/kubeflow/kubeflow/issues/1915</a></p> <p>The issue suggests that v0.6 may be providing this capability, but the git issue has not been updated recently. Now that v0.6 is released I'm trying to track down whether this is now possible, and if so how to go about installing Kubeflow under a namespace without cluster admin privileges.</p>
<p>v0.6 supports multi-user profiles which allow non-admin users to manage resources in their own namespaces, but the initial installation still requires cluster admin privileges.</p> <p>Also replied on the GitHub issue.</p>
<p>I have set up a v1.13 Kubernetes cluster using Kubespray. Our etcd runs as Docker containers outside the K8s cluster. If I check the etcd certificates, I can see that each etcd has its own CA, client cert, and key.</p> <p>If I want to scrape the /metrics endpoints of these etcd containers for Prometheus, which certificates should I use for the HTTPS endpoints?</p>
<p>I am not yet sure if this is the most secure way, but I took the ca.pem, cert, and key that one of the etcd nodes uses.</p> <p>I created a Kubernetes Secret object out of the three:</p> <pre><code>kubectl create secret generic etcd-metrics -n monitoring --from-file=etcd-secrets/
</code></pre> <p>Then I mounted the secret files into the Prometheus container and referenced them in my scrape config as below:</p> <pre><code>targets:
- job_name: etcd
  scrape_interval: 15s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  static_configs:
  - targets:
    - 172.xxxxx:2379
    - 172.xxxxx:2379
    - 172.xxxxx:2379
  tls_config:
    ca_file: /etc/ssl/etcd/ca.pem
    cert_file: /etc/ssl/etcd/etcd-node.pem
    key_file: /etc/ssl/etcd/etcd-key.pem
    insecure_skip_verify: false
</code></pre>
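<p>For completeness, for those certificate file paths to exist, the <code>etcd-metrics</code> Secret has to be mounted into the Prometheus container. A fragment of what that might look like in the Prometheus Deployment (the container name and image are illustrative):</p> <pre><code># Fragment of the Prometheus pod spec mounting the secret
# at the path referenced by tls_config
spec:
  containers:
  - name: prometheus
    image: prom/prometheus
    volumeMounts:
    - name: etcd-metrics
      mountPath: /etc/ssl/etcd
      readOnly: true
  volumes:
  - name: etcd-metrics
    secret:
      secretName: etcd-metrics
</code></pre>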
<p>I recently started to explore k8s extensions and got introduced to two concepts:</p> <ol> <li>CRDs.</li> <li>Service catalogs.</li> </ol> <p>They look pretty similar to me. The only difference, to my understanding, is that CRDs are deployed inside the same cluster to be consumed, whereas catalogs are deployed to expose services from outside the cluster, for example as a database service (a client can order a MySQL cluster which will be accessible from his cluster).</p> <p>My query here is:</p> <p>Is my understanding correct? If yes, can there be any other scenario where I would want to create a catalog and not a CRD?</p>
<p>Yes, your understanding is correct. Taken from <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog" rel="nofollow noreferrer">official documentation</a>: </p> <blockquote> <h3>Example use case</h3> <p>An application developer wants to use message queuing as part of their application running in a Kubernetes cluster. However, they do not want to deal with the overhead of setting such a service up and administering it themselves. Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.</p> <p>A cluster operator can setup Service Catalog and use it to communicate with the cloud provider’s service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. The application can simply use it as a service.</p> </blockquote> <p>With CRD you are responsible for provisioning resources, running backend logic and so on.</p> <p>More info can be found in this <a href="https://www.youtube.com/watch?v=7wdUa4Ulwxg" rel="nofollow noreferrer">KubeCon 2018 presentation</a>.</p>
<p>I am able to log in to the container running in a pod using <code>kubectl exec -t ${POD} /bin/bash --all-namespaces</code> (POD is the text parameter value in my Jenkins job, in which the user would have entered the pod name before running the job). Now my question is: once logged in to the container, how can I run my test.sh script from there?</p> <p>Flow:</p> <p>Step 1: Run a Jenkins job which should log in to a Docker container running inside the pods.</p> <p>Step 2: From the container, execute the test.sh script.</p> <p>test.sh:</p> <p>echo "This is demo file"</p>
<p>There is no need to have two steps; one step is sufficient. I believe the below should get the job done (note that <code>--all-namespaces</code> is not a valid flag for <code>kubectl exec</code>; use <code>-n &lt;namespace&gt;</code> to pick the pod's namespace):</p> <pre><code>kubectl exec -n &lt;namespace&gt; ${POD} -- /path/to/script/test.sh
</code></pre> <p>Below is the reference from the Kubernetes <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>kubectl exec my-pod -- ls / # Run command in existing pod (1 container case)</p> <p>kubectl exec my-pod -c my-container -- ls / # Run command in existing pod (multi-container case)</p> </blockquote>
<p>I installed a K8s cluster on my laptop. It was running fine in the beginning, but when I restarted my laptop some services stopped running:</p> <pre><code>kube-system   coredns-5c98db65d4-9nm6m    0/1   Error                  594   12d
kube-system   coredns-5c98db65d4-qwkk9    0/1   CreateContainerError
kube-system   kube-scheduler-kubemaster   0/1   CreateContainerError
</code></pre> <p>I searched online for a solution but could not find an appropriate answer; please help me resolve this issue.</p>
<p>I encourage you to look at the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">official Kubernetes documentation</a>. Remember that your kubemaster should have at least the following resources: 2 CPUs or more and 2GB or more of RAM.</p> <ol> <li><p>First, install <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/" rel="nofollow noreferrer">docker</a> and <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer">kubeadm</a> (as the root user) on each machine.</p></li> <li><p>Initialize kubeadm (on the master):</p></li> </ol> <pre><code>kubeadm init &lt;args&gt;
</code></pre> <p>For example, for Calico to work correctly, you need to pass <code>--pod-network-cidr=192.168.0.0/16</code> to kubeadm init:</p> <pre><code>kubeadm init --pod-network-cidr=192.168.0.0/16
</code></pre> <ol start="3"> <li>Install a <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">pod network</a> add-on (depending on what you would like to use). You can install a pod network add-on with the following command:</li> </ol> <pre><code>kubectl apply -f &lt;add-on.yaml&gt;
</code></pre> <p>e.g. for Calico:</p> <pre><code>kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
</code></pre> <ol start="4"> <li>To start using your cluster, you need to run the following on the master as a regular user:</li> </ol> <pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre> <ol start="5"> <li>You can now join any number of machines by running the following on each node as root:</li> </ol> <pre><code>kubeadm join &lt;master-ip&gt;:&lt;master-port&gt; --token &lt;token&gt; --discovery-token-ca-cert-hash sha256:&lt;hash&gt;
</code></pre> <p>By default, <strong>tokens expire after 24 hours</strong>. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control-plane node:</p> <pre><code>kubeadm token create
</code></pre> <p>Please let me know if it works for you.</p>
<p>Hey I'm running a kubernetes cluster and I want to run a command on all pods that belong to a specific service.</p> <p>As far as I know kubectl exec can only run on a pod and tracking all my pods is a ridiculous amount of work (which is one of the benefits of services).</p> <p>Is there any way or tool that gives you the ability to "broadcast" to all pods in a service?</p> <p>Thanks in advance for your help! </p>
<p>Here's a simple example with kubectl piped to xargs, printing the env of each pod:</p> <pre><code>kubectl get pod \
  -l {your label selectors} \
  --field-selector=status.phase=Running \
  -o custom-columns=name:metadata.name --no-headers \
  | xargs -I{} kubectl exec {} env
</code></pre>
<p>When I access my Kubeflow endpoint to upload and run a pipeline using a cloned TFX, the process starts hanging at the first step producing this message:</p> <p>"This step is in Pending state with this message: ImagePullBackOff: Back-off pulling image "tensorflow/tfx:0.14.0dev", which is the same image used in the created pipeline yaml file.</p> <p>My overall goal is to build an ExampleGen for tfrecords files, just as described in the guide <a href="https://www.tensorflow.org/tfx/guide/examplegen" rel="nofollow noreferrer">here</a>. The most recent tfx version in pip is 0.13 and <a href="https://github.com/tensorflow/tfx/blob/r0.13/tfx/utils/dsl_utils.py" rel="nofollow noreferrer">does not yet include the necessary functions</a>. For this reason, I install tf-nightly and clone/build tfx (dev-version 0.14). Doing so and installing some additional modules, e.g. tensorflow_data_validation, I can now create my pipeline using the tfx components and including an ExampleGen for tfrecords files. I finally build the pipeline with the KubeflowRunner. Yet this yields the error stated above.</p> <p>I now wonder about an appropriate way to address this. I guess one way would be to build an image myself with the specified versions, but maybe there is a more practical way?</p>
<p>TFX doesn't have a nightly image build as yet. Currently, it defaults to using the image tagged with the version of the library you use to build the pipeline, hence the reason the tag is <code>0.14dev0</code>. This is the current version at HEAD, see here: <a href="https://github.com/tensorflow/tfx/blob/a1f43af5e66f9548ae73eb64813509445843eb53/tfx/version.py#L17" rel="nofollow noreferrer">https://github.com/tensorflow/tfx/blob/a1f43af5e66f9548ae73eb64813509445843eb53/tfx/version.py#L17</a></p> <p>You can build your own image and push it somewhere, for example <code>gcr.io/your-gcp-project/your-image-name:tag</code>, and specify that the pipeline use this image instead, by customizing the <code>tfx_image</code> argument to the pipeline: <a href="https://github.com/tensorflow/tfx/blob/74f9b6ab26c51ebbfb5d17826c5d5288a67dcf85/tfx/orchestration/kubeflow/base_component.py#L54" rel="nofollow noreferrer">https://github.com/tensorflow/tfx/blob/74f9b6ab26c51ebbfb5d17826c5d5288a67dcf85/tfx/orchestration/kubeflow/base_component.py#L54</a></p> <p>See for example: <a href="https://github.com/tensorflow/tfx/blob/b3796fc37bd4331a4e964c822502ba5096ad4bb6/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_kubeflow.py#L243" rel="nofollow noreferrer">https://github.com/tensorflow/tfx/blob/b3796fc37bd4331a4e964c822502ba5096ad4bb6/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_kubeflow.py#L243</a></p>
<p>I can't start pod which requires privileged security context. PodSecurityPolicy:</p> <pre><code>apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: pod-security-policy spec: privileged: true allowPrivilegeEscalation: true readOnlyRootFilesystem: false allowedCapabilities: - '*' allowedProcMountTypes: - '*' allowedUnsafeSysctls: - '*' volumes: - '*' hostPorts: - min: 0 max: 65535 hostIPC: true hostPID: true hostNetwork: true runAsUser: rule: 'RunAsAny' seLinux: rule: 'RunAsAny' supplementalGroups: rule: 'RunAsAny' fsGroup: rule: 'RunAsAny' </code></pre> <p>ClusterRole:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: privileged rules: - apiGroups: - '*' resourceNames: - pod-security-policy resources: - '*' verbs: - '*' </code></pre> <p>ClusterRoleBinding:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: privileged-role-binding roleRef: kind: ClusterRole name: privileged apiGroup: rbac.authorization.k8s.io subjects: # Authorize specific service accounts: - kind: ServiceAccount name: default namespace: kube-system - kind: ServiceAccount name: default namespace: default - kind: Group # apiGroup: rbac.authorization.k8s.io name: system:authenticated # Authorize specific users (not recommended): - kind: User apiGroup: rbac.authorization.k8s.io name: admin </code></pre> <pre><code>$ k auth can-i use psp/pod-security-policy Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'extensions' yes $ k apply -f daemonset.yml The DaemonSet "daemonset" is invalid: spec.template.spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy </code></pre> <p>Not sure if it is needed but I have added PodSecurityContext to args/kube-apiserver <code>--enable-admission-plugins</code></p> <p>Any advice and insight is appreciated. WTF is this: "It looks like your post is mostly code; please add some more details." !?!</p>
<p>I just checked your Pod Security <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="noreferrer">Policy</a> configuration in my current environment: </p> <pre><code>kubeadm version: &amp;version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1" Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1" Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1" </code></pre> <p>I assume that you've included a privileged <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noreferrer">securityContext</a> in the current DaemonSet manifest file.</p> <pre><code>securityContext: privileged: true </code></pre> <p>In order to allow the Kubernetes API to spawn <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged" rel="noreferrer">privileged</a> containers, you might have to set the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="noreferrer">kube-apiserver</a> flag <code>--allow-privileged</code> to <code>true</code>:</p> <p><code>--allow-privileged=true</code></p> <p>Indeed, I can reproduce the same issue in my k8s cluster as soon as I disallow running privileged containers by setting this flag to <code>false</code>.</p>
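<p>On a kubeadm cluster, that flag would typically go into the API server's static pod manifest. The path and surrounding fields below are the usual kubeadm defaults, not taken from your cluster:</p>

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - command:
    - kube-apiserver
    - --allow-privileged=true   # allow pods with securityContext.privileged
```

The kubelet watches this directory, so saving the file restarts the API server with the new flag.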
<p>I was trying to add a list of values inside one key as a label on a pod, but it doesn't really work. I run <code>kubectl edit po/sektor -o yaml</code> (sektor is the pod name), and in the labels section I edit the "abilities" label like this:</p> <pre><code> labels: abilities: - ability1: fire - ability2: teleport - ability3: rockets </code></pre> <p>But when I try to save, it shows me the following error: Invalid value: "The edited file failed validation": ValidationError(Pod.metadata.labels.abilities): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.labels: got "array", expected "string"</p> <p>So I see that I need to change the type of that label somehow, but I can't figure out how.</p> <p>If it's possible, does anyone know how?</p>
<p>This is not possible because of the definition of <code>Labels</code>. According to the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">documentation</a>, <code>Labels</code> are key/value pairs, and both the key and the value must be strings (note the quotes below - an unquoted <code>true</code> is a YAML boolean and would be rejected with a similar type error). What you could do instead is something like this:</p> <pre><code>labels: ability-fire: "true" ability-teleport: "true" ability-rockets: "true" </code></pre> <p>This way you can easily create <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="nofollow noreferrer">Selectors</a> for your other Resources, like <code>Services</code>.</p>
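<p>For example, with string-valued labels like these, a Service can select only the pods that have a given ability. The name and ports here are made up for illustration:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fire-ability-svc      # hypothetical name
spec:
  selector:
    ability-fire: "true"      # label values are strings, hence the quotes
  ports:
  - port: 80
    targetPort: 8080
```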
<p>My lisNamespaces.py file:</p> <pre><code>from __future__ import print_function import time import kubernetes.client from kubernetes.client.rest import ApiException configuration = kubernetes.client.Configuration() configuration.ssl_ca_cert = 'LS0XXXXXXXXXS0tLQo=' configuration.api_key['authorization'] = 'ZXXXXXXXXXXdw==' configuration.api_key_prefix['authorization'] = 'Bearer' configuration.host = 'https://aaaaaaaaaaaaaaa.gr7.us-east-1.eks.amazonaws.com' #configuration.verify_ssl = False api_instance = kubernetes.client.CoreV1Api(kubernetes.client.ApiClient(configuration)) api_response = api_instance.list_namespace() for i in api_response.items: print(i.metadata.name) </code></pre> <p>For the <strong>ssl_ca_cert</strong> value I did <code>kubectl edit secret nameofsa-token-xyze -n default</code> and used the ca.crt value. The user has cluster-level admin permissions.</p> <p>For the bearer token I have used the same user's TOKEN.</p> <p>If I disable ssl verification by setting <code>configuration.verify_ssl = False</code>, my code works fine but with a warning.</p> <p>I want to know what mistake I am making here in passing ssl_ca_cert. Please help me with this.</p>
<p><strong>The mistake I made</strong> was to pass the base64-encoded data of <strong>ca.crt</strong>, which I got from <code>kubectl edit secret nameofsa-token-xyze -n default</code>, directly to <code>configuration.ssl_ca_cert</code> in the code.</p> <p>Instead, the data from that command has to be decoded with <code>base64 --decode</code> and written to a file. This is how I did it:</p> <p><code>kubectl get secrets default-token-nqkdv -n default -o jsonpath='{.data.ca\.crt}' | base64 --decode &gt; ca.crt</code>.</p> <p>Then I need to <strong>pass the path of the ca.crt file in the code</strong>, so the final code looks like below:</p> <pre><code>from __future__ import print_function import time import kubernetes.client from kubernetes.client.rest import ApiException configuration = kubernetes.client.Configuration() configuration.ssl_ca_cert = 'ca.crt' configuration.api_key['authorization'] = 'ZXXXXXXXXXXdw==' configuration.api_key_prefix['authorization'] = 'Bearer' configuration.host = 'https://aaaaaaaaaaaaaaa.gr7.us-east-1.eks.amazonaws.com' api_instance = kubernetes.client.CoreV1Api(kubernetes.client.ApiClient(configuration)) api_response = api_instance.list_namespace() for i in api_response.items: print(i.metadata.name) </code></pre>
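<p>The decode-and-write step can also be done in Python itself, which avoids shelling out to <code>base64</code>. This is a small stdlib-only sketch; the helper name is my own:</p>

```python
import base64


def write_ca_cert(b64_ca_crt: str, path: str = "ca.crt") -> str:
    """Decode the base64 'ca.crt' value taken from the secret and write the
    PEM bytes to a file; the returned path can be assigned to ssl_ca_cert."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_ca_crt))
    return path
```

Usage would then be <code>configuration.ssl_ca_cert = write_ca_cert(secret_data)</code>, where <code>secret_data</code> is the raw base64 string from the secret.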
<p>There is this kubernetes cluster with n nodes, where some of the nodes are fitted with multiple NVIDIA 1080Ti GPU cards. </p> <p>I have two kinds of pods: 1. GPU-enabled; these need to be scheduled on GPU-fitted nodes, where a pod will use only one of the GPU cards present on that node. 2. CPU-only; these can be scheduled anywhere, preferably on CPU-only nodes.</p> <p>The scheduling problem is addressed clearly <a href="https://stackoverflow.com/questions/53859237/kubernetes-scheduling-for-expensive-resources">in this</a> answer.</p> <p>Issue: When scheduling a GPU-enabled pod on a GPU-fitted node, I want to be able to decide which GPU card among those multiple GPU cards my pod is going to use. Further, I was thinking of a load balancer sitting transparently between the GPU hardware and the pods that will decide the mapping.</p> <p>Any help around this architecture would be deeply appreciated. Thank you!</p>
<p>You have to use the <a href="https://github.com/NVIDIA/k8s-device-plugin" rel="nofollow noreferrer">official NVIDIA GPU device plugin</a> rather than the one suggested by GCE. It makes it possible to schedule GPUs by attributes.</p> <p>Pods can specify device selectors based on the attributes that are advertised on the node. These can be specified at the container level. For example:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: gpu-pod spec: containers: - name: cuda-container image: nvidia/cuda:9.0-base command: ["sleep"] args: ["100000"] computeResourceRequests: ["nvidia-gpu"] computeResources: - name: "nvidia-gpu" resources: limits: nvidia.com/gpu: 1 affinity: required: - key: "nvidia.com/gpu-memory" operator: "Gt" values: ["8000"] # change value to appropriate mem for GPU </code></pre> <p>Check the Kubernetes on NVIDIA GPUs <a href="https://docs.nvidia.com/datacenter/kubernetes/kubernetes-install-guide/index.html#abstract" rel="nofollow noreferrer">Installation Guide</a>.</p> <p>Hope this will help.</p>
<p>When using an external load balancer with istio ingress gateways (multiple replicas spread across different nodes), how does it identify which istio ingress gateway it can possibly hit? I.e. I can manually access nodeip:nodeport/endpoint for any node, but how is an external load balancer expected to know all the nodes?</p> <p>Is this manually configured, or does the load balancer consume this info from an API? Is there a recommended strategy for bypassing an external load balancer, e.g. round-robin across a DNS which is aware of the node ip / port?</p> <p>The root of this question is - how do we avoid a single point of failure. Using multiple istio ingress gateway replicas achieves this in istio, but then the external load balancer / load balancer cluster needs to know the replicas. Is this automated or a manual config, or is there a single virtual endpoint that the external load balancer hits?</p>
<p>External load balancers are generally configured to health-check your set of nodes (over a <code>/healthz</code> endpoint or some other method) and balance the incoming traffic using an LB algorithm, by sending the packets they receive to <strong>one of the healthy nodes over the service's NodePort</strong>. </p> <p>In fact, that's mostly the reason why NodePort type services exist in the first place - they don't have much of a use by themselves, but they are the intermediate step between the <code>ClusterIP</code> and <code>LoadBalancer</code> modes.</p> <p>How does the load balancer know about the nodes? It heavily depends on the load balancer. As an example, if you use <a href="https://metallb.universe.tf/concepts/bgp/" rel="nofollow noreferrer">MetalLB</a> in BGP mode, you need to add your nodes as peers to your external BGP router (either manually or in an automated way). MetalLB takes care of advertising the IPs of the LoadBalancer type services to the router. This means that the router effectively becomes the load balancer of your cluster.</p> <p>There are also a number of enterprise-grade commercial Kubernetes load balancers out there, such as F5 Big-IP.</p>
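<p>As an illustration of the "manually or in an automated way" part: in MetalLB's BGP mode the peering is declared in a ConfigMap. All addresses and ASNs below are placeholder values:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1     # the external BGP router
      peer-asn: 64501
      my-asn: 64500
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24          # IPs handed out to LoadBalancer services
```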
<p>My pods have a dynamically generated ID appended to their names like i.e. <code>my-app-name-7b587cd75b-dscsr</code> which is different on every deployment (next time it could be <code>my-app-name-xcgv83bfsd-4kjsf</code>).</p> <p>This makes using some commands really cumbersome, because every time I need to see the logs I have to list all pods first and copy-paste the changed name to the <code>logs</code> command: <code>kubectl -n [namespace] logs my-app-name-7b587cd75b-dscsr</code>.</p> <p>Is there a way I can skip using the pod name or part of the name and do something like <code>kubectl -n [namespace] logs my-pod-name-~</code> or <code>kubectl -n [namespace] logs service/my-pod-name</code> like in <code>port-forward</code> command?</p> <p>I tried to inject <code>grep</code> inside the <code>logs</code> command to obtain the pod name and run <code>logs</code> in a single command, but Cmder on Windows, as great as it is, doesn't seem to support <code>$()</code>: <code>kubectl -n [namespace] logs $(kubectl -n my-app-name get pod | grep my-app-name | sed 's/ .*//')</code></p>
<p>Rather than using <strong>POD/$POD_NAME,</strong> you can use <strong>Deployment/$DEPLOYMENT_NAME</strong> to fetch the logs of pods</p> <p><code>kubectl logs deployment/$DEPLOY_NAME</code></p> <pre><code> # Return snapshot logs from container nginx-1 of a deployment named nginx kubectl logs deployment/nginx -c nginx-1 </code></pre> <p><code>kubectl logs --help</code> will provide more info</p>
<p>I am trying to implement some cookie-based session stickiness with HAProxy inside a K8S cluster. I am using the 2.0.2-alpine image.</p> <p>In this environment I have 10 pods (all alive and ready) of a dummy web application StatefulSet, and HAProxy is pointed to their service by the following back-end:</p> <pre><code>backend within_session log global mode http option log-health-checks option httpchk GET /isalive balance roundrobin dynamic-cookie-key XXXXX cookie SESSION_COOKIE rewrite nocache dynamic option httpclose server-template srv 10 _http._tcp.dummywebapplication-service.mynamespace.svc.cluster.local resolvers k8s check </code></pre> <p>Although I set "srv" to a maximum of 10 and I have 10 pods running, HAProxy created only 4 "srv"s.</p> <p>Do you have any idea what could cause it?</p> <p>Here are the relevant HAProxy logs:</p> <pre><code>&lt;133&gt;Jul 23 08:09:04 haproxy[7]: within_session/srv1 changed its FQDN from (null) to dummywebapplication-0.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:09:54 haproxy[7]: within_session/srv2 changed its FQDN from (null) to dummywebapplication-1.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:10:24 haproxy[7]: within_session/srv3 changed its FQDN from (null) to dummywebapplication-2.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:11:14 haproxy[7]: within_session/srv4 changed its FQDN from (null) to dummywebapplication-3.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:11:54 haproxy[7]: within_session/srv3 changed its FQDN from (null) to dummywebapplication-4.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:13:14 haproxy[7]: within_session/srv2 changed its FQDN from (null) to dummywebapplication-6.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:14:44
haproxy[7]: within_session/srv2 changed its FQDN from (null) to dummywebapplication-8.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:20:04 haproxy[7]: within_session/srv1 changed its FQDN from (null) to dummywebapplication-6.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:20:04 haproxy[7]: within_session/srv2 changed its FQDN from (null) to dummywebapplication-1.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' &lt;133&gt;Jul 23 08:20:04 haproxy[7]: within_session/srv3 changed its FQDN from (null) to dummywebapplication-5.dummywebapplication-service.mynamespace.svc.cluster.local by 'SRV record' </code></pre> <p>Note that the first 4 lines have srv1-4, and after that the srv ids were reused.</p>
<p>Adding <code>accepted_payload_size 8192</code> to the "resolvers" section fixed the issue.</p>
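<p>The reason this matters: by default, DNS responses over UDP are limited to 512 bytes, so an SRV response listing many pods gets truncated and HAProxy only sees the first few records (hence only 4 of the 10 servers being filled). Raising <code>accepted_payload_size</code> enables larger EDNS0 responses. A sketch of such a section, where the nameserver address is an assumption (the typical kube-dns/CoreDNS ClusterIP):</p>

```
resolvers k8s
    nameserver dns1 10.96.0.10:53     # kube-dns/CoreDNS service IP
    accepted_payload_size 8192        # allow large (EDNS0) DNS responses
    hold valid 10s
```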
<p>I have Kubernetes cluster with two nodes. One's role is Master.</p> <p>I want to move master node to another physical server. </p> <p>The possible way I see is to add another node and change its role to master. </p> <p>How can I do that? Is there any kind of instructions? Is this the only way?</p>
<p>Might be a duplicate of <a href="https://stackoverflow.com/questions/34239979/migration-of-kubernetes-master">Migration of Kubernetes Master</a>.</p> <p>In fact, you just need to </p> <ul> <li>clone the data from the disk</li> <li>change the identity of the new server (IP address/DNS name)</li> </ul> <p><strong>Update</strong></p> <p>If you want to change the master IP address in the kubeadm config, check the following steps: </p> <ul> <li>replacing the IP address in all config files in /etc/kubernetes</li> <li>backing up /etc/kubernetes/pki</li> <li>identifying certs in /etc/kubernetes/pki that have the old IP address as an alt name</li> <li>deleting both the cert and key for each of them (for me it was just apiserver and etcd/peer)</li> <li>regenerating the certs using <a href="https://github.com/kubernetes/kubeadm/issues/338#issuecomment-418879755" rel="nofollow noreferrer">kubeadm alpha phase certs</a></li> <li>identifying configmaps in the kube-system namespace that reference the old IP</li> <li>manually editing those configmaps, then restarting kubelet and docker (to force all containers to be recreated)</li> </ul> <p>Or take a look at this step-by-step <a href="https://github.com/kubernetes/kubeadm/issues/338#issuecomment-418879755" rel="nofollow noreferrer">instruction</a>. </p>
<p>I have a classic microservice architecture. So, there are different applications. Each application may have <code>1..N</code> instances. The system is deployed to <code>Kubernetes</code>. So, we have many different <code>PODs</code>, which can start and stop at any time.</p> <p>I want to implement the <a href="https://www.confluent.io/blog/transactions-apache-kafka/" rel="nofollow noreferrer">read-process-write</a> pattern, so I need Kafka transactions. </p> <p>To configure transactions, I need to set some <code>transaction id</code> for each Kafka producer. (Actually, I need a <code>transaction-id-prefix</code>, because I use Spring for my applications, and it has such an <code>API</code>). These <code>IDs</code> have to be the same after the application is restarted.</p> <p>So, how do I choose a Kafka transaction id for several applications hosted in Kubernetes?</p>
<p>If the consumer starts the transaction (read-process-write) then the transaction id prefix must be the same for all instances of the same app (so that zombie fencing works correctly after a rebalance). The actual transaction id used is <code>&lt;prefix&gt;&lt;group&gt;.&lt;topic&gt;.&lt;partition&gt;</code>.</p> <p>If you have multiple apps, they should have unique prefixes (although if they consume from different topics, they will be unique anyway).</p> <p>For producer-only transactions, the prefix must be unique in each instance (to prevent kafka fencing the producers).</p>
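<p>The derived id format described above can be illustrated with a small helper. This is just a sketch of the naming scheme for clarity (the framework computes the actual id internally; the function name is my own):</p>

```python
def transactional_id(prefix: str, group: str, topic: str, partition: int) -> str:
    """Transactional id derived for read-process-write:
    <prefix><group>.<topic>.<partition>."""
    return f"{prefix}{group}.{topic}.{partition}"
```

So two instances of the same app that are assigned the same partition after a rebalance end up with the same transactional id, which is what lets the broker fence the zombie producer.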
<p>In order to access the Kubernetes Dashboard remotely, I have tried to replace the <code>ClusterIP</code> with <code>nodePort</code> as recomended <a href="https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above" rel="nofollow noreferrer">here</a> and <a href="https://stackoverflow.com/questions/54130046/how-can-i-access-to-kubernetes-dashboard-using-nodeport-in-a-remote-cluster-for">here</a>. However the edit always fails with the following error:</p> <pre><code>Invalid value: "The edited file failed validation": ValidationError(Service.spec): unknown field "nodePort" in io.k8s.api.core.v1.ServiceSpec </code></pre> <p>The command recommended by the references above is:</p> <pre><code>kubectl edit svc/kubernetes-dashboard --namespace=kube-system </code></pre> <p>Here is the <code>yaml</code> what I was trying after changing:</p> <pre><code>apiVersion: v1 kind: Service metadata creationTimestamp: "2019-07-24T13:03:48Z" labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system resourceVersion: "2238" selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard uid: 79c37d2b-ae13-11e9-b2a1-0026b95c3009 spec: NodePort: 10.110.154.246 ports: - port: 80 protocol: TCP targetPort: 9090 selector: k8s-app: kubernetes-dashboard sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <p>And the output of client and server version is as follows:</p> <pre><code> $kubectl version Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.8", GitCommit:"0c6d31a99f81476dfc9871ba3cf3f597bec29b58", GitTreeState:"clean", BuildDate:"2019-07-08T08:38:54Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>You were using the wrong configuration. There is no field named <code>NodePort</code> in the <code>spec</code> of a Kubernetes Service. The doc you shared told you to change the value of the field <code>spec.type</code> from <code>ClusterIP</code> to <code>NodePort</code>. On the other hand, you are adding a new field <code>spec.NodePort</code>, which is totally invalid. See <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a></p> <p>Try like this, while doing <code>kubectl edit</code>:</p> <pre><code>apiVersion: v1 kind: Service metadata ... labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kube-system ... spec: ports: - port: 80 protocol: TCP targetPort: 9090 ... type: NodePort ... </code></pre> <p>Or just run this:</p> <pre><code>kubectl get svc -n kube-system kubernetes-dashboard -o yaml | sed 's/type: ClusterIP/type: NodePort/' | kubectl replace -f - </code></pre>
<p>I am trying to package a helm chart on my CI server without having to helm init:</p> <pre><code> docker run --rm -v "$PWD":/tmp/src/ -w /tmp/src/ alpine/helm helm package ./charts docker run --rm -v "$PWD":/tmp/src/ -w /tmp/src/ alpine/helm helm init --client-only &amp;&amp; helm package ./charts </code></pre> <p>The above do not work - can anyone help? I thought this would be a common request: the ability to create a helm package without the .kube folder or access to a cluster.</p>
<p>You might want to look into the <a href="https://github.com/kiwigrid/helm-maven-plugin" rel="nofollow noreferrer">helm-maven-plugin</a></p> <p>Basically you can use Maven to get the same result, and should be able to do it from most CI software.</p>
<p>I read in the <a href="https://github.com/hashicorp/envconsul" rel="nofollow noreferrer">envconsul documentation</a> this:</p> <blockquote> <p>For additional security, tokens may also be read from the environment using the CONSUL_TOKEN or VAULT_TOKEN environment variables respectively. It is highly recommended that you do not put your tokens in plain-text in a configuration file.</p> </blockquote> <p>So, I have this <code>envconsul.hcl</code> file:</p> <pre><code># the settings to connect to vault server # "http://10.0.2.2:8200" is the Vault's address on the host machine when using Minikube vault { address = "${env(VAULT_ADDR)}" renew_token = false retry { backoff = "1s" } token = "${env(VAULT_TOKEN)}" } # the settings to find the endpoint of the secrets engine secret { no_prefix = true path = "secret/app/config" } </code></pre> <p>However, I get this error:</p> <pre><code>[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Get $%7Benv%28VAULT_ADDR%29%7D/v1/secret/app/config: unsupported protocol scheme "" (retry attempt 1 after "1s") </code></pre> <p>As I understand it, it cannot do the variable substitution.<br> I tried to set <code>"http://10.0.2.2:8200"</code> and it works.</p> <p>The same happens with the <code>VAULT_TOKEN</code> var.<br> If I hardcode the <code>VAULT_ADDR</code>, then I get this error:</p> <pre><code>[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Error making API request. URL: GET http://10.0.2.2:8200/v1/secret/app/config Code: 403. 
Errors: * permission denied (retry attempt 2 after "2s") </code></pre> <p>Is there a way for this file to understand the environmental variables?</p> <p><strong>EDIT 1</strong> This is my <code>pod.yml</code> file</p> <pre><code>--- apiVersion: v1 kind: Pod metadata: name: sample spec: serviceAccountName: vault-auth restartPolicy: Never # Add the ConfigMap as a volume to the Pod volumes: - name: vault-token emptyDir: medium: Memory # Populate the volume with config map data - name: config configMap: # `name` here must match the name # specified in the ConfigMap's YAML # -&gt; kubectl create configmap vault-cm --from-file=./vault-configs/ name: vault-cm items: - key : vault-agent-config.hcl path: vault-agent-config.hcl - key : envconsul.hcl path: envconsul.hcl initContainers: # Vault container - name: vault-agent-auth image: vault volumeMounts: - name: vault-token mountPath: /home/vault - name: config mountPath: /etc/vault # This assumes Vault running on local host and K8s running in Minikube using VirtualBox env: - name: VAULT_ADDR value: http://10.0.2.2:8200 # Run the Vault agent args: [ "agent", "-config=/etc/vault/vault-agent-config.hcl", "-log-level=debug", ] containers: - name: python image: myappimg imagePullPolicy: Never ports: - containerPort: 5000 volumeMounts: - name: vault-token mountPath: /home/vault - name: config mountPath: /etc/envconsul env: - name: HOME value: /home/vault - name: VAULT_ADDR value: http://10.0.2.2:8200 </code></pre>
<p>I. Within the container specification, set environment variables (values in double quotes):</p> <pre><code>env: - name: VAULT_TOKEN value: "abcd1234" - name: VAULT_ADDR value: "http://10.0.2.2:8200" </code></pre> <p>Then refer to the values in envconsul.hcl:</p> <pre><code>vault { address = ${VAULT_ADDR} renew_token = false retry { backoff = "1s" } token = ${VAULT_TOKEN} } </code></pre> <p>II. Another option is to unseal the vault cluster (with the unseal key that was printed while initializing the vault cluster)</p> <pre><code>$ vault operator unseal </code></pre> <p>and then authenticate to the vault cluster using a root token. </p> <pre><code>$ vault login &lt;your-generated-root-token&gt; </code></pre> <p>More <a href="https://blog.kubernauts.io/managing-secrets-in-kubernetes-with-vault-by-hashicorp-f0db45cc208a" rel="nofollow noreferrer">details</a>. </p>
<p>I am looking for a way in Kubectl to show the history of a node's Ready/NotReady status and the timestamp of each transition.</p>
<p>History of node status (ready or not) and last transition time can be seen in the output of <code>kubectl describe nodes</code> under the <code>Conditions:</code> section:</p> <pre><code>Name: master-node-cf430c398 ... Conditions: ... Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready True Wed, 24 Jul 2019 16:14:06 +0000 Mon, 22 Jul 2019 20:17:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled ... Name: worker-node-b587b0f0d3 ... Conditions: ... Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Ready True Wed, 24 Jul 2019 16:14:07 +0000 Mon, 22 Jul 2019 20:17:22 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled ... </code></pre>
<p>We are seeing an issue with the GKE kubernetes scheduler being unable or unwilling to schedule Daemonset pods on nodes in an auto scaling node pool.</p> <p>We have three node pools in the cluster, however the <code>pool-x</code> pool is used to exclusively schedule a single Deployment backed by an HPA, with the nodes having the taint "node-use=pool-x:NoSchedule" applied to them in this pool. We have also deployed a filebeat Daemonset on which we have set a very lenient tolerations spec of <code>operator: Exists</code> (hopefully this is correct) to ensure the Daemonset is scheduled on every node in the cluster.</p> <p>Our assumption is that, as <code>pool-x</code> is auto-scaled up, the filebeat Daemonset would be scheduled on the node prior to scheduling any of the pods assigned to that node. However, we are noticing that as new nodes are added to the pool, the filebeat pods are failing to be placed on the node and are in a perpetual "Pending" state. Here is an example of the describe output (truncated) of the filebeat Daemonset:</p> <pre><code>Number of Nodes Scheduled with Up-to-date Pods: 108 Number of Nodes Scheduled with Available Pods: 103 Number of Nodes Misscheduled: 0 Pods Status: 103 Running / 5 Waiting / 0 Succeeded / 0 Failed </code></pre> <p>And the events for one of the "Pending" filebeat pods:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 18m (x96 over 68m) default-scheduler 0/106 nodes are available: 105 node(s) didn't match node selector, 5 Insufficient cpu. Normal NotTriggerScaleUp 3m56s (x594 over 119m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 6 node(s) didn't match node selector Warning FailedScheduling 3m14s (x23 over 15m) default-scheduler 0/108 nodes are available: 107 node(s) didn't match node selector, 5 Insufficient cpu.
</code></pre> <p>As you can see, the node does not have enough resources to schedule the filebeat pod: CPU requests are exhausted due to the other pods running on the node. However, why is the Daemonset pod not being placed on the node prior to scheduling any other pods? It seems like the very definition of a Daemonset necessitates priority scheduling.</p> <p>Also of note, if I delete a pod on a node where filebeat is "Pending" scheduling due to being unable to satisfy the CPU requests, filebeat is immediately scheduled on that node, indicating that there is some scheduling precedence being observed. </p> <p>Ultimately, we just want to ensure the filebeat Daemonset is able to schedule a pod on every single node in the cluster and have that priority work nicely with our cluster autoscaling and HPAs. Any ideas on how we can achieve this?</p> <p>We'd like to avoid having to use <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority" rel="nofollow noreferrer">Pod Priority</a>, as it's <em>apparently</em> an alpha feature in GKE and we are unable to make use of it at this time.</p>
<p>The behavior you are expecting of the DaemonSet pods being scheduled first on a node is no longer the reality <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#how-daemon-pods-are-scheduled" rel="nofollow noreferrer">(since 1.12)</a>. Since 1.12, DaemonSet pods are handled by the default scheduler and rely on <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#effect-of-pod-priority-on-scheduling-order" rel="nofollow noreferrer">pod priority</a> to determine the order in which pods are scheduled. You may want to consider creating a <code>PriorityClass</code> specific to DaemonSets with a relatively high <code>value</code> to ensure they are scheduled ahead of most of your other pods.</p>
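<p>A hypothetical <code>PriorityClass</code> for this could look as follows (the name and value are made up; reference it from the DaemonSet's pod spec via <code>priorityClassName</code>):</p>

```yaml
apiVersion: scheduling.k8s.io/v1beta1   # scheduling.k8s.io/v1 on newer clusters
kind: PriorityClass
metadata:
  name: daemonset-priority
value: 1000000
globalDefault: false
description: "Schedules DaemonSet pods ahead of regular workloads."
```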
<p>I am new to Docker and Kubernetes, though I have mostly figured out how it all works at this point.</p> <p>I inherited an app that uses both, as well as KOPS.</p> <p>One of the last things I am having trouble with is the KOPS setup. I know for absolute certain that Kubernetes is setup via KOPS. There's two KOPS state stores on an S3 bucket (corresponding to a dev and prod cluster respectively)</p> <p>However while I can find the server that kubectl/kubernetes is running on, absolutely none of the servers I have access to seem to have a kops command.</p> <p>Am I misunderstanding how KOPS works? Does it not do some sort of dynamic monitoring (would that just be done by ReplicaSet by itself?), but rather just sets a cluster running and it's done?</p> <p>I can include my cluster.spec or config files, if they're helpful to anyone, but I can't really see how they're super relevant to this question.</p> <p>I guess I'm just confused - as far as I can tell from my perspective, it <em>looks</em> like KOPS is run once, sets up a cluster, and is done. But then whenever one of my node or master servers goes down, it is self-healing. I would expect that of the node servers, but not the master servers.</p> <p>This is all on AWS.</p> <p>Sorry if this is a dumb question, I am just having trouble conceptually understanding what is going on here.</p>
<p><code>kops</code> is a command line tool, you run it from your own machine (or a jumpbox) and it creates clusters for you, it’s not a long-running server itself. It’s like Terraform if you’re familiar with that, but tailored specifically to spinning up Kubernetes clusters.</p> <p><code>kops</code> creates nodes on AWS via autoscaling groups. It’s this construct (which is an AWS thing) that ensures your nodes come back to the desired number.</p> <p><code>kops</code> is used for managing Kubernetes clusters themselves, like creating them, scaling, updating, deleting. <code>kubectl</code> is used for managing container workloads that run on Kubernetes. You can create, scale, update, and delete your replica sets with that. How you run workloads on Kubernetes should have nothing to do with how/what tool you (or some cluster admin) use to manage the Kubernetes cluster itself. That is, unless you’re trying to change the “system components” of Kubernetes, like the Kubernetes API or <code>kubedns</code>, which are cluster-admin-level concerns but happen to run on top of Kubernetes as container workloads.</p> <p>As for how pods get spun up when nodes go down, that’s what Kubernetes as a container orchestrator strives to do. You declare the desired state you want, and the Kubernetes system makes it so. If things crash or fail or disappear, Kubernetes aims to reconcile this difference between actual state and desired state, and schedules desired container workloads to run on available nodes to bring the actual state of the world back in line with your desired state. At a lower level, AWS does similar things — it creates VMs and keeps them running. If Amazon needs to take down a host for maintenance it will figure out how to run your VM (and attach volumes, etc.) elsewhere automatically.</p>
<p>I am working on automating tasks on a Kubernetes cluster and need to create an API to cordon a node. Basically this API should not allow any new pods to be scheduled on the cordoned node.</p> <p>I went through the Stack Overflow discussion below but couldn't figure out the APIs needed to cordon (and then drain) a node: <a href="https://stackoverflow.com/questions/52325091/how-to-access-the-kubernetes-api-in-go-and-run-kubectl-commands">How to access the Kubernetes API in Go and run kubectl commands</a></p>
<p>To find out which API calls are involved in a particular kubectl command, run kubectl with the flag <code>--v=9</code>, which displays the HTTP requests made to the API server along with their responses (<a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging" rel="noreferrer">verbose mode</a>).</p> <h2>API involved in <code>kubectl cordon nodename</code>:</h2> <pre><code>GET /api/v1/nodes/node-name PATCH /api/v1/nodes/node-name </code></pre> <p>In the HTTP PATCH request, <code>Request Body: {"spec":{"unschedulable":true}}</code> <code>Content-Type: "application/strategic-merge-patch+json"</code></p> <p>Under the hood, the Golang client will simply make similar HTTP calls. Refer <a href="https://stackoverflow.com/a/43418833/4989632">here</a> for making an HTTP PATCH request with the Golang client.</p> <h2>API involved in <code>kubectl drain &lt;nodename&gt; --ignore-daemonsets</code>:</h2> <pre><code>PATCH /api/v1/nodes/node-name -&gt; Request Body: {"spec":{"unschedulable":true}} GET /api/v1/pods?fieldSelector=spec.nodeName%3Dnode-name -&gt; Get Podlist POST /api/v1/namespaces/kube-system/pods/coredns-7b5c8bfcfc-s94bs/eviction GET /api/v1/namespaces/kube-system/pods/coredns-7b5c8bfcfc-s94bs -&gt; If the API call returns 404, the Pod was successfully evicted. </code></pre> <p>Basically, the drain command first cordons the node, then evicts the pods on that node (DaemonSet-managed pods are skipped because of <code>--ignore-daemonsets</code>).</p>
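<p>To illustrate, here is a minimal sketch of issuing that PATCH yourself. <code>APISERVER</code>, <code>TOKEN</code> and the node name are placeholders (assumptions), and the actual call is commented out since it needs a live cluster:</p>

```shell
# Sketch: cordon a node via the raw REST API.
NODE="node-name"
PATCH='{"spec":{"unschedulable":true}}'

# The real request, equivalent to `kubectl cordon node-name`
# (commented out because it requires a reachable API server and a valid token):
# curl -k -X PATCH \
#   -H "Authorization: Bearer $TOKEN" \
#   -H "Content-Type: application/strategic-merge-patch+json" \
#   -d "$PATCH" \
#   "$APISERVER/api/v1/nodes/$NODE"

# Locally we can at least confirm the patch body is well-formed JSON:
echo "$PATCH" | python3 -m json.tool >/dev/null && echo "patch body OK"
```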
<p>I am trying to run Minikube with RBAC but unable to create a cluster:</p> <p>Using this:</p> <pre><code>minikube start --memory=8192 --cpus=4 --vm-driver=virtualbox --extra-config=apiserver.Authorization.Mode=RBAC </code></pre> <p>Error:</p> <pre><code>E0113 13:02:54.464971 65250 start.go:343] Error starting cluster: kubeadm init error sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI running command: : Process exited with status 1 ================================================================================ An error has occurred. Would you like to opt in to sending anonymized crash information to minikube to help prevent future errors? To opt out of these messages, run the command: minikube config set WantReportErrorPrompt false ================================================================================ Please enter your response [Y/n]: % </code></pre>
<p>The problem is: <strong>--extra-config=apiserver.Authorization.Mode=RBAC</strong></p> <p>Should be: <strong>--extra-config=apiserver.authorization-mode=RBAC</strong></p> <p>Example:</p> <pre><code>minikube start \ --kubernetes-version v1.13.7 \ --vm-driver=kvm2 \ --memory=4096 \ --cpus=2 \ --disk-size=60G \ --extra-config=apiserver.authorization-mode=RBAC </code></pre>
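<p>Note that once RBAC is enabled, add-ons in <code>kube-system</code> may start failing with permission errors. A common, deliberately broad workaround for local testing is to bind <code>cluster-admin</code> to the <code>kube-system</code> default service account (the binding name below is illustrative; tighten the permissions for anything beyond a local minikube):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-system-default-admin   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```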
<p>Is it actually possible to use a Kubernetes secret in a terraform-deployed application? I am seeing some odd behaviour.</p> <p>I define a cluster with an appropriate node pool, a config map and a secret. The secret contains the service account key json data. I can then deploy my application using <code>kubectl apply -f myapp-deploy.yaml</code> and it works fine. That tells me the cluster is all good, including the secret and config. However, when I try to deploy with terraform I get an error in what looks like the service account fetch:</p> <pre><code>2019-07-19 06:20:45.497 INFO [myapp,,,] 1 --- [main] b.c.PropertySourceBootstrapConfiguration : Located property source: SecretsPropertySource {name='secrets.myapp.null'} 2019-07-19 06:20:45.665 WARN [myapp,,,] 1 --- [main] io.fabric8.kubernetes.client.Config : Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring. 2019-07-19 06:20:45.677 INFO [myapp,,,] 1 --- [main] n.c.m.s.myappApplication : The following profiles are active: test-dev </code></pre> <p>The middle line is the interesting one: it seems to be trying to read the service account from the wrong place.</p> <p>I've walked the relevant settings from my yaml file over to my tf file but maybe I missed something. Here's what the yaml file looks like:</p> <pre><code>... env: - name: GOOGLE_APPLICATION_CREDENTIALS value: "/var/run/secret/cloud.google.com/myapp-sa.json" volumeMounts: - name: "service-account" mountPath: "/var/run/secret/cloud.google.com" ports: - containerPort: 8080 volumes: - name: "service-account" secret: secretName: "myapp" ... </code></pre> <p>And this yaml basically just works fine. Now the equivalent in my tf file looks like:</p> <pre><code>... 
env { name = "GOOGLE_APPLICATION_CREDENTIALS" value = "/var/run/secret/cloud.google.com/myapp-sa.json" } volume_mount { name = "myapp-sa" mount_path = "/var/run/secret/cloud.google.com" sub_path = "" } } volume { name = "myapp-sa" secret { secret_name = "myapp" } } ... </code></pre> <p>And this gives the above error. It seems to decide to look in <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> for the service account token instead of where I told it to. But only when deployed by <strong>terraform</strong>. I'm deploying the same image, and into the same cluster with the same <strong>configmap</strong>. There's something wrong with my tf somewhere. I've tried importing from the yaml deploy but I couldn't see anything important that I missed.</p> <p>FWIW this is a Spring Boot application running on GKE.</p> <p>Hopefully someone knows the answer.</p> <h2>Thanks for any help.</h2> <p>more info: I turned on debugging for io.fabric8.kubernetes and reran both scenarios ie <strong>terraform</strong> and yaml file. Here are the relevant log snippets:</p> <p>Terraform:</p> <pre><code>2019-07-23 23:03:39.189 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client from Kubernetes config... 2019-07-23 23:03:39.268 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Did not find Kubernetes config at: [/root/.kube/config]. Ignoring. 2019-07-23 23:03:39.274 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client from service account... 2019-07-23 23:03:39.274 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account host and port: 10.44.0.1:443 2019-07-23 23:03:39.282 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Did not find service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt]. 
2019-07-23 23:03:39.285 WARN [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring. 2019-07-23 23:03:39.291 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client namespace from Kubernetes service account namespace path... 2019-07-23 23:03:39.295 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Did not find service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace]. Ignoring. </code></pre> <p>Yaml:</p> <pre><code>2019-07-23 23:14:53.374 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client namespace from Kubernetes service account namespace path... 2019-07-23 23:14:53.375 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace]. 2019-07-23 23:14:53.376 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client from Kubernetes config... 2019-07-23 23:14:53.377 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Did not find Kubernetes config at: [/root/.kube/config]. Ignoring. 2019-07-23 23:14:53.378 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client from service account... 2019-07-23 23:14:53.378 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account host and port: 10.44.0.1:443 2019-07-23 23:14:53.383 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt]. 
2019-07-23 23:14:53.384 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token]. 2019-07-23 23:14:53.384 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Trying to configure client namespace from Kubernetes service account namespace path... 2019-07-23 23:14:53.384 DEBUG [snakecharmer,,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace]. </code></pre> <p>It looks like the yaml deploy finds what it needs at <code>/var/run/secrets/kubernetes.io/serviceaccount/ca.crt</code> etc. and the <strong>terraform</strong> deploy doesn't, as if there is a phantom volume mount in there that is missing in <strong>terraform</strong>.</p>
<p>I found the fix. The <strong>terraform</strong> deploy sets <code>automount_service_account_token = false</code>, but the yaml default is <code>true</code>, and that makes all the difference.</p> <p>The switch is in the template.spec section of the <code>kubernetes_deployment</code> in my tf file and that now looks like this snippet:</p> <pre><code>... spec { restart_policy = "Always" automount_service_account_token = true container { port { container_port = 8080 protocol = "TCP" } ... </code></pre> <p>Setting <code>automount_service_account_token = true</code> is the fix; the app comes up fine with that in place.</p>
<p>I have an AKS cluster with an nginx ingress controller. The controller has created a service of type LoadBalancer, and the Ports section looks like this (from <code>kubectl get service</code>):</p> <blockquote> <p>80:31141/TCP</p> </blockquote> <p>If I understand things correctly, port 80 is a ClusterIP port that is not reachable from the outside, but port 31141 is a NodePort and is reachable from outside. So I would assume that the Azure Load Balancer is sending traffic to this 31141 port.</p> <p>I was surprised to find that the Azure Load Balancer is set up with a rule:</p> <pre><code>frontendPort: 80 backendPort: 80 probe (healthCheck): 31141 </code></pre> <p>So it actually does use the nodeport, but only as a health check, and all traffic is sent to port 80, which presumably functions the same way as 31141.</p> <p>A curious note is that if I try to reach the node IP at port 80 from a pod I only get "connection refused", but I suppose it does work if the traffic comes from a load balancer.</p> <p>I was not able to find any information about this on the internet, so the question is how this really works and why the ALB is doing it this way.</p> <p>P.S. I don't have trouble with connectivity, it works. I am just trying to understand how and why it works behind the scenes.</p>
<p>I think I have figured out how that works (disclaimer: my understanding might not be correct, please correct me if it's wrong).</p> <p>What happens is that load-balanced traffic does not reach the node itself on port 80, nor does it reach it on the opened node port (31141 in my case). Instead, the traffic that is sent to the node is not "handled" by the node itself but rather routed further with the help of iptables. So if some traffic hits the node with a destination IP equal to the LB frontend IP and port 80, it goes to the service and on to the pod.</p> <p>As for the health check, I suppose it does not use port 80 because the probe's destination is the node itself rather than the external IP (LB frontend IP), so those iptables rules would not match; it uses the service nodePort for that reason.</p>
<p>I have been using the Google Cloud Load Balancer ingress. However, I'm trying to install a <code>nginxinc/kubernetes-ingress</code> controller in a node with a Static IP address in GKE.</p> <ol> <li>Can I use Google's Cloud Load Balancer ingress controller in the same cluster?</li> <li>How can we use the <code>nginxinc/kubernetes-ingress</code> with a static IP?</li> </ol> <p>Thanks</p>
<p>In case you're using helm to deploy nginx-ingress:</p> <p>First create a static IP address. On Google Cloud, network load balancers (NLBs) only support regional static IPs:</p> <pre><code>gcloud compute addresses create my-static-ip-address --region us-east4 </code></pre> <p>Then install the nginx-ingress chart with the IP address as the loadBalancerIP parameter:</p> <pre><code>helm install --name nginx-ingress stable/nginx-ingress --namespace my-namespace --set controller.service.loadBalancerIP=35.186.172.1 </code></pre>
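<p>If you are not using helm, the equivalent is to set <code>loadBalancerIP</code> directly on the controller's Service. A sketch (the name, namespace and selector are illustrative and must match your own controller deployment):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller    # illustrative name
  namespace: my-namespace
spec:
  type: LoadBalancer
  loadBalancerIP: 35.186.172.1      # the reserved regional static IP
  selector:
    app: nginx-ingress              # must match your controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```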
<p>I use Azure Kubernetes to host a couple of different services.</p> <p>I'm trying to configure UDP load balancing over an external IP. I have created a service of type LoadBalancer, with the UDP protocol and sessionAffinity. My deployment also has an HTTP ReadinessProbe configured.</p> <p>If a UDP client reaches my service from inside the Kubernetes network, everything works fine:</p> <ul> <li>the client has a sticky session to a concrete pod in the ready state;</li> <li>the client is re-balanced to another ready pod if the already-assigned pod dies;</li> <li>the client is re-balanced after <strong>sessionAffinityConfig.clientIP.timeoutSeconds</strong> has elapsed (i.e. the next packets may be routed to another ready pod).</li> </ul> <p>Things go differently if I try to connect to the LoadBalancer externally (using the external IP):</p> <ul> <li>the client has a sticky session to a concrete pod in the ready state;</li> <li>the client doesn't get a new ready pod if the previously assigned one dies. It is connected to a new pod only if it stops sending messages for the <strong>sessionAffinityConfig.clientIP.timeoutSeconds</strong> period of time.</li> </ul> <p>So to solve it I tried to use ingress-nginx. 
I found a useful article <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md" rel="nofollow noreferrer">here</a> about this kind of configuration.</p> <p>But after I completed the udp-services configuration and added the UDP port, I get the following error:</p> <blockquote> <p>cannot create an external load balancer with mix protocols</p> </blockquote> <p>Could you please point me to how to do this properly in Kubernetes.</p> <p>udp-services config map:</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: udp-services namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx data: 5684: "dev/dip-dc:5684" </code></pre> <p>ingress-nginx controller service YAML:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ingress-nginx2 namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx spec: externalTrafficPolicy: Local type: LoadBalancer selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx ports: - name: http port: 80 targetPort: http - name: https port: 443 targetPort: https - name: upd port: 5684 targetPort: udp </code></pre>
<p>A multi-protocol LB service is unfortunately not supported by many Kubernetes providers. </p> <p>Check out this <a href="https://medium.com/asl19-developers/build-your-own-cloud-agnostic-tcp-udp-loadbalancer-for-your-kubernetes-apps-3959335f4ec3" rel="nofollow noreferrer"><strong>tutorial</strong></a> that shows you how to build your own UDP/TCP load balancer.</p> <p>The summary of what you will need to do is:</p> <ol> <li>Create a NodePort service for your application</li> <li>Create a small server instance and run Nginx with an LB config</li> </ol>
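<p>For step 1, a sketch of a NodePort service for a UDP workload (the names are illustrative; the port is borrowed from the question's 5684 example):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dip-dc-udp          # illustrative name
  namespace: dev
spec:
  type: NodePort
  selector:
    app: dip-dc             # must match your deployment's pod labels
  ports:
  - name: coap
    protocol: UDP
    port: 5684
    targetPort: 5684
    nodePort: 30684         # any free port in the default 30000-32767 range
```

<p>The Nginx instance from step 2 would then forward UDP traffic (via its stream module) to <code>&lt;nodeIP&gt;:30684</code> on each node.</p>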
<p>I am new to Kubernetes, so this may be a silly question.<br> I am trying to deploy a StatefulSet of Elasticsearch with 3 pod replicas. I have defined the StatefulSet with a pvc in the spec.<br> This pvc has a storage class which is served by a hostPath volume. </p> <pre><code>volumeClaimTemplates: - metadata: name: beehive-pv-claim spec: accessModes: [ "ReadWriteOnce" ] storageClassName: "local-storage" resources: requests: storage: 1Gi </code></pre> <hr> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: beehive-pv spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage hostPath: path: /home/abc </code></pre> <p>I have a few doubts.<br> 1) Would the above setup/pv serve the /home/abc directory on each node separately? i.e. every pod's data would be stored on its corresponding node/host path. Also, would k8s show one volume bound to multiple pvcs?<br> 2) As I am using a StatefulSet, I am assuming that once pod-{i} is scheduled on node-{i}, it will always be scheduled there in every case (e.g. restart).<br> 3) Is the above setup the right way to implement a case where I need to store the data in a host-local directory, or is a local persistent volume better? I could not get the actual difference between the two.<br> 4) Do I need to create the local-storage storage class manually? (The above setup runs fine in Docker for Windows without creating the storage class.)<br> 5) I may also have other containers which need to store data under the /home/abc directory only, so I will be using subPath while mounting the volume in the container. Do you see any issue here?</p> <p>Please help.</p>
<p>hostPath volumes work well only on single-node clusters. If you have a multi-node environment, you should use a local persistent volume instead.</p> <p>These blog posts explain local persistent volumes:</p> <p>Official blog - <a href="https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/" rel="nofollow noreferrer">https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/</a></p> <p>another reference - <a href="https://vocon-it.com/2018/12/20/kubernetes-local-persistent-volumes/" rel="nofollow noreferrer">https://vocon-it.com/2018/12/20/kubernetes-local-persistent-volumes/</a></p>
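<p>The practical difference shows up in the PV spec: a local volume carries a required <code>nodeAffinity</code>, so the scheduler knows which node the data lives on. A sketch adapted to the question's paths (the node name is an assumption), together with the storage class that local volumes need, which you do have to create manually:</p>

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # delay binding until the pod is scheduled
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: beehive-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /home/abc
  nodeAffinity:                 # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1              # illustrative node name
```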
<p>Given the following K8s resources (deployment/pods, service, ingress), I expect to see the request echoed back to me when I visit <code>https://staging-micro.local/</code> in my browser. What I get instead is <code>502 Bad Gateway</code>. </p> <pre><code># describe deployment (trunc. to show only containers) Containers: cloudsql-proxy: Image: gcr.io/cloudsql-docker/gce-proxy:1.11 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /cloud_sql_proxy -instances=myproject:us-central1:project-staging=tcp:5432 -credential_file=/secrets/cloudsql/credentials.json Environment: &lt;none&gt; Mounts: /secrets/cloudsql from cloudsql-instance-credentials-volume (ro) adv-api-django: Image: gcr.io/google_containers/echoserver:1.9 Port: 8000/TCP Host Port: 0/TCP Environment: # describe service Name: staging-adv-api-service Namespace: staging Labels: app=adv-api platformRole=api tier=backend Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"adv-api","platformRole":"api","tie... Selector: app=adv-api-backend,platformRole=api,tier=backend Type: LoadBalancer IP: 10.103.67.61 Port: http 80/TCP TargetPort: 8000/TCP NodePort: http 32689/TCP Endpoints: 172.17.0.14:8000,172.17.0.6:8000,172.17.0.7:8000 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; # describe ingress Name: staging-api-ingress Namespace: staging Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.12:8080) Rules: Host Path Backends ---- ---- -------- staging-micro.local / staging-adv-api-service:http (172.17.0.14:8000,172.17.0.6:8000,172.17.0.7:8000) </code></pre> <p>note that I have the entry <code>192.168.99.100 staging-micro.local</code> in <code>/etc/hosts</code> on the host machine (running minikube) and that is the correct <code>minikube ip</code>. 
If I remove the service, hitting <code>staging-micro.local/</code> gives the <code>404 Not Found</code> response of the default backend. </p> <p>My expectation is that the Ingress maps the hostname <code>staging-micro.local</code> and the path <code>/</code> to the service, which is listening on port 80. The service then forwards the request on to one of the 3 selected containers on port 8000. The echoserver container is listening on port 8000, and returns an HTTP response with the request as its body. This is, of course, not what actually happens.</p> <p>Finally, the <code>cloudsql-proxy</code> container: this should not be involved at this point, but I'm including it because I wanted to validate that the service works when the sidecar container is present. Then I can swap out the <code>echoserver</code> for my main application container. I have tested with <code>echoserver</code> removed, and get the same results. </p> <p>Logs show the <code>echoserver</code> is starting up without error. </p> <p>I haven't been able to locate any more comprehensive documentation of <code>echoserver</code>, so I'm not 100% sure about the ports it's listening on. </p>
<p>My guess is that you have used the wrong target container port for <code>echoserver:1.9</code>, as it responds on port <code>8080</code> by default. Look at this <a href="https://gist.github.com/kiloreux/4be1caec29f6a7f73e73f6a21e94a75b" rel="nofollow noreferrer">example</a>.</p> <p>I have tested it in my environment and the container responds successfully on port <code>8080</code>.</p>
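<p>If that is the cause, the fix would be to point the container port and the service <code>targetPort</code> at <code>8080</code>. A sketch of only the changed fields:</p>

```yaml
# In the deployment's echoserver container spec:
ports:
- containerPort: 8080
---
# In the service spec:
ports:
- name: http
  port: 80
  targetPort: 8080
```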
<p>I am trying to get some persistent storage for a docker instance of PostgreSQL running on Kubernetes. However, the pod fails with</p> <pre><code>FATAL: data directory "/var/lib/postgresql/data" has wrong ownership HINT: The server must be started by the user that owns the data directory. </code></pre> <p>This is the NFS configuration:</p> <pre><code>% exportfs -v /srv/nfs/postgresql/postgres-registry kubehost*.example.com(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash) $ ls -ldn /srv/nfs/postgresql/postgres-registry drwxrwxrwx. 3 999 999 4096 Jul 24 15:02 /srv/nfs/postgresql/postgres-registry $ ls -ln /srv/nfs/postgresql/postgres-registry total 4 drwx------. 2 999 999 4096 Jul 25 08:36 pgdata </code></pre> <p>The full log from the pod:</p> <pre><code>2019-07-25T07:32:50.617532000Z The files belonging to this database system will be owned by user "postgres". 2019-07-25T07:32:50.618113000Z This user must also own the server process. 2019-07-25T07:32:50.619048000Z The database cluster will be initialized with locale "en_US.utf8". 2019-07-25T07:32:50.619496000Z The default database encoding has accordingly been set to "UTF8". 2019-07-25T07:32:50.619943000Z The default text search configuration will be set to "english". 2019-07-25T07:32:50.620826000Z Data page checksums are disabled. 2019-07-25T07:32:50.621697000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok 2019-07-25T07:32:50.647445000Z creating subdirectories ... ok 2019-07-25T07:32:50.765065000Z selecting default max_connections ... 20 2019-07-25T07:32:51.035710000Z selecting default shared_buffers ... 400kB 2019-07-25T07:32:51.062039000Z selecting default timezone ... Etc/UTC 2019-07-25T07:32:51.062828000Z selecting dynamic shared memory implementation ... posix 2019-07-25T07:32:51.218995000Z creating configuration files ... 
ok 2019-07-25T07:32:51.252788000Z 2019-07-25 07:32:51.251 UTC [79] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership 2019-07-25T07:32:51.253339000Z 2019-07-25 07:32:51.251 UTC [79] HINT: The server must be started by the user that owns the data directory. 2019-07-25T07:32:51.262238000Z child process exited with exit code 1 2019-07-25T07:32:51.263194000Z initdb: removing contents of data directory "/var/lib/postgresql/data" 2019-07-25T07:32:51.380205000Z running bootstrap script ... </code></pre> <p>The deployment has the following in:</p> <pre><code> securityContext: runAsUser: 999 supplementalGroups: [999,1000] fsGroup: 999 </code></pre> <p><strong><em>What am I doing wrong?</em></strong></p> <p>Edit: Added storage.yaml file:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: postgres-registry-pv-volume spec: capacity: storage: 5Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain nfs: server: 192.168.3.7 path: /srv/nfs/postgresql/postgres-registry --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-registry-pv-claim labels: app: postgres-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 5Gi </code></pre> <p>Edit: And the full deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres-registry spec: replicas: 1 template: metadata: labels: app: postgres-registry spec: securityContext: runAsUser: 999 supplementalGroups: [999,1000] fsGroup: 999 containers: - name: postgres-registry image: postgres:latest imagePullPolicy: "IfNotPresent" ports: - containerPort: 5432 env: - name: POSTGRES_DB value: postgresdb - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD value: Sekret volumeMounts: - mountPath: /var/lib/postgresql/data subPath: "pgdata" name: postgredb-registry-persistent-storage volumes: - name: postgredb-registry-persistent-storage persistentVolumeClaim: claimName: postgres-registry-pv-claim </code></pre> <p>Even 
more debugging by adding:</p> <pre><code>command: ["/bin/bash", "-c"] args: ["id -u; ls -ldn /var/lib/postgresql/data"] </code></pre> <p>Which returned:</p> <pre><code>999 drwx------. 2 99 99 4096 Jul 25 09:11 /var/lib/postgresql/data </code></pre> <p>Clearly, the UID/GID are wrong. Why?</p> <p>Even with the workaround suggested by Jakub Bujny, I get this:</p> <pre><code> 2019-07-25T09:32:08.734807000Z The files belonging to this database system will be owned by user "postgres". 2019-07-25T09:32:08.735335000Z This user must also own the server process. 2019-07-25T09:32:08.736976000Z The database cluster will be initialized with locale "en_US.utf8". 2019-07-25T09:32:08.737416000Z The default database encoding has accordingly been set to "UTF8". 2019-07-25T09:32:08.737882000Z The default text search configuration will be set to "english". 2019-07-25T09:32:08.738754000Z Data page checksums are disabled. 2019-07-25T09:32:08.739648000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok 2019-07-25T09:32:08.766606000Z creating subdirectories ... ok 2019-07-25T09:32:08.852381000Z selecting default max_connections ... 20 2019-07-25T09:32:09.119031000Z selecting default shared_buffers ... 400kB 2019-07-25T09:32:09.145069000Z selecting default timezone ... Etc/UTC 2019-07-25T09:32:09.145730000Z selecting dynamic shared memory implementation ... posix 2019-07-25T09:32:09.168161000Z creating configuration files ... ok 2019-07-25T09:32:09.200134000Z 2019-07-25 09:32:09.199 UTC [70] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership 2019-07-25T09:32:09.200715000Z 2019-07-25 09:32:09.199 UTC [70] HINT: The server must be started by the user that owns the data directory. 2019-07-25T09:32:09.208849000Z child process exited with exit code 1 2019-07-25T09:32:09.209316000Z initdb: removing contents of data directory "/var/lib/postgresql/data" 2019-07-25T09:32:09.274741000Z running bootstrap script ... 999 2019-07-25T09:32:09.278124000Z drwx------. 
2 99 99 4096 Jul 25 09:32 /var/lib/postgresql/data </code></pre>
<p>Using your setup and ensuring the nfs mount is owned by 999:999 it worked just fine. You're also missing an 's' in your <code>name: postgredb-registry-persistent-storage</code></p> <p>And with your <code>subPath: "pgdata"</code> do you need to change the <a href="https://github.com/docker-library/docs/tree/master/postgres#pgdata" rel="nofollow noreferrer">$PGDATA</a>? I didn't include the subpath for this.</p> <pre><code>$ sudo mount 172.29.0.218:/test/nfs ./nfs $ sudo su -c "ls -al ./nfs" postgres total 8 drwx------ 2 postgres postgres 4096 Jul 25 14:44 . drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 .. $ kubectl apply -f nfspv.yaml persistentvolume/postgres-registry-pv-volume created persistentvolumeclaim/postgres-registry-pv-claim created $ kubectl apply -f postgres.yaml deployment.extensions/postgres-registry created $ sudo su -c "ls -al ./nfs" postgres total 124 drwx------ 19 postgres postgres 4096 Jul 25 14:46 . drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 .. drwx------ 3 postgres postgres 4096 Jul 25 14:46 base drwx------ 2 postgres postgres 4096 Jul 25 14:46 global drwx------ 2 postgres postgres 4096 Jul 25 14:46 pg_commit_ts . . . </code></pre> <p>I noticed using <code>nfs:</code> directly in the persistent volume took significantly longer to initialize the database, whereas using <code>hostPath:</code> to the mounted nfs volume behaved normally.</p> <p>So after a few minutes:</p> <pre><code>$ kubectl logs postgres-registry-675869694-9fp52 | tail -n 3 2019-07-25 21:50:57.181 UTC [30] LOG: database system is ready to accept connections done server started $ kubectl exec -it postgres-registry-675869694-9fp52 psql psql (11.4 (Debian 11.4-1.pgdg90+1)) Type "help" for help. 
postgres=# </code></pre> <p>Checking the uid/gid</p> <pre><code>$ kubectl exec -it postgres-registry-675869694-9fp52 bash postgres@postgres-registry-675869694-9fp52:/$ whoami &amp;&amp; id -u &amp;&amp; id -g postgres 999 999 </code></pre> <p><code>nfspv.yaml</code>:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: postgres-registry-pv-volume spec: capacity: storage: 5Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain nfs: server: 172.29.0.218 path: /test/nfs --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-registry-pv-claim labels: app: postgres-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 5Gi </code></pre> <p><code>postgres.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres-registry spec: replicas: 1 template: metadata: labels: app: postgres-registry spec: securityContext: runAsUser: 999 supplementalGroups: [999,1000] fsGroup: 999 containers: - name: postgres-registry image: postgres:latest imagePullPolicy: "IfNotPresent" ports: - containerPort: 5432 env: - name: POSTGRES_DB value: postgresdb - name: POSTGRES_USER value: postgres - name: POSTGRES_PASSWORD value: Sekret volumeMounts: - mountPath: /var/lib/postgresql/data name: postgresdb-registry-persistent-storage volumes: - name: postgresdb-registry-persistent-storage persistentVolumeClaim: claimName: postgres-registry-pv-claim </code></pre>
<p>I am new to helm and I have tried to deploy a few tutorial charts. I have a couple of queries:</p> <ol> <li><p>I have a Kubernetes job which I need to deploy. Is it possible to deploy a job via helm?</p> </li> <li><p>Also, currently my Kubernetes job is deployed from my custom docker image and it runs a bash script to complete the job. I wanted to pass a few parameters to this chart/job so that the bash commands take the input parameters. That's the reason I decided to move to helm: it provides more flexibility. Is that possible?</p> </li> </ol>
<p>You can use helm. Helm installs all the Kubernetes resources (Jobs, Pods, ConfigMaps, Secrets, etc.) defined inside the templates folder. You can control the order of installation with helm hooks. Helm offers hooks such as pre-install, post-install and pre-delete with respect to the deployment. If two or more jobs use the same hook (e.g. pre-install), their hook weights are compared to decide the installation order.</p> <pre><code>|-scripts/runjob.sh
|-templates/post-install.yaml
|-Chart.yaml
|-values.yaml
</code></pre> <p>Often you need to change variables in the script per environment. So instead of hardcoding variables in the script, you can pass parameters to it by setting them as environment variables on your custom docker image, and change the values in <code>values.yaml</code> instead of changing the script.</p> <p>values.yaml</p> <pre><code>key1:
  someKey1: value1
key2:
  someKey2: value1
</code></pre> <p>post-install.yaml</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-job
  labels:
    provider: stackoverflow
    microservice: {{ template "name" . }}
    release: "{{ .Release.Name }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,pre-rollback
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        provider: stackoverflow
        microservice: {{ template "name" . }}
        release: "{{ .Release.Name }}"
        app: {{ template "fullname" . }}
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "custom-docker-image:v1"
          command: ["/bin/sh", "-c", {{ .Files.Get "scripts/runjob.sh" | quote }}]
          env:
            # KEY1 is set as an environment variable in the container;
            # its value (value1) is read from values.yaml
            - name: KEY1
              value: {{ .Values.key1.someKey1 }}
            - name: KEY2
              value: {{ .Values.key2.someKey2 }}
</code></pre> <p>runjob.sh</p> <pre><code># the variables can be read from the environment
echo $KEY1
echo $KEY2
# some stuff
</code></pre>
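<p>Before templating this into the chart, you can check the environment-variable plumbing locally; this sketch uses the names from the example above:</p>

```shell
# Simulate what the Job container does: run the script body with the
# parameters supplied as environment variables rather than hardcoded.
KEY1=value1 KEY2=value1 sh -c 'echo "$KEY1"; echo "$KEY2"'
```

<p>At install time the same values can then be overridden without editing any file, e.g. <code>helm install --set key1.someKey1=otherValue .</code></p>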
<p>I'm working on the manifest of a Kubernetes <code>job</code>.</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
        - name: hello
          image: hello-image:latest
</code></pre> <p>I then apply the manifest using <code>kubectl apply -f &lt;deployment.yaml&gt;</code> and the job runs without any issue.</p> <p>The problem comes when I change the image of the running container from <code>latest</code> to something else.</p> <p>At that point I get a <code>field is immutable</code> error when applying the manifest.</p> <p>I get the same error whether the job is running or completed. The only workaround I have found so far is to manually delete the job before applying the new manifest.</p> <p>How can I update the current job without having to delete it first?</p>
<p>You are probably using the wrong Kubernetes resource. A <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">Job</a> runs its Pod to completion and cannot be updated. As per the Kubernetes documentation:</p> <blockquote> <p>Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable. Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=false.</p> </blockquote> <p>If you intend to update the image, you should use a Deployment or a ReplicationController instead, both of which support updates.</p>
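<p>For illustration, the manifest from the question rewritten as a Deployment would look like the sketch below (the resource name and labels are illustrative). This is only appropriate if the container is a long-running process; a true run-to-completion task should stay a Job and be deleted and recreated instead:</p>

```yaml
# Sketch: the same container as a Deployment, whose pod template
# (including the image tag) can be changed with a plain `kubectl apply`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment      # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: hello-image:latest   # updating this tag triggers a rollout
```
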
<p>I'm running a pod (website) and a simple service:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: NodePort
  selector:
    app: ui
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000

$&gt; kubectl get services
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE   SELECTOR   LABELS
kubernetes   ClusterIP   10.0.0.1      &lt;none&gt;        443/TCP        83m   &lt;none&gt;     component=apiserver,provider=kubernetes
ui           NodePort    10.0.25.205   &lt;none&gt;        80:30180/TCP   53m   app=ui     &lt;none&gt;
</code></pre> <p>Because this service is of type <code>NodePort</code> it opens a port on each cluster node. In my case I'm running Kubernetes in Azure, in a single-node setup. But how do I access my service/website?</p> <pre><code>$&gt; kubectl describe service ui
Name:                     ui
Namespace:                default
Labels:                   &lt;none&gt;
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata": {"annotations":{},"name":"ui","namespace":"default"},"spec":{"ports":[{"port":80,"protocol"...
Selector:                 app=ui
Type:                     NodePort
IP:                       10.0.25.205
Port:                     &lt;unset&gt;  80/TCP
TargetPort:               3000/TCP
NodePort:                 &lt;unset&gt;  30180/TCP
Endpoints:                10.244.0.14:3000,10.244.0.15:3000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age    From                Message
  ----    ------                ----   ----                -------
  Normal  Type                  29m    service-controller  NodePort -&gt; LoadBalancer
  Normal  EnsuringLoadBalancer  29m    service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   27m    service-controller  Ensured load balancer
  Normal  Type                  10m    service-controller  LoadBalancer -&gt; NodePort
  Normal  DeletingLoadBalancer  10m    service-controller  Deleting load balancer
  Normal  DeletedLoadBalancer   9m5s   service-controller  Deleted load balancer
</code></pre> <p>I don't see an external IP.</p> <p>For example, if I change <code>NodePort</code> to <code>LoadBalancer</code> I get an external IP and I can access my website, but how can I do this with <code>NodePort</code>?</p>
<p>As far as I know, AKS is a managed service: Azure manages and exposes only the master, which controls all cluster operations. The agent nodes are not exposed and do not have external IPs by default.</p> <p>In an AKS cluster you can therefore only reach your applications through a service of type <code>LoadBalancer</code> or through an ingress (whose controller is itself exposed via a load-balancer service).</p> <p>If you really want to use <code>NodePort</code> for your service, there is a way: you can create public IPs manually and associate them with the nodes backing the service; the nodes then have external IPs and you can reach the service on <code>&lt;node-ip&gt;:&lt;node-port&gt;</code>. However, operating directly on the IaaS resources underneath AKS is not recommended, so the <code>LoadBalancer</code> type is the most appropriate way to expose a service to the Internet.</p>
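<p>For reference, switching the service from the question to the load-balancer type only changes the <code>type</code> field; AKS then provisions a public IP for <code>EXTERNAL-IP</code> automatically:</p>

```yaml
# The same `ui` service exposed through an Azure load balancer.
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  type: LoadBalancer    # was NodePort
  selector:
    app: ui
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
```
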
<p>I have the following ingress.yml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  labels:
    app: ingress
spec:
  rules:
    - host:
      http:
        paths:
          - path: /apistarter(/|$)(.*)
            backend:
              serviceName: svc-aspnetapistarter
              servicePort: 5000
          - path: //apistarter(/|$)(.*)
            backend:
              serviceName: svc-aspnetapistarter
              servicePort: 5000
</code></pre> <p>After deploying my ASP.Net Core 2.2 API application and navigating to <code>http://localhost/apistarter/</code>, the browser debugger console shows errors loading the static content and JavaScript files. In addition, navigating to <code>http://localhost/apistarter/swagger/index.html</code> results in</p> <pre><code>Fetch error
Not Found /swagger/v2/swagger.json
</code></pre> <p>I am using the SAME ingress for multiple micro-services with different path prefixes. It is running on my local Kubernetes cluster using microk8s, not on any cloud provider yet. I have checked out <a href="https://stackoverflow.com/questions/52404475/how-to-configure-an-asp-net-core-multi-microservice-application-and-azure-aks-in">How to configure an ASP.NET Core multi microservice application and Azure AKS ingress routes so that it doesn&#39;t break resources in the wwwroot folder</a> and <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.1" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-2.1</a> but neither helps.</p>
<p>Follow these steps to run your code:</p> <ol> <li><strong>ingress</strong>: remove URL rewriting from <em>ingress.yml</em></li> </ol> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  labels:
    app: ingress
spec:
  rules:
    - host:
      http:
        paths:
          - path: /apistarter # &lt;---
            backend:
              serviceName: svc-aspnetapistarter
              servicePort: 5000
</code></pre> <ol start="2"> <li><strong>deployment</strong>: pass an environment variable with the <em>path base</em> in the deployment manifest</li> </ol> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
# ..
spec:
  # ..
  template:
    # ..
    spec:
      # ..
      containers:
        - name: test01
          image: test.io/test:dev
          # ...
          env:
            # define a custom path base (it should be the same as 'path' in the Ingress)
            - name: API_PATH_BASE # &lt;---
              value: "apistarter"
</code></pre> <ol start="3"> <li><strong>program</strong>: enable loading environment params in <em>Program.cs</em></li> </ol> <pre class="lang-cs prettyprint-override"><code>var builder = new WebHostBuilder()
    .UseContentRoot(Directory.GetCurrentDirectory())
    // ..
    .ConfigureAppConfiguration((hostingContext, config) =&gt;
    {
        // ..
        config.AddEnvironmentVariables(); // &lt;---
        // ..
    })
    // ..
</code></pre> <ol start="4"> <li><strong>startup</strong>: apply <em>UsePathBaseMiddleware</em> in <em>Startup.cs</em></li> </ol> <pre class="lang-cs prettyprint-override"><code>public class Startup
{
    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    private readonly IConfiguration _configuration;

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        var pathBase = _configuration["API_PATH_BASE"]; // &lt;---
        if (!string.IsNullOrWhiteSpace(pathBase))
        {
            app.UsePathBase($"/{pathBase.TrimStart('/')}");
        }

        app.UseStaticFiles(); // &lt;-- StaticFilesMiddleware must follow UsePathBaseMiddleware
        // ..
        app.UseMvc();
    }

    // ..
}
</code></pre>
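<p>The <code>TrimStart('/')</code> in step 4 is what makes the deployment setting forgiving: <code>API_PATH_BASE</code> may be set with or without a leading slash and the path base comes out the same. The same normalization, sketched in shell for illustration:</p>

```shell
# normalize() mirrors the Startup.cs logic: strip any leading slashes,
# then prepend exactly one.
normalize() {
  printf '/%s\n' "$(printf '%s' "$1" | sed 's:^/*::')"
}

normalize "apistarter"    # -> /apistarter
normalize "/apistarter"   # -> /apistarter
```
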
<p>With a Kubernetes container running this Python script:</p> <pre><code>import time

while True:
    try:
        for i in range(10):
            if i == 0:
                raise Exception('Exception occurred!')
    except:
        pass
    time.sleep(1)
</code></pre> <p>I would like to pass the exception's message <code>'Exception occurred!'</code> down to the container so this error message could be seen with:</p> <p><code>kubectl describe pod pod_id</code></p> <p>Would it be possible?</p>
<p>Anything you <code>print()</code> will be visible in <code>kubectl logs</code>. (You may need to set an environment variable <code>PYTHONUNBUFFERED=1</code> in your pod spec.)</p> <p>Your code as you've written it will never print anything. The construct</p> <pre class="lang-py prettyprint-override"><code>try:
    ...
except:
    pass
</code></pre> <p>silently ignores any and all exceptions out of the <code>try</code> block. The bare <code>except:</code> even captures some system-level exceptions like <code>SystemExit</code> or <code>KeyboardInterrupt</code>; this is almost always wrong. Usually you want your <code>except</code> blocks to be as tightly scoped as possible, and the Python tutorial on <a href="https://docs.python.org/3/tutorial/errors.html#tut-userexceptions" rel="noreferrer">user-defined exceptions</a> shows a helpful pattern.</p> <p>(The exception to this, particularly in a Kubernetes context, is that you will often want a very broad exception handler to do something like return an HTTP 500 error to a network request, rather than crashing the application.)</p> <p>A better example might look like:</p> <pre class="lang-py prettyprint-override"><code>import math
import time

class OneException(Exception):
    pass

def iteration():
    for i in range(10):
        try:
            if i == 1:
                raise OneException("it is one")
            print(i, math.sqrt(i), math.sqrt(-i))  # will work when i==0 but fail when i==2
        except OneException as e:
            print(i, repr(e))  # and proceed to the next iteration

if __name__ == '__main__':
    while True:
        # The top-level loop.  We want a very broad catch here.
        try:
            iteration()
        except Exception as e:
            print('iteration failed', repr(e))
        time.sleep(1)
</code></pre>
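<p>The <code>PYTHONUNBUFFERED</code> variable mentioned above goes into the pod spec like this (the container and image names here are placeholders, not from the question):</p>

```yaml
# Pod spec fragment: ensure print() output is flushed immediately,
# so it shows up in `kubectl logs` without waiting for the buffer.
spec:
  containers:
    - name: app                     # placeholder name
      image: my-python-app:latest   # placeholder image
      env:
        - name: PYTHONUNBUFFERED
          value: "1"
```
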