<p>Requirement: We need to access the Kubernetes REST endpoints from our Java code. Our basic operations against the REST endpoints are to Create/Update/Delete/Get deployments.</p> <p>We have downloaded kubectl and configured the kubeconfig file of the cluster on our Linux machine. We can perform operations on that cluster using kubectl. We got the bearer token of that cluster by running the command 'kubectl get pods -v=8'. We are using this bearer token in our REST calls to perform the required operations.</p> <p>Questions:</p> <ol> <li>What is a better way to get the bearer token?</li> <li>Will the bearer token change during the lifecycle of the cluster?</li> </ol>
<p>A simple way to explore the API is through a local proxy:</p> <pre><code>kubectl proxy --port=8080 &amp;
curl http://localhost:8080/api/
</code></pre> <p>To call the API server directly (e.g. from Java code), obtain the server address and a token like this:</p> <pre><code># Check all possible clusters, as your KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'

# Select the name of the cluster you want to interact with from the above output:
export CLUSTER_NAME="some_server_name"

# Point to the API server referring to the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")

# Get the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 -d)

# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
</code></pre> <p>Regarding your second question: a service account token like the one above is read from a Secret and generally stays valid for the life of that Secret, so it does not change on its own during the cluster lifecycle; it only changes if the Secret is deleted and regenerated.</p>
<p>I'm testing the side-by-side Windows/Linux container experimental feature in Docker for Windows and all is going well. I can create Linux containers while the system is set to use Windows containers. I see my ReplicaSets, Services, Deployments, etc in the Kubernetes dashboard and all status indicators are green. The issue, though, is that my external service endpoints don't seem to resolve to anything when Docker is set to Windows container mode. The interesting thing, however, is that if I create all of my Kubernetes objects in Linux mode and then switch to Windows mode, I can still access all services and the Linux containers behind them.</p> <p>Most of my Googling took me to errors with services and Kubernetes but this doesn't seem to be suffering from any errors that I can report. Is there a configuration somewhere which must be set in order for this to work? Or is this just a hazard of running the experimental features?</p> <p>Docker Desktop 2.0.0.3</p> <p>Docker Engine 18.09.2</p> <p>Kubernetes 1.10.11</p>
<p>Just to confirm your thoughts about experimental features:</p> <blockquote> <p>Experimental features are not appropriate for production environments or workloads. They are meant to be sandbox experiments for new ideas. Some experimental features may become incorporated into upcoming stable releases, but others may be modified or pulled from subsequent Edge releases, and never released on Stable.</p> </blockquote> <p>Please consider additional steps to resolve this issue:</p> <blockquote> <p>The Kubernetes client command, kubectl, is included and configured to connect to the local Kubernetes server. If you have kubectl already installed and pointing to some other environment, such as minikube or a GKE cluster, be sure to change context so that kubectl is pointing to docker-for-desktop</p> </blockquote> <pre><code>&gt; kubectl config get-contexts
&gt; kubectl config use-context docker-for-desktop
</code></pre> <p>If you installed kubectl by another method, and experience conflicts, remove it.</p> <p>To enable <strong>Kubernetes support</strong> and install a standalone instance of Kubernetes running as a Docker container, <strong>select Enable Kubernetes and click the Apply and restart button</strong>.</p> <p><strong>By default</strong>, Kubernetes containers <strong>are hidden</strong> from commands like <strong>docker service ls</strong>, because managing them manually is not supported. To make them visible, select Show system containers (advanced) and click Apply and restart. Most users do not need this option.</p> <p>Please also verify the <a href="https://docs.docker.com/docker-for-windows/install/#what-to-know-before-you-install" rel="nofollow noreferrer">system requirements</a>.</p>
<p>I have implemented mTLS for service-to-service security using Istio 1.0.4 on Kubernetes. Is there a configuration to specify the cipher and TLS versions to use with Citadel?</p>
<p>This may not answer the specific question, but I thought it would be nice to let you know that others have asked this question. See the links below:</p> <p><a href="https://github.com/istio/istio/issues/8769" rel="nofollow noreferrer">https://github.com/istio/istio/issues/8769</a> <a href="https://github.com/istio/istio/issues/13138" rel="nofollow noreferrer">https://github.com/istio/istio/issues/13138</a></p> <p>Dealing with ingress traffic is a little different. I may be mistaken, but I think that when submitting your own private/public keys, if applicable, Istio loads those secrets and reads/applies the appropriate ciphers according to the way the secrets were created. See the links below for examples:</p> <p><a href="https://preliminary.istio.io/docs/tasks/traffic-management/secure-ingress/sds/" rel="nofollow noreferrer">https://preliminary.istio.io/docs/tasks/traffic-management/secure-ingress/sds/</a> <a href="https://preliminary.istio.io/docs/tasks/traffic-management/secure-ingress/mount/" rel="nofollow noreferrer">https://preliminary.istio.io/docs/tasks/traffic-management/secure-ingress/mount/</a></p>
<p>I have an application that starts a pod on any node of the cluster, but if that node doesn't already have the image, it downloads it the first time, which takes a long while (the image is around 1 GB and takes more than 3 minutes to download). What is the best practice to solve this kind of issue? Pre-pulling the image, or sharing the docker image via NFS?</p>
<p>Deploy a private Docker registry close to your cluster, so image pulls stay on the local network and are much faster.</p> <p><a href="https://docs.docker.com/registry/deploying/" rel="nofollow noreferrer">https://docs.docker.com/registry/deploying/</a></p>
<p>I cannot find a way to iterate over a range in Helm templating. I have the following definition in my values.yaml:</p> <pre><code>ingress:
  app1:
    port: 80
    hosts:
    - example.com
  app2:
    port: 80
    hosts:
    - demo.example.com
    - test.example.com
    - stage.example.com
  app3:
    port: 80
    hosts:
    - app3.example.com
</code></pre> <p>And I want to generate the same nginx ingress rule for each mentioned host with:</p> <pre><code>spec:
  rules:
  {{- range $key, $value := .Values.global.ingress }}
  - host: {{ $value.hosts }}
    http:
      paths:
      - path: /qapi
        backend:
          serviceName: api-server
          servicePort: 80
  {{- end }}
</code></pre> <p>But it generates the wrong hosts:</p> <pre><code>- host: [example.com]
- host: [test.example.com demo.example.com test.example.com]
</code></pre> <p>Thanks for the help!</p>
<p>I've finally got it working using:</p> <pre><code>spec: rules: {{- range $key, $value := .Values.global.ingress }} {{- range $value.hosts }} - host: {{ . }} http: paths: - path: /qapi backend: serviceName: api-server servicePort: 80 {{- end }} {{- end }} </code></pre>
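<p>For intuition, the nested-range behavior can be reproduced stand-alone with Go's <code>text/template</code>, which Helm uses under the hood (the data below is a simplified stand-in for <code>.Values.global.ingress</code>):</p>

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderHosts executes the nested-range template against a simplified
// stand-in for .Values.global.ingress.
func renderHosts(ingress map[string]map[string][]string) string {
	// The outer range walks the apps; the inner range walks each app's
	// hosts, so {{ . }} is a single host string, not the whole list.
	const tpl = `{{- range $key, $value := . }}{{- range $value.hosts }}
- host: {{ . }}{{- end }}{{- end }}`
	t := template.Must(template.New("rules").Parse(tpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, ingress); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	ingress := map[string]map[string][]string{
		"app2": {"hosts": {"demo.example.com", "test.example.com"}},
	}
	fmt.Println(renderHosts(ingress))
}
```

<p>The inner range is what turns <code>{{ . }}</code> into one host per iteration, which is why the bracketed list from the question disappears.</p>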
<p>I'm using Kubernetes that is bundled with Docker-for-Mac. I'm trying to configure an Ingress that routes http requests starting with /v1/ to my backend service and /ui/ requests to my Angular app.</p> <p>My issue seems to be that the HTTP method of the requests is changed by the ingress (NGINX) from a POST to a GET.</p> <p>I have tried various rewrite rules, but to no avail. I even switched from Docker-for-Mac to Minikube, but the result is the same.</p> <p>If I use a simple ingress with no paths (just the default backend) then the service is getting the correct HTTP method. The ingress below works:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: backend
    servicePort: 8080
</code></pre> <p>But this ingress does not:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /v1
        backend:
          serviceName: backend
          servicePort: 8080
      - path: /ui
        backend:
          serviceName: webui
          servicePort: 80
</code></pre> <p>When I debug the "backend" service I see that the HTTP Request is a GET instead of a POST.</p> <p>I read somewhere that NGINX rewrites issue a 308 (permanent) redirect and the HTTP method is changed from a POST to a GET, but if that is the case how can I configure my ingress to support different paths for different services that require POST calls?</p>
<p>I found the solution to my problem. When I add <code>host:</code> to the configuration then the http method is not changed. Here is my current ingress yaml (the rewrite and regex are used to omit sending the /v1 as part of the backend URL)</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: rules: - host: localhost http: paths: - path: /v1(/|$)(.*) backend: serviceName: gateway servicePort: 8080 </code></pre>
<p>I'm installing a kubernetes cluster on raspberry pis with hypriotOS. During the installation process, I have to install only kubeadm by using</p> <blockquote> <p>apt-get install kubeadm</p> </blockquote> <p>Can someone explain to me what kubeadm actually does? I already read about bootstrapping in the documentation, but I don't understand it exactly. I'm also wondering why I only have to install kubeadm, since the documentation says that:</p> <blockquote> <p>kubeadm will not install or manage kubelet or kubectl</p> </blockquote> <p>After the installation I can use kubectl etc. without having installed them explicitly like</p> <blockquote> <p>apt-get install kubeadm kubectl kubelet kubernetes-cni</p> </blockquote>
<p>As mentioned by <a href="https://stackoverflow.com/users/11294273/manuel-dom%C3%ADnguez">@Manuel Domínguez</a>: kubeadm is a tool to build Kubernetes clusters. It's responsible for cluster bootstrapping, and it also supports upgrades, downgrades, and managing bootstrap tokens.</p> <p>First of all, kubeadm runs a series of prechecks to ensure that the machine is ready to run Kubernetes. While bootstrapping the cluster, kubeadm downloads and installs the cluster control plane components and configures all necessary cluster resources,</p> <p>e.g.</p> <p>Control plane components like:</p> <ul> <li>kube-apiserver,</li> <li>kube-controller-manager,</li> <li>kube-scheduler,</li> <li>etcd</li> </ul> <p>Runtime components like:</p> <ul> <li>kubelet,</li> <li>kube-proxy</li> <li>container runtime</li> </ul> <p>As for why a single <code>apt-get install kubeadm</code> is enough: the kubeadm Debian package declares kubelet, kubectl and kubernetes-cni as dependencies, so apt-get pulls them in automatically. kubeadm itself still does not install or manage them, which is what the documentation means.</p> <p>You can find more information about kubeadm:</p> <ul> <li><a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">&quot;Creating a single master cluster with kubeadm&quot;</a></li> <li><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/" rel="nofollow noreferrer">&quot;Overview of kubeadm&quot;</a></li> <li><a href="https://github.com/kubernetes/kubeadm" rel="nofollow noreferrer">&quot;Github repository&quot;</a></li> </ul> <p>Hope this helps.</p>
<p>I have a Kubernetes cluster (Kubernetes 1.13, Weave Net CNI) that has no direct access to an internal company network. There is an authentication-free SOCKS5 proxy that can (only) be reached from the cluster, and which resolves and connects to resources in the internal network:</p> <p><img src="https://i.stack.imgur.com/thDnx.png" alt=""></p> <p>Consider some 3rd party Docker Images used on Pods that don't have any explicit proxy support, and just want a resolvable DNS name and target port to connect to a TCP-based service (which might be HTTP(S), but doesn't have to be).</p> <p>What kind of setup would you propose to bind the Pods and Company Network Services together? </p>
<p>Only two things come to my mind:</p> <p><strong>1)</strong> Run a SOCKS5 docker image as a sidecar: <a href="https://hub.docker.com/r/serjs/go-socks5-proxy/" rel="nofollow noreferrer">https://hub.docker.com/r/serjs/go-socks5-proxy/</a></p> <p><strong>2)</strong> Use a transparent proxy redirector on the nodes - <a href="https://github.com/darkk/redsocks" rel="nofollow noreferrer">https://github.com/darkk/redsocks</a></p>
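<p>A sketch of the sidecar idea from option <strong>1)</strong> (the app image, port and configuration are illustrative, not tested): containers in one pod share a network namespace, so the application can reach the sidecar on localhost and the sidecar forwards traffic onward toward the company network:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
  - name: app
    image: some-3rd-party-image:latest   # illustrative 3rd party image
    # the app talks to localhost:1080, no proxy support required in the image
  - name: socks-sidecar
    image: serjs/go-socks5-proxy         # from the link above
    ports:
    - containerPort: 1080
```

<p>For images with no proxy support at all, the redsocks approach from option <strong>2)</strong> is usually the better fit, since it redirects plain TCP transparently.</p>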
<p>I have a bare-metal kubernetes cluster (1.13) and am running nginx ingress controller (deployed via helm into the default namespace, v0.22.0).</p> <p>I have an ingress in a different namespace that attempts to use the nginx controller.</p> <pre><code>#ingress.yaml kind: Ingress apiVersion: extensions/v1beta1 metadata: name: myapp annotations: kubernetes.io/backend-protocol: https nginx.ingress.kubernetes.io/enable-rewrite-log: "true" nginx.ingress.kubernetes.io/rewrite-target: "/$1" spec: tls: - hosts: - my-host secretName: tls-cert rules: - host: my-host paths: - backend: servicename: my-service servicePort: https path: "/api/(.*)" </code></pre> <p>The nginx controller successfully finds the ingress, and says that there are endpoints. If I hit the endpoint, I get a 400, with no content. If I turn on <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#custom-http-errors" rel="nofollow noreferrer">custom-http-headers</a> then I get a 404 from nginx; my service is not being hit. According to <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-rewrite-log" rel="nofollow noreferrer">re-write logging</a>, the url is being re-written correctly.</p> <p>I have also hit the service directly from inside the pod, and that works as well.</p> <pre><code>#service.yaml kind: Service apiVersion: v1 metadata: name: my-service spec: ports: - name: https protocol: TCP port: 5000 targetPort: https selector: app: my-app clusterIP: &lt;redacted&gt; type: ClusterIP sessionAffinity: None </code></pre> <p>What could be going wrong?</p> <p><strong>EDIT</strong>: Disabling https all over still gives the same 400 error. However, if my app is expecting HTTPS requests, and nginx is sending HTTP requests, then the requests get to the app (but it can't processes them)</p>
<p>Nginx will silently fail with a 400 if the request headers are invalid (for example, special characters in them). You can debug this with tcpdump.</p>
<p>My module <code>abc</code> contains an instance of <code>redis-ha</code> deployed to Kubernetes via helm compliments of <a href="https://github.com/helm/charts/tree/master/stable/redis-ha" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/redis-ha</a>. I want to <code>taint</code> this resource. When I <code>terraform state list</code> I see the resource listed as:</p> <ul> <li>module.abc.module.redis.helm_release.redis-ha[3]</li> </ul> <p>My understanding from <a href="https://github.com/hashicorp/terraform/issues/11570" rel="nofollow noreferrer">https://github.com/hashicorp/terraform/issues/11570</a> is that the <code>taint</code> command pre-dates the resource naming convention shown in <code>state list</code>. As of v0.12 it will honour the same naming convention. </p> <p>I'm unfortunately not in a position to upgrade to v0.12.</p> <p>How do I go about <code>taint</code>-ing the resource <code>module.abc.module.redis.helm_release.redis-ha[3]</code> pre-v0.12? </p> <p>I'm happy to taint the entire <code>redis-ha</code> deployment.</p>
<p>In Terraform v0.11 and earlier, the <code>taint</code> command can work with that resource instance like this:</p> <pre><code>terraform taint -module=abc.redis helm_release.redis-ha.3 </code></pre> <p>From Terraform v0.12.0 onwards, that uses the standard resource address syntax:</p> <pre><code>terraform taint module.abc.module.redis.helm_release.redis-ha[3] </code></pre>
<p>I am using <code>kubectl</code> with bash completion, but I prefer to use a shorter alias for <code>kubectl</code> such as <code>ks</code>. What changes do I need to make to get bash completion working with the alias <code>ks</code>?</p>
<p>From the official docs (substituting your alias <code>ks</code> for the <code>k</code> used there):</p> <pre><code># after installing bash completion
alias ks=kubectl
complete -F __start_kubectl ks
</code></pre> <p><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#bash" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#bash</a></p>
<p>I am trying to set up Traefik on Kubernetes with Let's Encrypt enabled. I managed yesterday to retrieve the first SSL certificate from Let's Encrypt but am a little bit stuck on how to store the SSL certificates.</p> <p>I am able to create a Volume to store the Traefik certificates, but that would mean I am limited to a single replica (with multiple replicas I am unable to retrieve a certificate, since the validation usually fails because the volume is not shared).</p> <p>I read that Traefik is able to use something like Consul, but I am wondering if I have to set up/run a complete Consul cluster just to store the fetched certificates?</p>
<p>You can store the certificate in a Kubernetes secret and reference this secret in your ingress:</p> <pre><code>spec:
  tls:
  - secretName: testsecret
</code></pre> <p>The secret has to be in the same namespace the ingress is running in. See also <a href="https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress" rel="nofollow noreferrer">https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress</a></p>
<p>I have an application that starts a pod on any node of the cluster, but if that node doesn't already have the image, it downloads it the first time, which takes a long while (the image is around 1 GB and takes more than 3 minutes to download). What is the best practice to solve this kind of issue? Pre-pulling the image, or sharing the docker image via NFS?</p>
<p>Try to reduce the size of the image.<br> This can be done in a few ways depending on your project. For example, if you are using Node you could use <code>FROM node:11-alpine</code> instead of <code>FROM node:11</code> for a significantly smaller image. Also, make sure you are not putting build files inside the image. Languages like C# and Java have separate build and runtime images. For example, use <code>java-8-jdk</code> to build your project but use <code>java-8-jre</code> for your final image, as you only need the runtime.</p> <p>Good luck!</p>
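<p>As a sketch of that build/runtime split in a single Dockerfile (a multi-stage build; the image tags, paths and build command are illustrative):</p>

```dockerfile
# Build stage: full JDK to compile the project
FROM openjdk:8-jdk AS build
WORKDIR /src
COPY . .
RUN ./gradlew jar            # hypothetical build command

# Runtime stage: JRE only, so the final image stays small
FROM openjdk:8-jre
COPY --from=build /src/build/libs/app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]
```

<p>Only the last stage ends up in the final image, so the JDK and intermediate build artifacts are discarded automatically.</p>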
<p>I have an anti-affinity rule that asks kubernetes to schedule pods from the same deployment onto <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#always-co-located-in-the-same-node" rel="nofollow noreferrer">different nodes</a>, we've used it successfully for a long time.</p> <pre><code>affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - topologyKey: kubernetes.io/hostname labelSelector: matchExpressions: - key: application operator: In values: - {{ $appName }} - key: proc operator: In values: - {{ $procName }} </code></pre> <p>I'm trying to update my pod affinity rules to be a strong preference instead of a hard requirement, so that we don't need to expand our cluster if a deployment needs more replicas than there are nodes available.</p> <pre><code>affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - topologyKey: kubernetes.io/hostname weight: 100 labelSelector: matchExpressions: - key: application operator: In values: - {{ $appName }} - key: proc operator: In values: - {{ $procName }} </code></pre> <p>However, when I try applying the new rules, I get an unexpected error with the topologyKey:</p> <pre><code>Error: Deployment.apps "core--web" is invalid: [spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey: Required value: can not be empty, spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey: Invalid value: "": name part must be non-empty, spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey: Invalid value: "": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 
'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')] </code></pre> <p>The scheduler seems to be getting an empty string value for the topology key, even though all my nodes have a label for the specified key which match the regex:</p> <pre><code>$ kubectl describe nodes | grep kubernetes.io/hostname kubernetes.io/hostname=ip-10-x-x-x.ec2.internal kubernetes.io/hostname=ip-10-x-x-x.ec2.internal kubernetes.io/hostname=ip-10-x-x-x.ec2.internal kubernetes.io/hostname=ip-10-x-x-x.ec2.internal </code></pre> <p>I didn't expect to see a problem like this from a simple change from required to preferred. What did I screw up to cause the topologyKey error?</p>
<p>There's a slight difference between the syntax of required and preferred; note the reference to <code>podAffinityTerm</code> in the error message path:</p> <pre><code>spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
</code></pre> <p>The correct syntax for preferred scheduling is:</p> <pre><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: kubernetes.io/hostname
        labelSelector:
          matchExpressions:
          - key: application
            operator: In
            values:
            - {{ $appName }}
          - key: proc
            operator: In
            values:
            - {{ $procName }}
</code></pre> <p>Note that <code>weight</code> is a top-level key and a sibling of <code>podAffinityTerm</code>, which contains the <code>topologyKey</code> and <code>labelSelector</code>.</p>
<p>There is a python project in which I have dependencies defined with the help of a "requirement.txt" file. One of the dependencies is gmpy2. When I run the <code>docker build -t myimage .</code> command, it gives me the following error at the step when setup.py install is executed.</p> <pre><code>In file included from src/gmpy2.c:426:0:
src/gmpy.h:252:20: fatal error: mpfr.h: No such file or directory
#include "mpfr.h"
</code></pre> <p>Similarly, the other two errors are:</p> <pre><code>In file included from appscript_3x/ext/ae.c:14:0:
appscript_3x/ext/ae.h:26:27: fatal error: Carbon/Carbon.h: No such file or directory
#include &lt;Carbon/Carbon.h&gt;

In file included from src/buffer.cpp:12:0:
src/pyodbc.h:56:17: fatal error: sql.h: No such file or directory
#include &lt;sql.h&gt;
</code></pre> <p>Now the question is how I can define or install these system dependencies required for a successful image build. As per my understanding, gmpy2 is written in C and depends on three other C libraries: GMP, MPFR, and MPC, and the build is unable to find them.</p> <p>Following is my docker-file:</p> <pre><code>FROM python:3
COPY . .
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]
</code></pre>
<p>Install the extra system dependencies with <code>apt-get install libgmp-dev libmpfr-dev libmpc-dev</code> before the <code>RUN pip install -r requirement.txt</code> step; pip will then be able to compile gmpy2 and the image build should succeed.</p> <pre><code>FROM python:3
COPY . .
RUN apt-get update -qq &amp;&amp; \
    apt-get install -y --no-install-recommends \
    libmpc-dev \
    libgmp-dev \
    libmpfr-dev
RUN pip install -r requirement.txt
CMD [ "python", "./mike/main.py" ]
</code></pre> <p>For the other two errors: <code>sql.h</code> comes from the <code>unixodbc-dev</code> package (needed by pyodbc), while <code>Carbon/Carbon.h</code> is a macOS-only header; the appscript package cannot be built in a Linux container, so remove it from requirement.txt.</p> <p>The <code>python:3</code> image is Debian-based, so <code>apt-get</code> is available; if you use a different base image, install the equivalent packages with its package manager.</p>
<p>I'm beginning to dig into kubeflow pipelines for a project and have a beginner's question. It seems like kubeflow pipelines work well for training, but how about serving in production?</p> <p>I have a fairly intensive pre processing pipeline for training and must apply that same pipeline for production predictions. Can I use something like Seldon Serving to create an endpoint to kickoff the pre processing pipeline, apply the model, then to return the prediction? Or is the better approach to just put everything in one docker container?</p>
<p>Yes, you can definitely use Seldon for serving. In fact, Kubeflow team offers an easy way to link between training and serving: <a href="https://github.com/kubeflow/fairing" rel="nofollow noreferrer">fairing</a></p> <p>Fairing provides a programmatic way of deploying your prediction endpoint. You could also take a look at <a href="https://github.com/kubeflow/fairing/tree/master/examples/prediction" rel="nofollow noreferrer">this example</a> on how to deploy your Seldon endpoint with your training result.</p>
<p>kubernetes cluster is running on two nodes. one master , one worker ... weave net is pod network. </p> <pre><code>[root@irf-centos1 ~]# kubectl cluster-info Kubernetes master is running at https://10.8.156.184:6443 KubeDNS is running at https://10.8.156.184:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. </code></pre> <p>have deployed the rabbit docker image as container in kubernetes pod. </p> <pre><code>[root@irf-centos1 ~]# kubectl get pods NAME READY STATUS RESTARTS AGE rabbitmq-86bd97fd9d-8h444 1/1 Running 0 51m rabbitmq-86bd97fd9d-n2kgk 1/1 Running 0 51m </code></pre> <p>following are the service and deployment yaml file </p> <p><strong>deployment file</strong> </p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: rabbitmq spec: replicas: 2 template: metadata: labels: app: rabbitmqapp spec: containers: - image: "docker.io/rabbitmq:latest" imagePullPolicy: Always name: rabbitmq ports: - containerPort: 5672 name: http-port volumeMounts: - mountPath: /var/rabbitmqapp_home name: rabbitmqapp-home volumes: - emptyDir: {} name: rabbitmqapp-home </code></pre> <p><strong>service file</strong> </p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: rabbitmq namespace: default spec: ports: - port: 5672 targetPort: 5672 protocol: TCP nodePort: 31111 selector: app: rabbitmqapp type: NodePort </code></pre> <p>here are the services and deployment details </p> <pre><code>[root@irf-centos1 ~]# kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE rabbitmq 2/2 2 2 55m [root@irf-centos1 ~]# kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3d rabbitmq NodePort 10.98.204.64 &lt;none&gt; 5672:31111/TCP 55m </code></pre> <p>now, when i am trying to hit the rabbitmq dashboard/UI on the node port. 
it is not accessible </p> <pre><code>[root@irf-centos1 ~]# curl http://10.8.156.187:31111 curl: (56) Recv failure: Connection reset by peer AMQP [root@irf-centos1 ~]# </code></pre> <p>when i hit the same URL from the web browser, nothing is happening </p> <p>please suggest </p> <p>NOTE: this cluster is deployed using kubeadm on AZure VMs. for troubleshooting purpose, i have opened all inbound/outbound ports on these VMs so that , it should not be a firewall , port blocking issue. </p> <p><strong>Edit 1:</strong> </p> <p>I modified the service file as follows and redeployed the same. PSB</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: rabbitmq namespace: default spec: ports: - name: ui protocol: TCP port: 15672 targetPort: 15672 nodePort: 31112 - name: service port: 5672 targetPort: 5672 protocol: TCP nodePort: 31111 selector: app: rabbitmq type: NodePort </code></pre> <p>still getting the same error </p> <pre><code>[root@irf-centos1 ~]# curl -I http://guest:guest@10.8.156.187:31111/api/users curl: (56) Recv failure: Connection reset by peer AMQP [root@irf-centos1 ~]# curl -I http://guest:guest@10.8.156.187:31112/api/users curl: (7) Failed connect to 10.8.156.187:31112; Connection refused </code></pre>
<p>The RabbitMQ dashboard/UI listens on port <code>15672</code>, so the service needs to expose that port as well, and each entry of a multi-port service must have a <code>name</code>:</p> <pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  namespace: default
spec:
  ports:
  - name: amqp
    port: 5672
    targetPort: 5672
    protocol: TCP
    nodePort: 31111
  - name: ui
    port: 15672
    targetPort: 15672
    protocol: TCP
    nodePort: 31112
  selector:
    app: rabbitmqapp
  type: NodePort
</code></pre> <p>Two more problems to fix. First, your edited service uses the selector <code>app: rabbitmq</code>, but the pods are labeled <code>app: rabbitmqapp</code>, so that service matches no pods; the selector must equal the pod labels, as above. Second, the plain <code>rabbitmq:latest</code> image does not enable the management plugin, so nothing listens on 15672 inside the container; use the <code>rabbitmq:management</code> image tag in your deployment.</p> <p>Then access the dashboard/UI and create a user for the application, or use RabbitMQ's default <code>guest</code>/<code>guest</code> credentials:</p> <pre><code>curl -I http://guest:guest@10.8.156.187:31112/api/users
</code></pre> <p>In the deployment, also declare the UI container port (note that each port is its own list item):</p> <pre><code>    image: "docker.io/rabbitmq:management"
    imagePullPolicy: Always
    name: rabbitmq
    ports:
    - containerPort: 5672
      name: http-port
    - containerPort: 15672
      name: ui-port
</code></pre>
<p>In the kubernetes client-go API (or another library that uses it), is there a utility function to convert a <code>k8s.io/apimachinery/pkg/apis/meta/v1/LabelSelector</code> to a string to fill the field <code>LabelSelector</code> in <code>k8s.io/apimachinery/pkg/apis/meta/v1/ListOptions</code>?</p> <p>I dug through the code of <code>client-go</code> but I can't find a function like that.</p> <p>Neither <code>LabelSelector.Marshall()</code> nor <code>LabelSelector.String()</code> gives me that (unsurprisingly, because that's not their purpose, but I tried anyway).</p> <h3>Background</h3> <p>I have spec descriptions like <code>k8s.io/api/extensions/v1beta1/Deployment</code>, and want to use its set of selector labels (i.e. the <code>Selector</code> field) to query its pods using</p> <pre><code>options := metav1.ListOptions{
    LabelSelector: &lt;stringified labels&gt;,
}
podList, err := clientset.CoreV1().Pods(&lt;namespace&gt;).List(options)
</code></pre>
<p>You can use the <code>LabelSelectorAsMap(LabelSelector)</code> function to convert the label selector into a <code>map[string]string</code> (it returns an error for selectors that cannot be expressed as a plain map).</p> <p>Then use the <code>SelectorFromSet</code> function of package <code>k8s.io/apimachinery/pkg/labels</code> to convert the map to a <code>Selector</code>, whose <code>String()</code> method gives you the string for <code>ListOptions</code>. (For selectors that also use <code>matchExpressions</code>, <code>metav1.LabelSelectorAsSelector</code> does the whole conversion in one step.)</p> <p>Sketch:</p> <pre><code>import (
    "k8s.io/apimachinery/pkg/labels"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func ListPods(labelSelector metav1.LabelSelector) {
    labelMap, err := metav1.LabelSelectorAsMap(&amp;labelSelector)
    if err != nil {
        // handle selectors that cannot be expressed as a plain map
    }
    options := metav1.ListOptions{
        LabelSelector: labels.SelectorFromSet(labelMap).String(),
    }
    podList, err := clientset.CoreV1().Pods("&lt;namespace&gt;").List(options)
}
</code></pre>
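<p>For intuition, what <code>labels.SelectorFromSet(...).String()</code> produces for a plain label map is just sorted, comma-separated <code>key=value</code> pairs. A simplified stand-alone sketch of that rendering (an illustration only; use the real apimachinery helpers in your code):</p>

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// selectorString mimics what labels.SelectorFromSet(...).String() produces
// for a plain matchLabels map: sorted, comma-separated key=value pairs.
// Simplified illustration; it does no validation of keys or values.
func selectorString(matchLabels map[string]string) string {
	keys := make([]string, 0, len(matchLabels))
	for k := range matchLabels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order, like the real implementation
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, k+"="+matchLabels[k])
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(selectorString(map[string]string{"app": "nginx", "tier": "frontend"}))
}
```

<p>The resulting string is exactly the format <code>ListOptions.LabelSelector</code> expects.</p>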
<p>I've built an npm React app that connects to a REST backend using a given URL. To run the app on Kubernetes, I've put the production build into an nginx container. The app starts nicely, but I want to make the backend URL configurable without having to rebuild the container image every time. I don't know how to do that or where to search; any help would be appreciated.</p>
<p>You have several methods to achieve your objective:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">Use environment variables</a></li> </ul> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
  - name: envar-demo-container
    image: my_image:my_version
    env:
    - name: BACKEND_URL
      value: "http://my_backend_url"
</code></pre> <ul> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Use a ConfigMap as a config file for your service</a></li> <li>If the service is external, you can use a fixed name and register it as a local Kubernetes service: <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services</a></li> </ul> <p>Regards.</p>
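<p>Since a React build is static files, the browser cannot see container environment variables directly; a minimal sketch of the ConfigMap approach instead (all names and the mount path are assumptions): the browser loads <code>config.js</code> at runtime, so the URL can change without rebuilding the image:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
data:
  config.js: |
    window.BACKEND_URL = "http://my_backend_url";
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: nginx
    image: my_image:my_version
    volumeMounts:
    - name: config
      mountPath: /usr/share/nginx/html/config.js
      subPath: config.js
  volumes:
  - name: config
    configMap:
      name: frontend-config
```

<p>The app then includes <code>&lt;script src="/config.js"&gt;&lt;/script&gt;</code> in index.html and reads <code>window.BACKEND_URL</code> instead of a compiled-in constant.</p>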
<p>Is it possible to select a service/pod via its label from Ingress (to direct the traffic to)?</p> <p>Let's say I have 2 similar pods/services with different labels, but I want to direct the traffic to only one of them</p> <p>I'm looking for something similar to this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: Ingress-name labels: owner: me selector: matchLabels: podlabel: pod-label spec: rules: - host: ${INGRESS_HOST} http: paths: - path: /api backend: serviceName: &lt;something&gt; servicePort: &lt;something&gt; </code></pre> <p>how should I support this part:</p> <pre><code>selector: matchLabels: podlabel: pod-label </code></pre>
<p>If you want to select a service name from ingress then you can use </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress spec: rules: - host: ${INGRESS_HOST} http: paths: - path: /api backend: serviceName: &lt;service name&gt; servicePort: &lt;service port&gt; </code></pre> <p>If you want to manage which pod the traffic is routed to, you can achieve this at the service layer.</p> <p>If you want blue/green deployments etc., you can manage and divert traffic to a particular set of pods from the service alone.</p> <p>So the ingress will keep pointing to the same service, but the service will change which pods it points to. </p> <p>Check this blue-green deployment: <a href="https://www.ianlewis.org/en/bluegreen-deployments-kubernetes" rel="nofollow noreferrer">https://www.ianlewis.org/en/bluegreen-deployments-kubernetes</a> to find out how the service manages traffic routes based on the labels blue and green.</p>
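To make the service-layer point concrete, here is a minimal sketch (label names are my own) of a Service whose selector pins traffic to one of two otherwise identical sets of pods, while the ingress keeps pointing at the Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # the ingress backend stays fixed on this name
spec:
  selector:
    app: myapp
    version: blue           # change to "green" to shift all traffic
  ports:
  - port: 80
    targetPort: 8080
```

Flipping the selector, e.g. `kubectl patch service my-service -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'`, moves all traffic between the two pod sets without touching the ingress.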
<p>I am using Cloud Build and a GKE Kubernetes cluster, and I have set up CI/CD from GitHub to Cloud Build.</p> <p>I want to know: is it good to add the CI build file and Dockerfile to the repository, or should the config files be managed separately in another repository? </p> <p>Is it good to keep the CI &amp; k8s config files in the same repository as the business logic?</p> <p>What is the best way to implement CI/CD from Cloud Build to GKE while managing the CI/k8s YAML files?</p>
<p>Yes, you can add deployment directives, typically in a dedicated folder of your project, which can in turn use a CI/CD repository.</p> <p>See "<a href="https://github.com/kelseyhightower/pipeline-application" rel="nofollow noreferrer"><code>kelseyhightower/pipeline-application</code></a>" as an example, where:</p> <blockquote> <p>Changes pushed to any branch except master should trigger the following actions:</p> <ul> <li>build a container image tagged with the build ID suitable for deploying to a staging cluster</li> <li>clone the <a href="https://github.com/kelseyhightower/pipeline-infrastructure-staging" rel="nofollow noreferrer">pipeline-infrastructure-staging repo</a></li> <li>patch the pipeline deployment configuration file with the staging container image and commit the changes to the pipeline-infrastructure-staging repo</li> </ul> <p>The pipeline-infrastructure-staging repo will deploy any updates committed to the master branch.</p> </blockquote>
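The "patch the pipeline deployment configuration file" step above can be sketched in a few lines of shell; the manifest layout, project name, and tag here are invented for illustration, and a real pipeline would follow the `sed` with a `git commit` and `git push`:

```shell
# Illustrative CI step: rewrite the image tag in a deployment manifest
# to the current build ID before committing it to the infra repo.
BUILD_ID="b1234"

cat > deployment.yaml <<'EOF'
spec:
  containers:
  - name: app
    image: gcr.io/my-project/app:previous
EOF

# Point the manifest at the image built for this commit.
sed -i.bak "s|\(image: gcr.io/my-project/app:\).*|\1${BUILD_ID}|" deployment.yaml

cat deployment.yaml
```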
<p>So on GKE I have a Node.js <code>app</code> which for each pod uses about: <code>CPU(cores): 5m, MEMORY: 100Mi</code></p> <p>However I am only able to deploy 1 pod of it per node. I am using the GKE <code>n1-standard-1</code> cluster which has <code>1 vCPU, 3.75 GB</code> per node.</p> <p>So in order to get 2 pods of <code>app</code> up total = <code>CPU(cores): 10m, MEMORY: 200Mi</code>, it requires another entire +1 node = 2 nodes = <code>2 vCPU, 7.5 GB</code> to make it work. If I try to deploy those 2 pods on the same single node, I get <code>insufficient CPU</code> error.</p> <p>I have a feeling I should actually be able to run a handful of pod replicas (like 3 replicas and more) on 1 node of <code>f1-micro</code> (1 vCPU, 0.6 GB) or <code>f1-small</code> (1 vCPU, 1.7 GB), and that I am way overprovisioned here, and wasting my money.</p> <p>But I am not sure why I seem so restricted by <code>insufficient CPU</code>. Is there some config I need to change? Any guidance would be appreciated.</p> <hr> <pre><code>Allocatable: cpu: 940m ephemeral-storage: 47093746742 hugepages-2Mi: 0 memory: 2702216Ki pods: 110 Non-terminated Pods: (7 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- default mission-worker-5cf6654687-fwmk4 100m (10%) 0 (0%) 0 (0%) 0 (0%) default mission-worker-5cf6654687-lnwkt 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system fluentd-gcp-v3.1.1-5b6km 100m (10%) 1 (106%) 200Mi (7%) 500Mi (18%) kube-system kube-dns-76dbb796c5-jgljr 260m (27%) 0 (0%) 110Mi (4%) 170Mi (6%) kube-system kube-proxy-gke-test-cluster-pool-1-96c6d8b2-m15p 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system metadata-agent-nb4dp 40m (4%) 0 (0%) 50Mi (1%) 0 (0%) kube-system prometheus-to-sd-gwlkv 1m (0%) 3m (0%) 20Mi (0%) 20Mi (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 701m (74%) 1003m (106%) memory 380Mi (14%) 690Mi (26%) Events: &lt;none&gt; </code></pre>
<p>After the deployment, check the node capacities with <code>kubectl describe nodes</code>. For example, in the output at the bottom of this answer:</p> <p><strong>Allocatable cpu: 1800m</strong></p> <p><strong>Already used by pods in the kube-system namespace: 100m + 260m + 100m + 200m + 20m = 680m</strong></p> <p><strong>Which means 1800m - 680m = 1120m is left for you to use</strong></p> <p><strong>So, if your pod or pods request more than 1120m CPU, they will not fit on this node</strong></p> <blockquote> <p>So in order to get 2 pods of app up total = CPU(cores): 10m, MEMORY: 200Mi, it requires another entire +1 node = 2 nodes = 2 vCPU, 7.5 GB to make it work. If I try to deploy those 2 pods on the same single node, I get insufficient CPU error.</p> </blockquote> <p>If you do the exercise described above, you will find your answer. In case there is enough CPU for your pods to use and you are still getting the insufficient CPU error, check if you are setting the CPU request and limit params correctly. See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit" rel="nofollow noreferrer">here</a>.</p> <p><strong>If you do all the above and it's still an issue, then what could be happening in your case is that you are allocating 5-10m CPU for your Node app, which is too little. Try increasing it to, say, 50m CPU.</strong></p> <blockquote> <p>I have a feeling I should actually be able to run a handful of pod replicas (like 3 replicas and more) on 1 node of f1-micro (1 vCPU, 0.6 GB) or f1-small (1 vCPU, 1.7 GB), and that I am way overprovisioned here, and wasting my money.</p> </blockquote> <p>Again, do the exercise described above to <em>conclude</em> that:</p> <pre><code>Name: e2e-test-minion-group-4lw4 [ ... lines removed for clarity ...] Capacity: cpu: 2 memory: 7679792Ki pods: 110 Allocatable: cpu: 1800m memory: 7474992Ki pods: 110 [ ... lines removed for clarity ...] 
Non-terminated Pods: (5 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- kube-system fluentd-gcp-v1.38-28bv1 100m (5%) 0 (0%) 200Mi (2%) 200Mi (2%) kube-system kube-dns-3297075139-61lj3 260m (13%) 0 (0%) 100Mi (1%) 170Mi (2%) kube-system kube-proxy-e2e-test-... 100m (5%) 0 (0%) 0 (0%) 0 (0%) kube-system monitoring-influxdb-grafana-v4-z1m12 200m (10%) 200m (10%) 600Mi (8%) 600Mi (8%) kube-system node-problem-detector-v0.1-fj7m3 20m (1%) 200m (10%) 20Mi (0%) 100Mi (1%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 680m (34%) 400m (20%) 920Mi (12%) 1070Mi (14%) </code></pre>
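The bookkeeping in this answer can be automated: sum the "CPU Requests" column of the pod table that `kubectl describe node` prints, then subtract from the allocatable figure. Since that is just text processing, the sketch below runs against a pasted sample of the table from the answer's own node dump:

```shell
# Sum the CPU requests (millicores) of the pods already on the node.
cat > pods.txt <<'EOF'
kube-system  fluentd-gcp-v1.38-28bv1               100m (5%)
kube-system  kube-dns-3297075139-61lj3             260m (13%)
kube-system  kube-proxy-e2e-test                   100m (5%)
kube-system  monitoring-influxdb-grafana-v4-z1m12  200m (10%)
kube-system  node-problem-detector-v0.1-fj7m3      20m (1%)
EOF

ALLOCATABLE=1800   # from the "Allocatable: cpu: 1800m" line
USED=$(awk '{gsub(/m/, "", $3); sum += $3} END {print sum}' pods.txt)
echo "used: ${USED}m, free: $((ALLOCATABLE - USED))m"   # used: 680m, free: 1120m
```

Anything your own pods request beyond that free figure will be rejected with the insufficient CPU error.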
<p>I have an application that will start one pod on any node of the cluster, but if that node doesn't have the image, it will download it the first time, which takes a lot of time (the image is around 1 GB and takes more than 3 minutes to download). What is the best practice to solve this kind of issue? Pre-pull the image, or share the docker image via NFS?</p>
<p>In continuation of @anskurtis-streutker's answer, this page explains in detail how to create smaller images.</p> <p><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-how-and-why-to-build-small-container-images" rel="nofollow noreferrer">Build smaller images - Kubernetes best practices</a></p>
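For the pre-pull option from the question, a common trick is a DaemonSet that runs the large image with a no-op command, so every node (including ones added later) pulls it ahead of the real workload. This is only a sketch; the image name is a placeholder, and the `sleep infinity` assumes the image contains a shell:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prepull-big-image
spec:
  selector:
    matchLabels:
      name: prepull-big-image
  template:
    metadata:
      labels:
        name: prepull-big-image
    spec:
      containers:
      - name: prepull
        image: registry.example.com/my-app:1.0   # the ~1 GB image
        command: ["sh", "-c", "sleep infinity"]  # do no work, just hold the image
```

Combined with `imagePullPolicy: IfNotPresent` on the real deployment, the scheduled pod then starts from the node's local image cache instead of pulling.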
<p>I enabled the ingress addon for my local minikube and created a custom pod, service and ingress. After that I added the ingress host to my hosts file where it points to the minikube ip so that I can reach it with my browser. When accessing the url, the browser is telling me that because of an "HTTP Strict Transport Security (HSTS)" header I was redirected to the https version of that url. Firefox and chrome do not allow me to continue because they say that the certificate (Kubernetes Ingress Controller Fake Certificate) is only valid for the url "ingress.local". With IE11 I can accept the risk and actually continue. What can I do to prevent getting redirected to https? I don't want https, only http. Here are the yaml files of my service and my ingress.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: foobar-ingress annotations: nginx.ingress.kubernetes.io/ssl-redirect: "false" kubernetes.io/ingress.class: "nginx" spec: rules: - host: foobar.app http: paths: - path: / backend: serviceName: foobar-frontend servicePort: 80 --- apiVersion: v1 kind: Service metadata: name: foobar-frontend spec: ports: - port: 80 targetPort: 8080 selector: app: foobar-frontend </code></pre>
<p>It seems I already had it right. The browser was just brutally caching the HSTS header which is why I always got redirected. Even cleaning the cache did not work. So I had to change my ingress to another name and then it worked.</p>
<p>How the OR expression can be used with selectors and labels?</p> <pre><code> selector: app: myapp tier: frontend </code></pre> <p>The above matches pods where labels <code>app==myapp</code> <strong>AND</strong> <code>tier=frontend</code>. </p> <p>But the OR expression can be used?</p> <p><code>app==myapp</code> <strong>OR</strong> <code>tier=frontend</code>?</p>
<p>Now you can do that :</p> <pre><code>kubectl get pods -l 'environment in (production, qa)' </code></pre> <p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering" rel="noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering</a></p>
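The same set-based syntax works inside manifests that accept `matchExpressions` (Deployments, ReplicaSets, and so on). Note that this expresses OR across the values of a single key; an OR across different keys, like `app==myapp OR tier==frontend`, cannot be written as one selector, because multiple requirements are always ANDed:

```yaml
selector:
  matchExpressions:
  - key: environment
    operator: In
    values:
    - production
    - qa          # environment == production OR environment == qa
```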
<p>Suppose if I have 3 node Kafka cluster setup. Then how do I expose it outside a cloud using Load Balancer service? I have read reference material but have a few doubts.</p> <p>Say for example below is a service for a broker</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka-0 annotations: dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com spec: externalTrafficPolicy: Local type: LoadBalancer ports: - port: 9092 name: outside targetPort: 9092 selector: app: kafka kafka-pod-id: "0" </code></pre> <ol> <li>What is port and targetPort?</li> <li>Do I setup LoadBalancer service for each of the brokers?</li> <li>Do these multiple brokers get mapped to single public IP address of cloud LB?</li> <li>How does a service outside k8s/cloud access individual broker? By using <code>public-ip:port</code>? or by using <code>kafka-&lt;pod-id&gt;.kafka.my.company.com:port</code>?. Also which port is used here? <code>port</code> or <code>targetPort</code>?</li> <li>How do I specify this configuration in Kafka broker's <code>Advertised.listeners</code> property? As port can be different for services inside k8s cluster and outside it.</li> </ol> <p>Please help.</p>
<p>Based on the information you provided, I will try to give you some answers and some advice.</p> <p><strong>1)</strong> <code>port:</code> is the port number which makes a service visible to other services running within the same K8s cluster. In other words, if a service wants to invoke another service running within the same Kubernetes cluster, it will be able to do so using the port specified against <code>port</code> in the service spec file.</p> <p><code>targetPort:</code> is the port on the <code>POD</code> where the service is running. Your application needs to be listening for network requests on this port for the service to work.</p> <p><strong>2/3)</strong> Each broker should be exposed as a <code>LoadBalancer</code> and configured as a headless service for internal communication. There should be one additional <code>LoadBalancer</code> with an external IP for external connections. </p> <p>Example of a Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka-0 annotations: dns.alpha.kubernetes.io/external: kafka-0.kafka.my.company.com spec: ports: - port: 9092 name: kafka-port protocol: TCP selector: pod-name: kafka-0 type: LoadBalancer </code></pre> <p><strong>4)</strong> You have to use <code>kafka-&lt;pod-id&gt;.kafka.my.company.com:port</code></p> <p><strong>5)</strong> It should be set to the external address so that clients can connect to it. 
<a href="https://rmoff.net/2018/08/02/kafka-listeners-explained/" rel="nofollow noreferrer">This</a> article might help with understanding.</p> <p>Similar case was on Github, it might help you also - <a href="https://github.com/kow3ns/kubernetes-kafka/issues/3" rel="nofollow noreferrer">https://github.com/kow3ns/kubernetes-kafka/issues/3</a></p> <p>In addition, You could also think about Ingress - <a href="https://tothepoint.group/blog/accessing-kafka-on-google-kubernetes-engine-from-the-outside-world/" rel="nofollow noreferrer">https://tothepoint.group/blog/accessing-kafka-on-google-kubernetes-engine-from-the-outside-world/</a></p>
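For point 5, the usual pattern from the linked article is two listeners per broker, so in-cluster clients use the headless service DNS name while external clients get the LoadBalancer hostname. A broker config sketch for `kafka-0` (the host names and port numbers here are placeholders, not taken from your setup):

```properties
# server.properties fragment for broker kafka-0 (illustrative values)
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://kafka-0.kafka-headless.default.svc.cluster.local:9092,EXTERNAL://kafka-0.kafka.my.company.com:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

Internal and external traffic then stay on separate ports, which answers the "which port is used where" part of the question: clients use whichever advertised address and port matches how they reach the cluster.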
<p>For example, a deployment yaml file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: guestbook spec: replicas: 2 template: metadata: labels: app: guestbook spec: container: - name: guestbook image: {{Here want to read value from config file outside}} </code></pre> <p>There is a <code>ConfigMap</code> feature with Kubernetes, but that's also write the key/value to the yaml file. Is there a way to set the key to environment variables?</p>
<p>You can also use <a href="https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html" rel="noreferrer"><code>envsubst</code></a> when deploying.</p> <p>e.g.</p> <pre><code>cat app/deployment.yaml | envsubst | kubectl apply ... </code></pre> <p>It will replace all variables in the file with their values. We are successfully using this approach on our CI when deploying to multiple environments, also to inject the <code>CI_TAG</code> etc into the deployments.</p>
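If `envsubst` (from gettext) is not available on the CI image, plain `sed` covers the common case of an explicit variable list. A minimal, self-contained sketch of the same idea, with invented file names:

```shell
# Substitute ${IMAGE_TAG} in a manifest template, envsubst-style.
export IMAGE_TAG="v1.2.3"

# Quoted heredoc keeps ${IMAGE_TAG} literal in the template.
cat > deployment.yaml.tpl <<'EOF'
image: registry.example.com/app:${IMAGE_TAG}
EOF

# [$] matches a literal dollar sign without tripping shell expansion.
sed "s|[$]{IMAGE_TAG}|${IMAGE_TAG}|g" deployment.yaml.tpl > deployment.yaml
cat deployment.yaml   # image: registry.example.com/app:v1.2.3
```

The rendered file can then be piped straight into `kubectl apply -f -`, exactly as in the `envsubst` one-liner above.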
<p>We have a Spring Boot (2.0.4) application exposing a number of endpoints, one of which enables clients to retrieve sometimes very large files (~200 GB). The application is exposed in a Pod via a Kubernetes deployment configured with the rolling-update strategy. </p> <p>When we update our deployment by setting the image to the latest version the pods get destroyed and new ones spun up. Our service provision is seamless for new requests. However current requests can and do get severed and this can be annoying for clients in the middle of downloading very large files.</p> <p>We can configure Container Lifecycle Pre-Stop hooks in our deployment spec to inject a pause before sending shutdown signals to the app via it's PID. This helps prevent any new traffic going to pods which have been set to Terminate. Is there a way to then pause the application shutdown process until all current requests have been completed (this may take tens of minutes)?</p> <p>Here's what we have tried from within the Spring Boot application:</p> <ul> <li><p>Implementing a shutdown listener which intercepts <code>ContextCloseEvents</code>; unfortunately we can't reliably retrieve a list of active requests. Any Actuator metrics which may have been useful are unavailable at this stage of the shutdown process. </p></li> <li><p>Count active sessions by implementing a <code>HttpSessionListener</code> and overriding <code>sessionCreated/Destroy</code> methods to update a counter. This fails because the methods are not invoked on a separate thread so always report the same value in the shutdown listener.</p></li> </ul> <p>Any other strategy we should try? From within the app itself, or the container, or directly through Kubernetes resource descriptors? Advice/Help/Pointers would be much appreciated.</p> <p>Edit: We manage the cluster so we're only trying to mitigate service outages to currently connected clients during a <em>managed update of our deployment via a modified pod spec</em> </p>
<p>We did a combination of the above to resolve our problem. </p> <ul> <li>increased the terminationGracePeriodSeconds to the absolute maximum we expect to see in production</li> <li>added livenessProbe to prevent Traefik routing to our pod too soon</li> <li>introduced a pre-stop hook injecting a pause and invoking a monitoring script: <ol> <li>Monitored netstat for ESTABLISHED connections to our process (pid 1) with a Foreign Address of our cluster Traefik service </li> <li>sent TERM to pid 1</li> </ol></li> </ul> <p>Note that because we send TERM to pid 1 from the monitoring script the pod will terminate at this point and the terminationGracePeriodSeconds never gets hit (it's there as a precaution)</p> <p>Here's the script:</p> <pre><code>#!/bin/sh while [ "$(/bin/netstat -ap 2&gt;/dev/null | /bin/grep http-alt.*ESTABLISHED.*1/java | grep -c traefik-ingress-service)" -gt 0 ] do sleep 1 done kill -TERM 1 </code></pre> <p>Here's the new pod spec:</p> <pre><code>containers: - env: - name: spring_profiles_active value: dev image: container.registry.host/project/app:@@version@@ imagePullPolicy: Always lifecycle: preStop: exec: command: - /bin/sh - -c - sleep 5 &amp;&amp; /monitoring.sh livenessProbe: httpGet: path: /actuator/health port: 8080 initialDelaySeconds: 60 periodSeconds: 20 timeoutSeconds: 3 name: app ports: - containerPort: 8080 readinessProbe: httpGet: path: /actuator/health port: 8080 initialDelaySeconds: 60 resources: limits: cpu: 2 memory: 2Gi requests: cpu: 2 memory: 2Gi imagePullSecrets: - name: app-secret serviceAccountName: vault-auth terminationGracePeriodSeconds: 86400 </code></pre>
<p>I was following this <a href="https://towardsdatascience.com/kubernetesexecutor-for-airflow-e2155e0f909c" rel="noreferrer">blog post</a> and at this command, </p> <p><code>helm upgrade --install airflow airflow/ \ --namespace airflow \ --values values.yaml</code></p> <p>I got this error. <code>in airflow: chart metadata (Chart.yaml) missing</code> but I actually have the Chart.yaml file under <code>airflow/</code>. </p> <pre class="lang-sh prettyprint-override"><code>$ ls Chart.yaml charts requirements.yaml tiller.yaml Icon? requirements.lock templates values.yaml </code></pre> <p>helm version &amp; kubectl pod below</p> <pre><code>$ helm version Client: &amp;version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"} </code></pre> <pre><code>$ kubectl get pods --namespace kube-system NAME READY STATUS RESTARTS AGE coredns-fb8b8dccf-9z8v5 1/1 Running 3 6h coredns-fb8b8dccf-wdtpl 1/1 Running 3 6h etcd-minikube 1/1 Running 1 6h kube-addon-manager-minikube 1/1 Running 1 6h kube-apiserver-minikube 1/1 Running 1 6h kube-controller-manager-minikube 1/1 Running 1 6h kube-proxy-m4whq 1/1 Running 0 2h kube-scheduler-minikube 1/1 Running 1 6h kubernetes-dashboard-79dd6bfc48-5z9cx 1/1 Running 3 6h storage-provisioner 1/1 Running 3 6h tiller-deploy-8458f6c667-wmv62 1/1 Running 1 4h </code></pre> <p>Could someone help to fix it?</p>
<p>I also had this error with Airflow. I noticed that you have to respect the folder structure Helm expects: <code>Chart.yaml</code> must sit at the chart root with the <code>templates</code> directory directly beside it, while the <code>charts</code> directory is reserved for sub-chart dependencies. This layout will fail:</p> <pre><code>./charts ./charts/airflow ./charts/templates ---&gt; will produce the error </code></pre> <p>While this folder structure will work:</p> <pre><code>./charts ./charts/airflow ./templates --&gt; will work </code></pre>
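For reference, the canonical layout that `helm create` generates puts `Chart.yaml` at the chart root, with `templates/` beside it and `charts/` reserved for packaged dependencies (the manifest file names under `templates/` are just typical examples):

```
airflow/
  Chart.yaml
  values.yaml
  requirements.yaml   # helm v2 dependency list
  charts/             # sub-chart archives only, no templates here
  templates/
    deployment.yaml
    service.yaml
```

`helm upgrade --install airflow airflow/ ...` then resolves `Chart.yaml` relative to that directory, which is exactly what the "chart metadata (Chart.yaml) missing" error complains about when the layout is nested differently.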
<p>I am currently learning Kubernetes and I'm stuck on how to handle the following situation:</p> <p>I have a Spring Boot application which handles files(photos, pdf, etc...) uploaded by users, users can also download these files. This application also produces logs which are spread into 6 different files. To make my life easier I decided to have a root directory containing 2 subdirectories(1 directory for users data and 1 for logs) so the application works only with 1 directory(<code>appData</code>)</p> <pre><code>.appData |__ usersData |__ logsFile </code></pre> <p>I would like to use GKE (Google Kubernetes Engine) to deploy this application but I have these problems: </p> <ul> <li><strong>How to handle multiple replicas which will read/write concurrently data + logs in the <code>appData</code> directory?</strong> </li> <li>Regarding logs, is it possible to have multiple Pods writing to the same file?</li> <li>Say we have 3 replicas (Pod-A, Pod-B and Pod-C), if user A uploads a file handled by Pod-B, how Pod-A and Pod-C will discover this file if the same user requests later its file?</li> <li>Should each replica have its own volume? (I would like to avoid this situation, which seems the case when using StatefulSet)</li> <li>Should I have only one replica? (using Kubernetes will be useless in that case)</li> </ul> <p>Same questions about database's replicas. I use <code>PostgreSQL</code> and I have the same questions. If we have multiple replicas, as requests are randomly send to replicas, how to be sure that requesting data will return a result?</p> <p>I know there a lot of questions. Thanks a lot for your clarifications.</p>
<p>You can use persistent volume using NFS in GKE (Google Kubernetes Engine) to share files across pods. <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">https://cloud.google.com/filestore/docs/accessing-fileshares</a></p>
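A hedged sketch of what that looks like in manifests: a `ReadWriteMany` PersistentVolume pointing at the Filestore/NFS export (the server IP, path, and sizes below are placeholders), plus a claim that every replica of the Deployment can mount at the same time:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: appdata-nfs
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany            # many pods may mount it read-write
  nfs:
    server: 10.0.0.2         # Filestore instance IP (placeholder)
    path: /share/appData
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: appdata-nfs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
```

With this, Pod-A, Pod-B and Pod-C all see the same `appData` directory, so a file uploaded through one replica is visible to the others.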
<p>I don't mean being able to route to a specific port, I mean to actually change the port the ingress listens on.</p> <p>Is this possible? How? Where is this documented?</p>
<p>No. From the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="noreferrer">kubernetes documentation</a>:</p> <blockquote> <p>An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.</p> </blockquote> <p>It may be possible to customize a LoadBalancer on a cloud provider like AWS to listen on other ports.</p>
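For completeness, the `Service.Type=LoadBalancer` route mentioned in the quote looks like this (names and ports are illustrative); the cloud provider's load balancer then listens on the arbitrary port, bypassing the ingress entirely:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
spec:
  type: LoadBalancer
  ports:
  - port: 9000        # port the cloud load balancer listens on
    targetPort: 8080  # port the pod listens on
    protocol: TCP
  selector:
    app: my-app
```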
<p>I am running a Kubeflow pipeline (docker approach) and the cluster uses an <strong>endpoint</strong> to navigate to the dashboard. The cluster was created following the instructions in this link: <a href="https://deploy.kubeflow.cloud/#/" rel="nofollow noreferrer">Deploy Kubeflow</a>. Everything was created successfully, the cluster generated the endpoints, and it's working perfectly. </p> <p>The endpoint link would be something like this: <a href="https://%3Cname%3E.endpoints.%3Cproject%3E.cloud.goog" rel="nofollow noreferrer">https://appname.endpoints.projectname.cloud.goog</a>.</p> <p>Every workload of the pipeline is working fine except the last one. In the last workload, I am trying to submit a job to the Cloud ML Engine, but its logs show that the application has no access to the project. Here is the log output.</p> <blockquote> <p>ERROR: (gcloud.ml-engine.versions.create) PERMISSION_DENIED: Request had insufficient authentication scopes. </p> <p>ERROR: (gcloud.ml-engine.jobs.submit.prediction) User [clustername@project_name.iam.gserviceaccount.com] does not have permission to access project [project_name] (or it may not exist): Request had insufficient authentication scopes.</p> </blockquote> <p>From the logs, it's clear that this service account doesn't have access to the project itself. However, I tried to grant this service account access to the Cloud ML service, but it still throws the same error.</p> <p>Are there any other ways to give Cloud ML service credentials to this application? </p>
<p>Check two things:</p> <p>1) GCP IAM: if clustername-user@projectname.iam.gserviceaccount.com has ML Engine Admin permission.</p> <p>2) Your pipeline DSL: if the cloud-ml engine step calls apply(gcp.use_gcp_secret('user-gcp-sa')), e.g. <a href="https://github.com/kubeflow/pipelines/blob/ea07b33b8e7173a05138d9dbbd7e1ce20c959db3/samples/tfx/taxi-cab-classification-pipeline.py#L67" rel="nofollow noreferrer">https://github.com/kubeflow/pipelines/blob/ea07b33b8e7173a05138d9dbbd7e1ce20c959db3/samples/tfx/taxi-cab-classification-pipeline.py#L67</a></p>
<p>I am currently learning Kubernetes and I'm stuck on how to handle the following situation:</p> <p>I have a Spring Boot application which handles files(photos, pdf, etc...) uploaded by users, users can also download these files. This application also produces logs which are spread into 6 different files. To make my life easier I decided to have a root directory containing 2 subdirectories(1 directory for users data and 1 for logs) so the application works only with 1 directory(<code>appData</code>)</p> <pre><code>.appData |__ usersData |__ logsFile </code></pre> <p>I would like to use GKE (Google Kubernetes Engine) to deploy this application but I have these problems: </p> <ul> <li><strong>How to handle multiple replicas which will read/write concurrently data + logs in the <code>appData</code> directory?</strong> </li> <li>Regarding logs, is it possible to have multiple Pods writing to the same file?</li> <li>Say we have 3 replicas (Pod-A, Pod-B and Pod-C), if user A uploads a file handled by Pod-B, how Pod-A and Pod-C will discover this file if the same user requests later its file?</li> <li>Should each replica have its own volume? (I would like to avoid this situation, which seems the case when using StatefulSet)</li> <li>Should I have only one replica? (using Kubernetes will be useless in that case)</li> </ul> <p>Same questions about database's replicas. I use <code>PostgreSQL</code> and I have the same questions. If we have multiple replicas, as requests are randomly send to replicas, how to be sure that requesting data will return a result?</p> <p>I know there a lot of questions. Thanks a lot for your clarifications.</p>
<p>I'd do two separate solutions for logs and for shared files.</p> <p>For logs, look at a log aggregator like fluentd.</p> <p>For shared file system, you want an NFS. Take a look at this example: <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs</a>. The NFS will use a persistent volume from GKE, Azure, or AWS. It's not cloud agnostic per se, but the only thing you change is your provisioner if you want to work in a different cloud.</p>
<p>I recently created an AKS hosting cluster our microservices architecture.</p> <p>However, we do have some services that cannot be contained like the rest.</p> <p>My need is to be able to connect the virtual network on which my cluster is located to that of my virtual machines.</p> <p><strong>I would like to know what is the best way to do this action?</strong></p> <p>I have so far created two "virtual network gateway" that I have tried to connect between them. However, the status of the connection always remains on "Connecting"</p> <p>And when I connect to one of the pods in the cluster, the connection doesn't work.</p> <p><strong>Is there any other way to make it work?</strong></p>
<p>It's a lot easier (and cheaper) to use virtual network peering; you can follow <a href="https://learn.microsoft.com/en-us/azure/virtual-network/tutorial-connect-virtual-networks-portal" rel="nofollow noreferrer">this tutorial</a> to peer two networks.</p> <p>Also, you need to use <a href="https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni" rel="nofollow noreferrer">Azure CNI</a> when creating the AKS cluster (this needs recreating, you can't alter it after AKS was created). This way every pod gets a dedicated IP address, so the pods can use the connection between the virtual networks.</p> <p>You can also use gateways, but since peering now works globally it makes very little sense to use gateways/site-to-site.</p>
<p>I have a container based application running node JS and my backend is a mongoDB container. </p> <p>Basically, what I am planning to do is to run this in kubernetes. </p> <p>I have deployed this as separate containers on my current environment and it works fine. I have a mongoDB container and a node JS container. </p> <p>To connect the two I would do </p> <pre><code>docker run -d --link=mongodb:mongodb -e MONGODB_URL='mongodb://mongodb:27017/user' -p 4000:4000 e922a127d049 </code></pre> <p>my connection.js runs as below where it would take the MONGODB_URL and pass into the process.env in my node JS container. My connection.js would then extract the MONGODB_URL into the mongoDbUrl as show below. </p> <pre><code>const mongoClient = require('mongodb').MongoClient; const mongoDbUrl = process.env.MONGODB_URL; //console.log(process.env.MONGODB_URL) let mongodb; function connect(callback){ mongoClient.connect(mongoDbUrl, (err, db) =&gt; { mongodb = db; callback(); }); } function get(){ return mongodb; } function close(){ mongodb.close(); } module.exports = { connect, get, close }; </code></pre> <p>To deploy on k8s, I have written a yaml file for </p> <p>1) web controller 2) web service 3) mongoDB controller 4) mongoDB service</p> <p>This is my current mongoDB controller </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: mongo-deployment spec: replicas: 1 template: metadata: labels: name: mongo spec: containers: - image: mongo:latest name: mongo ports: - name: mongo containerPort: 27017 hostPort: 27017 </code></pre> <p>my mongoDB service</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: name: mongodb name: mongodb spec: ports: - port: 27017 targetPort: 27017 selector: name: mongo </code></pre> <p>my web controller</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: labels: name: web name: web-controller spec: replicas: 1 selector: name: web template: metadata: labels: name: web spec: containers: - image: 
leexha/node_demo:21 env: - name: MONGODB_URL value: "mongodb://mongodb:27017/user" name: web ports: - containerPort: 4000 name: node-server </code></pre> <p>and my web service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: web labels: name: web spec: type: NodePort ports: - port: 4000 targetPort: 4000 protocol: TCP selector: name: web </code></pre> <p>I was able to deploy all the services and pods on my local kubernetes cluster. </p> <p>However, when I tried to access the web application over a nodeport, it tells me that there is a connection error to my mongoDB. </p> <pre><code>TypeError: Cannot read property 'collection' of null at /app/app.js:24:17 at Layer.handle [as handle_request] </code></pre> <p>This is my node JS code for app.js</p> <pre><code>var bodyParser = require('body-parser') , MongoClient = require('mongodb').MongoClient , PORT = 4000 , instantMongoCrud = require('express-mongo-crud') // require the module , express = require('express') , app = express() , path = require('path') , options = { //specify options host: `localhost:${PORT}` } , db = require('./connection') // connection to database db.connect(() =&gt; { app.use(bodyParser.json()); // add body parser app.use(bodyParser.urlencoded({ extended: true })); //console.log('Hello ' + process.env.MONGODB_URL) // get function app.get('/', function(req, res) { db.get().collection('users').find({}).toArray(function(err, data){ if (err) console.log(err) else res.render('../views/pages/index.ejs',{data:data}); }); }); </code></pre> <p>Clearly, this is an error when my node JS application is unable to read the mongoDB service. </p> <p>I at first thought my MONGODB_URL was not set in my container. However, when I checked the nodeJS container using </p> <pre><code>kubectl exec -it web-controller-r269f /bin/bash </code></pre> <p>and echo my MONGODB_URL it returned me back mongodb://mongodb:27017/user which is correct. 
</p> <p>Im quite unsure what I am doing wrong as I am pretty sure I have done everything in order and my web deployment is communicating to mongoDB service. Any help? Sorry am still learning kubernetes and please pardon any mistakes</p>
<p>[Edit]</p> <p>Sorry, my bad: the connection string <code>mongodb://mongodb:27017</code> would actually work. I tried DNS-querying that name, and it was able to resolve to the correct IP address even without specifying ".default.svc...".</p> <p><code>root@web-controller-mlplb:/app# host mongodb mongodb.default.svc.cluster.local has address 10.108.119.125</code></p> <p>@Anshul Jindal is correct that you have a race condition, where the web pods are loaded before the database pods. You were probably doing <code>kubectl apply -f .</code> Try doing a reset with <code>kubectl delete -f .</code> in the folder containing those yaml files. Then <code>kubectl apply</code> the database manifests first and, after a few seconds, <code>kubectl apply</code> the web manifests. You could also probably use Init Containers to check when the mongo service is ready before running the pods. Or you can do that check in your node.js application.</p> <p><strong>Example of waiting for mongodb service in Node.js</strong></p> <p>In your connection.js file, you can change the connect function such that if it fails the first time (i.e. due to the mongodb service/pod not being available yet), it will retry every 3 seconds until a connection can be established. This way, you don't even have to worry about the apply order of the kubernetes manifests; you can just <code>kubectl apply -f .</code></p> <pre><code>let RECONNECT_INTERVAL = 3000

function connect(callback) {
  mongoClient.connect(mongoDbUrl, (err, db) =&gt; {
    if (err) {
      console.log("attempting to reconnect to " + mongoDbUrl)
      setTimeout(connect.bind(this, callback), RECONNECT_INTERVAL)
      return
    } else {
      mongodb = db;
      callback();
    }
  });
}
</code></pre>
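<p>The same wait-for-the-dependency idea can be sketched in Python for comparison (the flaky connector below is a stand-in for illustration, not real mongo client code):</p>

```python
import time

# Generic retry-until-ready loop: keep calling connect_once() until it
# succeeds or we run out of attempts, instead of relying on the order
# in which the kubernetes manifests were applied.
RECONNECT_INTERVAL = 3.0

def connect_with_retry(connect_once, attempts=5, interval=RECONNECT_INTERVAL):
    """Call connect_once() until it succeeds or attempts are exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return connect_once()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(interval)

# Fake dependency that only comes up on the third try.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("mongodb not ready yet")
    return "connected"

print(connect_with_retry(flaky_connect, interval=0.01))  # connected
```

The same shape works for any dependency that may come up after the caller does.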
<p>I deployed a gRPC service (spring boot docker image) in my on-premise kubernetes cluster. I followed this <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/grpc" rel="nofollow noreferrer">documentation</a> to configure correctly deployment, service and ingress kubernetes manifests.</p> <p>I tried to test my service using grpcurl : </p> <pre><code>grpcurl -insecure fortune-teller.mydomain.cloud:443 build.stack.fortune.FortuneTeller/Predict </code></pre> <p>and the request still stuck for minutes. </p> <p>In the ingress logs (debug enabled), I see client timeout :</p> <pre><code>client timed out (110: Connection timed out), client: 1.2.3.4, server: _, request: "POST /grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo HTTP/2.0", host: "fortune-teller.mydomain.cloud:443" </code></pre> <p>After 4 timeouts in the ingress logs, the command ends at the client side with :</p> <pre><code>Error invoking method "build.stack.fortune.FortuneTeller/Predict": failed to query for service descriptor "build.stack.fortune.FortuneTeller": rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR </code></pre> <p>I use <code>nginx/1.13.12</code> with <code>rancher/nginx-ingress-controller:0.16.2-rancher1</code> image.</p> <p>At the annotation level, I tested :</p> <pre><code>kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/backend-protocol: "GRPC" </code></pre> <p>and</p> <pre><code>nginx.ingress.kubernetes.io/grpc-backend: "true" </code></pre> <p>and both have the same result (timeout).</p> <p><strong>Note</strong> : I bypassed the ingress testing another gRPC service directly using the kubernetes service DNS and it works.</p> <p>Any idea of what's going wrong ?</p>
<p>I found a problem in my ingress resource.</p> <p>Despite the <code>tls</code> configuration, the nginx config didn't listen on 443; something was wrong with my <code>secret</code>.</p> <p>I fixed this and retested my different configurations.</p> <p>The working annotations are:</p> <pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/grpc-backend: "true"
</code></pre> <p>These are not the annotations used in the kubernetes <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/grpc" rel="nofollow noreferrer">documentation</a>.</p>
<p>I have a pod that responds to requests to /api/</p> <p>I want to do a rewrite where requests to /auth/api/ go to /api/.</p> <p>Using an Ingress (nginx), I thought that with the ingress.kubernetes.io/rewrite-target: annotation I could do it something like this:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapi-ing
  annotations:
    ingress.kubernetes.io/rewrite-target: /api
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: api.myapp.com
    http:
      paths:
      - path: /auth/api
        backend:
          serviceName: myapi
          servicePort: myapi-port
</code></pre> <p>What's happening however is that /auth/ is being passed to the service/pod and a 404 is rightfully being thrown. I must be misunderstanding the rewrite annotation.</p> <p>Is there a way to do this via k8s &amp; ingresses?</p>
<p>I don't know if this is still an issue, but since version 0.22 it seems you need to use capture groups to pass values to the rewrite-target value, from the nginx example available <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">here</a>:</p> <blockquote> <p>Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.</p> </blockquote> <p>For your specific needs, something like this should do the trick:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapi-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /api/$2
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: api.myapp.com
    http:
      paths:
      - path: /auth/api(/|$)(.*)
        backend:
          serviceName: myapi
          servicePort: myapi-port
</code></pre>
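<p>If it helps, the capture-group substitution can be sanity-checked outside the cluster. A small Python approximation follows; translating nginx's <code>$2</code> to Python's <code>\2</code> is my assumption here, and nginx's matching semantics are only approximated:</p>

```python
import re

# Hypothetical mirror of the ingress rule above: path /auth/api(/|$)(.*)
# rewritten to /api/$2.
PATH_RE = re.compile(r"^/auth/api(/|$)(.*)$")

def rewrite(path: str) -> str:
    """Apply the substitution the rewrite-target annotation describes."""
    return PATH_RE.sub(r"/api/\2", path)

print(rewrite("/auth/api/users/1"))  # /api/users/1
print(rewrite("/auth/api"))          # /api/
```

Paths that do not match the rule (e.g. <code>/other</code>) pass through unchanged, which matches the intent of only rewriting requests under /auth/api.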
<p>I'm running the kafka kubernetes helm deployment; however, I am unsure about how to install a custom plugin.</p> <p>When running a custom plugin on my local version of kafka, I mount the volume <code>/myplugin</code> into the Docker image and then set the plugin path environment variable.</p> <p>I am unsure about how to apply this workflow to the helm charts / kubernetes deployment, mainly how to go about mounting the plugin into the Kafka Connect pod such that it can be found in the default <code>plugin.path=/usr/share/java</code>.</p>
<p>Have a look at the last few slides of <a href="https://talks.rmoff.net/QZ5nsS/from-zero-to-hero-with-kafka-connect" rel="noreferrer">https://talks.rmoff.net/QZ5nsS/from-zero-to-hero-with-kafka-connect</a>. You can mount your plugins but the best way is to either build a new image to extend the <code>cp-kafka-connect-base</code>, or to install the plugin at runtime - both using Confluent Hub. </p>
<p>I have an integration test where I start a StatefulSet, wait until it is ready and then do some asserts.</p> <p>My problem is that if the application fails, it tries to restart too fast, and I can't get logs from the failed pod.</p> <p>So my question is: how can I increase the time between pod restarts in a StatefulSet? Because K8s controllers do not support RestartPolicy: Never.</p>
<p>If all you want is to view the logs of the terminated pod, you can do</p> <p><code>kubectl logs &lt;pod_name&gt; --previous</code></p>
<p>using ingress example in kubernetes. I've been trying to browse but couldn't understand how to rewrite ingress to take a file which needs from the website with the ingress object in kubernetes from a folder</p> <p>Here is the example:</p> <pre><code> apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 name: tomcat-ingress namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: tomcat-deployment-service servicePort: 8080 path: /tomcat/?(.*) - backend: serviceName: nginx-deployment-service servicePort: 80 path: /nginx/?(.*) </code></pre> <p>AND here how is done in our local nginx server:</p> <pre><code> location ~* /company/logo/[0-9]*/.*\.(jpg|jpeg|gif|png)$ { root /opt/comapny/docs-branch/; rewrite /company/logo/([0-9]*)/(.*) /$1/$2 break; </code></pre> <p>I'm trying to figure out how when accessing this endpoint to redirect to a folder so it can take its file when needed. Also shall it try to access them from the nginx controller pod or the pod that has the service with the ingress</p>
<p>It's possible to add <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="nofollow noreferrer">custom config snippets to an Ingress via annotations</a>:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* /company/logo/[0-9]*/.*\.(jpg|jpeg|gif|png)$ {
        rewrite /company/logo/([0-9]*)/(.*) /$1/$2 break;
      }
</code></pre> <p>You can easily break things here, so to view the eventual config file in the ingress-controller pod:</p> <pre><code>kubectl get pods
kubectl exec mingress-nginx-ingress-controller-8f57f66d-mm9j7 cat /etc/nginx/nginx.conf | less
</code></pre>
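<p>Before deploying, the location/rewrite pair can be tried out against sample URLs. A rough Python re-creation follows; the exact nginx matching semantics are only approximated, so treat this as a sanity check, not a guarantee:</p>

```python
import re

# Case-insensitive location match (nginx ~*) and the rewrite pattern
# from the server-snippet above.
LOCATION_RE = re.compile(r"/company/logo/[0-9]*/.*\.(jpg|jpeg|gif|png)$",
                         re.IGNORECASE)
REWRITE_RE = re.compile(r"/company/logo/([0-9]*)/(.*)")

def rewritten(path: str):
    """Return the rewritten path, or None if the location would not match."""
    if not LOCATION_RE.search(path):
        return None
    return REWRITE_RE.sub(r"/\1/\2", path)

print(rewritten("/company/logo/42/header.png"))  # /42/header.png
print(rewritten("/company/logo/42/header.txt"))  # None
```

A non-image extension falls outside the location block, so it is left for other rules to handle.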
<p>I have created a pod that produces several millions of log lines.</p> <p>However <code>kubectl logs -f &lt;pod_name&gt;</code> at some point stops and the output freezes.</p> <p>Here is my experiment:</p> <pre><code>kubectl logs --follow=true test-logs-job-13263-t5dbx | tee file1 kubectl logs --follow=true test-logs-job-13263-t5dbx | tee file2 kubectl logs --follow=true test-logs-job-13263-t5dbx | tee file3 </code></pre> <p>Each of the above processes was interrupted (<code>Ctrl+C</code>) as soon as I stopped receiving any input on my screen.</p> <p>So:</p> <pre><code>$ wc -l file* 106701 file1 106698 file2 106698 file3 320097 total </code></pre> <p>It is worth mentioning also that each of these files contains different logs, i.e. the actual logs that were streamed at the time the <code>kubectl logs -f</code> command was actually executed.</p> <p>Is there a setting that limits the max amount of logs to be streamed to the above number? (~10700) </p> <p>My cluster is on GKE for what that matters.</p> <p><strong>edit</strong>: For some reason this seem to happen <strong>only</strong> on GKE; when I run the same experiment on another k8s cluster (<a href="https://www.katacoda.com/courses/kubernetes/playground" rel="nofollow noreferrer"><code>katakoda</code></a>) the logs were streamed without any issue whatsoever.</p>
<p>Kubernetes does not provide logging itself but uses what the container engine provides. If you were running without kubernetes on a plain docker installation the default configuration of docker is to use the json-file driver and keep 10 mb of logs per container. See <a href="https://docs.docker.com/config/containers/logging/json-file/" rel="nofollow noreferrer">https://docs.docker.com/config/containers/logging/json-file/</a> If you need a huge amount of logs consider using central logging infrastructure and configure log forwarding. (This is supported by docker as well for several logging backends, see <a href="https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers" rel="nofollow noreferrer">https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers</a> )</p> <p>There is a bug reported that log rotation can interrupt the streaming of kubectl logs --follow: <a href="https://github.com/kubernetes/kubernetes/issues/28369" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/28369</a></p>
<p>I am not exactly sure what's going on which is why I am asking this question. When I run this command:</p> <pre><code>kubectl config get-clusters </code></pre> <p>I get:</p> <pre><code>arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1 arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1 </code></pre> <p>then I run:</p> <pre><code>kubectl config current-context </code></pre> <p>and I get:</p> <pre><code>arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1 </code></pre> <p>and if I run <code>kubectl get pods</code>, I get the expected output. But how do I switch to the other cluster/context? what's the difference between the cluster and context? I can't figure out how these commands differ:</p> <p>When I run them, I still get the pods from the wrong cluster:</p> <pre><code>root@4c2ab870baaf:/# kubectl config set-context arn:aws:eks:us-west-2:913617820371:cluster/eks1 Context "arn:aws:eks:us-west-2:913617820371:cluster/eks1" modified. root@4c2ab870baaf:/# root@4c2ab870baaf:/# kubectl get pods NAME READY STATUS RESTARTS AGE apache-spike-579598949b-5bjjs 1/1 Running 0 14d apache-spike-579598949b-957gv 1/1 Running 0 14d apache-spike-579598949b-k49hf 1/1 Running 0 14d root@4c2ab870baaf:/# kubectl config set-cluster arn:aws:eks:us-west-2:91xxxxxx371:cluster/eks1 Cluster "arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1" set. 
root@4c2ab870baaf:/# kubectl get pods NAME READY STATUS RESTARTS AGE apache-spike-579598949b-5bjjs 1/1 Running 0 14d apache-spike-579598949b-957gv 1/1 Running 0 14d apache-spike-579598949b-k49hf 1/1 Running 0 14d </code></pre> <p>so I really don't know how to properly switch between clusters or contexts and also switch the auth routine when doing so.</p> <p>For example:</p> <pre><code>contexts: - context: cluster: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster user: arn:aws:eks:us-west-2:91xxxx371:cluster/ignitecluster name: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster - context: cluster: arn:aws:eks:us-west-2:91xxxx371:cluster/teros-eks-cluster user: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster name: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster </code></pre>
<p>To clarify the difference between <code>set-context</code> and <code>use-context</code>:</p> <p>A context is a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. So when you do <strong>set-context</strong>, you are just adding context details to your configuration file <code>~/.kube/config</code>, but it doesn't switch you to that context, while <strong>use-context</strong> actually does.</p> <p>Thus, as Vasily mentioned, in order to switch between clusters run</p> <pre><code>kubectl config use-context &lt;CONTEXT-NAME&gt; </code></pre> <p>Also, if you run <code>kubectl config get-contexts</code> you will see a list of contexts with an indication of the current one.</p>
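<p>If it helps to see the mechanics, the context/cluster relationship can be modelled in a few lines of Python. The kubeconfig below is a made-up dict (field names follow the real file, values are invented), not something read from disk:</p>

```python
# A context is only a named pointer to (cluster, user, namespace);
# use-context flips which pointer is "current", nothing more.
kubeconfig = {
    "contexts": [
        {"name": "eks-cluster-1",
         "context": {"cluster": "eks-cluster-1", "user": "admin"}},
        {"name": "eks1",
         "context": {"cluster": "eks1", "user": "admin"}},
    ],
    "current-context": "eks-cluster-1",
}

def use_context(cfg: dict, name: str) -> None:
    """Equivalent of `kubectl config use-context`: change current-context."""
    if name not in {c["name"] for c in cfg["contexts"]}:
        raise ValueError(f"no context named {name}")
    cfg["current-context"] = name

use_context(kubeconfig, "eks1")
print(kubeconfig["current-context"])  # eks1
```

Note that the clusters themselves are untouched; only the selection changes, which is why <code>set-cluster</code> alone never redirects your <code>kubectl get pods</code> calls.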
<p>I'm trying to figure out how to verify if a pod is running with security context privileged enabled (set to true).</p> <p>I assumed that '<code>kubectl describe pod [name]</code>' would contain this information but it does not appear to.</p> <p>I quickly created a pod using the following definition to test:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: priv-demo spec: volumes: - name: priv-vol emptyDir: {} containers: - name: priv-demo image: gcr.io/google-samples/node-hello:1.0 volumeMounts: - name: priv-vol mountPath: /data/demo securityContext: allowPrivilegeEscalation: true privileged: true </code></pre> <p>Any ideas how to retrieve the security context? It must be an easy thing to do and I've just overlooked something.</p>
<pre><code>kubectl get pod POD_NAME -o json | jq -r '.spec.containers[].securityContext.privileged' </code></pre>
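<p>If you'd rather avoid jq, the same field can be pulled out of the <code>kubectl get pod ... -o json</code> output with any JSON-aware tool. A minimal Python sketch, where the pod document is a trimmed, hypothetical payload rather than real cluster output:</p>

```python
import json

# Cut-down stand-in for `kubectl get pod priv-demo -o json`.
pod_json = """
{
  "spec": {
    "containers": [
      {"name": "priv-demo",
       "securityContext": {"allowPrivilegeEscalation": true, "privileged": true}}
    ]
  }
}
"""

pod = json.loads(pod_json)
# One entry per container; containers without a securityContext
# default to not privileged.
privileged = [
    c.get("securityContext", {}).get("privileged", False)
    for c in pod["spec"]["containers"]
]
print(privileged)  # [True]
```
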
<p>I am unable to get list of namespaces using rest api and rest end point is <code>https://&lt;localhost&gt;:8001/api/v1/namespaces</code></p> <p>Using <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#list-namespace-v1-core" rel="nofollow noreferrer">this kubernetes document</a>:</p> <p>I am using postman. I will repeat the steps:</p> <ol> <li>Created a user and given cluster admin privileges: </li> </ol> <p><code>kubectl create serviceaccount exampleuser</code></p> <ol start="2"> <li>Created a rolebinding for our user with cluster role cluster-admin: </li> </ol> <p><code>kubectl create rolebinding &lt;nameofrolebinding&gt; --clusterrole cluster-admin --serviceaccount default:exampleuser</code></p> <ol start="3"> <li>Checked rolebinding using: </li> </ol> <p><code>kubectl describe rolebinding &lt;nameofrolebinding&gt;</code></p> <ol start="4"> <li>Now by using:</li> </ol> <p><code>kubectl describe serviceaccount exampleuser kubectl describe secret exampleuser-xxxx-xxxx</code> </p> <p>I will use token I got here to authenticate postman.</p> <pre><code>GET https://&lt;ipofserver&gt;:port/api/v1/namespace </code></pre> <p>AUTH using bearer token.</p> <p>Expected result to list all namespaces in cluster. like <code>kubectl get namespaces</code>. But got a warning as follows.</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "namespaces is forbidden: User \"system:serviceaccount:default:exampleuser\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope", "reason": "Forbidden", "details": { "kind": "namespaces" }, "code": 403 } </code></pre> <p>I have used "cluster-admin" clusterrole for the user, still getting authentication related error. please help.</p>
<p>You should use <code>clusterrolebinding</code> instead of <code>rolebinding</code>:</p> <pre><code>kubectl create clusterrolebinding &lt;nameofrolebinding&gt; --clusterrole cluster-admin --serviceaccount default:exampleuser </code></pre> <p><code>RoleBinding</code> grants permissions on namespaced resources, but <code>namespace</code> is not a namespaced resource; you can check this with <code>kubectl api-resources</code>.</p> <p>More detail at <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">rolebinding-and-clusterrolebinding</a>:</p> <blockquote> <p>Permissions can be granted within a namespace with a RoleBinding, or cluster-wide with a ClusterRoleBinding</p> </blockquote>
<p>One way to get the resource quota values in kubernetes is to use the following command:</p> <pre><code>&gt;kubectl describe resourcequotas
Name:           default-quota
Namespace:      my-namespace
Resource        Used     Hard
--------        ----     ----
configmaps      19       100
limits.cpu      13810m   18
limits.memory   25890Mi  36Gi
</code></pre> <p>But the issue is that this displays all the values in plain-text format. Does anyone know how I can get them in JSON format?</p> <p>Of course, I can parse the output, get each individual entry and construct the JSON.</p> <pre><code>kubectl describe quota | grep limits.cpu | awk '{print $2}'
13810m
</code></pre> <p>But I am looking for something built in or some quick way of doing it. Thanks for your help.</p>
<p>Thanks for your messages. Let me answer my own question, I have found one.</p> <p><a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer">jq</a> has solved my problem. </p> <p><strong>To get the Max limit of resources in json format</strong></p> <pre><code>kubectl get quota -ojson | jq -r .items[].status.hard </code></pre> <p><strong>To get the Current usage of resources in json format</strong></p> <pre><code>kubectl get quota -ojson | jq -r .items[].status.used </code></pre>
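<p>A jq-free version of the same extraction, in case Python is closer to hand. The quota document below is a cut-down, hypothetical <code>kubectl get quota -o json</code> payload, not real output:</p>

```python
import json

# Stand-in for `kubectl get quota -o json`.
quota_json = """
{
  "items": [
    {"status": {
      "hard": {"configmaps": "100", "limits.cpu": "18", "limits.memory": "36Gi"},
      "used": {"configmaps": "19", "limits.cpu": "13810m", "limits.memory": "25890Mi"}
    }}
  ]
}
"""

quota = json.loads(quota_json)
# Merge the hard/used maps across all quota items, mirroring
# `jq -r .items[].status.hard` and `.items[].status.used`.
hard = {k: v for item in quota["items"] for k, v in item["status"]["hard"].items()}
used = {k: v for item in quota["items"] for k, v in item["status"]["used"].items()}
print(hard["limits.cpu"], used["limits.cpu"])  # 18 13810m
```
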
<p>Mostly addressed to: google-cloud-platform</p> <p>Overall problem I am trying to solve is; to pull images from Google Container Registry from private Kubernetes.</p> <p><strong>Update</strong> Just added heptio-contour if some one over there have come across this - as the good people at Heptio has created the script mentioned in the question further down - thanks.</p> <p>First step is to just use the Service Account with a JSON key - as described <a href="https://cloud.google.com/container-registry/docs/advanced-authentication#json_key_file" rel="nofollow noreferrer">here</a>.<br> But when I run:</p> <pre><code>cat gcr-sa-key.json | docker login -u _json_key --password-stdin https://gcr.io </code></pre> <p>I should be able to login docker, but it fails with:</p> <pre><code>cat gcr-sa-key.json | docker login -u _json_key --password-stdin https://gcr.io Error response from daemon: Get https://gcr.io/v2/: unauthorized: GCR login failed. You may have invalid credentials. To login successfully, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication </code></pre> <p><strong>Note</strong>: I got the <code>gcr-sa-key.json</code> file from running <a href="http://docs.heptio.com/content/private-registries/pr-gcr.html" rel="nofollow noreferrer">this</a> - keep in mind that I am overall trying to use this from Kubernetes.</p> <p>I expect this to be a Google issue, but/and if I do run as described in the doc from Heptio I get:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50s default-scheduler Successfully assigned default/&lt;image-name&gt;-deployment-v1-844568c768-5b2rt to my-cluster-digitalocean-1-7781 Normal Pulling 37s (x2 over 48s) kubelet, my-cluster-digitalocean-1-7781 pulling image "gcr.io/&lt;project-name&gt;&lt;image-name&gt;:v1" Warning Failed 37s (x2 over 48s) kubelet, my-cluster-digitalocean-1-7781 Failed to pull image 
"gcr.io/&lt;project-name&gt;/&lt;image-name&gt;:v1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/&lt;project-name&gt;/&lt;image-name&gt;, repository does not exist or may require 'docker login' Warning Failed 37s (x2 over 48s) kubelet, my-cluster-digitalocean-1-7781 Error: ErrImagePull Normal SandboxChanged 31s (x7 over 47s) kubelet, my-cluster-digitalocean-1-7781 Pod sandbox changed, it will be killed and re-created. Normal BackOff 29s (x6 over 45s) kubelet, my-cluster-digitalocean-1-7781 Back-off pulling image "gcr.io/&lt;project-name&gt;/&lt;image-name&gt;:v1" Warning Failed 29s (x6 over 45s) kubelet, my-cluster-digitalocean-1-7781 Error: ImagePullBackOff </code></pre> <p>Just info. that might be related, I saw <a href="https://github.com/GoogleContainerTools/skaffold/issues/336" rel="nofollow noreferrer">this</a> issue on github.</p>
<p>You are missing the most important bit: you need to grant the Kubernetes default service account (the simplest approach) the permission to access your private container registry while pulling images. You do this in three steps:</p> <ol> <li>Create and grant your GCP service account an appropriate role in IAM (at least Storage Object Viewer) as explained <a href="https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry" rel="nofollow noreferrer">here</a> in the official doc</li> <li>Create a kubernetes secret (of 'docker-registry' type) using the downloaded JSON key for your GCP service account</li> </ol> <pre><code>kubectl create secret docker-registry my-private-gcr-readonly \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat /usr/local/home/demo/414141.json)" \
  --docker-email=some@project-id.iam.gserviceaccount.com
</code></pre> <ol start="3"> <li>Grant your default Kubernetes service account (your Pods run under its security context by default) the right to pull images from the private GCR repo. This is done indirectly, by assigning it the secret for the imagePull operation:</li> </ol> <pre><code>kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-private-gcr-readonly"}]}'
</code></pre> <p>That's it!</p> <p>PS.</p> <p>You can also check <a href="https://container-solutions.com/using-google-container-registry-with-kubernetes/" rel="nofollow noreferrer">this</a> tutorial, which explains both ways of accessing Google Container Registry from within a Kubernetes cluster (using a JSON key or an access token).</p>
<p>I'm trying to verify that shutdown is completing cleanly on Kubernetes, with a .NET Core 2.0 app.</p> <p>I have an app which can run in two "modes" - one using ASP.NET Core and one as a kind of worker process. Both use Console and JSON-which-ends-up-in-Elasticsearch-via-Filebeat-sidecar-container logger output which indicate startup and shutdown progress.</p> <p>Additionally, I have console output which writes directly to stdout when a SIGTERM or Ctrl-C is received and shutdown begins.</p> <p>Locally, the app works flawlessly - I get the direct console output, then the logger output flowing to stdout on Ctrl+C (on Windows).</p> <p>My experiment scenario:</p> <ul> <li>App deployed to GCS k8s cluster (using <code>helm</code>, though I imagine that doesn't make a difference)</li> <li>Using <code>kubectl logs -f</code> to stream logs from the specific container</li> <li>Killing the pod from GCS cloud console site, or deleting the resources via <code>helm delete</code></li> <li>Dockerfile is <code>FROM microsoft/dotnet:2.1-aspnetcore-runtime</code> and has <code>ENTRYPOINT ["dotnet", "MyAppHere.dll"]</code>, so not wrapped in a <code>bash</code> process or anything</li> <li>Not specifying a <code>terminationGracePeriodSeconds</code> so guess it defaults to 30 sec</li> <li>Observing output returned</li> </ul> <p>Results:</p> <ul> <li>The API pod log streaming showed just the immediate console output, "[SIGTERM] Stop signal received", not the other Console logger output about shutdown process</li> <li>The worker pod log streaming showed a little more - the same console output and some Console logger output about shutdown process</li> <li>The JSON logs didn't seem to pick any of the shutdown log output</li> </ul> <p>My conclusions:</p> <ol> <li>I don't know if Kubernetes is allowing the process to complete before terminating it, or just issuing SIGTERM then killing things very quick. 
I think it should be waiting, but then, why no complete console logger output?</li> <li>I don't know if console output is cut off when stdout log streaming at some point before processes finally terminates?</li> <li>I would guess that the JSON stuff doesn't come through to ES because filebeat running in the sidecar terminates even if there's outstanding stuff in files to send</li> </ol> <p>I would like to know:</p> <ul> <li>Can anyone advise on points 1,2 above?</li> <li>Any ideas for a way to allow a little extra time or leeway for the sidecar to send stuff up, like a pod container termination order, delay on shutdown for that container, etc?</li> </ul>
<p><code>SIGTERM</code> does indeed signal termination. The less obvious part is that when the <code>SIGTERM</code> handler returns, everything is considered finished.</p> <p>The fix is to not return from the <code>SIGTERM</code> handler until the app has finished shutting down. For example, using a <code>ManualResetEvent</code> and <code>Wait()</code>ing it in the handler.</p>
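<p>The pattern is language-agnostic; a Python sketch of the same idea follows (illustration only, since the original app is .NET): the handler must not return until shutdown has actually finished.</p>

```python
import signal
import threading

# Event the application sets once cleanup is really done.
shutdown_complete = threading.Event()
received = []

def handle_sigterm(signum, frame):
    received.append(signum)
    # Pretend some other thread finishes cleanup shortly after the signal.
    threading.Timer(0.1, shutdown_complete.set).start()
    # Block here; returning from the handler early is what cuts
    # the shutdown short.
    shutdown_complete.wait(timeout=5)

signal.signal(signal.SIGTERM, handle_sigterm)
signal.raise_signal(signal.SIGTERM)  # simulate Kubernetes sending SIGTERM
print(shutdown_complete.is_set())  # True
```

The <code>ManualResetEvent</code> mentioned above plays the role of <code>threading.Event</code> here: the handler waits on it, and whatever thread completes the graceful shutdown sets it.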
<p>I was using a Docker-based setup with an nginx reverse proxy forwarding to Dockerized Microservices for some time. Right now I am evaluating a switch to a Kubernetes-based approach and the <a href="http://traefik.io/" rel="nofollow noreferrer">Traefik</a> Ingress Controller.</p> <p>The Ingress Controller provides all functionality required for this, except for one: <a href="https://github.com/containous/traefik/issues/878" rel="nofollow noreferrer">It doesn't support caching</a>.</p> <p>The Microservices aren't very performant when it comes to serving static resources, and I would prefer to reduce the load so they can <em>concentrate</em> on their actual purpose, handling dynamic REST requests. </p> <p>Is there any way to add caching support for Traefik-based Ingress? As there are many yet small services, I'd prefer not to spinup a dedicated Pod per Microservice if possible. Additionally, a configuration-based approach would be appreciated, if possible (maybe using a custom <a href="https://coreos.com/operators/" rel="nofollow noreferrer">Operator</a>?).</p>
<p>Caching functionality is still on the wish list in the <a href="https://github.com/containous/traefik/issues/878" rel="noreferrer">Traefik</a> project. <br> As a kind of workaround, please check <a href="https://github.com/jonashackt/traefik-cache-nginx-spring-boot" rel="noreferrer">this</a> scenario, where NGINX is put in front to do caching.<br> I don't see any contraindications to applying the same idea in front of the Traefik Ingress Controller.</p>
<p>I am trying to figure out how to use multiple parameters on kubernetes. For example:</p> <pre><code>kubectl get pods -n default || kube-system </code></pre> <p>(but the results of this query only cover the default namespace).</p> <p>How can I use multiple params?</p>
<p>You can't query for resources in multiple namespaces in one command; there is an explanation of why it is not worth doing in this <a href="https://github.com/kubernetes/kubernetes/issues/52326" rel="nofollow noreferrer">github issue</a>.</p> <p>But you can query for multiple resources across one namespace or <code>--all-namespaces</code>. For example, to get <code>services</code> and <code>pods</code> for the namespaces <code>kube-dns</code> and <code>default</code> (this includes the workaround @PEkambaram suggested):</p> <pre><code>kubectl get svc,pods --all-namespaces | egrep -e 'kube-dns|default' </code></pre>
<p>I need to setup a basic rabbit mq instance (no cluster setup) without persistence or security requirements on a kubernetes cluster.</p> <p>What I need:</p> <p>Single rabbit mq pod running as stateful set with replicas = 1, and reach it from inside and outside of cluster via specific url (amgp port and mangement interface port)</p> <p>What I don't need:</p> <ul> <li>persistence</li> <li>security</li> <li>cluster setup</li> </ul> <p>The helm charts I found so far are all adressing production setups with clustering, persistence and so on, but I don't need this stuff as I will use instance only for testing</p> <p>This is what I have so far:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: rs-rmq-mgt spec: selector: app: rs-rmq ports: - protocol: TCP port: 1337 targetPort: 15672 type: NodePort --- apiVersion: apps/v1 kind: StatefulSet metadata: name: rs-rmq spec: selector: matchLabels: app: rs-rmq serviceName: "rs-rmq" replicas: 1 template: metadata: labels: app: rs-rmq spec: containers: - name: rs-rmq image: rabbitmq:management ports: - containerPort: 25672 - containerPort: 5672 - containerPort: 4369 - containerPort: 15672 </code></pre>
<p>If you only need one replica and no persistence, you can go with a simple pod deployment rather than a StatefulSet. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">Please refer to the sts doc</a>.</p> <pre><code>kubectl run rabbitmq --image=rabbitmq:management --port=15672 --restart=Never --dry-run -o yaml &gt; rabbitmq.yml </code></pre> <p>Edit the relevant container ports and create the pod.</p> <pre><code>kubectl create -f rabbitmq.yml </code></pre> <p>Expose the pod as a NodePort service.</p> <pre><code>kubectl expose po rabbitmq --port 15672 --type=NodePort </code></pre> <p>Now you can access it externally via</p> <blockquote> <p>NodesIP:NodePort</p> </blockquote> <p>and internally by using</p> <blockquote> <p>[svc].[namespace].svc</p> </blockquote>
<p>I am migrating from minikube to Microk8s and I want to change the configs of Microk8s and control the resources that it can use (cpu, memory, etc.).</p> <p>In minikube we can use commands like below to set the amount of resources for minikube:</p> <pre><code>minikube config set memory 8192 minikube config set cpus 2 </code></pre> <p>But I don't know how to do it in Microk8s. I used below commands (with and without sudo):</p> <pre><code>microk8s.config set cpus 4 microk8s.config set cpu 4 </code></pre> <p>And they returned:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: VORCBDRVJUSUZJQ0FURS0tLS0... server: https://10.203.101.163:16443 name: microk8s-cluster contexts: - context: cluster: microk8s-cluster user: admin name: microk8s current-context: microk8s kind: Config preferences: {} users: - name: admin user: username: admin password: ... </code></pre> <p>But when I get the describe for that node I see that Microk8s is using 8 cpu:</p> <pre><code>Capacity: cpu: 8 ephemeral-storage: 220173272Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32649924Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 219124696Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32547524Ki pods: 110 </code></pre> <p>How can I change the config of Microk8s?</p>
<p>You have a wrong understanding of the microk8s concept.</p> <p>Unlike minikube, microk8s does not provision any VMs for you; it runs directly on your host machine, so all of the host's resources are visible to microk8s.</p> <p>So, in order to keep your cluster's resource usage within bounds, you have to manage it with k8s pod/container <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">resource limits</a>.</p> <p>Let's say your host has 4 CPUs and you don't want your microk8s cluster to use more than half of its capacity.</p> <p>You will need to set the limits below based on the number of running pods. For a single pod, it would look as follows:</p> <pre><code>resources: requests: memory: "64Mi" cpu: 2 limits: memory: "128Mi" cpu: 2 </code></pre>
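<p>For reference, that <code>resources:</code> block goes under each container in the pod spec. A minimal pod manifest (the name and image are just examples) showing where it sits:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod   # example name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          memory: "64Mi"
          cpu: 2
        limits:
          memory: "128Mi"
          cpu: 2
```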
<p>We are currently migrating from AWS ECS to Azure Kubernetes Service. Our first step is to first migrate the application code and just leave the database in AWS RDS, for now. Our RDS instance is protected by a security group which only allows connection from a set of IP addresses. </p> <p>When connecting to the RDS instance, what IP address does the database see? How can I configure RDS to allow connection from a kubernetes pod?</p>
<p>If you have an Azure Load Balancer (so any kubernetes service with type <code>LoadBalancer</code>) attached to worker nodes - they will use the first IP attached to the Load Balancer. If not - they will use the public IP attached to the VM they run on. If the VM doesn't have a public IP (the default for AKS), they will use an ephemeral IP that might change at any time and that you have no control over.</p> <p>So just create a service with the type of LoadBalancer in AKS, find its external IP address and whitelist that.</p>
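<p>As a sketch (the names, labels and ports here are placeholders, not from your setup), such a service could look like:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-egress-lb   # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app        # must match your pods' labels
  ports:
    - port: 80
      targetPort: 8080
```

<p>Then <code>kubectl get svc my-egress-lb</code> shows the EXTERNAL-IP to whitelist in the RDS security group.</p>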
<p>So, I created a postgreSQL instance in Google Cloud, and I have a Kubernetes Cluster with containers that I would like to connect to it. I know that the cloud sql proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.</p> <p>I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is. </p> <p>And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about have to modify firewall rules to make this work, but I tried that anyway, finding the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables, but my attempts failed.</p> <p>Beyond going to the cloud sql sidecar, does anyone have an idea why this would not work?</p> <p>Thanks. </p>
<p>In the end, the simplest thing to do was to just use the google cloud sql proxy. As opposed to a sidecar, I have multiple containers needing db access so I put the proxy into my cluster as its own container with a service, and it seems to just work. </p>
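<p>For anyone looking for the shape of that setup, here is a rough sketch of a standalone proxy Deployment plus a Service in front of it. The instance connection string, secret name and image tag are placeholders you would replace with your own:</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.14
          command: ["/cloud_sql_proxy",
                    "-instances=PROJECT:REGION:INSTANCE=tcp:0.0.0.0:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: cloudsql-credentials
          secret:
            secretName: cloudsql-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: cloudsql-proxy
  ports:
    - port: 5432
```

<p>Other pods in the cluster can then just connect to <code>postgres:5432</code>.</p>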
<p>I'm trying to watch Kubernetes resources like pods with the Kubernetes Python client (version 9.0), but the watch.stream() call seems to hang a few minutes after I start watching the resource.</p> <pre class="lang-py prettyprint-override"><code>v1 = client.CoreV1Api() w = watch.Watch() for resource in w.stream(v1.list_namespaced_pod): dosomething(resource) </code></pre> <p>How do I keep watching the Kubernetes pods indefinitely?</p>
<p>You are hitting probably this <a href="https://github.com/kubernetes-client/python/issues/728" rel="nofollow noreferrer">issue</a>, and as workaround mentioned in this <a href="https://github.com/kubernetes-client/python/issues/728#issuecomment-455279700" rel="nofollow noreferrer">comment</a> you can try like below:</p> <pre><code>from kubernetes import client, config, watch from urllib3.exceptions import ProtocolError config.load_kube_config() api_instance = client.CoreV1Api() while True: w = watch.Watch() try: for event in w.stream(api_instance.list_namespaced_pod, namespace="default"): print("Event: %s %s %s" % (event['type'],event['object'].kind, event['object'].metadata.name)) except ProtocolError: print("watchPodEvents ProtocolError, continuing..") </code></pre> <p>Hope it helps!</p>
<p>Getting access denied error while running the systemctl command in a pod.</p> <p>Whenever try to start any service, for example, MySQL or tomcat server in a pod, it gives access denied error.</p> <p>Is there any way by which I can run systemctl within a pod.</p>
<p>This is a problem related to Docker, not Kubernetes. According to the page <a href="https://docs.docker.com/config/containers/multi-service_container/" rel="nofollow noreferrer">Run multiple services in a container</a> in docker docs: </p> <blockquote> <p>It is generally recommended that you separate areas of concern by using one service per container</p> </blockquote> <p>However if you really want to use a process manager, you can try <em>supervisord</em>, which allows you to use <em>supervisorctl</em> commands, similar to <em>systemctl</em>. The page above explains how to do that:</p> <blockquote> <p>Here is an example Dockerfile using this approach, that assumes the pre-written supervisord.conf, my_first_process, and my_second_process files all exist in the same directory as your Dockerfile.</p> </blockquote> <pre><code>FROM ubuntu:latest RUN apt-get update &amp;&amp; apt-get install -y supervisor RUN mkdir -p /var/log/supervisor COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf COPY my_first_process my_first_process COPY my_second_process my_second_process CMD ["/usr/bin/supervisord"] </code></pre>
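<p>For completeness, a minimal <code>supervisord.conf</code> matching that Dockerfile might look like this (the program names and paths are the placeholder ones from the docs example):</p>

```ini
; run supervisord in the foreground so the container stays up
[supervisord]
nodaemon=true

[program:my_first_process]
command=/my_first_process

[program:my_second_process]
command=/my_second_process
```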
<p>I am trying to access a file inside my helm templates as a config map, like below. I get an error as below.</p> <p>However, it works when my application.yml doesn't have nested objects (Eg - name: test). Any ideas on what I could be doing wrong?</p> <p><strong>config-map.yaml:</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-configmap data: {{.Files.Get "application.yml"}} </code></pre> <p><strong>application.yml:</strong></p> <pre><code>some-config: application: name: some-application-name </code></pre> <p><strong>ERROR:</strong></p> <pre><code>ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadString: expects " or n, but found {, error found in #10 byte of ...|ication" </code></pre>
<p>Looks like you have an indentation issue in your <code>application.yml</code> file. Perhaps invalid YAML? If I try your very same files I get the following:</p> <pre><code>○ → helm template ./mychart -x templates/configmap.yaml --- # Source: mychart/templates/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: release-name-configmap data: some-config: application: name: some-application-name </code></pre>
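<p>For what it's worth, a common way to embed a whole file while keeping the rendered YAML valid is to nest it under a key and fix the indentation with <code>indent</code>, e.g. (a sketch):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  application.yml: |-
{{ .Files.Get "application.yml" | indent 4 }}
```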
<p>I want to use the <code>pre-install</code> hook of helm,</p> <p><a href="https://github.com/helm/helm/blob/master/docs/charts_hooks.md" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/charts_hooks.md</a></p> <p>In the docs it's written that you need to use an annotation, which is clear, but what is not clear is how to combine it:</p> <pre><code>apiVersion: ... kind: .... metadata: annotations: "helm.sh/hook": "pre-install" </code></pre> <p>In my case I need to execute a bash script which creates some env variables. Where should I put this pre-hook script inside my chart so that helm can use it</p> <p>before installation?</p> <p>I guess I need to create a file called <code>pre-install.yaml</code> inside the <code>templates</code> folder. Is that right? If yes, where should I put the commands which create the env variables during the installation of the chart?</p> <p><strong>UPDATE</strong> The command which I need to execute in the <code>pre-install</code> is like:</p> <pre><code>export DB=prod_sales export DOMAIN=www.test.com export THENANT=VBAS </code></pre>
<p>A Helm hook launches some other Kubernetes object, most often a Job, which will launch a separate Pod. Environment variable settings will only effect the current process and children it launches later, in the same Docker container, in the same Pod. That is: you can't use mechanisms like Helm pre-install hooks or Kubernetes initContainers to set environment variables like this.</p> <p>If you just want to set environment variables to fixed strings like you show in the question, you can <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">directly set that in a Pod spec</a>. If the variables are, well, variable, but you don't want to hard-code them in your Pod spec, you can also <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">put them in a ConfigMap and then set environment variables from that ConfigMap</a>. You can also use Helm templating to inject settings from install-time configuration.</p> <pre><code>env: - name: A_FIXED_VARIABLE value: A fixed value - name: SET_FROM_A_CONFIG_MAP valueFrom: configMapKeyRef: name: the-config-map-name key: someKey - name: SET_FROM_HELM value: {{ .Values.environmentValue | quote }} </code></pre> <p>With the specific values you're showing, the Helm values path is probably easiest. You can run a command like</p> <pre><code>helm install --set db=prod_sales --set domain=www.test.com ... </code></pre> <p>and then have access to <code>.Values.db</code>, <code>.Values.domain</code>, <em>etc.</em> in your templates.</p> <p>If the value is really truly dynamic and you can't set it any other way, you can use a Docker entrypoint script to set it at container startup time. 
In <a href="https://stackoverflow.com/questions/55921914/how-to-source-a-script-with-environment-variables-in-a-docker-build-process/55922307#55922307">this answer</a> I describe the generic-Docker equivalents to this, including the entrypoint script setup.</p>
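<p>If you go the entrypoint route, a minimal sketch could look like this (the file name is hypothetical; the values are the ones from the question):</p>

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: export the variables at container startup,
# then hand off to the container's real command so it inherits them.
export DB=prod_sales
export DOMAIN=www.test.com
export THENANT=VBAS

# Run whatever command was passed in (the image's CMD), if any
if [ "$#" -gt 0 ]; then
  exec "$@"
fi
```

<p>In the Dockerfile you would then set this script as the <code>ENTRYPOINT</code> and keep the original command as <code>CMD</code>.</p>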
<p>When I am trying to send an HTTP request from one pod to another pod within my cluster, how do I target it? By the cluster IP, service IP, or service name? I cannot seem to find any documentation on this even though it seems like such a big part. Any knowledge would help. Thanks!</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="noreferrer">DNS for Services and Pods</a> should help you here.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: myservice namespace: mynamespace spec: selector: name: myapp type: ClusterIP ports: - name: http port: 80 targetPort: 80 </code></pre> <p>Let's say you have a service defined as such and you are trying to call the service from the same namespace. You can simply call <code>http://myservice:80</code> (the namespace is filled in by the pod's DNS search path). If you want to call the service from another namespace, use the fully qualified name <code>http://myservice.mynamespace.svc.cluster.local:80</code>.</p>
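<p>The naming scheme is mechanical, so as an illustration (pure string building, no cluster needed; the service and namespace names are just examples), a small helper can assemble the in-cluster URL:</p>

```python
def service_url(service, namespace, port=80, scheme="http"):
    """Build the cluster-internal DNS URL for a Kubernetes Service."""
    return f"{scheme}://{service}.{namespace}.svc.cluster.local:{port}"

# The service defined above, reachable from any namespace:
print(service_url("myservice", "mynamespace"))
# http://myservice.mynamespace.svc.cluster.local:80
```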
<p>Kubernetes java client has the sdk functions to create deployment, services and many other core kubernetes function. How can I access custom resources like that of istio's service entry, destination rules, virtual services from kubernetes java client?</p>
<p>To connect to Istio you can use the project <strong><a href="https://github.com/snowdrop/istio-java-api" rel="nofollow noreferrer">istio-java-api</a></strong>. This project uses the same approach as Fabric8's kubernetes-model. The example below shows how to build and create a VirtualService:</p> <pre><code>import me.snowdrop.istio.api.networking.v1alpha3.ExactMatchType; import me.snowdrop.istio.api.networking.v1alpha3.VirtualService; import me.snowdrop.istio.api.networking.v1alpha3.VirtualServiceBuilder; import me.snowdrop.istio.client.DefaultIstioClient; import me.snowdrop.istio.client.IstioClient; Config config = new ConfigBuilder().withMasterUrl(masterURL).build(); IstioClient istioClient = new DefaultIstioClient(config); VirtualService virtualService = new VirtualServiceBuilder() .withApiVersion("networking.istio.io/v1alpha3") .withNewMetadata() .withName("details") .endMetadata() .withNewSpec() .withHosts("*") .withGateways("system-gateway") .addNewHttp() .addNewRoute() .withNewDestination() .withHost("service-example") .withNewPort() .withNewNumberPort(9900) .endPort() .endDestination() .endRoute() .endHttp() .endSpec() .build(); istioClient.virtualService().create(virtualService); </code></pre>
<p>I have a simple 3-node cluster created using AKS. Everything has been going fine for 3 months. However, I'm starting to have some disk space usage issues that seem related to the Os disks attached to each nodes.</p> <p>I have no error in kubectl describe node and all disk-related checks are fine. However, when I try to run kubectl logs on some pods, I sometimes get "no space left on device".</p> <p>How can one manage storage used in those disks? I can't seem to find a way to SSH into those nodes as it seems to only be manageable via Azure CLI / web interface. Is there also a way to clean what takes up this space (I assume unused docker images would take place, but I was under the impression that those would get cleaned automatically...)</p>
<p>Generally, the AKS nodes just run the pods and other resources for you; the data should be stored in separate space, much like a remote storage server. In Azure, that means <a href="https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume" rel="nofollow noreferrer">managed disks</a> and <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">Azure File Share</a>. You can also store the growing data on the nodes themselves, but then you need to configure big storage for each node, and I don't think that's a good approach.</p> <p>To SSH into the AKS nodes, there are ways. One is to set a NAT rule manually in the load balancer for the node you want to SSH into. Another is to create a pod as a jump box, following the steps <a href="https://learn.microsoft.com/en-us/azure/aks/ssh" rel="nofollow noreferrer">here</a>.</p> <p>The last point is that AKS deletes unused images regularly and automatically. It's not recommended to delete the unused images manually.</p>
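<p>For pod data that should live off the OS disk, a dynamically provisioned Azure disk can be requested with a PVC like this (a sketch using the built-in <code>managed-premium</code> storage class; name and size are examples):</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk   # example name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
```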
<p>I found that my kubernetes cluster was sending reports to usage.projectcalico.org, how can this be disabled and how exactly is it using usage.projectcalico.org?</p>
<p>Felix is the Calico component that sends usage information.</p> <p>Felix can be <a href="https://docs.projectcalico.org/v3.7/reference/felix/configuration" rel="nofollow noreferrer">configured</a> to disable the usage ping in one of two ways.</p> <p>Set the <code>FELIX_USAGEREPORTINGENABLED</code> environment variable to <code>&quot;false&quot;</code> (it needs to be a string in yaml land!) in the <code>calico-node</code> DaemonSet.</p> <p>Alternatively, set the <code>UsageReportingEnabled</code> field in the <a href="https://docs.projectcalico.org/v3.7/reference/calicoctl/resources/felixconfig" rel="nofollow noreferrer">FelixConfiguration</a> resource to <code>false</code>. This could be in etcd or in the Kubernetes API depending on what store you use. Both are modifiable with <code>calicoctl</code>.</p> <pre><code>calicoctl patch felixConfiguration default \ --patch='{&quot;spec&quot;: {&quot;UsageReportingEnabled&quot;: false}}' </code></pre> <p>If you happen to be using kubespray, modifying this setting is a little harder as these variables are not exposed to Ansible, other than by manually modifying <a href="https://github.com/kubernetes-sigs/kubespray/blob/a2cf6816ce328032d9ad457b7c2cdb23d9519b1b/roles/network_plugin/calico/templates/calico-node.yml.j2" rel="nofollow noreferrer">templates</a> or <a href="https://github.com/kubernetes-sigs/kubespray/blob/a2cf6816ce328032d9ad457b7c2cdb23d9519b1b/roles/network_plugin/calico/tasks/install.yml#L144-L162" rel="nofollow noreferrer">yaml</a>.</p>
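<p>As a sketch, the env entry in the <code>calico-node</code> DaemonSet container spec would look like:</p>

```yaml
# fragment of the calico-node container spec
env:
  - name: FELIX_USAGEREPORTINGENABLED
    value: "false"   # must be a quoted string
```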
<p>I am stuck with a helm install of jenkins </p> <p>:( </p> <p>please help!</p> <p>I have predefined a storage class via:</p> <pre><code>$ kubectl apply -f generic-storage-class.yaml </code></pre> <p>with generic-storage-class.yaml:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: generic provisioner: kubernetes.io/aws-ebs parameters: type: gp2 zones: us-east-1a, us-east-1b, us-east-1c fsType: ext4 </code></pre> <p>I then define a PVC via:</p> <pre><code>$ kubectl apply -f jenkins-pvc.yaml </code></pre> <p>with jenkins-pvc.yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-pvc namespace: jenkins-project spec: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi </code></pre> <p>I can then see the PVC go into the BOUND status:</p> <pre><code>$ kubectl get pvc --all-namespaces NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE jenkins-project jenkins-pvc Bound pvc-a173294f-7cea-11e9-a90f-161c7e8a0754 20Gi RWO gp2 27m </code></pre> <p>But when I try to Helm install jenkins via:</p> <pre><code>$ helm install --name jenkins \ --set persistence.existingClaim=jenkins-pvc \ stable/jenkins --namespace jenkins-project </code></pre> <p>I get this output:</p> <pre><code>NAME: jenkins LAST DEPLOYED: Wed May 22 17:07:44 2019 NAMESPACE: jenkins-project STATUS: DEPLOYED RESOURCES: ==&gt; v1/ConfigMap NAME DATA AGE jenkins 5 0s jenkins-tests 1 0s ==&gt; v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE jenkins 0/1 1 0 0s ==&gt; v1/PersistentVolumeClaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE jenkins Pending gp2 0s ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE jenkins-6c9f9f5478-czdbh 0/1 Pending 0 0s ==&gt; v1/Secret NAME TYPE DATA AGE jenkins Opaque 2 0s ==&gt; v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE jenkins LoadBalancer 10.100.200.27 &lt;pending&gt; 8080:31157/TCP 0s jenkins-agent ClusterIP 10.100.221.179 &lt;none&gt; 50000/TCP 0s NOTES: 1. 
Get your 'admin' user password by running: printf $(kubectl get secret --namespace jenkins-project jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo 2. Get the Jenkins URL to visit by running these commands in the same shell: NOTE: It may take a few minutes for the LoadBalancer IP to be available. You can watch the status of by running 'kubectl get svc --namespace jenkins-project -w jenkins' export SERVICE_IP=$(kubectl get svc --namespace jenkins-project jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}") echo http://$SERVICE_IP:8080/login 3. Login with the password from step 1 and the username: admin For more information on running Jenkins on Kubernetes, visit: https://cloud.google.com/solutions/jenkins-on-container-engine </code></pre> <p>where I see helm creating a new PersistentVolumeClaim called jenkins.</p> <p>How come helm did not use the "existingClaim"?</p> <p>I see this as the only helm values for the jenkins release</p> <pre><code>$ helm get values jenkins persistence: existingClaim: jenkins-pvc </code></pre> <p>and indeed it has just made its own PVC instead of using the pre-created one.</p> <pre><code>kubectl get pvc --all-namespaces NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE jenkins-project jenkins Bound pvc-a9caa3ba-7cf1-11e9-a90f-161c7e8a0754 8Gi RWO gp2 6m11s jenkins-project jenkins-pvc Bound pvc-a173294f-7cea-11e9-a90f-161c7e8a0754 20Gi RWO gp2 56m </code></pre> <p>I feel like I am close but missing something basic. Any ideas?</p>
<p>So per Matthew L Daniel's comment I ran <code>helm repo update</code> and then re-ran the helm install command. This time it did not re-create the PVC but instead used the pre-made one. </p> <p>My previous jenkins chart version was "jenkins-0.35.0"</p> <p>For anyone wondering what the deployment looked like:</p> <pre><code>Name: jenkins Namespace: jenkins-project CreationTimestamp: Wed, 22 May 2019 22:03:33 -0700 Labels: app.kubernetes.io/component=jenkins-master app.kubernetes.io/instance=jenkins app.kubernetes.io/managed-by=Tiller app.kubernetes.io/name=jenkins helm.sh/chart=jenkins-1.1.21 Annotations: deployment.kubernetes.io/revision: 1 Selector: app.kubernetes.io/component=jenkins-master,app.kubernetes.io/instance=jenkins Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: Recreate MinReadySeconds: 0 Pod Template: Labels: app.kubernetes.io/component=jenkins-master app.kubernetes.io/instance=jenkins app.kubernetes.io/managed-by=Tiller app.kubernetes.io/name=jenkins helm.sh/chart=jenkins-1.1.21 Annotations: checksum/config: 867177d7ed5c3002201650b63dad00de7eb1e45a6622e543b80fae1f674a99cb Service Account: jenkins Init Containers: copy-default-config: Image: jenkins/jenkins:lts Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sh /var/jenkins_config/apply_config.sh Limits: cpu: 2 memory: 4Gi Requests: cpu: 50m memory: 256Mi Environment: ADMIN_PASSWORD: &lt;set to the key 'jenkins-admin-password' in secret 'jenkins'&gt; Optional: false ADMIN_USER: &lt;set to the key 'jenkins-admin-user' in secret 'jenkins'&gt; Optional: false Mounts: /tmp from tmp (rw) /usr/share/jenkins/ref/plugins from plugins (rw) /usr/share/jenkins/ref/secrets/ from secrets-dir (rw) /var/jenkins_config from jenkins-config (rw) /var/jenkins_home from jenkins-home (rw) /var/jenkins_plugins from plugin-dir (rw) Containers: jenkins: Image: jenkins/jenkins:lts Ports: 8080/TCP, 50000/TCP Host Ports: 0/TCP, 0/TCP Args: 
--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD) --argumentsRealm.roles.$(ADMIN_USER)=admin Limits: cpu: 2 memory: 4Gi Requests: cpu: 50m memory: 256Mi Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3 Environment: JAVA_OPTS: JENKINS_OPTS: JENKINS_SLAVE_AGENT_PORT: 50000 ADMIN_PASSWORD: &lt;set to the key 'jenkins-admin-password' in secret 'jenkins'&gt; Optional: false ADMIN_USER: &lt;set to the key 'jenkins-admin-user' in secret 'jenkins'&gt; Optional: false Mounts: /tmp from tmp (rw) /usr/share/jenkins/ref/plugins/ from plugin-dir (rw) /usr/share/jenkins/ref/secrets/ from secrets-dir (rw) /var/jenkins_config from jenkins-config (ro) /var/jenkins_home from jenkins-home (rw) Volumes: plugins: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; tmp: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; jenkins-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: jenkins Optional: false plugin-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; secrets-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; jenkins-home: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: jenkins-pvc ReadOnly: false Conditions: Type Status Reason ---- ------ ------ Available False MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: jenkins-86dcf94679 (1/1 replicas created) NewReplicaSet: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 42s deployment-controller Scaled up replica set jenkins-86dcf94679 to 1 </code></pre>
<p>So, I am trying to connect to my scylla instance in GKE with cqlsh by opening a connection with kubectl. I am however stumbling into some weird issues I cannot get my head around.</p> <p>I am running scylla on GKE; it is basically a cassandra knockoff that is supposed to run way faster than cassandra itself. To access scylla, I want to be able to connect to the db with the kubectl port-forward command so I can hook up external tools such as TablePlus. When I run <code>kubectl port-forward pod/scylla-0 -n scylla 9042</code> I expect the port to be accessible from my local machine; however, when I try to connect with cqlsh locally I get these error messages:</p> <pre><code>from cqlsh: Connection error: ('Unable to connect to any servers', {'127.0.0.1': ConnectionShutdown('Connection to 127.0.0.1 was closed',)}) from kubectl: E0520 17:12:12.522329 51 portforward.go:400] an error occurred forwarding 9042 -&gt; 9042: error forwarding port 9042 to pod &lt;some id&gt;, uid : exit status 1: 2019/05/20 15:12:12 socat[998972] E connect(5, AF=2 127.0.0.1:9042, 16): Connection refused </code></pre> <p>I've also tried to forward the service directly, with similar results.</p> <p>What I personally believe is the weird part of this is that when I expose scylla with a loadbalancer, I can connect to it perfectly fine; I can also use JConsole when I forward the JMX port for scylla, which is why I am having so many headaches over this.</p>
<p>The reason it is failing is because port-forward only binds to localhost (127.0.0.1). If you go inside the pod with </p> <pre><code>kubectl exec -ti pod/scylla-0 -n scylla -- /bin/bash yum install net-tools -y netstat -nlup | grep scylla </code></pre> <p>You will notice that scylla is binding to the container pod IP and not 127.0.0.1 so you get a connection refused.</p> <p>Try: <code>kubectl port-forward --address your_pod_ip pod/mypod 9042:9042</code></p> <p>Alternatively: <code>kubectl port-forward --address your_service_ip service/myservice 9042:9042</code></p> <p>I didn't try it myself but I think it can work. Please let me know.</p>
<p>While running a spark job with a Kubernetes cluster, we get the following error:</p> <pre><code>2018-11-30 14:00:47 INFO DAGScheduler:54 - Resubmitted ShuffleMapTask(1, 58), so marking it as still running. 2018-11-30 14:00:47 WARN TaskSetManager:66 - Lost task 310.0 in stage 1.0 (TID 311, 10.233.71.29, executor 3): ExecutorLostFailure (executor 3 exited caused by one of the running tasks) Reason: The executor with id 3 exited with exit code -1. The API gave the following brief reason: Evicted The API gave the following message: The node was low on resource: ephemeral-storage. Container executor was using 515228Ki, which exceeds its request of 0. The API gave the following container statuses: </code></pre> <p>How to configure the job so we can increase the ephemeral storage size of each container ?</p> <p>We use spark 2.4.0 and Kubernetes 1.12.1</p> <p>The spark submit option is as follow</p> <pre><code>--conf spark.local.dir=/mnt/tmp \ --conf spark.executor.instances=4 \ --conf spark.executor.cores=8 \ --conf spark.executor.memory=100g \ --conf spark.driver.memory=4g \ --conf spark.driver.cores=1 \ --conf spark.kubernetes.memoryOverheadFactor=0.1 \ --conf spark.kubernetes.container.image=spark:2.4.0 \ --conf spark.kubernetes.namespace=visionlab \ --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \ --conf spark.kubernetes.container.image.pullPolicy=Always \ --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.myvolume.options.claimName=pvc \ --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.myvolume.mount.path=/mnt/ \ --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.myvolume.mount.readOnly=false \ --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.myvolume.options.claimName=pvc \ --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.myvolume.mount.path=/mnt/ \ --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.myvolume.mount.readOnly=false </code></pre>
<p>As @Rico says, there's no way to set ephemeral storage limits via driver configurations as of spark 2.4.3. Instead, you can set ephemeral storage limits for all new pods in your namespace using a <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/" rel="nofollow noreferrer">LimitRange</a>:</p> <pre><code>apiVersion: v1 kind: LimitRange metadata: name: ephemeral-storage-limit-range spec: limits: - default: ephemeral-storage: 8Gi defaultRequest: ephemeral-storage: 1Gi type: Container </code></pre> <p>This applies the defaults to executors created in the LimitRange's namespace:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pod spark-kub-1558558662866-exec-67 -o json | jq '.spec.containers[0].resources.requests."ephemeral-storage"' "1Gi" </code></pre> <p>It's a little heavy-handed because it applies the default to all containers in your namespace, but it may be a solution if your workload is uniform.</p>
<p>I am running an application on k8s.</p> <p>My Dockerfile is like:</p> <pre><code>FROM python:3.5 AS python-build ADD . /test WORKDIR /test </code></pre> <p>I am doing everything in the test directory; all my files are inside this test folder.</p> <p>When I go inside the pod and check the file structure, it's like <code>/var /usr /test /bin</code>,</p> <p>so I want to put the whole test folder on a PVC.</p> <p>Inside test the file structure is like <code>/app /data /history</code>.</p> <p>So can I attach that folder to a PVC using mountPath?</p> <p>Is it possible to have two mount paths but one PVC?</p>
<p>As I understand, you want to include your test directory as a mount path in your PVC. To answer that question: yes, you can do it by providing the directory in the hostPath, not the mount path. As explained in the documentation:</p> <blockquote> <p>A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.</p> </blockquote> <p>and a mount path is -</p> <blockquote> <p>The location in pod where the volume should be mounted.</p> </blockquote> <p>So, if you want to mount the /test folder from your host system, you need to provide it in the PV like below:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: task-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/test" </code></pre> <p>and you can use this PV to claim a volume with a PVC and use the <strong>mountPath</strong> to mount that volume into your pod.</p> <p>To answer your second question: yes, you can have multiple mount paths for a single PVC. An example of this which works is:</p> <pre><code> "containers": [ { ..., "volumeMounts": [ { "mountPath": "/mnt/1", "name": "v1", "subPath": "data/1" }, { "mountPath": "/mnt/2", "name": "v1", "subPath": "data/2" } ] } ], ..., "volumes": [ { "name": "v1", "persistentVolumeClaim": { "claimName": "testvolume" } } ] } } </code></pre>
<p>I have a simple 3-node cluster created using AKS. Everything has been going fine for 3 months. However, I'm starting to have some disk space usage issues that seem related to the Os disks attached to each nodes.</p> <p>I have no error in kubectl describe node and all disk-related checks are fine. However, when I try to run kubectl logs on some pods, I sometimes get "no space left on device".</p> <p>How can one manage storage used in those disks? I can't seem to find a way to SSH into those nodes as it seems to only be manageable via Azure CLI / web interface. Is there also a way to clean what takes up this space (I assume unused docker images would take place, but I was under the impression that those would get cleaned automatically...)</p>
<p>Things you can do to fix this:</p> <ol> <li>Create the AKS cluster with a bigger OS disk (I usually use 128gb)</li> <li>Upgrade AKS to a newer version (this would replace all the existing vms with new ones, so they won't have stale docker images on them)</li> <li>Manually clean up space on the nodes</li> <li>Manually extend the OS disk on the nodes (will only work until you scale/upgrade the cluster)</li> </ol> <p>I'd probably go with option 1, else this problem would haunt you forever :(</p>
<p>I create a custom object using java-client for Kubernetes API in the beforeAll method of the integration tests. After the custom object is created the pods get created too. However, it only works when I set Thread.sleep for a few seconds. Without it, the object is created and then all the tests executed. I also defined watch on custom object statuses, but it does not help either. Is there any other way (except for Thread.sleep) to hold for a few seconds until the pods get created?</p> <p>My code for the custom object creation:</p> <pre><code>def createWatchCustomObjectsCalls() = { client.getHttpClient.setReadTimeout(0, TimeUnit.SECONDS) val watchCalls: Watch[V1Namespace] = Watch.createWatch(client, apiInstance.listNamespacedCustomObjectCall(crdGroup, crdVersion, crdNamespace, crdPlural, "true", null, null, true,null, null), new TypeToken[Watch.Response[V1Namespace]]{}.getType) watchCalls } override def beforeAll(): Unit = { val creationResourcePath = Source.getClass.getResource("/" + httpServerScriptName).getPath val serverStartupProcessBuilder = Seq("sh", creationResourcePath, "&amp;") #&gt; Console.out serverStartupProcessBuilder.run() val body = convertYamlToJson() val sparkAppCreation = apiInstance.createNamespacedCustomObject(crdGroup, crdVersion, crdNamespace, crdPlural, body,"true") println(sparkAppCreation) } </code></pre>
<p>You can synchronously check in a while loop whether the pods have been created:</p> <pre><code>var podsReady = false while (!podsReady) { val currentPodList = getCoreV1Api() .listPodForAllNamespaces(null /* _continue */, null /* fieldSelector */, null /* includeUninitialized */, null /* labelSelector */, null /* limit */, "false" /* pretty */, null /* resourceVersion */, null /* timeoutSeconds */, false /* watch */) .getItems() // inspect currentPodList and set podsReady = true once the expected pods are Running Thread.sleep(1000) // back off between polls; consider an overall timeout too } </code></pre>
<p>I am looking for a way to limit the number of pids in the Kubernetes pod.</p> <p>The following issue seems to be closed (already implemented) long time ago.</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/43783" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/43783</a></p> <p>But nothing seems to be there in the reference yet..</p> <p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/</a></p> <p>The pull request also seems to be merged</p> <p><a href="https://github.com/kubernetes/kubernetes/commit/bf111161b7aa4a47cc42ee6061b6bd3e45872cc4" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/commit/bf111161b7aa4a47cc42ee6061b6bd3e45872cc4</a></p> <p>I would like to know if we can use this feature now. If so, how and where to set it in the yaml file?</p>
<p>The parameter (PodPidsLimit) is part of the kubelet configuration: <a href="https://godoc.org/k8s.io/kubernetes/pkg/kubelet/apis/config#KubeletConfiguration" rel="nofollow noreferrer">https://godoc.org/k8s.io/kubernetes/pkg/kubelet/apis/config#KubeletConfiguration</a></p> <p>To see the current configuration and whether the parameter is available in your version: <a href="https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/#generate-the-configuration-file" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/#generate-the-configuration-file</a></p> <p>Keep in mind that this means you can't configure the limit per pod; you set one limit that applies to all the pods on the node.</p>
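<p>For reference, a minimal sketch of what that field looks like in a kubelet configuration file (the field name comes from the KubeletConfiguration type linked above; the value here is just an example):</p>

```yaml
# KubeletConfiguration fragment -- podPidsLimit caps the number of PIDs
# each pod on this node may use
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 1024
```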
<p>I've got a two node Kubernetes EKS cluster which is running "v1.12.6-eks-d69f1"</p> <pre><code>Amazon VPC CNI Plugin for Kubernetes version: amazon-k8s-cni:v1.4.1 CoreDNS version: v1.1.3 KubeProxy: v1.12.6 </code></pre> <p>There are two CoreDNS pods running on the cluster.</p> <p>The problem I have is that my pods are resolving internal DNS names intermittently. (Resolution of external DNS names work just fine)</p> <pre><code>root@examplecontainer:/# curl http://elasticsearch-dev.internaldomain.local:9200/ curl: (6) Could not resolve host: elasticsearch-dev.internaldomain.local </code></pre> <p>elasticsearch-dev.internaldomain.local is registered on an AWS Route53 Internal Hosted Zone. The above works intermittenly, if I fire five requests, two of them would resolve correctly and the rest would fail.</p> <p>These are the contents of the /etc/resolv.conf file on the examplecontainer above:</p> <pre><code>root@examplecontainer:/# cat /etc/resolv.conf nameserver 172.20.0.10 search default.svc.cluster.local svc.cluster.local cluster.local eu-central-1.compute.internal options ndots:5 </code></pre> <p>Any ideas why this might be happening?</p>
<p>I fixed this issue by switching from a custom "DHCP option set" to the default "DHCP option set" provided by AWS. I created the custom "DHCP option set" months ago and assigned it to the VPC where the EKS cluster is running...</p> <p>How did I get to the bottom of this?</p> <p>After running "kubectl get events -n kube-system", I realised the following:</p> <pre><code>Warning DNSConfigForming 17s (x15 over 14m) kubelet, ip-10-4-9-155.us-west-1.compute.internal Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 10.4.8.2 8.8.8.8 8.8.4.4 </code></pre> <p>8.8.8.8 and 8.8.4.4 were injected by the troublesome "DHCP options set" that I created. And I think that the reason why my services were resolving internal DNS names intermittently was because the CoreDNS service was internally forwarding DNS requests to 10.4.8.2, 8.8.4.4, 8.8.8.8 in a round robin fashion. Since the last 2 DNS servers don't know about my Route53 internal hosted zone DNS records, the resolution failed intermittently.</p> <p>Note 10.4.8.2 is the default AWS nameserver.</p> <p>As soon as I switched to the default "DHCP option set" provided by AWS, the EKS services could resolve my internal DNS names consistently.</p> <p>I hope this will help someone in the future.</p>
<p>How to debug why it's status is CrashLoopBackOff?</p> <p>I am not using minikube , working on Aws Kubernetes instance.</p> <p>I followed this tutorial. <a href="https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample" rel="nofollow noreferrer">https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample</a></p> <p>When I do </p> <pre><code> kubectl create -f specs/spring-boot-app.yml </code></pre> <p>and check status by </p> <pre><code> kubectl get pods </code></pre> <p>it gives </p> <pre><code> spring-boot-postgres-sample-67f9cbc8c-qnkzg 0/1 CrashLoopBackOff 14 50m </code></pre> <p>Below Command </p> <pre><code> kubectl describe pods spring-boot-postgres-sample-67f9cbc8c-qnkzg </code></pre> <p>gives </p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 3m18s (x350 over 78m) kubelet, ip-172-31-11-87 Back-off restarting failed container </code></pre> <p>Command <strong>kubectl get pods --all-namespaces</strong> gives </p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE default constraintpod 1/1 Running 1 88d default postgres-78f78bfbfc-72bgf 1/1 Running 0 109m default rcsise-krbxg 1/1 Running 1 87d default spring-boot-postgres-sample-667f87cf4c-858rx 0/1 CrashLoopBackOff 4 110s default twocontainers 2/2 Running 479 89d kube-system coredns-86c58d9df4-kr4zj 1/1 Running 1 89d kube-system coredns-86c58d9df4-qqq2p 1/1 Running 1 89d kube-system etcd-ip-172-31-6-149 1/1 Running 8 89d kube-system kube-apiserver-ip-172-31-6-149 1/1 Running 1 89d kube-system kube-controller-manager-ip-172-31-6-149 1/1 Running 1 89d kube-system kube-flannel-ds-amd64-4h4x7 1/1 Running 1 89d kube-system kube-flannel-ds-amd64-fcvf2 1/1 Running 1 89d kube-system kube-proxy-5sgjb 1/1 Running 1 89d kube-system kube-proxy-hd7tr 1/1 Running 1 89d kube-system kube-scheduler-ip-172-31-6-149 1/1 Running 1 89d </code></pre> <p>Command <strong>kubectl logs spring-boot-postgres-sample-667f87cf4c-858rx</strong> doesn't print anything.</p>
<p>First of all I fixed my postgres deployment; there was an error, "pod has unbound PersistentVolumeClaims", which I fixed per this post <a href="https://stackoverflow.com/questions/52668938/pod-has-unbound-persistentvolumeclaims">pod has unbound PersistentVolumeClaims</a></p> <p>So now my postgres deployment is running. </p> <p><strong>kubectl logs spring-boot-postgres-sample-67f9cbc8c-qnkzg</strong> doesn't print anything, which means there is something wrong in the config file. <strong>kubectl describe pod spring-boot-postgres-sample-67f9cbc8c-qnkzg</strong> states that the container is terminated and the reason is completed. I fixed it by running the container indefinitely by adding </p> <pre><code> # Just sleep forever command: [ "sleep" ] args: [ "infinity" ] </code></pre> <p>So now my deployment is running. Then I exposed my service by </p> <pre><code>kubectl expose deployment spring-boot-postgres-sample --type=LoadBalancer --port=8080 </code></pre> <p>but couldn't get an External-IP, so I did </p> <pre><code>kubectl patch svc &lt;svc-name&gt; -n &lt;namespace&gt; -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}' </code></pre> <p>So I got my external IP as "172.31.71.218"</p> <p>But now the problem is that curl <a href="http://172.31.71.218:8080/" rel="nofollow noreferrer">http://172.31.71.218:8080/</a> is getting a timeout</p> <p>Did I do anything wrong?</p> <p>Here is my deployment.yml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: spring-boot-postgres-sample namespace: default spec: replicas: 1 template: metadata: name: spring-boot-postgres-sample labels: app: spring-boot-postgres-sample spec: containers: - name: spring-boot-postgres-sample command: [ "/bin/bash", "-ce", "tail -f /dev/null" ] env: - name: POSTGRES_USER valueFrom: configMapKeyRef: name: postgres-config key: postgres_user - name: POSTGRES_PASSWORD valueFrom: configMapKeyRef: name: postgres-config key: postgres_password - name: POSTGRES_HOST
valueFrom: configMapKeyRef: name: hostname-config key: postgres_host image: &lt;mydockerHUbaccount&gt;/spring-boot-postgres-on-k8s:v1 </code></pre> <p>Here is my postgres.yml</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: postgres-config namespace: default data: postgres_user: postgresuser postgres_password: password --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres spec: template: metadata: labels: app: postgres spec: volumes: - name: postgres-storage persistentVolumeClaim: claimName: postgres-pv-claim containers: - image: postgres name: postgres env: - name: POSTGRES_USER valueFrom: configMapKeyRef: name: postgres-config key: postgres_user - name: POSTGRES_PASSWORD valueFrom: configMapKeyRef: name: postgres-config key: postgres_password - name: PGDATA value: /var/lib/postgresql/data/pgdata ports: - containerPort: 5432 name: postgres volumeMounts: - name: postgres-storage mountPath: /var/lib/postgresql/data --- apiVersion: v1 kind: Service metadata: name: postgres spec: type: ClusterIP ports: - port: 5432 selector: app: postgres </code></pre> <p>Here is how I got the hostname-config map:</p> <pre><code>kubectl create configmap hostname-config --from-literal=postgres_host=$(kubectl get svc postgres -o jsonpath="{.spec.clusterIP}") </code></pre>
<p>In my <code>Azure DevOps</code> I added a <code>Docker Registry Service Connection</code> via the "Other" option (username and password).</p> <p>This service connection works in my <code>CI Pipeline</code> when push images via <code>docker compose</code>.</p> <p>But in my <code>CD Pipeline</code> (Release) pipeline, when I add the <code>Docker Registry Service Connection</code> in the Secrets section of my <code>Deploy to Kubernetes Task</code>.</p> <p>In <code>Azure DevOps</code> the <code>Deploy to Kubernetes Task</code> was processed successfully. But in the cluster the pods for the images from my <code>Azure Container Registry</code> show following error:</p> <blockquote> <p>Failed to pull image "xxx.azurecr.io/service.api:latest": [rpc error: code = Unknown desc = Error response from daemon: Get <a href="https://xxx.azurecr.io/v2/service.api/manifests/latest" rel="nofollow noreferrer">https://xxx.azurecr.io/v2/service.api/manifests/latest</a>: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get <a href="https://xxx.azurecr.io/v2/service.api/manifests/latest" rel="nofollow noreferrer">https://xxx.azurecr.io/v2/service.api/manifests/latest</a>: unauthorized: authentication required]</p> </blockquote> <p>How do I fix this error?</p>
<p>You need to configure Kubernetes with access to the private registry (the fact that you configured Azure DevOps to do that doesn't matter; it doesn't 'push' images to Kubernetes, it just issues commands). You can follow <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">this link</a> to do that.</p> <p>In short you need to do this:</p> <pre><code>kubectl create secret docker-registry regcred --docker-server=&lt;your-registry-server&gt; --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; </code></pre> <p>and then add ImagePullSecrets to your pod definition:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: &lt;your-private-image&gt; imagePullSecrets: - name: regcred </code></pre>
<p>I have been working on the deployment of Windows containers from Azure Container Registry to Azure Container Service with the Kubernetes orchestrator, and it was working fine previously. Now I'm trying to create an ACS Kubernetes cluster for Windows, but the create command is only creating a master node, and while deploying I'm getting the following error: <strong>No nodes are available that match all of the following predicates:: MatchNodeSelector (1)</strong></p> <p>I have followed this link <a href="https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough</a> to create the Windows-based Kubernetes cluster.</p> <p>This is the command I have used to create the cluster</p> <pre><code>az acs create --orchestrator-type=kubernetes \ --resource-group myResourceGroup \ --name=myK8sCluster \ --agent-count=2 \ --generate-ssh-keys \ --windows --admin-username azureuser \ --admin-password myPassword12 </code></pre> <p>As per the above documentation, the above command should create a cluster named myK8sCluster with one Linux master node and two Windows agent nodes.</p> <p>To verify the creation of the cluster I used the command below</p> <pre><code>kubectl get nodes NAME STATUS AGE VERSION k8s-master-98dc3136-0 Ready 5m v1.7.7 </code></pre> <p>According to the above output, only the Linux master node was created, not the two Windows agent nodes.</p> <p>But in my case I require the Windows agent nodes to deploy a Windows-based container in the cluster.</p> <p>So I assume that due to this I'm getting the following error while deploying: <strong>No nodes are available that match all of the following predicates:: MatchNodeSelector (1)</strong></p>
<p>As the documentation points out, ACS with a target of Kubernetes is deprecated. You want to use AKS (Azure Kubernetes as a Service). </p> <p>To go about this, start here: <a href="https://learn.microsoft.com/en-us/azure/aks/windows-container-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/windows-container-cli</a></p> <p>Make sure you have the <a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest" rel="nofollow noreferrer">latest version of the CLI</a> installed on your machine if you choose to do it locally, or use the <a href="https://learn.microsoft.com/en-us/azure/cloud-shell/overview" rel="nofollow noreferrer">Azure Cloud Shell</a>.</p> <p>Follow the guide on the rest of the steps as it will walk you through the commands.</p>
<p>I'm porting my dockerized app to kubernetes and I'm facing an issue creating a load balancer with aks:</p> <pre><code>The Service "lbalance" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767 </code></pre> <p>the configuration is pretty straightforward</p> <pre><code>apiVersion: v1 kind: Service metadata: name: lbalance spec: selector: app: lbalance ports: - protocol: TCP port: 80 targetPort: 80 nodePort: 80 name: http - protocol: TCP port: 443 targetPort: 443 nodePort: 443 name: https type: LoadBalancer </code></pre> <p>Behind that sits an haproxy with ssl termination toward the other services exposed within the cluster</p> <p>In my testing environment I had a property to control which port to open ( --service-node-port-range ) but I cannot find that property neither on the portal page nor on the Azure documentation.</p> <p>Is there a way to have a service on default ports or a recommended way to connect back to that Endpoint ports?</p>
<p>30000-32767 is the default NodePort range in Kubernetes. You have defined nodePort: 443, which is outside that range and therefore not supported, hence the error was thrown.</p> <p>Follow the steps below:</p> <ol> <li>replace NodePort with ClusterIP as service type</li> <li>deploy ingress controller</li> <li>deploy default backend</li> <li>create secret from dns certificates ( for https )</li> <li>deploy Ingress Rule ( include the secrets ) to route the user's requests to the backend service. </li> </ol>
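<p>A minimal sketch of steps 1 and 5 combined, reusing the service name from the question; the hostname, path, and secret name are placeholders, not taken from the question:</p>

```yaml
# Step 1: ClusterIP service in front of the pods
apiVersion: v1
kind: Service
metadata:
  name: lbalance
spec:
  type: ClusterIP
  selector:
    app: lbalance
  ports:
  - port: 80
    targetPort: 80
---
# Step 5: Ingress rule terminating TLS with a pre-created secret
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: lbalance-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: tls-secret   # step 4, e.g. kubectl create secret tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: lbalance
          servicePort: 80
```

<p>The ingress controller (step 2) then listens on the standard 80/443 via its own LoadBalancer service and routes to your ClusterIP services.</p>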
<p>I'm porting my dockerized app to kubernetes and I'm facing an issue creating a load balancer with aks:</p> <pre><code>The Service "lbalance" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767 </code></pre> <p>the configuration is pretty straightforward</p> <pre><code>apiVersion: v1 kind: Service metadata: name: lbalance spec: selector: app: lbalance ports: - protocol: TCP port: 80 targetPort: 80 nodePort: 80 name: http - protocol: TCP port: 443 targetPort: 443 nodePort: 443 name: https type: LoadBalancer </code></pre> <p>Behind that sits an haproxy with ssl termination toward the other services exposed within the cluster</p> <p>In my testing environment I had a property to control which port to open ( --service-node-port-range ) but I cannot find that property neither on the portal page nor on the Azure documentation.</p> <p>Is there a way to have a service on default ports or a recommended way to connect back to that Endpoint ports?</p>
<p>You need to remove the <code>nodePort</code> declaration from your yaml and it will get allocated by Kubernetes from the pool mentioned in the error text (the only one you can use).</p> <pre><code>apiVersion: v1 kind: Service metadata: name: lbalance spec: selector: app: lbalance ports: - protocol: TCP port: 80 targetPort: 80 name: http - protocol: TCP port: 443 targetPort: 443 name: https type: LoadBalancer </code></pre> <p>This way your service will be available on 80/443 and things will work as they should.</p>
<p>I am attempting to protect a service's status page with an oauth2_proxy, using Azure AD as the external auth provider. Currently if I browse to the public url of the app (<a href="https://sub.domain.com/service/hangfire" rel="nofollow noreferrer">https://sub.domain.com/service/hangfire</a>) I got a 504 gateway timeout, where it should be directing me to authenticate.</p> <p>I had been mostly following this guide for reference: <a href="https://msazure.club/protect-kubernetes-webapps-with-azure-active-directory-aad-authentication/" rel="nofollow noreferrer">https://msazure.club/protect-kubernetes-webapps-with-azure-active-directory-aad-authentication/</a></p> <p>If I disable the annotations that direct the authentication, I can get to the public status page without a problem. If I browse to <a href="https://sub.domain.com/oauth2" rel="nofollow noreferrer">https://sub.domain.com/oauth2</a>, I get a prompt to authenticate with my provider, which I would expect. I am not sure where the issue lies in the ingress config but I was unable to find any similar cases to this online, stackoverflow or otherwise.</p> <p>In this case, everything (oauth deployment, service, and ingress rules) lives in a 'dev' namespace except the actual ingress deployment, which lives in its own namespace. 
I don't suspect this makes a difference, but SSL termination is handled by a gateway outside the cluster.</p> <p>oauth2 deployment:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: oauth2-proxy spec: replicas: 1 selector: matchLabels: app: oauth2-proxy template: metadata: labels: app: oauth2-proxy spec: containers: - name: oauth2-proxy image: quay.io/pusher/oauth2_proxy:v3.2.0 imagePullPolicy: IfNotPresent args: - --provider=azure - --email-domain=domain.com - --upstream=http://servicename - --http-address=0.0.0.0:4180 - --azure-tenant=id - --client-id=id - --client-secret=number env: - name: OAUTH2_PROXY_COOKIE_SECRET value: secret ports: - containerPort: 4180 protocol : TCP --- apiVersion: v1 kind: Service metadata: labels: app: oauth2-proxy name: oauth2-proxy spec: ports: - name: http port: 4180 protocol: TCP targetPort: 4180 selector: app: oauth2-proxy </code></pre> <p>Ingress rules:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: service-ingress1 annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/auth-url: https://sub.domain.com/oauth2/auth" nginx.ingress.kubernetes.io/auth-signin: https://sub.domain.com/oauth2/start?rd=$https://sub.domain.com/service/hangfire" spec: rules: - host: sub.domain.com http: paths: - path: /service/hangfire backend: serviceName: service servicePort: 80 --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: service-oauth2-proxy annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: sub.domain.com http: paths: - path: /oauth2 backend: serviceName: oauth2-proxy servicePort: 4180 </code></pre> <p>I am getting 504 errors when I browse to the url but I do not see any errors in the ingress pods.</p>
<p>I ended up finding the resolution here: <a href="https://github.com/helm/charts/issues/5958" rel="nofollow noreferrer">https://github.com/helm/charts/issues/5958</a></p> <p>I had to use the internal service address for the auth-url, which I had not seen mentioned anywhere else.</p> <pre><code>nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth </code></pre>
<p>We have a J2EE application (Wildfly) running behind an Apache web server currently running on Amazon EC2 instances. We are planning on migrating this to a Kubernetes (EKS) platform using Docker images. However, we were curious about best practices. Should we create a Docker container with both the web and app server running within it, as seems common, or are there advantages of creating separate images to house both the web server and another housing the app server?</p>
<p>Create a container per main process. One for the web server, one for the app server. </p> <p>If the containers will always have a one-to-one relationship and need to run together, the containers can be scheduled in the same <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">Pod</a> (or as @sfgroups mentioned, an ingress controller may take care of your web server needs).</p> <p>Kubernetes is a workload scheduler and benefits from knowing the state of the processes it runs. To run multiple processes in a container you need to add a layer of process management, usually starting with backgrounding in a script using <code>./app &amp;</code>, running into issues, then some type of <code>init</code> system like <a href="https://github.com/just-containers/s6-overlay" rel="nofollow noreferrer">s6</a></p> <pre><code>container-runtime c-r c-r | | | init VS web app / \ web app </code></pre> <p>If you start adding layers of process management in between Kubernetes and the processes being managed, the state Kubernetes can detect starts to become fuzzy. </p> <p>What happens when the web is down but the app is up? How do you manage logs from two processes? How do you debug a failing process when the init stays up and Kubernetes thinks all is good? There are a number of things that start becoming custom solutions rather than making use of the functionality Kubernetes already supplies.</p>
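<p>If the two processes really must live and die together, a sketch of a two-container Pod; the image names here are illustrative placeholders, not a recommendation:</p>

```yaml
# One Pod, two containers: each process gets its own container, so
# Kubernetes sees the state (and logs) of each one independently.
# Containers in the same Pod share a network namespace, so the web
# server can reach the app server on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-and-app
spec:
  containers:
  - name: web
    image: httpd:2.4          # Apache front end (placeholder)
    ports:
    - containerPort: 80
  - name: app
    image: jboss/wildfly      # app server (placeholder)
    ports:
    - containerPort: 8080
```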
<p>How to debug why it's status is CrashLoopBackOff?</p> <p>I am not using minikube , working on Aws Kubernetes instance.</p> <p>I followed this tutorial. <a href="https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample" rel="nofollow noreferrer">https://github.com/mkjelland/spring-boot-postgres-on-k8s-sample</a></p> <p>When I do </p> <pre><code> kubectl create -f specs/spring-boot-app.yml </code></pre> <p>and check status by </p> <pre><code> kubectl get pods </code></pre> <p>it gives </p> <pre><code> spring-boot-postgres-sample-67f9cbc8c-qnkzg 0/1 CrashLoopBackOff 14 50m </code></pre> <p>Below Command </p> <pre><code> kubectl describe pods spring-boot-postgres-sample-67f9cbc8c-qnkzg </code></pre> <p>gives </p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 3m18s (x350 over 78m) kubelet, ip-172-31-11-87 Back-off restarting failed container </code></pre> <p>Command <strong>kubectl get pods --all-namespaces</strong> gives </p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE default constraintpod 1/1 Running 1 88d default postgres-78f78bfbfc-72bgf 1/1 Running 0 109m default rcsise-krbxg 1/1 Running 1 87d default spring-boot-postgres-sample-667f87cf4c-858rx 0/1 CrashLoopBackOff 4 110s default twocontainers 2/2 Running 479 89d kube-system coredns-86c58d9df4-kr4zj 1/1 Running 1 89d kube-system coredns-86c58d9df4-qqq2p 1/1 Running 1 89d kube-system etcd-ip-172-31-6-149 1/1 Running 8 89d kube-system kube-apiserver-ip-172-31-6-149 1/1 Running 1 89d kube-system kube-controller-manager-ip-172-31-6-149 1/1 Running 1 89d kube-system kube-flannel-ds-amd64-4h4x7 1/1 Running 1 89d kube-system kube-flannel-ds-amd64-fcvf2 1/1 Running 1 89d kube-system kube-proxy-5sgjb 1/1 Running 1 89d kube-system kube-proxy-hd7tr 1/1 Running 1 89d kube-system kube-scheduler-ip-172-31-6-149 1/1 Running 1 89d </code></pre> <p>Command <strong>kubectl logs spring-boot-postgres-sample-667f87cf4c-858rx</strong> doesn't print anything.</p>
<p>Why don't you...</p> <ol> <li><p>run a dummy container (run an endless sleep command)</p></li> <li><p>kubectl exec -it &lt;pod-name&gt; -- bash</p></li> <li><p>Run the program directly and have a look at the logs directly. </p></li> </ol> <p>It's an easier form of debugging on K8s.</p>
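<p>A sketch of step 1 — the same image from the question, but with the entrypoint replaced by an endless sleep so the container stays up and you can exec in; the pod name here is made up:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-sleeper
spec:
  containers:
  - name: debug
    image: <mydockerHUbaccount>/spring-boot-postgres-on-k8s:v1
    command: ["sleep", "infinity"]
```

<p>Then <code>kubectl exec -it debug-sleeper -- bash</code> and launch the application by hand to see its output directly.</p>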
<p>I've dockerized a Flask app, using gunicorn to serve it. The last line of my Dockerfile is:</p> <pre><code>CMD source activate my_env &amp;&amp; gunicorn --timeout 333 --bind 0.0.0.0:5000 app:app </code></pre> <p>When running the app locally – either straight in my console, without docker, or with </p> <pre><code>docker run -dit \ --name my-app \ --publish 5000:5000 \ my-app:latest </code></pre> <p>It boots up fine. I get a log like:</p> <pre><code>[2018-12-04 19:32:30 +0000] [8] [INFO] Starting gunicorn 19.7.1 [2018-12-04 19:32:30 +0000] [8] [INFO] Listening at: http://0.0.0.0:5000 (8) [2018-12-04 19:32:30 +0000] [8] [INFO] Using worker: sync [2018-12-04 19:32:30 +0000] [16] [INFO] Booting worker with pid: 16 &lt;my app's output&gt; </code></pre> <p>When running the same image in <code>k8s</code> I get</p> <pre><code>[2018-12-10 21:09:42 +0000] [5] [INFO] Starting gunicorn 19.7.1 [2018-12-10 21:09:42 +0000] [5] [INFO] Listening at: http://0.0.0.0:5000 (5) [2018-12-10 21:09:42 +0000] [5] [INFO] Using worker: sync [2018-12-10 21:09:42 +0000] [13] [INFO] Booting worker with pid: 13 [2018-12-10 21:10:52 +0000] [16] [INFO] Booting worker with pid: 16 [2018-12-10 21:10:53 +0000] [19] [INFO] Booting worker with pid: 19 [2018-12-10 21:14:40 +0000] [22] [INFO] Booting worker with pid: 22 [2018-12-10 21:16:14 +0000] [25] [INFO] Booting worker with pid: 25 [2018-12-10 21:16:25 +0000] [28] [INFO] Booting worker with pid: 28 &lt;etc&gt; </code></pre> <p>My k8s deployment yaml looks like</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: my-deployment spec: replicas: 1 selector: matchLabels: app: my-app template: metadata: labels: app: my-app spec: imagePullSecrets: - name: regcred containers: - name: my-frontend image: my-registry/my-frontend:latest ports: - containerPort: 80 - name: my-backend image: my-registry/my-backend:latest ports: - containerPort: 5000 </code></pre> <p>Here, the container in question is <code>my-backend</code>. 
Any ideas why this is happening?</p> <p>Update: As I wrote this, the events list that is printed with <code>kubectl describe pods</code> was updated with the following:</p> <pre><code>Warning FailedMount 9m55s kubelet, minikube MountVolume.SetUp failed for volume "default-token-k2shm" : Get https://localhost:8443/api/v1/namespaces/default/secrets/default-token-k2shm: net/http: TLS handshake timeout Warning FailedMount 9m53s (x2 over 9m54s) kubelet, minikube MountVolume.SetUp failed for volume "default-token-k2shm" : secrets "default-token-k2shm" is forbidden: User "system:node:minikube" cannot get secrets in the namespace "default": no path found to object Normal SuccessfulMountVolume 9m50s kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-k2shm" </code></pre> <p>Not sure if it's relevant to my issue</p>
<p>I solved this by adding resources under the container - mine needed more memory.</p> <pre><code>resources: requests: memory: "512Mi" cpu: 0.1 limits: memory: "1024Mi" cpu: 1.0 </code></pre> <p>Hope that helps.</p>
<p>I have one configuration file which as following. This file is a configmap and will be mounted and read by my app. The problem here is that this configuration file has one property with my db password. And I don't want to it to be exposed. So is there anyway to inject kubernetes secret into such configuration file. Thanks</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8" standalone="no"?&gt; &lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt; &lt;configuration&gt; &lt;property&gt; &lt;name&gt;javax.jdo.option.ConnectionUserName&lt;/name&gt; &lt;value&gt;hive&lt;/value&gt; &lt;/property&gt; &lt;property&gt; &lt;name&gt;javax.jdo.option.ConnectionPassword&lt;/name&gt; &lt;value&gt;my_db_password&lt;/value&gt; &lt;/property&gt; </code></pre>
<p>You can use a combination of an init container and a shared volume for this, if you don't want to expose the secret to the application container directly.</p> <p>The init container uses the secret to create the configuration file from a template (e.g. <code>sed</code> replacing a placeholder) and places the file in a shared volume. The application container uses the volume to retrieve the file. (Given that you can configure the path where the application expects the configuration file.)</p> <p>The other option is to simply use the secret as an environment variable for your application and retrieve it separately from the general configuration.</p>
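<p>A minimal sketch of the substitution step the init container would run; the <code>__DB_PASSWORD__</code> placeholder and the variable name are made up, and in the real init container the value would come from the mounted secret (e.g. an env var set via <code>secretKeyRef</code>) rather than being hard-coded:</p>

```shell
# Replace a placeholder in the config template with the secret value,
# then write the result to the shared volume (here we just print it).
DB_PASSWORD='s3cret'
sed "s|__DB_PASSWORD__|${DB_PASSWORD}|" <<'EOF'
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>__DB_PASSWORD__</value>
</property>
EOF
```

<p>In the Pod spec the init container would redirect this output into a file on an <code>emptyDir</code> volume that the application container also mounts.</p>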
<p>I am trying to install kubernetes with kubeadm in my laptop which has <strong>Ubuntu 16.04</strong>. I have disabled swap, since kubelet does not work with swap on. The command I used is :</p> <p><code>swapoff -a</code></p> <p>I also commented out the reference to swap in <code>/etc/fstab</code>.</p> <pre><code># /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # &lt;file system&gt; &lt;mount point&gt; &lt;type&gt; &lt;options&gt; &lt;dump&gt; &lt;pass&gt; # / was on /dev/sda1 during installation UUID=1d343a19-bd75-47a6-899d-7c8bc93e28ff / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation #UUID=d0200036-b211-4e6e-a194-ac2e51dfb27d none swap sw 0 0 </code></pre> <p>I confirmed swap is turned off by running the following:</p> <pre><code>free -m total used free shared buff/cache available Mem: 15936 2108 9433 954 4394 12465 Swap: 0 0 0 </code></pre> <p>When I start kubeadm, I get the following error:</p> <pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16 [init] Using Kubernetes version: v1.14.2 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR Swap]: running with swap on is not supported. Please disable swap [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` </code></pre> <p>I also tried restarting my laptop, but I get the same error. What could the reason be?</p>
<p>Below is the root cause:</p> <p>detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".</p> <p>You need to update the Docker cgroup driver.</p> <p>Follow the fix below:</p> <pre><code>cat &gt; /etc/docker/daemon.json &lt;&lt;EOF { "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "storage-driver": "overlay2", "storage-opts": [ "overlay2.override_kernel_check=true" ] } EOF mkdir -p /etc/systemd/system/docker.service.d # Restart Docker systemctl daemon-reload systemctl restart docker </code></pre>
<p>I have a Dockerfile from which I built an image, and I used the EKS service to launch the containers. Now, for logging purposes, my application reads environment variables like "container_instance" and "ec2_instance_id" and logs them, so that I can see in Elasticsearch which container or host EC2 machine a log entry came from.</p> <p>How can I set these two values as environment variables when I start my container?</p>
<p>In your Kubernetes Pod spec, you can use the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">downward API</a> to inject some of this information. For example, to get the node's Kubernetes node name, you can set</p> <pre><code>env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName </code></pre> <p>The node name is typically a hostname for the node (<a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html" rel="nofollow noreferrer">this example in the EKS docs</a> shows the EC2 internal hostnames, for example). You can't easily get things like an EC2 instance ID at a per-pod level.</p> <p>You also might configure logging globally at a cluster level. The Kubernetes documentation includes <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/" rel="nofollow noreferrer">a packaged setup to route logs to Elasticsearch and Kibana</a>. The example shown there includes only the pod name in the log message metadata, but you should be able to reconfigure the underlying <a href="https://fluentd.io/" rel="nofollow noreferrer">fluentd</a> to include additional host-level metadata.</p>
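<p>If you really do need the raw EC2 instance ID inside the pod, one workaround (assuming the pod is allowed to reach the EC2 instance metadata endpoint, which is the default on EKS worker nodes) is to look it up in your container's entrypoint script and export it yourself:</p> <pre><code>#!/bin/sh
# 169.254.169.254 is the EC2 instance metadata service
export EC2_INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# hand off to the real container command
exec "$@"
</code></pre> <p>Your logging code can then read <code>EC2_INSTANCE_ID</code> like any other environment variable.</p>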
<p>I have a kubernetes cluster with one master and two nodes. For some reason, a node became unreachable for the cluster so all pods were moved to the other node. The problem is that the broken node keep in the cluster, but i think the master should remove the node automatically and create another one.</p> <p>Can anyone help me?</p>
<p><strong>Option I:</strong></p> <p>If you work on GKE and have an HA cluster, a node in <strong>NotReady</strong> state should be automatically deleted after a couple of minutes if autoscaling mode is on. After a while a new node will be added.</p> <p><strong>Option II:</strong> If you use kubeadm:</p> <p>Nodes in <strong>NotReady</strong> state aren't automatically deleted if you don't have autoscaling mode on and an HA cluster. The node will be continuously checked and restarted.</p> <p>If you have Prometheus, check the metrics to see what happened on the node in NotReady state, or execute the following command from the unreachable node:</p> <p><code> $ sudo journalctl -u kubelet</code></p> <p>If you want the node in <strong>NotReady</strong> state to be deleted, you should do it manually:</p> <p>First drain the node and make sure that it is empty before shutting it down.</p> <p><code> $ kubectl drain &lt;node name&gt; --delete-local-data --force --ignore-daemonsets</code></p> <p><code> $ kubectl delete node &lt;node name&gt;</code></p> <p>Then, on the node being removed, reset all kubeadm installed state:</p> <p><code>$ kubeadm reset</code></p> <p>The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:</p> <p><code>$ iptables -F &amp;&amp; iptables -t nat -F &amp;&amp; iptables -t mangle -F &amp;&amp; iptables -X</code></p> <p>If you want to reset the IPVS tables, you must run the following command:</p> <p><code>$ ipvsadm -C</code></p> <p>You can also simply shut down the desired node:</p> <p><code>$ shutdown -h now</code></p> <p>The <strong>-h</strong> means halt, while <strong>now</strong> means that the instruction should be carried out immediately. Different delays can be used. For example, you might use +6 instead, which tells the computer to run the shutdown procedure in six minutes.</p> <p>In this case a new node will <strong>not</strong> be added automatically.</p> <p>I hope this helps.</p>
<p>I use GitLab Runner on my Kubernetes cluster to run CI jobs. I want to make build jobs run faster.</p> <p>To make them faster, I reuse Docker image from the previous build (tagged as <code>latest</code>). Build time has decreased, but now the bottleneck is the <code>pull</code> command which takes about 60-70% of the time.</p> <p>Here's the snippet of the <code>.gitlab-ci.yml</code>:</p> <pre><code>build:sheets: stage: build image: docker:stable services: - docker:dind before_script: - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin script: - docker pull $SHEETS_LATEST || true - docker build --cache-from $SHEETS_LATEST --tag $SHEETS_TAG --tag $SHEETS_LATEST . - docker push $SHEETS_TAG - docker push $SHEETS_LATEST </code></pre> <p>I use Gitlab Registry and thus <code>pull</code> command requires a lot of communication between my cluster and the registry.</p> <p>So I have a couple of questions here:</p> <ol> <li><p><strong>Is it worth effort to deploy my own docker registry on the cluster to save a couple of minutes per build?</strong></p></li> <li><p><strong>Is there a way to save an image somewhere on cluster to not pull from registry every time?</strong></p></li> </ol>
<p>There are several methods to build Docker images faster in a pipeline.</p> <ol> <li>DOOD (Docker out of Docker)</li> </ol> <p>Here you have to mount the host's /var/run/docker.sock inside your build container. It is a security risk, but very fast.</p> <ol start="2"> <li>DIND (Docker in Docker)</li> </ol> <p>Here you can use the dind image, which has both the docker CLI and daemon. It doesn't communicate with the host's docker daemon; the entire build process happens within the pod. Not 100% secure, but fast.</p> <ol start="3"> <li>Kaniko, Makisu, Buildah</li> </ol> <p>These are daemonless, next-generation image build tools which do not depend on the Docker daemon. They are more secure than DOOD and DIND, also fast, and support caching.</p>
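<p>As a sketch of option 3, a Kaniko-based job for the <code>.gitlab-ci.yml</code> from the question could look like this (it reuses the question's variables and follows GitLab's documented Kaniko pattern; adjust the Dockerfile path to your layout):</p> <pre><code>build:sheets:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" &gt; /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $SHEETS_TAG --destination $SHEETS_LATEST --cache=true
</code></pre> <p>With <code>--cache=true</code>, Kaniko caches individual layers in the registry, which avoids the full-image <code>docker pull</code> that is your current bottleneck.</p>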
<p>I'm working on deploying a docker image in kubernetes. The first time I deployed the container, I used:</p> <pre><code>kubectl apply -f &lt;deployment_file&gt;.yaml </code></pre> <p>and the container was successfully deployed in a pod.</p> <p>Also, the deployment_file looks something like this:</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: publisher spec: replicas: 2 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 minReadySeconds: 300 progressDeadlineSeconds: 900 template: metadata: labels: app: publisher spec: containers: - name: publisher image: 123dev.azurecr.io/publisher:{{ci-build-number}} env: - name: ENVIRONMENT value: "dev" ports: - containerPort: 8080 </code></pre> <p>I have defined the 'progressDeadlineSeconds' attribute in the yaml file above.</p> <p>To keep a track of the deployment, I used:</p> <pre><code>kubectl rollout status deployment.v1beta1.apps/publisher </code></pre> <p>Now if I want to update the container with a new image, I can again reuse the command</p> <pre><code>kubectl apply -f &lt;deployment_file&gt;.yaml </code></pre> <p>to apply the update.</p> <p>But what if applying the update fails due to some reason (let's say the docker image is corrupt), is there a way to automatically trigger a rollback to the previous revision when - pods status is not set to 'running' OR the execution time crosses 'progressDeadlineSeconds'?</p> <p>Till now I haven't found a way to execute a rollback automatically. Thoughts would be appreciated.</p>
<p>Follow the steps below.</p> <pre><code>1. Append the --record param to the deployment, as shown below: kubectl apply -f &lt;deployment_file&gt;.yaml --record 2. Run kubectl rollout history deploy &lt;deployment-name&gt; to check the deployment history. For example, kubectl rollout history deploy mynginx deployments "mynginx" REVISION CHANGE-CAUSE 3 kubectl set image deploy mynginx mynginx=nginx:1.12.2 4 kubectl set image deploy mynginx mynginx=nginx:1.13.8 3. You can roll back to a previous version using its revision, say revision 3: kubectl rollout undo deploy mynginx --to-revision=3 </code></pre>
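<p>To trigger the rollback automatically in a CI pipeline, you can chain these commands: <code>kubectl rollout status</code> exits with a non-zero code when the deployment fails to make progress within <code>progressDeadlineSeconds</code>, so a sketch like this (using the deployment name from the question) would undo the update on failure:</p> <pre><code>kubectl apply -f &lt;deployment_file&gt;.yaml --record
if ! kubectl rollout status deployment/publisher; then
  # rollout did not complete within progressDeadlineSeconds
  kubectl rollout undo deployment/publisher
  exit 1
fi
</code></pre>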
<p>I have a kubernetes user(below is the kubeconfig). </p> <pre><code>users: - name: alok.singh@practo.com user: auth-provider: config: client-id: XXX client-secret: XXX id-token: XXX refresh-token: </code></pre> <p>When making GET request to the kubernetes API, I am getting the below error.</p> <p>GET request - <a href="https://sourcegraph.com/github.com/vapor-ware/ksync/-/blob/pkg/ksync/doctor/kubernetes.go#L78" rel="nofollow noreferrer">https://sourcegraph.com/github.com/vapor-ware/ksync/-/blob/pkg/ksync/doctor/kubernetes.go#L78</a></p> <pre><code>err= forbidden: User "alok.singh@practo.com" cannot get path "/" </code></pre> <p>What is the exact role i need to create to give access to the path "/"</p>
<p>Only a cluster admin can access "/".</p> <p>Here is the role for it:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-admin rules: - apiGroups: - '*' resources: - '*' verbs: - '*' - nonResourceURLs: - '*' verbs: - '*' </code></pre>
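<p>The role alone is not enough; you also need a binding that grants it to the user. A minimal sketch for the user from the question (the binding name is arbitrary):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alok-cluster-admin
subjects:
- kind: User
  name: alok.singh@practo.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
</code></pre>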
<p>I would like to implement the approach explained in the book "The tao of microservices" using kubernetes and Istio. In other words, I would like that microservices communicate to each other with pattern matched queue messages and still I use the routing abilities of Istio to send i.e. 5% of messages to a new microservice (Canari deployments). </p> <p>I read in <a href="https://www.linkedin.com/pulse/kubernetes-exploring-istio-event-driven-architectures-todd-kaplinger/" rel="nofollow noreferrer">a rather old article</a> that Istio does not support queue routing at the moment, but I'm wondering about the state of it now.</p> <p>Does anyone has an example that implements this solution with Istio/queue topics? i.e. a message with the following routing keys </p> <pre><code>store:save kind:entity </code></pre> <p>is rerouted to a microservice which registers itself as accepting </p> <pre><code>store:* kind:entity </code></pre>
<p>This is more of an architecture advice question.</p> <p>For this pattern, you are probably better off using a message broker like <a href="https://www.rabbitmq.com/" rel="nofollow noreferrer">RabbitMQ</a> or <a href="https://kafka.apache.org/" rel="nofollow noreferrer">Kafka</a>, or an event bus (or something else).</p> <p>Essentially, you would have services behind Istio that subscribe to certain message topics (published somewhere else, which could be another service).</p> <p>This way, for example, you could have something like <code>(service 1, queue1/topic1)</code>, <code>(service2, queue2/topic2)</code>. Then on Istio, if you are separating Android and iOS traffic, you would have a rule that sends all the traffic for Android to <code>(service 1, queue1/topic1)</code> and all the traffic for iOS to <code>(service2, queue2/topic2)</code>. Or you could send 80% of the traffic to <code>(service 1, queue1/topic1)</code> and 20% of the traffic to <code>(service2, queue2/topic2)</code>.</p> <p>You could run your message broker in Kubernetes or outside Kubernetes, depending on how you want to architect your solution.</p> <p>Hope it helps!</p>
<p>as the title suggests, I can't find any difference between Prometheus Adapter and Prometheus Operator for monitoring in kubernetes on the net.</p> <p>Can anyone tell me the difference? Or if there are particular use cases in which to use one or the other?</p> <p>Thanks in advance.</p>
<p>Those are completely different things. Prometheus Operator is a tool created by CoreOS to simplify deployment and management of Prometheus instances in K8s. Using Prometheus Operator you can very easily deploy Prometheus, Alertmanager, Prometheus alert rules and Service Monitors.</p> <p>Prometheus Adapter is required for using Custom Metrics API in K8s. It is used primarily for Horizontal Pod Autoscaler to scale based on metrics retrieved from Prometheus. For example, you can create metrics inside of your application and collect them using Prometheus and then you can scale based on those metrics which is really good, because by default K8s is able to scale based only on raw metrics of CPU and memory usage which is not suitable in many cases.</p> <p>So actually those two things can nicely complement each other.</p>
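<p>To illustrate how they complement each other: once Prometheus Adapter exposes a custom metric through the Custom Metrics API, an HPA can consume it. A sketch (the metric name <code>http_requests_per_second</code> and the deployment name <code>my-app</code> are hypothetical examples):</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests_per_second
      targetAverageValue: 100
</code></pre>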
<p>I have a single image that I'm trying to deploy to an AKS cluster. The image is stored in Azure container registry and I'm simply trying to apply the YAML file to get it loaded into AKS using the following command:</p> <blockquote> <p>kubectl apply -f myPath\myimage.yaml</p> </blockquote> <p>kubectl keeps complaining that I'm missing the required "selector" field and that the field "spec" is unknown. This seems like a basic image configuration so I don't know what else to try.</p> <blockquote> <p>kubectl : error: error validating "myimage.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "spec" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false At line:1 char:1</p> </blockquote> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myimage spec: replicas: 1 template: metadata: labels: app: myimage spec: containers: - name: myimage image: mycontainers.azurecr.io/myimage:v1 ports: - containerPort: 5000 </code></pre>
<p>As specified in the error message, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments</a> require a selector field inside their spec. You can look at the link for some examples.</p> <p>Also, do note that there are two spec fields. One for the deployment and one for the pod used as template. Your spec for the pod is misaligned. It should be one level deeper.</p>
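<p>For reference, a corrected version of the manifest from the question, with the required <code>selector</code> added and the pod-level <code>spec</code> indented one level deeper under <code>template</code>, would look like this:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myimage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myimage
  template:
    metadata:
      labels:
        app: myimage
    spec:
      containers:
      - name: myimage
        image: mycontainers.azurecr.io/myimage:v1
        ports:
        - containerPort: 5000
</code></pre>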
<p>I deployed a kubernetes cluster and tried to test it with a simple bash pod as follows </p> <p>kubectl run my-shell --rm -i --tty --image ubuntu -- bash</p> <p>After I got the shell prompt I tried to do an apt-get update and got into following error </p> <pre><code>root@my-shell-796b6f7d5b-274q9:/# apt-get update Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease Err:1 http://security.ubuntu.com/ubuntu bionic-security InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?) Err:2 http://archive.ubuntu.com/ubuntu bionic InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?) Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease Err:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?) Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease Err:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?) Reading package lists... Done </code></pre> <p>Appreciate if you can suggest what is wrong. I feel it is a DNS error or something like that . I have added nameserver as 8.8.8.8 inside the shell pod</p>
<p>It appears that a proxy or firewall is blocking the connection. </p> <p>Does the command work on the machine hosting Kubernetes? </p> <p>If so, then Kubernetes is interfering with http://*.ubuntu.com/</p> <p>If not, then it is an issue with your current network</p>
<p>I have developed an Openshift template which basically creates two objects (A cluster &amp; a container operator).</p> <p>I understand that templates run <code>oc create</code> under the hood. So, in case any of these two objects already exists then trying to create the objects through template would through an error. Is there any way to override this behaviour? I want my template to re-configure the object even if it exists. </p>
<p>You can use "oc process", which renders a template into a set of manifests:</p> <pre><code>oc process foo PARAM1=VALUE1 PARAM2=VALUE2 | oc apply -f - </code></pre> <p>or</p> <pre><code>oc process -f template.json PARAM1=VALUE1 PARAM2=VALUE2 | oc apply -f - </code></pre>
<p>I'd like to modify etcd pod to listening 0.0.0.0(or host machine IP) instead of 127.0.0.1. </p> <p>I'm working on a migration from a single master to multi-master kubernetes cluster, but I faced with an issue that after I modified /etc/kubernetes/manifests/etcd.yaml with correct settings and restart kubelet and even docker daemons, etcd still working on 127.0.0.1. </p> <p>Inside docker container I'm steel seeing that etcd started with --listen-client-urls=<a href="https://127.0.0.1:2379" rel="nofollow noreferrer">https://127.0.0.1:2379</a> instead of host IP</p> <p><strong>cat /etc/kubernetes/manifests/etcd.yaml</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: null labels: component: etcd tier: control-plane name: etcd namespace: kube-system spec: containers: - command: - etcd - --advertise-client-urls=https://192.168.22.9:2379 - --cert-file=/etc/kubernetes/pki/etcd/server.crt - --client-cert-auth=true - --data-dir=/var/lib/etcd - --initial-advertise-peer-urls=https://192.168.22.9:2380 - --initial-cluster=test-master-01=https://192.168.22.9:2380 - --key-file=/etc/kubernetes/pki/etcd/server.key - --listen-client-urls=https://192.168.22.9:2379 - --listen-peer-urls=https://192.168.22.9:2380 - --name=test-master-01 - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt - --peer-client-cert-auth=true - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt - --snapshot-count=10000 - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt image: k8s.gcr.io/etcd-amd64:3.2.18 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - /bin/sh - -ec - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo failureThreshold: 8 initialDelaySeconds: 15 timeoutSeconds: 15 name: 
etcd resources: {} volumeMounts: - mountPath: /var/lib/etcd name: etcd-data - mountPath: /etc/kubernetes/pki/etcd name: etcd-certs hostNetwork: true priorityClassName: system-cluster-critical volumes: - hostPath: path: /var/lib/etcd type: DirectoryOrCreate name: etcd-data - hostPath: path: /etc/kubernetes/pki/etcd type: DirectoryOrCreate name: etcd-certs status: {} </code></pre> <p><strong>[root@test-master-01 centos]# kubectl -n kube-system get po etcd-test-master-01 -o yaml</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: annotations: kubernetes.io/config.hash: c3eef2d48a776483adc00311df8cb940 kubernetes.io/config.mirror: c3eef2d48a776483adc00311df8cb940 kubernetes.io/config.seen: 2019-05-24T13:50:06.335448715Z kubernetes.io/config.source: file scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: 2019-05-24T14:08:14Z labels: component: etcd tier: control-plane name: etcd-test-master-01 namespace: kube-system resourceVersion: "6288" selfLink: /api/v1/namespaces/kube-system/pods/etcd-test-master-01 uid: 5efadb1c-7e2d-11e9-adb7-fa163e267af4 spec: containers: - command: - etcd - --advertise-client-urls=https://127.0.0.1:2379 - --cert-file=/etc/kubernetes/pki/etcd/server.crt - --client-cert-auth=true - --data-dir=/var/lib/etcd - --initial-advertise-peer-urls=https://127.0.0.1:2380 - --initial-cluster=test-master-01=https://127.0.0.1:2380 - --key-file=/etc/kubernetes/pki/etcd/server.key - --listen-client-urls=https://127.0.0.1:2379 - --listen-peer-urls=https://127.0.0.1:2380 - --name=test-master-01 - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt - --peer-client-cert-auth=true - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt - --snapshot-count=10000 - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt image: k8s.gcr.io/etcd-amd64:3.2.18 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - /bin/sh - -ec - ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 
--cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo </code></pre>
<p>First check your kubelet option <code>--pod-manifest-path</code> and put your corrected yaml in that path.</p> <p>To make sure the <code>etcd</code> pod has been deleted, move the yaml file out of the <code>pod-manifest-path</code> and wait until the pod is gone from <code>docker ps -a</code>. Then put your corrected yaml file back into the <code>pod-manifest-path</code>.</p>
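<p>If you are not sure which path the kubelet is actually watching, you can check it (the config file location below assumes a standard kubeadm install):</p> <pre><code># newer kubelets read the static pod path from their config file
grep staticPodPath /var/lib/kubelet/config.yaml
# older setups pass it as a command-line flag
ps aux | grep kubelet | grep pod-manifest-path
</code></pre>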
<p>I am running a dotnet core 2.2 app with IdentityServer4 installed using Nuget. When I build a docker container and run, all works fine. When I deploy this container to my Google Kubernetes Engine cluster it fails on startup with the following:</p> <pre><code>{ insertId: "18ykz2ofg4ipm0" labels: { compute.googleapis.com/resource_name: "fluentd-gcp-v3.1.0-nndnb" container.googleapis.com/namespace_name: "my_namespace" container.googleapis.com/pod_name: "identity-deployment-5b8bd8745b-qn2v8" container.googleapis.com/stream: "stdout" } logName: "projects/my_project/logs/app" receiveTimestamp: "2018-12-07T21:09:25.708527406Z" resource: { labels: { cluster_name: "my_cluster" container_name: "app" instance_id: "4238697312444637243" namespace_id: "my_namespace" pod_id: "identity-deployment-5b8bd8745b-qn2v8" project_id: "my_project" zone: "europe-west2-b" } type: "container" } severity: "INFO" textPayload: "System.NullReferenceException: Object reference not set to an instance of an object. at IdentityServer4.Services.DefaultUserSession.RemoveSessionIdCookieAsync() at IdentityServer4.Services.DefaultUserSession.EnsureSessionIdCookieAsync() at IdentityServer4.Hosting.IdentityServerMiddleware.Invoke(HttpContext context, IEndpointRouter router, IUserSession session, IEventService events) at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context) at IdentityServer4.Hosting.BaseUrlMiddleware.Invoke(HttpContext context) at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context) " timestamp: "2018-12-07T21:09:17Z" } </code></pre> <p>As I mentioned, this works perfectly locally, and when running inside a docker container, only when within Kubernetes do I see these errors.</p> <p>I'm not sure what I've missed here with kubernetes, but any help very much appreciated.</p>
<p>This vexed me in the last few days. I suspect it has something to do with the recent deprecation of the former mechanism in app builder configuration:</p> <pre><code> app.UseAuthentication().UseCookieAuthentication(); &lt;-- no longer valid and apparently will not even compile now. </code></pre> <p>This has been replaced by the following in the ConfigureServices section:</p> <pre><code> services.AddAuthentication("YourCookieName") .AddCookie("YourCookieName", options =&gt; { options.ExpireTimeSpan = TimeSpan.FromDays(30.0); }); </code></pre> <p>While I am not sure what the exact breaking change is for identityserver4, after cloning identityserver4 components and debugging I was able to isolate that the constructor for DefaultUserSession takes an IHttpContextAccessor that was arriving as null:</p> <p>The constructor in question:</p> <pre><code> public DefaultUserSession( IHttpContextAccessor httpContextAccessor, IAuthenticationSchemeProvider schemes, IAuthenticationHandlerProvider handlers, IdentityServerOptions options, ISystemClock clock, ILogger&lt;IUserSession&gt; logger) { ... </code></pre> <p>The following solution gets you past the error, though hopefully identityserver4 will make this moot in a near future release.</p> <p>You need to add an IHttpContextAccessor service in ConfigureServices:</p> <pre><code>public override void ConfigureServices(IServiceCollection services) { ... other code omitted ... services.AddScoped&lt;IHttpContextAccessor&gt;(provider =&gt; new LocalHttpContextAccessor(provider)); ... other code omitted ... 
} </code></pre> <p>The LocalHttpContextAccessor is just a private class in the configuration class as seen here:</p> <pre><code>private class LocalHttpContextAccessor : IHttpContextAccessor { public IServiceProvider serviceProvider { get; private set; } public HttpContext httpContext { get; set; } public LocalHttpContextAccessor(IServiceProvider serviceProvider) { this.serviceProvider = serviceProvider; this.httpContext = null; } public HttpContext HttpContext { get { return this.httpContext; } set { this.httpContext = null; } } } </code></pre> <p>The problem is that at the point of configuring services, there is no current context to set, so I set it in a using block in the app builder configuration stage:</p> <pre><code>public override void Configure(IApplicationBuilder app, IHostingEnvironment env) { app.Use(async (context, next) =&gt; { IHttpContextAccessor httpContextAccessor; httpContextAccessor = context.RequestServices.GetRequiredService&lt;IHttpContextAccessor&gt;(); if (httpContextAccessor is LocalHttpContextAccessor) { ((LocalHttpContextAccessor)httpContextAccessor).httpContext = context; } await next(); }); ... other code omitted ... app.UseIdentityServer(); </code></pre> <p>That will set the http context prior to running identity server code which fixes the bug. The scoped service should be created individually for each request. I've only recently made the full plunge into .net core from .net framework, so if there are scope or DI issues in that code that might lead to leaks or bad life-cycle, I'd appreciate the input. That said, at the very least that code keeps identity server 4 from crashing with the core 2.2+.</p>
<p>One K8saaS cluster in the IBM Cloud runs a preinstalled fluentd. May I use it on my own, too?</p> <p>We are thinking about a logging strategy that is independent from the IBM infrastructure, and we want to save the information in ES. May I reuse the fluentd installation done by IBM for sending my log information, or should I install my own fluentd? If so, am I able to install fluentd on the nodes via the Kubernetes API, without any access to the nodes themselves?</p>
<p>The fluentd that is installed and managed by IBM Cloud Kubernetes Service will only connect to the IBM cloud logging service.</p> <p>There is nothing to stop you installing your own Fluentd as well though to send your logs to your own logging service, either running inside your cluster or outside. This is best done via a daemonset so that it can collect logs from every node in the cluster.</p>
<p>I have the following issue with on Kubernetes pod accessing a persistent volume exclusively where apparently a file has not been removed upon shutdown of the previous pod using it:</p> <pre><code>[2019-05-25 09:11:33,980] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /opt/kafka/data/logs. A Kafka instance in another process or thread is using this directory. at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:240) at kafka.log.LogManager$$anonfun$lockLogDirs$1.apply(LogManager.scala:236) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241) at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104) at kafka.log.LogManager.lockLogDirs(LogManager.scala:236) at kafka.log.LogManager.&amp;lt;init&amp;gt;(LogManager.scala:97) at kafka.log.LogManager$.apply(LogManager.scala:953) at kafka.server.KafkaServer.startup(KafkaServer.scala:237) at io.confluent.support.metrics.SupportedServerStartable.startup(SupportedServerStartable.java:114) at io.confluent.support.metrics.SupportedKafka.main(SupportedKafka.java:66) </code></pre> <p>I installed Kafka using the official helm chart, so I assume that the setup of Kafka and zookeeper pods as well as either assignment to persistent volumes and claims is suited for Kubernetes.</p> <p>First of all: Is this a good idea to persist the running state on the persistent volume? Since pods are supposed to be not reliable and can crash or be evicted at any point in time this method is very prone to errors. 
Should I consider this a bug or flaw which is worth reporting to the helm chart authors?</p> <p>Since bugs exist and other software might persist it's running state on the persistent volume, I'm interested in a general best practices approach how to bring the persistent volume into a state where the pod using it can start again (in this case it sould be deleting the lock file from <code>/opt/kafka/data/logs</code> afaik).</p> <p>So far I attempted to start a console in a container shell and try to run the command to remove the file before the pod crashes. This takes some tries and is very annoying.</p> <p>I'm experiencing this with microk8s 1.14.2 (608) on Ubuntu 19.10, but I think it could happen on any Kubernetes implementation.</p>
<p>To solve this bug, I think we just need <code>postStart</code> and <code>preStop</code> hooks in the Kafka pod specification, like other official helm charts use. (Note that the Kubernetes lifecycle hooks are <code>postStart</code> and <code>preStop</code>; there is no <code>preStart</code> hook. Since <code>postStart</code> runs in parallel with the container's entrypoint, an init container that removes the stale lock file may be even more reliable.)</p> <pre><code>containers: - name: kafka-broker ... lifecycle: postStart: exec: command: - "/bin/sh" - "-ec" - | rm -rf ${KAFKA_LOG_DIRS}/.lock preStop: exec: command: - "/bin/sh" - "-ec" - "/usr/bin/kafka-server-stop" </code></pre>
<p>I created a nodejs app and am trying to connect it to mongodb on a kubernetes cluster. The nodejs and mongodb apps are separate pods in my cluster.</p> <p>mongodb and the app are running when I display the status, and I can connect to the mongodb pod and add data:</p> <pre><code>NAME READY STATUS RESTARTS AGE my-backend-core-test-5d7b78c9dc-dt4bg 1/1 Running 0 31m my-frontend-test-6868f7c7dd-b2qtm 1/1 Running 0 40h my-mongodb-test-7d55dbff74-2m6cm 1/1 Running 0 34m </code></pre> <p>But when I try to make the connection with this script:</p> <pre><code>const urlDB = "my-mongodb-service-test.my-test.svc.cluster.local:27017"; console.log("urlDB :: ", urlDB); mongoose.connect('mongodb://'+urlDB+'/test', { useNewUrlParser: true }).then(() =&gt; { console.log("DB connected") }).catch((err)=&gt; { console.log("ERROR") }) </code></pre> <p>I get the following error in my nodejs app:</p> <pre><code>&gt; my-core@1.0.0 start /usr/src/app &gt; node ./src/app.js urlDB :: my-mongodb-service-test.my-test.svc.cluster.local:27017 ERROR </code></pre> <p>As explained in the kubernetes docs, I'm supposed to communicate between the different pods using service-name.namespace.svc.cluster.local (my-mongodb-service-test.my-test.svc.cluster.local:27017).</p> <p>The mongo logs show me a different host, corresponding to my pod and not the service. How can I configure the host in my yaml file?
</p> <p><strong>mongodb logs :</strong></p> <pre><code> 2019-05-24T10:57:02.367+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none' 2019-05-24T10:57:02.374+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=my-mongodb-test-7d55dbff74-2m6cm 2019-05-24T10:57:02.374+0000 I CONTROL [initandlisten] db version v4.0.9 2019-05-24T10:57:02.374+0000 I CONTROL [initandlisten] git version: fc525e2d9b0e4bceff5c2201457e564362909765 2019-05-24T10:57:02.374+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016 2019-05-24T10:57:02.375+0000 I CONTROL [initandlisten] allocator: tcmalloc 2019-05-24T10:57:02.375+0000 I CONTROL [initandlisten] modules: none 2019-05-24T10:57:02.375+0000 I CONTROL [initandlisten] build environment: 2019-05-24T10:57:02.375+0000 I CONTROL [initandlisten] distmod: ubuntu1604 2019-05-24T10:57:02.375+0000 I CONTROL [initandlisten] distarch: x86_64 2019-05-24T10:57:02.375+0000 I CONTROL [initandlisten] target_arch: x86_64 2019-05-24T10:57:02.375+0000 I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0" } } 2019-05-24T10:57:02.376+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'. 
2019-05-24T10:57:02.377+0000 I STORAGE [initandlisten] 2019-05-24T10:57:02.377+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine 2019-05-24T10:57:02.377+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem 2019-05-24T10:57:02.377+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=485M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress), 2019-05-24T10:57:03.521+0000 I STORAGE [initandlisten] WiredTiger message [1558695423:521941][1:0x7f2d2eeb0a80], txn-recover: Main recovery loop: starting at 2/140416 to 3/256 2019-05-24T10:57:03.719+0000 I STORAGE [initandlisten] WiredTiger message [1558695423:719280][1:0x7f2d2eeb0a80], txn-recover: Recovering log 2 through 3 2019-05-24T10:57:03.836+0000 I STORAGE [initandlisten] WiredTiger message [1558695423:836203][1:0x7f2d2eeb0a80], txn-recover: Recovering log 3 through 3 2019-05-24T10:57:03.896+0000 I STORAGE [initandlisten] WiredTiger message [1558695423:896185][1:0x7f2d2eeb0a80], txn-recover: Set global recovery timestamp: 0 2019-05-24T10:57:03.924+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0) 2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] 2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database. 2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted. 2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended. 
2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] 2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] 2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. 2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never' 2019-05-24T10:57:03.947+0000 I CONTROL [initandlisten] 2019-05-24T10:57:03.984+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data' 2019-05-24T10:57:03.986+0000 I NETWORK [initandlisten] waiting for connections on port 27017 </code></pre> <p><strong>mongodb yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-mongodb-service-test namespace: my-test spec: selector: app: my-mongodb env: test ports: - port: 27017 targetPort: 27017 protocol: TCP --- apiVersion: apps/v1 kind: Deployment metadata: name: my-mongodb-test namespace: my-test labels: app: my-mongodb env: test spec: selector: matchLabels: app: my-mongodb-test replicas: 1 template: metadata: labels: app: my-mongodb-test spec: containers: - name: mongo image: mongo:4.0.9 command: - mongod - "--bind_ip" - "0.0.0.0" imagePullPolicy: Always ports: - containerPort: 27017 name: mongo hostPort: 27017 protocol: TCP volumeMounts: - mountPath: /data/db name: mongodb-volume volumes: - name: mongodb-volume hostPath: path: /home/debian/mongodb </code></pre>
<p>Your <code>service selector</code> does not match your <code>pod labels</code>, so the service's endpoints list is empty (you can check this with <code>kubectl describe svc/my-mongodb-service-test -n my-test</code>) and kubernetes cannot reach the pods through the service. Your pods only carry the label <code>app: my-mongodb-test</code>, while the service selects on <code>app: my-mongodb</code> plus <code>env: test</code>.</p> <p>The correct service selector is:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-mongodb-service-test namespace: my-test spec: selector: app: my-mongodb-test ... </code></pre> <p>The selector must match the pod labels specified by <code>spec.template.metadata.labels</code> in the Deployment yaml.</p>
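<p>A quick way to confirm a selector mismatch like this on your own cluster (namespace and names taken from the question) is to compare the pod labels with the service's endpoints. These commands need access to the cluster:</p>

```shell
# Show the labels actually attached to the running pods
kubectl get pods -n my-test --show-labels

# An empty ENDPOINTS column means the service selector matches no pods
kubectl get endpoints my-mongodb-service-test -n my-test

# After fixing the selector, the mongodb pod IP should show up under Endpoints
kubectl describe svc my-mongodb-service-test -n my-test
```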
<p>I'm using kubernetes <a href="https://github.com/kubernetes/ingress-nginx/" rel="noreferrer">ingress-nginx</a> and this is my Ingress spec. <a href="http://example.com" rel="noreferrer">http://example.com</a> works fine as expected. But when I go to <a href="https://example.com" rel="noreferrer">https://example.com</a> it still works, but pointing to default-backend with Fake Ingress Controller certificate. How can I disable this behaviour? I want to disable listening on https at all on this particular ingress, since there is no TLS configured.</p> <pre><code>kind: Ingress apiVersion: extensions/v1beta1 metadata: name: http-ingress annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - host: example.com http: paths: - backend: serviceName: my-deployment servicePort: 80 </code></pre> <p>I've tried this <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> annotation. However this has no effect.</p>
<p>I'm not aware of an ingress-nginx <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="noreferrer">configmap</a> value or ingress <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="noreferrer">annotation</a> to easily disable TLS.</p> <p>You could remove port 443 from your ingress controller's service definition.</p> <p>Remove the <code>https</code> entry from the <code>spec.ports</code> array:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mingress-nginx-ingress-controller spec: ports: - name: https nodePort: NNNNN port: 443 protocol: TCP targetPort: https </code></pre> <p>nginx will still be listening on a TLS port, but no clients outside the cluster will be able to connect to it.</p>
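<p>After dropping the <code>https</code> entry, the Service would be left with only the HTTP port, roughly like this (the service name follows a typical ingress-nginx install and the <code>nodePort</code> placeholder is an assumption):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mingress-nginx-ingress-controller
spec:
  ports:
  - name: http
    nodePort: NNNNN
    port: 80
    protocol: TCP
    targetPort: http
```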
<p>I am using Kubernetes Kops. I want to set rate limit rps at Ingress-Nginx level for a specific path only. </p> <p>I know about </p> <pre><code>nginx.ingress.kubernetes.io/limit-rps </code></pre> <p>If I set this in Ingress rules then it will be applicable for all the routes. But, I want to apply it for a specific route. Let's say, when I am trying to access </p> <pre><code>/login </code></pre> <p>I want to set rps limit to 100 for the path /login</p> <pre><code>nginx.ingress.kubernetes.io/limit-rps: 100 </code></pre> <p>This is my Ingress rules config,</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: staging-ingress-rules namespace: staging annotations: kubernetes.io/ingress.class: 'nginx' nginx.ingress.kubernetes.io/proxy-body-size: '0' spec: rules: - host: staging.mysite.com http: paths: - path: /login backend: serviceName: login_site servicePort: 80 - path: /registration backend: serviceName: registration_site servicePort: 80 </code></pre>
<p>It's possible to slightly abuse the config for ingress-nginx by adding multiple Ingress definitions for the same hostname. ingress-nginx will merge the rules/routes together. The config will become harder to manage, though, and you are getting towards the limits of what nginx proxying can do for you.</p> <h3>Other options</h3> <p>Traefik has a <a href="https://docs.traefik.io/v2.0/middlewares/ratelimit/" rel="noreferrer">rate limit middleware</a> that can be applied to <a href="https://docs.traefik.io/v2.0/middlewares/overview/" rel="noreferrer">routes</a>.</p> <p>Also look at something like <a href="https://docs.konghq.com/hub/kong-inc/rate-limiting/" rel="noreferrer">kong</a> or <a href="https://istio.io/docs/tasks/policy-enforcement/rate-limiting/" rel="noreferrer">istio</a> if you want to start managing individual services with more detail.</p> <h3>Nginx Ingress config</h3> <p>Creating a structure for your naming conventions will be important here so you know which Ingress houses which routes.
Using the route path in the Ingress <code>name</code> is where I would start, but your use case may vary:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: staging-ingress-rules-registration annotations: kubernetes.io/ingress.class: 'nginx' nginx.ingress.kubernetes.io/proxy-body-size: '0' nginx.ingress.kubernetes.io/limit-rps: '10' spec: rules: - host: staging.mysite.com http: paths: - path: /registration backend: serviceName: registration-site servicePort: 80 </code></pre> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: staging-ingress-rules-login annotations: kubernetes.io/ingress.class: 'nginx' nginx.ingress.kubernetes.io/proxy-body-size: '0' nginx.ingress.kubernetes.io/limit-rps: '100' spec: rules: - host: staging.mysite.com http: paths: - path: /login backend: serviceName: login-site servicePort: 80 </code></pre> <p>I'm not sure how host-wide or server-wide annotations (like <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-ciphers" rel="noreferrer"><code>nginx.ingress.kubernetes.io/ssl-ciphers</code></a>) would need to be managed. If they all merge nicely then maybe create a special Ingress just to house those. If not, you might end up managing host settings across all Ingress configs, which will be a pain.</p>
<p>I installed Python and Docker on my machine and am trying to import <code>from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator</code>, but when I connect to the docker container, I get the message that the module does not exist. I have already done <code>pip install apache-airflow[kubernetes]</code> and I still have the same error. Is there a specific location on the machine where I should check whether the library is actually installed? What can I do to solve this?</p> <p><img src="https://i.stack.imgur.com/chcRs.png" alt="enter image description here"></p> <pre><code>from airflow import DAG from datetime import datetime, timedelta from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator from airflow.operators.dummy_operator import DummyOperator import logging import os from airflow.utils.helpers import parse_template_string default_args = { 'owner': 'airflow', 'depends_on_past': False, 'start_date': datetime.utcnow(), 'email': ['airflow@example.com'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=5) } dag = DAG( 'kubernetes_sample', default_args=default_args, schedule_interval=timedelta(minutes=10)) start = DummyOperator(task_id='run_this_first', dag=dag) passing = KubernetesPodOperator(namespace='default', image="Python:3.6", cmds=["Python","-c"], arguments=["print('hello world')"], labels={"foo": "bar"}, name="passing-test", task_id="passing-task", get_logs=True, dag=dag ) failing = KubernetesPodOperator(namespace='default', image="ubuntu:1604", cmds=["Python","-c"], arguments=["print('hello world')"], labels={"foo": "bar"}, name="fail", task_id="failing-task", get_logs=True, dag=dag ) passing.set_upstream(start) failing.set_upstream(start) </code></pre> <blockquote> <p>webserver_1 | Traceback (most recent call last): webserver_1 |<br> File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 377, in process_file webserver_1 | m = imp.load_source(mod_name, 
filepath) webserver_1 | File "/usr/local/lib/python3.6/imp.py", line 172, in load_source webserver_1 | module = _load(spec) webserver_1 | File "&lt;frozen importlib._bootstrap&gt;", line 684, in _load webserver_1 | File "&lt;frozen importlib._bootstrap&gt;", line 665, in _load_unlocked webserver_1 | File "&lt;frozen importlib._bootstrap_external&gt;", line 678, in exec_module webserver_1 | File "&lt;frozen importlib._bootstrap&gt;", line 219, in _call_with_frames_removed webserver_1 | File "/usr/local/airflow/dags/example_airflow.py", line 3, in &lt;module&gt; webserver_1 | from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator webserver_1 | File "/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py", line 21, in &lt;module&gt; webserver_1 | from airflow.contrib.kubernetes import kube_client, pod_generator, pod_launcher webserver_1 | File "/usr/local/lib/python3.6/site-packages/airflow/contrib/kubernetes/pod_launcher.py", line 25, in &lt;module&gt; webserver_1 | from kubernetes import watch, client webserver_1 | ModuleNotFoundError: No module named 'kubernetes'</p> </blockquote>
<p>Run the following:</p> <pre><code>pip install apache-airflow[kubernetes] </code></pre> <p>Restart the Airflow webserver and scheduler after that.</p> <p>Note that since your Airflow is running inside Docker, the package has to be installed inside the container (or baked into the image), not just on the host machine; a <code>pip install</code> on the host is not visible to the processes inside the container.</p>
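<p>If you build the Airflow image yourself, the extra package belongs in the image build. A minimal sketch (the base image tag is an assumption; substitute whatever Airflow image your compose file actually references):</p>

```dockerfile
# Base image is an assumption -- use the Airflow image you actually run
FROM puckel/docker-airflow:1.10.2

USER root
# Quote the extras so the shell does not glob the square brackets
RUN pip install 'apache-airflow[kubernetes]'
USER airflow
```

<p>Then point your docker-compose service at the new image and recreate the containers.</p>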
<p>I have a single image that I'm trying to deploy to an AKS cluster. The image is stored in Azure container registry and I'm simply trying to apply the YAML file to get it loaded into AKS using the following command:</p> <blockquote> <p>kubectl apply -f myPath\myimage.yaml</p> </blockquote> <p>kubectl keeps complaining that I'm missing the required "selector" field and that the field "spec" is unknown. This seems like a basic image configuration so I don't know what else to try.</p> <blockquote> <p>kubectl : error: error validating "myimage.yaml": error validating data: [ValidationError(Deployment.spec): unknown field "spec" in io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false At line:1 char:1</p> </blockquote> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myimage spec: replicas: 1 template: metadata: labels: app: myimage spec: containers: - name: myimage image: mycontainers.azurecr.io/myimage:v1 ports: - containerPort: 5000 </code></pre>
<p>The indentation of your second <code>spec</code> field is incorrect, and you are also missing <code>selector</code> in the first <code>spec</code>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myimage labels: app: myimage spec: replicas: 1 selector: matchLabels: app: myimage template: metadata: labels: app: myimage spec: containers: - name: myimage image: mycontainers.azurecr.io/myimage:v1 ports: - containerPort: 5000 </code></pre>
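<p>Schema problems like a misplaced <code>spec</code> or a missing <code>selector</code> can be caught before touching the cluster by validating the manifest locally (the path follows the question):</p>

```shell
# Validate the manifest against the API schema without creating anything
kubectl apply --dry-run -f myPath/myimage.yaml

# Render the parsed object to see how the indentation was interpreted
kubectl apply --dry-run -f myPath/myimage.yaml -o yaml
```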
<p>From the consul-k8s <a href="https://www.consul.io/docs/platform/k8s/service-sync.html" rel="nofollow noreferrer">document</a>: <strong>The Consul server cluster can run either in or out of a Kubernetes cluster. The Consul server cluster does not need to be running on the same machine or same platform as the sync process. The sync process needs to be configured with the address to the Consul cluster as well as any additional access information such as ACL tokens.</strong></p> <p>The consul cluster I am trying to sync is <strong>outside the k8s cluster</strong>. Based on the document, I must pass the address of the consul cluster to the sync process. However, the helm chart for installing the sync process doesn't contain any value to configure the consul cluster ip address.</p> <pre><code>syncCatalog: # True if you want to enable the catalog sync. "-" for default. enabled: false image: null default: true # true will sync by default, otherwise requires annotation # toConsul and toK8S control whether syncing is enabled to Consul or K8S # as a destination. If both of these are disabled, the sync will do nothing. toConsul: true toK8S: true # k8sPrefix is the service prefix to prepend to services before registering # with Kubernetes. For example "consul-" will register all services # prepended with "consul-". (Consul -&gt; Kubernetes sync) k8sPrefix: null # consulPrefix is the service prefix which preprends itself # to Kubernetes services registered within Consul # For example, "k8s-" will register all services peprended with "k8s-". # (Kubernetes -&gt; Consul sync) consulPrefix: null # k8sTag is an optional tag that is applied to all of the Kubernetes services # that are synced into Consul. If nothing is set, defaults to "k8s". # (Kubernetes -&gt; Consul sync) k8sTag: null # syncClusterIPServices syncs services of the ClusterIP type, which may # or may not be broadly accessible depending on your Kubernetes cluster. # Set this to false to skip syncing ClusterIP services. 
syncClusterIPServices: true # nodePortSyncType configures the type of syncing that happens for NodePort # services. The valid options are: ExternalOnly, InternalOnly, ExternalFirst. # - ExternalOnly will only use a node's ExternalIP address for the sync # - InternalOnly use's the node's InternalIP address # - ExternalFirst will preferentially use the node's ExternalIP address, but # if it doesn't exist, it will use the node's InternalIP address instead. nodePortSyncType: ExternalFirst # aclSyncToken refers to a Kubernetes secret that you have created that contains # an ACL token for your Consul cluster which allows the sync process the correct # permissions. This is only needed if ACLs are enabled on the Consul cluster. aclSyncToken: secretName: null secretKey: null # nodeSelector labels for syncCatalog pod assignment, formatted as a muli-line string. # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector # Example: # nodeSelector: | # beta.kubernetes.io/arch: amd64 nodeSelector: null </code></pre> <p>So How can I set the consul cluster ip address for sync process?</p>
<p>It looks like the sync service <a href="https://github.com/hashicorp/consul-helm/blob/master/templates/sync-catalog-deployment.yaml#L62-L87" rel="nofollow noreferrer">runs via the consul agent</a> on the k8s host.</p> <pre><code> env: - name: HOST_IP valueFrom: fieldRef: fieldPath: status.hostIP </code></pre> <pre><code> command: - consul-k8s sync-catalog \ -http-addr=${HOST_IP}:8500 </code></pre> <p>That can't be configured directly but helm can configure the agent/client via <a href="https://www.consul.io/docs/platform/k8s/helm.html#v-client-join" rel="nofollow noreferrer"><code>client.join</code></a> (<a href="https://github.com/hashicorp/consul-helm/blob/6fa9be91e5af25143c6ad98fab2d10fa7bdbfde4/values.yaml#L164-L167" rel="nofollow noreferrer">yaml src</a>):</p> <blockquote> <p>If this is null (default), then the clients will attempt to automatically join the server cluster running within Kubernetes. This means that with server.enabled set to true, clients will automatically join that cluster. If server.enabled is not true, then a value must be specified so the clients can join a valid cluster. </p> </blockquote> <p>This value is passed to the consul agent as the <a href="https://www.consul.io/docs/agent/options.html#retry-join" rel="nofollow noreferrer"><code>--retry-join</code></a> option.</p> <pre><code>client: enabled: true join: - consul1 - consul2 - consul3 syncCatalog: enabled: true </code></pre>