<p>I have gone through a couple of articles on how ClusterIP and NodePort services work (like <a href="https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/" rel="nofollow noreferrer">this blog</a>).</p> <p>Say I have three different micro-service-based web applications, each running on a separate node. Each runs a replica set of two.</p> <p>My understanding is that there will be a separate ClusterIP service for each application's replica set, instead of one single ClusterIP service for all application types. Is that correct? Now if one pod needs to connect to another pod, will it call the corresponding ClusterIP service to connect to the right pod?</p>
<p>Yes, that's right.<br> In fact, you need to <em>forget</em> about the notion of pod. </p> <p>As you said, you created three web-based micro-<strong>services</strong>. So the correct terminology (and need) here is to contact <strong>(micro-)service A</strong> from <strong>(micro-)service B</strong>. In order to do that, you need to create a <code>kind: Service</code> for each of your <code>ReplicaSet</code>s. </p> <p>For example:</p> <pre><code>---
# This is one micro-service based on Nginx
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
# This is the corresponding ClusterIP service
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 8080
    targetPort: 80
</code></pre> <p>In the example above, we have two replicas of a <em>micro-service</em> based on Nginx. We also have a ClusterIP <code>kind: Service</code> that targets our nginx app.</p> <p>Now, if we want to contact nginx from another pod, all we need to do is use the service name and the port configured, from <strong>inside the cluster</strong>. In our case, it'll be <code>nginx:8080</code>.</p> <p>To try that, you need to create a pod that will serve as our entry point into the cluster: </p> <pre><code>---
apiVersion: v1
kind: Pod
metadata:
  name: gateway
spec:
  containers:
  - image: centos:7
    name: gateway
    command: ["bash", "-c", "sleep infinity"]
</code></pre> <p>Now, if you want to contact your nginx app from the cluster, you'll have to execute this command: </p> <pre><code>kubectl exec -ti gateway -- curl nginx:8080
</code></pre>
<p>I created a stateful set, but after running <code>kubectl apply -f wordpress-database.yaml</code> and <code>kubectl get statefulset</code> I see the pod is always at status zero (why?):</p> <pre><code>wordpress-database   0/1     25m
</code></pre> <p>The yaml (please do not consider alignment):</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress-database
spec:
  selector:
    matchLabels:
      app: blog
  serviceName: "blog"
  replicas: 1
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
      - name: database
        image: mysql:5.7
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: rootPassword
        - name: MYSQL_DATABASE
          value: database
        - name: MYSQL_USER
          value: user
        - name: MYSQL_PASSWORD
          value: password
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      - name: blog
        image: wordpress:latest
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: 127.0.0.1:3306
        - name: WORDPRESS_DB_NAME
          value: database
        - name: WORDPRESS_DB_USER
          value: user
        - name: WORDPRESS_DB_PASSWORD
          value: password
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      resources:
        requests:
          storage: 1Gi
</code></pre>
<p>First, to check what happened with your <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, execute:</p> <pre><code>$ kubectl describe statefulset wordpress-database
</code></pre> <p>You probably don't have storage provisioned: your persistent volume is not being created. Check whether you have configured a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">storage class</a> or set one as default. </p> <p>Your <strong>volumeClaimTemplates</strong> section should look similar to this:</p> <pre><code>  volumeClaimTemplates:
  - metadata:
      name: example
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "your-storage-class"
      resources:
        requests:
          storage: 1Gi
</code></pre> <p>Useful documentation: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolumes</a>.</p>
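A quick way to check is <code>kubectl get storageclass</code>; the default class is marked <code>(default)</code>. If none exists, a minimal sketch of one follows — note the provisioner below is GCE's and is an assumption; substitute the one for your platform:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # Marks this class as the cluster default, so volumeClaimTemplates
    # without storageClassName still get a volume provisioned.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd   # assumption: GCE persistent disks
parameters:
  type: pd-standard
```

With a default class in place, the original volumeClaimTemplates (without <code>storageClassName</code>) should start provisioning volumes.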
<p>We now have a situation where two pods of the same type can run on the same node. Sometimes, during restarts and rescheduling, two pods end up on the same node, and when that node itself is rescheduled all of our pods are gone for a while, resulting in connection troubles (we have just two pods that are load balanced).</p> <p>I think the best way to fix it is to not allow two pods to run on the same node, and to use inter-pod anti-affinity for that. </p> <p>Is this the correct solution to the problem? I tried to understand it but got a bit bogged down with <code>topologyKey</code> and the syntax. Can someone explain/give an example of how to achieve this?</p>
<p>Yes, you are right: affinity is your friend here and is the correct solution.</p> <p>Node affinity helps your app or microservice stick to a particular kind of node (in a multi-node architecture); in my example below, the app <code>nginx-ms</code> always sticks to the nodes that have the label <code>role=admin</code>.</p> <p>The pod anti-affinity rule keys off node labels (the <code>topologyKey</code>) and applies within the group of nodes marked with that topologyKey: if a node already has a pod with the label <code>component=nginx-ms</code>, Kubernetes won't schedule another such pod on it.</p> <p>Here is the explanation:</p> <pre><code>affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: role
          operator: In
          values:
          - app-1
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: component
          operator: In
          values:
          - nginx-ms
      topologyKey: "kubernetes.io/1-hostname"
</code></pre> <p>and</p> <pre><code>kubectl get node --show-labels
NAME          STATUS   ROLES    AGE   VERSION   LABELS
xx-admin-1    Ready    master   19d   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/1-hostname=xx-admin-1,node-role.kubernetes.io/master=,role=admin
xx-admin-2    Ready    master   19d   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/1-hostname=xx-admin-2,node-role.kubernetes.io/master=,role=admin
xx-plat-1-1   Ready    &lt;none&gt;   19d   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/2-hostname=xx-plat-1-1,role=admin
xx-plat-2-1   Ready    &lt;none&gt;   19d   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/2-hostname=xx-plat-2-1,role=admin
</code></pre> <p>Explanation of <code>topologyKey</code>: think of it as a label; you can have two different topologies in the same cluster, for example <code>kubernetes.io/1-hostname</code> and <code>kubernetes.io/2-hostname</code>.</p> <p>Now when you set <code>topologyKey: kubernetes.io/1-hostname</code> in your podAntiAffinity, the rule is valid on all nodes carrying that label, but not on nodes labelled with <code>topologyKey: kubernetes.io/2-hostname</code>.</p> <p>Hence, in my example pods are scheduled within the nodes labelled <code>kubernetes.io/1-hostname</code> and have podAntiAffinity applied, but the nodes labelled <code>kubernetes.io/2-hostname</code> don't have the podAntiAffinity rule!</p>
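For the common case in the question — never letting two replicas land on the same node — the built-in node label <code>kubernetes.io/hostname</code> (present on every node by default) is usually enough as the topologyKey; a minimal sketch, assuming your pods carry the label <code>app=my-app</code> (a hypothetical label):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - my-app                  # hypothetical pod label
      # one pod with app=my-app per hostname, i.e. per node
      topologyKey: "kubernetes.io/hostname"
```

With `requiredDuringSchedulingIgnoredDuringExecution` the rule is hard: a second replica stays Pending if no other node is available, so consider `preferredDuringSchedulingIgnoredDuringExecution` if you'd rather co-locate than not schedule at all.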
<p>I'm following the task <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes" rel="nofollow noreferrer">Configure Liveness, Readiness and Startup Probes</a> and it's unclear to me whether the container where the check is made is a container used only to check the availability of the pod. It would make sense: if the check container fails, the API won't let any traffic in to the pod.</p> <p>So must the health-check signal come from the container where some image or app runs? (<em>sorry, another question</em>)</p>
<p>Liveness and readiness probes, as described by <a href="https://stackoverflow.com/a/58225618/11207414">Ko2r</a>, are additional checks inside your containers, verified by the <strong>kubelet</strong> according to the settings for the particular probe:</p> <blockquote> <p>If the command (defined by the health check) succeeds, it returns 0, and the kubelet considers the Container to be alive and healthy. If the command returns a non-zero value, the kubelet kills the Container and restarts it.</p> </blockquote> <p>In addition:</p> <blockquote> <p>The kubelet uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a Container in such a state can help to make the application more available despite bugs.</p> </blockquote> <p>From another point of view:</p> <p><strong>Pod is a top-level resource in the Kubernetes REST API.</strong> </p> <p>As per the docs: Pods are ephemeral. They are not designed to run forever, and when a Pod is terminated it cannot be brought back. In general, Pods do not disappear until they are deleted by a user or by a controller. Information about controllers can be found <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#pods-and-controllers" rel="nofollow noreferrer">here</a>: </p> <blockquote> <p>So the best practice is to use controllers as described above. You’ll rarely create individual Pods directly in Kubernetes–even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a Controller), it is scheduled to run on a Node in your cluster. The Pod remains on that Node until the process is terminated, the pod object is deleted, the Pod is evicted for lack of resources, or the Node fails.</p> </blockquote> <p>Note:</p> <blockquote> <p>Restarting a container in a Pod should not be confused with restarting the Pod. The Pod itself does not run, but is an environment the containers run in and persists until it is deleted.</p> <p>Because Pods represent running processes on nodes in the cluster, it is important to allow those processes to gracefully terminate when they are no longer needed (vs being violently killed with a KILL signal and having no chance to clean up). Users should be able to request deletion and know when processes terminate, but also be able to ensure that deletes eventually complete. When a user requests deletion of a Pod, the system records the intended grace period before the Pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container. Once the grace period has expired, the KILL signal is sent to those processes, and the Pod is then deleted from the API server. If the Kubelet or the container manager is restarted while waiting for processes to terminate, the termination will be retried with the full grace period.</p> </blockquote> <p>The Kubernetes API server validates and configures data for the API objects, which include pods, services, replication controllers, and others. The API Server services REST operations and provides the frontend to the cluster’s shared state through which all other components interact.</p> <p>For example, when you use the Kubernetes API to create a Deployment, you provide a new desired state for the system. The Kubernetes Control Plane records that object creation, and carries out your instructions by starting the required applications and scheduling them to cluster nodes–thus making the cluster’s actual state match the desired state.</p> <p>Here you can find information about <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">processing pod termination</a>.</p> <p>There are different probe types, for example:</p> <ul> <li>HTTP: even if your app isn’t an HTTP server, you can create a lightweight HTTP server inside your app to respond to the liveness probe. </li> <li>Command: Kubernetes runs a command inside your container. If the command returns with exit code 0 then the container is marked as healthy.</li> </ul> <p>More about <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">probes and best practices</a>.</p> <p>Hope this helps.</p>
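To make the two probe types above concrete, here is a minimal sketch of a pod that uses both — the pod name, image, port, path, and file are all hypothetical, assuming an app that answers on <code>/healthz</code>:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: my-app:latest      # hypothetical image
    livenessProbe:            # failure => kubelet restarts the container
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:           # failure => pod is removed from Service endpoints,
      exec:                   # but the container is NOT restarted
        command: ["cat", "/tmp/ready"]
      periodSeconds: 5
```

The probes run inside the app container itself — there is no separate "check container" — which answers the question above: the signal comes from the container where your app runs.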
<p>I'm installing Kubernetes (kubeadm) on a CentOS VM running inside <code>Virtualbox</code>, so with yum I installed <code>kubeadm</code>, <code>kubelet</code> and <code>docker</code>. </p> <p>Now while trying to set up the cluster with <code>kubeadm init --pod-network-cidr=192.168.56.0/24 --apiserver-advertise-address=192.168.56.33/32</code> I run into the following error: </p> <pre><code>Unable to update cni config: No networks found in /etc/cni/net.d
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
</code></pre> <p>So I checked: there is no <code>cni</code> folder in <code>/etc</code>, even though <code>kubernetes-cni-0.6.0-0.x86_64</code> is installed. I tried commenting out <code>KUBELET_NETWORK_ARGS</code> in <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> but it didn't work.</p> <p>PS: </p> <ul> <li><p>I'm installing behind a proxy.</p></li> <li><p>I have multiple network adapters: </p> <ul> <li><p>NAT: 10.0.2.15/24 for Internet </p></li> <li><p>Host Only: 192.168.56.33/32</p></li> <li><p>And the docker interface: 172.17.0.1/16</p></li> </ul></li> </ul> <blockquote> <p>Docker version: 17.12.1-ce<br> kubectl version: Major:"1", Minor:"9", GitVersion:"v1.9.3"<br> Centos 7</p> </blockquote>
<p>Add a pod network add-on, e.g. Weave Net:</p> <pre><code>kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
</code></pre>
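Once applied, you can verify that the CNI plugin came up — a sketch (the <code>name=weave-net</code> label is the one Weave's DaemonSet pods carry):

```shell
# Weave Net runs as a DaemonSet in kube-system; its pods should reach Running
kubectl get pods -n kube-system -l name=weave-net

# The "cni config uninitialized" condition should then clear
# and the node should transition to Ready
kubectl get nodes
```

After the network add-on is running, <code>/etc/cni/net.d</code> gets populated and the kubelet stops reporting `NetworkPluginNotReady`.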
<p>I am planning to set up Apache Nifi on Kubernetes and take it to production. During my research I didn't find anyone who is actually using this combination in a production setup. </p> <p>Is it a good idea to choose this combination? Could you please share your thoughts/experience about it?</p> <p><a href="https://community.cloudera.com/t5/Support-Questions/NiFi-on-Kubernetes/td-p/203864" rel="noreferrer">https://community.cloudera.com/t5/Support-Questions/NiFi-on-Kubernetes/td-p/203864</a></p>
<p>As mentioned in the comments, work has been done regarding Nifi on Kubernetes, but currently this is not generally available.</p> <p>It is good to know that there will be dataflow offerings where Nifi and Kubernetes meet in some shape or form during the coming year.* So I would recommend keeping an eye out for this and discussing with your local contacts before trying to build it from scratch. </p> <p>*Disclaimer: Though I am an employee of Cloudera, the main driving force behind Nifi, I am not qualified to make promises and this is purely my own view. </p>
<p>Consider two or more applications "talking" to each other and deployed to the cloud (Cloud Foundry). What are the best practices for a team to follow to work (develop/test/debug) on the same instance of the applications, but in their "own" space, without creating another instance of the application in the cloud? Or should every developer have a local copy of those applications and run them in Docker/Kubernetes, for example?</p>
<p>The question is broad, but there are some directions that are worth mentioning here. So, a short answer might be:</p> <ol> <li>Run the collaborating apps whenever necessary alongside the app(s) you are developing.</li> <li>To ease this, prefer <a href="https://github.com/cloudfoundry-incubator/cflocal" rel="nofollow noreferrer">CF Local</a> (lightweight Docker containers) over <a href="https://github.com/cloudfoundry-incubator/cfdev" rel="nofollow noreferrer">CF Dev</a> (running a whole CF foundation).</li> <li>If running the other collaborating apps is too much of a challenge, create mocks that mimic their behavior for the interactions (or test scenarios) you need.</li> </ol> <p>Some words about CF Local: nowadays Cloud Foundry developers are no longer recommended to run a whole Cloud Foundry foundation on their laptops. When CF Dev arrived, it was already an improvement for running a whole foundation over <a href="https://bosh.io/docs/bosh-lite/" rel="nofollow noreferrer">BOSH-Lite</a> (which still has its use-cases; I use it every day), but for a typical <code>cf push</code> developer experience, CF Local fits well and is even more lightweight.</p> <p>So, CF Local is now recommended instead. It should help you run a bunch of collaborating micro-service applications on your local machine, within standard Docker containers, running on top of a plain Docker engine. For more information, see the <a href="https://github.com/cloudfoundry-incubator/cflocal" rel="nofollow noreferrer">CF Local GitHub</a> page, and the <a href="https://pivotal.io/fr/cf-local#comparisonchart" rel="nofollow noreferrer">comparison chart with CF Dev</a> about use-cases for which CF Local is a good fit.</p> <p>For data services (e.g. a MySQL or PostgreSQL database), CF Local already provides solutions to re-use the same service instances from your CF foundation. You may also run your own engine on your laptop, and find a way for your cf-local-pushed app to bind to these services in this context.</p> <p>Hope this gives you some interesting directions in which you can dig and find your way!</p>
<p>I am using CircleCI for deployments with AKS. With version 1.11 the pipelines were working fine, but after the AKS upgrade to 1.14.6, failures are seen while applying the deployment and service object files.</p> <p>I deployed manually to the Kubernetes cluster and no error appeared, but while deploying through CircleCI (version 2) I get the following kind of error:</p> <pre><code>error: SchemaError(io.k8s.api.extensions.v1beta1.DeploymentRollback): invalid object doesn't have additional properties
</code></pre> <p>or the other kind of error appears, like:</p> <pre><code>error: SchemaError(io.k8s.api.core.v1.StorageOSVolumeSource): invalid object doesn't have additional properties
</code></pre>
<p>It's most likely that the version of kubectl used in CircleCI isn't supported with 1.14.6. Note that the kubectl version must be <code>1.n</code>, <code>1.(n+1)</code> or <code>1.(n-1)</code>, where n is the minor version of the cluster. In this case your kubectl must be at least 1.13.x and at most 1.15.x.</p> <p>Check out the <a href="https://kubernetes.io/docs/setup/release/version-skew-policy/#kubelet-1" rel="nofollow noreferrer">Kubernetes version and version skew support policy</a> for more details.</p>
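The skew rule can be sketched as a quick arithmetic check — the minor versions below are hypothetical, stand-ins for what <code>kubectl version</code> reports as Client Version and Server Version:

```shell
# Skew rule: kubectl's minor version must be within +/-1 of the cluster's.
client_minor=16   # e.g. kubectl v1.16.x (hypothetical)
server_minor=14   # e.g. cluster v1.14.6 (hypothetical)

diff=$((client_minor - server_minor))
if [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]; then
  echo "kubectl is within the supported skew"
else
  echo "kubectl is outside the supported skew"   # this branch runs: 16 vs 14
fi
```

In practice, compare the two version lines printed by <code>kubectl version</code> in your CircleCI job against the cluster version, and pin the kubectl the job installs accordingly.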
<p>Imagine a secret like this:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "test-cicd.name" . }}
    helm.sh/chart: {{ include "test-cicd.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
data:
  secret.yaml: |
    {{ if eq .Values.env "prod" }}
    foo: bar-prod
    foo2: bar2_prod
    {{ else if eq .Values.evn "dev" }}
    foo: bar-dev
    {{ end }}
</code></pre> <p>Is it possible to seal this using <a href="https://github.com/bitnami-labs/sealed-secrets" rel="noreferrer">Kubeseal</a>? Upon trying it now, I get <code>invalid map key: map[interface {}]interface {}{"include \"test-cicd.fullname\" .":interface {}(nil)}</code>, which is probably because it is not a "valid" yaml file.</p> <p>One thing that I tried was:</p> <ol> <li>Removing the helm templating lines</li> <li>Generating the sealedsecret</li> <li>Templating the sealedsecret using helm</li> </ol> <p>But by doing this, the sealedsecret could not be decrypted by the cluster-side operator at deployment time. </p>
<p><strong>mkmik</strong> gave an answer to my question on <a href="https://github.com/bitnami-labs/sealed-secrets/issues/277" rel="noreferrer">Github</a>, so I'm quoting it here as well just for the records. </p> <hr> <p>So, you're composing a secret value with client-side templating. Parts of your secret.yaml file are secret, yet parts must be templating directives (the if) and hence cannot be encrypted.</p> <p>You have two options:</p> <ol> <li><p>you encrypt your secrets somehow using some client-side vault software, possibly with helm integration (e.g. <a href="https://github.com/futuresimple/helm-secrets" rel="noreferrer">https://github.com/futuresimple/helm-secrets</a>). That requires every user (and CI environment) that applies that helm chart, to be able to decrypt the secrets.</p></li> <li><p>you re-factor your secrets so that secrets are "atomic", and use sealed-secrets to benefit from its "one-way encryption" approach, which allows your devops users (and CI automation) to apply the helm charts without ever seeing the secret values themselves.</p></li> </ol> <p>The rest of this answer assumes you picked option (2)</p> <hr> <p>Now, since you decided to use Helm, you have to deal with the fact that helm templates are not json/yaml files, but instead they are Go templates, and hence they cannot be manipulated by tools designed to manipulate structured data formats.</p> <p>Luckily, kubeseal has a --raw command, that allows you to encrypt individual secret values and put them manually in whatever file format you're using to describe your k8s resources.</p> <p>So, assuming you want to create a Helm template for a SealedSecrets resource, which takes the name and label values as parameters, and also chooses which secrets to put based on a boolean prod/dev parameter, this example might work for you:</p> <pre><code>apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: {{ include "test-cicd.fullname" . }}
  annotations:
    # this is because the name is a deployment time parameter
    # consider also using "cluster-wide" if the namespace is also a parameter
    # please make sure you understand the implications, see README
    sealedsecrets.bitnami.com/namespace-wide: "true"
  labels:
    app.kubernetes.io/name: {{ include "test-cicd.name" . }}
    helm.sh/chart: {{ include "test-cicd.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "test-cicd.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/managed-by: {{ .Release.Service }}
  encryptedData:
    {{ if eq .Values.env "prod" }}
    foo: AgASNmKx2+QYbbhSxBE0KTa91sDBeNSaicvgBPW8Y/q/f806c7lKfF0mnxzEirjBsvF67C/Yp0fwSokIpKyy3gXtatg8rhf8uiQAA3VjJGkl5VYLcad0t6hKQyIfHsD7wrocm36uz9hpH30DRPWtL5qy4Z+zbzHj8AvEV+xTpBHCSyJPF2hyvHXTr6iQ6KJrAKy04MDwjyQzllN5OQJT2w4zhVgTxXSg/c7m50U/znbcJ1x5vWLXLSeiDRrsJEJeNoPQM8OHmosf5afSOTDWQ4IhG3srSBfDExSFGBIC41OT2CUUmCCtrc9o61LJruqshZ3PkiS7PqejytgwLpw/GEnj2oa/uNSStiP9oa9mCY6IUMujwjF9rKLIT456DlrnsS0bYXO2NmYwSfFX+KDbEhCIVFMbMupMSZp9Ol2DTim5SLIgIza/fj0CXaO3jGiltSQ0aM8gLSMK9n3c1V+X5hKmzMI3/Xd01QmhMmwqKp+oy21iidLJjtz67EiWyfIg1l7hiD5IIVlM9Gvg3k67zij5mOcXPkFnMmUQhQWxVKgAf4z8qEgprt03C+q+Wwwt25UDhQicpwoGtVQzU5ChJi09ja5LeW4RrvDf2B5KRp9HXoj1eu93MMl1Kcnx+X7uVT5OqQz28c4wOLT4FDItFzh8zREGZbiG/B3o1vI8MmwvxXj++pQ7SfBxoz9Xe8gmQ7BuXno=
    foo2: AgAkaTBYcESwogPiauZ15YbNldmk4a9esyYuR2GDt7hNcv+ycPLHmnsJcYs0hBtqucmrO3HbgCy/hQ6dMRCY12RA7w7XsFqNjZy3kavnhqwM6YkHntK2INwercRNQpO6B9bH6MxQTXcxfJbPqaPt30iTnTAhtpN47lueoyIoka4WWzwG/3PAikXhIlkTaq0hrclRJHRqg4z8Kmcaf5A/BRL2xX8syHbjA7MK9/OoK+zytv+LGrbLLHUtuhNNNQ2PG9u05rP6+59wRduQojEDtB9FTCa+daS+04/F4H1vi6XUNnjkK+Xna1T2Eavyuq2GieKj/7ig96et/4HoTAz44zwVhh8/pk0IFC8srcH3p+rFtZZmjvbURrFahEjFZbav3BDMBNhrU8SI3MDN0Abiyvz4vJJfSxIYcyLD1EQ507q7ZXrqYN/v1EiYgYUACi0JGxSWHB9TlCkZOAdCl+hroXEhBN2u5utLJ12njBQJ8ACNQDOYf+CmtV0y7foCZ6Aaap0pV7a8twyqK8c17kImzfi102Zel8ALfLAzdAXBV9c1+1pH76turnTCE33aSMQlaVF3VTmFQWqB8uIO/FQhZDPo8u/ki3L8J31nepup4/WE7i59IT0/9qGh2LKql4oAv6v4D7qtKziN6DvG7bsJlj14Dln0roiTfTWEEnBqdDER+GKZJlKayOWsPQdN0Wp+2KVfwLM=
    {{ else if eq .Values.evn "dev" }}
    foo: AgAkaTBYcESwogPi..........
    {{ end }}
</code></pre> <p>An alternative approach would be to have two templates, one for prod and one for dev, and use Helm templating logic to pick the right file depending on which environment you're deploying to.</p> <p>Anyway, each of those base64 blobs can be produced with:</p> <pre><code>$ kubeseal --raw --scope namespace-wide --from-file=yoursecret.txt
</code></pre> <p>Pro-tip: you can pipe the secret if it's not in a file:</p> <pre><code>$ echo -n yoursecret | kubeseal --raw --scope namespace-wide --from-file=/dev/stdin
</code></pre> <p>Then you have to paste the output of that command into your Helm Go template.</p>
<p>My Kubernetes application uses an Ingress to proxy requests to different servers, according to the URL given: I want a fanout configuration. I want the URLs of the requests <strong>not to be rewritten</strong> when forwarded to the servers. How do I do that?</p> <p>I want all the <code>/api</code> URLs forwarded to the <code>be</code> service, and all others forwarded to the <code>fe</code> service. But I want the URLs forwarded unchanged. For example:</p> <ul> <li>a request for <code>/api/users</code> should be forwarded to the <code>be</code> service as a request for <code>/api/users</code>.</li> <li>a request for <code>/foo</code> should be forwarded to the <code>fe</code> service as a request for <code>/foo</code>.</li> </ul> <p>My current Ingress resource is like this:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  ...
spec:
  ...
  rules:
  - host: ...
  - http:
      paths:
      - path: /api
        backend:
          serviceName: be
          servicePort: 8080
      - path: /
        backend:
          serviceName: fe
          servicePort: 80
</code></pre> <p>but that does not work; it gives 404 Not Found for requests.</p>
<p>The Kubernetes Ingress resource isn't rewriting your request URLs; the ingress controller is doing that (whichever one you happen to be using). For instance, if your ingress controller is Nginx, you can control this behavior with <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">annotations</a> on the Ingress.</p>
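For example, with the NGINX ingress controller, paths are prefix-matched and forwarded unchanged as long as no rewrite annotation is present. A sketch using the service names from the question (the Ingress name is hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout                              # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # no nginx.ingress.kubernetes.io/rewrite-target annotation here,
    # so /api/users reaches the backend as /api/users
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: be
          servicePort: 8080
      - path: /
        backend:
          serviceName: fe
          servicePort: 80
```

Note that both paths sit under a single rule entry; splitting `host:` and `http:` into two separate list items, as in the question's manifest, produces a rule with no paths and can explain the 404s.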
<p>Can you help me? I want to deploy an Ingress in front of a NodePort service, but I can't tell whether this is possible. </p> <p>I tried to find some information on Google, but I only found Ingress for load balancing or some complicated examples (Ingress with Ruby on Rails, etc.). </p>
<p>I'll try to provide the simplest example that I can think of below. I will use the <code>nginxdemos/hello</code> docker image for my example. Locally this works like this:</p> <pre><code>$ docker run -p 80:80 -d nginxdemos/hello
...
$ curl -I http://localhost
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Tue, 08 Oct 2019 06:14:52 GMT
Content-Type: text/html
Connection: keep-alive
Expires: Tue, 08 Oct 2019 06:14:51 GMT
Cache-Control: no-cache
</code></pre> <p>Cool. Here is our backend deployment:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-backend
  namespace: java2days
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-server
        image: nginxdemos/hello
        ports:
        - containerPort: 80
          name: server-port
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 15
          timeoutSeconds: 3
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 15
          timeoutSeconds: 3
</code></pre> <p>Shortly we will have two replicas of an nginx server, up and running on nodes somewhere in the cloud:</p> <pre><code>$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
nginx-backend-dfcdb9797-dnx7x   1/1     Running   0          21m
nginx-backend-dfcdb9797-pnrhn   1/1     Running   0          21m
</code></pre> <p>Let's create a NodePort service now. Here is the service yaml:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: java2days
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: NodePort
</code></pre> <p>Note the selector; it matches the pod labels of the backend deployment. Now it is time for the Ingress resource.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: java2days
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
        path: /*
</code></pre> <p>You will have to wait for about 5-10 minutes until GCloud provisions an IP address for this Ingress. Finally it is done, something like this:</p> <pre><code>$ kubectl get ingress
NAME            HOSTS   ADDRESS   PORTS   AGE
nginx-ingress   *       x.y.z.p   80      15m
</code></pre> <p>From my local machine now:</p> <pre><code>$ curl -I http://x.y.z.p
HTTP/1.1 200 OK
</code></pre> <p>Great, it is working. If you open it in the browser and refresh multiple times you will see that the server ID changes and load balancing works. An additional entry point for reading is <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">here</a>.</p> <p>Do not forget to clean up the resources when you finish experimenting.</p>
<p>I have a service that deploys thousands of short-lived jobs on Kubernetes each day. I am trying to get Kubernetes to delete these jobs after completion using the functionality described here:</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically</a></p> <p>The job completes, but the Job remains in the cluster after the time limit expressed in <code>ttlSecondsAfterFinished</code>.</p> <p>I'm testing this on GKE and my master node is running <code>1.12.7-gke.25</code>. I understand this is an alpha feature, but I haven't gotten a single Job to be deleted.</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-test
spec:
  ttlSecondsAfterFinished: 100
  template:
    metadata:
      name: ttl-test
    spec:
      containers:
      - name: test-container
        image: myimage
</code></pre>
<p>In Kubernetes v1.21 the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/" rel="noreferrer">TTL Controller</a> is in beta and enabled by default.</p> <p>Until Kubernetes v1.20 it was an alpha feature. If you want to use it with a Kubernetes version older than v1.21, make sure alpha features are enabled on your GKE cluster. You can check this on the GCP console UI under Clusters / &lt;your_cluster&gt; / Details / Kubernetes alpha features.</p>
<p>I'm trying to get a breakdown of the memory usage of my pods running on Kubernetes. I can see the pod's memory usage through <code>kubectl top pod</code>, but what I need is a total breakdown of where the memory is used.</p> <p>My container might download or write new files to disk, so I'd like to see, at a certain moment, how much of the used memory is used for each file and how much is used by running software. Currently there's no real disk, just tmpfs, which means every file consumes the allocated memory resources; that is okay as long as I can inspect it and know how much memory is where.</p> <p>I couldn't find anything like that. It seems that <code>cAdvisor</code> helps to get memory statistics, but it just uses <code>docker/cgroups</code>, which doesn't give a breakdown as I described.</p>
<p>A better solution would be to install a metrics server along with <a href="https://prometheus.io/docs/guides/cadvisor/" rel="nofollow noreferrer">Prometheus</a> and Grafana in your cluster. Prometheus will <strong>scrape</strong> the metrics, which Grafana can then display as graphs. This might be useful.</p> <p>If you want the per-process consumption inside the container, you can go into the container and monitor the processes:</p> <pre><code>$ docker exec -it &lt;container-name&gt; watch ps -aux
</code></pre> <p>Moreover, you can check <a href="https://docs.docker.com/engine/reference/commandline/stats/" rel="nofollow noreferrer">docker stats</a>.</p> <p>The following Linux command will summarize the sizes of the directories:</p> <pre><code>$ du -h
</code></pre>
<p>Could anyone explain to me the best way to add basic auth to a Kubernetes cluster deployment that is running a webapp on Google Cloud (GCP)?</p> <p>We are exposing it using: </p> <pre><code>kubectl expose deployment testSanbox --type=LoadBalancer --port 80 --target-port 80
</code></pre> <p>We don't need anything fancy as this is only a dev sandbox, but we don't want anyone to be able to reach it. It could be a single user/pass combo, or maybe we could use the Google credentials that we manage with IAM.</p> <p>Sorry, as you probably already noticed, I'm not really experienced with Kubernetes or GCP.</p> <p>Thanks</p>
<p>If you are looking for HTTP basic auth you can use NGINX and an Ingress. Here are setup instructions: <a href="https://docs.bitnami.com/kubernetes/how-to/secure-kubernetes-services-with-ingress-tls-letsencrypt/" rel="nofollow noreferrer">authentication-ingress-nginx</a>, <a href="https://banzaicloud.com/blog/ingress-auth/" rel="nofollow noreferrer">ingress-auth</a>. </p> <p>In terms of security, however, HTTP basic authentication is not a sufficiently secure method on its own. The problem is that unless TLS is strictly enforced for the entire exchange, the credentials are transmitted in the clear over insecure lines. This lends itself to man-in-the-middle attacks, where an attacker can simply capture the login data and authenticate via a copy-cat HTTP header attached to a malicious packet.</p> <p>Here is an overview of authorization in the official Kubernetes documentation: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">authorization-kubernetes</a>.</p> <p>If you are looking for a more robust solution, use API keys or an OAuth provider such as Google or Auth0; see developers.google.com/identity/protocols/OAuth2WebServer and developers.google.com/identity/protocols/OAuth2UserAgent. There are many options for authentication and authorization. The terms above are explained here: <a href="https://nordicapis.com/3-common-methods-api-authentication-explained/" rel="nofollow noreferrer">api-authentication</a>.</p> <p>An approach to authenticating users with Auth0 on GCP: <a href="https://cloud.google.com/endpoints/docs/openapi/authenticating-users-auth0" rel="nofollow noreferrer">authentication-gcp-app</a>.</p> <p>Please let me know if it helps.</p>
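<p>As a concrete sketch of the ingress-nginx route, assuming the controller is already installed: the hostname and names below are placeholders, and the htpasswd secret must be created first (<code>htpasswd -c auth myuser</code>, then <code>kubectl create secret generic basic-auth --from-file=auth</code>):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sandbox-ingress
  annotations:
    # ingress-nginx annotations that switch on HTTP basic auth
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: sandbox.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: testSanbox
          servicePort: 80
</code></pre> <p>With this in place the deployment would be exposed through a plain ClusterIP Service behind the ingress controller rather than its own LoadBalancer.</p>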
<hr /> <h1>Background</h1> <p>I'm attempting to configure a cluster via <code>kubeadm</code>. I normally create the (test) cluster via:</p> <pre><code>sudo kubeadm init --pod-network-cidr 10.244.0.0/16
</code></pre> <p>This parameter appears to eventually find its way into the static pod definition for the controllerManager (<code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code>):</p> <pre><code>- --cluster-cidr=10.244.0.0/16
</code></pre> <p>Larger portions of <code>sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml</code>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - ...
    - --cluster-cidr=10.244.0.0/16
</code></pre> <hr /> <h1>Question 1:</h1> <p>How can I pass this setting, <code>--pod-network-cidr=10.244.0.0/16</code> via a config file, i.e. <code>kubeadm init --config my_config.yaml</code>? I found a <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/admin/kubeadm/" rel="nofollow noreferrer">sample config file template on an unofficial K8S documentation wiki</a>, but I can't seem to find any documentation at all that maps these command-line arguments to <code>kubeadm</code> to their <code>kubeadm_config.yaml</code> equivalents.</p> <p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/" rel="nofollow noreferrer">There's also a document showing how I can create a baseline static pod definition/<code>yaml</code></a> via <code>kubeadm config print init-defaults &gt; kubeadm_config.yaml</code>, but again, no documentation that shows how to set <code>pod-network-cidr</code> by modifying and applying this <code>yaml</code> file (i.e. 
<code>kubeadm upgrade -f kubeadm_config.yaml</code>).</p> <p><strong>Sample output of <code>kubeadm config view</code>:</strong></p> <pre><code>apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.4
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
</code></pre> <hr /> <h1>Question 2:</h1> <p>How can I do the above, but pass something like <code>--experimental-cluster-signing-duration=0h30m0s</code>? I'd like to experiment with tests involving manually/automatically renewing all <code>kubeadm</code>-related certs.</p> <hr />
<p><strong>1.</strong> According to the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file" rel="nofollow noreferrer">official documentation</a>:</p> <blockquote> <p>It’s possible to configure <code>kubeadm init</code> with a configuration file instead of command line flags, and some more advanced features may only be available as configuration file options. This file is passed with the <code>--config</code> option.</p> <p>The default configuration can be printed out using the <code>kubeadm config print</code> <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/" rel="nofollow noreferrer">command</a>.</p> <p>It is recommended that you migrate your old v1beta1 configuration to v1beta2 using the <code>kubeadm config migrate</code> <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/" rel="nofollow noreferrer">command</a>.</p> <p>During <code>kubeadm init</code>, kubeadm uploads the ClusterConfiguration object to your cluster in a ConfigMap called kubeadm-config in the kube-system namespace. This configuration is then read during <code>kubeadm join</code>, <code>kubeadm reset</code> and <code>kubeadm upgrade</code>. To view this ConfigMap call <code>kubeadm config view</code>.</p> <p>You can use <code>kubeadm config print</code> to print the default configuration and <code>kubeadm config migrate</code> to convert your old configuration files to a newer version. 
<code>kubeadm config images list</code> and <code>kubeadm config images pull</code> can be used to list and pull the images that kubeadm requires.</p> </blockquote> <p>Subnets are defined by the <code>--pod-network-cidr</code> argument to kubeadm OR by a config file such as the example below:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
api:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
kubernetesVersion: v1.12.1
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/24
</code></pre> <p><strong>2.</strong> I was not able to find anything like this in the official documentation or in other sources. </p> <p>You can instead use the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">kube-controller-manager</a> flags to pass that kind of configuration.</p> <p>Please let me know if that helped.</p>
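<p>For reference, a v1beta2-style file matching the <code>kubeadm config view</code> output from the question would look roughly like the sketch below and be applied with <code>kubeadm init --config kubeadm_config.yaml</code>. The <code>controllerManager.extraArgs</code> map is the documented way to pass extra controller-manager flags through kubeadm, though I have not verified the signing-duration flag end to end:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
controllerManager:
  extraArgs:
    experimental-cluster-signing-duration: 0h30m0s
</code></pre>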
<p>I have a kubernetes cluster on GCP which I created using the GCP dashboard. I created 3 default nodes in it. </p> <p>Now, I want to add my laptop as a node to this cluster. I found that we can use <code>kubeadm init</code> for that but whenever I try that in my gcloud console (after logging in using <code>gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project river-autumn-253318</code>) it says that I have only 1 CPU. </p> <p>Another problem is that the tutorials which I saw online always give out cluster IP as 192.x.x.x or 172.x.x.x after running <code>kubeadm init</code>, which, I understand are private IPs.</p> <p>How can I add my laptop as a node to the GCP Kubernetes Cluster. </p>
<p>I was able to solve the problem. All I had to do was add the flag <code>--control-plane-endpoint</code>.</p> <p>My final command was something like <code>kubeadm init --pod-network-cidr=10.240.0.0/16 --control-plane-endpoint=35.222.246.129</code></p> <p>Thank you everyone for your input. </p>
<p>When I do</p> <pre><code>kubectl get pods -A
</code></pre> <p>I get all pods, and I always have 17 pods that are not "apps"; they belong to the namespace <code>kube-system</code>. I would like to have an alias that does not print them. </p> <p>Is there a way to print all pods, excluding a namespace?</p>
<p>You can accomplish this via <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="noreferrer">field selectors</a>:</p> <pre><code>kubectl get pods -A --field-selector=metadata.namespace!=kube-system
</code></pre> <p>Additionally, the field selector list can take multiple comma-separated parameters, each using <code>==</code> or <code>!=</code>, to specify additional criteria, e.g. <code>kubectl get pods -A --field-selector=metadata.namespace!=kube-system,metadata.namespace!=kube-public</code>.</p>
<p>I have a working Kubernetes deployment of my application.</p> <pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: my-app
        image: my-image
        ...
        readinessProbe:
          httpGet:
            port: 3000
            path: /
        livenessProbe:
          httpGet:
            port: 3000
            path: /
</code></pre> <p>When I apply my deployment I can see it runs correctly and the application responds to my requests.</p> <pre><code>$ kubectl describe pod -l app=my-app
...
Events:
  Type    Reason     Age   From                                  Message
  ----    ------     ----  ----                                  -------
  Normal  Scheduled  4m7s  default-scheduler                     Successfully assigned XXX
  Normal  Pulled     4m5s  kubelet, pool-standard-4gb-2cpu-b9vc  Container image "my-app" already present on machine
  Normal  Created    4m5s  kubelet, pool-standard-4gb-2cpu-b9vc  Created container my-app
  Normal  Started    4m5s  kubelet, pool-standard-4gb-2cpu-b9vc  Started container my-app
</code></pre> <p>The application has a defect and crashes under certain circumstances. I "invoke" such a condition and then I see the following in pod events:</p> <pre><code>$ kubectl describe pod -l app=my-app
...
Events:
  Type     Reason     Age               From                                  Message
  ----     ------     ----              ----                                  -------
  Normal   Scheduled  6m45s             default-scheduler                     Successfully assigned XXX
  Normal   Pulled     6m43s             kubelet, pool-standard-4gb-2cpu-b9vc  Container image "my-app" already present on machine
  Normal   Created    6m43s             kubelet, pool-standard-4gb-2cpu-b9vc  Created container my-app
  Normal   Started    6m43s             kubelet, pool-standard-4gb-2cpu-b9vc  Started container my-app
  Warning  Unhealthy  9s                kubelet, pool-standard-4gb-2cpu-b9vc  Readiness probe failed: Get http://10.244.2.14:3000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  4s (x3 over 14s)  kubelet, pool-standard-4gb-2cpu-b9vc  Liveness probe failed: Get http://10.244.2.14:3000/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    4s                kubelet, pool-standard-4gb-2cpu-b9vc  Container crawler failed liveness probe, will be restarted
</code></pre> <p>It is expected that the liveness probe fails and the container is restarted. But why do I see a <code>Readiness probe failed</code> event?</p>
<p>As @suren wrote in the comment, the readiness probe is still executed after the container has started. Thus if both a liveness and a readiness probe are defined (and here they even share the same endpoint), both can fail at the same time.</p> <p>Here is <a href="https://stackoverflow.com/questions/44309291/kubernetes-readiness-probe-execution-after-container-started">a similar question with a clear, in-depth answer</a>.</p>
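<p>One way to make the failures easier to tell apart is to give the two probes distinct endpoints and thresholds. A sketch, assuming the app exposes (hypothetical) <code>/ready</code> and <code>/healthz</code> handlers:</p> <pre><code>readinessProbe:
  httpGet:
    port: 3000
    path: /ready
  periodSeconds: 5
  failureThreshold: 1   # stop routing traffic quickly
livenessProbe:
  httpGet:
    port: 3000
    path: /healthz
  periodSeconds: 10
  failureThreshold: 3   # restart only after repeated failures
</code></pre>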
<p>I have a docker container with a few environment variables set up. I know that in a typical use case, React is unable to read these variables, as the process.env variables are replaced during transpiling.</p> <p>I am trying to work around this without having to spin up a back-end server, by using a plugin called react-env.</p> <p>The core idea is to create a .env file which can be written to using a bash script. The bash script will copy any environment variables to the .env file. The plugin will generate an env.js file which will then be consumed by the React application at runtime.</p> <p><strong>Dockerfile</strong></p> <pre><code>FROM node:9

# Install yarn
RUN npm install yarn

# Install serve.js
RUN yarn global add serve

# Create app directory
WORKDIR /usr/app

# Copy all necessary files and grant permissions.
# - yarn.lock
# - package.json
# - .env
# - docker-entrypoint.sh
COPY . /usr/app/
COPY yarn.lock /usr/app
COPY package.json /usr/app
COPY .env /usr/app
COPY docker-entrypoint.sh /usr/app/docker-entrypoint.sh
RUN chmod 777 /usr/app/docker-entrypoint.sh

# Install dependencies.
RUN printf "\n" &gt;&gt; /usr/app/.env
# READ ENVIRONMENTAL VARIABLE HERE:
RUN echo 'REACT_APP_GCP_PROJECT_ID='$GCP_PROJECT_ID &gt;&gt; /usr/app/.env
RUN yarn

# Build application.
RUN yarn build

# Set entrypoint of application.
ENTRYPOINT [ "/usr/app/docker-entrypoint.sh" ]
</code></pre> <p><strong>docker-entrypoint.sh</strong></p> <pre><code>serve build
</code></pre> <p><strong>App.jsx</strong></p> <pre><code>...
console.log(env("GCP_PROJECT_ID"));
...
</code></pre> <p><strong>Inspect Console</strong></p> <pre><code>'' //Empty string.
</code></pre> <p>However, after making the image and deploying the application, the console outputs an empty string for the environment variable.</p> <p>Manually <code>exec</code>-ing into the container and running the <code>echo</code> command does work correctly:</p> <p><strong>Querying manually:</strong></p> <pre><code># echo 'REACT_APP_GCP_PROJECT_ID='$GCP_PROJECT_ID
REACT_APP_GCP_PROJECT_ID=SOME_VALUE
</code></pre> <p>but</p> <pre><code># cat .env
REACT_APP_SETUP="OK"
REACT_APP_GCP_PROJECT_ID=
</code></pre> <p><strong>How can I get my pod environment variables into my <code>.env</code> file before React runs?</strong></p>
<p>It looks like $GCP_PROJECT_ID is not defined in the build environment, and thus it isn't included in the echo command when it runs at build time. (I presume this is because you are having Kubernetes define it.)</p> <p>Assuming $GCP_PROJECT_ID does not exist in the build environment and is instead something you are trying to read at the time your container executes (as your exec example indicates), the problem is the quoting of the RUN line:</p> <pre><code>RUN echo 'REACT_APP_GCP_PROJECT_ID='$GCP_PROJECT_ID &gt;&gt; /usr/app/.env
</code></pre> <p>Here $GCP_PROJECT_ID is interpolated during the build, where it is of course the empty string. In your test in the shell inside the container, $GCP_PROJECT_ID actually exists in the environment, and all is well.</p> <p>You would need to quote $GCP_PROJECT_ID to keep it from being expanded (into an empty string) at build time:</p> <pre><code>RUN echo 'REACT_APP_GCP_PROJECT_ID=$GCP_PROJECT_ID' &gt;&gt; /usr/app/.env
</code></pre> <p>Note that this writes the literal text <code>$GCP_PROJECT_ID</code> into .env rather than an empty string, so it only helps if something expands that placeholder later. If what you need is the actual runtime value of $GCP_PROJECT_ID in the .env file, your only alternative may be to do the echo into .env at the beginning of the container execution (e.g. at the top of <code>docker-entrypoint.sh</code>), and remove all the related code from your Dockerfile.</p> <hr> <p>If I am wrong about kubectl defining it when the process starts, an alternative solution would require you to ensure that the docker build itself has access to that environment variable. You can do this with the <code>--build-arg</code> argument, such as:</p> <pre><code>docker build --build-arg GCP_PROJECT_ID=[correct value] .
</code></pre> <p>In this case, you need to add an ARG instruction to the Dockerfile, like so:</p> <pre><code>ARG GCP_PROJECT_ID
</code></pre> <p>at some point before the RUN command.</p>
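<p>A sketch of that runtime-echo alternative (the <code>serve</code> call is commented out so the script stays self-contained, and <code>ENV_FILE</code> is an illustrative override of the real <code>/usr/app/.env</code> path):</p>

```shell
#!/bin/sh
# docker-entrypoint.sh: append runtime environment variables to the .env file
# at container start, when $GCP_PROJECT_ID actually exists, then launch the app.
ENV_FILE="${ENV_FILE:-.env}"
echo "REACT_APP_GCP_PROJECT_ID=${GCP_PROJECT_ID}" >> "$ENV_FILE"
# exec serve build
```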
<p>This is more a design question than an issue. We have deployed in our company our own Kubernetes infrastructure and we are trying to use ingresses and NGINX ingress controller to externally expose our services, but since it is not a cloud environment such as GCP or AWS, we can't use service type "LoadBalancer". Should we just expose our ingress controller through a service type "NodePort"? Is that the normal way to go for production environments (non-cloud)?</p> <p>From what I've read in another <a href="https://stackoverflow.com/questions/52654769/how-to-expose-a-ingress-for-external-access-in-kubernetes">post</a>, one suitable recommendation is to use NodePort, and manually point yet another external load balancer to the port on your Kubernetes nodes.</p> <p>It just seems that exposing the ingress controller through this mechanism is somehow not very practical or robust (e.g. you don’t know what port your service is going to be allocated, and the port might get re-allocated at some point, etc.)</p> <p>Is there any other mechanism maybe to expose the ingress controller to the external world?</p>
<p>The LoadBalancer service approach is one way to do it, but behind the scenes it is nothing more than a NodePort on the cluster.</p> <p>Even if you use a service that creates an LB on a cloud provider, the LB still needs a target port on the nodes to communicate with the cluster.</p> <p>When using an nginx-ingress that will mostly handle web requests, it is common practice to expose the ingress controller through a NodePort service and point the external load balancer at it.</p> <p>So with this I think using NodePort services is a good idea to do what you want ;)</p> <p>This is my opinion; I'm interested if anyone else has another way to do it.</p>
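<p>On the port-reallocation concern specifically: the node port does not have to be random; it can be pinned in the controller's Service spec so the external load balancer always has a stable target. Names and port numbers below are illustrative, and the chosen values must fall within the cluster's node-port range (30000-32767 by default):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # fixed port the external LB targets on every node
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
</code></pre>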
<p>The use case I'm trying to get my head around takes place when you have various burstable pods scheduled on the same node. How can you ensure that the workload in a specific pod takes priority over another pod when the node's kernel is scheduling CPU and the CPU is fully burdened? In a typical Linux host my thoughts on contention between processes immediately goes to 'niceness' of the processes, however I don't see any equivalent k8s mechanism allowing for specification of CPU scheduling priority between the processes within pods on a node.</p> <p>I've read of the <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/" rel="nofollow noreferrer">newest capabilities</a> provided by k8s which (if I interpret the documentation correctly) is just providing a mechanism for CPU pinning to pods which doesn't really scratch my itch. I'd still like to maximize CPU utilization by the "second class" pods if the higher priority pods don't have an active workload while allowing the higher priority workload to have CPU scheduling priority should the need arise.</p> <p>So far, having not found a satisfactory answer I'm thinking that the community will opt for an architectural solution, like auto-scaling or segregating the workloads between nodes. I don't consider these to be truly addressing the issue, but really just throwing more CPUs at it which is what I'd like to avoid. Why spin up more nodes when you've got idle CPU?</p>
<p>Let me first explain how CPU allocation and utilization happen in k8s (memory is a bit different).</p> <p>You define CPU requirements as below, where CPU is measured in thousandths of a core (so <code>50m</code> is 5% of one core):</p> <pre><code>resources:
  requests:
    cpu: 50m
  limits:
    cpu: 100m
</code></pre> <p>In the above example, we ask for a minimum of 5% and a maximum of 10% of a core.</p> <p>Requests are used by Kubernetes to schedule the pod. A pod is only scheduled on a node that still has at least 5% of a CPU unreserved.</p> <p>Requests are also passed to docker (or any other runtime), which configures <code>cpu.shares</code> in cgroups. This is the closest equivalent to process niceness: a relative weight that only matters while the CPU is contended. Limits, by contrast, are enforced as a CFS quota (<code>cpu.cfs_quota_us</code>), a hard ceiling on usage.</p> <p>So if you request 5% of the CPU and use only 1%, the remainder is not locked to this pod; other pods can use the free cycles, which keeps the node's CPU utilization high.</p> <p>If you limit to 10% and then try to use more, Linux will throttle the CPU usage, but it won't kill the pod.</p> <p>So coming to your question: you can set higher limits for your burstable pods, and unless all of them burst at the same time you are fine. If they do burst at the same time, the available CPU is divided between them in proportion to their requests (their <code>cpu.shares</code> weights), so a pod with a larger request wins more CPU time, which is the scheduling priority you are after.</p> <p>You can also use pod <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">affinity-and-anti-affinity</a> to schedule all burstable pods on different nodes.</p>
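<p>A sketch of the anti-affinity idea, assuming the burstable pods carry an illustrative label <code>class: burstable</code> in their pod template:</p> <pre><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: class
            operator: In
            values: ["burstable"]
        topologyKey: kubernetes.io/hostname
</code></pre> <p>With <code>preferred</code> rather than <code>required</code>, the scheduler spreads the burstable pods across nodes when it can, but still schedules them if it cannot.</p>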
<p>I am trying to run some Ansible tasks with the k8s module. Locally this works perfectly, but on my Jenkins instance it fails with the following error message:</p> <blockquote> <p>...</p> <p>MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xxxxxxxxxxxxxx', port=443): Max retries exceeded with url: /version (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 }</p> </blockquote> <p>I am quite sure this is because Jenkins requires a proxy to communicate with the outside world. I've seen how to set up Ansible to use a proxy, but that does not seem to work with the k8s module. Any ideas? Here's what I've tried so far:</p> <pre><code>- hosts: ansible_server
  connection: local
  gather_facts: no
  environment:
    https_proxy: "xxx"
    http_proxy: "xxx"
  tasks:
    - name: Gather facts to check connectivity
      k8s_facts:
        api_key: "{{api_key}}"
        host: "{{cluster_url}}"
        kind: Project
      register: listed_projects
</code></pre> <p>PS: I added the -vvv flag and can see that it tries to use the proxy somehow:</p> <blockquote> <p> EXEC /bin/sh -c '/usr/bin/python &amp;&amp; sleep 0' Using module file /usr/lib/python2.7/site-packages/ansible/modules/clustering/k8s/k8s_facts.py PUT /root/.ansible/tmp/ansible-local-1fHx5f6/tmpDUhlNa TO /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/AnsiballZ_k8s_facts.py EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/ /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/AnsiballZ_k8s_facts.py &amp;&amp; sleep 0' EXEC /bin/sh -c 'https_proxy=xxx http_proxy=xxx /usr/bin/python /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/AnsiballZ_k8s_facts.py &amp;&amp; sleep 0' EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1570565569.96-190678136757098/ > /dev/null 2>&amp;1 
&amp;&amp; sleep 0'</p> </blockquote>
<p>I agree with @ilias-sp, but it also appears that <a href="https://github.com/ansible/ansible/blob/v2.8.5/lib/ansible/module_utils/k8s/common.py" rel="nofollow noreferrer"><code>k8s/common.py</code></a> does not support the <a href="https://github.com/kubernetes-client/python/blob/v10.0.1/kubernetes/client/configuration.py#L103" rel="nofollow noreferrer"><code>configuration.proxy</code> attribute</a>, and as best I can tell <code>urllib3</code> does not honor those proxy environment variables the way "normal" urllib does, opting instead to use its own <code>ProxyManager</code> that is driven by an explicit constructor kwarg.</p> <p>However, thanks to the "override" mechanism of ansible, I believe you can test this theory:</p> <ol> <li>Copy <code>k8s_facts.py</code> into the <code>library</code> folder of your playbook</li> <li>Modify it to expose <code>proxy</code> in the <a href="https://github.com/ansible/ansible/blob/v2.8.5/lib/ansible/module_utils/k8s/common.py#L128-L139" rel="nofollow noreferrer"><code>AUTH_ARG_MAP</code></a>, which I believe the patch below will do (the patch is against v2.8.5 so you may need to fiddle with it if your version is different)</li> <li><p>Explicitly set your <code>proxy:</code> attribute on your new <code>k8s_facts</code> module and see if it works</p> <pre><code>- k8s_facts:
    host: api-server-whatever
    kind: Project
    proxy: http://my-proxy:3128
</code></pre></li> <li>Assuming it does, <a href="https://github.com/ansible/ansible/issues" rel="nofollow noreferrer">open an issue in ansible</a> to let them know</li> </ol> <pre><code>--- a/library/k8s_facts.py	2019-10-08 22:23:24.000000000 -0700
+++ b/library/k8s_facts.py	2019-10-08 22:24:50.000000000 -0700
@@ -130,13 +130,14 @@
 '''

-from ansible.module_utils.k8s.common import KubernetesAnsibleModule, AUTH_ARG_SPEC
+from ansible.module_utils.k8s.common import KubernetesAnsibleModule, AUTH_ARG_SPEC, AUTH_ARG_MAP
 import copy


 class KubernetesFactsModule(KubernetesAnsibleModule):

     def __init__(self, *args, **kwargs):
+        AUTH_ARG_MAP['proxy'] = 'proxy'
         KubernetesAnsibleModule.__init__(self, *args, supports_check_mode=True, **kwargs)
@@ -163,6 +164,7 @@
             namespace=dict(),
             label_selectors=dict(type='list', default=[]),
             field_selectors=dict(type='list', default=[]),
+            proxy=dict(type='str', required=False),
         )
     )
     return args
</code></pre>
<p>I have a Java app in k8s and fluentd (daemonset). In the fluentd conf:</p> <pre><code>&lt;source&gt;
  @type forward
  port 24224
&lt;/source&gt;

&lt;match **&gt;
  @type stdout
&lt;/match&gt;
</code></pre> <p>I am a little bit confused. Do I need to use the fluentd-logger-java lib? I read in the docs that I need to add a remote host for fluentd, but here I don't use a service at all. How will the app send logs to the fluentd pods?</p> <p>Thanks in advance!</p>
<p>Given that your Java application can log to <code>stdout</code> and <code>stderr</code> you’ll use fluentd to read that log and, in most cases, ship these logs to a system that can aggregate the logs.</p> <p>This picture, from the <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">official docs</a>, shows a common pattern of configuring node-level logging in Kubernetes with e.g. fluentd as Pods deployed with a DaemonSet:</p> <p><a href="https://i.stack.imgur.com/givOm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/givOm.jpg" alt="enter image description here"></a></p> <p>In the above picture, the <code>logging-agent</code> will be fluentd and <code>my-pod</code> will be your Pod with a container running your Java app. The <code>Logging Backend</code>, from a fluentd configuration perspective, is optional but of course highly recommended. Basically you can choose to output your logs via fluentd <code>stdout</code>.</p> <p>For this to function properly fluentd will need read access to the container logs, this is accomplished by mounting the log dir e.g. <code>/var/lib/docker/containers</code> into the fluentd container.</p> <p>We’ve successfully used <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml" rel="nofollow noreferrer">this fluentd example ConfigMap</a>, with some modifications to read logs from the nodes and ship them to Elasticsearch. Check out the <code>containers.input.conf</code> part of that ConfigMap for more info on container logs and how to digest them.</p> <p>Note that you shouldn't need to use the <code>fluentd-logger-java</code> library to start using fluentd, although you could use it as another type of logger in your Java application. Out-of-the-box you should be able to let Java log everything to <code>stdout</code> and <code>stderr</code> and read the logs with fluentd.</p>
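<p>For completeness, a node-level fluentd agent usually tails the container log files rather than listening on a forward port. A trimmed sketch in the style of that ConfigMap (paths assume the Docker runtime writing JSON logs):</p> <pre><code>&lt;source&gt;
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  &lt;parse&gt;
    @type json
  &lt;/parse&gt;
&lt;/source&gt;
</code></pre>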
<p>I want to use the k8s go client to exec a command in a pod. However I cannot find any example of this. So I read the <code>kubectl exec</code> source code, and wrote the code below. And <code>err = exec.Stream(sopt)</code> always returns an error without any message. Can anyone tell me how to debug this problem, or give me a correct example?</p> <pre class="lang-golang prettyprint-override"><code>config := &amp;restclient.Config{
	Host:     &quot;http://192.168.8.175:8080&quot;,
	Insecure: true,
}
config.ContentConfig.GroupVersion = &amp;api.Unversioned
config.ContentConfig.NegotiatedSerializer = api.Codecs

restClient, err := restclient.RESTClientFor(config)
if err != nil {
	panic(err.Error())
}

req := restClient.Post().Resource(&quot;pods&quot;).Name(&quot;wordpress-mysql-213049546-29s7d&quot;).Namespace(&quot;default&quot;).SubResource(&quot;exec&quot;).Param(&quot;container&quot;, &quot;mysql&quot;)
req.VersionedParams(&amp;api.PodExecOptions{
	Container: &quot;mysql&quot;,
	Command:   []string{&quot;ls&quot;},
	Stdin:     true,
	Stdout:    true,
}, api.ParameterCodec)

exec, err := remotecommand.NewExecutor(config, &quot;POST&quot;, req.URL())
if err != nil {
	panic(err.Error())
}
sopt := remotecommand.StreamOptions{
	SupportedProtocols: remotecommandserver.SupportedStreamingProtocols,
	Stdin:              os.Stdin,
	Stdout:             os.Stdout,
	Stderr:             os.Stderr,
	Tty:                false,
}
err = exec.Stream(sopt)
if err != nil {
	panic(err.Error())
}
</code></pre>
<pre><code>package k8s

import (
	"io"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// ExecCmdExample execs a command on a specific pod and waits for the command's output.
func ExecCmdExample(client kubernetes.Interface, config *restclient.Config, podName string,
	command string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error {
	cmd := []string{
		"sh",
		"-c",
		command,
	}
	req := client.CoreV1().RESTClient().Post().Resource("pods").Name(podName).
		Namespace("default").SubResource("exec")
	option := &amp;v1.PodExecOptions{
		Command: cmd,
		Stdin:   true,
		Stdout:  true,
		Stderr:  true,
		TTY:     true,
	}
	if stdin == nil {
		option.Stdin = false
	}
	req.VersionedParams(
		option,
		scheme.ParameterCodec,
	)
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return err
	}
	err = exec.Stream(remotecommand.StreamOptions{
		Stdin:  stdin,
		Stdout: stdout,
		Stderr: stderr,
	})
	if err != nil {
		return err
	}

	return nil
}
</code></pre> <p>It works for me.</p>
<p>I want to access limits.memory variable returned by get command in k8s</p> <pre><code>kubectl get resourcequota default -n 103000-p4-dev -o custom-columns=USED:.status.used USED map[limits.memory:0 requests.cpu:0 requests.memory:0] </code></pre> <p>I tried many ways but couldn't succeed</p> <pre><code>[root@iaasn00126847 ~]# k get resourcequota default -n 103000-p4-dev -o custom-columns=USED:.status.used.limits.memory </code></pre> <p>returns nothing</p> <p>Is there a delimiter to fetch the same</p>
<p>Try with <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="noreferrer">jsonpath</a>:</p> <pre><code>kubectl get resourcequota default -n 103000-p4-dev -o jsonpath="{.status.used.limits\.memory}"
</code></pre> <p>This is what I tried:</p> <pre><code>$ kubectl apply -f https://k8s.io/examples/admin/resource/quota-mem-cpu.yaml
resourcequota/mem-cpu-demo created

$ kubectl get resourcequota
NAME           CREATED AT
mem-cpu-demo   2019-10-09T06:38:39Z
$
$ kubectl get resourcequota mem-cpu-demo -o json
{
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"ResourceQuota\",\"metadata\":{\"annotations\":{},\"name\":\"mem-cpu-demo\",\"namespace\":\"default\"},\"spec\":{\"hard\":{\"limits.cpu\":\"2\",\"limits.memory\":\"2Gi\",\"requests.cpu\":\"1\",\"requests.memory\":\"1Gi\"}}}\n"
        },
        "creationTimestamp": "2019-10-09T06:38:39Z",
        "name": "mem-cpu-demo",
        "namespace": "default",
        "resourceVersion": "975",
        "selfLink": "/api/v1/namespaces/default/resourcequotas/mem-cpu-demo",
        "uid": "0d74d782-b717-4845-a0da-424776c05d45"
    },
    "spec": {
        "hard": {
            "limits.cpu": "2",
            "limits.memory": "2Gi",
            "requests.cpu": "1",
            "requests.memory": "1Gi"
        }
    },
    "status": {
        "hard": {
            "limits.cpu": "2",
            "limits.memory": "2Gi",
            "requests.cpu": "1",
            "requests.memory": "1Gi"
        },
        "used": {
            "limits.cpu": "0",
            "limits.memory": "0",
            "requests.cpu": "0",
            "requests.memory": "0"
        }
    }
}
$
$ kubectl get resourcequota mem-cpu-demo -o jsonpath="{.status.used}"
map[limits.cpu:0 limits.memory:0 requests.cpu:0 requests.memory:0]$
$
$ kubectl get resourcequota mem-cpu-demo -o jsonpath="{.status.used.limits\.memory}"
0
$
$ kubectl get resourcequota mem-cpu-demo -o jsonpath="{.status.hard.limits\.memory}"
2Gi
$
</code></pre>
<p>I'm currently exploring running an Istio / Kubernetes cluster on AWS using EKS. I would like to be able to assign a different IAM role to each service running in the cluster to limit the AWS privileges of each service.</p> <p>In non-Istio Kubernetes clusters this facility is provided by projects such as <a href="https://github.com/jtblin/kube2iam" rel="nofollow noreferrer">kube2iam</a> but this doesn't seem ideal in the Istio world as <code>kube2iam</code> relies on <code>iptables</code> rules and Istio is already using <code>iptables</code> rules to divert all outbound traffic to the Envoy sidecar.</p> <p>The Istio <a href="https://istio.io/docs/concepts/security/" rel="nofollow noreferrer">security documentation</a> says that identity model caters for different underlying implementations and on AWS that implementation is IAM:</p> <blockquote> <p>In the Istio identity model, Istio uses the first-class service identity to determine the identity of a service. This gives great flexibility and granularity to represent a human user, an individual service, or a group of services. On platforms that do not have such identity available, Istio can use other identities that can group service instances, such as service names.</p> <p>Istio service identities on different platforms:</p> <p>Kubernetes: Kubernetes service account<br /> GKE/GCE: may use GCP service account<br /> GCP: GCP service account<br /> AWS: AWS IAM user/role account</p> </blockquote> <p>But I haven't come across any additional documentation about how to assign IAM roles to Istio <a href="https://istio.io/docs/concepts/security/#servicerole" rel="nofollow noreferrer">ServiceRoles</a>.</p> <p>Has anyone found a solution to this?</p> <p>UPDATE: See <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">IRSA</a></p>
<p>The Istio configuration allows you to exclude some IP ranges from the proxy: <a href="https://istio.io/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services</a></p> <p>So if you add <code>global.proxy.excludeIPRanges: "169.254.169.254/32"</code> to the Istio configuration, requests to the AWS metadata IP will not be handled by Istio but sent directly to this IP.</p> <p>This allows the kube2iam iptables rule to be applied.</p> <p>cf. <a href="https://github.com/istio/istio/issues/9297#issuecomment-516353921" rel="nofollow noreferrer">https://github.com/istio/istio/issues/9297#issuecomment-516353921</a></p>
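<p>If Istio is installed via Helm, the setting described above would go into the values file roughly like this (a sketch only — the exact key layout can differ between Istio chart versions, so double-check against your release):</p> <pre><code>global:
  proxy:
    # Bypass the sidecar for the AWS metadata endpoint so that the
    # kube2iam iptables rule can intercept these calls instead.
    excludeIPRanges: "169.254.169.254/32"
</code></pre>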
<p>I want to know if I can manipulate different CRDs in my Job/CronJob in Kubernetes. The documentation only shows a pod template, but I want to use different objects.</p>
<p>No, Jobs and CronJobs do not support any resource other than pods, as pods are the only resource that runs containers in Kubernetes, and Kubernetes will need to spin up a Pod in order to run your task.</p> <p>If you want a CronJob that works with your CRDs, then you might need to create your own CronJob spec. A very good example of this is the <a href="https://book.kubebuilder.io/cronjob-tutorial/cronjob-tutorial.html" rel="nofollow noreferrer">Kubebuilder tutorial</a>, where they rebuild the whole CronJob controller.</p>
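<p>A common workaround is to have the CronJob's Pod run <code>kubectl</code> itself and manipulate the custom resources from inside the container. A rough sketch (the image, manifest path, and ServiceAccount name are assumptions — the ServiceAccount needs RBAC permissions on your CRD):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: apply-my-crd
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: crd-editor   # assumed SA with RBAC rights on the CRD
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest  # assumed image that ships kubectl
            command: ["kubectl", "apply", "-f", "/manifests/my-custom-resource.yaml"]
</code></pre>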
<p>If I have 2 pods, is there a way for them to talk to each other without any other resource being created and used?</p> <p>The question applies to both situations - whether they are in the same namespace or in different ones.</p>
<p>Yes, they can!</p> <p>Assuming you don't have any network policies restricting the calls, each pod just needs to know the other's DNS name. This is how it works:</p> <ul> <li>the cluster must have DNS enabled</li> <li>if the pods are manually created in the same namespace (not via a deployment), you just need to make a call to the pod name, which acts as the host. <ul> <li>POD1 running in namespace NS1, exposing the container port 31333</li> <li>POD2 running in namespace NS1</li> <li>POD2 calls POD1 via <a href="http://POD1:31333" rel="nofollow noreferrer">http://POD1:31333</a></li> </ul></li> <li>if the pods are in different namespaces, you need to include the namespace in the host. <ul> <li>POD1 running in namespace NS1, exposing the container port 31333</li> <li>POD2 running in namespace NS2</li> <li>POD2 calls POD1 via <a href="http://POD1.NS1:31333" rel="nofollow noreferrer">http://POD1.NS1:31333</a></li> </ul></li> <li>if the pod is created by a deployment, its name is dynamic and hard to predict; in this case, you need a service to expose the pods to others using a common name (the service). <ul> <li>DEPLOYMENT1 deployed to namespace NS1 will create a pod named in the format deploymentname-hash (example: DEPLOYMENT1-f82hsh)</li> <li>DEPLOYMENT1-f82hsh is the pod created by the deployment, running in namespace NS1 and exposing the container port 31333</li> <li>POD2 running in namespace NS2</li> <li>POD2 could call DEPLOYMENT1-f82hsh via <a href="http://DEPLOYMENT1-f82hsh.NS1:31333" rel="nofollow noreferrer">http://DEPLOYMENT1-f82hsh.NS1:31333</a>, but because the name is dynamic, at any time it could change to something else and break POD2</li> <li>The solution is to deploy a service SVC1 that forwards requests to the DEPLOYMENT1 pods</li> <li>POD2 then calls <a href="http://SVC1:31333" rel="nofollow noreferrer">http://SVC1:31333</a>, which forwards the call to DEPLOYMENT1-f82hsh or whatever pod is available in DEPLOYMENT1.</li> </ul></li> </ul> <p>The scenarios above
assume you <strong>haven't</strong> set the hostname or subdomain on the pod and are using the default configuration.</p> <p>In more advanced scenarios you would also use the cluster DNS suffix to call these services. The following docs describe everything in more detail: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></p>
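<p>To illustrate the last scenario, a minimal Service sketch could look like this (the names are the hypothetical ones from above, lowercased because Kubernetes object names must be lowercase; the label selector is an assumption):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: svc1
  namespace: ns1
spec:
  # forwards to any pod carrying this label, regardless of its generated name
  selector:
    app: deployment1
  ports:
  - port: 31333
    targetPort: 31333
</code></pre> <p>POD2 could then reliably call <code>http://svc1.ns1:31333</code>.</p>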
<p>I have a DigitalOcean Kubernetes cluster. I have installed the NGINX Ingress Controller via Helm &amp; also installed Prometheus &amp; Grafana.</p> <p>My ingresses are in the <code>default</code> namespace; my monitoring is in a <code>monitoring</code> namespace.</p> <p>Here are the versions of the charts I have installed:</p> <pre><code>❯ helm list
NAME            REVISION  UPDATED                   STATUS    CHART                 APP VERSION  NAMESPACE
grafana         1         Mon Oct  7 08:04:15 2019  DEPLOYED  grafana-3.8.18        6.3.5        monitoring
metrics-server  1         Thu Aug 29 09:07:21 2019  DEPLOYED  metrics-server-2.8.2  0.3.2        kube-system
nginx-ingress   1         Wed Aug 21 21:32:06 2019  DEPLOYED  nginx-ingress-1.17.1  0.25.1       default
prometheus      1         Mon Oct  7 09:24:21 2019  DEPLOYED  prometheus-9.1.2      2.11.1       monitoring
</code></pre> <p>I'm trying to get some NGINX metrics, so I can monitor them in Grafana.</p> <p>However, none of them appear in the Prometheus UI. I have tried adding the prometheus flags to all my ingresses, but still get nothing. e.g.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ****-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: nginx
    enable-vts-status: "true"
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
</code></pre> <p>I used the default <a href="https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml" rel="nofollow noreferrer">values.yaml</a> file for the nginx controller, but I did change it to enable metrics:</p> <pre><code>metrics:
  enabled: true
</code></pre> <p>From what I read, it should work out of the box.
So I have no idea what's going wrong.</p> <p><a href="https://i.stack.imgur.com/sQjOP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sQjOP.png" alt="Prometheus Service Discovery"></a> <a href="https://i.stack.imgur.com/JuOb1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JuOb1.png" alt="Prometheus Search Query"></a></p> <p>One thing that does look suspicious is that the service discovery doesn't seem to be monitoring any services, but I've never used Prometheus and I'm at a dead end with what to look for.</p> <p>Thank you</p>
<p>I got there in the end. I had to create a Prometheus job to scrape the NGINX metrics server. I figured that the difference between my setup and all the examples online (even though I used the same Helm charts) is that I have a separate NGINX metrics service running on a different port, so out-of-the-box Prometheus was not configured to check it.</p> <pre><code>- job_name: nginx-ingress
  metrics_path: /metrics
  scrape_interval: 5s
  static_configs:
    - targets:
        - nginx-ingress-controller-metrics:9913
</code></pre> <p>I added that to the Prometheus ConfigMap and reloaded the config.</p>
<p>I'm trying to create a Persistent Volume on top of/based off of an existing Storage Class Name. Then I want to attach the PVC to it, so that they are bound. Running the code below will give me the "sftp-pv-claim" I want, but it is not bound to my PV ("sftp-pv-storage"). Its status is "pending".</p> <p>The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type". If anyone can point me in the right direction as to why I'm getting the error message, it'd be much appreciated.</p> <p><strong>Specs:</strong></p> <p>I'm creating the PV and PVC using a helm chart.</p> <p>I'm using the Rancher UI to see if they are bound or not and if the PV is generated.</p> <p>The storage I'm using is Ceph with Rook (to allow for dynamic provisioning of PVs).</p> <p><strong>Error:</strong></p> <p>The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type".</p> <p><strong>Attempts:</strong></p> <p>I've tried using claimRef and matchLabels to no avail.</p> <p>I've added "volumetype: none" to my PV specs.</p> <p>If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bound to it.
(Also, for deployment purposes I don't want to use hostPath.)</p> <pre><code>## Create Persistent Storage for SFTP
## Ref: https://www.cloudtechnologyexperts.com/kubernetes-persistent-volume-with-rook/
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
  labels:
    type: local
    name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  allowVolumeExpansion: true
  volumetype: none

---
## Create Claim (links user to PV)
## ==&gt; If pod is created, need to automatically create PVC for user (without their input)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-pv-claim
spec:
  storageClassName: sftp-pv-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
</code></pre>
<p><strong>The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type.</strong></p> <p>In the PV manifest you must provide the type of volume. The list of all supported types is described <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">here</a>. As you are using <code>Ceph</code>, I assume you will use <code>CephFS</code>.</p> <blockquote> <p>A cephfs volume allows an existing CephFS volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a cephfs volume are preserved and the volume is merely unmounted. This means that a CephFS volume can be pre-populated with data, and that data can be “handed off” between Pods. CephFS can be mounted by multiple writers simultaneously.</p> </blockquote> <p>An example of <code>CephFS</code> can be found on <a href="https://github.com/kubernetes/examples/tree/master/volumes/cephfs" rel="nofollow noreferrer">GitHub</a>.</p> <p><strong>If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bound to it.</strong></p> <p>Check the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1" rel="nofollow noreferrer">official Kubernetes docs</a> about <code>storageClassName</code>:</p> <blockquote> <p>A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.</p> </blockquote> <p>The <code>storageClassName</code> of your <code>PV</code> and <code>PVC</code> differ.</p> <p>PV:</p> <pre><code>spec:
  storageClassName: rook-ceph-block
</code></pre> <p>PVC:</p> <pre><code>spec:
  storageClassName: sftp-pv-storage
</code></pre> <p>Hope it will help.</p>
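<p>For illustration, a PV with a <code>cephfs</code> volume type added could look roughly like this (a sketch only — the monitor address, user, and secret name are assumptions that depend on your Rook/Ceph setup):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  cephfs:                      # the previously missing volume type
    monitors:
      - 10.16.154.78:6789      # assumed Ceph monitor address
    user: admin
    secretRef:
      name: ceph-secret        # assumed secret holding the Ceph key
    readOnly: false
</code></pre>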
<p>I have tried the database application for MySQL and Postgres too. When I use a PV of type OSS and deploy the application, the pods go into CrashLoopBackOff. The error which I am constantly getting is:</p> <pre><code>chown: changing ownership of '/var/lib/mysql/': Input/output error
</code></pre> <p>(For PostgreSQL I get the same error with <code>var/lib/postgresql/data</code>.)</p> <p>The path for which it gives the error is inside the container. What I tried: before uploading files to OSS, I changed the ownership of files and folders from 999 to root and then uploaded them to OSS. But I am still getting this error for every database. Please give me a solution for this as I have been stuck for a long time. Thanks in advance.</p>
<p>If I understand you correctly there are a few things you can do:</p> <ol> <li><p>Launch the db container as <code>root</code> and then <code>chown</code> the directory. In the case of mysql, if you still can't change it, then try running <code>sudo chown -R mysql:mysql /var/lib/mysql</code>, as <code>mysql:mysql</code> is the default ownership there.</p></li> <li><p>Use an <code>initContainer</code> to change the ownership of the target folder <code>/var/lib/mysql/</code>.</p></li> <li><p>Use a <code>securityContext</code> for the <code>containers</code>. For example:</p></li> </ol> <pre><code>containers:
  - name: mysql
    image: &lt;msql_image&gt;
    securityContext:
      runAsUser: 0
</code></pre> <p>It should all also work for <code>postgresql</code>.</p> <p>Please let me know if that helped.</p>
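<p>For option 2, an <code>initContainer</code> sketch could look roughly like this (the volume name and image are assumptions; 999 is the default <code>mysql</code> uid in the official image):</p> <pre><code>initContainers:
  - name: fix-permissions
    image: busybox
    # chown the data directory before the mysql container starts
    command: ["sh", "-c", "chown -R 999:999 /var/lib/mysql"]
    volumeMounts:
      - name: data             # assumed name; must match the data volume of the pod
        mountPath: /var/lib/mysql
</code></pre>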
<p>I just created a cluster on GKE with 2 n1-standard-2 nodes and installed a prometheusOperator using the official <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">helm</a>.</p> <p>Prometheus seems to be working fine but i'm getting alerts like this :</p> <pre><code>message: 33% throttling of CPU in namespace kube-system for container metrics-server in pod metrics-server-v0.3.1-8d4c5db46-zddql. 22 minutes agocontainer: metrics-serverpod: metrics-server-v0.3.1-8d4c5db46-zddql message: 35% throttling of CPU in namespace kube-system for container heapster-nanny in pod heapster-v1.6.1-554bfbc7d-tg6fm. an hour agocontainer: heapster-nannypod: heapster-v1.6.1-554bfbc7d-tg6fm message: 77% throttling of CPU in namespace kube-system for container prometheus-to-sd in pod prometheus-to-sd-789b2. 20 hours agocontainer: prometheus-to-sdpod: prometheus-to-sd-789b2 message: 45% throttling of CPU in namespace kube-system for container heapster in pod heapster-v1.6.1-554bfbc7d-tg6fm. 20 hours agocontainer: heapsterpod: heapster-v1.6.1-554bfbc7d-tg6fm message: 38% throttling of CPU in namespace kube-system for container default-http-backend in pod l7-default-backend-8f479dd9-9n77b. </code></pre> <p>All those pods are part of the default GKE installation and I haven't done any modification on them. 
I believe they are part of some google cloud tools that I haven't really tried yet.</p> <p>My nodes aren't really under heavy load :</p> <pre><code>kubectl top node NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-psi-cluster-01-pool-1-d5650403-cl4g 230m 11% 2973Mi 52% gke-psi-cluster-01-pool-1-d5650403-xn35 146m 7% 2345Mi 41% </code></pre> <p>Here are my prometheus helm config : </p> <pre><code>alertmanager: alertmanagerSpec: storage: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi config: global: resolve_timeout: 5m receivers: - name: "null" - name: slack_k8s slack_configs: - api_url: REDACTED channel: '#k8s' send_resolved: true text: |- {{ range .Alerts }} {{- if .Annotations.summary }} *{{ .Annotations.summary }}* {{- end }} *Severity* : {{ .Labels.severity }} {{- if .Labels.namespace }} *Namespace* : {{ .Labels.namespace }} {{- end }} {{- if .Annotations.description }} {{ .Annotations.description }} {{- end }} {{- if .Annotations.message }} {{ .Annotations.message }} {{- end }} {{ end }} title: '{{ (index .Alerts 0).Labels.alertname }}' title_link: https://karma.REDACTED?q=alertname%3D{{ (index .Alerts 0).Labels.alertname }} route: group_by: - alertname - job group_interval: 5m group_wait: 30s receiver: slack_k8s repeat_interval: 6h routes: - match: alertname: Watchdog receiver: "null" - match: alertname: KubeAPILatencyHigh receiver: "null" ingress: enabled: false hosts: - alertmanager.REDACTED coreDns: enabled: false grafana: adminPassword: REDACTED ingress: annotations: kubernetes.io/tls-acme: "true" enabled: true hosts: - grafana.REDACTED tls: - hosts: - grafana.REDACTED secretName: grafana-crt-secret persistence: enabled: true size: 5Gi kubeControllerManager: enabled: true kubeDns: enabled: true kubeScheduler: enabled: true nodeExporter: enabled: true prometheus: ingress: enabled: false hosts: - prometheus.REDACTED prometheusSpec: additionalScrapeConfigs: - basic_auth: password: REDACTED username: prometheus 
retention: 30d ruleSelectorNilUsesHelmValues: false serviceMonitorSelectorNilUsesHelmValues: false storageSpec: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 30Gi prometheusOperator: createCustomResource: false </code></pre> <p>I've found this git issue <a href="https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/108" rel="nofollow noreferrer">https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/108</a> but i'm not sure if this apply to my case because this is default GKE pods. I want to make sure everything is running smoothly and Stackdriver is able to retrieve all my logs properly even if I haven't really looked up how to use it yet.</p> <p>Should I modify the limits on GKE default deployement in kube-system? Is there any problem with deploying prometheusOperator on GKE ?</p>
<p>After looking through many links, I think that I understand the issue here.</p> <p>I think that this is the k8s issue that you're experiencing. [1]</p> <p>There seems to be an issue with CFS quotas in Linux that affects all containerized clouds, including Kubernetes. You can work around it by setting a higher CPU limit on the affected containers or by removing their CPU limits entirely. Please do test this in a staging environment and not straight in production.</p> <p>Best of luck!</p> <hr> <p>[1] <a href="https://github.com/kubernetes/kubernetes/issues/67577" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/67577</a></p>
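<p>For reference, removing the CPU limit while keeping a request could look like this in a container spec (a sketch — whether edits to the kube-system deployments stick on GKE depends on the addon manager re-applying them):</p> <pre><code>resources:
  requests:
    cpu: 100m        # the scheduler still accounts for this
  limits:
    memory: 200Mi    # no cpu limit here means no CFS quota, hence no throttling
</code></pre>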
<p>Setup:</p> <ul> <li>minikube version: v0.27.0</li> <li>Kong (helm install stable/kong) / version 1.0.2</li> <li>Keycloak (helm install stable/keycloak) / version 4.8.3.Final</li> </ul> <p>I have a self signed SSL certificate for my "hello.local".</p> <p>What I need to achieve: Keycloak behind Kong at "<a href="https://hello.local/" rel="nofollow noreferrer">https://hello.local/</a>".</p> <p>My steps:</p> <p>1) fresh minikube</p> <p>2) Install Keycloak with helm, following values.yaml:</p> <pre><code>keycloak: basepath: "" replicas: 1 ... extraEnv: | - name: PROXY_ADDRESS_FORWARDING value: "true" </code></pre> <p>(that would create service <code>auth-keycloak-http</code>)</p> <p>3) Install Kong with helm, following values.yaml:</p> <pre><code>replicaCount: 1 admin: ingress: enabled: true hosts: ['hello.local'] proxy: type: LoadBalancer ingress: enabled: true hosts: ['hello.local'] tls: - hosts: - hello.local secretName: tls-certificate annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/ssl-passthrough: "true" tls: enabled: true postgresql: ... </code></pre> <p>4) I setup service and route for Kong</p> <p>Service:</p> <pre><code>Protocol: http Host: auth-keycloak-http Port: 80 </code></pre> <p>Route:</p> <pre><code>Hosts: hello.local </code></pre> <p>After that I can open "<a href="https://hello.local" rel="nofollow noreferrer">https://hello.local</a>" and can see welcome page from Keycloak where I can click Administration Console and after that I have redirect to "<a href="https://hello.local:8443/admin/master/console/" rel="nofollow noreferrer">https://hello.local:8443/admin/master/console/</a>" in my browser. So we should not have redirect with another port at this point. 
</p> <p>A setup with 2 Docker images (Keycloak + Kong) works if <code>PROXY_ADDRESS_FORWARDING</code> is <code>true</code>.</p> <p>How can I make Keycloak (helm chart) work behind Kong (helm chart) in a Kubernetes cluster as expected, without the redirect?</p> <p>This is being discussed in <a href="https://github.com/helm/charts/issues/12272" rel="nofollow noreferrer">github issue 1</a>, <a href="https://github.com/Kong/kubernetes-ingress-controller/issues/291" rel="nofollow noreferrer">github issue 2</a> and <a href="https://github.com/codecentric/helm-charts/issues/17" rel="nofollow noreferrer">github issue 3</a>. There are also similar questions <a href="https://stackoverflow.com/questions/56021042/cant-access-administration-console-page-with-keycloak-behind-kong-ingress-contr">on Stack Overflow</a>.</p>
<p>Original answer:</p> <p>It seems it is necessary to set the following environment variables in the values.yaml of the keycloak helm chart:</p> <pre><code>...
extraEnv: |
  - name: KEYCLOAK_HTTP_PORT
    value: &quot;80&quot;
  - name: KEYCLOAK_HTTPS_PORT
    value: &quot;443&quot;
  - name: KEYCLOAK_HOSTNAME
    value: example.com
...
</code></pre> <p>All of them are required; after that, the redirect works correctly.</p> <p><strong>Added Sep 2021:</strong></p> <p>There is an issue with <a href="https://github.com/codecentric/helm-charts/issues/473" rel="nofollow noreferrer">weird behavior redirecting to port 8443</a> for some actions (like going to Account Management via the link at the top right of the admin console).</p> <p>In fact we do not need to set any <code>KEYCLOAK_HTTP_PORT</code> or <code>KEYCLOAK_HTTPS_PORT</code>.</p> <p>Some changes are required on the proxy side instead. On the proxy we need to set <code>x-forwarded-port</code> to <code>443</code> for this route.</p> <p>In my case we use Kong:</p> <p>On the route where Keycloak is exposed, we need to add (this one worked for me) a <code>serverless</code> &gt; <code>post function</code> with the following content:</p> <pre class="lang-lua prettyprint-override"><code>ngx.var.upstream_x_forwarded_port=443
</code></pre> <p><a href="https://github.com/Kong/kong/issues/5559#issuecomment-585972520" rel="nofollow noreferrer">More info on Kong and x_forwarded_*</a></p>
<p>Similar to <a href="https://stackoverflow.com/questions/53299238/how-can-i-check-what-ingress-controller-i-have-on-my-kube-and-what-is-the-defaul">this question</a> but for generic kubernetes. How can software I'm writing programmatically detect whether there is any ingress controller already existing in the cluster? </p>
<p>You most probably can't, since an ingress controller is simply a deployment that reads and interprets Kubernetes ingress resources by fetching them from the Kubernetes API server using REST or some other Kubernetes API client.</p> <p>If ingress resources are present, though, that's a strong hint that an ingress controller is present.</p> <p>An <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">ingress resource</a> is a Kubernetes resource used to define ingress rules. You can query the Kubernetes API server at <a href="https://kubernetes/apis/extensions/v1beta1/ingresses" rel="nofollow noreferrer">https://kubernetes/apis/extensions/v1beta1/ingresses</a> or use <code>kubectl get ingress</code> as mentioned by others.</p> <p>It's not mandatory to annotate the <code>ingress.class</code> unless multiple ingress controllers are present (see <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#using-multiple-ingress-controllers</a>).</p> <p>So in the end you may find hints that an ingress controller is present, and by reading ingress rules and their annotations you may also get hints about which ingress controller it is, but you can't depend on it.</p>
<p>Given the following PVC and PV:</p> <ul> <li>PVC:</li> </ul> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: packages-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: packages-volume
</code></pre> <ul> <li>PV:</li> </ul> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: packages-volume
  namespace: test
spec:
  claimRef:
    name: packages-pvc
    namespace: test
  accessModes:
    - ReadWriteMany
  nfs:
    path: {{NFS_PATH}}
    server: {{NFS_SERVER}}
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
</code></pre> <p>If I create the PV, then the PVC, they bind together. However, if I delete the PVC then re-create it, they do not bind (the PVC stays pending). Why?</p>
<p>Note that after deleting the <code>PVC</code>, the <code>PV</code> remains in <code>Released</code> status:</p> <pre><code>$ kubectl get pv packages-volume
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                  STORAGECLASS   REASON   AGE
packages-volume   1007Gi     RWX            Retain           Released   default/packages-pvc                           10m
</code></pre> <p>It should have status <code>Available</code> so it can be reused by another <code>PersistentVolumeClaim</code> instance.</p> <p><strong>Why isn't it <code>Available</code>?</strong></p> <p>If you display the current <code>yaml</code> definition of the <code>PV</code>, which you can easily do by executing:</p> <pre><code>kubectl get pv packages-volume -o yaml
</code></pre> <p>you may notice that the <code>claimRef</code> section contains the <code>uid</code> of the recently deleted <code>PersistentVolumeClaim</code>:</p> <pre><code>  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: packages-pvc
    namespace: default
    resourceVersion: "10218121"
    uid: 1aede3e6-eaa1-11e9-a594-42010a9c0005
</code></pre> <p>You can easily verify it by issuing:</p> <pre><code>kubectl get pvc packages-pvc -o yaml | grep uid
</code></pre> <p>just before you delete your <code>PVC</code>, and compare it with what the <code>PV</code> definition contains. You'll see that this is exactly the same <code>uid</code> that is still referred to by your <code>PV</code> after the <code>PVC</code> is deleted. This remaining reference is the actual reason that the <code>PV</code> remains in <code>Released</code> status.</p> <p><strong>Why does the newly created <code>PVC</code> remain in a <code>Pending</code> state?</strong></p> <p>Although your newly created <code>PVC</code> may seem to you to be exactly the same <code>PVC</code> that you've just deleted, as you're creating it using the very same <code>yaml</code> file, from the perspective of <code>Kubernetes</code> it's a completely new instance of the <code>PersistentVolumeClaim</code> object which has a completely different <code>uid</code>.
This is the reason why it remains in <code>Pending</code> status and is unable to bind to the <code>PV</code>.</p> <p><strong>Solution:</strong></p> <p>To make the <code>PV</code> <code>Available</code> again you just need to remove the mentioned <code>uid</code> reference e.g. by issuing:</p> <pre><code>kubectl patch pv packages-volume --type json -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]' </code></pre> <p>or alternatively by removing the whole <code>claimRef</code> section which can be done as follows:</p> <pre><code>kubectl patch pv packages-volume --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]' </code></pre>
<p>I am new to containerized environments and don't have much sequential guidance.</p> <p>I have installed a <strong>mysql</strong> service in a <strong>GKE environment</strong> (using a <strong>GCP free account</strong>) and created a basic Spring Boot app which will <strong>communicate with the mysql database</strong>. In my local environment (without a container) it can talk to the MySQL DB, as we can use a <strong>localhost</strong> url. But when I build the spring-boot application in the GKE environment with mysql, using "<strong>mysql</strong>" as the service name (which is available in the env) in the connection url, it fails to connect.</p> <pre><code>kubectl get ep
NAME          ENDPOINTS          AGE
mysql         10.32.3.24:3306    20h
service-one   10.32.3.7:8080     47h
</code></pre> <hr> <pre><code>Every 2.0s: kubectl get pods -l app=mysql -o wide    cs-6000-devshell-vm-9160cf3e-c260-495d-8f85-75a4a731b464: Mon Sep 30 11:57:30 2019

NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE                                            NOMINATED NODE   READINESS GATES
mysql-799956477c-gmlvk   1/1     Running   0          20h   10.32.3.24   gke-ms-k8s-cluster-default-pool-90dce83a-c2xf   &lt;none&gt;           &lt;none&gt;
</code></pre> <p><strong>properties used:</strong></p> <pre><code>server.port=8080
spring.application.name=myservice
server.servlet.context-path=/
spring.datasource.url= jdbc:mysql://mysql/TestDB            #mysql is service name
#spring.datasource.url= jdbc:mysql://ClusterIP:3306/TestDB  # tried with this
#spring.datasource.url= jdbc:mysql://podIP:3306/TestDB      # tried with this
#spring.datasource.url= jdbc:mysql://mysql:3306/TestDB      # service name :port as well
#spring.datasource.url= jdbc:mysql://localhost:3306/TestDB
spring.datasource.username=root
spring.datasource.password=password
spring.datasource.testWhileIdle = true
spring.datasource.timeBetweenEvictionRunsMillis = 60000
spring.datasource.validationQuery = SELECT 1
#spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql=true
---
</code></pre> <p>And during the Maven build in GKE, the error shows as:</p> <pre><code>2019-09-30 11:26:54.332 INFO 41020 --- [
restartedMain] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting... 2019-09-30 11:26:58.909 ERROR 41020 --- [ restartedMain] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization. com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_191] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_191] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_191] at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:990) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.MysqlIO.&lt;init&gt;(MysqlIO.java:341) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2186) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2219) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2014) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.&lt;init&gt;(ConnectionImpl.java:776) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.JDBC4Connection.&lt;init&gt;(JDBC4Connection.java:47) ~[mysql-connector-java-5.1.45.jar:5.1.45] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_191] at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_191] at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_191] at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:386) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:117) ~[HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:123) ~[HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:365) ~[HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:194) ~[HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:460) [HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:534) [HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.pool.HikariPool.&lt;init&gt;(HikariPool.java:115) [HikariCP-2.7.8.jar:na] at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) [HikariCP-2.7.8.jar:na] at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:151) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:115) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:78) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:318) [spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:355) 
[spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.DatabaseLookup.getDatabase(DatabaseLookup.java:72) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaProperties.determineDatabase(JpaProperties.java:168) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaBaseConfiguration.jpaVendorAdapter(JpaBaseConfiguration.java:111) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9.CGLIB$jpaVendorAdapter$4(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9$$FastClassBySpringCGLIB$$4d05eb24.invoke(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) [spring-core-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:361) [spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9.jpaVendorAdapter(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) 
[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:579) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:815) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:721) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:470) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:815) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:721) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:470) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1085) ~[spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:858) ~[spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549) ~[spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:388) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:327) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1246) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1234) ~[spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE] at 
com.test.servicethree.ServiceThreeApplication.main(ServiceThreeApplication.java:33) ~[classes/:na] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191] at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-2.0.0.RELEASE.jar:2.0.0.RELEASE] Caused by: java.net.UnknownHostException: mysql at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_191] at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[na:1.8.0_191] at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[na:1.8.0_191] at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[na:1.8.0_191] at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[na:1.8.0_191] at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[na:1.8.0_191] at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:188) ~[mysql-connector-java-5.1.45.jar:5.1.45] at com.mysql.jdbc.MysqlIO.&lt;init&gt;(MysqlIO.java:300) ~[mysql-connector-java-5.1.45.jar:5.1.45] ... 90 common frames omitted 2019-09-30 11:26:58.919 WARN 41020 --- [ restartedMain] o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc url from datasource org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. 
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:327) ~[spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:355) ~[spring-jdbc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.DatabaseLookup.getDatabase(DatabaseLookup.java:72) ~[spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaProperties.determineDatabase(JpaProperties.java:168) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaBaseConfiguration.jpaVendorAdapter(JpaBaseConfiguration.java:111) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9.CGLIB$jpaVendorAdapter$4(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9$$FastClassBySpringCGLIB$$4d05eb24.invoke(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) [spring-core-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:361) [spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$4683a9.jpaVendorAdapter(&lt;generated&gt;) [spring-boot-autoconfigure-2.0.0.RELEASE.jar:2.0.0.RELEASE] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:579) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:815) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:721) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:470) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:251) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1138) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1065) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:815) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:721) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:470) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1250) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1099) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:545) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:502) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:312) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228) ~[spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:310) [spring-beans-5.0.4.RELEASE.jar:5.0.4.RELEASE] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191] at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-2.0.0.RELEASE.jar:2.0.0.RELEASE] Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure </code></pre> <p>UPDATE1:</p> <pre><code>kubectl get -o yaml svc mysql&gt; apiVersion: v1 kind: Service metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"mysql"},"name":"mysql","namespace":"ms-k8s-ns"},"spec":{"ports":[{"port":3306}],"selector":{"app":"mysql"}}} creationTimestamp: "2019-09-29T09:33:31Z" labels: app: mysql name: mysql namespace: ms-k8s-ns resourceVersion: "361958" selfLink: /api/v1/namespaces/ms-k8s-ns/services/mysql uid: 331538cb-e29c-11e9-b837-42010a800039 spec: clusterIP: 10.35.243.227 
ports: - port: 3306 protocol: TCP targetPort: 3306 selector: app: mysql sessionAffinity: None type: ClusterIP status: loadBalancer: {} </code></pre> <hr> <pre><code>kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql ClusterIP 10.35.243.227 &lt;none&gt; 3306/TCP 23h service-one LoadBalancer 10.35.244.117 35.225.130.162 80:31736/TCP 2d2h </code></pre>
<p>The error message indicates that the kube-dns A record for the service is not resolving when the application tries to connect to MySQL.</p> <p>From the application:</p> <pre><code>spring.datasource.url= jdbc:mysql://mysql:3306/TestDB</code></pre> <p>The URL points to <code>mysql</code>, which is not resolving to the service.</p> <p>The fully qualified form of a service's A record in kube-dns is <code>my-svc.my-namespace.svc.cluster.local</code>[1].</p> <p>In your case, it should be <code>mysql.ms-k8s-ns.svc.cluster.local</code>.</p> <p>[1] <a href="https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records" rel="nofollow noreferrer">https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records</a></p>
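<p>Applied to the datasource URL from the question, the fully qualified form would look like this (port 3306 is taken from the service definition shown in the question). Note that this DNS name only resolves from pods running inside the cluster, not from a shell outside it:</p>

```properties
spring.datasource.url=jdbc:mysql://mysql.ms-k8s-ns.svc.cluster.local:3306/TestDB
```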
<p>Trying to figure out how to apply a new label "app=green" to the pods that are currently marked with "color=green".</p> <p>It looks like I cannot use "--field-selector", and I do not want to specify the name of each pod in the "kubectl label" command.</p> <p>Thank you!</p>
<p>This should work:</p> <pre><code>kubectl label pods --selector=color=green app=green </code></pre>
<p>I have a MongoDB instance running on Kubernetes and I'm trying to connect to it using Python with the Kubernetes library.</p> <p>I'm connecting to the context on the command line using:</p> <pre><code>kubectl config use-context CONTEXTNAME </code></pre> <p>With Python, I'm using:</p> <pre><code>from kubernetes import client, config config.load_kube_config( context = 'CONTEXTNAME' ) </code></pre> <p>To connect to MongoDB on the command line:</p> <pre><code>kubectl port-forward svc/mongo-mongodb 27083:27017 -n production &amp; </code></pre> <p>I then open a new terminal and use <code>PORT_FORWARD_PID=$!</code> to capture the port-forward process.</p> <p>I'm trying to connect to the MongoDB instance using Python with the Kubernetes client library; any ideas as to how to accomplish the above?</p>
<p>Define a kubernetes service for example <a href="https://github.com/kubernetes/examples/blob/master/staging/nodesjs-mongodb/mongo-service.yaml" rel="nofollow noreferrer">like this</a>, and then reference your mongodb using a connection string similar to <code>mongodb://&lt;service-name&gt;.default.svc.cluster.local</code></p>
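<p>As a sketch (the <code>mongo-mongodb</code> service name and <code>production</code> namespace are taken from the port-forward command in the question; adjust them to your cluster), the in-cluster connection string can be assembled from the service's DNS name. No port-forward is needed when the Python code itself runs inside the cluster:</p>

```python
# Build the in-cluster MongoDB URI from the service's DNS name.
# This name only resolves from pods running inside the cluster.
service = "mongo-mongodb"   # assumed service name from the question
namespace = "production"    # assumed namespace from the question
port = 27017                # MongoDB's default port

uri = f"mongodb://{service}.{namespace}.svc.cluster.local:{port}"
print(uri)

# A driver such as pymongo would then connect with:
#   client = pymongo.MongoClient(uri)
```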
<p>I'm setting up ELK services in Azure Kubernetes Service. But I only see this error:</p> <blockquote> <p>"Kibana server is not ready yet"</p> </blockquote> <p>I'm using Helm to install the stable/elastic-stack release without any changes (default for everything) in AKS.</p> <pre><code>helm install --name elk stable/elastic-stack </code></pre> <p>And I also added an ingress controller to expose the Kibana server to public. However, I only see "Kibana server is not ready yet" error.</p> <p>I've checked the Kibana pod is running, as well as the ElasticSearch. As a newbie in Kubernetes, I have no idea about how to find the error log in Kibana instance. Can anyone help me on this? It is also appreciated if you can indicate what step I am missing.</p>
<p>It might be a version incompatibility issue. Follow the Kibana console logs to get the errors. The Kibana version must match the Elasticsearch version; Kibana cannot run against an older Elasticsearch. In that case, it gives an error like the following:</p> <pre><code>[error][status][plugin:xpack_main@7.4.0] Status changed from yellow to red - This version of Kibana requires Elasticsearch v7.4.0 on all nodes. I found the following incompatible nodes in your cluster: v7.1.1 @ 127.0.0.1:9200 (127.0.0.1)</code></pre>
<p>I want to load-balance 2 stateful applications running on 2 pods. This application cannot have 2 replicas as it is stateful.</p> <p>I tried giving the same service name to both pods, but it looks like Kubernetes gets confused and nothing is served.</p> <p>I am using an on-premises Kubernetes cluster with MetalLB as a load-balancer.</p> <p>Currently, these pods are exposed over public IPs with service type LoadBalancer, and I added an A record for each pod. But I cannot do health checks with DNS.</p> <p>The only option I can think of is having an Nginx pod and proxying to it. Is there a better solution than this?</p>
<p>The selector on a service can be anything, and can match pods from multiple statefulsets (or deployments). So make a label on your pods and use that in the selector of a new service to target both.</p>
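<p>For example (the label key and names here are assumptions, not from your manifests): add a shared label such as <code>app: stateful-green</code> to the pod template of both statefulsets, then create one service selecting on it:</p>

```yaml
# One service whose selector matches pods from both statefulsets,
# assuming both pod templates carry the label app: stateful-green.
apiVersion: v1
kind: Service
metadata:
  name: stateful-green
spec:
  type: LoadBalancer   # MetalLB assigns the external IP
  selector:
    app: stateful-green
  ports:
    - port: 80
      targetPort: 8080
```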
<p>I am working on a GKE cluster. I have a simple server running. There are three routes on the server.</p> <pre><code>route 1 - / route 2 - /ping route 3 - /health </code></pre> <p>These paths return a 200 response with generic but different "ok" messages.</p> <p>This is what my ingress yaml looks like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: basic-ingress spec: tls: - hosts: - www.simple.com secretName: simple-server-tls rules: - host: www.simple.com http: paths: - path: /ping backend: serviceName: simple-server servicePort: 8080 </code></pre> <p>I have my server exposed as a load balancer. I can access all three routes using the load balancer as <code> https://&lt;loadbalancer_ip&gt;:8080/ https://&lt;loadbalancer_ip&gt;:8080/ping https://&lt;loadbalancer_ip&gt;:8080/health</code></p> <p>However, when trying to use an ingress, I only receive a 200 response on <code>https://www.simple.com/ping</code>. The other two routes, i.e. <code>https://www.simple.com/health</code> and <code>https://www.simple.com/</code>, return <code>default backend - 404 error</code>.</p> <p>I can confirm that the server is running and serving requests perfectly, and I have given the ingress enough time (upwards of 30 minutes) to finish setting up. <code>https://www.simple.com/healthz</code> returns "OK".</p> <p>I think I am making a mistake in configuring the ingress. Any help/suggestions are appreciated.</p> <p>I cannot provide the ingress logs, as I am noticing that <code>kubectl describe ingress/basic-ingress</code> returns the error <code>Error from server (NotFound): the server could not find the requested resource</code>. However, kubectl get ingress returns <code>basic-ingress www.simple.com &lt;ingress_ip&gt; 80, 443 31m</code>.</p>
<p>Figured out the issue. You need to add a wildcard to the path. I only had <code>/</code> in my path, and hence it was rejecting (read: returning a 404 error for) all my other URLs. I added <code>/*</code> and removed the <code>rewrite-target</code> annotation as @bserdar correctly suggested, and it worked. Link to the GitHub issue that helped a lot: <a href="https://github.com/kubernetes/ingress-nginx/issues/1120" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1120</a></p>
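<p>Applied to the ingress manifest from the question, the fix is a one-line change, since the GKE ingress treats <code>/*</code> as a prefix match:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  tls:
    - hosts:
        - www.simple.com
      secretName: simple-server-tls
  rules:
    - host: www.simple.com
      http:
        paths:
          - path: /*   # wildcard so /, /ping and /health all route to the backend
            backend:
              serviceName: simple-server
              servicePort: 8080
```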
<p>Before I begin, I would like to mention that I am on the free trial version of GKE. I have a simple server running in a GKE cluster. I have a service that I use to expose the server. I am trying to configure an Ingress Controller and attach it to this service. </p> <p>Everything works perfectly if my service is of type LoadBalancer, NodePort. However, if my service is of type ClusterIP, I get an error saying </p> <pre><code>error while evaluating the ingress spec: service "default/simple-server" is type "ClusterIP" , expected "NodePort" or "LoadBalancer" </code></pre> <p>GKE then stops trying to provision an IP for the ingress. Why can't I provision a service of type clusterIP and is there a work around? </p> <p>I tried using <code>annotations.kubernetes.io/ingress.class: "nginx"</code> and it still didn't work.</p>
<p>The native GKE ingress controller does not support <code>ClusterIP</code>, but it works perfectly with <code>LoadBalancer</code> and <code>NodePort</code> type. Take a look at this <a href="https://github.com/kubernetes/kubernetes/issues/26508#issuecomment-222376886" rel="nofollow noreferrer">issue</a> </p> <p>Non-native ingress controller <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx</a> works with <code>ClusterIP</code>.</p>
<p>That's normal, and the reason is pretty simple.</p> <p>A ClusterIP service is an internal-only service inside of your Kubernetes cluster, and what you are trying to create (from what I understood) is a publicly exposed service using an ingress, meaning you are going to create a Google load balancer.</p> <p>The reason it doesn't support ClusterIP is that when you create the ingress, the load balancer resource created inside Google Cloud needs a target port on your cluster to call, and for that you need to expose an IP/port for it.</p> <p>I would not recommend combining a LoadBalancer service (which by default creates a load balancer on the cloud provider) with an ingress; stay with the cleaner NodePort/ingress duo.</p> <p>Bonus: the annotation you used is for linking internal services exposed through an ingress to your ingress controller. It allows the ingress controller to list the hosts and proxy traffic to the right service.</p>
<p>I'm using Azure Files as a volume for Kubernetes, but I'm facing a problem with it.</p> <p>I'm able to set permissions on the whole share, but I can't change permissions on a specific file/directory.</p> <p>If I set default permissions on the whole share (for example 644 (rw/r/r)) and try to change the permissions of a specific file (for example <code>chmod 0644 file</code>), chmod doesn't take any effect; the file always keeps the default permissions.</p> <p>Even if I don't specify the default permission, it will take 0777, and I still can't change the permissions of a file/directory inside the share.</p> <p>I've done some research and found that this feature was not yet implemented (the source is from 2016). I hope this feature is enabled now?</p> <p>Otherwise, is there any other solution to propose?</p> <p>Regards,</p> <hr> <h1>Update</h1> <p>Many thanks <strong>@Charles Xu</strong> for your suggestion, it's very useful. I followed your proposal, but let me explain my use case a little bit:</p> <ul> <li>I need different file permissions within the same directory because this directory is for SSH. I could change the permission of the <strong>.ssh</strong> folder so all the files inside it have 0644, but as you know I have to change the permission of the private key to 0600.</li> <li><p>It's mandatory to change the permission on the private key, otherwise SSH will not work.</p> <p>-rw-r--r-- 1 userx userx 182 Oct 8 15:13 config</p> <p><strong>-rw-r--r--</strong> 1 userx userx 1675 Oct 8 15:13 id_rsa</p> <p>-rw-r--r-- 1 userx userx 409 Oct 8 15:13 id_rsa.pub</p></li> </ul>
<p>First of all, you can only control the permissions of the whole Azure file share when you mount it as a persistent volume on the pods. Changing the permissions of an individual file inside the share is not supported.</p> <p>What I recommend is to split the files into different directories so that all the files in the same directory have the same permissions. Then you can mount the directories of the Azure file share as different persistent volumes on the pods and set each persistent volume with the permissions you need.</p> <p>For example, say you need a directory with permission 0644 and the files in it with permission 0664. Then you can create a persistent volume like this:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: azurefile spec: capacity: storage: 5Gi accessModes: - ReadWriteMany storageClassName: azurefile azureFile: secretName: azure-secret shareName: aksshare/subfolder1 readOnly: false mountOptions: - dir_mode=0644 - file_mode=0664 - uid=1000 - gid=1000 - mfsymlinks - nobrl </code></pre> <p>And create a PersistentVolumeClaim which will use the PersistentVolume that you created above:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: azurefile spec: accessModes: - ReadWriteMany storageClassName: azurefile resources: requests: storage: 5Gi </code></pre> <p>Finally, create the pod using the PersistentVolumeClaim with the <code>volumes</code> like this:</p> <pre><code>... volumes: - name: azure persistentVolumeClaim: claimName: azurefile </code></pre> <p>When it's OK, you can see things like you set below:</p> <p><a href="https://i.stack.imgur.com/lLGeL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lLGeL.png" alt="enter image description here"></a></p>
<p>We upgraded our existing development cluster from 1.13.6-gke.13 to 1.14.6-gke.13 and our pods can no longer reach our in-house network over our Google Cloud VPN. Our production cluster (still on 1.13) shares the same VPC network and VPN tunnels and is still working fine. The only thing that changed was the upgrade of the admin node and node pool to 1.14 on the development cluster. </p> <p>I have opened a shell into a pod on the development cluster and attempted to ping the IP address of an in-house server to which we need access. No response received. Doing the same on a pod in our production cluster works as expected.</p> <p>I ssh'd into a node in the cluster and was able to ping the in-house network. so it's just pods that have networking issues.</p> <p>Access to the publicly exposed services in the cluster is still working as expected. Health checks are OK.</p> <p>UPDATE: </p> <p>I created a new node pool using the latest 1.13 version, drained the pods from the 1.14 pool and all is well with the pods running on the 1.13 pool again. Something is definitely up with 1.14. It remains to be seen if this is an issue cause by some new configuration option or just a bug. </p> <p>RESOLUTION: </p> <p>IP masquerading is discussed here <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent</a>. My resolution was to add the pod subnets for each of my clusters to the list of advertised networks in my VPN Cloud Routers on GCP. So now the pod networks can traverse the VPN. </p>
<p>Until GKE 1.13.x, even if not necessary, GKE will masquerade pods trying to reach external IPs, even on the same VPC of the cluster, unless the destination is on the 10.0.0.0/8 range.</p> <p>Since 1.14.x versions, this rule is no longer added by default on clusters. This means that pods trying to reach any endpoint will be seen with their Pod IP instead of the node IP as the masquerade rule was removed.</p> <p>You could try recreating your Cloud VPN in order to include the POD IP range.</p>
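<p>An alternative to touching the VPN is to restore the pre-1.14 behaviour with the <code>ip-masq-agent</code> mentioned in the question's resolution link. Per the GKE docs the agent is driven by a ConfigMap along these lines (the CIDR list is an example you must adapt: traffic to ranges listed there keeps the pod IP, everything else is masqueraded to the node IP):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8        # example: keep pod IPs only for in-VPC traffic
    resyncInterval: 60s
```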
<p>Similar to <a href="https://stackoverflow.com/questions/53299238/how-can-i-check-what-ingress-controller-i-have-on-my-kube-and-what-is-the-defaul">this question</a> but for generic kubernetes. How can software I'm writing programmatically detect whether there is any ingress controller already existing in the cluster? </p>
<p>Yes, it's a bit tricky, because there is no such API object as <code>ingress-controller</code>, only <code>ingress</code>. <code>Ingress</code>, in its turn, is only loosely coupled to the ingress controller and does not need the latter to be deployed successfully. So the presence of an Ingress resource on its own cannot be an indication of an ingress controller.</p> <p>However, the trick you <strong><em>might</em></strong> want to exploit is using the <code>Events</code> section of an Ingress as a source of some indicative information.</p> <p>So when you deploy an Ingress in a fresh k8s cluster with no ingress controller yet deployed, the Events section is empty:</p> <pre><code># deploy Ingress (no ingress controller yet) $ kubectl apply -f - &lt;&lt;EOF apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ingress spec: rules: - http: paths: - path: /testpath backend: serviceName: test servicePort: 80 EOF # check that Ingress deployed successfully $ k get ing NAME HOSTS ADDRESS PORTS AGE test-ingress * 80 36m # confirm there are no Events so far $ k describe ing Name: test-ingress Namespace: default Address: Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- * /testpath test:80 (&lt;none&gt;) ... 
nginx.ingress.kubernetes.io/rewrite-target: / Events: &lt;none&gt; </code></pre> <p>Then if you <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">deploy the ingress controller</a> the Ingress' Events does get some extra info:</p> <pre><code># deploy ingress controller $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml namespace/ingress-nginx created configmap/nginx-configuration created configmap/tcp-services created configmap/udp-services created serviceaccount/nginx-ingress-serviceaccount created clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created role.rbac.authorization.k8s.io/nginx-ingress-role created rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created deployment.apps/nginx-ingress-controller created # check Ingress details once again $ k describe ing test-ingress Name: test-ingress Namespace: default Address: Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- * /testpath test:80 (&lt;none&gt;) ... nginx.ingress.kubernetes.io/rewrite-target: / Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 7m11s nginx-ingress-controller Ingress default/test-ingress </code></pre>
<p>I am fairly new to Kubernetes and had a question concerning kube-state-metrics. When I simply monitor Kubernetes using Prometheus I obtain a set of metrics from the cAdvisor, the nodes (node exporter), the pods, etc. When I include the kube-state-metrics, I seem to obtain more "relevant" metrics. Do kube-state-metrics allow to scrape <strong>"new"</strong> information from Kubernetes or are they rather <strong>"formatted"</strong> metrics using the initial Kubernetes metrics (from the nodes, etc. I mentioned earlier). </p>
<p>The two are basically unrelated. cAdvisor gives you low-level stats about the containers, like how much RAM and CPU they are using. kube-state-metrics (KSM) gives you info from the Kubernetes API, like the Pod object status. Both are useful for different things and you probably want both.</p>
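<p>For example, contrast a typical cAdvisor metric with a typical kube-state-metrics one (both metric names are the standard ones exported by each component):</p>

```
# cAdvisor: low-level container resource usage
container_memory_working_set_bytes{namespace="default"}

# kube-state-metrics: Kubernetes API object state
kube_pod_status_phase{phase="Pending"} == 1
```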
<p>My EKS cluster in us-east-1 stopped working with all nodes NotReady since kubelet cannot pull the pause container. This is the kubelet command that gets executed at boot</p> <pre><code>/usr/bin/kubelet --cloud-provider aws --config /etc/kubernetes/kubelet/kubelet-config.json --kubeconfig /var/lib/kubelet/kubeconfig --container-runtime docker --network-plugin cni --node-ip=10.0.21.107 --pod-infra-container-image=602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 --node-labels=kubernetes.io/lifecycle=spot </code></pre> <p>The problem is with pulling the image </p> <pre><code>602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/pause-amd64:3.1 </code></pre> <p>Other required containers are also not available, for example: </p> <pre><code>602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/kube-proxy:v1.14.6 602401143452.dkr.ecr.us-east-1.amazonaws.com/eks/coredns:v1.3.1 </code></pre> <p>On the other hand, the container images are available from other regions, just not the one where the cluster is.</p> <p>Kubernetes events mention cni plugin not initialized. That is expected since the aws-node pods do not start.</p>
<p>The VPC where the worker nodes live has a PrivateLink endpoint for ECR. That endpoint and the DNS entries that come with it, make the ECR domains within the same region resolve to a private IP. That's why docker pull was failing only for ECRs in the same region.</p> <p>The security groups of the worker nodes need to allow (https) traffic out to the PrivateLink endpoint security group.</p>
<p>I have a top-level chart and one of its subcharts. In the subchart I want to use variables defined in the top-level chart, and if a variable is not found, use a default value.</p> <p>I have code like this in one of the deployment definitions in my subchart:</p> <pre><code>name: {{tpl .Values.global.my.GlobalValue .}} </code></pre> <p>where Values.global.my.GlobalValue is a parameter from the top-level chart. </p> <p>The problem is that when I try to install only the subchart, it fails; I need some defaults. </p> <p>I tried to put it like below, but it is not working:</p> <pre><code>name: {{default defaultName tpl .Values.global.my.GlobalValue .}} name: {{tpl .Values.global.my.GlobalValue . | defaultName}} </code></pre> <p>Could you please advise the correct way to do that?</p>
<p>As per <a href="https://helm.sh/docs/chart_template_guide/#using-the-default-function" rel="nofollow noreferrer">Using the default function</a>:</p> <blockquote> <p>One function frequently used in templates is the <code>default</code> function: <code>default DEFAULT_VALUE GIVEN_VALUE</code>. This function allows you to specify a default value inside of the template, in case the value is omitted.</p> </blockquote> <p>You should use:</p> <p><code>name: {{ .Values.global.my.GlobalValue | default "defaultName" | quote }}</code></p>
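<p>If you also want to keep the <code>tpl</code> call from the question, the default can be applied before templating. A sketch; for a standalone install of the subchart you should additionally declare the <code>global.my.GlobalValue</code> path (e.g. as an empty string) in the subchart's own <code>values.yaml</code> so the lookup does not fail:</p>

```yaml
name: {{ tpl (.Values.global.my.GlobalValue | default "defaultName") . }}
```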
<p>I want to run a Kloudless docker image with the command below in Kubernetes. Just wanted to know if there is any straightforward solution. </p> <pre><code>docker run -d \ --privileged --tmpfs /run --tmpfs /run/lock --tmpfs /tmp \ --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \ --ulimit nofile=1024000:1024000 \ --sysctl net.ipv4.ip_local_port_range='1025 65535' \ --name kenterprise \ --env KLOUDLESS_CONFIG="$(cat kloudless.yml)" \ # [ports,/data volume|db config] \ docker.kloudless.com/prod:1.29.0 docker_entry </code></pre> <p>I know we can run a docker image in Kubernetes with a similar configuration by configuring the container in the pod yaml. For example, if I wanted to pass the --privileged argument I could use </p> <pre><code>containers: securityContext: privileged: true </code></pre> <p>Just wanted to know if there is any straightforward way.</p>
<p>kubectl run <em>used</em> to exist. It was probably what you were looking for. <a href="https://github.com/LevelUpEducation/kubernetes-demo/issues/31" rel="nofollow noreferrer">But it is deprecated now</a>.</p> <p><code>kubectl run -i --tty load-generator --image=busybox /bin/sh</code></p> <p>The closest thing to running a pod/deployment from the command line without creating a file I could find is <code>kubectl create</code>:</p> <pre><code>kubectl create deployment prod --image=busybox -o yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: prod name: prod spec: replicas: 1 selector: matchLabels: app: prod strategy: {} template: metadata: creationTimestamp: null labels: app: prod spec: containers: - image: busybox name: busybox resources: {} status: {} </code></pre> <p>I would strongly advise against running one-off commands like this, though, as it goes against the concept of <a href="https://en.wikipedia.org/wiki/Infrastructure_as_code" rel="nofollow noreferrer">infrastructure as code</a>, which Kubernetes encourages through the use of manifests.</p> <p>Using manifests in a Version Control System such as git allows you to explore the history of your commands and deployments easily and manage branches of changes to your deployment.</p> <p>However, if what you are looking to do is abstract your deployment so that users don't have to get their hands dirty with the internals, then I would recommend a tool like <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>, which allows you to create charts and change simple values at release time like so:</p> <p><code>helm install --set foo=bar ./mychart</code></p>
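<p>As a rough translation of the docker run flags from the question into a manifest, a Pod spec sketch (names and the placeholder env value come from the command; there is no direct Kubernetes equivalent of <code>--ulimit</code>, which is configured on the container runtime/node instead):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kenterprise
spec:
  securityContext:
    sysctls:
      - name: net.ipv4.ip_local_port_range   # one of the "safe" namespaced sysctls
        value: "1025 65535"
  containers:
    - name: kenterprise
      image: docker.kloudless.com/prod:1.29.0
      args: ["docker_entry"]
      securityContext:
        privileged: true                     # --privileged
      env:
        - name: KLOUDLESS_CONFIG
          value: ""                          # inject kloudless.yml contents, e.g. from a Secret
      volumeMounts:
        - name: run
          mountPath: /run                    # --tmpfs /run (also covers /run/lock)
        - name: tmp
          mountPath: /tmp                    # --tmpfs /tmp
        - name: cgroup
          mountPath: /sys/fs/cgroup
          readOnly: true                     # --volume /sys/fs/cgroup:...:ro
  volumes:
    - name: run
      emptyDir:
        medium: Memory                       # tmpfs-backed emptyDir
    - name: tmp
      emptyDir:
        medium: Memory
    - name: cgroup
      hostPath:
        path: /sys/fs/cgroup
```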
<p>I am trying to change priority of an existing Kubernetes Pod using 'patch' command, but it returns error saying that this is not one of the fields that can be modified. I can patch the priority in the Deployment spec, but it would cause the Pod to be recreated (following the defined update strategy). </p> <p>The basic idea is to implement a mechanism conceptually similar to nice levels (for my application), so that certain Pods can be de-prioritized based on certain conditions (by my controller), and preempted by the default scheduler in case of resource congestion. But I don't want them to be restarted if there is no congestion.</p> <p>Is there a way around it, or there is something inherent in the way scheduler works that would prevent something like this from working properly?</p>
<p>Priority values are applied to a pod based on the priority value of the PriorityClass assigned to their deployment at the time that the pod is scheduled. Any changes made to the PriorityClass will not be applied to pods which have already been scheduled, so you would have to redeploy the pod for the priority to take effect anyway.</p>
<p>I have a .NET Core web application. This is deployed to an Azure Container Registry. I deploy this to my Azure Kubernetes Service using </p> <blockquote> <p>kubectl apply -f testdeployment.yaml</p> </blockquote> <p>with the yaml file below</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myweb spec: replicas: 1 selector: matchLabels: app: myweb template: metadata: labels: app: myweb spec: containers: - name: myweb image: mycontainerregistry.azurecr.io/myweb:latest ports: - containerPort: 80 imagePullSecrets: - name: my-registry-key</code></pre> <p>This works splendidly, but when I change some code, push a new image to the registry and run </p> <blockquote> <p>kubectl apply -f testdeployment</p> </blockquote> <p>again, the AKS/website does not get updated, until I remove the deployment with</p> <blockquote> <p>kubectl delete deployment myweb</p> </blockquote> <p><strong>What should I do to make it overwrite whatever is deployed?</strong> I would like to add something in my yaml file. (I'm trying to use this for continuous delivery in Azure DevOps). </p>
<p>I believe what you are looking for is imagePullPolicy. The default is IfNotPresent, which means that the latest version will not be pulled.</p> <p><a href="https://kubernetes.io/docs/concepts/containers/images/" rel="noreferrer">https://kubernetes.io/docs/concepts/containers/images/</a></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myweb spec: replicas: 1 selector: matchLabels: app: myweb template: metadata: labels: app: myweb spec: containers: - name: myweb image: mycontainerregistry.azurecr.io/myweb imagePullPolicy: Always ports: - containerPort: 80 imagePullSecrets: - name: my-registry-key </code></pre> <p>To ensure that the pod is recreated, run:</p> <pre><code>kubectl delete -f testdeployment &amp;&amp; kubectl apply -f testdeployment </code></pre>
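<p>As a side note, on kubectl 1.15+ there is also a dedicated command to force the pods to be recreated (and therefore the image to be re-pulled when <code>imagePullPolicy: Always</code> is set) without deleting the deployment:</p>

```
kubectl rollout restart deployment/myweb
```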
<p>While working through the tutorial <a href="https://docs.docker.com/get-started/part3/" rel="noreferrer">Get Started, Part 3: Deploying to Kubernetes</a> I stumbled over the Pod template within the deployment definition of the manifest file. There are no ports specified, neither in the pod nor in the container section.</p> <p>That led me to my initial question: How does the port publishing work from the docker container into the pod?</p> <p>The following quote sounds like kubernetes obtains an insight into the running container once started and gets the port from the service listening at 0.0.0.0:PORT and maps it to the same port in the pod environment (network namespace).</p> <blockquote> <p>Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core" rel="noreferrer">Source</a></p> </blockquote> <p>If my assumption goes in the right direction, what does this mean for pods with multiple containers? Does kubernetes only allow containers with internal services listening on different ports? Or is it possible to map the container internal ports to different ports in the pod environment (network namespace)?</p> <p>According to the following quote, I assume port mapping from container to pod is not possible. Indeed it does not make too much sens to specify two services within two containers with the same ports, just to change them via a mapping immediately following this.</p> <blockquote> <p>Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. 
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core" rel="noreferrer">Source</a></p> </blockquote> <hr> <p><strong>UPDATE 2019-10-15</strong></p> <p>As the following quote states, a docker container does not publish any port to the outside world by default. </p> <blockquote> <p>By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. <a href="https://docs.docker.com/config/containers/container-networking/" rel="noreferrer">Source</a> </p> </blockquote> <p>That means kubernetes must configuer the docker container running within a pod somehow, so that the container's ports are published to the pod.</p> <p>Regarding the following quote, is it possible that kubernetes runs the docker containers by using the <em>--network host</em> configuration? Assumed the pod is the docker host in kubernetes.</p> <blockquote> <p>If you use the host network mode for a container, that container’s network stack is not isolated from the Docker host [...] For instance, if you run a container which binds to port 80 and you use host networking, the container’s application is available on port 80 on the host’s IP address. <a href="https://docs.docker.com/network/host/" rel="noreferrer">Source</a></p> </blockquote>
<p>Containers running in a pod are similar to processes running on a node connected to a network. Pod gets a network address, and all containers share the same network address. They can talk to each other using <code>localhost</code>.</p> <p>A container running in a pod can listen to any port on that address. If there are multiple containers running in a pod, they cannot bind to the same port, only one of them can. Note that there is no requirement about publishing those ports. A container can listen to any port, and traffic will be delivered to it if a client connects to that port of the pod.</p> <p>So, port mapping from container to pod is not possible.</p> <p>The exposed ports of containers/pods are mainly informational, and tools use them to create resources.</p>
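<p>A sketch of what this means in practice (the pod name and images are made up for illustration): two containers in one pod must listen on different ports, and they can reach each other via <code>localhost</code> whether or not <code>containerPort</code> is declared:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers          # hypothetical example
spec:
  containers:
    - name: web
      image: nginx:alpine
      ports:
        - containerPort: 80     # informational; the port is reachable either way
    - name: sidecar
      image: busybox
      # talks to the web container over the shared network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
```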
<p>I installed the Prometheus helm chart to a kubernetes cluster for monitoring. By default, </p> <ul> <li>persistent volume size for prometheus server is defined as 8Gi.</li> <li>Prometheus server will store the metrics in this volume for 15 days (retention period)</li> </ul> <p>Some days after deploying the chart, the prometheus server pod entered a CrashLoopBackOff state. The reason found from the pod logs was:</p> <pre><code>level=error ts=2019-10-09T11:03:10.802847347Z caller=main.go:625 err="opening storage failed: zero-pad torn page: write /data/wal/00000429: no space left on device" </code></pre> <p>That means there is no space available in the disk (persistent volume) to save the data. So I cleared the existing data of the volume and fixed the issue temporarily.</p> <p>What would be the proper solution for this? </p> <p>The <a href="https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects" rel="nofollow noreferrer">Prometheus documentation</a> says:</p> <p><strong>To plan the capacity of a Prometheus server, you can use the rough formula:</strong></p> <pre><code>needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample </code></pre> <p>Can someone explain how to use this formula in practice?</p> <p>Why is the 8Gi size not enough for a 15-day retention period?</p> <p><strong>EDIT:</strong></p> <p>The default 8Gi space was 100% used after 6 days.</p>
<p>15 days is about 1.3 million seconds. Let's overestimate 8 bytes per sample and assume each series ingests one sample per second. Each series then takes about 10 MB over the retention window, so 8 GB would let you store roughly 800 series. You probably have more than that. Multiply the number of series you want to store by 10 and that's the number of megabytes you need. Roughly, that will get you the right order of magnitude at least.</p>
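<p>The formula can be turned into a quick back-of-the-envelope calculation. A sketch: the series count and scrape interval below are placeholders; in a real setup you can read the live series count from Prometheus's own <code>prometheus_tsdb_head_series</code> metric.</p>

```python
def needed_disk_bytes(retention_seconds, series, scrape_interval_seconds, bytes_per_sample=2.0):
    """needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample"""
    ingested_samples_per_second = series / scrape_interval_seconds
    return retention_seconds * ingested_samples_per_second * bytes_per_sample

# 15 days retention, 50k series scraped every 30s, 2 bytes/sample
# (Prometheus's documented post-compression average; use 8 for a pessimistic bound)
retention = 15 * 24 * 3600  # 1,296,000 seconds
print(needed_disk_bytes(retention, 50_000, 30) / 1024 ** 3)  # roughly 4 GiB
```

<p>Numbers like these make it easy to see how an 8Gi volume fills up once the series count grows or the scrape interval shrinks.</p>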
<ol> <li>My Mac can visit the k8s dashboard, but other PCs can't. What's the reason? <ul> <li>@kubernetes/UI @kubernetes/dashboard</li> <li>I have tried with the latest version of my channel (Stable or Edge)</li> <li>macOS Version: 10.14</li> <li>Docker for Mac: version: 19.03.1</li> <li>k8s version: 1.14.3</li> <li>enabled k8s in the Docker for Mac settings</li> <li>applied the k8s dashboard.yaml</li> <li>my Mac IP is: 192.168.0.200</li> </ul></li> <li><code>kubectl get service --all-namespaces</code></li> </ol> <pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP kube-system kubernetes-dashboard NodePort 10.104.38.247 &lt;none&gt; 443:31317/TCP </code></pre> <ol start="3"> <li><code>kubectl cluster-info</code></li> </ol> <blockquote> <p>Kubernetes master is running at <a href="https://kubernetes.docker.internal:6443" rel="nofollow noreferrer">https://kubernetes.docker.internal:6443</a> KubeDNS is running at <a href="https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy" rel="nofollow noreferrer">https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy</a></p> </blockquote> <ol start="4"> <li>I can visit the dashboard with the below URLs on my PC.</li> </ol> <blockquote> <p>kubernetes.docker.internal:31317 localhost:31317 127.0.0.1:31317 192.168.0.200:31317</p> </blockquote> <p>And I have stopped the firewall.</p> <ol start="5"> <li>On the same LAN, other PCs can't visit 192.168.0.200:31317</li> </ol> <p>And I don't know why. Help me, thanks. Do you need other info?</p> <p>Actually, I asked the same question on GitHub, and they suggested I ask it on Stack Overflow. 
And this is my first time asking a question on Stack Overflow, so if I do something wrong, please tell me.</p> <ol start="6"> <li>I expected that other PCs on the same LAN, including Windows and Mac machines, would be able to visit my Mac's k8s dashboard.</li> </ol>
<p>You need to run kubectl proxy locally to access the dashboard from outside the Kubernetes cluster. You need to scp the admin.conf file (found on your Kubernetes master at <code>/etc/kubernetes/admin.conf</code>) to the machine from which you want to access the dashboard and pass it to the kubectl command. Please refer to the following posts:</p> <ul> <li><a href="https://stackoverflow.com/questions/39864385/how-to-access-expose-kubernetes-dashboard-service-outside-of-a-cluster">How to access/expose kubernetes-dashboard service outside of a cluster?</a></li> <li><a href="https://stackoverflow.com/questions/58041415/kubernetes-dashboar">Kubernetes dashboard</a></li> </ul> <p>To access the Dashboard, navigate your browser to <code>https://&lt;server_IP&gt;:31317</code></p>
<p>I understand <code>gcloud</code> uses the Dockerfile in the root directory of the source (<code>.</code>), as in the command: <code>gcloud builds submit --tag gcr.io/[PROJECT_ID]/quickstart-image .</code></p> <p>But I am trying to specify which Dockerfile to use to build the image, and I have not found any resource on how to do that. I don't know if it is possible.</p>
<p>The only way to specify a Dockerfile (i.e. other than <code>./Dockerfile</code>) would be to create a <code>cloudbuild.yaml</code> per techtabu@. This config could then use the <code>docker</code> builder and provide the specific Dockerfile, i.e.:</p> <pre><code>steps: - name: "gcr.io/cloud-builders/docker" args: - build - "--tag=gcr.io/$PROJECT_ID/quickstart-image" - "--file=./path/to/YourDockerFile" - . ... images: - "gcr.io/$PROJECT_ID/quickstart-image" </code></pre> <p>If you wish, you also get to specify an alternative name than <code>cloudbuild.yaml</code>.</p> <p>The <code>./Dockerfile</code> assumption is presumably to ease the transition to Cloud Build. </p> <p>I recommend you switch to using <code>cloudbuild.yaml</code> for the flexibility it provides.</p>
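<p>For completeness, an alternatively named config file is then passed on the command line via the <code>--config</code> flag of <code>gcloud builds submit</code>:</p>

```
gcloud builds submit --config=path/to/my-build.yaml .
```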
<p>This is what I currently have:</p> <p>With the basic <code>helm create</code> comes a <code>values.yaml</code> file and a <code>template</code> folder with a <code>deployment.yaml</code>. The <code>Deployment</code> file has been adapted to manage a spring-boot application in a docker image.</p> <p>Now the <code>values.yaml</code> contains something like </p> <pre><code>myApp: repository: myApp.mycompany.com/demo/my-app-customers tag: stable </code></pre> <p>And the <code>deployment.yaml</code> contains </p> <pre><code>spec: template: spec: containers: - name: {{ .Chart.Name }} image: "{{ .Values.myApp.repository }}:{{ .Values.myApp.tag }}" </code></pre> <p>Let's say I have to manage another spring-boot app, really close to what "my-app-customers" is. I thought that I would just have to add another line to <code>values.yaml</code>, like that:</p> <pre><code>mySecondApp: repository: myApp.mycompany.com/demo/my-app-others tag: stable </code></pre> <p>But this wouldn't work with <code>deployment.yaml</code> because it uses <code>.Values.myApp.repository</code>. Which means that I need to create another file <code>deployment-others.yaml</code> which uses <code>.Values.mySecondApp.repository</code> to deploy this second app. So it is not possible to use the same template for different applications.</p> <ul> <li>Am I correct or did I miss something?</li> <li>Would there be another way to do this, or do I need to copy-paste similar deployment (and other) files and have one template file per application?</li> </ul>
<p>The standard way to do this is to have a <code>second-deployment.yaml</code> file which is essentially a copy of the first one, but with different labels and value references. It's customary to put some templates in <code>_helpers.tpl</code> to generate things like the labels blocks and you can extend these for common environment variables, or you can offload some of the configuration into a ConfigMap to reduce the duplication.</p> <p>The templating layer of Helm isn't really aware of YAML syntax at all, so just as long as each file produces valid YAML, you can do anything you want. That can include multiple documents in a single file, and they can be generated by templating. This can get tricky, though. The basic outline might look like</p> <pre><code>{{- range list .Values.myApp .Values.mySecondApp -}} --- ... spec: template: spec: containers: - name: {{ $.Chart.Name }} image: "{{ .repository }}:{{ .tag }}" ... {{ end -}} </code></pre> <p>If you try this, you need to be aware that <code>.</code> is a fragment of the values object and not the root object like it usually is; note for example <code>$.Chart.Name</code> to explicitly reference the top-level chart name.</p>
<p>I have a very simple ASP.NET Core app (C# Web Application with Docker Support for Linux), and when I build the docker image and try to run it on my local PC the following happens: in docker, with my image called test, I type docker run test, at which point it states "Content root path: /app Now listening on: <a href="http://[::]:80" rel="nofollow noreferrer">http://[::]:80</a>". And even though when I type docker ps I can see the process running, when I try to navigate to localhost:80 all I get is a long wait and then "This site can’t be reached, localhost refused to connect."</p> <p>I typed </p> <p><code>docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ec158cc3b344</code></p> <p>which gave me the container's IP address, but even navigating directly to the container I either get "This site can’t be reached" if I navigate on port 80, or "Your connection was interrupted" if I try to access the IP directly.</p> <p>I also tried to step over docker completely and deploy the image to Kubernetes to see if this would give me any luck, but instead, when I try to access the service's External-IP (in this case localhost), I get the following: "This page isn’t working, localhost didn’t send any data."</p> <p>I also tried to use </p> <p><code>kubectl get pods -o wide</code></p> <p>and access the IPs of the pods directly, but this just gives me "This 10.1.0.32 page can’t be found", for example.</p> <p>And in case you're wondering, this is my dockerfile and kubernetes deployment .yml</p> <pre><code>FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base WORKDIR /app EXPOSE 80 FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build WORKDIR /src COPY ["Test/Test.csproj", "Test/"] RUN dotnet restore "Test/Test.csproj" COPY . . WORKDIR "/src/Test" RUN dotnet build "Test.csproj" -c Release -o /app FROM build AS publish RUN dotnet publish "Test.csproj" -c Release -o /app FROM base AS final WORKDIR /app COPY --from=publish /app . 
ENTRYPOINT ["dotnet", "Test.dll"]
</code></pre>

<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: &lt;DockerEndpoint&gt;.io/test:v5 #Sorry, can't include the real endpoint!
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: test
</code></pre>

<p>I also understand that .NET Core works in a weird way that doesn't expose its ports to the outside world unless you tell it to, but that, combined with my relative newness to the Docker/Kubernetes stack, has left me bewildered.</p>

<p>Does anybody have any idea how I can make a .NET Core app, any app, work with Docker?</p>

<p>P.S. I am really using such a simple app that even if I create a brand new .NET Core app with Docker support and try to immediately build and run it, it doesn't work. I cannot make it work with literally any .NET Core app!</p>
<p>When it says listening on <code>http://[::]:80</code>, it's talking about localhost <em>in the container</em>. When you try to access it via <code>http://localhost</code> in your web browser running on your computer, <code>localhost</code> is your computer, <em>not</em> the container. You need to use the container's IP.</p> <p>From your description, it sounds like you tried that as well, but there's no reason you should have any issues with that. You either didn't get the right IP or you did something else incorrect not detailed here.</p>
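<p>One detail worth adding: on Docker Desktop for Mac, the container IP is generally not routable from the host at all, so the reliable approach is to publish the container port onto a host port when starting the container. Assuming the image is named <code>test</code> as in the question:</p>

```shell
# Map host port 8080 to container port 80, where ASP.NET Core is listening
docker run -p 8080:80 test
```

<p>With the port published, the app should answer at <code>http://localhost:8080</code> from the host's browser.</p>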
<p>I'm trying to deploy the Vanilla MineCraft Server from <a href="https://github.com/helm/charts/tree/master/stable/minecraft" rel="nofollow noreferrer">stable/minecraft</a> using Helm on Kubernetes 1.14 running on AWS EKS, but I am consistently getting either <code>CrashLoopBackOff</code> or <code>Liveness Probe Failures</code>. This seems strange to me as I'm deploying the chart as specified in the documentation:</p>

<pre><code>helm install --name mine-release --set minecraftServer.eula=TRUE --namespace=mine-release stable/minecraft
</code></pre>

<p>Already attempted debugging:</p>

<ol>
<li>Tried decreasing and increasing memory: <code>helm install --name mine-release --set resources.requests.memory="1024Mi" --set minecraftServer.memory="1024M" --set minecraftServer.eula=TRUE --namespace=mine-release stable/minecraft</code></li>
<li>Tried viewing logs through <code>kubectl logs mine-release-minecraft-56f9c8588-xn9pv --namespace mine-release</code>, but this error always appears:</li>
</ol>

<pre><code>Error from server: Get https://10.0.143.216:10250/containerLogs/mine-release/mine-release-minecraft-56f9c8588-xn9pv/mine-release-minecraft: dial tcp 10.0.143.216:10250: i/o timeout
</code></pre>

<p>To give more context, the <code>kubectl describe pods mine-release-minecraft-56f9c8588-xn9pv --namespace mine-release</code> output for the pod description and events is below:</p>

<pre><code>Name:               mine-release-minecraft-56f9c8588-xn9pv
Namespace:          mine-release
Priority:           0
PriorityClassName:  &lt;none&gt;
Node:               ip-10-0-143-216.ap-southeast-2.compute.internal/10.0.143.216
Start Time:         Fri, 11 Oct 2019 08:48:34 +1100
Labels:             app=mine-release-minecraft
                    pod-template-hash=56f9c8588
Annotations:        kubernetes.io/psp: eks.privileged
Status:             Running
IP:                 10.0.187.192
Controlled By:      ReplicaSet/mine-release-minecraft-56f9c8588
Containers:
  mine-release-minecraft:
    Container ID:   docker://893f622e1129937fab38dc902e25e95ac86c2058da75337184f105848fef773f
    Image:          itzg/minecraft-server:latest
    Image ID:       docker-pullable://itzg/minecraft-server@sha256:00f592eb6660682f327770d639cf10692b9617fa8b9a764b9f991c401e325105
    Port:           25565/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 11 Oct 2019 08:50:56 +1100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 11 Oct 2019 08:50:03 +1100
      Finished:     Fri, 11 Oct 2019 08:50:53 +1100
    Ready:          False
    Restart Count:  2
    Requests:
      cpu:     500m
      memory:  1Gi
    Liveness:   exec [mcstatus localhost:25565 status] delay=30s timeout=1s period=5s #success=1 #failure=3
    Readiness:  exec [mcstatus localhost:25565 status] delay=30s timeout=1s period=5s #success=1 #failure=3
    Environment:
      EULA:                          true
      TYPE:                          VANILLA
      VERSION:                       1.14.4
      DIFFICULTY:                    easy
      WHITELIST:
      OPS:
      ICON:
      MAX_PLAYERS:                   20
      MAX_WORLD_SIZE:                10000
      ALLOW_NETHER:                  true
      ANNOUNCE_PLAYER_ACHIEVEMENTS:  true
      ENABLE_COMMAND_BLOCK:          true
      FORCE_gameMode:                false
      GENERATE_STRUCTURES:           true
      HARDCORE:                      false
      MAX_BUILD_HEIGHT:              256
      MAX_TICK_TIME:                 60000
      SPAWN_ANIMALS:                 true
      SPAWN_MONSTERS:                true
      SPAWN_NPCS:                    true
      VIEW_DISTANCE:                 10
      SEED:
      MODE:                          survival
      MOTD:                          Welcome to Minecraft on Kubernetes!
      PVP:                           false
      LEVEL_TYPE:                    DEFAULT
      GENERATOR_SETTINGS:
      LEVEL:                         world
      ONLINE_MODE:                   true
      MEMORY:                        512M
      JVM_OPTS:
      JVM_XX_OPTS:
    Mounts:
      /data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j8zql (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mine-release-minecraft-datadir
    ReadOnly:   false
  default-token-j8zql:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j8zql
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
</code></pre>

<pre><code>Events:
  Type     Reason                  Age                    From                                                      Message
  ----     ------                  ----                   ----                                                      -------
  Warning  FailedScheduling        2m25s                  default-scheduler                                         pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               2m24s                  default-scheduler                                         Successfully assigned mine-release/mine-release-minecraft-56f9c8588-xn9pv to ip-10-0-143-216.ap-southeast-2.compute.internal
  Warning  FailedAttachVolume      2m22s (x3 over 2m23s)  attachdetach-controller                                   AttachVolume.Attach failed for volume "pvc-b48ba754-eba7-11e9-b609-02ed13ff0a10" : "Error attaching EBS volume \"vol-08b29bb4eeca4df56\"" to instance "i-00ae1f5b96eed8e6a" since volume is in "creating" state
  Normal   SuccessfulAttachVolume  2m18s                  attachdetach-controller                                   AttachVolume.Attach succeeded for volume "pvc-b48ba754-eba7-11e9-b609-02ed13ff0a10"
  Warning  Unhealthy               60s                    kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  Readiness probe failed: Traceback (most recent call last):
      File "/usr/bin/mcstatus", line 11, in &lt;module&gt;
        sys.exit(cli())
      File "/usr/lib/python2.7/site-packages/click/core.py", line 764, in __call__
        return self.main(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/click/core.py", line 717, in main
        rv = self.invoke(ctx)
      File "/usr/lib/python2.7/site-packages/click/core.py", line 1137, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/usr/lib/python2.7/site-packages/click/core.py", line 956, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/usr/lib/python2.7/site-packages/click/core.py", line 555, in invoke
        return callback(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/mcstatus/scripts/mcstatus.py", line 58, in status
        response = server.status()
      File "/usr/lib/python2.7/site-packages/mcstatus/server.py", line 49, in status
        connection = TCPSocketConnection((self.host, self.port))
      File "/usr/lib/python2.7/site-packages/mcstatus/protocol/connection.py", line 129, in __init__
        self.socket = socket.create_connection(addr, timeout=timeout)
      File "/usr/lib/python2.7/socket.py", line 575, in create_connection
        raise err
    socket.error: [Errno 99] Address not available
  Normal   Pulling                 58s (x2 over 2m14s)    kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  pulling image "itzg/minecraft-server:latest"
  Normal   Killing                 58s                    kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  Killing container with id docker://mine-release-minecraft:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Started                 55s (x2 over 2m11s)    kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  Started container
  Normal   Pulled                  55s (x2 over 2m11s)    kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  Successfully pulled image "itzg/minecraft-server:latest"
  Normal   Created                 55s (x2 over 2m11s)    kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  Created container
  Warning  Unhealthy               25s (x2 over 100s)     kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  Readiness probe failed: Traceback (most recent call last):
      File "/usr/bin/mcstatus", line 11, in &lt;module&gt;
        sys.exit(cli())
      File "/usr/lib/python2.7/site-packages/click/core.py", line 764, in __call__
        return self.main(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/click/core.py", line 717, in main
        rv = self.invoke(ctx)
      File "/usr/lib/python2.7/site-packages/click/core.py", line 1137, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
      File "/usr/lib/python2.7/site-packages/click/core.py", line 956, in invoke
        return ctx.invoke(self.callback, **ctx.params)
      File "/usr/lib/python2.7/site-packages/click/core.py", line 555, in invoke
        return callback(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/mcstatus/scripts/mcstatus.py", line 58, in status
        response = server.status()
      File "/usr/lib/python2.7/site-packages/mcstatus/server.py", line 61, in status
        raise exception
    socket.error: [Errno 104] Connection reset by peer
  Warning  Unhealthy               20s (x8 over 95s)      kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  Readiness probe failed:
  Warning  Unhealthy               17s (x5 over 97s)      kubelet, ip-10-0-143-216.ap-southeast-2.compute.internal  Liveness probe failed:
</code></pre>

<p>A bit more about my Kubernetes setup:</p>

<p>Kubernetes version 1.14, with nodes running on <code>m5.large</code>s.</p>
<p>I made a reproduction of your problem, and the answer is the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">readiness and liveness probes.</a></p>

<p>Your chart doesn't have enough time to get up, so after the readiness probe returns false, the liveness probe kills it and the whole cycle starts again, and again.</p>

<blockquote>
<p>livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.</p>

<p>readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.</p>
</blockquote>

<p>You can either use your command with these probe settings added:</p>

<pre><code>helm install --name mine-release --set resources.requests.memory=&quot;1024Mi&quot; --set minecraftServer.memory=&quot;1024M&quot; --set minecraftServer.eula=TRUE --set livenessProbe.initialDelaySeconds=90 --set livenessProbe.periodSeconds=15 --set readinessProbe.initialDelaySeconds=90 --set readinessProbe.periodSeconds=15 --namespace=mine-release stable/minecraft
</code></pre>

<p><strong>OR</strong></p>

<p>Use helm fetch to download the helm chart to your PC:</p>

<pre><code>helm fetch stable/minecraft --untar
</code></pre>

<p>Instead of changing values in the helm install command, you can use a text editor like vi or nano and update everything in <a href="https://github.com/helm/charts/blob/master/stable/minecraft/values.yaml" rel="nofollow noreferrer">minecraft/values.yaml</a>:</p>

<pre><code>vi/nano ./minecraft/values.yaml
</code></pre>

<p>minecraft/values.yaml file after edit:</p>

<pre><code># ref: 
https://hub.docker.com/r/itzg/minecraft-server/
image: itzg/minecraft-server
imageTag: latest

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 1024Mi
    cpu: 500m

nodeSelector: {}
tolerations: []
affinity: {}

securityContext:
  # Security context settings
  runAsUser: 1000
  fsGroup: 2000

# Most of these map to environment variables. See Minecraft for details:
# https://hub.docker.com/r/itzg/minecraft-server/
livenessProbe:
  command:
    - mcstatus
    - localhost:25565
    - status
  initialDelaySeconds: 90
  periodSeconds: 15
readinessProbe:
  command:
    - mcstatus
    - localhost:25565
    - status
  initialDelaySeconds: 90
  periodSeconds: 15

minecraftServer:
  # This must be overridden, since we can't accept this for the user.
  eula: &quot;TRUE&quot;
  # One of: LATEST, SNAPSHOT, or a specific version (ie: &quot;1.7.9&quot;).
  version: &quot;1.14.4&quot;
  # This can be one of &quot;VANILLA&quot;, &quot;FORGE&quot;, &quot;SPIGOT&quot;, &quot;BUKKIT&quot;, &quot;PAPER&quot;, &quot;FTB&quot;, &quot;SPONGEVANILLA&quot;
  type: &quot;VANILLA&quot;
  # If type is set to FORGE, this sets the version; this is ignored if forgeInstallerUrl is set
  forgeVersion:
  # If type is set to SPONGEVANILLA, this sets the version
  spongeVersion:
  # If type is set to FORGE, this sets the URL to download the Forge installer
  forgeInstallerUrl:
  # If type is set to BUKKIT, this sets the URL to download the Bukkit package
  bukkitDownloadUrl:
  # If type is set to SPIGOT, this sets the URL to download the Spigot package
  spigotDownloadUrl:
  # If type is set to PAPER, this sets the URL to download the PaperSpigot package
  paperDownloadUrl:
  # If type is set to FTB, this sets the server mod to run
  ftbServerMod:
  # Set to true if running Feed The Beast and get an error like &quot;unable to launch forgemodloader&quot;
  ftbLegacyJavaFixer: false
  # One of: peaceful, easy, normal, and hard
  difficulty: easy
  # A comma-separated list of player names to whitelist.
  whitelist:
  # A comma-separated list of player names who should be admins.
  ops:
  # A server icon URL for server listings. Auto-scaled and transcoded.
  icon:
  # Max connected players.
  maxPlayers: 20
  # This sets the maximum possible size in blocks, expressed as a radius, that the world border can obtain.
  maxWorldSize: 10000
  # Allows players to travel to the Nether.
  allowNether: true
  # Allows server to announce when a player gets an achievement.
  announcePlayerAchievements: true
  # Enables command blocks.
  enableCommandBlock: true
  # If true, players will always join in the default gameMode even if they were previously set to something else.
  forcegameMode: false
  # Defines whether structures (such as villages) will be generated.
  generateStructures: true
  # If set to true, players will be set to spectator mode if they die.
  hardcore: false
  # The maximum height in which building is allowed.
  maxBuildHeight: 256
  # The maximum number of milliseconds a single tick may take before the server watchdog stops the server with the message. -1 disables this entirely.
  maxTickTime: 60000
  # Determines if animals will be able to spawn.
  spawnAnimals: true
  # Determines if monsters will be spawned.
  spawnMonsters: true
  # Determines if villagers will be spawned.
  spawnNPCs: true
  # Max view distance (in chunks).
  viewDistance: 10
  # Define this if you want a specific map generation seed.
  levelSeed:
  # One of: creative, survival, adventure, spectator
  gameMode: survival
  # Message of the Day
  motd: &quot;Welcome to Minecraft on Kubernetes!&quot;
  # If true, enable player-vs-player damage.
  pvp: false
  # One of: DEFAULT, FLAT, LARGEBIOMES, AMPLIFIED, CUSTOMIZED
  levelType: DEFAULT
  # When levelType == FLAT or CUSTOMIZED, this can be used to further customize map generation.
  # ref: https://hub.docker.com/r/itzg/minecraft-server/
  generatorSettings:
  worldSaveName: world
  # If set, this URL will be downloaded at startup and used as a starting point
  downloadWorldUrl:
  # force re-download of server file
  forceReDownload: false
  # If set, the modpack at this URL will be downloaded at startup
  downloadModpackUrl:
  # If true, old versions of downloaded mods will be replaced with new ones from downloadModpackUrl
  removeOldMods: false
  # Check accounts against Minecraft account service.
  onlineMode: true
  # If you adjust this, you may need to adjust resources.requests above to match.
  memory: 1024M
  # General JVM options to be passed to the Minecraft server invocation
  jvmOpts: &quot;&quot;
  # Options like -X that need to proceed general JVM options
  jvmXXOpts: &quot;&quot;
  serviceType: LoadBalancer

  rcon:
    # If you enable this, make SURE to change your password below.
    enabled: false
    port: 25575
    password: &quot;CHANGEME!&quot;
    serviceType: LoadBalancer

  query:
    # If you enable this, your server will be &quot;published&quot; to Gamespy
    enabled: false
    port: 25565

## Additional minecraft container environment variables
##
extraEnv: {}

persistence:
  ## minecraft data Persistent Volume Storage Class
  ## If defined, storageClassName: &lt;storageClass&gt;
  ## If set to &quot;-&quot;, storageClassName: &quot;&quot;, which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS &amp; OpenStack)
  ##
  # storageClass: &quot;-&quot;
  dataDir:
    # Set this to false if you don't care to persist state between restarts.
    enabled: true
    Size: 1Gi

podAnnotations: {}
</code></pre>

<p>Then we use helm install:</p>

<pre><code>helm install --name mine-release --namespace=mine-release ./minecraft -f ./minecraft/values.yaml
</code></pre>

<p>Results from helm install:</p>

<pre><code>NAME:   mine-release
LAST DEPLOYED: Fri Oct 11 14:52:17 2019
NAMESPACE: mine-release
STATUS: DEPLOYED

RESOURCES:
==&gt; v1/PersistentVolumeClaim
NAME                            STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mine-release-minecraft-datadir  Pending  standard  0s

==&gt; v1/Pod(related)
NAME                                    READY  STATUS   RESTARTS  AGE
mine-release-minecraft-f4558bfd5-mwm55  0/1    Pending  0         0s

==&gt; v1/Secret
NAME                    TYPE    DATA  AGE
mine-release-minecraft  Opaque  1     0s

==&gt; v1/Service
NAME                    TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)          AGE
mine-release-minecraft  LoadBalancer  10.0.13.180  &lt;pending&gt;    25565:32020/TCP  0s

==&gt; v1beta1/Deployment
NAME                    READY  UP-TO-DATE  AVAILABLE  AGE
mine-release-minecraft  0/1    1           0          0s

NOTES:
Get the IP address of your Minecraft server by running these commands in the same shell:
!! NOTE: It may take a few minutes for the LoadBalancer IP to be available. !!
You can watch for EXTERNAL-IP to populate by running:
kubectl get svc --namespace mine-release -w mine-release-minecraft
</code></pre>

<p>Results from logs:</p>

<pre><code>[12:53:45] [Server-Worker-1/INFO]: Preparing spawn area: 98%
[12:53:45] [Server thread/INFO]: Time elapsed: 26661 ms
[12:53:45] [Server thread/INFO]: Done (66.833s)! For help, type &quot;help&quot;
[12:53:45] [Server thread/INFO]: Starting remote control listener
[12:53:45] [RCON Listener #1/INFO]: RCON running on 0.0.0.0:25575
</code></pre>
<p>I have a problem with my Kubernetes cluster where my kube-scheduler pod is stuck in the 'CrashLoopBackOff' state and I am unable to rectify it. The logs are complaining of a missing service token:</p>

<pre><code>kubectl logs kube-scheduler-master -n kube-system

I1011 09:01:04.309289       1 serving.go:319] Generated self-signed cert in-memory
W1011 09:01:20.579733       1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1011 09:01:20.579889       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W1011 09:01:20.579917       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W1011 09:01:20.579990       1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1011 09:01:20.580040       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
invalid configuration: no configuration has been provided
</code></pre>

<p>Can anyone please explain what <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> is, where it is supposed to be stored (is the path on the host or within the container), and how do I go about regenerating it?</p>

<p>I'm running version 1.15.4 across all of my nodes, which were set up using <code>kubeadm</code>. I have <em>stupidly</em> upgraded the cluster since this error first started (I read that it could possibly be a bug in the version I was using). 
I was previously using version 1.14.*.</p>

<p>Any help would be greatly appreciated; everything runs on this cluster and I feel like my arms have been cut off without it.</p>

<p>Thanks in advance,</p>

<p>Harry</p>
<p>By default <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code> is mounted in every pod and contains the authentication token to access your Kubernetes API server.</p> <p>You can disable mounting it by specifying <code>automountServiceAccountToken: false</code> in your deployment configuration. Some tools like <code>terraform</code> with Kubernetes provisioner disable mounting the token by default as well. On <code>terraform</code> this can be reenabled by adding <code>automount_service_account_token = true</code> to the deployment spec.</p>
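<p>For illustration, a pod template that opts out of the token mount might look like this (deployment and image names here are placeholders):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      # The token will NOT be mounted at
      # /var/run/secrets/kubernetes.io/serviceaccount/token
      automountServiceAccountToken: false
      containers:
      - name: app
        image: example:latest
```

<p>The same field can also be set on the ServiceAccount object itself, in which case it applies to every pod using that account unless the pod spec overrides it.</p>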
<p>Trying to do something that should be pretty simple: starting up an Express pod and fetch the <code>localhost:5000/</code> which should respond with <code>Hello World!</code>.</p> <ul> <li>I've installed <code>ingress-nginx</code> for <a href="https://kubernetes.github.io/ingress-nginx/deploy/#docker-for-mac" rel="nofollow noreferrer">Docker for Mac</a> and <a href="https://kubernetes.github.io/ingress-nginx/deploy/#minikube" rel="nofollow noreferrer">minikube</a> <ul> <li><strong>Mandatory</strong>: <code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml</code></li> <li><strong>Docker for Mac</strong>: <code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml</code></li> <li><strong>minikube</strong>: <code>minikube addons enable ingress</code></li> </ul></li> <li>I run <code>skaffold dev --tail</code></li> <li>It prints out <code>Example app listening on port 5000</code>, so apparently is running</li> <li>Navigate to <code>localhost</code> and <code>localhost:5000</code> and get a "Could not get any response" error</li> <li>Also, tried <code>minikube ip</code> which is <code>192.168.99.100</code> and experience the same results</li> </ul> <p>Not quite sure what I am doing wrong here. Code and configs are below. 
Suggestions?</p>

<hr>

<p><strong>index.js</strong></p>

<pre><code>// Import dependencies
const express = require('express');

// Set the ExpressJS application
const app = express();

// Set the listening port
// Web front-end is running on port 3000
const port = 5000;

// Set root route
app.get('/', (req, res) =&gt; res.send('Hello World!'));

// Listen on the port
app.listen(port, () =&gt; console.log(`Example app listening on port ${port}`));
</code></pre>

<hr>

<p><strong>skaffold.yaml</strong></p>

<pre><code>apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
  - image: sockpuppet/server
    context: server
    docker:
      dockerfile: Dockerfile.dev
    sync:
      manual:
      - src: '**/*.js'
        dest: .
deploy:
  kubectl:
    manifests:
    - k8s/ingress-service.yaml
    - k8s/server-deployment.yaml
    - k8s/server-cluster-ip-service.yaml
</code></pre>

<hr>

<p><strong>ingress-service.yaml</strong></p>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: server-cluster-ip-service
          servicePort: 5000
</code></pre>

<hr>

<p><strong>server-deployment.yaml</strong></p>

<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
      - name: server
        image: sockpuppet/server
        ports:
        - containerPort: 5000
</code></pre>

<hr>

<p><strong>server-cluster-ip-service.yaml</strong></p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
  - port: 5000
    targetPort: 5000
</code></pre>

<hr>

<p><strong>Dockerfile.dev</strong></p>

<pre><code>FROM node:12.10-alpine
EXPOSE 5000
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
</code></pre>

<hr>

<p><strong>Output from <code>describe</code></strong></p>

<pre><code>$ kubectl describe ingress ingress-service
Name:             ingress-service
Namespace:        default
Address:
Default backend:  default-http-backend:80 (&lt;none&gt;)
Rules:
  Host       Path  Backends
  ----       ----  --------
  localhost
             /   server-cluster-ip-service:5000 (172.17.0.7:5000,172.17.0.8:5000,172.17.0.9:5000)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-service","namespace":"default"},"spec":{"rules":[{"host":"localhost","http":{"paths":[{"backend":{"serviceName":"server-cluster-ip-service","servicePort":5000},"path":"/"}]}}]}}
  kubernetes.io/ingress.class: nginx
Events:
  Type    Reason  Age  From                      Message
  ----    ------  ---  ----                      -------
  Normal  CREATE  16h  nginx-ingress-controller  Ingress default/ingress-service
  Normal  CREATE  21s  nginx-ingress-controller  Ingress default/ingress-service
</code></pre>

<hr>

<p><strong>Output from <code>kubectl get po -l component=server</code></strong></p>

<pre><code>$ kubectl get po -l component=server
NAME                                READY   STATUS    RESTARTS   AGE
server-deployment-cf6dd5744-2rnh9   1/1     Running   0          11s
server-deployment-cf6dd5744-j9qvn   1/1     Running   0          11s
server-deployment-cf6dd5744-nz4nj   1/1     Running   0          11s
</code></pre>

<p><strong>Output from <code>kubectl describe pods server-deployment</code></strong>: Noticed that the <code>Host Port: 0/TCP</code>. Possibly the issue? 
</p>

<pre><code>Name:               server-deployment-6b78885779-zttns
Namespace:          default
Priority:           0
PriorityClassName:  &lt;none&gt;
Node:               minikube/10.0.2.15
Start Time:         Tue, 08 Oct 2019 19:54:03 -0700
Labels:             app.kubernetes.io/managed-by=skaffold-v0.39.0
                    component=server
                    pod-template-hash=6b78885779
                    skaffold.dev/builder=local
                    skaffold.dev/cleanup=true
                    skaffold.dev/deployer=kubectl
                    skaffold.dev/docker-api-version=1.39
                    skaffold.dev/run-id=c545df44-a37d-4746-822d-392f42817108
                    skaffold.dev/tag-policy=git-commit
                    skaffold.dev/tail=true
Annotations:        &lt;none&gt;
Status:             Running
IP:                 172.17.0.5
Controlled By:      ReplicaSet/server-deployment-6b78885779
Containers:
  server:
    Container ID:   docker://2d0aba8f5f9c51a81f01acc767e863b7321658f0a3d0839745adb99eb0e3907a
    Image:          sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
    Image ID:       docker://sha256:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 08 Oct 2019 19:54:05 -0700
    Ready:          True
    Restart Count:  0
    Environment:    &lt;none&gt;
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qz5kr (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            True
  ContainersReady  True
  PodScheduled     True
Volumes:
  default-token-qz5kr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qz5kr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ---        ----               -------
  Normal  Scheduled  &lt;unknown&gt;  default-scheduler  Successfully assigned default/server-deployment-6b78885779-zttns to minikube
  Normal  Pulled     7s         kubelet, minikube  Container image "sockpuppet/server:668dfe550d93a0ae76eb07e0bab900f3968a7776f4f177c97f61b18a8b1677a7" already present on machine
  Normal  Created    7s         kubelet, minikube  Created container server
  Normal  Started    6s         kubelet, minikube  Started container server
</code></pre>
<p>OK, got this sorted out now.</p>

<p>It boils down to the kind of Service being used: <code>ClusterIP</code>.</p>

<blockquote>
<p>ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.</p>
</blockquote>

<p>If I am wanting to connect to a Pod or Deployment directly from outside of the cluster (something like Postman, pgAdmin, etc.) <em>and</em> I want to do it using a Service, I should be using <code>NodePort</code>:</p>

<blockquote>
<p>NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;</code>:<code>&lt;NodePort&gt;</code>.</p>
</blockquote>

<p>So in my case, if I want to continue using a Service, I'd change my Service manifest to:</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: server
  ports:
  - port: 5000
    targetPort: 5000
    nodePort: 31515
</code></pre>

<p>Making sure to manually set <code>nodePort: &lt;port&gt;</code>, otherwise it is kind of random and a pain to use.</p>

<p>Then I'd get the <code>minikube</code> IP with <code>minikube ip</code> and connect to the Pod with <code>192.168.99.100:31515</code>.</p>

<p>At that point, everything worked as expected.</p>

<p>But that means having separate sets of development (<code>NodePort</code>) and production (<code>ClusterIP</code>) manifests, which is probably totally fine. But I want my manifests to stay as close to the production version (i.e. 
<code>ClusterIP</code>).</p>

<p>There are a couple ways to get around this:</p>

<ol>
<li>Using something like <a href="https://github.com/kubernetes-sigs/kustomize" rel="noreferrer">Kustomize</a> where you can set a base.yaml and then have overlays for each environment where it just changes the relevant info, avoiding manifests that are mostly duplicative.</li>
<li><p>Using <code>kubectl port-forward</code>. I think this is the route I am going to go. That way I can keep my one set of production manifests, but when I want to QA Postgres with pgAdmin I can do:</p>

<p><code>kubectl port-forward services/postgres-cluster-ip-service 5432:5432</code></p>

<p>Or for the back-end and Postman:</p>

<p><code>kubectl port-forward services/server-cluster-ip-service 5000:5000</code></p></li>
</ol>

<p>I'm playing with doing this through the <code>ingress-service.yaml</code> using <code>nginx-ingress</code>, but don't have that working quite yet. Will update when I do. But for me, <code>port-forward</code> seems the way to go since I can just have one set of production manifests that I don't have to alter.</p>

<p><strong>Skaffold Port-Forwarding</strong></p>

<p>This is even better for my needs. Appending this to the bottom of the <code>skaffold.yaml</code> is basically the same thing as <code>kubectl port-forward</code> without tying up a terminal or two:</p>

<pre><code>portForward:
- resourceType: service
  resourceName: server-cluster-ip-service
  port: 5000
  localPort: 5000
- resourceType: service
  resourceName: postgres-cluster-ip-service
  port: 5432
  localPort: 5432
</code></pre>

<p>Then run <code>skaffold dev --port-forward</code>.</p>
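<p>For anyone who prefers the Kustomize route mentioned above, a rough sketch of what a dev overlay could look like (file layout and names are hypothetical; the base holds the production <code>ClusterIP</code> manifests):</p>

```yaml
# overlays/dev/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- service-nodeport.yaml
---
# overlays/dev/service-nodeport.yaml -- patch only the field that differs
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: NodePort
```

<p>Then <code>kubectl apply -k overlays/dev</code> renders the base with the Service type swapped, while production keeps using the untouched base.</p>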
<p>I installed the Prometheus helm chart to a kubernetes cluster for monitoring. By default,</p>

<ul>
<li>the persistent volume size for the prometheus server is defined as 8Gi.</li>
<li>the Prometheus server will store the metrics in this volume for 15 days (retention period)</li>
</ul>

<p>Some days after deploying the chart, the prometheus server pod entered a CrashLoopBackOff state. The reason found in the pod logs was:</p>

<pre><code>level=error ts=2019-10-09T11:03:10.802847347Z caller=main.go:625 err="opening storage failed: zero-pad torn page: write /data/wal/00000429: no space left on device"
</code></pre>

<p>That means there is no space available in the disk (persistent volume) to save the data. So I cleared the existing data of the volume and fixed the issue temporarily.</p>

<p>What would be the proper solution for this?</p>

<p>The <a href="https://prometheus.io/docs/prometheus/latest/storage/#operational-aspects" rel="nofollow noreferrer">Prometheus documentation</a> says:</p>

<p><strong>To plan the capacity of a Prometheus server, you can use the rough formula:</strong></p>

<pre><code>needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
</code></pre>

<p>Can someone explain how to use this formula properly?</p>

<p>Why is the 8Gi size not enough with a 15-day retention period?</p>

<p><strong>EDIT:</strong></p>

<p>The default 8Gi space was 100% used after 6 days.</p>
<p>As of Prometheus 2.7, they've introduced a new flag to manage retention. From the docs:</p> <blockquote> <p><code>--storage.tsdb.retention.size</code>: [EXPERIMENTAL] This determines the maximum number of bytes that storage blocks can use (note that this does not include the WAL size, which can be substantial). The oldest data will be removed first. Defaults to 0 or disabled. This flag is experimental and can be changed in future releases. Units supported: KB, MB, GB, PB. Ex: "512MB"</p> </blockquote> <p>You can set this argument in your Deployment configuration to limit the retention according to size, instead of time.</p> <p>As it is still experimental, according to <a href="https://www.robustperception.io/configuring-prometheus-storage-retention" rel="nofollow noreferrer">this</a> source, it would be safe to allow space for the WAL plus one maximum-size block (which is the smaller of 10% of the retention time and a month).</p>
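<p>As a sketch (the values are illustrative, not taken from the chart), the flag goes on the Prometheus server container's arguments:</p> <pre><code>containers:
  - name: prometheus
    image: prom/prometheus:v2.7.1
    args:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.retention.time=15d"
      - "--storage.tsdb.retention.size=6GB"
</code></pre> <p>If you installed via the Helm chart, check the chart's values for the equivalent setting rather than editing the rendered Deployment by hand.</p>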
<p>This seems like it should be simple enough, but I haven't been able to find an answer.</p> <p>We have three labeled nodes:</p> <p><code>thalia0</code> <code>thalia1</code> <code>thalia2</code></p> <p>and a Deployment spec, named <code>mm-sa</code>.</p> <p>I would simply like to set the number of replicas of <code>mm-sa</code> to 48 and have the scheduler assign 16 pods to each node.</p> <p>The closest I could find to this was here, but it looks like this is not yet available: <a href="https://github.com/kubernetes/kubernetes/issues/68981" rel="nofollow noreferrer">Node affinity for even spread of pods across multiple availability zones</a>. I also found this, <a href="https://stackoverflow.com/questions/57035421/why-kubernetes-scheduler-ignores-nodeaffinity">why-kubernetes-scheduler-ignores-nodeaffinity</a>, but it says that "According to the documentation, nodeAffinity must exist for each node that can be used for a scheduled pod and the node having the biggest weight sum is chosen."</p> <p>This seems like such a simple use-case, but I cannot figure out how to achieve it without using a naïve approach, as in defining 3 different Deployments named <code>mm-sa1</code>, ... <code>mm-sa3</code> and using a <code>nodeSelector</code> to assign each accordingly. (NB: I am currently using a <code>nodeSelector</code> to assign 16 replicas of <code>mm-sa</code> on a single node).</p>
<p>Kubernetes will automatically spread the pods in a replication controller or service across nodes in a single-zone cluster. So @David Maze's solution should apply if there are no other nodes. However since you mentioned that you have other nodes that you do not want to schedule on, I would recommend using the <em>nodeSelector</em>. By using <em>nodeSelector</em>, as long as the node has the same key-value pair in the YAML, the pod will be able to schedule on the node. </p> <p>You can read more about <em>nodeSelector</em> here: <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector</a></p>
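<p>For example (the label key/value here are made up — use whatever label your target nodes actually carry, e.g. after <code>kubectl label node thalia0 pool=mm-sa</code>):</p> <pre><code># inside the Deployment's pod template
spec:
  template:
    spec:
      nodeSelector:
        pool: mm-sa
</code></pre>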
<p>Currently practicing with Kubernetes (managed, on DO), I ran into an issue I couldn't resolve for two days. I have nginx-ingress set up along with cert-manager, and a domain where git.domain.com points to the IP of the load balancer. I can reach my Gitea deployment via the web, everything seems to work.</p> <p>What I want to achieve now is that I can also use SSH like so</p> <pre><code>git clone git@git.domain.com:org/repo.git
</code></pre> <p>So I somehow need to expose the container port 22 via the service, then via the ingress. I tried a couple of things, but none of them seemed to work, probably because I'm a beginner at K8S. Here is the working setup I use.</p> <p>Service definition:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: gitea-service
spec:
  selector:
    app: gitea
  ports:
  - name: gitea-http
    port: 3000
    targetPort: gitea-http
  - name: gitea-ssh
    port: 22
    targetPort: gitea-ssh
</code></pre> <p>Ingress definition</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - git.domain.com
    secretName: letsencrypt-prod
  rules:
  - host: git.domain.com
    http:
      paths:
      - backend:
          serviceName: gitea-service
          servicePort: gitea-http
</code></pre> <p>And part of my deployment, just to make sure:</p> <pre><code>...
ports:
- containerPort: 3000
  name: gitea-http
- containerPort: 22
  name: gitea-ssh
...
</code></pre> <p>Sorry if it's a dumb question, I think there are some basics that I confuse here. Thanks!</p>
<blockquote> <p>So I somehow need to expose the container port 22 via the service, then via the ingress</p> </blockquote> <p>So yes and no: an Ingress is specifically for virtual-hosting using the <code>host:</code> header (or SNI) of the incoming request to know which backend to use. There is no such mechanism in SSH, or at the very least there's no Ingress controller that I'm aware of which supports protocols other than http for doing that.</p> <p>However, the <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">nginx Ingress controller</a> supports <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">TCP and UDP services</a> so long as you can assign a dedicated port for them (which in your case, you can). You would create a <code>ConfigMap</code> entry saying which port on the <strong>ingress controller's</strong> <code>Service</code> to map to the port on <strong>gitea's</strong> <code>Service</code>, and then you'll need to expose port 22 on whatever is Internet-facing in Digital Ocean that routes traffic to the ingress controller's <code>Service</code>.</p> <pre><code>[Internet] -&gt; :22[load balancer] --&gt; :12345[controller Service] --&gt; :22[gitea-service] </code></pre> <p>There are <a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md#servicebetakubernetesiodo-loadbalancer-protocol" rel="noreferrer">Digital Ocean annotations</a> that you can use to switch certain ports over to TCP, but I didn't study that further than a quick search</p> <p>I just used the nginx ingress controller as a concrete example, but the haproxy based ingress controllers will almost certainly do that, and other controllers may have similar options, because your question is a reasonable one</p>
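<p>As a rough sketch (the ConfigMap name and namespace depend on how your controller was installed — <code>tcp-services</code> in <code>ingress-nginx</code> is a common default), the entry maps a controller port to a backend service:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # &lt;controller port&gt;: "&lt;namespace&gt;/&lt;service name&gt;:&lt;service port&gt;"
  "22": "default/gitea-service:22"
</code></pre> <p>You would then also add port 22 to the controller's <code>Service</code> (and, on Digital Ocean, to the load balancer) so the traffic actually reaches the controller.</p>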
<p>I'm using persistent volume claim to store data in container:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: test-pvc labels: type: amazonEBS spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> <p>Declaration in spec:</p> <pre><code>spec: volumes: - name: test-data-vol persistentVolumeClaim: claimName: test-pvc containers: - name: test image: my.docker.registry/test:1.0 volumeMounts: - mountPath: /var/data name: test-data-vol </code></pre> <p>When I started it first time, this volume was mounted correctly. But when I Tried to update container image:</p> <pre><code>- image: my.docker.registry/test:1.0 + image: my.docker.registry/test:1.1 </code></pre> <p>This volume failed to mount to new pod:</p> <pre><code># kubectl get pods test-7655b79cb6-cgn5r 0/1 ContainerCreating 0 3m test-bf6498559-42vvb 1/1 Running 0 11m # kubectl describe test-7655b79cb6-cgn5r Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3m5s default-scheduler Successfully assigned test-7655b79cb6-cgn5r to ip-*-*-*-*.us-west-2.compute.internal Warning FailedAttachVolume 3m5s attachdetach-controller Multi-Attach error for volume "pvc-2312eb4c-c270-11e8-8d4e-065333a7774e" Volume is already exclusively attached to one node and can't be attached to another Normal SuccessfulMountVolume 3m4s kubelet, ip-*-*-*-*.us-west-2.compute.internal MountVolume.SetUp succeeded for volume "default-token-x82km" Warning FailedMount 62s kubelet, ip-*-*-*-*.us-west-2.compute.internal Unable to mount volumes for pod "test-7655b79cb6-cgn5r(fab0862c-d1cf-11e8-8d4e-065333a7774e)": timeout expired waiting for volumes to attach/mount for pod "test-7655b79cb6-cgn5r". list of unattached/unmounted volumes=[test-data-vol] </code></pre> <p>It seems that Kubernetes can't re-attach this volume from one container to another. How to handle it correctly? 
I need the data on this volume to be available to the new version of the deployment once the old version has stopped.</p>
<p>From the context you provided in your question, I can't tell if your intention was to run a <strong>single instance</strong> stateful application, or a <strong>clustered</strong> stateful application.</p> <p>I ran into this problem recently and from <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/#updating" rel="noreferrer">this section in the docs</a>, here's how to go about this...</p> <p><strong>If you're running a single instance stateful app:</strong></p> <ul> <li>You should <strong>not</strong> scale the app, that is, leave the default value of <code>spec.replicas</code> as 1 if you're using a <code>Deployment</code></li> <li>You should instruct Kubernetes to <strong>not</strong> use rolling updates, that is, you should set <code>spec.strategy.type</code> to <code>Recreate</code> in your <code>Deployment</code></li> </ul> <p>Sample <code>Deployment</code> (from the <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/#deploy-mysql" rel="noreferrer">docs</a>):</p> <pre> <strong># application/mysql/mysql-deployment.yaml</strong> apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 <strong>kind: Deployment</strong> metadata: name: mysql spec: selector: matchLabels: app: mysql <strong>strategy:</strong> <strong>type: Recreate</strong> template: metadata: labels: app: mysql spec: containers: - image: mysql:5.6 name: mysql env: # Use secret in real usage - name: MYSQL_ROOT_PASSWORD value: password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim </pre> <p>And the sample <code>PersistentVolume</code> &amp; <code>PersistentVolumeClaim</code> (from the <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/#deploy-mysql" rel="noreferrer">docs</a>):</p> <pre> 
<strong># application/mysql/mysql-pv.yaml</strong> apiVersion: v1 kind: PersistentVolume metadata: name: mysql-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 20Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 20Gi </pre> <p>The obvious underlying matter here is that a rolling update will not work, because there can be no more than one pod running at any time. Setting <code>spec.strategy.type</code> to <code>Recreate</code> tells Kubernetes to stop the running pod before deploying a new one, so presumably there will be some downtime, even if minimal.</p> <p><strong>If you need a clustered stateful application</strong>, then using the already mentioned <code>StatefulSet</code> as a controller type or <code>ReadWriteMany</code> as a storage type would probably be the way to go.</p>
<p>I'm running a Kubernetes cluster in a public cloud (Azure/AWS/Google Cloud), and I have some non-HTTP services I'd like to expose for users.</p> <p>For HTTP services, I'd typically use an Ingress resource to expose that service publicly through an addressable DNS entry.</p> <p>For non-HTTP, TCP-based services (e.g, a database such as PostgreSQL) how should I expose these for public consumption?</p> <p>I considered using <code>NodePort</code> services, but this requires the nodes themselves to be publicly accessible (relying on <code>kube-proxy</code> to route to the appropriate node). I'd prefer to avoid this if possible.</p> <p><code>LoadBalancer</code> services seem like another option, though I don't want to create a dedicated cloud load balancer for <em>each</em> TCP service I want to expose.</p> <p>I'm aware that the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">NGINX Ingress controller supports exposing TCP and UDP services</a>, but that seems to require a static definition of the services you'd like to expose. For my use case, these services are being dynamically created and destroyed, so it's not possible to define these service mappings upfront in a static <code>ConfigMap</code>.</p>
<blockquote> <p>For non-HTTP, TCP-based services (e.g, a database such as PostgreSQL) how should I expose these for public consumption?</p> </blockquote> <p>Well, that depends on how you expect the ultimate user to <strong>address</strong> those Services? As you pointed out, with an Ingress, it is possible to use virtual hosting to route all requests to the same Ingress controller, and then use the <code>Host:</code> header to dispatch within the cluster.</p> <p>With a TCP service, such as PostgreSQL, there is no such header. So, you would necessarily have to have either an IP based mechanism, or assign each one a dedicated port on your Internet-facing IP</p> <p>If your clients are IPv6 aware, assigning each Service a dedicated IP address is absolutely reasonable, given the absolutely massive IP space that IPv6 offers. But otherwise, you have two knobs to turn: the IP and the port.</p> <p>From there, how you get those connections routed within your cluster to the right Service is going to depend on how you solved the first problem</p>
<p><code>kubectl proxy</code> and <code>kubectl port-forward</code> look similar and confusing to me; what are their main differences and use cases?</p>
<p>As mentioned in "<a href="https://stackoverflow.com/a/51469150/6309">How <code>kubectl port-forward</code> works?</a>"</p> <blockquote> <p><code>kubectl port-forward</code> forwards connections to a local port to a port on a pod. </p> <p>Compared to <code>kubectl proxy</code>, <code>kubectl port-forward</code> is more generic as it can forward TCP traffic while <code>kubectl proxy</code> can only forward HTTP traffic.</p> </blockquote> <p>As an example, see "<a href="https://medium.com/pixelpoint/kubernetes-port-forwarding-simple-like-never-before-20a8ab16370f" rel="noreferrer">Kubernetes port forwarding simple like never before</a>" from <a href="https://twitter.com/alex_barashkov" rel="noreferrer">Alex Barashkov</a>:</p> <blockquote> <p><strong>Port forwarding mostly used for the purpose of getting access to internal cluster resources and debugging</strong>.</p> <p>How does it work?</p> <p>Generally speaking, using port forwarding you could get on your ‘localhost’ any services launched in your cluster.<br> For example, if you have Redis installed in the cluster on 6379, by using a command like this:</p> <pre><code>kubectl port-forward redis-master-765d459796-258hz 7000:6379 </code></pre> <p>you could forward Redis from the cluster to localhost:7000, access it locally and do whatever you want to do with it.</p> </blockquote> <p>For a limited HTTP access, see kubectl proxy, and, as an example, "<a href="https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca" rel="noreferrer">On Securing the Kubernetes Dashboard</a>" from <a href="https://twitter.com/jbeda" rel="noreferrer">Joe Beda</a>:</p> <blockquote> <p>The easiest and most common way to access the cluster is through kubectl proxy. 
This creates a local web server that securely proxies data to the dashboard through the Kubernetes API server.</p> </blockquote> <p>As shown in "<a href="https://medium.com/edureka/kubernetes-dashboard-d909b8b6579c" rel="noreferrer">A Step-By-Step Guide To Install &amp; Use Kubernetes Dashboard</a>" from <a href="https://twitter.com/edurekaIN" rel="noreferrer">Awanish</a>:</p> <blockquote> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml </code></pre> <p>Accessing Dashboard using the kubectl</p> <pre><code>kubectl proxy </code></pre> <p>It will proxy server between your machine and Kubernetes API server.</p> <p>Now, to view the dashboard in the browser, navigate to the following address in the browser of your Master VM:</p> <pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ </code></pre> </blockquote>
<p>I have two containers, say A and B, where A should run before B. But A is a server application, whose final state is Running rather than Completed, so I wonder: will B never be executed? How can I deal with this?</p>
<p>If A and B are part of the same pod, then an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainer</a> is the traditional way to establish ordering.</p> <p>From the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states" rel="nofollow noreferrer">Kubernetes Pod lifecycle</a>, I suppose you mean "Running, but not Terminated".</p> <p>A <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">pod liveness/readiness probe</a> is in your case a better fit, since the server will not accept requests until ready. </p> <p>Read "<a href="https://medium.com/faun/straight-to-the-point-kubernetes-probes-e5b23e267d9d" rel="nofollow noreferrer">Straight to the Point: Kubernetes Probes</a>" from <a href="https://twitter.com/petomalina" rel="nofollow noreferrer">Peter Malina</a>:</p> <blockquote> <p>Both readiness and liveness probes run in parallel throughout the life of a container. </p> <ul> <li>Use the liveness probe to detect an internal failure and restart the container (e.g. HTTP server down). </li> <li>Use the readiness probe to detect if you can serve traffic (e.g. established DB connection) and wait (not restart) for the container. 
</li> </ul> <p>A dead container is also not a ready container.<br> To serve traffic, all containers within a pod must be ready.</p> </blockquote> <p>You can add a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate" rel="nofollow noreferrer">pod readiness gate (stable from 1.14)</a> to specify additional conditions to be evaluated for Pod readiness.</p> <p>Read also "<a href="https://blog.colinbreck.com/kubernetes-liveness-and-readiness-probes-how-to-avoid-shooting-yourself-in-the-foot/" rel="nofollow noreferrer">Kubernetes Liveness and Readiness Probes: How to Avoid Shooting Yourself in the Foot</a>" from <a href="https://blog.colinbreck.com/" rel="nofollow noreferrer">Colin Breck</a></p> <p>"<a href="https://stackoverflow.com/a/53874319/6309">Should Health Checks call other App Health Checks</a>" compares that approach with the <a href="https://medium.com/@xcoulon/initializing-containers-in-order-with-kubernetes-18173b9cc222" rel="nofollow noreferrer">InitContainer approach</a></p>
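<p>A minimal readiness probe for the server container might look like this (the path and port are assumptions — point them at whatever health endpoint your server actually exposes):</p> <pre><code>containers:
  - name: a
    image: my-server:latest   # illustrative image name
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
</code></pre>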
<p>I am running <strong>prometheus</strong> in my <strong>kubernetes</strong> cluster. I have the following system in kubernetes:</p> <p>I have 4 nodes. I want to calculate free memory. I want to have the summation of those four nodes. Then I want to find the maximum over 1 day. So, for example,</p> <p>at time=t1 node1: 500 MB node2: 600 MB node3: 200 MB node4: 300 MB Total = 1700 MB</p> <p>at time=t2 node1: 400 MB node2: 700 MB node3: 100 MB node4: 200 MB Total = 1300 MB</p> <p>at time=t3 node1: 600 MB node2: 800 MB node3: 1200 MB node4: 1300 MB Total = 3900 MB</p> <p>at time=t4 node1: 100 MB node2: 200 MB node3: 300 MB node4: 400 MB Total = 1000 MB</p> <p>So, The answer to my query should be 3900 MB. I am not able to do max_over_time for the sum.</p> <p>I have done like this(which is not working at all): </p> <pre><code>max_over_time(sum(node_memory_MemFree)[2m]) </code></pre>
<p><a href="https://prometheus.io/blog/2019/01/28/subquery-support/" rel="noreferrer">Since</a> version 2.7 (Jan 2019), Prometheus supports sub-queries:</p> <pre><code>max_over_time(
  sum(node_memory_MemFree_bytes{instance=~"foobar.*"})[1d:1h]
)
</code></pre> <p>(the maximum of the sum over the past day, evaluated at a one-hour resolution.)</p> <p>Read the documentation for more information on subqueries: <a href="https://prometheus.io/docs/prometheus/latest/querying/examples/#subquery" rel="noreferrer">https://prometheus.io/docs/prometheus/latest/querying/examples/#subquery</a></p> <p>However, do note the blog's recommendation:</p> <blockquote> <p><strong>Epilogue</strong></p> <p>Though subqueries are very convenient to use in place of recording rules, using them unnecessarily has performance implications. Heavy subqueries should eventually be converted to recording rules for efficiency.</p> <p>It is also not recommended to have subqueries inside a recording rule. Rather create more recording rules if you do need to use subqueries in a recording rule.</p> </blockquote> <p>The use of recording rules is explained in Brian Brazil's article: <a href="https://www.robustperception.io/composing-range-vector-functions-in-promql/" rel="noreferrer">https://www.robustperception.io/composing-range-vector-functions-in-promql/</a></p>
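<p>Applied to the question, a recording rule plus a plain range function would look roughly like this (the rule name follows the usual <code>level:metric:operations</code> convention but is otherwise arbitrary):</p> <pre><code># recording rule, evaluated at the regular rule interval
groups:
  - name: memory
    rules:
      - record: instance:node_memory_MemFree_bytes:sum
        expr: sum(node_memory_MemFree_bytes)
</code></pre> <p>and then the query needs no subquery at all:</p> <pre><code>max_over_time(instance:node_memory_MemFree_bytes:sum[1d])
</code></pre>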
<p>I'm trying to get access to mongodb replicaset via kubectl, so I won't expose it to internet, I can't use OpenVPN since Calico blocks it.</p> <p>So I'm using this script:</p> <pre><code>export MONGO_POD_NAME1=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[0].metadata.name}") export MONGO_POD_NAME2=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[1].metadata.name}") export MONGO_POD_NAME3=$(kubectl get pods --namespace develop -l "app=mongodb-replicaset" -o jsonpath="{.items[2].metadata.name}") echo $MONGO_POD_NAME1, $MONGO_POD_NAME2, $MONGO_POD_NAME3 kubectl port-forward --namespace develop $MONGO_POD_NAME1 27020:27017 &amp; p3=$! kubectl port-forward --namespace develop $MONGO_POD_NAME2 27021:27017 &amp; p4=$! kubectl port-forward --namespace develop $MONGO_POD_NAME3 27022:27017 &amp; p5=$! wait -n [ "$?" -gt 1 ] || kill "$p3" "$p4" "$p5" wait </code></pre> <p>And my connection string looks like this:</p> <pre><code>mongodb://LOGIN:PW@localhost:27020,localhost:27021,localhost:27022/animedb?replicaSet=rs0 </code></pre> <p>However, I still can't connect to my mongodb replicaset, it says:</p> <pre><code>connection error: { MongoNetworkError: failed to connect to server [anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local anime-data-develop-mongodb-replicaset-0.anime-data-develop-mongodb-replicaset.develop.svc.cluster.local:27017] </code></pre> <p>But if I use direct connection, I still can connect to each node!<br> What might be a problem here? How can I connect to mongodb for development?</p>
<p><strong>Port Forwarding</strong> will make a <strong>local port</strong> on your machine <strong>redirect</strong> (forward) <strong>traffic to some pod</strong>. In your case, you've asked Kubernetes to forward traffic on 127.0.0.1:27020 to your pod's 27017 port.</p> <p>The issue happen because the Replica Set configuration points to the other nodes using your internal cluster IPs, so you will see something like <code>[ReplicaSetMonitor-TaskExecutor] changing hosts to rs0/&lt;ClusterIP-1&gt;:27017,&lt;ClusterIP-2&gt;:27017,&lt;ClusterIP-3&gt;:27017 from rs/localhost:27020,localhost:27021,localhost:27022</code> on your mongo client session, and your machine can't reach your Cluster's IPs, of course.</p> <p>For development purposes, you'd have to connect to your primary Mongo node only (as in <code>mongodb://localhost:27020/animedb</code>), which will replicate your data into your secondaries. That's safe enough for development/debugging, but <strong>not suitable for production!</strong></p> <p>If you need to set it up for permanent/production access, you should update your replicaSet settings so they find each other using public IPs or hostnames, see <a href="https://docs.mongodb.com/manual/tutorial/change-hostnames-in-a-replica-set/" rel="nofollow noreferrer">https://docs.mongodb.com/manual/tutorial/change-hostnames-in-a-replica-set/</a>.</p>
<p>I'm setting up a service which uses k8s to start a Jupyter pod for each user. I need to provide real-time resource usage of the Jupyter pod when a user queries for it. I tried metrics-server and kubectl top. Their results come from a cache and have about a 1 minute delay. Is there any solution?</p> <pre><code>&gt; kubectl version
</code></pre> <blockquote> <p>Client Version: version.Info{Major:"1", Minor:"14",GitVersion:"v1.14.0",GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z",GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-aliyun.1", GitCommit:"8cb561c", GitTreeState:"", BuildDate:"2019-05-23T11:28:17Z", GoVersion:"go1.10.8", Compiler:"gc",Platform:"linux/amd64"}</p> </blockquote>
<p>You could query the kubelet stats endpoint:</p> <pre><code>curl --insecure https://&lt;node url&gt;:10250/stats/summary
</code></pre> <p>You can also make the query more specific to a pod/container:</p> <pre><code>curl --insecure https://&lt;node url&gt;:10250/stats/{namespace}/{podName}/{uid}/{containerName}
</code></pre> <p>where <code>uid</code> is basically any string...</p> <p>The part of the code for your version is <a href="https://github.com/kubernetes/kubernetes/blob/release-1.14/pkg/kubelet/server/stats/handler.go#L123-L126" rel="nofollow noreferrer">here</a></p> <p>Another tip: if your pod runs on the host network, you could also query <code>localhost</code>; you would need a service account with access to the kubelet API. The query would look like this:</p> <pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl https://localhost:10250/stats/summary --header "Authorization: Bearer $TOKEN" --insecure
</code></pre>
<p>I installed a Rancher 2.x HA cluster following the <a href="https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/" rel="nofollow noreferrer">official document</a>, but I can't install it without a public DNS hostname. Is there any way to avoid this? I tried to use the /etc/hosts file, but it seems there is an issue where the agent's Docker container doesn't get the configuration from the custom DNS.<br> I want to access the load-balanced cluster via IP, not via public DNS.</p>
<p>Indeed, in a standard installation of Kubernetes, access to the API is done over HTTPS, and you need a certificate.</p> <p>You can have a look at this doc: <a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/" rel="nofollow noreferrer">Controlling Access to the Kubernetes API</a>.</p> <p>If your goal is just running a lab, maybe you can use <a href="http://xip.io/" rel="nofollow noreferrer">xip.io</a> if you do not have a DNS server that you control.</p>
<p>If I'm running processes in 2 pods that communicate with each other over tcp (addressing each other through Kubernetes services) and the pods are scheduled to the same node will the communication take place over the network or will Kubernetes know to use the loopback device?</p>
<p>In a kubernetes cluster, a pod can be scheduled on any node in the cluster. Another pod that wants to access it should ideally not need to know where that pod is running, or its pod IP address. Kubernetes provides a basic service discovery mechanism by assigning DNS names to kubernetes services (which are associated with pods). When a pod wants to talk to another pod, it should use the DNS name (e.g. <code>svc1.namespace1.svc.cluster.local</code>).</p>
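<p>As a small illustration (all names made up): a pod anywhere in the cluster can reach the service below at <code>svc1.ns1.svc.cluster.local:8080</code> — or simply <code>svc1:8080</code> from within the same namespace — regardless of which node the backing pod landed on:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: svc1
  namespace: ns1
spec:
  selector:
    app: myapp      # matches the target pod's labels
  ports:
    - port: 8080
      targetPort: 8080
</code></pre>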
<p>I have a config yaml file for a kubernetes deployment that looks like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: &lt;some_app&gt; name: &lt;some_app&gt; namespace: dataengineering spec: replicas: 1 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: labels: app: &lt;some_app&gt; spec: dnsPolicy: ClusterFirst restartPolicy: Always terminationGracePeriodSeconds: 30 containers: - image: 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/&lt;some_app&gt;:latest imagePullPolicy: Always name: &lt;some_app&gt; env: - name: ES_HOST value: "vpc-some-name-dev-wrfkk5v7kidaro67ozjrv4wdeq.us-west-2.es.amazonaws.com" - name: ES_PORT value: "443" - name: DATALOADER_QUEUE valueFrom: configMapKeyRef: name: &lt;some_name&gt; key: DATALOADER_QUEUE - name: AWS_DEFAULT_REGION value: "us-west-2" - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: &lt;some_name&gt; key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: &lt;some_name&gt; key: AWS_SECRET_ACCESS_KEY ... </code></pre> <p>Currently, this file is in <code>dev/deployment.yaml</code> but I also want a <code>prod/deployment.yaml</code>. Instead of copying this whole file over, is there a better way to DRY up this file so it can be used for both dev and prod clusters? The parts of this file that differ are some of the environment variables (I used a different <code>DATALOADER_QUEUE</code> variable for prod and dev, and the AWS keys. What can be done?</p> <p>I looked into some options like a configmap. How does one do this? What's a mounted volume? I'm reading this: <code>https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume</code> but I'm not sure what it is.... what is a volume? How do I access the data stored in this "volume"?</p> <p>Can the image be switched from prod to dev? I know that seems odd...</p>
<p>Something like this would help with the env vars:</p> <pre><code> envFrom: - configMapRef: name: myapp-config - secretRef: name: myapp-secrets </code></pre> <p>You can then use different namespaces for dev vs. prod so the references don't have to vary. For handling labels, look at Kustomize overlays and setting labels at the overlay level.</p>
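<p>The referenced objects could look something like this (the values are placeholders — in practice you would create one pair per environment/namespace, e.g. with a different <code>DATALOADER_QUEUE</code> in dev and prod):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  DATALOADER_QUEUE: dev-queue
  AWS_DEFAULT_REGION: us-west-2
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: &lt;key-id&gt;
  AWS_SECRET_ACCESS_KEY: &lt;secret-key&gt;
</code></pre>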
<p>How can I duplicate a namespace, with all its content, under a new name in the same kubernetes cluster?</p> <p>e.g. duplicate default to my-namespace, which will have the same content.</p> <p>I'm only interested in services and deployments, so when I try the approach with <code>kubectl get all</code> and with <code>api-resources</code>, I get an error about the services' IPs, like:</p> <pre><code>Error from server (Invalid): Service "my-service" is invalid: spec.clusterIP: Invalid value: "10.108.14.29": provided IP is already allocated
</code></pre>
<p>There is no specific way to do this. You could probably get close with something like <code>kubectl get all -n sourcens -o yaml | sed -e 's/namespace: sourcens/namespace: destns/' | kubectl apply -f -</code> but <code>get all</code> is always a bit wonky and this could easily miss weird edge cases.</p>
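<p>To see what that <code>sed</code> filter does — and to also strip the allocated <code>clusterIP</code> that triggers the "provided IP is already allocated" error from the question — here is a self-contained sketch run against an inline manifest instead of a live cluster (the extra <code>/clusterIP:/d</code> expression is an addition, not part of the one-liner above):</p>

```shell
# A sample of what `kubectl get all -n sourcens -o yaml` might emit (abridged).
input='apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: sourcens
spec:
  clusterIP: 10.108.14.29
  ports:
    - port: 80'

# Rewrite the namespace and drop the allocated clusterIP so the API server
# can assign a fresh one in the destination namespace.
printf '%s\n' "$input" \
  | sed -e 's/namespace: sourcens/namespace: destns/' -e '/clusterIP:/d'
```

<p>In real use you would pipe the result into <code>kubectl apply -f -</code>; note that a naive line-based filter like this can still trip over status fields, resourceVersions, and other server-populated data.</p>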
<p>I have created a role, below is definition the role:</p> <pre><code>--- kind: Role apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: ${SERVICE_ACCOUNT_NAME}-full-access-role namespace: ${NAMESPACE} rules: - apiGroups: ["", "extensions", "apps"] resources: ["*"] verbs: ["*"] - apiGroups: ["batch"] resources: - jobs - cronjobs verbs: ["*"] </code></pre> <p>Using this role a user can create, list and delete any resource in a namespace but problem is it can also list all namespaces in cluster.</p> <p>I want to enable users to create namespaces and can perform any action in it as well but also they can't list any resource even other namespaces that are't created by the user.</p>
<p>I think you cannot deny users the right to list all namespaces while giving them the access you need at the same time. At least not in bare-metal Kubernetes. Read the following <a href="https://github.com/kubernetes/community/issues/1486" rel="nofollow noreferrer">GH issue</a>; it's about a problem similar to the one you encountered.</p>
<p>I have a cluster Kubernetes cluster on a local machine and one raspberry pi. In order to test the cluster I created a nginx deployment and a service that I want to access as a NodePort. But for God know's why, I can't access said service. Bellow are my deployment and service files.</p> <p><code>kubectl get nodes</code></p> <pre><code>NAME STATUS ROLES AGE VERSION anima Ready master 7d5h v1.16.1 bahamut Ready &lt;none&gt; 7d4h v1.16.1 </code></pre> <p>My service and deployment files:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:latest resources: limits: memory: "128Mi" cpu: "500m" ports: - containerPort: 80 nodeSelector: kubernetes.io/hostname: bahamut </code></pre> <pre><code> apiVersion: v1 kind: Service metadata: name: nginx spec: selector: app: nginx ports: - port: 3030 targetPort: 80 type: NodePort </code></pre> <p>After <code>kubectl get pods -o wide</code>:</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-67c8c4b564-6x7g5 1/1 Running 0 6m21s 10.244.1.13 bahamut &lt;none&gt; &lt;none&gt; </code></pre> <p>My Deployments, <code>kubectl get deployments -o wide</code>:</p> <pre><code>NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR nginx 1/1 1 1 7m55s nginx nginx:latest app=nginx </code></pre> <p>My Services, <code>kubectl get svc -o wide</code>:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 7d5h &lt;none&gt; nginx NodePort 10.102.203.77 &lt;none&gt; 3030:30508/TCP 8m54s app=nginx </code></pre> <p>And finally, <code>kubectl get endpoints -o wide</code>:</p> <pre><code>NAME ENDPOINTS AGE kubernetes 192.168.25.4:6443 7d5h nginx 10.244.1.13:80 9m41s </code></pre> <p>My Kubernetes master local IP is <code>192.168.25.4</code> and my raspberry ip is <code>192.168.25.6</code>. 
After deploying the service I tried:</p> <pre><code>curl 192.168.25.6:3030 curl: (7) Failed to connect to 192.168.25.6 port 3030: Connection refused curl 192.168.25.6:80 curl: (7) Failed to connect to 192.168.25.6 port 80: Connection refused curl 192.168.25.6:30508 (hangs) </code></pre> <p>Also tried using the master node IP, the Service IP and the listed Cluster IP, but nothing works.</p> <p><strong>EDIT</strong></p> <p>It works if I use <code>hostNetwork=true</code> on the deployment and access it using the node local IP on the container port, but obviously that's not what I want. I want to understand why Kubernetes isn't let me access the container through the service.</p>
<p>NodePort exposes the Service on each Node’s IP at a static port (the NodePort); in your case that is <code>30508</code>, not the Service port <code>3030</code>. Please see more details <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">here</a>. </p> <p>And <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">this</a> shares more details on bare-metal clusters.</p>
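<p>For reference, with the addresses from the question the port mapping works out like this:</p> <pre><code># from outside the cluster: node IP + NodePort
curl 192.168.25.6:30508

# from inside the cluster: ClusterIP + Service port
curl 10.102.203.77:3030
</code></pre> <p>A hang on <code>curl 192.168.25.6:30508</code> (rather than a refusal) often points to a firewall or CNI problem on the node rather than a wrong address.</p>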
<p>I have been trying to get a basic Kubernetes cluster running according to the following tutorial <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/</a></p> <p>I started with an up-to-date ubuntu 16.04 system and installed docker.</p> <p><code>wget -qO- https://get.docker.com/ | sed 's/docker-ce/docker-ce=18.06.3~ce~3-0~ubuntu/' | sh</code></p> <p>After that I installed the kubelet / Kubeadm and kubectl modules</p> <pre><code>apt-get update &amp;&amp; apt-get install -y apt-transport-https curl curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - cat &lt;&lt;EOF &gt;/etc/apt/sources.list.d/kubernetes.list deb https://apt.kubernetes.io/ kubernetes-xenial main EOF apt-get update apt-get install -y kubelet kubeadm kubectl apt-mark hold kubelet kubeadm kubectl </code></pre> <p>Made sure that swap etc was off <code>sudo swapoff -a</code></p> <p>Performed the init using <code>sudo kubeadm init</code></p> <pre><code>[init] Using Kubernetes version: v1.13.3 ... To start using your cluster ... mkdir ... You can now join any ... ... </code></pre> <p>I make the .kube folder and config</p> <pre><code>mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config </code></pre> <p><code>kubectl cluster-info</code> then shows</p> <pre><code>To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. The connection to the server 10.117.xxx.xxx:6443 was refused - did you specify the right host or port? 
</code></pre> <p>After giving it a few attempt I once received:</p> <pre><code>sudo kubectl cluster-info Kubernetes master is running at https://10.117.xxx.xxx:6443 KubeDNS is running at https://10.117.xxx.xxx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code></pre> <p>But a second later its back to the permission denied</p> <pre><code>sudo kubectl cluster-info To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. The connection to the server 10.117.xxx.xxx:6443 was refused - did you specify the right host or port? </code></pre> <p>I tried with and without sudo... and <code>sudo kubectl get nodes</code> also refused to work.</p> <pre><code>The connection to the server 10.117.xxx.xxx:6443 was refused - did you specify the right host or port? </code></pre> <p>What am I missing that it won't connect?</p> <p><code>ping 10.117.xxx.xxx</code> works fine and even <code>ssh</code> to this address works and is the same server.</p> <p><strong>Edit</strong></p> <p><code>sudo systemctl restart kubelet.service</code> shows that the cluster comes online but for some reason goes offline within minutes.</p> <pre><code>kubectl cluster-info Kubernetes master is running at https://10.117.0.47:6443 KubeDNS is running at https://10.117.0.47:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy ... kubectl cluster-info To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. The connection to the server 10.117.0.47:6443 was refused - did you specify the right host or port? 
</code></pre> <p><strong>edit2</strong></p> <p>After doing a full reset and using the following init...</p> <pre><code>sudo kubeadm init --pod-network-cidr=10.244.0.0/16 </code></pre> <p>Followed by</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml </code></pre> <p>Allowed me to install the pod network add-on but was only short-lived.</p> <pre><code>clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds-amd64 created daemonset.extensions/kube-flannel-ds-arm64 created daemonset.extensions/kube-flannel-ds-arm created daemonset.extensions/kube-flannel-ds-ppc64le created daemonset.extensions/kube-flannel-ds-s390x created </code></pre> <pre><code>kubectl cluster-info Kubernetes master is running at https://10.117.xxx.xxx:6443 KubeDNS is running at https://10.117.xxx.xxx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. ➜ ~ kubectl cluster-info To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. The connection to the server 10.117.xxx.xxx:6443 was refused - did you specify the right host or port? </code></pre> <p><strong>edit3</strong></p> <p>Removing all docker images, containers etc... and then perform a restart using <code>sudo systemctl restart kubelet.service</code> seems to do the trick for a few minutes but then all docker containers are killed and removed without any signal. How can I look into the logging of these containers to perhaps find out why they are killed?</p> <p><a href="https://pastebin.com/wWSWXM31" rel="nofollow noreferrer">https://pastebin.com/wWSWXM31</a> log file</p>
<p>After a reboot I got the same error; running <code>swapoff -a</code> again worked for me. The kubelet will not run while swap is enabled, and swap comes back on after every reboot unless you disable it permanently.</p>
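<p><code>swapoff -a</code> does not survive a reboot; to disable swap permanently you would typically also comment out the swap entry in <code>/etc/fstab</code>, e.g.:</p> <pre><code>sudo swapoff -a
# comment out any swap line(s) so swap stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
</code></pre>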
<p>I downloaded a pod's log and it is bigger than 1 GB. How can I shrink/truncate the pod's log so I can download it faster?</p> <p>PS: I need to erase the old part of the logs so I can free the disk space.</p>
<p>Use <code>kubectl logs --tail=1000</code> to show only the most recent 1000 lines. Adjust the number as needed.</p> <blockquote> <p>--tail=-1: Lines of recent log file to display. Defaults to -1 with no selector, showing all log lines otherwise 10, if a selector is provided.</p> </blockquote>
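<p>For example, to download only the recent part of the log, or only a recent time window:</p> <pre><code>kubectl logs mypod --tail=1000 &gt; mypod-tail.log
kubectl logs mypod --since=1h &gt; mypod-last-hour.log
</code></pre>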
<p>The setup is on GCP GKE. I deploy a Postgres database with a persistent volume (retain reclaim policy), and:</p> <pre><code> strategy: type: Recreate </code></pre> <p>Will the data be retained or re-initialized if the database pod gets deleted?</p>
<p>The update strategy has nothing to do with the on-delete behavior; it's used when a change to the pod template triggers an update. Basically it decides whether Kubernetes nukes the old ReplicaSet all at once or gradually scales things up/down. You almost always want RollingUpdate unless you are working with software that requires all nodes to be on exactly the same version, and you understand this will cause downtime on any change.</p> <p>As for the Retain reclaim policy, this is mostly a safety net for admins. Assuming you used a PVC, deleting the pod will have no effect on the data, since the volume is tied to the claim rather than the pod itself (obviously things will go down while the pod restarts, but that's unrelated). If you delete the PVC, a Retain volume will be kept on the backend, but if you wanted to do anything with it you would have to go in and do it manually. It's like an "oops" protection: it requires two steps to actually delete the data.</p>
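<p>For completeness, the reclaim policy is set on the PersistentVolume itself, or on the StorageClass that provisions it; a sketch for GKE (the class name is illustrative):</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
</code></pre>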
<p>In namespace A, I have a service <code>nginx</code> running. In namespace B, I can use <code>nginx.A</code> or <code>nginx.A.svc.cluster.local</code> to get access to the <code>nginx</code> in namespace A. </p> <p>So what's the difference between these two? Which one is more recommended? Why?</p>
<p><strong>Both forms are considered to be correct</strong> (compare with <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces" rel="nofollow noreferrer">this</a> article) and in majority of cases work fine however I could find a few issues on <strong>github</strong> when people encountered some problems related only to short names resolution e.g.:</p> <p><a href="https://github.com/kubernetes/dns/issues/109" rel="nofollow noreferrer">https://github.com/kubernetes/dns/issues/109</a></p> <p><a href="https://github.com/kubernetes/kubernetes/issues/10014" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/10014</a></p> <p>As you can read in official <strong>Kubernetes</strong> documentation (<a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#namespaces-and-dns" rel="nofollow noreferrer">ref1</a>, <a href="https://kubernetes.io/docs/tasks/administer-cluster/namespaces/#understanding-namespaces-and-dns" rel="nofollow noreferrer">ref2</a>), it recommends use of the long form in case of reaching services across namespaces:</p> <blockquote> <p>When you create a Service, it creates a corresponding DNS entry. This entry is of the form <strong><code>&lt;service-name&gt;.&lt;namespace-name&gt;.svc.cluster.local</code></strong>, which means that if a container just uses <strong><code>&lt;service-name&gt;</code></strong>, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. 
If you want to reach across namespaces, you need to use the <strong>fully qualified domain name (FQDN)</strong>.</p> </blockquote> <p>In my opinion it's much better to stick to <a href="https://en.wikipedia.org/wiki/Fully_qualified_domain_name" rel="nofollow noreferrer">FQDN (fully qualified domain name)</a> standard and often being explicit is considered to be a better practice than being implicit.</p>
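<p>As a quick sanity check, both forms can be exercised from a pod in namespace B (assuming the <code>nginx</code> Service in namespace A listens on port 80):</p> <pre><code>kubectl exec -ti some-pod -n B -- curl nginx.A
kubectl exec -ti some-pod -n B -- curl nginx.A.svc.cluster.local
</code></pre>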
<p>I have some internal services (Logging, Monitoring, etc) exposed via nginx-ingress and protected via oauth2-proxy and some identity manager (Okta) behind. We use 2fa for additional security for our users. </p> <p>This works great for user accounts. It does not work for other systems like external monitoring as we can not make a request with a token or basic auth credentials.</p> <p>Is there any known solution to enable multiple authentication types in an ingress resource? </p> <p>Everything I found so far is specific for one authentication process and trying to add basic auth as well did not work. </p> <p>Current ingress</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: certmanager.k8s.io/cluster-issuer: cert-manager-extra-issuer kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/auth-signin: https://sso-proxy/oauth2/start?rd=https://$host$request_uri nginx.ingress.kubernetes.io/auth-url: https://sso-proxy/oauth2/auth </code></pre>
<p>This is not something you can do: multiple authentication types in a single Ingress resource are not supported. </p> <p>The better way to deal with it is to create separate Ingresses for the different authentication types. </p> <p>I hope it helps. </p>
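<p>A sketch of that approach: keep the oauth2-proxy annotations from the question on the user-facing Ingress, and add a second Ingress on a separate host for machine clients using the controller's basic-auth annotations (the host, secret and service names below are illustrative):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-machine
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: monitoring-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: monitoring-api.example.com
    http:
      paths:
      - backend:
          serviceName: monitoring
          servicePort: 80
</code></pre>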
<p>I have created simple nginx deplopyment in Ubuntu EC2 instance and exposed to port through service in kubernetes cluster, but I am unable to ping the pods even in local envirnoment. My Pods are running fine and service is also created successfully. I am sharing some outputs of commands below</p> <p>kubectl get nodes</p> <pre><code>NAME STATUS ROLES AGE VERSION ip-172-31-39-226 Ready &lt;none&gt; 2d19h v1.16.1 master-node Ready master 2d20h v1.16.1 </code></pre> <p>kubectl get po -o wide</p> <pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-deployment-54f57cf6bf-dqt5v 1/1 Running 0 101m 192.168.39.17 ip-172-31-39-226 &lt;none&gt; &lt;none&gt; nginx-deployment-54f57cf6bf-gh4fz 1/1 Running 0 101m 192.168.39.16 ip-172-31-39-226 &lt;none&gt; &lt;none&gt; sample-nginx-857ffdb4f4-2rcvt 1/1 Running 0 20m 192.168.39.18 ip-172-31-39-226 &lt;none&gt; &lt;none&gt; sample-nginx-857ffdb4f4-tjh82 1/1 Running 0 20m 192.168.39.19 ip-172-31-39-226 &lt;none&gt; &lt;none&gt; </code></pre> <p>kubectl get svc</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 2d20h nginx-deployment NodePort 10.101.133.21 &lt;none&gt; 80:31165/TCP 50m sample-nginx LoadBalancer 10.100.77.31 &lt;pending&gt; 80:31854/TCP 19m </code></pre> <p>kubectl describe deployment nginx-deployment</p> <pre><code>Name: nginx-deployment Namespace: default CreationTimestamp: Mon, 14 Oct 2019 06:28:13 +0000 Labels: &lt;none&gt; Annotations: deployment.kubernetes.io/revision: 1 kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replica... 
Selector: app=nginx Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=nginx Containers: nginx: Image: nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: &lt;none&gt; Mounts: &lt;none&gt; Volumes: &lt;none&gt; Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable OldReplicaSets: &lt;none&gt; NewReplicaSet: nginx-deployment-54f57cf6bf (2/2 replicas created) Events: &lt;none&gt; </code></pre> <p>Now I am unable to ping 192.168.39.17/16/18/19 from master, also not able to access curl 172.31.39.226:31165/31854 from master as well. Any help will be highly appreciated..</p>
<p>From the information you have provided, and from the discussion we had: the worker node has the Nginx pod running, and you have attached a NodePort Service and a LoadBalancer Service to it.</p> <p>The only thing missing here is reachability from the server you are trying to access this from. </p> <p>So, I tried to reach this URL <code>52.201.242.84:31165</code>. I think all you need to do is whitelist this port (or the IP) for public access. This can be done via the security group for the worker node EC2 instance.</p> <p>Now, the URL above is constructed from the public IP of the worker node plus(+) the <strong>NodePort</strong> of the attached svc. Thus here is a simple formula you can use to get the exact address of the running pod:</p> <pre><code>Pod Access URL = Public IP of Worker Node + The NodePort </code></pre>
<p>I have a kubernetes cluster with a few different pod types. </p> <ul> <li>An Nginx frontend, </li> <li>A flask backend on gunicorn, </li> <li>Redis, and </li> <li>A Redis queue (RQ).</li> </ul> <p>Based on what I can tell, the default liveness probing for the frontend, and flask backend are sufficient (200 OK returning, as I have created a '/' backend that returns 200 and all my tasks should run quickly). Crash detection works well. </p> <p>Additionally, I have a setup aliveness monitor that pings Redis with the Redis-cli. That also is working well.</p> <p>However, I am not sure if the default configuration for the RQ is good enough. The pod has restarted itself a few times and is generally well behaved, but since I don't know the mechanism that is used, I'm worried.</p> <p>My questions are: what is the liveness probe used by something like an RQ worker and what might I do to make sure it's robust?</p> <p>Should I be using something like Supervisor or systemd? Any recommendations on which one?</p>
<p>It would appear that RQ sets a heartbeat key in Redis: <a href="https://github.com/rq/rq/blob/e43bce4467c3e1800d75d9cedf75ab6e7e01fe8c/rq/worker.py#L545-L561" rel="nofollow noreferrer">https://github.com/rq/rq/blob/e43bce4467c3e1800d75d9cedf75ab6e7e01fe8c/rq/worker.py#L545-L561</a></p> <p>You could check if this exists somehow. This would probably require an exec probe though, and at this time I wouldn't recommend that as exec probes have several open bugs that cause zombie processes leading to escalating resource usage over time.</p>
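<p>If you do accept that trade-off, a sketch of such an exec probe might look like the following; the heartbeat key format is an assumption, so verify the exact key your RQ version writes before relying on it:</p> <pre><code>livenessProbe:
  exec:
    command:
    - sh
    - -c
    # assumed key format: rq:worker:&lt;worker-name&gt;
    - redis-cli -h redis exists "rq:worker:$HOSTNAME" | grep -q 1
  periodSeconds: 30
</code></pre>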
<p>I'm using <code>mongodb-exporter</code> for store/query the metrics via <code>prometheus</code>. I have set up a custom metric server and storing values for that . </p> <p>That is the evidence of <code>prometheus-exporter</code> and <code>custom-metric-server</code> works compatible .</p> <p><strong>Query:</strong></p> <pre><code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/mongodb_mongod_wiredtiger_cache_bytes" </code></pre> <p><strong>Result:</strong></p> <pre><code>{"kind":"MetricValueList","apiVersion":"custom.metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/mongodb_mongod_wiredtiger_cache_bytes"},"items":[{"describedObject":{"kind":"Pod","namespace":"monitoring","name":"mongo-exporter-2-prometheus-mongodb-exporter-68f95fd65d-dvptr","apiVersion":"/v1"},"metricName":"mongodb_mongod_wiredtiger_cache_bytes","timestamp":"TTTTT","value":"0"}]} </code></pre> <p>In my case when I create a hpa for this custom metrics from mongo exporter, hpa return this error to me :</p> <pre><code>failed to get mongodb_mongod_wiredtiger_cache_bytes utilization: unable to get metrics for resource mongodb_mongod_wiredtiger_cache_bytes: no metrics returned from resource metrics API </code></pre> <p>What is the main issue on my case ? I have checked all configs and flow is looking fine, but where is the my mistake .</p> <p>Help </p> <p>Thanks :) </p>
<p>In the comments you wrote that you have enabled <code>external.metrics</code>, however in the original question you had issues with <code>custom.metrics</code>.</p> <p>In short:</p> <ul> <li><code>metrics</code> supports only basic metrics like CPU or Memory.</li> <li><code>custom.metrics</code> allows you to extend basic metrics to all Kubernetes objects (http_requests, number of pods, etc.).</li> <li><code>external.metrics</code> allows you to gather metrics which are not Kubernetes objects:</li> </ul> <blockquote> <p>External metrics allow you to autoscale your cluster based on any metric available in your monitoring system. Just provide a metric block with a name and selector, as above, and use the External metric type instead of Object</p> </blockquote> <p>For a more detailed description, please check <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects" rel="nofollow noreferrer">this doc</a>.</p> <p><strong>Minikube</strong></p> <p>To verify if <code>custom.metrics</code> are enabled you need to execute the command below and check if you can see any <code>metrics-server...</code> pod.</p> <pre><code>$ kubectl get pods -n kube-system
...
metrics-server-587f876775-9qrtc   1/1   Running   4   5d1h
</code></pre> <p>A second way is to check if <code>minikube</code> has enabled <code>metrics-server</code> by</p> <pre><code>$ minikube addons list
...
- metrics-server: enabled
</code></pre> <p>If it is disabled just execute</p> <pre><code>$ sudo minikube addons enable metrics-server
✅  metrics-server was successfully enabled
</code></pre> <p><strong>GKE</strong></p> <p>Currently on GKE, <code>heapster</code> and <code>metrics-server</code> are turned on by default, but <code>custom.metrics</code> are not supported by default.
You have to install <code>prometheus adapter</code> or <code>stackdriver</code>.</p> <p><strong>Kubeadm</strong></p> <p>Kubeadm does not include <code>heapster</code> or a <code>metrics server</code> at the beginning. For easy installation, you can use <a href="https://github.com/luxas/kubeadm-workshop/blob/master/demos/monitoring/metrics-server.yaml" rel="nofollow noreferrer">this YAML</a>.</p> <p>Later you have to install the <code>prometheus adapter</code>.</p> <p><strong>Apply custom.metrics</strong></p> <p>It's the same for <code>Minikube</code>, <code>Kubeadm</code> and <code>GKE</code>.</p> <p>The easiest way to apply <code>custom.metrics</code> is to install the <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">prometheus adapter</a> via <code>Helm</code>. </p> <p>After the helm installation you will be able to see the note:</p> <pre><code>NOTES:
my-release-prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):

  kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
</code></pre> <p>As additional information, you can use <code>jq</code> to get more user-friendly output.</p> <p><code>kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .</code></p>
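<p>Once the adapter exposes the metric, an HPA using the <code>External</code> type would look roughly like this (the target names and value are illustrative):</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: mongo-hpa
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: External
    external:
      metricName: mongodb_mongod_wiredtiger_cache_bytes
      targetValue: 100000000
</code></pre>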
<p>How can I duplicate a namespace with all content with a new name in the same kubernetes cluster?</p> <p>e.g. Duplicate default to my-namespace which will have the same content.</p> <p>I'm interested just by services and deployments, so when I try with method with kubectl get all and with api-resources i have error with services IP like :</p> <pre><code>Error from server (Invalid): Service "my-service" is invalid: spec.clusterIP: Invalid value: "10.108.14.29": provided IP is already allocated </code></pre>
<p>As @coderanger mentioned in his answer, there is no straightforward way to make a copy of the original k8s resources in a separate namespace. </p> <p>As was proposed, when you invoke the <code>kubectl get all</code> command, k8s looks through the resource catalog bound to the <strong><em>all</em></strong> category. Therefore, if you didn't add this category for each custom <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#create-a-customresourcedefinition" rel="nofollow noreferrer"><em>CRD</em></a> object, throughout the specific <a href="https://kubernetes.io/docs/reference/using-api/#api-groups" rel="nofollow noreferrer">API group</a>, you might miss some relevant k8s resources in the command output. </p> <p>Furthermore, if you want to export all k8s resources from a particular namespace, besides user <a href="https://kubernetes.io/docs/concepts/#kubernetes-objects" rel="nofollow noreferrer"><em>workloads</em></a>, I would recommend exploring <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#resource-types" rel="nofollow noreferrer">API resources</a>, filtering out only namespace-scoped objects, and then applying some bash processing to generate a manifest file per resource group:</p> <pre><code>kubectl api-resources --namespaced=true | awk '{print $1}' | sed '1d' | while read -r line; do kubectl get "$line" -n namespace -o yaml &gt; "$line.yaml"; done </code></pre> <p>You can also consider using <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> (as @David Maze pointed out in the comment), in order to manage user workloads through Helm <a href="https://helm.sh/docs/developing_charts/#charts" rel="nofollow noreferrer"><em>Charts</em></a>, as a more flexible and structured way to describe k8s native resources.</p>
<p>We deploy a cluster with kubeadm (1 master 4 worker node). </p> <pre><code>$ kubectl describe node worker1 Name: worker1 Roles: &lt;none&gt; Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/hostname=worker1 role=slave1 Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 24 Sep 2019 14:15:42 +0330 Taints: node.kubernetes.io/disk-pressure:NoSchedule Unschedulable: false Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- NetworkUnavailable False Tue, 24 Sep 2019 14:16:19 +0330 Tue, 24 Sep 2019 14:16:19 +0330 WeaveIsUp Weave pod has set this OutOfDisk False Mon, 07 Oct 2019 15:35:53 +0330 Sun, 06 Oct 2019 02:21:55 +0330 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Mon, 07 Oct 2019 15:35:53 +0330 Sun, 06 Oct 2019 02:21:55 +0330 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure True Mon, 07 Oct 2019 15:35:53 +0330 Mon, 07 Oct 2019 13:58:23 +0330 KubeletHasDiskPressure kubelet has disk pressure PIDPressure False Mon, 07 Oct 2019 15:35:53 +0330 Tue, 24 Sep 2019 14:15:42 +0330 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 07 Oct 2019 15:35:53 +0330 Sun, 06 Oct 2019 02:21:55 +0330 KubeletReady kubelet is posting ready status. 
AppArmor enabled Addresses: InternalIP: 192.168.88.206 Hostname: worker1 Capacity: attachable-volumes-azure-disk: 16 cpu: 4 ephemeral-storage: 19525500Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16432464Ki pods: 110 Allocatable: attachable-volumes-azure-disk: 16 cpu: 4 ephemeral-storage: 17994700771 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16330064Ki pods: 110 System Info: Machine ID: 2fc8f9eejgh5274kg1ab3f5b6570a8 System UUID: 52454D5843-391B-5454-BC35-E0EC5454D19A Boot ID: 5454514e-4e5f-4e46-af9b-2809f394e06f Kernel Version: 4.4.0-116-generic OS Image: Ubuntu 16.04.4 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://17.3.2 Kubelet Version: v1.12.1 Kube-Proxy Version: v1.12.1 Non-terminated Pods: (0 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 0 (0%) 0 (0%) memory 0 (0%) 0 (0%) attachable-volumes-azure-disk 0 0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 45m kube-proxy, worker1 Starting kube-proxy. Normal Starting 23m kube-proxy, worker1 Starting kube-proxy. Warning EvictionThresholdMet 2m29s (x502 over 5d5h) kubelet, worker1 Attempting to reclaim ephemeral-storage Normal Starting 75s kube-proxy, worker1 Starting kube-proxy. </code></pre> <p>As it can be seen from description of the worker1, there is a disk pressure(<strong>ephemeral-storage: 19525500Ki</strong>). We mount a hard disk to <strong>/dev/sdb1</strong>. 
</p> <p>In worker1:</p> <pre><code>$ df -h` Filesystem Size Used Avail Use% Mounted on udev 7.9G 0 7.9G 0% /dev tmpfs 1.6G 163M 1.5G 11% /run /dev/sda1 19G 16G 2.4G 87% / tmpfs 7.9G 5.1M 7.9G 1% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup /dev/sdb1 99G 61M 94G 1% /data tmpfs 1.6G 0 1.6G 0% /run/user/1003 </code></pre> <p>But the problem still exists. How can I tell the kubelet to add this mount point to the ephemeral-storage of the worker1? Actually, how can we increase the ephemeral storage of a node in a Kubernetes cluster? </p>
<p>After a lot of searching, I decided to extend the size of <strong>/dev/sda1</strong>. It is not pleasant to do that, but it was the only way I could find. Now the ephemeral storage of the worker is increased. </p> <pre><code>Filesystem      Size  Used Avail Use% Mounted on
udev            7.9G     0  7.9G   0% /dev
tmpfs           1.6G  151M  1.5G  10% /run
/dev/sda1       118G   24G   89G  22% /
</code></pre> <p>$ <code>kubectl describe node worker1</code></p> <pre><code>attachable-volumes-azure-disk:  16
cpu:                            4
ephemeral-storage:              123729380Ki
hugepages-1Gi:                  0
hugepages-2Mi:                  0
memory:                         16432464Ki
pods:                           110
Allocatable:
attachable-volumes-azure-disk:  16
cpu:                            4
ephemeral-storage:              114028996420
hugepages-1Gi:                  0
hugepages-2Mi:                  0
memory:                         16330064Ki
pods:                           110
</code></pre>
<p>I have an application that relies on 4 docker data containers for the data it needs to run some tests. At the moment I have their docker-compose file and their images but I don't have access to the data containers Dockerfile or actual data.</p> <p>How this application works in docker-compose is as below: The application is called <code>rtsp_server</code> and it loads different volumes from the data containers such as <code>st-andries4</code>(I just wrote one of them).</p> <p>This is their docker-compose:</p> <pre><code># The application rtsp_server: image: rtsp_server_image volumes_from: - rtsp_data_st_andries4:ro ports: - "8554:8554" command: python ./rtsp-server.py /rtsp_data/st-andries-4 st_andries_4 # Data container rtsp_data_st_andries4: image: st_andries_4:1 volumes: - /rtsp_data/st-andries-4 command: /bin/true mem_limit: 10000000000 </code></pre> <p>I already brought the application, <code>rtsp_server</code>, to Kubernetes and it works fine but I need to also bring the data containers to k8s but I don't know how to do it.</p>
<p>I'm not exactly sure if you want to create a Persistent Volume on Kubernetes or move data from existing Docker data containers.</p> <p>Here is the Kubernetes documentation on how to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">Configure a Pod to Use a PersistentVolume for Storage</a>.</p> <p>First you should create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer"><code>PersistentVolume</code></a>:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
</code></pre> <p>Then a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim" rel="nofollow noreferrer"><code>PersistentVolumeClaim</code></a>, because this is what a <code>pod</code> uses to request physical storage.</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
</code></pre> <p>And finally you can add a <code>volumeMounts</code> inside your pod that might look like the following:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
</code></pre> <p><strong>Update</strong></p> <p>To share a PVC between many pods you would have to set up:</p> <pre><code>spec:
  accessModes:
    - ReadWriteMany
</code></pre> <p>I would also recommend checking this blog <a
href="https://medium.com/asl19-developers/create-readwritemany-persistentvolumeclaims-on-your-kubernetes-cluster-3a8db51f98e3" rel="nofollow noreferrer">Create ReadWriteMany PersistentVolumeClaims on your Kubernetes Cluster</a> and choosing which solution fits your needs.</p> <p>You might be also interested in project <a href="https://rook.io/docs/rook/v1.1/" rel="nofollow noreferrer">Rook</a> which is Open-Source, Cloud-Native Storage for Kubernetes.</p>
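<p>Tying this back to the compose file in the question: once the data has been copied into the volume, the <code>rtsp_server</code> pod can mount the claim at the path it expects, replacing <code>volumes_from</code>. A minimal sketch, assuming the <code>task-pv-claim</code> claim from the example above and the image, command and path from the question:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rtsp-server
spec:
  volumes:
    # reuse the claim created above
    - name: rtsp-data
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: rtsp-server
      image: rtsp_server_image
      ports:
        - containerPort: 8554
      command: ["python", "./rtsp-server.py", "/rtsp_data/st-andries-4", "st_andries_4"]
      volumeMounts:
        # replaces `volumes_from` in the compose file
        - name: rtsp-data
          mountPath: /rtsp_data/st-andries-4
```

<p>To seed the volume with the data from the old data container, one option is <code>docker cp</code> out of the container onto the host, followed by <code>kubectl cp</code> into a pod that mounts the claim.</p>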
<p>How can Apache Airflow's <code>KubernetesPodOperator</code> pull docker images from a private repository? </p> <p>The <code>KubernetesPodOperator</code> has an <code>image_pull_secrets</code> which you can pass a <code>Secrets</code> object to authenticate with the private repository. But the secrets object can only represent an environment variable, or a volume - neither of which fit my understanding of how Kubernetes uses secrets to authenticate with private repos. </p> <p>Using <code>kubectl</code> you can create the required secret with something like </p> <pre><code>$ kubectl create secret docker-registry $SECRET_NAME \ --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \ --docker-username=AWS \ --docker-password="${TOKEN}" \ --docker-email="${EMAIL}" </code></pre> <p>But how can you create the authentication secret in Airflow? </p>
<p>There is a <code>secret</code> object with the <code>docker-registry</code> type, according to the kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials" rel="nofollow noreferrer">documentation</a>, which can be used to authenticate to a private repository.</p> <p>As you mentioned in your question, you can use <code>kubectl</code> to <strong>create</strong> a secret of <code>docker-registry</code> <strong>type</strong> that you can then try to pass with <code>image_pull_secrets</code>.</p> <p>However, depending on the platform you are using, this might have <strong>limited</strong> or <strong>no use at all</strong> according to the <a href="https://kubernetes.io/docs/concepts/containers/images/#configuring-nodes-to-authenticate-to-a-private-registry" rel="nofollow noreferrer">kubernetes documentation</a>:</p> <blockquote> <h3>Configuring Nodes to Authenticate to a Private Registry</h3> <p><strong>Note:</strong> If you are running on Google Kubernetes Engine, there will already be a <code>.dockercfg</code> on each node with credentials for Google Container Registry. You cannot use this approach.</p> <p><strong>Note:</strong> If you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.</p> <p><strong>Note:</strong> This approach is suitable if you can control node configuration. It will not work reliably on GCE, and any other cloud provider that does automatic node replacement.</p> <p><strong>Note:</strong> Kubernetes as of now only supports the <code>auths</code> and <code>HttpHeaders</code> section of docker config. This means credential helpers (<code>credHelpers</code> or <code>credsStore</code>) are not supported.</p> </blockquote> <p>Making this work on the mentioned platforms is possible, but it would require automated scripts and third-party tools.</p> <p>In the Amazon ECR case, for example, the <a href="https://github.com/awslabs/amazon-ecr-credential-helper" rel="nofollow noreferrer">Amazon ECR Docker Credential Helper</a> would be needed to periodically pull AWS credentials into the docker registry configuration, and then another script would update the kubernetes docker-registry secrets.</p> <p>As for Airflow itself, I don't think it has functionality to create its own docker-registry secrets. You can request functionality like that in the <a href="https://issues.apache.org/jira/projects/AIRFLOW/issues/" rel="nofollow noreferrer">Apache Airflow JIRA</a>.</p> <p>P.S.</p> <p>If you still have issues with your K8s cluster you might want to create a new question on Stack Overflow addressing them.</p>
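<p>To make the "automated scripts" part more concrete, here is a rough sketch of a refresh script you could run from cron. The account ID, region and secret name are placeholders, and the token line is stubbed so the snippet only prints the <code>kubectl</code> command it would run instead of executing it:</p>

```shell
#!/bin/sh
# Placeholders - substitute your real values.
ACCOUNT=123456789012
REGION=us-east-1
SECRET_NAME=ecr-pull-secret

# In a real cron job the token would come from:
#   TOKEN=$(aws ecr get-login-password --region "$REGION")
TOKEN=placeholder-token

# Print the command so it can be reviewed; drop the echo to execute it.
echo "kubectl create secret docker-registry ${SECRET_NAME}" \
     "--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com" \
     "--docker-username=AWS --docker-password=${TOKEN}"
```

<p>ECR authorization tokens expire after 12 hours, so the refresh has to run at least that often.</p>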
<p>I am trying to deploy an Elasticsearch cluster <code>(replicas: 3)</code> using a StatefulSet in kubernetes and need to store the Elasticsearch data in a Persistent Volume (PV). Since each Elasticsearch instance has its own data folder, I need to have a separate data folder for each replica in the PV. I am trying to use <code>volumeClaimTemplates</code> and <code>mountPath: /usr/share/elasticsearch/data</code> but this is resulting in an error: <code>pod has unbound immediate PersistentVolumeClaims</code> in the second pod. Hence, how can I achieve this using a StatefulSet? </p> <p>Thanks in advance.</p>
<p>You didn't say how you are trying to install Elasticsearch; as an example, please follow:</p> <ul> <li><a href="https://medium.com/@zhimin.wen/persistent-volume-claim-for-statefulset-8050e396cc51" rel="nofollow noreferrer">this tutorial</a>, </li> <li><a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch" rel="nofollow noreferrer">helm-charts</a>,</li> </ul> <p>As per the documentation for StatefulSet - <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations" rel="nofollow noreferrer">limitations</a>:</p> <blockquote> <p>The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.</p> </blockquote> <p>Your error looks like exactly that: a problem with dynamic storage provisioning. </p> <p>Please verify the storage class, whether the PV and PVC were created and bound together, and the <code>storageClassName</code> in <code>volumeClaimTemplates</code>:</p> <pre><code> volumeMounts: - name: "elasticsearch-master" mountPath: /usr/share/elasticsearch/data volumeClaimTemplates: - metadata: name: elasticsearch-master spec: accessModes: - ReadWriteOnce storageClassName: name # check this setting to see if you are using the default storage class; otherwise you should specify this parameter manually resources: requests: storage: 30Gi </code></pre> <p>Hope this helps.</p>
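<p>If <code>kubectl get storageclass</code> shows no default class on your cluster, the claims generated by <code>volumeClaimTemplates</code> will stay <code>Pending</code>, which produces exactly the <code>pod has unbound immediate PersistentVolumeClaims</code> error. A sketch of a default StorageClass for GCE (the provisioner is cloud-specific, so use the one that matches your platform):</p>

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # mark as default so claims without an explicit storageClassName
    # are still provisioned dynamically
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```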
<p>Is it possible to restart pods automatically based on the time?</p> <p>For example, I would like to restart the pods of my cluster every morning at 8.00 AM.</p>
<p>Use a cronjob, not to run your pods, but to schedule a Kubernetes API command that will restart the deployment every day (<code>kubectl rollout restart</code>). That way if something goes wrong, the old pods will not be down or removed.</p> <p>Rollouts create new ReplicaSets, and wait for them to be up, before killing off old pods, and rerouting the traffic. The service will continue uninterrupted.</p> <p>You have to set up RBAC, so that the Kubernetes client running from inside the cluster has permissions to make the needed calls to the Kubernetes API.</p> <pre><code>--- # Service account the client will use to reset the deployment, # by default the pods running inside the cluster can do no such things. kind: ServiceAccount apiVersion: v1 metadata: name: deployment-restart namespace: &lt;YOUR NAMESPACE&gt; --- # allow getting status and patching only the one deployment you want # to restart apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: deployment-restart namespace: &lt;YOUR NAMESPACE&gt; rules: - apiGroups: ["apps", "extensions"] resources: ["deployments"] resourceNames: ["&lt;YOUR DEPLOYMENT NAME&gt;"] verbs: ["get", "patch", "list", "watch"] # "list" and "watch" are only needed # if you want to use `rollout status` --- # bind the role to the service account apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: deployment-restart namespace: &lt;YOUR NAMESPACE&gt; roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: deployment-restart subjects: - kind: ServiceAccount name: deployment-restart namespace: &lt;YOUR NAMESPACE&gt; </code></pre> <p>And the cronjob specification itself:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: deployment-restart namespace: &lt;YOUR NAMESPACE&gt; spec: concurrencyPolicy: Forbid schedule: '0 8 * * *' # cron spec of time, here, 8 o'clock jobTemplate: spec: backoffLimit: 2 # this has very low chance of failing, as all this does # is prompt kubernetes to schedule new
replica set for # the deployment activeDeadlineSeconds: 600 # timeout, makes most sense with # "waiting for rollout" variant specified below template: spec: serviceAccountName: deployment-restart # name of the service # account configured above restartPolicy: Never containers: - name: kubectl image: bitnami/kubectl # probably any kubectl image will do, # optionally specify version, but this # should not be necessary, as long as the # version of kubectl is new enough to # have `rollout restart` command: - 'kubectl' - 'rollout' - 'restart' - 'deployment/&lt;YOUR DEPLOYMENT NAME&gt;' </code></pre> <p>Optionally, if you want the cronjob to wait for the deployment to roll out, change the cronjob command to:</p> <pre><code>command: - bash - -c - &gt;- kubectl rollout restart deployment/&lt;YOUR DEPLOYMENT NAME&gt; &amp;&amp; kubectl rollout status deployment/&lt;YOUR DEPLOYMENT NAME&gt; </code></pre>
<p>I have a question and a problem about capabilities.</p> <p>Why does my program work when I run <code>docker run --cap-add=NET_ADMIN ...</code>?</p> <p>And why doesn't it work if I run my program with this .yml file:</p> <pre><code> containers: - name: snake image: docker.io/kelysa/snake:lastest imagePullPolicy: Always securityContext: privileged: true capabilities: add: ["NET_ADMIN","NET_RAW"] </code></pre> <p>What is the difference between running docker with --cap-add and running a pod with the same capabilities? </p>
<p>As described by <a href="https://stackoverflow.com/questions/58377469/difference-between-cap-add-net-admin-and-add-capabilities-in-yml#">David Maze</a> and according to the docker <a href="https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities" rel="noreferrer">docs: Runtime privilege and Linux capabilities</a> </p> <blockquote> <p>By default, Docker containers are “unprivileged” and cannot, for example, run a Docker daemon inside a Docker container. This is because by default a container is not allowed to access any devices, but a “privileged” container is given access to all devices (see the documentation on cgroups devices).</p> <pre><code>--cap-add: Add Linux capabilities, --cap-drop: Drop Linux capabilities, --privileged=false: Give extended privileges to this container --device=[]: Allows you to run devices inside the container without the --privileged flag. </code></pre> <p>When the operator executes <code>docker run --privileged</code>, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. </p> <p>In addition to --privileged, the operator can have fine grain control over the capabilities using --cap-add and --cap-drop. </p> </blockquote> <p>There you can find two kinds of capabilities:</p> <ul> <li>capabilities that Docker keeps by default,</li> <li>capabilities which are not granted by default and may be added.</li> </ul> <p>This command <code>docker run --cap-add=NET_ADMIN</code> will apply additional Linux capabilities.</p> <p>As per docs:</p> <blockquote> <p>For interacting with the network stack, instead of using --privileged they should use --cap-add=NET_ADMIN to modify the network interfaces.</p> </blockquote> <p><strong>Note</strong>:</p> <p>To reduce syscall attacks it's good practice to give the container only the required privileges.
Please refer also to <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="noreferrer">Enabling Pod Security Policies</a>.</p> <p>From the container side it can be achieved by using:</p> <pre><code>securityContext: capabilities: drop: ["all"] add: ["NET_BIND_SERVICE"] </code></pre> <p>To see the capabilities applied inside your container you can use <code>getpcaps process_id</code> or <code>getpcaps $(pgrep your-process_name)</code>. To list and explore Linux capabilities you can use <code>capsh --print</code>.</p> <p><strong>Resources</strong>:</p> <ul> <li><a href="https://linux-audit.com/linux-capabilities-101/" rel="noreferrer">Linux capabilities</a>, </li> <li><a href="https://github.com/docker/labs/tree/master/security/capabilities" rel="noreferrer">docker labs</a>, </li> <li><a href="http://man7.org/linux/man-pages/man1/capsh.1.html" rel="noreferrer">capsh</a> </li> <li><a href="https://www.weave.works/blog/container-capabilities-kubernetes/" rel="noreferrer">Configuring Container Capabilities with Kubernetes</a> </li> <li><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#what-is-a-pod-security-policy" rel="noreferrer">What is a Pod Security Policy</a></li> </ul> <p>Hope this helps.</p>
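<p>If <code>getpcaps</code> or <code>capsh</code> are not installed in the image, you can also read the raw capability bitmasks straight from <code>/proc</code> (e.g. via <code>kubectl exec</code> into the container) and decode a mask later with <code>capsh --decode=&lt;hex&gt;</code> on any machine that has it:</p>

```shell
# CapEff is the effective set, CapBnd the bounding set; each is a hex bitmask
# with one bit per capability (NET_ADMIN is bit 12, NET_RAW is bit 13).
grep '^Cap' /proc/self/status
```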
<p>I am running a Java Spring Boot boot application on Tomcat in a Kubernetes cluster, I am using a lifecycle hook to shut down Tomcat before the container gets terminated. Also defining a termination grace period of 30 secs.</p> <p>Expected behaviour: tomcat shuts down gracefully when I kill a pod or delete a deployment.</p> <p>Actual behaviour: tomcat shuts down abruptly.</p> <pre><code>lifecycle: preStop: exec: command: - /usr/local/tomcat/bin/catalina.sh </code></pre> <p>Can anyone help please?</p>
<p>In Kubernetes, the <code>preStop</code> hook is executed before the pod is terminated.</p> <p>For the above use case, the Tomcat process has to be shut down gracefully before the pod is terminated when a pod / deployment is deleted.</p> <p>The following sample pod definition works by gracefully shutting down the Tomcat server before pod termination. Note: The container in this definition is <code>FROM tomcat:9.0.19-jre8</code></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: demopod spec: containers: - image: demo:web name: demo-container ports: - containerPort: 8080 lifecycle: postStart: exec: command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;echo Hello from the postStart handler &gt; /usr/share/message&quot;] preStop: exec: command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;echo Stopping server &gt; /usr/share/message1; sleep 10; sh /usr/local/tomcat/bin/catalina.sh stop&quot;] </code></pre> <p>In the above definition file, we run three commands (change as per need).</p> <ol> <li><p>Write a shutdown message into /usr/share/message1</p> </li> <li><p>sleep 10 (Just to give time to view pod / container logs)</p> </li> <li><p>Run the <code>catalina.sh stop</code> script to stop the Tomcat server.</p> </li> </ol> <p><strong>To Debug:</strong> For preStop, if the handler fails, an event is broadcast: the FailedPreStopHook event. You can see these events by running <code>kubectl describe pod &lt;pod_name&gt;</code></p> <p>Refer to the post below for more detailed instructions: <a href="http://muralitechblog.com/kubernetes-graceful-shutdown-of-pods/" rel="nofollow noreferrer">http://muralitechblog.com/kubernetes-graceful-shutdown-of-pods/</a></p> <p><strong>Shutdown logs:</strong> <a href="https://i.stack.imgur.com/xjxqd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xjxqd.png" alt="Valid shutdown logs" /></a></p>
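<p>One thing to watch, since the question mentions a 30-second termination grace period: the time spent in the <code>preStop</code> hook counts against <code>terminationGracePeriodSeconds</code>, so the grace period has to be longer than the hook needs, otherwise the container is killed before Tomcat finishes shutting down. A sketch reusing the pod definition above:</p>

```yaml
spec:
  # must exceed the preStop duration (sleep 10 + catalina.sh stop time)
  terminationGracePeriodSeconds: 30
  containers:
    - name: demo-container
      image: demo:web
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 10; sh /usr/local/tomcat/bin/catalina.sh stop"]
```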
<p>I am trying to set up a fresh kubernetes cluster, and facing issue with using weave as the networking solution. Weave pods are hung in pending state and no events/logs available from kubectl command line.</p> <p>I am trying to set-up a kubernetes cluster from scratch as part of an online course. I have set up master nodes - with api server, controller manager and scheduler up and running. And the worker nodes with kubelets and kube-proxy running.</p> <p>Node status:</p> <pre><code>vagrant@master-1:~$ kubectl get nodes -n kube-system </code></pre> <p><code>NAME STATUS ROLES AGE VERSION worker-1 NotReady &lt;none&gt; 25h v1.13.0 worker-2 NotReady &lt;none&gt; 9h v1.13.0</code></p> <p>As next step to enable networking, I am using weave. I have installed weave and extracted on worker nodes.</p> <p>Now when I try to run below command:</p> <p><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"</code></p> <p>I see DaemonSet getting initialized, but the pods created continue to be in "Pending state".</p> <pre><code>vagrant@master-1:~$ kubectl get pods -n kube-system </code></pre> <p><code>NAME READY STATUS RESTARTS AGE weave-net-ccrqs 0/2 Pending 0 73m weave-net-vrm5f 0/2 Pending 0 73m</code></p> <p>The below command: <code>vagrant@master-1:~$ kubectl describe pods -n kube-system</code> does not return any events ongoing.</p> <p>From the scheduler service logs, I could see below errors logged.</p> <pre><code>Oct 13 16:46:51 master-2 kube-scheduler[14569]: E1013 16:46:51.973883 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:anonymous" cannot list resource "statefulsets" in API group "apps" at the cluster scope Oct 13 16:46:51 master-2 kube-scheduler[14569]: E1013 16:46:51.982228 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is 
forbidden: User "system:anonymous" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.338171 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:anonymous" cannot list resource "persistentvolumes" in API group "" at the cluster scope Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.745288 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.765103 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:anonymous" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.781419 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:anonymous" cannot list resource "replicasets" in API group "apps" at the cluster scope Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.785872 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:anonymous" cannot list resource "replicationcontrollers" in API group "" at the cluster scope Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.786117 14569 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:232: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope Oct 13 16:46:52 master-2 kube-scheduler[14569]: 
E1013 16:46:52.786790 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Node: nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 13 16:46:52 master-2 kube-scheduler[14569]: E1013 16:46:52.787016 14569 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:anonymous" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope </code></pre> <p>Since I am quite new to kubernetes, please excuse if I missed to add relevant information. Will share with immediate effect. Kind help required.</p> <p>Added kubeconfig for scheduler:</p> <pre><code> { kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.crt \ --embed-certs=true \ --server=https://127.0.0.1:6443 \ --kubeconfig=kube-scheduler.kubeconfig kubectl config set-credentials system:kube-scheduler \ --client-certificate=kube-scheduler.crt \ --client-key=kube-scheduler.key \ --embed-certs=true \ --kubeconfig=kube-scheduler.kubeconfig kubectl config set-context default \ --cluster=kubernetes-the-hard-way \ --user=system:kube-scheduler \ --kubeconfig=kube-scheduler.kubeconfig kubectl config use-context default --kubeconfig=kube- scheduler.kubeconfig } </code></pre> <p>Added scheduler service definition:</p> <pre><code>cat &lt;&lt;EOF | sudo tee /etc/systemd/system/kube-scheduler.service [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes [Service] ExecStart=/usr/local/bin/kube-scheduler \\ --kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\ --address=127.0.0.1 \\ --leader-elect=true \\ --v=2 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target EOF </code></pre> <p>Started scheduler using:</p> <pre><code>sudo systemctl enable kube-scheduler sudo systemctl start kube-scheduler </code></pre> <p>Component status:</p> 
<pre><code>vagrant@master-1:~$ kubectl get componentstatuses --kubeconfig admin.kubeconfig NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health":"true"} etcd-1 Healthy {"health":"true"} </code></pre>
<p>I restarted the kube-scheduler and controller manager on both master nodes participating in HA, which I believe allowed the load balancer URL for the API server to take effect, and the errors observed earlier were eliminated.</p> <p>After this, I set up a worker node and installed Weave; the pod got deployed and the node became Ready.</p> <pre><code>vagrant@master-1:~$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE weave-net-zswht 1/2 Running 0 41s vagrant@master-1:~$ kubectl get nodes NAME STATUS ROLES AGE VERSION worker-1 Ready &lt;none&gt; 4m51s v1.13.0 </code></pre>
<p>I have a Kubernetes cluster on google cloud. I accidentally deleted a namespace which had a few pods running in it. Luckily, the pods are still running, but the namespace is in terminations state. </p> <p>Is there a way to restore it back to active state? If not, what would the fate of my pods running in this namespace be?</p> <p>Thanks </p>
<p>A few interesting articles about <strong>backing up and restoring a <code>Kubernetes cluster</code></strong> using various tools:</p> <p><a href="https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d" rel="nofollow noreferrer">https://medium.com/@pmvk/kubernetes-backups-and-recovery-efc33180e89d</a></p> <p><a href="https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487" rel="nofollow noreferrer">https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487</a></p> <p><a href="https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-heptio-ark</a></p> <p><a href="https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state" rel="nofollow noreferrer">https://www.revolgy.com/blog/kubernetes-in-production-snapshotting-cluster-state</a></p> <p>They may be more useful in the future than in your current situation. If you don't have any backup, unfortunately there isn't much you can do.</p> <p>Please notice that in all of those articles they use <code>namespace deletion</code> to <strong>simulate a disaster scenario</strong>, so you can imagine what the consequences of such an operation are. However, the results may not be seen immediately and you may see your pods running for some time, but eventually <strong>namespace deletion</strong> <em>removes all kubernetes cluster resources in a given namespace</em>, including <code>LoadBalancers</code> or <code>PersistentVolumes</code>. It may take some time. Some resources may not be deleted because they are still used by another resource (e.g. a <code>PersistentVolume</code> by a running <code>Pod</code>).</p> <p>You can try to run <a href="https://gist.github.com/negz/c3ee465b48306593f16c523a22015bec" rel="nofollow noreferrer">this</a> script to dump all your resources that are still available to YAML files; however, some modification may be needed, as you will not be able to list objects belonging to the deleted namespace anymore. You may need to add the <code>--all-namespaces</code> flag to list them.</p> <p>You may also try to dump manually any resource which is still available. If you can still see some resources like <code>Pods</code>, <code>Deployments</code> etc. and <code>kubectl get</code> works on them, you may try to save their definition to a yaml file:</p> <pre><code>kubectl get deployment nginx-deployment -o yaml &gt; deployment_backup.yaml </code></pre> <p>Once you have your resources backed up you should be able to recreate your cluster more easily.</p>
<p>I have created an ingress resource in my Kubernetes cluster on google cloud. </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gordion annotations: nginx.ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.global-static-ip-name: gordion-ingress networking.gke.io/managed-certificates: gordion-certificate,gordion-certificate-backend spec: rules: - host: backend.gordion.io http: paths: - path: / backend: serviceName: backend servicePort: 80 </code></pre> <p>Everything works. However, I have not created any <code>ingress-controller</code>. <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">the official docs state</a> that it must have it.</p> <blockquote> <p>You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.</p> </blockquote> <p>So where is my ingress-controller if my routing actually works? how do I see its configuration?</p>
<p>In GKE, if you do not specify the <code>kubernetes.io/ingress.class: nginx</code> annotation, the GCE L7 load balancer controller (GLBC) will be used by default (it is actually equivalent to setting <code>kubernetes.io/ingress.class: gce</code>). This controller is provisioned automatically and relies on GCP's global HTTP(S) load balancer. </p> <p>You can find more information on this topic on the <a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">GLBC GitHub page</a> as well as <a href="https://kubectl.wordpress.com/2019/09/14/using-gclb-as-an-ingress-controller/" rel="nofollow noreferrer">in this How-To article</a>.</p>
<p>I have created an ingress resource in my Kubernetes cluster on google cloud. </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gordion annotations: nginx.ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.global-static-ip-name: gordion-ingress networking.gke.io/managed-certificates: gordion-certificate,gordion-certificate-backend spec: rules: - host: backend.gordion.io http: paths: - path: / backend: serviceName: backend servicePort: 80 </code></pre> <p>Everything works. However, I have not created any <code>ingress-controller</code>. <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">the official docs state</a> that it must have it.</p> <blockquote> <p>You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.</p> </blockquote> <p>So where is my ingress-controller if my routing actually works? how do I see its configuration?</p>
<p>In Google Kubernetes Engine (GKE), when you create an Ingress object, the built-in GKE ingress controller will take care of creating the appropriate HTTP(S) load balancer which conforms to your Ingress and its Service(s). For more information, have a look at this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">Google Cloud Document</a> on "HTTP(S) load balancing with Ingress".</p>
<p>I have Skaffold working well with local development server and database deployments. I'm trying to get working on the <code>create-react-app</code> front-end, but the behavior is incredibly slow and erratic.</p> <p><strong>Issues</strong></p> <p>The main problems are the following:</p> <ol> <li>It takes upwards of five minutes from running <code>skaffold dev --port-forward --tail</code> for it finally start spinning up. Running just a <code>docker build</code> takes less than 30 seconds.</li> <li>When it finally starts spinning up, it just sits on <code>Starting the development server...</code> for another two minutes.</li> <li><p>Then, nine times out of ten, I get the following errors after several minutes (there are three because that is how many replicas there are): <a href="https://i.stack.imgur.com/pmyC2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pmyC2.png" alt="enter image description here"></a></p> <p>One out of ten times, it will actually go into the <code>Compiled Successfully! You can now view in the browser.</code> It never does launch in Chrome though.</p></li> <li>Changes to JS in create-react-app are never reflected in new browser. You have to stop and run Skaffold again. Skaffold does say <code>Syncing 1 files for &lt;image&gt;... Watching for changes...</code>, but nothing changes even after a refresh.</li> </ol> <p><strong>What I've tried</strong></p> <ol> <li>I've really simplified what I'm trying to do to make it easier to sort this out, so I'm using just an OOTB <code>create-react-app</code> application. 
The behavior is the same regardless.</li> <li><code>minikube delete</code> and <code>minikube start</code> several times (did this because even the server deployment started acting erratically after trying <code>create-react-app</code>)</li> </ol> <p><strong>Code and Steps to Reproduce</strong></p> <p>I'm on macOS Mojave (10.14.6) using Docker for Mac, Kubernetes (v1.16.0), minikube (v1.4.0), Skaffold (v0.39.0), and <code>create-react-app</code>. I'll have to skip the installation process for all of these since it is fairly lengthy, so the following steps assume you have this already setup.</p> <ol> <li><p>Make a project directory:</p> <p><code>mkdir project</code></p></li> <li><p>Make a Kubernetes manifest directory and move into it:</p> <p><code>mkdir k8s &amp;&amp; cd k8s</code></p></li> <li><p>Make a <code>client-deployment.yaml</code> and add the following:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: client-deployment spec: replicas: 3 selector: matchLabels: component: web template: metadata: labels: component: web spec: containers: - name: client image: testapp/client ports: - containerPort: 3000 </code></pre></li> <li><p>Make a <code>client-cluster-ip-service.yaml</code> and add the following:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: client-cluster-ip-service spec: type: ClusterIP selector: component: web ports: - port: 3000 targetPort: 3000 </code></pre></li> <li><p>Move back into the parent:</p> <p><code>cd ..</code></p></li> <li><p>Create a <code>skaffold.yaml</code> and add the following:</p> <pre><code>apiVersion: skaffold/v1beta15 kind: Config build: local: push: false artifacts: - image: testapp/client context: web docker: dockerfile: Dockerfile.dev sync: manual: - src: "**/*.js" dest: . - src: "**/*.html" dest: . - src: "**/*.css" dest: . 
deploy: kubectl: manifests: - k8s/client-deployment.yaml - k8s/client-cluster-ip-service.yaml portForward: - resourceType: service resourceName: client-cluster-ip-service port: 3000 localPort: 3000 </code></pre></li> <li><p>Start a new <code>create-react-app</code> project:</p> <p><code>npx create-react-app test-app</code></p></li> <li><p>Change into the directory:</p> <p><code>cd test-app</code></p></li> <li><p>Create a <code>Dockerfile.dev</code> and add the following:</p> <pre><code>FROM node:alpine WORKDIR '/app' EXPOSE 3000 CMD ["npm", "run", "start"] COPY package* ./ RUN npm install COPY . . </code></pre></li> <li><p>Create a <code>.dockerignore</code> file and add the following:</p> <pre><code>node_modules *.swp </code></pre></li> <li><p>Go back into the parent directory:</p> <p><code>cd ..</code></p></li> <li><p>Make sure <code>minikube</code> is running:</p> <p><code>minikube start</code></p></li> <li><p>Run the <code>skaffold.yaml</code>:</p> <p><code>skaffold dev --port-forward --tail</code></p></li> </ol> <p>This is what produces the issues for me.</p>
<p>Ok. Disregard. Started with one replica and it worked fine. Two worked fine. Three worked if <code>skaffold</code> was already running, but not from a fresh <code>skaffold dev --port-forward --tail</code>.</p> <p>Did a <code>minikube ssh</code> and then ran <code>top</code>. The VM was running out of RAM; well, it was at 86% utilization. Increased it from the default 2GB to 8GB and now it works fine.</p> <p>First deleted the VM with <code>minikube delete</code> and then created a new one with <code>minikube start --memory='8g'</code>. All good now.</p>
<p>I'm trying to find a solution for a problem that seems quite common.</p> <ol> <li>I have a k8s ClusterIP service which exposes two ports: 8088 and 60004</li> <li>I would like to expose these same ports on the ALB and not use path-based routing</li> </ol> <p>This works for exposing one service on port 8088:</p> <pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  namespace: myns
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-path: /ping
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8088}]'
spec:
  rules:
    - host: myhost
      http:
        paths:
          - path: /*
            backend:
              serviceName: firstservice
              servicePort: 8088
</code></pre> <p>How can the same thing be achieved for both ports using ONE ingress?</p> <p>Thanks in advance.</p>
<p>Eventually, to solve this problem, I used the ALB ingress controller's IngressGroup feature, which is currently in alpha state: <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/914" rel="noreferrer">https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/914</a></p> <p>This is how my ingress resources look now (note that underscores are not allowed in Kubernetes resource names, so the names use hyphens):</p> <pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress-1
  namespace: myns
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: mygroup
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8088}]'
spec:
  rules:
    - host: &lt;HOST&gt;
      http:
        paths:
          - path: /*
            backend:
              serviceName: myservice
              servicePort: 8088
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress-2
  namespace: myns
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: mygroup
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 60004}]'
spec:
  rules:
    - host: &lt;HOST&gt;
      http:
        paths:
          - path: /*
            backend:
              serviceName: myservice
              servicePort: 60004
</code></pre> <p>where the key annotation is</p> <pre><code>alb.ingress.kubernetes.io/group.name: mygroup
</code></pre> <p>which connects these two ingress resources.</p> <p>Therefore, I end up with the following:</p> <ul> <li>A service with multiple (two) ports in k8s, exposed with two separate ingress resources that both point to the same AWS ALB (because of the shared group name)</li> <li>On the AWS ALB side, one ALB with two ports exposed, 8088 and 60004, each pointing to the same k8s service but a different port on the same pods (this could easily be two different k8s services if that was needed)</li> </ul>
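<p>For reference, a minimal sketch of the single multi-port Service that both ingress resources could point to. The <code>app: myapp</code> selector label is an assumption here, since the original pod labels are not shown; note that each port must be named when a Service exposes more than one:</p> <pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: myns
spec:
  type: ClusterIP
  selector:
    app: myapp   # assumed pod label, adjust to your deployment
  ports:
    - name: ui        # names are required for multi-port Services
      port: 8088
      targetPort: 8088
    - name: admin
      port: 60004
      targetPort: 60004
</code></pre>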
<p>I am running a service in Docker Swarm on a single machine. This is what I did to deploy the service:</p> <pre><code>docker swarm init
docker stack deploy -c docker-compose.yml MyApplication
</code></pre> <p>Content of <strong>docker-compose.yml</strong>:</p> <pre><code>version: "3"
services:
  web:
    image: myimage:1.0
    ports:
      - "9000:80"
      - "9001:443"
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
      restart_policy:
        condition: on-failure
</code></pre> <p>Is Docker Swarm able to increase the number of replicas automatically based on current traffic? If yes, how do I configure it to do so? If no, how can I achieve it, maybe with Kubernetes?</p>
<p>Docker Swarm has no built-in autoscaling, so for this you would move the workload to Kubernetes. There, based on the CPU utilization of Pods, it is possible to autoscale Deployments. You need to use the <code>kubectl autoscale</code> command, which creates a HorizontalPodAutoscaler object that targets a specified resource and scales it as needed. The HPA periodically adjusts the number of replicas of the scale target to match the average CPU utilization that you specify.</p> <p>When using <code>kubectl autoscale</code>, you need to specify a maximum and minimum number of replicas for your application, as well as a CPU utilization target.</p> <p>For example, to set the maximum number of replicas to five and the minimum to two, with a CPU utilization target of 60%, run the following command:</p> <pre><code>$ kubectl autoscale deployment my-app --max 5 --min 2 --cpu-percent 60
</code></pre> <p>You can find more about it in the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#autoscale" rel="nofollow noreferrer">documentation</a> and in the <a href="https://dzone.com/articles/how-to-use-kubernetes-for-autoscaling" rel="nofollow noreferrer">following article</a>. I hope it helps you.</p>
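<p>If you prefer a declarative setup, the same autoscaler that <code>kubectl autoscale</code> creates can be written as a manifest and applied with <code>kubectl apply -f</code>. This sketch uses the <code>autoscaling/v1</code> API, which supports only a CPU utilization target; the deployment name <code>my-app</code> matches the command above:</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:          # the Deployment whose replica count the HPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 60
</code></pre>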