<p>I'm not able to get a custom domain record working with an OpenShift cluster. I've read tons of articles, StackOverflow posts, and this YouTube video <a href="https://www.youtube.com/watch?v=Y7syr9d5yrg" rel="nofollow noreferrer">https://www.youtube.com/watch?v=Y7syr9d5yrg</a>. All of them seem to &quot;almost&quot; be useful for me, but there is always something missing, and I'm not able to get this working by myself.</p> <p>The scenario is as follows. I've got an OpenShift cluster deployed on an IBM Cloud account. I've registered <strong>myinnovx.com</strong>, and I want to use it with an OpenShift app. Cluster details:</p> <pre><code>oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
openshift v3.11.146
kubernetes v1.11.0+d4cacc0
</code></pre> <p>I've got an application deployed with a blue/green strategy. In the following screenshot, you can see the routes I have available.</p> <p><a href="https://i.stack.imgur.com/TcZRZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TcZRZ.png" alt="routes screenshot" /></a></p> <blockquote> <p>mobile-blue: I created this one manually pointing to my custom domain <strong>mobileoffice.myinnovx.com</strong></p> <p>mobile-office: Created with <code>oc expose service mobile-office --name=mobile-blue</code> to use external access.</p> <p>mobile-green: OpenShift automatically generated a route for the green app version. (Source2Image deployment)</p> <p>mobile-blue: OpenShift automatically generated a route for the blue app version. (Source2Image deployment)</p> </blockquote> <p>I've set up two CNAME records on my DNS edit page as follows:</p> <p><a href="https://i.stack.imgur.com/JTzXV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JTzXV.png" alt="enter image description here" /></a></p> <p>In several blogs/articles, I've found that I'm supposed to point my wildcard record to the router route canonical name. But I don't have any route canonical name in my cluster.
I don't even have an Ingress route configured.</p> <p>I'm at a loss here as to what I'm missing. Any help is greatly appreciated. This is the response I get testing my DNS:</p> <p><a href="https://i.stack.imgur.com/DpezY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DpezY.png" alt="enter image description here" /></a></p> <p>This is a current export of my DNS:</p> <pre><code>$ORIGIN myinnovx.com.
$TTL 86400
@ IN SOA ns1.softlayer.com. msalimbe.ar.ibm.com. (
    2019102317 ; Serial
    7200       ; Refresh
    600        ; Retry
    1728000    ; Expire
    3600)      ; Minimum
@ 86400 IN NS ns1.softlayer.com.
@ 86400 IN NS ns2.softlayer.com.
*.myinnovx.com 900 IN CNAME .mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
mobileoffice 900 IN CNAME mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud
mobile-test.myinnovx.com 900 IN A 169.63.244.76
</code></pre>
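<p>For reference, a quick way to check where these records actually end up (assuming <code>dig</code> is available) is to follow the CNAME chain for the hostnames involved:</p>

```shell
# What does the custom hostname resolve to?
dig +short mobileoffice.myinnovx.com

# What does a name under the wildcard resolve to?
# ("test" is just an arbitrary probe label under the wildcard record)
dig +short test.myinnovx.com
```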
<p>Most of what I'm about to say only applies to OpenShift 3.x. In OpenShift 4.x, things are sufficiently different that most of the below doesn't quite apply.</p> <p>By default, OpenShift 3.11 exposes applications via Red Hat's custom HAProxy Ingress Controller (colloquially known as the "Router"). The typical design in an OpenShift 3.x cluster is to designate particular cluster hosts for running cluster infrastructure workloads like the HAProxy Router and the internal OpenShift registry (usually using the <a href="https://docs.openshift.com/container-platform/3.11/install/example_inventories.html#multi-masters-single-etcd-using-native-ha" rel="nofollow noreferrer">node-role.kubernetes.io/infra=true</a> node labels).</p> <p>For convenience, so that admins don't have to manually create a DNS record for each exposed OpenShift application, there is a wildcard DNS entry that points to the load balancer associated with the HAProxy Router. The DNS name of this is configured in the <a href="https://docs.openshift.com/container-platform/3.11/install/configuring_inventory_file.html#advanced-install-configuring-docker-route" rel="nofollow noreferrer">openshift_master_default_subdomain</a> variable of the Ansible inventory file used to do your cluster installation.</p> <p>The structure of this record is generally something like <code>*.apps.&lt;cluster name&gt;.&lt;dns subdomain&gt;</code>, but it can be anything you like.</p> <p>If you want a prettier DNS name for your applications, you can do a couple of things.</p> <p>The first is to create a DNS entry <code>myapp.example.com</code> pointing to your load balancer and have your load balancer configured to forward those requests to the cluster hosts where the HAProxy Router is running on port 80/443.
You can then configure your application's <a href="https://docs.openshift.com/container-platform/3.11/dev_guide/routes.html" rel="nofollow noreferrer">Route</a> object to use the hostname <code>myapp.example.com</code> instead of the default <code>&lt;app name&gt;-&lt;project name&gt;.apps.&lt;cluster name&gt;.&lt;dns subdomain&gt;</code>.</p> <p>Another method would be to do what you're suggesting: let the application use the default wildcard route name, but create a DNS CNAME pointing to the original wildcard route name. For example, if my <code>openshift_master_default_subdomain</code> is <code>apps.openshift-dev.example.com</code> and my application route is <code>myapp-myproject.apps.openshift-dev.example.com</code>, then I could create a CNAME DNS record <code>myapp.example.com</code> pointing to <code>myapp-myproject.apps.openshift-dev.example.com</code>.</p> <p>The key thing that makes either of the above work is that the HAProxy Router doesn't care what the hostname of the request is. All it's going to do is match the Host header of the incoming request (SNI must be set in the case of TLS requests, with the HAProxy Router configured for passthrough) against all of the Route objects in the cluster and see if any of them match. So if your DNS/load balancer configuration is set up to bring requests to the HAProxy Router and the Host header matches a Route, that request will get forwarded to the appropriate OpenShift service.</p> <p>In your case, I don't think you have the CNAME pointed at the right place. You need to point your CNAME at the wildcard hostname your application Route is using.</p>
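<p>For concreteness, here is a rough sketch of what a Route with a custom hostname could look like. This is illustrative, not taken from your cluster: <code>myapp</code>, <code>myproject</code>, and <code>myapp.example.com</code> are placeholder names.</p>

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
  namespace: myproject
spec:
  # Custom hostname instead of the default wildcard-based name.
  # DNS for this name must still bring traffic to the HAProxy Router.
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
```

<p>The Router will then match incoming requests carrying <code>Host: myapp.example.com</code> against this Route, regardless of which DNS name brought the traffic there.</p>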
<p>I've been trying to simply run the SparkPi example on Kubernetes with Spark 2.4.0, and it doesn't seem to behave at all like the documentation says it should.</p> <p>I followed <a href="https://spark.apache.org/docs/2.4.0/running-on-kubernetes.html" rel="noreferrer">the guide</a>. I built a vanilla Docker image with the <code>docker-image-tool.sh</code> script and added it to my registry.</p> <p>I launch the job from my Spark folder with a command like this:</p> <pre><code>bin/spark-submit \
  --master k8s://https://&lt;k8s-apiserver-host&gt;:&lt;k8s-apiserver-port&gt; \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=5 \
  --conf spark.kubernetes.container.image=&lt;spark-image&gt; \
  --conf spark.kubernetes.namespace=mynamespace \
  --conf spark.kubernetes.container.image.pullSecrets=myPullSecret \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar
</code></pre> <p>This is virtually the same as in the documentation, except for the <code>namespace</code> and <code>pullSecrets</code> options. I need these options because of constraints in a multi-user Kubernetes environment. Even so, I tried using the default namespace and got the same outcome.</p> <p>What happens is that the pod gets stuck in the failed state, and two abnormal conditions occur:</p> <ul> <li>There's an error: <code>MountVolume.SetUp failed for volume "spark-conf-volume" : configmaps "spark-pi-1547643379283-driver-conf-map" not found</code>, indicating that k8s could not mount the config map to /opt/spark/conf, which should contain a properties file.
The config map (with the same name) exists, so I don't understand why k8s cannot mount it.</li> <li>In the container logs, several essential environment variables in the launch command are empty.</li> </ul> <p>Container log:</p> <pre><code>CMD=(${JAVA_HOME}/bin/java "${SPARK_JAVA_OPTS[@]}" -cp "$SPARK_CLASSPATH" -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $SPARK_DRIVER_ARGS)
exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -cp ':/opt/spark/jars/*' -Xms -Xmx -Dspark.driver.bindAddress=10.11.12.13
</code></pre> <p>You can control some of these variables directly with properties such as <code>spark.kubernetes.driverEnv.SPARK_DRIVER_CLASS</code>, but this should not be necessary (in this example the class is already specified with <code>--class</code>).</p> <p>For clarity, the following environment variables are empty:</p> <ul> <li><code>SPARK_DRIVER_MEMORY</code></li> <li><code>SPARK_DRIVER_CLASS</code></li> <li><code>SPARK_DRIVER_ARGS</code></li> </ul> <p>The <code>SPARK_CLASSPATH</code> is also missing the container-local jar I specified on the command line (spark-examples_2.11-2.4.0.jar).</p> <p>It seems that even if we resolve the problem with mounting the configmap, it won't help populate <code>SPARK_DRIVER_MEMORY</code>, because the configmap does not contain an equivalent configuration parameter.</p> <p>How do I resolve the problem of mounting the config map, and how do I resolve these environment variables?</p> <p>The Kubernetes YAML configuration is created by Spark, but in case it helps I am posting it here:</p> <p>pod-spec.yaml</p> <pre><code> { "kind": "Pod", "apiVersion": "v1", "metadata": { "name": "spark-pi-1547644451461-driver", "namespace": "frank", "selfLink": "/api/v1/namespaces/frank/pods/spark-pi-1547644451461-driver", "uid": "90c9577c-1990-11e9-8237-00155df6cf35", "resourceVersion": "19241392", "creationTimestamp": "2019-01-16T13:13:50Z", "labels": {
"spark-app-selector": "spark-6eafcf5825e94637974f39e5b8512028", "spark-role": "driver" } }, "spec": { "volumes": [ { "name": "spark-local-dir-1", "emptyDir": {} }, { "name": "spark-conf-volume", "configMap": { "name": "spark-pi-1547644451461-driver-conf-map", "defaultMode": 420 } }, { "name": "default-token-rfz9m", "secret": { "secretName": "default-token-rfz9m", "defaultMode": 420 } } ], "containers": [ { "name": "spark-kubernetes-driver", "image": "my-repo:10001/spark:latest", "args": [ "driver", "--properties-file", "/opt/spark/conf/spark.properties", "--class", "org.apache.spark.examples.SparkPi", "spark-internal" ], "ports": [ { "name": "driver-rpc-port", "containerPort": 7078, "protocol": "TCP" }, { "name": "blockmanager", "containerPort": 7079, "protocol": "TCP" }, { "name": "spark-ui", "containerPort": 4040, "protocol": "TCP" } ], "env": [ { "name": "SPARK_DRIVER_BIND_ADDRESS", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "status.podIP" } } }, { "name": "SPARK_LOCAL_DIRS", "value": "/var/data/spark-368106fd-09e1-46c5-a443-eec0b64b5cd9" }, { "name": "SPARK_CONF_DIR", "value": "/opt/spark/conf" } ], "resources": { "limits": { "memory": "1408Mi" }, "requests": { "cpu": "1", "memory": "1408Mi" } }, "volumeMounts": [ { "name": "spark-local-dir-1", "mountPath": "/var/data/spark-368106fd-09e1-46c5-a443-eec0b64b5cd9" }, { "name": "spark-conf-volume", "mountPath": "/opt/spark/conf" }, { "name": "default-token-rfz9m", "readOnly": true, "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount" } ], "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent" } ], "restartPolicy": "Never", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "serviceAccountName": "default", "serviceAccount": "default", "nodeName": "kube-worker16", "securityContext": {}, "imagePullSecrets": [ { "name": "mypullsecret" } ], "schedulerName": "default-scheduler", "tolerations": [ { "key": 
"node.kubernetes.io/not-ready", "operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300 }, { "key": "node.kubernetes.io/unreachable", "operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300 } ] }, "status": { "phase": "Failed", "conditions": [ { "type": "Initialized", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-01-16T13:15:11Z" }, { "type": "Ready", "status": "False", "lastProbeTime": null, "lastTransitionTime": "2019-01-16T13:15:11Z", "reason": "ContainersNotReady", "message": "containers with unready status: [spark-kubernetes-driver]" }, { "type": "ContainersReady", "status": "False", "lastProbeTime": null, "lastTransitionTime": null, "reason": "ContainersNotReady", "message": "containers with unready status: [spark-kubernetes-driver]" }, { "type": "PodScheduled", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-01-16T13:13:50Z" } ], "hostIP": "10.1.2.3", "podIP": "10.11.12.13", "startTime": "2019-01-16T13:15:11Z", "containerStatuses": [ { "name": "spark-kubernetes-driver", "state": { "terminated": { "exitCode": 1, "reason": "Error", "startedAt": "2019-01-16T13:15:23Z", "finishedAt": "2019-01-16T13:15:23Z", "containerID": "docker://931908c3cfa6c2607c9d493c990b392f1e0a8efceff0835a16aa12afd03ec275" } }, "lastState": {}, "ready": false, "restartCount": 0, "image": "my-repo:10001/spark:latest", "imageID": "docker-pullable://my-repo:10001/spark@sha256:58e319143187d3a0df14ceb29a874b35756c4581265f0e1de475390a2d3e6ed7", "containerID": "docker://931908c3cfa6c2607c9d493c990b392f1e0a8efceff0835a16aa12afd03ec275" } ], "qosClass": "Burstable" } } </code></pre> <p>config-map.yml</p> <pre><code>{ "kind": "ConfigMap", "apiVersion": "v1", "metadata": { "name": "spark-pi-1547644451461-driver-conf-map", "namespace": "frank", "selfLink": "/api/v1/namespaces/frank/configmaps/spark-pi-1547644451461-driver-conf-map", "uid": "90eda9e3-1990-11e9-8237-00155df6cf35", "resourceVersion": "19241350", "creationTimestamp": 
"2019-01-16T13:13:50Z", "ownerReferences": [ { "apiVersion": "v1", "kind": "Pod", "name": "spark-pi-1547644451461-driver", "uid": "90c9577c-1990-11e9-8237-00155df6cf35", "controller": true } ] }, "data": { "spark.properties": "#Java properties built from Kubernetes config map with name: spark-pi-1547644451461-driver-conf-map\r\n#Wed Jan 16 13:14:12 GMT 2019\r\nspark.kubernetes.driver.pod.name=spark-pi-1547644451461-driver\r\nspark.driver.host=spark-pi-1547644451461-driver-svc.frank.svc\r\nspark.kubernetes.container.image=aow-repo\\:10001/spark\\:latest\r\nspark.kubernetes.container.image.pullSecrets=mypullsecret\r\nspark.executor.instances=5\r\nspark.app.id=spark-6eafcf5825e94637974f39e5b8512028\r\nspark.app.name=spark-pi\r\nspark.driver.port=7078\r\nspark.kubernetes.resource.type=java\r\nspark.master=k8s\\://https\\://10.1.2.2\\:6443\r\nspark.kubernetes.python.pyFiles=\r\nspark.kubernetes.executor.podNamePrefix=spark-pi-1547644451461\r\nspark.kubernetes.namespace=frank\r\nspark.driver.blockManager.port=7079\r\nspark.jars=/opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar\r\nspark.submit.deployMode=cluster\r\nspark.kubernetes.submitInDriver=true\r\n" } } </code></pre>
<p>Spark on Kubernetes has a bug.</p> <p>During Spark job submission to the Kubernetes cluster, we first create the Spark Driver Pod: <a href="https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L130" rel="noreferrer">https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L130</a>.</p> <p>Only after that do we create all the other resources (e.g. the Spark Driver Service), including the ConfigMap: <a href="https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L135" rel="noreferrer">https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L135</a>.</p> <p>We do that to be able to set the Spark Driver Pod as the <code>ownerReference</code> of all of those resources (which cannot be done before we create the owner Pod): <a href="https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L134" rel="noreferrer">https://github.com/apache/spark/blob/02c5b4f76337cc3901b8741887292bb4478931f3/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala#L134</a>.</p> <p>This delegates the deletion of all those resources to Kubernetes, which makes it easier to garbage-collect unused resources in the cluster. All we need to clean up in that case is the Spark Driver Pod.
But there is a risk that Kubernetes will start the Spark Driver Pod before the ConfigMap is ready, which is exactly what causes your issue.</p> <p>This is still true as of Spark 2.4.4.</p>
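<p>A quick way to see whether you hit this race (the resource names below are the ones from the question's dump; yours will differ) is to compare the creation timestamps of the driver Pod and its ConfigMap:</p>

```shell
# If the Pod was created before the ConfigMap, the kubelet may have tried
# to mount "spark-conf-volume" while the ConfigMap did not exist yet.
kubectl -n frank get pod spark-pi-1547644451461-driver \
  -o jsonpath='{.metadata.creationTimestamp}{"\n"}'
kubectl -n frank get configmap spark-pi-1547644451461-driver-conf-map \
  -o jsonpath='{.metadata.creationTimestamp}{"\n"}'
```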
<p>I have a problem when I install the kube-dns add-on. My OS is CentOS Linux release 7.0.1406 (Core):</p> <pre><code>Kernel: Linux master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
</code></pre> <p>My api-server config:</p> <pre><code>###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=177.1.1.40"

# The port on the local server to listen on.
KUBE_API_PORT="--secure-port=443"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://master:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=AlwaysAdmit,NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota,ServiceAccount"

# Add your own!
KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/k8s-certs/CA/rootCA.crt --tls-private-key-file=/etc/kubernetes/k8s-certs/master/api-server.pem --tls-cert-file=/etc/kubernetes/k8s-certs/master/api-server.crt"
</code></pre> <p>The api-server authorization-mode is set to AlwaysAllow:</p> <pre><code>Sep 29 17:31:22 master kube-apiserver: I0929 17:31:22.952730 1311 flags.go:27] FLAG: --authorization-mode="AlwaysAllow"
</code></pre> <p>My kube-dns config YAML file is:</p> <pre><code># Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile kubernetes.io/name: "KubeDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.254.0.10 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP --- #apiVersion: rbac.authorization.k8s.io/v1 #kind: RoleBinding #metadata: # name: kube-dns # namespace: kube-system #roleRef: # apiGroup: rbac.authorization.k8s.io # kind: ClusterRole # name: Cluster-admin #subjects: #- kind: ServiceAccount # name: default # namespace: kube-system #--- #apiVersion: v1 #kind: ServiceAccount #metadata: # name: kube-dns # namespace: kube-system # labels: # kubernetes.io/cluster-service: "true" # addonmanager.kubernetes.io/mode: Reconcile --- apiVersion: v1 kind: ConfigMap metadata: name: kube-dns namespace: kube-system labels: addonmanager.kubernetes.io/mode: EnsureExists --- apiVersion: v1 kind: ConfigMap metadata: name: kubecfg-file namespace: kube-system labels: addonmanager.kubernetes.io/mode: EnsureExists data: kubecfg-file: | apiVersion: v1 kind: Config clusters: - cluster: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt server: https://177.1.1.40:443 name: kube-test users: - name: kube-admin user: token: /var/run/secrets/kubernetes.io/serviceaccount/token contexts: - context: cluster: kube-test namespace: default user: kube-admin name: test-context current-context: test-context --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kube-dns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: # replicas: not specified here: # 1. In order to make Addon Manager do not reconcile this replicas parameter. # 2. Default is 1. # 3. 
Will be tuned in real time if DNS horizontal auto-scaling is turned on. strategy: rollingUpdate: maxSurge: 10% maxUnavailable: 0 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: tolerations: - key: "CriticalAddonsOnly" operator: "Exists" volumes: - name: kube-dns-config configMap: name: kube-dns optional: true - name: kube-kubecfg-file configMap: name: kubecfg-file optional: true containers: - name: kubedns image: 177.1.1.35/library/kube-dns:1.14.8 resources: # TODO: Set memory limits when we've profiled the container for large # clusters, then set request = limit to keep this container in # guaranteed class. Currently, this container falls into the # "burstable" category so the kubelet doesn't backoff from restarting it. limits: memory: 170Mi requests: cpu: 100m memory: 70Mi livenessProbe: httpGet: path: /healthcheck/kubedns port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP # we poll on pod startup for the Kubernetes master service and # only setup the /readiness HTTP server once that's available. initialDelaySeconds: 3 timeoutSeconds: 5 args: - --domain=cluster.local. 
- --dns-port=10053 - --config-dir=/kube-dns-config - --kubecfg-file=/kubecfg-file/kubecfg-file - --kube-master-url=https://10.254.0.1:443 - --v=2 env: - name: PROMETHEUS_PORT value: "10055" ports: - containerPort: 10053 name: dns-local protocol: UDP - containerPort: 10053 name: dns-tcp-local protocol: TCP - containerPort: 10055 name: metrics protocol: TCP volumeMounts: - name: kube-dns-config mountPath: /kube-dns-config - name: kube-kubecfg-file mountPath: /kubecfg-file - name: dnsmasq image: 177.1.1.35/library/dnsmasq:1.14.8 livenessProbe: httpGet: path: /healthcheck/dnsmasq port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 args: - -v=2 - -logtostderr - -configDir=/etc/k8s/dns/dnsmasq-nanny - -restartDnsmasq=true - -- - -k - --cache-size=1000 - --no-negcache - --log-facility=- - --server=/cluster.local/127.0.0.1#10053 - --server=/in-addr.arpa/127.0.0.1#10053 - --server=/ip6.arpa/127.0.0.1#10053 ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP # see: https://github.com/kubernetes/kubernetes/issues/29055 for details resources: requests: cpu: 150m memory: 20Mi volumeMounts: - name: kube-dns-config mountPath: /etc/k8s/dns/dnsmasq-nanny - name: sidecar image: 177.1.1.35/library/sidecar:1.14.8 livenessProbe: httpGet: path: /metrics port: 10054 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 args: - --v=2 - --logtostderr - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV ports: - containerPort: 10054 name: metrics protocol: TCP resources: requests: memory: 20Mi cpu: 10m dnsPolicy: Default # Don't use cluster DNS. 
#serviceAccountName: kube-dns </code></pre> <p>When I start kube-dns, the kubedns container logs:</p> <pre><code>I0929 09:33:22.666182 1 dns.go:48] version: 1.14.8
I0929 09:33:22.668521 1 server.go:71] Using configuration read from directory: /kube-dns-config with period 10s
I0929 09:33:22.668586 1 server.go:119] FLAG: --alsologtostderr="false"
I0929 09:33:22.668604 1 server.go:119] FLAG: --config-dir="/kube-dns-config"
I0929 09:33:22.668613 1 server.go:119] FLAG: --config-map=""
I0929 09:33:22.668619 1 server.go:119] FLAG: --config-map-namespace="kube-system"
I0929 09:33:22.668629 1 server.go:119] FLAG: --config-period="10s"
I0929 09:33:22.668637 1 server.go:119] FLAG: --dns-bind-address="0.0.0.0"
I0929 09:33:22.668643 1 server.go:119] FLAG: --dns-port="10053"
I0929 09:33:22.668662 1 server.go:119] FLAG: --domain="cluster.local."
I0929 09:33:22.668671 1 server.go:119] FLAG: --federations=""
I0929 09:33:22.668683 1 server.go:119] FLAG: --healthz-port="8081"
I0929 09:33:22.668689 1 server.go:119] FLAG: --initial-sync-timeout="1m0s"
I0929 09:33:22.668695 1 server.go:119] FLAG: --kube-master-url="https://10.254.0.1:443"
I0929 09:33:22.668707 1 server.go:119] FLAG: --kubecfg-file="/kubecfg-file/kubecfg-file"
I0929 09:33:22.668714 1 server.go:119] FLAG: --log-backtrace-at=":0"
I0929 09:33:22.668727 1 server.go:119] FLAG: --log-dir=""
I0929 09:33:22.668733 1 server.go:119] FLAG: --log-flush-frequency="5s"
I0929 09:33:22.668739 1 server.go:119] FLAG: --logtostderr="true"
I0929 09:33:22.668744 1 server.go:119] FLAG: --nameservers=""
I0929 09:33:22.668754 1 server.go:119] FLAG: --stderrthreshold="2"
I0929 09:33:22.668760 1 server.go:119] FLAG: --v="2"
I0929 09:33:22.668765 1 server.go:119] FLAG: --version="false"
I0929 09:33:22.668774 1 server.go:119] FLAG: --vmodule=""
I0929 09:33:22.668831 1 server.go:201] Starting SkyDNS server (0.0.0.0:10053)
I0929 09:33:22.669125 1 server.go:220] Skydns metrics enabled (/metrics:10055)
I0929 09:33:22.669170 1 dns.go:146] Starting
endpointsController
I0929 09:33:22.669181 1 dns.go:149] Starting serviceController
I0929 09:33:22.669508 1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0929 09:33:22.669523 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
E0929 09:33:22.695489 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Unauthorized
E0929 09:33:22.696267 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Unauthorized
I0929 09:33:23.169540 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0929 09:33:23.670206 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
</code></pre> <p>After a few minutes, the pod crashes.</p> <pre><code>kubectl describe pod -n kube-system kube-dns-b7d556f59-h8xqp
Name: kube-dns-b7d556f59-h8xqp Namespace: kube-system Node: node3/177.1.1.43 Start Time: Sat, 29 Sep 2018 17:50:17 +0800 Labels: k8s-app=kube-dns pod-template-hash=638112915 Annotations: scheduler.alpha.kubernetes.io/critical-pod= Status: Running IP: 172.30.59.3 Controlled By: ReplicaSet/kube-dns-b7d556f59 Containers: kubedns: Container ID: docker://5d62497e0c966c08d4d8c56f7a52e2046fd05b57ec0daf34a7e3cd813e491f09 Image: 177.1.1.35/library/kube-dns:1.14.8 Image ID: docker-pullable://177.1.1.35/library/kube-dns@sha256:6d8e0da4fb46e9ea2034a3f4cab0e095618a2ead78720c12e791342738e5f85d Ports: 10053/UDP, 10053/TCP, 10055/TCP Host Ports: 0/UDP, 0/TCP, 0/TCP Args: --domain=cluster.local.
--dns-port=10053 --config-dir=/kube-dns-config --kubecfg-file=/kubecfg-file/kubecfg-file --kube-master-url=https://10.254.0.1:443 --v=2 State: Running Started: Sat, 29 Sep 2018 17:50:20 +0800 Ready: False Restart Count: 0 Limits: memory: 170Mi Requests: cpu: 100m memory: 70Mi Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3 Environment: PROMETHEUS_PORT: 10055 Mounts: /kube-dns-config from kube-dns-config (rw) /kubecfg-file from kube-kubecfg-file (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-dsxql (ro) dnsmasq: Container ID: docker://17ae73b52eb69c35a027cb5645a3801d649b262a8650862d64e7959a22c8e92e Image: 177.1.1.35/library/dnsmasq:1.14.8 Image ID: docker-pullable://177.1.1.35/library/dnsmasq@sha256:93c827f018cf3322f1ff2aa80324a0306048b0a69bc274e423071fb0d2d29d8b Ports: 53/UDP, 53/TCP Host Ports: 0/UDP, 0/TCP Args: -v=2 -logtostderr -configDir=/etc/k8s/dns/dnsmasq-nanny -restartDnsmasq=true -- -k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 State: Running Started: Sat, 29 Sep 2018 17:50:21 +0800 Ready: True Restart Count: 0 Requests: cpu: 150m memory: 20Mi Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: &lt;none&gt; Mounts: /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-dsxql (ro) sidecar: Container ID: docker://9449b13ff4e4ba1331d181bd6f309a34a4f3da1ce536c61af7a65664e3ad803a Image: 177.1.1.35/library/sidecar:1.14.8 Image ID: docker-pullable://177.1.1.35/library/sidecar@sha256:23df717980b4aa08d2da6c4cfa327f1b730d92ec9cf740959d2d5911830d82fb Port: 10054/TCP Host Port: 0/TCP Args: --v=2 --logtostderr 
--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV State: Running Started: Sat, 29 Sep 2018 17:50:22 +0800 Ready: True Restart Count: 0 Requests: cpu: 10m memory: 20Mi Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-dsxql (ro) Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: kube-dns-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: kube-dns Optional: true kube-kubecfg-file: Type: ConfigMap (a volume populated by a ConfigMap) Name: kubecfg-file Optional: true default-token-dsxql: Type: Secret (a volume populated by a Secret) SecretName: default-token-dsxql Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: CriticalAddonsOnly node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulMountVolume 3m kubelet, node3 MountVolume.SetUp succeeded for volume "kube-dns-config" Normal SuccessfulMountVolume 3m kubelet, node3 MountVolume.SetUp succeeded for volume "kube-kubecfg-file" Normal SuccessfulMountVolume 3m kubelet, node3 MountVolume.SetUp succeeded for volume "default-token-dsxql" Normal Pulled 3m kubelet, node3 Container image "177.1.1.35/library/kube-dns:1.14.8" already present on machine Normal Created 3m kubelet, node3 Created container Normal Started 3m kubelet, node3 Started container Normal Pulled 3m kubelet, node3 Container image "177.1.1.35/library/dnsmasq:1.14.8" already present on machine Normal Created 3m kubelet, node3 Created container Normal Started 3m kubelet, node3 Started container Normal Pulled 3m kubelet, node3 Container image "177.1.1.35/library/sidecar:1.14.8" already present on machine Normal Created 3m kubelet, 
node3 Created container Normal Started 3m kubelet, node3 Started container Warning Unhealthy 3m (x3 over 3m) kubelet, node3 Readiness probe failed: Get http://172.30.59.3:8081/readiness: dial tcp 172.30.59.3:8081: getsockopt: connection refused Normal Scheduled 43s default-scheduler Successfully assigned kube-dns-b7d556f59-h8xqp to node3 </code></pre> <p>My kubernetes version is:</p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"archive", BuildDate:"2018-03-29T08:38:42Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"archive", BuildDate:"2018-03-29T08:38:42Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>docker version:</p> <pre><code>docker version Client: Version: 1.13.1 API version: 1.26 Package version: &lt;unknown&gt; Go version: go1.8.3 Git commit: 774336d/1.13.1 Built: Wed Mar 7 17:06:16 2018 OS/Arch: linux/amd64 Server: Version: 1.13.1 API version: 1.26 (minimum version 1.12) Package version: &lt;unknown&gt; Go version: go1.8.3 Git commit: 774336d/1.13.1 Built: Wed Mar 7 17:06:16 2018 OS/Arch: linux/amd64 Experimental: false </code></pre> <p>My kubernetes service config: api-server</p> <pre><code>/usr/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=http://master:2379 --secure-port=443 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=AlwaysAdmit,NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota,ServiceAccount --client-ca-file=/etc/kubernetes/k8s-certs/CA/rootCA.crt --tls-private-key-file=/etc/kubernetes/k8s-certs/master/api-server.pem --tls-cert-file=/etc/kubernetes/k8s-certs/master/api-server.crt </code></pre> <p>controller-manager:</p> 
<pre><code>/usr/bin/kube-controller-manager --logtostderr=true --v=4 --master=https://master:443 --root-ca-file=/etc/kubernetes/k8s-certs/CA/rootCA.crt --service-account-private-key-file=/etc/kubrnetes/k8s-certs/master/api-server.pem --kubeconfig=/etc/kubernetes/cs_kubeconfig ### # kubernetes system config # # The following values are used to configure the kube-apiserver # # The address on the local server to listen to. #KUBE_API_ADDRESS="--insecure-bind-address=177.1.1.40" # The port on the local server to listen on. KUBE_API_PORT="--secure-port=443" # Port minions listen on KUBELET_PORT="--kubelet-port=10250" # Comma separated list of nodes in the etcd cluster KUBE_ETCD_SERVERS="--etcd-servers=http://master:2379" # Address range to use for services KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16" # default admission control policies KUBE_ADMISSION_CONTROL="--enable-admission-plugins=AlwaysAdmit,NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota,ServiceAccount" # Add your own! KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/k8s-certs/CA/rootCA.crt --tls-private-key-file=/etc/kubernetes/k8s-certs/master/api-server.pem --tls-cert-file=/etc/kubernetes/k8s-certs/master/api-server.crt" [root@master ~]# cat /etc/kubernetes/controller-manager ### # The following values are used to configure the kubernetes controller-manager # defaults from config and apiserver should be adequate #--root-ca-file=/var/run/kubernetes/CA/rootCA.crt --service-account-private-key-file=/var/run/kubernetes/controler_scheduler/cs_client.crt # Add your own! 
KUBE_CONTROLLER_MANAGER_ARGS= "--root-ca-file=/etc/kubernetes/k8s-certs/CA/rootCA.crt --service-account-private-key-file=/etc/kubernetes/k8s-certs/master/api-server.pem --kubeconfig=/etc/kubernetes/cs_kubeconfig" </code></pre> <p>scheduler:</p> <pre><code>/usr/bin/kube-scheduler --logtostderr=true --v=4 --master=https://master:443 --kubeconfig=/etc/kubernetes/cs_kubeconfig ### # kubernetes scheduler config # default config should be adequate # Add your own! KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/cs_kubeconfig" </code></pre> <p>common config:</p> <pre><code>### # kubernetes system config # # The following values are used to configure various aspects of all # kubernetes services, including # # kube-apiserver.service # kube-controller-manager.service # kube-scheduler.service # kubelet.service # kube-proxy.service # logging to stderr means we get it in the systemd journal KUBE_LOGTOSTDERR="--logtostderr=true" # journal message level, 0 is debug KUBE_LOG_LEVEL="--v=4" # Should this cluster be allowed to run privileged docker containers KUBE_ALLOW_PRIV="--allow-privileged=false" # How the controller-manager, scheduler, and proxy find the apiserver KUBE_MASTER="--master=https://master:443" </code></pre> <p>kubelet:</p> <pre><code>/usr/bin/kubelet --logtostderr=true --v=4 --cluster-dns=10.254.0.10 --cluster-domain=cluster.local --hostname-override=master --allow-privileged=false --cgroup-driver=systemd --fail-swap-on=false --pod_infra_container_image=177.1.1.35/library/pause:latest --kubeconfig=/etc/kubernetes/kp_kubeconfig ### # kubernetes kubelet (minion) config # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) #KUBELET_ADDRESS="--address=0.0.0.0" # The port for the info server to serve on # KUBELET_PORT="--port=10250" # You may leave this blank to use the actual hostname KUBELET_HOSTNAME="--hostname-override=master" #KUBELET_API_SERVER="--api-servers=http://master:8080 #KUBECONFIG="--kubeconfig=/root/.kube/config-demo" 
KUBECONFIG="--kubeconfig=/etc/kubernetes/kp_kubeconfig" KUBELET_DNS="--cluster-dns=10.254.0.10" KUBELET_DOMAIN="--cluster-domain=cluster.local" # Add your own! KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false --pod_infra_container_image=177.1.1.35/library/pause:latest" </code></pre> <p>kube-proxy:</p> <pre><code>/usr/bin/kube-proxy --logtostderr=true --v=4 --master=https://master:443 --kubeconfig=/etc/kubernetes/kp_kubeconfig ### # kubernetes proxy config # default config should be adequate # Add your own! KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kp_kubeconfig" </code></pre> <p>controller-manager &amp; scheduler kubeconfig file:</p> <pre><code>apiVersion: v1 kind: Config users: - name: controllermanger user: client-certificate: /etc/kubernetes/k8s-certs/controler_scheduler/cs_client.crt client-key: /etc/kubernetes/k8s-certs/controler_scheduler/cs_client.pem clusters: - cluster: certificate-authority: /etc/kubernetes/k8s-certs/CA/rootCA.crt server: https://master:443 name: kubernetes contexts: - context: cluster: kubernetes user: controllermanger name: cs-context current-context: cs-context </code></pre> <p>kubelet &amp; kube-proxy kubeconfig file:</p> <pre><code>apiVersion: v1 kind: Config users: - name: kubelet user: client-certificate: /etc/kubernetes/k8s-certs/kubelet_proxy/kp_client.crt client-key: /etc/kubernetes/k8s-certs/kubelet_proxy/kp_client.pem clusters: - name: kubernetes cluster: certificate-authority: /etc/kubernetes/k8s-certs/CA/rootCA.crt server: https://master:443 contexts: - context: cluster: kubernetes user: kubelet name: kp-context current-context: kp-context </code></pre> <p>APISERVER logs:</p> <pre><code>authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, invalid bearer token]] authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, Token has been invalidated]] </code></pre> <p>I tried ues the 
normal pod to access the apiserver using:</p> <pre><code>curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://10.254.0.1:443 </code></pre> <p>It succeeded and returned data.</p>
<p>Not directly related to this problem, but it produces the same error (<code>Failed to list *v1.Endpoints: Unauthorized</code>), so it may be useful for someone searching:</p> <p>If, on the master, <code>kube-apiserver.log</code> shows this error:</p> <pre><code>E1022 14:13:35.488259 1 reflector.go:125] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ServiceAccount: Get https://[::1]:443/api/v1/serviceaccounts?limit=500&amp;resourceVersion=0: dial tcp [::1]:443: connect: cannot assign requested address </code></pre> <p>then the problem is that IPv6 is disabled on the Kubernetes master even though it was enabled during install.</p> <p>Running this on the Kubernetes masters fixes the problem:</p> <pre><code>sudo sysctl net.ipv6.conf.all.disable_ipv6=0 </code></pre> <p>After this, running <code>ip -6 a</code> should show basic IPv6 support, namely the <code>lo</code> interface with <code>::1</code>.</p>
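<p>Note that a sysctl set on the command line does not survive a reboot. Assuming a systemd-based distribution, the setting can be persisted with a drop-in file (the file name below is illustrative):</p>

```
# /etc/sysctl.d/99-enable-ipv6.conf -- re-enable IPv6 on every boot
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
```

<p>It can be reloaded without rebooting via <code>sudo sysctl --system</code>.</p>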
<p>I'm trying to define a Horizontal Pod Autoscaler for two Kubernetes services.</p> <p>The Autoscaler strategy relies on 3 metrics:</p> <ol> <li>cpu </li> <li>pubsub.googleapis.com|subscription|num_undelivered_messages</li> <li>loadbalancing.googleapis.com|https|request_count</li> </ol> <p><em>CPU</em> and <em>num_undelivered_messages</em> are correctly obtained, but no matter what I do, I cannot get the <em>request_count</em> metric.</p> <p>The first service is a backend service (Service A), and the other (Service B) is an API that uses an Ingress to manage external access to the service.</p> <p>The Autoscaling strategy is based on the Google documentation: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling" rel="nofollow noreferrer">Autoscaling Deployments with External Metrics</a>.</p> <p>For service A, the following defines the metrics used for Autoscaling:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: ServiceA
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: ServiceA
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  - external:
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
      metricSelector:
        matchLabels:
          resource.labels.subscription_id: subscription_id
      targetAverageValue: 100
    type: External
</code></pre> <p>For service B, the following defines the metrics used for Autoscaling:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: ServiceB
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: ServiceB
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  - external:
      metricName: loadbalancing.googleapis.com|https|request_count
      metricSelector:
        matchLabels:
          resource.labels.forwarding_rule_name: k8s-fws-default-serviceb--3a908157de956ba7
      targetAverageValue: 100
    type: External
</code></pre> <p>As defined in the above article, the metrics server is running, and the metrics server adapter is deployed:</p> <pre><code>$ kubectl get apiservices |egrep metrics
v1beta1.custom.metrics.k8s.io     custom-metrics/custom-metrics-stackdriver-adapter   True   2h
v1beta1.external.metrics.k8s.io   custom-metrics/custom-metrics-stackdriver-adapter   True   2h
v1beta1.metrics.k8s.io            kube-system/metrics-server                          True   2h
v1beta2.custom.metrics.k8s.io     custom-metrics/custom-metrics-stackdriver-adapter   True   2h
</code></pre> <p>For service A, all metrics, CPU and num_undelivered_messages, are correctly obtained:</p> <pre><code>$ kubectl get hpa ServiceA
NAME       REFERENCE             TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
ServiceA   Deployment/ServiceA   0/100 (avg), 1%/80%   1         3         1          127m
</code></pre> <p>For service B, the HPA cannot obtain the Request Count:</p> <pre><code>$ kubectl get hpa ServiceB
NAME       REFERENCE             TARGETS                                      MINPODS   MAXPODS   REPLICAS   AGE
ServiceB   Deployment/ServiceB   &lt;unknown&gt;/100 (avg), &lt;unknown&gt;/80%   1         3         1          129m
</code></pre> <p>When accessing the Ingress, I get this warning:</p> <blockquote> <p>unable to get external metric default/loadbalancing.googleapis.com|https|request_count/&amp;LabelSelector{MatchLabels:map[string]string{resource.labels.forwarding_rule_name: k8s-fws-default-serviceb--3a908157de956ba7,},MatchExpressions:[],}: no metrics returned from external metrics API </p> </blockquote> <p>The <strong>metricSelector</strong> for the forwarding rule is correct, as confirmed when describing the ingress (only the relevant information is shown):</p> <pre><code>$ kubectl describe ingress serviceb
Annotations: ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-serviceb--3a908157de956ba7
</code></pre> <p>I've tried to use a different metric selector, for example using <em>url_map_name</em>, to no avail; I got a similar error.</p> <p>I've followed the exact guidelines in the Google documentation, and checked with a few online tutorials that refer to the
exact same process, but I haven't been able to understand what I'm missing. I'm probably lacking some configuration or some specific detail, but I cannot find it documented anywhere.</p> <p>What am I missing that explains why I'm not able to obtain the <em>loadbalancing.googleapis.com|https|request_count</em> metric?</p>
<p>Thank you very much for your detailed response.</p> <p>When using the metricSelector to select the specific <em>forwarding_rule_name</em>, we need to use the exact <em>forwarding_rule_name</em> as defined by the ingress:</p> <pre><code>metricSelector:
  matchLabels:
    resource.labels.forwarding_rule_name: k8s-fws-default-serviceb--3a908157de956ba7
</code></pre> <pre><code>$ kubectl describe ingress
Name: serviceb
...
Annotations:
  ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-serviceb--9bfb478c0886702d
  ...
  kubernetes.io/ingress.allow-http: false
  kubernetes.io/ingress.global-static-ip-name: static-ip
</code></pre> <p>The problem is that the suffix of the <em>forwarding_rule_name</em> (3a908157de956ba7) changes for every deployment, and is created dynamically on Ingress creation:</p> <ul> <li>k8s-fws-default-serviceb--<strong>3a908157de956ba7</strong></li> </ul> <p>We have a fully automated deployment using Helm and, as such, when the HPA is created, we don't know what the <em>forwarding_rule_name</em> will be.</p> <p>And it seems that <em>matchLabels</em> does not accept regular expressions, or else we would simply do something like:</p> <pre><code>metricSelector:
  matchLabels:
    resource.labels.forwarding_rule_name: k8s-fws-default-serviceb--*
</code></pre> <p>I've tried several approaches, all without success:</p> <ol> <li>Use annotations to force the <em>forwarding_rule_name</em></li> <li>Use a different matchLabel, such as <em>backend_target_name</em> </li> <li>Obtain the <em>forwarding_rule_name</em> using a command, so I can insert it later in the yaml file.</li> </ol> <p><strong>Use annotations to force the <em>forwarding_rule_name</em>:</strong></p> <p>When creating the ingress, I can use specific annotations to change the default behavior, or to define specific values, for example, in Ingress.yaml:</p> <pre><code>  annotations:
    kubernetes.io/ingress.global-static-ip-name: static-ip
</code></pre> <p>I tried to use the https-forwarding-rule annotation to force a specific "static" name, but this didn't work:</p> <pre><code>  annotations:
    ingress.kubernetes.io/https-forwarding-rule: some_name

  annotations:
    kubernetes.io/https-forwarding-rule: some_name
</code></pre> <p><strong>Use a different matchLabel, such as <em>backend_target_name</em></strong></p> <pre><code>metricSelector:
  matchLabels:
    resource.labels.backend_target_name: serviceb
</code></pre> <p>This also failed.</p> <p><strong>Obtain the <em>forwarding_rule_name</em> using a command</strong></p> <p>When executing the following command, I get the list of Forwarding Rules, but for all the clusters. And according to the <a href="https://cloud.google.com/sdk/gcloud/reference/compute/forwarding-rules/" rel="nofollow noreferrer">documentation</a>, it is not possible to filter by cluster:</p> <pre><code>gcloud compute forwarding-rules list
</code></pre> <pre><code>NAME                                         P_ADDRESS   IP_PROTOCOL   TARGET
k8s-fws-default-serviceb--4e1c268b39df8462   xx          TCP           k8s-tps-default-serviceb--4e1c268b39df8462
k8s-fws-default-serviceb--9bfb478c0886702d   xx          TCP           k8s-tps-default-serviceb--9bfb478c0886702d
</code></pre> <p>Is there any way to select the resource I need, in order to get the Requests count metric?</p>
<p>I'm trying to create a small cronjob in k8s that simply executes an HTTP POST using curl in a busybox container. The formatting is wrong, though, and I cannot figure out what I have to change.</p> <p>I've tried googling the error message I'm getting, as well as changing the formatting of the curl command in various ways, without any success.</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
#namespace: test
metadata:
  name: test-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test-cron
            image: busybox
            args:
            - /bin/sh
            - -c
            - curl "https://test-es01.test.svc.clu01.test.local:9200/logs_write/_rollover" -H 'Content-Type: application/json' -d '{"conditions: {"max_size": "5gb"}}'
          restartPolicy: OnFailure
</code></pre> <p>I then try to apply the file:</p> <p><code>kubectl -n test apply -f test-cron.yaml</code></p> <p>and get the following error:</p> <p><code>error: error parsing test-cron.yaml: error converting YAML to JSON: yaml: line 20: mapping values are not allowed in this context</code></p> <p>Does anyone know what the issue is with the formatting?</p> <p>Thanks</p>
<p>That's because your <code>curl</code> command contains a colon followed by a space (in <code>Content-Type: application/json</code>), so YAML thinks you are trying to define an object. To fix the error:</p> <ol> <li>escape every <code>"</code> with a backslash: <code>\"</code></li> <li>wrap the entire line with <code>"</code></li> </ol> <p>Therefore</p> <pre><code> - curl "https://test-es01.test.svc.clu01.test.local:9200/logs_write/_rollover" -H 'Content-Type: application/json' -d '{"conditions: {"max_size": "5gb"}}' </code></pre> <p>should be escaped to</p> <pre><code>- "curl \"https://test-es01.test.svc.clu01.test.local:9200/logs_write/_rollover\" -H 'Content-Type: application/json' -d '{\"conditions\": {\"max_size\": \"5gb\"}}'\n" </code></pre> <p>(Note that the JSON payload was also missing a closing quote after <code>conditions</code>; it is fixed above.)</p> <p><a href="http://www.yamllint.com/" rel="nofollow noreferrer">http://www.yamllint.com/</a> is a great place to track down such bugs.</p>
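<p>An alternative sketch that avoids the escaping entirely: a YAML literal block scalar (<code>|</code>) treats everything indented under it as one plain string, so colons and quotes need no special treatment (the payload below also adds the closing quote after <code>conditions</code> that the original JSON was missing):</p>

```yaml
args:
- /bin/sh
- -c
- |
  curl "https://test-es01.test.svc.clu01.test.local:9200/logs_write/_rollover" \
    -H 'Content-Type: application/json' \
    -d '{"conditions": {"max_size": "5gb"}}'
```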
<p>I had read in the documentation that liveness probes make a new pod and stop the other one. But in the Kubernetes dashboard it shows me only restarts with my TCP liveness probe. I was wondering what Kubernetes does during a liveness probe. Can I control it?</p>
<p>The kubelet uses <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">liveness probes</a> to know when to <strong>restart a Container</strong>; it does not recreate the pod.</p> <p>Probes have a number of fields that you can use to control the behavior of the checks more precisely (<code>initialDelaySeconds</code>, <code>periodSeconds</code>, <code>timeoutSeconds</code>, <code>successThreshold</code> and <code>failureThreshold</code>). You can find details about them <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">here</a>.</p> <p>For the container restart, SIGTERM is sent first, the kubelet waits for a configurable grace period, and then Kubernetes sends SIGKILL. You can control some of this behavior by tweaking the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#podspec-v1-core" rel="nofollow noreferrer"><code>terminationGracePeriodSeconds</code></a> value and/or <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">Attaching Handlers to Container Lifecycle Events</a>.</p>
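<p>For illustration, a sketch of where those knobs sit in a pod spec (the image name and the values are placeholders, not taken from the question):</p>

```yaml
spec:
  terminationGracePeriodSeconds: 30   # time between SIGTERM and SIGKILL
  containers:
  - name: app
    image: example/app:latest         # placeholder image
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15   # wait before the first probe
      periodSeconds: 10         # probe interval
      timeoutSeconds: 5         # per-probe timeout
      failureThreshold: 3       # consecutive failures before a restart
```

<p>Only the container is restarted when the probe fails <code>failureThreshold</code> times in a row; the pod object, its name and its IP stay the same, which is why the dashboard shows a growing restart count rather than a new pod.</p>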
<p>Right now I'm using Ingress-Nginx as the routing service for external traffic. However, there are only a few articles introducing how Ingress plays with JWT authentication to protect internal APIs. Can someone share some information about it?</p>
<p>As per research:</p> <blockquote> <p>Different approaches to authenticating API calls have merged in the form of OAuth 2.0 access tokens.</p> </blockquote> <blockquote> <p>These are authentication credentials passed from client to API server, and typically carried as an HTTP header.</p> </blockquote> <p>JSON Web Token (JWT), as defined by <a href="https://www.rfc-editor.org/rfc/rfc7519" rel="nofollow noreferrer">RFC 7519</a>, is one of those.</p> <p>As per the docs:</p> <blockquote> <p>JSON Web Token (JWT) is a compact, URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and/or encrypted.</p> </blockquote> <p>This mechanism can be applied using different ingress controllers, like the <a href="https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/" rel="nofollow noreferrer">kubernetes nginx-ingress</a> or the <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/configmap-and-annotations.md#auth-and-ssltls" rel="nofollow noreferrer">nginxinc ingress controller</a>.</p> <p>As per the NGINX Inc docs:</p> <blockquote> <p>The NGINX auth_request module is used to validate tokens on behalf of backend services.</p> </blockquote> <ul> <li>Requests reach the backend services only when the client has presented a valid token</li> <li>Existing backend services can be protected with access tokens, without requiring code changes</li> <li>Only the NGINX instance (not every app) need be registered with the IdP</li> <li>Behavior is consistent for every error condition, including missing or invalid tokens</li> </ul> <p>So, for NGINX acting as a reverse proxy for one or more applications, we can use the auth_request module to trigger an API call to an IdP before proxying a request
to the backend.</p> <ul> <li>In the kubernetes ingress you can find information about <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#external-authentication" rel="nofollow noreferrer">External Authentication</a></li> </ul> <blockquote> <p>To use an existing service that provides authentication the Ingress rule can be annotated with <strong>nginx.ingress.kubernetes.io/auth-url</strong> to indicate the URL where the HTTP request should be sent.</p> </blockquote> <p>Here you can find <a href="https://github.com/carlpett/nginx-subrequest-auth-jwt" rel="nofollow noreferrer">working example nginx-subrequest-auth-jwt</a></p> <blockquote> <p>This project implements a simple JWT validation endpoint meant to be used with NGINX's subrequest authentication, and specifically work well with the Kubernetes NGINX Ingress Controller external auth annotations</p> </blockquote> <blockquote> <p>It validates a JWT token passed in the Authorization header against a configured public key, and further validates that the JWT contains appropriate claims.</p> </blockquote> <p>This example is using <a href="https://github.com/veonua/jwt_auth" rel="nofollow noreferrer">PyJwt python library</a> which allows you to encode and decode JSON Web Tokens (JWT)</p> <p>Additional resource:</p> <ul> <li><a href="https://www.nginx.com/blog/authenticating-api-clients-jwt-nginx-plus/" rel="nofollow noreferrer">nginxinc controler</a></li> <li><a href="https://github.com/kubernetes/ingress-nginx/issues/1850" rel="nofollow noreferrer">kubernetes on github JWT Authentication</a></li> </ul> <p>Hope this help.</p>
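<p>As a minimal sketch of the external-auth wiring (the host name and the validator service are hypothetical; the annotation itself is the one from the kubernetes/ingress-nginx docs linked above):</p>

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: protected-api
  annotations:
    # each request is first sent to this URL; a 2xx response lets the
    # request through, anything else is returned to the client
    nginx.ingress.kubernetes.io/auth-url: http://jwt-validator.default.svc.cluster.local:8080/validate
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: internal-api
          servicePort: 80
```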
<p>I'm not able to get a custom domain record working with an openshift cluster. I've read tons of articles, StackOverflow posts, and this youtube video <a href="https://www.youtube.com/watch?v=Y7syr9d5yrg" rel="nofollow noreferrer">https://www.youtube.com/watch?v=Y7syr9d5yrg</a>. All seem to &quot;almost&quot; be usefull for me, but there is always something missing and I'm not able to get this working by myself.</p> <p>The scenario is as follows. I've got an openshift cluster deployed on an IBM Cloud account. I've registered <strong>myinnovx.com</strong>. I want to use it with an openshift app. Cluster details:</p> <pre><code>oc v3.11.0+0cbc58b kubernetes v1.11.0+d4cacc0 openshift v3.11.146 kubernetes v1.11.0+d4cacc0 </code></pre> <p>I've got an application deployed with a blue/green strategy. In the following screenshot, you can see the routes I've available.</p> <p><a href="https://i.stack.imgur.com/TcZRZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TcZRZ.png" alt="routes screenshot" /></a></p> <blockquote> <p>mobile-blue: I created this one manually pointing to my custom domain <strong>mobileoffice.myinnovx.com</strong></p> <p>mobile-office: Created with <code>oc expose service mobile-office --name=mobile-blue</code> to use external access.</p> <p>mobile-green: Openshift automatically generated a route for the green app version. (Source2Image deployment)</p> <p>mobile-blue: Openshift automatically generated a route for the blue app version. (Source2Image deployment)</p> </blockquote> <p>I've set up a two CNAME record on my DNS edit page as follows:</p> <p><a href="https://i.stack.imgur.com/JTzXV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JTzXV.png" alt="enter image description here" /></a></p> <p>In several blogs/articles, I've found that I'm supposed to point my wildcard record to the router route canonical name. But I don't have any route canonical name in my cluster. 
I don't even have an Ingress route configured.</p> <p>I'm at a loss here as to what I'm missing. Any help is greatly appreciated. This is the response I get testing my DNS:</p> <p><a href="https://i.stack.imgur.com/DpezY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DpezY.png" alt="enter image description here" /></a></p> <p>This is a current export of my DNS:</p> <pre><code>$ORIGIN myinnovx.com.
$TTL 86400
@ IN SOA ns1.softlayer.com. msalimbe.ar.ibm.com. (
        2019102317 ; Serial
        7200       ; Refresh
        600        ; Retry
        1728000    ; Expire
        3600)      ; Minimum
@ 86400 IN NS ns1.softlayer.com.
@ 86400 IN NS ns2.softlayer.com.
*.myinnovx.com 900 IN CNAME .mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
mobileoffice 900 IN CNAME mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud
mobile-test.myinnovx.com 900 IN A 169.63.244.76
</code></pre>
<p>I think you almost got it, Matias.</p> <p>The FQDN - <code>mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud</code> - resolves for me to an IP that is part of SOFTLAYER-RIPE-4-30-31 and is accessible from the Internet. So, it should be possible to configure what you want.</p> <p>That snapshot in your question of the DNS records isn't displaying the entries in full but what might be missing is a dot <code>.</code> at the end of both the "Host/Service" and "Value/Target". Something like this:</p> <pre><code>mobileoffice.myinnovx.com. CNAME 900 (15min) mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud. </code></pre>
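<p>Concretely, the corrected entries might look like the sketch below. Note the trailing dot on every fully-qualified name, so that <code>$ORIGIN</code> is not appended again, and the removal of the stray leading dot in the wildcard target (this assumes the cluster's ingress subdomain is the intended destination for the wildcard):</p>

```
mobileoffice.myinnovx.com.  900  IN  CNAME  mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
*.myinnovx.com.             900  IN  CNAME  mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
```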
<p>I have a question about sharing a volume between containers in one pod.</p> <p>Here is my yaml, pod-volume.yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    imagePullPolicy: Never
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh", "-c", "tail -f /logs/catalina.out*.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}
</code></pre> <p>Create the pod:</p> <pre><code>kubectl create -f pod-volume.yaml </code></pre> <p>Watch the pod status:</p> <pre><code>watch kubectl get pod -n default </code></pre> <p>Finally, I got this:</p> <pre><code>NAME         READY   STATUS             RESTARTS   AGE
redis-php    2/2     Running            0          15h
volume-pod   1/2     CrashLoopBackOff   5          6m49s
</code></pre> <p>Then I checked the logs of the busybox container:</p> <pre><code>kubectl logs pod/volume-pod -c busybox
tail: can't open '/logs/catalina.out*.log': No such file or directory
tail: no files
</code></pre> <p>I don't know where this went wrong. Is this about the order in which containers start in a pod? Please help me, thanks.</p>
<p>For this case:</p> <p>Tomcat's log file is named <code>catalina.$(date '+%Y-%m-%d').log</code>, and at the moment the busybox container starts, no file matching <code>/logs/catalina.out*.log</code> exists yet, so the unmatched glob is passed to <code>tail</code> literally and it fails.</p> <p>So please try:</p> <p><code>command: ["sh", "-c", "tail -f /logs/catalina.$(date '+%Y-%m-%d').log"]</code></p>
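<p>A quick local illustration of why <code>tail</code> printed the literal pattern: in POSIX sh an unmatched glob is passed through to the command unchanged, and at the moment the busybox container starts, Tomcat has not yet written any log file into the shared volume.</p>

```shell
# In an empty directory the glob has nothing to match, so the literal
# pattern is handed to the command; once a matching file exists, the
# shell expands it to the real file name.
cd "$(mktemp -d)"
before=$(sh -c 'echo catalina.*.log')   # no match -> literal 'catalina.*.log'
touch catalina.2019-10-24.log
after=$(sh -c 'echo catalina.*.log')    # match -> 'catalina.2019-10-24.log'
echo "$before"
echo "$after"
```

<p>This is also why pointing <code>tail</code> at a file name that actually exists once Tomcat starts logging makes the error go away.</p>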
<p>I have a Kubernetes cluster, and I basically have an authenticated API for deploying tasks within the cluster without having kubectl etc. set up locally. I'm aware of the client libraries etc. for the Kubernetes API; however, they don't seem to support all of the different primitives (including some custom ones like Argo). So I just wondered if there was a way I could effectively run <code>$ kubectl apply -f ./file.yml</code> within a container on the cluster? </p> <p>Obviously I can create a container with kubectl installed, but I just wondered how that could then 'connect' to the Kubernetes controller?</p>
<p>Yes, it is possible. Refer to the Halyard container: Spinnaker is deployed from a Halyard container in exactly this way. When kubectl runs inside a pod with no kubeconfig, it automatically falls back to the in-cluster configuration, i.e. the service-account token mounted at <code>/var/run/secrets/kubernetes.io/serviceaccount</code> plus the API server address from the <code>KUBERNETES_SERVICE_HOST</code>/<code>KUBERNETES_SERVICE_PORT</code> environment variables, so it "connects" on its own as long as the pod's service account has the required RBAC permissions.</p>
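<p>A hedged sketch of the remaining RBAC work (all names are placeholders, and <code>cluster-admin</code> should be narrowed down for real use):</p>

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin          # narrow this down in production
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-runner
spec:
  serviceAccountName: deployer
  containers:
  - name: kubectl
    image: bitnami/kubectl     # any image with kubectl on the PATH
    command: ["kubectl", "apply", "-f", "/config/file.yml"]
```

<p>With <code>serviceAccountName</code> set, the mounted token carries that account's permissions, and <code>kubectl apply</code> inside the container behaves just like it would from a workstation.</p>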
<p>I am currently working on a monitoring service that will monitor Kubernetes' deployments and their pods. I want to notify users when a deployment is not running the expected number of replicas and also when pods' containers restart unexpectedly. These may not be the right things to monitor, and I would greatly appreciate some feedback on what I should be monitoring. </p> <p>Anyway, the main question is about the differences between all of the <em>Statuses</em> of pods. And when I say <em>Statuses</em> I mean the Status column when running <code>kubectl get pods</code>. The statuses in question are:</p> <pre><code>- ContainerCreating
- ImagePullBackOff
- Pending
- CrashLoopBackOff
- Error
- Running
</code></pre> <p>What causes pods/containers to go into these states? <br/> For the first four statuses, are these states recoverable without user interaction? <br/> What is the threshold for a <code>CrashLoopBackOff</code>? <br/> Is <code>Running</code> the only status that has a <code>Ready Condition</code> of True? <br/> <br/> Any feedback would be greatly appreciated! <br/> <br/> Also, would it be bad practice to use <code>kubectl</code> in an automated script for monitoring purposes? For example, logging the results of <code>kubectl get pods</code> to Elasticsearch every minute?</p>
<p>I will try to explain what lies behind each of these terms:</p> <ul> <li>ContainerCreating</li> </ul> <p>Shown while we are waiting for the image to be downloaded and the container to be created by Docker or another container runtime.</p> <ul> <li>ImagePullBackOff</li> </ul> <p>Shown when there is a problem downloading the image from a registry; wrong credentials for logging in to Docker Hub, for example.</p> <ul> <li>Pending</li> </ul> <p>The pod has been accepted by the cluster, but one or more of its containers is not running yet, for example because the pod cannot be scheduled onto a node. </p> <ul> <li>CrashLoopBackOff</li> </ul> <p>This status is shown when container restarts occur too often. For example, we have a process that tries to read a file that does not exist and crashes. The container is then recreated by Kubernetes, and this repeats. </p> <ul> <li>Error</li> </ul> <p>This is pretty clear. There were errors running the container. </p> <ul> <li>Running</li> </ul> <p>All is good: the container is running and the livenessProbe is OK. </p>
<p>I'm trying my hand at IAM roles for service accounts to secure the autoscaler. But I seem to be missing something. A small clarification: I'm using terraform to create the cluster.</p> <p>I followed these documents:</p> <ul> <li><a href="https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/</a></li> <li><a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html</a></li> </ul> <p>So I've created a role other than the one for the nodes and applied the policy for the autoscaler to this new role. This part is basic, no issue there.</p> <p>I also activated the OpenID provider in terraform:</p> <pre><code>resource "aws_iam_openid_connect_provider" "example" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = []
  url             = aws_eks_cluster.eks.identity.0.oidc.0.issuer
}
</code></pre> <p>No issue there; the cluster creates itself without problems.</p> <p>Now I added the annotation to the service account for autoscaling:</p> <pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ID:role/terraform-eks-autoscaller
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system
</code></pre> <p>My problem is that it does not seem to work: the pod is not assuming the new IAM role and is still using the node role:</p> <pre><code>Failed to create AWS Manager: cannot autodiscover ASGs: AccessDenied: User: arn:aws:sts::ID:assumed-role/terraform-eks-node/i-ID is not authorized to perform: autoscaling:DescribeTags
</code></pre> <p>Does someone know what step I'm missing here?</p> <p>Thanks in advance for the help ;)</p>
<p>The answer is very simple: your OIDC provider configuration is missing the thumbprint, which is essential for IAM to work correctly. Normally, if you create the OIDC provider in the AWS console, the thumbprint gets populated automatically; however, that is not the case when you do it through terraform.</p> <p>I have been caught by this as well, so I have written a blog post about it that you can find here: <a href="https://medium.com/@marcincuber/amazon-eks-with-oidc-provider-iam-roles-for-kubernetes-services-accounts-59015d15cb0c" rel="noreferrer">https://medium.com/@marcincuber/amazon-eks-with-oidc-provider-iam-roles-for-kubernetes-services-accounts-59015d15cb0c</a></p> <p>To solve your issue, simply add the following:</p> <p>9E99A48A9960B14926BB7F3B02E22DA2B0AB7280</p> <p>The above is the hashed root CA, which doesn't change for another 10+ years and is the same across all regions. You can read how to acquire it in the blog post linked above.</p> <p>Additionally, ensure you use the latest autoscaler version matching your Kubernetes version. Also, try adding a security context with fsGroup: 65534; that is the current workaround to make OIDC work properly for some apps.</p>
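<p>In terraform terms, and reusing the resource from the question, that means populating <code>thumbprint_list</code> (the value below is the root-CA thumbprint quoted above):</p>

```hcl
resource "aws_iam_openid_connect_provider" "example" {
  client_id_list  = ["sts.amazonaws.com"]
  # Root CA thumbprint mentioned above; same across all regions.
  thumbprint_list = ["9e99a48a9960b14926bb7f3b02e22da2b0ab7280"]
  url             = aws_eks_cluster.eks.identity.0.oidc.0.issuer
}
```

<p>Newer versions of the hashicorp/tls provider can also derive the fingerprint with a <code>tls_certificate</code> data source instead of hard-coding it.</p>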
<p>I am currently implementing a CI/CD pipeline using Docker, Kubernetes, and Jenkins for my microservices deployment, and I am testing the pipeline using a public repository that I created on Dockerhub.com. When I tried the deployment using a Kubernetes Helm chart, I was able to add all my credentials in the values.yaml file, the default configuration file you get when creating a Helm chart.</p> <p><strong>Confusion</strong></p> <p>Now I have removed my Helm chart, and I am only using plain deployment and service YAML files. So how can I add my Dockerhub credentials here?</p> <ol> <li>Do I need to use an environment variable? Or do I need to create a separate YAML file for credentials and reference it in Deployment.yaml? </li> <li>If I am using the imagePullSecrets way, how can I create a separate YAML file for credentials?</li> </ol>
<p>From the Kubernetes point of view (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">Pull an Image from a Private Registry</a>), you can create a Secret and reference it in your YAML (Pod/Deployment).</p> <p>Steps:</p> <p><strong>1</strong>. Create a Secret by providing credentials on the command line:</p> <pre><code>kubectl create secret docker-registry regcred --docker-server=&lt;your-registry-server&gt; --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt;
</code></pre> <p><strong>2</strong>. Create a Pod that uses your Secret (example pod):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: &lt;your-private-image&gt;
  imagePullSecrets:
  - name: regcred
</code></pre>
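<p>To answer the second question: if you prefer a declarative file over the imperative command, the same Secret can be written as a standalone manifest (a sketch; the placeholder payload is the base64 encoding of your local <code>~/.docker/config.json</code>, which holds the registry auth):</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  # base64 of ~/.docker/config.json, e.g. produced with:
  #   base64 -w 0 ~/.docker/config.json
  .dockerconfigjson: <base64-encoded-docker-config>
```

<p>Apply it with <code>kubectl apply -f regcred.yaml</code> and reference it from <code>imagePullSecrets</code> as in the Pod example above.</p>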
<p>I use the command <code>docker container update --cpus 1 target_container</code> to update the setting. But in most cases it does not work and responds with "you must provide one or more flags when using this command", though sometimes it works.</p> <p>Does anyone know the reason?</p> <p>Some log output:</p> <pre><code>$ docker container update --cpus 0.5 target_container
you must provide one or more flags when using this command
$ docker container update --cpus 0.5 target_container
you must provide one or more flags when using this command
$ docker container update --cpus 0.5 target_container
target_container
</code></pre> <p>The docker version is </p> <pre><code>Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        2d0083d
 Built:             Fri Aug 16 14:19:38 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       2d0083d
  Built:            Thu Aug 15 15:12:41 2019
  OS/Arch:          linux/amd64
  Experimental:     false
</code></pre>
<p>The <a href="https://docs.docker.com/engine/reference/commandline/update/" rel="nofollow noreferrer">docker update</a> man page includes:</p> <pre><code>--cpus    API 1.29+
</code></pre> <p>A <code>docker version</code> can show you whether your API is equal to or above <a href="https://docs.docker.com/engine/api/v1.29/" rel="nofollow noreferrer">API 1.29</a>, which came with Docker 17.05.</p> <p>For testing, try <code>--cpus=1</code> or <code>--cpus=0.5</code>, considering the argument is supposed to be a "number of CPUs".</p> <p>As usual with commands including a hyphen: <a href="https://stackoverflow.com/a/170148/6309">don't copy-paste it</a>, copy it manually.</p>
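<p>A quick way to print just the API versions in play (the Go-template fields below are part of the standard <code>docker version</code> output):</p>

```shell
# Shows the client and server (daemon) API versions
docker version --format 'client API: {{.Client.APIVersion}}, server API: {{.Server.APIVersion}}'
```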
<p>I use k8s and deploy my application with GitLab. My cluster has, for example, the namespace <code>production</code>. If I initially install the application I run:</p> <pre><code>$ helm install --name super-app -f values.yml ./Path/To/Project/helm
</code></pre> <p>This command successfully installs the application in the namespace production, since it's specified in the Helm values:</p> <pre><code>replicaCount: 3
imagePullSecret: regcred
namespace: production
</code></pre> <p>In the project's Helm chart I set the default namespace:</p> <pre><code>replicaCount: 3
imagePullSecret: regcred
namespace: default
</code></pre> <p>When I run the following command from my GitLab CI runner:</p> <pre><code>helm upgrade -f ./values.yaml --set image.tag=master-$DOCKER_IMAGE_TAG super-app ./helm
</code></pre> <p>where the namespace production is again specified in values.yaml, I get the following result:</p> <pre><code>Release "super-app" has been upgraded.
LAST DEPLOYED: Wed Oct 23 12:15:36 2019
NAMESPACE: production
STATUS: DEPLOYED

RESOURCES:
==&gt; v1/ConfigMap
NAME       DATA  AGE
super-app  1     0s

==&gt; v1/Deployment
NAME       READY  UP-TO-DATE  AVAILABLE  AGE
super-app  0/3    3           0          0s

==&gt; v1/Pod(related)
NAME                       READY  STATUS             RESTARTS  AGE
super-app-5d6dc6c9d-25q9g  0/1    ContainerCreating  0         0s
super-app-5d6dc6c9d-tdfhh  0/1    ContainerCreating  0         0s
super-app-5d6dc6c9d-z7h96  0/1    ContainerCreating  0         0s

==&gt; v1/Secret
NAME       TYPE    DATA  AGE
super-app  Opaque  0     0s

==&gt; v1/Service
NAME       TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
super-app  LoadBalancer  10.100.115.194  &lt;pending&gt;    8080:32645/TCP  0s
</code></pre> <p>The application is now deployed in the namespace default and not in production, even though the existing application (before the helm upgrade command) was running in the production namespace. Helm just creates a new Service and application in the default namespace. The same logic works for other applications, so why does k8s ignore my namespace config?</p> <p>Thanks</p>
<p>Although it's not documented in <a href="https://helm.sh/docs/chart_best_practices/" rel="nofollow noreferrer">Chart Best Practices</a> yet, this issue (<a href="https://github.com/helm/helm/issues/5465#issuecomment-473942223" rel="nofollow noreferrer">#5465</a>) addresses namespace considerations:</p> <blockquote> <p>In general, templates should not define a namespace. This is because Helm installs objects into the namespace provided with the <code>--namespace</code> flag. By omitting this information, it also provides templates with some flexibility for post-render operations (like <code>helm template | kubectl create --namespace foo -f -</code>)</p> </blockquote> <p>As quoted, your best option is to add the <code>--namespace</code> flag to your install/upgrade commands instead of defining the namespace in your templates.</p>
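<p>Applied to the install command from the question, the namespace is passed on the command line instead of through the chart values (a sketch using the names from the question):</p>

```shell
# The release and its objects land in "production";
# the templates themselves no longer set metadata.namespace
helm install --name super-app --namespace production -f values.yml ./Path/To/Project/helm
```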
<ul> <li>We have a Kubernetes cluster where I have a service account "kube" in the namespace "monitoring", with a cluster role binding created to monitor the cluster </li> <li>We have Prometheus installed on a Linux system (on-prem) outside the cluster, installed as "root"</li> <li>When I try to connect to the k8s cluster through the HTTPS API using the <code>ca.crt</code> and user <code>token</code> (given by the Kubernetes admin), it throws multiple errors. </li> </ul> <p>Error messages:</p> <pre><code>component="discovery manager scrape" msg="Cannot create service discovery" err="unable to use specified CA cert /root/prometheus/ca.crt" type=*kubernetes.SDConfig
component="discovery manager scrape" msg="Cannot create service discovery" err="unable to use specified CA cert /root/prometheus/ca.crt" type=*kubernetes.SDConfig
</code></pre> <p>Prometheus configuration:</p> <pre><code>  - job_name: 'kubernetes-apiservers'
    scheme: https
    tls_config:
      ca_file: /root/prometheus/ca.crt
    bearer_token_file: /root/prometheus/user_token
    kubernetes_sd_configs:
    - role: endpoints
      api_server: https://example.com:1234
      bearer_token_file: /root/prometheus/user_token
      tls_config:
        ca_file: /root/prometheus/prometheus-2.12.0.linux-amd64/ca.crt
    relabel_configs:
    - source_labels: [monitoring, monitoring-sa, 6443]
      action: keep
      regex: default;kubernetes;https

  - job_name: 'kubernetes-nodes'
    scheme: https
    tls_config:
      ca_file: /root/prometheus/ca.crt
    bearer_token_file: /root/prometheus/user_token
    kubernetes_sd_configs:
    - role: node
      api_server: https://example.com:1234
      bearer_token_file: /root/prometheus/user_token
      tls_config:
        ca_file: /root/prometheus/ca.crt
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: https://example.com:1234
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics
</code></pre>
<p>The main problem you're facing is: <code>"unable to use specified CA cert /root/prometheus/ca.crt"</code></p> <p>Someone recently faced the same problem: <a href="https://github.com/prometheus/prometheus/issues/6015#issuecomment-532058465" rel="nofollow noreferrer">https://github.com/prometheus/prometheus/issues/6015#issuecomment-532058465</a></p> <p>They solved it by reinstalling with a newer version.</p> <p>Version <code>2.13.1</code> is out. Try installing the latest version; it might solve your problem too.</p>
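<p>Before reinstalling, it may also be worth verifying that the CA file itself is a valid, readable PEM certificate, since a malformed or DER-encoded file produces a similar error (a hedged suggestion, not taken from the linked issue):</p>

```shell
# Should print the certificate subject and validity dates;
# an error here means the file is not a valid PEM certificate
openssl x509 -in /root/prometheus/ca.crt -noout -subject -dates
```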
<p>I wrote a Job and I always get an init error. I have noticed that if I remove the related <code>command</code> all goes fine and I do not get any init error. My question is: how can I debug commands that need to run in the Job? I used <code>kubectl describe pod</code>, but all I can see is an exit status code 2.</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: database-import
spec:
  template:
    spec:
      initContainers:
        - name: download-dump
          image: google/cloud-sdk:alpine
          command: ##### ERROR HERE!!!
            - bash
            - -c
            - "gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz"
          volumeMounts:
            - name: application-default-credentials
              mountPath: "/secrets/"
              readOnly: true
            - name: data
              mountPath: "/data/"
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /secrets/application_default_credentials.json
      containers:
        - name: database-import
          image: postgres:9.6-alpine
          command:
            - bash
            - -c
            - "gunzip -c /data/spryker-stage.gz | psql -h postgres -Uusername -W spy_ch "
          env:
            - name: PGPASSWORD
              value: password
          volumeMounts:
            - name: data
              mountPath: "/data/"
      volumes:
        - name: application-default-credentials
          secret:
            secretName: application-default-credentials
        - name: data
          emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4
</code></pre> <p>And this is the job describe:</p> <pre><code>Name:           database-import
Namespace:      sbg
Selector:       controller-uid=a70d74a2-f596-11e9-a7fe-025000000001
Labels:         app.kubernetes.io/managed-by=tilt
Annotations:    &lt;none&gt;
Parallelism:    1
Completions:    1
Start Time:     Wed, 23 Oct 2019 15:11:40 +0200
Pods Statuses:  1 Running / 0 Succeeded / 3 Failed
Pod Template:
  Labels:  app.kubernetes.io/managed-by=tilt
           controller-uid=a70d74a2-f596-11e9-a7fe-025000000001
           job-name=database-import
  Init Containers:
   download-dump:
    Image:      google/cloud-sdk:alpine
    Port:       &lt;none&gt;
    Host Port:  &lt;none&gt;
    Command:
      /bin/bash
      -c
      gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:
</code></pre><pre><code>
/secrets/application_default_credentials.json
    Mounts:
      /data/ from data (rw)
      /secrets/ from application-default-credentials (ro)
  Containers:
   database-import:
    Image:      postgres:9.6-alpine
    Port:       &lt;none&gt;
    Host Port:  &lt;none&gt;
    Command:
      /bin/bash
      -c
      gunzip -c /data/spryker-stage.gz | psql -h postgres -Uusername -W spy_ch
    Environment:
      PGPASSWORD:  password
    Mounts:
      /data/ from data (rw)
  Volumes:
   application-default-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  application-default-credentials-464thb4k85
    Optional:    false
   data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  &lt;unset&gt;
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  2m5s  job-controller  Created pod: database-import-9tsjw
  Normal  SuccessfulCreate  119s  job-controller  Created pod: database-import-g68ld
  Normal  SuccessfulCreate  109s  job-controller  Created pod: database-import-8cx6v
  Normal  SuccessfulCreate  69s   job-controller  Created pod: database-import-tnjnh
</code></pre>
<p>The command to see the log of an init container run in a job is:</p> <pre><code>kubectl logs -f &lt;pod name&gt; -c &lt;initContainer name&gt;
</code></pre>
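<p>For the Job in the question that would be, for example (pod and container names taken from the describe output):</p>

```shell
# Logs of the failing init container in one of the job's pods
kubectl logs -f database-import-9tsjw -c download-dump

# The init container's state and exit code are also visible here
kubectl describe pod database-import-9tsjw
```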
<p>I have 3 nodes in a k8s cluster, where all are masters, i.e. I have removed the <code>node-role.kubernetes.io/master</code> taint.</p> <p>I physically removed the network cable on <code>foo2</code>, so I have </p> <pre><code>kubectl get nodes
NAME    STATUS     ROLES    AGE    VERSION
foo1    Ready      master   3d22h  v1.13.5
foo2    NotReady   master   3d22h  v1.13.5
foo3    Ready      master   3d22h  v1.13.5
</code></pre> <p>After several hours some of the pods are still in <code>STATUS = Terminating</code>, though I think they should be in <code>Terminated</code>?</p> <p>I read at <a href="https://www.bluematador.com/docs/troubleshooting/kubernetes-pod" rel="nofollow noreferrer">https://www.bluematador.com/docs/troubleshooting/kubernetes-pod</a></p> <blockquote> <p>In rare cases, it is possible for a pod to get stuck in the terminating state. This is detected by finding any pods where every container has been terminated, but the pod is still running. Usually, this is caused when a node in the cluster gets taken out of service abruptly, and the cluster scheduler and controller-manager do not clean up all of the pods on that node.
</p> <p>Solving this issue is as simple as manually deleting the pod using kubectl delete pod .</p> </blockquote> <p>The pod describe says an unreachable node will be tolerated for 5 minutes ...</p> <pre><code>Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   True
  PodScheduled      True
Volumes:
  etcd-data:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   &lt;unset&gt;
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          &lt;none&gt;
</code></pre> <p>I have tried <code>kubectl delete pod etcd-lns4g5xkcw</code>, which just hung, though forcing it <strong>does</strong> work as per this <a href="https://stackoverflow.com/questions/55935173/kubernetes-pods-stuck-with-in-terminating-state">answer</a>:</p> <pre><code>kubectl delete pod etcd-lns4g5xkcw --force=true --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "etcd-lns4g5xkcw" force deleted
</code></pre> <p>(1) Why is this happening? Shouldn't it change to terminated? </p> <p>(2) Where even is <code>STATUS = Terminating</code> coming from? At <a href="https://v1-13.docs.kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">https://v1-13.docs.kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/</a> I see only Waiting/Running/Terminated as the options.</p>
<p>Pod volume and network cleanup can take extra time while a pod is in <code>Terminating</code> status. The proper way to remove a node is to drain it first, so its pods terminate successfully within their grace period. Because you pulled out the network cable, the node changed its status to <code>NotReady</code> with pods still running on it, so those pods could not finish terminating.</p> <p>You may find this information from the k8s documentation about the <code>Terminating</code> status useful:</p> <blockquote> <p>Kubernetes (versions 1.5 or newer) will not delete Pods just because a Node is unreachable. The Pods running on an unreachable Node enter the ‘Terminating’ or ‘Unknown’ state after a timeout. Pods may also enter these states when the user attempts graceful deletion of a Pod on an unreachable Node.</p> <p>There are 3 suggested ways to remove it from the apiserver:</p> <ul> <li>The Node object is deleted (either by you, or by the Node Controller).</li> <li>The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.</li> <li>Force deletion of the Pod by the user.</li> </ul> </blockquote> <p>Here you can find more information about background deletion in the k8s <a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/" rel="nofollow noreferrer">official documentation</a>.</p>
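<p>For a planned node removal, the drain-first flow mentioned above would look something like this (a sketch; <code>foo2</code> is the node from the question, and the flag names match kubectl v1.13):</p>

```shell
# Evict pods gracefully before taking the node out of service
kubectl drain foo2 --ignore-daemonsets --delete-local-data

# If the node is gone for good, delete its Node object so the
# control plane can clean up the pod records scheduled on it
kubectl delete node foo2
```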
<p>We are currently in the process of deploying a new Spring Cloud Data Flow stream application in our AWS EKS cluster. As part of this, the pods launched by the Skipper should have the IAM roles defined in an annotation so that they can access the required AWS services. I have created the required IAM role in the AWS account and am trying to pass the role using the pod-annotations property in the deploy stream for the application.</p> <p>Sample deployment property:</p> <pre><code>deployer.datastreamdemosource.kubernetes.pod-annotations=iam.amazonaws.com/role: arn:aws:iam::XXXXXXXX:role/spring-dataflow-test-role
</code></pre> <p>The Skipper is able to launch pods in the cluster, but I can see the requested annotation is not assigned to the pod template.</p> <p>The annotation value assigned to the pod is </p> <p>Am I using the correct property to assign the required IAM role? Or how should I assign IAM roles to the pods launched by the Skipper as part of the stream? We do not want to use a global IAM role for the streams, since some pods would get additional permissions that they do not need. Has anyone successfully deployed Data Flow streaming applications in AWS EKS with IAM roles for each pod?</p>
<p>After reading documentation for hours, I found the required configuration in the link below:</p> <p><a href="https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_annotations" rel="nofollow noreferrer">https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_annotations</a></p> <p>You need to use the property definitions below under the deployment platform in your streams, which will allow the Skipper to launch the pods with the required annotations:</p> <pre><code>deployer.data-stream-demo-source.kubernetes.podAnnotations=iam.amazonaws.com/role: arn:aws:iam::XXXXXXX:role/spring-dataflow-test-role
deployer.data-stream-demo-sink.kubernetes.podAnnotations=iam.amazonaws.com/role: arn:aws:iam::XXXXXX:role/spring-dataflow-test-role
</code></pre> <p>The default pod-annotation property in the deploy stream definition is not working as expected, and I'm not sure whether it's because of a typo in pod-annotations. </p> <p>It's the same case with other annotations as well (job-annotations, service-annotations). </p> <p>For now, you can define the required annotation in the freetext tab, or you can define it at the application property level when registering apps with the Data Flow server. By the way, I am using spring-cloud-dataflow-server Version: 2.2.1.RELEASE</p>
<p>Let's say I'm hosting a pod that uses a PVC. The PVC only provisioned 10 Gi of storage, so my pod eventually crashes because it runs out of disk space. At this point, how do I add more space?</p> <p>Preferably without losing data.</p> <p>I am aware of <code>allowVolumeExpansion</code> (<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims</a>), but I don't think my cloud provider's storage classes <code>ibm.io/ibmc-file</code>, <code>ibm.io/ibmc-block</code>, and <code>vpc.block.csi.ibm.io</code> support this functionality.</p>
<p>As far as I know this is not possible on <code>ibm.io/ibmc-block</code></p> <p>Here is the documentation for <a href="https://cloud.ibm.com/docs/containers?topic=containers-block_storage" rel="nofollow noreferrer">Storing data on classic IBM Cloud Block Storage</a>.</p> <p>In section <a href="https://cloud.ibm.com/docs/containers?topic=containers-block_storage#block_predefined_storageclass" rel="nofollow noreferrer">Deciding on the block storage configuration</a> we can find:</p> <blockquote> <h2>Important:</h2> <p>Make sure to choose your storage configuration carefully to have enough capacity to store your data. After you provision a specific type of storage by using a storage class, you cannot change the size, type, IOPS, or retention policy for the storage device. If you need more storage or storage with a different configuration, you must create a new storage instance and copy the data from the old storage instance to your new one.</p> </blockquote> <p>However this seems to be possible for <code>ibm.io/ibmc-file</code>, which is mentioned <a href="https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_predefined_storageclass" rel="nofollow noreferrer">here</a>:</p> <blockquote> <h2>Important:</h2> <p>After you provision a specific type of storage by using a storage class, you cannot change the type, or retention policy for the storage device. However, you can change the size and the IOPS if you want to increase your storage capacity and performance. To change the type and retention policy for your storage, you must create a new storage instance and copy the data from the old storage instance to your new one.</p> </blockquote> <p>This is described <a href="https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_change_storage_configuration" rel="nofollow noreferrer">Changing the size and IOPS of your existing storage device</a></p>
<p>When using <code>eksctl</code> to create a Kubernetes cluster on AWS EKS, the process gets stuck waiting for the nodes to join the cluster:</p> <pre><code>nodegroup "my-cluster" has 0 node(s)
waiting for at least 3 node(s) to become ready in “my-cluster”
timed out (after 25m0s) waiting for at least 3 nodes to join the cluster and become ready in "my-cluster"
</code></pre> <p>The message is displayed, without any additional logs, until the process eventually times out. It looks like, behind the scenes, the newly created nodes are unable to communicate with the Kubernetes cluster.</p>
<p>When using an existing VPC network, you have to make sure that the VPC conforms with all EKS-specific requirements [1, 2]. The blog post by logz.io provides detailed guidance on setting up a VPC network, as well as an example AWS CloudFormation template that you can use as the basis [3].</p> <p><strong>Missing IAM policies:</strong> the AmazonEKSWorkerNodePolicy and AmazonEKS_CNI_Policy policies [4] are required by the EKS worker nodes to be able to communicate with the cluster.</p> <p>By default, eksctl automatically generates a role containing these policies. However, when you use the “attachPolicyARNs” property to attach specific policies by ARN, you have to include these policies explicitly [5]:</p> <pre><code>nodeGroups:
  - name: my-special-nodegroup
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
</code></pre> <p>[1] <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html</a></p> <p>[2] <a href="https://eksctl.io/usage/vpc-networking" rel="noreferrer">https://eksctl.io/usage/vpc-networking</a></p> <p>[3] <a href="https://logz.io/blog/amazon-eks" rel="noreferrer">https://logz.io/blog/amazon-eks</a></p> <p>[4] <a href="https://docs.aws.amazon.com/eks/latest/userguide/worker_node_IAM_role.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/worker_node_IAM_role.html</a></p> <p>[5] <a href="https://eksctl.io/usage/iam-policies/" rel="noreferrer">https://eksctl.io/usage/iam-policies/</a></p>
<p>In my team we have a single huge ConfigMap resource that holds all the important variables distributed to all our pods.</p> <p>After some time we realized that it is very hard to track where those variables are actually used. I was wondering if there is any way with Helm or <code>kubectl</code> to figure out where the values of the ConfigMap are actually used, for example a list of all the pods being supplied by the ConfigMap.</p> <p>I researched this, but somehow it seems nobody is talking about it. Am I perhaps misunderstanding the concept here?</p> <p>I am thankful for any guidance.</p>
<p>You cannot directly use kubectl <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/#supported-fields" rel="nofollow noreferrer">field selectors</a> to get this result. You can output all the pods as JSON and use <code>jq</code> to query the output. For example, this query outputs the names of all pods that mount the ConfigMap "kube-proxy" as a volume:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl get pods --all-namespaces -o json | jq '.items[] | select(.spec.volumes[].configMap.name=="kube-proxy")' | jq .metadata.name
</code></pre>
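<p>If <code>jq</code> is not available, the same filter can be reproduced in a few lines of Python over the <code>kubectl get pods -o json</code> output (a sketch; the in-line sample document below is illustrative and stands in for a live cluster):</p>

```python
import json

def pods_using_configmap(pods_json: str, configmap_name: str):
    """Return names of pods that mount the given ConfigMap as a volume."""
    result = []
    for pod in json.loads(pods_json).get("items", []):
        volumes = pod.get("spec", {}).get("volumes") or []
        if any(v.get("configMap", {}).get("name") == configmap_name for v in volumes):
            result.append(pod["metadata"]["name"])
    return result

# Illustrative sample mimicking `kubectl get pods --all-namespaces -o json`
sample = json.dumps({"items": [
    {"metadata": {"name": "kube-proxy-abc12"},
     "spec": {"volumes": [{"name": "cfg", "configMap": {"name": "kube-proxy"}}]}},
    {"metadata": {"name": "coredns-xyz34"},
     "spec": {"volumes": [{"name": "tmp", "emptyDir": {}}]}},
]})
print(pods_using_configmap(sample, "kube-proxy"))  # ['kube-proxy-abc12']
```

<p>In a real setup you would feed it the output of <code>kubectl get pods --all-namespaces -o json</code> instead of the sample document.</p>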
<p>I made a Kubernetes cluster one year ago with an external etcd cluster (3 members).</p> <p>At the time, I did not know that it was possible to make etcd internal, so I made an external cluster and connected Kubernetes to it.</p> <p>Now I am seeing that an internal cluster is a thing, and it is a cleaner solution because the etcd nodes are updated when you update your Kubernetes cluster.</p> <p>I can't find a clean solution to migrate an external etcd cluster to an internal cluster. I hope there is a solution with zero downtime. Do you know if this is possible?</p> <p>Thank you for your response and have a nice day!</p>
<p>As I understand it, you have 3 etcd cluster members, external from the Kubernetes cluster's perspective. The expected outcome is to have all three members running on the Kubernetes master nodes. There is some information left undisclosed, so I will try to explain several possible options.</p> <p>First of all, there are several reasonable ways to run the etcd process for use as the Kubernetes control-plane key-value storage:</p> <ul> <li>etcd run as a static pod, with its startup configuration in the <code>/etc/kubernetes/manifests/etcd.yaml</code> file</li> <li>etcd run as a system service defined in <code>/etc/systemd/system/etcd.service</code> or a similar file</li> <li>etcd run as a docker container configured using command line options (this solution is not really safe, unless you can make the container restart after failure or host reboot)</li> </ul> <p>For experimental purposes, you can also run etcd:</p> <ul> <li>as a simple process in Linux userspace</li> <li>as a stateful set in the Kubernetes cluster</li> <li>as an etcd cluster managed by <a href="https://github.com/coreos/etcd-operator/" rel="nofollow noreferrer">etcd-operator</a>.</li> </ul> <p>My personal recommendation is to have a 5-member etcd cluster: 3 members running as static pods on the 3 master Kubernetes nodes and two more running as static pods on external (Kubernetes-cluster-independent) hosts. In this case you will still have a quorum if you have at least one master node running, or if you lose the two external nodes for any reason.</p> <p>There are at least two ways to migrate an etcd cluster from external instances to the Kubernetes cluster master nodes. They work in the opposite direction too.</p> <h2>Migration</h2> <p>It's quite a straightforward way to migrate the cluster. During this procedure members are turned off (one at a time), moved to another host, and started again. Your cluster shouldn't have any problems while you still have quorum in the etcd cluster.
My recommendation is to have at least a 3-member, or better a 5-member, etcd cluster to make the migration safer. For bigger clusters it may be more convenient to use the other solution from my second answer.</p> <p>The process of moving an etcd member to another IP address is described in the <a href="https://github.com/etcd-io/etcd/blob/a621d807f061e1dd635033a8d6bc261461429e27/Documentation/v2/admin_guide.md#member-migration" rel="nofollow noreferrer">official documentation</a>:</p> <blockquote> <p>To migrate a member:</p> <ol> <li>Stop the member process.</li> <li>Copy the data directory of the now-idle member to the new machine.</li> <li>Update the peer URLs for the replaced member to reflect the new machine according to the runtime reconfiguration instructions.</li> <li>Start etcd on the new machine, using the same configuration and the copy of the data directory.</li> </ol> </blockquote> <p>Now let's look closer at each step:</p> <p><strong>0.1 Ensure your etcd cluster is healthy</strong> and all members are in good condition. I would also recommend checking the logs of all etcd members, just in case.</p> <p>(To successfully run the following commands please refer to step 3 for auth variables and aliases)</p> <pre><code># the last two commands only show you members specified by using the --endpoints command line option
# the following commands are supposed to be run with root privileges because certificates are not accessible by a regular user

e2 cluster-health
e3 endpoint health
e3 endpoint status
</code></pre> <p><strong>0.2 Check each etcd member configuration</strong> and find out where the etcd data-dir is located, then ensure that it will remain accessible after etcd process termination.
In most cases it's located under /var/lib/etcd on the host machine and is used directly or mounted as a volume into the etcd pod or docker container.</p> <p><strong>0.3 <a href="https://github.com/etcd-io/etcd/blob/a621d807f061e1dd635033a8d6bc261461429e27/Documentation/op-guide/recovery.md#snapshotting-the-keyspace" rel="nofollow noreferrer">Create a snapshot</a> of each etcd cluster member.</strong> It's better to have one and not need it than to need one and not have it.</p> <p><strong>1. Stop the etcd member process.</strong></p> <p>If you use <code>kubelet</code> to start etcd, as recommended <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/#setting-up-the-cluster" rel="nofollow noreferrer">here</a>, move the <code>etcd.yaml</code> file out of <code>/etc/kubernetes/manifests/</code>. Right after that the etcd Pod will be terminated by <code>kubelet</code>:</p> <pre><code>sudo mv /etc/kubernetes/manifests/etcd.yaml ~/
sudo chmod 644 ~/etcd.yaml
</code></pre> <p>If you start the etcd process <a href="https://github.com/coreos/etcd-operator/" rel="nofollow noreferrer">as a systemd service</a>, you can stop it using the following command:</p> <pre><code>sudo systemctl stop etcd-service-name.service
</code></pre> <p>In the case of a docker container you can stop it using the following commands:</p> <pre><code>docker ps -a
docker stop &lt;etcd_container_id&gt;
docker rm &lt;etcd_container_id&gt;
</code></pre> <p>If you run the etcd process from the command line, you can kill it using the following command:</p> <pre><code>kill `pgrep etcd`
</code></pre> <p><strong>2. Copy the data directory of the now-idle member to the new machine.</strong></p> <p>Not much complexity here. Archive the etcd data-dir to a file and copy it to the destination instance.
I also recommend copying the etcd manifest or systemd service configuration if you plan to run etcd on the new instance in the same way.</p> <pre><code>tar -C /var/lib -czf etcd-member-name-data.tar.gz etcd
tar -czf etcd-member-name-conf.tar.gz [etcd.yaml] [/etc/systemd/system/etcd.service] [/etc/kubernetes/manifests/etcd.conf ...]
scp etcd-member-name-data.tar.gz destination_host:~/
scp etcd-member-name-conf.tar.gz destination_host:~/
</code></pre> <p><strong>3. Update the peer URLs for the replaced member</strong> to reflect the new member's IP address, according to the runtime reconfiguration instructions.</p> <p>There are two ways to do it: by using the <code>etcd API</code> or by running the <code>etcdctl</code> utility.</p> <p>That's how the <code>etcdctl</code> way may look:<br /> <em>(replace the etcd endpoint variables with the correct etcd cluster member IP addresses)</em></p> <pre><code># all etcd cluster members should be specified
export ETCDSRV=&quot;--endpoints https://etcd.ip.addr.one:2379,https://etcd.ip.addr.two:2379,https://etcd.ip.addr.three:2379&quot;

# authentication parameters for v2 and v3 etcdctl APIs
export ETCDAUTH2=&quot;--ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key&quot;
export ETCDAUTH3=&quot;--cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key&quot;

# etcdctl API v3 alias
alias e3=&quot;ETCDCTL_API=3 etcdctl $ETCDAUTH3 $ETCDSRV&quot;
# etcdctl API v2 alias
alias e2=&quot;ETCDCTL_API=2 etcdctl $ETCDAUTH2 $ETCDSRV&quot;

# list all etcd cluster members and their IDs
e2 member list

e2 member update member_id http://new.etcd.member.ip:2380
# or
e3 member update member_id --peer-urls=&quot;https://new.etcd.member.ip:2380&quot;
</code></pre> <p>That's how the <code>etcd API</code> way may look:</p> <pre><code>export CURL_ETCD_AUTH=&quot;--cert /etc/kubernetes/pki/etcd/peer.crt --key
/etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt&quot; curl https://health.etcd.istance.ip:2379/v2/members/member_id -XPUT -H &quot;Content-Type: application/json&quot; -d '{&quot;peerURLs&quot;:[&quot;http://new.etcd.member.ip:2380&quot;]}' ${CURL_ETCD_AUTH} </code></pre> <p><strong>4. Start etcd on the new machine</strong>, using the adjusted configuration and the copy of the data directory.</p> <p>Unpack etcd data-dir on the new host:</p> <pre><code>tar -xzf etcd-member-name-data.tar.gz -C /var/lib/ </code></pre> <p>Adjust etcd startup configuration according to your needs. At this point it's easy to select another way to run etcd. Depending on your choice prepare manifest or service definition file and replace there old ip address with new. E.g.:</p> <pre><code>sed -i 's/\/10.128.0.12:/\/10.128.0.99:/g' etcd.yaml </code></pre> <p>Now it's time to start etcd by moving <code>etcd.yaml</code> to <code>/etc/kubernetes/manifests/</code>, or by running the following command (if you run <code>etcd</code> as a <code>systemd</code> service)</p> <pre><code>sudo systemctl start etcd-service-name.service </code></pre> <p><strong>5. Check updated etcd process logs and etcd cluster health</strong> to ensure that member is healthy.</p> <p>To do that you can use the following commands:</p> <pre><code>$ e2 cluster-health $ kubectl logs etct_pod_name -n kube-system $ docker logs etcd_container_id 2&gt;&amp;1 | less $ journalctl -e -u etcd_service_name </code></pre>
<p>We run our stack on the Google Cloud Platform (hosted Kubernetes, GKE) and have a Consul cluster running outside of K8s (regular GCE instances).</p> <p>Several services running in K8s use Consul, mostly for its CP K/V store and advanced locking, not so much for service discovery so far.</p> <p>We recently ran into <strong>some issues</strong> with using the <strong>Consul service discovery from within K8s</strong>. Right now our <strong>apps</strong> talk directly to the Consul servers to <strong>register and unregister services</strong> they provide.</p> <p>This is not recommended best practice; usually, Consul clients (i.e. apps using Consul) should talk to the <strong>local</strong> Consul agent. In our setup there are no local Consul agents.</p> <p><strong>My question</strong>: Should we run <em>local</em> Consul agents as <em>sidecar</em> containers in each pod?</p> <p>IMHO this would be a huge waste of resources, but it would better match Consul best practices.</p> <p>I tried searching on Google, but all posts about Consul and Kubernetes talk about running Consul in K8s, which is not what I want to do.</p>
<p>As the official Consul Helm chart and the documentation suggest, the standard approach is to run a DaemonSet of Consul clients and then use the Connect sidecar injector to inject sidecar proxies into your pods, simply by adding an annotation to the pod spec. This handles all of the boilerplate and is in line with best practices.</p> <ul> <li>Consul: Connect Sidecar; <a href="https://www.consul.io/docs/platform/k8s/connect.html" rel="nofollow noreferrer">https://www.consul.io/docs/platform/k8s/connect.html</a></li> </ul>
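<p>A minimal sketch of what this looks like from the application side (the pod name and image below are placeholders, not from the question); with the injector running in the cluster, a single annotation on the pod spec is enough:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder name
  annotations:
    # Asks the Consul Connect injector to add a sidecar proxy to this pod;
    # the app then talks to Consul locally instead of hitting the servers.
    "consul.hashicorp.com/connect-inject": "true"
spec:
  containers:
  - name: my-app
    image: my-app:latest  # placeholder image
```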
<p>I made a Kubernetes cluster one year ago with an external etcd cluster (3 members).</p> <p>At the time, I did not know that it was possible to make etcd internal, so I made an external cluster and connected Kubernetes to it.</p> <p>Now I see that an internal cluster is a thing, and it is a cleaner solution because the etcd nodes are updated when you update your Kubernetes cluster.</p> <p>I can't find a clean way to migrate an external etcd cluster to an internal cluster, ideally with zero downtime. Do you know if it is possible?</p> <p>Thank you for your response and have a nice day!</p>
<p>The second solution I've mentioned in another answer is</p> <h2>Growing and then shrinking the etcd cluster</h2> <p>The downside of this method is that the etcd quorum size is temporarily increased, and in case of several node failures the etcd cluster may <a href="https://medium.com/@zhimin.wen/testing-of-etcd-failure-936dc5833a62" rel="nofollow noreferrer">break</a>. To avoid that, you may want to remove one existing etcd cluster member before adding another one.</p> <p>Here is a brief overview of the process:</p> <ol> <li>generate certificates for all additional members using the etcd <code>ca.crt</code> and <code>ca.key</code> from an existing etcd node folder (<code>/etc/kubernetes/pki/etcd/</code>)</li> <li>add a new member to the cluster using the <code>etcdctl</code> command</li> <li>create the etcd config for the new member</li> <li>start the new etcd member using the new keys and config</li> <li>check cluster health</li> <li><strong>repeat steps 2-5</strong> until all required etcd nodes are added</li> <li>remove one excess etcd cluster member using the etcdctl command</li> <li>check cluster health</li> <li><strong>repeat steps 7-8</strong> until the desired size of the etcd cluster is achieved</li> <li>adjust all etcd.yaml files for all etcd cluster members</li> <li>adjust etcd endpoints in all kube-apiserver.yaml manifests</li> </ol> <p>Another possible sequence:</p> <ol> <li>generate certificates for all additional members using the etcd <code>ca.crt</code> and <code>ca.key</code> from an existing etcd node folder (<code>/etc/kubernetes/pki/etcd/</code>)</li> <li>remove one etcd cluster member using the etcdctl command</li> <li>add a new member to the cluster using the etcdctl command</li> <li>create the etcd config for the new member</li> <li>start the new etcd member using the new keys and config</li> <li>check cluster health</li> <li><strong>repeat steps 2-6</strong> until the required etcd configuration is achieved</li> <li>adjust all <code>etcd.yaml</code> files for all etcd cluster members</li> <li>adjust etcd endpoints in
all kube-apiserver.yaml manifests</li> </ol> <hr /> <h3>How to generate certificates:</h3> <ul> <li>using the kubeadm command (<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/#setting-up-the-cluster" rel="nofollow noreferrer">manual</a>)</li> <li>using the cfssl tool (<a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md" rel="nofollow noreferrer">Kubernetes the hard way guide</a>)</li> <li>using openssl (<a href="https://deliciousbrains.com/ssl-certificate-authority-for-local-https-development/" rel="nofollow noreferrer">link1</a>, <a href="https://jamielinux.com/docs/openssl-certificate-authority/" rel="nofollow noreferrer">link2</a>)</li> </ul> <p><strong>Note:</strong> If you have an etcd cluster, you likely have the etcd-CA certificate somewhere. Consider using it along with the etcd-CA key to generate certificates for all additional etcd members.</p> <p><strong>Note:</strong> If you choose to generate certificates manually, the usual Kubernetes certificate parameters are:</p> <ul> <li>Signature Algorithm: sha256WithRSAEncryption</li> <li>Public Key Algorithm: rsaEncryption</li> <li>RSA Public-Key: (2048 bit)</li> <li>CA certs age: 10 years</li> <li>other certs age: 1 year</li> </ul> <p>You can check the content of the certificates using the following command:</p> <pre><code>find /etc/kubernetes/pki/ -name *.crt | xargs -l bash -c 'echo $0 ; openssl x509 -in $0 -text -noout'
</code></pre> <hr /> <h3>How to <a href="https://docs.okd.io/latest/admin_guide/assembly_restore-etcd-quorum.html#removing-etcd-host_restore-etcd-quorum" rel="nofollow noreferrer">remove a member</a> from the etcd cluster</h3> <p>(Please refer to my other answer, step 3, for the variable and alias definitions.)</p> <pre><code>e3 member list
b67816d38b8e9d2, started, kube-ha-m3, https://10.128.0.12:2380, https://10.128.0.12:2379
3de72bd56f654b1c, started, kube-ha-m1, https://10.128.0.10:2380, https://10.128.0.10:2379
ac98ece88e3519b5, started, kube-etcd2, https://10.128.0.14:2380, https://10.128.0.14:2379
cfb0839e8cad4c8f, started, kube-ha-m2, https://10.128.0.11:2380, https://10.128.0.11:2379
eb9b83c725146b96, started, kube-etcd1, https://10.128.0.13:2380, https://10.128.0.13:2379
401a166c949e9584, started, kube-etcd3, https://10.128.0.15:2380, https://10.128.0.15:2379

# Let's remove this one
e2 member remove 401a166c949e9584
</code></pre> <p>The member will shut down instantly. To prevent further attempts to join the cluster, move/delete etcd.yaml from /etc/kubernetes/manifests/ or shut down the etcd service on the etcd member node.</p> <hr /> <h3>How to <a href="https://docs.okd.io/latest/admin_guide/assembly_restore-etcd-quorum.html#adding-etcd-after-restoring_restore-etcd-quorum" rel="nofollow noreferrer">add a member</a> to the etcd cluster</h3> <pre><code>e3 member add kube-etcd3 --peer-urls=&quot;https://10.128.0.16:2380&quot;
</code></pre> <p>The output shows the parameters required to start the new etcd cluster member, e.g.:</p> <pre><code>ETCD_NAME=&quot;kube-etcd3&quot;
ETCD_INITIAL_CLUSTER=&quot;kube-ha-m3=https://10.128.0.15:2380,kube-ha-m1=https://10.128.0.10:2380,kube-etcd2=https://10.128.0.14:2380,kube-ha-m2=https://10.128.0.11:2380,kube-etcd1=https://10.128.0.13:2380,kube-etcd3=https://10.128.0.16:2380&quot;
ETCD_INITIAL_ADVERTISE_PEER_URLS=&quot;https://10.128.0.16:2380&quot;
ETCD_INITIAL_CLUSTER_STATE=&quot;existing&quot;
</code></pre> <p><strong>Note:</strong> The <code>ETCD_INITIAL_CLUSTER</code> variable contains all existing etcd cluster members and also the new node. If you need to add several nodes, it should be done one node at a time.</p> <p><strong>Note:</strong> All <code>ETCD_INITIAL_*</code> variables and the corresponding command line parameters are only required for the first etcd Pod start.
After successful addition of the node to the etcd cluster, these parameters are ignored and can be removed from the startup configuration. All required information is stored in the <code>/var/lib/etcd</code> folder in the etcd database file.</p> <p>The default <code>etcd.yaml</code> manifest can be generated using the following kubeadm command:</p> <pre><code>kubeadm init phase etcd local
</code></pre> <p>It's better to move the <code>etcd.yaml</code> file out of <code>/etc/kubernetes/manifests/</code> somewhere else to make adjustments.</p> <p>Also delete the content of the <code>/var/lib/etcd</code> folder. It contains the data of a new etcd cluster, so it can't be used to add a member to an existing cluster.</p> <p>Then the manifest should be adjusted according to the member add command output (<code>--advertise-client-urls, --initial-advertise-peer-urls, --initial-cluster, --initial-cluster-state, --listen-client-urls, --listen-peer-urls</code>). E.g.:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.128.0.16:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.128.0.16:2380
    - --initial-cluster=kube-ha-m3=https://10.128.0.15:2380,kube-ha-m1=https://10.128.0.10:2380,kube-etcd2=https://10.128.0.14:2380,kube-ha-m2=https://10.128.0.11:2380,kube-etcd1=https://10.128.0.13:2380,kube-etcd3=https://10.128.0.16:2380
    - --initial-cluster-state=existing
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://10.128.0.16:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://10.128.0.16:2380
    - --name=kube-etcd3
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.3.10
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
</code></pre> <p>After saving the file, kubelet will restart the etcd pod. Check the etcd container logs to ensure it has joined the cluster.</p> <hr /> <h3>How to check cluster health</h3> <pre><code>$ e2 cluster-health
member b67816d38b8e9d2 is healthy: got healthy result from https://10.128.0.15:2379
member 3de72bd56f654b1c is healthy: got healthy result from https://10.128.0.10:2379
member ac98ece88e3519b5 is healthy: got healthy result from https://10.128.0.14:2379
member cfb0839e8cad4c8f is healthy: got healthy result from https://10.128.0.11:2379
member eb9b83c725146b96 is healthy: got healthy result from https://10.128.0.13:2379
cluster is healthy

$ e2 member list
b67816d38b8e9d2: name=kube-ha-m3 peerURLs=https://10.128.0.15:2380 clientURLs=https://10.128.0.15:2379 isLeader=true
3de72bd56f654b1c: name=kube-ha-m1 peerURLs=https://10.128.0.10:2380 clientURLs=https://10.128.0.10:2379 isLeader=false
ac98ece88e3519b5: name=kube-etcd2 peerURLs=https://10.128.0.14:2380 clientURLs=https://10.128.0.14:2379 isLeader=false
cfb0839e8cad4c8f: name=kube-ha-m2 peerURLs=https://10.128.0.11:2380 clientURLs=https://10.128.0.11:2379 isLeader=false
eb9b83c725146b96: name=kube-etcd1 peerURLs=https://10.128.0.13:2380 clientURLs=https://10.128.0.13:2379 isLeader=false

$ e3 endpoint health
# the output includes only the etcd members that are specified in the --endpoints cli option or the corresponding environment variable. I've included only three out of five members
https://10.128.0.13:2379 is healthy: successfully committed proposal: took = 2.310436ms
https://10.128.0.15:2379 is healthy: successfully committed proposal: took = 1.795723ms
https://10.128.0.14:2379 is healthy: successfully committed proposal: took = 2.41462ms

$ e3 endpoint status
# the output includes only the etcd members that are specified in the --endpoints cli option or the corresponding environment variable. I've included only three out of five members
https://10.128.0.13:2379 is healthy: successfully committed proposal: took = 2.531676ms
https://10.128.0.15:2379 is healthy: successfully committed proposal: took = 1.285312ms
https://10.128.0.14:2379 is healthy: successfully committed proposal: took = 2.266932ms
</code></pre> <hr /> <h3>How to check etcd Pod logs without using kubectl?</h3> <p>If you run the etcd member using kubelet only, you can check its log using the following command:</p> <pre><code>docker logs `docker ps -a | grep etcd | grep -v pause | awk '{print $1}' | head -n1` 2&gt;&amp;1 | less
</code></pre> <p><strong>Note:</strong> Usually, only one etcd Pod can run on the same node at the same time, because it uses a database in the host directory <code>/var/lib/etcd/</code> which cannot be shared between two pods. The etcd Pod also uses the node network interface to communicate with the etcd cluster.<br /> Of course, you can configure an etcd Pod to use a different host directory and different host ports as a workaround, but the above command assumes that only one etcd Pod is present on the node.</p>
<p>In my Kubernetes cluster, on my DB container, my persistent storage (dynamically provisioned on Digital Ocean) does not persist the storage if the pod is deleted.</p> <p>I have changed the reclaim policy of the storage from Delete to Retain but this does not make a difference.</p> <p>This is a copy of DB YAML file:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: db
  namespace: hm-namespace01
  app: app1
spec:
  type: NodePort
  ports:
  - port: 5432
  selector:
    app: app1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hm-pv-claim
  namespace: hm-namespace01
  labels:
    app: app1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: hm-namespace01
  labels:
    app: app1
spec:
  selector:
    matchLabels:
      app: app1
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: app1
        tier: backend
    spec:
      containers:
      - name: app1
        image: postgres:11
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgredb
          mountPath: /var/lib/postgresql
      volumes:
      - name: postgredb
        persistentVolumeClaim:
          claimName: hm-pv-claim
</code></pre>
<p>You must match your <code>mountPath</code> with the Postgres <code>PGDATA</code> environment variable.</p> <p>The <a href="https://hub.docker.com/_/postgres" rel="noreferrer">default value of <code>PGDATA</code></a> is <code>/var/lib/postgresql/data</code> (not <code>/var/lib/postgresql</code>).</p> <p>You need to either adjust your <code>mountPath</code> or set the <code>PGDATA</code> env to match it.</p>
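<p>A sketch of the relevant part of the Deployment with the paths aligned (the explicit <code>PGDATA</code> and the <code>subPath</code> are assumptions worth noting, not from the question: block-storage volumes often contain a <code>lost+found</code> directory at their root, and <code>initdb</code> refuses to initialize a non-empty directory):</p>

```yaml
containers:
- name: app1
  image: postgres:11
  env:
  # Make the data directory explicit so it matches the mount below.
  - name: PGDATA
    value: /var/lib/postgresql/data
  ports:
  - containerPort: 5432
  volumeMounts:
  - name: postgredb
    mountPath: /var/lib/postgresql/data
    # Mount a subdirectory of the volume so PGDATA starts out empty
    # even if the block device root contains lost+found.
    subPath: pgdata
```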
<p>I have a local custom cluster and I'm trying to run a PHP application with a MySQL database. I have exposed the MySQL service and deployment with PersistentVolumes and can access them fine through a local PHP instance, but when trying to deploy Apache to run the web server, my browser keeps rejecting the connection.</p> <p>I've tried to expose different ports in the deployment.yaml of the phpmyadmin deployment; I tried ports 80 and 8080 but they wouldn't expose correctly. Once I tried port 8088 they did deploy correctly, but now my browser rejects the connection.</p> <p>I've tried going into the individual pod and running lsof to see if Apache is listening on 80, and it is, so I'm really at a loss with this.</p> <pre class="lang-sh prettyprint-override"><code>root@ras1:/home/pi/k3s# ./k3s kubectl get endpoints
NAME                 ENDPOINTS            AGE
kubernetes           192.168.1.110:6443   16d
mysql-service        10.42.1.79:3306      51m
phpmyadmin-service   10.42.1.85:8088      2m45s
root@ras1:/home/pi/k3s# ./k3s kubectl get services
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                   PORT(S)          AGE
kubernetes           ClusterIP      10.43.0.1       &lt;none&gt;                        443/TCP          16d
mysql-service        LoadBalancer   10.43.167.186   192.168.1.110,192.168.1.111   3306:31358/TCP   49m
phpmyadmin-service   LoadBalancer   10.43.126.107   192.168.1.110,192.168.1.111   8088:31445/TCP   10s
</code></pre> <p>The cluster IP is 192.168.1.110 for node1 and 192.168.1.111 for node2 (where the deployment is running).</p> <p>Thanks for the help.</p>
<p>Managed to find a solution for this. It turns out my own ingress controller was already using ports 80 and 8080 as &quot;LoadBalancer&quot;, so I created an ingress.yaml and linked it to my phpmyadmin service, which I changed to &quot;ClusterIP&quot; rather than &quot;LoadBalancer&quot;. Now I can access my PHP app through port 80.</p>
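<p>A sketch of what such an <code>ingress.yaml</code> might look like (the service name and port are taken from the question; the host-less rule and the <code>networking.k8s.io/v1beta1</code> API version are assumptions to adapt to your cluster version):</p>

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: phpmyadmin-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          # ClusterIP service from the question; the ingress controller
          # already listening on port 80 forwards traffic to it.
          serviceName: phpmyadmin-service
          servicePort: 8088
```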
<p>I'm trying to create a quicklab on GCP to implement CI/CD with Jenkins on GKE, I created a <strong>Multibranch Pipeline</strong>. When I push the modified script to git, Jenkins kicks off a build and fails with the following error:</p> <blockquote> <pre><code> Branch indexing &gt; git rev-parse --is-inside-work-tree # timeout=10 Setting origin to https://source.developers.google.com/p/qwiklabs-gcp-gcpd-502b5f86f641/r/default &gt; git config remote.origin.url https://source.developers.google.com/p/qwiklabs-gcp-gcpd-502b5f86f641/r/default # timeout=10 Fetching origin... Fetching upstream changes from origin &gt; git --version # timeout=10 &gt; git config --get remote.origin.url # timeout=10 using GIT_ASKPASS to set credentials qwiklabs-gcp-gcpd-502b5f86f641 &gt; git fetch --tags --progress -- origin +refs/heads/*:refs/remotes/origin/* Seen branch in repository origin/master Seen branch in repository origin/new-feature Seen 2 remote branches Obtained Jenkinsfile from 4bbac0573482034d73cee17fa3de8999b9d47ced Running in Durability level: MAX_SURVIVABILITY [Pipeline] Start of Pipeline [Pipeline] podTemplate [Pipeline] { [Pipeline] node Still waiting to schedule task Waiting for next available executor Agent sample-app-f7hdx-n3wfx is provisioned from template Kubernetes Pod Template --- apiVersion: "v1" kind: "Pod" metadata: annotations: buildUrl: "http://cd-jenkins:8080/job/sample-app/job/new-feature/1/" labels: jenkins: "slave" jenkins/sample-app: "true" name: "sample-app-f7hdx-n3wfx" spec: containers: - command: - "cat" image: "gcr.io/cloud-builders/kubectl" name: "kubectl" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false - command: - "cat" image: "gcr.io/cloud-builders/gcloud" name: "gcloud" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false - command: - "cat" image: "golang:1.10" name: "golang" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: 
"workspace-volume" readOnly: false - env: - name: "JENKINS_SECRET" value: "********" - name: "JENKINS_TUNNEL" value: "cd-jenkins-agent:50000" - name: "JENKINS_AGENT_NAME" value: "sample-app-f7hdx-n3wfx" - name: "JENKINS_NAME" value: "sample-app-f7hdx-n3wfx" - name: "JENKINS_AGENT_WORKDIR" value: "/home/jenkins/agent" - name: "JENKINS_URL" value: "http://cd-jenkins:8080/" image: "jenkins/jnlp-slave:alpine" name: "jnlp" volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false nodeSelector: {} restartPolicy: "Never" serviceAccountName: "cd-jenkins" volumes: - emptyDir: {} name: "workspace-volume" Running on sample-app-f7hdx-n3wfx in /home/jenkins/agent/workspace/sample-app_new-feature [Pipeline] { [Pipeline] stage [Pipeline] { (Declarative: Checkout SCM) [Pipeline] checkout [Pipeline] } [Pipeline] // stage [Pipeline] } [Pipeline] // node [Pipeline] } [Pipeline] // podTemplate [Pipeline] End of Pipeline java.lang.IllegalStateException: Jenkins.instance is missing. Read the documentation of Jenkins.getInstanceOrNull to see what you are doing wrong. 
at jenkins.model.Jenkins.get(Jenkins.java:772) at hudson.model.Hudson.getInstance(Hudson.java:77) at com.google.jenkins.plugins.source.GoogleRobotUsernamePassword.areOnMaster(GoogleRobotUsernamePassword.java:146) at com.google.jenkins.plugins.source.GoogleRobotUsernamePassword.readObject(GoogleRobotUsernamePassword.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1975) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) at hudson.remoting.UserRequest.deserialize(UserRequest.java:290) at hudson.remoting.UserRequest.perform(UserRequest.java:189) Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from 10.8.2.12/10.8.2.12:53086 at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743) at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357) at hudson.remoting.Channel.call(Channel.java:957) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283) at com.sun.proxy.$Proxy88.addCredentials(Unknown Source) 
at org.jenkinsci.plugins.gitclient.RemoteGitImpl.addCredentials(RemoteGitImpl.java:200) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:845) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80) at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) Caused: java.lang.Error: Failed to deserialize the Callable object. 
at hudson.remoting.UserRequest.perform(UserRequest.java:195) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:97) Caused: java.io.IOException: Remote call on JNLP4-connect connection from 10.8.2.12/10.8.2.12:53086 failed at hudson.remoting.Channel.call(Channel.java:963) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283) Caused: hudson.remoting.RemotingSystemException at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:299) at com.sun.proxy.$Proxy88.addCredentials(Unknown Source) at org.jenkinsci.plugins.gitclient.RemoteGitImpl.addCredentials(RemoteGitImpl.java:200) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:845) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80) at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Finished: 
FAILURE </code></pre> </blockquote>
<p>This issue has now been fixed. Please update the Google Authenticated Source Plugin to version 0.4.</p> <p><a href="https://github.com/jenkinsci/google-source-plugin/pull/9" rel="nofollow noreferrer">https://github.com/jenkinsci/google-source-plugin/pull/9</a></p>
<h2>Steps to reproduce the issue</h2> <ol> <li>Create a webapi project template using the dotnet CLI (dotnet --version = 3.0.100)</li> <li>Build the dockerfile to create image (see dockerfile below)</li> <li>Run the docker image on local docker (works fine)</li> <li>Push this docker image to a docker registry</li> <li>Deploy this onto K8s (were using EKS on AWS)</li> </ol> <h2>Expected behavior</h2> <p>Pod starts up, and runs the webapi</p> <h2>Actual behavior</h2> <p>Pod fails to start up, with the following error in logs:</p> <pre><code>terminate called after throwing an instance of 'boost::wrapexcept&lt;util::AprException&gt;' what(): BaseException: Could not find the requested symbol. at /opt/jenkins/oa-s173/cmake-64-d/agent/native/shared/libutil/src/main/cpp/util/AprException.h:54 at static void util::AprException::check(apr_status_t) at liboneagentdotnet.so [0x90ec22] at liboneagentdotnet.so [0x90ed1b] at liboneagentdotnet.so [0x90ed7c] at liboneagentdotnet.so [0x143d61] at liboneagentdotnet.so [0x144adb] at unknown [0x7f4a024acee2] at unknown [0x7f4a024accef] at unknown [0x7f4a024a4e48] at unknown [0x7f4a024acb40] at unknown [0x7f4a024aca6c] at unknown [0x7f4a024ac77a] at unknown [0x7f4a024a3299] at unknown [0x7f4a024a2dd1] at unknown [0x7f4a024a2d4e] at unknown [0x7f4a024a2425] at unknown [0x7f4a020ac426] at unknown [0x7f4a020aca3d] at libcoreclr.so(CallDescrWorkerInternal+0x7b) [0x24dcae] at libcoreclr.so(CallDescrWorkerWithHandler(CallDescrData*, int)+0x74) [0x17dfc4] at libcoreclr.so(DispatchCallDebuggerWrapper(CallDescrData*, ContextTransitionFrame*, int)+0x86) [0x17e0b6] at libcoreclr.so(DispatchCallSimple(unsigned long*, unsigned int, unsigned long, unsigned int)+0xb6) [0x17e236] at libcoreclr.so(MethodTable::RunClassInitEx(Object**)+0x1a1) [0x115d61] at libcoreclr.so(MethodTable::DoRunClassInitThrowing()+0x2f8) [0x1162e8] at libcoreclr.so(JIT_GetSharedNonGCStaticBase_Helper+0xcf) [0x1b916f] at unknown [0x7f4a020ac2ea] at unknown [0x7f4a020ab4ec] at 
unknown [0x7f4a020aadad] at libcoreclr.so(CallDescrWorkerInternal+0x7b) [0x24dcae] at libcoreclr.so(CallDescrWorkerWithHandler(CallDescrData*, int)+0x74) [0x17dfc4] at libcoreclr.so(DispatchCallDebuggerWrapper(CallDescrData*, ContextTransitionFrame*, int)+0x86) [0x17e0b6] at libcoreclr.so(DispatchCallSimple(unsigned long*, unsigned int, unsigned long, unsigned int)+0xb6) [0x17e236] at libcoreclr.so(MethodTable::RunClassInitEx(Object**)+0x1a1) [0x115d61] at libcoreclr.so(MethodTable::DoRunClassInitThrowing()+0x2f8) [0x1162e8] at libcoreclr.so [0x29c00b] at libcoreclr.so [0x2551cc] at libcoreclr.so [0x2548c4] at libcoreclr.so [0x2546b3] at libcoreclr.so [0x29ba01] at libcoreclr.so(MethodTable::EnsureInstanceActive()+0x92) [0x115a92] at libcoreclr.so(MethodDesc::EnsureActive()+0x22) [0x109ae2] at libcoreclr.so [0x262507] at libcoreclr.so [0x26291f] at libcoreclr.so(CorHost2::ExecuteAssembly(unsigned int, char16_t const*, int, char16_t const**, unsigned int*)+0x240) [0xcae90] at libcoreclr.so(coreclr_execute_assembly+0xd3) [0xa3c63] at libhostpolicy.so [0x161c2] at libhostpolicy.so(corehost_main+0xcb) [0x16a1b] at libhostfxr.so [0x2ab0f] at libhostfxr.so [0x296a2] at libhostfxr.so(hostfxr_main_startupinfo+0x92) [0x246f2] at dotnet [0xc440] at dotnet [0xc9fd] at libc.so.6(__libc_start_main+0xea) [0x2409a] at dotnet(_start+0x28) [0x2f3f] </code></pre> <h2>Additional information</h2> <p>dockerfile is below:</p> <pre><code>FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build-env WORKDIR /build COPY /src . RUN dotnet restore RUN dotnet build RUN dotnet publish -c Release -o ./out FROM mcr.microsoft.com/dotnet/core/aspnet:3.0 WORKDIR /app COPY --from=build-env /build/out . 
ENTRYPOINT ["dotnet", "Weather.dll"] </code></pre> <p>It is unusual that this is happening only on K8s; we're testing this as part of a migration to .NET Core 3.0.</p> <p>This behavior does not change when specifying the <code>runtime</code> flag when executing <code>dotnet publish</code>.</p> <h2>Output of <code>docker version</code></h2> <pre><code>Local docker for windows version: Docker version 19.03.2, build 6a30dfc
Container Runtime Version (on K8s): docker://18.6.1
</code></pre> <h2>General Comments</h2> <p>Not totally sure what the issue is here; I would be happy with a mere understanding of the error.</p>
<p>This was caused by Dynatrace not supporting the version of dotnet core we were targeting. Disabling this resolved the issue.</p>
<p>I am trying to get the <em>first</em> pod from within a deployment (filtered by labels) with status <em>running</em>. Currently I can only achieve the following, which will just give me the first pod within a deployment (filtered by labels), and not necessarily a running pod, e.g. it might also be a terminating one:</p> <pre><code>kubectl get pod -l "app=myapp" -l "tier=webserver" -l "namespace=test" -o jsonpath="{.items[0].metadata.name}" </code></pre> <p>How is it possible to</p> <p>a) get a list of only "RUNNING" pods (couldn't find anything here or on Google) and</p> <p>b) select the first one from that list? (That's what I'm currently doing.)</p> <p>Regards</p> <p>Update: I already tried the link posted in the comments earlier with the following:</p> <pre><code>kubectl get pod -l "app=ms-bp" -l "tier=webserver" -l "namespace=test" -o json | jq -r '.items[] | select(.status.phase = "Running") | .items[0].metadata.name' </code></pre> <p>the result is 4x "null" - there are 4 running pods.</p> <p>Edit2: Resolved - see comments</p>
<p>Since kubectl 1.9 you have the option to pass a <code>--field-selector</code> argument (see <a href="https://github.com/kubernetes/kubernetes/pull/50140" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/50140</a>). E.g.</p> <pre><code>kubectl get pod -l app=yourapp --field-selector=status.phase==Running -o jsonpath=&quot;{.items[0].metadata.name}&quot;
</code></pre> <p>Note, however, that with reasonably recent <code>kubectl</code> versions, many reasons for finding a running pod are moot, because a lot of commands which expect a pod also accept a deployment or service and automatically select a corresponding pod. To quote from the documentation:</p> <pre><code>$ echo exec logs port-forward | xargs -n1 kubectl help | grep -C1 'service\|deploy\|job'

# Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default
kubectl exec deploy/mydeployment -- date

# Get output from running 'date' command from the first pod of the service myservice, using the first container by default
kubectl exec svc/myservice -- date
--
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello

# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
--
Use resource type/name such as deployment/mydeployment to select a pod.
Resource type defaults to 'pod' if omitted.
--
# Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment
kubectl port-forward deployment/mydeployment 5000 6000

# Listen on port 8443 locally, forwarding to the targetPort of the service's port named &quot;https&quot; in a pod selected by the service
kubectl port-forward service/myservice 8443:https
</code></pre> <p>(Note <code>logs</code> also accepts a service, even though an example is omitted in the help.)</p> <p>The selection algorithm favors &quot;active pods&quot;, for which a main criterion is having a status of &quot;Running&quot; (see <a href="https://github.com/kubernetes/kubectl/blob/2d67b5a3674c9c661bc03bb96cb2c0841ccee90b/pkg/polymorphichelpers/attachablepodforobject.go#L51" rel="nofollow noreferrer">source</a>).</p>
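<p>For completeness: the <code>jq</code> attempt from the question fails because it uses <code>=</code> (assignment) instead of <code>==</code>, and then re-indexes <code>.items</code> inside the per-item context. The same selection can also be done with a few lines of Python over <code>kubectl get pod -o json</code> output; the pod list below is a hypothetical stand-in for real cluster output:</p>

```python
import json

# Hypothetical `kubectl get pod -l app=myapp -o json` output
pods_json = """
{"items": [
  {"metadata": {"name": "myapp-1"}, "status": {"phase": "Pending"}},
  {"metadata": {"name": "myapp-2"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "myapp-3"}, "status": {"phase": "Running"}}
]}
"""

# Keep only running pods, then take the first one (or None if there are none)
running = [p["metadata"]["name"]
           for p in json.loads(pods_json)["items"]
           if p["status"]["phase"] == "Running"]
first_running = running[0] if running else None
print(first_running)  # myapp-2
```

<p>The equivalent working <code>jq</code> filter would be <code>[.items[] | select(.status.phase == "Running")][0].metadata.name</code>.</p>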
<p>Istio newbie here,</p> <p>I’m doing my first tests with Istio (on version 1.3.0). Most things run nicely without much effort. Where I’m having an issue is with a service that talks to Varnish to clean up the cache. This service makes an HTTP request to every pod behind a headless service, and it is failing with an HTTP 400 (Bad Request) error. The request uses the HTTP method “BAN”, which I believe is the source of the problem, since other request methods reach Varnish without problems.</p> <p>As a temporary workaround I changed the port name from http to varnish and everything started working again.</p> <p>I installed Istio using the Helm chart for 1.3.0:</p> <pre><code>helm install istio install/kubernetes/helm/istio \
  --set kiali.enabled=true \
  --set global.proxy.accessLogFile="/dev/stdout" \
  --namespace istio-system \
  --version 1.3.0
</code></pre> <p>Running on GKE 1.13.9-gke.3, and Varnish is version 6.2.</p>
<p>I was able to get it working using Istio without mTLS using the following definitions:</p> <p><strong>ConfigMap</strong></p> <p>Just allowing the pod and service CIDRs for BAN requests and expecting them to come from the Varnish service FQDN.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: varnish-configuration data: default.vcl: | vcl 4.0; import std; backend default { .host = "varnish-service"; .port = "80"; } acl purge { "localhost"; "10.x.0.0"/14; #Pod CIDR "10.x.0.0"/16; #Service CIDR } sub vcl_recv { # this code below allows PURGE from localhost and x.x.x.x if (req.method == "BAN") { if (!client.ip ~ purge) { return(synth(405,"Not allowed.")); } return (purge); } } </code></pre> <p><strong>Deployment</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: varnish spec: replicas: 1 selector: matchLabels: app: varnish template: metadata: labels: app: varnish spec: containers: - name: varnish image: varnish:6.3 ports: - containerPort: 80 name: varnish-port imagePullPolicy: IfNotPresent volumeMounts: - name: varnish-conf mountPath: /etc/varnish volumes: - name: varnish-conf configMap: name: varnish-configuration </code></pre> <p><strong>Service</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: varnish-service labels: workload: varnish spec: selector: app: varnish ports: - name: varnish-port protocol: TCP port: 80 targetPort: 80 </code></pre> <p>After deploying these, you can run a cURL enabled pod:</p> <pre><code>kubectl run bb-$RANDOM --rm -i --image=yauritux/busybox-curl --restart=Never --tty -- /bin/sh </code></pre> <p>And then, from the <code>tty</code> try curling it:</p> <pre><code>curl -v -X BAN http://varnish-service </code></pre> <p>From here, either you'll get <code>200 purged</code> or <code>405 Not allowed</code>. Either way, you've hit the Varnish pod across the mesh.</p> <p>Your issue might be related to mTLS in your cluster. 
You can check if it's enabled by issuing this command*:</p> <pre><code>istioctl authn tls-check $(kubectl get pod -l app=varnish -o jsonpath={.items..metadata.name}) varnish-service.default.svc.cluster.local
</code></pre> <p><em>*The command assumes that you're using the definitions shared in this post. If not, adjust accordingly.</em></p> <p>I tested this on GKE twice: once with open-source Istio via Helm install and once using the Google-managed Istio installation (in permissive mode).</p>
<p>We have Prometheus running in our cluster and we are able to use Grafana to watch our cluster/pod metrics. Now I want to add some custom metrics. Is there a way to do it? If so, how should I connect the code to Prometheus? I mean, if I write a Golang program using the Prometheus API and deploy it as a Docker container to K8s, how does the program know to connect with Prometheus? E.g. this program exposes data at the /metrics endpoint, but what else should I do so that Prometheus can read this data?</p> <p><a href="https://gist.github.com/sysdig-blog/3640f39a7bb1172f986d0e2080c64a75#file-prometheus-metrics-golang-go" rel="nofollow noreferrer">https://gist.github.com/sysdig-blog/3640f39a7bb1172f986d0e2080c64a75#file-prometheus-metrics-golang-go</a></p>
<p>You probably want Prometheus to discover the targets to scrape automatically. If so, you should configure <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">Kubernetes service discovery</a> in Prometheus to discover pods, nodes and services from your cluster (maybe you already did something like this, since you are already monitoring K8s metrics).</p> <p>To get your Go application monitored, you can for example add annotations to your pods or services to enable scraping of these targets and define where the metrics are available (path, port). However, this depends on your scrape configuration and relabeling. A good example can be found <a href="https://linuxacademy.com/blog/kubernetes/running-prometheus-on-kubernetes/" rel="nofollow noreferrer">here</a>.</p> <p>If you are using the <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md" rel="nofollow noreferrer">Prometheus Operator</a>, you need to define a ServiceMonitor resource instead.</p>
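<p>As a hedged sketch: if your scrape config follows the common <code>prometheus.io/*</code> annotation convention (it is a convention implemented by your relabeling rules, not something Prometheus honors out of the box), exposing the Go app could look like this (all names, image and port are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-go-app                        # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-go-app
  template:
    metadata:
      labels:
        app: my-go-app
      annotations:
        prometheus.io/scrape: "true"     # only honored if your relabel config checks it
        prometheus.io/path: "/metrics"
        prometheus.io/port: "8080"
    spec:
      containers:
      - name: app
        image: my-registry/my-go-app:1.0.0
        ports:
        - containerPort: 8080
</code></pre>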
<p>I deployed k8s on AWS with kops and created an nginx ingress (<a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">https://github.com/kubernetes/ingress-nginx</a>) with the nginx-ingress-controller image quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0.</p> <p>Everything is up and running, and I am able to access applications externally using the AWS classic load balancer created by the nginx service.</p> <p>Recently we started working with websockets. I deployed my services in k8s and am trying to access them externally.</p> <p>I created a service and ingress for my application. The ingress is now pointing to the load balancer (JSON below).</p> <p>I created a Route 53 entry in AWS, but I get the following error when I try to connect from my client application through the Chrome browser:</p> <blockquote> <p>WebSocket connection to 'wss://blockchain.aro/socket.io/?EIO=3&amp;transport=websocket' failed: Error during WebSocket handshake: Unexpected response code: 400</p> </blockquote> <p>I also tried creating an Application Load Balancer, but I could not connect to <code>wss://&lt;host&gt;</code> that way either.</p> <p>Error:</p> <blockquote> <p>WebSocket connection to 'wss://blockchain.aro/socket.io/?EIO=3&amp;transport=websocket' failed: Error during WebSocket handshake: Unexpected response code: 400</p> </blockquote> <pre><code>const config: SocketIoConfig = {
  url: 'wss://blockchain.aro',
  options: { autoConnect: false, transports: ['websocket'] }
};

Ingress:

"annotations": {
  "kubectl.kubernetes.io/last-applied-configuration":
"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"blockchain\",\"namespace\":\"adapt\"},\"spec\":{\"rules\":[{\"host\":\"blockchain.aro\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"blockchain\",\"servicePort\":8097},\"path\":\"/\"},{\"backend\":{\"serviceName\":\"blockchain\",\"servicePort\":8097},\"path\":\"/socket.io\"},{\"backend\":{\"serviceName\":\"blockchain\",\"servicePort\":8097},\"path\":\"/ws/\"}]}}],\"tls\":[{\"hosts\":[\"blockchain.aro\"],\"secretName\":\"blockchain-tls-secret\"}]}}\n", "nginx.ingress.kubernetes.io/proxy-read-timeout": "3600", "nginx.ingress.kubernetes.io/proxy-send-timeout": "3600" } </code></pre> <p>included <code>tls</code> and <code>secretname</code> and <code>rules</code> in ingress file. I tried creating <code>ApplicationLoadbalancer</code> but I could not able to connect with that too.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: core-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/websocket-services: core-service
    nginx.org/websocket-services: core-service
  name: core-ingress
spec:
  rules:
  - host: test.io
    http:
      paths:
      - backend:
          serviceName: core-service
          servicePort: 80
  tls:
  - hosts:
    - test.io
    secretName: core-prod
</code></pre>
<p>When implementing a CI/CD pipeline, I am using Docker, Kubernetes and Jenkins. I am pushing the resulting Docker image to a Docker Hub repository.</p> <p>When pulling, it is not pulling the latest image from the Docker Hub registry, so my application does not show the updated response. My <code>testdeployment.yaml</code> file looks like the following, and the repository credentials are stored in the Jenkinsfile only.</p> <pre><code>spec:
  containers:
  - name: test-kube-deployment-container
    image: "spacestudymilletech010/spacestudykubernetes:latest"
    imagePullPolicy: Always
    ports:
    - name: http
      containerPort: 8085
      protocol: TCP
</code></pre> <p>Jenkinsfile</p> <pre><code>sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:latest /var/lib/jenkins/workspace/jpipeline/pipeline'
sh 'docker login --username=&lt;my-username&gt; --password=&lt;my-password&gt;'
sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
</code></pre> <p>How can I identify why it is not pulling the latest image from Docker Hub?</p>
<p>It looks like you are repeatedly pushing <code>:latest</code> to Docker Hub?</p> <p>If so, then that's the reason for your issue. You push latest to the hub from your Jenkins job, but if the k8s node which runs the deployment pod already has a tag called <code>latest</code> stored locally, then that's what it will use.</p> <p>To clarify - <code>latest</code> is just a string, it could equally well be <code>foobar</code>. It doesn't actually <em>mean</em> that docker will pull the most recent version of the container.</p> <p>There are two takeaways from this:</p> <ul> <li>It's almost always a very bad idea to use <code>latest</code> in k8s.</li> <li>It <em>is</em> always a bad idea to push the same tag multiple times; in fact, many repos won't let you.</li> </ul> <p>With regards to using <code>latest</code> at all: this comes from personal experience. At my place of work, in the early days of our k8s adoption, we used it everywhere. That is, until one day we found that our Puppet server wasn't working any more. On investigation we found that the node had died, the pod re-spun on a different node, and a different <code>latest</code> was pulled which was a new major release, breaking things.</p> <p>It was not obvious, because <code>kubectl describe pod</code> showed the same tag name as before, so nothing, apparently, had changed.</p> <p>To add an excellent point mentioned in the comments: you have <code>ImagePullPolicy: 'Always'</code>, but if you're doing <code>kubectl apply -f mypod.yaml</code> with the same tag name, k8s has no way of knowing you've actually changed the image.</p>
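<p>A minimal sketch of the safer alternative, assuming your CI stamps a unique tag (e.g. the git SHA) on every build — rolling out then just means applying the manifest with the new tag, which changes the pod template and triggers a rollout (the SHA tag below is hypothetical):</p> <pre><code>spec:
  containers:
  - name: test-kube-deployment-container
    # unique, immutable tag per build instead of :latest
    image: "spacestudymilletech010/spacestudykubernetes:git-3f9c2ab"
    imagePullPolicy: IfNotPresent   # safe now, because the tag never changes meaning
</code></pre>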
<p>I have been deploying apps to Kubernetes for the last 2 years, and in my org all our apps (especially stateless ones) are running in Kubernetes. I still have a fundamental question, because very recently we found some issues with a few of our Python apps.</p> <p>Initially, we ran our Python apps (written in Flask and Django) using <code>python app.py</code>. Because of the GIL, Python threads cannot execute bytecode in parallel, so a single-process server effectively serves one request at a time; if that one request is CPU-heavy, the server cannot process further requests. This sometimes causes the health API to stop responding. We have observed that, at such a moment, if there is a single request doing some non-IO operation, it holds the CPU and no other request can be processed in parallel. And since it is doing only a few operations, we have observed no increase in CPU utilization either. This has an impact on how the <code>HorizontalPodAutoscaler</code> works; it is unable to scale the pods.</p> <p>Because of this, we started using <code>uWSGI</code> in our pods. uWSGI can run multiple worker processes under the hood, handle multiple requests in parallel, and automatically spin up new processes on demand. But here comes another problem: we have seen that <code>uwsgi</code> is slow to auto-scale processes to serve requests, which causes <code>HTTP 503</code> errors. Because of this, we are unable to serve a few of our APIs with 100% availability.
</p> <p>At the same time, all our other apps, written in <code>nodejs</code>, <code>java</code> and <code>golang</code>, give 100% availability.</p> <p>I am looking for the best way to run a Python app with 100% (99.99%) availability in Kubernetes, with the following:</p> <blockquote> <ol> <li>Having the health API and liveness API served by the app</li> <li>An app running in Kubernetes</li> <li>If possible, without uwsgi (a single process per pod is the fundamental Docker concept)</li> <li>If with uwsgi, are there any specific configs we can apply for the k8s env?</li> </ol> </blockquote>
<p>We use Twisted's WSGI server with 30 threads and it's been solid for our Django application. It keeps to a single-process-per-pod model, which more closely matches Kubernetes' expectations, as you mentioned. Yes, the GIL means only one of those 30 threads can be running Python code at a time, but as with most webapps, most of those threads are blocked on I/O (usually waiting for a response from the database) the vast majority of the time. Then run multiple replicas on top of that, both for redundancy and to give you true concurrency at whatever level you need (we usually use 4-8 depending on the site traffic; some big sites go up to 16).</p>
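<p>Twisted specifics aside, the one-process/many-threads model can be sketched with only the standard library (<code>wsgiref</code> is not production-grade; this just illustrates why GIL-bound threads still give useful concurrency for I/O-bound request handlers):</p>

```python
from socketserver import ThreadingMixIn
from wsgiref.simple_server import make_server, WSGIServer

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """Handle each request in its own thread, one process total."""
    daemon_threads = True

def app(environ, start_response):
    # While one thread waits on the database, other threads keep serving.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

# Port 0 picks a free port here; a real pod would bind its container port.
server = make_server("127.0.0.1", 0, app, server_class=ThreadingWSGIServer)
# server.serve_forever()  # blocks; run several replicas of the pod for redundancy
```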
<p>I am using Docker Desktop on Windows and have created a local Kubernetes cluster. I've been following this <a href="https://blog.sourcerer.io/a-kubernetes-quick-start-for-people-who-know-just-enough-about-docker-to-get-by-71c5933b4633#8787" rel="nofollow noreferrer">quick start guide</a> and am running into issues identifying my external IP. When creating a service, I'm supposed to list the "master server's IP address".</p> <p>I've identified the master node with <code>kubectl get node</code>:</p> <pre><code>NAME             STATUS   ROLES    AGE   VERSION
docker-desktop   Ready    master   11m   v1.14.7
</code></pre> <p>Then I used <code>kubectl describe node docker-desktop</code>... but an external IP is not listed anywhere.</p> <p>Where can I find this value?</p>
<p>Use the following command to see more information about the nodes:</p> <pre><code>kubectl get nodes -o wide
</code></pre> <p>or</p> <pre><code>kubectl get nodes -o json
</code></pre> <p>You'll be able to see the internal-ip and external-ip.</p> <p>PS: In my cluster, the internal-ip works as the external-ip, even though the external-ip is listed as none.</p>
<p>I'd like to use kubectl with only jsonpath to obtain the current cluster address. I've tried the following, and many permutations of it, but this doesn't seem to work.</p> <pre><code>kubectl config view -o jsonpath='{.clusters[?($.current-context)].cluster.server}' </code></pre> <p>Is this possible using only jsonpath?</p>
<p>You can use the <code>--minify</code> flag:</p> <blockquote> <p>--minify=false: Remove all information not used by current-context from the output</p> </blockquote> <p>And then filter the <code>server</code> field from the current context output:</p> <pre><code>kubectl config view --minify -o jsonpath='{.clusters[].cluster.server}' </code></pre>
<p>I am struggling trying to replace an existing container with a container from my container-register from Google Cloud Platform.</p> <p>This is my cloudbuild.yaml file.</p> <p>steps:</p> <pre><code> # This steps clone the repository into GCP - name: gcr.io/cloud-builders/git args: ['clone', 'https:///user/:password@github.com/PatrickVibild/scrappercontroller'] # This step runs the unit tests on the src - name: 'docker.io/library/python:3.7' id: Test entrypoint: /bin/sh args: - -c - 'pip install -r requirements.txt &amp;&amp; python -m pytest src/tests/**' #This step creates a container and leave it on CloudBuilds repository. - name: 'gcr.io/cloud-builders/docker' args: ['build', '-t', 'gcr.io/abiding-robot-255320/scrappercontroller', '.'] #Adds the container to Google container registry as an artefact - name: 'gcr.io/cloud-builders/docker' args: ['push', 'gcr.io/abiding-robot-255320/scrappercontroller'] #Uses the container and replaces the existing one in Kubernetes - name: 'gcr.io/cloud-builders/kubectl' args: ['set', 'image', 'deployment/scrappercontroller', 'scrappercontroller-sha256=gcr.io/abiding-robot-255320/scrappercontroller:latest'] env: - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a' - 'CLOUDSDK_CONTAINER_CLUSTER=scrapper-admin' </code></pre> <p>I have no issues building my project and I get green on all the steps, I might be missing in the last step but I cant find a way to replace my container in my cluster with a newer version of my code. </p> <p>I can create a new workload inside my existing cluster manually using the GUI and selecting a container from my container registry, but from there the step to replace that workload container with my new version from the clouds fails.</p>
<p>It's a common pitfall. According to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>Note: A Deployment’s rollout is triggered if and only if the Deployment’s Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.</p> </blockquote> <p>Your issue comes from the fact that the tag of your image doesn't change: <code>:latest</code> is deployed and you ask for deploying <code>:latest</code>. No image name change, no rollout.</p> <p>To change this, I propose using <a href="https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values#using_default_substitutions" rel="nofollow noreferrer">substitution variables</a>, especially <code>COMMIT_SHA</code> or <code>SHORT_SHA</code>. You can note this in the documentation:</p> <blockquote> <p>only available for triggered builds</p> </blockquote> <p>This means that this variable is only populated when the build is automatically triggered, not manually.</p> <p>For a manual run, you have to specify your own variable, like this:</p> <pre><code>gcloud builds submit --substitutions=COMMIT_SHA=&lt;what you want&gt;
</code></pre> <p>And update your build script like this:</p> <pre><code> # This steps clone the repository into GCP
 - name: gcr.io/cloud-builders/git
   args: ['clone', 'https:///user/:password@github.com/PatrickVibild/scrappercontroller']

 # This step runs the unit tests on the src
 - name: 'docker.io/library/python:3.7'
   id: Test
   entrypoint: /bin/sh
   args:
   - -c
   - 'pip install -r requirements.txt &amp;&amp; python -m pytest src/tests/**'

 #This step creates a container and leave it on CloudBuilds repository.
 - name: 'gcr.io/cloud-builders/docker'
   args: ['build', '-t', 'gcr.io/abiding-robot-255320/scrappercontroller:$COMMIT_SHA', '.']

 #Adds the container to Google container registry as an artefact
 - name: 'gcr.io/cloud-builders/docker'
   args: ['push', 'gcr.io/abiding-robot-255320/scrappercontroller:$COMMIT_SHA']

 #Uses the container and replaces the existing one in Kubernetes
 - name: 'gcr.io/cloud-builders/kubectl'
   args: ['set', 'image', 'deployment/scrappercontroller', 'scrappercontroller-sha256=gcr.io/abiding-robot-255320/scrappercontroller:$COMMIT_SHA']
   env:
   - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
   - 'CLOUDSDK_CONTAINER_CLUSTER=scrapper-admin'
</code></pre> <p>And during the deployment, you should see this line:</p> <pre><code>Step #2: Running: kubectl set image deployment.apps/test-deploy go111=gcr.io/&lt;projectID&gt;/postip:&lt;what you want&gt;
Step #2: deployment.apps/test-deploy image updated
</code></pre> <p>If you don't see it, it means your rollout has not been taken into account.</p>
<p>We have a Prometheus Operator on GKE and a ConfigMap with Prometheus rules, created by me. Today I figured out that I can't change/delete that ConfigMap anymore. Each time, it gets recreated in the previous state. Back in the day, it wasn't immutable.</p> <p>What can be the cause of that?</p> <ul> <li>K8S master: 1.13.7-gke.24</li> <li>K8S node: 1.13.6-gke.13</li> <li>Prometheus: v2.4.3</li> <li>Prometheus-operator: v0.24.0</li> <li>Configmap-reload: v0.0.1</li> <li>Prometheus-config-reloader: v0.24.0</li> </ul>
<p>Prometheus Operator acts on <a href="https://github.com/coreos/prometheus-operator#customresourcedefinitions" rel="nofollow noreferrer">CRDs</a>. These objects are continually watched, and any drift configuration will trigger a config-reload.</p> <p>The operator is intended to fully control the ConfigMap; if you directly edit it, the config-reloader will eventually revert your changes to match the CRD configs.</p> <p>The correct way to edit your rules is by changing the <a href="https://github.com/coreos/prometheus-operator/blob/29199d546e0bcb98b82f90b266eb437f1b1934c2/Documentation/design.md#prometheusrule" rel="nofollow noreferrer"><code>PrometheusRule</code></a> object. Your changes will be caught by the operator, which will update the ConfigMap and trigger a config-reload.</p>
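<p>For illustration, a minimal <code>PrometheusRule</code> could look like the following; the labels must match your Prometheus resource's <code>ruleSelector</code>, and the names and expression here are placeholders:</p> <pre><code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-rules                  # hypothetical name
  labels:
    prometheus: k8s               # must match the ruleSelector of your Prometheus CR
    role: alert-rules
spec:
  groups:
  - name: example.rules
    rules:
    - alert: HighErrorRate        # hypothetical rule
      expr: rate(http_requests_total{code=~"5.."}[5m]) > 1
      for: 10m
</code></pre>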
<p>Let us assume a Kubernetes cluster with one worker node (1 core and 256MB RAM); all pods will be scheduled on that worker node.</p> <p>At first I deployed a pod with config (request: cpu 0.4, limit: cpu 0.8), and it deployed successfully. As the machine has 1 core free, it can use up to 0.8 CPU.</p> <p>Can I deploy another pod with the same config? If yes, will the first pod's CPU reduce to 0.4?</p>
<p>Resource requests and limits are considered in two different places.</p> <p>Requests are only considered when scheduling a pod. If you're scheduling two pods that each request 0.4 CPU on a node that has 1.0 CPU, then they fit and could both be scheduled there (along with other pods requesting up to a total of 0.2 CPU more).</p> <p>Limits throttle CPU utilization, but are also subject to the actual physical limits of the node. If one pod tries to use 1.0 CPU but its pod spec limits it to 0.8 CPU, it will get throttled. If two of these pods run on the same hypothetical node with only 1 actual CPU, they will be subject to the kernel scheduling policy and in practice will probably each get about 0.5 CPU.</p> <p>(Memory follows the same basic model, except that if a pod exceeds its limits or if the total combined memory used on a node exceeds what's available, the pod will get OOM-killed. If your node has 256 MB RAM, and each pod has a memory request of 96 MB and limit of 192 MB, they can both get scheduled [192 MB requested memory fits] but could get killed if either one individually allocates more than 192 MB RAM [its own limit] or if the total memory used by all Kubernetes and non-Kubernetes processes on that node goes over the physical memory limit.)</p>
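<p>The scheduling side of this reduces to simple arithmetic over requests (the millicore values below are hypothetical; limits never enter this check):</p>

```python
def fits(node_capacity_m, scheduled_requests_m, new_request_m):
    # The scheduler sums the *requests* of pods already on the node;
    # limits are ignored at scheduling time.
    return sum(scheduled_requests_m) + new_request_m <= node_capacity_m

# 1-core node (1000m), first pod requests 400m (its 800m limit is irrelevant here)
print(fits(1000, [400], 400))        # a second 400m pod fits
print(fits(1000, [400, 400], 400))   # a third would stay Pending
```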
<p>I am trying to run <code>nginx</code> container as a non-root user I am trying to configure my <code>nginx.conf</code> file, which I am then putting into a k8s configmap, but when the container starts, it keeps throwing errors such as </p> <blockquote> <p>"pid" directive is not allowed here in /etc/nginx/conf.d/nginx-kibana.conf:4</p> </blockquote> <p>and for every subsequent ones</p> <p>What do i need to fix or adjust in the config, or do i need to adjust the <code>volume:</code> in the nginx-deployment.yaml</p> <p>This is my nginx.conf</p> <pre><code>error_log /tmp/error.log; # The pidfile will be written to /var/run unless this is set. pid /tmp/nginx.pid; worker_processes 1; events { worker_connections 1024; } http { # Set an array of temp and cache file options that will otherwise default to # restricted locations accessible only to root. client_body_temp_path /tmp/client_body; fastcgi_temp_path /tmp/fastcgi_temp; proxy_temp_path /tmp/proxy_temp; scgi_temp_path /tmp/scgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; index index.html index.htm index.php; default_type application/octet-stream; server { listen 8080 default_server; listen [::]:8080 default_server ipv6only=on; server_name localhost; location / { root /usr/share/nginx/html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # the UI will send the request with query string pageId to kibana to load a specific page # e.g: iframe src="/kibana/page?pageId=dashboard" # set proxy_pass to root kibana does not see the query params, so we have to go to /app/kibana location ^~ /${KIBANA_PATH}/page { proxy_pass http://127.0.0.1:5601/app/kibana/${ESC}is_args${ESC}args; proxy_http_version 1.1; proxy_set_header Upgrade ${ESC}http_upgrade; proxy_set_header 
Connection 'upgrade'; proxy_set_header Host ${ESC}host; proxy_cache_bypass ${ESC}http_upgrade; } # have to re-write URLs for kibana to strip out the /kibana part location /${KIBANA_PATH}/ { proxy_pass http://127.0.0.1:5601/; proxy_http_version 1.1; proxy_set_header Upgrade ${ESC}http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host ${ESC}host; proxy_cache_bypass ${ESC}http_upgrade; } } } </code></pre> <p>This is how i mount the configmap onto the container</p> <pre><code> securityContext: fsGroup: 2000 runAsUser: 2000 volumes: - name: nginxconfigmap-volume configMap: name: my-nginx-configmap containers: - name: nginx image: nginx:stable ports: - containerPort: 8080 name: http protocol: TCP livenessProbe: httpGet: scheme: HTTP path: / port: 8080 initialDelaySeconds: 5 periodSeconds: 10 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: httpGet: scheme: HTTP path: / port: 8080 initialDelaySeconds: 5 periodSeconds: 10 timeoutSeconds: 5 successThreshold: 2 failureThreshold: 6 volumeMounts: - mountPath: /etc/nginx/conf.d name: nginxconfigmap-volume </code></pre>
<p>Problem solved by fixing two things. First, I had to name my conf file <code>nginx.conf</code> instead of <code>nginx-kibana.conf</code>.</p> <p>Second, I had to set the mount path to <code>mountPath: /etc/nginx</code>.</p>
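<p>One caveat with mounting the ConfigMap at <code>/etc/nginx</code>: the volume shadows the whole directory, so stock files such as <code>mime.types</code> (which this nginx.conf includes) disappear. A <code>subPath</code> mount, sketched below against the deployment from the question, overlays only the single file:</p> <pre><code>volumeMounts:
- name: nginxconfigmap-volume
  mountPath: /etc/nginx/nginx.conf
  subPath: nginx.conf   # the ConfigMap key; the rest of /etc/nginx stays intact
</code></pre>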
<p>I have a local Kubernetes cluster inside Minikube on my Mac. I deploy a Minio standalone server as a single container with resource limits specified. When I upload a file which is larger than the container memory limit, the container is terminated with reason <code>OOMKilled</code>. On Ubuntu with the same setup, the file is uploaded with no errors.</p> <p>Minikube is running in VirtualBox and configured to use 4GB of memory. I also use Heapster and Metrics Server to check memory usage over time.</p> <pre><code>$ minikube config set memory 4096
$ minikube addons enable heapster
$ minikube addons enable metrics-server
$ minikube start
</code></pre> <p><a href="https://i.stack.imgur.com/EM9TS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EM9TS.png" alt="minikube in vb"></a></p> <p>I use a slightly modified version of the <a href="https://docs.min.io/docs/deploy-minio-on-kubernetes.html#minio-standalone-server-deployment" rel="nofollow noreferrer">Kubernetes configuration for the Minio standalone setup provided in the Minio documentation</a>. I create a PV and PVC for storage, and a Deployment and Service for the Minio server. Container configuration:</p> <ul> <li>Using the official Minio Docker image</li> <li>Using an initContainer to create a bucket</li> </ul> <p>Resource limits are set to have a Guaranteed QoS.
Container is limited to 256 MB of memory and 0.5 CPU.</p> <pre><code>resources: requests: cpu: '500m' memory: '256Mi' limits: cpu: '500m' memory: '256Mi' </code></pre> <p>Full <code>videos-storage.yaml</code></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: videos-storage-pv labels: software: minio spec: capacity: storage: 5Gi accessModes: - ReadWriteOnce hostPath: path: /data/storage-videos/ --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: videos-storage-pv-claim spec: storageClassName: '' accessModes: - ReadWriteOnce resources: requests: storage: 2Gi selector: matchLabels: software: minio --- apiVersion: apps/v1 kind: Deployment metadata: name: videos-storage-deployment spec: selector: matchLabels: app: videos-storage template: metadata: labels: app: videos-storage spec: initContainers: - name: init-minio-buckets image: minio/minio volumeMounts: - name: data mountPath: /data/storage-videos command: ['mkdir', '-p', '/data/storage-videos/videos'] containers: - name: minio image: minio/minio volumeMounts: - name: data mountPath: /data/storage-videos args: - server - /data/storage-videos env: - name: MINIO_ACCESS_KEY value: 'minio' - name: MINIO_SECRET_KEY value: 'minio123' ports: - containerPort: 9000 resources: requests: cpu: '500m' memory: '256Mi' limits: cpu: '500m' memory: '256Mi' readinessProbe: httpGet: path: /minio/health/ready port: 9000 initialDelaySeconds: 5 periodSeconds: 20 livenessProbe: httpGet: path: /minio/health/live port: 9000 initialDelaySeconds: 5 periodSeconds: 20 volumes: - name: data persistentVolumeClaim: claimName: videos-storage-pv-claim --- apiVersion: v1 kind: Service metadata: name: videos-storage-service spec: type: LoadBalancer ports: - port: 9000 targetPort: 9000 protocol: TCP selector: app: videos-storage </code></pre> <p>I deploy the config to the cluster:</p> <pre><code>$ kubectl apply -f videos-storage.yaml </code></pre> <p>and access Minio dashboard using Minikube, the following command opens it 
in browser:</p> <pre><code>$ minikube service videos-storage-service </code></pre> <p>Then I select a <code>videos</code> bucket and upload a 1 GB file. After about ~250 MB are uploaded, I get an error in Minio dashboard. Playing with limits and analyzing Heapster graphs I can see a correlation between file size and memory usage. Container uses exactly the same amount of memory equal to file size, which is weird to me. My impression is that file won't be stored directly in memory during file upload.</p> <p><a href="https://i.stack.imgur.com/pKIKe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pKIKe.png" alt="minio dashboard error"></a> </p> <p>Describe pod</p> <pre><code>Name: videos-storage-deployment-6cd94b697-p4v8n Namespace: default Priority: 0 Node: minikube/10.0.2.15 Start Time: Mon, 22 Jul 2019 11:05:53 +0300 Labels: app=videos-storage pod-template-hash=6cd94b697 Annotations: &lt;none&gt; Status: Running IP: 172.17.0.8 Controlled By: ReplicaSet/videos-storage-deployment-6cd94b697 Init Containers: init-minio-buckets: Container ID: docker://09d75629a39ad1dc0dbdd6fc0a6a6b7970285d0a349bccee2b0ed2851738a9c1 Image: minio/minio Image ID: docker-pullable://minio/minio@sha256:456074355bc2148c0a95d9c18e1840bb86f57fa6eac83cc37fce0212a7dae080 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: mkdir -p /data/storage-videos/videos State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 22 Jul 2019 11:08:45 +0300 Finished: Mon, 22 Jul 2019 11:08:45 +0300 Ready: True Restart Count: 1 Environment: &lt;none&gt; Mounts: /data/storage-videos from data (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-zgs9q (ro) Containers: minio: Container ID: docker://1706139f0cc7852119d245e3cfe31d967eb9e9537096a803e020ffcd3becdb14 Image: minio/minio Image ID: docker-pullable://minio/minio@sha256:456074355bc2148c0a95d9c18e1840bb86f57fa6eac83cc37fce0212a7dae080 Port: 9000/TCP Host Port: 0/TCP Args: server /data/storage-videos State: Running 
Started: Mon, 22 Jul 2019 11:08:48 +0300 Last State: Terminated Reason: OOMKilled Exit Code: 0 Started: Mon, 22 Jul 2019 11:06:06 +0300 Finished: Mon, 22 Jul 2019 11:08:42 +0300 Ready: True Restart Count: 1 Limits: cpu: 500m memory: 256Mi Requests: cpu: 500m memory: 256Mi Liveness: http-get http://:9000/minio/health/live delay=5s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:9000/minio/health/ready delay=5s timeout=1s period=20s #success=1 #failure=3 Environment: MINIO_ACCESS_KEY: minio MINIO_SECRET_KEY: minio123 Mounts: /data/storage-videos from data (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-zgs9q (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: videos-storage-pv-claim ReadOnly: false default-token-zgs9q: Type: Secret (a volume populated by a Secret) SecretName: default-token-zgs9q Optional: false QoS Class: Guaranteed Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: &lt;none&gt; </code></pre> <p>Minikube logs in <code>dmesg</code>:</p> <pre><code>[ +3.529889] minio invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=-998 [ +0.000006] CPU: 1 PID: 8026 Comm: minio Tainted: G O 4.15.0 #1 [ +0.000001] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006 [ +0.000000] Call Trace: [ +0.000055] dump_stack+0x5c/0x82 [ +0.000010] dump_header+0x66/0x281 [ +0.000006] oom_kill_process+0x223/0x430 [ +0.000002] out_of_memory+0x28d/0x490 [ +0.000003] mem_cgroup_out_of_memory+0x36/0x50 [ +0.000004] mem_cgroup_oom_synchronize+0x2d3/0x310 [ +0.000002] ? get_mem_cgroup_from_mm+0x90/0x90 [ +0.000002] pagefault_out_of_memory+0x1f/0x4f [ +0.000002] __do_page_fault+0x4a3/0x4b0 [ +0.000003] ? 
page_fault+0x36/0x60 [ +0.000002] page_fault+0x4c/0x60 [ +0.000002] RIP: 0033:0x427649 [ +0.000001] RSP: 002b:000000c0002eaae8 EFLAGS: 00010246 [ +0.000154] Memory cgroup out of memory: Kill process 7734 (pause) score 0 or sacrifice child [ +0.000013] Killed process 7734 (pause) total-vm:1024kB, anon-rss:4kB, file-rss:0kB, shmem-rss:0kB </code></pre> <p>Initially the problem occurred when I did not have any resource limits. When I tried to upload a big file, the Minio container would use all of the memory available on the node, and because there was no memory left, Kubernetes services became unresponsive and it started killing other containers, like the <code>apiserver</code>; the file did not upload either. After I added resource limits to the Minio container, the cluster itself became stable, but the Minio container still dies.</p> <p>I would like Minio to function within the provided limits and not consume memory equal to the file size. I am not sure on which side the problem lies, whether it is Minio, Minikube, VirtualBox, Docker or Kubernetes. I am also not familiar with how memory works in this setup. As I said, the same setup worked fine on Ubuntu 18.04.</p> <p>Versions:</p> <ul> <li>MacOS Mojave 10.14.5, 2.6 GHz Intel Core i5, 8 GB 1600 MHz DDR3</li> <li>Minikube v1.2.0</li> <li>VirtualBox 5.2.18 r124319</li> <li>kubectl v1.15.0</li> <li>Docker version 18.09.2, build 6247962</li> <li>Minio 2019-07-17T22:54:12Z (<a href="https://hub.docker.com/r/minio/minio" rel="nofollow noreferrer">https://hub.docker.com/r/minio/minio</a>)</li> </ul>
<p>I also <a href="https://github.com/minio/minio/issues/7962" rel="nofollow noreferrer">posted this issue to the Minio repository</a> and got a response that <code>256mb</code> is too low. After increasing the memory limit to <code>512mb</code>, the upload worked fine.</p>
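<p>For reference, the only change needed was the memory values in the container's <code>resources</code> block — a sketch of the adjusted fragment of <code>videos-storage.yaml</code> (512Mi is what worked for this case; very large uploads may need more headroom):</p>

```yaml
# Fragment of videos-storage.yaml — only the memory values change.
resources:
  requests:
    cpu: '500m'
    memory: '512Mi'
  limits:
    cpu: '500m'
    memory: '512Mi'
```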
<p>I have a Spring Boot OAuth2 server that uses a JDBC implementation. It is configured as an authorization server with @EnableAuthorizationServer.</p> <p>I'd like to scale that application horizontally, but it doesn't seem to work properly. </p> <p>I can connect only if I have one instance (pod) of the server. </p> <p>I use the authorization_code grant from another client service to get the token. First the client service redirects the user to the oauth2 server's login form; then, once the user is authenticated, he is supposed to be redirected to the client-service with a code attached to the url; finally, the client uses that code to request the oauth2 server again and obtain the token. </p> <p>The user is not redirected at all if I have several instances of the oauth2-server. With one instance it works well.</p> <p>When I check the logs of the two instances in real time, I can see that the authentication works on one of them. I don't have any specific error; the user is just not redirected.</p> <p>Is there a way to configure the oauth2-server to be stateless, or another way to fix that issue? </p> <p>Here is my configuration, the AuthorizationServerConfigurerAdapter implementation.
</p> <pre><code>@Configuration public class AuthorizationServerConfiguration extends AuthorizationServerConfigurerAdapter { @Bean @ConfigurationProperties(prefix = "spring.datasource") public DataSource oauthDataSource() { return DataSourceBuilder.create().build(); } @Autowired @Qualifier("authenticationManagerBean") private AuthenticationManager authenticationManager; @Bean public JdbcClientDetailsService clientDetailsSrv() { return new JdbcClientDetailsService(oauthDataSource()); } @Bean public TokenStore tokenStore() { return new JdbcTokenStore(oauthDataSource()); } @Bean public ApprovalStore approvalStore() { return new JdbcApprovalStore(oauthDataSource()); } @Bean public AuthorizationCodeServices authorizationCodeServices() { return new JdbcAuthorizationCodeServices(oauthDataSource()); } @Bean public TokenEnhancer tokenEnhancer() { return new CustomTokenEnhancer(); } @Bean @Primary public AuthorizationServerTokenServices tokenServices() { DefaultTokenServices tokenServices = new DefaultTokenServices(); tokenServices.setTokenStore(tokenStore()); tokenServices.setTokenEnhancer(tokenEnhancer()); return tokenServices; } @Override public void configure(ClientDetailsServiceConfigurer clients) throws Exception { clients.withClientDetails(clientDetailsSrv()); } @Override public void configure(AuthorizationServerSecurityConfigurer oauthServer) { oauthServer .tokenKeyAccess("permitAll()") .checkTokenAccess("isAuthenticated()") .allowFormAuthenticationForClients(); } @Override public void configure(AuthorizationServerEndpointsConfigurer endpoints) { endpoints .authenticationManager(authenticationManager) .approvalStore(approvalStore()) //.approvalStoreDisabled() .authorizationCodeServices(authorizationCodeServices()) .tokenStore(tokenStore()) .tokenEnhancer(tokenEnhancer()); } } </code></pre> <p>The main class </p> <pre><code>@SpringBootApplication @EnableResourceServer @EnableAuthorizationServer @EnableConfigurationProperties @EnableFeignClients("com.oauth2.proxies") 
public class AuthorizationServerApplication { public static void main(String[] args) { SpringApplication.run(AuthorizationServerApplication.class, args); } } </code></pre> <p>The Web Security Configuration</p> <pre><code>@Configuration @Order(1) public class WebSecurityConfiguration extends WebSecurityConfigurerAdapter { @Bean @Override public UserDetailsService userDetailsServiceBean() throws Exception { return new JdbcUserDetails(); } @Override @Bean public AuthenticationManager authenticationManagerBean() throws Exception { return super.authenticationManagerBean(); } @Override protected void configure(HttpSecurity http) throws Exception { // @formatter:off http.requestMatchers() .antMatchers("/", "/login", "/login.do", "/registration", "/registration/confirm/**", "/registration/resendToken", "/password/forgot", "/password/change", "/password/change/**", "/oauth/authorize**") .and() .authorizeRequests()//autorise les requetes .antMatchers( "/", "/login", "/login.do", "/registration", "/registration/confirm/**", "/registration/resendToken", "/password/forgot", "/password/change", "/password/change/**") .permitAll() .and() .requiresChannel() .anyRequest() .requiresSecure() .and() .authorizeRequests() .anyRequest() .authenticated() .and() .formLogin() .loginPage("/login") .loginProcessingUrl("/login.do") .usernameParameter("username") .passwordParameter("password") .and() .userDetailsService(userDetailsServiceBean()); } // @formatter:on @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(userDetailsServiceBean()).passwordEncoder(passwordEncoder()); } } </code></pre> <p>Client side the WebSecurityConfigurerAdapter</p> <pre><code>@EnableOAuth2Sso @Configuration public class UiSecurityConfig extends WebSecurityConfigurerAdapter { @Override public void configure(HttpSecurity http) throws Exception { http.antMatcher("/**") .authorizeRequests() .antMatchers( "/", "/index.html", "/login**", "/logout**", 
//resources "/assets/**", "/static/**", "/*.ico", "/*.js", "/*.json").permitAll() .anyRequest() .authenticated() .and() .csrf().csrfTokenRepository(csrfTokenRepository()) .and() .addFilterAfter(csrfHeaderFilter(), SessionManagementFilter.class); } } </code></pre> <p>the oauth2 configuration properties </p> <p>oauth2-server is the service name (load balancer) on kubernetes and also the server path that is why it appears twice.</p> <pre><code>security: oauth2: client: clientId: ********** clientSecret: ******* accessTokenUri: https://oauth2-server/oauth2-server/oauth/token userAuthorizationUri: https://oauth2.mydomain.com/oauth2-server/oauth/authorize resource: userInfoUri: https://oauth2-server/oauth2-server/me </code></pre> <p>Here an important detail, the value of userAuthorizationUri is the address to access the oauth2-server from the outside of the k8s cluster. The client-service send back that address into the response with a 302 http code if the user is not connected and tries to access to the /login path of the client-service. then the user is redirected to the /login path of the oauth2-server.<br> <a href="https://oauth2.mydomain.com" rel="nofollow noreferrer">https://oauth2.mydomain.com</a> target an Nginx Ingress controller that handle the redirection to the load balancer service.</p>
<p>Here is a solution to this problem. It's not a Spring issue at all but a bad configuration of the Nginx Ingress controller.</p> <p>The authentication process is done in several stages:</p> <p>1 - The user clicks a login button that targets the /login path of the client-server.</p> <p>2 - If the user is not authenticated yet, the client-server sends a response to the browser with a 302 http code to redirect the user to the oauth2-server. The redirection value is composed of the value of the <strong>security.oauth2.client.userAuthorizationUri</strong> property and the redirection url that the browser will use to let the client-server get the token once the user is authenticated. That url looks like this:</p> <pre><code>h*tps://oauth2.mydomain.com/oauth2-server/oauth/authorize?client_id=autorisation_code_client&amp;redirect_uri=h*tps://www.mydomain.com/login&amp;response_type=code&amp;state=bSWtGx </code></pre> <p>3 - The user is redirected to the previous url.</p> <p>4 - The oauth2-server sends a 302 http code to the browser with the login url of the oauth2-server, h*tps://oauth2.mydomain.com/oauth2-server/login.</p> <p>5 - The user submits his credentials, and the token is created if they are correct.</p> <p>6 - The user is redirected to the same address as at step two, and the oauth2-server adds information to the redirect_uri value.</p> <p>7 - The user is redirected to the client-server. The redirection part of the response looks like this:</p> <pre><code>location: h*tps://www.mydomain.com/login?code=gnpZ0r&amp;state=bSWtGx </code></pre> <p>8 - The client-server contacts the oauth2-server and obtains the token from the code and the state that authenticates it. It doesn't matter if this instance of the oauth2-server is different from the one the user authenticated against.
Here the client-server uses the value of security.oauth2.client.accessTokenUri to get the token; this is the internal load-balancing service address that targets the oauth2-server pods, so it doesn't pass through any Ingress controller.</p> <p>So in steps 3 to 6 the user must communicate with the same instance of the oauth2-server through the Ingress controller in front of the load balancer service.</p> <p>This is possible by configuring the Nginx Ingress controller with a few annotations:</p> <pre><code>&quot;annotations&quot;: { ... &quot;nginx.ingress.kubernetes.io/affinity&quot;: &quot;cookie&quot;, &quot;nginx.ingress.kubernetes.io/session-cookie-expires&quot;: &quot;172800&quot;, &quot;nginx.ingress.kubernetes.io/session-cookie-max-age&quot;: &quot;172800&quot;, &quot;nginx.ingress.kubernetes.io/session-cookie-name&quot;: &quot;route&quot; } </code></pre> <p>That way we ensure that the user is redirected to the same pod/instance of the oauth2-server during the authentication process, as long as he is identified by the same cookie.</p> <p>The session affinity mechanism is a great way to scale the authentication server and also the client-server. Once the user is authenticated he will always use the same instance of the client and keep his session information.</p> <p>Thanks to Christian Altamirano Ayala for his help.</p>
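<p>If the Ingress is managed declaratively, the same affinity settings go on the Ingress manifest as annotations — a sketch, where the host, path, service name and port are placeholders for your own values:</p>

```yaml
apiVersion: extensions/v1beta1   # networking.k8s.io/v1beta1 on newer clusters
kind: Ingress
metadata:
  name: oauth2-server
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: oauth2.mydomain.com
    http:
      paths:
      - path: /oauth2-server
        backend:
          serviceName: oauth2-server   # the load balancer service in front of the pods
          servicePort: 443
```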
<p>Let's say I have a classic application with</p> <ul> <li>Web</li> <li>Backend</li> <li>DB</li> </ul> <p>If I understand correctly, I will create a Deployment for each of them. What if I want to deploy them all in one step? How should I group the Deployments? I have read something about labels and services, but I'm not sure which concept is the right one. There are two ports that need to face outside (http and debug). For clarity, I'm skipping any DB initialization and readiness probes.</p> <p>How can I deploy everything at once? </p>
<p>You need multiple Kubernetes objects for this, and there are multiple ways to solve it.</p> <p><strong>Web</strong> - it depends what this is. Is it just <strong>static</strong> JavaScript files? In that case, it is easiest to deploy it with a CDN solution, on any cloud provider, an on-prem solution, or possibly using a Kubernetes-based product like e.g. <a href="https://github.com/ilhaan/kubeCDN" rel="nofollow noreferrer">KubeCDN</a>.</p> <p><strong>Backend</strong> - when using Kubernetes, we design a backend to be <strong>stateless</strong>, following the <a href="https://12factor.net/" rel="nofollow noreferrer">Twelve Factor</a> principles. This kind of application is deployed on Kubernetes using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> that will roll out one or more instances of your application, possibly elastically scaled depending on load. In front of all your instances, you need a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a>, and you want to expose this service with a loadbalancer/proxy using an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>.</p> <p><strong>DB</strong> - this is a <strong>stateful</strong> application. If deployed on Kubernetes, this kind of application is deployed as a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, and you also need to think about how you handle the storage; it can possibly be handled with a Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volume</a>.</p> <p>As you see, many Kubernetes objects are needed for this setup.
If you use plain declarative Kubernetes yaml files, you can keep them in a directory, e.g. <code>my-application/</code>, and deploy all files with a single command:</p> <pre><code>kubectl apply -f my-application/ </code></pre> <p>However, there are alternatives to this, e.g. using <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>.</p>
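<p>To illustrate the single-directory approach, a minimal <code>my-application/</code> could hold one file per component, each pairing a Deployment with its Service — a sketch with placeholder names, image and ports:</p>

```yaml
# my-application/backend.yaml — Deployment and Service in one file,
# separated by '---'; `kubectl apply -f my-application/` picks up every file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: example/backend:1.0   # placeholder image
        ports:
        - containerPort: 8080        # http
        - containerPort: 5005        # debug
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend       # the label is what ties the Service to the pods
  ports:
  - name: http
    port: 8080
  - name: debug
    port: 5005
```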
<p>I am trying to mount a PostgreSQL persistent volume in Kubernetes locally with static provisioning. Here is my yaml file; I created a PV, a PVC and a Pod:</p> <pre><code> apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume namespace: manhattan labels: type: local spec: storageClassName: manual capacity: storage: 100Mi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pv-claim namespace: manhattan spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 80Mi --- apiVersion: v1 kind: Pod metadata: name: dbr-postgres namespace: manhattan spec: volumes: - name: task-pv-storage persistentVolumeClaim: claimName: task-pv-claim containers: - name: task-pv-container image: postgresql:2.0.1 tty: true volumeMounts: - mountPath: "/var/lib/pgsql/9.3/data" name: task-pv-storage subPath: data readOnly: false nodeSelector: kubernetes.io/hostname: k8s-master </code></pre> <p>My volume lies in /var/lib/pgsql/9.3/data, but my pod fails, and I don't want to change the location of mountPath to /var/lib/pgsql/9.3/data/backup.</p> <p>Can you please suggest any overwrite option in the yaml file?</p> <p>I don't want to create a folder with a new name here.</p> <p><a href="https://i.stack.imgur.com/TaE1J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TaE1J.png" alt="enter image description here"></a></p> <p>If I change the mountPath to /var/lib/pgsql/9.3/data/backup the pod starts running, but I don't want that; I want the data to be written in the same directory, /var/lib/pgsql/9.3/data.</p>
<p>As per official documentation for postgres:9.4 from dockerhub:</p> <p><strong>1</strong>. Please Note:</p> <blockquote> <p>Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.</p> </blockquote> <p><strong>2</strong>. <code>PGDATA</code> environment variable:</p> <blockquote> <p>This optional variable can be used to define another location - like a subdirectory - for the database files. The default is /var/lib/postgresql/data, but if the data volume you're using is a filesystem mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata ) be created to contain the data.</p> </blockquote> <p>As workaround please refer to:</p> <ul> <li>the official docs while working with <a href="https://github.com/docker-library/postgres/blob/4b652bf95baee9cd7ef31300a4938ad72d09ad88/9.4/alpine/Dockerfile" rel="nofollow noreferrer">Dockerfile</a> </li> <li>helm charts for stable <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="nofollow noreferrer">postgresql</a> </li> </ul> <p>You can also change the image to the specific one <a href="https://github.com/helm/charts/tree/master/stable/postgresql#deploy-chart-using-docker-official-postgresql-image" rel="nofollow noreferrer">using helm chart</a>: </p> <blockquote> <p>From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image. Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point. 
Basically, the PostgreSQL data dir cannot be the mount point directly, it has to be a subdirectory.</p> </blockquote> <pre><code>helm install --name postgres \ --set image.repository=postgres \ --set image.tag=10.6 \ --set postgresqlDataDir=/data/pgdata \ --set persistence.mountPath=/data/ \ stable/postgresql </code></pre> <p>You can template this helm chart and customize it. From another point of view, the main problem is that you have already initialized your database - as you mentioned, under /var/lib/pgsql/9.3/data/backup - so please verify your Dockerfile and your docker-entrypoint.sh to figure out where the problem is.</p> <p>Please let me know if this helped.</p>
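<p>To sketch the same idea without Helm: with the official <code>postgres</code> image you can keep the volume mounted at the data directory's parent and point <code>PGDATA</code> at a subdirectory, so <code>initdb</code> sees an empty directory that is not the mount point itself (image tag and paths below are illustrative, not taken from the original pod spec):</p>

```yaml
containers:
- name: task-pv-container
  image: postgres:9.4                          # official image
  env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata     # subdirectory of the mount
  volumeMounts:
  - mountPath: /var/lib/postgresql/data        # the mount point itself
    name: task-pv-storage
```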
<p>I want to run kubectl and get all the secrets of type = X. Is this possible?</p> <p>I.e., if I want to get all secrets where type=tls, is there something like <code>kubectl get secrets --type=tls</code>?</p>
<p>How about <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="noreferrer">field-selector</a>:</p> <pre><code>$ kubectl get secrets --field-selector type=kubernetes.io/tls </code></pre>
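<p>The same pattern works for any secret type — a few variations using standard <code>kubectl</code> flags:</p>

```shell
# All TLS secrets across all namespaces
kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/tls

# Other built-in types work the same way
kubectl get secrets --field-selector type=kubernetes.io/dockerconfigjson
kubectl get secrets --field-selector type=Opaque

# Just the names, e.g. for scripting
kubectl get secrets --field-selector type=kubernetes.io/tls -o name
```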
<p>From the time I have upgraded the versions of my eks terraform script. I keep getting error after error.</p> <p>currently I am stuck on this error:</p> <blockquote> <p>Error: Get <a href="http://localhost/api/v1/namespaces/kube-system/serviceaccounts/tiller" rel="nofollow noreferrer">http://localhost/api/v1/namespaces/kube-system/serviceaccounts/tiller</a>: dial tcp 127.0.0.1:80: connect: connection refused</p> <p>Error: Get <a href="http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller" rel="nofollow noreferrer">http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller</a>: dial tcp 127.0.0.1:80: connect: connection refused</p> </blockquote> <p>The script is working fine and I can still use this with old version but I am trying to upgrade the cluster version .</p> <p><strong>provider.tf</strong></p> <pre><code>provider "aws" { region = "${var.region}" version = "~&gt; 2.0" assume_role { role_arn = "arn:aws:iam::${var.target_account_id}:role/terraform" } } provider "kubernetes" { config_path = ".kube_config.yaml" version = "~&gt; 1.9" } provider "helm" { service_account = "${kubernetes_service_account.tiller.metadata.0.name}" namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}" kubernetes { config_path = ".kube_config.yaml" } } terraform { backend "s3" { } } data "terraform_remote_state" "state" { backend = "s3" config = { bucket = "${var.backend_config_bucket}" region = "${var.backend_config_bucket_region}" key = "${var.name}/${var.backend_config_tfstate_file_key}" # var.name == CLIENT role_arn = "${var.backend_config_role_arn}" skip_region_validation = true dynamodb_table = "terraform_locks" encrypt = "true" } } </code></pre> <p><strong>kubernetes.tf</strong></p> <pre><code>resource "kubernetes_service_account" "tiller" { #depends_on = ["module.eks"] metadata { name = "tiller" namespace = "kube-system" } automount_service_account_token = "true" } resource "kubernetes_cluster_role_binding" "tiller" 
{ depends_on = ["module.eks"] metadata { name = "tiller" } role_ref { api_group = "rbac.authorization.k8s.io" kind = "ClusterRole" name = "cluster-admin" } subject { kind = "ServiceAccount" name = "tiller" api_group = "" namespace = "kube-system" } } </code></pre> <p>terraform version: 0.12.12 eks module version: 6.0.2</p>
<p>It means your <code>server:</code> entry in your <code>.kube_config.yml</code> is pointing to the wrong port (and perhaps even the wrong protocol, as normal kubernetes communication travels over <code>https</code> and is secured via mutual TLS authentication), or there is no longer a proxy that <em>was</em> listening on <code>localhost:80</code>, or perhaps the <a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips" rel="nofollow noreferrer"><code>--insecure-port</code></a> used to be <code>80</code> and is now <code>0</code> (as is strongly recommended)</p> <p>Regrettably, without more specifics, no one can <strong>guess</strong> what the correct value was or should be changed to</p>
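<p>In this particular Terraform setup, one way to sidestep the stale/empty <code>.kube_config.yaml</code> problem is to not read a kubeconfig file at all and wire the <code>kubernetes</code> provider to the EKS data sources directly — a sketch, assuming the eks module exposes <code>cluster_id</code> and a provider version that supports <code>load_config_file</code>:</p>

```hcl
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false   # never fall back to a kubeconfig on disk
}
```

<p>This way the provider gets the real API server endpoint and a fresh token at plan time, instead of defaulting to localhost when the file is missing.</p>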
<p>I'm currently setting up a Kubernetes cluster where both private and public services are run. While public services should be accessible via the internet (and FQDNs), private services should not (the idea is to run a VPN inside the cluster where private services should be accessible via <em>simple</em> FQDNs).</p> <p>At the moment, I'm using nginx-ingress and configure Ingress resources where I set the hostname for public resources. external-dns then adds the corresponding DNS records (in Google CloudDNS) - this already works.</p> <p>The problem I'm facing now: I'm unsure about how I can add DNS records in the same way (i.e. simply specifying a host in <code>Ingress</code> definitions and using some ingress-class <code>private</code>), yet have these DNS records only be accessible from within the cluster.</p> <p>I was under the impression that I can add these records to the <code>Corefile</code> that CoreDNS is using. However, I fail to figure out how this can be automated.</p> <p>Thank you for any help!</p>
<p>If you don't want them to be accessed publicly, you don't want to add ingress rules for them. Ingress is only to route external traffic into your cluster.</p> <p>All your services are already registered in CoreDNS and accessible with their local name, no need to add anything else.</p>
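<p>To make that concrete: every Service gets a predictable name from the cluster DNS, so from inside any pod the private services are reachable without extra records (service and namespace names below are placeholders):</p>

```shell
# From inside any pod in the cluster:
curl http://my-private-service.my-namespace.svc.cluster.local:8080

# Shorter forms also resolve, thanks to the pod's DNS search domains:
curl http://my-private-service.my-namespace:8080
curl http://my-private-service:8080   # only from within the same namespace
```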
<p>I’ve been digging a bit into the way people run integration and e2e tests in the context of Kubernetes and have been quite disappointed by the lack of documentation and feedback. I know there are amazing tools such as kind or minikube that allow running resources locally. But in the context of a CI, and with a bunch of services, they do not seem to be a good fit, for obvious resource reasons. I think there are great opportunities in running tests for:</p> <ul> <li>Validating manifests or helm charts</li> <li>Validating that a component behaves well as part of a bigger whole</li> <li>Validating the global behaviour of a product</li> </ul> <p>The point here is not really about the testing framework but more about the environment on top of which the tests could be run.</p> <p>Do you share my thoughts? Have you ever run such tests? Do you have any feedback or insights about it?</p> <p>Thanks a lot</p>
<p>Interesting question, and something that I have worked on over the last couple of months for my current employer. Essentially we ship a product as docker images with manifests. When writing e2e tests, I want to run the product as close to the customer environment as possible.</p> <p>To solve this we have built scripts that interact with our standard cloud provider (GCloud) to create a cluster, deploy the product and then run the tests against it.</p> <p>For the major cloud providers this is not a difficult task, but it can be time-consuming. There are a couple of things that we have learnt the hard way to keep in mind while developing the tests:</p> <ol> <li>Concurrency. This may sound obvious, but do think about the number of concurrent builds your CI can run.</li> <li>Latency from the cloud. Don't assume that you will get an instant response to every command that you run in the cloud. Also think about the timeouts. If you bring up a product with lots of pods and services, what is an acceptable start-up time?</li> <li>Errors causing build failures. This is an interesting one. We have seen errors in the build due to network errors when communicating with our test deployment. These are nearly always transient. It is best to avoid letting these fail the build.</li> </ol> <p>One thing to look at is that GitLab provides some <a href="https://docs.gitlab.com/ee/ci/docker/using_docker_build.html" rel="nofollow noreferrer">documentation</a> on how to build and test images in their CI pipeline.</p>
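<p>To make the shape of this concrete, the create-deploy-test-teardown cycle we script looks roughly like the following. Cluster name, zone, manifests directory and the test runner are placeholders, and the teardown is trapped so it runs even when the tests fail:</p>

```shell
#!/usr/bin/env bash
set -euo pipefail

# Unique cluster name per build avoids clashes between concurrent CI runs
CLUSTER="e2e-${CI_PIPELINE_ID:-local}"
ZONE="europe-west1-b"

cleanup() {
  gcloud container clusters delete "$CLUSTER" --zone "$ZONE" --quiet
}
trap cleanup EXIT   # tear the cluster down even if a step below fails

gcloud container clusters create "$CLUSTER" --zone "$ZONE" --num-nodes 3
kubectl apply -f manifests/

# Bound the start-up wait explicitly rather than assuming instant readiness
kubectl rollout status deployment/my-app --timeout=300s

./run-e2e-tests.sh   # placeholder for the actual test runner
```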
<p>My <code>.bash_profile</code> has many aliases that I use regularly. When I exec into a kubernetes pod, though, those aliases become (understandably) inaccessible. And when I say "exec into" I mean:</p> <p><code>kubectl exec -it [pod-name] -c [container-name] bash</code></p> <p>Is there any way to make it so that I can still use my bash profile after exec'ing in?</p>
<p>You said those are only the aliases. In that case and only in that case you could save the .<code>bash_profile</code> in the <code>ConfigMap</code> using <code>--from-env-file</code></p> <pre><code>kubectl create configmap bash-profile --from-env-file=.bash_profile </code></pre> <p>Keep in mind that each line in the env file has to be in VAR=VAL format.</p> <p><strong>Lines with # at the beginning and blank lines will be ignored.</strong></p> <p>You can then load all the key-value pairs as container environment variables:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: bash-test-pod spec: containers: - name: test-container image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "env" ] envFrom: - configMapRef: name: bash-profile restartPolicy: Never </code></pre> <p>Or <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap" rel="nofollow noreferrer">Populate a Volume with data stored in a ConfigMap</a>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: bash-test-pod spec: containers: - name: test-container image: k8s.gcr.io/busybox command: [ "/bin/sh", "-c", "ls /etc/config/" ] volumeMounts: - name: config-volume mountPath: /root/.bash_profile volumes: - name: config-volume configMap: # Provide the name of the ConfigMap containing the files you want # to add to the container name: bash-profile restartPolicy: Never </code></pre> <p>The idea mentioned by @<a href="https://stackoverflow.com/users/2203038/mark">Mark</a> should also work.</p> <p>If you do <code>kubectl cp .bash_profile &lt;pod_name&gt;:/root/</code> if you need to put it into a specific containers you can add option <code>-c, --container='': Container name. If omitted, the first container in the pod will be chosen</code>.</p>
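<p>A lighter variant of the <code>kubectl cp</code> idea, since <code>.bash_profile</code> is only read by login shells: copy the file in and then start bash as a login shell (pod and container names are placeholders):</p>

```shell
kubectl cp ~/.bash_profile [pod-name]:/root/.bash_profile -c [container-name]

# -l makes bash behave as a login shell, so it reads /root/.bash_profile
kubectl exec -it [pod-name] -c [container-name] -- bash -l
```

<p>The copy only lasts for the lifetime of the container, so for anything permanent the ConfigMap approach above is the better fit.</p>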
<p>I just upgrade Ubuntu from 19.04 to 19.10</p> <p>Now Minikube won't start.</p> <p>So, after a while, I just removed completely Minikube with.</p> <pre><code>minikube stop; minikube delete docker stop $(docker ps -aq) rm -r ~/.kube ~/.minikube sudo rm /usr/local/bin/localkube /usr/local/bin/minikube systemctl stop '*kubelet*.mount' sudo rm -rf /etc/kubernetes/ docker system prune -af --volumes </code></pre> <p>Now I want to reinstall everything, but I can't make it working. </p> <p>I download minikube, and move it to <code>/usr/local/bin</code></p> <pre><code>curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \\n &amp;&amp; chmod +x minikube &amp;&amp; sudo mv ./minikube /usr/local/bin </code></pre> <p>I start minikube</p> <pre><code>sudo minikube start --vm-driver=none </code></pre> <p>Everything is OK, minukube starts successfully.</p> <pre><code>~ sudo minikube start --vm-driver=none 😄 minikube v1.4.0 on Ubuntu 19.10 🤹 Running on localhost (CPUs=4, Memory=7847MB, Disk=280664MB) ... ℹ️ OS release is Ubuntu 19.10 🐳 Preparing Kubernetes v1.16.0 on Docker 18.09.6 ... ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf 🚜 Pulling images ... 🚀 Launching Kubernetes ... 🤹 Configuring local host environment ... ⚠️ The 'none' driver provides limited isolation and may reduce system security and reliability. ⚠️ For more information, see: 👉 https://minikube.sigs.k8s.io/docs/reference/drivers/none/ ⚠️ kubectl and minikube configuration will be stored in /root ⚠️ To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run: ▪ sudo mv /root/.kube /root/.minikube $HOME ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube 💡 This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true ⌛ Waiting for: apiserver proxy etcd scheduler controller dns 🏄 Done! 
kubectl is now configured to use "minikube" </code></pre> <p>I finally do:</p> <pre><code> ~ sudo mv /root/.kube /root/.minikube $HOME ➜ ~ sudo chown -R $USER $HOME/.kube $HOME/.minikube </code></pre> <p>But when I want to check pods:</p> <pre><code>kubectl get po </code></pre> <p>I get: </p> <pre><code>➜ ~ kubectl get po Error in configuration: * unable to read client-cert /root/.minikube/client.crt for minikube due to open /root/.minikube/client.crt: permission denied * unable to read client-key /root/.minikube/client.key for minikube due to open /root/.minikube/client.key: permission denied * unable to read certificate-authority /root/.minikube/ca.crt for minikube due to open /root/.minikube/ca.crt: permission denied </code></pre> <p>And if using sudo:</p> <pre><code> ~ sudo kubectl get po [sudo] password for julien: The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>here is the result of <code>minikube logs</code></p> <p><a href="https://gist.github.com/xoco70/8a9c7042238400e370796cb23cb11c88" rel="noreferrer">https://gist.github.com/xoco70/8a9c7042238400e370796cb23cb11c88</a></p> <p>What should I do ?</p> <p>EDIT:</p> <p>After a reboot, when starting minikube with:</p> <pre><code>sudo minikube start --vm-driver=none </code></pre> <p>I get:</p> <pre><code>Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap : running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml 
--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap output: [init] Using Kubernetes version: v1.16.0 [preflight] Running pre-flight checks [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Swap]: running with swap on is not supported. 
Please disable swap [WARNING FileExisting-ethtool]: ethtool not found in system path [WARNING FileExisting-socat]: socat not found in system path [WARNING Hostname]: hostname "minikube" could not be reached [WARNING Hostname]: hostname "minikube": lookup minikube on 8.8.8.8:53: no such host [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' [WARNING Port-10250]: Port 10250 is in use error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR Port-8443]: Port 8443 is in use [ERROR Port-10251]: Port 10251 is in use [ERROR Port-10252]: Port 10252 is in use [ERROR Port-2379]: Port 2379 is in use [ERROR Port-2380]: Port 2380 is in use [ERROR DirAvailable--var-lib-minikube-etcd]: /var/lib/minikube/etcd is not empty [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher : running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap .: exit status 1 😿 Sorry that minikube crashed. 
If this was unexpected, we would love to hear from you: 👉 https://github.com/kubernetes/minikube/issues/new/choose ❌ Problems detected in kube-apiserver [3e0d8c59345d]: I1025 07:09:56.349120 1 log.go:172] http: TLS handshake error from 127.0.0.1:46254: remote error: tls: bad certificate I1025 07:09:56.353714 1 log.go:172] http: TLS handshake error from 127.0.0.1:46082: remote error: tls: bad certificate I1025 07:09:56.353790 1 log.go:172] http: TLS handshake error from 127.0.0.1:46080: remote error: tls: bad certificate </code></pre>
<p>Okay, so I reproduced and got the same errors with minikube after upgrading it to 19.10.</p> <p>How I initiated cluster on 19.04:</p> <pre><code>#Install kubectl curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl &amp;&amp; chmod +x ./kubectl &amp;&amp; sudo mv ./kubectl /usr/local/bin/kubectl #Install minikube. Make sure to check for latest version curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 &amp;&amp; chmod +x minikube &amp;&amp; sudo mv minikube /usr/local/bin/ #Install Docker curl -fsSL get.docker.com -o get-docker.sh &amp;&amp; chmod +x get-docker.sh sh get-docker.sh sudo usermod -aG docker $USER export MINIKUBE_WANTUPDATENOTIFICATION=false export MINIKUBE_WANTREPORTERRORPROMPT=false export MINIKUBE_HOME=$HOME export CHANGE_MINIKUBE_NONE_USER=true export KUBECONFIG=$HOME/.kube/config sudo minikube start --vm-driver none sudo chown -R $USER $HOME/.kube $HOME/.minikube vkr@ubuntu-minikube:~$ docker version Client: Docker Engine - Community Version: 19.03.3 API version: 1.40 Go version: go1.12.10 Git commit: a872fc2f86 Built: Tue Oct 8 01:00:44 2019 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 19.03.3 API version: 1.40 (minimum version 1.12) Go version: go1.12.10 Git commit: a872fc2f86 Built: Tue Oct 8 00:59:17 2019 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.2.10 GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339 runc: Version: 1.0.0-rc8+dev GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657 docker-init: Version: 0.18.0 GitCommit: fec3683 vkr@ubuntu-minikube:~$ kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-5644d7b6d9-cv8c5 1/1 Running 0 2m25s kube-system coredns-5644d7b6d9-gk725 1/1 Running 0 2m25s kube-system etcd-minikube 1/1 Running 0 75s kube-system kube-addon-manager-minikube 1/1 
Running 0 75s kube-system kube-apiserver-minikube 1/1 Running 0 98s kube-system kube-controller-manager-minikube 1/1 Running 0 88s kube-system kube-proxy-59jp9 1/1 Running 0 2m25s kube-system kube-scheduler-minikube 1/1 Running 0 82s kube-system storage-provisioner 1/1 Running 0 2m24s </code></pre> <p>After upgrading to 19.10 and a clean minikube install:</p> <pre><code>vkr@ubuntu-minikube:~$ kubectl get all -A Error in configuration: * unable to read client-cert /root/.minikube/client.crt for minikube due to open /root/.minikube/client.crt: permission denied * unable to read client-key /root/.minikube/client.key for minikube due to open /root/.minikube/client.key: permission denied * unable to read certificate-authority /root/.minikube/ca.crt for minikube due to open /root/.minikube/ca.crt: permission denied </code></pre> <p>There are lots of discussions stating you should use <code>root</code> for the <code>none driver</code>, since minikube runs the kubernetes system components directly on your machine...</p> <p><a href="https://github.com/kubernetes/minikube/issues/1899" rel="noreferrer">Running minikube as normal user</a></p> <p><a href="https://github.com/kubernetes/minikube/issues/4349" rel="noreferrer">Can't start minikube-- permissions</a></p> <p><a href="https://minikube.sigs.k8s.io/docs/reference/drivers/none/" rel="noreferrer">https://minikube.sigs.k8s.io/docs/reference/drivers/none/</a>:</p> <blockquote> <p>Usage The none driver requires minikube to be run as root, until <a href="https://github.com/kubernetes/minikube/issues/3760" rel="noreferrer">#3760</a> can be addressed</p> </blockquote> <p>However, here is a small trick for you:</p> <p>1) Wipe everything</p> <pre><code>vkr@ubuntu-minikube:~$ minikube stop ✋ Stopping "minikube" in none ... 🛑 "minikube" stopped. vkr@ubuntu-minikube:~$ minikube delete 🔄 Uninstalling Kubernetes v1.16.0 using kubeadm ... 🔥 Deleting "minikube" in none ... 💔 The "minikube" cluster has been deleted. 
vkr@ubuntu-minikube:~$ rm -rf ~/.kube vkr@ubuntu-minikube:~$ rm -rf ~/.minikube vkr@ubuntu-minikube:~$ sudo rm -rf /var/lib/minikube vkr@ubuntu-minikube:~$ sudo rm -rf /etc/kubernetes vkr@ubuntu-minikube:~$ sudo rm -rf /root/.minikube vkr@ubuntu-minikube:~$ sudo rm -rf /usr/local/bin/minikube </code></pre> <p>2) Install minikube, export variables, check</p> <pre><code>vkr@ubuntu-minikube:~$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 &amp;&amp; chmod +x minikube &amp;&amp; sudo mv minikube /usr/local/bin/ vkr@ubuntu-minikube:~$ export MINIKUBE_WANTUPDATENOTIFICATION=false vkr@ubuntu-minikube:~$ export MINIKUBE_WANTREPORTERRORPROMPT=false vkr@ubuntu-minikube:~$ export MINIKUBE_HOME=$HOME vkr@ubuntu-minikube:~$ export CHANGE_MINIKUBE_NONE_USER=true vkr@ubuntu-minikube:~$ export KUBECONFIG=$HOME/.kube/config vkr@ubuntu-minikube:~$ sudo minikube start --vm-driver none 😄 minikube v1.4.0 on Ubuntu 19.10 🤹 Running on localhost (CPUs=2, Memory=7458MB, Disk=9749MB) ... ℹ️ OS release is Ubuntu 19.10 🐳 Preparing Kubernetes v1.16.0 on Docker 19.03.3 ... ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf 💾 Downloading kubelet v1.16.0 💾 Downloading kubeadm v1.16.0 🚜 Pulling images ... 🚀 Launching Kubernetes ... 🤹 Configuring local host environment ... ⚠️ The 'none' driver provides limited isolation and may reduce system security and reliability. ⚠️ For more information, see: 👉 https://minikube.sigs.k8s.io/docs/reference/drivers/none/ ⚠️ kubectl and minikube configuration will be stored in /root ⚠️ To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run: ▪ sudo mv /root/.kube /root/.minikube $HOME ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube 💡 This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true ⌛ Waiting for: apiserver proxy etcd scheduler controller dns 🏄 Done! 
kubectl is now configured to use "minikube" </code></pre> <p>What I do next is copy everything from <code>/root/.kube</code> and <code>/root/.minikube</code> to <code>$HOME</code>, grant user permissions, and finally edit <code>$HOME/.kube/config</code>, specifying the new path to the certs (<code>$HOME/.minikube/</code> instead of <code>/root/.minikube/</code>). Right now it looks like this:</p> <pre><code>vkr@ubuntu-minikube:~$ cat $KUBECONFIG apiVersion: v1 ... certificate-authority: /root/.minikube/ca.crt ... client-certificate: /root/.minikube/client.crt client-key: /root/.minikube/client.key </code></pre> <p>Let's do it :)</p> <pre><code>vkr@ubuntu-minikube:~$ sudo cp -r /root/.kube /root/.minikube $HOME vkr@ubuntu-minikube:~$ sudo chown -R $USER $HOME/.kube vkr@ubuntu-minikube:~$ sudo chown -R $USER $HOME/.minikube sed 's/root/home\/vkr/g' $KUBECONFIG &gt; tmp; mv tmp $KUBECONFIG </code></pre> <p>And finally, the result:</p> <pre><code>vkr@ubuntu-minikube:~$ kubectl get all -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/coredns-5644d7b6d9-bt897 1/1 Running 0 81m kube-system pod/coredns-5644d7b6d9-hkm5t 1/1 Running 0 81m kube-system pod/etcd-minikube 1/1 Running 0 80m kube-system pod/kube-addon-manager-minikube 1/1 Running 0 80m kube-system pod/kube-apiserver-minikube 1/1 Running 0 80m kube-system pod/kube-controller-manager-minikube 1/1 Running 0 80m kube-system pod/kube-proxy-wm52p 1/1 Running 0 81m kube-system pod/kube-scheduler-minikube 1/1 Running 0 80m kube-system pod/storage-provisioner 1/1 Running 0 81m NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 81m kube-system service/kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 81m NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-system daemonset.apps/kube-proxy 1 1 1 1 1 beta.kubernetes.io/os=linux 81m NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system deployment.apps/coredns 2/2 2 2 
81m NAMESPACE NAME DESIRED CURRENT READY AGE kube-system replicaset.apps/coredns-5644d7b6d9 2 2 2 81m </code></pre>
<p>Currently I am using Kubernetes v1.11.6. I deployed Kubernetes on AWS by using kops. In the k8s cluster, I deployed Kafka and Elasticsearch.</p> <p>The PVCs for Kafka and Elasticsearch are EBS volumes in AWS.</p> <p>My question is how to monitor how much of each PVC is used and how much remains available.</p> <p>This did not work: <a href="https://stackoverflow.com/questions/44718268/how-to-monitor-disk-usage-of-kubernetes-persistent-volumes">How to monitor disk usage of kubernetes persistent volumes?</a> The metrics no longer seem to be exposed starting from 1.12.</p> <p>I thought of using AWS CloudWatch, but I am thinking Kubernetes will have some answer for this generic problem.</p> <p>I should be able to see the PVC space used and the remaining available disk space.</p>
<p>Generally speaking, you can monitor the following metrics:</p> <pre><code>kubelet_volume_stats_capacity_bytes kubelet_volume_stats_available_bytes </code></pre> <p>These metrics can be scraped from the kubelet endpoint on each node with tools like Prometheus :) </p>
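<p>If you scrape those with Prometheus, a query for the percentage of space used per volume could look roughly like this (a sketch; the exact label names available for grouping, such as <code>persistentvolumeclaim</code>, depend on your scrape configuration):</p> <pre><code>100 * (kubelet_volume_stats_capacity_bytes - kubelet_volume_stats_available_bytes)
  / kubelet_volume_stats_capacity_bytes
</code></pre> <p>You can then alert on this expression when it crosses a threshold, e.g. 85%.</p>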
<p>I was wondering if there is an easier way to create DaemonSets in k8s other than writing YAML files. For example, for pods we have the <code>kubectl run --generator=run-pod/v1</code> command. I was wondering if there is something similar for a DaemonSet.</p> <p>Thanks in advance.</p>
<p>There is no such quick <code>kubectl create</code> type command for creating <code>daemonsets</code>. But you can do it in some other way.</p> <ul> <li>One way to do this is: <pre class="lang-bash prettyprint-override"><code>$ kubectl create deploy nginx --image=nginx --dry-run=client -o yaml | \ sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' &gt; nginx-ds.yaml $ kubectl apply -f nginx-ds.yaml </code></pre> </li> <li>If you don't want to save the <code>yaml</code> data to any file, here's how you can do this: <pre class="lang-bash prettyprint-override"><code>$ kubectl create deploy nginx --image=nginx --dry-run=client -o yaml | \ sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' | \ kubectl apply -f - </code></pre> </li> </ul> <p>You have your daemonset now.</p> <p>What we are doing here is: first we create a deployment <code>yaml</code>, then replace <code>kind: Deployment</code> with <code>kind: DaemonSet</code> and remove <code>replicas: 1</code> from the deployment <code>yaml</code>.</p> <p>That's how we get the <code>yaml</code> for a <code>daemonset</code>.</p>
<p>I'm using a Kubernetes StatefulSet for a Hangfire pod in GKE. It works perfectly except when I scale it to more than one replica. Then I get this exception due to the antiforgery token validation:</p> <blockquote> <p>Microsoft.AspNetCore.Antiforgery.Internal.DefaultAntiforgery[7] An exception was thrown while deserializing the token. Microsoft.AspNetCore.Antiforgery.AntiforgeryValidationException: The antiforgery token could not be decrypted. ---> System.Security.Cryptography.CryptographicException: The key {9f4f1619-10ff-4283-a529-eb48a0799815} was not found in the key ring.</p> </blockquote> <p>Is there any solution?</p>
<p>The solution is to persist the Data Protection keys to Redis, so all replicas share the same key ring (the <code>PersistKeysToStackExchangeRedis</code> extension comes from the <code>Microsoft.AspNetCore.DataProtection.StackExchangeRedis</code> package):</p> <pre><code>var redis = ConnectionMultiplexer.Connect("localhost,password=password"); services.AddDataProtection() .PersistKeysToStackExchangeRedis(redis, "DataProtection-Keys"); </code></pre>
<p>I've just created a new kubernetes cluster. The only thing I have done beyond set up the cluster is install Tiller using <code>helm init</code> and install kubernetes dashboard through <code>helm install stable/kubernetes-dashboard</code>.</p> <p>The <code>helm install</code> command seems to be successful and <code>helm ls</code> outputs:</p> <pre><code>NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE exhaling-ladybug 1 Thu Oct 24 16:56:49 2019 DEPLOYED kubernetes-dashboard-1.10.0 1.10.1 default </code></pre> <p>However after waiting a few minutes the deployment is still not ready. </p> <p>Running <code>kubectl get pods</code> shows that the pod's status as <code>CrashLoopBackOff</code>.</p> <pre><code>NAME READY STATUS RESTARTS AGE exhaling-ladybug-kubernetes-dashboard 0/1 CrashLoopBackOff 10 31m </code></pre> <p>The description for the pod shows the following events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31m default-scheduler Successfully assigned default/exhaling-ladybug-kubernetes-dashboard to nodes-1 Normal Pulling 31m kubelet, nodes-1 Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" Normal Pulled 31m kubelet, nodes-1 Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" Normal Started 30m (x4 over 31m) kubelet, nodes-1 Started container kubernetes-dashboard Normal Pulled 30m (x4 over 31m) kubelet, nodes-1 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine Normal Created 30m (x5 over 31m) kubelet, nodes-1 Created container kubernetes-dashboard Warning BackOff 107s (x141 over 31m) kubelet, nodes-1 Back-off restarting failed container </code></pre> <p>And the logs show the following panic message</p> <pre><code>panic: secrets is forbidden: User "system:serviceaccount:default:exhaling-ladybug-kubernetes-dashboard" cannot create resource "secrets" in API group "" in the namespace "kube-system" </code></pre> <p>Am I doing 
something wrong? Why is it trying to create a secret somewhere it cannot?</p> <p>Is it possible to setup without giving the dashboard account cluster-admin permissions?</p>
<p>By default I have put the namespace as <code>default</code>; if yours is different, replace it in the commands below:</p> <pre><code>kubectl create serviceaccount exhaling-ladybug-kubernetes-dashboard kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=default:exhaling-ladybug-kubernetes-dashboard </code></pre>
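<p>If you would rather not hand out <code>cluster-admin</code>, a narrower alternative (a sketch, untested against this exact chart; the dashboard may need additional permissions for full functionality) is a <code>Role</code> scoped to <code>kube-system</code> that covers only secrets, since that is what the panic message shows the pod failing to create:</p> <pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-secrets
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-secrets
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dashboard-secrets
subjects:
- kind: ServiceAccount
  name: exhaling-ladybug-kubernetes-dashboard
  namespace: default
</code></pre>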
<p>I want to delete a label from a node or a pod through the Kubernetes API. My Kubernetes version: 1.24.</p> <pre><code>kubectl get pod --show-labels | grep all-flow-0fbah all-flow-0fbah 1/1 Running 2 9d app=all-flow,op=vps1,version=1001 </code></pre> <p>I use the command below:</p> <pre><code> curl --request PATCH --header "Content-Type:application/json-patch+json" http://10.93.78.11:8080/api/v1/namespaces/default/pods/all-flow-0fbah --data '{"metadata":{"labels":{"a":"b"}}}' </code></pre> <p>But this doesn't work. The return message is:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "the server responded with the status code 415 but did not return more information", "details": {}, "code": 415 } </code></pre> <p>Then I change the curl header like this:</p> <pre><code>curl --request PATCH --header "Content-Type:application/merge-patch+json" http://10.93.78.11:8080/api/v1/namespaces/default/pods/all-flow-0fbah --data '{"metadata":{"labels":{"op":"vps3"}}}' </code></pre> <p>It does not delete the label but adds a new label to the pod. So can anyone tell me how to delete a label from a pod, like with the command:</p> <pre><code>kubectl label pod all-flow-0fbah key- </code></pre> <p>Thanks!</p>
<p>I think this is the most straightforward way to do it:</p> <pre class="lang-sh prettyprint-override"><code>kubectl label pod &lt;pod-name&gt; &lt;label key&gt;- </code></pre>
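<p>Since the question specifically asks about the raw API: with <code>Content-Type: application/merge-patch+json</code>, setting a label's value to <code>null</code> removes that key (standard JSON Merge Patch semantics, RFC 7386). So something like this should work against the same endpoint used in the question (a sketch; adjust host, namespace, pod, and label key to your setup):</p> <pre><code>curl --request PATCH \
  --header "Content-Type: application/merge-patch+json" \
  http://10.93.78.11:8080/api/v1/namespaces/default/pods/all-flow-0fbah \
  --data '{"metadata":{"labels":{"op":null}}}'
</code></pre> <p>This is also why the earlier merge-patch attempt added a label instead of deleting it: a non-null value in a merge patch sets the key, while <code>null</code> deletes it.</p>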
<p>I deployed a CronJob in a GKE cluster to periodically replicate secrets in namespaces (for <code>cert-manager</code>), but I always get the following error:</p> <pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>Here is my deployment:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: certificate-replicator-cron-job namespace: default spec: jobTemplate: spec: template: metadata: labels: app: default release: default spec: automountServiceAccountToken: false containers: - command: - /bin/bash - -c - for i in $(kubectl get ns -o json |jq -r ".items[].metadata.name" |grep "^bf-"); do kubectl get secret -o json --namespace default dev.botfront.cloud-staging-tls --export |jq 'del(.metadata.namespace)' |kubectl apply -n ${i}-f -; done image: bitnami/kubectl:latest name: certificate-replicator-container restartPolicy: OnFailure serviceAccountName: sa-certificate-replicator schedule: '* * * * *' </code></pre> <p>I also set up a role for the service account:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl describe role certificate-replicator-role Name: certificate-replicator-role Labels: &lt;none&gt; Annotations: &lt;none&gt; PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- secrets [] [] [list create get] namespaces [] [] [list get] </code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl describe rolebinding certificate-replicator-role-binding git:(master|✚4… Name: certificate-replicator-role-binding Labels: &lt;none&gt; Annotations: &lt;none&gt; Role: Kind: Role Name: certificate-replicator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount sa-certificate-replicator default </code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl describe serviceaccount sa-certificate-replicator git:(master|✚4… Name: sa-certificate-replicator Namespace: default Labels: 
&lt;none&gt; Annotations: &lt;none&gt; Image pull secrets: &lt;none&gt; Mountable secrets: sa-certificate-replicator-token-ljsfb Tokens: sa-certificate-replicator-token-ljsfb Events: &lt;none&gt; </code></pre> <p>I understand that I could probably create another Docker image with <code>gcloud</code> preinstalled and authenticate with a service account key, but I'd like to be cloud provider agnostic and also avoid having to authenticate to the cluster since <code>kubectl</code> is being invoked from inside.</p> <p>Is that possible?</p>
<p>Gcloud demands that you authenticate in some way. I used a .json key file to authenticate a Google Cloud service account every time I wanted to run kubectl remotely. However, this is a pretty dirty solution.</p> <p>Instead, I would recommend using the Kubernetes API to achieve your goal. Create a role that allows you to operate on the namespaces, configmaps and secrets resources, associate it with a service account, and then make curl requests from inside the CronJob to do the copy. </p> <p>Here is an example for the default namespace.</p> <p>First, create a role and associate it with your service account (default in this example):</p> <pre><code>--- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: nssc-clusterrole namespace: default rules: - apiGroups: [""] resources: ["namespaces", "configmaps", "secrets"] verbs: ["*"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: nssc-clusterrolebinding namespace: default roleRef: name: nssc-clusterrole apiGroup: rbac.authorization.k8s.io kind: ClusterRole subjects: - name: default namespace: default kind: ServiceAccount </code></pre> <p>Second, create a secret to test:</p> <pre><code>--- apiVersion: v1 kind: Secret metadata: name: secrets-test namespace: default type: Opaque stringData: mysecret1: abc123 mysecret2: def456 </code></pre> <p>Third, make a curl request to get your secret:</p> <pre class="lang-sh prettyprint-override"><code>curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Accept: application/json" https://kubernetes.default.svc/api/v1/namespaces/default/secrets/secrets-test </code></pre> <p>You will get a JSON with the content of your secret. 
</p> <pre><code>{ "kind": "Secret", "apiVersion": "v1", "metadata": { "name": "secrets-test", "namespace": "default", "selfLink": "/api/v1/namespaces/default/secrets/secrets-test", "uid": "...", "resourceVersion": "...", "creationTimestamp": "2019-10-26T01:52:29Z", "annotations": { "kubectl.kubernetes.io/last-applied-configuration": "{...}\n" } }, "data": { "mysecret1": "base64value", "mysecret2": "base64value" }, "type": "Opaque" } </code></pre> <p>Fourth, create the secret in a new namespace by changing the JSON and making a new curl request. Also associate the service account with the role:</p> <pre><code>--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: nssc-clusterrolebinding namespace: new-namespace roleRef: name: nssc-clusterrole apiGroup: rbac.authorization.k8s.io kind: ClusterRole subjects: - name: default namespace: default kind: ServiceAccount </code></pre> <pre><code>{ "apiVersion": "v1", "data": { "mysecret1": "Y29udHJvbDEyMyE=", "mysecret2": "Y29udHJvbDQ1NiE=" }, "kind": "Secret", "metadata": { "name": "secrets-test", "namespace": "new-namespace" }, "type": "Opaque" } </code></pre> <pre class="lang-sh prettyprint-override"><code>curl -X POST -d @test.json --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Accept: application/json" -H "Content-Type: application/json" https://kubernetes.default.svc/api/v1/namespaces/new-namespace/secrets </code></pre>
<p>I have followed a <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/" rel="nofollow noreferrer">tutorial</a> to deploy a Redis master and slave deployment. Both the slave and the master have their own services. I have a Spring Boot app that has the master host in its configuration to save/read the data. So when I terminate the redis-master pod, the Spring Boot app goes down because it doesn't know that it should connect to the slave. How to solve that? I was thinking about creating a common service for both master and slave, but that way the Spring Boot app would at some point try to save data to a slave pod instead of the master.</p>
<p>Use a StatefulSet for the Redis deployment in HA, and use Sentinel as a sidecar container to manage the failover.</p>
<p>For example, I have a rule (<a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a>):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something(/|$)(.*) </code></pre> <p>So anything that goes to <code>rewrite.bar.com/something</code> will go to <code>http-svc/</code>. And if <code>http-svc</code> answers with another location, e.g. <code>/static</code>, there will be a redirect to <code>rewrite.bar.com/static</code>, and here we go: 404. I wonder, is there any simple and clear solution to fix such situations without asking the developer to implement a <code>proxy_path</code> variable or something like that?</p> <p>I tried <a href="https://stackoverflow.com/questions/55718282/make-links-in-response-relative-to-new-path">Make links in response relative to new path</a></p>
<p>Change the rewrite annotation to <code>$1</code>. Note I have also changed the path regex: with <code>path: /something/(.*)</code>, <code>$1</code> captures everything after <code>/something/</code>, so the backend sees the original sub-path. That should do it. </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something/(.*) </code></pre>
<p>I have several CronJobs, and I want to be able to query for a list of completed jobs for a specific one. I tried <code>--field-selector metadata.ownerReferences.uid</code> but that's not a supported field selector for batchv1.job (Looking at <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/batch/v1/conversion.go#L40-L50" rel="nofollow noreferrer">this</a> it appears there are no useful field selectors for this use case).</p> <p>I don't completely understand the DownwardAPI, but I was wondering if I can set a label on the job_spec (in the cronjob definition) that references the CronJob's uid, so I can then use label selectors to filter. I can't tell if that's possible.</p> <p>Filtering for jobs by owner seems like a reasonable thing to be able to do, but I can't seem to find any useful information when searching around.</p>
<p>You can use these two commands to get a list of jobs belonging to a CronJob:</p> <pre><code>uid=$(kubectl get cronjob [MY-CRONJOB-NAME] --output jsonpath='{.metadata.uid}') kubectl get jobs -o json | jq -r --arg uid "$uid" '.items | map(select(.metadata.ownerReferences[]? | .uid==$uid) | .metadata.name) | .[]' </code></pre> <p>The first command fetches the <code>metadata.uid</code> from your specified <code>[MY-CRONJOB-NAME]</code>. Then, <code>kubectl get</code> lists all jobs, and the output is processed with <a href="https://github.com/stedolan/jq" rel="nofollow noreferrer">jq</a> to filter them by the <code>.metadata.ownerReferences.uid</code>.</p> <hr> <p>If you want a simpler approach, you can play with the <code>--custom-columns</code> flag. The following example lists all jobs and their respective controller names:</p> <pre><code>kubectl get jobs --all-namespaces -o custom-columns=NAME:.metadata.name,CONTROLLER:.metadata.ownerReferences[].name,NAMESPACE:.metadata.namespace </code></pre>
<p>I have created an HPA for my deployment. It's working fine for scaling up to the max replicas (6 in my case); when the load reduces it scales down to 5, but it is supposed to come back to my original replica count (1 in my case) as the load becomes normal. I have verified that after 30-40 minutes my application still has 5 replicas. It is supposed to be 1 replica.</p> <pre><code>[ec2-user@ip-192-168-x-x ~]$ kubectl describe hpa admin-dev -n dev Name: admin-dev Namespace: dev Labels: &lt;none&gt; Annotations: &lt;none&gt; CreationTimestamp: Thu, 24 Oct 2019 07:36:32 +0000 Reference: Deployment/admin-dev Metrics: ( current / target ) resource memory on pods (as a percentage of request): 49% (1285662037333m) / 60% Min replicas: 1 Max replicas: 10 Deployment pods: 3 current / 3 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request) ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 13m horizontal-pod-autoscaler New size: 2; reason: memory resource utilization (percentage of request) above target Normal SuccessfulRescale 5m27s horizontal-pod-autoscaler New size: 3; reason: memory resource utilization (percentage of request) above target </code></pre>
<p>When the load decreases, the HPA intentionally waits a certain amount of time before scaling the app down. This is known as the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="noreferrer">cooldown delay</a> and helps that the app is scaled up and down too frequently. The result of this is that for a certain time the app runs at the previous high replica count even though the metric value is way below the target. This may look like the HPA doesn't respond to the decreased load, but it eventually will.</p> <p>However, the default duration of the cooldown delay is 5 minutes. So, if after 30-40 minutes the app still hasn't been scaled down, it's strange. Unless the cooldown delay has been set to something else with the <code>--horizontal-pod-autoscaler-downscale-stabilization</code> flag of the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="noreferrer">controller manager</a>.</p> <p>In the output that you posted the metric value is 49% with a target of 60% and the current replica count is 3. This seems actually not too bad.</p> <p>An issue might be that you're using the <strong>memory utilisation</strong> as a metric, which is <strong>not a good autoscaling metric.</strong></p> <p>An autoscaling metric should linearly respond to the current load across the replicas of the app. If the number of replicas is doubled, the metric value should halve, and if the number of replicas is halved, the metric value should double. The memory utilisation in most cases doesn't show this behaviour. For example, if each replica uses a fixed amount of memory, then the average memory utilisation across the replicas stays roughly the same regardless of how many replicas were added or removed. The CPU utilisation generally works much better in this regard.</p>
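<p>As a side note, on newer clusters (Kubernetes 1.18+ with the <code>autoscaling/v2beta2</code> API) the downscale stabilization window can also be set per-HPA instead of cluster-wide. A fragment showing just the relevant field (a sketch; the full spec still needs <code>scaleTargetRef</code>, <code>minReplicas</code>, <code>maxReplicas</code>, and metrics):</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: admin-dev
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # wait 5 minutes before scaling down
</code></pre>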
<p>Hope you are all doing well. My question is about the installation path of minikube's machine.</p> <p>I want to change the default path from C: to D:.</p> <p>My command to install it is: <strong><em>minikube start --cpus 2 --memory 4096 --vm-driver=hyperv</em></strong></p> <p>I did read the documentation but I didn't find anything about changing the default path.</p> <p>Thanks for the early response.</p>
<p>You can set the <a href="https://minikube.sigs.k8s.io/docs/reference/environment_variables/" rel="nofollow noreferrer"><code>$MINIKUBE_HOME</code></a> env var:</p> <blockquote> <p><strong>MINIKUBE_HOME</strong> - (string) sets the path for the <code>.minikube</code> directory that minikube uses for state/configuration</p> </blockquote>
<pre><code>kubectl set image deployment/$DEPLOYMENT_EXTENSION $INSTANCE_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
</code></pre> <p>I use this command to load a newly created image into my existing cluster (to update the version of my app). But when I do it and then go to the site, I don't see any changes.</p> <pre><code>spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: booknotes
    image: gcr.io/my-image:latest
    imagePullPolicy: Always
</code></pre> <p>I've also added these 2 lines to the deployment.yaml file and applied it to my cluster:</p> <pre><code>imagePullPolicy: Always
terminationGracePeriodSeconds: 30
</code></pre> <p>But it still doesn't work. Can it be because I use the <code>:latest</code> tag? Or is it unrelated? If you have some ideas please let me know. And if you need additional info, I will attach it!</p> <p><strong>gitlab-ci.yml</strong></p> <pre><code>stages:
  - build
  - docker-push
  - deploy

cache:
  paths:
    - node_modules/

build:
  stage: build
  image: node:latest
  script:
    - yarn install
    - npm run build
  artifacts:
    paths:
      - dist/
  only:
    - master

docker:
  stage: docker-push
  image: docker:18.09.7
  services:
    - docker:18.09.7-dind
    - google/cloud-sdk:latest
  script:
    - echo $GCP_ACCESS_JSON &gt; $CI_PIPELINE_ID.json
    - cat $CI_PIPELINE_ID.json | docker login -u _json_key --password-stdin $GCP_REGION
    - docker build -t gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest .
    - docker push gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
  only:
    - master

test:
  stage: deploy
  image: google/cloud-sdk:latest
  script:
    - echo $GCP_ACCESS_JSON &gt; $CI_PIPELINE_ID.json
    - gcloud auth activate-service-account $GCP_CE_PROJECT_EMAIL --key-file $CI_PIPELINE_ID.json --project $GCP_PROJECT_ID
    - gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE --project $PROJECT_NAME
    - kubectl set image deployment/$DEPLOYMENT_EXTENSION $INSTANCE_NAME=gcr.io/$PROJECT_ID/$DOCKER_REPOSITORY:latest
  only:
    - master
</code></pre>
<p>This question, its symptoms and its reasons are very close to <a href="https://stackoverflow.com/questions/58531740/google-cloud-platform-creating-a-pipeline-with-kubernetes-and-replacing-the-same/58543316#58543316">this one</a>.</p> <p>Apply the same solution in your GitLab CI pipeline: use a global variable as the image tag so that a new, unique tag is deployed every time, which forces Kubernetes to pull the new image and deploy it.</p> <p>I can help you with GitLab CI if you have difficulties doing this.</p>
<p><strong>I'm stuck at creating a <code>user</code> for my <code>service_account</code> which I can use in my kubeconfig</strong></p> <p><strong>Background</strong>: I have a cluster-A, which I have created using the <a href="https://github.com/googleapis/google-cloud-python" rel="nofollow noreferrer">google-cloud-python</a> library. I can see the cluster created in the console. Now I want to deploy some manifests to this cluster, so I'm trying to use the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">kubernetes-python</a> client. To create a <code>Client()</code> object, I need to have a KUBECONFIG so I can:</p> <pre><code>client = kubernetes.client.load_kube_config(&lt;MY_KUBE_CONFIG&gt;)
</code></pre> <p>I'm stuck at generating a <code>user</code> for this service_account in my kubeconfig. I don't know what kind of authentication certificate/key I should use for my <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="nofollow noreferrer">user</a>.</p> <p>I've searched everywhere but still can't figure out how to use my service_account to access my GKE cluster through the kubernetes-python library.</p> <p><strong>Additional Info</strong>: I already have a <code>Credentials()</code> object (<a href="https://google-auth.readthedocs.io/en/latest/reference/google.auth.credentials.html#google.auth.credentials.Credentials" rel="nofollow noreferrer">source</a>) created using the <code>service_account.Credentials()</code> class (<a href="https://google-auth.readthedocs.io/en/latest/reference/google.oauth2.service_account.html#google.oauth2.service_account.Credentials" rel="nofollow noreferrer">source</a>).</p>
<p>A Kubernetes ServiceAccount is generally used by a service within the cluster itself, and most clients offer a version of <a href="https://github.com/kubernetes/client-go/blob/master/rest/config.go#L433-L439" rel="nofollow noreferrer"><code>rest.InClusterConfig()</code></a>. If you mean a GCP-level service account, it is activated as below:</p> <pre><code>gcloud auth activate-service-account --key-file gcp-key.json </code></pre> <p>and then probably you would set a project and use <code>gcloud container clusters get-credentials</code> as per normal with GKE to get Kubernetes config and credentials.</p>
<p>So this has been working forever. I have a few simple services running in GKE and they refer to each other via the standard service.namespace DNS names.</p> <p>Today all DNS name resolution stopped working. I haven't changed anything, although this may have been triggered by a master upgrade.</p> <pre><code>/ambassador # nslookup ambassador-monitor.default nslookup: can't resolve '(null)': Name does not resolve nslookup: can't resolve 'ambassador-monitor.default': Try again /ambassador # cat /etc/resolv.conf search default.svc.cluster.local svc.cluster.local cluster.local c.snowcloud-01.internal google.internal nameserver 10.207.0.10 options ndots:5 </code></pre> <p>Master version 1.14.7-gke.14</p> <p>I can talk cross-service using their IP addresses, it's just DNS that's not working.</p> <p>Not really sure what to do about this...</p>
<p>The easiest way to verify whether there is a problem with your kube-dns is to look at the logs in <a href="https://cloud.google.com/logging/docs/view/overview" rel="nofollow noreferrer">Stackdriver</a>.</p> <p>You should be able to find DNS resolution failures in the logs for the pods, with a filter such as the following:</p> <pre><code>resource.type="container"
("UnknownHost" OR "lookup fail" OR "gaierror")
</code></pre> <p>Be sure to check the logs for each container. Because the exact names and numbers of containers can change with the GKE version, you can find them like so:</p> <pre><code>kubectl get pod -n kube-system -l k8s-app=kube-dns -o \
  jsonpath='{range .items[*].spec.containers[*]}{.name}{"\n"}{end}' | sort -u
kubectl get pods -n kube-system -l k8s-app=kube-dns
</code></pre> <p>Has the pod been restarted frequently? Look for OOMs in the node console. The nodes for each pod can be found like so:</p> <pre><code>kubectl get pod -n kube-system -l k8s-app=kube-dns -o \
  jsonpath='{range .items[*]}{.spec.nodeName} pod={.metadata.name}{"\n"}{end}'
</code></pre> <p>The <code>kube-dns</code> pod contains four containers:</p> <ul> <li>the <code>kube-dns</code> process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to serve DNS requests,</li> <li><code>dnsmasq</code> adds DNS caching to improve performance,</li> <li><code>sidecar</code> provides a single health check endpoint while performing dual health checks (for <code>dnsmasq</code> and <code>kubedns</code>). It also collects dnsmasq metrics and exposes them in the Prometheus format,</li> <li><code>prometheus-to-sd</code> scrapes the metrics exposed by <code>sidecar</code> and sends them to Stackdriver.</li> </ul> <p>By default, the <code>dnsmasq</code> container accepts 150 concurrent requests. Requests beyond this are simply dropped and result in failed DNS resolution, including resolution for <code>metadata</code>. To check for this, view the logs with the following filter:</p> <pre><code>resource.type="container"
resource.labels.cluster_name="&lt;cluster-name&gt;"
resource.labels.namespace_id="kube-system"
logName="projects/&lt;project-id&gt;/logs/dnsmasq"
"Maximum number of concurrent DNS queries reached"
</code></pre> <p>If legacy Stackdriver logging of the cluster is disabled, use the following filter:</p> <pre><code>resource.type="k8s_container"
resource.labels.cluster_name="&lt;cluster-name&gt;"
resource.labels.namespace_name="kube-system"
resource.labels.container_name="dnsmasq"
"Maximum number of concurrent DNS queries reached"
</code></pre> <p>If Stackdriver logging is disabled, execute the following:</p> <pre><code>kubectl logs --tail=1000 --namespace=kube-system -l k8s-app=kube-dns -c dnsmasq | grep 'Maximum number of concurrent DNS queries reached'
</code></pre> <p>Additionally, you can run <code>dig ambassador-monitor.default @10.207.0.10</code> from each node to verify whether this is only impacting one node. If it is, you can simply re-create the impacted node.</p>
<p>I’d like to use Kubernetes, as I’m reading everywhere: “Kubernetes is, without a doubt, the leading container orchestrator available today.”</p> <p>My issue here is with the networking. I need to expose an external IP for each of the pods. I need my pods to be seen as if they were traditional VMs or hardware servers, meaning that all the ports need to be exposed.</p> <p>What I have seen so far is that I can expose only a limited list of ports.</p> <p>Am I correct? Or do I miss something?</p> <p>Cheers, Raoul</p>
<p>In Kubernetes, you need a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> to communicate with pods. To expose pods outside the Kubernetes cluster, you can use a Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30000
    name: my-port-8080
  - port: 8081
    nodePort: 30001
    name: my-port-8081
  - port: 8082
    nodePort: 30002
    name: my-port-8082
</code></pre> <p>Then you will be able to reach your pods at <code>https://&lt;node-ip&gt;:&lt;nodePort&gt;</code>. For in-cluster communication, you can use the Service's DNS name: <code>&lt;service-name&gt;.&lt;namespace&gt;.svc:&lt;port&gt;</code></p> <p><strong>Update:</strong></p> <p>Take a look at this guide: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">Using a Service to Expose Your App</a></p>
<p>We have defined an HPA for an application to have min 1 and max 4 replicas with 80% CPU as the threshold.</p> <p>What we wanted was: if the pod CPU goes beyond 80%, the app should be scaled up one replica at a time. Instead, what is happening is that the application is getting scaled up to the max number of replicas.</p> <p>How can we define the scale velocity to scale 1 pod at a time? And again, if one of the pods consumes more than 80% CPU, then scale one more pod up, but not to the maximum replicas.</p> <p>Let me know how we can achieve this.</p>
<p>First of all, the 80% CPU utilisation is not a threshold but a target value.</p> <p>The <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">HPA algorithm</a> for calculating the desired number of replicas is based on the following formula:</p> <blockquote> <p><code>X = N * (C/T)</code></p> </blockquote> <p>Where:</p> <ul> <li><code>X</code>: desired number of replicas</li> <li><code>N</code>: current number of replicas</li> <li><code>C</code>: current value of the metric</li> <li><code>T</code>: target value for the metric</li> </ul> <p>In other words, the algorithm aims at calculating a replica count that keeps the observed metric value as close as possible to the target value.</p> <p>In your case, this means if the average CPU utilisation across the pods of your app is below 80%, the HPA tends to decrease the number of replicas (to make the CPU utilisation of the remaining pods go up). On the other hand, if the average CPU utilisation across the pods is above 80%, the HPA tends to increase the number of replicas, so that the CPU utilisation of the individual pods decreases.</p> <p>The number of replicas that are added or removed in a single step depends on how far apart the current metric value is from the target value and on the current number of replicas. This decision is internal to the HPA algorithm and you can't directly influence it. The only contract that the HPA has with its users is to keep the metric value as close as possible to the target value.</p> <p>If you need a very specific autoscaling behaviour, you can write a custom controller (or operator) to autoscale your application instead of using the HPA.</p>
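<p>As an illustration only (the real controller also applies a tolerance band and the min/max replica bounds), the core of the formula above can be sketched in a few lines of Python:</p> <pre><code>import math

def desired_replicas(current_replicas, current_value, target_value):
    # X = ceil(N * C / T): the core of the HPA replica calculation
    return math.ceil(current_replicas * current_value / target_value)

# 2 replicas at 120% average CPU against an 80% target -&gt; scale up to 3
print(desired_replicas(2, 120, 80))  # -&gt; 3
# 4 replicas at 20% average CPU against an 80% target -&gt; scale down to 1
print(desired_replicas(4, 20, 80))   # -&gt; 1
</code></pre> <p>This also shows why the step size is not fixed: the further the current value is from the target, the larger the jump in the desired replica count.</p>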
<p>I am developing a series of microservices using Spring Boot and plan to deploy them on Kubernetes.</p> <p>Some of the microservices are composed of an API which writes messages to a Kafka queue and a listener which listens to the queue and performs the relevant actions (e.g. write to DB etc., construct messages for onward processing).</p> <p>These services work fine locally, but I am planning to run multiple instances of each microservice on Kubernetes. I'm thinking of the following options:</p> <ol> <li><p>Run multiple instances as is (i.e. each microservice serves as both an API and a listener).</p></li> <li><p>Introduce FRONTEND and BACKEND environment variables. If the FRONTEND variable is true, do not configure the listener process. If the BACKEND variable is true, configure the listener process. This way I can scale how many frontend / backend services I need, and also have the benefit of shutting down the backend services without losing requests.</p></li> </ol> <p>Any pointers, best practices or any other options would be much appreciated.</p>
<p>You can do as you describe, with environment variables, or you may also be interested in building your app with different profiles/bean configurations and making two different images.</p> <p>In both cases, you should use two different Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a> so you can scale and configure them independently.</p> <p>You may also be interested in the <a href="https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/" rel="nofollow noreferrer">Leader Election pattern</a> if you want only <strong>one active replica</strong>, which makes sense if only a single replica should process the events from a queue. This can also be solved by simply using a single replica, depending on your <em>availability</em> requirements.</p>
<p>So I have set up my application on Google Cloud using Kubernetes. I have a Pod which I want to expose out of the cluster that expects TCP requests.</p> <p>I came to know that this is possible via <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer" title="ingress-nginx">ingress-nginx</a> and researched it. As mentioned in the <a href="https://github.com/kubernetes/ingress-nginx/blob/nginx-0.9.0/docs/user-guide/exposing-tcp-udp-services.md" rel="noreferrer">docs here</a>, it can be done by setting up a ConfigMap like below:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/my-service-name:7051"
</code></pre> <p>However, its full usage is not clearly described, nor could I find a complete example in the docs.</p> <p>I have installed ingress-nginx as mentioned in the <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">Installation Guide</a> but I am unsure what the next steps are to expose my Pod.</p> <p><strong>Extra Info</strong></p> <ul> <li>The port in the Pod that I want to expose out of the cluster is <code>7051</code></li> <li>I have a NodePort Service that targets my Pod's port that can be used with Ingress to expose.</li> </ul>
<p>So, in order to achieve this you can do the following:</p> <ol> <li>First create the ConfigMap that you added to the post (note the closing quote on the value):</li> </ol> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/my-service-name:7051"
</code></pre> <ol start="2"> <li><p>Then edit your nginx-ingress-controller Deployment by adding this flag to the container args like below:</p> <pre><code>...
containers:
  - name: nginx-ingress-controller
    image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1"
    imagePullPolicy: "IfNotPresent"
    args:
      - /nginx-ingress-controller
      - --default-backend-service=nginx-ingress/nginx-ingress-default-backend
      - --election-id=ingress-controller-leader
      - --ingress-class=nginx
      - --configmap=nginx-ingress/nginx-ingress-controller
      - --tcp-services-configmap=default/tcp-configmap-example
...
</code></pre></li> <li><p>Edit the LoadBalancer Service by adding the port to it:</p> <pre><code>...
ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  - name: some-service-port
    port: 7051
    protocol: TCP
</code></pre></li> </ol> <p>Hope it helps!</p>
<p>What is the best approach and what are the benefits of using sophisticated solutions (Kubernetes, Docker Swarm) for the deployment of a few microservices developed using Spring Boot, Angular and Django projects backed by PostgreSQL database?</p> <p>Consider that I have the following microservices:</p> <ul> <li>microservice 1 and 2 (developed in Spring Boot), backed by PostgreSQL</li> <li>microservice 3 (developed in Django), backed by PostgreSQL</li> <li>microservice 4 (front-end developed in Angular) </li> </ul> <p>Each microservice contains one Dockerfile to build it up.</p> <p>What is the best approach for production-based deployments of such platform?</p> <ol> <li>I could deploy it using docker-compose quite easy but what is the real advantage of using sophisticated solutions as Kubernetes or Docker Swarm in the scenario like this?</li> <li>How many containers of PostgreSQL database should I create? One PostgreSQL container per microservice or shared one for all microservices using PostgreSQL?</li> </ol>
<p>Like most of these kinds of questions regarding deployment, "it depends", but let me try to answer. Let's first look at the differences:</p> <p><strong>Docker Compose vs (Docker Swarm or Kubernetes)</strong></p> <p>The most important difference which you need to acknowledge is that docker-compose can run your multi-container application on a <strong>single host only</strong>. It cannot run your application on a cluster of computers. If you have to run your application on a cluster, you can rule out docker-compose. For this you will need to use either Docker Swarm or Kubernetes. You can read more about it <a href="https://stackoverflow.com/questions/47536536/whats-the-difference-between-docker-compose-and-kubernetes?rq=1">here</a> and <a href="https://stackoverflow.com/questions/43875117/what-benefits-does-docker-compose-have-over-docker-swarm-and-docker-stack?rq=1">here</a>.</p> <p><strong>Docker Swarm vs Kubernetes</strong></p> <p>In short, they are both container orchestration solutions. This means that you would use them to orchestrate your containers in a cluster. Both of them tend to solve the same problem, but in different ways. Both of them are used for:</p> <ul> <li>container orchestration across the cluster</li> <li>scaling containers</li> <li>load balancing containers</li> <li>communication between containers</li> <li>and many more. <a href="https://stackoverflow.com/questions/40844726/how-is-docker-swarm-different-than-kubernetes">This</a> answer gives a summary of the comparison between them.</li> </ul> <p>One of the benefits if you decide to use Docker Swarm is that its CLI tooling is very similar to the standard Docker CLI, with some additional options. So setting it up and using it can be much easier if you are familiar with the Docker command line. You can also run your docker-compose YAML file on Swarm directly.
You can read about it <a href="https://docs.docker.com/compose/production/" rel="nofollow noreferrer">here</a> and <a href="https://docs.docker.com/compose/swarm/" rel="nofollow noreferrer">here</a>.</p> <p>Kubernetes, on the other hand, is very powerful and is supported by all major cloud providers like AWS, Azure and Google Cloud. It is:</p> <ul> <li>open source</li> <li>backed by the Cloud Native Computing Foundation (CNCF)</li> <li>supported by a big community</li> <li>usable with other container solutions (not only Docker)</li> </ul> <p>If you use Kubernetes, keep in mind that you will need to learn separate tools for managing it, including the kubectl CLI.</p> <p>There are pros and cons to using both, but my suggestion would be to use Kubernetes if you are deploying your solution to the cloud and aim to be as cloud-agnostic as possible.</p> <hr> <blockquote> <p>What is the best approach for production-based deployments of such platform?</p> </blockquote> <p>Knowing only the information that you gave us here, I would say it also depends on where you plan to deploy it:</p> <ul> <li>On some cloud provider like AWS, Azure, Google Cloud or some other?</li> <li>On a non-cloud provider, but still multiple servers or some cluster of servers?</li> <li>On one dedicated server for all of your components?</li> </ul> <p><strong>On some cloud provider like AWS, Azure, Google Cloud or some other?</strong></p> <p>If you are using some cloud provider and you plan to deploy there, then I would suggest using Kubernetes, as the major cloud providers have very good support for it and you can port your setup to another cloud provider if needed. I would also suggest taking a look at the container orchestration solutions specific to the cloud provider you are using. I was using AWS ECS for deployment and I had a very good experience with it.
Keep in mind that this way you will be more bound to that specific cloud provider compared to using Kubernetes or Docker Swarm.</p> <p><strong>On a non-cloud provider, but still multiple servers or some cluster of servers?</strong></p> <p>Here again you need to use Kubernetes or Docker Swarm in order to deploy your solution to multiple servers.</p> <p><strong>On one dedicated server for all of your components?</strong></p> <p>By deploying on one server you could technically also use docker-compose, the same one that you use for your development setup with some different configurations. This will of course limit you if you at some point decide to deploy on a cluster, as you will then need to migrate to Docker Swarm or Kubernetes.</p> <blockquote> <p>I could deploy it using docker-compose quite easy but what is the real advantage of using sophisticated solutions as Kubernetes or Docker Swarm in the scenario like this?</p> </blockquote> <p>I think that in the section above I explained the benefits and drawbacks of using docker-compose for production deployment. Usually how I personally do it is that I have 3 environments:</p> <ol> <li>Development environment set up with docker-compose</li> <li>Production environment set up with Kubernetes and deployed to a cluster on AWS</li> <li>Staging environment (copy of Production) set up with Kubernetes and deployed to a cluster on AWS, but using fewer servers (resources)</li> </ol> <p>So for Staging I use the same Kubernetes configuration but with fewer servers (it is expensive :)). This way you ensure that your setup is working, as you use the same configuration but with fewer resources, since your staging environment will not need as many resources as Production, which is used by your customers.</p> <blockquote> <p>How many containers of PostgreSQL database should I create?
One PostgreSQL container per microservice or shared one for all microservices using PostgreSQL?</p> </blockquote> <p>I would not recommend using containers for databases in production. Why? There are many reasons not to: simply, there are too many risks, and quite a few people have made the mistake of doing it. Sure, it can work for a very small application, but I would still advise against using Docker for Postgres in production. You can read more about it <a href="https://vsupalov.com/database-in-docker/" rel="nofollow noreferrer">here</a>. Here is one quote from the post:</p> <blockquote> <p>Here’s a pretty bad anti-pattern, which can cause you a lot of trouble, even if you’re just working on a small project. You should not run stateful applications in orchestration tools which are built for stateless apps.</p> </blockquote> <p>If you are using a cloud provider, I would suggest using the built-in services for databases (Database as a Service), for example RDS on AWS. <a href="https://aws.amazon.com/rds/postgresql/" rel="nofollow noreferrer">Here</a> you can read about it. On Azure this would be Azure Database for PostgreSQL, and other cloud providers have similar solutions. On the other hand, for your development setup it is fine to run your databases inside Docker containers.</p> <hr> <p><strong>Conclusion</strong></p> <p>Depending on your preference and needs, use Kubernetes or Docker Swarm for all your services except the databases. Since you are developing a solution using microservices, using Kubernetes will allow you to be cloud agnostic. This means that your infrastructure setup will be portable, so you can move it easily to another cloud provider, if that is important to you. You can even deploy it on standard hosting providers that are not cloud providers, on one or multiple servers.</p>
<p>Here is a manifest file for minikube Kubernetes, for a Deployment and a Service:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: hello_hello
        imagePullPolicy: Never
        ports:
        - containerPort: 4001
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 4001
    nodePort: 30036
    protocol: TCP
  type: NodePort
</code></pre> <p>And a simple HTTP server written in Go:</p> <pre><code>package main

import (
    http "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()
    r.GET("/ping", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "pong",
        })
    })

    server := &amp;http.Server{
        Addr:    ":4001",
        Handler: r,
    }
    server.ListenAndServe()
}
</code></pre> <p>When I make several requests to <em>IP:30036/ping</em> and then open the pod's logs, I can see that only 1 of 3 pods handles all requests. How can I make the other pods respond to requests?</p>
<p>You are exposing a service using a NodePort, so there is no reverse proxy in place; you connect directly to your Pod(s). This is a good choice to start with. (Later you might want to use an Ingress.)</p> <p>What you are seeing is that only one Pod handles your requests. You expect each request to be load balanced to a different pod. And your assumption is correct, but the load balancing does not happen on the HTTP request layer; it happens on the TCP layer.</p> <p>So when you have a persistent TCP connection and re-use it, you will not experience the load balancing that you expect. Since establishing a TCP connection is rather expensive latency-wise, an optimization is usually in place to avoid repeatedly opening new TCP connections: HTTP keep-alive.</p> <p>Keep-alive is enabled by default in most frameworks and clients; this is true for Go as well. Try <code>s.SetKeepAlivesEnabled(false)</code> and see if that fixes your issue. (Recommended only for testing!)</p> <p>You can also use multiple different clients, e.g. from the command line with curl, or disable keep-alive in Postman.</p>
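<p>To see the effect concretely, here is a self-contained sketch (Python standard library for illustration, although the question uses Go): with HTTP/1.1 keep-alive, two requests from the same client travel over a single reused TCP connection, which is why a load balancer working at the TCP layer sends them both to the same pod.</p> <pre><code>import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 enables keep-alive by default

    def do_GET(self):
        body = b"pong"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Start a tiny local server on an ephemeral port (stand-in for one pod)
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One client connection, two requests: with keep-alive the second request
# reuses the same TCP socket instead of opening a new connection.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/ping")
first = conn.getresponse().read()
conn.request("GET", "/ping")  # reuses the established connection
second = conn.getresponse().read()
conn.close()
server.shutdown()

print(first, second)  # -&gt; b'pong' b'pong'
</code></pre> <p>A TCP-level load balancer only sees one connection here, so both requests land on the same backend; forcing a fresh connection per request (or disabling keep-alive) is what spreads the load.</p>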
<p>This is the <a href="https://docs.openshift.com/container-platform/3.3/dev_guide/integrating_external_services.html#step-2-consume-a-service" rel="nofollow noreferrer">documentation on External Database Environment Variables</a>. It says,</p> <blockquote> <p>Using an external service in your application is similar to using an internal service. Your application will be assigned environment variables for the service and the additional environment variables with the credentials described in the previous step. For example, a MySQL container receives the following environment variables:</p> </blockquote> <pre><code>EXTERNAL_MYSQL_SERVICE_SERVICE_HOST=&lt;ip_address&gt; EXTERNAL_MYSQL_SERVICE_SERVICE_PORT=&lt;port_number&gt; MYSQL_USERNAME=&lt;mysql_username&gt; MYSQL_PASSWORD=&lt;mysql_password&gt; MYSQL_DATABASE_NAME=&lt;mysql_database&gt; </code></pre> <p>This part is not clear - <strong>Your application will be assigned environment variables for the service</strong>. </p> <p>How should the application be configured so that the <em>environment variables for the service</em> are assigned? I understand that, the ones defined in <code>DeploymentConfig</code> will flow into the application in say NodeJS as <code>process.env.MYSQL_USERNAME</code>, etc. I am not clear, how <code>EXTERNAL_MYSQL_SERVICE_SERVICE_HOST</code> or <code>EXTERNAL_MYSQL_SERVICE_SERVICE_PORT</code> will flow into.</p>
<p>From <code>Step 1</code> of the link that you posted, if you create a Service object</p> <pre><code>oc expose deploymentconfig/&lt;name&gt; </code></pre> <p>This will automatically generate environment variables (<a href="https://docs.openshift.com/container-platform/3.11/dev_guide/environment_variables.html#automatically-added-environment-variables" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.11/dev_guide/environment_variables.html#automatically-added-environment-variables</a>) for all pods in your namespace. (The environment variables may not be immediately available if the Service was added <em>after</em> your pods were already created...delete the pods to have them added on restart)</p>
<p>I have a container with an exposed port in a pod. When I check the log in the containerized app, the source of the requests is always 192.168.189.0 which is a cluster IP. I need to be able to see the original source IP of the request. Is there any way to do this? I tried modifying the service (externalTrafficPolicy: Local) instead of Cluster but it still doesn't work. Please help.</p>
<p>When you are working on an application or service that needs to know the source IP address, you need to know the topology of the network you are using. This means you need to know how the different layers of load balancers or proxies work to deliver the traffic to your service.</p> <p>Depending on which cloud provider you are using, or which load balancer you have in front of your application, the source IP address should be in a header of the request. The header to look for is <code>X-Forwarded-For</code>; more info <a href="https://developer.mozilla.org/es/docs/Web/HTTP/Headers/X-Forwarded-For" rel="nofollow noreferrer">here</a>. Depending on the proxy or load balancer you are using, sometimes you need to activate this header to receive the correct IP address.</p>
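<p>As a small illustration (Python here, and the helper name is made up; apply the same idea in your app's language), the original client is conventionally the left-most entry in the comma-separated header value:</p> <pre><code>def client_ip_from_forwarded_for(header_value):
    # X-Forwarded-For: client, proxy1, proxy2
    # The left-most entry is the original client; each proxy the request
    # passed through appends its own address. Only trust this header when
    # it is set by a proxy or load balancer that you control.
    hops = [hop.strip() for hop in header_value.split(",")]
    return hops[0]

print(client_ip_from_forwarded_for("203.0.113.7, 10.0.0.2, 192.168.189.1"))
# -&gt; 203.0.113.7
</code></pre>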
<p>When I'm creating resources for OpenShift/K8s, I might be out of coverage area. I'd like to be able to get a schema definition while offline.</p> <p>How can I get a schema for a kind from the command line? For example, I would like to get a generic schema for a Deployment, DeploymentConfig, Pod or Secret. Is there a way to get the schema without using Google? Ideally with some documentation describing it.</p>
<p>Posting @Graham Dumpleton's comment as a community wiki answer, based on the OP's response confirming that it solved the problem:</p> <blockquote> <p>Have you tried running <code>oc explain --recursive Deployment</code>? You still need to be connected when you generate it, so you would need to save it to a file for later reference. Maybe also get down and read the free eBook at openshift.com/deploying-to-openshift which mentions this command and lots of other stuff as well. – Graham Dumpleton</p> </blockquote>
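<p>Since the command needs a live connection, one approach is to dump the schemas you care about to files while you are still online (a sketch; pick whatever kinds and file names suit you):</p> <pre><code>oc explain --recursive Deployment &gt; deployment-schema.txt
oc explain --recursive DeploymentConfig &gt; deploymentconfig-schema.txt
oc explain --recursive Pod &gt; pod-schema.txt
oc explain --recursive Secret &gt; secret-schema.txt
</code></pre>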
<p>Several resources on the web point to the existence of Cloud Run for GKE. For example, this Google <a href="https://codelabs.developers.google.com/codelabs/cloud-run-gke/" rel="nofollow noreferrer">codelab</a>, this YouTube <a href="https://www.youtube.com/watch?v=RVdhyprptTQ" rel="nofollow noreferrer">video</a> from Google and this LinkedIn training <a href="https://www.linkedin.com/learning/google-cloud-platform-essential-training-3/google-cloud-run-on-gke" rel="nofollow noreferrer">video</a>.</p> <p>However, the Cloud Run for GKE functionality seems to have disappeared when you try to create a new Kubernetes cluster using the Google Cloud web console. The checkboxes to enable Istio and Cloud Run for GKE underneath "Additional features" are not available anymore. (see <a href="https://www.linkedin.com/learning/google-cloud-platform-essential-training-3/google-cloud-run-on-gke" rel="nofollow noreferrer">3:40</a> on this LinkedIn video tutorial)</p> <p>The official <a href="https://cloud.google.com/run/docs/gke/setup" rel="nofollow noreferrer">documentation</a> about Cloud Run for GKE also seems to have disappeared or changed, and been replaced with documentation about Cloud Run on Anthos.</p> <p>So, in short, what happened to Cloud Run for GKE?</p>
<p>You first need to create a GKE cluster, and then, when creating the Cloud Run service, choose <code>Cloud Run for Anthos</code>; so it's not really gone anywhere.</p> <p><a href="https://i.stack.imgur.com/vLpku.png" rel="noreferrer"><img src="https://i.stack.imgur.com/vLpku.png" alt="Here it is!"></a></p> <p>If the option was greyed out, that was probably because you first had to tick "Enable Stackdriver...".</p>
<p>I have a requirement to expose pods using selector as a NodePort service from command-line. There can be one or more pods. And the service needs to include the pods dynamically as they come and go. For example,</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE rakesh rakesh-pod1 1/1 Running 0 4d18h rakesh rakesh-pod2 1/1 Running 0 4d18h </code></pre> <p>I can create a service that selects the pods I want using a service-definition yaml file - </p> <pre><code>apiVersion: v1 kind: Service metadata: name: np-service namespace: rakesh spec: type: NodePort ports: - name: port1 port: 30005 targetPort: 30005 nodePort: 30005 selector: abc.property: rakesh </code></pre> <p>However, I need to achieve the same via commandline. I tried the following - </p> <pre><code>kubectl -n rakesh expose pod --selector="abc.property: rakesh" --port=30005 --target-port=30005 --name=np-service --type=NodePort </code></pre> <p>and a few other variations without success.</p> <p><em>Note: I understand that currently, there is no way to specify the node-port using command-line. A random port is allocated between 30000-32767. I am fine with that as I can patch it later to the required port.</em></p> <p><em>Also note: These pods are not part of a deployment unfortunately. Otherwise, <code>expose deployment</code> might have worked.</em></p> <p>So, it kind of boils down to selecting pods based on selector and exposing them as a service.</p> <p>My <code>kubernetes</code> version: 1.12.5 upstream</p> <p>My <code>kubectl</code> version: 1.12.5 upstream</p>
<p>You can do:</p> <pre><code>kubectl expose $(kubectl get po -l abc.property=rakesh -o name) --port 30005 --name np-service --type NodePort </code></pre>
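<p>Since the NodePort is allocated randomly, you can then patch it to the port you want, as mentioned in the question (a sketch; it assumes the service has a single port entry at index 0):</p> <pre><code>kubectl -n rakesh patch svc np-service --type=json \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30005}]'
</code></pre>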
<p>I have deployed a windows container which runs successfully in my local system using docker. Moved the image to Azure container registry and deployed the image from ACR to AKS.</p> <p>I have created a windows node using AKS windows preview</p> <pre><code>C:\Users\HTECH&gt;kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME aks-nodepool1-36563144-vmss000000 Ready agent 2d3h v1.14.0 10.240.0.4 &lt;none&gt; Ubuntu 16.04.6 LTS 4.15.0-1042-azure docker://3.0.4 akssample000000 Ready agent 2d2h v1.14.0 10.240.0.35 &lt;none&gt; Windows Server Datacenter 10.0.17763.379 docker://18.9.2 </code></pre> <p>Docker File:</p> <pre><code>FROM microsoft/iis:latest SHELL ["powershell"] RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \ Install-WindowsFeature Web-Asp-Net45 COPY . ewims RUN Remove-WebSite -Name 'Default Web Site' RUN New-Website -Name 'sample' -Port 80 \ -PhysicalPath 'c:\sample' -ApplicationPool '.NET v4.5' EXPOSE 80 </code></pre> <p>Manifest YAML</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: samplecloudpoc-v1 spec: replicas: 1 strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 minReadySeconds: 5 selector: matchLabels: app: samplecloudpoc-v1 template: metadata: labels: app: samplecloudpoc-v1 spec: containers: - name: samplecloudpoc-v1 image: samplecloud.azurecr.io/sample:v1 ports: - containerPort: 80 resources: requests: cpu: 100m limits: cpu: 100m env: - name: dev value: "samplecloudpoc-v1" imagePullSecrets: - name: sampleauth nodeSelector: beta.kubernetes.io/os: windows --- apiVersion: v1 kind: Service metadata: name: samplecloudpoc-v1 spec: loadBalancerIP: 13.90.205.141 type: LoadBalancer ports: - port: 80 selector: app: samplecloudpoc-v1 </code></pre> <p>While checking the deployement status using below command, I'm getting the following error.</p> <pre><code>D:\Cloud&gt;kubectl describe po samplecloudpoc-v1-5d567d48d9-7gtx8 Name: 
samplecloudpoc-v1-5d567d48d9-7gtx8 Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: akssample000000/10.240.0.35 Start Time: Thu, 30 May 2019 13:05:13 +0530 Labels: app=samplecloudpoc-v1 pod-template-hash=5d567d48d9 Annotations: &lt;none&gt; Status: Running IP: 10.240.0.44 Controlled By: ReplicaSet/samplecloudpoc-v1-5d567d48d9 Containers: sample: Container ID: docker://0cf6c92b15738c2786caca5b989aa2773c9375352cb4f1d95472ff63cc7b5112 Image: samplecloud.azurecr.io/sample:v1 Image ID: docker-pullable://samplecloud.azurecr.io/sample@sha256:55ac14dc512abc0f8deebb8b87ee47d51fdfbfd997ce6cee0ab521bd69d42b08 Port: 80/TCP Host Port: 0/TCP Args: -it State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: ContainerCannotRun Message: CreateComputeSystem 0cf6c92b15738c2786caca5b989aa2773c9375352cb4f1d95472ff63cc7b5112: The container operating system does not match the host operating system. (extra info: {"SystemType":"Container","Name":"0cf6c92b15738c2786caca5b989aa2773c9375352cb4f1d95472ff63cc7b5112","Owner":"docker","VolumePath":"\\\\?\\Volume{f5ff1135-4e83-4baa-961d-f4533dcb6985}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\0cf6c92b15738c2786caca5b989aa2773c9375352cb4f1d95472ff63cc7b5112","Layers":[{"ID":"b3b88c23-310f-5e95-86bc-117e9f6a6184","Path":"C:\\ProgramData\\docker\\windowsfilter\\0cc15446c028e2fe68601b10a1921a809dedb2a981162c4ed90344d2fde58f0e"},{"ID":"fb1ae57e-89dc-502c-996b-75804b972adc","Path":"C:\\ProgramData\\docker\\windowsfilter\\7217ca2e8bbd2c431c9db44050946ec4f8040c42fdd79f7ceae321e48dc5ca0d"},{"ID":"bb5e3864-b1af-51c8-a8e4-63d88749f082","Path":"C:\\ProgramData\\docker\\windowsfilter\\16f07ffe70a600c95bea2e8297c04cbb6af877562c2cc2ac1693267b152d3793"},{"ID":"2fae8c16-582f-5ab1-acfe-0a88980adec3","Path":"C:\\ProgramData\\docker\\windowsfilter\\a325070d766dd4af490b433d75eac6e1d71297961d89011e5798949eae2e7e4a"},{"ID":"dffd6df2-a500-5985-9c9c-1bc03c9efce3","Path":"C:\\ProgramData\\doc
ker\\windowsfilter\\1221f773d66647fd1dc7aad44693f28843c8385612edb520231c1cb754eb2f97"},{"ID":"7e349a26-81b9-554e-aa13-a6e4286de93e","Path":"C:\\ProgramData\\docker\\windowsfilter\\67d6d22eae7f829e590fde792c6b8129aff3d9f9242850fe72e8d167e284a6b7"},{"ID":"8730db1a-385d-5e9a-a4ec-c45525b5fcb3","Path":"C:\\ProgramData\\docker\\windowsfilter\\2a53ed97b10bd4f67e62e8511e8922496651f3d343dd1889425ba1bedca134fa"},{"ID":"d1e23520-6c0b-5909-8e52-bb6961f80876","Path":"C:\\ProgramData\\docker\\windowsfilter\\d3a27083556be1bb7e36997f0eee2b544f6a16eab94797715bc21db99bf42e88"},{"ID":"18d8ab30-09e9-54e3-a991-f48cca651c8d","Path":"C:\\ProgramData\\docker\\windowsfilter\\9b4143f537ff70f6b1e05b2a5e38e3b05dd2a4b2f624822e32bb2b7cd17b7cca"},{"ID":"2acb6fa3-f27c-50cf-9033-eedb06d5bf32","Path":"C:\\ProgramData\\docker\\windowsfilter\\f71b6708cc4045bf9633f971dd4d6eddb1c5ffeda52d38e648c740e0e277b2df"},{"ID":"0dc40cf1-482a-5fed-af35-c5d1902b95ae","Path":"C:\\ProgramData\\docker\\windowsfilter\\100f3380579a77f2fb2c0f997201e34a0dd2c42e4b0d9a39fb850706aa16e474"}],"ProcessorWeight":5000,"HostName":"samplecloudpoc-v1-5d567d48d9-7gtx8","MappedDirectories":[{"HostPath":"c:\\var\\lib\\kubelet\\pods\\75f48f65-82ad-11e9-8e99-a24a72224ed5\\volumes\\kubernetes.io~secret\\default-token-67g2m","ContainerPath":"c:\\var\\run\\secrets\\kubernetes.io\\serviceaccount","ReadOnly":true,"BandwidthMaximum":0,"IOPSMaximum":0,"CreateInUtilityVM":false}],"HvPartition":false,"NetworkSharedContainerName":"d11237887aec604bfbb9b3cd56fca586975e5a92e04dab4d4ba19b1fcc56ed99"}) Exit Code: 128 Started: Thu, 30 May 2019 13:25:00 +0530 Finished: Thu, 30 May 2019 13:25:00 +0530 Ready: False Restart Count: 5 Limits: cpu: 100m Requests: cpu: 100m Environment: dev: samplecloudpoc-v1 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-67g2m (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-67g2m: Type: Secret (a volume populated by a Secret) 
SecretName: default-token-67g2m Optional: false QoS Class: Burstable Node-Selectors: beta.kubernetes.io/os=windows Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m default-scheduler Successfully assigned default/samplecloudpoc-v1-5d567d48d9-7gtx8 to akssample000000 Normal Pulling 21m kubelet, akssample000000 Pulling image "samplecloud.azurecr.io/sample:v1" Normal Pulled 4m31s kubelet, akssample000000 Successfully pulled image "samplecloud.azurecr.io/sample:v1" Normal Created 3m (x5 over 4m31s) kubelet, akssample000000 Created container sample Normal Pulled 3m (x4 over 4m28s) kubelet, akssample000000 Container image "samplecloud.azurecr.io/sample:v1" already present on machine Warning Failed 2m59s (x5 over 4m30s) kubelet, akssample000000 Error: failed to start container "sample": Error response from daemon: CreateComputeSystem sample: The container operating system does not match the host operating system. 
(extra info: {"SystemType":"Container","Name":"sample","Owner":"docker","VolumePath":"\\\\?\\Volume{f5ff1135-4e83-4baa-961d-f4533dcb6985}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\sample","Layers":[{"ID":"b3b88c23-310f-5e95-86bc-117e9f6a6184","Path":"C:\\ProgramData\\docker\\windowsfilter\\0cc15446c028e2fe68601b10a1921a809dedb2a981162c4ed90344d2fde58f0e"},{"ID":"fb1ae57e-89dc-502c-996b-75804b972adc","Path":"C:\\ProgramData\\docker\\windowsfilter\\7217ca2e8bbd2c431c9db44050946ec4f8040c42fdd79f7ceae321e48dc5ca0d"},{"ID":"bb5e3864-b1af-51c8-a8e4-63d88749f082","Path":"C:\\ProgramData\\docker\\windowsfilter\\16f07ffe70a600c95bea2e8297c04cbb6af877562c2cc2ac1693267b152d3793"},{"ID":"2fae8c16-582f-5ab1-acfe-0a88980adec3","Path":"C:\\ProgramData\\docker\\windowsfilter\\a325070d766dd4af490b433d75eac6e1d71297961d89011e5798949eae2e7e4a"},{"ID":"dffd6df2-a500-5985-9c9c-1bc03c9efce3","Path":"C:\\ProgramData\\docker\\windowsfilter\\1221f773d66647fd1dc7aad44693f28843c8385612edb520231c1cb754eb2f97"},{"ID":"7e349a26-81b9-554e-aa13-a6e4286de93e","Path":"C:\\ProgramData\\docker\\windowsfilter\\67d6d22eae7f829e590fde792c6b8129aff3d9f9242850fe72e8d167e284a6b7"},{"ID":"8730db1a-385d-5e9a-a4ec-c45525b5fcb3","Path":"C:\\ProgramData\\docker\\windowsfilter\\2a53ed97b10bd4f67e62e8511e8922496651f3d343dd1889425ba1bedca134fa"},{"ID":"d1e23520-6c0b-5909-8e52-bb6961f80876","Path":"C:\\ProgramData\\docker\\windowsfilter\\d3a27083556be1bb7e36997f0eee2b544f6a16eab94797715bc21db99bf42e88"},{"ID":"18d8ab30-09e9-54e3-a991-f48cca651c8d","Path":"C:\\ProgramData\\docker\\windowsfilter\\9b4143f537ff70f6b1e05b2a5e38e3b05dd2a4b2f624822e32bb2b7cd17b7cca"},{"ID":"2acb6fa3-f27c-50cf-9033-eedb06d5bf32","Path":"C:\\ProgramData\\docker\\windowsfilter\\f71b6708cc4045bf9633f971dd4d6eddb1c5ffeda52d38e648c740e0e277b2df"},{"ID":"0dc40cf1-482a-5fed-af35-c5d1902b95ae","Path":"C:\\ProgramData\\docker\\windowsfilter\\100f3380579a77f2fb2c0f997201e34a0dd2c42e4b0d9a39fb850706aa16
e474"}],"ProcessorWeight":5000,"HostName":"samplecloudpoc-v1-5d567d48d9-7gtx8","MappedDirectories":[{"HostPath":"c:\\var\\lib\\kubelet\\pods\\75f48f65-82ad-11e9-8e99-a24a72224ed5\\volumes\\kubernetes.io~secret\\default-token-67g2m","ContainerPath":"c:\\var\\run\\secrets\\kubernetes.io\\serviceaccount","ReadOnly":true,"BandwidthMaximum":0,"IOPSMaximum":0,"CreateInUtilityVM":false}],"HvPartition":false,"NetworkSharedContainerName":"d11237887aec604bfbb9b3cd56fca586975e5a92e04dab4d4ba19b1fcc56ed99"}) Warning BackOff 119s (x10 over 3m59s) kubelet, akssample000000 Back-off restarting failed container </code></pre> <p>I have tried to fix this by pulling the latest version IIS image from docker also throws the same error.</p>
<p>I had the same problem; the issue was that my node in Kubernetes was running Windows 2019 and my image was built with Windows 2016.</p> <p>To check the Windows version of your image:</p> <pre><code>docker inspect &lt;image&gt;:&lt;tag&gt; </code></pre> <p>Then you can get the nodes with kubectl to view the Windows version of each node:</p> <pre><code>kubectl get nodes -o wide </code></pre> <p>To resolve it, I rebuilt my image on a Windows 2019 machine and then it worked on AKS.</p>
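<p>A sketch of the two checks side by side (run the first one wherever the image is present locally):</p> <pre><code># Windows version baked into the image
docker inspect --format '{{.Os}} {{.OsVersion}}' &lt;image&gt;:&lt;tag&gt;

# Windows version of each node (OS-IMAGE and KERNEL-VERSION columns)
kubectl get nodes -o wide
</code></pre>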
<p>I'm aware of the usual way to "tail --follow" logs of a Kubernetes service:</p> <pre><code>kubectl logs -f -lapp=service-name --all-containers=true </code></pre> <p>But every time I push a change to this service the <code>kubectl</code> dies when the pods get killed and I have to run it again.</p> <p>Example error message:</p> <blockquote> <p>Received interrupt, cleaning up... rpc error: code = Unknown desc = Error: No such container: b48a1ad3c0080680465c79903d03748a026becf397bc780921674b2f0d7078ffReceived interrupt, cleaning up...</p> <p>rpc error: code = Unknown desc = Error: No such container: 6258e31702ea678eacec5ad0df15b5620b5609cd5e4822f2e4991fd26c9906b6 $</p> </blockquote> <p>What I wonder is if there is a way to tell Kubernetes to keep looking for new pods which match the <code>-lapp=service-name</code> tag and tail them. A bit similar to <code>tail --follow=name --retry</code></p> <p>I suppose I can run the command in a simple shell loop but was wondering if there is something smarter that can use <code>Kubernetes</code>, one which will avoid lots of errors while the new pods get deployed.</p>
<p>What about using <a href="https://github.com/wercker/stern" rel="nofollow noreferrer"><code>stern</code></a> instead?</p> <p>Basically, you can use the same structure of the command; just replace <code>kubectl logs -f -lapp=service-name --all-containers=true</code> with <code>stern -l app=service-name</code></p>
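<p>If you would rather stay with plain <code>kubectl</code>, the shell loop mentioned in the question can be as simple as this sketch, which restarts the tail whenever the pods are replaced:</p> <pre><code>while true; do
  kubectl logs -f -lapp=service-name --all-containers=true
  sleep 2
done
</code></pre>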
<p>How can I check which admission controllers are enabled by default on the cluster, using a command like kubectl?</p>
<p>You can try using the following command:</p> <pre><code>kubectl cluster-info dump | grep -i admission </code></pre>
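<p>Alternatively, on clusters where the API server runs as a visible pod (for example kubeadm-based clusters; this will not work on most managed offerings), you can read the flag straight from the kube-apiserver pod, something like:</p> <pre><code>kubectl -n kube-system describe pod -l component=kube-apiserver | grep -i admission
</code></pre>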
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    ingress.bluemix.net/rewrite-path: "serviceName=nginx rewrite=/"
  name: nginx-ingress
  namespace: 'default'
spec:
  rules:
    - host: www.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: nginx
              servicePort: 80
</code></pre> <p>Here I have my ingress config <code>yaml</code> file. When I apply it, everything works correctly, but only when you go to <code>www.domain.com</code>; when I try to use <code>domain.com</code>, it doesn't work and returns</p> <blockquote> <p>default backend - 404</p> </blockquote> <p>What should I do? Add one more host to the rules:</p> <pre><code>  - host: domain.com
    http:
      paths:
        - path: /*
          backend:
            serviceName: nginx
            servicePort: 80
</code></pre> <p>Like this, or is there a better way to solve this problem?</p>
<p>Add a server-alias annotation to the <code>annotations</code> block:</p> <pre><code>nginx.ingress.kubernetes.io/server-alias: domain.com </code></pre>
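<p>In the context of the ingress from the question, the <code>annotations</code> block would then look like this:</p> <pre><code>metadata:
  name: nginx-ingress
  namespace: 'default'
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    ingress.bluemix.net/rewrite-path: "serviceName=nginx rewrite=/"
    nginx.ingress.kubernetes.io/server-alias: domain.com
</code></pre>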
<p>I have 2 pre-install scripts that should run sequentially. They obviously have different weights. The second script must wait for the first one to finish running. Is there any way to make Helm support this behavior?</p> <p>And also, for bonus points: can you make a pre-install hook block until the object becomes ready?</p> <p><strong>The scenario is as follows:</strong></p> <p>You have a database, and an application. </p> <p>The setup is:</p> <ol> <li>create a database (deployment),</li> <li>create a database (service),</li> <li>run a script that creates all the database users on that db (job),</li> <li>start the application server (deployment).</li> </ol>
<p>First, you need to set the <a href="https://helm.sh/docs/charts_hooks/#writing-a-hook" rel="nofollow noreferrer">hook weights</a> properly. E.g:</p> <pre><code> annotations: "helm.sh/hook-weight": "5" </code></pre> <blockquote> <p>Hook weights can be positive or negative numbers but must be represented as strings. When Tiller starts the execution cycle of hooks of a particular kind (ex. the <code>pre-install</code> hooks or <code>post-install</code> hooks, etc.) it will sort those hooks in ascending order.</p> </blockquote> <p>According to the <a href="https://helm.sh/docs/charts_hooks/#hooks-and-the-release-lifecycle" rel="nofollow noreferrer">Hooks and release lifecycle</a>, by default, the Tiller waits until a hook becomes “Ready” before executing the next ones. The catch is: When dealing with scripts managed by hooks, you need to create the resource as a <code>Job</code>:</p> <blockquote> <p>What does it mean to wait until a hook is ready? This depends on the resource declared in the hook. If the resources is a Job kind, Tiller will wait until the job successfully runs to completion. And if the job fails, the release will fail. <strong>This is a <em>blocking operation</em></strong>, so the Helm client will pause while the Job is run.</p> </blockquote> <p>If you want to run jobs that depend on the database or application to be Ready, it's better to use the hooks as <code>post-install</code>, combined with the <a href="https://helm.sh/docs/using_helm/#helpful-options-for-install-upgrade-rollback" rel="nofollow noreferrer"><code>--wait</code></a> flag. When this flag is set, Tiller will wait until all release resources are deployed and in a ready state and will not run the <code>post-install</code> hook until they are ready.</p>
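<p>A minimal sketch of such a hook, for the database-users step from the question (the image name and script path are hypothetical):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: create-db-users
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-db-users
          image: myorg/db-init:latest   # hypothetical image
          command: ["/scripts/create-users.sh"]
</code></pre> <p>Tiller waits for this Job to complete before moving on to hooks with a higher weight.</p>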
<p>With spark-submit I launch an application on a Kubernetes cluster, and I can see the Spark UI only when I go to <a href="http://driver-pod:port" rel="noreferrer">http://driver-pod:port</a>.</p> <p>How can I start the Spark History Server on the cluster, and how can I make all running Spark jobs register with it?</p> <p>Is this possible?</p>
<p>Yes, it is possible. Briefly, you will need to ensure the following:</p> <ul> <li>Make sure all your applications store event logs in a specific location (<code>filesystem</code>, <code>s3</code>, <code>hdfs</code> etc).</li> <li>Deploy the history server in your cluster with access to the above event log location.</li> </ul> <p>Now spark (by default) only reads from the <code>filesystem</code> path, so I will elaborate this case in detail with the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="noreferrer">spark operator</a>:</p> <ul> <li>Create a <code>PVC</code> with a volume type that supports <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-mode" rel="noreferrer">ReadWriteMany</a> mode, for example an <code>NFS</code> volume. The following snippet assumes you already have a storage class for <code>NFS</code> (<code>nfs-volume</code>) configured:</li> </ul> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-pvc
  namespace: spark-apps
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-volume
</code></pre> <ul> <li>Make sure all your spark applications have event logging enabled and pointed at the correct path:</li> </ul> <pre><code>sparkConf:
  "spark.eventLog.enabled": "true"
  "spark.eventLog.dir": "file:/mnt"
</code></pre> <ul> <li>Mount the event-log volume into each application pod (you can also use the operator's mutating webhook to centralize this).
An example manifest with the mentioned config is shown below:</li> </ul> <pre><code>---
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-java-pi
  namespace: spark-apps
spec:
  type: Java
  mode: cluster
  image: gcr.io/spark-operator/spark:v2.4.4
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar"
  imagePullPolicy: Always
  sparkVersion: 2.4.4
  sparkConf:
    "spark.eventLog.enabled": "true"
    "spark.eventLog.dir": "file:/mnt"
  restartPolicy:
    type: Never
  volumes:
    - name: spark-data
      persistentVolumeClaim:
        claimName: spark-pvc
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 2.4.4
    serviceAccount: spark
    volumeMounts:
      - name: spark-data
        mountPath: /mnt
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 2.4.4
    volumeMounts:
      - name: spark-data
        mountPath: /mnt
</code></pre> <ul> <li>Install the spark history server, mounting the shared volume. Then you will have access to the events in the history server UI:</li> </ul> <pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: spark-history-server
  namespace: spark-apps
spec:
  replicas: 1
  template:
    metadata:
      name: spark-history-server
      labels:
        app: spark-history-server
    spec:
      containers:
        - name: spark-history-server
          image: gcr.io/spark-operator/spark:v2.4.0
          resources:
            requests:
              memory: "512Mi"
              cpu: "100m"
          command:
            - /sbin/tini
            - -s
            - --
            - /opt/spark/bin/spark-class
            - -Dspark.history.fs.logDirectory=/data/
            - org.apache.spark.deploy.history.HistoryServer
          ports:
            - name: http
              protocol: TCP
              containerPort: 18080
          readinessProbe:
            timeoutSeconds: 4
            httpGet:
              path: /
              port: http
          livenessProbe:
            timeoutSeconds: 4
            httpGet:
              path: /
              port: http
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: spark-pvc
            readOnly: true
</code></pre> <p>Feel free to configure an <code>Ingress</code> or <code>Service</code> for accessing the <code>UI</code>. 
<a href="https://i.stack.imgur.com/C7BXT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/C7BXT.png" alt="enter image description here"></a></p> <p>Also you can use Google Cloud Storage, Azrue Blob Storage or AWS S3 as event log location. For this you will need to install some extra <code>jars</code> so I would recommend having a look at lightbend spark history server <a href="https://github.com/lightbend/spark-history-server-docker" rel="noreferrer">image</a> and <a href="https://github.com/helm/charts/tree/master/stable/spark-history-server" rel="noreferrer">charts</a>.</p>
<p>I am trying to setup an ASPNET.Core 3.0 web app using Kubernetes but I am unable to get Kestrel work with https. </p> <p>I found some information searching the web regarding two environment variables I can declare to pass the path and the password for the certificate.</p> <p>I've made a Kubernetes deployment using those variables like so:</p> <pre><code> spec: containers: - env: - name: ASPNETCORE_URLS value: http://+:80;https://+:443 - name: ASPNETCORE_KESTREL_CERTIFICATE_PASSWORD value: password - name: ASPNETCORE_KESTREL_CERTIFICATE_PATH value: /app/tls/certificate.pfx volumeMounts: - name: storage mountPath: "/app/tls" volumes: - name: storage persistentVolumeClaim: claimName: tls-storage </code></pre> <p>I ran the app without https enabled and saw the volume is mounted correctly inside the pod and <code>certificate.pfx</code> is present in <code>/app/tls/</code>.</p> <p>Anybody know if Kestrel is configured to get the values from those env variables by default or should I also write some code in <code>Program.cs</code>/<code>Startup.cs</code>?</p>
<p>I actually just found the correct environment variables:</p> <p><code>ASPNETCORE_Kestrel__Certificates__Default__Password</code></p> <p><code>ASPNETCORE_Kestrel__Certificates__Default__Path</code></p>
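<p>In the deployment from the question that would look roughly like this (reading the password from a Secret is optional but avoids putting it in the manifest; the Secret name here is hypothetical):</p> <pre><code>env:
  - name: ASPNETCORE_URLS
    value: https://+:443;http://+:80
  - name: ASPNETCORE_Kestrel__Certificates__Default__Path
    value: /app/tls/certificate.pfx
  - name: ASPNETCORE_Kestrel__Certificates__Default__Password
    valueFrom:
      secretKeyRef:
        name: kestrel-cert-password
        key: password
</code></pre>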
<p>Using Kubernetes, I'm trying to map *.api requests to *.<br> <a href="https://stackoverflow.com/questions/54509142/coredns-suffix-rewrite-causes-dns-queries-to-return-the-rewritten-name">I found this thread</a> which helps me how to achieve this and it's working by updating the CoreDNS configuration.<br> But I would like to do this via a yaml apply so it can be easily deployed to different environments. Also if the CoreDNS config changes in a later release, I won't be getting those changes.<br> So my question is, how could I apply a yaml file to achieve this:</p> <pre><code>rewrite stop { name regex (.*)\.api {1}.some-namespace.svc.cluster.local answer name (.*)\.some-namespace\.svc\.cluster\.local {1}.api } </code></pre> <p>I found this article: <a href="https://learn.microsoft.com/en-us/azure/aks/coredns-custom" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/coredns-custom</a><br> But I'm unable to figure out how I can use that example for my use-case.</p>
<p>Given that there are no other answers yet, let me describe a possible approach...two, actually.</p> <p>The main idea is to use CoreDNS' <a href="https://coredns.io/plugins/import/" rel="nofollow noreferrer">import</a> directive - "...The import plugin can be used to include files into the main configuration". And from the CoreDNS manual - "...This plugin is a bit special in that it may be used anywhere in the Corefile".</p> <p>One option (#1) is to edit the <code>coredns</code> configMap to add <code>import</code> directive to include configuration from another file like in these configMap-s for <a href="https://github.com/Azure/aks-engine/blob/v0.42.2/parts/k8s/addons/coredns.yaml#L62" rel="nofollow noreferrer">AKS</a> and <a href="https://github.com/rancher/k3s/blob/60801e215316c95dd4e65e0203556376a2ca6dd1/manifests/coredns.yaml#L58" rel="nofollow noreferrer">k3s</a>; then add a new volume in the deployment config - see <a href="https://github.com/Azure/aks-engine/blob/v0.42.2/parts/k8s/addons/coredns.yaml#L171-L173" rel="nofollow noreferrer">here</a> and <a href="https://github.com/Azure/aks-engine/blob/v0.42.2/parts/k8s/addons/coredns.yaml#L214-L220" rel="nofollow noreferrer">here</a>.</p> <p>Another option (#2) could be to add a new configMap with your configuration and that also imports the <code>/etc/coredns/Corefile</code> file mounted as volume from the "stock" <code>coredns</code> configMap; change the <code>coredns</code> <em>deployment</em> configuration to add a volume from the new configMap and point the <a href="https://github.com/Azure/aks-engine/blob/v0.42.2/parts/k8s/addons/coredns.yaml#L166" rel="nofollow noreferrer">"-conf" argument</a> to the file mounted as volume from the new configMap.</p> <p>A drawback is that in both cases you'll have to re-implement the change if in a later release the coredns configMap and/or deployment configuration change.</p>
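<p>For AKS specifically, the Microsoft article linked in the question boils down to a ConfigMap like the following sketch (the <code>coredns-custom</code> name and the <code>.override</code> key suffix are the AKS convention; the key name itself is arbitrary):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  api-rewrite.override: |
    rewrite stop {
      name regex (.*)\.api {1}.some-namespace.svc.cluster.local
      answer name (.*)\.some-namespace\.svc\.cluster\.local {1}.api
    }
</code></pre>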
<p>I can't seem to get cert-manager working:</p> <pre><code>$ kubectl get certificates -o wide NAME READY SECRET ISSUER STATUS AGE example-ingress False example-ingress letsencrypt-prod Waiting for CertificateRequest "example-ingress-2556707613" to complete 6m23s $ kubectl get CertificateRequest -o wide NAME READY ISSUER STATUS AGE example-ingress-2556707613 False letsencrypt-prod Referenced "Issuer" not found: issuer.cert-manager.io "letsencrypt-prod" not found 7m7s </code></pre> <p>and in the logs i see:</p> <pre><code>I1025 06:22:00.117292 1 sync.go:163] cert-manager/controller/ingress-shim "level"=0 "msg"="certificate already exists for ingress resource, ensuring it is up to date" "related_resource_kind"="Certificate" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Ingress" "resource_name"="example-ingress" "resource_namespace"="default" I1025 06:22:00.117341 1 sync.go:176] cert-manager/controller/ingress-shim "level"=0 "msg"="certificate resource is already up to date for ingress" "related_resource_kind"="Certificate" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Ingress" "resource_name"="example-ingress" "resource_namespace"="default" I1025 06:22:00.117382 1 controller.go:135] cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="default/example-ingress" I1025 06:22:00.118026 1 sync.go:361] cert-manager/controller/certificates "level"=0 "msg"="no existing CertificateRequest resource exists, creating new request..." 
"related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default" I1025 06:22:00.147147 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-venafi "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.147267 1 sync.go:373] cert-manager/controller/certificates "level"=0 "msg"="created certificate request" "related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default" "request_name"="example-ingress-2556707613" I1025 06:22:00.147284 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-acme "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.147273 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147254385 +0000 UTC m=+603.871617341 I1025 06:22:00.147392 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147380513 +0000 UTC m=+603.871743521 E1025 06:22:00.147560 1 pki.go:128] cert-manager/controller/certificates "msg"="error decoding x509 certificate" "error"="error decoding cert PEM block" "related_resource_kind"="Secret" "related_resource_name"="example-ingress" "related_resource_namespace"="default" "resource_kind"="Certificate" "resource_name"="example-ingress" "resource_namespace"="default" "secret_key"="tls.crt" I1025 06:22:00.147620 1 conditions.go:155] Setting lastTransitionTime for Certificate "example-ingress" condition "Ready" to 2019-10-25 06:22:00.147613112 +0000 UTC m=+603.871976083 I1025 06:22:00.147731 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-ca 
"level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.147765 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.14776244 +0000 UTC m=+603.872125380 I1025 06:22:00.147912 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-selfsigned "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.147942 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.147938966 +0000 UTC m=+603.872301909 I1025 06:22:00.147968 1 controller.go:129] cert-manager/controller/certificaterequests-issuer-vault "level"=0 "msg"="syncing item" "key"="default/example-ingress-2556707613" I1025 06:22:00.148023 1 conditions.go:200] Setting lastTransitionTime for CertificateRequest "example-ingress-2556707613" condition "Ready" to 2019-10-25 06:22:00.148017945 +0000 UTC m=+603.872380906 </code></pre> <p>i deployed cert-manager via the manifest:</p> <p><a href="https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml" rel="noreferrer">https://github.com/jetstack/cert-manager/releases/download/v0.11.0/cert-manager.yaml</a></p> <pre><code>$ kubectl get clusterissuer letsencrypt-prod -o yaml apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"cert-manager.io/v1alpha2","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":{"email":"me@me.com","privateKeySecretRef":{"name":"letsencrypt-prod"},"server":"https://acme-staging-v02.api.letsencrypt.org/directory","solvers":[{"http01":{"ingress":{"class":"nginx"}},"selector":{}}]}}} creationTimestamp: "2019-10-25T06:27:06Z" generation: 1 name: letsencrypt-prod resourceVersion: "1759784" selfLink: 
/apis/cert-manager.io/v1alpha2/clusterissuers/letsencrypt-prod uid: 05831417-b359-42de-8298-60da553575f2 spec: acme: email: me@me.com privateKeySecretRef: name: letsencrypt-prod server: https://acme-staging-v02.api.letsencrypt.org/directory solvers: - http01: ingress: class: nginx selector: {} status: acme: lastRegisteredEmail: me@me.com uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/11410425 conditions: - lastTransitionTime: "2019-10-25T06:27:07Z" message: The ACME account was registered with the ACME server reason: ACMEAccountRegistered status: "True" type: Ready </code></pre> <p>and my ingress is:</p> <pre><code>$ kubectl get ingress example-ingress -o yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: cert-manager.io/issuer: letsencrypt-prod kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/issuer":"letsencrypt-prod","kubernetes.io/ingress.class":"nginx","kubernetes.io/tls-acme":"true"},"name":"example-ingress","namespace":"default"},"spec":{"rules":[{"host":"example-ingress.example.com","http":{"paths":[{"backend":{"serviceName":"apple-service","servicePort":5678},"path":"/apple"},{"backend":{"serviceName":"banana-service","servicePort":5678},"path":"/banana"}]}}],"tls":[{"hosts":["example-ingress.example.com"],"secretName":"example-ingress"}]}} kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" creationTimestamp: "2019-10-25T06:22:00Z" generation: 1 name: example-ingress namespace: default resourceVersion: "1758822" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/example-ingress uid: 921b2e91-9101-4c3c-a0d8-3f871dafdd30 spec: rules: - host: example-ingress.example.com http: paths: - backend: serviceName: apple-service servicePort: 5678 path: /apple - backend: serviceName: banana-service servicePort: 5678 path: /banana tls: - hosts: - example-ingress.example.com secretName: example-ingress status: 
loadBalancer: ingress: - ip: x.y.z.a </code></pre> <p>any idea whats wrong? cheers,</p>
<p>Your ingress annotation is <code>cert-manager.io/issuer</code>, which refers to a namespaced Issuer, but the resource you created is a ClusterIssuer. Could that be the reason? I have a similar setup with an Issuer instead of a ClusterIssuer and it is working.</p>
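<p>For reference, cert-manager distinguishes the two resource kinds via separate annotations, so an Ingress that should use the ClusterIssuer would carry <code>cert-manager.io/cluster-issuer</code>. A minimal sketch reusing the names from the question:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # cluster-scoped issuers use this annotation, not cert-manager.io/issuer
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - example-ingress.example.com
      secretName: example-ingress
  rules:
    - host: example-ingress.example.com
      http:
        paths:
          - path: /apple
            backend:
              serviceName: apple-service
              servicePort: 5678
```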
<p>I am setting up SSL for Postgres 9.6 connections. I could not mount the SSL private key and cert in a Kubernetes secret with appropriate permissions. I believe that without any explicit user id set on the Kubernetes container, the mounted secret should be owned by root. I have set <code>416</code> decimal for octal <code>0640</code>, which is what Postgres recommends when the files are owned by root. </p> <p>Any help is appreciated.</p> <p>Error:</p> <pre><code>FATAL: could not load private key file "/var/lib/postgresql/certs/server.key": Permission denied
</code></pre> <p>Helm statefulset config:</p> <pre><code>volumes:
  - name: {{ .Values.certs_secret.volume_name }}
    secret:
      secretName: {{ .Values.certs_secret.secret_name }}
      items:
        - key: server.key
          path: server.key
          mode: 416
        - key: server.crt
          path: server.crt
          mode: 511
containers:
  - name: {{ .Chart.Name }}
    args:
      - -c
      - ssl=on
      - -c
      - ssl_cert_file={{ .Values.certs_secret.cert_path }}
      - -c
      - ssl_key_file={{ .Values.certs_secret.private_key_path }}
    volumeMounts:
      - name: {{ .Values.certs_secret.volume_name }}
        mountPath: {{ .Values.certs_secret.mount_path }}
</code></pre> <p><strong>Updated</strong></p> <p>I have exec'd in without turning SSL on and found the secret files are mounted as symlinks. Could this be a problem? The cluster is in AKS.</p> <pre><code>root@postgres-timescale-db-0:/var/lib/postgresql/certs# find . -ls
2 0 drwxrwxrwt 3 root root  120 Oct 29 16:40 .
8 0 lrwxrwxrwx 1 root root   31 Oct 29 16:40 ./..data -&gt; ..2019_10_29_16_40_00.233198123
7 0 lrwxrwxrwx 1 root root   17 Oct 29 16:40 ./server.crt -&gt; ..data/server.crt
6 0 lrwxrwxrwx 1 root root   17 Oct 29 16:40 ./server.key -&gt; ..data/server.key
3 0 drwxr-xr-x 2 root root   80 Oct 29 16:40 ./..2019_10_29_16_40_00.233198123
5 8 -rwxrwxrwx 1 root root 4450 Oct 29 16:40 ./..2019_10_29_16_40_00.233198123/server.crt
4 4 -rw-r----- 1 root root 1679 Oct 29 16:40 ./..2019_10_29_16_40_00.233198123/server.key
</code></pre>
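<p>As a quick sanity check of the decimal/octal conversion used in the question (the secret <code>mode</code> field is plain decimal in JSON manifests, so octal <code>0640</code> becomes <code>416</code> and <code>0777</code> becomes <code>511</code>):</p>

```shell
# Octal file modes and their decimal equivalents, as used in secret items.
oct_to_dec() { printf '%d\n' "0$1"; }   # leading 0 makes printf parse octal
dec_to_oct() { printf '%o\n' "$1"; }

oct_to_dec 640   # prints 416
dec_to_oct 416   # prints 640
```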
<p>As what user does postgres run - root or something else? Some Docker images run <code>postgres</code> with a <code>uid</code> of 999...</p> <p>Without the complete deployment configuration, my suggestion is that, once you know the user, you take a look at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="nofollow noreferrer">this doc</a> for how to configure a <code>securityContext</code> that sets the ownership of directories and files from mounted volumes.</p>
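<p>A hedged sketch of what that could look like for the common case of a <code>postgres</code> user with uid/gid 999 (the ids and the secret name are assumptions - check your image):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-demo
spec:
  securityContext:
    runAsUser: 999   # assumed postgres uid; verify in your image
    fsGroup: 999     # mounted secret files become group-owned by gid 999
  containers:
    - name: postgres
      image: postgres:9.6
      volumeMounts:
        - name: certs
          mountPath: /var/lib/postgresql/certs
  volumes:
    - name: certs
      secret:
        secretName: postgres-certs   # hypothetical secret name
        defaultMode: 416             # decimal for octal 0640, readable by the fsGroup
```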
<p>Trying to set up an EKS cluster. An error occurred (AccessDeniedException) when calling the <code>DescribeCluster</code> operation: <strong>Account xxx is not authorized</strong> to use this service. This error came from the CLI; on the console I was able to create the cluster and everything successfully. I am logged in as the <code>root user</code> (it's just my personal account).</p> <p>It says <em>Account</em>, so it sounds like it's not a user/permissions issue? Do I have to enable my account for this service? I don't see any such option.</p> <p>Also, if I log in as a user (rather than root) - will I be able to see everything that was earlier created as root? I have now created a user and assigned admin and eks* permissions. <strong>I checked this</strong> - <em>when I sign in as the user, I can see everything.</em></p> <p>The aws cli was set up with root credentials (<em>I think</em>) - so do I have to go back and undo/fix all this and just use this user?</p> <p><strong>Update 1</strong><br> I redid/restarted everything, including the user and <code>aws configure</code> - just to make sure. But still the issue did not get resolved.</p> <p>There is an option to <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html" rel="nofollow noreferrer">create the file manually</a> - that finally worked.</p> <p>And I was able to run <code>kubectl get svc</code>:</p> <pre><code>NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   &lt;none&gt;        443/TCP   48m
</code></pre> <p><strong>KUBECONFIG:</strong> I had set up the env:KUBECONFIG</p> <pre><code>$env:KUBECONFIG="C:\Users\sbaha\.kube\config-EKS-nginixClstr"
$Env:KUBECONFIG
C:\Users\sbaha\.kube\config-EKS-nginixClstr

kubectl config get-contexts
CURRENT   NAME   CLUSTER      AUTHINFO   NAMESPACE
*         aws    kubernetes   aws

kubectl config current-context
aws
</code></pre> <p>My understanding is I should see both the <code>aws</code> and my <code>EKS-nginixClstr</code> contexts, but I only see <code>aws</code> - is this (also) an issue?</p> <p>Next step is to create and add worker nodes. I updated the node ARN correctly in the .yaml file: <code>kubectl apply -f ~\.kube\aws-auth-cm.yaml</code><br> <code>configmap/aws-auth configured</code> <em>So this perhaps worked.</em></p> <p>But next it fails:</p> <blockquote> <p><code>kubectl get nodes</code> <strong>No resources found in default namespace.</strong> </p> </blockquote> <p>On the AWS console the node group shows <strong>Create Completed</strong>. Also on the CLI <code>kubectl get nodes --watch</code> - <em>it does not even return</em>.</p> <p>So this has to be debugged next - <em>(it never ends)</em></p> <p><strong>aws-auth-cm.yaml</strong></p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::arn:aws:iam::xxxxx:role/Nginix-NodeGrpClstr-NodeInstanceRole-1SK61JHT0JE4
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
</code></pre>
<p>This problem was related to not having the correct version of eksctl - it must be at least 0.7.0. The documentation states this and I knew it, but initially, whatever I did, I could not get beyond 0.6.0. The way to get the right version is to configure your AWS CLI to a region that supports EKS. Once you get 0.7.0 this issue is resolved.<br> Overall, to make EKS work you must use the same user on both the console and the CLI, work in a region that supports EKS, and have eksctl version 0.7.0 or later.</p>
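<p>A quick way to guard against this class of problem is to compare the installed version against the 0.7.0 minimum before proceeding; a sketch using <code>sort -V</code> (version-aware sort), with the version string you would normally get from <code>eksctl version</code>:</p>

```shell
# Succeeds when the given version is >= the required minimum.
REQUIRED="0.7.0"
version_ok() {
  [ "$(printf '%s\n%s\n' "$REQUIRED" "$1" | sort -V | head -n1)" = "$REQUIRED" ]
}

version_ok "0.6.0" && echo "ok" || echo "too old"   # prints "too old"
version_ok "0.7.0" && echo "ok" || echo "too old"   # prints "ok"
```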
<p>I have deployed kibana in a kubernetes environment. If I give that a LoadBalancer type Service, I could access it fine. However, when I try to access the same via a nginx-ingress it fails. The configuration that I use in my nginx ingress is:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: my-ingress
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: kibana
              servicePort: {{ .Values.kibanaPort }}
            path: /kibana
</code></pre> <p>I have launched my kibana with the following setting:</p> <pre><code>- name: SERVER_BASEPATH
  value: /kibana
</code></pre> <p>and I am able to access the kibana fine via the <code>LoadBalancer</code> IP. However when I try to access via the Ingress, most of the calls go through fine except for a GET call to <code>vendors.bundle.js</code> where it fails almost consistently.</p> <p>The log messages in the ingress during this call is as follows:</p> <pre><code>2019/10/25 07:31:48 [error] 430#430: *21284 upstream prematurely closed connection while sending to client, client: 10.142.0.84, server: _, request: "GET /kibana/bundles/vendors.bundle.js HTTP/2.0", upstream: "http://10.20.3.5:3000/kibana/bundles/vendors.bundle.js", host: "1.2.3.4", referrer: "https://1.2.3.4/kibana/app/kibana"
10.142.0.84 - [10.142.0.84] - - [25/Oct/2019:07:31:48 +0000] "GET /kibana/bundles/vendors.bundle.js HTTP/2.0" 200 1854133 "https://1.2.3.4/kibana/app/kibana" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36" 47 13.512 [some-service] 10.20.3.5:3000 7607326 13.513 200 506f778b25471822e62fbda2e57ccd6b
</code></pre> <p>I am not sure why I get the <code>upstream prematurely closed connection while sending to client</code> across different browsers. I have tried setting <code>proxy-connect-timeout</code> and <code>proxy-read-timeout</code> to 100 seconds and even then it fails.
I am not sure if this is due to some kind of default size or chunk limit.</p> <p>Also, it is interesting to note that only some kibana calls are failing, not all of them.</p> <p>In the browser, I see the error message:</p> <pre><code>GET https://&lt;ip&gt;/kibana/bundles/vendors.bundle.js net::ERR_SPDY_PROTOCOL_ERROR 200
</code></pre> <p>in the developer console.</p> <p>Does anyone have an idea what config options I need to pass to my nginx-ingress to make the kibana proxy_pass work?</p>
<p>I have found the cause of the error. The <code>vendors.bundle.js</code> file was relatively big and, since I was accessing it over a relatively slow network, the requests were getting terminated. The way I fixed this is by adding the following annotations to the nginx-ingress configuration:</p> <pre><code>nginx.ingress.kubernetes.io/proxy-body-size: 10m        # change this as you need
nginx.ingress.kubernetes.io/proxy-connect-timeout: "100"
nginx.ingress.kubernetes.io/proxy-send-timeout: "100"
nginx.ingress.kubernetes.io/proxy-read-timeout: "100"
nginx.ingress.kubernetes.io/proxy-buffering: "on"
</code></pre>
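<p>On the Ingress from the question, these would sit under <code>metadata.annotations</code>; a sketch (the service port is an assumption, since the question templates that value):</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "100"
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
spec:
  rules:
    - http:
        paths:
          - path: /kibana
            backend:
              serviceName: kibana
              servicePort: 5601   # assumed port; use your chart's kibanaPort value
```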
<p>Due to some internal issues, we need to remove unused images as soon as they become unused.<br> I know it's possible to use <a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/" rel="nofollow noreferrer">garbage collection</a>, but it doesn't offer as strict a policy as we need. I've come across <a href="https://hub.docker.com/r/meltwater/docker-cleanup/" rel="nofollow noreferrer">this</a> solution, but</p> <ol> <li>it's deprecated</li> <li>it also removes containers and possibly mounted volumes</li> </ol> <p>I was thinking about setting up a <code>cron</code> job directly on the nodes to run <code>docker system prune</code>, but I hope there is a better way.</p> <p>No idea if it makes a difference, but we are using AKS.</p>
<p>This doesn't really accomplish much since things will be re-downloaded if they are requested again. But if you insist on a silly thing, best bet is a DaemonSet that runs with the host docker control socket hostPath-mounted in and runs <code>docker system prune</code> as you mentioned. You can't use a cron job so you need to write the loop yourself, probably just <code>bash -c 'while true; do docker system prune &amp;&amp; sleep 3600; done'</code> or something.</p>
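<p>A rough sketch of such a DaemonSet (untested; assumes the nodes run the Docker runtime and that mounting the host's Docker socket is acceptable in your environment - the image tag is an assumption):</p>

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-pruner
spec:
  selector:
    matchLabels:
      app: image-pruner
  template:
    metadata:
      labels:
        app: image-pruner
    spec:
      containers:
        - name: pruner
          image: docker:19.03   # any image that ships the docker CLI
          command: ["sh", "-c", "while true; do docker system prune -af; sleep 3600; done"]
          volumeMounts:
            - name: docker-sock
              mountPath: /var/run/docker.sock
      volumes:
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
```

<p>The <code>-af</code> flags make the prune non-interactive and include unused images, matching the "remove as soon as unused" goal; anything still referenced by a running container is kept.</p>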
<p>We are deploying Spring Cloud Data Flow v2.2.1.RELEASE in Kubernetes. Everything, or almost everything, seems to work, but scheduling does not. In fact, even when running tasks by manual launch using the UI (or API) we see an error log. That same log is generated when trying to schedule, but in that case it makes the schedule creation fail. Here is a stack trace extract:</p> <pre><code>java.lang.IllegalArgumentException: taskDefinitionName must not be null or empty
    at org.springframework.util.Assert.hasText(Assert.java:284)
    at org.springframework.cloud.dataflow.rest.resource.ScheduleInfoResource.&lt;init&gt;(ScheduleInfoResource.java:58)
    at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController$Assembler.instantiateResource(TaskSchedulerController.java:174)
    at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController$Assembler.instantiateResource(TaskSchedulerController.java:160)
    at org.springframework.hateoas.mvc.ResourceAssemblerSupport.createResourceWithId(ResourceAssemblerSupport.java:89)
    at org.springframework.hateoas.mvc.ResourceAssemblerSupport.createResourceWithId(ResourceAssemblerSupport.java:81)
    at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController$Assembler.toResource(TaskSchedulerController.java:168)
    at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController$Assembler.toResource(TaskSchedulerController.java:160)
    at org.springframework.data.web.PagedResourcesAssembler.createResource(PagedResourcesAssembler.java:208)
    at org.springframework.data.web.PagedResourcesAssembler.toResource(PagedResourcesAssembler.java:120)
    at org.springframework.cloud.dataflow.server.controller.TaskSchedulerController.list(TaskSchedulerController.java:85)
    at sun.reflect.GeneratedMethodAccessor180.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
</code></pre> <p>...,</p> <p>We've looked at the table content; the task does have a name.</p> <p>Any idea?</p>
<p>I've finally found the source of the error by live-debugging Data Flow. The problem arises when CronJobs that were not created by Data Flow are present in the namespace, which is, in my evaluation, a problem. The scheduler launches a process that loops over the Kubernetes CronJob resources and tries to process all of them.</p> <p>Data Flow should certainly select only the elements that concern it using labels, like all Kubernetes-native tools do. Any process could create CronJobs.</p> <p>So, Pivotal / Data Flow people, it would probably be a good idea to enhance that part and prevent this kind of "invisible" problem. I say invisible because the only error we get is the validation of the Schedule item, complaining that the name is empty - and that is because the CronJob was not in any way linked to an SCDF task.</p> <p>Hope that can help someone in the future.</p> <p>Bug reported: <a href="https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/issues/347" rel="nofollow noreferrer">https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/issues/347</a></p> <p>PR issued: <a href="https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/pull/348" rel="nofollow noreferrer">https://github.com/spring-cloud/spring-cloud-deployer-kubernetes/pull/348</a></p>
<p>I am testing role differences right now, so I have a context set up for each role.</p> <p>In terminal session <em>Admin</em>, I want to use context <code>Admin</code> so I can update the rules as needed.</p> <p>In terminal session <em>User</em>, I want to test that role via its context.</p> <p>How can I make each terminal session use a different context?</p> <p>(Note: I am on EKS, so roles map to IAM roles)</p>
<p>Well, I am an idiot.</p> <p>Natively, there is no answer in the --help output for <code>kubectl</code>; however, there is output for this in the man page.</p> <p>All one has to do is throw the <code>--context</code> flag into their command.</p> <p>However, the below-mentioned <code>kubectx</code> tool is what I use day to day now.</p>
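<p>For example, assuming contexts named <code>admin</code> and <code>user</code> exist in your kubeconfig (the names are hypothetical), there are two session-local approaches: pass <code>--context</code> per command, or give each session its own <code>KUBECONFIG</code> so that <code>kubectl config use-context</code> in one session never affects the other. Environment variables are per-process, so the isolation holds even without kubectl installed:</p>

```shell
# One-off override, no session state needed:
#   kubectl --context admin get pods
#   kubectl --context user  get pods

# Session-local kubeconfig: each subshell below stands in for a terminal
# session with its own KUBECONFIG, invisible to the other.
( export KUBECONFIG=~/.kube/config-admin; echo "admin session: $KUBECONFIG" )
( export KUBECONFIG=~/.kube/config-user;  echo "user session:  $KUBECONFIG" )
```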
<p>I was trying to showcase binary authorization to my client as a POC. During the deployment, it fails with the following error message:</p> <blockquote> <p>pods "hello-app-6589454ddd-wlkbg" is forbidden: image policy webhook backend denied one or more images: Denied by cluster admission rule for us-central1.staging-cluster. Denied by Attestor. Image gcr.io//hello-app:e1479a4 denied by projects//attestors/vulnz-attestor: Attestor cannot attest to an image deployed by tag</p> </blockquote> <p>I have followed all the steps mentioned on the site.</p> <p>I have verified the image repeatedly on a few occasions, for example using the below command to forcefully make the attestation:</p> <pre><code>gcloud alpha container binauthz attestations sign-and-create \
  --project "projectxyz" \
  --artifact-url "gcr.io/projectxyz/hello-app@sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699" \
  --attestor "vulnz-attestor" \
  --attestor-project "projectxyz" \
  --keyversion "1" \
  --keyversion-key "vulnz-signer" \
  --keyversion-location "us-central1" \
  --keyversion-keyring "binauthz" \
  --keyversion-project "projectxyz"
</code></pre> <p>It throws this error:</p> <blockquote> <p>ERROR: (gcloud.alpha.container.binauthz.attestations.sign-and-create) Resource in project [project xyz] is the subject of a conflict: occurrence ID "c5f03cc3-3829-44cc-ae38-2b2b3967ba61" already exists in project "projectxyz"</p> </blockquote> <p>So when I verify, I find the attestation present:</p> <pre><code>$ gcloud beta container binauthz attestations list \
    --artifact-url "gcr.io/projectxyz/hello-app@sha256:82f1887cf5e1ff80ee67f4a820703130b7d533f43fe4b7a2b6b32ec430ddd699" \
    --attestor "vulnz-attestor" --attestor-project "projectxyz" \
    --format json | jq '.[0].kind' | grep 'ATTESTATION'
"ATTESTATION"
</code></pre> <p>Here are the screen shots:</p> <p><img
src="https://user-images.githubusercontent.com/27581174/67691988-c3b44b80-f99f-11e9-80c9-0a3fce511500.PNG" alt="container"></p> <p><img src="https://user-images.githubusercontent.com/27581174/67691989-c44ce200-f99f-11e9-9917-880d665cbe83.PNG" alt="cloud build"></p> <p>Any feedback please?</p> <p>Thanks in advance.</p>
<p>Thank you for trying Binary Authorization. I just updated the <a href="https://cloud.google.com/solutions/binary-auth-with-cloud-build-and-gke" rel="nofollow noreferrer">Binary Authorization Solution</a>, which you might find helpful.</p> <p>A few things I noticed along the way:</p> <blockquote> <p>... denied by projects//attestors/vulnz-attestor:</p> </blockquote> <p>There should be a project ID in between <code>projects</code> and <code>attestors</code>, like:</p> <pre><code>projects/my-project/attestors/vulnz-attestor </code></pre> <p>Similarly, your gcr.io links should include that same project ID, for example:</p> <blockquote> <p>gcr.io//hello-app:e1479a4</p> </blockquote> <p>should be</p> <pre><code>gcr.io/my-project/hello-app:e1479a4 </code></pre> <p>If you followed a tutorial, it likely asked you to set a variable like <code>$PROJECT_ID</code>, but you may have accidentally unset it or ran the command in a different terminal session.</p>
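<p>Since an empty variable silently produces exactly the <code>projects//attestors/...</code> strings seen in the error, one cheap defence is to fail fast before interpolating it. A hypothetical guard (the helper name is mine, not from any tutorial):</p>

```shell
# Abort with a clear message if PROJECT_ID is unset or empty, then build
# the attestor reference that would otherwise come out as "projects//...".
build_attestor_ref() {
  : "${PROJECT_ID:?PROJECT_ID must be set}"
  printf 'projects/%s/attestors/vulnz-attestor\n' "$PROJECT_ID"
}

PROJECT_ID="projectxyz" build_attestor_ref   # prints projects/projectxyz/attestors/vulnz-attestor
```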
<p>I am trying to install <a href="https://github.com/confluentinc/cp-helm-charts" rel="nofollow noreferrer">cp-helm-charts</a>.</p> <p>I want to be able to access the topology from outside.</p> <p>So I did:</p> <pre><code>helm install --set external.enabled=true confluentinc/cp-helm-charts
</code></pre> <p>But a <code>kubectl get services</code> still tells me:</p> <pre><code>wishful-newt-cp-kafka                ClusterIP   10.106.112.201   &lt;none&gt;   9092/TCP            115s
wishful-newt-cp-kafka-connect        ClusterIP   10.104.46.32     &lt;none&gt;   8083/TCP            115s
wishful-newt-cp-kafka-headless       ClusterIP   None             &lt;none&gt;   9092/TCP            115s
wishful-newt-cp-kafka-rest           ClusterIP   10.105.4.206     &lt;none&gt;   8082/TCP            115s
wishful-newt-cp-ksql-server          ClusterIP   10.104.90.228    &lt;none&gt;   8088/TCP            115s
wishful-newt-cp-schema-registry      ClusterIP   10.103.12.45     &lt;none&gt;   8081/TCP            115s
wishful-newt-cp-zookeeper            ClusterIP   10.101.18.171    &lt;none&gt;   2181/TCP            115s
wishful-newt-cp-zookeeper-headless   ClusterIP   None             &lt;none&gt;   2888/TCP,3888/TCP   115s
</code></pre> <p>Any ideas what I might be missing?</p>
<p>The <code>external.enabled</code> value is specific to some sub-charts. When specifying values from the parent chart, you need to prefix them with the name of the sub-chart whose config you are changing. For example:</p> <p>Setting external access for <a href="https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-ksql-server#external-access" rel="nofollow noreferrer">KSQL</a>:</p> <p><code>helm install --set=cp-ksql-server.external.enabled=true confluentinc/cp-helm-charts</code></p> <p>Setting external access for <a href="https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka-rest#external-access" rel="nofollow noreferrer">Kafka Rest</a>:</p> <p><code>helm install --set=cp-kafka-rest.external.enabled=true confluentinc/cp-helm-charts</code></p> <hr> <p>If your intention is to set external access to <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/README.md#external-access" rel="nofollow noreferrer">Kafka</a> itself, you should use:</p> <p><code>helm install --set=cp-kafka.nodeport.enabled=true confluentinc/cp-helm-charts</code></p>
<p>I have created a new disk in Google Compute Engine.</p> <pre><code>gcloud compute disks create --size=10GB --zone=us-central1-a dane-disk
</code></pre> <p>It says I need to format it, but I have no idea how I could mount/access the disk.</p> <pre><code>gcloud compute disks list
NAME            LOCATION       LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
notowania-disk  us-central1-a  zone            10       pd-standard  READY
</code></pre> <blockquote> <p>New disks are unformatted. You must format and mount a disk before it can be used. You can find instructions on how to do this at:</p> <p><a href="https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting" rel="noreferrer">https://cloud.google.com/compute/docs/disks/add-persistent-disk#formatting</a></p> </blockquote> <p>I tried the instructions above, but <code>lsblk</code> is not showing the disk at all.</p> <p>Do I need to create a VM and somehow attach the disk to it in order to use it? My goal was to mount the disk as a <a href="https://kubernetes.io/docs/concepts/storage/#gcepersistentdisk" rel="noreferrer">persistent GKE volume</a> independent of the VM (the last GKE upgrade caused recreation of the VM and data loss).</p>
<p>Thanks for the clarification of what you are trying to do in the comments.</p> <p>I have 2 different answers here.</p> <hr> <p>The first is that my testing shows that the <a href="https://kubernetes.io/docs/concepts/storage/#gcepersistentdisk" rel="nofollow noreferrer">Kubernetes GCE PD</a> documentation is exactly right, and the warning about formatting seems like it can be safely ignored.</p> <p>If you just issue:</p> <pre><code>gcloud compute disks create --size=10GB --zone=us-central1-a my-test-data-disk
</code></pre> <p>And then use it in a pod:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: nginx
      name: nginx-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      # This GCE PD must already exist.
      gcePersistentDisk:
        pdName: my-test-data-disk
        fsType: ext4
</code></pre> <p>It will be formatted when it is mounted. This is likely because the <code>fsType</code> parameter <a href="https://kubernetes.io/docs/concepts/storage/volumes/#csi" rel="nofollow noreferrer">instructs the system how to format the disk</a>. You don't need to do anything with a separate GCE instance. The disk is retained even if you delete the pod or even the entire cluster. It is not reformatted on uses after the first and the data is kept around.</p> <p>So, the warning message from gcloud is confusing, but can be safely ignored in this case.</p> <hr> <p>Now, in order to <em>dynamically</em> create a persistent volume based on GCE PD that isn't automatically deleted, you will need to create a new <code>StorageClass</code> that sets the Reclaim Policy to <code>Retain</code>, and then create a <code>PersistentVolumeClaim</code> based on that <code>StorageClass</code>. This also keeps basically the entire operation inside of Kubernetes, without needing to do anything with gcloud.
Likewise, a similar approach is what you would want to use with a <code>StatefulSet</code> as opposed to a single pod, as described <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps" rel="nofollow noreferrer">here</a>.</p> <p>Most of what you are looking to do is described in <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#dynamic_provisioning" rel="nofollow noreferrer">this GKE documentation about dynamically allocating PVCs</a> as well as the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#the-storageclass-resource" rel="nofollow noreferrer">Kubernetes StorageClass documentation</a>. Here's an example:</p> <p>gce-pd-retain-storageclass.yaml:</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-retained
reclaimPolicy: Retain
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
</code></pre> <p>The above storage class is basically the same as the 'standard' GKE storage class, except with the <code>reclaimPolicy</code> set to Retain.</p> <p>pvc-demo.yaml:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gce-pd-retained
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>Applying the above will dynamically create a disk that will be retained when you delete the claim.</p> <p>And finally a demo-pod.yaml that mounts the PVC as a volume (this is really a nonsense example using nginx, but it demonstrates the syntax):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: nginx
      name: nginx-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: pvc-demo-disk
</code></pre> <p>Now, if you apply these three in order, you'll get a container running using the PersistentVolumeClaim which has automatically created (and formatted) a
disk for you. When you delete the pod, the claim keeps the disk around. If you delete the <em>claim</em> the StorageClass still keeps the disk from being deleted.</p> <p>Note that the PV that is left around after this won't be automatically reused, as the data is still on the disk. See <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming" rel="nofollow noreferrer">the Kubernetes documentation</a> about what you can do to reclaim it in this case. Really, this mostly says that you shouldn't delete the PVC unless you're ready to do work to move the data off the old volume.</p> <p>Note that these disks will even continue to exist when the entire GKE cluster is deleted as well (and you will continue to be billed for them until you delete them).</p>
<p>I'm new to k8s and prometheus. I'm trying to collect the metrics of each pod with prometheus, but I am unable to do so because of this error: <a href="https://i.stack.imgur.com/QZajf.png" rel="nofollow noreferrer">API ERROR</a>.</p> <pre><code>{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
</code></pre>
<p><code>system:anonymous</code> means that an unauthenticated user is trying to get a resource from your cluster, which is forbidden. You will need to create a service account, give that service account permissions through RBAC, and then have Prometheus fetch the metrics using that service account. All of that is documented.</p> <p>As a workaround, you can do this:</p> <pre><code>kubectl create clusterrolebinding prometheus-admin --clusterrole cluster-admin --user system:anonymous
</code></pre> <p>Now, note that this is a <strong>terrible</strong> idea unless you are just playing with kubernetes. With this permission you are giving any unauthenticated user total permissions on your cluster.</p>
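<p>A hedged sketch of the proper RBAC objects (all names and the namespace are hypothetical; note that <code>/metrics</code> is a non-resource URL, so the rule uses <code>nonResourceURLs</code> rather than <code>resources</code>):</p>

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-metrics-reader
rules:
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-metrics-reader
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: monitoring
```

<p>Prometheus would then run with <code>serviceAccountName: prometheus</code> and present that account's token when scraping, instead of hitting the API server anonymously.</p>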
<p>I am trying to pull an image from an ACR using a secret and I can't do it.</p> <p>I created the resources using azure cli commands:</p> <pre><code>az login
az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService
az group create --name aksGroup --location westeurope
az aks create --resource-group aksGroup --name aksCluster --node-count 1 --generate-ssh-keys -k 1.9.2
az aks get-credentials --resource-group aksGroup --name aksCluster
az acr create --resource-group aksGroup --name aksClusterRegistry --sku Basic --admin-enabled true
</code></pre> <p>After that I logged in and pushed an image successfully to the created ACR from my local machine.</p> <pre><code>docker login aksclusterregistry.azurecr.io
docker tag jetty aksclusterregistry.azurecr.io/jetty
docker push aksclusterregistry.azurecr.io/jetty
</code></pre> <p>The next step was creating a secret:</p> <pre><code>kubectl create secret docker-registry secret --docker-server=aksclusterregistry.azurecr.io --docker-username=aksClusterRegistry --docker-password=&lt;Password from tab ACR/Access Keys&gt; --docker-email=some@email.com
</code></pre> <p>And eventually I tried to create a pod with the image from the ACR:</p> <pre><code>#pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: jetty
spec:
  containers:
    - name: jetty
      image: aksclusterregistry.azurecr.io/jetty
  imagePullSecrets:
    - name: secret
</code></pre> <pre><code>kubectl create -f pod.yml
</code></pre> <p>As a result I have a pod with status ImagePullBackOff:</p> <pre><code>&gt; kubectl get pods
NAME    READY   STATUS             RESTARTS   AGE
jetty   0/1     ImagePullBackOff   0          1m

&gt; kubectl describe pod jetty
Events:
  Type     Reason                 Age              From                               Message
  ----     ------                 ----             ----                               -------
  Normal   Scheduled              2m               default-scheduler                  Successfully assigned jetty to aks-nodepool1-62963605-0
  Normal   SuccessfulMountVolume  2m               kubelet, aks-nodepool1-62963605-0  MountVolume.SetUp succeeded for volume "default-token-w8png"
  Normal   Pulling                2m (x2 over 2m)  kubelet, aks-nodepool1-62963605-0  pulling image "aksclusterregistry.azurecr.io/jetty"
  Warning  Failed                 2m (x2 over 2m)  kubelet, aks-nodepool1-62963605-0  Failed to pull image "aksclusterregistry.azurecr.io/jetty": rpc error: code = Unknown desc = Error response from daemon: Get https://aksclusterregistry.azurecr.io/v2/jetty/manifests/latest: unauthorized: authentication required
  Warning  Failed                 2m (x2 over 2m)  kubelet, aks-nodepool1-62963605-0  Error: ErrImagePull
  Normal   BackOff                2m (x5 over 2m)  kubelet, aks-nodepool1-62963605-0  Back-off pulling image "aksclusterregistry.azurecr.io/jetty"
  Normal   SandboxChanged         2m (x7 over 2m)  kubelet, aks-nodepool1-62963605-0  Pod sandbox changed, it will be killed and re-created.
  Warning  Failed                 2m (x6 over 2m)  kubelet, aks-nodepool1-62963605-0  Error: ImagePullBackOff
</code></pre> <p>What's wrong? Why does the approach with the secret not work? Please don't advise me to use the service principal approach, because I would like to understand why this approach doesn't work. I think it should work.</p>
<p>The "old" way with AKS was to do <code>create secret</code> as you mentioned. That is no longer recommended.</p> <p>The "new" way is to attach the container registry. <a href="https://thorsten-hans.com/aks-and-acr-integration-revisited" rel="noreferrer">This</a> article explains the "new" way to attach ACR, and also provides a link to the old way to clear up confusion. When you create your cluster, attach with:</p> <pre><code>az aks create -n myAKSCluster -g myResourceGroup --attach-acr $MYACR </code></pre> <p>Or if you've already created your cluster, update it with:</p> <pre><code>az aks update -n myAKSCluster -g myResourceGroup --attach-acr $MYACR </code></pre> <p>Notes:</p> <ul> <li><p><code>$MYACR</code> is just the name of your registry without the <code>.azurecr.io</code>. Ex: <code>MYACR=foobar</code> not <code>MYACR=foobar.azurecr.io</code>.</p></li> <li><p>After you attach your ACR, it will take a few minutes for the <code>ImagePullBackOff</code> to transition to <code>Running</code>.</p></li> </ul>
<p>Every time I try to deploy something from my Docker registry to Kubernetes, I get errors like this: </p> <blockquote> <p>x509: cannot validate certificate for 10.2.10.7 because it doesn't contain any IP SANs</p> </blockquote> <p>Question: how can I disable <code>ssl</code> verification when pulling images from the Docker registry into Kubernetes?</p>
<p>Assuming relaxed security is OK for your environment, a way to accomplish <em>in Kubernetes</em> what you want is to configure Docker to connect to the private registry as an insecure registry.</p> <p>Per the <a href="https://docs.docker.com/registry/insecure/" rel="nofollow noreferrer">doc here</a>:</p> <blockquote> <p>With insecure registries enabled, Docker goes through the following steps:</p> <ul> <li>First, try using HTTPS. If HTTPS is available but the certificate is invalid, ignore the error about the certificate.</li> <li>If HTTPS is not available, fall back to HTTP.</li> </ul> </blockquote> <p>Notice that the change to <code>/etc/docker/daemon.json</code> described in that doc - adding "insecure-registries" configuration - has to be applied to all nodes in the Kubernetes cluster on which pods/containers can be scheduled to run. Plus, Docker has to be restarted for the change to take effect.</p> <p>It is also to note that the above assumes the cluster uses the Docker container runtime and not some other runtime (e.g. CRI-O) that supports the Docker image format and registry.</p>
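<p>As a sketch, with the registry IP taken from the error message in the question (port 5000 is an assumption; use whatever port your registry actually listens on), <code>/etc/docker/daemon.json</code> on each node would contain:</p>

```json
{
  "insecure-registries": ["10.2.10.7:5000"]
}
```

<p>followed by a Docker restart on that node, e.g. <code>sudo systemctl restart docker</code>.</p>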
<p>I want to output the Kubernetes log to a file, but I could only output it as JSON data. I want to output only the "message" part to the file.</p> <p>How do I select "message" for printing? Which filter should I choose?</p> <pre><code>&lt;match output_tag&gt;
  @type rewrite_tag_filter
  &lt;rule&gt;
    key $['kubernetes']['labels']['app']
    pattern ^(.+)$
    tag app.$1
  &lt;/rule&gt;
&lt;/match&gt;

&lt;match app.tom1&gt;
  @type file
  path /logs/tom1
&lt;/match&gt;

Execute result: ---&gt;
2019-10-30T00:46:05+09:00 app.tom1 {
  ..
  "message": "2019-10-29 15:46:05,253 DEBUG [org.springframework.web.servlet.DispatcherServlet] Successfully completed request",
  "kubernetes": {
    "labels": {
      "app": "tom1",
  ..
}

Desired result: ---&gt;
2019-10-29 15:46:05,253 DEBUG [org.springframework.web.servlet.DispatcherServlet] Successfully completed request
</code></pre> <p>Thanks!</p>
<p>Try the <code>single_value</code> formatter plugin inside a <code>&lt;format&gt;</code> section: <a href="https://docs.fluentd.org/formatter/single_value" rel="nofollow noreferrer">https://docs.fluentd.org/formatter/single_value</a></p> <pre><code>&lt;match app.tom1&gt;
  @type file
  path /logs/tom1
  &lt;format&gt;
    @type single_value
    message_key message
  &lt;/format&gt;
&lt;/match&gt;
</code></pre>
<p>I need to get a list of all pods that were not created by a controller, so I can decide how to handle them before draining a node. </p> <p>Otherwise I get this message while running the drain: </p> <pre><code>error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override)
</code></pre> <p>I can find the information by running <code>kubectl describe &lt;pod&gt;</code> and checking whether <code>Controlled By:</code> is missing, but I want to search all pods on the node programmatically, and <code>kubectl describe</code> is not designed for that, so I need an alternative method.</p>
<p>You can rely on the <code>ownerReferences</code> API field to find this:</p> <blockquote> <p>$ kubectl explain pod.metadata.ownerReferences</p> <p>KIND: Pod</p> <p>VERSION: v1</p> <p>RESOURCE: ownerReferences &lt;[]Object&gt;</p> <p>DESCRIPTION: List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.</p> </blockquote> <p>Bare pods (i.e., pods without controllers/owners) will not contain the <code>ownerReferences</code> field, so you can use the <code>--custom-columns</code> option to find out which pods are controlled or not:</p> <pre><code>$ kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,CONTROLLER:.metadata.ownerReferences[].kind,NAMESPACE:.metadata.namespace
NAME                               CONTROLLER   NAMESPACE
nginx-85ff79dd56-tvpts             ReplicaSet   default
static-pod1                        &lt;none&gt;       default
static-pod2                        &lt;none&gt;       default
coredns-5644d7b6d9-6hg82           ReplicaSet   kube-system
coredns-5644d7b6d9-wtph7           ReplicaSet   kube-system
etcd-minikube                      &lt;none&gt;       kube-system
kube-addon-manager-minikube        &lt;none&gt;       kube-system
kube-apiserver-minikube            &lt;none&gt;       kube-system
kube-controller-manager-minikube   &lt;none&gt;       kube-system
kube-proxy-fff5c                   DaemonSet    kube-system
kube-scheduler-minikube            &lt;none&gt;       kube-system
storage-provisioner                &lt;none&gt;       kube-system
tiller-deploy-55c9c4b4df-hgzwm     ReplicaSet   kube-system
</code></pre> <hr> <p>If you want only the names of the pods that are not owned by a controller, you can process the output of <code>kubectl get -o json</code> with <a href="https://github.com/stedolan/jq" rel="noreferrer">jq</a> (very useful for post script processing):</p> <pre><code>$ kubectl get pods --all-namespaces -o json | jq -r '.items | map(select(.metadata.ownerReferences == null ) | .metadata.name) | .[]'
static-pod1
static-pod2
etcd-minikube
kube-addon-manager-minikube
kube-apiserver-minikube
kube-controller-manager-minikube
kube-scheduler-minikube
storage-provisioner
</code></pre>
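<p>Since the goal in the question is deciding what to do before draining one node, the same filter can be narrowed to pods scheduled on that node. Below is a self-contained sketch (the node name and the inline JSON are made up for illustration, and <code>python3</code> is used so it runs even without <code>jq</code>); in practice you would feed it the real output of <code>kubectl get pods --all-namespaces -o json</code>:</p>

```shell
# Stand-in for: kubectl get pods --all-namespaces -o json > /tmp/pods.json
cat > /tmp/pods.json <<'EOF'
{"items": [
  {"metadata": {"name": "static-pod1"},
   "spec": {"nodeName": "node-1"}},
  {"metadata": {"name": "nginx-85ff79dd56-tvpts",
                "ownerReferences": [{"kind": "ReplicaSet", "controller": true}]},
   "spec": {"nodeName": "node-1"}},
  {"metadata": {"name": "static-pod2"},
   "spec": {"nodeName": "node-2"}}
]}
EOF

# Print pods on $NODE that have no ownerReferences, i.e. the bare pods a
# drain would refuse to evict without --force.
NODE=node-1
bare_pods=$(python3 - "$NODE" /tmp/pods.json <<'PY'
import json, sys
node, path = sys.argv[1], sys.argv[2]
for pod in json.load(open(path))["items"]:
    if pod["spec"].get("nodeName") == node and not pod["metadata"].get("ownerReferences"):
        print(pod["metadata"]["name"])
PY
)
echo "$bare_pods"   # static-pod1
```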
<p>I'm running a Kubernetes cluster in a public cloud (Azure/AWS/Google Cloud), and I have some non-HTTP services I'd like to expose for users.</p> <p>For HTTP services, I'd typically use an Ingress resource to expose that service publicly through an addressable DNS entry.</p> <p>For non-HTTP, TCP-based services (e.g, a database such as PostgreSQL) how should I expose these for public consumption?</p> <p>I considered using <code>NodePort</code> services, but this requires the nodes themselves to be publicly accessible (relying on <code>kube-proxy</code> to route to the appropriate node). I'd prefer to avoid this if possible.</p> <p><code>LoadBalancer</code> services seem like another option, though I don't want to create a dedicated cloud load balancer for <em>each</em> TCP service I want to expose.</p> <p>I'm aware that the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">NGINX Ingress controller supports exposing TCP and UDP services</a>, but that seems to require a static definition of the services you'd like to expose. For my use case, these services are being dynamically created and destroyed, so it's not possible to define these service mappings upfront in a static <code>ConfigMap</code>.</p>
<p>Maybe this workflow can help (assuming the cloud provider is AWS):</p> <ul> <li><p><strong>AWS Console:</strong> Create a segregated VPC and create your Kubernetes EC2 instances (or autoscaling group) with public IP creation disabled. This makes it impossible to reach the instances from the Internet; you can still reach them through their private IPs (e.g., 172.30.1.10) via a site-to-site VPN, or through a secondary EC2 instance with a public IP in the same VPC. </p></li> <li><p><strong>Kubernetes:</strong> Create a service with a fixed NodePort (e.g., 35432 for Postgres).</p></li> <li><p><strong>AWS Console:</strong> Create a Classic or Layer 4 load balancer inside the same VPC as your nodes; in the Listeners tab, open port 35432 (and any other ports you might need), pointing to one or all of your nodes via a Target Group. There is no charge based on the number of ports. </p></li> </ul> <p>At this point, I don't know how to automate updating the set of live nodes in the load balancer's Target Group; this could be an issue with autoscaling features, if any are in use. Maybe a cron job with a bash script pulling node info from the AWS API and updating the Target Group?</p>
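<p>The fixed-NodePort step above, sketched as a manifest (the service name and selector label are placeholders; 35432 is the example port from the workflow):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres            # placeholder name
spec:
  type: NodePort
  selector:
    app: postgres           # placeholder label matching your Postgres pods
  ports:
  - port: 5432              # cluster-internal service port
    targetPort: 5432        # container port
    nodePort: 35432         # fixed port opened on every node
```

<p>Note that <code>nodePort</code> must fall inside the cluster's configured NodePort range (30000-32767 by default), so a port like 35432 only works if that range has been widened via the API server's <code>--service-node-port-range</code> flag.</p>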